Thursday, June 11, 2009
LifeMasters & Tailoring Disease Management to Degree of Patient Activation. A Summary & a Good Example of How "Applied" Real World Research Can Work
Check out this study appearing in the American Journal of Managed Care by Judith Hibbard, Jessica Green and Martin Tusler. In it, LifeMasters tested the effect of nurses' use of a 'Patient Activation Measure' (PAM) on call duration, the usual clinical measures (A1c, blood pressure, LDL cholesterol, flu shots, and aspirin, statin, beta blocker and ACE inhibitor use), as well as admissions, emergency room visits and outpatient office visits.
Two separate LifeMasters call centers were involved. Both administered the 13-item PAM survey, which assesses patient 'activation' on a 1-4 scale. The Disease Management Care Blog thinks of this as a patient's interest in and willingness to participate in his or her own health care, where 1 is low (think of Bea Wilderd, who works in the Finance Office downstairs and is baffled by the thought of actually finding out how all those pills work) and 4 is high (think of Chip Ind, the IT guy upstairs who 'bings' the names of his pills to be on the lookout for side effects). Both centers telephoned patients with chronic illness on behalf of payers to 'coach' behaviors designed to increase quality and lower cost. The difference was that the nurses in one center were specifically trained to tailor their messaging according to the PAM, while the nurses in the other center acted as a control.
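To make the tailoring idea concrete, here's a minimal sketch of how a call script might branch on activation level. The strategies below are the DMCB's illustrative guesses, not LifeMasters' actual protocol:

```python
# A hypothetical sketch of PAM-tailored call scripting; the coaching
# strategies are the DMCB's illustrative guesses, not LifeMasters'
# actual protocol.

PAM_STRATEGIES = {
    1: "Build basic knowledge: explain what each pill does and why it matters.",
    2: "Encourage small, concrete steps, such as keeping a daily symptom log.",
    3: "Support goal-setting and problem-solving when setbacks occur.",
    4: "Reinforce self-management and independent information-seeking.",
}

def coaching_strategy(pam_level: int) -> str:
    """Return a tailored coaching approach for a PAM activation level (1-4)."""
    if pam_level not in PAM_STRATEGIES:
        raise ValueError("PAM activation level must be 1, 2, 3 or 4")
    return PAM_STRATEGIES[pam_level]

print(coaching_strategy(1))  # what a nurse might emphasize with Bea
print(coaching_strategy(4))  # versus Chip
```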
The analysis was done at the University of Oregon and "Decision Research," also located in Oregon. It appears the PAM survey is owned and/or marketed by an entity called 'Insignia Health.' One author has investments in that company, and another author has been a paid consultant.
There was a one-year baseline period and six months of follow-up. The study was hobbled by a considerable number of dropouts, spotty access to insurance claims data and ad hoc additions of patient data to serve as controls. Ultimately, the two comparison groups seemed similar enough at baseline and, after statistical adjustment, the 'PAM' patients showed greater improvements in all the clinical measures listed above, with the exception of systolic blood pressure and A1c (the latter reported in weird units ranging from "692.3" to "764.6." Perhaps they meant 6.9 and 7.6?). What's more, there were statistically and financially significant reductions in admissions and emergency room use among the 'PAM' patients, extrapolated to decreases in per member per month (PMPM) costs of $145 and $11, respectively. PAM training seemed to be responsible because nurse call times shifted: patients like Bea (Level 1) spent more time on the phone with a PAM-trained nurse than with a control nurse.
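For readers who like to check the arithmetic, here's a back-of-the-envelope sketch of how a drop in utilization becomes a PMPM figure. The $7,250 average cost per admission is the DMCB's reverse-engineered assumption, not a number reported in the paper:

```python
# A back-of-the-envelope sketch of how a utilization change becomes a
# PMPM figure. The $7,250 average cost per admission is a hypothetical
# assumption chosen for illustration, not a figure from the paper.

def pmpm_savings(rate_drop_per_member_month: float, cost_per_event: float) -> float:
    """PMPM savings = events avoided per member per month x cost per event."""
    return rate_drop_per_member_month * cost_per_event

# A drop of 0.02 admissions per member per month (the 0.06-to-0.04
# shift discussed below) at $7,250 per admission yields $145 PMPM:
print(pmpm_savings(0.02, 7250))  # 145.0
```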
The DMCB congratulates LifeMasters on several levels. Yes, this is another study demonstrating the benefit of remote patient coaching, and there appears to be something to this 'PAM' approach to telephony. But what the DMCB likes most of all is LifeMasters' ability to simultaneously run a business and perform studies that not only serve its investors' and customers' interests, but do so with sufficient rigor to pass muster with peer review and make it into the public domain. This is not a perfect study and, with admirable and necessary honesty, the authors point out the various weaknesses of their study design in the manuscript. That being said, the DMCB (and plenty of others) has found that perfect studies often don't answer questions that address real world needs. Read the traditional big-name medical journals and you are guaranteed to learn a lot about a little. Read studies like this one about PAM and you often learn enough about a lot.
Anxious customers may tut-tut that they shouldn't pay for the direct and indirect costs of a PAM trial. Actually, they probably didn't. Pointy-headed Chief Financial Officers may tut-tut that their business model shouldn't support research. Actually, that's bulls***: a) without innovation, the industry will die, and b) the Feds' granting agencies and the academic community have no idea how to perform this kind of inquiry. The DMCB speculates that customers and CFOs can find comfort in knowing that LifeMasters was unable to train all nurses in all call centers on the use of PAM at once. Since the roll-out had to proceed one center at a time, this gave them a perfect opportunity to perform a 'quasi-experimental' study because a 'control' comparator group was readily available. This study is a role model for all sectors of health care.
Limitations? The DMCB may be all wet, but it wonders about two issues not mentioned in the manuscript.
While 'propensity matching' was used to compare and statistically adjust the two groups of patients, it's not clear from the manuscript whether the propensity score included some assessment of hospitalization risk. The DMCB is concerned about this because the control group's hospitalization rate went from 0.04 hospitalizations per month to 0.04 (i.e., no change), while the intervention or PAM group went from 0.06 to 0.04 hospitalizations per month. Did the intervention group start out with a higher baseline and simply fall to the level of the control group, or did it really do better?
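For the record, the generic technique looks something like the sketch below: inverse-probability-of-treatment weighting with hypothetical variable names. Whether the paper's propensity score actually included baseline hospitalization risk is exactly the DMCB's open question:

```python
# A minimal sketch of generic propensity-score weighting; variable
# names are hypothetical and this is not the paper's actual code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_weights(X_baseline: np.ndarray, treated: np.ndarray) -> np.ndarray:
    """Inverse-probability-of-treatment weights from baseline covariates.

    X_baseline should include every covariate that predicts both group
    assignment and outcome -- ideally including baseline hospitalization
    risk, if admission rates are the outcome of interest.
    """
    model = LogisticRegression(max_iter=1000).fit(X_baseline, treated)
    p_treated = model.predict_proba(X_baseline)[:, 1]  # P(treated | covariates)
    return np.where(treated == 1, 1.0 / p_treated, 1.0 / (1.0 - p_treated))
```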
The DMCB was also confused by blinding. If the nurses in both centers knew they were being compared in a study, the Hawthorne effect, not PAM training, may have played a role in the nurses' behavior.
And a small insight: want to know how much time a typical disease management nurse spends on the phone with patients in a commercial setting? According to this paper, 16 to 18 minutes.
Last but not least, while the PAM nurses beat the control group nurses, this was not a study comparing a superior, tailored disease management program with usual care. It was one type of disease management versus another. We are no closer to answering that Big Burning Question: does disease management work? If you read the press release, it's clear that LifeMasters believes this gives it a leg up on its DM competition, not a reason for Federal health reform to include disease management.
1 comment:
Thank you for your review of our recent study. As the lead investigator on the study, I am happy to answer the questions you raised about the methodology. You asked if hospitalization risk was part of the propensity score risk adjustment we used to equalize the two study groups. Yes, the risk severity score, based on claims data, was part of the risk adjustment approach.

You also asked about the differences in hospitalization rates between the two groups. The analytic approach we used assessed the trajectory of change and examined whether this trajectory was different for the intervention group as compared to the control group. This analytic approach reduced the need to control for multiple factors, because most of the characteristics of the individuals remained fixed and changes that were observed could be attributed to the intervention. However, because there were some key differences at baseline (including their baseline utilization rates), we constructed the propensity weights to equalize the two groups. That is to say, after controlling for other differences, the statistical significance in the utilization tables indicates that the trajectory of change significantly differed for the intervention group as compared to the control group. In the case of Emergency Department use, the control group's trajectory was up, while the intervention group's was down. In the case of hospitalizations, the control group's trajectory was flat, while the intervention group's went down.
Finally, you asked about a possible Hawthorne effect, with the nurses responding to being "observed" rather than to the PAM intervention. This is a very unlikely explanation, as both the intervention nurses and the control group nurses knew they were in a study and were being observed; thus the effect of "observation" would have been the same for both groups of nurses.
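A DMCB aside for the statistically curious: the "trajectory of change" analysis described above is, in spirit, a propensity-weighted difference-in-differences. A minimal sketch, assuming hypothetical data, names and weights:

```python
# A minimal sketch of a weighted difference-in-differences on
# per-member monthly utilization; all data, names and weights here
# are hypothetical, not the study's actual analysis code.
import numpy as np

def did_estimate(y_pre, y_post, treated, weights):
    """Difference between each group's weighted baseline-to-follow-up change."""
    y_pre, y_post = np.asarray(y_pre, float), np.asarray(y_post, float)
    treated, weights = np.asarray(treated), np.asarray(weights, float)

    def weighted_change(mask):
        return np.average(y_post[mask] - y_pre[mask], weights=weights[mask])

    change_tx = weighted_change(treated == 1)   # e.g., 0.04 - 0.06 = -0.02
    change_ctl = weighted_change(treated == 0)  # e.g., 0.04 - 0.04 =  0.00
    return change_tx - change_ctl               # -0.02 favors the intervention
```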