You may have seen some dispirited reports (here, here, and here) about the published summary from Mathematica (Deborah Peikes, Arnold Chen, Jennifer Schore and Randall Brown: Effects of care coordination on hospitalization, quality of care and health care expenditures among Medicare beneficiaries) appearing in JAMA (2009;301(6):603-618) on the federally funded Medicare Coordinated Care Demonstration (MCCD). This involved a total of 15 participating healthcare entities (5 disease management organizations, 3 community hospitals, 3 academic medical centers, 1 integrated delivery system, 1 hospice, 1 long-term care facility and 1 retirement community) serving fee-for-service (FFS) Medicare beneficiaries with one of several chronic conditions. Each of the 15 entities ran its own randomized clinical trial with varying inclusion criteria and risk factors. Patients were randomly assigned either to usual care or to a care coordinator who, depending on the program, used different types of behaviorally based patient education programs ultimately aimed at increasing self-care. Final fees ranged from $60 to $270 per member per month (PMPM).
The Medicare beneficiaries entered into these programs were sick and therefore expensive at baseline, averaging just over $1,535 PMPM. One program had to drop out because of low enrollment. Of the 14 that remained, only one reduced hospitalizations in a statistically significant manner. Two programs had a significant change in costs, but in the wrong direction: both went up. Two programs had non-statistically significant reductions in cost; when outlier costs were deleted from the analysis, one of the two became statistically significant. Bottom line: based on this research, Medicare-funded care coordination programs for FFS beneficiaries will not reduce healthcare costs.
Those are the facts. But the authors of the report then went on to offer up some subjective impressions, presumably based on their close working relationship with each of the MCCD entities. The Disease Management Care Blog forgives them for going on a speculative bender - but only up to a point.
Those subjective impressions? The care coordination programs that came close to saving money were:
a) High Touch - care coordination personnel seemed to have more face-to-face time with the patients, even if that meant traveling out to the doctors' offices to meet them. Relying exclusively on the telephone seemed to be less successful.
b) Not Too Hot, Not Too Cold - patients with a low burden of disease continued to use little care and extremely ill patients continued to use a lot, no matter what the intervention was. Programs that aimed their interventions at the 'just right' patients in between seemed to do best.
c) Aimed At the Pills As Well As the Ills - helping patients understand why and how they need to take their medicines seemed to be helpful, and
d) All About the Fundamentals - keying on 1) patients just after they got out of the hospital and 2) assigning care coordinators by physician (and not by patient) kept patients out of the hospital and kept physicians from having to deal with too many nurses.
So where did the authors cross the line? You guessed it: by force-fitting their subjective impressions into an editorializing closing paragraph about the supposed virtues of the Medical Home:
'...the medical home model may be able to replicate or exceed the success of the most effective MCCD programs.'
Really? This is lecturing based on what data? The DMCB believes the statistics showed that care coordination programs failed to achieve statistically significant reductions in healthcare costs. Statistical significance was only achieved in one program when high costs were censored out of the data, a luxury the real-world Medicare program does not have. Finally, while the authors' impressions of successful program characteristics made sense, they selectively focused on the one that fit their unfounded admiration for an as-yet unproven - if promising - care strategy. As an aside, an accompanying editorial by John Ayanian of Harvard didn't do much better.
Oh... as for waiting for Medicare to catch up? The DMCB isn't too sure about that. We'll see.
2 comments:
Articles in peer-reviewed scientific journals such as JAMA generally follow a standard format, which includes guidelines for sections like an introduction, methods, results, etc. The last section is in fact reserved for editorializing to some extent, based on the results of what is being reported. If the reviewers had felt that the authors took too great a leap in their conclusions, the paper would have been revised. Of course the authors have a certain viewpoint or bias, but that viewpoint should only be expressed in conclusions that readers either accept, reject, or argue against by writing a reply. Readers are also encouraged to draw their own conclusions based on the results.
Anonymous raises a good point: the Discussion section is supposed to contain some editorializing. And if a topic (the Medical Home) is only distantly related to the subject of the paper (the MCCD), what's the harm? Caveat emptor.
And the DMCB, like any good reader, reached its own conclusions.
That being said, the DMCB still thinks the editors could have done a better job:
1) While it can be hard to draw the line between staying on-topic and wandering, the Medical Home is distinctly different from the interventions described in the MCCD. And why stop at the Medical Home? Why not include, say, the possible role of other equally attractive care options, like disease management?
2) And speaking of disease management, the paranoid DMCB suspects that the real agenda of the authors had less to do with a dispassionate discussion of the implications of the study and more to do with the explicit promotion of a physician-friendly policy option that has a LONG way to go in terms of an evidence base.