Sunday, December 6, 2009
Public Reporting of Hospital Quality With Media Attention Can Make a Difference: The EFFECT Study from JAMA
In addition to widespread agreement that financial penalties are a good way to batter hospitals over the head on quality of care issues, many policy makers, insurers and academicians also believe in public reporting. After all, nothing like the glare of publicity to shame the poor performers and make patients think again about going to St. Infectya's ER when Washdahands County Hospital has much better numbers. This threat could prompt hospitals to improve their care processes and bend the curve, right?
Well, finally we have some clever and well-performed research that says ‘maybe.’
Check out this study by Dr. Tu and colleagues on the EFFECT Trial, published in the December 2, 2009 issue of JAMA. A mix of 86 hospitals in the Canadian province of Ontario participated in a randomized trial to assess the impact of public reporting on how well they treated heart attacks and heart failure.
Readers may recall that the Disease Management Care Blog has pointed out that formal randomized controlled clinical trials can be time-consuming, cumbersome and expensive. This Ontario research is an example of a staggered roll-out design in which everyone eventually receives the intervention. Since it can be administratively cumbersome to intervene on all hospitals at the same time, it can make sense to start with a fraction of the group and follow up with the rest later. While the laggards are waiting their turn, they can act as a functional control/comparator group. That’s what happened here.
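For readers who think better in code than in prose, here is a minimal, purely illustrative Python sketch of how a staggered roll-out works: hospitals are randomized to an ‘early’ or ‘late’ group, and during the interval before the late group’s launch, the late group doubles as the comparator. The hospital IDs, the 50/50 split and the seed are all hypothetical and are not taken from the EFFECT protocol.

```python
import random

# Purely illustrative: a toy staggered roll-out, not the EFFECT protocol.
hospitals = [f"hospital_{i:02d}" for i in range(1, 87)]  # hypothetical IDs

random.seed(42)
random.shuffle(hospitals)
half = len(hospitals) // 2
early_group = set(hospitals[:half])   # public reporting starts now, with fanfare
late_group = set(hospitals[half:])    # public reporting starts ~21 months later

def arm_during_waiting_period(hospital: str) -> str:
    """While the late group waits its turn, it functions as the control arm."""
    return "intervention" if hospital in early_group else "control"

print(arm_during_waiting_period(hospitals[0]))
```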
After 5 hospitals withdrew from the study, 42 hospitals were assigned to and completed an ‘early’ and aggressive roll-out of public reporting that started in January of 2004, while 39 hospitals were assigned a ‘late’ and more modest roll-out that started 21 months later. The study design can be found here.
Both groups of hospitals had baseline performance measures collected on the quality of care for all of their heart attack and heart failure patients admitted over a two-year period from 1999 through 2001. The ‘early’ hospitals had these baseline measures released in January of 2004 with considerable fanfare; the release was covered by many TV, radio and print media outlets. In September of 2005, the delayed hospitals’ data were released on the web only and without any media fanfare. It was left up to the individual hospitals to decide how to respond to the measures. After a period of time, the hospitals’ heart attack and heart failure measures were assessed again.
So what happened? Most of the twelve measures of quality in heart attack care, such as the use of standard admission orders, assessments of cardiac function, measurement of blood cholesterol levels, use of clot-busting medications, and the use of aspirin, beta blockers, cholesterol medications and ACE inhibitors, went up in both hospital groups. The degree of change was no different between the two groups except for one key measure: the aggressive and early use of clot-busting medications prior to transfer to a cardiac care unit was statistically significantly higher in the ‘early’ group. The ‘early’ group also had a 30-day mortality rate that was 2.5% lower than that of the ‘late’ release hospitals, a difference that was also statistically significant.
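For those curious what a between-group mortality comparison looks like under the hood, here is a minimal sketch of a two-proportion z-test on made-up 30-day mortality counts. The counts are hypothetical, and this is not the study’s actual statistical approach, which was far more sophisticated than a crude two-group comparison.

```python
from math import sqrt

# Hypothetical 30-day mortality counts; NOT the EFFECT study's actual data or model.
deaths_early, n_early = 240, 10_000   # 'early' public-reporting hospitals
deaths_late, n_late = 265, 10_000     # 'late' public-reporting hospitals

p_early = deaths_early / n_early
p_late = deaths_late / n_late
p_pool = (deaths_early + deaths_late) / (n_early + n_late)

# Two-proportion z-test: is the difference in mortality rates bigger than chance?
se = sqrt(p_pool * (1 - p_pool) * (1 / n_early + 1 / n_late))
z = (p_early - p_late) / se
print(f"difference = {p_early - p_late:.3%}, z = {z:.2f}")
```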
There were six measures of quality in heart failure care, including assessment of left ventricular function, obtaining daily weights, patient counseling, and the use of ACE inhibitors, beta blockers and warfarin. Once again, most measures went up in both sets of hospitals, but only the greater use of ACE inhibitors in the ‘early’ group achieved statistical significance. The one-year mortality rate for heart failure was also statistically significantly lower in the early intervention hospitals.
While the authors were critical of the relatively blunted effect of high-publicity quality reporting (‘only one of 12 heart attack process measures and only one of six heart failure measures showed any improvement,’ and ‘public release of data may not be particularly effective’), the DMCB is favorably impressed. Interventions that reduce mortality rates in heart attack and heart failure are difficult to come by, and this one seemed to make a difference.
What’s more, the DMCB also finds the data intuitively credible. It's been known for a long time that giving clot busters early in the course of care, instead of waiting until the patient is in the cardiac care unit, saves lives among heart attack victims. In addition, increasing the use of ACE inhibitors is well known to reduce the mortality rate among persons with heart failure.
Takeaways:
For the first time, we have good research showing that public reporting with media uptake can lead to real changes in processes of care that are linked to tangible reductions in death rates. While the reductions are not huge, they are meaningful and of a similar magnitude to those obtained from other modern interventions in cardiac care.
While public reporting and media attention didn't lead to across-the-board improvements in all corners of cardiac care, the DMCB wonders whether the measures should be ‘fine-tuned’ so that public reporting focuses on interventions shown to meaningfully change mortality rates. For example, should future public reporting be limited to measures on the use of clot busters or ACE inhibitors?
This is also an excellent example of the difference between peer-reviewed articles that double as marketing materials and those that are credible scientific reports. The former typically extol their modest findings as robust or ground-breaking advances, while the latter are hyper-critical, tend to understate the significance of their findings, and let the numbers speak for themselves. This particular article is definitely in the latter category.
Last but not least, this is also a good example of clever regional collaboration among care providers to answer important research questions. It seems to the DMCB that all the Ontario hospitals had pretty much agreed to public reporting; they saw the roll-out as an opportunity to test various methods of carrying it out and came up with some important insights. Hopefully, future U.S. comparative effectiveness research will see these opportunities for what they are, and the Medicare Chronic Care Practice Research Network will also be willing to adopt these kinds of efficient models.
Image from Wikipedia