Monday, April 13, 2009
Listen Up Federal Coordinating Council on Clinical Effectiveness Research: Here's Some Advice From the Disease Management Care Blog
Darn it.
The Disease Management Care Blog spent the day traveling to D.C. for the World Health Care Congress and missed the deadline to submit commentary to the Federal Coordinating Council on Clinical Effectiveness Research (FCCCER). Looks like it will miss the April 14 listening session too, since it'll be listening to other wise men in some audience somewhere in the bowels of the Wardman Park Marriott. The listening session starts at 2 PM and you can link in here.
Undaunted, the DMCB assumes, hopes and aspires to the notion that many on or associated with FCCCER regularly read this blog and that they're really into listening and not pandering. So for better or worse, here's some tardy advice for the CER crowd:
Comparative effectiveness research (CER) is a style of scientific medical inquiry that compares a newly proposed treatment to the current state-of-the-art treatment. Until now, much of the research on new advances in care has pitted new treatments against placebo or against one or more weak comparators predestined to fall short. Those comparisons are often engineered that way on purpose, and the result has been a mash of care options that flummox physicians, cause escalating variation and force insurers to pay for everything.
Unfortunately, proponents of CER assume that using state-of-the-art comparators will enlighten physicians and reduce costs. That’s unlikely, because CER inherits far more serious shortcomings that have gone largely unexamined in the debate. Unless these shortcomings are also managed, physician inattention will continue, clinical research will remain marginalized, variation will linger and costs will continue to spiral upward.
These shortcomings are threefold:
Most of the medical research published today focuses on a limited number of variables among the many obscure, interrelated details of disease, laced with picayune caveats and weighted down with dense methodologic jargon. As a result, many reductionist peer-reviewed publications are opaque data blobs remarkably lacking in the kind of insight today’s practicing physician actually needs. Many are dated by the time they appear in print anyway. They can be ignored, week after week, with little risk to most patients most of the time. Why should we expect CER to be any different?
What’s more, published research is too often only partially about the patients. The granting agencies, research scientists, medical schools and medical journals, well meaning as they are, don’t necessarily exist for the benefit of patients, but for the benefit of more grants, bigger medical schools, academic promotion and the luster that comes with appearing in print. As we come to know more and more about less and less, why should we expect CER to be any different?
Last but not least, typical medical research often resembles a successful operation on a patient who died: it falls short in answering the bigger questions. Those unanswered questions include: what combination of approaches works best for typical patients given a particular set of cultural and economic circumstances? When and by how much do they improve patients’ well-being, extend life, save money or, for that matter, cause physicians to go out of business? Since decades of traditional research, with a few exceptions, have repeatedly failed to ask the right questions while substituting statistical significance for real-world consequence, why should we expect CER to be any different?
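On that last point about statistical significance, a toy example may help. Here's a minimal sketch in Python, using simulated numbers rather than any actual trial, of how a clinically meaningless difference can still clear the hallowed 0.05 bar once enrollment gets big enough:

# A minimal sketch with made-up numbers (not any real trial): a tiny treatment
# effect becomes "statistically significant" once the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2009)

# Hypothetical outcome: systolic blood pressure (mmHg) after treatment.
# Assume the "new" treatment lowers it by a clinically trivial 0.5 mmHg on average.
n = 50_000                                           # very large trial arms
control = rng.normal(loc=140.0, scale=15.0, size=n)
treated = rng.normal(loc=139.5, scale=15.0, size=n)

t_stat, p_value = stats.ttest_ind(treated, control)
mean_diff = treated.mean() - control.mean()
cohens_d = mean_diff / np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)

print(f"mean difference: {mean_diff:.2f} mmHg")      # about -0.5 mmHg
print(f"p-value: {p_value:.2e}")                     # comfortably below 0.05
print(f"Cohen's d: {cohens_d:.3f}")                  # about -0.03, a negligible effect

Statistically "significant," sure, but an effect that small is hardly worth changing practice over, let alone paying for.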
Until we know whether the current medical-industrial complex is up to the task of a) asking the right questions, b) creating the right protocols, c) accessing the right research settings, d) getting meaningful data, e) extracting useful information and f) easily and efficiently transmitting the insights in a manner that is useful to docs who take care of patients day in and day out, CER will most likely stand as the newest testimony to the futility of other uni-dimensional, big-bang central solutions, like capitation, new and improved CME, RVUs, P4P and EHRs.
Here are four suggestions on how to make CER different:
Target funding for CER at researchers with a track record in meaningful effectiveness research performed in community settings untouched by ivory tower academia.
Efficiently test multiple mutually supporting interventions designed to create breakthroughs and reward success unencumbered by the Priesthood of the Shrine of 0.05 and the pyramid system of academic promotion.
Divorce all results, both positive and negative, from the hidebound paper-based review process and routinely fast-track them through multiple channels in the expanding medical information system (including on-line publishing, open-access data and, dare we mention, blogs?).