Tuesday, July 12, 2011

"Metabias," the Limits of Evidence Based Medicine and Implications for Disease Management

The font of all wisdom?
It was the Disease Management Care Blog that alerted readers to the term "surveillance bias."  Now the ever-alert DMCB has found another catch phrase that nicely sums up another limitation of evidence-based medicine (EBM).

This one is "metabias."

Recall that EBM is the practice of applying the high quality science of peer-reviewed medical literature to day-to-day clinical practice.  Knowing the details of the study populations, the absolute risk improvement, sample sizes and possible biases reported in prestigious medical journals should enable scientist-physicians to divine the "gold standard" of "Level 1" evidence for the betterment of their patients.  The young DMCB grew up on this.  It remembers the hospital "roundsmanship" of debating the latest published clinical trial results before entering patient rooms, much like rabbis debating the finer details of the Talmud.  Heady stuff.

Yet, EBM has had more than its fair share of problems.  Add one more, courtesy of Stephen Goodman and Kay Dickersin, writing in the Annals of Internal Medicine.  They point out that it is not unusual for physicians to "group" all of the published studies about a particular condition and pool the results, in what is commonly referred to as a "meta-analysis."  If, after summing things up, one treatment appears to result in better outcomes than another, physicians can be more confident about the merits of that treatment, right?
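For readers curious about the mechanics, the "pooling" step of a meta-analysis is often an inverse-variance weighted average: each study's effect estimate is weighted by the precision of that estimate.  Here is a minimal sketch of the fixed-effect version, with entirely hypothetical study numbers:

```python
import math

def pooled_effect(effects, std_errors):
    """Fixed-effect (inverse-variance) meta-analysis pooling.

    effects: per-study effect estimates (e.g., log odds ratios)
    std_errors: corresponding standard errors
    Returns the pooled estimate and its standard error.
    """
    # Each study is weighted by 1/variance: precise studies count more.
    weights = [1.0 / se ** 2 for se in std_errors]
    total_w = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total_w
    pooled_se = math.sqrt(1.0 / total_w)
    return pooled, pooled_se

# Hypothetical log odds ratios from three trials of the same treatment
effects = [-0.40, -0.15, -0.25]
ses = [0.20, 0.10, 0.15]
est, se = pooled_effect(effects, ses)
```

Note that the pooled estimate always lands inside the range of the individual results, with a smaller standard error than any single study — which is exactly why a pool of selectively published studies looks so persuasive.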

Maybe not.  It turns out that one example of metabias is "publication bias."  That describes the phenomenon whereby authors are more likely to submit, and medical journals are more likely to publish, "positive" studies.  Contrarian negative studies typically do not see the light of day. Think of it as selective publication.
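A toy simulation makes the point.  Suppose many identical studies are run on a treatment with a modest true effect, but only the statistically "positive" ones get published; the published literature will then overstate the effect.  All numbers below are invented for illustration:

```python
import random
import statistics

random.seed(0)

def simulate(true_effect=0.2, n_studies=500, se=0.15):
    """Simulate publication bias: only 'significant' studies see print."""
    all_effects, published = [], []
    for _ in range(n_studies):
        # Each study estimates the true effect with sampling noise.
        est = random.gauss(true_effect, se)
        all_effects.append(est)
        # "Publication filter": only results with z > 1.96 are published.
        if est / se > 1.96:
            published.append(est)
    return statistics.mean(all_effects), statistics.mean(published)

mean_all, mean_pub = simulate()
```

The average across all studies recovers the true effect, while the average of the published subset is inflated — pooling only what appeared in journals would mislead even a careful reader.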

A new metabias discussed by the authors is the finding that positive studies from single institutions generally show a larger effect of treatment than studies performed across multiple institutions.  One explanation is that solo scientist-investigators may not be subject to the checks and balances of a third party and may unconsciously tilt the execution of the study in the direction they want.

In other words, all those prospective randomized clinical trials from single academic institutions may not be much of a gold standard after all. 

No kidding, says Alison Stuebe, writing in the July 6 New England Journal.  She was a believer in EBM when she started out, but she quickly found that it didn't quite compare to the wisdom she gained through personal experience. Metabias may be one explanation.

This is important from several perspectives.  The disease and population health management community has long been criticized for the "lack of evidence" that their processes result in greater quality and lower costs.  Maybe the processes for creating that evidence are less robust than generally realized.

Last but not least, the "science" of EBM has a role to play in health care reform, but it is not the answer to all that ails health care.  Much like Dr. Stuebe, we need to be more circumspect about what the medical literature is - and isn't - telling us.
