Tuesday, July 20, 2010

The Limits of a "Successionist" Mindset in the Search for Payment Innovation in Medicare

The Disease Management Care Blog survived today's panel discussion at the 6th Annual Leadership Summit on Medicare. We discussed CMS' search for payment innovation that leads to increased quality with lower costs.

The DMCB started out by noting that not all quality improvement means lower claims expense. What's more, even when there are savings, the cost of delivering the quality can negate them. There are ways around that, but that's for another post. Something far more interesting came up.

The DMCB focused on the more fundamental issue of measurement methodology. If you think about it, this typically relies on a "successionist" mindset. In other words, scientists and policymakers expect that if an intervention is made in a system while everything else is held constant, anything that follows must have been "caused" by the intervention. Succession means "the act or process of following in order or sequence." The study of interventions and how they lead to outcomes forms the basis of health services research, evidence-based medicine, clinical research trials and the like.

There are two problems with using the successionist mindset to assess the quality and costs that follow medical interventions:

1) The measurement environment is statistically "noisy," including a general upward trend in overall health care costs. Picking out the impact of a single intervention is like trying to hear the East River's tugboats from atop New York City's Empire State Building (the toy simulation after this list illustrates the point).

2) Health care is a messy, complex social system with many moving parts. It's hard to control for the other factors that play a role in quality and cost, especially from a patient's point of view. That makes it hard to know just what caused the movement of those tugboats.
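
To make the "noise" problem concrete, here is a minimal sketch in Python. Every number in it (baseline cost, trend, noise level, savings effect, enrollment) is hypothetical and chosen purely for illustration, not taken from any CMS data:

    import numpy as np

    # Toy model (all numbers hypothetical): per-member monthly costs ride
    # an upward secular trend with wide member-to-member noise, and the
    # treated arm gets a modest true savings.
    rng = np.random.default_rng(0)

    months = np.arange(24)    # two years of observation
    baseline = 800.0          # starting monthly cost per member ($)
    trend = 5.0               # background cost growth ($ per month)
    noise_sd = 150.0          # member-to-member cost variation ($)
    effect = -20.0            # true intervention savings ($ per month)
    n = 500                   # members per arm

    def mean_monthly_cost(intervention):
        # each member's cost = trend line + (optional) savings + noise
        shift = effect if intervention else 0.0
        costs = (baseline + trend * months + shift
                 + rng.normal(0.0, noise_sd, size=(n, months.size)))
        return costs.mean(axis=0)  # arm-wide average, month by month

    control = mean_monthly_cost(intervention=False)
    treated = mean_monthly_cost(intervention=True)

    # The treated arm's costs still *rise* over the study because the
    # background trend dwarfs the savings; a naive before/after look says
    # the program "failed" even though it truly saved $20 per month.
    print(f"treated arm, month 1 vs. month 24: ${treated[0]:.0f} -> ${treated[-1]:.0f}")

    # Only a concurrent control arm (a luxury most real-world program
    # evaluations lack) pulls the signal back out of the noise.
    print(f"treated minus control, averaged: ${(treated - control).mean():.1f}")

The point of the sketch: the before/after view, which is how most programs get judged in practice, is dominated by the secular trend, and the honest comparison requires a concurrent control that real-world payment demonstrations rarely have.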

There are solutions, though. The DMCB paraphrases from an excellent article that appeared in JAMA back in 2008:

A successionist tool kit that holds all sources of bias neutral while it measures [an intervention] ... may be unequal to the task of evaluating complex, nonlinear, interpersonal social programs, with findings that ... [are] ... typically non-cumulative, low-impact [and] prone to equivocation. The emphasis on knowing whether something works leads to little insight into how or why it works. There are four ways to overcome this:

1. Use a wider range of research methodologies that draw on ethnography, anthropology and other qualitative methods.

2. Reconsider the threshold for action on evidence, especially if the status quo is unacceptable.

3. Measure bias, don’t seek to eliminate it. The knowledge of trusted insiders may be more powerful than the conclusions of distant third party evaluators.

4. Don’t get in the way by insisting on weighing evidence with impoverished tools; instead, ask what everyone is learning.


The DMCB agrees that CMS needs to think about incorporating these approaches in its search for value.

Who wrote this, you ask? Don Berwick, the new Administrator of CMS. The link is here.
