Thursday, February 19, 2009

An Instructive Study Gets Published in the American Journal of Managed Care

Regular readers of the Disease Management Care Blog are already aware that the DMAA took the lead in developing a ‘real world’ approach to evaluating the economic impact of population-based care management programs. Since prospective, randomized clinical trials are not reasonable in business settings and parallel control cohorts may not be available, the ‘DMAA approach’ allows for a pre-post design that uses the cost trend of a ‘non-chronic’ population to predict what costs would have been absent the intervention. It also recommends that the costs measured pre and post generally be based on persons who are ‘requalified.’

Confused? You don't have to be:

The ‘trend of the non-chronic population’ is based on the observation that, from year to year, there is a baseline rate of health care cost inflation. If health care costs among persons without chronic illness climb 10% per year, then a 9% cost increase in the disease management population from year 1 (pre) to year 2 (post) implies you saved roughly 1%. It may seem counterintuitive (versus relying on the trend among persons with chronic illness), but a lot of background work has shown this is the more conservative and better approach.
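
For the arithmetically inclined, here is a minimal sketch of that trend adjustment. The dollar figures and the 10% non-chronic trend are made up for illustration; they are not taken from any actual evaluation.

```python
# A minimal sketch of the non-chronic-trend adjustment described above.
# All figures are hypothetical and chosen only to mirror the 10%-vs-9% example.

pre_pmpm = 500.00          # chronic-population cost per member per month, year 1 (pre)
actual_post_pmpm = 545.00  # observed chronic-population PMPM in year 2 (post): a 9% rise
non_chronic_trend = 0.10   # 10% cost inflation observed in the non-chronic population

# What year 2 would have cost absent the intervention, per the non-chronic trend
expected_post_pmpm = pre_pmpm * (1 + non_chronic_trend)

savings_pmpm = expected_post_pmpm - actual_post_pmpm
savings_pct = savings_pmpm / expected_post_pmpm

print(f"Expected PMPM: ${expected_post_pmpm:.2f}")   # $550.00
print(f"Actual PMPM:   ${actual_post_pmpm:.2f}")     # $545.00
print(f"Savings: ${savings_pmpm:.2f} PMPM ({savings_pct:.1%})")  # $5.00 (0.9%)
```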

The requirement that costs ‘pre and post be based on persons who are requalified’ is also counterintuitive. In any insured population, there are persons in the pre or baseline year with one or more insurance claims for chronic illness care and then there are persons in the post or follow-up year with one or more claims for chronic illness care. The ‘requalified’ approach says that you are only allowed to include the costs of persons with active claims in the year of measurement. So, if a person with a claim for diabetes in year 1 (pre) turns out not to have a claim for diabetes in year 2 (post), that person’s costs are not used to measure cost in year 2. They are taken out.
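
Here is a similarly minimal sketch of the requalification rule, again using made-up member records rather than real claims data.

```python
# A minimal sketch of the 'requalification' rule described above. A member's
# costs count toward a measurement year only if that member has at least one
# qualifying (e.g., diabetes) claim in that same year.

members = [
    # (member_id, year, has_chronic_claim, total_cost)
    ("A", 1, True,  6000), ("A", 2, True,  5500),
    ("B", 1, True,  9000), ("B", 2, False, 1200),  # no diabetes claim in year 2
    ("C", 1, False,  800), ("C", 2, True,  7000),
]

def requalified_cost(records, year):
    """Sum costs only for members who requalify (have a chronic claim) in `year`."""
    return sum(cost for _, yr, qualifies, cost in records
               if yr == year and qualifies)

print("Year 1 (pre) cost:",  requalified_cost(members, 1))  # 15000 (members A and B)
print("Year 2 (post) cost:", requalified_cost(members, 2))  # 12500 (A and C; B is taken out)
```

Member B's year 2 costs drop out of the post-period measurement entirely, which is exactly the behavior the methodology intends.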

Which is why a paper by DMCB colleague Soeren Mattke and co-authors Seth Serxner, Sarah Zakowski, Arvind Jain and Daniel Bold, appearing in the latest issue of the American Journal of Managed Care, should be of great interest. It is not only another carefully conducted study of the impact of disease management but also a good exercise in what to look for when you read studies like this. As an aside, it was Dr. Mattke who was responsible for this oft-quoted paper, so when he writes it, the DMCB reads it.

The authors used insurance claims data to assess the cost impact of unnamed vendor-owned case management, disease management, medical advice telephone line and wellness programs sponsored by two large unnamed employers. The DMCB thinks the employers were probably self-insured and that the program designs were of the usual type, including HRAs and predictive modeling. Two-year baseline (‘pre’) costs were compared to costs in the year following baseline (‘post’), and the programs’ own predictive modeling was used to statistically adjust for other factors that could have influenced the numbers.

So, now that you know the outlines of the ‘non-chronic trend’ and the ‘requalification’ method, you probably want to know how this study handled them. The DMCB notes the non-chronic trend was not explicitly mentioned by Mattke et al; rather, they used a ‘nonpurchased’ trend. It also appears that there was no requalification but rather an ‘intention to treat’ approach, which probably means all persons eligible in year 1 were included in the year 2 cost analysis.

Bottom line? The programs in aggregate were associated with a statistically nonsignificant $13.75 increase in per member per month (PMPM) claims expense over what was expected based on the ‘nonpurchased’ trend. Case management was associated with a $1.35 decrease, disease management with an $8.63 increase, the advice line with a $21.71 increase and wellness with a $20.14 increase; only the wellness result was statistically significant, and it went in the wrong direction. Ouch.

The DMCB thinks this is a first-rate study for the following reasons:

1. This probably started out as a standard inquiry into the performance of a disease management vendor for a purchaser. It was disciplined and methodologically rigorous, and turned out to be not far from good enough for peer-reviewed publication. It marries the real world and evaluation science. Bravo. The disease management industry has come a long way.

2. It offers a template that purchasers could use to economically assess the impact of their wellness, nurse advice line, disease management and care management programs. Yes, we can debate the merits of the trend that was used and whether the lack of a requalification was important (it probably was), but the DMCB thinks those choices can be swapped in or out depending on the preferences of the purchaser and its actuaries. Dr. Mattke and colleagues have given purchasers a public-domain benchmark for how these analyses could be conducted.

Think you don't have the time/resources to bother with publication? The DMCB says that if you perform an adequate evaluation, you should be 90% there. The other 10% is the cost of doing business.

3. Think we'll ever get to a standard methodology to assess disease management programs? Think again. As testimony to this, Mattke et al strengthened their paper by conducting other analyses that had their own merits (and didn't really change the overall conclusions).

4. The DMCB is reluctant to generalize the findings from this single study to the entire industry. Other purchasers are discovering savings, or they wouldn't be buying into these programs. The conclusion is not that disease management doesn't work, but that more DMAA members need to publish their findings. And by the way, in addition to quibbling over the methodology, we don't know enough about the employee population, their insurance benefit design, or the disease management programs themselves.

5. It would appear that the package of interventions purchased by these unnamed employers did not give them their money’s worth. The DMCB says fire the disease management vendor and issue a new RFP – including the warning that another analysis based on the methodology above will be used to assess their performance next year.

Hopefully, that will be shared in a peer-reviewed setting.
