Thursday, August 27, 2015
"Fusing" Randomized Clinical Trials and Big Data: Another Value Proposition for Population Health?
Randomized controlled trials are the crown jewel of clinical research. By randomly allocating patients to one of two or more treatment protocols (or "arms"), they can ascertain cause and effect while minimizing known and unknown sources of bias. As a result, they often provide "the" answer to the big questions about the true value of medical interventions.
Unfortunately, they're also difficult, expensive and time-consuming; they can only gauge average impact, often exclude many "real world" patients, require patient consent, and have made little impact on day-to-day health care.
Enter Big Data.
Defined as the "rapid analysis of (multiple) data sets using sophisticated machine-learning strategies," it is inexpensive and fast, uses readily available information, can give insight at an individual level, doesn't necessarily require patient consent, and has also had little impact on day-to-day health care.
So, Derek Angus suggests in the latest issue of JAMA that the two approaches can be fused:
1. Use electronic record-based machine intelligence to scour clinical databases for trial candidates, prompt physicians to recruit those patients, and enroll them immediately at the point of care;
2. Change the entry criteria as analysis of the accumulating trial data begins to show that one arm holds greater promise than the others;
3. Tilt the randomization toward one arm of the trial if it begins to show a clinical advantage (a toy sketch of steps 1 and 3 follows this list);
4. Have organizations commit to recruiting ALL patients who meet entry criteria into the randomized trial.
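For readers who like to see the moving parts, here is a minimal sketch of what steps 1 and 3 could look like in practice: screening structured EHR-style records against entry criteria, and "tilting" the randomization toward the better-performing arm using Thompson sampling with Beta priors. This is not from the JAMA paper; the field names, thresholds and arm labels are illustrative assumptions only.

```python
import random

# --- Step 1 (illustrative): flag trial candidates from EHR-style records ---
def is_eligible(record):
    """Hypothetical entry criteria: adults with an elevated HbA1c."""
    return record["age"] >= 18 and record["hba1c"] >= 8.0

# --- Step 3 (illustrative): response-adaptive ("tilted") randomization ---
class AdaptiveRandomizer:
    """Assign arms in proportion to each arm's estimated chance of being best,
    updated as successes and failures accumulate (Thompson sampling)."""

    def __init__(self, arms):
        # Beta(1, 1) priors: one [successes, failures] pair per arm.
        self.counts = {arm: [1, 1] for arm in arms}

    def assign(self):
        # Draw from each arm's posterior and pick the largest draw.
        draws = {arm: random.betavariate(s, f)
                 for arm, (s, f) in self.counts.items()}
        return max(draws, key=draws.get)

    def record_outcome(self, arm, success):
        # Update the chosen arm's posterior as outcomes come in.
        self.counts[arm][0 if success else 1] += 1

if __name__ == "__main__":
    patients = [
        {"id": 1, "age": 54, "hba1c": 9.1},
        {"id": 2, "age": 17, "hba1c": 8.5},   # excluded: under 18
        {"id": 3, "age": 61, "hba1c": 7.2},   # excluded: HbA1c too low
        {"id": 4, "age": 47, "hba1c": 8.4},
    ]
    randomizer = AdaptiveRandomizer(["usual_care", "new_protocol"])
    for p in filter(is_eligible, patients):
        arm = randomizer.assign()
        print(f"Patient {p['id']} -> {arm}")
        # In a real trial the outcome arrives much later; simulate one here.
        randomizer.record_outcome(arm, success=random.random() < 0.5)
```

The point of the sketch is only to show why the approach is cheap to bolt onto existing data: eligibility is a query against records that already exist, and the adaptive assignment is a few lines of bookkeeping rather than a separate infrastructure.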
To his credit, the author points out that there would be some challenges. Increased complexity could increase the threat of hacking of electronic health records. Convoluted recruitment, assignment and data analysis could be vulnerable to manipulation. Without high numbers of participants, heterogeneity could introduce hidden biases and undermine confidence that any observed results are real. And despite assurances that participation increases the odds of actually benefiting, physicians and their patients may still be reluctant to cooperate.
While this paper is really about using Big Data to help increase the efficiency of randomized trials, the Population Health Blog finds the concept intriguing. It wonders if large academic centers and traditional research sponsors have the flexibility to change their usual way of doing business.
The PHB notes one additional barrier: a small but real added burden on clinical workflows. It may take only a few more minutes for a physician or nurse to deal with the prospect of a clinical trial, but the multiple inefficiencies of the EHR have already added up to a significant burden. However compelling the merits of clinical research, front-line nurses and docs could view this as just one more hassle.
Since Population Health Management service providers already possess expertise in big data and electronic records, applying this to randomized trials may represent a new value proposition for the industry. Now that would be a big impact.