The Reproducibility Crisis in Clinical Trial Research
Theoretically, a researcher should be able to reproduce any clinical trial and see the same or similar findings. Yet the long-standing “reproducibility crisis” in science persists, resulting in a surge of new analyses and recommendations.1-3
Meanwhile, evidence is mounting that the problem is ongoing. One group of researchers recently used artificial intelligence and robotics to analyze 74 published studies on breast cancer cell biology. They found fewer than one-third were reproducible.4
“We were motivated to try and stop the vast waste of money spent on non-reproducible biomedical science. This could be stopped if it would be possible to automate reproducibility testing,” asserts Ross King, one of the study’s authors and a fellow at the Alan Turing Institute in London.
In analyzing the reproducibility of 212 emergency medicine studies, another group of researchers found few of those studies included the components required for replication.5 Only 2.49% of the 212 studies provided a materials statement, and just 9.91% provided a data statement. None provided access to analysis scripts.
Over an eight-year period, researchers from the Reproducibility Project: Cancer Biology explored the replicability of the preclinical cancer biology literature. They repeated 50 experiments from 23 high-impact cancer research papers published between 2010 and 2012.6 Overall, the replications produced weaker evidence for the findings than the original experiments had.
“We found, on average, the effect sizes were 15% of what was reported in the original study,” says Timothy M. Errington, PhD, senior director of research at the Center for Open Science in Charlottesville, VA.
This means if an original finding indicated a drug extended tumor-free survival in mice for 20 days compared to control mice, in the replication it would be only three days. “That’s a substantial decrease in the efficacy of the drug. In terms of what impact this has on transitioning the findings from the preclinical space to the clinical space, this has a substantial impact as well,” Errington says.
Decisions to advance investment in cancer therapies rest not just on whether there is an effect, but usually on the magnitude of that effect, among other factors. Errington and colleagues also encountered significant difficulty in attempting to replicate the studies.7 Notably, none of the experiments were described in enough detail to replicate without going back to the original authors for clarification. About one-third of those authors did not offer help or did not respond to requests at all.
“For researchers, the evidence we report suggests challenges faced in trying to build off, reuse, challenge, and/or replicate others’ work,” Errington says.
For IRBs, it signals an opportunity to raise the issue of replicability with study investigators. Errington would like to see IRBs go beyond just reviewing how study protocols are designed. They should ask more questions about how the research will be managed and disseminated. Are research labs set up to track protocols, data, and materials? Are researchers sharing protocols, data, and materials when they make their findings public? Will researchers follow practices to minimize bias, such as preregistration, blinding, randomization, and sample size reporting? Will researchers report all findings, not just those that are positive?
“Responsible conduct of research is more than data fabrication. It is the ethical way we conduct and share research,” Errington says. “The findings from this project suggest we have room to improve here.”
Adil Shamoo, PhD, MSc, CIP, has been studying reproducibility for four decades.8 When Shamoo served on the National Human Research Protections Advisory Committee in the early 2000s, “the issue of reproducibility was not even on the radar,” he says.
There are multiple ethical concerns if a study is not reproducible. “First, the science that comes out of it could harm human subjects down the line,” says Shamoo, professor at the University of Maryland School of Medicine.
Random audits of clinical trials are essential. “There are a variety of reasons to randomly audit studies. A very small sample would be statistically valid. If it is done for a clinical trial on a very visible policy issue, everybody in the country would be paying attention,” Shamoo says.
Two medical journals retracted papers on COVID-19 treatments over data integrity questions.9 Shamoo says it is vital for faculty members to understand the importance of reproducibility so they can properly convey it to students. “Everybody involved in research should be involved in responsible conduct. It’s a poor model for young researchers to see shortcuts being taken, and that their data, finally, is not reproducible,” Shamoo says.
David B. Resnik, JD, PhD, a senior ethics specialist at the National Institutes of Health, says, “We are definitely much more aware of the issue of reproducibility than we were 10 years ago.”
For any study, there is an accepted range of variation; researchers will not achieve exactly the same results if the study is repeated. “There will always be studies that are not reproducible. Science is hard to do, and peer review is not perfect,” Resnik admits. “But over time, science is self-correcting, or we hope it is.”
If researchers try to reproduce results and cannot, the scientific community comes to regard the original conclusion as no longer valid. For IRBs, sample size and statistical significance are two important factors to consider with reproducibility. Resnik suggests IRBs ask researchers two questions: Is the sample size appropriate for the study? Is the study well-powered statistically?
“A lot of the studies that were not reproducible were underpowered,” Resnik explains. “They were small studies and got an interesting result, but were just not large enough.”
Other factors also affect reproducibility, such as how many participants are needed to produce an adequate degree of statistical power and whether the study has adequate controls. IRBs also should consider whether researchers are strictly adhering to study protocols. “Investigators and IRBs need to pay more attention to factors relating to scientific rigor,” Resnik adds.
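Resnik’s questions about power and sample size can be made concrete with a standard a priori power calculation. The following is a minimal sketch in Python using the statsmodels library; the effect size, significance level, and power target are illustrative assumptions, not values from any study discussed here.

```python
# A minimal sketch of an a priori power calculation using statsmodels.
# All numbers are illustrative assumptions, not values from any study
# discussed in this article.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assume a moderate standardized effect size (Cohen's d = 0.5),
# a two-sided alpha of 0.05, and the conventional 80% power target.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Participants needed per group: {n_per_group:.0f}")  # ~64

# An "interesting result" from an underpowered study: with only
# 20 participants per group, power falls well below the 80% target.
achieved_power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=20)
print(f"Power with 20 per group: {achieved_power:.2f}")  # ~0.34
```

An IRB need not run such a calculation itself, but asking investigators whether an equivalent analysis justified the planned enrollment is one concrete way to probe the underpowered-study problem Resnik describes.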
If the study is poorly designed and cannot be reproduced, it will not generate useful knowledge. That raises all kinds of ethical concerns. “That’s part of the main justification for exposing humans to risk, that you are going to get something good out of it,” Resnik says. “If you are not expected to get useful knowledge, that completely undermines your justification for exposing people to risk.”
If the IRB finds problems with the study design, researchers sometimes push back. “The investigator may say, ‘That’s none of your business as an IRB. You focus on the ethics, and leave the science to me. Don’t tell me how to do my research.’ That can be a difficult conversation, and I’ve had it before as an IRB chair,” Resnik says.
IRBs must convey it is within their purview to review the science because the science affects the ethics of the research. For example, scientific design affects risks and benefits for study participants. “You can’t really completely disentangle these two things,” Resnik asserts. “If it’s not well-designed scientifically, then it’s not an ethically sound, reproducible study.”
REFERENCES
1. Doerksen E, Boivin JC. The socio-political perspectives of neuroethics: An approach to combat the reproducibility crisis in science? AJOB Neurosci 2022;13:31-32.
2. Macleod M; University of Edinburgh Research Strategy Group. Improving the reproducibility and integrity of research: What can different stakeholders contribute? BMC Res Notes 2022;15:146.
3. Munafò MR, Chambers C, Collins A, et al. The reproducibility debate is an opportunity, not a crisis. BMC Res Notes 2022;15:43.
4. Roper K, Abdel-Rehim A, Hubbard S, et al. Testing the reproducibility and robustness of the cancer biology literature by robot. J R Soc Interface 2022;19:20210821.
5. Johnson BS, Rauh S, Tritz D, et al. Evaluating reproducibility and transparency in emergency medicine publications. West J Emerg Med 2021;22:963-971.
6. Errington TM, Mathur M, Soderberg CK, et al. Investigating the replicability of preclinical cancer biology. eLife 2021;10:e71601.
7. Errington TM, Denis A, Perfito N, et al. Challenges for assessing replicability in preclinical cancer biology. eLife 2021;10:e67995.
8. Resnik DB, Shamoo AE. Reproducibility and research integrity. Account Res 2017;24:116-123.
9. Piller C, Servick K. Two elite medical journals retract coronavirus papers over data integrity questions. Science. June 4, 2020.