Clinical Trials: More is Not Necessarily Better
Amid a rising tide of research, IRBs should look for ‘social value’
While one may reasonably assume that more clinical research could increase the likelihood of medical breakthroughs, a contrarian’s view is that the effect could be quite the opposite — and it falls to IRBs to intervene and reduce the risks of the current glut of trials.
“It probably seems intuitive to most people that if research is good, then more research is better,” says Kirstin Borgerson, PhD, associate professor of philosophy and a bioethics researcher at Dalhousie University in Halifax, Nova Scotia. “But we know that many studies conducted today are of low quality. For instance, they are too small to produce statistically significant results, or have unjustified exclusion criteria. When these low-quality trials are published alongside high-quality trials, and when both are published at astonishingly high rates — as they are today — this leads to what I call the ‘sorting problem.’ Basically, anyone wanting to make use of the research evidence has to first sort through hundreds, and even thousands, of bad studies to find the good ones. Alongside skill and effort, this takes time, and that means that research is slow to filter into practice.”
In a recent paper1 on this situation, Borgerson cited studies that document the problem, with one concluding that “every day there are now 11 systematic reviews and 75 trials [published], and there are no signs of this slowing down — but there are still only 24 hours in a day.”2 Other studies estimate that medical research output doubles every seven years, and to date, some 1 million clinical trials are in print.3,4
“Even if it were true that all research was good — that is, high quality — at some point the publication output would exceed the time available to clinicians for reading and critically assessing those studies,” she says. “And if clinicians then turn to experts to do this critical work for them, they are faced with challenges in determining which experts to trust since so many have their own agendas. They also have to allow those experts to apply some rules of evidence — all of which have shortcomings and limitations that aren’t always acknowledged. So even in this ideal scenario, things aren’t straightforward.”
Borgerson argues in the paper that the “overproduction of low-quality clinical research is very likely to be harmful to patients. On ethical grounds, there are persuasive reasons to endorse the position that we should conduct fewer clinical trials. Researchers and research ethics committees should work together to ensure that trials truly benefit society, as they are meant to do.”
In that regard, IRBs should look at research that has “social value” and pragmatic implications for patients, she notes. Borgerson cites tools like PRECIS (Pragmatic Explanatory Continuum Indicator Summary) and PRECIS-2.5 The latter provides nine areas to assess trial design, including eligibility criteria, recruitment, setting, and primary outcome.
“In general, the more these design elements match usual care, the more pragmatic the trial,” Borgerson notes in the paper. “… There is growing support for this position, for instance, in trends toward comparative effectiveness and translational research, research-practice integration, and quality improvement studies.”
That said, there are disincentives in place that may give pause to IRBs or researchers wanting to adopt such strategies, Borgerson concedes.
“There may be some sense that requiring that trials are assessed for social value, using a tool like PRECIS, is overreaching on the part of IRBs,” she says. “Researchers don’t always respond well to ethics committees that question their methods — there seems to be this idea that science and ethics are entirely distinct from each other. I think this is just false, but it is nevertheless a view that will impede efforts to move in a pragmatic direction.”
In addition to the challenges of fighting this battle, IRB members may feel unqualified to assess the design of a study, may be uneasy with the idea of predicting future social value, or may already be so overworked that the idea of adding further work is disheartening, she says.
“These are all real — and serious — concerns, many having to do with the support and training available to IRB members,” Borgerson says. “What I try to emphasize in the paper is the fact that the obligation to assess the social benefits and social harms of all research trials is already a responsibility of IRBs. I am just identifying a particular harm of permitting poor-quality studies to proceed — they contribute to the sorting problem. So, IRBs can’t really ignore this responsibility.”
Small sample sizes may certainly limit extrapolation to other patient populations, but several factors must be weighed before drawing a minimum sample-size line below which trials should not move forward.
“There will be exceptions for Phase I trials and research on orphan diseases — and, of course, different methods and questions will require different cut-offs,” she says. “But otherwise, yes, I think that trials that are too small, or which are likely to fall short of recruiting enough participants, shouldn’t be conducted.”
Another issue drawing concern is that a surprising number of clinical trial findings cannot be reproduced. We asked Borgerson whether this should be factored into the equation.
“I’ve been following this debate over reproducibility and it seems that failure to reproduce results is sometimes the result of poor quality in the initial study,” she says. “That suggests to me that we can make some progress on the reproducibility problem by ensuring that all studies approved by IRBs are genuinely high-quality studies, which is exactly what I argue for in the paper.”
REFERENCES
- Borgerson K. An Argument for Fewer Clinical Trials. Hastings Cent Rep 2016;46(6):25-35.
- Bastian H, Glasziou P, Chalmers I. Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLoS Med 2010;7(9):e1000326. doi:10.1371/journal.pmed.1000326.
- Hoffmann T, et al. The Scatter of Research: Cross-Sectional Comparison of Randomised Trials and Systematic Reviews Across Specialties. BMJ 2012;344:e3223. doi:10.1136/bmj.e3223.
- Ioannidis JPA. Why Most Clinical Research Is Not Useful. PLoS Med 2016;13(6):e1002049. doi:10.1371/journal.pmed.1002049.
- Loudon K, et al. The PRECIS-2 Tool: Designing Trials That Are Fit for Purpose. BMJ 2015;350:h2147. doi:10.1136/bmj.h2147.