By Stacey Kusterbeck
Implementation research aims to improve the adoption of evidence-based interventions in real-world clinical practice. “Implementation science is becoming more popular as a way to bridge the gap between research and practice by closely studying the integration of research findings into clinical care,” says Justin Clapp, PhD, MPH, an assistant professor of anesthesiology and critical care, medical ethics and health policy at the Perelman School of Medicine.
Lack of guidance from existing ethics frameworks makes it difficult for implementation science researchers to anticipate what will be ethically problematic. “Current frameworks for evaluating the ethics of medical research don’t account for many of the features of implementation science studies,” observes Clapp.
Traditional clinical trials test the efficacy of new drugs or techniques in highly controlled settings so that the effect of an intervention can be isolated and measured. In contrast, implementation studies examine how best to integrate new approaches into everyday clinical settings. “Trying to tightly control these settings would introduce an element of artificiality and lessen their ability to achieve sustained uptake of whatever intervention they’re trying to implement,” Clapp explains. Implementation studies also often target group- or system-level clinical processes rather than individual patients. “Several core tenets of research ethics are of questionable relevance for implementation research,” says Clapp. He offers these examples of ethical questions that often go unaddressed:
- Since implementation research often targets clinician behavior, how serious are the risks posed to clinicians?
- Do clinicians have the right to opt out of implementation studies that target entire units?
- Should clinicians feel obligated to participate in this kind of research?
In light of these concerns, Clapp and colleagues sought to learn more about how implementation studies affected clinicians. The researchers interviewed 32 clinicians working at sites participating in an implementation study on patient handoffs from the operating room to the intensive care unit (ICU).1 Some key findings:
Clinicians were divided on whether their own informed consent was necessary.
By and large, clinicians were satisfied with the researchers’ approach of sending out mass communications about the study. In this particular study of patient handoffs, the research team sent mass emails to each participating ICU’s clinicians; those who did not respond were assumed to have no objection to participating. “This is sometimes called ‘broadcast consent’ in the research ethics literature. The clinicians we interviewed often argued that it did not achieve the functions of traditional informed consent. Nevertheless, they were largely fine with it,” says Clapp. Most clinicians were comfortable being included in the study.
Clinicians did not think it was feasible to opt out of certain components of the study.
However, clinicians did not find that problematic, seeing little difference between implementation research and routine quality improvement (QI) efforts.
“A lot of ink has been spilled by bioethicists discussing exactly what distinguishes QI from clinical research,” says Clapp. QI initiatives typically are undertaken by health systems to implement widely recognized best practices. Implementation studies, by contrast, are not just implementing a new practice; they also are simultaneously testing whether that practice is effective. Clinicians are sometimes unaware of this experimental aspect or of the analysis going on behind the scenes. “Or they’re confident the intervention being implemented is going to have a positive effect. These tendencies can cause them to equate implementation studies with QI,” says Clapp.
Some clinicians expressed concern that the hectic ICU setting would prevent them from engaging thoroughly with the research study.
Clinicians worried that perceived lack of enthusiasm could damage relationships with senior colleagues involved in the study.
“While employees aren’t generally considered a vulnerable research population, the ramifications of employees acting as research subjects are undertheorized,” argues Clapp. The study authors argue that risks based on employment status are the central type of risk in implementation research. “We encourage both researchers and IRBs [Institutional Review Boards] to be sensitive to this issue, even though it isn’t a classic feature of research ethics frameworks,” says Clapp.
Randomization, a common feature of implementation science study designs, allows researchers to identify whether a specific implementation strategy is effective at moving evidence-based treatments into the clinical setting. “In this context, the ‘intervention’ is the ‘implementation strategy’ used to address gaps between research evidence and real-world practice. However, when treatment benefits are known, randomization can raise significant ethical concerns,” says Tara Coffin, PhD, MEd, CIP, regulatory chair at WCG IRB.
The central ethical concern, in Coffin’s view, is that by limiting access to effective treatments, researchers may inadvertently perpetuate health disparities. Consider, for example, a study comparing uptake of an evidence-based treatment for type 2 diabetes across two arms, where one arm has access to a patient navigator (the intervention) and the control arm lacks this additional support. Regulatory authorities may raise concerns about how individuals are randomized, and IRBs may question whether the lack of a patient navigator restricts access to the evidence-based treatment. “In this context, a pre- and post-intervention assessment may be more appropriate,” says Coffin.
Randomization can be perceived as arbitrary and unjust by participants or other community stakeholders, particularly when control groups are required to forgo effective treatments. To mitigate these concerns, researchers can consider alternative study designs that ensure all participants will have access to any evidence-based treatments offered within the context of the research. For instance, in a crossover design, the control arm might not initially have access to a patient navigator but will gain that access at a later time point. “This balances the need for robust evidence with the imperative to minimize harm and respect the rights of vulnerable populations,” says Coffin.
For older patient populations, implementation science poses some additional ethical considerations because of potential cognitive impairment and differences in function and medical care access.2 “Another ethical issue with implementation science is that people and situations differ. The intervention you are implementing rarely applies to everyone,” says Lauren T. Southerland, MD, MPH, associate professor in the Department of Emergency Medicine at The Ohio State University.
Trying to get compliance to 100% can have unintended consequences. For example, emphasis on compliance with a sepsis intervention bundle may result in early diagnostic closure. “Clinicians could miss other potentially life-threatening causes of similar symptoms, such as medication withdrawal/overdose, pulmonary embolism, or ischemic bowel,” warns Southerland.
REFERENCES
1. Clapp JT, Zucker N, Hernandez OK, et al. Ethical issues in implementation science: A qualitative interview study of participating clinicians. AJOB Empir Bioeth. 2024 Aug 13:1-10. doi:10.1080/23294515.2024.2388537. [Online ahead of print].
2. Carpenter CR, Southerland LT, Lucey BP, Prusaczyk B. Around the EQUATOR with Clinician-Scientists Transdisciplinary Aging Research (Clin-STAR) principles: Implementation science challenges and opportunities. J Am Geriatr Soc. 2022;70(12):3620-3630.