By Stacey Kusterbeck
With the growing integration of artificial intelligence (AI) into healthcare, there is a need to validate AI applications through rigorous clinical trials. “It’s crucial to examine whether existing ethical research frameworks — originally designed for traditional medical trials — adequately address the complexities of AI trials,” says Alaa Youssef, PhD, postdoctoral scholar in the Department of Radiology at the Stanford Center for Artificial Intelligence in Medicine and Imaging.
Youssef and colleagues conducted a study to explore the unique challenges researchers face in ensuring AI trials promote patient safety, fairness, and effective outcomes in real-world clinical settings. The researchers interviewed 11 investigators involved in the design of clinical trials of AI for diabetic retinopathy screening.1 The investigators identified these as the main ethical challenges:
- ensuring fair participant selection;
- measuring social value;
- establishing scientific validity;
- evaluating risk-benefit ratios in various patient subgroups.
“While the issue of algorithmic bias is well-known, especially when AI tools are deployed in under-resourced health systems, our findings revealed deeper ethical considerations regarding the principles of justice and fair subject selection,” says Youssef.
For instance, participants noted cases where algorithms developed using data collected in the United States were deployed in under-resourced countries. In such contexts, there is a risk of patient harm because the deployment environment is vastly different from the one where the model was trained. This undermines the ethical principle of fair subject selection. “Fair subject selection, therefore, should not only focus on clinical criteria but also on representative sampling from the actual context where the AI tool will be deployed,” concludes Youssef.
Investigators also raised concerns about how AI might exacerbate inequities in care quality. AI tools promoted as cost-effective solutions, such as diabetic retinopathy screening, could be seen as a quick fix to expand access. However, this raises significant ethical concerns. For example, patients diagnosed with diabetic retinopathy in under-resourced areas may lack access to the follow-up ophthalmology care available to patients in well-resourced settings. “This raises serious questions about justice in healthcare infrastructure, particularly when AI creates disparities in treatment access based on geographic or economic contexts,” explains Youssef.
Study investigators often encounter unanticipated ethical challenges during their trials, particularly in ensuring fair recruitment and equitable access to care. “Researchers should document these challenges and actively seek input from ethicists throughout the trial process. The dynamic nature of AI means that ethical considerations may evolve over the course of the trial, underscoring the need for continuous ethical oversight and adaptation,” says Youssef.
The study identified a significant gap in patients’ health literacy, often influenced by socioeconomic and educational factors. “Therefore, IRBs [Institutional Review Boards] should inquire about the strategy to ensure equitable access to understanding the risks and benefits of participation,” advises Youssef. For instance, IRBs can ask questions such as, “Will Spanish-speaking patients have access to interpreters to facilitate the informed consent process?”
Privacy and bias are frequent ethical concerns in all types of clinical research. “But AI tools may amplify the risks, so they may need to be considered differently,” says Kelly FitzGerald, PhD, CIP, executive IRB chair and vice president of Institutional Biosafety Committee Affairs at WCG.
According to the Common Rule, the standard for identifiable information is whether the participant’s identity may be readily ascertained by the researcher. The problem is that AI tools may combine disparate data points to identify an individual. “This could make that combination of data that was previously not identifiable, identifiable,” says FitzGerald.
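To make this re-identification risk concrete, consider a minimal sketch of a linkage attack. The datasets, field names, and values below are hypothetical; the point is that no single column identifies anyone, but joining on a combination of quasi-identifiers can.

```python
# Minimal sketch of a linkage attack: joining a "de-identified" research
# dataset to a public record on quasi-identifiers. All data here are
# hypothetical.
import pandas as pd

# De-identified study records: no names, just quasi-identifiers + outcome
study = pd.DataFrame({
    "zip": ["10027", "94305"],
    "birth_date": ["1961-07-31", "1988-02-14"],
    "sex": ["F", "M"],
    "diagnosis": ["retinopathy", "none"],
})

# A public record with names (e.g., a voter roll) sharing the same fields
public = pd.DataFrame({
    "name": ["J. Smith", "A. Lee"],
    "zip": ["10027", "94305"],
    "birth_date": ["1961-07-31", "1988-02-14"],
    "sex": ["F", "M"],
})

# The join re-attaches names to diagnoses, making previously
# non-identifiable data identifiable
linked = study.merge(public, on=["zip", "birth_date", "sex"])
print(linked[["name", "diagnosis"]])
```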
Additionally, AI tools are trained on human output, and humans are biased. “Whatever errors or biases humans have, AI can amplify them, and in ways that might be opaque to our understanding,” says FitzGerald.
Explainability is another ethical concept that comes up in discussions of AI tools. Researchers may ask: Do people really need to understand how the AI tool works?
“Given the sophistication of the tool and the way it is developed, the researchers themselves may not even be able to explain it. Medical research does include other examples of drugs and devices where the mechanism of action was not well-understood at first, but the intervention still provided a benefit and could be researched ethically. So explainability is something to consider, but may not always be necessary,” says FitzGerald.
Challace Pahlevan-Ibrekic, MBE, CIP, director of regulatory affairs at The Feinstein Institutes for Medical Research, says IRBs should be asking these questions about the AI tool:
- What specific AI tool is being used and what, if any, standard of care procedures are involved?
- What data were used for the tool’s development? Are they representative of the target population, especially if the tool was procured rather than developed in-house?
- What decisions are being made by the AI tool?
- Will clinicians have access to the data variables that underlie the resulting output?
- Are there alternatives to using the tool (such as a clinician opting to dismiss or ignore the AI system’s output)?
- What are the potential risks to participants from the proposed AI tool use?
- Are there any cybersecurity risks?
- What methods will be used to detect and mitigate bias? (A minimal sketch of one such check appears after this list.)
- What plans are there to mitigate the identified risks?
- What is the proposed monitoring plan to detect safety concerns?
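As referenced in the list above, one bias-detection method an IRB might ask about is comparing a screening model’s performance across demographic subgroups. The following sketch is illustrative only; the data, group labels, and column names are hypothetical, and a real audit would examine more metrics than sensitivity alone.

```python
# Minimal sketch of a subgroup bias check: comparing an AI screening
# tool's sensitivity (true-positive rate) across demographic groups.
# All data below are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "group":       ["A", "A", "A", "B", "B", "B"],
    "has_disease": [1,   1,   0,   1,   1,   0],
    "ai_flagged":  [1,   1,   0,   1,   0,   0],
})

# Of the patients who truly have disease, what fraction did the AI flag,
# broken out by subgroup?
diseased = results[results["has_disease"] == 1]
sensitivity = diseased.groupby("group")["ai_flagged"].mean()
print(sensitivity)  # a large gap between groups signals possible bias
```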
IRBs are federally mandated to include members with diverse experience and expertise. If an IRB lacks the expertise to evaluate a given AI tool, it can ask a subject matter expert to provide specialized review as a consultant.
“Most IRBs have the necessary expertise to evaluate AI-related protocols, but effective communication between the IRB and the research team is essential. The IRB must understand the tool’s intended use for approval, while the research team should be knowledgeable about the tool’s development and functionality to address any questions from experts or the IRB,” says Pahlevan-Ibrekic.
Reference
1. Youssef A, Nichol AA, Martinez-Martin N, et al. Ethical considerations in the design and conduct of clinical trials of artificial intelligence. JAMA Netw Open. 2024;7(9):e2432482.