By Stacey Kusterbeck
Did a physician factor in the recommendations of an artificial intelligence (AI) tool when ruling out a diagnosis or deciding whether to order a diagnostic test? If so, the clinician may wonder whether there is an ethical obligation to tell the patient.
“One of the things that we need to think about when incorporating AI into healthcare is how to incorporate the patient in a meaningful way,” says Susannah L. Rose, PhD, associate professor in the Center for Biomedical Ethics and Society and the Department of Biomedical Informatics at Vanderbilt University Medical Center and Vanderbilt University.
The informed consent process is based on respect for the patient’s autonomy and on trust between patients and physicians. “While AI is new to healthcare, it’s important for people to understand the risks and benefits, and to be able to participate in informed consent conversations,” says Devora Shapiro, PhD, an associate professor of medical ethics at Ohio University’s Heritage College of Osteopathic Medicine.
Rose and Shapiro developed a framework to guide clinicians on informed consent practices when AI is used in patient care.1 The authors outlined which situations require formal informed consent, which require notification only, and which require neither. According to the authors, these factors should be considered when determining whether informed consent is needed for an AI tool:
• How autonomous and independent an AI tool is.
If the AI tool is just providing useful information to clinicians, it is not much different from the clinical decision support tools that clinicians currently use. However, some AI tools are making autonomous decisions separate from human decision-making. “At that point, we feel that the patient should not only be notified, but also that the patient should be involved in a prospective informed consent process,” says Rose.
• Whether the AI tool is suggesting something that is going to change the care plan.
“If nothing is changing in their care, we may not need to notify the patient,” says Rose. If the tool is suggesting something that is going to change the treatment plan significantly, though, the authors recommend a formal informed consent process.
• The amount of clinical risk introduced by the model.
AI tools, like any medical intervention, can benefit or harm patients and fall along a range from low to high risk. “The riskier the AI, the more we want to involve patients in the decision-making process,” says Rose.
• How much administrative burden the informed consent process will entail.
For clinicians, obtaining formal informed consent can be time-consuming. “In some settings, the informed consent process can potentially interfere with providing high-quality patient care,” says Rose. Lengthy informed consent processes also can be burdensome to patients, who might be overwhelmed by the sheer amount of information presented to them, creating confusion and raising anxiety.
The authors offered the guidance as a starting point to create consensus on ethical practices for AI in healthcare. “It provides a great resource for institutions as they develop policies for AI regarding patient education and informed consent,” says Rose.
Another group of researchers examined patients’ views on the use of AI tools in their care.2 Researchers surveyed 600 adults in Florida about their level of comfort if AI were used for various tasks. Most respondents were comfortable with the use of AI to schedule appointments or follow-ups. However, respondents were less comfortable with AI being used to assist in making a diagnosis, recommend treatment plans, read and interpret medical imaging, or assist with surgical procedures.
Respondents also were asked to share free-text comments on how they feel about the use of AI in healthcare. Fear of losing the “human touch” was a common theme, as was concern about losing decision-making control. For example, one participant stated, “I don’t think it is ideal to completely rely on computers, especially because a big part of healthcare is human interaction.” Another stated, “I don’t want AI making my medical decisions.”
“Patients within Florida may be more concerned with losing the ability to have that ‘human connection’ with their doctors,” concludes Kaila Witkowski, PhD, MSW, an assistant professor in the School of Public Administration at Florida Atlantic University.
As clinical AI becomes more widespread, discussions about the clinician’s use of AI must be integrated into the routine informed consent process in a way that is clear and easy to understand, says Kenneth V. Iserson, MD, MBA, professor emeritus in the Department of Emergency Medicine at The University of Arizona. Patients need to know any time AI is used in clinical decision-making, including triage, according to Iserson.3 In most cases, it can be covered in consent for treatment forms. Those forms should explicitly mention whether AI tools are used in patient care.
“But clinicians must be prepared to discuss it in more detail,” argues Iserson. Explaining how AI systems decide on a course of action may be difficult, however, because the tools rely on probabilistic models that may not produce accurate or consistent predictions for all patients (especially those whose demographic groups were not included in the AI’s training dataset). “This has made it difficult to develop an AI tool that provides a clear understanding of how its recommendations were generated,” says Iserson.
Clinicians generally lack sufficient knowledge about clinical AI to obtain informed consent effectively. Clinicians need to understand these things, says Iserson:
• how AI systems operate;
• whether AI systems are understandable and trustworthy;
• the limitations of AI systems and the errors they make;
• how disagreements between the physician and the AI are resolved;
• whether the patient’s personally identifiable information will be secure;
• whether the AI system functions reliably (has been validated);
• whether the AI program exhibits bias.
“In general, the probability of error will be higher for AI programs that are used to perform complex tasks, are trained on small datasets, or are used with biased datasets,” explains Iserson. Iserson recommends that ethicists:
• learn if AI systems are being used in the hospital’s clinical setting;
• ascertain how much their clinicians know about its use;
• help to develop standard informed consent dialogues to use with patients and their families about AI;
• investigate whether the AI systems in use are unbiased, protect patient information, and function reliably;
• help to develop a method to resolve clinician-AI disputes. “For example, an approach could stratify the confidence level of the clinician and AI’s decisions and have another clinician or a different AI program review the result,” offers Iserson.
1. Rose SL, Shapiro D. An ethically supported framework for determining patient notification and informed consent practices when using artificial intelligence in health care. Chest 2024; May 22. doi: 10.1016/j.chest.2024.04.014. [Online ahead of print].
2. Witkowski K, Okhai R, Neely SR. Public perceptions of artificial intelligence in healthcare: Ethical concerns and opportunities for patient-centered care. BMC Med Ethics 2024;25:74.
3. Iserson KV. Informed consent for artificial intelligence in emergency medicine: A practical guide. Am J Emerg Med 2024;76:225-230.