Artificial Intelligence Soon Could Transform the Field of Clinical Ethics
Conflict erupts between a family member and the clinical team over whether to withdraw life-sustaining interventions. Instead of requesting an ethics consultation, a clinician turns to an artificial intelligence (AI) tool for answers. As machine learning reaches more areas of healthcare, some ethicists are asking whether this scenario is plausible. “The realm of medical ethics, however, has so far been an exception,” says Lukas J. Meier, PhD, a junior research fellow at the University of Cambridge.
AI and machine learning programs will “dramatically affect clinical ethics,” predicts Gavin G. Enck, PhD, HEC-C, a clinical ethicist at OhioHealth. The tools can serve as decision aids and identify errors in clinical ethics judgments, but there are limitations. “We must set realistic expectations,” Enck says.
It is unlikely anyone would use an AI tool to determine if they should marry their partner, embark on a new career, or start a family. “Similarly, we cannot expect AI or machine learning programs to provide a conclusive answer in every clinical ethics situation in healthcare,” Enck says.
Unlike ethicists, the tools can be programmed to avoid systematic biases and errors. “While it may seem strange, I would argue that patients, providers, and ethicists should expect, and even want, AI and machine learning programs to serve as decision aids in clinical ethics consultations,” Enck says.
AI ethics “has evolved as a field, drawing ethicists from both health ethics and engineering ethics,” says Craig M. Klugman, PhD, who co-authored several papers on this topic.1,2 The unanswered question is whether a sophisticated AI tool can perform the job of a clinical ethicist. If so, it could mean the end of some hospital-based clinical ethics roles.
“If you can buy an ethics AI off the shelf, the role of a human clinical ethicist may go away,” suggests Klugman, a professor of bioethics and health humanities at DePaul University and ethics committee member at Northwestern Memorial Hospital.
That shift is most likely to happen at smaller hospitals without full-time ethicists. Even for larger hospitals, software is much cheaper than a full-time ethicist’s salary. “However, as they currently exist, AIs are only supposed to be decision aids. They are not supposed to be the final word,” Klugman cautions.
AI has been used successfully to maximize use of operating room (OR) space. “The AI is a much better predictor of how long a procedure will take. This is one of the most successful examples of AI in medicine,” Klugman says.
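Klugman does not name a specific product, and deployed systems vary, but the underlying task is ordinary supervised regression: predict duration from historical case features. The sketch below is a minimal, hypothetical illustration; the features, synthetic data, and model choice are assumptions, not details of any real scheduling system.

```python
# Minimal, hypothetical sketch of OR-duration prediction as supervised
# regression. Features and data are synthetic, not from any real system.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Invented features: procedure type, surgeon's historical mean duration
# (minutes), patient age, and ASA physical status class.
X = np.column_stack([
    rng.integers(0, 20, n),
    rng.normal(90, 20, n),
    rng.integers(18, 90, n),
    rng.integers(1, 5, n),
])
# Synthetic target: duration driven by surgeon history and patient acuity.
y = X[:, 1] + 5 * X[:, 3] + rng.normal(0, 10, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
mae = np.mean(np.abs(model.predict(X_test) - y_test))
print(f"Mean absolute error: {mae:.1f} minutes")
```

A model of this kind succeeds because procedure duration is a well-defined, measurable outcome with abundant historical data, which is exactly what most ethics questions lack.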
Similarly, AI could be used to suggest answers to distribution-related ethics questions, such as how to allocate scarce resources. However, there are concerns that AI ethics tools could incorporate various biases. “For example, an AI built on a million records from a single hospital system assumes those are all of the records that exist,” Klugman offers.
It could be that most of those patients had health insurance, were employed, or were of the same race. The rules and guidance an AI produces do not “know” that the data set on which it was trained included biases. Even if all identifying information is stripped, systems still develop biases.3 “That’s because medicine asks different questions of different patients, even if they have the same disease. We even run different tests for different patients with the same disease. AI bias will force us to confront human bias,” Klugman explains.
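Klugman’s single-system example suggests one concrete safeguard: audit the training records against the population the hospital actually serves before building anything on them. The sketch below uses invented counts and invented community benchmarks purely to show the shape of such a check.

```python
# Hypothetical audit of a training set for subgroup skew. All counts and
# community benchmarks are invented for illustration.
import pandas as pd

records = pd.DataFrame({
    "insured": [True] * 900 + [False] * 100,
    "race": ["white"] * 700 + ["other"] * 300,
})

# Invented shares for the community the hospital serves.
community = {"insured": 0.78, "white": 0.55}

data_insured = records["insured"].mean()
data_white = (records["race"] == "white").mean()

# A large gap between the data and the served population signals that a
# model trained on these records may encode access bias.
print(f"Insured: {data_insured:.0%} in data vs {community['insured']:.0%} in community")
print(f"White:   {data_white:.0%} in data vs {community['white']:.0%} in community")
```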
Using a tool that could introduce bias into a clinical situation or during an ethics consult is problematic. To address these and other issues, says Klugman, “ethicists can and should be part of AI oversight boards in a hospital.”
Researchers are working on bioethics AI tools that are trained to make moral decisions. “These tools raise complex issues. For example, there is a risk of automation bias,” says Sara Gerke, Dipl-Jur Univ, MA, an assistant professor at Penn State Dickinson Law, whose research focuses on the ethical and legal challenges of AI in healthcare.
Humans tend to rely on machines rather than making complex decisions by themselves. This can be dangerous if an AI tool is untrustworthy. If AI tools make complex moral decisions in the future or “assist” physicians in doing so, it raises the question of whether the tools should be required to undergo the same training as certified human clinical healthcare ethics consultants. “We still have much to figure out before we trust bioethics AI to make moral decisions,” Gerke says.
AI tools are unlikely ever to be compassionate or empathic, or to understand human emotion, all of which are cornerstones of the ethics consult process. “It may have protocols that look like those things, but the appearance of compassion is very different than actual compassion,” Klugman notes.
Can clinicians ever put total trust into an AI to make ethical decisions? “No matter how sophisticated an AI gets, it will never be a moral agent. It will never have to face the emotional and existential consequences that its decision may end a life,” Klugman says.
Say an AI is tasked with saving money for a hospital and recommends no longer offering treatment to any diabetic patient with heart or kidney disease. That would save money, but it also would result in many sick or dying patients. “A human being could recognize that saving money has to come second to some other value. But the AI doesn’t have that ability to check itself, see the moral and emotional outcomes of its choices, and then go back and refine them,” Klugman explains.
In a pilot study, Meier and colleagues from the Technical University of Munich set out to answer two questions: Is it technologically possible for algorithms to solve bioethical problems? If so, would that be desirable? The researchers created an algorithm that could be used to advise clinicians on moral dilemmas.4 “It was not our goal to develop a product for actual clinical application that would replace human decision-making,” Meier reports.
Rather, Meier and colleagues tried to create an example of what such a system would look like, and how it would work. For example, they wanted to know if the algorithm could answer questions on whether a patient should be able to refuse a treatment that is likely to extend their life. They concluded machine intelligence is not sophisticated enough to risk passing judgment on real patients. “However, as in the case of other innovations, it is important that we have discussions about the virtues and vices of novel technologies before they become widely available,” Meier suggests.
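Meier’s team has not released its system, and the details below are not theirs. Still, a drastically simplified toy can show what weighing ethical considerations might mean in code; the principles, scores, weights, and decision rule here are all invented for illustration.

```python
# Toy sketch of a principle-weighing decision aid, loosely inspired by
# principlist frameworks. All scores, weights, and the decision rule are
# invented for illustration; this is not Meier and colleagues' system.

# Hypothetical case: a competent patient refuses a life-extending treatment.
case_scores = {
    "autonomy": 0.9,        # refusal appears informed and voluntary
    "beneficence": -0.6,    # treatment would likely extend life
    "nonmaleficence": 0.2,  # treatment carries significant burdens
    "justice": 0.0,         # no resource-allocation concern here
}

# Invented weights; a real system would need to justify these, which is
# itself an ethical choice.
weights = {"autonomy": 0.4, "beneficence": 0.25,
           "nonmaleficence": 0.25, "justice": 0.1}

support = sum(weights[p] * score for p, score in case_scores.items())
print(f"Net support for honoring the refusal: {support:+.2f}")
```

Even this toy makes the researchers’ point: the weights do all the moral work, and a machine cannot justify them on its own.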
There are two important questions to ask about use of AI in the clinical ethics field, according to Meier: In which areas would automated decision-making be beneficial, and where should we avoid it? How would it affect relationships between patients and clinicians? “The sooner ethicists and other stakeholders address these and other questions, the better,” Meier says.
Some of these concerns also apply to AI used in healthcare generally, but the field of medical ethics presents additional considerations. “If machine intelligence were employed in this area, one would need to proceed with the utmost care,” Meier cautions.
Researchers must consider whether human empathy is a necessary component of ethical case discussions, and whether the data set on which the algorithm bases its recommendations is inclusive and representative. Also, who should be held legally responsible for the algorithm’s decisions? Is it the programmers who “trained” the AI, or the hospital staff who use it? “All of these issues must be addressed before the clinical application of machine intelligence in ethics can even be considered,” Meier adds.
Fabrice Jotterand, PhD, MA, says it is important for ethicists to understand the difference between AI (i.e., when computer systems perform tasks that mimic human intelligence, such as problem-solving or learning) and machine learning tools (i.e., algorithms that allow computers to learn from data instead of just operating according to human programmers’ instructions). Jotterand says it is unlikely any clinical ethicists are using AI during consults because AI cannot grasp the complexities of moral issues in the clinical context — patient/family values and patient/physician relationships.
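Jotterand’s distinction can be made concrete with a toy contrast: a rule whose behavior a programmer fixed in advance versus a model that induces a similar rule from labeled examples. The clinical variables, thresholds, and labels below are invented.

```python
# Toy contrast between a hand-coded rule and machine learning. The
# variables, thresholds, and labels are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

def rule_based_flag(age: int, creatinine: float) -> bool:
    """Behavior fixed in advance by a human programmer."""
    return age > 65 and creatinine > 1.5

# Machine learning induces a comparable rule from labeled examples.
X = [[70, 2.0], [40, 0.9], [68, 1.8], [30, 1.0], [75, 2.2], [50, 1.1]]
y = [1, 0, 1, 0, 1, 0]  # hypothetical "high-risk" labels
learned = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

print(rule_based_flag(70, 2.0))               # True, by decree
print(bool(learned.predict([[70, 2.0]])[0]))  # True, induced from data
```

The first does only what it was told; the second derives its rule from data. Neither, on Jotterand’s view, grasps the moral texture of an actual consult.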
“That said, I could imagine a clinical ethicist doing research on cancer, for instance, and using a large data set to determine the outcome of a particular controversial procedure,” says Jotterand, director of the bioethics graduate program at the Medical College of Wisconsin.
During a consultation, ethicists could use data from the study to help the decision-making process. In that context, machine learning could provide helpful insights regarding the ethical justification of a procedure, by factoring in risk/benefit assessments, cost estimates, and long-term outcomes. “But I would not want an AI making an ethical claim without human agency — ethicist, physician, patient, family, or proxy — and interpretation,” Jotterand cautions.
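In the workflow Jotterand describes, the model supplies estimates and the humans supply judgment. A minimal sketch of such a consult input, with entirely hypothetical field names and numbers, might look like this:

```python
# Hypothetical sketch of machine learning output packaged as background
# for a human-led ethics consult. All fields and numbers are invented.
from dataclasses import dataclass

@dataclass
class ProcedureEstimates:
    benefit_prob: float       # estimated chance of meaningful benefit
    major_harm_prob: float    # estimated chance of major complication
    one_year_survival: float  # estimated long-term outcome

def consult_summary(est: ProcedureEstimates) -> str:
    """Format model outputs as discussion material, not a verdict."""
    return (f"Estimated benefit {est.benefit_prob:.0%}, "
            f"major harm {est.major_harm_prob:.0%}, "
            f"one-year survival {est.one_year_survival:.0%}; "
            "for interpretation by ethicist, clinicians, patient, and family.")

print(consult_summary(ProcedureEstimates(0.35, 0.20, 0.55)))
```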
The complexities of clinical practice, the structure of the healthcare system, and how ethical decisions are made in the clinical context cannot be captured via algorithms.
“In addition, I don’t know whether clinicians or patients and families would want to have ‘RoboEthicus’ as a consultant,” Jotterand adds. “This is an empirical question worth exploring.”
REFERENCES
1. Klugman CM, Gerke S. Rise of the bioethics AI: Curse or blessing? Am J Bioeth 2022;22:35-37.
2. Michelson KN, Klugman CM, Kho AN, Gerke S. Ethical considerations related to using machine learning-based prediction of mortality in the pediatric intensive care unit. J Pediatr 2022;247:125-128.
3. Gichoya JW, Banerjee I, Bhimireddy AR, et al. AI recognition of patient race in medical imaging: A modelling study. Lancet Digit Health 2022;4:e406-e414.
4. Meier LJ, Hein A, Diepold K, Buyx A. Algorithms for ethical decision-making in the clinic: A proof of concept. Am J Bioeth 2022;22:4-20.