Ethics Involvement Is Needed with Medical Artificial Intelligence
As medical artificial intelligence (AI) rapidly transforms healthcare, debate continues over a multitude of ethical issues. Yet the high-level ethical principles and guidelines developed by ethicists do not always mirror the conclusions of empirical research, according to a group of researchers.1 “As bioethicists, we are seeing a sort of silo effect, where different experts are talking about the ethics of AI in various vacuums,” notes Sophia Fantus, PhD, one of the study authors and an assistant professor at the University of Texas at Arlington School of Social Work.
There is plenty of debate regarding how to mitigate the ethical issues that arise with medical AI. What is less clear is whether frameworks and guidelines actually inform best practice, and whether they align with the actual experiences of stakeholders. “Although these may be big conversations in more academic spaces, researchers, AI developers, and programmers may not be really thinking about these ethical issues, especially the long-term implications,” Fantus says.
For example, AI programmers working on a medical Alzheimer’s disease project may not be considering the long-term effects on actual patient care.
Fantus and colleagues analyzed 36 studies on various aspects of medical AI ethics published from 2013 to 2022. Fantus was struck by the fact that few authors or research teams included individuals who were trained in ethics or were part of ethics departments.
“Even the definition of ‘ethics,’ or what was considered to be an ethical issue vs. a practice issue, was very different among our included studies,” Fantus reports.
This means that even those conducting research on stakeholder values and attitudes regarding the ethics of medical AI offer differing interpretations and definitions of the ethical issues.
“This is definitely a disconnect,” Fantus says. “There is not even a communal understanding of how we should be assessing values and attitudes as it relates to the ethics of AI.”
For many studies, ethics was not the central focus, and was included only as one or two questions in a survey. There also was an absence of empirical research on AI developers and research teams. “When we rely on AI, we may forget that we are relying on programs that have been created by real people — that can include human error, bias, and social justice issues,” Fantus observes.
Fantus believes there is a need for future research on the values and attitudes of these teams as well as how they are addressing the ethics of medical AI.
In their analysis, Fantus and colleagues also found that physicians are concerned about losing autonomy and about negative effects on the patient-physician relationship. “Healthcare providers ought to start really thinking about medical AI in practice and its ethical use with patients and families,” Fantus advises.
Ethics involvement is needed in the design, development, and implementation of medical AI programs. “That may mean that we involve ethicists on research teams — and that we involve clinical ethicists who are trained in the ethics of medical AI into health systems, whether at the administrative or leadership level or at the practice level,” Fantus offers.
REFERENCE
1. Tang L, Li J, Fantus S. Medical artificial intelligence ethics: A systematic review of empirical studies. Digit Health 2023;9:20552076231186064. doi:10.1177/20552076231186064