By Stacey Kusterbeck
Ethicists have speculated widely about how ChatGPT and other adaptive artificial intelligence (AI) systems, whether already in use or still to be developed, could affect their jobs, their roles, ethics education, and the field of bioethics as a whole.1-4
“There have already been attempts to create bioethics AIs that can offer recommendations on ethical dilemmas in the clinic. But they are limited in what they offer and in the complexity of the problems they can handle, so we are several years away from an AI that can replace a bioethicist,” according to Craig M. Klugman, PhD, a professor of bioethics and health humanities at DePaul University and an ethics committee member at Northwestern Memorial Hospital.
Klugman expects to see ethicists use AI to help with dictating and writing notes, summarizing the content of large patient charts, and helping to write recommendations and reports. AI tools also could offer ethicists useful resources in looking at a case, help with doing intakes for new consults, and scheduling one’s day more efficiently, says Klugman.
“ChatGPT has made the field of ethics and ethical expertise much more popular, because the ethical impact of AI and its potential threats to society require ethicists,” says Mohammad Hosseini, PhD, an assistant professor of preventive medicine at Northwestern University’s Feinberg School of Medicine.
Bioethicists should be aware of where, and for what purposes, ChatGPT and similar Generative AI (GenAI) systems are used, recommends Hosseini. Hospitals and other healthcare providers should engage ethicists in conversations with GenAI service providers to ensure that conveniences offered by AI systems do not undermine patients’ rights. “In contexts where patient data or their interaction with physicians are recorded and processed with so-called ‘medical scribes,’ patient privacy might be compromised,” cautions Hosseini.
Bioethicists also can proactively form alliances to develop guidelines. As a member of the Health Care Artificial Intelligence Code of Conduct project initiated by the National Academy of Medicine, Hosseini believes that regulating AI ultimately will foster responsible use of the technology.
Bioethicists should try to educate themselves about this technology and reflect on its ethical implications, advises Hosseini. “Hospitals that have an ethics board can encourage their in-house ethicists to employ existing ethical frameworks to provide context-specific suggestions on how to use AI,” adds Hosseini.
ChatGPT and similar GenAI tools are designed to predict a sequence of words, as in a sentence or a block of code. “These are quite impressive when they work. But these tools are often mistaken and never in doubt,” says Michael Miller, system vice president of mission and ethics at SSM Health in St. Louis. The tools are not equipped to discern context in complex relational environments such as healthcare, nor to recognize the ethical issues that arise within them. “When values conflicts arise around life-or-death situations, I would be very concerned about utilizing a tool that could be confidently wrong,” warns Miller.
Trained bioethicists are skilled navigators of the relationships and esoteric conversations that happen in the context of care delivery. “A human will always be best suited to facilitate conversations that support ethical decision-making for the patients we serve,” asserts Miller.
AI technologies clearly have the potential to improve health outcomes. “However, I am worried about healthcare professionals falling into ‘technosolutionism’ — the mindset of every problem requiring a technological solution. An AI technology like ChatGPT is not always the best solution for a problem,” says Miller.
In Miller’s view, the American Medical Association’s use of the term “augmented intelligence” in place of “artificial intelligence” is appropriate.5 “This really helps to keep these tools in perspective. They should only be used in support of the very human work we call healthcare,” says Miller. That includes the work of ethicists within healthcare. Although AI tools might be helpful or supportive to the ethicist, Miller says that a purely technological solution for the challenges that underlie clinical ethics consultations would be inappropriate. “Human problems require human solutions. I think the best bioethics-related use case for a tool like ChatGPT is summarizing text,” says Miller.

This has implications for the educational role of an ethics committee. For instance, it could be helpful for an ethicist to quickly survey the literature on a particular topic. “However, this should be done cautiously and critically, knowing that these tools are still in development,” adds Miller.
Vasiliki Rahimzadeh, PhD, co-authored a paper exploring ChatGPT’s potential effect on bioethics education.6 “We anticipate both opportunities and challenges for the tools in our academic discipline. We intended for our paper to be a conversation starter among bioethics educators,” says Rahimzadeh, an assistant professor at Baylor College of Medicine’s Center for Medical Ethics and Health Policy.
ChatGPT is effective at summarizing bioethical concepts and synthesizing emerging bioethics issues. That can be helpful for beginning ethicists early in their careers. “It can also be used to help learners dig deeper into niche research topics and help them persuasively structure ethical arguments,” offers Rahimzadeh.
There is a danger of ethics educators over-relying on ChatGPT. One issue is that ChatGPT favors a principlist approach to ethical analysis, based on the principles of autonomy, beneficence, nonmaleficence, and justice. “We found in our own experiments that ChatGPT only applied alternative ethical paradigms when specifically prompted. This can further sideline the ethical perspectives of marginalized populations,” reports Rahimzadeh.
Privacy is another concern if ethicists use ChatGPT in clinical settings. “When used to help with ethics case consults, users must never share identifying information about patients with public AI tools,” warns Rahimzadeh.
In academic settings, there is a need for ethics faculty to communicate clear rules to students about the appropriate use of ChatGPT. “This is the most proactive way, in our view, to best prepare learners for a future in which generative AI is integrated into clinical practice and research,” says Rahimzadeh.
1. Cohen IG. What should ChatGPT mean for bioethics? Am J Bioeth 2023;23:8-16.
2. Laacke S, Gauckler C. Why personalized large language models fail to do what ethics is all about. Am J Bioeth 2023;23:60-63.
3. Barnhart AJ, Barnhart JEM, Dierickx K. Why ChatGPT means communication ethics problems for bioethics. Am J Bioeth 2023;23:80-82.
4. Meier LJ. ChatGPT’s responses to dilemmas in medical ethics: The devil is in the details. Am J Bioeth 2023;23:63-65.
5. American Medical Association. Augmented intelligence in health care. https://www.ama-assn.org/system/files/2019-01/augmented-intelligence-policy-report.pdf
6. Rahimzadeh V, Kostick-Quenet K, Blumenthal-Barby J, McGuire AL. Ethics education for healthcare professionals in the era of ChatGPT and other large language models: Do we still need it? Am J Bioeth 2023;23:17-27.