By Stacey Kusterbeck
Inaccurate or biased responses, concerns about patient data privacy, and risk of harm from medical misinformation are some well-known ethical concerns about large language models (LLMs) in healthcare. LLMs are advanced computer programs that use patterns in large amounts of text to understand and generate human-like language.
“Overlooking ethical concerns could lead to missed opportunities to effectively utilize LLMs for healthcare processes and potential societal resistance to LLM adoption in healthcare,” says Pouyan Esmaeil Zadeh, PhD, an associate professor of information systems and business analytics at Florida International University’s College of Business.
To learn more about ethical concerns from clinicians’ perspective, Esmaeil Zadeh and colleagues analyzed 3,049 online posts made by clinicians.1 “The ethical implications of LLM integration in healthcare are more complex than might have been initially assumed, with some unexpected areas of concern,” says Esmaeil Zadeh.
The study was conducted during a period of rapid LLM adoption in healthcare (November 2022 to November 2023), a particularly timely window that coincided with ChatGPT’s release. The authors identified four major ethical domains: clinical applications, data governance, health equity, and user-system relationships. Some key findings:
Clinicians were ambivalent about artificial intelligence (AI)-assisted decision-making.
Clinicians recognized LLMs’ potential benefits. However, they worried about maintaining clinical autonomy. “There is a need to balance LLM assistance with maintaining independent clinical judgment and ensure LLM use doesn’t erode critical clinical reasoning skills. Clinicians should avoid overreliance on algorithmic assessments, especially in critical situations,” says Esmaeil Zadeh.
Clinicians worried that LLMs trained primarily on urban patient data might fail to accurately represent rural health conditions and socioeconomic factors.
“This highlights an overlooked aspect of bias in LLM training data,” says Esmaeil Zadeh.
Clinicians were concerned about mental health applications of LLMs.
Specifically, clinicians worried about how information shared by patients during therapy sessions could be retained and used. “Extra caution is needed with sensitive mental health information,” says Esmaeil Zadeh.
Communication-related ethical issues were a primary concern.
Clinicians saw a need to ensure LLMs do not compromise the quality of patient-provider communication or lead to misunderstandings.
The study authors emphasize that clinicians should be a part of the process and approach LLM integration thoughtfully. “Clinicians should actively participate in shaping how LLMs are integrated into healthcare, rather than being passive recipients of the technology,” advises Esmaeil Zadeh. Hospital ethicists can ensure that LLM use aligns with medical ethics and professional standards in these ways, offers Esmaeil Zadeh:
- Policy development: Ethicists can create frameworks for evaluating the ethical implications of new LLM applications, establish protocols for handling ethical dilemmas arising from LLM use, and design consent procedures for LLM use in patient care.
- Implementation planning: Ethicists can provide ethical perspective in vendor selection processes, assist in developing ethical risk assessment tools, and advocate for equitable access and fair distribution of LLM resources.
- Education and training: Ethicists can train clinicians to recognize potential ethical issues, lead discussions about balancing technology benefits with ethical concerns, and create case studies for ethical decision-making with LLMs.
- Ongoing oversight: Ethicists can monitor LLM implementation for emerging ethical issues, provide consultation for complex cases involving LLM use, and assess the effect on vulnerable populations.
- Stakeholder engagement: Ethicists can communicate ethical considerations to patients and families, mediate conflicts arising from LLM implementation, and gather feedback from various stakeholders about ethical concerns.
“The goal is to harness LLM benefits while mitigating potential risks and maintaining high standards of medical care,” underscores Esmaeil Zadeh.
Reference
1. Mirzaei T, Amini L, Esmaeilzadeh P. Clinician voices on ethics of LLM integration in healthcare: A thematic analysis of ethical concerns and implications. BMC Med Inform Decis Mak. 2024;24(1):250.