Ethical Problems When Using Artificial Intelligence Assistance During Surgery
Researchers surveyed 650 trauma and emergency surgeons from 71 countries about their knowledge of artificial intelligence (AI)-based tools in clinical decision-making.1 The authors reported the surgical community is divided between those who believe in AI’s potential and those who do not trust or understand it.
As AI becomes more sophisticated, scientists are raising important ethical concerns. “We need to figure out how best to interact with the AI systems to ensure we are using AI safely and ethically for the benefit of people,” asserts Sara Gerke, Dipl-Jur Univ, MA, assistant professor of law at Penn State. AI-assisted surgery raises some of the same ethical issues as similar tools in healthcare, such as bias and data privacy. “However, some problems are specific to surgery,” Gerke says.
• An AI tool might make a recommendation during surgery in real time. For example, an AI tool could classify tissue as benign or malignant. In that case, the surgeon has little time to evaluate this recommendation and decide whether to follow the tool’s advice.
• Surgeons might be reluctant to use an AI tool in surgery due to potential liability risks. Gerke and colleagues are conducting a focus group to explore surgeons’ concerns about liability with using AI-driven technology in surgery in the United States and the European Union.2
• There is debate about how much information surgeons must give to patients on the use of AI in their care. “I believe the right way forward is to be more transparent with patients, rather than hide the use of technology in their care,” Gerke offers. For example, consent discussions should include the benefits and shortcomings of using AI.
Surgeons should know how AI models are trained, on what data, and whether those data generalize to the population to which the model is being applied, according to Jonathan M. Vigdorchik, MD, an orthopedic surgeon at New York City’s Hospital for Special Surgery who specializes in robotic technology and AI for orthopedic surgery. “Otherwise, you are applying AI models making predictions to a patient who may not fit within that model,” Vigdorchik warns.
Lack of transparency is another ethical concern. Many of the AI models used in medicine are “black box” models, meaning they produce an answer without clinicians knowing how the machine reached it. Other models, called “glass box” models, explain how the data were analyzed, allowing humans to make decisions together with the machine. “The ultimate goal of AI in medicine is to augment human intelligence and decision-making, not to replace it,” Vigdorchik says.
In Vigdorchik’s experience, when surgeons conduct robotic surgery, patients do not ask questions about who is controlling the robot. Patients incorrectly assume the robot operates on its own, and the surgeon is just watching. “But patients don’t know that unless they ask,” Vigdorchik offers.
Usually, patients do not bring it up. “Only recently, in light of the new ChatGPT and AI craze, I have received a few questions — but only about the future of AI in medicine,” Vigdorchik reports. “Patients do not think it is being used in medicine yet, and they are correct. It is only at the very early beginnings of being used.”
As AI becomes more ubiquitous in medicine, hospitals will need to change the consent process, according to Vigdorchik. Consent will need to address how the AI tool is used, whether for diagnostic or therapeutic purposes. “It really gets back to shared decision-making with patients, and explaining risks and benefits, with transparency in treatment options,” Vigdorchik explains.
In the future, nearly all robotic and technology platforms will use AI to inform decision-making and help improve outcomes, Vigdorchik predicts: “We should all be ready for the mass adoption, but we need to do it in a safe, controlled way.”
REFERENCES
1. Cobianchi L, Piccolo D, Dal Mas F, et al. Surgeons’ perspectives on artificial intelligence to support clinical decision-making in trauma and emergency contexts: Results from an international survey. World J Emerg Surg 2023;18:1.
2. Project CLASSICA. Surgeons wanted: Join a study on liability perspectives for AI technology in surgery. Feb. 27, 2023.