By Stacey Kusterbeck
Most oncologists felt responsible for protecting patients from biased artificial intelligence (AI) tools, but few were confident in their ability to do so, a recent study found.1 “We were motivated by the accelerating, yet highly heterogeneous, development and adoption of AI tools in cancer care,” says Thomas Walsh, MPH, one of the study authors and research project manager in the Abel Lab at Dana-Farber Cancer Institute.
Walsh and colleagues surveyed 204 oncologists in 2022-2023 about their views on ethical issues raised by the use of AI in cancer care. Some key findings:
• Oncologists expressed concerns about model “explainability.” Most felt strongly that oncologists should be able to explain AI tools, but far fewer felt that patients should be able to explain the tools.
• More oncologists felt that patients should provide explicit consent for the use of AI tools in treatment decisions than for the use of AI tools in diagnostic decisions.
• Oncologists were asked about the ethical response if an AI model selected a different treatment regimen than the oncologist planned to recommend. The most frequent response (36.8%) was that oncologists should present both options and allow the patient to decide.
• Most respondents reported that physicians should protect their patients from biased tools. Yet only a minority of oncologists felt that they could identify whether the data used in an AI model were sufficiently representative.
• Oncologists reported different views depending on the type of AI model. For instance, an AI tool used in treatment decisions was held to a higher standard than a tool used for diagnostic decisions.
• Some oncologists’ views were paradoxical. For example, oncologists did not expect patients to understand AI tools, but they did expect patients to make decisions related to recommendations generated by AI.
“Our analysis highlights the critical need for situation-dependent implementation of AI tools,” underscores Walsh. Using an AI tool to assist in radiology or pathology diagnostics raises different concerns than using a tool in treatment decision-making, for example.
“Physicians do not seem to want to assume ultimate responsibility for AI tools in the clinical setting,” observes Walsh.
Oncologists deferred some responsibility to patients (as in treatment decision-making) or to AI developers (with respect to the medico-legal responsibility for adverse events resulting from AI tool use).
“Informed consent that explicitly addresses such issues will be necessary before implementing tools in the clinical setting,” says Walsh.
As these and other issues with AI tools get sorted out, “ethicists can play a critical role as advocates and arbiters,” says Walsh.
Ethicists can get involved in designing informed consent practices that explicitly address accountability and deference to AI, for example. Ethicists also can delineate the rights of patients when AI tools are used in their care. “Hospital ethicists can also provide rigorous assessments of the effects of AI on specific care decisions and help to guide clinical experts when problems related to AI use inevitably occur,” adds Walsh.
AI tools have the potential to improve both diagnosis and treatment in medicine, and oncological care is no exception. “However, ethical and legal challenges remain before many of these advances are ready for implementation,” asserts Jacob M. Appel, MD, JD, MPH, HEC-C, director of ethics education in psychiatry at Icahn School of Medicine at Mount Sinai and an attending physician at Mount Sinai Health System.
Skilled oncologists may have instincts about particular cases, based on years of experience, that differ from algorithmic protocols or AI guidance. This raises the question of whether the oncologist should follow their instincts over the recommendations of the AI tool, says Appel. Another concern is whether an oncologist would be blamed for a poor outcome if they did not follow the AI tool.
Appel calls this the “negative outcome penalty paradox.” “Physicians are at high risk of being penalized by juries in malpractice cases when they use AI — both in cases where they overrule AI determinations and in those where they embrace them,” Appel explains.
Using AI does not absolve physicians of their basic ethical duties. Clinicians still have an ethical obligation to maintain confidentiality and ensure that patients provide informed consent before interventions. “But sometimes, even explaining the risks involved in the use of AI may prove difficult,” says Appel. This can result in the patient being underinformed. For instance, a physician may not know the process by which the AI arrives at its conclusions or its likelihood of error. Conveying risks to patients accurately, without solid data, becomes impossible.
Appel would like to see ethicists ask questions of oncologists to identify potential blind spots in the use of AI tools. “Clinicians and researchers are so focused on achieving positive outcomes for their patients and expanding knowledge that they may not fully appreciate the risks involved in certain interventions,” explains Appel. For instance, the clinical team might feel they are providing a patient with a reasonable amount of information about the use of AI tools. Ethicists can call attention to ways in which the average patient (or research subject) is likely to misinterpret this same information. Ethicists also are more aware of concerns related to equity, resource allocation, and the interests of innocent third parties. “Often, an additional set of eyes — especially experts who approach issues from different vantage points — will recognize ethical pitfalls that may have previously gone unnoticed,” says Appel.
Ethicists have distinctive expertise that enables them to recognize issues that otherwise might be overlooked. “In the same way that an oncologist may recognize nuances in a cancer patient’s clinical presentation from years of exposure to other patients, an ethicist may recognize ethical issues that others do not, as a result of many years working in the field,” underscores Appel.
1. Hantel A, Walsh TP, Marron JM, et al. Perspectives of oncologists on the ethical implications of using artificial intelligence for cancer care. JAMA Netw Open 2024;7:e244077.