Artificial Intelligence Viewed Favorably by Juries, Research Suggests
EXECUTIVE SUMMARY
Jurors may accept the use of artificial intelligence (AI) in medicine more than commonly thought. Research suggests jurors might be sympathetic to a physician who followed AI advice even if doing so harmed the patient.
- Study participants favored physicians who followed AI recommendations even when they turned out to be wrong.
- Physicians should follow AI recommendations unless they feel strongly that a recommendation is wrong.
- The tort system does not have to hinder wider adoption of AI in medicine.
As artificial intelligence (AI) is used more and more to guide clinical decisions, the perception of jurors in any related malpractice case becomes important. Will jurors look favorably on physicians who followed AI advice even if it proved to be the wrong decision? Or will they hold the physician responsible for not overruling the AI suggestion?
A recent study revealed potential jurors might not be strongly opposed to providers following AI advice, suggesting physicians and other clinicians could follow AI recommendations with less fear of malpractice liability.1
The researchers noted that in recent years, clinical decision support tools have become increasingly reliant on AI to guide physicians regarding diagnosis and treatment recommendations. Sometimes, these recommendations deviate from standard care, and the clinician must decide whether to follow the AI or overrule it.
The researchers studied a representative sample of 2,000 adults in the United States. They provided each participant with one of four scenarios involving an AI algorithm suggesting a drug dosage to a physician.
Less Liability Than Expected
The results suggest physicians who follow AI advice might be at less risk for liability than commonly thought, says Alexander Stremitzer, PhD, JD, study co-author and professor at the ETH Zürich Center for Law & Economics in Switzerland. “They [jurors] are less skeptical than is commonly thought,” he says.
The scenarios varied the AI recommendation between a standard and a nonstandard drug dosage, and the physician either followed the recommendation or rejected it. No matter the physician’s decision, each scenario ended in patient harm.
The participants determined whether the physician’s action was reasonable. Two factors seemed to guide participants’ decisions: whether the treatment provided was standard, and whether the physician followed the AI recommendation.
The participants judged physicians who accepted a standard AI recommendation more favorably than those who rejected it. For a physician who received a nonstandard AI recommendation, rejecting it did not make him or her safer from liability.
The main finding is the threat of physician liability from following AI recommendations is smaller than might be expected, Stremitzer says. The basic question the researchers wanted to answer is whether physicians who rely on AI tools are likely to face legal liability after an adverse outcome.
“Medical malpractice requires a deviation from a care standard, and this standard is usually met if the physician exercises due care. Physicians might expose themselves to increased liability when accepting nonstandard AI advice,” Stremitzer says. “On the other hand, it could be that accepting the AI advice is the new standard. We recruited a sample of U.S. adults and said this is a model of a jury, presenting them with different versions of a scenario in which a physician had to decide what dosage of a chemotherapy drug to give to a patient.”
Favorable to Following AI
The researchers found the “jurors” evaluated the physician more favorably when the physician accepted the AI advice, Stremitzer says. That finding was particularly strong when the physician followed AI advice to provide standard care, even when that decision was wrong and harmed the patient. But participants also supported physicians who followed AI advice to provide nonstandard care that harmed the patient.
“This experiment is just one piece of the complex picture of tort liability. The determination of liability depends also on the testimony of experts and a dynamic with the jurors at trial, and other factors,” Stremitzer says. “The bottom line is we found that people were very open to the use of AI.”
The researchers noted the study’s findings should not dissuade healthcare organizations from adopting AI. The experimental scenarios assumed that an AI recommendation was already being offered routinely.
“The study has nothing to say about the relative likelihood of liability for physicians who have not received advice from an AI system and therefore does not support any inference that healthcare institutions should avoid introducing AI systems,” the researchers noted. “Additionally, those decisions will likely involve nonlegal factors as well, such as the competitive pressure to maintain state-of-the-art facilities and their ability to set guidelines for the appropriate use of the AI system.”1
This research appears to be the first of its kind, Stremitzer says. Previous thinking on the matter suggested a physician’s safest play was to reject a nonstandard recommendation from AI to minimize potential liability. That is a reasonable theory, Stremitzer says, because one could assume a jury would be skeptical of a physician trusting AI over his or her own judgment.
People seem to trust AI more than that. “We came to this with an open mind and said, ‘Let’s try to test it,’” he says. “We have plans to test this further because an interesting difference between the U.S. and Europe is that in the U.S., this is something jurors get to decide, but in Europe, this is something that judges get to decide.”
Advice to Physicians
Stremitzer says the research suggests:
- accepting the AI advice on standard treatment if the physician does not have a strong intuition that the recommendation is wrong;
- using best judgment when the AI recommends a nonstandard treatment but, when in doubt, accepting the recommendation.
“Everything our study speaks to is the liability of doctors, conditional on harm occurring,” he says. “Doctors still should use their judgment. If they have a strong opinion that actually the AI might be wrong in its recommendations, it is still a good idea not to follow the advice because it could prevent harm.”
Stremitzer and his colleagues may repeat the study in Switzerland to see whether laypeople there hold different attitudes toward AI and malpractice liability, and they may also run it with a sample of judges or the medical experts judges rely on when deciding malpractice cases. Policymakers also should take note of the research results.
“A common opinion is that the tort system might actually undermine the use of AI tools, and we suggest that is not the case,” Stremitzer says.
REFERENCE
- Tobia K, Nielsen A, Stremitzer A. When does physician use of AI increase liability? J Nucl Med 2021;62:17-21.
SOURCE
- Alexander Stremitzer, PhD, JD, Professor, ETH Zürich Center for Law & Economics, Switzerland. Phone: +41 44 632 94 74. Email: [email protected].