Artificial Intelligence Could Affect ED Providers’ Malpractice Risk
Inaccurate artificial intelligence (AI) algorithms could harm patients and result in liability exposure, the authors of a recently published paper argued.1
“A significant risk, in quality of care as well as malpractice, is that the clinician substitutes the AI ‘judgment’ for hers,” says Kenneth N. Rashbaum, JD, a partner at New York City-based Barton.
AI should be used as a tool, along with physical exam findings, narrative history, review of prior records, and clinical judgment. ED providers should document the thought process and tools used to arrive at a diagnosis, including AI, Rashbaum says.
To avoid risk management issues in the ED, Rashbaum says hospitals should:
- test algorithms using several methodologies (one such check is sketched below);
- train clinical staff on misuses of AI (e.g., substituting AI findings for well-reasoned clinical judgment);
- direct risk management, finance, and legal departments to review professional liability policies to confirm claims based in whole or in part on allegations regarding AI are not excluded from coverage;
- ensure service agreements with the AI platform provider include indemnification provisions.
“This could allow the hospital to shift some or all of the liability risks and expenses to the platform provider,” Rashbaum says.
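As one illustration of what “testing algorithms, using several methodologies” could look like, the minimal sketch below retrospectively scores a hypothetical binary AI alert (e.g., a sepsis flag) against labeled patient outcomes, both overall and by subgroup. Every name, data value, and subgroup label here is an illustrative assumption, not any vendor’s product or the cited paper’s methodology.

```python
# Minimal sketch: retrospective validation of a hypothetical binary AI alert
# against labeled outcomes, with a per-subgroup breakdown to surface uneven
# performance. All data and names are illustrative assumptions.

from collections import defaultdict

def confusion_counts(predictions, labels):
    """Count true/false positives and negatives for 0/1 outputs."""
    tp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    tn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 0)
    return tp, fp, fn, tn

def sensitivity_specificity(predictions, labels):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)."""
    tp, fp, fn, tn = confusion_counts(predictions, labels)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

def subgroup_report(predictions, labels, subgroups):
    """Score each subgroup (e.g., age band) separately to flag groups
    where the algorithm underperforms the overall numbers."""
    buckets = defaultdict(lambda: ([], []))
    for p, y, g in zip(predictions, labels, subgroups):
        buckets[g][0].append(p)
        buckets[g][1].append(y)
    return {g: sensitivity_specificity(ps, ys) for g, (ps, ys) in buckets.items()}

if __name__ == "__main__":
    # Toy data standing in for a labeled retrospective ED cohort.
    preds  = [1, 0, 1, 1, 0, 0, 1, 0]
    truth  = [1, 0, 0, 1, 0, 1, 1, 0]
    groups = ["<65", "<65", "65+", "65+", "<65", "65+", "<65", "65+"]
    print("Overall (sens, spec):", sensitivity_specificity(preds, truth))
    print("By subgroup:", subgroup_report(preds, truth, groups))
```

In practice, “several methodologies” would reach beyond a retrospective check like this (for example, prospective shadow-mode evaluation and ongoing monitoring for performance drift), but even this simple breakdown can surface the kind of systematic error the experts quoted here warn about.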
AI adoption will require “significant time, attention, and funding to better understand the benefits in the ED setting,” says Rick Newell, MD, MPH, BCCI, FACEP, chief transformation officer at Emeryville, CA-based Vituity.
Currently, AI tends to reproduce existing systems with greater efficiency. If care in the ED is problematic to begin with, AI could make it worse. Instead of addressing safety issues, AI applied blindly to the ED (or other healthcare settings) may simply learn to recreate those safety issues more efficiently. “AI in healthcare, currently, leaves much to be desired,” Newell observes.
That does not mean an EP can blame AI for negligent care. Ultimately, ED providers are responsible for the care of the patient. “As such, the malpractice risk will continue to lie with them,” Newell cautions.
AI and machine learning could affect malpractice risk for EDs in a positive way. “The ability to gain insights into disease processes in previously impossible ways is staggering,” Newell says. “We will see better care, better outcomes, and, theoretically, reduced malpractice risk.”
AI could even become the new standard of care for EDs. For instance, if AI is used at most EDs in a region, and a patient is misdiagnosed at one facility without AI, a plaintiff’s attorney could argue that use of the AI tool is the legal standard of care for EDs, and that the facility’s failure to use it breached that standard.
“These are important questions that we’ll need to consider as AI is rolled out across healthcare more broadly,” Newell notes.
AI can assist ED clinicians in making better diagnoses. “However, it is crucial to understand both the pros and cons of AI on patient safety outcomes,” says Cynthia A. Haines, Esq., principal in the Harrisburg, PA, office of Post & Schell.
In the ED, faulty AI could harm patients on a broad scale. “A system error in an AI product could lead to widespread patient injuries resulting from inappropriate diagnosis or medication errors compared to limited patient injuries attributable to a provider’s error,” Haines says.
REFERENCE
1. Maliha G, Gerke S, Cohen IG, Parikh RB. Artificial intelligence and liability in medicine: Balancing safety and innovation. Milbank Q 2021; Apr 6. doi:10.1111/1468-0009.12504. [Online ahead of print].