AI Creates Liability Risks for Healthcare Organizations
EXECUTIVE SUMMARY
Artificial intelligence (AI) is becoming more common in healthcare and offers substantial benefits. There also are serious risks to consider.
- Clinicians may rely too much on AI instead of their own judgment.
- AI may “hallucinate” and offer unfounded assessments.
- The introduction of AI should be carefully controlled.
Artificial intelligence (AI) is entering a variety of industries including healthcare, where it offers the opportunity to improve diagnoses and patient care in many ways. The potential benefits come with significant risks that must be anticipated and mitigated.
More basic forms of AI have been around for a while, and their use was limited, but that is changing rapidly, says Sue Boisvert, BSN, MHSA, CPPS, CPHRM, DFASHRM, senior patient safety risk manager with The Doctors Company, a malpractice insurer based in Napa, CA. ChatGPT changed the face of AI by quickly entering many aspects of society.
“It’s hard to predict what’s going to happen in 2024, but I think what we need to pay attention to in healthcare — and especially risk managers — is that it is very clear that there needs to be some guardrails around the advanced AI,” Boisvert says. “The federal and state governments are starting to create some regulations, and we need to be aware of those and start incorporating those models.”
The most prominent regulations or guidelines come from the National Institute of Standards and Technology (NIST), Boisvert says, and IT departments in healthcare organizations must be familiar with that framework. It is easy to implement, and it is based on the idea that organizations should map, measure, and manage their use of AI, she says.
The first crucial step is to know where AI is being used in the organization. “They need to know whether they’re chatbots or AI, what healthcare instruments they’re using that are augmented by AI,” Boisvert explains. “The reason that it’s so important for them to be aware is if you’re using an instrument on a patient, you need to be aware of the functionality and you need to be aware of the risks to watch out for.”
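Boisvert's "map" step is essentially an inventory problem: knowing every AI-enabled tool in use, what kind of AI it is, and what risks come with it. The sketch below is a purely illustrative example of what such an inventory might look like in code; the field names, risk categories, and example tools are hypothetical assumptions for the illustration, not drawn from the NIST framework or from Boisvert.

```python
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    """One AI-enabled tool in the organization's inventory (illustrative fields only)."""
    name: str
    vendor: str
    ai_type: str               # e.g., "static algorithm", "generative", "chatbot"
    clinical_use: bool         # does it touch patient care directly?
    known_risks: list = field(default_factory=list)
    clinician_training_done: bool = False

# Hypothetical entries for the "map" step.
inventory = [
    AIAsset("Sepsis early-warning score", "VendorA", "static algorithm", True,
            ["alert fatigue", "performance drift"]),
    AIAsset("Patient scheduling chatbot", "VendorB", "chatbot", False,
            ["incorrect instructions"]),
]

# "Manage": flag clinical tools whose users have not yet been trained.
for asset in inventory:
    if asset.clinical_use and not asset.clinician_training_done:
        print(f"Training gap: {asset.name} ({asset.vendor}); risks to watch: {asset.known_risks}")
```

Even a lightweight structure like this supports the later "measure" and "manage" steps, because each entry records the risks to watch for and whether users have been trained on the tool.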
Two things make this a distinct challenge in healthcare, Boisvert says. First, most providers or risk managers never received any education in advanced computing. Second, there is currently no commercial insurance product specifically dedicated to AI.
“When telehealth first came out, a lot of the professional liability companies added a rider or language for telehealth, but we’re not seeing that with artificial intelligence,” Boisvert says. “I think there are a lot more risks associated with a program that assists in decision-making than there is with videoconferencing software. Organizations will have to give thought to whether their current policies cover it.”
AI risk management should begin with the earliest consideration of introducing AI to the clinical process, Boisvert says. She suggests using the NIST framework when purchasing AI, noting that it should be introduced only as an aid to clinicians and not a replacement for their judgment.
“People will tell you that their product is diagnostic. That’s kind of a misnomer. The only person who can diagnose is the provider,” Boisvert says. “The artificial intelligence can make recommendations, but the diagnosis is up to the physician. If you’re going to use an advanced tool to help make a decision, it needs to be fully vetted.”
An enterprise risk management approach, looking at the implications in a complex environment, is necessary with AI, Boisvert says. For example, look at how it will affect staff education and training. No physician should use a piece of AI-enabled technology without a good understanding of the capabilities, how it works, what a failure would look like, and what they would do if there was a failure. Support staff need the same understanding.
Every organization should have a multidisciplinary team that guides purchasing, implementation, and use of AI, Boisvert recommends. That team should include end users, administration, finance, and IT. AI purchasing decisions also should be guided by the organizational culture.
“If you’re a risk-tolerant organization, your AI implementation is going to look a lot different than that of a risk-averse organization. In fact, a risk-averse organization may be limiting themselves to static algorithms as opposed to generative AI,” Boisvert explains. “Organizations need to do their own risk analysis. What’s the worst thing that could happen? What would we do about it? Identify the risks, and then they can evaluate whether those risks are present.”
If so, those risks must be monitored, Boisvert says. A good method is with scorecards, already common for monitoring many issues in healthcare. “It is really important for IT, leadership, and end users to collaborate on their scorecards because they are known to be so valuable in clinical quality improvement as well as leadership and finance,” she says. “They need to apply the same gravitas and dedication to AI monitoring scorecards as they do for their other scorecards.”
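As a hypothetical illustration of the kind of AI monitoring scorecard Boisvert describes, the short sketch below tracks a handful of made-up metrics and escalation thresholds for a single tool. The metric names, counts, and limits are assumptions chosen for the example, not recommended values.

```python
# Illustrative monthly scorecard for one AI-enabled tool (all numbers hypothetical).
scorecard = {
    "alerts_generated": 1200,
    "alerts_overridden_by_clinician": 240,   # clinicians disagreed with the AI
    "confirmed_false_recommendations": 18,   # found on chart review
    "downtime_events": 1,
}

override_rate = scorecard["alerts_overridden_by_clinician"] / scorecard["alerts_generated"]
false_rec_rate = scorecard["confirmed_false_recommendations"] / scorecard["alerts_generated"]

# Hypothetical thresholds that would prompt escalation to the governance team.
THRESHOLDS = {"override_rate": 0.30, "false_rec_rate": 0.02}

if override_rate > THRESHOLDS["override_rate"] or false_rec_rate > THRESHOLDS["false_rec_rate"]:
    print(f"Escalate: override rate {override_rate:.1%}, false recommendation rate {false_rec_rate:.1%}")
else:
    print(f"Within agreed limits: override rate {override_rate:.1%}, false recommendation rate {false_rec_rate:.1%}")
```

In practice, IT, leadership, and end users would choose the metrics and thresholds together, just as they do for other clinical quality scorecards.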
Not Always Up to Date
One shortcoming of AI might not be obvious to healthcare professionals, notes Mihai Nadin, PhD, the Ashbel Smith Professor Emeritus at the University of Texas at Dallas. The technology is not always up to date, and that could have serious consequences in clinical care, he says.
Even with the casual use of ChatGPT, a query about recent events might produce no answer or an incorrect answer because the technology’s “learning” stopped a few years ago. Similarly, AI used in healthcare may not reflect the most current thinking or data, Nadin says.
“My major concern is that AI as it is practiced today is generalizing from the medicine of the past, represented by the data that was used in order to train models,” he explains. “Generalizing from the past puts us in a really dangerous situation.”
The continuing problem of medical errors and iatrogenic harm may be exacerbated by AI that relies on decades of data and clinical assumptions that are now being challenged. “We know for a fact that medical care based on some wrong assumptions is the third most common killer. First is the heart, which means cardiology, then comes cancer and then comes people killed by errors in medical care,” Nadin says. “This being the case, the question that should be asked now is if we take advantage of AI, are we going to automate the killing? Are we going to generalize from a model in which medicine, instead of making progress as a discipline on its own, becomes more and more captive to explanations that come from physics and chemistry, and less and less aware of the real complexities of the living of the biological?”
Nadin supports the use of AI to streamline administrative functions and minimize the burden on clinicians. However, he says the use of AI in the actual care of patients should be carefully controlled.
“I’m not very excited by the fact that we’re going to automate more and more of what in principle is an activity that involves an interaction between those who provide medical care and those who need medical care,” Nadin says. “The automation of anything that has nothing to do with the patient is, for me, not 100% justified. Don’t start automating things related to the relationship between the patient and the doctor.”
Risk managers should be concerned about whether AI will diagnose accurately and whether a physician is going to overly rely on an AI model to make that decision, says John F. Howard, JD, senior attorney with Clark Hill in Scottsdale, AZ.
“There also is concern over whether there is any inherent bias that is built into the training algorithm of the AI, which may impact certain populations based on bias,” Howard says. “Then, of course, there is cybersecurity because there are always privacy concerns when you’re training these things. We have physicians turning over large amounts of identifiable data to try and train the AI, or we have third-party vendors that just ended up with access to the whole trove of electronic health records. From a risk perspective, there’s a large amount of risk in here that up until now is largely unregulated.”
Not all AI products in the healthcare setting are the same, and not all carry the same sort of risk, says Paul F. Schmeltzer, JD, senior attorney with Clark Hill in Los Angeles. One way to look at AI products is to categorize their risks, from unacceptable to minimal or no risk, he suggests.
“Is this going to be a product or service using AI that providers will lean on to their detriment — meaning it will be a disservice to the patient, whether it’s a misdiagnosis or incorrectly prescribing medication, based on overreliance on the AI?” Schmeltzer asks. “Or, does it introduce the risk of something even more nefarious, like the discriminatory aspect of what the AI might spew out as a result of its algorithm, with an inherent bias baked into that?”
More Guidance Coming
Healthcare organizations need more guidance on how to introduce and monitor AI, says Robert Andrews, JD, CEO of Health Transformation Alliance (HTA) in Washington, DC, which guides the strategic direction of more than 50 major corporations working to fix the U.S. healthcare system.
HTA is set to release suggested ethical practice guidelines for the use of AI in healthcare soon, Andrews says. The guidelines will address the proper balance between humans and AI. As an example, Andrews notes that when AI scans radiology studies, the error rate is only slightly higher than when a human performs the analysis. But when the AI reviews the study and a skilled, experienced radiologist also checks it, the error rate is much lower.
“That’s what we’re after,” Andrews notes. “We want the right balance of the AI and the human, who has a perhaps more nuanced understanding of what’s going on with the patient.”
Additionally, the guidance will caution against AI perpetuating prejudicial patterns based on outdated or uninformed learning. The guidelines also will call for “maximum appropriate transparency,” according to Andrews.
“It’s a little tricky to define, but we do think that when someone’s data is being used in a way to feed an AI algorithm, to the extent it’s practical, they should know that,” he says. “That is not to say that necessarily patients have the right to opt out, but we just think that when someone’s personal information is being used in a database, they should know it, and consent to it where appropriate.”
Need Sufficient Infrastructure
A hospital or health system incorporating AI must first ensure it has the appropriate cybersecurity in place and the appropriate infrastructure to handle any type of software that employs AI, says Bill Bower, senior vice president with Gallagher Bassett, a healthcare professional liability claims and risk management consulting company in Rolling Meadows, IL.
Bower has seen health system C-suites push for the rapid implementation of AI without first considering all the underlying IT support it requires and the security additions it might entail. He advises a slow and deliberate approach to incorporating AI into a healthcare organization.
Bower also is concerned that the use of AI may increase the potential harm from a ransomware attack. “I start to worry that if we use AI and have a robust database, threat actors will say, ‘Not only do we have your system hostage, but we also could contaminate all your data. And by contaminating all your data, we completely ruin the ability to engage in artificial intelligence and machine learning,’” Bower says. “It hasn’t happened yet, to my knowledge, but I certainly could see it as a high-stakes game for those institutions that rely on it.”
On the plus side, AI could improve patient care in areas like telehealth services, says Jolie Apicella, JD, partner with Wiggin and Dana in New York City. There are risks, but healthcare organizations should balance them with the potential benefits.
“There is the possibility of reducing costs and making healthcare more accessible to larger populations. With the advent of the wider use of telehealth, we saw that certain demographics, [such as] indigent people, were having greater health results and greater access to healthcare,” Apicella explains. “I would expect the same results with AI if you can just open a computer and start to have an honest dialogue about all of the symptoms because sometimes you have a limited amount of time with your doctor. But I think there needs to be humans involved. Otherwise, the less human oversight there is into this, I think the greater the potential risk.”
Subject to Liability
Healthcare systems and providers may be subject to liability for AI systems under a malpractice theory or other negligence theories when using AI tools to provide care to patients, says A.J. Bahou, JD, partner with Bradley Arant Boult Cummings in Nashville, TN. Likewise, AI vendors might be subject to product liability regarding the AI tool used in healthcare.
“Regarding doctors, they should be concerned about using a new AI tool because that tool could be criticized as deviating from the standard of care,” Bahou says. “Until the AI tool is widely used and accepted by the medical profession, doctors should always evaluate the outputs from AI tools and maintain the physician’s judgment in the ultimate decision on patient care. Until AI tools become part of the standard of care by the profession, this early adopter concern will be a persistent risk by the doctor, and vicariously by the health system, during this evolution of using AI tools in healthcare.”
There also is a product liability risk for the AI system designer, such as for the design of the algorithm used in the AI tool, Bahou notes. In this instance, the AI vendor could be liable for the product if it causes harm to the patient. Legal theories in this area could include failure to warn about risks, poor design, unmanageable adjustments to the algorithm due to updates in the machine learning process, or manufacturing defects. For example, if a surgical robot is driven by AI algorithms and causes harm, the AI manufacturer might be liable for that injury to patients if the product is proven to be defective, Bahou explains.
AI vendors are promoting AI assistant tools for the physician-patient interaction, Bahou notes. The benefit is like having a smart speaker, such as Alexa or Siri, listen to the physician-patient conversation and transcribe that conversation. The transcription can then become the medical record of that visit. Providers will appreciate the reduced burden of taking notes and documenting everything, Bahou says.
“The doctor may also ask the AI tool for assistance in diagnosis, allergic interactions, medical history, or prescription assistance. In doing so, the AI system can check the patient’s medical history, automatically find available time on the patient’s mobile device to schedule a follow-up visit, and/or read data from the patient’s mobile device for collecting health data as part of the treatment plan,” Bahou says. “The AI tool could send the prescription to your pharmacy of choice and automate that e-prescription process. There are many benefits, but also increased risks.” Some risks with transcribing the conversation include a concern about how the AI tool will interpret or misinterpret sarcasm, he notes.
Concerns about patient privacy, cybersecurity, and inherent bias will grow as AI tools are implemented more deeply across the spectrum of patient care, Bahou says. Among the risks is malpractice exposure if the provider relies too heavily on the AI tool for assistance and misses a diagnosis.
“Likewise, the AI vendor may have product liability if its outputs cause harm or fail in meeting the standard of care with an improper diagnosis. Cybersecurity risks remain prevalent but with increasing concern about the biometric data now added to the medical record,” Bahou says. “The record of a person’s oral conversation being hacked is much more intrusive as compared to a cryptic medical note written by the doctor in the doctor’s own words.”
Watch for Bias Toward AI
“Technology bias” is a real concern with the increasing use of AI, says Wendell J. Bartnick, JD, partner with Reed Smith in Houston. Providers may perceive AI solutions as highly accurate and rely too much on the technology when making treatment decisions rather than using their own medical judgment.
Such overreliance on technology could result in negligence lawsuits against providers with allegations that providers did not meet a reasonable standard of care by failing to use medical judgment to override the AI technology’s recommendations, Bartnick says. “However, the flip side will likely become true, with claims that providers must use AI technology to meet the prevailing standard of care. A failure to do so may result in negligence claims,” he says. “Organizations will need to continue to monitor the quality and use of AI technology when providing patient care.”
Providers should update governance and compliance programs to account for the use of AI and other advanced technologies so that they are appropriately used, Bartnick notes. Organizations should be clear about which technologies may or must be used and when.
Bartnick says there should be a process of approving and introducing the use of AI in clinical care. Healthcare organizations that do not follow an approval process for adopting AI technology in clinical care likely are taking on significant risk.
Many healthcare organizations are creating or expanding existing governance teams and programs to account for the adoption and use of AI technology, Bartnick notes. AI technology proposals should be submitted to the governance team and undergo a formal approval process. Many AI risk management frameworks recommend that AI governance programs be reviewed and approved by the board or other senior management, he says. Corporate compliance/audit teams also should play a significant role in ensuring ongoing compliance with corporate AI policies on adoption and use.
“Many organizations are in the process of improving their knowledge and awareness of AI technology capabilities and use cases, and they are developing governance and compliance processes to account for AI technology,” Bartnick says. “While organizations have made significant progress in a short time, significant work remains.”
Possible HIPAA Risk
For covered entities under HIPAA, an AI risk arises under Section 1557 of the Patient Protection and Affordable Care Act, says Bradley Merrill Thompson, JD, an attorney with Epstein Becker Green in Washington, DC. The Office for Civil Rights (OCR) has proposed a regulation to prohibit discrimination by an algorithm used in clinical care, he says. For developers, the primary risk is a violation of the Federal Food, Drug, and Cosmetic Act, which regulates medical devices. (More information is available at: https://www.hhs.gov/civil-righ....)
“An algorithm can constitute a medical device if it’s used in the diagnosis or treatment of a disease or other condition. So much clinical decision support software that provides patient-specific assessments and treatment recommendations qualifies if it isn’t exempted,” Thompson explains. “Under a 2016 amendment in the 21st Century Cures Act, clinical decision support can be exempt if the basis for recommendation is fully transparent. But that’s a difficult standard to meet for a machine learning algorithm.”
The primary issue under both laws is whether a clinical algorithm provides variable levels of accuracy depending on the patient demographic or other factors, Thompson explains. An algorithm that is 99% accurate for white males but only 70% accurate for Black females would trigger a violation, he says, but such disparities in performance are common.
The problem for developers is that such variation often is unintentional — it is simply a reflection of the fact that they had less data for one demographic on which to train their algorithm, Thompson says. An algorithm might be undertrained for one minority just because of a lack of data. Both OCR and the Food and Drug Administration (FDA) have wide enforcement powers, he notes.
There are three primary tactics to mitigate the risks related to AI, Thompson notes. The first is to put a governance process in place to ensure that employees use good data management practices to catch the variability before the algorithm is released. Second, the algorithm must be tested and audited before release to catch unintended bias.
Finally, healthcare organizations need to put monitoring processes in place because the reliability of these algorithms tends to evolve, and performance for one subpopulation can decline again without anyone intending it, Thompson says.
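The kind of disparity Thompson describes (for example, 99% accuracy for white males but only 70% for Black females) is what a pre-release audit and ongoing monitoring are meant to surface. The sketch below is a minimal, hypothetical illustration of such a subgroup check; the group labels, validation data, and 10-point disparity threshold are assumptions made for the example.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute accuracy per demographic group from (group, prediction_correct) pairs."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, is_correct in records:
        totals[group] += 1
        correct[group] += int(is_correct)
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical validation results tagged by demographic group.
validation = ([("white_male", True)] * 99 + [("white_male", False)] * 1
              + [("black_female", True)] * 70 + [("black_female", False)] * 30)

accuracy = subgroup_accuracy(validation)
worst, best = min(accuracy.values()), max(accuracy.values())

# Hypothetical rule: flag the model if subgroup accuracy differs by more than 10 points.
if best - worst > 0.10:
    print(f"Bias flag: subgroup accuracy ranges from {worst:.0%} to {best:.0%}; investigate before release")
```

Rerunning the same check on production data at a regular cadence addresses the monitoring concern Thompson raises, since subgroup performance can degrade over time without anyone intending it.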
Must Protect Data
These products, if they meet the definition of a medical device, must undergo the FDA approval process. The legal standard for when that process is triggered turns on whether the algorithm can affect the safety or effectiveness of the device, Thompson says.
“With the rapid rise of generative AI, a whole bunch of people are now focused on figuring out how AI tools can advance healthcare clinically. Many people are calling for federal regulation, not realizing that federal regulation already exists. Many people in this space do not understand the FDA requirements, and don’t understand the HHS [Department of Health and Human Services] proposed regulation that will frankly just codify what the statute already requires,” Thompson says. “The statute already imposes nondiscrimination requirements, and the regulation will just make it clear how they apply to algorithms. This domain is full of doctors and computer scientists who are not well-versed in the regulatory requirements.”
AI tools often need access to lots of data, notes Melissa Soliz, JD, an attorney with Coppersmith Brockelman in Phoenix. Healthcare organizations that are interested in using AI tools will need to carefully consider whether they can use those tools in compliance with privacy and security laws.
For example, will the AI developer agree to enter into a HIPAA business associate agreement that will strictly regulate how the AI developer uses and discloses the data? Consider whether the AI tool will run on the healthcare organization’s servers or if copies of the data will be hosted, stored, and processed on the AI developer’s systems, Soliz advises. Will the AI developer agree to not use the individually identifiable health information for its own purposes to develop new commercial products? Also, does that AI developer have adequate security measures in place to meet the requirements of the HIPAA Security Rule?
“Additionally, AI is not perfect. Every AI product has limitations. Overreliance on AI may expose a healthcare organization to liability for medical errors and misdiagnoses,” Soliz warns. “For example, ‘hallucinations’ are a well-known generative AI limitation where the AI presents a seemingly reasonable response that is factually inaccurate, misleading, or completely invented — even citing fake sources. A healthcare provider using generative AI to create patient notes, a treatment plan, or discharge instructions, for example, should beware of hallucinations and review the AI’s response to avoid inaccurate records, medical error or malpractice, and vicarious liability for the healthcare organization.”
SOURCES
- Robert Andrews, JD, CEO, Health Transformation Alliance, Washington, DC. Phone: (847) 273-3880.
- Jolie Apicella, JD, Partner, Wiggin and Dana, New York City. Phone: (212) 551-2844. Email: [email protected].
- A.J. Bahou, JD, Partner, Bradley Arant Boult Cummings, Nashville, TN. Phone: (615) 252-2242. Email: [email protected].
- Wendell J. Bartnick, JD, Partner, Reed Smith, Houston. Phone: (713) 469-3838. Email: [email protected].
- Sue Boisvert, BSN, MHSA, CPPS, CPHRM, DFASHRM, Senior Patient Safety Risk Manager, The Doctors Company, Napa, CA. Phone: (800) 421-2368.
- Bill Bower, Senior Vice President, Gallagher Bassett, Rolling Meadows, IL. Phone: (630) 773-3800.
- John F. Howard, JD, Senior Attorney, Clark Hill, Scottsdale, AZ. Phone: (480) 684-1133. Email: [email protected].
- Mihai Nadin, PhD, Ashbel Smith Professor Emeritus, University of Texas, Dallas. Phone: (972) 883-2111.
- Paul F. Schmeltzer, JD, Senior Attorney, Clark Hill, Los Angeles. Phone: (213) 417-5163. Email: [email protected].
- Melissa Soliz, JD, Attorney, Coppersmith Brockelman, Phoenix. Phone: (602) 381-5484. Email: [email protected].
- Bradley Merrill Thompson, JD, Attorney, Epstein Becker Green, Washington, DC. Phone: (202) 861-1817. Email: [email protected].