Watch for ‘Hallucinations’ When Using AI for Healthcare
Artificial intelligence (AI) developers caution that there are limitations to the technology. Healthcare organizations must consider them when seeking the benefits AI offers.
AI can be helpful, but it can introduce errors to the healthcare process, says Tony Lee, JD, chief technology officer with Hyperscience, an AI company in New York City.
“Proponents of AI in healthcare have long discussed the benefits of leveraging this technology for efficiency and time savings, but there are considerable risks that must be taken into account when developing models for this purpose,” Lee says. “One example is the potential risk of hallucinations by large language models when the model misinterprets massive amounts of data. When dealing with an industry that literally makes a difference between life and death, inaccurate outputs from AI models are simply not acceptable. Patients rely on physicians to make quick, accurate decisions when treating illnesses, making human involvement critical throughout the process.”
Lee notes one instance in which an AI system alerted an oncology nurse that a patient had sepsis, even though she was confident the diagnosis was incorrect. Hospital procedure nevertheless required her to draw blood from the patient because of the AI diagnosis, which could have exposed him to infection.
“In fact, the patient was not septic, which is why it’s so important for organizations to remember that humans must always have the final say,” Lee stresses.
Although these risks are concerning, Lee says it is important to remember that AI models overseen by humans can unlock huge benefits for the healthcare industry. The federal government should consider standard ethical principles for every healthcare AI use case, but in the absence of such legislation, healthcare organizations can take it upon themselves to minimize these risks and potential consequences, he says.
One way to do so is to build an ethical AI committee that governs how the organization uses AI, Lee suggests. The most important standard is oversight that always puts a human at the forefront of supervising system outputs, he says. Additional considerations include data privacy, mitigating bias, and ensuring the algorithm is transparent. In addition to improving the overall quality of a model's outputs, Lee says these practices will build trust with a public concerned about how healthcare organizations use their data.
“All organizations need to self-regulate and promote safe use of AI models, but healthcare companies in particular must be stringent in their safety considerations, especially given how much is at stake,” Lee says. “An inherent risk with learning models is if there are biases in the training data — especially as it relates to underrepresented population groups. Some examples include race, nationality, and socioeconomic background. Training data transparency and explainability will be critical to build confidence that the model is taking into consideration the many variables that go into patient care.”
In addition to posing benefits and risks in clinical care, AI can help hackers bolster their threats to healthcare organizations. The next year will bring a more challenging digital threat landscape with the rise of AI-driven cyberthreats, says Kevin Heineman, vice president of corporate IT at Lyric, an AI healthcare technology company in Sunnyvale, CA. The sophistication of these AI-enabled threats will transform traditional cybersecurity strategies, necessitating a shift toward more dynamic and comprehensive security measures, he says.
The collaboration between chief information security officers, IT security professionals, and business units in securing AI business processes will be more critical than ever, Heineman notes. This will involve creating AI systems with inherent security features and moving beyond restrictive policies to embrace inclusive, proactive security plans that can adapt to the evolving digital threat landscape.
“In response to these challenges, healthcare organizations should focus on enhancing their cybersecurity infrastructure, investing in advanced threat detection and response systems. Training employees in cybersecurity best practices and the implications of AI in digital security will also be essential,” Heineman explains. “As we navigate through the year, a proactive, informed approach to cybersecurity, especially in AI-related areas, will be crucial for safeguarding digital assets and maintaining trust in technology-led processes.”
While healthcare organizations must embrace new and emerging technologies, diligence in the selection process is equally important, says Vince Cole, CEO of Ontellus, a records retrieval company in Houston. Adopting AI technologies offers numerous advantages, including enhanced patient care, engagement, operational efficiency, and clinical advancement, he says, but organizations also must navigate the ever-changing legal and regulatory landscape, ensure data security, and address ethical concerns.
Continuous monitoring of new technologies is essential for innovation in the healthcare ecosystem, Cole says. Although AI presents opportunities for increased innovation, caution is advised in implementing it in areas where technologies may not be fully developed.
Organizations should view AI as a tool that complements their clients and staff rather than a replacement, Cole notes. While AI can significantly enhance efficiencies and reduce costs, it remains crucial to involve humans in analysis and review to ensure accuracy and quality controls.
“It is imperative for healthcare organizations to integrate these new technologies with a comprehensive oversight plan, enabling continuous monitoring and adjustments to processes, as necessary,” Cole says. “This approach will enhance security measures and ensure compliance with regulations.”
Cole says AI technologies hold immense potential in the legal and healthcare sectors. This potential lies in the seamless integration of health systems with patient medical and billing records, as well as in the ability of legal industries to access optimal evidentiary materials for their cases.
“However, the healthcare industry has been slow in adopting new technologies, creating a challenging environment for legal entities to navigate the legal system efficiently,” Cole says. “Despite this, ongoing back office innovations have already had a significant impact on the storage and sharing of patient data. These innovations empower legal firms to obtain and thoroughly review crucial information, ultimately enhancing the outcomes of their cases for clients.”
SOURCES
- Vince Cole, CEO, Ontellus, Houston. Phone: (800) 467-9181.
- Kevin Heineman, Vice President of Corporate IT, Lyric, Sunnyvale, CA.
- Tony Lee, JD, Chief Technology Officer, Hyperscience, New York City. Phone: (646) 767-6210.