By Stacey Kusterbeck
Multiple artificial intelligence (AI)-based cardiovascular devices are currently being developed and evaluated. “It is critical to understand how these technologies can be effectively and ethically implemented,” underscores Maryam Mooghali, MD, MSc, an internal medicine resident at Yale New Haven Hospital.
Mooghali and colleagues sought to identify specific barriers and facilitators to trustworthy and ethical use of AI in cardiovascular care. The researchers conducted a literature review and identified 145 articles covering ethical concerns, transparency, or trust associated with AI-based medical devices used for cardiovascular care. Key concerns from patients’ and healthcare providers’ perspectives include privacy, security, healthcare inequity, patient harm, accountability, informed consent, and data ownership.
“Privacy risks arise from potential data breaches and misuse of sensitive patient information. This highlights the need for robust security measures, such as data anonymization and secure storage practices,” says Mooghali.
Healthcare inequity is another central ethical concern. AI tools can exacerbate existing disparities, particularly if they rely on biased training datasets. Additionally, the complexity of AI systems creates new challenges in obtaining informed consent.
“Patients may struggle to understand different aspects of care provided by AI-enabled medical devices, and how patient data is being collected and used,” Mooghali explains.
To address informed consent challenges with AI-enabled medical care, institutions can develop clear educational materials and consent forms. These should explain how AI-enabled medical devices function and how patient data are collected, stored, and used.
“Additionally, clinicians need to ensure effective communication about the role of AI in patient care and address any concerns or questions in each patient visit,” says Mooghali.
Ethicists can help to develop policies for the use of AI in cardiovascular care to ensure that ethical considerations are integrated into the design and implementation of AI tools. “Ethicists can offer training for clinicians on the ethical implications of AI-enabled medical care, addressing key concerns around data privacy, healthcare inequity, and informed consent,” adds Mooghali.
Accountability also raises significant ethical questions. The involvement of multiple stakeholders in AI development makes it unclear who ultimately is responsible for outcomes driven by these technologies. “To address these ethical questions, establishing further regulatory oversight on the use of patient data and improving transparency of AI tools seems necessary,” Mooghali suggests.
To proactively address ethical concerns regarding the use of AI in cardiovascular care, organizations should prioritize data privacy and security, and enhance transparency about how AI-enabled medical devices function and how patient data are stored and used.
“Moreover, they could regularly evaluate the performance of AI tools to minimize patient harm,” says Mooghali.
Reference
- Mooghali M, Stroud AM, Yoo DW, et al. Trustworthy and ethical AI-enabled cardiovascular care: A rapid review. BMC Med Inform Decis Mak. 2024;24(1):247.