Use Caution if Machine Learning Models Are Used to Predict Mortality
As a pediatric ICU (PICU) provider, Kelly Michelson, MD, MPH, has engaged in many conversations with families about prognosis. Michelson and her colleagues recently explored ethical concerns with the growing practice of using machine learning models to predict mortality.1 The group focused on the PICU in particular.
“Clinicians need to know as much as they can about machine learning and some of the challenges associated with these emerging technologies,” says Michelson, director of the Center for Bioethics and Medical Humanities at Northwestern University Feinberg School of Medicine.
To gather a multidisciplinary take, Michelson and colleagues talked with a pediatrician-bioethicist, an artificial intelligence (AI) expert, a health law scholar, and a bioethicist.
“Machine learning/AI will definitely be part of caring for patients in some way or another going forward. In some areas, it already is. As clinicians, we need to know what it all means and what the potential pitfalls are,” Michelson offers.
Machine learning/AI does not preclude the need for communication between clinicians and patients. “In some ways, the more complicated technology gets, the more we need clinicians to help make sure communication with patients is happening appropriately,” Michelson says.
To do so, clinicians must know the pros and cons of the technology and its potential consequences. “Clinicians need to push for efforts to proactively assess limitations and biases,” says Michelson, adding that as for ethicists, “one of the biggest things we can do is increase awareness.” Ethicists can do that by providing input on the use and development of technology at their institutions.
For PICU staff, “AI is poised to change their work, not always for the better. More and more, AI will be a tool and perhaps a partner in making clinical decisions,” predicts Craig M. Klugman, PhD, professor of bioethics and health humanities at DePaul University in Chicago.
A machine learning model is only as free from bias as the data set from which it is built. “The data set on which it learned could be limited, or come from a different population than your patient, which means that its recommendations could be biased,” Klugman warns.
Clinicians could find themselves having to justify a plan of care that differs from the AI recommendation. The tools are meant to bring together data that are too large, complex, or difficult for any one clinician to gather, helping them make informed decisions.
“However, conversations between the physician and the patient about using these tools are not always happening,” Klugman notes. “Rather than hiding this technology, physicians should be frank about their use.”
Clinicians are faced with the need to explain to patients what these systems are, how they are used, and why they are following (or not following) the recommendations. Klugman suggests this wording: “Our hospital has recently started using an intelligent computer system to assist clinicians in making decisions. The computer has access to millions of cases and it can show us how your situation is similar or different. As your physician, I ultimately make the recommendation, but the computer can assist me in making good recommendations for your care. What questions do you have?”
Klugman says clinicians should be asking hospital leaders these questions about AI tools:
- What are the institution’s policies regarding oversight?
- Will clinicians be questioned if their recommendations differ from the AI tool’s?
- What about liability? If clinicians follow the AI recommendations and things go wrong, does the hospital’s malpractice policy cover the clinician in that scenario?
- What patient demographics were used in the source data that created the system?
It can be difficult (or even impossible) to fully understand how the AI is making recommendations.
“Don’t assume these systems are more objective or more equitable,” Klugman cautions. “Directed efforts to understand what is driving algorithms are important.”
When institutions choose to adopt an AI system, ethicists should be part of the conversation from the beginning.
“Ethicists should do what we do best — keep asking questions,” Klugman recommends. “Ethicists are experts at asking difficult questions and providing different perspectives.”
Klugman says ethicists should ask these questions of hospital leaders:
- Why was this AI tool chosen?
- How was this AI tool developed?
- How will the AI tool be used and monitored in the clinical environment?
“AI ethics is a whole new field that is growing quickly,” Klugman observes. “These systems have the potential for benefit in terms of diagnosis and treating patients. But guidelines around their use are still developing.”
In light of this, Klugman suggests ethicists encourage hospitals to create an AI oversight group. This could function similarly to a Data and Safety Monitoring Board as a way to review the system’s benefits and harms. A complicating factor is that AI companies do not want to share, or in many cases do not know, how the tools make decisions.
Ideally, clinicians should gather information on the demographics of the patient records used to train the model to ascertain whether they reflect the hospital’s patient population. “The AI should be able to explain how it came to its decisions,” Klugman adds.
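To make that demographic audit concrete, here is a minimal sketch, assuming hypothetical count data: it compares the mix of a model’s training records against a hospital’s own population using a chi-square test. Every category, count, and variable name is invented for illustration; nothing here comes from Michelson’s or Klugman’s work or from any specific AI product.

```python
# Hedged sketch: does the training data mirror the hospital population?
# All counts are invented; a real audit would pull these from the vendor's
# model documentation and the hospital's own records.
import pandas as pd
from scipy.stats import chisquare

training_counts = pd.Series(
    {"White": 7200, "Black": 1100, "Hispanic": 900, "Asian": 500, "Other": 300}
)
hospital_counts = pd.Series(
    {"White": 3100, "Black": 2600, "Hispanic": 2900, "Asian": 700, "Other": 700}
)

# Expected counts if the training data matched the hospital population.
expected = hospital_counts / hospital_counts.sum() * training_counts.sum()

stat, p_value = chisquare(f_obs=training_counts, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.3g}")

# Side-by-side proportions make any mismatch easy to show colleagues.
comparison = pd.DataFrame({
    "training_%": (training_counts / training_counts.sum() * 100).round(1),
    "hospital_%": (hospital_counts / hospital_counts.sum() * 100).round(1),
})
print(comparison)
```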
It is not enough to make statements like “This is the recommendation of the AI tool” or “There is an 85% probability of X outcome, according to the AI tool.” Clinicians should be able to ask how the AI tool reached those conclusions. Blindly accepting what the AI recommends is ethically problematic.
“We ask clinicians to explain how they came to their differential diagnoses and treatment plans,” Klugman says. “We should ask no less of the AI.”
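As one hedged illustration of what “asking the AI how it got there” can mean technically, the sketch below trains a stand-in risk classifier on synthetic data and uses permutation importance to surface which inputs actually drive its predictions. The feature names, model, and data are all assumptions for the example; a deployed PICU tool would need to support this kind of inspection, which vendors do not always provide.

```python
# Hedged sketch: interrogate a prediction rather than accept "85%" at face
# value. The model and data are synthetic stand-ins, not a real PICU tool.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical input variables a mortality model might use.
feature_names = ["lactate", "heart_rate", "gcs", "age_months", "platelets"]
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The headline number: predicted risk for one (synthetic) patient.
risk = model.predict_proba(X_test[:1])[0, 1]
print(f"Predicted risk: {risk:.0%}")

# The follow-up question: shuffle each input and measure how much the
# model's accuracy drops. Large drops mark the inputs it leans on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>12}: {score:.3f}")
```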
REFERENCE
1. Michelson KN, Klugman CM, Kho AN, Gerke S. Ethical considerations related to using machine learning-based prediction of mortality in the pediatric intensive care unit. J Pediatr 2022;Jan 14:S0022-3476(21)01276-2. doi:10.1016/j.jpeds.2021.12.069. [Online ahead of print].