By Stacey Kusterbeck
Surrogate decision-makers face a formidable task: making decisions based on the ethical principle of substituted judgment. “The idea is supposed to be, when these surrogates are making decisions, they are not supposed to choose what they want,” says David Wendler, MA, PhD, head of the section of research ethics in the Department of Bioethics at the National Institutes of Health Clinical Center. The surrogates instead must ask: What decision would the patient make?
“The basic idea is that patients are supposed to get to decide if they are treated and how,” says Wendler. Unfortunately, most hospitalized patients are not able to make their own decisions when treatment decisions must be made. “And when it comes to end-of-life decisions, estimates suggest that more than 75% of those decisions are made by somebody other than the patient. And those are really important decisions, such as whether to take somebody off a ventilator,” says Wendler.
Wendler estimates that about 20% of people have designated a surrogate decision-maker as part of advance care planning. In the remaining cases, clinicians follow a legal hierarchy and rely on the next of kin. Those individuals are asked to use the substituted judgment standard when making treatment decisions for patients who are incapacitated. “There [are] a lot of data that suggest that all of us are pretty bad at predicting the preferences of other people. Related to that, there [are] a fair amount of data that a lot of surrogates experience distress and burden,” says Wendler.1
Surrogates can end up feeling responsible for the patient’s death if they decide to withdraw or withhold treatment; on the other hand, they may feel guilty if the patient is put through burdensome medical care.
“Surrogates are often uncertain what their loved one would have wanted and feel a lot of stress over having to decide about what may be a life-or-death issue,” says Brian D. Earp, PhD, an associate director of the Yale-Hastings Program in Ethics and Health Policy.
Ethicists can facilitate ethical decision-making by ensuring that surrogates try to accurately predict what the patient would want (instead of interjecting the surrogates’ own values). “Loving surrogates are reluctant to let a patient die, even when they know the patient would want to die. Potential inheritance and medical expenses can motivate surrogates to let a patient die when the patient would not want to die. These surrogates are making unethical decisions that need to be detected and prevented,” says Walter Sinnott-Armstrong, PhD, Chauncey Stillman professor of practical ethics at Duke University’s Kenan Institute of Ethics.
Patient preference predictors (PPPs) can potentially help in all of these situations.2 Such a tool would draw on population-level data, plus basic information about the patient (largely demographic features), to predict what that particular patient would want.
Patient treatment preferences tend to correlate with age, gender, religion, and other factors. “For instance, a young patient is highly likely to think that several months of aggressive ICU (intensive care unit) care is worth it to get over an illness. If the patient is over 90 [years of age], fewer will think it’s worth it,” says Wendler. Wendler and colleagues built a preliminary tool that takes into account just a few characteristics of a patient. “And now what we’ve learned from the computer age is that you can make surprisingly accurate predictions from people by their social media, the movies they like, or the clothes they wear. The thought is, we can also build some of those things into the PPP — that is when you get more of an AI (artificial intelligence) version of it,” says Wendler. Ideally, an AI tool could factor in all available information about a person and predict what treatments that person would want in a given circumstance. “We don’t know whether or not it is going to work, because no one has built one and tested it out,” says Wendler.
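To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of the kind of population-level lookup Wendler describes. The age bands, the scenario label, and the cohort rates are invented for illustration and are not taken from the preliminary tool:

```python
# Minimal, hypothetical sketch of a population-level patient preference
# predictor (PPP). The strata and rates below are invented; a real tool
# would be fit to survey data on treatment preferences and would use
# more characteristics (gender, religion, and so on).
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    scenario: str  # e.g., "months_of_aggressive_icu_care"

# Fraction of surveyed people in each stratum who said the treatment
# would be worth it (hypothetical numbers).
COHORT_RATES = {
    ("under_50", "months_of_aggressive_icu_care"): 0.85,
    ("50_to_89", "months_of_aggressive_icu_care"): 0.60,
    ("90_plus", "months_of_aggressive_icu_care"): 0.25,
}

def age_band(age: int) -> str:
    if age < 50:
        return "under_50"
    return "50_to_89" if age < 90 else "90_plus"

def predict_preference(patient: Patient) -> float:
    """Estimated probability that this patient would want the treatment,
    based only on which population stratum they fall into."""
    return COHORT_RATES[(age_band(patient.age), patient.scenario)]

print(predict_preference(Patient(age=93,
                                 scenario="months_of_aggressive_icu_care")))
# -> 0.25: most surveyed people over 90 said it would not be worth it.
```

An AI version would replace this handful of strata with a model trained on far richer inputs, such as the social media and lifestyle signals Wendler mentions.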
However, if such a tool were more accurate than surrogates, patients would more often be treated in a way consistent with their own preferences. And if the tool gives surrogates more confidence about what the patient would have wanted, it could lift some of the burden from them. For instance, the clinician could inform the surrogate that the decision tool predicts the patient would not want to be put on dialysis, and that the clinical team plans to follow that prediction unless the surrogate has reason to believe the patient would want something else. “If the surrogate spoke with the patient and is confident they’d want the opposite, they could still direct the treatment. If not, the surrogate can rely on the computer tool instead of feeling completely responsible for the decision,” Wendler explains.
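In code, the deference-with-override workflow Wendler describes might look something like this sketch; the function, the threshold, and the probability values are assumptions made for illustration:

```python
# Hypothetical sketch of combining a PPP prediction with surrogate input:
# the tool's prediction sets the default plan, but a surrogate who is
# confident about the patient's wishes overrides it.
from typing import Optional

def recommended_plan(ppp_probability: float,
                     surrogate_override: Optional[bool],
                     threshold: float = 0.5) -> bool:
    """Return True if the treatment (e.g., dialysis) should be provided.

    ppp_probability: the tool's estimate that the patient would want it.
    surrogate_override: True/False if the surrogate is confident about
        the patient's wishes; None if the surrogate defers to the tool.
    """
    if surrogate_override is not None:
        return surrogate_override  # surrogate's knowledge takes precedence
    return ppp_probability >= threshold

# The tool predicts the patient would not want dialysis (p = 0.2) and the
# surrogate has no contrary knowledge, so the default is to withhold it.
assert recommended_plan(0.2, None) is False
# The surrogate spoke with the patient and is confident they would want it.
assert recommended_plan(0.2, True) is True
```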
Some surrogate decision-makers might be glad to have a resource available to take the pressure off having to decide on their own, even if the tool does not incorporate personalized information. However, others might think it disrespectful to choose for an individual based on what other people — meaning individuals in the general population who happen to share the patient’s demographic features — have said they would want in a similar situation.3 “Instead, they may feel that substituted judgments should be based on the specific beliefs, values, and preferences of the particular individual concerned,” says Earp.
Earp and colleagues currently are developing a personalized PPP (P4) tool.4 Rather than drawing on population-level data, the P4 would infer a patient’s preferences from that patient’s own previous treatment decisions. Its algorithm uses a type of AI (a large language model, similar to ChatGPT) to learn the unique preferences of an individual from data specific to that person, which could include the person’s electronic health records, transcriptions of discussions with clinicians, or even broader data such as social media posts. The model would be “fine-tuned” on data that the patient (or their surrogate) previously authorized for use in the P4. “The P4 is still mostly theoretical. At this point, we are working on a prototype that we hope to test for accuracy in the near future,” reports Earp.
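The prototype is not public, so the following is only a rough sketch of what “fine-tuning” a language model on patient-authorized text could look like, using the open-source Hugging Face libraries. The base model (gpt2), the sample texts, the prompt, and the output directory are all assumptions made for illustration, not details of the actual P4:

```python
# Rough, hypothetical sketch of the P4 idea: fine-tune a small causal
# language model on text the patient has authorized, then query it about
# a treatment scenario. All data and names here are invented.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

model_name = "gpt2"  # stand-in for whatever base LLM a real P4 would use
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Patient-authorized text: health-record notes, visit transcripts, posts.
patient_texts = [
    "I told my doctor I never want to be kept on a machine long-term.",
    "Quality of time with my family matters more to me than extra months.",
]

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=64)
    # Causal-LM objective; a real setup would mask padding in the labels.
    enc["labels"] = enc["input_ids"].copy()
    return enc

data = Dataset.from_dict({"text": patient_texts}).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="p4_model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()  # "fine-tunes" the model on the patient's own data

# Query the personalized model about a scenario (purely illustrative).
prompt = "If I could only survive on a long-term ventilator, I would want"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

A working system would need far more data, careful validation against the patient’s actual choices, and a way to turn free-text output into a calibrated prediction; the sketch shows only the shape of the pipeline.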
If programmed to predict the patient’s own treatment preferences, AI tools can potentially alleviate surrogate distress and help ensure that the patient’s wishes and values are respected. “The AI need not be followed blindly. But surrogates and doctors can consult the AI’s predictions and give them some weight in making treatment decisions,” offers Sinnott-Armstrong.
- Wendler D, Rid A. Systematic review: The effect on surrogates of making treatment decisions for others. Ann Intern Med 2011;154:336-346.
- Rid A, Wendler D. Use of a patient preference predictor to help make medical decisions for incapacitated patients. J Med Philos 2014;39:104-129.
- Jardas EJ, Wasserman D, Wendler D. Autonomy-based criticisms of the patient preference predictor. J Med Ethics 2022;48:304-310.
- Earp BD, Porsdam Mann S, Allen J, et al. A personalized patient preference predictor for substituted judgments in healthcare: Technically feasible and ethically desirable. Am J Bioeth 2024; Jan 16:1-14. doi: 10.1080/15265161.2023.2296402. [Online ahead of print].