When Machine Learning Tools Are Used to Predict Suicide Risk
Predictive analytics are used for online shopping, weather forecasting, and infectious disease modeling. Recent advances allow the same technology to identify patients at high risk of suicide.
Larry Pruitt, PhD, recently co-authored a paper on clinical and ethical implications.1 “Predictive analytic and machine learning algorithms are powerful tools for identifying those at risk. But human behavior, especially when it comes to suicide, is exceedingly complex,” Pruitt notes.
In a previous paper, Pruitt and colleagues considered the low positive predictive value that machine learning algorithms offer when it comes to suicide, especially when trying to predict that outcome outside the immediate future.2 That first paper was a technical investigation into suicide prediction models and machine learning. The 2022 paper focused on real-world scenarios and on whether the technology actually works in practice. “The idea was to demonstrate just how difficult suicide is to predict, especially as we try to do so well before the moment of crisis,” says Pruitt, director for suicide prevention at VA Puget Sound Health Care System.
As advanced as these tools can be, they are prone to producing high rates of false positives (incorrectly concluding that someone is at risk for suicide when he or she is not) and false negatives (incorrectly concluding that someone is not at risk for suicide when he or she is).
“Because of this, these tools cannot be the only way that risk is evaluated, and should be paired with psychometrically sound risk assessment and strong clinical judgment,” Pruitt says.
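To see why this matters, consider why false positives dominate when the outcome being predicted is rare. The short Python sketch below works through the arithmetic; the sensitivity, specificity, and base rate are illustrative assumptions chosen for the example, not figures from the cited studies.

# Hypothetical screening model: 90% sensitivity, 90% specificity,
# applied where 1% of patients experience the outcome in the prediction window.
# All numbers are assumptions for illustration only.
sensitivity = 0.90   # P(flagged | at risk)
specificity = 0.90   # P(not flagged | not at risk)
base_rate = 0.01     # P(at risk) in the screened population

true_positives = sensitivity * base_rate
false_positives = (1 - specificity) * (1 - base_rate)

# Positive predictive value: of everyone the model flags, how many are truly at risk?
ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.1%}")

With these assumed numbers, only about 8% of flagged patients would truly be at risk, even though the hypothetical model is right 90% of the time in both directions; this is the low positive predictive value problem Pruitt and colleagues describe.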
Vast amounts of data are required for these tools to operate correctly, including deeply private information. Many individuals are not comfortable making such data available to others. “As such, there must be an enhancement to standard informed consent processes if these tools are to be applied,” Pruitt says.
Individuals must be able to opt out so as to protect autonomy and privacy. “Suicide risk is still a highly stigmatized label that can have very real consequences,” Pruitt notes.
Some tools assign individuals a high, moderate, or low risk label. Clinicians must educate anyone with access to these labels about what they mean and how that information should be used.
“We must also attend to the ways that the information that feeds these models can be biased based on existing inequities in healthcare,” Pruitt says.
Another ethical concern is that deploying predictive analytic tools likely will increase the number of patients identified as high risk within a given healthcare setting.
“We need to ensure that we have the infrastructure, including clinical staff, to support risk reduction interventions in a meaningful and expeditious manner,” Pruitt says.
REFERENCES
1. Luk JW, Pruitt LD, Smolenski DJ, et al. From everyday life predictions to suicide prevention: Clinical and ethical considerations in suicide predictive analytic tools. J Clin Psychol 2022;78:137-148.
2. Belsher BE, Smolenski DJ, Pruitt LD, et al. Prediction models for suicide attempts and deaths: A systematic review and simulation. JAMA Psychiatry 2019;76:642-651.