By Stacey Kusterbeck
When a patient first arrives at an ED, there usually is little hard data available, which makes it very challenging for triage nurses to determine whether sepsis is a possibility. “To conclude that an ED patient has sepsis, generally a lot of data is needed, such as lab or imaging results and multiple vital signs. In the ER, you don’t have that information when people first walk in,” notes Justin Schrager, MD, MPH, an assistant professor of clinical emergency medicine at Indiana University School of Medicine.
ED clinicians have only basic demographic information and some vital signs to go on. Schrager and colleagues set out to analyze the ability of a machine learning-based tool to predict sepsis in the ED using only the information available at the point of triage.1 “There’s a lot of research in the machine learning world to predict all kinds of things in the hospital setting. What we did that was somewhat different was, we looked at it the way an ER nurse or ER doctor would look at it. ‘What’s going on with this person who just walked in the door?’” says Schrager.
The researchers analyzed 1,059,386 adult ED encounters and trained a model to predict sepsis based on the data available at the time of triage, including nursing notes. “If you asked an experienced ER nurse or doctor if they thought somebody who walked in the door had sepsis, you are going to get heterogeneity in their answers,” says Schrager. The clinician would factor in their own observations of how the person looked, along with other available data such as vital signs. “Above and beyond those data points, what we’ve found to be the most useful information is the unstructured data gathered by the triage nurses,” says Schrager. The researchers therefore included every piece of information available at the start of the ED visit, nursing notes among them.
“We are trying to solve a very difficult problem of trying to predict something right away that usually isn’t detected for several hours after arrival. In my opinion, the best way to do that is to lean on the clinical experience of the people who are actually visualizing the patient and checking their vitals, and the only way to do that is to use their words,” Schrager explains. The researchers included the unstructured text, whatever the triage nurse wrote about the patient, as an input to the model, in the same way a model would use heart rate or temperature.
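The study does not publish its pipeline, and its natural language processing model was likely far more sophisticated, but the general pattern is straightforward: convert the free-text triage note into numeric features and feed them to a classifier alongside the structured vitals. The sketch below is a minimal illustration of that pattern, assuming scikit-learn; the column names and data are entirely made up for illustration.

```python
# A minimal sketch (not the authors' implementation) of combining an
# unstructured triage note with structured vitals in one model: TF-IDF
# features from the note text plus numeric vital signs, fed to a
# logistic regression classifier. All names and data are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical triage-time data: one row per ED encounter.
df = pd.DataFrame({
    "triage_note": [
        "fever chills confused, nursing home resident",
        "twisted ankle playing soccer, no other complaints",
    ],
    "heart_rate": [118, 80],
    "temp_c": [39.1, 36.8],
    "sepsis": [1, 0],  # label assigned later in the visit
})

preprocess = ColumnTransformer([
    # Turn the free-text note into sparse TF-IDF features.
    ("note", TfidfVectorizer(ngram_range=(1, 2)), "triage_note"),
    # Pass the structured vitals through unchanged.
    ("vitals", "passthrough", ["heart_rate", "temp_c"]),
])

model = Pipeline([
    ("features", preprocess),
    ("clf", LogisticRegression(max_iter=1000)),
])

model.fit(df[["triage_note", "heart_rate", "temp_c"]], df["sepsis"])
print(model.predict_proba(df[["triage_note", "heart_rate", "temp_c"]])[:, 1])
```

Treating the note text as just another input column is what allows a model like this to run at triage time, before any lab results exist.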
Overall, sepsis occurred in 3.4% (35,318) of the ED encounters. The model accurately predicted sepsis in 76% of cases in which sepsis screening was not performed at triage, and in 97.5% of cases in which screening was performed.
“The model was extremely effective in finding sepsis in those patients, and it does so at a very early stage in the visit,” says Schrager. The real challenge is to identify patients at risk for decompensation or severe illness as early as possible, so that in the real world they are not left in the waiting room too long, or placed in a care area inappropriate for their level of sickness without the charge nurse being alerted that the patient could be sicker than they appear.
“At least in our testing, it did a pretty good job of finding septic patients that would have been missed,” says Schrager. The goal is to come up with solutions that are applicable in any ED, regardless of what triage system is used or who performs the triage. But just because the model appeared highly effective does not mean it will be used in EDs anytime soon. There are multiple obstacles for AI models that perform clinical decision support, including recent regulatory changes. “These have changed how this type of research could be translated into the real world,” says Schrager. “The main barrier has to do with explainability.” If an AI model tells an ED clinician to do something, the clinician needs to know what the recommendation is based on.
Traditionally, decision support tools for sepsis screening were based on certain criteria that all providers could plainly see, such as the patient’s age or white blood cell count. This allowed EPs to interpret the recommendations of the tools and decide how much weight to give those in their medical decision-making. “But a lot of machine learning models, even the ones that do clinical decision support — even the ones that have FDA approval and are used in health systems around the country — a lot of them do struggle with explainability,” says Schrager.
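For a simple linear model like the hypothetical sketch above, explainability can be as direct as reading out which words and vitals carry the most weight; deep learning models typically need post hoc tools (SHAP, for example) to approximate the same transparency. Continuing the earlier illustrative pipeline:

```python
# Continuing the hypothetical pipeline above: with TF-IDF plus logistic
# regression, each learned coefficient maps back to a readable feature
# (a word, a bigram, or a vital sign), so a clinician-facing tool could
# surface which inputs pushed a given prediction toward sepsis.
import numpy as np

feature_names = model.named_steps["features"].get_feature_names_out()
coefs = model.named_steps["clf"].coef_[0]

# Top five features most strongly associated with a sepsis prediction.
for i in np.argsort(coefs)[::-1][:5]:
    print(f"{feature_names[i]}: {coefs[i]:+.3f}")
```

This kind of inspectability is what lets a clinician weigh a tool’s recommendation rather than accept it blindly, which is the gap Schrager describes in many FDA-approved machine learning systems.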
Still, EDs are closer than ever to using AI tools like this one at the bedside. “If you are going to ask clinicians to use a tool to identify possible sepsis in the ED, it has to be something they can use at the time they need it, without waiting hours for lab results to come back,” says Schrager. “And you have to know what led the model to make its conclusion.”
1. Brann F, Sterling NW, Frisch SO, Schrager JD. Sepsis prediction at emergency department triage using natural language processing: Retrospective cohort study. JMIR AI 2024;3:e49784.