Ethics Services Want to Know How Consult Data Compare Across Hospitals
Ethics services often struggle to obtain data to improve the quality of consults even at their own hospitals, let alone outside institutions. Yet some ethicists are forging ahead with this challenging proposition. “We are trying to move the bar on what we can assess, and move from the qualitative to the quantitative,” says Thomas V. Cunningham, PhD, MA, MS, bioethics director of Kaiser Permanente’s Southern California region.
Quantifiable data usually are limited in scope (e.g., the number of consults performed annually). Unlike documentation in clinical areas, ethics documentation leans heavily on a narrative approach. “What we are trying to do is take that same approach, but then quantify things,” Cunningham explains.
Many ethics services are seeing a sustained growth in volume. “It then becomes harder to measure and assess the way we used to,” Cunningham notes.
If a service conducts about a dozen consults a year, roughly half of them highly complex, it is feasible to glean some insight into trends from narrative charting. “But when you are doing 120 consults, you have to create new ways of measuring to capture the quality, and assess the quality, and improve. We need some new methods for that,” Cunningham explains.
Kelly Armstrong, PhD, director of clinical and organizational ethics at Inova Health System, developed the Armstrong Clinical Ethics Coding System (ACECS) tool, a standardized approach to data-gathering on ethics consults. The challenge was to sufficiently differentiate between different types of cases that involve the same theme. “Not all informed consent cases are the same,” Armstrong observes.
Informed consent cases range from an adolescent partnering in her own medical decision-making about a birth plan, to a person with questionable capacity because of a mental illness, to a substitute decision-maker making questionable choices for a dying patient. “The coding system uses common definitions and avoids bioethics-specific language or specific medical technology or procedures. It can be used in any healthcare setting,” Armstrong reports.
Armstrong hoped that when the codes were paired with other metrics, it would allow ethicists to observe institutional and cross-institutional trends. To find out, ethicists analyzed data on 703 cases over a two-year period at two academic medical centers, both of which used the ACECS tool.1 The researchers concluded that ethics consults can be compared across institutions effectively, and that such comparison can be used to improve quality. “The approach uses some advanced statistical methods that usually aren’t applied to ethics consultation,” Cunningham notes.
Researchers wanted to go beyond just comparing individual ethicists; instead, they compared ethics services at two different health systems. “That has not been done before,” Cunningham says.
The same approach could be used to compare ethics services in hospitals nationwide. “Once you can do two hospitals, maybe you can do the whole region. Or maybe you can do all major academic medical centers with a similar population,” Cunningham offers. “We are trying to lay the groundwork for doing that in the future.”
Currently, there is no standard way of collecting ethics consult data. “There’s a lot of provincialism. They count different things because that’s the way they started doing it,” Cunningham reports.2
Armstrong wanted to create a shared language that was relevant not only to ethicists, but also to providers and administration. “Some hospitals may see only one case in years, but could look at the database and see that Hospital X sees multiple cases every year,” Armstrong says.
To compare data, though, ethics services have to document in the same way. “Once we know we are using the same method, we can compare how we are doing, whether we are seeing different things, or whether we are seeing the same things,” Cunningham explains. “It gives you a yardstick to assess yourself against external [data].”
This gives ethics consult services more insight into how they can improve, whether in terms of quality, volume, or both. It also allows them to advocate for more resources if they learn another hospital has higher volume or better quality. Then, once ethics services obtain additional resources (e.g., another full-time ethicist), the comparison data can show whether it paid off. “You invest in some personnel resources, and see if you grow the way you anticipated,” Cunningham says. “Now, you can look outside of yourself, at another institution, to ask those questions.”
Ethicists have relied on methods such as Excel spreadsheets to track consults. “You can do it that way, but it’s inefficient,” Cunningham notes. “It limits your ability to compare yourself to other institutions because they are not doing it the same way.”
Kaiser Permanente is adopting the ACECS tool in its Southern California region’s 13 hospitals, which conduct around 1,200 consults a year. The tool was piloted at the West Los Angeles Medical Center for two years before it was adopted across the region. Ethicists will use the data to compare different hospitals within the system.
For instance, one hospital might conduct two consults every year for unrepresented patients, while another facility with similar volume might perform eight. “You can use comparison data to find the reason for the discrepancy,” Cunningham explains. It could be that ethicists are coding consults differently, that some ethicists round in the ICU more often, that one of the hospitals sees more homeless patients who are more likely to be unrepresented, or something else entirely.
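As a rough illustration of how standardized coding makes this kind of comparison possible, the minimal Python sketch below tallies consult records by code for two hospitals. The record layout, the code label “unrepresented_patient,” and the counts are invented for illustration; they are not the actual ACECS codes or Kaiser Permanente data.

```python
from collections import Counter

# Hypothetical, simplified consult records: one entry per coded consult.
# The code labels are illustrative only, not actual ACECS codes.
consults = [
    {"hospital": "Hospital A", "code": "unrepresented_patient"},
    {"hospital": "Hospital A", "code": "informed_consent"},
    {"hospital": "Hospital A", "code": "unrepresented_patient"},
    {"hospital": "Hospital B", "code": "unrepresented_patient"},
    # ... additional records would be added here ...
]

# Tally consults by code for each hospital so discrepancies stand out.
tallies = {}
for record in consults:
    tallies.setdefault(record["hospital"], Counter())[record["code"]] += 1

# Compare the two services on a single code of interest.
code = "unrepresented_patient"
for hospital, counts in sorted(tallies.items()):
    print(f"{hospital}: {counts[code]} consults coded '{code}'")
```

A shared coding scheme is what makes the tally meaningful: if each service labeled these cases differently, the counts could not be compared directly, and the discrepancy Cunningham describes would be invisible.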
Data collected on ethics consults also can be used to support efforts to improve quality of clinical care. “The goal is to say, ‘our metrics align with yours. You are looking at stroke, or length of stay; we can look at it, too,’” Cunningham says.
For example, clinical areas often track time frames from ED admission to inpatient beds for stroke patients. Ethicists can offer more nuanced insights on this quality metric. There might be a connection between a long length of stay and the chances of conflict between the family and the clinical team. “That is the kind of thing our coding system can do,” Cunningham says. “We are hoping to align our data with clinical measures and use that to create conversations with the clinical team.”
REFERENCES
- Harris KW, Cunningham TV, Hester DM, et al. Comparison is not a zero-sum game: Exploring advanced measures of healthcare ethics consultation. AJOB Empir Bioeth 2020;Nov 20:1-14.
- deSante-Bertkau JE, McGowan ML, Antommaria AHM. Systematic review of typologies used to characterize clinical ethics consultations. J Clin Ethics 2018;29:291-304.