New Tool Released for Investigating Diagnostic Errors
By Greg Freeman
EXECUTIVE SUMMARY
- A tool from the Agency for Healthcare Research and Quality provides a way to effectively collect and analyze data on diagnostic errors. The tool is free to use.
- Measure Dx provides multiple methods of data collection and analysis.
- The tool can be used by all types and sizes of healthcare organizations.
- Risk managers may lead the team in using the tool.
A new tool developed by the Agency for Healthcare Research and Quality (AHRQ) promises to help risk managers and quality improvement professionals analyze adverse events involving diagnostic errors, still one of the most challenging patient safety issues.
Measure Dx was developed to help healthcare organizations effectively identify and investigate diagnostic errors with a specific methodology. “As of now, reliable, valid, and usable measures of diagnostic safety are still under development,” AHRQ noted. “Still, simply identifying and analyzing diagnostic safety events is useful because the measurement process itself can bolster learning.”1
Measure Dx includes a guide with background and step-by-step instructions for developing, implementing, and sustaining ways to measure diagnostic safety.
Part I offers tactics for engaging healthcare professionals in the implementation of measurement and learning activities related to diagnostic errors. That section also addresses psychological safety by ensuring privacy and confidentiality. It also discusses compliance with HIPAA and related requirements.
Part II provides a self-assessment of current readiness to address diagnostic errors as well as guidance on how to choose among the four tactics to measure diagnostic safety.
Part III explains how to implement diagnostic safety measurement plans, including the identification and systematic analysis of cases involving diagnostic error. Part IV addresses how to review and analyze the resulting data and ways to make improvements that will reduce errors.
Four Strategies Offered
Measure Dx suggests four strategies for addressing diagnostic errors. Strategy A is based on information from cases identified and studied by the healthcare organization. Strategy B involves soliciting information on diagnostic safety from clinicians, while Strategy C relies on feedback from patients. Strategy D requires leveraging the organization’s electronic health record (EHR) to find previously unknown diagnostic safety events.
“Although a robust measurement program incorporates multiple strategies, most organizations new to this work should begin with only one and expand their portfolio of strategies over time,” according to the guide.
The development of Measure Dx was prompted by requests from the healthcare community about how to collect and analyze data related to diagnostic errors to improve patient safety, says Hardeep Singh, MD, MPH, chief of health policy, quality and informatics program at the Center for Innovations in Quality, Effectiveness and Safety at the Michael E. DeBakey Veterans Affairs Medical Center and Baylor College of Medicine in Houston. Singh and Andrea Bradford, PhD, assistant professor in medicine-gastroenterology at Baylor, developed the tool for AHRQ.
Measure Dx draws on several processes created to address diagnostic accuracy but provides a framework for individual organizations to select the right path and implement the four strategies most effectively.
“The goal is fairly broad. We’re not saying this is only for large health systems or smaller health systems,” Singh explains. “This is for anyone interested in diagnostic safety improvement. We think that the quality and safety officer, the risk manager, could be owner of this in terms of making it work at their organizations.”
Formalizing Analyses
Bradford notes the goal of the tool is to introduce some rigor and a systematic process to the collection and analysis of diagnostic accuracy data.
“Organizations that are interested in looking at their events through that diagnostic safety lens can do so using processes and tools that have already been developed, tested, and validated, not just in research settings but also in some early adopter types of organizations,” she says. “These tools were not just developed in a research setting. They have proof of concept in an operational setting.”
When the tool was pilot-tested at several organizations, participants noted the importance of Part I, which involves engaging people in the process of improving diagnostic accuracy. This includes encouraging the participation not only of staff and professionals who will need to contribute to the effort but also of those who may benefit from the process.
“The first step is thinking about who you might pull into this diagnostic safety initiative. It might be just a couple of people, or it might be a larger coalition,” Bradford notes. “In our experience, speaking to the people who are kind of pioneering this type of work, it starts as typically a small, grassroots effort, and then it grows as they learn new things, share it with others in the organization, and others get pulled in.” In smaller organizations, the effort might be led by only the risk manager, or a clinician with a particular interest in diagnostic accuracy.
It will be important for healthcare organizations to use the parts of the tool that are appropriate for their organization and not automatically try to jump into the more complex strategies. Risk managers should match the resources available to the strategy that fits best.
“We don’t anticipate that most organizations are going to use every single measurement strategy. The toolkit is modular, made for different kinds of organizations so that they can use what they already have,” Bradford explains. “We do not expect everyone using this toolkit to go and develop whole new systems to gather data about diagnostic safety events.”
Instead, anyone who is already collecting quality and safety data in any routine fashion — which should be almost everyone — can use the tool without taking on a new burden of data collection, Bradford says.
Can Use Available Data
Strategy A is intended to use data already available, such as quality and safety events tracked by risk management, mortality reviews, and events reviewed by quality and safety committees. Those events are already documented, and possibly investigated, for lessons learned, but Bradford notes incidents involving diagnostic accuracy are not always indexed as such.
“If you look at those events from a diagnostic safety lens, perhaps looking more upstream in the patient’s record, you might find new information that was not the initial focus of whatever initial investigation you did on that event,” she says. “You might find an improvement opportunity related to the diagnostic process.”
Measure Dx includes a case example for Strategy A in which the hospital reviewed all its mortalities to identify diagnostic opportunities. The hospital discovered previously unmeasured delayed or missed diagnoses, delayed recognition of severity of illness, and related issues, prompting “a pivot toward the broader strategy of learning from every patient experience, not just deaths.”
Singh and Bradford expect Strategy A to be the choice most widely implemented by hospitals. This was confirmed by the pilot participants.
Some May Have Options
Other organizations may have other data streams available, such as a robust clinician and staff reporting system that includes a hotline or website. In those cases, they may be able to modify those processes to analyze diagnostic issues more directly, Bradford notes.
For example, with Strategy B, which focuses on soliciting reports from clinicians and staff, Measure Dx recommends trying to obtain information on these events:
- Cases in which it took longer than expected to make a correct diagnosis, regardless of whether an adverse outcome resulted;
- Cases with potential problems related to the diagnostic process or decision-making;
- Any case that could serve as an example for teaching or learning about how to improve diagnoses;
- Cases in which a system factor interfered with the diagnostic process.
Some hospitals use a similar system for patient reports. Still others may employ advanced trigger tools and algorithms run through the EHR data warehouse to retrieve events in which a patient deteriorated unexpectedly or an expected test result or follow-up visit was missed.
“Regardless of how those events or anomalies of care come to your attention, Measure Dx provides a common set of tools and review processes that guide step-by-step analysis,” Bradford says. “It goes through how to look at a case through that diagnostic safety lens to identify whether there was an earlier opportunity for a correct and timely diagnosis.”
Measure Dx can be used for a retrospective analysis of previous events, or in the immediate aftermath of an event that may involve diagnostic accuracy.
“A current adverse event could be a good portal of entry into this,” Singh says. “You may use it for one case and realize how much information is available for review and analysis. Then, you could consider whether you should be doing this routinely for other cases, too.”
Measure Dx should not impose any additional burden on risk managers or others involved in addressing diagnostic accuracy. Rather, it offers a systematic and effective way to address a high-priority patient safety issue. Singh does not consider using Measure Dx to be additional work because organizations already should address diagnostic errors.
“It will be a similar amount of work as for any other patient safety improvement activity. Just like if you are trying to reduce your readmissions, hospital-acquired infections, falls, pressure ulcers — there is always work to be done,” Singh explains. “The difference is that most of those things have some external motivator, like CMS or The Joint Commission, telling you that you have to do that work. There is no one driving that for diagnostic errors, other than maybe malpractice carriers.”
Most pilot participants found they could incorporate Measure Dx into their existing framework for quality and safety. The tool can complement existing safety activities and reduce any duplication of effort.
“If you have an event and put it through your usual review process or root cause analysis, you may get different information than if you applied this strategy,” Bradford says. “One of the things teams can do is to be sensitive to what might differentiate a diagnostic safety event so they can drill down and start identifying contributing factors based on these known tools and frameworks.”
Also, Bradford notes the pilot participants became more proficient and faster with the review process as they continued using Measure Dx. The first cases took the longest to review, but the time required steadily diminished. Some pilot hospitals reported the first review took up to 90 minutes, but that time quickly fell, sometimes to half as long.
The tool was pilot-tested during the pandemic, so participants were already under a heavy workload, Bradford notes. Under those circumstances, the favorable reviews were especially noteworthy.
During testing, participants sometimes ran into difficulty with how to review identified cases. The process sometimes hit a bottleneck when the clinician reviewer was unfamiliar with a process that might be familiar to a risk manager or quality professional. As a result, the final tool provides more guidance on how to make sure the clinicians on the team have training on how to use some of the review tools.
“That’s when you’re opening up a case, using a systematic review tool to go through it item by item, and some people had questions about how best to use those tools,” Bradford explains. “One thing people should know is that a clinician who makes diagnoses and has that experience needs to be involved because these are complex events. You do need that clinician judgment to determine whether there was a missed opportunity.”
REFERENCE
- Agency for Healthcare Research and Quality. Measure Dx. 2022.
SOURCES
- Andrea Bradford, PhD, Assistant Professor, Medicine-Gastroenterology, Baylor College of Medicine, Houston. Email: [email protected].
- Hardeep Singh, MD, MPH, Chief, Health Policy, Quality and Informatics Program, Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center and Baylor College of Medicine, Houston. Email: [email protected].