Benchmarking and safety: Natural fit if you know what to do with data
Misreading results, using wrong benchmarks is a formula for failure
Given the steady drumbeat for improving patient safety from diverse corners of the QI world, it’s only logical for quality professionals to use all the tools at their disposal — and that includes benchmarking.
However, experts warn, while benchmarking can prove extremely valuable in your efforts to boost patient safety, those efforts can be for naught if you aren’t careful about your decisions concerning what to benchmark, what your goals are, and how you interpret your data.
"People respond better when they have a goal, and physicians are notoriously good at goal seeking," notes Stephen Lawless, MD, MBA, chief knowledge officer for Wilmington, DE-based Nemours. "If you do not give them something to go for, what’s the impetus to change? The real question is: What do you benchmark against — the overall average or an idealized goal — and what should that goal be?"
"Are safety and benchmarking a fit? Yes, absolutely," says Ann Nakamoto, JD, MSN, a quality improvement manager with Children’s Regional Medical Center in Seattle. "More so because we now look at health care on a national basis; and given that we’re trying to learn from each other and share learning, I think it’s critical to lift the level of patient safety."
"From a consultant’s standpoint, absolutely," says Sharon Lau, a consultant with Medical Management Planning (MMP) in Los Angeles. "If you don’t have a comparison group, how do you know you’re doing it right? You can know your own internal trends and if you’re getting better, but you’ve still got to have some kind of mark out there in the world to know if you’re in the ballpark."
But not everyone is sure. "I’m strongly in the maybe camp. I think there’s potential value, but I have real reservations based on what’s currently available," notes Matthew Scanlon, MD, assistant professor of pediatric critical care at the Medical College of Wisconsin and patient safety officer at Children’s Hospital of Wisconsin, Milwaukee.
The challenge in benchmarking for safety is not so much the benchmarking process itself as it is the comparative tools available, observers say. "You can benchmark anything in patient safety as long as you can measure it," Lau says. "The difficulty comes in finding an appropriate measuring scale. How you classify some patient safety issues can be challenging."
In the case of errors, for example, "we are very lucky the Institute for Safe Medication Practices has a national rating scale we have been using for years," she notes.
"I know that some areas have benchmarks in place, like in infection control, the NGCPR [National Group on Cardiopulmonary Resuscitation], and several others, including MMP," adds Nakamoto.
"All these groups move toward developing and further enhancing databases working in that direction. The Joint Commission [on Accreditation of Healthcare Organizations], ORYX, and CMS’ [the Centers for Medicare & Medicaid] core measures are moving on a national basis to identify benchmarks and to establish a common language on how to boost patient safety," she says.
Nakamoto adds a word of caution, however. "I believe that as we all move our efforts toward achieving this goal, we need to find our common definitions. NGCPR, for the medication groups nationally, for example, has classifications of injuries including close calls. I don’t see the other medical events having something similar to that, so we haven’t yet quantified these things on an agreed-upon basis."
Scanlon also presents a mixed picture. "When you look at benchmarking, the first question you ask is why are you doing this — for improvement or accountability — and, of course, how will those data be used? I think there’s a lot of value for improvement of patient safety, and those of us who are seriously interested in improving our organization would have value from a peer group to compare ourselves to — but right now that’s not possible."
Why is that? "Because of legal ramifications, discovery issues," he adds. "Are you opening yourself up to legal issues if you show a certain error rate?"
Even good benchmarks can present problems, Scanlon continues. "AHRQ’s [the Agency for Healthcare Research and Quality] quality indicators theoretically could be benchmarked against, but most people don’t have the sophistication to be able to compare apples to apples. Also, there are various versions of software available, and some people have been publishing papers using data that are outdated. If those data points are used to benchmark around, it could be problematic."
In addition, a number of databases do not adjust for severity of illness, he explains. "A lot of administrative screening databases use ICD-9 codes," he observes. "The problem with attribution of those is this: If you are a center that gets a lot of referrals, and the center that sends a patient to you contributed to the error but didn’t document it, you get credit for it even though you inherited it."
Data: The devil’s in the details
Even if you have decent benchmarks available, the way you approach the task and interpret the data can have a significant impact on your end results, experts agree.
For example, Lawless notes, how high you aim is a critical consideration. "Say the average compliance rating is 96%, and your goal is to reach 98%," he poses. "People usually benchmark against the average, and they get the average."
How do you set those higher goals? "You search very heavily and set parameters," Lawless says. "As a group, we concentrate a lot on outcomes, but not on processes and structures. The tough search piece is finding those pieces of outcomes that measure real change."
For example, at one time length of stay (LOS) was a great benchmark, he explains. "Later on we said, ‘So what if you have 4.6 vs. 4.2?’ The 4.6s tried to get their LOS down to 4.2, but it did not impact outcomes. So what is the ideal LOS with minimum cost and maximum satisfaction? It’s very complex; it has to make a difference to someone."
Data also can be deceiving, observers note, and the way that data are presented can actually penalize institutions that are doing a good job.
"Look at the accountability required by people like The Leapfrog Group and the Joint Commission," Scanlon offers.
"The people with the most to lose are those who do the best job trying to deal with errors. The leading organizations in patient safety will be punished because they are open — whether that results in Leapfrog refusing to pay them or the newspaper lambasting them for a high error rate, or attorneys coming after them."
Lau concedes this can be a real problem. "You have to interpret your data appropriately and carefully," she warns.
She recalls recently doing some charts for a hospital on unplanned returns.
"The control charts showed this hospital was out of control for two data points for two months," Lau notes. "But on all the other months, it had a baseline of zero, and this time it had one. You have to make sure you interpret your control chart, or whatever method you use, very carefully."
In patient safety, she notes, a lot of hospitals can do well and then have one issue, and the data will show a spike. "You may not want to reformat all your processes based on that," she says.
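To see how a single event can trip a control chart when the baseline sits near zero, consider the minimal sketch below. It assumes a simple count-based c-chart with hypothetical monthly figures; it is an illustration of the arithmetic only, not MMP’s charting method or the hospital’s actual data.

```python
import math

# Hypothetical sketch of a c-chart (count chart); illustrative numbers only.
baseline = [0] * 10          # ten baseline months with zero unplanned returns
recent = [1, 1]              # two later months with one event each

# c-chart limits from the baseline: center line is the mean count; the
# 3-sigma limits use the Poisson assumption (sigma = sqrt(mean)).
c_bar = sum(baseline) / len(baseline)
ucl = c_bar + 3 * math.sqrt(c_bar)
print(f"center line = {c_bar:.2f}, upper control limit = {ucl:.2f}")

# With an all-zero baseline the upper limit collapses to zero, so even a
# single event in a later month is flagged as a special-cause signal -- the
# kind of spike Lau cautions against over-reacting to.
for month, count in enumerate(recent, start=len(baseline) + 1):
    status = "flagged" if count > ucl else "within limits"
    print(f"month {month}: {count} event(s) -> {status}")
```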
On the other hand, just because your error rate is low and your performance looks good doesn’t automatically mean this really is a good thing, Lau warns.
"It may simply indicate your processes for capturing those errors may not be that good," she asserts. "If, for example, you rely on incident reports, written or phone, you may not be gathering accurate results and may look far better than you are."
On the other side of the fence, she says, if you are really high on an indicator or a trend, it might mean you are doing worse than similar organizations, but it also could just mean you are capturing more data. "We are always suspicious of hospitals showing very low numbers on medical errors," Lau points out. "We know there are medical errors."
How do you know the difference? "You have to know your process, flow chart it, work with it, understand how errors get reported, and what your method is," she advises. "We also do it through networking and inter-rater reliability questions from benchmarking."
Learn not to settle
For those seeking to successfully employ benchmarking to help improve patient safety, there is one strategy Lawless values over all others. "The trick is not just settling for average," he asserts.
It is for that reason that he has adopted a Six Sigma-type approach in this area.
"We implemented a Six Sigma-like program, accepting only three errors per million," Lawless reports.
"By doing that over the last couple of years, we have gotten our critical error rate [giving medications that could have caused major harm] down to a Six Sigma level — maybe two or three a year. We are getting real close; next year, we should be at the point where no patients received a medication that could have caused major harm. But the only way to get there is to not accept the [average] benchmark," he adds.
Experts say there are a number of different areas where benchmarking for patient safety improvement can be enhanced. For example, Scanlon notes, data sources are inadequate.
"There are very little good data around patient safety," he complains. "What often is reported are voluntary report data; if you look at incident reports or voluntary reports, it’s meaningless; you don’t know what the true rate is."
Are mandatory reports the answer? "That’s naïve," Scanlon asserts. "It’s like telling me I have to drive the speed limit; most people don’t."
The federal aviation system has an interesting model to try to force reporting, he notes. "They’ve shifted the carrot and the stick; if you report an error, there’s little chance of being punished. The whole issue of reporting is important; that’s where the lion’s share of the data comes from."
There are, in fact, a number of well-written articles in the literature about what’s necessary for good quality and safety measures, Scanlon insists. "They are evidence-based, easy to collect, and severity adjusted when they need to be," he says. "But that kind of discussion is not happening around safety."
"We have not yet looked at contributing causes," Nakamoto adds. "We need to employ tools like root-cause analysis to look at orders, for example."
Another key area is sustaining improvement. "Why do we have to experience the same mistakes as others in terms of unsustained improvements?" Nakamoto asks. "We all can get better through benchmarking; it will bring us all to a higher level of patient safety."
"Where safety falls down most of the time is in not using standard definitions," adds Lau. "The key is to always know your process. In benchmarking, each person has to feel comfortable with the other guy who is doing the same thing — comfortable, for example, that you are counting things the same way I do."
In addition, she says, frontline professionals such as nurses and pharmacy staff, who understand processes, tend to be uncomfortable with benchmarking patient safety issues because they don’t know if their executives will understand what the data show in the same way they do. "That’s why everyone needs to be educated," she explains.
Lawless sums up a successful approach: "Have a goal and someone with whom to compare your processes and structures. Ask yourself where you can make changes in those processes and structures that will make your outcomes better. Then, ask yourself where you can get even better."
Need More Information?
For more information, contact:
- Stephen Lawless, MD, MBA, Chief Knowledge Officer, Nemours, Wilmington, DE. Phone: (302) 651-6404. E-mail: [email protected].
- Matthew Scanlon, MD, Assistant Professor, Pediatric Critical Care, Medical College of Wisconsin, Patient Safety Officer, Children’s Hospital of Wisconsin, Milwaukee. Phone: (414) 266-2498. Fax: (414) 266-3563.
- Sharon Lau, Medical Management Planning Inc., BENCHmarking Effort for Networking Children’s Hospitals, 2049 Balmer Drive, Los Angeles, CA 90039. Phone: (323) 644-0056. Fax: (323) 644-0057. E-mail: [email protected].
- Ann Nakamoto, JD, MSN, Quality Manager, Children’s Hospital and Regional Medical Center, Seattle. Phone: (206) 987-1170. E-mail: [email protected].