Care Improved by Providing Better Feedback to Hospitalists
Providing detailed feedback to hospitalists, including key quality metrics, can improve the quality of care they provide patients, according to the results of a program at a Wisconsin medical college.1
Quality metric scores and rank order lists can change hospitalist behavior, says Ankur Segon, MD, MPH, FACP, SFHM, associate professor of medicine and program director of the Hospital Medicine Fellowship at the Medical College of Wisconsin (MCW).
The effort began in 2017 when Segon undertook a significant restructuring of the section to address issues of quality, faculty development, and morale. He launched performance feedback packets for faculty, drawing on his previous work involving the effect of dynamic feedback, especially on Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) scores.
In an earlier study, Segon and colleagues administered questions similar to the HCAHPS questions to inpatients and entered the responses into a website they created. For nine months, the website generated a daily email to each hospitalist showing that clinician's own scores alongside comparative scores for the rest of the section.
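The mechanics of that comparative piece are simple to sketch. Below is a minimal, hypothetical illustration in Python, not the actual MCW system: it assumes made-up survey scores keyed by hospitalist and prints each clinician's own average next to the section average, the kind of daily comparison the emails delivered.

```python
from statistics import mean

# Hypothetical survey scores (0-100) per hospitalist; in practice these
# would come from the inpatient survey responses entered into the website.
scores = {
    "Hospitalist A": [78, 85, 90],
    "Hospitalist B": [92, 88],
    "Hospitalist C": [70, 75, 80, 72],
}

# The section-wide average across all responses serves as the comparator.
section_avg = mean(s for vals in scores.values() for s in vals)

# Compose one daily feedback line per hospitalist: own average vs. section.
for name, vals in scores.items():
    own_avg = mean(vals)
    print(f"{name}: your average {own_avg:.1f} | section average {section_avg:.1f}")
```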
“We saw an improvement in the questions from the survey of about 5% based on that study,” Segon says. “With HCAHPS, it is hard to move the scores, so any improvement is a really good improvement. I’m a believer in comparative dynamic feedback, as dynamic as you can make it, but it needs to be comparative to both goals and the threshold you’re shooting for and your peers.”
Clinicians are inherently competitive. When they see they are underperforming in comparison to peers, that can be a reflection point for how to do better. It also is positive reinforcement for top performers.
With that experience, Segon and colleagues created a feedback packet that MCW continues to use. All 46 hospitalist faculty receive a dashboard and rank order list every month.
MCW leaders supported the effort and provided necessary resources. They assigned a hospital data analyst who is familiar with the electronic health record (EHR) and other data systems and has some web design experience. The analyst spent about two weeks creating the initial algorithms and the process for mining the EHR for the necessary data. Now, it takes him about two hours each month to put together the feedback packets.
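As a rough illustration of the monthly roll-up involved, the sketch below aggregates hypothetical encounter-level records, the sort of rows an EHR reporting extract might yield, into per-hospitalist metrics. The field names and values are invented for illustration and do not reflect MCW's actual data model.

```python
from collections import defaultdict

# Hypothetical encounter-level extract; real fields would come from the
# EHR reporting database and differ by institution.
encounters = [
    {"hospitalist": "Dr. Lee", "los_days": 4.2, "readmit_30d": False},
    {"hospitalist": "Dr. Lee", "los_days": 2.8, "readmit_30d": True},
    {"hospitalist": "Dr. Smith", "los_days": 3.5, "readmit_30d": False},
]

# Roll encounters up into per-hospitalist monthly metrics.
totals = defaultdict(lambda: {"n": 0, "los_sum": 0.0, "readmits": 0})
for e in encounters:
    t = totals[e["hospitalist"]]
    t["n"] += 1
    t["los_sum"] += e["los_days"]
    t["readmits"] += e["readmit_30d"]

for name, t in totals.items():
    avg_los = t["los_sum"] / t["n"]
    readmit_rate = 100 * t["readmits"] / t["n"]
    print(f"{name}: avg LOS {avg_los:.1f} d, 30-day readmit {readmit_rate:.0f}%")
```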
Originally, the packet included two components: the dashboard and the rank order list. The dashboard shows how each doctor is performing on several quality metrics, with section comparisons and the goals for that metric. The rank order list shows where the physician is performing on each metric in comparison to peers.
Some identification is masked to protect underperforming providers, but each clinician can see where he or she stands in relation to everyone else. The data analyst is vital to creating the rank order list.
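A masked rank order list of this kind can be expressed in a few lines. The sketch below is a hypothetical illustration, not MCW's implementation: each viewer sees the full ranking on a metric, with peers' names replaced by anonymous labels so only the viewer's own position is identifiable.

```python
# Hypothetical masked rank order list: each clinician sees the whole ranking
# for a metric, but peers' names are replaced by anonymous labels.

def masked_rank_list(metric_values: dict[str, float], viewer: str,
                     higher_is_better: bool = True) -> list[str]:
    """Rank clinicians on a metric, revealing only the viewer's own name."""
    ranked = sorted(metric_values.items(), key=lambda kv: kv[1],
                    reverse=higher_is_better)
    lines = []
    for rank, (name, value) in enumerate(ranked, start=1):
        label = name if name == viewer else f"Peer {rank}"
        lines.append(f"{rank}. {label}: {value:.1f}")
    return lines

# Example: discharge summaries completed within 24 hours (%), invented data.
metric = {"Dr. Jones": 91.0, "Dr. Smith": 84.5, "Dr. Lee": 96.2}
print("\n".join(masked_rank_list(metric, viewer="Dr. Smith")))
```

In this sketch, masking by rank position rather than a stable pseudonym would also keep peers from tracking one another's trajectories month to month.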
“The dashboard was created through the department of medicine because it has a data support team. Their data person, who has a master’s in statistics, said it took her about a week or two to work through the algorithms that generate the dashboard,” Segon says. “Then, it takes her a couple hours a month to get the packet together. The actual putting together of the various elements of the packet — the rank order and the dashboard — is done by an administrative assistant who is in the division of internal medicine. It takes her about half a day every month.”
Spreading out the work among different staff members helped make the program manageable.
Improvement on Many Measures
Segon says the hospitalist feedback contributed to improvement in many quality measures, although the effect is hard to isolate because the hospital was simultaneously implementing other improvements, such as adding FTEs.
“It was more about creating a culture of efficient, high-quality work, and the feedback packets were one part of that, although I think it was a very important part,” Segon says. “It created a conversation around how we were doing around various quality metrics and gave us a launching pad to improve our metrics.”
MCW saw improvements in length of stay index, 30-day readmission rate, catheter-associated urinary tract infections, and central line-associated bloodstream infections. Scores also improved on the provider component of HCAHPS, attendance at care coordination rounds, the percentage of discharge orders placed by 10 a.m., and the percentage of discharge summaries completed within 24 hours. However, overall HCAHPS scores declined during the period studied with the feedback packets.
“I think because the feedback packets were focused more on providers — and we saw an improvement there, while the overall HCAHPS scores declined — there is a little bit of a story there regarding the effectiveness of what we did,” Segon says. “It was pretty much an across-the-board improvement when you look at all the metrics. The improvement in the process type metrics was more than in the systems metrics. That ties into how there are certain things that are under the control of the hospitalists, like putting in early discharge orders, more so than in improving HCAHPS scores, which is more of a team-based effort.”
Not a Standalone Project
To replicate these results, Segon suggests it is important to make the hospitalist feedback packet part of a broader quality improvement effort that includes making sure clinicians are seeing the right number of patients. He does not recommend it as a standalone project. “You have to make sure they are not understaffed and take away some of the non-value-added tasks from their workflow. You are improving the opportunity for the faculty to improve their engagement. Then, you can bring in tactics related to improving their performance and quality,” Segon says.
Peer support and feedback are important, too. MCW leaders discussed high performers in quality metrics at quarterly section meetings, using that as a moment of celebration and an opportunity to share with the group any insight into how top performers achieve high scores on the metrics.
“You also want to be very clear about what metrics are more systems-based and which ones are more individual, hospitalist-based. Hospitalists can’t change length of stay on their own,” Segon says. “I was very up front about that, explaining that they would not be expected to improve something like readmission rates for the section. But they have a role to play in moving that metric in the right direction by some of the actions they can control. That messaging was very important.”
Segon does not advise tying any significant portion of a clinician’s compensation to performance-based metrics. It may be reasonable to designate a token portion of compensation tied to quality metrics, but no more. “It can be reasonable to have group-based incentives tied to metrics like that because that is a way to empower people,” Segon suggests.
REFERENCE
1. Becker B, Nagavally S, Wagner N, et al. Creating a culture of quality: Our experience with providing feedback to frontline hospitalists. BMJ Open Qual 2021;10:e001141.
SOURCE
- Ankur Segon, MD, MPH, FACP, SFHM, Associate Professor, Medicine; Program Director, Hospital Medicine Fellowship, Medical College of Wisconsin, Milwaukee. Email: [email protected].