Are we just teaching to the test?
December 1, 2014
Studies question value of quality measurement in current form
You can see the quote from the Agency for Healthcare Research and Quality on almost any Web page devoted to quality improvement: "Quality health care means doing the right thing at the right time in the right way for the right person and having the best possible results."
Getting there, obviously, requires effort, and part of that involves determining where you are and setting goals for where you want to be. Measuring and collecting data are part of that. But what if the data we are collecting aren't the things that will best get us to that right-place, right-time, right-person goal? What if all they do is get us to a place where we are doing better on a particular metric, which may or may not make a difference to actual patients? What if all that work is keeping healthcare from making the big changes that could take a real bite out of the tens of thousands of cases of unintended harm caused each year?
Several recently published papers are making that case and challenging stakeholders to do better. First up is a study from Elizabeth Howell, MD, MPP, and colleagues looking at obstetrical quality measures and their association with maternal and neonatal mortality and morbidity.1 The measures were early elective delivery between 37 and 39 weeks and Caesarean section rates.
The findings show that the rates for both of the measures varied widely by hospital in the New York area they considered, as did rates for complications. And the researchers found no relationship between the measures and morbidity and mortality.
Howell, an associate professor in the Department of Population Health Science and Policy and the Department of Obstetrics, Gynecology and Reproductive Science, and associate director of the Center for Health Equity and Community Engaged Research at the Icahn School of Medicine at Mount Sinai in New York City, says she was in no way surprised by the findings. "These measures look at only a small slice of deliveries, so there is a whole other set of moms and infants for whom these metrics are not important and to whom they are not linked. If you want to capture quality indicators for all deliveries, you are going to have to expand the array of measures you look at beyond these."
Looking at C-section and early delivery rates is a good thing, she says; reducing those rates is a valid and worthy goal. But "when we only have a few measures available in obstetrics (and early elective delivery is the only one used on Hospital Compare), you have to wonder if we are capturing what people want to know."
An editorial commenting on Howell's work2 added the recommendation to compare outcomes for patients who had early deliveries or C-sections with outcomes for patients who did not, which might give a better sense of what these metrics mean for the small slice of patients to whom they apply.
Expanding the data collected and reported is particularly important because delivery is one of the few areas of healthcare where consumers have a chance to do in-depth research of different hospitals and make a decision based on the information gleaned, she says. Someone with chest pain isn’t going to take time to look up the quality data on a hospital website. But a young couple newly pregnant? They probably will, she says, particularly because they have been raised in an electronic world and expect information to be available at the click of a mouse or through an app on their phone.
Yet there is a paucity of quality data available, Howell notes. Some states — New York is one of them — have expanded quality data they require hospitals to report. Not all of it is endorsed by the National Quality Forum (NQF), but it’s at least an improvement on what most states offer, she says.
"Obstetrics is one of those specialties that doesn't have the breadth and scope of measures that one might hope," says Peter Lindenauer, MD, MSc, medical director of clinical informatics at the Center for Quality of Care Research at Baystate Medical Center in Springfield, MA. "It could get a lot more traction than many other public reporting issues do just because you have young informed people who are used to using the Internet to make decisions."
In 2013, Lindenauer and some colleagues published a paper on infection rates among obstetric patients across hospitals.3 There were significant differences. He sees this as the kind of data that would be meaningful to patients and is readily available.
That study is an example of existing work that can be drawn upon to beef up the data available for parents-to-be, Howell says. Other recently published studies have focused on patient-centered care, which would address many more women and babies and would be relatively easy to include.
Howell also thinks there should be more emphasis on maternal outcomes — like the maternal infection rates Lindenauer studied — because severe maternal morbidity occurs more often than most people think, and a certain number of the occurrences are preventable. "Think about your hypertension and hemorrhage protocols," she says. "We have to do more work in those areas, but those are the kinds of things that in the future we should be reporting alongside what is already there."
The campaigns conducted in conjunction with C-section and early deliveries are wonderful, Howell says, and she’d like to see similar ones done with other maternal/child metrics. "One measure doesn’t cut it. We need to do similar things for other issues that occur in hospitals."
But she knows that will take time, and policy-makers at the state and federal level will have to see more research before they agree to more measures that will lead to standardization of labor and delivery. In the meantime, track what happens with every delivery where there is a complication. "How well do you take care of women in different situations? How often do these things occur, and what steps can you take to make sure they don’t happen again?"
It’s basic quality work, only it’s directed at a wider array of patients and for data points that are likely more meaningful because they relate directly to the health and well-being of mother and child, she says.
Public reporting impacts QI efforts
Lindenauer was part of a team that published a study in October that looked at how hospital leadership viewed publicly reported quality metrics and how those metrics influenced overall QI efforts.4 Among the results: For more than two-thirds of the respondents, what the public saw influenced overall QI efforts. However, fewer than half believed that the differences between hospitals reported in those public portals for mortality, readmission, cost, and volume measures had any significant clinical meaning. Differences in process and patient experience measures, however, were considered meaningful. About half of respondents also worried that by focusing on the publicly reported aspects of QI, they were missing other important opportunities to improve quality of care. The study also found that hospitals with better or worse than expected performance were more likely to include publicly reported metrics in their annual goals.
"These are goals that just about everyone believes are important," Lindenauer says. "It's not that they are the wrong measures to focus on. People think that patient experience is important and that heart attack patients should get their aspirin. But there is a concern that these measures keep a hospital's quality department from making its own choices about where to focus quality improvement efforts. They feel compelled to achieve high levels of improvement in these particular measures and put a large amount of resources into moving the needle on performance for them."
That needle may already be at 95% on a particular metric, but hospitals are putting time, money, and manpower into getting it to 98% or 99%, he says, even though those additional points offer a limited return on improving health and come at the expense of other measures, with some conditions neglected outright.
An accompanying editorial5 included several suggestions for change. Chief among them was getting clinicians "actively engaged" so that the "connection between measurement and improvement [is] ensured." Further, data collection should focus only on specific clinical questions. Payers should provide incentives for quality improvement, but with a degree of latitude to account for local conditions rather than national priorities. The editorial calls for programs that include paid, dedicated time for clinicians who participate in this QI, a separate budget, and IT support. And rather than being accountable for measures, hospitals should be accountable for actual accomplishments.
Lindenauer has his own recipe for ameliorating the problem. First, he’d like to see more measures, not fewer. "If you are measuring across a broad range of conditions and markers, it becomes less of a concern that you are teaching to the test and you don’t have to worry as much that you are missing something."
A more radical shift, and one that many are talking about, is to focus more on outcome measures than on process measures. Although the leaders responding to the survey Lindenauer and his colleagues wrote about said they viewed process measures as meaningful and outcome measures such as mortality as inadequate markers of whether one hospital is better than another, Lindenauer believes hospital-specific mortality rates are perhaps something to be considered again.
"Purchasers aren’t interested in buying processes," he says. "They are interested in outcomes. Patients are interested in outcomes. How you got there isn’t as important. Sure, it’s nice to have the map and to use those evidence-based processes, and for internal purposes, measuring processes is a good way to evaluate departments and to find gaps and opportunities to improve. But for payer-sponsored measurement programs? I think outcomes are better."
Lindenauer says that even at his own hospital, he can see how much focus is put on those publicly reported measures; there, they affect variable compensation for physicians. "You can see our attitudes, where we feel we have more control and less concern, and you can see how that plays out on a national scale." The data from his survey of hospital leaders suggest a recognition that these measures are less than perfect, and perhaps that is a bit of a salve to quality managers: They aren't alone in the fight to make sure that what hospitals are focusing on to improve quality is the right thing. He hopes that officials from the Centers for Medicare & Medicaid Services and the National Quality Forum will see his study. The latter organization has a group of 52 stakeholders, the National Priorities Partnership, whose job is to help set the measurement agenda. (The list of members is available at http://www.qualityforum.org/Setting_Priorities/NPP/NPP_Partner_Organizations.aspx.) Perhaps some of them will see his study, too.
Until that happens, he thinks the data from his study might provide good talking points for discussions between leadership and the quality department, perhaps to figure out whether energy should be directed somewhere that is being left out because of all the attention on publicly reported data.
For more information on this topic, contact:
- Peter Lindenauer, MD, MSc., Medical Director, Clinical Informatics, Center for Quality of Care Research, Baystate Medical Center, Springfield, MA. Email: [email protected].
- Elizabeth Howell, MD, MPP, Associate Professor, Department of Population Health Science & Policy, Department of Obstetrics, Gynecology, and Reproductive Science and Associate Director, Center for Health Equity and Community Engaged Research, Icahn School of Medicine at Mount Sinai, New York City. Email: [email protected].
References
- Howell EA, Zeitlin J, Hebert PL, et al. Association between hospital-level obstetric quality indicators and maternal and neonatal morbidity. JAMA. 2014;312(15):1531-1541.
- McGlynn EA, Adams JL. What makes a good quality measure? JAMA. 2014;312(15):1517-1518.
- Goff SL, Pekow PS, Avrunin J, et al. Patterns of obstetric infection rates in a large sample of US hospitals. Am J Obstet Gynecol. 2013;208(6):456.e1-13.
- Lindenauer PK, Lagu T, Ross JS, et al. Attitudes of hospital leaders toward publicly reported measures of health care quality [published online October 6, 2014]. JAMA Intern Med. doi:10.1001/jamainternmed.2014.5161.
- Goitein L. Virtual quality: the failure of public reporting and pay-for-performance programs [published online October 6, 2014]. JAMA Intern Med. doi:10.1001/jamainternmed.2014.3403.