The Joint Commission moving to link accountability to accreditation
TJC sets out to define 'accountability'
Accountability. It's an oft-heard word these days in health care. But just what is accountability? The Joint Commission is setting out to turn the buzzword into something more meaningful and definable, with distinct criteria. Those criteria will be used to develop new quality measures and improve existing ones, in part to prepare hospitals for the coming of value-based purchasing, when quality and reimbursement will be inextricably linked, and to promote measurable, genuine quality improvement.
The concept of accountability measures versus non-accountability measures was discussed in an article authored by The Joint Commission's president, Mark Chassin, MD, MPP, and its executive vice president, division of quality measurement and research, Jerod Loeb, PhD, as well as Stephen Schmaltz, PhD, and Robert Wachter, MD.1 (See article at http://www.nejm.org/doi/pdf/10.1056/NEJMsb1002320.) The article begins with a brief history of quality measurement and reporting, opening a dialogue about room for improvement in both areas.
The authors created four criteria for accountability measures addressing processes of care to improve outcomes. The criteria are:
- There is a strong evidence base showing that the care process leads to improved outcomes.
- The measure accurately captures whether the evidence-based care process has, in fact, been provided.
- The measure addresses a process that has few intervening care processes that must occur before the improved outcome is realized.
- Implementing the measure has little or no chance of inducing unintended adverse consequences.
Loeb told Hospital Peer Review last year that the organization was considering bucketing the core measures, and now, this paper seems to be setting the stage for that to happen.
"We wrote this paper with specific attention to the core measures, but it is a concept that probably could be applied to measures in general, not just a Joint Commission or CMS [Centers for Medicare & Medicaid Services] core measure. It's generic enough to have more widespread applicability," says Loeb. If a measure doesn't meet all four criteria, "we would argue that it ought not to be used for accountability, and by our definition, accountability means public reporting, pay for reporting, or pay-for-performance programs." So measures that don't pass muster against the criteria should not be used for value-based purchasing or for accreditation purposes, he says. The article suggests existing, as well as proposed, measures "be vetted" against these criteria, and those that do not meet all four be replaced with improved ones.
As an example of a non-accountability measure, Loeb points to the smoking cessation measure. "[T]he measure, we would argue, really fails to meet the accountability criterion that specifically relates to the issue of accuracy. That is, while the measure rates have certainly improved, we really don't know... whether or not that is translatable to improved health and health outcomes." He says what's happened over time is that the measure has become a check box on a chart denoting if a patient has been given smoking cessation advice in whatever manner the hospital is doing that, whether with a brochure or a discussion with a clinician. "But we don't really know how well that process has been carried out," says Loeb.
Clearly, he says, the measure has value and is the "right thing to do" as "we know there is a strong correlation between smoking and cardiovascular disease or pulmonary disease and so on. So we are not suggesting it is the wrong thing to do. What we are suggesting is that the measure, in and of itself, doesn't accurately capture the process of care, that is, the evidence-based process we are trying to get at... How do we get more clearly to the root of the problem? We need a measure that, in fact, really will address the ability of that process to be captured accurately or not."
Loeb addresses the long-running debate on the viability of process versus outcome measures in truly improving quality. "There is an interesting debate in health care, which has been around for as long as I can remember, about getting away from processes and shifting to outcomes. My answer to that is I don't disagree, but I think we need both. I think to make an educated decision, you need myriad information, including information about the patient's experience of care, information about utilization rates, information about clinical outcomes, and, of course, information about processes of care. You have to put all of that together and weave a portrait rather than saying outcome measures trump everything else and that's all we need."
He says that the criteria addressed in the report apply to process measures, and adds that The Joint Commission is working now on creating accountability criteria for "the outcome side of the equation." Clearly, though, the requirement for evidence-based background suggests that a process measure, if followed appropriately, would lead to a good outcome, he says.
The "problem" with outcome measures, he says, is that the "outcome is in the eyes of the beholder." For example, looking at the endpoint of mortality, he asks which mortality rate is most important. Are you looking at the rate for 30 days, 60 days, 90 days, one year, three years, five years? "The answer is that it's all of the above, but mortality isn't the only outcome. The other outcomes that come to mind are things like readmission rates, returns to the operating room. So it's sort of a myopic view to say, 'What about the outcome?' Firstly, if you have chosen good process measures that are inextricably linked to outcomes, you're getting at it in a more substantive and less subject-to-gaming way." He adds that outcome measurement requires risk adjustment and each patient brings his or her own comorbidities. For instance, a heart attack patient who is obese and has hypertension and diabetes and who has had three prior attacks is less likely to have a good outcome than a healthier patient. Another obstacle to measuring outcomes, Loeb says, is that if you ask 10 clinicians the most appropriate factors around which to risk-adjust, "you get 16 more opinions, and no one is really right and no one is really wrong." And risk-adjusting via billing data versus clinical information "are two vastly different things."
He says in Massachusetts there recently was "a big discussion" about whether to publicly report risk-adjusted mortality data. "Their state health commissioner ultimately decided against it on the basis of an expert panel... because they showed clearly that when you use different models, the very same hospital could be ranked significantly higher or significantly lower depending on the model that was used."
Reaction from the quality field
One quality leader questions whether any compliance organization can truly drive change and improvement. "In spite of [The Joint Commission's] attempts to 'market' themselves as 'quality leaders' (e.g., this New England Journal of Medicine article, which, while conveying important information, is actually a marketing piece authored by The Joint Commission, with Bob Wachter's collaboration), compliance organizations will never 'drive' provider organizational leaders to genuine excellence," says Martin D. Merry, MD, CM, health care quality consultant and associate professor of health management and policy at the University of New Hampshire, Exeter.
"At best, compliance organizations weed out the worst performers, help weak organizations get better, and can spur mediocre organizations to somewhat improved performance. General excellence is always driven by institutional leaders of true quality organizations (e.g., Mayo Clinic, Kaiser Permanente, Wisconsin's ThedaCare, Intermountain Health Care, and many others) who pay proper respect to compliance organizations, but are truly guided by their own 'good to great' internal quality management vision, mission, and management systems."
He does say that the criteria established in the NEJM article are "very good and represent a significant advance in 'fine tuning' quality indicators that can relate to outcomes with greater validity." He also lauds The Joint Commission and CMS in their drive to improve accountability, and the article for validating the "notion of external measures" comparing providers on a variety of measures. The criterion dealing with downstream intervening processes, he adds, "is a real advance; I've never seen that in print before."
But, he says, the article, "while appropriately citing commendable improvements in hospital quality measures, in my opinion does a disservice in implying that 90-95% compliance with a quality measure constitutes excellent performance... [E]ven 95% compliance equates to 50,000 defects per million opportunities, i.e., only slightly better than 3 Sigma quality, the quality measure of globally competitive organizations."
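The defect arithmetic Merry cites can be verified with a short sketch (the function names here are illustrative, not drawn from any source; the sigma-level conversion uses the conventional 1.5-sigma shift from the Six Sigma literature):

```python
# Check the claim: 95% compliance = 50,000 defects per million opportunities,
# which sits only slightly above the 3 Sigma level.
from statistics import NormalDist

def dpmo(compliance_rate: float) -> float:
    """Defects per million opportunities for a given compliance rate."""
    return (1.0 - compliance_rate) * 1_000_000

def sigma_level(defects_per_million: float) -> float:
    """Short-term sigma level, applying the conventional 1.5-sigma shift."""
    yield_fraction = 1.0 - defects_per_million / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + 1.5

defects = dpmo(0.95)             # 50,000 defects per million
sigma = sigma_level(defects)     # about 3.1, i.e., "only slightly better than 3 Sigma"
print(f"{defects:.0f} DPMO, roughly {sigma:.2f} sigma")
```

By comparison, the Six Sigma target used by globally competitive manufacturers corresponds to only 3.4 defects per million, which is the gap Merry is pointing to.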
He says one of his personal "'pet peeves' is that quality professionals still have to spend so much time on compliance issues that little is left over for true creativity and innovation... I hope to see the day when our industry matures to the stage that creativity/innovation replaces compliance as our quality/safety 'driver.'"
Patrice Spath, RHIT, of Brown-Spath & Associates in Forest Grove, OR, says while most organizations have not truly realized high reliability, "there are very encouraging signs of change" with providers placing more emphasis on quality, more use of comparative data, and greater teamwork and sharing of best practices among health care organizations.
She praises The Joint Commission for moving toward coordinating measure development, as "the politics of measure development and varying priorities among developers have caused the efforts to be siloed."
The bottom line, she says, is "the value of any measurement system hinges on results. Providers should not be focused just on gathering data or looking good in publicly available comparative reports. It's not just about gaining financial rewards when your organization's performance exceeds some externally defined threshold. The real value of an organization's measurement system is knowing where improvements are needed and acting on that information to improve patient care."
Loeb agrees that ultimately "the devil is in the details" on how the measures are applied and says the organization is now preparing to follow up on the article with more of a "how-to guide." He acknowledges that the literal application of this framework "ultimately is where the rubber is going to hit the road."
Reference
- Chassin MR, Loeb JM, Schmaltz SP, Wachter RM. Accountability measures: using measurement to promote quality improvement. N Engl J Med. 2010 Aug 12;363(7):683-8.