Evaluation of a Consumer-Oriented Internet Health Care Report Card
Abstract & Commentary
Synopsis: Consumer-oriented rating systems of hospitals may successfully categorize groups of hospitals but do not accurately reflect differences among individual hospitals.
Source: Krumholz HM, et al. JAMA. 2002;287:1277-1287; 1323-1325.
Many different organizations, including for-profit web-based companies, have attempted to develop "report cards" that compare the "quality of care" of hospitals. Little has previously been published about the accuracy of such ratings. Krumholz and associates closely examine one rating system (HealthGrades.com) for one disease, acute myocardial infarction (AMI). HealthGrades.com places hospitals that treat a given disease, in this case AMI, into rating groups. At the time this article was prepared, the company used a 5-star rating system that has since been modified; 5-star hospitals were reported to be the best and 1-star hospitals the worst. The company uses publicly available Medicare administrative data and a proprietary (never publicly revealed) algorithm to arrive at its ratings. It determined each hospital's category by comparing a "predicted mortality rate," derived from demographic, clinical, and procedural information, to the "observed mortality rate." The 1-star and 5-star ratings were assigned to the "worst" and "best" hospitals, respectively.
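The proprietary algorithm itself has never been published, but the generic observed-versus-expected logic the article describes can be sketched in a few lines. The cutoffs and numbers below are invented purely for illustration and do not represent HealthGrades.com's actual method:

```python
# A hypothetical sketch of generic observed-vs.-expected risk adjustment.
# HealthGrades.com's real algorithm is proprietary; the 0.8/1.2 cutoffs
# and the example numbers are invented for illustration only.

def star_rating(observed_deaths: int, expected_deaths: float) -> int:
    """Assign a star rating from an observed/expected (O/E) mortality ratio."""
    oe_ratio = observed_deaths / expected_deaths
    if oe_ratio < 0.8:   # notably fewer deaths than the risk model predicts
        return 5         # "best"
    if oe_ratio > 1.2:   # notably more deaths than predicted
        return 1         # "worst"
    return 3             # roughly as expected

# Example: 30 observed AMI deaths against 40 expected from a model
# built on demographic, clinical, and procedural data -> 5 stars.
print(star_rating(30, 40.0))
```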
For comparison, Krumholz et al used data that had been collected by the Cooperative Cardiovascular Project (CCP), a federally funded project that abstracted nationwide data from hospital charts for more than 200,000 AMI hospitalizations. Rather than using only mortality data, Krumholz et al used CCP data on 6 process-of-care measures: the use of reperfusion therapy, early aspirin use, early beta-blocker use, aspirin at discharge, beta-blockers at discharge, and ACE inhibitors at discharge.
Krumholz et al detailed the reasons for excluding a large percentage of the CCP cases, which had been abstracted at nearly 1500 hospitals, and described their analyses thoroughly.
Krumholz et al confirmed that the CCP data correlated, in a general way, with the 5-star ratings of HealthGrades.com. The only process-of-care differences between the highest- and lowest-rated hospitals were in the use of aspirin and beta-blockers at admission and at discharge. Likewise, in-hospital mortality was lower at 5-star hospitals than at 1-star hospitals in both adjusted and unadjusted analyses.
While Krumholz et al confirmed general differences among the hospital groupings, they also reported that most individual hospitals across all 5 groups could not be statistically distinguished from one another. Indeed, when 1-star and 5-star hospitals were compared, ". . . in 92.3% of comparisons, 1-star hospitals had a risk standardized mortality rate that was not statistically different than that of a 5-star hospital . . ." Nonetheless, the crude 30-day mortality rate was 23% for 1-star hospitals and 15.4% for 5-star hospitals.
Krumholz et al carefully note the strengths and weaknesses of their analyses. They point out that it is difficult to develop rating systems, and that all are subject to error because they have only a given set of data to review. Overall, they remain somewhat negative concerning the public value of rating systems because "the public . . . often becomes focused on identifying single 'winners' and 'losers' rather than using these data to inform quality improvement efforts."
In an entertaining accompanying editorial, Dr. Naylor first provides a fictitious comic introduction, then briefly discusses the physician's changing role in discussing disease and treatment with patients, reprises the findings of the Krumholz et al article, and finally (and, in my opinion, most importantly) discusses the fact that confidential (nonpublic) ratings of doctors and hospitals have resulted in some improvement in care.
Comment by Kenneth L. Noller, MD
I suspect that almost all of our readers have worked at a hospital that has either voluntarily or reluctantly participated in some type of quality assessment survey. Such surveys generally sound like a good idea to those who have not participated in them.
I have been involved with many of these surveys over the past dozen years or so and have continually worried about their accuracy. I have seen the results of hospital ratings based on patient comments published widely, and hospitals that scored poorly have suffered as a result. You might wonder why I am worried about the results of a poor patient-satisfaction survey becoming known. The reason is that I have observed manipulation of patients in an attempt to improve ratings. Some hospitals actually hire individuals to circulate among inpatients, remind them how good their care is, and ask them to note that "fact" when they receive their patient survey in the mail. That alone would invalidate the results.
However, it gets even worse. In the past few years I have seen hospitals, hospital administrators, and department chairs suffer because their hospital (for a specific service or a portion of a service) received a "statistically significant lower rating" than another hospital. On one occasion, this difference was the result of 21 patients at hospital A rating the service as excellent and only 20 doing so at hospital B. Hospital B was listed in the local paper as being rated "significantly" lower.
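Such a claim invites a quick sanity check. Here is a minimal sketch, assuming 100 returned surveys per hospital; the commentary reports only the counts of "excellent" ratings, so the denominators are invented for illustration:

```python
# Hypothetical check of the "21 vs. 20 excellent ratings" anecdote.
# The denominators (100 surveys per hospital) are assumed purely for
# illustration; the commentary does not report them.
from scipy.stats import fisher_exact

table = [
    [21, 100 - 21],  # hospital A: excellent vs. not excellent
    [20, 100 - 20],  # hospital B: excellent vs. not excellent
]
odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.2f}")  # far above 0.05: no evidence of a real difference
```

With denominators anywhere near this size, a one-response difference is statistical noise, which is precisely why such newspaper rankings deserve skepticism.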
I strongly recommend that you read this JAMA article and the accompanying editorial. While the disease chosen for the study is myocardial infarction, it could just as easily have been some OB/GYN malady. The importance of the article is that it carefully examines the problems with hospital rating systems. The accompanying editorial points out that it is not at all certain that such rating systems make any particular difference in the quality of care patients receive, and it suggests alternative methods that have been shown to be (more) useful. Unlike some articles that deal with complicated analyses, this one is quite easy to read and understand.
Dr. Noller is Professor and Chairman, Department of OB/GYN, Tufts University School of Medicine, Boston, Massachusetts.