Publicly reported data are misleading, says study
Researchers find inconsistent ratings
Data were old, inconsistent, and incomplete. The same hospital was rated both best and worst for colon removal.
Those are two of the findings from a study that compared hospital ratings on six Internet sites for three common procedures: laparoscopic gallbladder removal, hernia repair, and colon removal.1
The researchers evaluated the Centers for Medicare & Medicaid Services (CMS) Hospital Compare site (www.hospitalcompare.hhs.gov), The Joint Commission's Quality Check site (www.qualitycheck.org), the Leapfrog Group's Hospital Quality and Safety Survey Results site (www.leapfroggroup.org), and three sites run by private companies.
A growing number of web sites rank hospitals, but because there is no standard method for calculating quality differences, the same hospital can receive different ratings from different sites.
The government and nonprofit sites were the most reliable for accessibility and data transparency. The private sites, however, scored best on appropriateness, because they compared surgical procedures using a combination of information that included patient outcomes.
Another problem was that data were at least one year old on all the sites tested, and many had data that were two or more years old.
The study demonstrates the wide array of quality information now available to patients about hospitals and surgical departments, but it also shows that this information can be highly inconsistent, says lead researcher Michael J. Leonardi, MD, a faculty member in the department of surgery at the David Geffen School of Medicine at the University of California, Los Angeles.
There is a bright spot, though, says Leonardi: "Hospital-based quality professionals should be encouraged that the quality measures they are working hard to implement, such as Medicare's Surgical Care Improvement Project, are actually being used to measure quality," he says. "The reality of public reporting may assist in encouraging compliance with such measures."
However, quality professionals are "in a difficult position" when it comes to the validity of publicly reported data, says Albert Wu, MD, MPH, a professor of health policy and management at Johns Hopkins Bloomberg School of Public Health in Baltimore.
Guidelines are needed
"This is not something they can do on their own," he says. "One thing they can do is make sure that they collect data using standardized definitions. And also, they need to lobby their state medical boards to agree upon and adopt standard definitions of the most important quality indicators."
"The drive to make quality data public and make it transparent is long overdue and much needed. But it's only going to be helpful if it's truthful," says Peter Pronovost, MD, PhD, medical director of the Johns Hopkins Center for Innovation in Quality Patient Care.
To clear up misleading comparisons, Pronovost and other experts at Johns Hopkins have developed guidelines to standardize hospital safety ratings. Without this standardization, safety problems may not be fixed and the public may be misled, he says.
The Johns Hopkins researchers adapted elements of the American Medical Association's Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice to create guidelines that hospitals can use to ensure validity and accuracy in patient safety reporting.2
The guide has been used successfully for years to help clinicians evaluate the validity and accuracy of research data. The same principles can be used to evaluate the validity and accuracy of the methods used by an institution to gauge patient safety, says Pronovost.
The guidelines address three key questions: Are the measures important? Are they valid? Are they useful to improve safety in health care organizations?
An assessment tool asks such questions as: Is the measure required by an external group or agency? Is the measure supported by empiric evidence or a consensus of experts? Do clinicians believe that improvement in performance on the measure will be associated with improved patient outcomes? Is the risk for selection bias minimized?
The problem with patient safety reports, such as those required by CMS and The Joint Commission, is that they are "snapshots" instead of long-term system analyses, says Pronovost.
Increasingly, hospitals are also using these "snapshots" as marketing tools, but this could be misleading to the public, according to another study by the Johns Hopkins researchers.3
Researchers found that one institution advertised on its web site that its rate of staph infection was zero, but it did not say how many people were sampled or whether the figure represented one month of results or a decade's worth. Another hospital reported that it saved 242 lives over 18 months, but the sample size, methods of risk adjustment, and a measure of precision for the mortality estimates were not given. A hospital's web site stated that 90% of pneumonia patients were screened and given pneumococcal vaccination, but on the same day, CMS' Hospital Compare site reported the statistic as 64%.
To more accurately assess patient safety, all the elements that "make up the big picture" are needed, says Pronovost.
"Health care workers and physicians need to make sure that the data that are put forth are indeed accurate and any biases are made transparent. And also, that it's not being used solely for marketing activities," says Pronovost.
[For more information, contact:
Michael J. Leonardi, MD, Department of Surgery, David Geffen School of Medicine, University of California at Los Angeles.
Albert Wu, MD, MPH, Professor, Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Hampton House 663, 615 North Wolfe St., Baltimore, MD 21205. Phone: (410) 955-6567. Fax: (410) 955-0470.]
References
- Leonardi MJ, McGory ML, Ko CY. Publicly available hospital comparison web sites: Determination of useful, valid, and appropriate information for comparing surgical quality. Arch Surg 2007; 142:863-869.
- Pronovost PJ, Berenholtz SM, Needham DM. A framework for health care organizations to develop and evaluate a safety scorecard. JAMA 2007; 298:2063-2065.
- Pronovost PJ, Miller M, Wachter RM. The GAAP in quality measurement and reporting. JAMA 2007; 298:1800-1802.