Agencies start TalkingQuality, give hospitals tools to educate public
But some still question its ability to aid patients
The Agency for Healthcare Research and Quality (AHRQ), the Centers for Medicare & Medicaid Services, and the U.S. Office of Personnel Management are banding together to help health systems and hospitals that want to provide consumers with quality report cards and other information regarding health care quality measurement. The web-based project, www.talkingquality.gov, was launched in late April.
Providing both guidance and real-world examples of quality measurement reports, the web site is supposed to give health plans, hospitals, and even providers some of the tools they need to create quality reports that consumers increasingly demand. The web site includes sections on what to say, how to say it, and how to get the information out to consumers. Examples come from public and private health care organizations that already have started to measure and distribute quality data to consumers.
There are sections on getting started on a quality project, collecting and analyzing data, presenting and disseminating information, providing ongoing support for a quality reporting effort, and evaluating the project. There also is a feature called the Planning Workbook, a downloadable file to help develop customized plans for presenting quality health care information. Throughout the site, there are icons that remind users when this feature may be useful.
The site was developed through the Quality Interagency Coordination (QuIC) Task Force, established to ensure that all Federal agencies with health care responsibilities are working in a coordinated way to improve the quality of care.
One of the initial goals of the project was simply to gather all of this information on how to create a quality measurement program in one place, explains Carla Zema, PhD, a Kerr White Visiting Scholar at AHRQ who worked on TalkingQuality.
So far, there has been some feedback from government agencies, which have had access to the site since December. Further reaction from others who use the site could lead to changes in the future, says Zema. For instance, there may be parts of the site that need more depth, and there is an acknowledged lack of information on giving consumers data at the provider level right now. "We hope we will be [updating] the site often," she notes.
Report cards the way to go?
But there are some who question whether quality report cards — particularly those that rely on simplistic measures such as mortality data — are giving consumers a skewed vision of how well a particular health care organization is doing or the quality of care they can provide.
In a study published in the March 13 edition of the Journal of the American Medical Association (JAMA),1 Harlan Krumholz, MD, an associate professor at Yale University School of Medicine, found that — at least for one particular Internet-based report card site that rated hospitals on acute myocardial infarction (AMI) care — the ratings "poorly discriminated between any two individual hospitals’ process of care or mortality rates during the study period."
Krumholz and his colleagues looked at information from the Cooperative Cardiovascular Project, which included more than 141,000 Medicare patients hospitalized with AMI at more than 3,300 hospitals between 1994 and 1996, and compared those outcomes to ratings from HealthGrades. Based in Lakewood, CO, HealthGrades provides health care quality information and services to consumers, hospitals, insurance companies, and other health care agencies. Among the measures the researchers examined were use of acute reperfusion therapy, aspirin, beta-blockers, and ACE inhibitors, as well as 30-day mortality rates.
Patients treated at higher-rated hospitals were significantly more likely to receive aspirin (on admission, 75.4% at HealthGrades five-star hospitals vs. 66.4% at one-star facilities; at discharge, 79.7% vs. 68.0%) and beta-blockers (on admission, 54.8% vs. 35.7%; at discharge, 63.3% vs. 52.1%). The same wasn’t true for ACE inhibitors, where there was only a 2.3% difference between five- and one-star facilities. Acute reperfusion therapy rates were highest for patients treated at two-star hospitals (60.6%) and lowest at five-star hospitals (53.6%).
Krumholz also noted substantial variation within each rating group: some five-star hospitals were markedly better than others.
"I am strongly committed to trying to improve accountability and want to develop indicators of quality," Krumholz says. "I want to help the larger health care system get to a place where there is competition on quality. But right now, there isn’t a lot of good information that can help people choose between hospitals and physicians."
Krumholz says many organizations interested in health care report cards rely on billing data, a practice that concerned him and spurred his decision to investigate further. "Although these efforts are in the right direction — that of full disclosure of organization performance — it can lead to misperceptions about differences between specific hospitals."
He and his colleagues chose HealthGrades not because of who the company is, but because of its strategy of using billing data and mathematical models. "What we found is that on average, when you cluster all the five-star hospitals together, they were better than the one-star facilities," says Krumholz. "But there was a lot of heterogeneity between hospitals in the various groups. The data they provide is better than nothing, but not much. It’s not as good as it might appear, and we have to find better ways of providing consumers with quality information."
The information Krumholz found was what he would expect from billing data. "It just can’t capture chart information well. It’s not bad as a surveillance method, and there is often some signal of quality in that data. But that’s different than trying to draw conclusions about a facility."
HealthGrades doesn’t agree with Krumholz’s conclusions, citing flaws in several aspects of the study. First, a prepared statement from the company notes, "to validate a methodology, it is essential to use the same patients, the same time period, and the same rating system. Krumholz et al. use a different time period, a different rating system, and then systematically excludes certain types of patients that HealthGrades includes, thus rendering their results invalid."
The company says using different patient populations introduces a bias in the ratings assigned to hospitals, and differences "become blurred, and therefore the categories would become less distinct."
The company also criticized the study’s use of only 18 months of data, compared to HealthGrades’ 36-month sample, as well as the researchers’ decision to exclude patients readmitted for AMI and those who were transferred to a hospital — some of whom may have been transferred to a different facility for reperfusion therapy.
HealthGrades also noted that Krumholz evaluated an older, five-star rating system. HealthGrades now uses a three-star system, which the prepared statement notes may increase the distinctions between hospitals.
Krumholz tells Healthcare Benchmarks that the editors and peer reviewers at JAMA saw all of HealthGrades’ criticisms before publication and agreed not with the company, but with the researchers’ approach. Besides, he notes, although the study focused on HealthGrades, most such quality measurement reporting programs share the same limitations.
"What patients want and need to know is are providers doing the right thing and are patients satisfied with their care. Whether or not a hospital has a PET [position emission tomography] scanner isn’t a good measure of quality. Even something like nursing ratios may not be. You really want to know how good the nursing care is and how the nurses are deployed."
In the future, using some of the Joint Commission’s core measures as a basis for quality data reporting will be more meaningful, Krumholz says.
In the end, Krumholz says he hopes his study, and its criticisms of the current state of consumer data reporting, will lead to better efforts in this area. "I would like full disclosure of information at the hospital level, and also full disclosure of measurement systems. I’m not suggesting that in the meantime, you don’t publish this kind of information, but there has to be a warning about its limitations. We need to be looking for more meaningful measures of quality."
Reference
1. Krumholz HM, Rathore SS, Chen J, et al. Evaluation of a consumer-oriented Internet health care report card: The risk of quality ratings based on mortality data. JAMA 2002; 287(10):1277-1287.
[For more information, contact:
- Harlan Krumholz, MD, Associate Professor of Internal Medicine and Cardiology, Co-Director of the Clinical Scholars Program and Epidemiology/Public Health, Yale University Medical School, New Haven, CT 06520. Telephone: (203) 737-1717.
- Carla Zema, PhD, Kerr White Visiting Scholar, AHRQ, 6011 Executive Blvd., Suite 200, Rockville, MD 20852. Telephone: (301) 594-1364.
- HealthGrades, 44 Union Blvd., Suite 600, Lakewood, CO 80228. Telephone: (303) 716-0041.]