Hospital Report Cards: What Do the Grades Mean?
ABSTRACT & COMMENTARY
By Deborah J. DeWaay, MD, FACP
Assistant Professor, Medical University of South Carolina, Charleston, SC
Dr. DeWaay reports no financial relationships relevant to this field of study.
Summary
Hospital-acquired pressure ulcers (HAPUs) are associated with significant morbidity and mortality for patients. The cost of a stage III or IV ulcer can range between $5,000 and $151,700. The principal motivator for hospitals to prevent HAPUs has been "value-based purchasing" programs. In 2008, Medicare began the Hospital-Acquired Conditions (HAC) Initiative, which eliminated payments for certain nosocomial complications. Beginning in October 2014, the quartile of hospitals with the highest risk-adjusted HAC rates will receive a 1% pay reduction for all of their Medicare patients. The HAPU rate contributes to the HAC rate that determines which hospitals fall into that bottom quartile, and it is measured by the Agency for Healthcare Research and Quality (AHRQ) Patient Safety Indicator 3. As a result, the administrative data hospitals use to generate claims for payment will now be used to determine a complication rate for HAPUs and HACs in general. This study sought to assess the validity of the administrative data for this purpose. The authors compared the hospitals with the highest HAPU rates based on AHRQ administrative data collected from insurance claims against a statewide surveillance data set of HAPU rates based on comprehensive patient skin examinations.
This study retrospectively analyzed one year's worth of records from the Healthcare Cost and Utilization Project State Inpatient Databases, which collect administrative patient data and are sponsored by the AHRQ, and compared them to the quarterly hospitalwide pressure ulcer surveillance data from California hospitals found in the Collaborative Alliance for Nursing Outcomes Pressure Ulcer Prevalence Study. From the administrative data set, the authors excluded patients under age 16, patients with no age listed, obstetric deliveries, and prisoners. They also excluded specialty hospitals for children, maternity, and surgery, as well as hospitals with no surveillance data. In addition, the authors excluded hospitals with fewer than 6 months of administrative or surveillance data in either data set. The surveillance data set has certain clinical exclusions applied as it is built, such as patients who are medically unstable or receiving palliative care; it was not possible to apply these exclusions to the administrative data set.
The administrative HAPU rate for each hospital was the cumulative rate of stage II or greater HAPUs (HAPU2+), determined by identifying patients who had no pressure ulcer on admission and who received a secondary ICD-9 code for a hospital-acquired pressure ulcer of stage II or greater at any point during the hospitalization. The surveillance HAPU rate was the percentage of patients examined in a particular hospital who had no ulcer on admission and were found to have a stage II or greater HAPU.
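To make the two rate definitions concrete, the following is a minimal sketch of how each hospital's rate could be calculated. The column names and data layout are hypothetical illustrations, not the study's actual analysis code.

```python
import pandas as pd

# Hypothetical administrative claims data: one row per hospitalization.
# Assumed columns: hospital_id, ulcer_on_admission (bool), and
# hapu2plus_secondary_code (bool: any secondary ICD-9 code for a
# stage II or greater hospital-acquired pressure ulcer during the stay).
def administrative_hapu2_rate(claims: pd.DataFrame) -> pd.Series:
    # Restrict to patients with no pressure ulcer on admission.
    eligible = claims[~claims["ulcer_on_admission"]]
    # Cumulative rate per hospital: coded HAPU2+ stays / eligible stays (as %).
    return eligible.groupby("hospital_id")["hapu2plus_secondary_code"].mean() * 100

# Hypothetical surveillance data: one row per patient examined during the
# quarterly hospitalwide skin survey.
# Assumed columns: hospital_id, ulcer_on_admission (bool),
# stage2plus_hapu_on_exam (bool).
def surveillance_hapu2_rate(survey: pd.DataFrame) -> pd.Series:
    eligible = survey[~survey["ulcer_on_admission"]]
    # Percentage of examined, eligible patients found to have a stage II+ HAPU.
    return eligible.groupby("hospital_id")["stage2plus_hapu_on_exam"].mean() * 100
```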
Data from 196 hospitals were included in this study. Patients with HAPU2+ were older, had longer lengths of stay, had more diagnoses listed, and were more frequently hospitalized for unscheduled surgery. The mean administrative HAPU2+ rate was 0.15% (CI, 0.13-0.17%; range, 0-0.74%), compared with a mean surveillance HAPU2+ rate of 2.0% (CI, 1.8-2.2%; range, 0-7.3%). Hospitals were ranked by their administrative HAPU2+ rate, placed into quartiles, and then compared with the rankings derived from the surveillance data. Only 35% of the hospitals in the bottom (worst-performing) quartile of the administrative data set performed below average in the surveillance data set. In addition, the correlation between the rates in the two data sets was only weakly positive, with a Pearson correlation coefficient of 0.2 (CI, 0.06-0.33).
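A rough sketch of the comparison described above (quartile assignment, agreement of the worst quartile with below-average surveillance performance, and the Pearson correlation) follows; the variable names are hypothetical and the study's actual statistical methods may differ.

```python
import pandas as pd
from scipy.stats import pearsonr

# admin_rate and surv_rate: per-hospital HAPU2+ rates (e.g., from the sketch
# above), indexed by hospital_id. Names are illustrative only.
def compare_rankings(admin_rate: pd.Series, surv_rate: pd.Series):
    rates = pd.DataFrame({"admin": admin_rate, "surv": surv_rate}).dropna()

    # Rank hospitals by administrative rate and cut into quartiles
    # (quartile 4 = highest administrative HAPU2+ rates, i.e., worst performers).
    rates["admin_quartile"] = pd.qcut(
        rates["admin"].rank(method="first"), 4, labels=[1, 2, 3, 4]
    )

    # Of the hospitals in the worst administrative quartile, what share also
    # performs below average on surveillance (surveillance rate above the mean)?
    worst = rates[rates["admin_quartile"] == 4]
    share_below_average = (worst["surv"] > rates["surv"].mean()).mean()

    # Pearson correlation between the two sets of hospital rates.
    r, p = pearsonr(rates["admin"], rates["surv"])
    return share_below_average, r, p
```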
COMMENTARY
Although the authors anticipated a discrepancy between the surveillance and administrative data, a surveillance HAPU2+ rate more than 10-fold higher than the administrative rate was quite surprising. The authors attribute the low rate in the administrative data set to the fact that it is based on the diagnoses coders can use in claims. Coders may only use diagnoses documented by providers (physicians, physician assistants, or nurse practitioners); wound care nursing notes can be used only if they are supported by the primary provider's documentation. In contrast, the surveillance data are generated by a team that performs quarterly hospitalwide skin examinations and reviews the entire medical record, including nursing notes.

Given the financial impact of the new rules on hospitals, the discrepancy between the two data sets in overall HAPU2+ rate and their inconsistency in identifying the "poor performers" are disturbing for several reasons. First, there appear to be several ways hospitals can manipulate the system to lower their administrative HAPU2+ rates, such as not listing the ulcer among the discharge diagnoses. Although I agree with the authors that "gaming the system" was probably not occurring in this data set because there was no financial incentive to do so in 2009, there will be a financial incentive in 2014. Second, if all hospitals do not make the same effort to truly minimize their HAPU rate, which first requires identifying the "real" rate through excellent data collection, then the hospitals making the most effort to record correctly will be penalized. There is serious concern that the race to stay out of the bottom quartile and avoid a potential financial penalty will not be a race to truly prevent HAPUs, but a race to have the best-looking administrative data set.