CMS/Joint Commission Hospital Quality Measures—Is It the Federal Grade for Quality?
Author: Richard T. Griffey, MD, MPH, Assistant Professor of Emergency Medicine, Washington University School of Medicine, St. Louis, MO; and Joshua M. Kosowsky, MD, Assistant Professor, Harvard Medical School, Clinical Director, Department of Emergency Medicine, Brigham and Women's Hospital, Boston, MA.
Peer Reviewers: Brooks F. Bock, MD, FACEP, Professor, Emergency Medicine, Wayne State University, President, Harper/Hutzel Hospitals, The Detroit Medical Center, Detroit, MI; and Larry B. Mellick, MD, MS, FAAP, FACEP, Professor, Department of Emergency Medicine and Pediatrics, Residency Program Director, Medical College of Georgia, Augusta.
Ask any of your colleagues, "Do you provide quality care?" The answer is, of course, yes. But how do you know? Is it because most of your patients seem to do OK? Quality in healthcare has been hard to define, and the task has baffled many. I remember a lecturer who spoke on quality in the emergency department at an ACEP conference 20 or more years ago. He pulled a Cross pen from his pocket and rhetorically asked the audience whether they felt it was a quality writing instrument. Most of the audience nodded yes. Then he asked how the company determined that it made a quality product. Nobody could provide the answer. His answer was that the company developed specifications for the design of the pen and then monitored the assembly process to ensure compliance with those standards. This struck me as a simple but brilliant solution to the definition of quality in healthcare and, more importantly, one that could be implemented and monitored. Simply stated, quality in healthcare consists of deciding beforehand what is the right thing to do, and then doing it. The Hospital Quality Measures are like that: they are evidence-based, consensus-driven processes that, if performed consistently, will improve patient outcomes.
This issue of Emergency Medicine Reports is devoted to increasing your understanding of these measures and the role they will play in your practice. And I want to thank Dr. Greg Henry for the anecdote I used in this introduction; I have remembered it all these years.
—J. Stephan Stapczynski, MD, FACEP, AAEM, Editor
Introduction
Despite increasing expenditures and advances in medical technology, large gaps persist in the delivery of basic health care in the United States.1 During the past 40 years, a number of converging influences have led to the current high-level focus on quality and transparency in U.S. healthcare and in closing these gaps. Increasing cost pressures, the rise of consumerism, organization of payor groups, application to healthcare of quality management methods borrowed from industry, and the maturation of health services research are but a few of the forces behind the present mandate for change. Accordingly, the landscape of performance measurement in healthcare has changed significantly during this time, most notably in the past decade. This period has introduced the use of standardized measures, compulsory reporting for accreditation, "voluntary" reporting to avoid reimbursement disincentives, other "pay-for-performance" (P4P) financial incentives for achieving quality targets, and public reporting of performance. Soon to follow will be additional hospital measures, the introduction of physician measures assessing individual and group performance, and measurement of patient perspectives on care, among other metrics and efforts.
These changes have already had a significant impact on routine medical practice. Certain measures specifically challenge emergency medicine and have raised questions about the value of these measures and how they came to be, debates about the wisdom and usefulness of public reporting, and uncertainty about the future of these initiatives.2 The issues of quality and performance measurement have certainly captured the attention of emergency physicians, who strive to meet performance targets amidst the pressures of increased patient volume and limited capacity and resources. This experience has underscored the importance of engagement by emergency medicine organizations in setting the determinants by which performance is measured. Navigating the maze of competing and collaborating agencies, the various proposed measures at different levels of care, and the broad initiatives that create incentives for reporting and performance can be puzzling even to those involved in the process.
In this issue of Emergency Medicine Reports, we present the current Centers for Medicare and Medicaid Services (CMS) and Joint Commission Hospital Quality Measures (a.k.a. "Core Measures"), outlining the changes and legislation leading to their development, reviewing fundamental aspects of these measures such as data collection, reporting requirements, and public access, and attempting to place these measures into a larger framework that makes them more understandable. We shine a light on the process of measure selection and development and give a sense of future directions.
To understand how current conditions for measurement were selected and measures were chosen, it is important to have a sense of their history. The Hospital Quality Measures are a subset of existing measures at the Joint Commission (previously known as JCAHO) and CMS, and the story of their development is one of an evolution in the roles of these agencies and of changes in healthcare. This article briefly reviews the evolution of quality efforts at CMS, beginning with utilization review and leading to the role of the Quality Improvement Organizations that contract as CMS' agents in every state. Next, it describes the role and progression of the Joint Commission from a small consortium of medical societies to a ubiquitous accreditor, and the history of collaboration between these agencies. We then review the recent changes in quality improvement and performance measurement in healthcare that have led to today's quality agenda, along with efforts to standardize measures in the absence of a centralized authority. The Hospital Quality Measures are presented, along with a description of the process of data collection and analysis and of public access to the data. Finally, this issue discusses future plans of CMS and the Joint Commission in this arena, pay-for-performance initiatives, and what these mean for emergency medicine.
Performance Measurement at CMS and the Evolving Focus on Quality
The history of CMS' efforts in quality assurance begins with the passage of Medicare in 1965. Initially designed to provide health insurance for people 65 and older, Medicare was expanded in the 1970s to include those younger than 65 receiving Social Security disability benefits and certain patients with end-stage renal disease. As the largest single purchaser of healthcare in the United States, Medicare's efforts to secure quality care for its beneficiaries affect quality standards nationally.
In the initial cost-based reimbursement scheme, Medicare's quality assurance efforts consisted primarily of utilization review, focusing on unnecessary services or overuse of services.3 From 1970-1975 this was performed by Experimental Medical Care Review Organizations (EMCROs), voluntary groups of physicians provided for under federal legislation. Neither specific review criteria nor the authority to deny payment were part of the EMCROs' mandate, and they were ineffective in controlling the substantial cost increases that followed. In the first few years after implementation, costs for Medicare doubled and those for Medicaid quadrupled.4-6
Under an amendment to the Social Security Act in 1972, utilization review was contracted to Professional Standards Review Organizations (PSROs). Headed by physicians within a given locality, PSROs had a primarily regulatory role and were given the authority to recommend denial of payment on claims for inpatient services. Controversial from the outset, PSROs were criticized by physician groups as government interference with medical practice that promoted "cookbook medicine," while consumer groups on the other side charged that local review and self-policing were inherently ineffective and a conflict of interest. PSROs did little to control healthcare costs and they, too, were ultimately replaced.3,4,7
HMOs and managed care plans became prevalent in response to rising costs, and under the Social Security Amendments of 1983 Medicare's reimbursement scheme shifted to a prospective payment system (PPS), introducing diagnosis-related groups (DRGs) and allowing for capitation for Medicare beneficiaries. Discussions about quality subsequently shifted to assuring that appropriate services were provided to beneficiaries by these plans, and utilization review focused on underuse.3 While the government set minimum qualifications for the introduction of health plans, quality review of these plans was largely taken on by the JCAHO and the National Committee for Quality Assurance (NCQA), whose Health Plan Employer Data and Information Set (HEDIS) and Consumer Assessment of Healthcare Providers and Systems (CAHPS) measures became standards in evaluating health plans and have since seen numerous adaptations and applications.3,4,7
Passage of the Peer Review Improvement Act of 1982, as part of the Tax Equity and Fiscal Responsibility Act (TEFRA), replaced PSROs with 54 Peer Review Organizations (PROs) (one per state and territory) that bid on contracts to provide review and regulatory services. The focus of the PROs remained on cost control but, with each subsequent contract cycle or "scope of work," expanded to include quality oversight. Under the Omnibus Budget Reconciliation Act of 1986, the federal Department of Health and Human Services contracted with the Institute of Medicine (IOM) to study the impact of the PRO system on quality. The IOM's 1990 report recommended that PROs be tasked primarily with quality improvement rather than utilization review or cost control. This was codified to some degree in the 1993 fourth "scope of work," when the PROs were brought under the Health Care Quality Improvement Program (HCQIP) with a new focus on data collection, quality improvement efforts, and collaborations with providers.3,4,7
Under this program, PROs were charged with participating in quality improvement projects focusing on conditions prevalent in the Medicare population. One of the first projects, the cardiac care coordination project, aimed at improving care in acute myocardial infarction (AMI), was considered a success. Working with the American College of Cardiology, the American Heart Association, and PROs in four states, measures were developed, refined, and piloted, and all four participating states demonstrated significant improvement over the pilot period. These efforts set the stage for broader initiatives. In 1999, HCQIP projects became national, and every PRO was required to demonstrate measurable statewide improvement in the areas of breast cancer, diabetes, heart failure, pneumonia, stroke, and AMI.8
To reflect their shift in focus, PROs were renamed Quality Improvement Organizations (QIOs) between 1999 and 2002. Currently operating under the Eighth Scope of Work ("8th SoW"), 43 QIOs today contract with CMS for more than $400 million a year to provide services in home health agencies, nursing homes, managed care plans, and physicians' offices.9 QIOs retain utilization review and regulatory roles alongside their quality improvement role. Whether they are successful in improving quality, and the potentially conflicting nature of their regulatory and quality improvement roles, remain areas of contention and criticism, particularly by the IOM.
Performance Measurement at the Joint Commission
Before the existence of HCFA, the Joint Commission on Accreditation of Hospitals (JCAH), subsequently the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), and currently the Joint Commission (as of January 8, 2007), had established itself as the premier quality monitoring organization in healthcare. The JCAH was formed in 1951 by a partnering of the American and Canadian Medical Associations, the American College of Physicians, and the American Hospital Association with the American College of Surgeons (ACS). The ACS, influenced by surgeon Ernest Codman's focus on the evaluation of outcomes, had established "minimum standards for hospitals" and performed audits to assess whether these standards were met. The Joint Commission's accreditation mission today goes beyond mere compliance with "minimum standards" (now largely a function of credentialing by federal agencies and licensing by state departments of public health) to recognize "optimal achievable levels of quality."10
The Social Security Amendments of 1965 included a provision under the "Conditions of Participation" requiring, in essence, that hospitals be accredited by the JCAH (as then termed) to participate in Medicare and Medicaid programs, sealing the Joint Commission's preeminence as a standard-setting organization and establishing a precedent for collaboration between the federal government and the Joint Commission. (See Table 1.) Today, more than 95% of acute care hospitals voluntarily report to the Joint Commission and undergo the accreditation process. Until the late 1970s, however, the accreditation process for hospitals did not focus on performance measurement at all, and as discussed below, it did not require reporting on performance until the 1990s.11
Table 1. Milestones in the History of CMS/Joint Commission Hospital Quality Measures
• 1965: JCAHO (Joint Commission) accreditation mandatory for Medicare reimbursement.
• 1987: JCAHO announces intention to require reporting of standardized performance measures; later relents.
• 2001: JCAHO announces initial set of performance measures for four conditions: acute myocardial infarction, heart failure, pneumonia, and pregnancy.
• 2002: JCAHO requires data collection on 10 performance measures ("starter set") for accreditation.
• 2004: CMS and JCAHO align performance measures around those already in use, with the goal of making them identical where appropriate.
• 2005: CMS begins public reporting of hospital comparative data on 10 measures.
• 2006: CMS expands public reporting to 20 measures.
In 1987, shortly after Health Care Financing Administration (HCFA, the predecessor of CMS) announced its intent to release comparative reports on hospital mortality, JCAHO (as then termed) as part of the "Agenda for Change" announced its intention to require hospitals seeking accreditation to collect and submit data on six sets of standardized performance measures they had developed (perioperative care, obstetrical care, trauma care, oncology care, infection control, and medication use) using a standardized "Indicator Measurement System" to facilitate reporting. While both HCFA's and JCAHO's plans were met with strong objections from various quarters and were abandoned, these efforts set the stage for broader initiatives in the 1990s.12
In 1995, JCAHO (as then termed) named an Advisory Council on Performance Measurement to identify specific criteria as standards for use in accreditation and to develop the attributes of core performance measures. Their work established the initial evaluation framework and criteria used to review performance measurement systems and measures. This led to the 1997 launch of JCAHO's ORYX initiative, introducing performance measures as a condition of accreditation for hospitals, long-term care organizations, networks, home health agencies, and behavioral healthcare organizations.13 ORYX allowed healthcare organizations to choose from a broad range of non-standardized measures to report, and to use various reporting systems for transmission of data. With more than 100 different measurement systems and more than 8000 measures available, however, meaningful analysis was limited, and the need for standardization was quickly apparent.14
In response to this, in 1999 JCAHO (as then termed) developed potential focus areas for standardized or "core" measures for hospitals, announcing the initial four measure sets of acute myocardial infarction (AMI), heart failure (HF), pneumonia (PN), and pregnancy and related conditions (PR) in May 2001. This process included input from stakeholders including clinicians, hospitals, consumers, state hospital associations, and medical societies about potential focus areas. Candidate measures were assessed using the Attributes of Core Performance Measures and Associated Evaluation Criteria, and advisory panels were convened to identify sets of measures to assess care provided in a given focus area. The potential core measures were posted on the JCAHO Web site for stakeholder feedback. After development of the initial specifications for the first sets of core measures, JCAHO pilot tested these at 16 facilities to evaluate feasibility, usefulness, reliability, and implementation costs.14
In 2002 JCAHO introduced these core measures into its performance requirements for hospitals. Hospitals seeking accreditation were required to submit data on three of five standardized measure sets (acute myocardial infarction, heart failure, pneumonia, pregnancy and related conditions, and surgical infection prevention). To a degree, reporting on non-standardized measures is still acceptable to obtain accreditation, but reporting on fewer standardized measures requires reporting on more non-standardized measures.14
Collaboration Between CMS, the Joint Commission, and the Hospital Quality Alliance
In 2001 the federal Department of Health and Human Services announced The Quality Initiative, "to assure quality health care for all Americans through accountability and public disclosure." This consisted of several components: the Hospital Quality Initiative (HQI, see Table 2), the Nursing Home Quality Initiative, and the Home Health Quality Initiatives.
Table 2. Key Components of the CMS Hospital Quality Initiative
CMS contracted with the National Quality Forum (NQF) to propose a consensus-derived set of hospital quality measures appropriate for public reporting.13 CMS chose 10 of the 39 NQF consensus-derived measures for several quality improvement efforts and another 24 from this set for a quality incentive demonstration. Throughout this time, JCAHO and CMS collaborated on the AMI, HF, and PN measures to align the specifications common to both, and subsequently set out to make their measure sets identical, with common data dictionaries, information forms, and algorithms, including future measures common to both organizations.13
Under the banner of the Hospital Quality Alliance (HQA), CMS collaborated with the American Hospital Association, the Federation of American Hospitals, the Association of American Medical Colleges, and a broad array of stakeholders to develop a voluntary hospital reporting initiative linking a hospital's payment update under Medicare to the submission of data for a set of standardized measures from the JCAHO ORYX system. The HQA identified 20 standardized, NQF-endorsed measures in the areas of AMI, HF, pneumonia, and surgical infections, referred to as the "Hospital Quality Measures." Ten of these became the "starter set" chosen for initial public disclosure because, "they are related to three serious medical conditions and prevention of surgical infections and it is possible for hospitals to submit information on for public reporting today."15,16 These were also measures that were largely already known and that CMS could validate using existing systems through its QIOs. Since 2006, reporting on the full 20 measures has been required. (See Table 3.)
Although they had appeared in other forms previously under CMS and JCAHO initiatives, these measures were vetted by CMS through a process in April-June 2004 that included selection of measures and public "Listening Sessions" conducted in five U.S. cities, attended by healthcare consumers, payers, plans, providers, and purchasers. The stated objective of these meetings was to receive feedback and comments from interested parties and end-users "outside the beltway." In April 2005, CMS began publicly reporting hospital comparative data based on the HQA measures via its Web-based tool (see below) with the goal of identifying "a robust set of standardized and easy-to-understand hospital quality measures that would be used by all stakeholders in the healthcare system in order to improve quality of care and the ability of consumers to make informed healthcare choices."15
Acceptance of the Hospital Quality Measures has not proceeded without controversy, and all measures have undergone revision, with the pneumonia measures receiving particular attention from emergency physicians. For example, the requirement of universal blood cultures before initial antibiotic treatment in patients with pneumonia was roundly challenged by emergency physicians on both clinical and epidemiologic grounds.17 Shortly after publication of an editorial to this effect,18 the description of this measure was revised to indicate that requiring a blood culture before the initial antibiotic dose applied only to patients with pneumonia being admitted to the intensive care unit (ICU) (a subpopulation for which there are reasonable supporting data for the utility of blood cultures) and those emergency department patients in whom a blood culture was obtained anyway. The diagnostic criteria for pneumonia have been refined so that a patient is eligible for analysis only if the pneumonia diagnosis is confirmed by chest radiograph or computed tomography (July 2006) and is included in the emergency physician's diagnosis or impression (October 2006). The four-hour window for initial antibiotic administration for patients with pneumonia has been challenged and will be increased to six hours, effective for discharges after October 1, 2007.19 These changes demonstrate some of the perils of accepting a "consensus-derived" measure, as with the utility of blood cultures or the timing of initial antibiotic therapy in pneumonia, without input from the specialties most affected and without considering the measure's full range of effects.
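The revised pneumonia specifications described above amount to a small set of eligibility and applicability rules. The sketch below restates them in code for clarity; it is a minimal illustration, not the official measure logic, and all field and function names are hypothetical.

```python
# Minimal sketch of the revised pneumonia-measure rules described above.
# Illustrative only; the authoritative logic lives in the CMS/Joint
# Commission specifications manual. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class PneumoniaCase:
    confirmed_by_imaging: bool         # chest radiograph or CT (July 2006 revision)
    in_ed_physician_impression: bool   # ED diagnosis or impression (October 2006 revision)
    admitted_to_icu: bool
    blood_culture_drawn_in_ed: bool
    minutes_to_first_antibiotic: int

def eligible_for_analysis(case: PneumoniaCase) -> bool:
    """A case enters the measure population only if the diagnosis is
    radiographically confirmed and appears in the ED physician's impression."""
    return case.confirmed_by_imaging and case.in_ed_physician_impression

def blood_culture_measure_applies(case: PneumoniaCase) -> bool:
    """Blood culture before the first antibiotic applies only to ICU
    admissions, or when a culture was drawn in the ED anyway."""
    return case.admitted_to_icu or case.blood_culture_drawn_in_ed

def meets_antibiotic_timing(case: PneumoniaCase, window_hours: int = 6) -> bool:
    """Six-hour window, effective for discharges after October 1, 2007
    (previously four hours)."""
    return case.minutes_to_first_antibiotic <= window_hours * 60
```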
How Are the Measures Analyzed and Reported?
The Hospital Quality Measures (HQM) are made up entirely of process measures and are derived only from admitted patients with Medicare coverage. The specifications for each measure identify the numerator and denominator populations, and the data are aggregated to yield a rate of compliance with the standard. Performance is benchmarked against state and national averages and against top 10% performance. For Joint Commission accreditation, hospitals are required to submit data on a minimum of three measure sets and to report on all measures within a given set. Reporting to the Joint Commission and to CMS is required and is done separately, but plans are in process to unify the reporting mechanism.
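In essence, each measure reduces to a simple rate that is then compared with peer performance. The sketch below illustrates that arithmetic under stated assumptions: the data structures are invented for illustration, and the "top 10%" benchmark is interpreted here as the mean rate among the top decile of hospitals, which may differ from the official computation.

```python
# Illustrative sketch of compliance-rate aggregation and benchmarking.
# Hypothetical inputs; the official specifications define numerator and
# denominator populations precisely.

def compliance_rate(numerator: int, denominator: int) -> float:
    """Fraction of eligible cases (denominator) that met the standard (numerator)."""
    if denominator == 0:
        return float("nan")  # no eligible cases in this reporting period
    return numerator / denominator

def benchmark(hospital_rate: float, peer_rates: list[float]) -> dict:
    """Compare one hospital's rate with the peer average and with the mean
    of the top decile of performers (one reading of 'top 10% performance')."""
    ranked = sorted(peer_rates, reverse=True)
    top_decile = ranked[: max(1, len(ranked) // 10)]
    return {
        "hospital": hospital_rate,
        "peer_average": sum(peer_rates) / len(peer_rates),
        "top_10_percent": sum(top_decile) / len(top_decile),
    }

# Example: 87 of 100 eligible AMI patients received aspirin on arrival.
print(benchmark(compliance_rate(87, 100),
                [0.95, 0.91, 0.88, 0.84, 0.79, 0.72]))
```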
Patients eligible for HQM analysis are identified by International Classification of Diseases, 9th revision (ICD-9) codes generated upon hospital discharge. (See Table 4.) Data to calculate compliance with the individual HQM are abstracted from the medical records of eligible patients by internal hospital employees or by vendors using standardized data collection instruments or worksheets. The data are aggregated and electronically submitted on a quarterly basis to a national clinical data repository via the Web tool QualityNet Exchange, using the CMS Abstraction and Reporting Tool (CART), JCAHO's ORYX Core Measure Performance Measurement System (PMS), or qualifying vendor software. CMS, through QIOs and local Clinical Data Abstraction Centers, validates these data by sampling hospital primary data and re-abstracting the clinical measures quarterly. QIOs are responsible for ensuring reliability and consistency and for mediating appeals.
Table 4. ICD-9 Codes to Identify Patients for Hospital Quality Measure Analysis
One of the goals in reporting is to standardize coding of quality measures and, when possible, to make this part of the administrative data set used for claims rather than reporting it through a separate system. Claims for emergency physician services use Current Procedural Terminology (CPT) codes, developed and maintained by the AMA and adopted for use by the Healthcare Common Procedure Coding System (HCPCS); these assign a five-digit code to services, procedures, and specific other items. For physician-level quality measures, CMS has defined a set of HCPCS codes (termed G-codes) to report data for the calculation of the quality measures. These new codes supplement the usual claims data with clinical data that can be used to measure the quality of services rendered to beneficiaries. Separately, the AMA has developed CPT II codes, which serve a similar function in providing coding at the claims level for performance on a quality measure. The CPT II codes have modifiers that provide for exclusions, whereas G-codes use separate codes to indicate this. It is presently unclear whether G-codes or CPT II codes will become the standard for these measures.
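To make the G-code versus CPT Category II distinction concrete, the fragment below sketches how a quality supplement might ride along on a claim line. The ICD-9, E/M, and 3120F codes are real (3120F appears in Table 7), but the G-code values and the record layout are placeholders invented for illustration.

```python
# Hypothetical claim-line fragments contrasting the two quality supplements.
# Layout is simplified and not an actual claim format.

claim_line = {
    "icd9_diagnosis": "786.50",   # chest pain, unspecified (identifies the denominator)
    "cpt_em_service": "99285",    # ED evaluation and management service
}

# CPT Category II: one performance code, with an optional modifier
# conveying a valid exclusion (see Table 8).
cpt2_reported = {"code": "3120F", "modifier": None}   # ECG performed
cpt2_excluded = {"code": "3120F", "modifier": "1P"}   # excluded, e.g., for medical reasons

# G-codes: exclusions are expressed as distinct codes rather than modifiers.
# The values below are placeholders, not real CMS assignments.
gcode_performed = "GXXX1"   # hypothetical "ECG performed" G-code
gcode_excluded = "GXXX2"    # hypothetical "ECG not performed, documented reason" G-code
```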
Public Access to the Results
Public access to results is considered a key component of the Quality Initiative and intended to provide transparency to consumers and purchasers. The extent to which public reporting is successful in achieving transparency and is being used by consumers remains to be seen. Newspapers and other media have occasionally published stories based on public access data, and some individual hospitals have cautiously used this information for self-promotion. The public has access to the results through www.hospitalcompare.hhs.gov or, for JCAHO publicly reported data, at www.qualitycheck.org. Data on the Hospital Compare Web site is updated quarterly and is typically six to nine months old. (See Figure 1.)
Figure 1. Percent of Pneumonia Patients Given Initial Antibiotic(s) within 4 Hours after Arrival
Pay for Performance ("P4P")
Despite the name "Pay for Performance," CMS' initial financial incentives or disincentives have focused not on performance per se, but first on encouraging voluntary reporting. For an eligible short-term acute care hospital to receive its full annual Medicare financial update under the inpatient prospective payment system, CMS requires hospitals to submit data on 10 quality measures for three medical conditions: AMI, HF, and pneumonia. These are also the same measures that form the starter set of the voluntary reporting effort established by the Hospital Quality Alliance (HQA). Hospitals failing to report these data by the established deadlines receive a 0.4 percentage point reduction in their annual Medicare payment update. Approximately 96% of all eligible hospitals received their full annual payment for FY 2006.
CMS, however, has also undertaken a number of experiments directed at rewarding high performance in achieving quality targets. The primary P4P program at the hospital level is the Premier Hospital Quality Incentive Demonstration, which awarded $8.85 million to hospitals that showed measurable improvements in a number of areas of acute care during the first year of the three-year program. Hospitals receive bonuses based on their overall score on quality measures (34 in all) for each of the following conditions: AMI, HF, community-acquired pneumonia, coronary artery bypass graft (CABG), and hip and knee replacement. Additional P4P programs are being developed at the provider level and other levels as well.20
Individual Physician Auditing and Reporting
Individual physician or physician group performance is one of the aforementioned areas of focus in CMS' P4P initiative. On January 1, 2006, interested physicians began participating in a voluntary program of reporting on 36 performance measures. As originally implemented, the Physician Voluntary Reporting Program (PVRP) provided no additional payment or reward for reporting, and participation was viewed as an opportunity to get a head start on implementing systems and practices for the reporting of future quality measures as government and private payers head increasingly in this direction.
On December 20, 2006, the President signed into law the Tax Relief and Health Care Act of 2006 (TRHCA). Section 101 under Title I authorizes CMS to establish a physician quality reporting system. This statutory program has been named the Physician Quality Reporting Initiative (PQRI), and CMS subsequently discontinued the PVRP and replaced it with PQRI. PQRI establishes a financial incentive for eligible professionals to participate in a voluntary quality-reporting program. Eligible professionals who successfully report a designated set of quality measures on claims for dates of service from July 1 to December 31, 2007, may earn a bonus payment, subject to a cap, of 1.5% of total allowed charges for covered Medicare physician fee schedule services, for the traditional Medicare fee-for-service program only. CMS is using 74 measures for PQRI, seven of which are most relevant for emergency medicine. (See Table 5.) The American College of Emergency Physicians has worked with CMS and other agencies to develop methodology to identify compliance with these PQRI measures.
Table 5. Physician Quality Reporting Initiative Measures Relevant to Emergency Medicine
• ECG performed for non-traumatic chest pain
• Aspirin on arrival for acute myocardial infarction
• ECG performed for syncope
• Vital signs obtained for community-acquired bacterial pneumonia
• Assessment of oxygen saturation for community-acquired bacterial pneumonia
• Assessment of mental status for community-acquired bacterial pneumonia
• Appropriate empiric antibiotic for community-acquired bacterial pneumonia
Reporting compliance with PQRI measures requires that, for each professional service, three codes be determined: a diagnosis (ICD-9 code), a professional service (CPT Evaluation and Management code), and compliance (CPT Category II code, with or without modifier). The CPT Category II codes were created to facilitate collection and reporting of evidence-based performance measures at the time of service. They were developed by the AMA Physician Consortium for Performance Improvement and the NCQA, with input from more than 50 professional organizations. The codes have a hierarchical structure (see Table 6), and seven CPT Category II codes are relevant to emergency medicine. (See Table 7.) An important aspect of CPT Category II codes is the modifiers, which indicate that compliance was not possible for specific reasons. (See Table 8.) Compliance with a PQRI measure is calculated as the ratio of the numerator (determined by the CPT Category II code) to the denominator (determined by a combination of the ICD-9 code, CPT Evaluation and Management code, and patient demographics). These codes are reported on the Medicare claim form submitted by the provider for reimbursement.
Table 6. Structure of CPT Category II Codes
• 0000F Composite Measures
• 0500F Patient Management
• 1000F Patient History
• 2000F Physical Examination
• 3000F Diagnostic Processes and Results
• 4000F Therapeutic, Preventative and Other Measures
• 5000F Follow-Up and Other Outcomes
• 6000F Patient Safety
Table 7. CPT Category II Codes Used for PQRI Emergency Medicine-Relevant Measures
• ECG for non-traumatic chest pain: 3120F
• ASA for acute myocardial infarction: 4084F
• ECG for syncope: 3120F
• Vital signs assessment for community-acquired bacterial pneumonia: 2010F
• Oxygenation assessment for community-acquired bacterial pneumonia: 3028F
• Mental status assessment for community-acquired bacterial pneumonia: 2014F
• Appropriate empiric antibiotics for community-acquired bacterial pneumonia: 4045F
Table 8. CPT Category II Code Modifiers
To be eligible for the PQRI bonus, CMS requires reporting of at least three measures but encourages providers to report as many as possible. Quality data are reported concurrently with the billing process, as noted above. Successful participation in the PQRI requires an 80% reporting rate in at least three measures. Successfully reporting physicians may be eligible for a 1.5% bonus (subject to a cap) for services provided to Medicare fee-for-service beneficiaries between July 1 and December 31, 2007, as illustrated in the sketch below. CMS maintains that there are no plans to make PQRI mandatory. PQRI is a new initiative with evolving specifications; the most up-to-date information can be found on the CMS and ACEP Web sites.
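The participation arithmetic just described is easy to misread, so the following sketch restates it under stated assumptions: per-measure reporting rates are computed from claims, the 80% threshold must be met on at least three measures, and the bonus is 1.5% of allowed charges up to a cap. The cap figure and all names here are hypothetical illustrations.

```python
# Minimal sketch of PQRI participation arithmetic as described above.
# Illustrative only; consult CMS for the authoritative rules.

def reporting_rate(instances_reported: int, eligible_instances: int) -> float:
    """Fraction of eligible cases for which a quality code (including valid
    exclusions) was submitted on the claim."""
    return instances_reported / eligible_instances if eligible_instances else 0.0

def qualifies_for_bonus(rates_by_measure: dict[str, float]) -> bool:
    """Successful participation: an 80% reporting rate in at least three measures."""
    return sum(rate >= 0.80 for rate in rates_by_measure.values()) >= 3

def bonus_amount(total_allowed_charges: float, cap: float) -> float:
    """1.5% of total allowed Medicare fee-schedule charges, subject to a cap."""
    return min(0.015 * total_allowed_charges, cap)

rates = {
    "ecg_nontraumatic_chest_pain": reporting_rate(92, 100),
    "asa_on_arrival_ami": reporting_rate(17, 20),
    "ecg_syncope": reporting_rate(40, 45),
}
if qualifies_for_bonus(rates):
    # The cap below is a made-up placeholder, not a CMS value.
    print(f"Estimated bonus: ${bonus_amount(400_000.0, cap=10_000.0):,.2f}")
```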
Standardization and Standard-Setting Bodies
It has been clear for more than a decade that redundant and non-standardized measures impose an enormous data collection and reporting burden on providers and institutions, wasting resources that could be better used in quality improvement efforts. It is also clear that data and measures serve many stakeholders and a variety of purposes, and that the existing uncoordinated efforts remain inadequate to this task.12
In 1998 the President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry recommended creating an agency to identify national aims for improvement and report on progress, and a forum to define plans for implementing quality measurement, data collection, reporting standards, the identification and updating of core sets of quality measures, and standardized reporting methods. The latter took form as the National Quality Forum (NQF), a not-for-profit organization with membership across the spectrum of stakeholders whose function remains primarily the endorsement of measures: "To improve American healthcare through endorsement of consensus-based national standards for measurement and public reporting of healthcare performance data that provide meaningful information about whether care is safe, timely, beneficial, patient-centered, equitable and efficient."21
Perhaps because the former, a centralized standard-setting body, was never established by Congress, performance measurement efforts have often continued in an uncoordinated, fragmented manner. However, a number of groups, including the Joint Commission, CMS, the AMA, the NQF, and the NCQA, have made considerable gains both in standardizing hospital measures and in aligning initial efforts to create physician-specific measures. In 2000, the landmark IOM report "To Err is Human" underscored the importance of building quality systems to avoid medical errors resulting in patient deaths.22 The follow-up report "Crossing the Quality Chasm" outlined a number of quality goals for systems to build toward.23 These reports solidified support for greater attention to quality and for an IOM proposal for a central, independent National Quality Coordination Board to coordinate these diverse efforts in a usable way.
Future Changes
A number of changes in the near future will expand the mandate of the Quality Initiative and move reporting toward standardization. Hospitals currently report on the 20 HQA-approved measures. The HQA anticipates adding new measures in 2007-2008: 30-day mortality for AMI, HF, and pneumonia; influenza vaccination for pneumonia; and two measures concerning venous thromboembolism prophylaxis in surgical patients. The number of mandatory hospital quality measures is likely to increase in the next few years, so it is important that new measures be selected to serve their intended purpose while minimizing unintended negative consequences. (See Table 9.)
Table 9. Important Attributes of Performance Measures (The Joint Commission)
• Should target improvement in the health of populations
• Should be precisely defined and specified
• Should be reliable
• Should be valid
• Should be interpretable
• Should be risk-adjusted or stratified
• Should be evaluated for the burden of data collection
• Should be useful in the accreditation process
• Should be under provider control
• Should be publicly available
The Joint Commission will replace non-core measures by 2010 and will adopt NQF-endorsed measures for non-hospital areas. Physician-level measures will be developed as described above. With adoption of the electronic health record (spurred on by federal and state reimbursement requirements), emphasis will shift to patients' experience within and across delivery sites. It is proposed that measures collected by manual abstraction of paper records will eventually be retired, with data collected from electronic records on a randomly sampled basis for surveillance purposes.
Conclusion
The drive to define quality in healthcare and to derive measures of it continues to intensify. Initial steps have focused on securing reporting, standardizing measures, and beginning public reporting of results. It is a recognized phenomenon that the introduction of imperfect measures stirs passions and results in increased engagement in quality improvement and the development of better measures. In applying this science to healthcare, however, it is important that the focus remain on measuring and encouraging quality rather than simply measuring and encouraging documentation. To ensure that measures and conditions are high priority, evidence-based, and appropriate, it is critical that practicing emergency physicians not only sit at the table when metrics are proposed and vetted, but also engage in and take ownership of this process as it relates to their field of expertise. Failure to do so relegates the specialty to a reactive role and represents a missed opportunity to demonstrate leadership in an area that is bound to expand in medicine.24,25
For the practicing emergency physician, the bottom line is this: Somebody is starting to watch your practice. The methodology may be imprecise, but that somebody is big and powerful and has the ability to hurt your hospital and affect your career. Your hospital CEO cares about these measures, and so must you. You should know which measures are applicable to your practice in the emergency department and develop techniques and processes to facilitate compliance with both utilization and documentation. Develop protocols or pathways, use pre-printed or computerized order sets, and document with templated screens or forms that incorporate relevant quality measures. In addition, it is important to have a physician champion to represent your interests. This individual should know who is collecting the data on compliance with these measures, know their methods, be able to audit their performance and correct inaccuracies, and, ideally, review data before submission. If you and your group can achieve this, you will not only make your hospital CEO happy, but improve the quality of care for your patients as well.
References
1. Asch SM, Kerr EA, Keesey J, et al. Who is at greatest risk for receiving poor-quality health care? N Engl J Med 2006;354:1147-1156.
2. Feldman JA. Quality-of-care measures in emergency medicine: when will they come? What will they look like? Acad Emerg Med 2006;13:980-982.
3. Bhatia A, Blackstock S, Nelson R, et al. Evolution of quality review programs for Medicare: Quality assurance to quality improvement. Health Care Financing Review 2000;22:69-74.
4. Sprague L. Contracting for quality: Medicare's Quality Improvement Organizations. National Health Policy Forum No 774, June 3, 2002.
5. McIntyre D, Rogers L, Heier EJ. Overview, history and objectives of performance measurement. Health Care Financing Review 2001;22(3).
6. Institute of Medicine. A Strategy for Quality Assurance. Washington, DC: National Academy Press; 1990.
7. Institute of Medicine. Medicare: A Strategy for Quality Assurance, vol. 1. Lohr KN, ed. Washington, DC: National Academies Press; 1990:139.
8. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries: A profile at state and national levels. JAMA 2000;284:1670-1676.
9. Guadagnino C. "QIOs under fire, face reform." Physician's News Digest June 2006. www.physiciansnews.com/cover/606.html. Accessed 9/2006.
10. JCAHO: Our History. Available at www.jointcommission.org/AboutUs/joint_commission_history.htm. Accessed 8/29/2007.
11. JCAHO: Key Historic Milestones. Available at www.jointcommission.org/NR/rdonlyres/CF91D166-C4C3-453A-8445-F26190F19107/0/KeyHistoricalActivities.pdf. Accessed 9/30/2006.
12. Performance Measurement: Accelerating Improvement. Washington, DC: The National Academies Press; 2006.
13. JCAHO Specifications Manual for National Hospital Quality Measures. QualityNet Web site. Available at http://qualitynet.org/dcs/ContentServer?cid=1141662756099&pagename=QnetPublic%2FPage%2FQnetTier2&c=Page. Accessed 8/29/2007.
14. JCAHO: A Comprehensive Review of Development and Testing for National Implementation of Hospital Core Measures. Available at www.jointcommission.org/NR/rdonlyres/48DFC95A-9C05-4A44-AB05-1769D5253014/0/AComprehensiveReviewofDevelopmentforCoreMeasures.pdf. Accessed 9/2006.
15. Centers for Medicare and Medicaid Services Web site. Hospital Quality Initiatives Overview. Available at www.cms.gov/HospitalQualityInits. Accessed 8/29/2007.
16. Grant JB, Hayes RP, Pates RD, et al. HCFA's health care quality improvement program: The medical informatics challenge. J Am Medical Inform Assoc 1996;3:15-26.
17. Kennedy M, Bates DW, Wright SB, et al. Do emergency department blood cultures change practice in patients with pneumonia? Ann Emerg Med 2005;46:393-400.
18. Walls RM, Resnick J. The Joint Commission on Accreditation of Healthcare Organizations and Center for Medicare and Medicaid Services community-acquired pneumonia initiative: What went wrong? Ann Emerg Med 2005;46:409-411.
19. Mitka M. JCAHO tweaks emergency departments' pneumonia treatment standards (Medical News & Perspectives). JAMA 2007;297:1758-1759.
20. CMS/Premier Hospital Quality Incentive Demonstration (HQID). HQI. Available at www.premierinc.com/p4p/hqi/. Accessed 8/29/2007.
21. Hurtado MP, Swift EK, Corrigan JM, eds. Envisioning the National Health Care Quality Report. Committee on the National Quality Report on Health Care Delivery. Available at http://www.nap.edu/catalog.php?record_id=10073#toc. Accessed 11/2006.
22. Kohn LT, Corrigan JM, Donaldson MS (Institute of Medicine). To err is human: Building a safer health system. Washington, DC: National Academy Press, 2000.
23. Committee on Quality Health Care in America. Institute of Medicine. Crossing the Quality Chasm. Washington, DC: National Academy Press, 2001.
24. Graff L, Stevens C, Spaite D, et al. Measuring and improving quality in emergency medicine. Acad Emerg Med 2002;9:1091-1107.
25. Burstin H. "Crossing the quality chasm" in emergency medicine. Acad Emerg Med 2002;9:1074-1077.