Not Everything That Can Be Counted Counts!
Abstract & Commentary
By Rahul Gupta, MD, MPH, FACP, Clinical Assistant Professor, West Virginia University School of Medicine, Charleston, WV. Dr. Gupta reports no financial relationships relevant to this field of study.
This article originally appeared in the April 15, 2013, issue of Internal Medicine Alert. It was edited by Stephen Brunton, MD, and peer reviewed by Gerald Roberts, MD. Dr. Brunton is Adjunct Clinical Professor, University of North Carolina, Chapel Hill, and Dr. Roberts is Senior Attending Physician, Long Island Jewish Medical Center, NS/LIJ Health Care System, New Hyde Park, NY. Dr. Brunton serves on the advisory board for Abbott, Boehringer Ingelheim, Janssen, Novo Nordisk, Sanofi, Sunovion, and Teva; he serves on the speakers bureau of Boehringer Ingelheim, Kowa, Novo Nordisk, and Teva. Dr. Roberts reports no financial relationship to this field of study.
Synopsis: As a federal program rolls out that rewards providers with incentives for achieving meaningful use of electronic health records, wide measure-by-measure variation can jeopardize the validity of electronic reporting.
Source: Kern LM, et al. Accuracy of electronically reported “meaningful use” clinical quality measures: A cross-sectional study. Ann Intern Med 2013;158:77-83.
The Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 was signed into law as part of the “stimulus package” and represents the largest U.S. initiative to date designed to encourage widespread use of electronic health records (EHRs).1 As hospitals and clinicians progressively adopt them, EHR systems have the potential to transform health care from a mostly paper-based industry to one that uses information technology to create, store, maintain, and exchange health records. EHRs have been widely touted to improve the quality of patient care and to provide cost savings by increasing practice efficiency, improving care coordination, improving diagnostic accuracy and health outcomes, and increasing patients’ participation in their own care.2 To promote the use of EHRs across the nation, the term “meaningful use” refers to a set of standards defined by the Centers for Medicare & Medicaid Services (CMS) EHR Incentive Programs that govern the use of EHRs and allow eligible providers and hospitals to earn incentive payments by meeting specific criteria. To achieve meaningful use, eligible providers and hospitals must adopt certified EHR technology and use it to achieve specific objectives. These objectives are divided into three stages: Stage 1 focuses on data capture and sharing; Stage 2, coming soon, centers on advanced clinical processes; and Stage 3, scheduled for 2016, focuses on improved outcomes. While the EHR Incentive Program offers up to $27 billion in incentives for meaningful use beginning in 2011, providers who do not achieve it will face financial penalties starting in 2015.3 Because providers will be required to submit “clinical quality measures” from their EHRs, it is critical to ensure that electronic reporting is a valid reflection of the care delivered.
In the current study, Kern et al tested the accuracy of electronic reporting in a community-based setting for 12 quality measures, 11 of which are included in Stage 1 of the Meaningful Use Clinical Quality Measures. In this cross-sectional study, conducted at a federally qualified health center with a commercially available EHR system, 150 patient records per quality measure were randomly sampled from 2008 data, yielding 1154 unique patients. Nearly two-thirds were women, the mean age was 55 years, and patients had a median of four visits in 2008. Electronic reporting of each measure was then compared with manual review of the records to determine its accuracy. The researchers found that sensitivity and specificity varied significantly by measure. For instance, sensitivity ranged from 46% (appropriate asthma medication) to 98% (HbA1c testing in diabetics), and specificity ranged from 62% (LDL cholesterol control in diabetics) to 97% (pneumococcal vaccination). Positive and negative predictive values, as well as positive and negative likelihood ratios, also varied by measure. When absolute rates of recommended care were evaluated, only three measures showed statistically significant differences between electronic reporting and manual review: electronic reporting underestimated the rates of appropriate asthma medication (absolute difference, -39%) and pneumococcal vaccination (absolute difference, -21%) and overestimated the rate of LDL cholesterol control in diabetics (absolute difference, 20%).
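For readers who want to see how these accuracy statistics relate to one another, the minimal sketch below computes them from a hypothetical 2x2 comparison of electronic reporting against manual chart review (treated as the reference standard). The counts are invented for illustration and are not taken from Kern et al.

```python
# Illustrative sketch only: deriving the accuracy metrics discussed above from
# a 2x2 comparison of electronic reporting vs. manual chart review.
# The counts are hypothetical, NOT the study's data.

def accuracy_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values, likelihood ratios, and the
    absolute difference in measured rates, with manual review as reference."""
    sensitivity = tp / (tp + fn)   # care in chart AND captured electronically
    specificity = tn / (tn + fp)   # care absent in chart AND absent electronically
    ppv = tp / (tp + fp)           # electronic "yes" confirmed by the chart
    npv = tn / (tn + fn)           # electronic "no" confirmed by the chart
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    # Absolute difference in the rate of recommended care:
    # electronic rate minus manual-review rate (negative = underestimation).
    n = tp + fp + fn + tn
    abs_diff = (tp + fp) / n - (tp + fn) / n
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": ppv,
        "npv": npv,
        "lr_pos": lr_pos,
        "lr_neg": lr_neg,
        "absolute_difference": abs_diff,
    }

if __name__ == "__main__":
    # Hypothetical counts for one quality measure in a 150-chart sample.
    for name, value in accuracy_metrics(tp=60, fp=5, fn=40, tn=45).items():
        print(f"{name}: {value:.2f}")
```

With these made-up counts, the electronic report misses 40 of 100 charted instances of recommended care (sensitivity 0.60) and so underestimates the measured rate by about 23 percentage points, the same kind of underestimation the study observed for the asthma medication and pneumococcal vaccination measures.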
Commentary
Consistent with previous work in the field, Kern et al find wide measure-by-measure variation in the accuracy of electronically reported clinical quality measures when compared with manual review of patient charts. While there are several plausible reasons for this variation, such as the fact that not all information in the patient chart (e.g., free-text notes or scanned documents) makes it into electronic reporting, it is disappointing that as a system we have not yet resolved these “technical incongruities,” especially when our patients’ lives are at stake. This study focuses on the effect of data source on the validity of performance measures: if a measure cannot be captured reliably, it cannot be valid. As the study shows, considerable problems remain with the accuracy and completeness of medication and problem lists, the building blocks of reportable measures. For a meaningful EHR system to exist, several “meaningful” steps must first be accomplished: clear and concise documentation of the care delivered that can be accurately captured and then automatically transmitted without losing data along the way. Once such a system is established, creating a set of clinical quality measures for meaningful use becomes straightforward. Perhaps we have put the cart before the horse. However, it is not too late to retool our systems thinking and begin to reconfigure EHR systems so that they recognize and extract clinical information from patient charts rather than force care providers to fill in structured fields that disrupt their workflow. The purpose of EHRs should be to improve practice efficiency and care coordination, and thereby outcomes, without forcing clinicians to alter their clinical workflow to suit the reporting needs of the EHR. It is a work in progress, albeit a slow one.
References
1. Buntin MB, et al. Health information technology: laying the infrastructure for national health reform. Health Aff (Millwood) 2010;29:1214-1219.
2. Holroyd-Leduc JM, et al. The impact of the electronic medical record on structure, process, and outcomes within primary care: A systematic review of the evidence. J Am Med Inform Assoc 2011;18:732-737.
3. Blumenthal D, Tavenner M. The “meaningful use” regulation for electronic health records. N Engl J Med 2010;363:501-504.