Reviewing the data: What ORYX means to you
CMs' role in review and analysis often neglected
By Judy Homa-Lowry, RN, MS, CPHQ
President, Homa-Lowry Healthcare Consulting
Canton, MI
When case managers on the front lines of patient care do not have access to information that reflects patient care outcomes, an opportunity is missed, not only for the case manager but also for the organization and, most importantly, the patient.
Nowhere is access to outcomes data more important than in the ORYX initiative put forth by the Joint Commission on Accreditation of Healthcare Organizations in Oakbrook Terrace, IL.
As part of ORYX, the Joint Commission has mandated that hospitals begin submitting patient care data to external databases. The current requirement is for every hospital to submit data on two indicators that encompass 20% of its patient population, and the measures should focus on patient outcomes. The timetable for adding indicators that reflect a larger share of the patient population over the next several years is already established: hospitals must have 12 indicators representing 35% of their patient population by the end of the year 2000.
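To make the arithmetic concrete, here is a minimal sketch of the coverage calculation, using invented volumes and indicator names; actual counting follows the rules set by the Joint Commission and your vendor.

```python
# A minimal sketch of the ORYX coverage arithmetic, using invented figures.
# Real counting follows the Joint Commission's and the vendor's rules, and
# patients covered by more than one indicator should be counted only once.

annual_discharges = 12000  # hypothetical total patient population

# Hypothetical indicators and the annual patient volume each one covers
indicators = {
    "inpatient mortality": 1500,
    "unplanned readmission": 1100,
}

coverage = sum(indicators.values()) / annual_discharges
print(f"{len(indicators)} indicators covering {coverage:.0%} of patients")

# 1998 requirement: 2 indicators covering 20% of the patient population;
# by the end of 2000: 12 indicators covering 35%.
meets_1998 = len(indicators) >= 2 and coverage >= 0.20
print("Meets 1998 requirement:", meets_1998)
```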
This project has been a long struggle for the Joint Commission in terms of initiation and compliance. During the 1980s, the Joint Commission began developing its own indicators and a database of patient care information that would focus primarily on patient care outcomes. The initiative began as a result of pressure from the health care industry: questions were being raised about the Joint Commission's effectiveness in monitoring patient care outcomes. Most of its standards measured the structures and processes associated with care; very few focused on outcomes.
There was little evidence that the quality assurance and quality assessment standards of that time were doing much to improve patient care. Not only was the industry questioning the expense and value of the process, but payers also were concerned. It was difficult to defend organizations as providing quality care when stories of tragic patient care outcomes were appearing more often in the press.
The Joint Commission responded to these pressures by developing indicators that could measure patient care outcomes across the organization as well as within selected specialties. Many resources were expended to bring together national experts in a given field to discuss, develop, and test indicators. The effort was known as the Indicator Measurement System (IMS). Once alpha and beta testing were completed, the system was promoted to the health care industry.
The major objective of the project was the development of a national comparative database. It was felt that this would provide useful information for measuring hospital performance in terms of patient care outcomes. There also was speculation that this would minimize the need for on-site Joint Commission surveys. This approach was intriguing because the surveyor review process was felt to be so subjective.
The plan was that, eventually, the IMS database would contain enough data to allow identification of trends and patterns of hospital performance. If hospitals performed within expected guidelines, there would be no need for a surveyor to visit the hospital. If a hospital corrected unsatisfactory trends and patterns, this also would minimize the need for an on-site surveyor. However, if unsatisfactory trends and patterns continued, an on-site survey would be necessary.
The Joint Commission wanted to require hospitals to participate in this project. It was felt that this would be a good way to begin to make the accreditation process focus more on key patient care outcomes instead of structures and processes. The hospital industry felt uncomfortable reporting these types of patient outcome data to the Joint Commission.
The health care industry watched the initiative closely. It was not until the Joint Commission started to discuss mandating participation in their IMS for continued accreditation that the industry started reacting. There was concern about having only one database that was developed and maintained by the JCAHO. This also stimulated frustration on the part of health care organizations that were currently participating with other vendors and/or in other databases. They felt the mandate proposed by the Joint Commission would be expensive and would conflict with projects they were already involved in.
The industry pressure against this initiative was so severe that the Joint Commission re-evaluated its position. The Joint Commission determined that it would allow hospitals to select their own vendors. It did, however, require that the vendors meet certain criteria before the Joint Commission considered them compliant with ORYX requirements.
It became readily apparent that the Joint Commission had a huge task in front of it. One of the first steps was to develop criteria that vendors must meet to be part of the ORYX program. Some existing and new companies saw this as a tremendous business opportunity. The number of hopeful ORYX vendors proliferated to more than 100 systems. There was no consistency in terms of the data collection and methodologies used by the various vendors.
This pleased most of the health care organizations and vendors who had information products that measured patient care outcomes in some form or fashion. For the Joint Commission, however, it presented a new challenge. If the Joint Commission was to truly begin to provide hospitals with comparative and/or benchmark information, how would it coordinate the information from all of these various vendors?
This task was monumental. With more than a hundred systems utilizing different risk- and/or severity-adjusted models, how could we be sure that hospitals would be comparing apples with apples? In addition, what would be the minimum number of hospitals contained in each of these systems? Was the number of hospitals in each of the systems enough to have a representative sample? How often were the vendors rebasing their systems? What were the data integrity edits that the various vendors were using?
These questions have forced the Joint Commission to re-evaluate the systems that have been selected to participate in the ORYX project.
Hospital data management must be timely
Hospitals need to pay attention not only to the long-term survival of their vendor in the ORYX program, but also to the internal systems that keep the organization in compliance with the timelines established by the vendor and the Joint Commission. Specifically, does the hospital complete its medical records in a timely manner? Are the data submitted to the vendor on time? If the information from the hospital requires edits, does the hospital complete them promptly and return them to the vendor? If not, the hospital's data will not be completed within the time frame established by the vendor, which puts the hospital out of compliance with the Joint Commission's time line, because the vendor must meet the time frames established for quarterly data reporting.
Once the vendor returns the data to the hospital, the data need to be evaluated. There should be a hospital procedure for how the data are disseminated throughout the organization. The procedure should include how the information will be reviewed, analyzed, and used for performance improvement in the organization.
The case manager's role in the use of ORYX data often is neglected. This responsibility is often left to others in the organization. The review and analysis of this information requires an interdisciplinary approach. Case managers can have a large impact on suggesting and/or implementing corrective actions that can have a positive effect on patient care.
The case manager's first step in reviewing the ORYX data is to find out who the data vendor is and what method is used for severity adjustment and/or risk adjustment. This is critical because when the data are shared throughout the organization, there may be a tendency to doubt the integrity of the information. The skepticism often comes from the medical staff. This skepticism may be appropriate because this type of data is often used to draw conclusions, as opposed to raising questions about areas that need further analysis.
Some of the methods used for adjustment purposes have not been appropriately peer-reviewed. In this context, peer review means that when data are being "adjusted," there needs to be an independent review of the methodology to make sure the data results are valid and reliable. In order to establish credibility of a data source, the methodology needs to be understood for its strengths and limitations. When the methodology and results are found to be questionable, the validity and reliability of the data become the target of discussions, instead of where to focus performance improvement efforts.
The data sources used in these external databases stem mainly from two types: the first is administrative or billing data; the second is information that is manually collected. Some systems use a combination of both methods.
The source of data and the adjustment method being used need to be understood by the individuals using the data. Systems using administrative or billing data often are selected because they are cheaper; the information comes from data already present on the patient's bill. In essence, the information gathered is based on the coding of the medical records. If information is overcoded, not coded, or inappropriately coded, the results will be skewed. This is not to suggest that coders should be viewed with suspicion. Good coders are an extremely valuable asset to the organization, and stringent coding rules dictate what can and cannot be coded. The organization's coding philosophy is also an issue.
The accuracy of the coding depends not only on coding rules, but also (and more importantly) on the information contained in the medical record. Because most adjustment systems depend on the presence or absence of comorbidities, complications, and all of the diagnoses and procedures, medical record documentation is crucial to accurate coding. As an example, I once reviewed vendor data showing a high mortality rate and a low complication rate. The obvious conclusion would be that the organization or practitioner was providing poor-quality care. However, the opposite was true: the organization was undercoding complications and comorbidities. This inflated the mortality rates, because you would not expect patients without complications or comorbidities to die. When the records were further reviewed and the additional complications and comorbidities were considered, the mortality rate came in line with those of comparable institutions.
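A toy calculation can illustrate the mechanics. The sketch below assigns each patient an expected mortality risk that rises with the number of coded comorbidities, then compares observed deaths with expected deaths (the observed-to-expected, or O/E, ratio). The risk weights and patient counts are invented, and real adjustment models are far more elaborate, but the direction of the effect is the same: undercoding lowers expected deaths and inflates the ratio.

```python
# A toy illustration of how undercoded comorbidities inflate risk-adjusted
# mortality. Risk weights and patient counts are invented for illustration.

def expected_deaths(patients):
    """Toy model: a base mortality risk plus an increment for each
    coded comorbidity, summed over all patients."""
    base_risk, risk_per_comorbidity = 0.02, 0.03
    return sum(base_risk + risk_per_comorbidity * comorbidities
               for comorbidities in patients)

observed_deaths = 30

# The same 500 patients, coded two ways: as originally coded (many
# comorbidities missed) and after the records were re-reviewed.
as_coded    = [0] * 400 + [1] * 100   # comorbidities mostly not coded
re_reviewed = [1] * 350 + [2] * 150   # comorbidities fully captured

for label, patients in (("as coded", as_coded), ("re-reviewed", re_reviewed)):
    expected = expected_deaths(patients)
    print(f"{label:12s} expected {expected:5.1f}  "
          f"O/E ratio {observed_deaths / expected:.2f}")
# as coded:    expected 13.0, O/E 2.31 (looks like poor care)
# re-reviewed: expected 29.5, O/E 1.02 (in line with peers)
```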
Given the nature of the methodology, it is highly unlikely that the results will ever be "perfect." The data should point you in a direction for further analysis.
There is one other issue to consider in using administrative data. Because of ease of access and cost, MEDPAR data often are used; vendors can access them easily. The shortcoming is that the data are released only once a year. For example, the earliest MEDPAR data from 1997 could be obtained would be the summer of 1998. The other drawback is that the data are at least six months old and primarily represent the Medicare population. Do Medicare data reflect, or serve as a proxy for, the rest of your patient population?
The other type of administrative data that can be obtained is what is called all-payer data. This means the data are not limited to the Medicare population. It would include patients from all age groups. The difference is that this type of data is often difficult to obtain because of the restrictions in terms of access. These data are primarily considered to be confidential. Special permission must be obtained before a vendor can have access to this type of data. Because of this fact, it is often harder to do meaningful comparative studies when using this type of information. Comparative studies using these data may be limited to some regions or a few select states.
A final note on the methods used for risk and severity adjustment of administrative data: different methodologies tend to work better for different patient populations and different DRGs. Lisa Iezzoni, MD, has written books and many articles on risk-adjustment methodologies and the results obtained from the various systems.
Train manual data collectors adequately
The other type of data used is manually collected data. Data sets are determined in advance, and the definitions for the data being collected must be clear. If the people responsible for data collection are not trained adequately, the same issues of data integrity may arise. Most vendors, however, have a built-in system for data collection, data definitions, and checking the validity and reliability of the information. The downside of this approach is that it usually requires tremendous resources for data collection.
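The sketch below suggests what such built-in edit checks might look like: each field is validated against a data definition, and a cross-field rule catches internally inconsistent records. The field names and rules are hypothetical; every vendor publishes its own definitions.

```python
# A minimal sketch of data-integrity edits for manually abstracted
# records. Field names and rules are hypothetical; each vendor
# publishes its own data definitions and edit checks.

FIELD_RULES = {
    "age":            lambda v: isinstance(v, int) and 0 <= v <= 120,
    "admit_date":     lambda v: bool(v),
    "discharge_date": lambda v: bool(v),
}

def edit_check(record):
    """Return a list of the edits this record fails."""
    errors = [field for field, rule in FIELD_RULES.items()
              if field not in record or not rule(record[field])]
    # Cross-field rule: discharge cannot precede admission
    # (ISO yyyy-mm-dd strings compare correctly as text).
    admit = record.get("admit_date")
    discharge = record.get("discharge_date")
    if admit and discharge and discharge < admit:
        errors.append("discharge precedes admission")
    return errors

record = {"age": 134, "admit_date": "1998-03-02",
          "discharge_date": "1998-03-01"}
print(edit_check(record))  # ['age', 'discharge precedes admission']
```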
In summary, before even looking at the indicators the organization has selected, you must ask who the vendor is, what type of data is used (administrative or manually collected), and what the methodology is. Another issue regarding the use of external data is to evaluate the number and representation of patients in the database. For example, how many hospitals are in the database? What regions of the country are they located in? These factors can have a profound effect on the data results.
What are the indicators your organization has selected to encompass 20% of the patient population? That is the requirement for 1998; as previously mentioned, the percentage will increase over the next few years. In terms of performing case management, how much of this 20% are you responsible for? What information have you received from your organization about its participation in ORYX?
When your organization receives information from the vendor each quarter, it will be in the form of a report comparing your organization with others. If your organization's results are statistically significantly high, it must evaluate the data and take corrective actions to address why it is having that experience. Many times these data will reflect trends and patterns in your hospital's practice.
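As an illustration of what "statistically significantly high" can mean, the sketch below applies a simple one-sample z-test to a hospital's quarterly rate. The figures are invented, and each vendor applies its own (often more sophisticated) methods, but the underlying logic is similar.

```python
# A sketch of one way "statistically significantly high" can be judged:
# a one-sample z-test of the hospital's quarterly rate against the
# comparative database rate. All numbers are invented for illustration.

from math import sqrt

hospital_cases, hospital_events = 400, 28   # hypothetical quarter
comparative_rate = 0.045                    # rate across all hospitals

rate = hospital_events / hospital_cases     # 7.0%
std_error = sqrt(comparative_rate * (1 - comparative_rate) / hospital_cases)
z = (rate - comparative_rate) / std_error

print(f"hospital {rate:.1%} vs comparative {comparative_rate:.1%}, z = {z:.2f}")
if z > 1.96:  # roughly the two-sided 95% threshold
    print("Statistically high: flag for review and analysis.")
```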
If an undesirable trend or pattern is identified, how does the case manager address the information? Are pathways developed to address the patient population identified by ORYX data? Should we develop pathways to address the undesirable trends and patterns identified through the ORYX data?
The answer is to find out which data system your organization has selected. Understand the strengths and weaknesses of the system in terms of data integrity. Stay apprised of the indicators and their results. Use ORYX data to evaluate potential areas of improvement for your case management program. Improvement projects may include creating additional pathways, refining existing pathways, designing patient-focused studies, evaluating medical record compliance with actual practice, and combining ORYX data with other data available in the organization.