Don't overlook abstractor training
They need not be clinical providers of patient care
By Patrice Spath, ART
Brown-Spath Associates
Forest Grove, OR
Once a set of performance measures has been developed to evaluate compliance with a particular clinical practice guideline, the task of collecting data to assess guideline compliance begins. Organizations commonly employ nonclinicians to perform the actual data gathering. Data abstractors range from clerical staff working in small ambulatory clinics or insurance offices to clinical research assistants employed by large medical centers or external review organizations. Their training and educational needs are significantly affected by the clinical complexity and sophistication of the data sources.
The skill levels of data abstractors will necessarily vary. Therefore, the training and education recommendations listed below should be considered minimum rather than optimum. It is important to the success of your performance measurement project that this minimal training and education be given to all data abstractors.
At a minimum, data abstractors should be knowledgeable in the medical terminology of the health care circumstances they are expected to evaluate. They need not be clinical providers of patient care services. In fact, using clinical providers for data abstraction duties may unnecessarily inflate the cost of the performance assessment project and introduce a degree of subjectivity to the data collection process. That is why some health care providers and external review groups use accredited record technicians or other similarly trained professionals for data collection duties.
Data abstractors should be made aware of the clinical practice guideline for which they are evaluating performance. Ideally, data abstractors understand how the data elements they collect are merged into a report that demonstrates how closely patient care practices adhere to clinical practice guidelines.
Data abstractors also should be introduced to the confidential nature of health care quality assessment. They must be educated in the use of appropriate safeguards for confidentiality of both patients and providers.
Prior to embarking upon data collection, data abstractors should receive training in the language and format of the performance measurement data set, become knowledgeable about the data element definitions, and be familiarized with the data collection instrument. In addition, special data collection circumstances should be clarified and documented in the abstracting procedures. Training should include answering such questions as:
· How are exceptions to be documented? For example, if the medical review criteria state that all pediatric patients' records will include documentation of their immunization history and the exception to this requirement is family refusal to provide the information, how will the data abstractor record this circumstance on the data collection form?
· What role, if any, does patient preference play in recording variances? For example, if a patient chooses not to undergo a recommended surgical procedure, does this event count as a failure of the case to meet criteria, as meeting an exception to the criteria, or should the data abstractor record the event in a special category reserved for patient preference considerations?
· Is missing documentation counted as a variance to the criteria? For example, if the nurse fails to record the results of the stool guaiac test on the emergency record as required by the guidelines but the test results are found elsewhere in the record, does this count as a failure to adhere to criteria? Or, if a data field in a computerized database is blank, how is this finding recorded?
· Where will the data abstractors look for documentation to support compliance with guidelines? For example, what reports in the patient's medical record will be used in determining whether the patient developed a nosocomial infection during his or her hospitalization?
· What patients, if any, are to be excluded from the data collection process? For example, if the performance assessment is directed to geriatric patient care, what are the age limitations of the geriatric patient?
· What documentation will support compliance with each criterion? For example, what will be documented in the record if the patient's blood pressure is normal or abnormal?
· Are all ambiguous terms clearly defined? All words subject to interpretation by the data collectors should be objectively defined. An example of an objective definition is as follows: "A patient is considered to have tachypnea if the respiratory rate is greater than 24 breaths per minute." (A brief sketch after this list shows how such a definition might be encoded as an explicit abstraction rule.)
· Should the abstractor's personal clinical judgment be used during data collection activities? For example, the guidelines call for the physician to document the reason why a thrombolytic agent was not given to a patient presenting with an acute myocardial infarction. Upon review of the record, no such documentation is found. Is it acceptable for the abstractor to apply his or her clinical judgment in the case? Can the abstractor determine that thrombolytics were not appropriate and overlook the lack of physician documentation?
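Some projects make such rules unambiguous by encoding the abstracting procedures directly in the data collection tool. The following Python sketch is purely illustrative; the field names are hypothetical, and the only substantive rule is the tachypnea threshold of 24 breaths per minute from the example above. It shows how an objective definition, and the handling of a blank data field, might be captured so that every abstractor applies them identically:

    # Illustrative sketch: an objective criterion definition encoded as a rule,
    # so every abstractor applies it the same way. Field names are hypothetical.

    def tachypnea_finding(respiratory_rate):
        """Apply the objective definition: tachypnea = rate above 24 breaths/min."""
        if respiratory_rate is None:       # blank field in the database
            return "missing"               # recorded as missing, not as a variance
        return "yes" if respiratory_rate > 24 else "no"

    # Invented abstraction records for demonstration
    records = [
        {"patient_id": "A101", "respiratory_rate": 28},    # -> yes
        {"patient_id": "A102", "respiratory_rate": 18},    # -> no
        {"patient_id": "A103", "respiratory_rate": None},  # -> missing
    ]

    for record in records:
        print(record["patient_id"], tachypnea_finding(record["respiratory_rate"]))

Because the rule lives in one place rather than in each abstractor's head, a disputed finding can be traced back to an explicit definition instead of a difference in interpretation.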
The information management professional or quality management specialist who worked with the task group that developed the original guidelines and performance measurement data set should train the data abstractors. The specialist should remain involved with the project as a facilitator, coordinator, and reporter, and data abstractors should continue to work closely with the specialist.
To test the results of abstractor training and evaluate the completeness of the data-gathering instructions, ask abstractors the questions in the chart inserted in this issue. Their responses will help identify problem areas that need additional clarification.
Training should include a pilot data collection project in which data abstractors are involved in pretesting data-gathering methodologies. Pretesting can provide information about the extent to which changes in criteria language, format, or data source might be necessary prior to launching the performance measurement project. Pretesting also offers an opportunity to examine data collection reliability.
Inter-rater reliability concerns the consistency of data collection between two or more data abstractors applying the same criteria and collection methodology to the same data sources. Intra-rater reliability concerns a single abstractor: does that person collect the data the same way each time? If more than one abstractor will be gathering data, assess inter-rater reliability; if a single abstractor will be collecting data, evaluate intra-rater reliability.
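The article does not prescribe a particular reliability statistic, but one widely used measure for this kind of check is Cohen's kappa, which adjusts the raw agreement rate between two abstractors for the agreement expected by chance alone. The Python sketch below uses invented pretest findings to show how such a comparison might be run:

    # Minimal inter-rater reliability check using Cohen's kappa.
    # The two abstractors' findings are invented for illustration; in practice
    # they would come from the same records reviewed during the pretest.
    from collections import Counter

    rater_a = ["met", "met", "not met", "met", "not met", "met", "met", "not met"]
    rater_b = ["met", "not met", "not met", "met", "not met", "met", "met", "met"]

    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # raw agreement

    # Agreement expected by chance, from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(freq_a) | set(freq_b)) / n**2

    kappa = (observed - expected) / (1 - expected)
    print(f"agreement = {observed:.2f}, kappa = {kappa:.2f}")  # 0.75, 0.43

A kappa near 1.0 indicates the abstractors are interpreting the criteria consistently; a much lower value suggests the data element definitions or abstracting procedures need clarification before full-scale data collection begins.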
At the completion of the pretest, data abstractors should receive feedback about the correctness of their data collection efforts. If accuracy problems are identified and subsequent study design changes cannot improve the quality of an abstractor's work, the data collection should be discontinued until a more reliable data abstractor can be employed.
Good data are the lifeblood of performance measurement. That's why it is important to pay attention to the education and training needs of the data abstractors.