Standard definitions lead to better data
NAPHS calls for consensus
Standardized performance measures would go a long way toward helping case managers select appropriate behavioral health providers for their clients. Yet although behavioral health care providers appear willing to create industry-wide core performance measures, a recently released white paper from the National Association of Psychiatric Health Systems (NAPHS) in Washington, DC, finds wide variation in how key benchmarking indicators are defined and collected.
Individual behavioral health facilities collect large quantities of data within their own organizations, reports NAPHS. However, a pilot test of key benchmarking indicators by NAPHS found that substantial challenges remain in gathering, reporting, and comparing data across systems.
The pilot project was coordinated by the NAPHS Benchmarking Committee, chaired by NAPHS Immediate Past President Peter Panzario, MD, chairman of the department of psychiatry at Cedars Sinai Medical Center in Los Angeles. The pilot, which focused on nine indicators chosen from an earlier consensus-driven process of the committee, analyzed data from 48 facilities offering a total of 637,241 inpatient days; 601,595 residential care days; and 161,993 partial hospital days. The facilities represented all levels of care including inpatient, residential, partial hospitalization, and outpatient services and all populations including children and adolescents.
Indicators reviewed were:
- adverse drug reactions;
- completed suicide;
- attempted suicide;
- restraint;
- seclusion;
- symptom/function measure;
- readmission;
- patient satisfaction;
- peer review.
In its report, White Paper: Lessons Learned from Pilot Testing of the NAPHS Benchmarking Indicators, the organization outlines challenges identified in the testing phase and provides commentary on each indicator in the pilot test.
According to the white paper, data comparison efforts are hampered due to wide variation in definitions of key indicator terms used by different facilities. For example, there was great variation, says NAPHS, in the definitions of restraint and seclusion, particularly as they related to children and adolescents. Reporting mechanisms appear to be in place in all facilities for collecting data about restraint and seclusion, the report’s authors note, stressing that if the field can agree on consistent definitions, the possibility for developing meaningful benchmarks seems strong.
Similarly, the definition of attempted suicide used in this pilot test shows how difficult it is to standardize definitions, the report notes. The wide variation in reported incidents indicates that some facilities reported events that did not meet the severity threshold of the operational definition used in the pilot, according to the report.
"As this pilot test demonstrates, there is high interest and commitment by behavioral health providers to work toward core performance measures that will help organizations improve the quality of care and be responsive to the needs of those who seek mental health care," says Mark Covall, executive director of NAPHS. "We have learned a great deal about the importance of focusing on data that is relevant to clinical operations and collected in ways that conserve limited resources."
White Paper: Lessons Learned from Pilot Testing of the NAPHS Benchmarking Indicators costs $40 prepaid. To order, contact NAPHS, 325 Seventh St., NW, Suite 625, Washington, DC 20004. Telephone: (202) 393-6700, Ext. 15. Or, visit the NAPHS web site at www.naphs.org.