New tools ask patients to report, not rate
Dissatisfaction with shift in patient assessment
A shift in the way health plans ask patients to assess their care is shaking up the outcomes measurement community, causing revisions in major survey programs and raising questions about the best way to gather consumer-oriented information.
While the new surveys are designed to compare health plan performance, they are likely to influence the entire field of patient satisfaction - or what is now termed "patient assessment of care." Similar surveys designed to assess medical groups already are under development.
At issue is the Consumer Assessment of Health Plans (CAHPS), a tool sponsored by the Agency for Health Care Policy and Research in Rockville, MD. CAHPS forms the core of the new member survey of the National Committee for Quality Assurance (NCQA) in Washington, DC, which will be used next year with Accreditation '99. In sharp contrast to prior surveys, the tool relies largely on patient "reports" - questions that ask patients how often they experienced certain events, such as long wait times - rather than "ratings," which ask patients how satisfied they were with various aspects of care.
While to the lay person that distinction may seem largely semantic, the two methods provide very different information, notes John E. Ware Jr., PhD, executive director of the Health Assessment Lab at the New England Medical Center in Boston and a leading researcher in outcomes measurement.
"We feel very strongly that [both] ratings and reports are an essential part [of surveys], but if we had to choose between one or the other, we would choose specific ratings rather than specific reports," says Ware. "Telling how long you had to wait tells me nothing about whether it was satisfactory to you, or whether it met your needs or expectations."
CAHPS developers counter that patients feel more comfortable answering specific questions about what happened during their office visits.
"We designed the questions to make sure that people could understand them, didn't get frustrated by them, and could give responses that were meaningful," says Jim Lubalin, PhD, director of the Washington, DC, office of the Research Triangle Institute (RTI) and a principal investigator in the CAHPS project. RTI participated in the five-year, $10 million project with RAND in Santa Monica, CA, and Harvard Medical School in Boston.
CAHPS omits traditional questions
Changes in wording have wide implications for the surveying of patients nationwide. Some questions that have previously been standard items on surveys no longer appear on the CAHPS/NCQA version.
For example, the survey doesn't ask patients whether they would recommend the health plan or medical group to their family or friends. Questions of that type are very useful as "gross barometers of overall health care quality," says Ware. "They don't tell you if the problem is interpersonal, administrative, financial, but they tell you something is wrong."
In post-survey discussions during the testing phase, some patients commented that "none of my family and friends have my particular needs. It makes it difficult for me to make a recommendation, given that I have different needs," says J. Lee Hargraves, PhD, senior survey scientist at The Picker Institute in Boston.
Yet Hargraves adds, "Certainly if someone wants to continue to use that particular question, they can add it into the survey. That's what I recommended to folks who were concerned about the loss of trend information. You can always add questions from the other survey."
The new survey includes five categories:
· getting care quickly;
· doctors who communicate;
· courteous and helpful office staff;
· getting needed care;
· customer service.
The survey also asks patients to rate their "personal doctor or nurse," "specialist you saw most often," "all doctors and other health providers," and the health plan.
Although these ratings questions may be similar to some that appear on the previous NCQA survey, they use a 0-10 rating scale instead of the traditional five-point scale of excellent to poor.
Ware contends that current research doesn't show the 0-10 scale to be superior and that the change will prevent comparability with prior surveys. Lubalin counters that investigators have developed a way to translate the scores to the prior scale for comparison. "We wanted to move people off the top [of the satisfaction scale]," he says, noting that most patients give their physicians glowing marks. "Eight is our most common response. This is a finer, more discriminating scale of plan performance."
When they design patient assessment surveys, researchers focus keenly on the end result. They want tools that distinguish among health plans (or providers) and promote quality improvement. The CAHPS survey also was designed expressly to produce information that would help consumers choose among health plans.
Researchers conducted a head-to-head test of the two surveys among consumers in Colorado, giving both versions to the same patients and comparing the results. "We looked at which items were best in discriminating among health plans," says Ron D. Hays, PhD, professor of medicine at the University of California in Los Angeles and senior scientist at RAND Health Program.
"When we were looking at the comparison, there were some items in the member satisfaction survey that did pretty well and there were some CAHPS items that did well," he says. "The result is the best of both worlds."
In particular, the CAHPS developers added questions that asked "how much of a problem was it" to get needed care.
Even though the wording changed, the two surveys overlap greatly in their content, notes Hargraves, who adds that he favors a combination of ratings and reports. "Asking both ratings and reports leads to understanding some of the influences of patients' satisfaction with health care," he says.
But to some, the result moves too far toward the new with not enough of the old. Report-based questions may give health plans some concrete issues to work on, but they also need the historical satisfaction data, notes Joe Carmichael, managing director of National Research Corp. in Lincoln, NE, the nation's largest health care performance measurement firm.
"My opinion would be that the right answer is going to be some kind of a blend, a stronger blend than the CAHPS tool is today, but not all the way back to the old tool," he says.
Likewise, many of Ware's concerns would be assuaged with a stronger mix, such as inclusion of a patient rating of the physician's interpersonal quality. "This is not an either/or decision," says Ware. "The decision about CAHPS is what is the best blend of old and new."
[Editor's note: For a CAHPS kit or more information about CAHPS, contact the Survey User Network at (800) 492-9261.]