Incentives for Online Surveys Boost Research Participation, But Fraud Remains a Concern
When a group of researchers used an online survey to learn how the COVID-19 pandemic had affected cancer survivors, the data revealed suspicious patterns almost immediately.1 “We noticed an unusually high number of respondents the first three days the study was open,” reports Mandi Pratt-Chapman, PhD, the study’s lead author and associate center director of patient-centered initiatives and health equity at the GW Cancer Center in Washington, DC.
Pratt-Chapman and colleagues had included a Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) and shared the survey link only with trusted organizations and community members. The problem was that some individuals shared the link on their social media pages. Pratt-Chapman and colleagues asked for the posts to be taken down, but it was too late.
“After examining the data patterns, we noticed some irregularities in responses,” Pratt-Chapman reports.
Batches of responses had arrived back to back in the middle of the night. To stop the fraudulent responses, researchers added extra questions to the survey and implemented a protocol to check the quality of the collected data. Despite those efforts, about three-quarters of the 1,977 responses had to be eliminated. To double-check that none of the rejected responses were valid, Pratt-Chapman and colleagues contacted the respondents using the information they had provided. “We received no responses to indicate that we had excluded legitimate responses,” Pratt-Chapman says, noting only 569 responses were retained.
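One simple quality check in this spirit is to flag bursts of submissions that arrive within minutes of one another. The sketch below is illustrative only, not the study’s actual protocol; it assumes a survey export with a hypothetical “submitted_at” timestamp column.

```python
# Illustrative sketch: flag bursts of survey submissions that arrive in a
# short window of one another. Column name "submitted_at" is an assumption.
import pandas as pd

def flag_submission_bursts(df, time_col="submitted_at",
                           window="5min", min_burst_size=10):
    """Mark responses that arrive in unusually dense clusters."""
    df = df.sort_values(time_col).copy()
    # Count how many responses fall within the trailing time window.
    counts = (
        df.set_index(time_col)
          .assign(n=1)["n"]
          .rolling(window)
          .sum()
          .values
    )
    df["burst_flag"] = counts >= min_burst_size
    return df

# Example usage with a hypothetical export file:
# responses = pd.read_csv("responses.csv", parse_dates=["submitted_at"])
# flagged = flag_submission_bursts(responses)
# print(flagged["burst_flag"].sum(), "responses arrived in suspicious bursts")
```

Flagged responses would still need human review; overnight clusters are suggestive, not proof, of fraud.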
“Findings are invalid if participants are not from the population the survey intended to sample. Thus, the data are useless,” says Michael H. Miner, PhD, professor and research director at University of Minnesota Medical School Institute for Sexual and Gender Health.
In one project with which Miner was involved, researchers were testing the effectiveness of an online HIV prevention program. “We had 610 identified fraudulent participants,” says Miner, noting only 186 responses ended up meeting the inclusion criteria.
Another group of researchers wrote a paper on the steps they took to preserve the integrity of their study after identifying fraudulent responses.2 “We became interested in this topic once we discovered discrepancies in our own research study survey data,” explains Hilary Seligman, MD, MAS, one of the study’s authors and professor of medicine at the University of California, San Francisco.
Seligman and colleagues wanted to offer some solutions to preserve data integrity despite widespread problems with fraud. “This is not a standard part of researcher training, in our experience,” Seligman says.
All researchers should consider fraud detection safeguards early in the study planning and design process. Allot the necessary time and resources to ongoing, rigorous data quality checks. Also, invest in fraud detection technology.
Some precautions could unintentionally dissuade certain eligible persons from participating. People who are technology-challenged, those with visual impairments, or non-English speakers might be deterred by the extra steps to prove they are human, not a robot. “Additional questions to verify the identity of an eligible respondent may mean justification to IRBs for adding data that can identify a person’s geolocation and identifying information,” Pratt-Chapman says.
Researchers often offer gift cards or lotteries through which participants can win something of value if they complete a survey. “There is incentive for individuals to respond more than once,” Miner says.
Be mindful of signs that suggest duplicate or fraudulent respondents: email addresses or passwords that are suspiciously similar; surveys with the same answer selected repeatedly (e.g., choosing only answer “4” for every question) or with obvious response patterns (e.g., 1, 2, 3, 4, 1, 2, 3, 4); responses that do not make sense or contradict previous answers (e.g., “I have had sexual intercourse with at least one woman in the last month” and “I have not had sexual intercourse in the last month”); and responses that do not match enrollment criteria (e.g., the person’s age and birthdate do not align).
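The checks described above can be partially automated. The following sketch shows one possible set of heuristics; the column names (“email,” answer columns, “age,” “birth_year”) are hypothetical and would need to match a study’s own export.

```python
# Illustrative sketch of automated screening for the warning signs above.
# All column names are assumptions, not part of any cited study.
import pandas as pd

def flag_suspicious_responses(df, answer_cols, survey_year=2021):
    flags = pd.DataFrame(index=df.index)
    # Near-duplicate contact info: identical emails after normalizing case/dots.
    normalized = (df["email"].str.lower()
                    .str.replace(r"\.", "", regex=True))
    flags["duplicate_email"] = normalized.duplicated(keep=False)
    # Straight-lining: every answer identical (e.g., all "4").
    flags["straight_line"] = df[answer_cols].nunique(axis=1) == 1
    # Enrollment mismatch: reported age inconsistent with reported birth year.
    implied_age = survey_year - df["birth_year"]
    flags["age_mismatch"] = (df["age"] - implied_age).abs() > 1
    flags["any_flag"] = flags.any(axis=1)
    return flags

# Example usage:
# responses = pd.read_csv("responses.csv")
# flags = flag_suspicious_responses(responses, answer_cols=[f"q{i}" for i in range(1, 11)])
# print(flags["any_flag"].sum(), "responses flagged for manual review")
```

As with any heuristic, flagged records are candidates for review rather than automatic exclusions.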
One solution is for researchers to simply stop offering incentives for people to complete surveys, but that raises other issues. Offering nothing to participants “generally results in low response rates and non-representative samples,” Miner explains.
For any surveys offering incentives, researchers need a comprehensive plan for the inevitable fraudulent responses. There are decisions to be made about payment. “Do they pay everyone, and then check the data to ensure validity? Or, do they check for fraudulent and duplicate responses before paying participants, then withhold payment from those who appear fraudulent?” Miner asks.
Also, if someone responds more than once, the researcher could use that person’s first survey response (and reimburse the participant for just that one survey), or throw out all the participant’s surveys and withhold reimbursement. At the same time, researchers have to weigh the need to protect participants’ privacy against the need to ensure valid data. Collecting IP addresses, email addresses, or passwords is a good way to flag duplicate responses, but it also identifies participants. “There needs to be a balance between what level of privacy [is needed] to assure potential participants, and the need to identify duplicates and frauds,” Miner concludes.
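One way to soften this trade-off is to store only keyed hashes of identifiers so duplicates can be matched without retaining the raw values. The sketch below is a minimal illustration of that idea, not a method from the cited studies; the salt value and storage approach are assumptions.

```python
# Illustrative sketch: detect duplicate identifiers (e.g., emails) by storing
# only a salted, keyed hash rather than the identifier itself.
import hashlib
import hmac

SECRET_SALT = b"keep-this-key-separate-from-the-survey-data"  # assumed setup

def pseudonymize(identifier: str) -> str:
    """Return a keyed hash that is stable enough for duplicate checks but
    cannot be reversed to the original identifier without the secret key."""
    return hmac.new(SECRET_SALT, identifier.strip().lower().encode(),
                    hashlib.sha256).hexdigest()

# Example: two submissions from the same email map to the same token,
# so the second can be flagged, while the raw email is never stored.
seen = set()
for email in ["Respondent@example.com", "respondent@example.com"]:
    token = pseudonymize(email)
    if token in seen:
        print("possible duplicate submission")
    seen.add(token)
```

Whether such pseudonymized matching satisfies a given IRB’s expectations would still need to be confirmed in the protocol.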
Recruiting on publicly accessible platforms that lack robust account verification (e.g., Twitter, Reddit) invites survey fraud. “Instead, limit advertising online to closed or moderated specific topic groups or professional accounts,” advises Christina L. Wright, MA, CIP, IRB exempt/expedited team lead for the Human Research Protection Program at Virginia Commonwealth University.
Wright recommends creating “defensive survey designs.” Detailed screening questions are the first line of defense. For example, if the study is targeting health workers, researchers can ask for details on education or employment history. There also are some “big picture” issues for researchers to consider. If online surveys are compromised by fraudulent responses, says Wright, “the opportunity for scientific benefit is not only lost, but societal harm can also be introduced if invalid findings are promulgated.”
Online surveys typically meet the definition of “minimal risk” research. However, even minimal risk to research participants, such as potential discomfort answering questions or loss of confidentiality, should be offset by at least some potential for scientific benefit. Otherwise, says Wright, “one could argue that this kind of research would not meet the basic ethical standard of beneficence, as outlined in The Belmont Report.”
To guard against this possibility, researchers should avoid calling attention to compensation in advertisements, such as listing the exact dollar amount offered. “Consider not offering compensation at all for surveys at higher risk for fraud, such as anonymous, public-facing surveys,” Wright suggests.
If compensation is offered, researchers can show they are serious about deterring fraud by mandating identity verification as a requirement for payment. “Ensure all advertisements and consent documents are transparent about these requirements so that participants know to expect them,” Wright says.
REFERENCES
1. Pratt-Chapman M, Moses J, Arem H. Strategies for the identification and prevention of survey fraud: Data analysis of a web-based survey. JMIR Cancer 2021;7:e30730.
2. Levi R, Ridberg R, Akers M, Seligman H. Survey fraud and the integrity of web-based survey research. Am J Health Promot 2022;36:18-20.