Fraud Allegations Involving Alzheimer’s Disease Study Raise Concerns
Recent allegations suggesting images were altered in previous research on Alzheimer’s disease have raised multiple ethical concerns.1,2 The allegations center on a paper written by researchers at the University of Minnesota that was published in 2006.3
In that paper, the authors provided evidence indicating accumulation of a specific form of beta-amyloid protein was a cause of Alzheimer’s disease. A University of Minnesota spokesperson said the school is aware of questions regarding certain images used in peer-reviewed research publications authored by university faculty, and “will follow its processes to review the questions any claims have raised.”
“The paper in which the suspicious images were found was regarded as influential because it provided evidence to support a certain hypothesis on the molecular mechanisms of Alzheimer’s disease,” says Elisabeth Bik, PhD, a scientific integrity consultant and senior scientist at San Francisco-based Harbers Bik.
Based on the study’s seemingly positive findings, other researchers focused on the same hypothesis. “Since research grant money comes from a limited amount allotted by national governments, this might have come at the expense of research funding for other alternative research lines,” Bik notes.
On the other hand, findings from other studies have supported beta-amyloid as a cause of Alzheimer’s. “Even if this one paper turns out to be not trustworthy, that might still mean [beta-amyloid] is an avenue to pursue,” Bik offers.
The alleged scientific misconduct did not meaningfully affect the overall field of Alzheimer’s research, according to Dennis J. Selkoe, MD, professor of neurologic diseases at Harvard Medical School and Brigham and Women’s Hospital. “One case of fraud is unfortunate, but does not indict many other honest scientists without any evidence that their findings were jeopardized,” he says.
Selkoe adds that other well-conducted studies back the beta-amyloid hypothesis.4 After the 2006 study was published, knowledgeable study investigators in the Alzheimer’s disease field (including Selkoe) concluded the observations made in that 2006 study were unlikely to be scientifically accurate. The allegedly fraudulent study did not influence subsequent clinical trials, according to Selkoe.
“This in no way condones the egregious malfeasance of the apparent scientific fraud perpetrated by this 2006 report, which is reprehensible and damages the public’s trust in science in general,” Selkoe clarifies.
Charles Glabe, PhD, professor of molecular biology and biochemistry at University of California, Irvine, agrees the impact of the alleged data altering on Alzheimer’s disease research was minimal. “It made a big splash initially because it was published in a high-profile journal, and because it addressed one of the big questions in the field at this time,” Glabe says.
Some scientists wasted time trying to reproduce the 2006 findings, according to Glabe. Other researchers have published their inability to repeat the major finding of this paper. “If a potentially important finding cannot be reproduced, the field quickly forgets about it and moves on,” Glabe argues.
In the process of conducting his own research 25 years ago, Glabe discovered potentially fabricated data in a report that was published in a high-profile journal. Glabe submitted a correction of the erroneous data, but the editors did not allow him to report the details of the correction because it strongly suggested the mistakes were not a random error. “There is no glory in correcting erroneous reports,” Glabe says.
Some of Glabe’s colleagues thought it was terrible to bring up the fact the data were wrong because it cast a negative light on well-regarded authors. Others thought publishing a correction did not go far enough if the mistakes were not a random error. “Fabrication and image manipulation is hard to eliminate, and it is easy for fabricators to defeat image recognition software,” Glabe notes.
Now that they know the software can detect the same image published twice, unscrupulous researchers can just use another image that has not been published. “The best way to prevent ethical failures is to make sure that all key data are reproduced by two different people in the lab, especially if one of them is the principal investigator,” Glabe suggests. “This helps, but it won’t stop all abuses.”
The recent allegations have called attention to research misconduct involving photographic alterations. “But such allegations are not unique,” Bik asserts. “Several scientific detectives and whistleblowers have raised the alarm for science misconduct for decades without getting much attention or action from journals, institutions, or funders.”5
Bik would like to see the high-profile case result in preventive measures, faster investigations, and more severe consequences for researchers guilty of misconduct. “It is also important to point out that research misconduct is, fortunately, rare,” Bik cautions. “Although such cases might gain a lot of attention, the vast majority of scientific studies are trustworthy.”
Selkoe would like to see investigators use forensic image analysis on western blots and other imaged data in submitted papers, at least as a spot check on some fraction of all images submitted.
Any scandal involving alleged fabrication or falsification of data extending over many years is damaging in a number of respects, according to Ferric Fang, MD, director of the Harborview Medical Center clinical microbiology laboratory in Seattle. Such events waste time and resources, erode public support for research, and harm patients by subjecting them to therapies that have no chance of working instead of investigational therapies that at least might work. “However, it can be difficult to quantify the amount of damage to Alzheimer’s research,” Fang notes.
There have been many clinical trials of disease-modifying agents in Alzheimer’s disease, and essentially all have failed.6 “The vast majority of these failures have nothing to do with the current ethical controversy. They simply reflect our inadequate understanding of the pathogenesis of Alzheimer’s disease at the present time,” Fang says.
There have been many other highly publicized cases of fraudulent research. “Although these represent a very small proportion of published research, they cause harm out of proportion to their numbers,” Fang says.
IRBs, researchers, and editors are aware of the possibility of fraud. Still, there is good reason to believe many instances of flawed, fraudulent research have gone unrecognized. “The problem is that it is not so easy to detect manipulated data, especially when the manipulation is skillfully done,” Fang laments.
Uncovering fraud takes time and effort that most scientists would prefer to devote to their own work. Although software programs to detect inappropriate image manipulation or duplication have been developed, the programs still require initial manual screening by a human. “It is useful to continue to scrutinize the research literature. But the task is enormous,” Fang says.
Nonetheless, the recent allegations brought attention to the growing use of image duplication detection software. The technology is relatively new and still in development. “There are several tools, and they all seem to differ in what they can do best,” Bik says.
Some tools are better at detecting duplications in microscopy photos, while others also can detect duplications in photos of protein blots. “Most tools now can detect rotations or mirroring as well,” Bik notes.
In addition to detecting duplications between figures in a paper, some tools also can compare those to figures in thousands of previously published papers. “In that respect, these software tools are superior to humans, who can only scan and remember a limited number of images,” Bik says.
Using such software, image detectives have found unexpected reuse of images within or across papers. That kind of fraud would be much harder for humans to find. On the other hand, software cannot detect some cases of duplication or overlap that trained humans can see plainly. Software tools also flag many false positives in images that are expected to look similar. “Currently, these tools are helpful, but are not always superior to humans,” Bik says. “A human will still be needed to interpret the results.”
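To illustrate the general idea behind such tools, one common building block is perceptual hashing: an image is reduced to a short fingerprint, and two images whose fingerprints differ in only a few bits are flagged as probable duplicates. The sketch below is a minimal, simplified illustration of average hashing on tiny synthetic grayscale grids; it is not the method used by any specific tool named in this article, and real software would first decode, resize, and normalize actual image files.

```python
# Minimal sketch of average hashing, one common basis for image-duplication
# detection. Images here are modeled as small 2D grids of grayscale values;
# this is an illustrative assumption, not a production pipeline.

def average_hash(pixels):
    """Return a bit string: '1' where a pixel exceeds the image mean, else '0'."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(h1, h2):
    """Count positions where two equal-length hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def likely_duplicate(img_a, img_b, threshold=2):
    """Flag near-identical images: a small Hamming distance suggests reuse."""
    return hamming_distance(average_hash(img_a), average_hash(img_b)) <= threshold

# A blot-like image, a nearly identical copy (slight noise), and an
# unrelated image, all hypothetical 4x4 grayscale grids.
original  = [[10, 10, 200, 10], [10, 220, 210, 10], [10, 10, 10, 10], [10, 10, 10, 10]]
near_copy = [[12, 10, 198, 10], [10, 221, 209, 11], [10, 10, 10, 10], [10, 10, 10, 10]]
different = [[200, 10, 10, 10], [10, 10, 10, 200], [200, 10, 10, 10], [10, 10, 10, 200]]

print(likely_duplicate(original, near_copy))   # near-identical: flagged
print(likely_duplicate(original, different))   # unrelated: not flagged
```

This toy version would be defeated by rotation or mirroring; as Bik notes, current tools handle those transformations, typically by hashing rotated and flipped variants as well.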
REFERENCES
1. Piller C. Blots on a field? Science 2022;377:358-363.
2. Lowe D. Faked beta-amyloid data. What does it mean? Science. July 25, 2022. https://bit.ly/3BiOZ5U
3. Lesné S, Koh MT, Kotilinek L, et al. A specific amyloid-beta protein assembly in the brain impairs memory. Nature 2006;440:352-357.
4. Haass C, Selkoe D. If amyloid drives Alzheimer disease, why have anti-amyloid therapies not yet slowed cognitive decline? PLoS Biol 2022;20:e3001694.
5. Bik EM, Casadevall A, Fang FC. The prevalence of inappropriate image duplication in biomedical research publications. mBio 2016;7:e00809-16.
6. Selkoe DJ, Hardy J. The amyloid hypothesis of Alzheimer’s disease at 25 years. EMBO Mol Med 2016;8:595-608.
In a paper published in 2006, the authors provided evidence indicating accumulation of a specific form of beta-amyloid protein was a cause of Alzheimer’s disease. However, recent allegations suggest images in that paper were altered, raising doubts about its conclusions. Regardless of what happened, this case has jumpstarted a conversation about instituting more preventive measures, conducting faster investigations into fraud allegations, and levying more severe consequences for researchers found guilty of misconduct.