Special Report: Social Media Study Storm
The Facebook study and the Common Rule
Ethicists debate main issues
Private companies outside the pharmaceutical, device, and biologics industries are mostly exempt from the Common Rule and other federal regulations governing human subjects research, but the past two decades have seen debate over whether this exemption should be changed.
President William Clinton, the Advisory Committee on Human Radiation Experiments, and others have advocated a change based on moral arguments.1
But the consensus in favor of the status quo has been that extending the Common Rule to all human subjects research would be cumbersome and impractical to enforce, particularly when most of the studies in question would be of low risk and more akin to surveys and quality improvement than clinical trials.
Social media studies and other online research have raised even more questions, some ethicists note.
"We’re going to continue to see these forms of research; they’re not going away," says Elizabeth A. Buchanan, PhD, endowed chair in ethics and director of the center for applied ethics at the University of Wisconsin-Stout in Menomonie. "This is the new normal in research and research ethics."
"There’s so much data that exists and that should be analyzed by researchers across disciplines," she says. "It would be a real shame if that just stopped and we couldn’t have good collaborative and supported research."
The controversy over the Facebook study raises important questions and presents new challenges for social-behavioral researchers, notes John R. Baumann, PhD, assistant vice president for research compliance at Indiana University in Indianapolis.
"We have to find a solution that doesn’t grossly outweigh the harm caused by this, in which the restrictions are not out of balance with the risks and harms, and to avoid situations wherein the cure is worse than the illness," he says.
Baumann, Buchanan and other experts explain how Facebook’s study addresses these research ethics questions:
• Should the Facebook study have provided informed consent?
Many argue that if Facebook had merely observed emotional content on users’ news feeds, without manipulating those feeds, there would be no debate: it would clearly be a type of study that does not require informed consent.
But the problem is that Facebook manipulated news feeds for some users, directly interacting with a population to discover what would happen if things were changed, says John H. Noble, Jr., PhD, MA, MSW, a retired researcher.
"They crossed the line into active manipulation, and that’s where I think one has to come down from the standpoint of fashioning public policy," Noble says. "Active manipulations or experiments to induce changes in mood and behavior are quite different than observation and interactions to measure naturally occurring or untampered behavior."
Since Facebook was not required to have an IRB review the research, the question really is whether Cornell University, whose researchers were involved in the study, should have had an IRB review.
Cornell’s media statement addresses this issue by stating that its own researchers had access only to the study’s results, and so the IRB concluded that Cornell investigators were not directly engaged in human research and no IRB review was required.
"Arguably, informed consent was not required because de-identified data was the source of analysis, and IRBs have the discretion of waiving consent or exempting the study entirely," says Eric Meslin, PhD, director of the Indiana University Center for Bioethics in Indianapolis.
One could also argue that informed consent was implied when Facebook users clicked on the "agree" button when they joined Facebook, Meslin says.
Others argue that even when people agree to Facebook’s terms and conditions, they might expect certain uses of their data, but not expect the type of manipulation employed for the study.
The key informed consent question is what the people affected by the study would have expected, says Miguel Roig, PhD, professor of psychology at St. John’s University in Staten Island, NY.
"I have a Facebook account which I admit to checking every day, and — sure — I see ads for things I might actually Google," Roig says. "But I thought in using Facebook that if I follow one of my friends and my friend’s post then whatever my friends post I’ll see; I’m not expecting Facebook to manipulate what I see from my friends."
If the affected Facebook users also have this expectation, then the study was a violation of their expectations, he adds.
• When are transparency and debriefing necessary?
In cases like this study, obtaining informed consent might change behavior and affect the study’s outcomes, Meslin notes.
So an alternative to informed consent is to debrief subjects afterwards, he says.
"Deception research can be acceptable with certain conditions, including a debriefing afterwards, providing counseling and support afterwards," Buchanan says. "What if someone was depressed or on the verge of a mental breakdown and for one week only saw negative streams? It could put them in harm’s way."
This is one of the results of an Internet where people have multiple identities and represent different things to multiple stakeholders, she adds.
Many critics of the Facebook study brought up the "creepiness" factor, suggesting that even if Facebook didn’t break any regulations, its manipulation of news feeds was creepy.
Transparency could have ameliorated the furor over what Facebook did, some suggest.
"Facebook might want to think about transparency and the research enterprise," Buchanan says. "That could be a positive way forward in addressing future research, saying, ‘Here’s what we learned from conducting this study and using this model of deception research.’"
Facebook and other private companies conducting studies could let their users know how their data are being used.
"It’s about transparency and understanding this new domain of research and research ethics," Buchanan adds.
• What were the true risks to Facebook users?
No one will know precisely how the affected Facebook users felt while their news feeds were being manipulated, but one can imagine how a depressed minor or adult might become even more depressed if his or her news feed were chiefly negative, Roig says.
"If you’re already depressed, would that push you over?" he says. "It’s a remote risk, but it is a risk."
This is especially true since any Facebook page — even a teenager’s — could have been manipulated, he adds.
There’s also potential risk in the way the study was disclosed to the public.
"We will never know if these 310,000 Facebook users felt any harm as a result of this study or as a result of learning about it afterwards," Meslin says. "Was it worse to learn afterwards that they were manipulating feeds and changing them accordingly?"
The study’s principal investigator, Adam D. I. Kramer, addressed the risks in his response on Facebook, saying that the actual impact on people in the experiment was the minimum needed to detect it statistically.
"I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused," Kramer wrote. "In hindsight, the research benefits of the paper may not have justified all of this anxiety."
• Could the cure be worse?
"One ethical issue is the vociferousness of the outrage upon the report that Facebook published the study in the Proceedings of the National Academy of Sciences," Meslin says.
"We’re now in a very public environment where blogs and reports might carry a considerable amount of weight because of the speed with which they can make their way around the world," he adds. "There isn’t, in my view, a lot of fire, and I’m not even sure there’s that much smoke."
A potential repercussion of the controversy is that research institutions and researchers will more closely scrutinize any de-identified data sets that come their way, Baumann suggests.
"Not many institutions have questioned de-identified data," he says. This could change.
There also is potential for positive discussion about Internet research, Buchanan says.
"Most of us are using Internet research in some way, or we’re using the Internet in our research," she explains. "We’re not working in isolation anymore, and there are very few projects I have known where research has not been networked."
From here on, IRBs and researchers should think about transparency and how they use deception research, she suggests.
"Algorithmic manipulation is happening all the time, and mostly it’s bringing us things we want to see," Buchanan says. "But we should let users know how their research is being used, how their data are being used."
1. Resnik DB. Closing loopholes in the federal research regulations: some practical problems. Am J Bioeth. 2008;8(11):6-8.