Via The Atlantic, there’s a new paper in PNAS (open access) that I’m sure is going to be widely cited as an example by people teaching research ethics, and not in a good way:
In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.
More than 650,000 people had their Facebook feeds meddled with in this way, and as that paragraph from the abstract makes clear, it made a difference.
The problem is consent. There is a clear ethical principle that experiments on humans require consent, except in a few specific situations, and that the consent has to be specific and informed. It’s not that uncommon in psychological experiments for some details to be kept hidden to avoid bias, but participants should still be given a clear idea of the possible risks and benefits and a general idea of what’s going on. Even in medical research, where clinical trials compare two real treatments for which the best choice isn’t known, there are very few exceptions to consent (I’ve written about some of them elsewhere).
The need for consent is especially clear in cases where the research is expected to cause harm. In this example, the Facebook researchers expected in advance that their intervention would have real effects on people’s emotions; that it would do actual harm, even if the harm was (hopefully) minor and transient.
Facebook had its research reviewed by an Institutional Review Board (the US equivalent of our Ethics Committees), and the terms of service say they can use your data for research purposes, so they are probably within the law. The psychologist who edited the study for PNAS said:
“I was concerned,” Fiske told The Atlantic, “until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people’s News Feeds all the time.”
Fiske added that she didn’t want “the originality of the research” to be lost, but called the experiment “an open ethical question.”
To me, the only open ethical question is whether people believed their agreement to the Facebook Terms of Service allowed this sort of thing. This could be settled empirically, by a suitably designed survey. I’m betting the answer is “No.” Or, quite likely, “Hell, no!”
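To make “settled empirically” concrete, here’s a minimal sketch (in Python, with invented illustrative numbers rather than real survey data) of the analysis such a survey would need: estimate the proportion of users who say the Terms of Service did not cover this, with a confidence interval.

```python
import math

# Hypothetical survey: n Facebook users asked "Did your agreement to the
# Terms of Service permit experiments designed to alter your mood?"
# These counts are invented for illustration only.
n = 1000          # respondents
said_no = 850     # answered "No" (or "Hell, no!")

# Point estimate of the proportion answering "No"
p_hat = said_no / n

# Normal-approximation 95% confidence interval for the proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"Estimated proportion saying 'No': {p_hat:.2f} "
      f"(95% CI {lower:.2f} to {upper:.2f})")
```

The hard part wouldn’t be the arithmetic; it would be getting a representative sample of Facebook users and wording the question neutrally.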
[Update: Story in the Herald]