By Reed Albergotti and Elizabeth Dwoskin
A Facebook Inc. study on users' emotions sparked soul-searching among researchers and calls for better ethical
guidelines in the online world.
"I do think this whole incident will cause a lot of rethinking" about the relationship between business and
academic researchers, said Susan T. Fiske, the study's editor and a professor of psychology and public affairs at
Princeton University.
Researchers from Facebook and Cornell University manipulated the news feed of nearly 700,000 Facebook users for a
week in 2012 to gauge whether emotions spread on social media.
They found that users who saw more positive posts tended to write more positive posts themselves, and vice versa.
The study was published in the Proceedings of the National Academy of Sciences earlier in June, but sparked outrage
after a blog post Friday said the study used Facebook users as "lab rats."
Facebook said on Monday that the study may have included users younger than 18. The company said it had revised its
guidelines since the research was conducted, and proposed studies now undergo three internal reviews, including one
centered on privacy for user data.
The incident shines a light on how companies and researchers tap the vast amount of data created online. Internet
companies including Facebook and Google Inc. routinely test adjustments to their sites for reasons including prompting
users to click on more links, or more ads, which are the companies' main source of revenue.
The companies show different versions of a page to groups of users, and monitor how users react. In the industry,
this is known as "A/B" testing, for the two versions.
Early A/B testing focused on issues like the colors of a website. Now, the process is used "to get people to do
exactly what a company wants them to do on websites, in apps and in life," said Nancy Hua, chief executive of
Apptimize, a startup that conducts such tests for other companies.
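The mechanics described above — splitting users into groups, showing each group a different version, and comparing how they react — can be illustrated with a short sketch. The function names and the simulated click data below are hypothetical, for illustration only; real systems typically hash a stable user ID so a user always lands in the same bucket.

```python
import hashlib
import statistics

def assign_variant(user_id: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing the user ID keeps assignment stable across sessions,
    so the same user always sees the same version of the page.
    """
    digest = hashlib.md5(user_id.encode()).hexdigest()
    # Map the hash to a number in [0, 1) and compare to the split point.
    bucket = int(digest, 16) / 16**len(digest)
    return "A" if bucket < split else "B"

def compare_click_rates(outcomes: dict) -> dict:
    """Compute the mean click-through rate for each variant.

    `outcomes` maps a variant name to a list of 1s (clicked)
    and 0s (did not click).
    """
    return {variant: statistics.mean(clicks)
            for variant, clicks in outcomes.items()}

# Simulated results: the variant with the higher rate "wins" the test.
observed = {"A": [1, 0, 0, 1, 0], "B": [1, 1, 0, 1, 1]}
rates = compare_click_rates(observed)
```

In practice the comparison would also include a significance test before declaring a winner, but the core loop — assign, observe, compare — is what the industry calls A/B testing.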
A Google spokeswoman declined to comment.
Critics said such tests carry serious ethical issues when conducted on networks as big as Facebook, which has 1.3
billion users. "It's really an opportunity to discuss the broader power these companies have over our lives," said
Zeynep Tufekci, a sociology professor at the University of North Carolina who studies the social impacts of technology.
Ms. Fiske, the study's editor, said she had ethical concerns about the study because the researchers could be
considered to be manipulating people's moods. She said her concerns were allayed after the authors told her the study
didn't need a full review by Cornell's Institutional Review Board because it was based on "pre-existing data" that had
been de-identified, so the users were anonymous to the researchers.
"The implication was that Facebook already has this data," said Ms. Fiske. "I was relying on Cornell's judgment.
And it sounds like Cornell was relying on Facebook's judgment."
Cornell issued a statement on its website saying that its Institutional Review Board concluded it didn't have to
review the study because the Cornell professor involved only had access to results, and not to any user data.
Facebook's Data Science team aims to harvest the company's user data for market research and academic research. It
has conducted studies on everything from who is attending the World Cup to how users' communications change after
partners break up with them and whether upbeat messages spread more quickly than negative ones.
When Facebook works with academic researchers, the data Facebook provides must be approved by that university's
independent review board. But some researchers said the incident highlighted weaknesses in that system because
university review boards aren't accustomed to data gathered from millions of users posting information about themselves.
"Institutional review boards are still getting their feet wet with social media research," said Leslie Meltzer
Henry, a faculty member at the Berman Institute of Bioethics at Johns Hopkins University. Ms. Henry said she watched the
controversy unfold over the weekend, as comments flooded her inbox.
Andrew Ledvina, who was a data scientist at Facebook from early 2012 to the summer of 2013, said there was no
internal review board overseeing the studies. Mr. Ledvina said he and other members of the data science team could run
almost any test they wanted, so long as it didn't annoy users.
Mr. Ledvina said it was sometimes hard to remember that his experiment involved hundreds of thousands of people,
even though they represented only a small percentage of Facebook users. "You get a little desensitized to it," he said.
Several researchers questioned whether the 689,003 Facebook users whose news feeds were altered had been given
sufficient notice and consented to the study. Facebook said users had agreed to such studies when they signed up for
Facebook and agreed to the site's terms of service.
"It's absolutely ridiculous to suggest that clicking a box on a website constitutes informed consent," said David
Gorski, a surgeon, researcher, and editor of the blog Science-Based Medicine. Dr. Gorski said reaction to the experiment
showed a "real culture gap" between medical and psychological researchers and technology companies. At a minimum, he
said, users should have been given the choice to not participate in the study.
Jonathan Moreno, a professor of medical ethics and health policy at the University of Pennsylvania, also criticized the
study. "You are sending people whose emotional state you don't know anything about communications that they might find
disturbing," Dr. Moreno said. "That might or might not be something a research ethics board would worry about."
But others defended the study and said Facebook was being criticized unfairly. "This study is becoming a lightning
rod for all the wrong reasons," said Nicholas Christakis, a sociologist at Yale University who has done studies using
Facebook data. "Marketing as a whole is designed to manipulate emotions. Businesses around the U.S. are doing A/B trials
every day involving all of us."
Mr. Christakis said Facebook was being criticized because it published its research, rather than keeping it secret.
Google has been forced to confront ethical limits on its data twice in recent months. Demis Hassabis, the founder
of DeepMind Technologies, insisted that Google create an ethics panel when it acquired his artificial-intelligence
company in January. His concern, according to a person familiar with his thinking, was to prevent his software from
being used in robots that might be sold to the military.
In May, Google created an advisory committee in the wake of a European court ruling that individuals can ask search
engines to remove links. Made up of academics, Google executives and others, the committee is tasked with helping Google
implement the court's ruling and, more broadly, with studying how the company must balance Europeans' new "right to be
forgotten" with what Google has called the public's right to know.
Alistair Barr, Jeff Elder and Jeanne Whalen contributed to this article.
(END) Dow Jones Newswires
Copyright (c) 2014 Dow Jones & Company, Inc.