Facebook Policy Didn’t Mention ‘Research’ At Time Of Controversial Experiment

By Brandon Bailey, San Jose Mercury News

Fueling more debate over a controversial experiment, Facebook acknowledged Tuesday that, at the time it conducted a study of how subtle manipulation of users’ news feeds affected their emotions, its official data-use policy did not specifically list “research” among the ways members’ information might be used.

Facebook argues it had users’ consent to carry out the test, based on broader language in the policy that was in effect when the data was gathered in January 2012. The company added a reference to research when it revised the policy four months later — although critics say it’s doubtful many users ever read the lengthy document in any case.

Legal experts said the dispute highlights a broader issue: While Internet companies increasingly use powerful software to test human behavior and reactions to different messages, there is a large gap between private industry standards and the stricter ethical rules imposed on academic and medical research, where scientists have long wrestled with the risk of harming study subjects.

The gap is likely to become more pronounced as companies hire more data scientists, and as academic researchers collaborate with private firms, to extract valuable insights from vast stockpiles of user information, said Stanford law professor Hank Greely, who studies health law and policy.

Current industry practice “allows pretty much any use of data within a company,” so long as personal identities aren’t revealed, Edward Felten, director of Princeton University’s Center for Information Technology Policy, added in a blog post this week.

Facebook’s methods were consistent with industry practice, he wrote, “but they were clearly inconsistent” with standards for medical or mental health research that require informed consent, review by an ethics panel and other safeguards. Companies generally don’t have to meet those stricter standards unless they receive government funding or are seeking product approval from an agency such as the Food and Drug Administration.

Legal experts say Facebook probably didn’t violate any U.S. laws, and the company said the study didn’t use identifying information. Even so, British regulators said Tuesday that they will review the study for compliance with privacy laws.

The Facebook test involved nearly 700,000 users who were not told they had been randomly assigned to groups for one week so data scientists could gauge their reactions to seeing fewer “positive” or “negative” updates from their friends. The researchers said users who saw fewer positive updates tended to write posts that were less positive, while those who saw fewer negative updates posted messages that were more upbeat.

Facebook and its defenders say the study is no different from countless other tests that online companies conduct to gauge users’ reactions to different kinds of messages. But critics argue that, instead of passively observing online behavior, the researchers deliberately altered what users saw, with the expectation it might influence their emotions.

“People were playing with your brain for research purposes, without giving you a chance to say no,” said Stanford’s Greely.

Critics also noted that Facebook researcher Adam Kramer and two co-authors touted the study as having implications for public health when they announced the findings earlier this month. Facebook has more recently characterized the research as part of an effort to improve its services.

Writing in a scientific journal, the researchers said users gave informed consent to the experiment because they agreed to the company’s Data Use Policy when they opened their Facebook accounts. Initial news reports focused on a brief mention of “research” in the current, 9,000-word policy, but a Forbes technology writer reported late Monday that the oft-revised policy didn’t mention research at the time of the experiment.

Asked to clarify how users gave informed consent, a Facebook spokesman on Tuesday cited language from the 2012 version of the policy, which said the company uses members’ data to provide “innovative features and services we develop in the future that use the information we receive about you in new ways.”

In a statement, the company added: “When someone signs up for Facebook, we’ve always asked permission to use their information to provide and enhance the services we offer. To suggest we conducted any corporate research without permission is complete fiction. Companies that want to improve their services use the information their customers provide, whether their privacy policy uses the word ‘research’ or not.”

That’s the problem, according to Daniel Solove, a George Washington University law professor, who added in an online essay: “This story has put two conceptions of consent into stark contrast — the informed consent required for much academic research on humans, and the rather weak consent that is implied if a site has disclosed how it collects and uses data in a way that nobody reads or understands.”
