
On the Ethics of Facebook – and Drawing the Right Conclusions

July 16, 2014

Are Facebook’s ethics all wet? Maybe not, argues Robert Dingwall. (Photo: mkhmarketing/Flickr)

The research ethics community has been stirred out of its summer torpor by the chance to debate a paper by Facebook and Cornell researchers that investigated the phenomenon of emotional contagion by manipulating Facebook news feeds. When these were biased towards positive content, there was a small but detectable tendency for users to employ more positive language in their own posts – and vice versa. The design and methods have been broadly accepted as robust, although there are some important theoretical issues about how emotion is understood. Nevertheless, the findings have generally been considered to be interesting: even if the effect is not large, at Facebook’s scale it represents some hundreds of thousands of users each day.

The real controversy, however, was provoked by the apparent lack of regulatory jurisdiction. As a private organization using its own funds, Facebook is not subject to the Common Rule that governs most federally funded research in the US, and which many universities extend to cover all research with human subjects. This is the legal foundation for the system of Institutional Review Boards (IRBs). Facebook’s academic collaborators only received anonymized data, which exempted them from IRB review. It does not seem that any US laws or regulations have been broken, although other countries are investigating whether data protection offenses have been committed under their domestic legislation. Facebook’s terms of service allow the company to do research with user data, although there is some question about whether these clauses were actually in effect when this study was carried out.

There is, however, a difference between acting legally and acting ethically, which some ethicists have been quick to seize on. This response demands a critical analysis. The immediate response of many bioethicists to something that they think is wrong is to demand more regulation. This might, however, be a time to argue the contrary – that this case really demonstrates the over-regulation of most university social science research due to inappropriate generalization from biomedical models.

Most of the criticism turns on the issue of informed consent, which is conventionally a bedrock principle for research ethics. In practice, as many ethnographers have been pointing out for years, this has been fetishized to the point of absurdity. The atrocity stories about IRBs are legion and I will not bother repeating them. Facebook users – I am not one – voluntarily participate in the company’s service just as they might voluntarily hold supermarket or other loyalty cards. In exchange for the benefits offered, we forgo a certain measure of privacy – just how much and on what terms has been a matter of debate for some years, but the principle is not news. The company does all kinds of things with the data that are designed to manipulate our behavior. Supermarkets send us coupons that are intended to encourage us to spend money in their stores on the items that they would like to sell us. Presumably enough of us do this for that to be a productive strategy. Do we explicitly consent to have our behavior influenced in this fashion? Probably not – but we certainly assent to it by continuing to hold our loyalty cards.

It is hard to see a real difference between this project and, for example, the kind of well-established psychological study that leaves a $10 bill on a park bench and monitors what people do when they see it. On the scale of a typical psychology experiment, participants can be debriefed afterwards, but it makes nonsense of the study to ask for consent first. The publication of Facebook’s findings could, though, be considered a mass debriefing. The choice of a journal with a liberal Open Access policy furthers this goal.

What are the risks of this intervention? One commentator observes that some of the people receiving the negative condition might have been clinically depressed and pushed over the edge into suicide. Unfortunately, this argument is rather undermined by its misuse of US government data: the finding that approximately 6.7 per cent of Americans suffer from a major depressive disorder in the course of a year does not mean that the same proportion was depressed during any given week. With such a small observable effect, how plausible is it really to suppose that gloomy posts in one’s news feed will be the last straw?

Empirically, a good many ethicists do not much like the world they happen to live in. Events like this are a good opportunity to vent a visceral dislike of corporations, simply because they are large and make profits. The same response is evident in much of what is written about the pharmaceutical industry. This is absolutely not to say that big companies are saintly actors. Clearly there is much potential for harm from pharmaceutical research, and this is rightly regulated in both private and public sector environments. However, as many commentators have observed over the years, this does not justify the generalization of that model to the social sciences, where, for the most part, risks of harm are minimal.

In this case, as a respected group of bioethicists has observed in Nature, we might well want to applaud Facebook’s attempt to get a better understanding of its impact on users and its willingness to share that information in the public interest. The real novelty is Facebook’s decision to publish its findings in an academic journal rather than treating them as a trade secret. In doing so, Facebook has demonstrated the irrelevance of much of the current regulatory regime that crushes social science research in universities. The challenge is not to level the field by regulating Facebook but to deregulate public social science more generally.


Robert Dingwall is an emeritus professor of sociology at Nottingham Trent University. He also serves as a consulting sociologist, providing research and advisory services particularly in relation to organizational strategy, public engagement and knowledge transfer. He is co-editor of the SAGE Handbook of Research Management.
