The paper, by University of California, Los Angeles graduate student Michael LaCour and Columbia University political science professor Donald Green, garnered a lot of media attention when it was published, reaching a wide audience thanks to a segment on the radio show This American Life.
This is likely in part due to the intensely topical content of the paper, which found that subjects in California were more inclined to become supportive of same-sex marriage if canvassed by a gay individual.
But when another group of researchers attempted to run a follow-up study, they got very different results. When they looked more closely at the original paper, they found irregularities, including that the survey research firm supposedly commissioned by the original authors had no knowledge of the study.
Green then confronted his co-author LaCour and requested the original data, only to have LaCour confess to “falsely describing at least some of the details of the data collection”.
This prompted Green to request a retraction from Science with the publication’s editor-in-chief Marcia McNutt saying:
[…] Green requested that Science retract the paper because of the unavailability of raw data and other irregularities that have emerged in the published paper.
Another train wreck
This raises at least as many questions as it answers. At the time of publication Green crowed:
The change was equivalent to transforming a Midwesterner into a New Englander on the issue of gay marriage.
Now it appears he had never seen the data. Yet he is a senior academic and LaCour only a postgraduate student, albeit at a top-notch university.
In social science research, as in research more generally, there is a continuum of error from typo and honest mistake, through inadvertent or time-constrained lack of fact checking, to deliberate fraud.
Deliberate fraud of the truth-enhancing kind is amusingly described in the book Free Radicals. The author, Michael Brooks, makes the case that great science rarely plays by the rules and that great scientists – Tesla and the University of Western Australia’s Barry Marshall among them – are often so convinced of the truth and value of what they are doing that they play fast and loose with approved methodology and the standard rules of the scientific enterprise.
The current imbroglio has some of this flavor; LaCour is openly gay and appears to have had no desire to be seen as at arm’s length from his research.
Social psychology has seen more than its fair share of all of this. It has rightly been described as a “train wreck” by the eminent psychologist and behavioural economist Daniel Kahneman.
So what are the deeper causes and implications of this most recent case of fraud in the social sciences? Four points come to mind.
1. Lack of rigour
The lack of rigour or community standards in social psychology is something Dave Bailey and I have previously discussed.
In more robust fields, like astrophysics or mathematics, wholesale bullshit is somewhat harder to publish, and is way harder to get away with, in the long term.
Last month I noted interesting new work on what drives content to go viral. The same features may well make it more likely that certain kinds of fraud (like certain urban myths) pass muster than others.
Does it tell us something surprising that we want to believe? Does it help drop our defenses? There are some famous examples of fraudulent Holocaust memoirs. No one saw that coming.
2. Getting away with it
Another problem is the sheer ease of generating fraud, particularly in a community whose reviewers have always seen their job as assessing interest and novelty, not diagnosing fraud.
Much like the current internet, the scholarly publication system was designed for use by a relatively small group of like-minded people, for whom rooting out malignant behavior (excluding Newton-Leibniz priority battles) was not really part of the equation.
I am willing to argue that cheating is still a relatively uncommon phenomenon in academia. A larger issue is the volume of tedious and fatuous publications whose only purpose is to satisfy the far-from-trivial need of each academic to be credentialed.
I am not convinced that the rate of academic fraud is that much greater than a generation ago. But the most flagrant cases, such as that of Diederik Stapel, who simply invented his research, make one gasp.
Well designed and executed surveys and opinion polls are valuable, if fraught. But with online tools, like Survey Monkey, no one is more than 30 minutes away from authoring their own poorly designed and implemented social science survey. The old computer science saying “garbage in, garbage out” comes to mind.
3. Benefit versus cost
A further issue is the large upside and often limited perceived downside of faking results.
Fame and fortune apart, there is now a huge industry that produces papers guaranteed to be published in the appropriate part of the food chain – at commensurate prices.
There are clearly more than enough takers to make this lucrative, even when the cost-per-paper can run into the tens of thousands of dollars. Science itself has highlighted the problem of “China’s Publication Bazaar” in gob-smacking detail.
4. Hunger for novelty
Worst of all is the appetite of even the most elite journals, grant councils and universities to always have something cutting-edge and sexy for the media, the public and especially for the decision makers.
This is a story with many co-dependent enablers, and few good guys other than the vast majority of ordinary researchers.
What politician stands up to endorse base funding for arm’s-length research when they can target money to something new, exciting, buzzword-compliant, and more than likely to go nowhere? This despite the fact that the most valuable university-industry tech transfer is still thought to be Gatorade.
Science and Nature are happy to play the embargo game, and even to break their own rules when they see a possibility for even more lovely coverage.
I know from repeated personal experience in many institutions that whatever my academic masters say about valuing basic research, when push comes to shove, they will drop that bone in favor of the big, shiny, PR-generating reflection in the river below.
As Daniel Kahneman has argued, the social sciences – whether political science or behavioral economics – must put in place stronger community standards. Access to data and issues of reproducibility need to be embedded in the process.
This is expensive, time-consuming and painful. Many smaller journals and academic communities simply do not have the requisite resources. They cannot afford to pay reviewers, nor is it clear that payment would not amplify the problem.
But this particular debacle played out in the leading weekly journal of the American Association for the Advancement of Science (AAAS), which has resources on a scale very few other scientific publishers have. The relatively quick unmasking of the fraud can be read as a sign of reasonably good health, though.
Prevention of the underlying disease will be a longer and messier affair. As with doping in sport or internet crime, the issues are here to stay.