International Debate

Can We Replicate the Reported Crisis in Psychology?

June 27, 2016

Modern psychology is apparently in crisis. This claim is nothing new. From phrenology to psychoanalysis, psychology has traditionally had an uneasy scientific status. Indeed, the philosopher of science Karl Popper viewed Freud’s theories as a typical example of pseudoscience because no test could ever show them to be false. More recently, psychology has feasted on a banquet of extraordinary findings whose scientific credibility has also been questioned.

Some of these extraordinary findings include Daryl Bem’s experiments, published in 2011, which seemed to show future events influencing the past. Bem, an emeritus professor at Cornell University, reported that people are more likely to remember a list of words if they practice them after a recall test, compared with practicing them before the test. In another study, he showed that people are significantly better than chance at selecting which of two curtains hides a pornographic image.


This article by Keith Laws originally appeared at The Conversation, a Social Science Space partner site, under the title “Is psychology really in crisis?”

Then there’s Yale’s John Bargh who in 1996 reported that, when unconsciously primed with an “elderly stereotype” (by unscrambling jumbled sentences containing words such as “Florida” and “bingo”), people subsequently walk more slowly. Add to this Roy Baumeister who in 1998 presented evidence suggesting we have a finite store of will-power which is sapped whenever we resist temptations such as eating chocolates. Or, in the same year, Ap Dijksterhuis and Ad Van Knippenberg showing that performance on Trivial Pursuit is better after people list typical characteristics of a professor rather than those of a football hooligan.

These studies are among the most controversial in psychology, not least because other researchers have had difficulty replicating the experiments. They raise concerns about the methods psychologists use, but also, more broadly, about psychology itself.

Do not repeat

In a survey of 1,500 scientists published in Nature last month, 24% said they had published a successful replication and 13% an unsuccessful one. Contrast this with over a century of psychology publications, where just 1% of papers attempted to replicate past findings.

Editors and reviewers have been complicit in a systemic bias that has turned high-profile psychology journals into storehouses for the strange. Many psychologists are obsessed with the “impact factors” of journals (as are the journals) – and one way to increase impact is to publish curios. Certain high-impact journals have a reputation for publishing curios that never get replicated but which attract lots of attention for the author and journal. By contrast, confirming the findings of others through replication is unattractive, rare and relegated to less prestigious journals.

Despite psychology’s historical abandonment of replication, is the tide turning? This year, a crowd-sourced initiative – the OSC Reproducibility Project – attempted to replicate 100 published findings in psychology. The multinational collaborators replicated just over a third (36%) of the studies. Does this mean that psychological findings are unreliable?

Replication projects are selective, targeting studies that are cheaper and less technically complicated to replicate or those that are simply unbelievable. Other projects such as “Many Labs” have reported a replication rate of 77%. All initiatives are non-random, and headline replication rates reflect the studies that are sampled. Even if a random sample of studies were examined, we don’t know what would constitute an acceptable replication rate in psychology. This is not an issue specific to psychology. As John Ioannidis noted: “most published research findings are false”. After all, scientific hypotheses are our current best guesses about phenomena, not a simple accumulation of truths.
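Ioannidis’s claim is not mysticism but arithmetic: the probability that a “significant” finding is true depends on how plausible the tested hypotheses were to begin with, the study’s power, and any bias nudging null results into positives. The sketch below is an illustrative calculation in the spirit of Ioannidis’s framework, not a formula taken from this article; the particular numbers (a 1-in-10 prior, 80% power, 20% bias) are assumptions chosen for illustration.

```python
def positive_predictive_value(prior, power=0.8, alpha=0.05, bias=0.0):
    """Probability that a statistically significant finding is true.

    prior: probability that a tested hypothesis is true before the study
    power: probability of detecting a true effect
    alpha: false positive rate for a single test
    bias:  fraction of would-be null results reported as positive anyway
    """
    # True positives: real effects detected, plus real effects "rescued" by bias.
    true_pos = prior * power + bias * prior * (1 - power)
    # False positives: nulls crossing alpha by chance, plus nulls promoted by bias.
    false_pos = (1 - prior) * alpha + bias * (1 - prior) * (1 - alpha)
    return true_pos / (true_pos + false_pos)

# Long-shot hypotheses (1 in 10 true) with modest bias: the value falls
# well below 0.5, i.e. most published positives are false.
print(positive_predictive_value(prior=0.10, power=0.8, alpha=0.05, bias=0.2))

# Well-grounded hypotheses (1 in 2 true) with no bias: most positives are true.
print(positive_predictive_value(prior=0.50, power=0.8, alpha=0.05, bias=0.0))
```

The point of the exercise is that a field rewarding surprising (low-prior) hypotheses mathematically guarantees a high share of false positives, even before any misconduct.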

Questionable research practices

The frustration of many psychologists is palpable because it seems so easy to publish evidence consistent with almost any hypothesis. A likely cause of both unusual findings and non-replicability is psychologists indulging in questionable research practices (QRPs).

In 2012, a survey of 2,000 American psychologists found that most indulged in QRPs. Some 67% admitted selectively reporting studies that “worked”, while 74% failed to report all measures they had used. The survey also found that 71% continued to collect data until a significant result was obtained and 54% reported unexpected findings as if they were expected. And 58% excluded data after analyses. Astonishingly, more than one-third admitted they had doubts about the integrity of their own research on at least one occasion and 1.7% admitted to having faked their data.

The problems associated with modern psychology are longstanding and cultural, with researchers, reviewers, editors, journals and news media all prioritizing and benefiting from the quest for novelty. This systemic bias, coupled with minimal agreement on fundamental principles in certain areas of psychology, means questionable research practices can flourish – consciously or unconsciously. Large-scale replication projects will not address the cultural problems and may even exacerbate them by presenting replication as something special that we use to target the unbelievable. Replication – whether judged as failed or successful – is a fundamental aspect of normal science and needs to be both more common and more valued by psychologists and psychology journals.


Keith Laws is professor of cognitive neuropsychology in the School of Life and Medical Sciences at the University of Hertfordshire. He completed a PhD at the Department of Experimental Psychology, University of Cambridge, and is the author of more than 100 papers and a recent book, Category-Specificity: Evidence for Modularity of Mind. His research focuses on cognitive function in a variety of disorders, including Alzheimer's disease, schizophrenia, and obsessive-compulsive disorder.

