What Do Rising Retraction Rates Mean for Peer Review?

July 10, 2014

The new journals are out. Let’s see what we can filter out.

In January, Haruko Obokata and colleagues published two papers in the journal Nature suggesting that a simple acid bath can convert differentiated cells back to a stem-cell-like state. This finding, if true, would be revolutionary. Last week, however, after five months of debate among peers, the papers were retracted.

This incident is part of a larger trend. The rate of retractions of scientific papers has been growing over the past decade, suggestive to some of a crisis of confidence in science. Can we no longer trust the scientific literature? Is peer review dysfunctional?


This article by Nikolaus Kriegeskorte originally appeared at The Conversation, a Social Science Space partner site, under the title “What lesson do rising retraction rates hold for peer review?”

Retractions reveal both science’s weakness and its strength. Science frequently goes wrong; that’s its weakness. Then science corrects itself; that’s its strength. And yet there’s a lesson in the rising rate of retractions.

Amplifying the noise in the system

When a scientific finding is published, our major indicator of its reliability and importance is the prestige of the journal where it appears. So when Obokata’s findings appeared in Nature, one of the top journals, the world paid attention. The story was reported in mass media across the globe. It is difficult to estimate the cost of confusing the world with an incorrect message at this scale.

The problem is not that science, for five months, was in a state of confusion about Obokata’s claims. Confusion in science is part of the process of working things out. The problem is that the message of the paper was amplified to global visibility before the field could resolve its confusion.

In the current system of prepublication peer review, a paper is evaluated before publication by a small number of other scientists (typically three or four). Such reviews formed the basis for presenting Obokata’s claims, as fact, to the whole world.

When one of us makes a claim (by submitting a paper), it would seem wise not to blurt it out to the whole world after just four of us (the peer reviewers) have had a look at it.

There’s a clear lesson in the Obokata story and the general trend of rising retraction rates. It was prepublication peer review that failed to catch the error. And it was postpublication peer review, the open debate on the web, that corrected the path of science.

Nature, Science, and other prestige journals are run by talented people who have every incentive to publish the best research. Their review process is professional and their reviewers are highly qualified. However, three or four reviewers asked to comment within a couple of weeks cannot achieve the breadth or depth of evaluation that an open discussion by hundreds of scientists can achieve over several months.

We need this sort of open evaluation among peers before we can justify alerting the entire world. The aura of prestige journals grossly overstates the actual confidence we can have in a scientific result when it first appears. Slight tweaks to the review process, as discussed in a Nature piece reflecting on the Obokata story, will not solve the problem. Even dramatic changes, such as doubling the number of reviewers or requiring independent replication, would fall short – as long as peer review is restricted to the prepublication phase.

Prepublication peer review is inadequate

Prepublication peer review is flawed for three reasons. First, it is restricted to a small number of people, the editors and peer reviewers. To bring the brain power of the entire community of peers into the evaluation process, the paper has first to be made publicly available – that is, published. Second, prepublication peer review is conducted in secret. Since the paper is not yet published, the review process, too, is hidden from public scrutiny. Typically, the reviewers are anonymous and their reviews secret. There is thus no strong disincentive to self-serving or subtly biased reviewing. Third, the review process delays publication. When conducted quickly, it may lack thoroughness. When given more time, it slows down the progress of science. The present model suffers from both of these drawbacks.

Establishing the reliability of a finding is only half the challenge. The other half is assessing the implications and importance of a study. Prepublication peer review falls short on both counts. Understanding the full implications of a study, too, requires an open peer debate.

We’ve inherited the current system from the pre-internet age. Back when articles needed to be printed on physical paper, we needed to filter before publication to control the costs. Today the internet enables us to “publish then filter,” to use Clay Shirky’s useful phrase. This will revolutionize scientific publishing. For the moment, however, the current system is held in place by historical inertia, our habits, and the financial interests of the publishing industry.

Open evaluation

The emerging alternative model is open evaluation (OE), a transparent public process of peer review and rating after publication. All scientific papers, in such a system, would be instantly published in an open access model, where everyone can read them. They would then be vetted and ranked postpublication in an ongoing fashion.

The transition is not going to be easy or swift, but recent developments and a growing number of startup companies are moving in the right direction. PubMed, a repository of science publications, has established a forum called PubMed Commons, where scientists can leave comments on any paper. PLOS Open Evaluation provides a web-based system for sampling opinions on papers through ratings. New journals including F1000 Research and ScienceOpen rely entirely on postpublication peer review.

Once open evaluation ratings on published papers become available, scientists and journalists will no longer be dependent on the impact factor of the journal as the only immediately available indication of a new paper’s reliability and importance.

A decade from now, Nature, or its successor in prestige science publishing, might pick the most exciting among previously published studies that have fared well through months of open evaluation. With the evaluation taken care of, the publishers will focus on helping authors communicate the findings to an audience that extends to other fields and beyond science. Had Obokata and colleagues published their findings first for their peers, the flaws of the papers would have been exposed before alerting the world. It would have saved us a lot of confusion.

***

Nikolaus Kriegeskorte has edited a collection of visions for open evaluation and postpublication peer review. He frequently argues in favour of reform for scientific publishing. He has served in editorial roles for Frontiers and PLoS Computational Biology, and is on the editorial board of ScienceOpen. He receives funding from the UK Medical Research Council, the European Research Council, and the Wellcome Trust.


Nikolaus Kriegeskorte is program leader for the MRC Cognition and Brain Sciences Unit at University of Cambridge. As a brain scientist, he studies visual object recognition with functional magnetic resonance imaging and computational models. In his free time, he likes to imagine how we can make peer review entirely transparent and build a new science in which papers are instantly published and openly evaluated.


1 Comment
Joanne Gaudet

Your post hits on some of the core issues when studying journal peer review as a scientific object of study. The relational dynamics in post-publication open review with public participation are most likely to foster rational decision-making with non-anonymous referees (maximal accountability) and open access to all editorial judgements (transparency in judgement). Full access to the original manuscript also leads to greater author-manuscript accountability… Following are a few preprints I propose: Socio-historical: http://hdl.handle.net/10393/31319 – Gaudet, J. 2014. Investigating journal peer review as scientific object of study: unabridged version – Part I. uO Research. Pp. 1-24. http://hdl.handle.net/10393/31320 – Gaudet, J. 2014.… Read more »