
What’s Wrong with Peer Review?

September 16, 2020

Have you ever invested a lot of thought, creativity, and effort into a journal article, only for it to be rejected on the strength of a rude and aggressive peer review? This happened to me quite recently. Last month, I finished work on two major publication projects. One of these concluded successfully, while the other became a major disappointment. Towards the end of the month, Routledge published the Handbook of Global Therapeutic Cultures. I had been the lead editor for this handbook, working together with a group of more than 30 colleagues to assemble a portrait of cutting-edge scholarship on therapeutic cultures and the roles psychological discourses and practices may play in contemporary societies. At the same time, another editorial project failed rather spectacularly. Following three years of work, a special issue I had been co-editing was unexpectedly turned down by the journal for which it had been intended.

In 2017, my co-editor and I hosted a workshop at a London university. The workshop led to engaging and intellectually stimulating conversations with its participants. We drew on these conversations to propose a special issue to a leading academic journal, in response to a call for proposals by the journal’s editorial board. The journal subsequently invited us to prepare the special issue. Its editor-in-chief asked us to draft an article-length introduction, to commission double-blind peer reviews for all articles to be included, and to oversee revisions in response to these peer reviews. Following these revisions, we would then submit the manuscripts to the journal. It would commission one further peer review, which was to assess “the coherence and integrity of the collection as a Special Issue”, according to the instructions the journal’s editor-in-chief sent us in an e-mail at the time.


Until the very last stage of the editorial process, everything went to plan. The contributors submitted their manuscripts, and my co-editor and I commissioned peer reviews to guide the subsequent round of revisions. All of the peer reviewers were leading scholars in relevant fields of research. Their peer reviews, 19 in total, were consistently positive in their assessment of the papers, many enthusiastically so. Following the initial submission of the manuscripts to the journal, its editor responded with a single peer review, asking for relatively minor revisions to enhance the coherence of the special issue.

Up until this point, nobody had raised any concerns about the quality of the special issue or mentioned the possibility of the journal rejecting it for publication. However, this changed suddenly after we had resubmitted the revised, final versions of the manuscripts. After a delay of about a month, the editor responded with a short e-mail, in which he advised us that he and the journal’s editorial team had decided to reject the special issue. The editor referred to a single new peer review he had commissioned as grounds for this decision, without mentioning the other 19 reviews that had formed part of the editorial process.

This new review was remarkable in several ways. First, it shifted the goal posts of the peer review process at a very late stage. It did not assess, as is common at this stage of the peer review process, whether our revisions had satisfactorily addressed the issues raised in the earlier post-submission review. It did not discuss this point at all. Instead, it raised a range of new, substantive concerns the reviewer had about the introductory article. Second, the review did not adhere to the journal’s stated review policy, in that it did not look at “the coherence and integrity of the collection as a Special Issue” at all, instead focusing exclusively on the introductory article. Third, it was short and obviously written in haste – less than a page of notes, full of typos, that flagged its author’s concerns in a disjointed fashion.

Finally, the review stood out for its condescending and aggressive tone. Instead of highlighting deficiencies and suggesting improvements, its author went on the attack, criticizing the introduction and its authors in often highly personal terms, alleging confusion, a lack of understanding, and intellectual weakness. When my co-editor and I pointed these concerns out to the journal’s chief editor, he responded with a robustly worded e-mail rejecting our request for further, better-quality reviews. And so our special issue died.

It is not my aim here to indulge in navel-gazing. Rather, it is my intention to draw attention to fundamental problems inherent in peer review as it is done and used today. Experiences such as mine are seemingly not uncommon. Stories of aggressive and unfair peer reviews are easy to come by on academic websites, blogs, and social media (1, 2, 3). The apparent prevalence of the problem has invited a diverse range of commentary, ranging from advice on how to handle “idiotic” peer reviews to critical commentary on inappropriate reviews. Etienne Benson, writing on the website of the American Psychological Association, argues that what he terms “savage” peer reviews result from a range of factors, including the anonymity of the peer review process, vested interests that underlie academic debates, inadequate training, and simple overwork.

Kevin Ward, in a 2016 article in Urban Geography, sets out a typology of peer review styles, which identifies the “bitter and twisted,” the “showboat,” and the “goalpost-moving” reviewers as distinctive, common figures to be encountered in the world of academic publishing. A common theme in much of this commentary is the assumption that, beyond perhaps sending an e-mail to a journal editor to voice one’s displeasure, little can be done about aggressive and inappropriate peer reviews; they are treated as a feature of academic publishing that can hardly be changed.

Why is this so? I would argue that important reasons lie in the culture and institutional structures of contemporary academic capitalism. The pervasive economization and commercialisation of global higher education over the past two decades have exacerbated extant forms of hierarchy and competition in academia and created new ones. Academic publishing today is a winner-takes-all world of journal rankings, impact factors, citation frequencies, and h-index scores in which the fit survive and the weak perish. Journals that wish to do well and attract contributions from well-known scholars must find ways to be admitted into the most prestigious rankings, such as the Social Science Citation Index, and occupy top positions in these rankings. Scholars who wish to do well and see their career flourish must find ways to display their work in top-ranked journals with the right impact factors.

The result is a hypercompetitive culture of academic life. So, for example, in his last round of instructions prior to the rejection of our special issue, the chief editor asked us to extensively cross-reference the various papers with one another, presumably to bolster his journal’s citation scores. In such a hypercompetitive culture, it makes sense that some scholars come to be labelled as “weak” and “confused.” The open use of such labels articulates the aggressive masculinity that defines contemporary academic capitalism.

Moreover, the institutional structures of academic capitalism exhibit a lack of transparency and accountability where it truly matters. Peer review, and the ways in which journals handle peer reviews, is one key site of such opacity and unaccountability. In our case, the journal editor ignored our concerns about the aggressive tone of the peer review he had commissioned, and he dismissed our concerns about the quality of the review and the integrity of the peer review process in language that was likewise rather antagonistic. What to do next? Academic journals usually do not have a process of complaint and review, and the principles of editorial independence and anonymity of peer review mean that it would have been very difficult for my co-editor and me to demand a full account of the way in which our special issue had been reviewed. The journal’s chief editor has in recent years published numerous articles in his own, top-ranked, journal, and most members of his core editorial team are his direct subordinates at the research center he directs.

Against this backdrop, how much fairness and openness would my co-editor and I have experienced in response to a formal complaint to the journal? As a means of displaying institutional legitimacy, academic capitalism has set off a blizzard of audits and surveillance all across academia. Unfortunately, this has not made academic publishing more benign or equitable.

My career so far has taken me to a fairly wide range of places, and this has allowed me to experience a wide range of approaches to sociology and social science. In my blog, I reflect on this diversity and its implications for the future of the discipline. Over the last few years, I have also become interested in exploring the contours of academic life under neoliberal hegemony. Far-reaching transformations are taking place at universities around the world, in terms of organisational structures, patterns of authority, and forms of intellectual activity. With my posts, I hope to draw attention to some of these transformations.
