Is Everything a Scholar Writes Automatically Scholarly?

November 26, 2014

What is it that sets academic publications apart from articles on The Conversation? Peer review might be your first answer. While The Conversation is built around a journalistic model, there has been substantial growth in online, open-access journals, each with its own approach to peer review. But peer review is impossible to define, and reviewing research before it is published can be fraught with problems.

This is part of the reason why so many published research findings are false. Alternative publishing models have developed in response to this. Open access and post-publication peer review are now common.

This article by Christopher Sampson originally appeared at The Conversation, a Social Science Space partner site, under the title “What counts as an academic publication?”

This new regime raises questions about what defines academic publishing. Blog posts and journalistic articles can be open access and subject to post-publication peer review, but are they scholarly? New publishing models have also developed their own shortcomings. One problem is the proliferation of predatory open access publishers. Some of these appear happy to accept randomly generated articles for publication, apparently following peer review.

The importance of transparency

So, what should be considered scholarly output? The key to quality research is that we know what went into producing the reported results. All empirical work should be preceded by a published protocol, which should set out – transparently – the methods to be used.

Without one, it’s difficult to reproduce research findings and identify errors. There are plenty of journals that will now publish protocols, such as BMJ Open, PeerJ or SpringerPlus. But publication of a protocol in an open access repository would be sufficient – it isn’t necessary for it to appear in a peer-reviewed journal.

It’s important to make any present or potential conflicts of interest clear. This should apply to authors, reviewers and editors. Journals’ disclosure rules are a start, though they have their limitations. We need more sophisticated mechanisms to work alongside initiatives like ORCID, which assigns each researcher a unique identifier.

In most cases, scholars can share the data they have collected and analysed. Making data and analysis files available can help uncover simple errors. The Reinhart-Rogoff-Herndon incident is a case in point. Research findings by two Harvard economists were used to justify austerity policies, but these findings were undermined when a fundamental error was found in an Excel file.

My own field of research – health economics – should make cost-effectiveness models open. These models often form the basis of decisions about whether or not a particular drug will be available to patients, and yet the methods are often unclear to everyone but the authors. Where data relates to individual participants – and cannot be anonymised – this should be made clear to readers and reviewers.

Shine a light on peer review

Evidence suggests that two or three peer reviewers will not be able to identify all errors in a manuscript. This is one of the main problems with pre-publication peer review. It’s also one reason why open access is so important in the definition of good science. Paywalls on traditional academic journals restrict the number of people who can check the quality of a publication and can encourage mistaken consensus over published errors. All scholarly output must be open access.

And so peer review itself should also be transparent. Pre-publication peer review reports should be open and accessible through the journal or a service like Publons, a facility for researchers to record their peer review activity. Mechanisms for post-publication peer review should also be supported. Reviewers should be identifiable as experts in their field. PubMed Commons is an example of such a tool.

Peer review is important, but I believe that post-publication approaches can be more effective. An additional benefit of open evaluation is the potential for better metrics.

Redefining scholarly output

Scholarly writing should be distinguishable from other forms of publication by its transparency. We should know exactly how authors arrive at their findings. Findings published in academic journals should be given special credence because of this.

Academic publishing should be defined by the presence of strict regulations to maximise transparency. Articles that do not meet transparency criteria should not be eligible for research quality assessments, such as the UK’s Research Excellence Framework. Journalists and academic bloggers will not be subject to such strict rules, and their output will differ accordingly.

Make “good” science clearer

I am by no means the first to call for such measures. But previous calls have focused on ideas for improving scholarly writing rather than the more fundamental challenge of defining it.

Transparency no doubt has its costs, at least in the short term. But without it, true scholarly output will become increasingly indistinguishable from academics’ other forms of writing.

Good science should not be defined by whether or not pre-publication peer review takes place, but by the transparency of the research. Some fear that abandoning our current system might allow more “bad science” to get through. But we have bad science now, and lots of it. Sunlight is the best disinfectant.


Christopher Sampson is a health economist based in the Division of Rehabilitation and Ageing at the University of Nottingham. His research interests centre on the methods and theory of valuing health and the evaluative space of economic evaluation. Sampson also has an active interest in models of academic publishing and in academics’ engagement with alternative channels of dissemination, such as blogging and social media, in no small part thanks to his role as founder of The Academic Health Economists' Blog.
