
Is Everything a Scholar Writes Automatically Scholarly?

November 26, 2014

What is it that sets academic publications apart from articles on The Conversation? Peer review might be your first answer. While The Conversation is built around a journalistic model, there has been rapid growth in online, open-access journals, each with a different approach to peer review. But peer review is impossible to define precisely, and reviewing research before it is published can be fraught with problems.

This is part of the reason why so many published research findings are false. Alternative publishing models have developed in response to this. Open access and post-publication peer review are now common.

This article by Christopher Sampson originally appeared at The Conversation, a Social Science Space partner site, under the title “What counts as an academic publication?”

This new regime raises questions about what defines academic publishing. Blog posts and journalistic articles can be open access and subject to post-publication peer review, but are they scholarly? New publishing models have also developed their own shortcomings. One problem is the proliferation of predatory open access publishers. Some of these appear happy to accept randomly generated articles for publication, apparently following peer review.

The importance of transparency

So, what should be considered scholarly output? The key to quality research is that we know what went into producing the reported results. All empirical work should be preceded by a published protocol. This should set out – transparently – the methods to be used.

Without one, it’s difficult to reproduce research findings and identify errors. There are plenty of journals that will now publish protocols, such as BMJ Open, PeerJ or SpringerPlus. But publication of a protocol in an open access repository would be sufficient – it isn’t necessary for it to appear in a peer-reviewed journal.

It’s important to make any present or potential conflicts of interest clear. This should apply to authors, reviewers and editors. Journals’ disclosure rules are a start, though they have their limitations. We need more sophisticated mechanisms to use alongside initiatives like ORCID, which assigns a unique identifier to each researcher.

In most cases, scholars can share the data they have collected and analysed. Making data and analysis files available can help uncover simple errors. The Reinhart-Rogoff-Herndon incident is a case in point. Research findings by two Harvard economists were used to justify austerity policies, but these findings were undermined when a fundamental error was found in an Excel file.

My own field of research – health economics – should make cost-effectiveness models open. These models often form the basis of decisions about whether or not a particular drug will be available to patients, and yet the methods are often unclear to everyone but the authors. Where data relates to individual participants – and cannot be anonymised – this should be made clear to readers and reviewers.

Shine a light on peer review

Evidence suggests that two or three peer reviewers will not be able to identify all errors in a manuscript. This is one of the main problems with pre-publication peer review. It’s also one reason why open access is so important in the definition of good science. Paywalls on traditional academic journals restrict the number of people who can check the quality of a publication and can encourage mistaken consensus over published errors. All scholarly output must be open access.

And so peer review itself should also be transparent. Pre-publication peer review reports should be open and accessible through the journal or a service like Publons, a facility for researchers to record their peer review activity. Mechanisms for post-publication peer review should also be supported, and reviewers should be identifiable as experts in their field. PubMed Commons is an example of such a tool.

Peer review is important, but I believe that post-publication approaches can be more effective. An additional benefit of open evaluation is the potential for better metrics.

Redefining scholarly output

Scholarly writing should be distinguishable from other forms of publication by its transparency. We should know exactly how authors arrive at their findings. Findings published in academic journals should be given special credence because of this.

Academic publishing should be defined by the presence of strict regulations to maximise transparency. Articles that do not meet transparency criteria should not be eligible for research quality assessments, such as the UK’s Research Excellence Framework. Journalists and academic bloggers will not be subject to such strict rules, and their output will differ accordingly.

Make “good” science clearer

I am by no means the first to call for such measures. But previous calls have focused on ideas for improving scholarly writing rather than the more fundamental challenge of defining it.

Transparency no doubt has its costs, at least in the short term. But without it, true scholarly output will become increasingly indistinguishable from academics’ other forms of writing.

Good science should not be defined by whether or not pre-publication peer review takes place, but by the transparency of the research. Some fear that abandoning our current system might allow more “bad science” to get through. But we have bad science now, and lots of it. Sunlight is the best disinfectant.


Christopher Sampson is a health economist based at the Division of Rehabilitation and Ageing at the University of Nottingham. His interest is most strongly drawn towards research into the methods and theory of valuing health, and the evaluative space of economic evaluation. Sampson also has an active interest in models of academic publishing and in academics’ engagement with alternative channels of dissemination, such as blogging and social media. This is in no small part thanks to his role as founder of The Academic Health Economists' Blog.
