A Guided Tour of Post-Publication Review Sites

November 14, 2014

Lining up to share their take on a colleague’s work. (Photo: Ben Tubby /flickr.com)

Academic debate using the many Web 2.0 and social media tools freely available has been embraced by only a small percentage of academics. Interesting papers are more likely to be shared via networks such as Twitter, Google+ and LinkedIn than discussed, but given that it is far easier and less time-consuming to share content on the web than to review it, this is understandable. Reviewing takes time, and unlike reviewing a film, which is foremost a subjective piece of writing focused on whether you enjoyed it and whether it was well made, peer review requires more considered thought. Research is judged on whether it was well designed and conducted, not on how well it was written (although that does come into the formal review process).

This article by Andy Tattersall originally appeared on the LSE Impact of Social Sciences blog as “Comment, discuss, review: An essential guide to post-publication review sites” and is reposted under the Creative Commons license (CC BY 3.0).

The debate on the best way forward for post-publication review will continue, and as with other topics such as the measurement of research, there appears to be no ‘silver bullet.’ Instead there is a collection of sites and tools operating in silos, all offering to solve the same problem: the lack of post-publication discussion and assessment. Below is a list of some of the main tools and sites offering some kind of comment, discussion or review system. It is not exhaustive or comprehensive, but it will give you some idea of what they are and do.

PLOS ONE

PLOS ONE states its mission as “accelerating the publication of peer-reviewed research.” First and foremost, PLOS ONE is an open access collection of journals that, unlike many traditional journals, has sped up the publication process and ensures authors retain copyright. It is not a post-publication review site outright, but it does allow users to comment on published research, much as newspapers allow visitors to comment on their news articles. Commenting on research is in essence less formal than post-publication reviewing: readers have the remit to post something as in-depth as they wish, whether a few lines about one part of the research (the methodology, results or conclusion) or a longer, more in-depth review of the whole paper. To comment on papers in PLOS ONE you must be a registered user and identify any competing interests. The rules are quite simple and state that comments must not contain:

  1. Remarks that could be interpreted as allegations of misconduct
  2. Unsupported assertions or statements
  3. Inflammatory or insulting language

Anyone breaking these rules will be removed and their account disabled. Obviously this does not stop them creating new accounts, but that will always be a problem for many interactive websites.

PubMed Commons

PubMed is a huge publicly accessible search engine that accesses the Medline database of references and abstracts on life sciences and biomedical topics. It recently launched PubMed Commons to enable authors to share opinions and information about scientific publications in PubMed.
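
For a sense of how openly accessible PubMed is, its records can also be searched programmatically through NCBI’s public E-utilities interface. Below is a minimal sketch of such a search; the query term is purely an example.

    import requests

    # Search PubMed via NCBI's public E-utilities (no API key needed for light use).
    # The query term below is purely illustrative.
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={
            "db": "pubmed",
            "term": "post-publication peer review",
            "retmode": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["esearchresult"]

    print(result["count"], "matching records")
    print("First PMIDs:", result["idlist"][:5])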

To be eligible to use PubMed Commons you have to be an author of a publication in the database, preventing just anyone from going in and leaving comments. The email addresses of eligible authors have been collected from the NIH and the Wellcome Trust, as well as from author emails within PubMed and PubMed Central. You can also ask a colleague to invite you into the system.

Screenshot: © PubMed

The guidelines for PubMed Commons are more stringent than those of PLOS ONE and other such sites. Commenters must use their real name and, again, disclose any conflicts of interest. By contributing to Commons they grant other users a worldwide, royalty-free, non-exclusive, perpetual license under the Creative Commons Attribution 3.0 United States License. Again, the usual rules against posting inflammatory, offensive and spam comments apply. Full guidelines can be viewed here:

http://www.ncbi.nlm.nih.gov/pubmedcommons/help/guidelines/

Open Review

Open Review is a new feature within the popular academic social network and research sharing platform ResearchGate. Open Review allows users to publish an open and transparent review of any paper they have read, worked with, or cited. ResearchGate says: “Designed to approach the evaluation of research in a different way, Open Review encourages scientists and researchers to focus on one key question: Is this research reproducible?” Users can discuss the articles they click on, with a slant more towards asking questions about the publication than commenting on or reviewing it.

F1000 Research

Screenshot: © F1000

F1000 (standing for Faculty of 1000) is made up of three services. F1000 Prime is a personalised recommendation system for biomedical research articles. F1000 Research is an open science journal offering post-publication peer review of research with underlying datasets. Finally, there is F1000 Posters, an open repository for conference posters and slide presentations.

F1000 Research has a system of open peer review which publishes referee responses and allows for replies by the authors and reader comments, so a bit of everything. In addition, they offer incentives for reviewers, including a 50 percent discount on article processing charges for the 12 months following the submission of their referee report. They also offer a six-month free personal subscription to their sister service F1000 Prime. Users of F1000 can track the conversation and even discuss the article at the bottom of the page, so the entire process (paper, review and discussion) takes place on one page. Even referees’ reports can be cited in F1000 Research and are published under a CC BY license. A DOI (digital object identifier) is assigned to every referee report, so it can be cited independently of the article.
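
Because each referee report carries its own DOI, it can be handled by any standard DOI tooling. As a hedged illustration (the DOI below is a made-up placeholder, and this assumes the report’s DOI is registered with an agency that supports doi.org content negotiation), a ready-made citation for a report could be fetched like this:

    import requests

    # Placeholder DOI, purely for illustration; not a real referee report.
    doi = "10.5256/f1000research.0000.r0000"

    # doi.org supports content negotiation: instead of redirecting to the
    # landing page, it can return a formatted citation string.
    resp = requests.get(
        "https://doi.org/" + doi,
        headers={"Accept": "text/x-bibliography; style=apa"},
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.text)  # an APA-style citation for the referee report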

PubPeer

PubPeer refers to itself as the online journal club; it allows users to search for papers via DOIs, PMIDs, arXiv IDs, keywords and authors, amongst other options. PubPeer’s goal is to create an online community that uses the publication of scientific results as an opening for discussion among scientists. Researchers can comment on almost any scientific article published with a DOI or preprinted in the arXiv. You can also browse the list of journals with comments, although at present it is rare to find a journal with more than a couple of comments. First and last authors of published articles are invited to post comments (I’m presuming the authors in between also get a say). Unlike some of the other tools mentioned, PubPeer allows anonymous commenting, which could open the door to more trolling and abusive behaviour, as commenters feel an extra level of protection in what they say. One researcher has filed a lawsuit over anonymous comments which they claim caused them to lose their job after accusations of misconduct in their research.
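
PubPeer’s search is web-based, but the identifiers it accepts are standard ones, so the same lookups can be scripted. As a rough sketch (this uses Crossref’s public REST API rather than anything PubPeer-specific, and the DOI is a placeholder), a DOI can be resolved to basic paper metadata like so:

    import requests

    # Illustrative only: Crossref's public works endpoint, not a PubPeer API.
    doi = "10.1371/journal.pone.0000000"  # placeholder DOI

    resp = requests.get("https://api.crossref.org/works/" + doi, timeout=10)
    resp.raise_for_status()
    work = resp.json()["message"]

    print(work["title"][0])
    print(", ".join(a.get("family", "") for a in work.get("author", [])))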

Publons

The primary aim of Publons is to help researchers get credit for peer review. Whilst writing for peer-reviewed journals has often been seen by many as handing their hard work over to someone else (the publishers) to benefit from financially, at least authors gain the benefits of increased kudos, profile and knowledge-building, and the potential to gain promotion within their organisation. Peer reviewing can have similar rewards with regards to the researcher’s CV and promotion prospects, and reviewers get to see emerging research, but the anonymous nature of much of it means less opportunity for profile building, even though it is no less a part of the research publishing cycle. Publons sets out to work with reviewers, publishers, universities, and funding agencies to turn peer review into a measurable research output. They collect peer review information from reviewers and from publishers, and use the data to create reviewer profiles with publisher-verified peer review contributions that researchers can add to their CVs. Publons states that “reviewers control how each review is displayed on their profile (blind, open, or published), and can add both pre-publication reviews they do for journals and post-publication reviews of any article.”

JOVE

The Journal of Visualized Experiments (JOVE) has now been around for almost a decade, but it is only in the last couple of years that it has really broken through; it is now subscribed to by many university libraries. JOVE is a PubMed-indexed video journal with a mission to increase the productivity of scientific research. Although commenting is not at the forefront of JOVE’s priorities, they do allow comments on the published research videos.

Peer J

Peer J is an open access, peer-reviewed scientific journal focused on publishing research in the biological and medical sciences. It received substantial backing of US$950,000 from O’Reilly Media, whose founder Tim O’Reilly is famous for popularizing the term ‘Web 2.0.’ (SAGE, the sponsor of Social Science Space, is an investor in PeerJ.) The publishing company behind it was co-founded by publisher Peter Binfield (formerly of PLOS ONE) and CEO Jason Hoyt (formerly of Mendeley), who obviously have a lot of experience in scholarly communications.

Screenshot: CC BY Peer J

Peer J has a points system for authors and commenters as an incentive to publish and comment on research. Anyone who has ever argued that citations, H-indexes and the likes of Twitter followers are just multi-levelled multiplayer games will get this. The point values are listed below, and a short sketch tallying a sample total follows the list:

  • Be an academic editor on a PeerJ article = 100 pts
  • Be an author on a published PeerJ article = 100 pts
  • Make your manuscript reviews public on a PeerJ article = 35 pts
  • Submit an “open review” as a reviewer on a PeerJ article = 35 pts
  • Be an author on a PeerJ PrePrint = 35 pts
  • Be an academic editor on a rejected PeerJ article without reviews = 35 pts
  • Have an answer on a question accepted = 15 pts
  • Have feedback deemed “very helpful” by an author of a PeerJ PrePrint = 15 pts
  • Receive an up vote for an answer = 10 pts
  • Receive an up vote for a question = 5 pts
  • Receive an up vote for feedback on a PeerJ PrePrint = 5 pts
  • Receive an up vote for reply to question or comment = 1 pt
  • Have first feedback approved in moderation on a PeerJ PrePrint = 1 pt
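
To make the arithmetic concrete, here is a minimal sketch (my own illustration, not PeerJ code) that tallies a hypothetical user’s total from the values above:

    # Point values as listed above; the activity counts below are hypothetical.
    POINTS = {
        "academic_editor_on_article": 100,
        "author_on_published_article": 100,
        "public_manuscript_review": 35,
        "open_review_submitted": 35,
        "author_on_preprint": 35,
        "editor_on_rejected_article": 35,
        "answer_accepted": 15,
        "preprint_feedback_very_helpful": 15,
        "answer_upvoted": 10,
        "question_upvoted": 5,
        "preprint_feedback_upvoted": 5,
        "reply_upvoted": 1,
        "first_feedback_approved": 1,
    }

    # A hypothetical activity record for one user.
    activity = {
        "author_on_published_article": 2,  # two published PeerJ articles
        "open_review_submitted": 3,        # three open reviews
        "answer_upvoted": 7,               # seven up votes on answers
    }

    total = sum(POINTS[key] * n for key, n in activity.items())
    print("Total points:", total)  # 2*100 + 3*35 + 7*10 = 375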

There are tables of the top authors and reviewers, which can be filtered by topic area and publication date, along with rankings of those who have asked the most questions and given the most answers. The questions-and-answers aspect is a different angle on the commenting process, as it potentially opens up further dialogue between authors and commenters. At present, though, there does not seem to be much activity in this area.

Peer J state: “Our annotation system goes beyond just answering questions or finding answers. Everyone from authors, editors, reviewers, and visitors to PeerJ are contributing in some way. Often, these are “hidden” contributions to the body of science that can go unrecognized. The points that we are starting to show on profile pages are just a light way to surface this participation.”

As for this points ranking system, it will appeal to some researchers, those with a competitive edge, but on the flip side it will feel uncomfortable to others who do not want to see their work captured in numbers, and that applies to any kind of metric, not just Peer J’s. Nevertheless, it is an interesting take on the publishing model and one that will continue to create interest and debate.

PaperCritic

One of the first proper research commenting tools, PaperCritic appears to have ceased business but is still worthy of a brief mention. Created using the Mendeley API, PaperCritic connected with a user’s Mendeley account and allowed them to comment on research hosted within Mendeley’s huge database of references. Their blog, Facebook and Twitter accounts all fell silent in 2012, leading me to believe it is no longer running. The chances are that Mendeley will at some point create its own commenting and reviewing system, so it is still well worth the mention.
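
For a flavour of the kind of integration PaperCritic built, here is a minimal sketch under my own assumptions (this is not PaperCritic’s code, and it presumes you already hold a valid OAuth2 access token for Mendeley’s REST API; the token value is a placeholder):

    import requests

    # Placeholder token; obtaining one (client registration, OAuth2
    # authorization flow) is omitted here.
    ACCESS_TOKEN = "YOUR_OAUTH2_ACCESS_TOKEN"

    # List documents in the authorised user's Mendeley library.
    resp = requests.get(
        "https://api.mendeley.com/documents",
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()

    for doc in resp.json():
        print(doc.get("title"))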

The Winnower

The Winnower is possibly the least academic-looking post-publication platform of them all, but that should not put readers off; in fact, it should have the opposite effect. It is an attractive site that sets out its stall on the homepage with the statement that “The Winnower is founded on the principle that all ideas should be openly discussed, debated and archived.” As with so many new academic tools and platforms, it began life thanks to a PhD student, namely Joshua Nicholson from Virginia Tech. It provides an interesting new angle by looking at research from both ends of the spectrum, that which has made a big impact and that which has been retracted, with its own ‘Grain’ and ‘Chaff’ pages. The Grain features publications with more than 1,000 citations or an Altmetric score above 250, whilst the Chaff looks at papers that were pulled from publication and gives authors an opportunity to talk about their research rather than appearing on a mere ‘name and shame’ list. The Winnower is obviously still in its early stages, judging by the handful of reviews and publications, but not every post-publication review site can begin from the position of PubMed. It is certainly one worth keeping an eye on.

Andy Tattersall is an information specialist at the School of Health and Related Research at the University of Sheffield. His role is to scan the horizon for web and technology opportunities relating to research, teaching and collaboration, and to maintain networks that support this. He has a keen interest in new ways of working, employing altmetrics, Web 2.0 and social media, while also paying close attention to the implications and pitfalls of using such advances. @andy_tattersall
