By Nathalie Cornée | Published: November 9, 2016
Altmetrics: A Practical Guide for Librarians, Researchers and Academics, edited by Andy Tattersall, provides an overview of altmetrics and new methods of scholarly communication, showing how they can be applied to provide evidence of scholarly contribution and to improve how research is disseminated. The book, which draws on the expertise of leading figures in the field, strongly encourages library and information science (LIS) professionals to engage with altmetrics to meet the evolving needs of the research community, finds Nathalie Cornée.
By Tamar Loach and Martin Szomszor | Published: April 6, 2016
A new report produced by the Digital Science team explores the types of evidence used to demonstrate impact in REF2014 and pulls together guidance from leading professionals on good practice.
By Ismael Ràfols and Jordi Molas-Gallart | Published: March 1, 2016
Science and technology systems are routinely monitored and assessed with indicators created to measure the natural sciences in developed countries. Ismael Ràfols and Jordi Molas-Gallart urge the creation of indicators that better reflect research activities and contributions in 'peripheral' spaces.
By Peter Kraker, Katy Jordan and Elisabeth Lex | Published: December 11, 2015
Remember the call for a 'Bad Metric' prize in the recent report 'The Metric Tide'? Peter Kraker, Katy Jordan and Elisabeth Lex take a closer look at one particularly opaque metric, the ResearchGate Score, and suggest they've found a real contender.
By Peter Kraker, Katrin Weller, Isabella Peters and Elisabeth Lex | Published: December 4, 2015
A recent conference aimed to bridge the gap between the different communities interested in bibliometrics. A key theme was the strong need for more openness and transparency: transparency in research evaluation processes to avoid bias, transparency of the algorithms that compute new scores, and openness of useful technology.
By Elizabeth Gadd | Published: November 12, 2015
The Declaration on Research Assessment, or DORA, has yet to achieve widespread institutional support in the UK. Its reception might be warmer if DORA were more like its cousin, the Leiden Manifesto.
By Brett Buttliere | Published: October 9, 2015
Rather than expecting people to stop utilizing metrics altogether, we would be better off focusing on making sure the metrics are effective and accurate, argues Brett Buttliere.
By Social Science Space | Published: August 17, 2015
While the initial splash made by 'The Metric Tide,' an independent review on the role of metrics in research assessment, has died down since its release last month, the underlying critique continues to make waves.
By Jane Tinkler | Published: August 12, 2015
Jane Tinkler argues that if institutions like HEFCE were to specify a narrow set of impact metrics, more harm than good would come to universities forced to limit their understanding of how research makes a difference. But, she adds, qualitative and quantitative indicators remain an invaluable source of learning about how impact works.
By SAGE | Published: July 10, 2015
If the mental picture of peer review shifted from chore to career-builder, it's reasonable to think that all of academe might prosper. An interview with a co-founder of Publons, a company that aims to do just that.
By Social Science Space | Published: July 9, 2015
A new report looking at the role of metrics in analyzing British academe finds, 'A lot of the things we value most in academic culture resist simple quantification, and individual indicators can struggle to do justice to the richness and diversity of our research.'
By Shane Gero and Maurício Cantor | Published: May 27, 2015
Peer review is flawed, and a new index proposes a simple way to build in transparency and quality control. Shane Gero and Maurício Cantor believe that giving reviewers citable recognition can improve the system by encouraging not only more participation but also higher-quality, constructive input, without requiring a loss of anonymity.