Social, Behavioral Scientists Eligible to Apply for NSF S-STEM Grants
Proposals are now being sought for the National Science Foundation’s Scholarships in Science, Technology, Engineering, and Mathematics (S-STEM) program, and in an unheralded […]
Academics are increasingly required not only to find effective ways to communicate their research, but also to measure and quantify its quality, impact, and reach. In Scholarly Communication: What Everyone Needs to Know, Rick Anderson puts us in the picture. And in Measuring Research: What Everyone Needs to Know, Cassidy Sugimoto and Vincent Larivière critically assess more than 20 of the tools currently available for evaluating the quality of research.
In this post, Lutz Bornmann and Robin Haunschild present evidence from their recent study examining how peer review, altmetrics, and bibliometric analyses relate to societal and academic impact. Drawing on evidence from REF2014 submissions, they argue that altmetrics may provide evidence of engagement in wider non-academic debates, but correlate poorly with peer review assessments of societal impact.
How can the impact of an academic article be measured? It seems everyone wants an answer to this question: from the researchers and author teams who create articles, to the editors and peer reviewers who curate them, to the societies and publishers who release them to the world.
Portia Roelofs and Max Gallien cite Bruce Gilley’s paper defending colonialism, published earlier this month, to illustrate how deliberately provocative articles can hack academia, privileging clicks and attention over rigor in research.
Altmetrics: A Practical Guide for Librarians, Researchers and Academics, edited by Andy Tattersall, offers an overview of altmetrics and new methods of scholarly communication, and shows how these can be applied to evidence scholarly contributions and improve how research is disseminated. Drawing on the expertise of leading figures in the field, the book strongly encourages library and information science (LIS) professionals to get involved with altmetrics in order to meet the evolving needs of the research community, finds Nathalie Cornée.
A new report produced by the Digital Science team explores the types of evidence used to demonstrate impact in REF2014 and pulls together guidance from leading professionals on good practice.
Science and technology systems are routinely monitored and assessed with indicators created to measure the natural sciences in developed countries. Ismael Ràfols and Jordi Molas-Gallart urge the creation of indicators that better reflect research activities and contributions in ‘peripheral’ spaces.
Remember that call for a ‘Bad Metric’ prize in the recent ‘The Metric Tide’ report? Peter Kraker, Katy Jordan and Elisabeth Lex take a closer look at one particularly opaque metric, the ResearchGate Score, and suggest they’ve found a real contender.