Cutting NSF Is Like Liquidating Your Finest Investment
Look closely at your mobile phone or tablet. Touch-screen technology, speech recognition, digital sound recording and the internet were all developed using […]
Presenting evidence from a new analysis of business and management academics, the authors explore how journal status is valued by these academics and the point at which journal status becomes more prized than academic influence.
The aim of peer review for research grants and academic hiring boards is to provide expert independent judgement on the quality of research proposals and candidates. Based on findings from a recent survey, Liv Langfeldt, Dag W. Aksnes and Ingvild Reymert find that metrics continue to play a significant role in shaping these decisions, especially for reviewers who are highly ranked themselves.
“Make sure you’re not only citing white guys!” That was the unmistakable takeaway Wednesday as Deen Freelon discussed his research into citation inequities in the social sciences.
In this post, Jorrit Smit and Laurens Hessels draw on a recent analysis of different impact evaluation tools to explore how these tools constitute and direct conceptions of research impact. Finding a common separation between evaluations focused on scientific impact and those focused on societal impact, they suggest that bridging this divide may help produce research that has public value, rather than research that merely achieves particular metrics.
Lizzie Gadd argues that any commitment to responsible research assessment needs to include action on global university rankings, and that universities should unite around the principle of being ‘much more than their rank.’
In academia, gender bias is often framed in terms of research productivity and differences between the academic work of men and women. Alesia Zuccala and Gemma Derrick posit that this outlook inherently ignores a wider set of variables affecting women, and that attempts to achieve cultural change in academia can only be realised by acknowledging variables that are ultimately difficult to quantify.
Although experts in bibliometrics have pointed out the dubious nature of the h-index, many researchers do not seem to understand that its properties make it a far-from-valid index for seriously and ethically assessing the quality or scientific impact of publications.
This year, SAGE will analyze citation data for articles published in SAGE journals in 2009 to identify those most highly cited through the end of 2019. In May, we will present our inaugural 10-Year Impact Awards to the authors of the three most-cited papers and share insights from the analysis.