Cutting NSF Is Like Liquidating Your Finest Investment
Look closely at your mobile phone or tablet. Touch-screen technology, speech recognition, digital sound recording and the internet were all developed using […]
Burned out by the hamster-wheel of academe and the regime of metrics, John Postill decided the tonic would be to write a spoof spy thriller about a Spanish nerd with a silly name who moves to London in 1994 and accidentally foils a terrorist plot by an evil anthropologist.
A new survey of university faculty finds that the idea of altmetrics – using measures other than journal citations to gauge scholarly impact – has made less headway among faculty than might be expected given the hoopla surrounding it. These new measures are most familiar in the social sciences (though only barely) and least familiar in the arts and humanities (dramatically so).
Academics are increasingly required not only to find effective ways to communicate their research, but also to measure and quantify its quality, impact and reach. In Scholarly Communication: What Everyone Needs to Know, Rick Anderson puts us in the picture. And in Measuring Research: What Everyone Needs to Know, Cassidy Sugimoto and Vincent Larivière critically assess more than 20 tools currently available for evaluating the quality of research.
To conclude his trilogy of articles on the research metrics system (and Google Scholar in particular), Louis Coiffait explores what improvements could be made.
The active use of metrics in everyday research activities suggests academics have accepted them as standards of evaluation. Yet when asked, many academics profess concern about the limitations of evaluative metrics and the extent of their use. Why is there such a discrepancy between principle and practice?
In Metric Power, David Beer examines the intensifying role that metrics play in our everyday lives, from healthcare provision to our interactions with friends and family, within the context of the so-called data revolution. The book illustrates our growing implication in, and arguable acquiescence to, an increasingly quantified world; but, asks Thomas Christie Williams, where do we locate resistance?
A culture of bad science can evolve as a result of institutional incentives that prioritize simple quantitative metrics as measures of success, argues Paul Smaldino. But not all is lost, he adds: new initiatives such as open data and replication are making a positive difference.
As governments seek practical metrics for determining whether their research funding is money well spent, the quest for ‘impact’ takes on great importance. Drawing on the Australian experience, Stephen Taylor addresses several key measurement principles.