Could Distributed Peer Review Better Decide Grant Funding?
The landscape of academic grant funding is notoriously competitive and plagued by lengthy, bureaucratic processes, exacerbated by difficulties in finding willing reviewers. Distributed […]
The Declaration on Research Assessment (DORA) has yet to achieve widespread institutional support in the UK. Perhaps its reception would be warmer if DORA were more like its cousin, the Leiden Manifesto.
Rather than expecting people to stop using metrics altogether, we would be better off ensuring the metrics are effective and accurate, argues Brett Buttliere.
While the initial splash made by ‘The Metric Tide,’ an independent review on the role of metrics in research assessment, has died down since its release last month, the underlying critique continues to make waves.
Jane Tinkler argues that if institutions like HEFCE specify a narrow set of impact metrics, more harm than good would come to universities forced to limit their understanding of how research makes a difference. But, she adds, qualitative and quantitative indicators remain a valuable source of insight into how impact works.
A new report looking at the role of metrics in analyzing British academe finds, ‘A lot of the things we value most in academic culture resist simple quantification, and individual indicators can struggle to do justice to the richness and diversity of our research.’