Could Distributed Peer Review Better Decide Grant Funding?
The landscape of academic grant funding is notoriously competitive and plagued by lengthy, bureaucratic processes, exacerbated by difficulties in finding willing reviewers. Distributed peer review, in which applicants assess one another's proposals, may offer a way forward.
Authorship of an article seems like it ought to be straightforward, but of course it’s not. Even with greater scrutiny, abuse of the process — both adding the wrong people and subtracting the right ones — continues.
A very strong overall REF performance signifies a large concentration of outstanding work. It is an unambiguous plus. All the same, precise league table positions in the REF, indicator by indicator, should be taken with a grain of salt.
Measuring impact was a key feature of the just-released Research Excellence Framework in the UK. But ‘impact’ isn’t as fair a measure as we might hope.
Consciously we might be talking about the impending sustainability crisis, but unconsciously we find ways to maintain the status quo.
Concluding his two earlier articles on post-publication peer review, Andy Tattersall argues that while new ways of measuring scholarly value are not yet perfect, it’s high time to start introducing them more widely.
Behavioral scientists have seized on social media and its massive data sets as a way to quickly and cheaply figure out what people are thinking and doing. But some of those tweets and thumbs-ups can be misleading.
It’s not necessarily the type of peer review that makes an academic article scholarly, argues Christopher Sampson, but the transparency of how its conclusions were reached.
Although universities and funding bodies pay lip service to the importance of multidisciplinary research, a physicist and an anthropologist argue there is a long way to go before the reality matches the rhetoric.