Could Distributed Peer Review Better Decide Grant Funding?
The landscape of academic grant funding is notoriously competitive and plagued by lengthy, bureaucratic processes, exacerbated by difficulties in finding willing reviewers. Distributed […]
You donate your money to charity, your blood to others, and your time to special causes. So why not give your data to scientific research?
A survey by the Nuffield Council on Bioethics suggests that researchers appreciate the benefits of competition but also fear that it can emphasize prestige over quality.
Authorship of an article seems like it ought to be straightforward, but of course it’s not. Even with greater scrutiny, abuse of the process — both adding the wrong people and subtracting the right ones — continues.
A very strong overall REF performance signifies a large concentration of outstanding work. It is an unambiguous plus. All the same, precise league table positions in the REF, indicator by indicator, should be taken with a grain of salt.
Measuring impact was a key feature of the just-released Research Excellence Framework in the UK. But ‘impact’ isn’t as fair a measure as we might hope.
Consciously we may talk about the impending sustainability crisis, but unconsciously we find ways to maintain the status quo.
Concluding his two earlier articles on post-publication peer review, Andy Tattersall argues that while new ways to measure scholarly value aren’t yet perfect, it’s high time to start introducing them more widely.
Behavioral scientists have seized on social media and its massive data sets as a way to quickly and cheaply figure out what people are thinking and doing. But some of those tweets and thumbs-ups can be misleading.