Could Distributed Peer Review Better Decide Grant Funding?
The landscape of academic grant funding is notoriously competitive and plagued by lengthy, bureaucratic processes, exacerbated by difficulties in finding willing reviewers. Distributed […]
Adam Seth Levine compares the number of practitioners who engaged in self-matchmaking by contacting researchers directly through the site with the number who requested hands-on matchmaking.
As part of a project sponsored by the National Academy of Sciences and the Rita Allen Foundation, four science communication experts tackled questions surrounding the effective and ethical communication of science to relevant policymakers. In this webinar, we talk to the four experts about their findings and the processes they recommend.
Robert J. Marks, the director of the Walter Bradley Center for Natural and Artificial Intelligence, argues that academic reformers are battling numerical laws that govern how incentives work. His counsel? Know your enemy!
Traditionally, one of the biggest obstacles to building relationships between researchers and practitioners has been their different time scales: nonprofits’ “focus is urgent, immediate, and often in response to events…moving quickly and loudly,” whereas “academics work to a different rhythm”.
Promoting public engagement with research has become a core mission for research funders. However, assessment of the impact of this engagement is often under-analysed and limited to success stories. Drawing on the example of development aid, Marco J Haenssgen argues that we need to widen the parameters for assessing public engagement and begin to develop a skills base for the external evaluation of public engagement work.
One long-standing concern with connecting research and practice is that the implications of research findings are often presented in a highly “decontextualized, distant way” that makes it difficult for practitioners to apply them to the specific context where they work.
Arlette Jappe, David Pithan and Thomas Heinze find that the growth in the volume of ‘evaluative citation analysis’ publications has not led to the formation of an intellectual field with strong reputational control. This has left a gap that commercial database providers have filled: by selecting and distributing research metrics, they have gained a powerful role in defining standards of research excellence without being challenged by expert authority.
In the first of a series of short posts by Adam S. Levine spotlighting what the organization Research4Impact has learned about connecting social science researchers with practitioners, he identifies four reasons why nonprofit practitioners have wanted to engage with social scientists.