
Five Considerations for Any Policy to Measure Research Impact

January 13, 2017


Measuring public impact, as opposed to purely academic impact, is not quite apples to oranges, but it’s also not exactly the same thing.

This year will see the Australian government pilot new ways to measure the impact of university research.

As recommended by the Watt Review, the Engagement and Impact Assessment will encourage universities to ensure academic research produces wider economic and social benefits.

This fits into the National Innovation and Science Agenda, in which taxpayer funds are targeted at research that will have a beneficial future impact on society.


This article by Andrew Gunn and Michael Mintrom originally appeared at The Conversation, a Social Science Space partner site, under the title “Five things to consider when designing a policy to measure research impact.”

Education Minister Simon Birmingham said the pilots will test “how to measure the value of research against things that mean something, rather than only allocating funding to researchers who spend their time trying to get published in journals.”

This move to measure the non-academic impact of research introduces many new challenges that were not previously relevant when evaluation focused solely on academic merit. New research highlights some of the key issues that need to be addressed when deciding how to measure impact.

1. What should be the object of measurement?
Research impact evaluations need to trace out a connection between academic research and “real world” impact beyond the university campus. These connections are enormously diverse and specific to a given context. They are therefore best captured through case studies.

When analysing a case study the main issues are: what counts as impact, and what evidence is needed to prove it? When considering this, Australian policymakers can use recent European examples as a benchmark.


For instance, in the UK’s Research Excellence Framework (REF) – which assesses the quality of academic research – the only impacts that can be counted are those directly flowing from academic research submitted to the same REF exercise.

To confirm an impact, the beneficiaries of research (such as policymakers and practitioners) are required to provide written evidence. This creates a narrow definition of impact, because impacts that cannot be verified, or that do not flow from submitted research outputs, do not count.

This has been a cause of frustration for some UK researchers, but the high threshold does ensure the impacts are genuine and flow from high quality research.

2. What should be the time frame?
There are unpredictable time lapses between academic work being undertaken and it having impact. Some research may be quickly absorbed and applied, whereas other impacts, particularly those from basic research, can take decades to emerge.

For example, a study looking at time lags in health research found the time lag from research to practice to be on average 17 years. It should be noted, though, that time lapses vary considerably by discipline.

Only in hindsight can the value of some research be fully appreciated. Research impact assessment exercises therefore need to specify the timeframe within which impacts will be counted.

Here, policymakers can learn from previous trials, such as the one conducted by the Australian Technology Network and the Group of Eight in 2012. That exercise counted impacts arising from research undertaken during the previous 15 years.

3. Who should be the assessors?
It is a long established convention that academic excellence is decided by academic peers. Evaluations of research are typically undertaken by panels of academics.

However, if these evaluations are extended to include non-academic impact, does this mean the views of end-users of research should also be included? That would bring voices from outside academia into the evaluation of academic research.

In the 2014 UK REF, over 250 “research users” (individuals from the private, public or charitable sectors) were recruited to take part in the evaluation process. However, their involvement was restricted to assessing the impact component of the exercise.

This option offers an effective compromise: it maintains the principle of academic peer review of research quality while also including end-users in the assessment of impact.

4. What about controversial impacts?
In many instances the impact of academic research on the wider world is a positive one. But there are some impacts that are controversial – such as fracking, genetically modified crops, nanotechnologies in food, and stem cell research – and need to be carefully considered.

Such research may have considerable impact, but in ways that make it difficult to establish a consensus on how scientific progress impacts “the public good.” Research such as this can trigger societal tensions and ethical questions.

This means that impact evaluation also needs to consider non-economic factors, such as quality of life, environmental change and public health, even though it is difficult to place dollar values on these things.

5. When should impact evaluation occur?
Impact evaluation can occur at various stages in the research process. For example, a funder may invite research proposals where the submissions are assessed based on their potential to produce an impact in the future.

An example of this is the European Research Council Proof of Concept Grants, where researchers who have already completed an ERC grant can bid for follow-on funding to turn their new knowledge into impacts.

Alternatively, impacts flowing from research can be assessed in a retrospective evaluation. This approach identifies impacts where they already exist and rewards the universities that have achieved them.

An example of this is the Standard Evaluation Protocol (SEP) used in the Netherlands, which assesses both the quality of research and its societal relevance.

A novel feature of the proposed Australian system is that it assesses engagement and impact as two distinct things. This means there isn’t one international example to simply replicate.

Although Australia can learn from some aspects of evaluation in other countries, the Engagement and Impact Assessment pilot is a necessary stage to trial the proposed model as a whole.

The pilot – which will test the suitability of a wide range of indicators and methods of assessment for both research engagement and impact – means the assessment can be refined before a planned national rollout in 2018.


Andrew Gunn is a researcher in higher education policy at the University of Leeds. Michael Mintrom is professor of public sector management at the Australia and New Zealand School of Government and Monash University.
