
Five Considerations for Any Policy to Measure Research Impact

January 13, 2017


Measuring public impact, as opposed to purely academic impact, is not quite apples to oranges, but it’s also not exactly the same thing.

This year will see the Australian government pilot new ways to measure the impact of university research.

As recommended by the Watt Review, the Engagement and Impact Assessment will encourage universities to ensure academic research produces wider economic and social benefits.

This fits into the National Innovation and Science Agenda, in which taxpayer funds are targeted at research that will have a beneficial future impact on society.


This article by Andrew Gunn and Michael Mintrom originally appeared at The Conversation, a Social Science Space partner site, under the title “Five things to consider when designing a policy to measure research impact”

Education Minister Simon Birmingham said the pilots will test “how to measure the value of research against things that mean something, rather than only allocating funding to researchers who spend their time trying to get published in journals.”

This move to measure the non-academic impact of research introduces many new challenges that were not previously relevant when evaluation focused solely on academic merit. New research highlights some of the key issues that need to be addressed when deciding how to measure impact.

1. What should be the object of measurement?
Research impact evaluations need to trace a connection between academic research and "real world" impact beyond the university campus. These connections are enormously diverse and specific to a given context. They are therefore best captured through case studies.

When analysing a case study the main issues are: what counts as impact, and what evidence is needed to prove it? When considering this, Australian policymakers can use recent European examples as a benchmark.


For instance, in the UK’s Research Excellence Framework (REF) – which assesses the quality of academic research – the only impacts that can be counted are those directly flowing from academic research submitted to the same REF exercise.

To confirm the impact, the beneficiaries of research (such as policymakers and practitioners) are required to provide written evidence. This creates a narrow definition of impact, because impacts that cannot be verified, or that are not based on submitted research outputs, do not count.

This has been a cause of frustration for some UK researchers, but the high threshold does ensure the impacts are genuine and flow from high quality research.

2. What should be the time frame?
There are unpredictable time lapses between academic work being undertaken and it having impact. Some research may be quickly absorbed and applied, whereas other impacts, particularly those from basic research, can take decades to emerge.

For example, a study looking at time lags in health research found the time lag from research to practice to be on average 17 years. It should be noted, though, that time lapses vary considerably by discipline.

Only in hindsight can the value of some research be fully appreciated. Research impact assessment exercises therefore need to be set to a particular timeframe.

Here, policymakers can learn from previous trials, such as one conducted by the Australian Technology Network and the Group of Eight in 2012. This exercise allowed universities to submit impacts arising from research conducted during the previous 15 years.

3. Who should be the assessors?
It is a long established convention that academic excellence is decided by academic peers. Evaluations of research are typically undertaken by panels of academics.

However, if these evaluations are extended to include non-academic impact, does this mean the views of end-users of research must now be included? If so, voices from outside academia would need to be involved in the evaluation of academic research.

In the 2014 UK REF, over 250 “research users” (individuals from the private, public or charitable sectors) were recruited to take part in the evaluation process. However, their involvement was restricted to assessing the impact component of the exercise.

This option is an effective compromise between maintaining the principle of academic peer review of research quality while also including end-users in the assessment of impact.

4. What about controversial impacts?
In many instances the impact of academic research on the wider world is a positive one. But there are some impacts that are controversial – such as fracking, genetically modified crops, nanotechnologies in food, and stem cell research – and need to be carefully considered.

Such research may have considerable impact, but in ways that make it difficult to establish a consensus on how scientific progress impacts “the public good.” Research such as this can trigger societal tensions and ethical questions.

This means that impact evaluation also needs to consider non-economic factors, such as quality of life, environmental change and public health, even though it is difficult to place dollar values on these things.

5. When should impact evaluation occur?
Impact evaluation can occur at various stages in the research process. For example, a funder may invite research proposals where the submissions are assessed based on their potential to produce an impact in the future.

An example of this is the European Research Council Proof of Concept Grants, where researchers who have already completed an ERC grant can bid for follow-on funding to turn their new knowledge into impacts.

Alternatively, impacts flowing from research can be assessed in a retrospective evaluation. This approach identifies impacts where they already exist and rewards the universities that have achieved them.

An example of this is the Standard Evaluation Protocol (SEP) used in the Netherlands, which assesses both the quality of research and its societal relevance.

A novel feature of the proposed Australian system is the assessment of both engagement and impact, as two distinct things. This means there isn’t one international example to simply replicate.

Although Australia can learn from some aspects of evaluation in other countries, the Engagement and Impact Assessment pilot is a necessary stage to trial the proposed model as a whole.

The pilot – which will test the suitability of a wide range of indicators and methods of assessment for both research engagement and impact – means the assessment can be refined before a planned national rollout in 2018.


Andrew Gunn is a researcher in higher education policy at the University of Leeds. Michael Mintrom is professor of public sector management at the Australia and New Zealand School of Government and Monash University.

