
At a Glance: The UK’s Twin-track Approach to Measuring Impact

March 18, 2019

Around the world, there’s a persistent belief that policy, practice and the public should all be better informed by evidence. Whether it’s immunisation programmes, literacy initiatives, or development banks, we hope decisions made by politicians, civil servants, doctors, teachers and others make use of the best available evidence (along with their professional judgement). Likewise, the public is expected to engage with, understand and respond to information that makes a range of different claims.

This should be good news for academic researchers, creating a growing demand for the valuable outputs they supply. However, the real world is never as simple as linear impact flowcharts suggest. The ‘evidence’ is often contested, unclear, incomprehensible, inaccessible, or absent. And that professional judgement can often be accompanied by other priorities, human error, biases, and sheer randomness. As for the public, the recent surge of populism suggests things other than ‘facts’ or ‘truth’ can drive them.

Evidently

In the UK, a growing number of intermediary organisations are springing up to facilitate the flow of evidence from researchers to decision makers: What Works Centres, the Behavioural Insights Team (or nudge unit), the Policy Lab, the Parliament knowledge exchange team, the Cabinet Office Open Innovation team, and the Universities Policy Engagement Network (UPEN).

A wide range of research funding sources require either plans for, or evidence of, impact, e.g. Impact Acceleration Accounts and the Higher Education Innovation Fund (HEIF). The same focus on impact is also seen among funders outside government; the Wellcome Trust, the huge medical charity, for example, focuses on research translation and commercialisation.

However, the two key policies for evaluating (and incentivising) UK universities to have an impact (beyond citations) are the venerable Research Excellence Framework (REF) and the upstart Knowledge Exchange Framework (KEF). The two are not directly comparable, as each framework focuses on different types of impact and uses different measurement approaches. Despite that, it is often the same people in universities who have to deal with both acronyms, and as the blossoming relationship between the two is still at the flirting stage, it is a good moment to look deeper at the particulars of this courtship:

‘Impact’ in REF and KEF

1. REF: For REF2021 impact case studies, impact is defined as "an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia". KEF: Aims to provide more useful, visible, accessible and comparable information about knowledge exchange activities (including different forms of impact).

2. REF: A research-specific beauty contest. KEF: A holistic assessment of impact, including (but also beyond) research and teaching.

3. REF and KEF: Both use a case study approach, with each put forward by institutions.

4. REF: Largely qualitative peer review by 38 panels of subject-specific experts, though may feature quantitative indicators. KEF: Largely quantitative analysis of data, though 'evidence-based statements' are submitted for some areas.

5. REF: Whole sector and first-past-the-post. KEF: Clustered and benchmarked.

6. REF: Focused on wider economic and social impact. KEF: Focused on proxies for economic impact, e.g. consultancy income, IP, or spin-outs.

7. REF: Essential that impact is built on the institution's own research. KEF: Doesn't necessarily have to be built on the institution's research.

8. REF: Focused on linear impact pathways. KEF: Can account for a wider range of pathways, by focusing on a handful of outcome metrics.

9. REF: Subject-cluster level (units of assessment, or UoAs). KEF: Institution-level (though split across seven 'perspectives').

10. REF: A massive effort every six years by institutions. KEF: Hopefully only a little extra effort each year by institutions.

11. REF: UK-wide. KEF: England only at present.

12. REF: A quarter of REF2021's £2bn of funding will be for impact. KEF: £0 …for now, though likely to determine HEIF funding (at least) from 2020/21.

Many around the world are watching these developments in the UK with interest, in particular what the impact and return on investment of the UK exercises are compared with other national approaches. But how do other countries do it? And how is learning shared between them? A future article will look at just those questions…


Louis Coiffait is a commentator, researcher, speaker, and adviser focused on higher education policy, with a particular interest in impact and knowledge exchange. He has worked with Pearson, Taylor & Francis, SAGE Publishing, Wonkhe, think tanks, the Higher Education Academy, the National Foundation for Educational Research, the National Association of Head Teachers, the Teacher Training Agency, an MP, and a Minister. He has led projects on how publishers can support the impact agenda, the future of higher education (the Blue Skies series at Pearson), access to elite universities, careers guidance, enterprise education, and the national STEM skills pipeline (for the National Audit Office). He is also committed to volunteering, including over a decade as a school governor and chair of an eight-school federation in Hackney in East London, and recently as vice-chair of a school in Tower Hamlets. He spent three years as chair of Westminster Students' Union. He studied at York, UCLA and Cambridge. Louis is an RSA Fellow, amateur photographer, "enthusiastic" sportsman, proud East London citizen and Yorkshireman (really).

