
Even an Imperfect Metrics Regime Has Value

Rather than giving up on measuring impact, let's go a bit slow until we've worked out some of the bugs. (Photo: Patty O'Hearn Kickham/Flickr/CC BY 2.0)

This article by Jane Tinkler originally appeared on the LSE Impact of Social Sciences blog as “Rather than narrow our definition of impact, we should use metrics to explore richness and diversity of outcomes” and is reposted under the Creative Commons license (CC BY 3.0).
For the full report, supplementary materials, and further reading, visit the LSE's HEFCE metrics section.
One of the most common concerns that colleagues discussed with us is that impact metrics focus on what is measurable at the expense of what is important. But, as the report highlights in relation to excellence, it's more than this. When you design a metric for impact, you are explicitly constructing a definition of what impact is, and when you go on to use that metric, you are locking in that definition. What we know is that impact is multi-dimensional, the routes by which impact occurs differ across disciplines and sectors, and impact changes over time. We would need a really broad range of metrics to be able to usefully show this variety.

Source: Wilsdon, J., et al. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. DOI: 10.13140/RG.2.1.4929.1363
The quantitative evidence supporting claims for impact was diverse and inconsistent, suggesting that the development of robust impact metrics is unlikely … impact indicators are not sufficiently developed and tested to be used to make funding decisions.
So for the impact component of the REF, the Metric Tide report concluded that it is not currently feasible to use quantitative indicators in place of narrative impact case studies. The danger is that doing so would narrow the concept of impact, defining it by the easy availability of indicators for some types of impact and not for others. For an exercise like the REF, where HEIs are competing for funds, defining impact through quantitative indicators is likely to mean universities 'play safe' about which impact stories have greatest currency and therefore should be submitted. This would mean showing less of the diversity and richness of the impacts that we create from our research.
Another reason not to encourage any funder to specify a set of impact metrics at a particular point in time is the growth in the number of tools that can provide some indication of impact. Individual academics are collecting more information about impact-relevant activities and their effects, and universities are making better use of the information they and others already hold to do the same. The recommendations in the Metric Tide report around the improvement of research infrastructure and the greater use of identifiers such as ORCID were made in the hope that this will get easier. It would be a shame if we were not able to make best use of any new tool just because it was not on some specified list.

But that is not to say impact metrics are not useful or needed. We are in the fairly early days of our understanding of the ways in which impact happens, and both qualitative and quantitative indicators can be a source of learning about how impact works in each of our disciplines, locations or sectors. So we should use the dataset of impact case studies as a learning tool about the ways in which successful impact was created, using what methods and with what effects. Although as yet many impact metrics give only partial information, for me some information is always better than none.