
The Waves of the Metric Tide

August 17, 2015

While the initial splash made by The Metric Tide, an independent review of the role of metrics in research assessment, has died down since its release last month, the underlying critique continues to make waves.

The Metric Tide, the final report from the Independent Review of the Role of Metrics in Research Assessment and Management chaired by Professor James Wilsdon, proposes a framework for the responsible use of metrics, such as impact factors and h-indices, designed to ensure they are implemented in ways that support the diversity of UK research. Academics have often criticised metrics as an attempt to ‘quantify what cannot be quantified’ and as restricting the natural flow of research while determining, and deflecting, the course of careers. On the other side of the table, however, many within the sector view metrics as an effective way to measure scholarly activity on demand.
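For readers unfamiliar with how such indicators are calculated, a minimal sketch of the h-index, one of the metrics named above, may help: an author’s h-index is the largest number h such that h of their papers have each been cited at least h times. The function and the sample citation counts below are illustrative only and do not come from the report.

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # this paper still has at least `rank` citations
        else:
            break      # remaining papers are cited even less often
    return h

# Illustrative example: five papers cited 10, 8, 5, 4 and 3 times
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```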

The ripples from The Metric Tide are still being felt. What are the indicators we want to measure by? How can metrics be effectively integrated into our assessment system?

This is not the time to apply metrics widely, says the University of London’s Meera Sabaratnam at the Disorder of Things blog. Right now, she insists, “many structural conditions in the present UK HE system would inhibit the responsible (meaning expert-led, humble, nuanced, supplementing but not substituting careful peer review, reflexive) use of metrics,” including issues such as funding volatility, the vagaries of lay oversight and a lack of similar experiences “by those wielding the judgements.”

She also cites the ill effects of the “rankings culture” on using metrics wisely, concerns which echo both the San Francisco Declaration on Research Assessment and the Leiden Manifesto. Neither of those open letters rejects the use of metrics outright, but both see many existing metrics, such as journal impact factors, as too blunt a tool for the delicate tasks required.

Of course, blunt or fine, metrics are already being used to make decisions, notes Steven Hill, the head of research policy at HEFCE, and so it is of immediate importance to use existing measurements responsibly even as we recognize metrics will not be, as he put it, a ‘silver bullet’:

While we think about the future, it is easy to forget that the REF is already all about metrics of research performance. While there is only limited use of quantitative data as an input to the exercise, the outputs of the exercise, the quality profiles, are themselves metrics of research performance.

A similar point was made by Jane Tinkler, the senior social science advisor at the UK Parliamentary Office of Science and Technology and a consultant on The Metric Tide:

If you consider that nearly 7,000 impact case studies were recently submitted to the REF and (I’m guessing) every single one of them contained some kind of indicator to evidence their impact claims, you might expect the academic community to be more enthusiastic about the use of impact metrics than for some other types of quantitative indicators.

And so a limited but growing use of metrics in making decisions alongside more qualitative measures is inevitable, as Wilsdon himself noted in The Guardian after the release of The Metric Tide:

Metrics should support, not supplant, expert judgement. In our consultation with the research community, we found that peer review, despite its flaws and limitations, continues to command widespread support. We all know that peer review isn’t perfect, but it is still the least worst form of academic governance we have, and should remain the primary basis for assessing research papers, proposals and individuals, and for assessment exercises like the REF. At the same time, carefully selected and applied quantitative indicators can be a useful complement to other forms of evaluation and decision-making. A mature research system needs a variable geometry of expert judgement, quantitative and qualitative indicators. Academic quality is highly context-specific, and it is sensible to think in terms of research qualities, rather than striving for a single definition or measure of quality.

A good, and necessary, first step is to produce a universally used and recognized identifier for all academic staff, argues Simon Kerridge, the director of research services at the University of Kent. These would be a sort of ISBN or DOI for researchers. Use of the leading contender at present, the ORCID identifier, is already mandated by the Wellcome Trust, which has said it believes that “by moving from full names to unique identifiers (referring to Dr Craig Roberts as 0000-0002-9641-6101, rather than ‘C. Roberts’) different interested parties can start reliably talking about the same people, which is a vital first step toward any deeper understanding of researchers, artists, and their activities.”
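To make the Wellcome Trust’s point concrete, here is a minimal sketch of looking a researcher up by ORCID iD rather than by name. It assumes the ORCID public API endpoint at pub.orcid.org/v3.0 and its JSON field layout, neither of which is described in the article itself, so treat the details as indicative rather than definitive.

```python
import json
import urllib.request

def fetch_orcid_name(orcid_id: str) -> str:
    """Resolve an ORCID iD to a display name via the public ORCID API
    (assumed endpoint and JSON layout; adjust if the API differs)."""
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/record"
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        record = json.load(resp)
    name = record["person"]["name"]  # assumed field path in the record JSON
    return f'{name["given-names"]["value"]} {name["family-name"]["value"]}'

# The identifier quoted in the Wellcome Trust statement above
print(fetch_orcid_name("0000-0002-9641-6101"))
```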

“We support using a ‘basket of metrics’ or indicators to measure the multiple qualities of the many different entities to be investigated such as articles, journals, researchers or institutions,” Elsevier’s Peter Darroch, the company’s senior product manager for research metrics, wrote at the London School of Economics’ The Impact Blog. “One data source or a single metric is never sufficient to answer questions around research assessment as each metric has its weaknesses, and if used in isolation will create a distorted incentive. … We support: open, standardised methodologies for research metrics; open, persistent, verified, unique global identifiers; and agreed standards for data formats and semantics.”

Elsevier has also partnered with a number of academic institutions in Britain, the U.S. and Australia to produce Snowball Metrics, an effort to produce a global standard for institutional benchmarking that covers the entire spectrum of research activities.

But standardization and expertise will take time – and patience from every quarter. Hence the new forum developed by the Wilsdon-chaired review, a website known as Responsible Metrics. As Tinkler wrote in urging funders not to get ahead of the game by insisting on any specific metrics just yet:

Individual academics are collecting more information about impact-relevant activities and their effects, and universities are making better use of the information they and others already hold to do the same. The recommendations in The Metric Tide report around the improvement of research infrastructure and the greater use of identifiers such as ORCID were made in the hope that this will get easier. It would be a shame if we were not able to make best use of any new tool, just because it was not on some specified list.

This article was prepared by Mollie Broad, PR assistant at SAGE, and Michael Todd, Social Science Space editor.

