
Impact: Who Decides? 

May 12, 2022

In this first response to Ziyad Marar’s thought piece “On Measuring Social Science Impact” from Organization Studies, Professor Sue Fletcher-Watson, who works in a field whose direct purpose is to improve the quality of life of a specific group of people, shares why current metrics fall short and what we can do about it.

Impact is a watchword in modern academia. In every grant proposal, we make grand claims about the likely impact of our work. We will transform theory, generate new modes of enquiry and, perhaps, if we’re lucky, change some lives along the way.

But how do we measure this impact – how would we know when impact has been achieved, and who gets to decide? The impact factor purports to do this, as do other technical academic metrics, like the h-index. These metrics have received substantial criticism in recent years, even as they have become ever more influential in determining who gets hired or promoted. I have some sympathy with their dominance, I must say. As a psychologist working at the intersection of clinical, social and educational sciences, I do find metrics a useful shorthand – for example, when deciding which journals outside my discipline to submit to. I also report my h-index on my CV, and I think this helps to demonstrate my value as a collaborator to colleagues unfamiliar with my work.


At the same time, I am frustrated by the clunkiness inherent in boiling down a rich and nuanced set of academic outputs into a tidy digit, which implies a simple linear hierarchy of impact. The arguments against impact factors and other metrics in this regard are well-rehearsed, but there is one aspect that has received relatively little attention, in my opinion. That problem is the insularity of impact metrics. They are generated by academics, communicated by academics to other academics, used to measure academic-facing outputs, using exclusively academic raw data – publication numbers, citation counts. Is this really what we think constitutes “impact”?

In my own field of autism research, impact means – or should mean – changing the lives of autistic people for the better. There is quite literally no other legitimate goal for autism research (though some findings may be more proximal to that goal, and others farther away). But if I report to my autistic colleagues, advisors or participants that my h-index has gone up, what is that to them? Nothing. And rightly so. My career would not exist without autistic people. More than that, it would not exist without the trials and tribulations of autistic people. It behooves me to seek to deliver – and therefore measure – impact in a way that means something to them.

What alternatives, then, are there for capturing impact in the social sciences? Well, researchers do love a good prize, and I would like to see more policy, practice and community organizations using awards and certificates to recognize the research they value. Altmetrics are not bad either – yes, it is just another number, but capturing the fact that real people are talking about and sharing a piece of work strikes me as an important part of the picture. Finally, I’d like to see a lot more community participation in the evaluation of new research ideas. Right at the start of the research journey, stakeholders should have the chance to consider the relevance of planned work for them, and be empowered to hold researchers accountable to their grand claims.

Reliance on simplistic impact metrics doesn’t just miss the target, it’s like we’re shooting in entirely the wrong direction! Let’s measure the real changes we want to see in the world.

Sue Fletcher-Watson is chair of developmental psychology and director of the Salvesen Mindroom Research Centre at The University of Edinburgh.


1 Comment
Dr Sue Oliver

Envisaging Prof. Fletcher-Watson’s argument in the wider academic context, I agree wholeheartedly that all too often, we seem to be writing to our own academic clique. It’s too easy to attach more value to our impact factors than to the actual application of our research to the people we aim to help. Our impact on them needs to be our prime focus.