In this first response to Ziyad Marar’s thought piece “On Measuring Social Science Impact” from Organization Studies, Professor Sue Fletcher-Watson — who works in a field whose direct purpose is to improve the quality of life of a specific group of people — shares why current metrics fall short and what we can do about it.
Impact is a watchword in modern academia. In every grant proposal, we make grand claims about the likely impact of our work. We will transform theory, generate new modes of enquiry and, perhaps, if we’re lucky, change some lives along the way.
But how do we measure this impact – how would we know when impact has been achieved, and who gets to decide? The impact factor purports to do this, as do other technical academic metrics, like the h-index. These metrics have received substantial criticism in recent years, even as they have become ever more influential in determining who gets hired or promoted. I have some sympathy with their dominance, I must say. As a psychologist working at the intersection of clinical, social and educational sciences, I do find metrics a useful shorthand – for example, when deciding which journals outside my discipline to submit to. I also report my h-index on my CV, and I think this helps to demonstrate my value as a collaborator to colleagues unfamiliar with my work.
At the same time, I am frustrated by the clunkiness inherent in boiling down a rich and nuanced set of academic outputs into a tidy digit, which implies a simple linear hierarchy of impact. The arguments against impact factors and other metrics in this regard are well-rehearsed, but there is one aspect that has received relatively little attention, in my opinion. That problem is the insularity of impact metrics. They are generated by academics, communicated by academics to other academics, used to measure academic-facing outputs, using exclusively academic raw data – publication numbers, citation counts. Is this really what we think constitutes “impact”?
In my own field of autism research, impact means – or should mean – changing the lives of autistic people for the better. There is quite literally no other legitimate goal for autism research (though some findings may be more proximal to that goal, and others farther away). But if I report to my autistic colleagues, advisors or participants that my h-index has gone up, what is that to them? Nothing. And rightly so. My career would not exist without the existence of autistic people. More than that, my career would not exist without the trials and tribulations of autistic people. It behooves me to seek to deliver – and therefore measure – impact in a way that means something to them.
What alternatives, then, are there for capturing impact in the social sciences? Well, researchers do love a good prize, and I would like to see more policy, practice and community organizations using awards and certificates to recognize the research they value. Altmetrics are not bad either – yes, it is just another number, but capturing the fact that real people are talking about and sharing a piece of work strikes me as an important part of the picture. Finally, I’d like to see a lot more community participation in the evaluation of new research ideas. Right at the start of the research journey, stakeholders should have the chance to consider the relevance of planned work for them, and be empowered to hold researchers accountable to their grand claims.
Reliance on simplistic impact metrics doesn’t just miss the target – it has us shooting in entirely the wrong direction. Let’s measure the real changes we want to see in the world.
Envisaging Prof. Fletcher-Watson’s argument in the wider academic context, I agree wholeheartedly that all too often, we seem to be writing to our own academic clique. It is too easy to attach more value to our impact factors than to the actual application of our research to the people we aim to help. Our impact on them needs to be our prime focus.