
Do We Turn Away from the ‘Grimpact’ of Some Research?

August 6, 2019

Andrew Crane’s recent post on Negative Impact on the LSE Impact blog, and its description of how researchers rushing to create impact are confronted with stark moral choices about how they report their findings, struck a chord with our own research and with what we term “grimpact.”

Acknowledging that impact is not always positive, we argue there is an implicit optimism within the impact agenda that blinds both the public and the mechanisms designed to evaluate impact. This blindness heightens the risk of grimpact.

That academic research might have negative consequences, in theory or in practice, is not new. There are countless historical examples of scientific ‘advances’ whose application led to widespread harm, ranging from the discovery of chlorofluorocarbons as fridge coolants to the use of racial theory in population registration in the Belgian Congo. There has also been a long-term rise in public uncertainty over the power of scientific research to generate beneficial change, a trend that has given rise in recent decades to attention to research’s ethical, legal and social aspects and to responsible research and innovation approaches.

What makes the current context different, and what has increased the likelihood of grimpact, is growing public and political pressure for research to quantify and justify its existence through the contributions it makes to society. In effect, what makes impact “positive” to policy-makers is that the change is countable, not that the nature of the change is beneficial. Compounding this are policies that require examples of impact to be developed for evaluation exercises against impact definitions that promote discourses of research’s “benefit” or “value.” Grimpact is therefore not exclusively attributable to attempts to produce impact; it relates to a wider set of auditing processes designed to make research accountable to the public purse. This, we argue, constitutes the ‘implicit optimism’ of the impact agenda.

This article by Gemma Derrick and Paul Benneworth originally appeared on the LSE Impact of Social Sciences blog as “Grimpact – Time to acknowledge the dark side of the impact agenda” and is reposted under a Creative Commons license (CC BY 3.0).

Creating research impact does not depend solely on good researchers with high-quality knowledge, but also on the good intentions of actors in their wider environment to absorb and exploit that knowledge. Yet controlling (not just monitoring) what happens with research findings ‘beyond academia,’ as Andrew Crane’s post suggests, can be extremely time-consuming. We perceive that some of the problems arise specifically because researchers cannot afford this time, and lack the skills and the freedom (given the political need to demonstrate public value from public investment) to deliberately and selectively place their findings with users who share their ambition to build better societies.

Grimpact, therefore, is less about the absence of particular control mechanisms within science, or indeed individual researchers’ moral failings, and more about how a perfect storm of conditions provides an environment in which grimpact can flourish. This diagnosis forms the basis for our own Grimpact research project, in which we develop a model to explain why publicly-minded scholars might allow their research to be used in ways that undermine public support for research.

On the basis of four longitudinal case studies, we examine the mechanisms of grimpact and offer a fuller conceptualization of its nature, its risk and its threat to the academic reward system. We identify four main characteristics that allow grimpact to thrive:

  • that it results from a violation of normal impact, the existence of which drives many impact definitions used in evaluation exercises and fuels an implicit optimism;
  • that it is difficult to attribute blame to the researcher, the user, or the research itself; in all cases, grimpact emerged when all three actors behaved irresponsibly;
  • that grimpact does not solely emerge from an event of research misconduct; and
  • that, once unleashed, grimpact is contagious: once co-opted by actors beyond traditional academic control mechanisms, it takes on new forms of argument that downplay evidence in favor of emotion.

One example that may well be known to readers, and arguably the most infamous example of grimpact, is the case of MMR and autism. The original 1998 paper was published in The Lancet, carrying all the quality indicators of academic excellence that come with publication in a highly regarded journal. The journal’s ex-ante quality-control system (which assumes the highest ethical standards by authors) was interpreted by publics as proof of the supposed link between the vaccine and autism. The study caused a vaccine scare that today is seeing measles cases and fatalities in developed countries soar to levels unseen since the introduction of the vaccine.

Post-publication, the paper was comprehensively debunked, the researchers were stripped of their medical licences, and the article was retracted. Regardless, the (by now infamous) lead researcher became a voice for a message promoting anti-vaccine sentiments, at which point traditional scientific sanctions had little influence on its spread. Furthermore, the standard reaction of normal science (on the assumption of normal impact) was futile, whether producing further evidence against the message or attempting to falsify the hypothesis through replication studies.

The problem remains that, behind the implicit optimism of impact agendas, the scientific sphere is ill-prepared to counter the contagion of grimpact with anything beyond traditional armaments such as peer review, sanctions, retractions and the production of yet more evidence. For MMR, these defenses were insufficient to stop the contagion or to alter a message fueled by an emotive reaction to the fear of autism among groups immune to concerns about robustness or replicability. Indeed, the spread of grimpact, and its adoption and transformation by further non-academic actors, cannot be fought by providing still more evidence. The result is a public that is “sick of experts.”

As more countries sign up to research accountability through societal impact, and incentivize it through evaluation policies, we are concerned about the absence of appropriate governance mechanisms to minimize the risk and prevent the contagion of grimpact. Instead, well-intentioned research evaluation policies designed to restore public faith in scientific investment may serve only to fuel attacks on its validity. We are currently witnessing a tide of relativism in which, we are told, facts do not exist; ultimately this leaves publics more vulnerable to persuasion by charlatans and ‘alternative facts’. The question remains how science systems, with their blind commitment to impact, might prepare for and respond to the risk of grimpact.

About the GrImpact Research Group:
The GrImpact research group is an international collaboration between Gemma Derrick (Lancaster University), Paul Benneworth (Western Norway University of Applied Sciences), Rita Faria (University of Porto), Gunnar Sivertsen (Nordic Institute for Studies in Innovation, Research & Education), and David Budtz-Pedersen (Humanomics Research Centre, Aalborg University). The group tweets at @HEGrimpact.

Their preliminary findings — Derrick GE, Faria R, Benneworth P, Budtz-Pedersen D & Sivertsen G. (2018) “Towards characterising negative impact: Introducing Grimpact” — were presented at the 23rd International Conference on Science and Technology Indicators, ‘Science, Technology and Innovation indicators in transition,’ in September 2018 in Leiden, The Netherlands.

Gemma Derrick is a senior lecturer in higher education at the Centre for Higher Education Research & Evaluation at Lancaster University. Her most recent book, The Evaluators’ Eye, was published in 2018. Paul Benneworth is a professor of innovation and regional development in the Department of Business Administration and Social Sciences at the Western Norway University of Applied Sciences and a senior researcher at CHEPS, the Netherlands.
