Love it or loathe it (or anything in between), the “impact agenda” is here to stay. An increasingly important component of the United Kingdom’s Research Excellence Framework (REF), impact carries strong financial and reputational incentives for both institutions and researchers. As an agenda, it is the object of acclaim, concern, and overt condemnation. The various lines of criticism are probably well-known to the readers of this blog: the impact agenda is seen by many as posing excessive demands on academics without consideration of the time requirements, favouring some types of research topics and methods over others, reifying traditional academic elites, rewarding institutional game-playing, and turning academic life into a competitive game of winners and losers, at the expense of scientific collaboration. Critics worry about the “impact of impact” on academic autonomy, and the risk of it fundamentally altering the meaning, nature, and value of research. The very incentives that place impact high on the list of priorities for universities may lead individual researchers to pursue low-hanging impact fruits for motives primarily based around career advancement, focusing on research themes that have the potential to score highly for impact in the REF, rather than embarking on original, curiosity-driven, and open-ended research.
As if things weren’t complicated enough, another “agenda” has been gaining ground in UK academia, less visibly perhaps, but potentially with comparably profound consequences. Enter the ODA research agenda – i.e. the specific requirements and implications of research funded by the UK aid budget (a.k.a. Official Development Assistance, or ODA). This research has to fulfil the requirements of ODA compliance (in relation to both its country focus and its purpose) and is primarily aimed at generating impact in developing countries.
ODA-funded research has been rising in size and importance over the last few years. Its latest and most prominent manifestation is the £1.5 billion Global Challenges Research Fund (GCRF). As the Independent Commission for Aid Impact notes:
“[w]ith a budget of £1.5 billion over five years from 2016 to 2021, [GCRF] marks a change in the overall pattern of UK government funding for science and research, making a notable part of this funding contingent on whether or not the research themes fall within the international ODA definition” (emphasis added).
In very practical terms, this means universities across the UK – including those with limited experience of development research so far – have been busily gearing up to access these new sources of funding.
Questions about the consequences of this new agenda are starting to emerge. On the academic side, these questions closely mirror the points made above: will the new ODA research agenda divert focus from the pursuit of academic knowledge towards impact sensationalism? Will those disciplines that can more easily show visible, short-term results in developing countries be at an unfair advantage?
Yet aid specialists raise different concerns about the possible diversion of ODA resources away from their intended purpose, potentially diluting the anti-poverty imperative in favour of simply showing “good intentions” in addressing development problems. Nor does it allay anybody’s fears that some of these resources are effectively “double counted” in both the science and the aid budgets.
There are clear similarities between these two agendas – including the imperative to show value for money at a time of austerity and widespread cuts in public spending. Yet so far remarkably little reflection has gone into how these two agendas fit together and interplay – and the risk that they may be pulling academia in different directions, with negative consequences for everyone involved.
Where does this leave us? For a start, I believe that those of us working in international development research should be much louder and clearer in making the case that the current impact agenda, as embodied by the REF, is unfit to fully capture the complexity of doing research for development with ODA funding – and we need to be more specific in explaining why.
Many would agree that international development impact is more difficult to “track” and “show” because it happens overseas (and is thus more difficult to wrap up neatly in a REF impact case study). Yet limiting the discussion to the practical challenges of tracing contribution is reductive.
We need to start asking fundamental questions – what counts as impact? Impact for whom? – and explore the ways in which our intended impact interplays with local power dynamics (what we could call, in effect, the political economy of impact).
We need to recognise that trade-offs are likely to emerge – including, notably, between the academic imperative of publishing and the ever-changing policy and practice landscape. The more we take to heart the imperatives of equitable North/South partnership and co-production, the stronger these trade-offs are likely to be (and that’s a good thing!).
We need to acknowledge the complexity of impact pathways, which are likely to diverge from the linear model (peer-reviewed publication in academic journals, then knowledge sharing and dissemination, then uptake, then impact) which, though addressed by Research England in the latest version of its draft panel criteria, remains, de facto, a requirement for impact case studies in REF 2021.
Before being a “thing” in academia, impact was a common frame of reference for development practitioners. We need to start reconciling these two meanings for research for development impact. A more holistic conversation is essential and overdue.