We’d all agree, wouldn’t we? Reducing health and social problems caused by alcohol abuse is a good idea. So, to set such a policy in motion we just need the evidence to show which interventions are most effective. Here’s where social scientists can be flown in, so to speak, like the Flying Doctor in those old Australian movies, to provide the evidence on which policies can be based.
To use the language of youthful social media, evidence-based policy is ‘trending’. The EU, UN, Council of Europe, the OECD and most developed economies all make use of organisations that collate, analyse and make available high-quality socioeconomic and demographic data for international comparisons and use in policy development. Some of these organisations act as honest brokers, reviewing research and making recommendations. One of the most powerful is the UK National Institute for Clinical Excellence (NICE), whose reviews of the cost benefits of treatments directly influence whether the NHS provides them. In social science the Dutch have a similarly powerful influence on government policy, but through a body that generates the research activity itself.
You would therefore be forgiven for believing that the social sciences are heralding a new Age of Reason in which systematic, scientific studies of social activity and experiences are providing the foundations for rational, evidence-based policies. Yet as these interactions become a topic of study in their own right, the complexities of the relationship between social science research and policy making are becoming more apparent.
These complexities are unravelled in a fascinating collection of 12 papers brought together in the current issue of Contemporary Social Science, guest edited by Linda Hantrais, Ashley Lenihan and Susanne MacGregor from LSE. The papers review the interactions of many different social science disciplines with a variety of policy issues in settings as varied as India, the EU and the Global South.
Returning to policies to reduce problems caused by alcohol consumption, Franca Beccaria, from the Institute for Training and Research in Turin, points out in this special issue that social science insights have been ignored in favour of broad-brush epidemiological studies. In other words, a number-crunching approach has led to attempts to reduce aggregate levels of alcohol consumption. Yet social research demonstrates that problem drinking relates to specific people in particular situations. Social research would lead to a policy focus on risky and harmful drinking rather than attempts to reduce alcohol use overall.
This advocacy of a change in policy emphasis illustrates how deeply policies are embedded in the traditions from which they are formulated. So often these formulations come out of traditions that have no distinct scientific basis. Sticking with alcohol reduction policies, Alfred Uhl of the Austrian Public Health Institute in Vienna points out in his paper that the Protestant-dominated Nordic countries rely on reductions in supply, through high prices and limited outlets. By contrast, policies in Southern European countries, with a strong Catholic tradition, emphasise problematic alcohol use and treatment for alcoholism. These differences of course make a unitary Europe-wide policy difficult to agree and even more difficult to implement.
Such considerations illustrate the challenge of generating neutral, directly effective, policy-relevant research. As Maren Duvendack from the University of East Anglia and Kate Maclean state, ‘evidence can never be apolitical’. They consider the research exploring the benefits of microfinance programmes, showing that an emphasis on broad-scale quantitative methodologies, such as relating microfinance uptake to children’s schooling, labour supply, fertility and nutritional status, ignores many of the negative consequences that can only be garnered from face-to-face interviews. These include the experience of debt in cultures where that is problematic, and the further entrenchment of the exclusion of oppressed minorities and of domineering hierarchies.
The rich qualitative issues that Duvendack and Maclean emphasise are less attractive to policy makers than the apparently clear and confident recommendations that come from economists. This is clearly illustrated by Edwin van de Haar’s account of the Netherlands Bureau for Economic Policy Analysis, where he works and which boasts a long tradition of advising government. Van de Haar claims his bureau is taken seriously because in times of uncertainty policy makers cling to ‘the magic of numbers’. Unlike many organisations, the Netherlands Bureau does much of the research itself, rather than interpreting the work of others. This presumably helps it to formulate projects in ways that are relevant to policy makers, and also gives it an edge in making sense of the results. Yet, interestingly, there are attempts by some governments to do more to explore what people feel and think and to put less emphasis on the economics of human activity.
This question of what sort of research is appropriate or acceptable for policy makers is coming to the fore in the UK as a network of ‘what works’ units is being set up around the country. Dan Bristow and his colleagues from Cardiff University discuss what works for ‘what works’ units, and conclude that different local contexts can require quite different forms of evidence. Practitioners may also simply not have the time to engage with the evidence effectively.
Without a full consideration of context and of how evidence can be used, policy guidelines can have serious unintended consequences. This is well illustrated by Karen Anderton and James Palmer from the University of Oxford in their exploration of the impact of EU biofuels targets. Subsequent examinations showed that these targets had deleterious effects on food prices, food security, deforestation and the loss of biodiversity. They make a strong case for policy recommendations to be part of an ongoing iterative process, not a one-off recommendation. That way the evaluation of any evidence-based interventions becomes an integral part of the research process.
The iterative research process can also help to highlight the way any intervention has been framed, as illustrated by an internationally significant examination of second-language learning policy in Estonia. As Tatjana Kiilo and Dagmar Kutsar from the University of Tartu demonstrate in their paper, the government policy of encouraging Russian-speaking Estonians to develop fluency in the Estonian language can be supported by arguments about nation-state building and national identity, or it can be presented as valuable for developing social cohesion and improving employment opportunities. These different framings raise different sorts of research questions and require different social research methodologies.
What is true of the framing of issues within Estonia is even more relevant when attempts are made to compare the implications of social science evidence across different countries. This special issue, with its international examples of research, shows just how challenging it is to extrapolate results from one country to another.
The message, then, that emerges from all the papers in this special issue of Contemporary Social Science is that most, probably all, government policies require evidence derived from social science in order to be rational and effective. But this is not a simple one-way process from researchers to policy makers. Social scientists need to be aware of the political and administrative constraints on what research is relevant, and policy makers require an informed understanding of the limits of what scientific evidence can provide.