
Impact and Assessing Public Engagement

July 18, 2019

A survey team photo taken at a public engagement workshop in Salavan, Lao People’s Democratic Republic. (Photo: Amphayvone Thepkhamkong)
This article by Marco J. Haenssgen originally appeared on the LSE Impact of Social Sciences blog as “Developing a finer grained analysis of research impact: Can we assess the wider effects of public engagement?” and is reposted under the Creative Commons license (CC BY 3.0).

Authors writing for the LSE Impact Blog have often argued for the relevance and importance of public engagement, which remains high on researchers’ and funders’ agendas, especially in the medical sciences. The UK Medical Research Council (MRC) advises, for instance, that “effective public engagement is a key part of the MRC’s mission and all MRC-funded establishments are encouraged to dedicate resources to support this area of work.” Over the 2005-2018 period, the Wellcome Trust also awarded more than £30 million for dedicated public engagement projects.

However, much can go wrong in public engagement. Some observers have stressed the risks to researchers through the misrepresentation of scientific research, the possible reputational consequences of an active social media presence, or the harm that can be caused by toxic comments online. Target and non-target groups can also experience negative consequences and outright harms. As a form of health communication, public engagement can create misunderstanding, resistance, or actions with problematic and unanticipated consequences. Notably, in Denmark, efforts to raise public awareness of drug resistance led to a leafleting campaign urging readers not to have sex with pig farmers.

After several years of practice, can we say with confidence what public engagement has achieved, where it may be a good and a bad use of money, and what design principles we should employ to minimise its unintended consequences? I would argue the answer is no.


This post draws on the author’s co-authored paper, “Translating antimicrobial resistance: a case study of context and consequences of antibiotic-related communication in three northern Thai villages,” published in Palgrave Communications.


Methods for the evaluation of public engagement do exist, have been advocated on this blog, and initiatives like The Global Health Network have even established comprehensive evaluation databases. But the practical implementation of evaluation designs is often rudimentary (e.g. based on “evaluation forms” handed out during an event) and typically limited to the positive and intended outcomes of an activity. What if the seemingly successful activity was financially wasteful, undermined the coherence of a broader public engagement programme, led people to behave worse in areas that were not of interest to the researchers, or produced positive effects that evaporated immediately after the event? We should not only measure “impact” with its positive connotations, but also “grimpact”, the unintended negative side-effects of research and public engagement.

To improve evaluation practice in health-related public engagement, we can look for guidance from development aid evaluation, which routinely uses five criteria to assess development projects and programmes (a brief illustrative sketch follows the list):

  1. Effectiveness: To what extent have our objectives been achieved? These objectives can pertain to the target population, but they can also address for instance collaborative relationships or new research insights.
  2. Efficiency: Operational efficiency considers whether resources were used appropriately to produce the activity; cost-effectiveness considers total costs relative to the population reached or per effective engagement; and allocative efficiency considers whether the resources could have been employed more usefully to achieve the same goal.
  3. Impact: What are the positive and negative, intended and unintended consequences of the project, and the associated equity implications? Larger-scale programmes may also relate to broader societal-level impacts like mortality or enrolment rates.
  4. Relevance: Do the engagement objectives correspond to target group requirements, national and global priorities and partner/donor policies? Relevance also addresses whether the activity suggested a plausible mechanism to achieve its objectives, and whether it aligned with parallel engagement activities.
  5. Sustainability: Are the effects and impacts likely to persist beyond the end of the activity?
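
To make the framework tangible, here is a minimal sketch of how the five criteria could be recorded for a single engagement activity. It is purely illustrative: the 1-5 scale, the field names, and the example ratings are assumptions of this sketch, not part of the evaluation literature or of the project discussed below.

```python
# A minimal sketch of recording the five criteria for one activity.
# The 1-5 scale, field names, and example ratings are all assumptions.
from dataclasses import dataclass, field

CRITERIA = ("effectiveness", "efficiency", "impact", "relevance", "sustainability")

@dataclass
class CriterionRating:
    score: int     # assumed scale: 1 (weak) to 5 (strong)
    evidence: str  # what the judgement rests on

@dataclass
class EngagementEvaluation:
    activity: str
    ratings: dict[str, CriterionRating] = field(default_factory=dict)

    def summary(self) -> str:
        lines = [f"Evaluation: {self.activity}"]
        for criterion in CRITERIA:
            rating = self.ratings.get(criterion)
            if rating is None:
                lines.append(f"  {criterion:<14} not yet assessed")
            else:
                lines.append(f"  {criterion:<14} {rating.score}/5 ({rating.evidence})")
        return "\n".join(lines)

# Hypothetical usage for a village workshop:
workshop = EngagementEvaluation("Knowledge exchange workshop")
workshop.ratings["effectiveness"] = CriterionRating(4, "awareness rose vs. comparison group")
workshop.ratings["impact"] = CriterionRating(2, "some participants increased antibiotic use")
print(workshop.summary())
```

A structure like this makes the unassessed criteria visible, which matters because, as argued above, evaluations tend to report only the intended and positive outcomes.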

To illustrate the application of these criteria, let us take the example of a recent interdisciplinary health behaviour research project about drug resistance in Southeast Asia. The project involved knowledge exchange workshops with 150 participants in five villages in Thailand and Laos, an international photo exhibition showcasing traditional healing in Thailand that drew 500+ visitors, and social media work that reached 350,000 impressions on Facebook, Twitter, LinkedIn, and Reddit. The project collected survey data, interviews, observations, and oral and written feedback, all of which enable an informal review of effectiveness, efficiency, relevance, impact, and sustainability. Our objectives were to (1) share information about drug resistance and local forms of treatment with our research participants, (2) learn from them about medicine use and health behaviours locally and internationally, and (3) spark interest in our research among the non-academic public.


On the face of it, we achieved these objectives (effectiveness). For example, survey data showed that workshop participants’ awareness of drug resistance was 30 percentage points higher three months after the event (compared with a 17-percentage-point increase in the villages more generally), and we received positive event feedback and extensive engagement with our social media campaigns (e.g. 12,900 engagements on Facebook/Twitter). The engagement also enabled us to formulate new research hypotheses based on the insights from the workshop participants, and testimonials from exhibition visitors included statements such as “So enlightening and so inspiring – who knew medicine was so fun!” Yet, if we adhere to the five evaluation criteria, we cannot automatically consider the engagement a success merely because it achieved its stated goals.
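
For readers who want the arithmetic spelled out: the percentage-point figures are simple differences between follow-up and baseline awareness rates. In the sketch below, the baseline and follow-up rates are hypothetical, chosen only so that the differences reproduce the 30- and 17-point changes reported above.

```python
# Illustrative arithmetic behind the percentage-point comparisons.
# Baseline and follow-up rates are hypothetical; only the differences
# (30 and 17 percentage points) come from the article.
def pp_change(baseline: float, follow_up: float) -> float:
    """Change in awareness between two survey rounds, in percentage points."""
    return (follow_up - baseline) * 100

participants = pp_change(baseline=0.40, follow_up=0.70)  # assumed rates -> +30 pp
villages = pp_change(baseline=0.40, follow_up=0.57)      # assumed rates -> +17 pp

# The gap suggests (but does not prove) an effect beyond village-wide trends.
print(f"Workshop participants: +{participants:.0f} pp")
print(f"Villages overall:      +{villages:.0f} pp")
print(f"Difference:            +{participants - villages:.0f} pp")
```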

The broader assessment was indeed more mixed once we went beyond effectiveness as goal achievement. For example, we also observed negative impacts: some villagers increased their antibiotic use in a potentially detrimental way, and one workshop participant even felt sufficiently informed about antibiotics to start selling them in her local grocery store. The relevance of the activities, against the backdrop of drug resistance being one of 10 threats to global health in 2019, might be obvious to global health researchers and practitioners. This would in principle entail a positive assessment of the relevance criterion, but drug resistance is less clearly a priority issue for rural populations that often face several livelihood constraints like fluctuating incomes, discrimination, or the risk of droughts and floods. The isolated engagement activities also cannot easily claim sustainable outcomes, which again weakens the overall assessment. (The costs of reaching the target groups ranged from £0.85 per 1,000 social media impressions to £16 per exhibition visitor and £35 per workshop participant, but we cannot judge efficiency in the absence of more extensive reference values.)
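
The parenthetical unit costs are plain ratios of spend to reach. In the sketch below, the reach figures (350,000 impressions, 500 visitors, 150 participants) come from the text, while the total budgets are hypothetical values back-calculated to match the quoted unit costs.

```python
# Cost-effectiveness as cost per unit of reach. Reach figures come from
# the article; total budgets are hypothetical, back-calculated to match
# the quoted unit costs (GBP 0.85/1,000 impressions, GBP 16, GBP 35).
activities = {
    # name: (total cost in GBP -- assumed, reach, reach unit)
    "social media": (297.50, 350_000, "impression"),
    "photo exhibition": (8_000.00, 500, "visitor"),
    "village workshops": (5_250.00, 150, "participant"),
}

for name, (cost, reach, unit) in activities.items():
    if unit == "impression":
        print(f"{name}: £{1000 * cost / reach:.2f} per 1,000 {unit}s")
    else:
        print(f"{name}: £{cost / reach:.2f} per {unit}")
```

As the article notes, such ratios only become meaningful once comparable reference values exist, which is precisely why a shared knowledge base of engagement evaluations would help.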

Goal achievement, or “effectiveness”, should therefore be only one of the criteria (alongside efficiency, impact, relevance, and sustainability) against which we evaluate public engagement. To improve evaluation practice and build a knowledge base of the benefits and risks of public engagement, funders and academic institutions should support researchers with teams of experienced external evaluators who accompany public engagement projects from the design phase onward, if only for a sample of projects. While these evaluations should be independent, researchers and evaluators could work closely together to inform each other’s work, and subsequently co-own the evaluation findings and publish them jointly to add to the body of public engagement knowledge.


Marco J. Haenssgen is an assistant professor in global sustainable development at the University of Warwick and an associate fellow at the Institute of Advanced Study. He is a social scientist with a background in management and international development and experience in aid evaluation, intergovernmental policy making, and management consulting. His research emphasizes marginalization and health behavior in the context of health policy implementation, technology diffusion, and antimicrobial resistance, with a geographical focus on Southeast Asia.
