Public Policy

COVID-19: Blood on Whose Hands?

May 16, 2021

(Image: knowns and unknowns matrix)

As the UK Deputy Chief Medical Officer Jonathan Van Tam said in a recent press conference, the current statistics on COVID-19 infections may be almost as good as we will ever get. When the reports of positive tests in the UK – often wrongly described as ‘cases’ – drop towards 1,000 per day, the overwhelming majority will be false. This is not a failure of the tests but a feature of the mathematics. As you look for increasingly rare events, even with good tests, false positives rapidly come to swamp true ones.
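The arithmetic behind this is Bayes' theorem applied to screening: the proportion of positive results that are true positives depends on how common the infection is, not just on how accurate the test is. A minimal sketch makes the point – note that the sensitivity, specificity and prevalence figures below are illustrative assumptions, not the actual characteristics of any UK test:

```python
# Base-rate effect: as true infections become rare, false positives
# come to dominate even with an accurate test. All numbers here are
# illustrative assumptions, not real UK test data.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a positive result reflects a true infection."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Suppose a test with 99% sensitivity and 99% specificity:
for prevalence in (0.05, 0.005, 0.0005):
    ppv = positive_predictive_value(prevalence, 0.99, 0.99)
    print(f"prevalence {prevalence:.2%}: PPV = {ppv:.1%}")
```

With infection at 5% of those tested, most positives are real; at 0.05%, the same test returns mostly false positives – which is the sense in which falling case counts degrade the statistics.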

The opening of the end game also means the start of attempts to control the story ahead of the inevitable lessons learned inquiry. Different factions want to get their retaliation in first and pin blame on their opponents. The complex story of late 2019 and early 2020 is reduced to claims that X, Y and Z had blood on their hands because they did not immediately follow the injunctions of A, B, and C.

Sociologists have been studying organizational failures leading to major accidents, disasters and catastrophes for more than 50 years. I cut my own professional teeth in research on decision-making in cases of child abuse and neglect and how the ordinary work of health care, social work and legal agencies could go wrong – and how this was often misunderstood by subsequent inquiries. I came into pandemic planning in the early 2000s partly through a study of how the silent epidemic of deaths in the 2003 European heatwave went unnoticed until bodies piled up in the mortuaries. It was an experience that closely paralleled the emergence of SARS and COVID, and of Ebola in West Africa in 2013.

Science policy elites in the UK may not have much respect for sociology but this work has been extensively used in understanding medical errors, industrial accidents, nuclear power station meltdowns and space shuttle explosions. There are good reasons why the Biden Administration has appointed a sociologist as deputy director of science policy. What it teaches us very clearly is that these events are not about personalities but about the inevitable difficulties of making decisions in complex environments on imperfect and uncertain information.

If policymakers were always to make the right decisions, they would need perfect information in real time. Nothing would be overlooked and everything would be understood. There would be instant feedback, permitting constant and smooth adjustment to an evolving situation. Obviously this is unrealistic. However, it is a proper use of a model to ask what stops the real world working like that.

Basically, decision-makers simply cannot have all the right information available to them at the right time in a form that they can use. Although later research has refined the approach, the fundamentals were laid out by a British engineer turned sociologist called Barry Turner in the 1970s. He identified four main categories of information failure – all of which are evident in the early UK responses to the COVID-19 pandemic.

First, there are the things that we just don’t know, the unknown unknowns. In the winter of 2019/20, no one had a good understanding of the properties of this coronavirus. Indeed, relatively little was known about coronaviruses at all, compared with other types of virus. Although research was carried out at great speed – the virus genome was published in early January 2020 – there are still many unknowns. The ways in which the virus is actually transmitted, for example, continue to be disputed. Policy makers have to choose between competing claims supported by limited evidence.

Second, there are the things people know but don’t fully appreciate. This may be due to factors like a false sense of security, distrust of the source, being focused on a different problem or the difficulty of spotting the relevant data in a flood of information. If we look back at early 2020, there were two contradictory attitudes to the Chinese data. On one hand, an expectation that the Chinese authorities would manage the Wuhan outbreak as successfully as they had managed SARS. On the other, a certain suspicion of the information coming out of Wuhan and the extent to which this was being edited by the Chinese government or manipulated by unofficial sources hostile to the government.

Third, there are the things which are known but not brought together in a timely fashion because the knowledge is distributed across a number of interested groups. In the UK, the Scientific Advisory Group for Emergencies, or SAGE, system was supposed to deal with this but ultimately reinforced it. A system designed for very specific short-term emergencies was not appropriate to a slow-burn, long-term problem. The original 2007 UK pandemic plan, which I was involved with, recognized this risk and assigned lead responsibility to the Cabinet Office, supporting the prime minister, precisely in order to have an impartial response to a whole-society challenge. Information would not be excluded simply because it was unfamiliar to a particular community, like that of the health department and its partners.

Fourth, there are the things we know but which do not fit our current models of understanding. Again, this is why whole-society response matters. As a sociologist of science, I am interested in what knowledge gets into decision-making and what is excluded. In this case, there is a body of research in social science and law about rules and compliance, which has never featured in the output of SPI-B – the group of health psychologists advising government. There is considerable evidence of a failure by life scientists to appreciate the limitations of experimental research in engineering, relevant to transmission, and of engineers failing to understand the limitations of unsystematic reviews in medical sciences.

These are not failures of individuals but properties of the way in which modern societies operate. Uncertainty is managed by setting boundaries around what can be considered and discussed. Sometimes those boundaries filter out stuff that later proves to be relevant.

When we read about attempts by some stakeholders, and sympathetic journalists, to blame others for failures, often in lurid language like “blood on their hands,” we need to see these as complaints that, in a fast-moving situation, their interest was just not favored. Their case was not sufficiently well-evidenced or compelling enough to cut through the fog of war and compete for the limited attention that decision-makers can give to any problem.

Part of this competition is precisely that between science and politics. Whatever we might think in other contexts of the particular individual who is the UK prime minister, it is surely properly part of his role to be reluctant to impose whole-society damage in response to a sectional claim based on the hypothetical scenarios produced by a few modelers, and promoted by their allies. I have great respect for many of these scientists as individuals but they can be unworldly about the robustness of their data and the values implicit in their assumptions. Moreover, modelling does not translate directly into prescriptions for action – simple calls to reduce transmission do not specify precisely how this is best achieved.

We also need to think about the counter-factual, the alternative history. Back in February 2020, I was on a panel for Japanese TV with Gabriel Leung, who is one of the world’s most eminent public health specialists. He was very clear at that point that a pandemic would only come to an end through herd immunity. The policy choice was how to get there, whether by managed infection or by vaccination, which was not at that point on offer. If we look at countries with similar demography, health systems and population health levels over the last year, what is striking is how little variation there has been in mortality outcomes. Countries that locked down harder mostly had harder rebounds and ended up in a similar place to those that went for a slower-burn strategy.

There will, of course, be lessons to be learned from this pandemic and it is right that there should be inquiries to spell them out. It will not, however, be helpful to see this as a partisan exercise in blaming individuals for acting within the limits of what was possible in systems that others had designed for very different purposes. If there is one big lesson, though, it would be that pandemics should always be treated as a whole-society challenge and led from the center of government in a way that can draw on the full range of expertise available in the country.

Robert Dingwall is an emeritus professor of sociology at Nottingham Trent University. He also serves as a consulting sociologist, providing research and advisory services particularly in relation to organizational strategy, public engagement and knowledge transfer. He is co-editor of the SAGE Handbook of Research Management.

