Science & Social Science

Isaac Asimov’s critique of algorithmic thinking

June 1, 2025

Isaac Asimov (1920–1992) left a legacy of influence that many more literary writers might envy. In his own lifetime, he was one of the most highly regarded authors of science fiction, and his work has worn better than that of his contemporaries Robert A. Heinlein and Arthur C. Clarke, even if his personal reputation has been somewhat tarnished by a well-documented history of sexual harassment. The impact of his best-known writings has, however, been almost entirely opposite to their intentions. He has become something of a hero to a range of scientists, engineers and entrepreneurs who see his writing, which Asimov himself described as social science fiction, as exemplifying the rule-governed nature of social life: a formal code of ethics can be hard-wired into robots to prevent them harming humans; autonomous vehicles can deliver safe and reliable personal mobility; the future can be wholly predicted by statistical means. On closer inspection, though, Asimov sets up these scenarios only to subvert them. Rules always depend upon what the sociologist Harold Garfinkel called the ‘etcetera clause’: their application is shaped by the context in which they are used.


Asimov’s Three Laws of Robotics first appeared together in a short story published in 1942:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The specific wording varied somewhat over the years, and a fourth law, the Zeroth Law, which takes precedence over the others, was added in the 1985 novel Robots and Empire: “A robot may not injure humanity or, through inaction, allow humanity to come to harm.”
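The appeal to technologists is easy to see: on the page, the Laws read like a priority-ordered constraint check that a machine could apply. A minimal sketch of that reading (my illustration, not anything Asimov wrote), assuming candidate actions arrive as dictionaries of labelled predicates:

```python
# A sketch of the Three Laws as a hard-wired priority ordering: each
# candidate action is tested against the laws in turn, with higher laws
# overriding lower ones. The predicate names are invented for illustration.

def permitted(action: dict) -> bool:
    """Return True if an action survives the Three Laws, checked in priority order."""
    # First Law: a robot may not injure a human being.
    if action.get("harms_human"):
        return False
    # Second Law: obey human orders, unless obeying would break the First Law.
    if action.get("disobeys_order") and not action.get("order_harms_human"):
        return False
    # Third Law: protect its own existence, unless that conflicts with the
    # First or Second Law (here: an order, or the protection of a human).
    if action.get("self_destructive") and not (
        action.get("ordered") or action.get("protects_human")
    ):
        return False
    return True

permitted({"harms_human": True})                                    # False
permitted({"disobeys_order": True})                                 # False
permitted({"disobeys_order": True, "order_harms_human": True})      # True
```

The sketch also exposes the problem the stories exploit: every predicate here, what counts as ‘harm’, what counts as an ‘order’, conceals exactly the contextual judgment that Garfinkel’s etcetera clause describes.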

In a 1981 interview, Asimov commented:
“I have my answer ready whenever someone asks me if I think that my Three Laws of Robotics will actually be used to govern the behavior of robots, once they become versatile and flexible enough to be able to choose among different courses of behavior.
My answer is, ‘Yes, the Three Laws are the only way in which rational human beings can deal with robots—or with anything else.’
But when I say that, I always remember (sadly) that human beings are not always rational.”
Asimov’s stance on human rationality was of a piece with his belief in scientific rationality and his repudiation of Judaism in favour of humanism. At the same time, his work often explored the problems that arose when rationalist robots met human improvisation.


The robots’ literal-minded approach to encounters with humans was a frequent source of trouble and a gift to parodists: the robot asked to ‘give me a hand’ with a task that responds by cutting a hand off a co-worker; the robot that judges humanity to be capable of so much self-harm that the logical action is to destroy humanity so that it can come to no more harm. This literal approach has much in common with Garfinkel’s ‘breaching experiments’, which he used as teaching resources throughout his career. Consider the conventional greeting, ‘How are you?’, and the chaos that rapidly ensues when one party repeatedly asks for greater specificity rather than accepting a conventional response such as ‘Fine’. Or the disturbance caused by behaving with one’s family as if one were a lodger, constantly requesting permission to perform any action, from opening a window to eating something from the refrigerator.


Something of the same scepticism can be seen in Asimov’s reflections on self-driving cars in the 1953 short story ‘Sally’. The technology only works if human drivers are excluded from the roads, so that traffic movement is entirely rule-governed. However, the cars still need to communicate their intentions to each other, which implies a primitive form of consciousness. The story culminates in the murder of an abusive car-owner through the collective action of a group of cars on behalf of one of his victims. The aggregated consciousness of the individual cars leads them to act in a way that is not in accord with the rules of their programming.


The ultimate failure of the algorithmic approach to human action is explored on a cosmic scale in the original Foundation trilogy. Although these books are often shelved as fantasy, they are actually a rigorous investigation – and satire – of the pretensions of statistical approaches to social explanation, in particular behavioural psychology. One of the key protagonists is Hari Seldon, who has invented the discipline of psychohistory: ‘that branch of mathematics which deals with the reactions of human conglomerates to fixed social and economic stimuli…[assuming] that the human conglomerate is sufficiently large for valid statistical treatment…[and] that the human conglomerate be itself unaware of psychohistoric analysis in order that its reactions be truly random.’ Psychohistory makes all other social sciences obsolete because its model eliminates the uncertainties they seek to manage and enables robust predictions about the future course of societies. Seldon creates an avatar who will appear at predicted crisis points to nudge humanity towards the least detrimental courses of action.


The contemporary resonances should be apparent. Seldon’s work prefigures the assumption behind Large Language Models that rules can be induced from data – the only problem previously being that the databases were too small. Human societies are conceived as aggregates of individuals, much as in agent-based modelling. Knowledge of the rules governing the actions and interactions of those atomised individuals is held to be sufficient to make robust predictions about the fate of societies into the infinite future. Asimov, however, devotes the second and third books of the original trilogy to dismantling these assumptions. All goes well for the first iteration of Seldon’s avatar, which delivers somewhat gnomic guidance to an assembly of the planetary elite. In the outer reaches of the galaxy, however, a random genetic mutation has thrown up a Napoleonic military leader, the Mule, whose actions do not fit the model. Seldon’s predictions collapse as the Mule’s followers spread across known space. By the end of the trilogy, Seldon’s avatar has twice addressed an empty room and finally delivers a message that is wholly irrelevant to the moment of crisis.


Unlike many of his present-day fans, Asimov had a background in bioscience rather than physical science. Arguably, this is a better basis for understanding the limits of control in complex systems. Evolutionary theory rests on Darwin’s metaphor of a ‘tangled bank’, where every plant, animal, insect and bacterium is in constant competition with every other life-form. At different moments, competitive pressures select for one or another to succeed, while that success immediately creates new selection pressures on other species. This dynamic world has a constant capacity to elude human control, although it may be possible to make simplifying assumptions that allow short-term predictions. Those simplifying assumptions are, as Garfinkel saw, always subject to an etcetera clause. Algorithms are not useless, and humans have an inescapable need to try to predict futures in order to select investments in the present. Asimov’s lesson, however, is that both need to be used with humility. Confident declarations about the inevitable consequences of rule-governed actions, and about the futures they create, are a poor basis for public policies that need the flexibility to deal with chance, contingency and uncertainty.

Robert Dingwall is an emeritus professor of sociology at Nottingham Trent University. He also serves as a consulting sociologist, providing research and advisory services particularly in relation to organizational strategy, public engagement and knowledge transfer. He is co-editor of the SAGE Handbook of Research Management.
