Public Policy

How Intelligent is Artificial Intelligence?

September 27, 2023
A woodcut from 1867 shows a locomotive labeled “Speculation” heading over a cliff while oblivious passengers hold bubbles marked “miscellaneous stock,” “Smashem Railway Company,” “Grand Hotel Limited,” “Mining Company,” and “the Philly Financial Company.” One bubble bearing the word “obligations” floats free in the air with various bank notes, and no one pays any attention.
“There is a danger that criticism of the AI project in these terms is seen as romantic resistance to progress, like John Ruskin’s hostility to railways,” writes Robert Dingwall. (Image: Hathi Digital Library Trust and the University of Minnesota Library /Victorian Web)

Cryptocurrencies are so last year. Today’s moral panic is about AI and machine learning. Governments around the world are hastening to adopt positions and regulate what they are told is a potentially existential threat to humanity – and certainly to a lot of middle-class voters in service occupations. However, it is notable that most of the hype is coming from the industry, allied journalists and potential clients, and then being transmitted to the politicians. The computer scientists I talk to – and this is inevitably a small and potentially biased sample – are a good deal more skeptical.

One of them suggested a useful experiment that anyone can carry out on ChatGPT. Ask it to write your obituary in the style of a serious newspaper. Who could resist the chance to read their own obituary? So I invited the site to produce one. The software had no difficulty in distinguishing me from the handful of other Robert Dingwalls on the Internet. However, it got my birth year and age wrong in the first paragraph and then went on to describe my professional contributions at such a high level of generalization and vagueness as to be pretty meaningless. It knew I had a wife, children and grandchildren but nothing about them: for various reasons both my wife and two of my children minimize their web presence. It correctly identified the areas of sociology to which I had contributed but not in any specific way. It thought my first degree was from Oxford rather than Cambridge and made no mention of my PhD from Aberdeen. I have never held a position at a US university – I have been a visiting fellow at the American Bar Foundation, which is a non-profit that collaborates with Northwestern. The punchline is a rather trite quote from John Donne that could have been used about anybody.

Maybe the approach would work better if I had asked it to write an obituary for a higher-profile public figure like a president or prime minister. On the other hand, I would argue that my web presence is probably greater than that of many academics, and that it is a fairer test of AI to ask it to write about the Z-list than the A-list. The obituary exercise is one area where we can almost guarantee to be more expert than the tool we are using, in a way that makes its limitations visible.

What I have yet to see is any acknowledgment that the enthusiasm for AI and Large Language Models rests on a particular stance in relation to philosophy and the social sciences that has been contested since at least the beginning of the 20th century. Specifically, there is a continuing demand for models of human action and social organization that assume these are governed by rules or laws akin to those of Newtonian physics. Such models have consistently failed to deliver, but the demand does not go away. Big Data is the latest iteration. If only we can devote enough computing power to the task, and assemble large enough data sets, then we will be able to induce the algorithms that generate the observable world of actions, ideas and material reality.

I learned social science methods just as SPSS was coming into use. Our first caution was against ‘data dredging’ – correlating every variable with every other variable and then making up a story about the ones that were significant. The mathematics of probability mean that some relationships will inevitably be significant by chance rather than causality, and we might make some dangerous mistakes. Machine learning has a distinct whiff of the same problem.
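The data-dredging caution can be demonstrated in a few lines of code. The sketch below (an illustration of the general statistical point, not anything from the original SPSS era) generates pure noise, tests every pairwise correlation at the conventional 5 percent level, and shows that some pairs still come out ‘significant’ – roughly one in twenty, exactly as the mathematics of probability predict. The variable counts and the critical value are illustrative assumptions.

```python
import itertools
import math
import random

random.seed(42)

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# 20 'variables' of pure random noise, 50 observations each --
# there is no real relationship anywhere in this data set.
n_vars, n_obs = 20, 50
data = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]

# Dredge: test all 190 pairwise correlations. For n = 50, |r| > ~0.279
# corresponds roughly to p < 0.05 (two-tailed), so by chance alone we
# expect about 190 * 0.05, i.e. nine or ten, 'significant' findings.
threshold = 0.279
hits = sum(
    1
    for a, b in itertools.combinations(range(n_vars), 2)
    if abs(pearson_r(data[a], data[b])) > threshold
)
print(f"{hits} of 190 noise correlations look 'significant'")
```

Each of those hits would support a plausible-sounding story, which is precisely the danger: the pattern is an artifact of testing many hypotheses at once, not of any causal process in the data.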

Readers of this blog may have seen my comments on the film Oppenheimer and its representation of the relativist revolution in 20th-century thought. AI takes the opposite side. It assumes that meanings and representations are stable and unidimensional. But Large Language Models do not escape the indeterminacy of language simply by scale. It is not an accident that the US Navy funded Harold Garfinkel to study the ‘etcetera clause,’ the implicit extension to any rule which acknowledges that its operation inevitably reflects the circumstances of its use. Corpus linguistics has told us many interesting things about bodies of text and the history of language but the ability of computing power to generate meaningful talk outside fairly narrow and highly rule-governed contexts is limited. This is a feature not a bug. Popular resentment at ‘Computer says no’ responses from large organizations reflects precisely the algorithm’s insensitivity to context and the local features that humans attend to.

Set against the AI revolution, we have the traditions of understanding humans as skilled improvisers, capable of creating spontaneous order in the moment, taking account of the actions of others and the specifics of the material and non-material resources at hand. As Stanley Lieberson and Howard Becker proposed, from very different starting points, this may offer us a probabilistic view of the possibilities of planned social actions, much as Oppenheimer’s generation moved on from Einstein’s hankering for a rule-governed, wholly predictable, universe.

There is a danger that criticism of the AI project in these terms is seen as romantic resistance to progress, like John Ruskin’s hostility to railways. That would be a serious mistake. The fad for AI takes one side in a debate between serious people in philosophy and the sciences, social and natural, that has been conducted for the last century or so. If the limits of language are not properly understood, all we are left with is the hype of a snake oil industry trying to extract profits from naïve investors and favorable regulation from gullible politicians.

Whatever happened to cryptocurrencies…?


Robert Dingwall is an emeritus professor of sociology at Nottingham Trent University. He also serves as a consulting sociologist, providing research and advisory services particularly in relation to organizational strategy, public engagement and knowledge transfer. He is co-editor of the SAGE Handbook of Research Management.
