Innovation

Why Don’t Algorithms Agree With Each Other?

February 21, 2024

There is a tendency to think of automatic online processes as neutral and mathematically logical. They are usually described as algorithms, meaning they consist of a series of pre-planned calculations that can be reduced to the most elementary operations, such as addition, subtraction, and if-then conditions. It would seem obvious to assume that these ethereal calculations derive from rational procedures firmly based on mathematical principles.
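To make the point concrete, here is a minimal sketch (the function is invented for illustration) of what "a series of pre-planned calculations reduced to elementary operations" looks like in practice:

```python
# A toy algorithm built entirely from elementary operations:
# subtraction, a comparison ("if then"), and negation.
def absolute_difference(a, b):
    diff = a - b          # subtraction
    if diff < 0:          # if-then condition
        diff = -diff      # negation
    return diff

print(absolute_difference(3, 8))  # 5
print(absolute_difference(8, 3))  # 5
```

Every step is mechanical; nothing in the procedure "knows" what a difference is.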

Such assumptions about algorithms are important to examine in light of the explosion of artificially intelligent systems, which are built on the apparently exotic process referred to as ‘machine learning.’ As much as IT experts insist their systems are almost human, hijacking words like ‘intelligence’ and ‘learning,’ their machines are still just frighteningly fast automatons. They do not understand, conceptualize, or even perceive as a human being would. Their ‘learning’ consists of building millions of connections between inputs and outputs.
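A hedged sketch of what such ‘learning’ amounts to at its most basic: the program below (all data and names invented for illustration) simply counts which outputs have co-occurred with which inputs, then echoes the strongest association. It comprehends nothing.

```python
# "Learning" as bare input-output association: count co-occurrences,
# then repeat whichever outcome was seen most often for a given input.
from collections import Counter

observations = [
    ("wet road", "accident"),
    ("wet road", "no accident"),
    ("wet road", "accident"),
    ("dry road", "no accident"),
]

# Connection strengths between inputs and outputs.
links = Counter(observations)

def predict(condition):
    """Return the outcome most often associated with the input."""
    outcomes = {out: n for (inp, out), n in links.items() if inp == condition}
    return max(outcomes, key=outcomes.get)

print(predict("wet road"))  # accident
```

Real machine-learning systems do this with millions of weighted connections rather than simple counts, but the principle of association without understanding is the same.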

These associations, the associations between the associations, and the links between those, may open up surprising possibilities for achieving the goals they are set. But the machine is not surprised. It will not know or comprehend what it has discovered in any way that is analogous to the thoughts of a person.

Given all that, the role of human input would seem to be negligible. If the algorithms are just churning through endless connections derived from millions of examples, where is the room for values, preferences, biases, and all those aspects of human thoughts and actions that make those thoughts and actions human? Do these algorithms inevitably produce the neutral, unprejudiced results that would be expected of a mere machine?

I had the unexpected opportunity to test this hypothesis recently when I set out to get motor insurance for a car I was about to buy. These days it is rare to be able to talk to an insurance agent. All searches for insurance cover consist of filling in forms online. Even if you do manage to speak to someone, there is no conversation; that person is just filling in a form on your behalf. The algorithms rule.

Going through this process with several different insurance companies, it quickly became clear that they all ask the same questions. Age, history of motoring accidents, marital status, previous insurance history, details of the car to be insured, and so on. I’m sure some of these questions have no bearing on the calculation of how much to charge for the insurance premium. The companies are probably using the opportunity to gather information about the demographics of potential customers. One company, having been told I was retired, wanted to know about my previous employment. But otherwise, the basic information being asked for was the same, even if the format varied.

To my surprise, the resulting premiums varied enormously. The first company, one I’d insured with previously, declared it would not insure a person of my age! They suggested I contact an insurance broker, who came up with a premium of over £2,000. I therefore approached a company that advertised widely. Their figure was £1,500. Both were way beyond the average figure I’d paid in the past. Undaunted, I filled in the form for another well-known insurer. They came up with an offer close to £800. Interestingly, all four of the forms I’d filled in were somewhat different, even though they asked me the same questions. Out of curiosity, I filled in the form for a fifth organization. This form was remarkably similar to the fourth organization’s, and it offered a premium just £2 more expensive than the fourth one.

This small empirical study therefore showed very clearly that the algorithms these companies used had somewhat different biases built into them. They were all huge companies, presumably with access to vast amounts of data on the risks associated with insuring different cars and different owners. Could that data have been so different from one to the other? What variations must have been built in by some human agency to generate such a variety of outcomes?
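How the same answers can yield wildly different premiums is easy to sketch. In the toy model below, every field, weight, and figure is invented for illustration (real pricing models are far more elaborate): two insurers score an identical applicant, but humans at each firm have tuned the weights differently.

```python
# Hypothetical sketch: two insurers pricing the *same* applicant data.
# All fields and weights are invented; the point is that human-chosen
# tuning, not the inputs, drives the divergence.

applicant = {"age": 70, "accidents": 0, "years_insured": 40}

def premium(data, base, age_weight, accident_weight, loyalty_discount):
    """Toy linear pricing rule: identical inputs, different tuning."""
    cost = base + age_weight * max(0, data["age"] - 60)  # age loading over 60
    cost += accident_weight * data["accidents"]          # claims history loading
    cost -= loyalty_discount * data["years_insured"]     # loyalty reduction
    return cost

# Two companies, identical questions, different human-chosen weights.
insurer_a = premium(applicant, base=900, age_weight=120,
                    accident_weight=300, loyalty_discount=10)
insurer_b = premium(applicant, base=700, age_weight=15,
                    accident_weight=250, loyalty_discount=5)

print(insurer_a)  # 1700
print(insurer_b)  # 650
```

The same form, honestly filled in, produces premiums more than double apart purely because of choices someone made when setting the weights.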

The algorithms for car insurance must be much simpler than many of the processes now carried out by the impressively complex artificially intelligent systems which are storming the ramparts of daily activities. The results here are a clear warning that no matter how sophisticated the programming, no matter how many interactions have been used to ‘educate’ the algorithms, they are generated by human beings: people who have values and biases, undeclared prejudices, and unconscious habits. We regard them as neutral machines at our peril.


Professor David Canter, the internationally renowned applied social researcher and world-leading crime psychologist, is perhaps most widely known as one of the pioneers of "Offender Profiling," being the first to introduce its use to the UK.
