Innovation

Why Don’t Algorithms Agree With Each Other?

February 21, 2024

There is a tendency to think of automatic online processes as neutral and mathematically logical. They are usually described as algorithms: series of pre-planned calculations that can be reduced to the most elementary operations, such as plus, minus, and if-then. It would seem obvious to assume that these ethereal calculations derive from rational procedures firmly based on mathematical principles.

Such assumptions about algorithms are important to examine in light of the explosion of artificially intelligent systems, which are built on the apparently exotic process referred to as 'machine learning.' As much as IT experts wish to insist their systems are almost human, hijacking words like 'intelligence' and 'learning,' their machines are still just frighteningly fast automatons. They do not understand, conceptualize, or even perceive as a human being would. Their 'learning' consists of building millions of connections between inputs and outputs.
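To make that concrete, here is a minimal sketch, in Python with invented data, of what such 'learning' amounts to: repeatedly nudging numeric weights so that inputs map onto outputs. The function name and figures are mine for illustration, not drawn from any real system.

```python
# 'Learning' as nothing more than adjusting numbers until inputs
# line up with outputs. All data here is invented for illustration.

def train(examples, lr=0.01, epochs=1000):
    """Fit w and b so that w*x + b approximates y, by repeated nudging."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            error = (w * x + b) - y
            w -= lr * error * x   # nudge the weight against the error
            b -= lr * error       # nudge the bias against the error
    return w, b

# The machine 'discovers' that y is roughly 2x + 1, yet it holds no
# concept of what x or y mean -- only two adjusted numbers.
examples = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(examples)
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

The machine ends up with two numbers that reproduce the pattern; whether the pattern means anything is a question it cannot ask.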

These associations, and the associations between the associations, may open up surprising possibilities for achieving the goals the machine is set. But the machine is not surprised. It will not know or comprehend what it has discovered in any way analogous to the thoughts of a person.

Given all that, wouldn't the role of human input seem negligible? If the algorithms are just churning through endless connections derived from millions of examples, where is the possibility for values, preferences, biases, and all those aspects of human thought and action that make them human? Do these algorithms inevitably produce the neutral, unprejudiced results that would be expected of a mere machine?

I had an unexpected opportunity to test this hypothesis recently when I set out to get motor insurance for a car I was about to buy. These days it is rare to be able to talk to an insurance agent. All searches for insurance cover consist of filling in forms online. Even if you do manage to speak to someone, there is no conversation: that person is just filling in a form on your behalf. The algorithms rule.

Going through this process with several different insurance companies, it quickly became clear that they all ask the same questions: age, history of motoring accidents, marital status, previous insurance history, details of the car to be insured, and so on. I'm sure some of these questions have no bearing on the calculation of the insurance premium. They are probably using the opportunity to gather information about the demographics of potential customers. One company, having been told I was retired, wanted to know about previous employment. But otherwise, the basic information being asked for was the same, even if the format varied.

To my surprise, the resulting premiums varied enormously. The first company, one I'd insured with previously, declared it would not insure a person of my age! They suggested I contact an insurance broker, who came up with a premium of over £2,000. I therefore approached a company that advertised widely. Their figure was £1,500. Both were way beyond the average figure I'd paid in the past. Undaunted, I filled in the form for another well-known insurer. They came up with an offer close to £800. Interestingly, all four of the forms I'd filled in were somewhat different, even though they asked the same questions. Out of curiosity, I filled in the form for a fifth organisation. Its form was remarkably similar to the fourth's, and it offered a premium just £2 more expensive.

This empirical study therefore showed very clearly that the algorithms these companies used had somewhat different biases built into them. They were all huge companies, presumably with access to vast amounts of data on the risks associated with insuring different cars and different owners. Could that data have been so different from one to the other? What variations must have been built in by some human agency to generate such a variety of different outcomes?
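As a purely hypothetical sketch of how such variety could arise, consider two invented insurers who ask identical questions but weight the answers differently. Every figure and weighting below is made up for illustration; no real company's pricing is implied.

```python
# Two toy pricing rules (all weights invented). The same applicant,
# answering the same questions, receives very different premiums.

def insurer_a(age, claims, car_value):
    # Hypothetical weighting: penalises every year of age past 70 heavily.
    base = 400
    age_load = 150 * max(0, age - 70)
    return base + age_load + 150 * claims + car_value // 50

def insurer_b(age, claims, car_value):
    # Hypothetical weighting: age barely matters, claims dominate.
    base = 550
    return base + 2 * age + 300 * claims + car_value // 100

applicant = dict(age=78, claims=0, car_value=15_000)
print(insurer_a(**applicant))  # 400 + 1200 + 0 + 300 = 1900
print(insurer_b(**applicant))  # 550 + 156 + 0 + 150 = 856
```

Identical inputs, divergent quotes: the spread comes entirely from the weights someone chose to build in.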

The algorithms for car insurance must be much simpler than many of the processes now carried out by the impressively complex artificially intelligent systems which are storming the ramparts of daily activities. The results here are a clear warning that, no matter how sophisticated the programming, no matter how many interactions have been used to 'educate' the algorithms, they are generated by human beings: people with values and biases, undeclared prejudices, and unconscious habits. We regard them as neutral machines at our peril.


Professor David Canter, the internationally renowned applied social researcher and world-leading crime psychologist, is perhaps most widely known as one of the pioneers of "Offender Profiling" being the first to introduce its use to the UK.
