Beware! AI Can Lie.

Not all liars have such an obvious tell. (Photo: Jametlene Reskp/Unsplash)

December 3, 2024

David Canter reveals how he discovered that Microsoft Copilot acted like a lazy student, confidently inventing responses that were blatantly wrong.

I recently caught, in passing, an interesting mention on the wonderful BBC Radio 4 of the idea that today’s smartphones are more fruitfully thought of as places where we live: ‘transportal homes.’ I was curious to find a detailed reference so I could explore the nuances and cite the origins for a book I am writing. The BBC website for the program did not give me enough detail for that. I therefore asked the ‘Microsoft Copilot’ that had emerged, unasked, on my computer screen, “Which London professor said an iPhone is really a place?”

Instantaneously I got a confident response that this was a professor at the London School of Economics. That sounded interesting. I therefore followed up with a request for a detailed citation to where the professor had published this perspective. The seemingly relevant title of a book was miraculously forthcoming.

I had a look at this book and could not see that it dealt at all with the issue I was seeking to understand. It also occurred to me that the person I had heard on the radio talking about the issue was a man, whereas the professor nominated by the artificially intelligent algorithm was a woman. Going back to the BBC website I found the name of the London professor and, from further old-fashioned explorations of Google Scholar, realised he was the origin of the idea I was chasing.

It seemed to me worth following this up with Microsoft’s AI. So I wrote, “You got it wrong. It was Professor Daniel Miller who wrote about the iPhone as a place, not Judy Wajcman as you claimed.”

The not very contrite reply was, “You’re absolutely right, my apologies for the mistake. It was indeed Professor Daniel Miller who described the iPhone as “a place” in his research on smartphones and their impact on our lives.

Thank you for pointing that out! Is there anything else you’d like to know about Professor Miller’s work or any other topic?”

It then gave a reference to the book in which Professor Miller had proposed his view on smartphones.

Trying my luck, I asked whether there was a page reference for this.

At the speed of light, I got the response: “I apologize for the oversight. The specific page reference is page 123.”

As it happens, the book in question is fully available online. Page 123 has absolutely nothing to do with the issue at hand; it is made up mainly of an illustration anyway.

Indeed, it dawned on me that 123 is a simple way of generating a page number when the actual number is unavailable.

The moral here is that I actually found what I needed to know by what might be regarded as ‘conventional’ means. But more importantly, the dialogue with the AI revealed that it is rather like a posturing student: an algorithm that wants to give an answer even when unsure of that answer’s validity. It is too self-confident and unprepared to admit ignorance, unprepared to lose face. Any seemingly appropriate answer will do. If it were a person, it would be accused of lying.

Professor David Canter, the internationally renowned applied social researcher and world-leading crime psychologist, is perhaps most widely known as one of the pioneers of "Offender Profiling", having been the first to introduce its use to the UK.
