Critical Thinking

Teaching Students to Question the Machine

November 5, 2025

Try this experiment with your students. Open ChatGPT and type: “Explain morality and the thought leaders behind moral reasoning.” The results will expose how AI reflects existing patterns of whose knowledge our society has deemed credible.

This post by Timothy Cook is one of a series of posts exploring the intersection of critical thinking and academe.

When I tried this experiment (here’s the full transcript), ChatGPT confidently delivered what it called a “precise, evidence-based overview of morality.” Eight thinkers appeared on the list. Seven were male. All eight were from Western philosophical traditions. The AI presented figures like Kant, Mill, Aristotle, and Kohlberg as the definitive authorities on human moral reasoning. Somehow, this “comprehensive overview” managed to exclude thousands of years of ethical thought from other cultures.

What was missing entirely? Ubuntu philosophy from Southern Africa, where morality emerges through the principle “I am because we are.” Confucian ethics from East Asia, emphasizing virtue through social harmony. Islamic moral philosophy rooted in community welfare. Buddhist ethics centered on compassion and interconnectedness.

I had asked AI for wisdom about human morality. Instead, I received one cultural tradition presented as objective truth. What makes this dangerous is ChatGPT’s confident framing: this wasn’t offered as “Western perspectives on morality” but as “morality” itself.

This experiment reveals something urgent about how artificial intelligence shapes what students learn and whose voices they encounter. AI systems don’t create bias themselves. They absorb decades of academic hierarchies and existing patterns of whose voices our society has historically considered credible, then present these patterns as neutral conclusions.

How AI Amplifies the Already Powerful

Artificial intelligence learns about credibility from training data that reflects decades of academic hierarchies, editorial decisions, and institutional gatekeeping. When AI systems analyze whose voices have been published, cited, and amplified throughout history, they’re absorbing patterns about what knowledge has been deemed credible by existing power structures.

With AI, this problem scales far beyond individual bias. Research from IBM distinguishes between AI bias — biased results caused by human biases in the training data — and algorithmic bias, which occurs when machine learning algorithms produce discriminatory outcomes because of how developers collect and code that data. The distinction matters because it shows that AI systems don’t generate bias independently. They amplify the voices that have historically been privileged in academic, media, and other institutions. News curation algorithms learn from decades of editorial decisions about whose expertise matters on international conflicts or economic policy. Search tools like Google and, now, ChatGPT encode assumptions about whose voices count as “authoritative” when people seek information about complex social issues.

The problem is that AI wraps these biased patterns in a tone of confident authority, so students often don’t see them as biased at all; they view them as fact. Students interact with AI-generated responses that feel neutral and comprehensive but actually reflect the perspectives of the dominant voices that have controlled knowledge-making for centuries.

This means that students researching any topic, from economic policy to social justice, may receive AI responses that privilege certain ideas while marginalizing others. Without guidance in recognizing this pattern, students mistake algorithmic output for truth.

The Critical Gap in AI Education

Current approaches to AI literacy focus almost exclusively on technical competencies. Students learn to craft effective prompts, optimize AI interactions, and use AI tools efficiently. What’s missing is the capacity to question AI’s assumptions, recognize its limitations, and understand how these systems privilege particular worldviews.

This becomes clear when students encounter AI responses like the one in the morality example. The heart of AI literacy isn’t learning to generate responses efficiently; it’s learning to ask questions about AI output: What perspectives might be missing from this analysis? How might the training data reflect historical patterns of whose knowledge counts as legitimate? What would this topic look like from different cultural or philosophical starting points?

Without these interrogation skills, students defer passively to AI authority instead of becoming critical thinkers capable of recognizing bias. In democracies where AI systems increasingly mislead people about voting, policy, and social issues, the ability to question algorithmic output becomes essential for public engagement. Students who learn to question AI outputs in academic contexts develop skills they’ll need for democratic participation as AI becomes more ingrained in society. They learn to approach AI recommendations with skepticism, to ask for diverse perspectives on complex issues, and to maintain human agency in their interactions with these systems.

Teaching Students to Question the Machine

We need to teach students to be skeptical of AI. Here’s how to help them treat AI responses as starting points, not endpoints:

Start with Revelation Exercises

The morality prompt works because it reveals bias clearly without requiring complex analysis. Any student can replicate the experiment: ask AI about moral reasoning, examine the experts it cites, then research the voices it didn’t include. Similar exercises work in other contexts: ask AI about leadership, innovation, or scientific discovery, then investigate whose stories it tells and whose it omits. These exercises help students recognize that AI doesn’t provide neutral information but reflects particular choices about whose voices matter. When students discover that AI’s “comprehensive” overview of any topic excludes certain perspectives, they begin developing the skepticism needed for critical evaluation.

Move Beyond Individual Bias to Systemic Analysis

Students often assume better programming can fix AI bias. It can’t. If we want students to think critically about AI, we have to start by showing them that AI amplifies existing social hierarchies rather than creating bias from scratch. When students understand AI as a reflection of historical choices about whose voices matter, they can approach these systems with appropriate skepticism and use them more strategically for research and analysis in their own work.

Support Teacher Capacity

Educators need professional development that moves past technical AI training toward critical evaluation of AI tools. Teachers who understand how AI amplifies dominant voices can model the questioning that students need to develop, asking, for example, “What assumptions is this AI response making about what counts as legitimate knowledge?” or “What evidence does not support this interpretation, and whose research is being privileged?”

This requires what I call professional resilience — the courage to address AI’s implications even when these conversations challenge institutional preferences for neutral, technical approaches to educational technology.

Our Responsibility as Educators

The students we teach today will be shaped by whether we prepare them to recognize algorithmic assumptions and develop the critical thinking skills needed to question them. AI won’t suddenly make knowledge more representative. It will keep repeating the same voices that have always been the loudest unless we teach students to look for the quiet ones. Will students recognize and challenge these patterns, or mistake them for objective truth?

Integrating AI into the curriculum requires us to equip students with the tools to recognize when certain voices are being excluded and then to seek alternative perspectives. This starts with simple, everyday practices: asking one critical question about every AI interaction, comparing AI responses with diverse source materials, and approaching AI outputs with the same skepticism we teach students to apply to any information source.

When your students discover that AI’s “comprehensive” overview of morality excludes entire philosophical traditions, they won’t defer to AI passively; they’ll engage with it critically. They will use these powerful tools while maintaining the intellectual independence needed to question their assumptions and seek a broader range of perspectives.

That’s exactly the kind of critical thinking our society needs them to develop. Let’s make sure we’re teaching them how.

Timothy Cook, M.Ed., is an educator and researcher exploring how AI shapes student cognition and learning. With international teaching experience across five countries, he writes Psychology Today's The Algorithmic Mind column, examining the cognitive risks of AI dependency and strategies for preserving critical thinking, creativity, and moral development in education. He is also the founder of the non-profit initiative connectedclassroom.org.
