
Don’t Ban AI—Teach Students to Build It

April 9, 2026

How designing AI tools can transform cognitive offloading into critical thinking

“Welcome to pharmacology!” I announced to a packed auditorium of wide-eyed physician assistant students. In years past, the first class hummed with enthusiasm. This year, I heard only the soft clicks of one hundred laptops opening to their preferred chatbots. As I introduced the course, students had already uploaded my lecture content into ChatGPT, asking it to condense the material and generate practice questions. Most disturbingly, the harder the question I posed, the more students turned to ChatGPT. I had read studies linking AI to the erosion of critical thinking and to cognitive offloading, but I knew the chatbots were here to stay. I recalled the old adage, “If you can’t beat them, join them,” and began my search for an alternative vision of AI that might harness or even promote critical thinking.

Sage’s 2026 Critical Thinking Challenge is an initiative that spotlights creative, practical approaches to strengthening critical thinking in higher education. We asked professors, researchers, and academic librarians to submit their ideas for driving meaningful change in classrooms, academic libraries, and learning communities everywhere. We received nearly 200 submissions from 36 countries across six continents. We will be posting the top eight submissions throughout April.

I didn’t have to look far. Studies consistently show that AI use by general practitioners leads to higher performance on measures of clinical competence. In medical education, however, enthusiasm for AI-backed precision education has outpaced the research. One study found that medical students receiving expert feedback performed better on complex clinical reasoning cases than those receiving ChatGPT-generated feedback. Another showed that training students to use bias-checking prompts improved critical thinking, but it relied only on student self-reports.

Among the dozens of opinion pieces on AI in medicine, a randomized study by Wang et al. stood above the rest in its rigor and innovation. The research team developed a ChatGPT-based facilitator known as “Learn Guide” with the express purpose of promoting critical thinking. If students gave one-word answers, the algorithm would prompt them to consider alternative diagnoses or to reflect on their own cognitive biases. In the fourteen-week study, medical students who used the ChatGPT-based facilitator improved their scores on a validated measure of critical thinking, the Cornell Critical Thinking Test. I began to wonder: could a similar algorithm promote critical thinking among physician assistant students?

To find out, I asked the students themselves. Nearly two-thirds of the first-year class reported less than one hour of training in responsible AI use—a striking knowledge gap. When asked whether AI improves critical thinking, only one-third said yes. Most remarkably, the majority of students reported using AI for assignments more than 15 hours per week. With access to a program similar to Learn Guide, they could spend less time memorizing and more time thinking critically. Better yet, if given the training to co-design educational GPT programs themselves, they could jumpstart their development as clinical reasoners.

The solution to cognitive offloading and overreliance on AI begins with community and fun. Here’s the idea: organize a hackathon where student teams compete to design the best educational GPT program for promoting critical thinking.

The teams will research and create customized tools similar to Learn Guide, easily accessible within common AI platforms. Teams could tailor their chatbots to elicit critical thinking through a variety of prompts. For example, if a student offers the answer “deep vein thrombosis (DVT)” to a clinical scenario, a team might tailor its chatbot to respond, “Excellent. Now argue against this diagnosis. Give me three findings that would make DVT less likely.”
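At its core, this kind of facilitator is a small set of follow-up rules. Here is a minimal sketch of that logic in plain Python rather than a GPT system prompt; all names and replies below are hypothetical illustrations, not the actual Learn Guide algorithm, which a real team would express as instructions to a custom GPT.

```python
# Hypothetical sketch of facilitator rules a student team might encode.
# A one-word or recognized-diagnosis answer triggers a Socratic follow-up
# instead of simple confirmation.

COUNTER_PROMPTS = {
    "deep vein thrombosis": (
        "Excellent. Now argue against this diagnosis. "
        "Give me three findings that would make DVT less likely."
    ),
}

def facilitator_reply(student_answer: str) -> str:
    """Return a follow-up that pushes past a terse answer."""
    answer = student_answer.strip().lower()
    # Recognized diagnosis: ask the student to argue the other side.
    if answer in COUNTER_PROMPTS:
        return COUNTER_PROMPTS[answer]
    # Very short answer: prompt for alternatives and bias reflection.
    if len(answer.split()) <= 3:
        return ("Tell me more. What alternative diagnoses did you consider, "
                "and what cognitive biases might be steering you?")
    # Fuller reasoning: probe for the finding that would change their mind.
    return "Good reasoning. Which single finding, if absent, would change your mind?"

print(facilitator_reply("deep vein thrombosis"))
```

In a hackathon, each team’s creativity lies in choosing and wording these rules; the branching itself stays simple.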

A group of learners will be randomly assigned to study respiratory physiology using the different programs. Before and after the study period, they will complete structured written prompts on respiratory physiology, scored for critical thinking against a standardized rubric. Assessing critical thinking through written responses scored with an expert-designed rubric builds on best practices for assessing diagnostic reasoning, similar to long-answer constructed responses to clinical vignettes. Unlike multiple-choice assessments of competency, asking students to justify their reasoning builds the higher-order skills needed for nuanced, real-life patient cases.

Then comes the fun part. Participants who show the greatest growth in critical thinking scores will receive a prize, along with the creators of the most effective chatbot, who will share their approach and algorithm with their classmates. In this way, the hackathon will incentivize a new domain of critical thinking—how to collaborate with technology to take better care of patients.

In a few short years, these hundred students will begin practice on the frontlines of medicine: in emergency rooms, primary care clinics, and hospital floors. For myself and my patients, I don’t just want a fast and efficient clinician. I want someone who knows how to think. Starting with the foundations of physiology, students can be at the forefront of designing AI to save lives. Let’s get to work.  


References

Çiçek, F. E., Ülker, M., Özer, M., & Kıyak, Y. S. (2025). ChatGPT versus expert feedback on clinical reasoning questions and their effect on learning: A randomized controlled trial. Postgraduate Medical Journal, 101(1195), 458–466.

Daniel, M., Rencic, J., Durning, S. J., Holmboe, E., Santen, S. A., Lang, V., Ratcliffe, T., Gordon, D., Heist, B., Lubarsky, S., Estrada, C. A., Ballard, T., Artino, A. R., Jr., Da Silva, A. S., Cleary, T., Stojan, J., & Gruppen, L. D. (2019). Clinical reasoning assessment methods: A scoping review and practical guidance. Medical Education, 53(11), 1084–1104.

Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), Article 6.

Izquierdo-Condoy, J. S., Arias-Intriago, M., Tello-De-la-Torre, A., Busch, F., & Ortiz-Prado, E. (2025). Generative artificial intelligence in medical education: Enhancing critical thinking or undermining cognitive autonomy? Journal of Medical Internet Research, 27, e76340.

Mehta, N., Mehta, S., Rubenstein, A., & Wood, S. K. (2025). Not replaced, but reinvented: AI education pathways to prepare future physicians to lead healthcare transformation. Perspectives on Medical Education, 14(1), 849–859.

Qunaibi, E. A., et al. (2026). Effectiveness of informed AI use on clinical competence of general practitioners and internists: Pre-post intervention study. JMIR Medical Education, 12, e75534.

Wang, S., Zuo, Y., Zou, B., Liu, G., Zhou, J., Zheng, Y., Zhang, Z., Yuan, L., & Feng, R. (2024). Enhancing self-directed learning with custom GPT AI facilitation among medical students: A randomized controlled trial. Medical Teacher, 47(7), 1–8.

Zhou, X., Teng, D., & Al-Samarraie, H. (2024). The mediating role of generative AI self-regulation on students’ critical thinking and problem-solving. Education Sciences, 14(12), 1302.

Tom Peteet, MD, MPH, MEd, is a primary care physician and adjunct professor at multiple universities in the Boston area, including Boston University, Massasoit Community College, and the Massachusetts College of Pharmacy and Health Sciences.

