Don’t Ban AI—Teach Students to Build It
How designing AI tools can transform cognitive offloading into critical thinking
“Welcome to pharmacology!” I announced to a packed auditorium of wide-eyed physician assistant students. In years past, the first class hummed with enthusiasm. This year, all I heard was the soft click of one hundred laptops opening to their preferred chatbots. Before I had finished introducing the course, students had already uploaded my lecture content into ChatGPT, asking it to condense the material and generate practice questions. Most disturbing of all, the harder the question I posed, the more students turned to ChatGPT. I had read the studies linking AI to cognitive offloading and the erosion of critical thinking, but I knew the chatbots were here to stay. Recalling the old adage, “If you can’t beat them, join them,” I began my search for an alternative vision of AI, one that might harness, or even promote, critical thinking.

I didn’t have to look far. Studies consistently show that AI use by general practitioners leads to higher performance on measures of clinical competence. In medical education, however, enthusiasm for AI-backed precision education has outpaced the research. One study found that medical students receiving expert feedback performed better on complex clinical reasoning cases than those receiving ChatGPT-generated feedback. Another showed that training students to use bias-checking prompts improved critical thinking, but it relied only on student self-reports.
Among the dozens of opinion pieces on AI in medicine, a randomized study by Wang et al. stood above the rest in rigor and innovation. The research team developed a ChatGPT-based facilitator known as “Learn Guide” with the express purpose of promoting critical thinking. If students gave one-word answers, the algorithm prompted them to consider alternative diagnoses or reflect on their own cognitive biases. Over the fourteen-week study, medical students who used the facilitator improved their scores on a validated measure of critical thinking, the Cornell Critical Thinking Test. I began to wonder: could a similar algorithm promote critical thinking among physician assistant students?
To find out, I asked the students themselves. Nearly two-thirds of the first-year class reported less than one hour of training in responsible AI use, a striking knowledge gap. When asked whether AI improves critical thinking, only one-third said yes. Most remarkably, a majority reported using AI for assignments more than 15 hours per week. With access to a program similar to Learn Guide, they could spend less time memorizing and more time thinking critically. Better yet, given the training to co-design educational GPT programs themselves, they could jumpstart their development as clinical reasoners.
The solution to cognitive offloading and overreliance on AI begins with community and fun. Here’s the idea: organize a hackathon in which student teams compete to design the best educational GPT program for promoting critical thinking.
The teams will research and create customized tools similar to Learn Guide, easily accessible within common AI platforms, tailoring their chatbots to elicit critical thinking through a variety of prompts. For example, if a student offers the answer “deep vein thrombosis (DVT)” to a clinical scenario, a team might design its chatbot to respond, “Excellent. Now argue against this diagnosis. Give me three findings that would make DVT less likely.”
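To make the design task concrete, the branching logic a team might build into such a chatbot can be sketched in a few lines of Python. This is a minimal illustration only; the function name, thresholds, and prompt wording are my own inventions, not Learn Guide’s published algorithm, and in practice a team would encode this behavior as instructions to a custom GPT rather than as code.

```python
# Illustrative sketch of a critical-thinking facilitator in the spirit of
# Learn Guide. All names and prompt templates here are hypothetical.

def facilitator_reply(student_answer: str) -> str:
    """Return a follow-up prompt that pushes past a terse diagnosis."""
    answer = student_answer.strip()
    # A terse answer (one or two words) triggers a counter-argument prompt,
    # mirroring how Learn Guide nudged students toward alternatives.
    if len(answer.split()) <= 2:
        return (
            "Excellent. Now argue against this diagnosis. "
            f"Give me three findings that would make {answer} less likely."
        )
    # A fuller answer triggers a bias-reflection prompt instead.
    return (
        "Good reasoning. Which cognitive bias, such as anchoring or "
        "availability, is most likely to distort this diagnosis?"
    )

print(facilitator_reply("DVT"))
```

The point of the sketch is that the pedagogy lives in a handful of conditional prompts, which is exactly the kind of design decision a hackathon team can debate, test, and refine.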
A group of learners will then be randomly assigned to study respiratory physiology using the different programs. Before and after the intervention, they will complete structured prompts on respiratory physiology, scored for critical thinking against a standardized rubric. Assessing critical thinking through written responses scored by an expert-designed rubric builds on best practices for assessing diagnostic reasoning, much like long-answer constructed responses to clinical vignettes. Unlike multiple-choice assessment of competency, asking students to justify their reasoning builds the higher-order skills needed for nuanced real-life patient cases.
Then comes the fun part. Participants who show the greatest growth in critical thinking scores will receive a prize, along with the creators of the most effective chatbot, who will share their approach and algorithm with their classmates. In this way, the hackathon will incentivize a new domain of critical thinking—how to collaborate with technology to take better care of patients.
In a few short years, these hundred students will begin practice on the frontlines of medicine: in emergency rooms, primary care clinics, and hospital floors. For myself and my patients, I don’t just want a fast and efficient clinician. I want someone who knows how to think. Starting with the foundations of physiology, students can be at the forefront of designing AI to save lives. Let’s get to work.
References
Çiçek, F. E., Ülker, M., Özer, M., & Kıyak, Y. S. (2025). ChatGPT versus expert feedback on clinical reasoning questions and their effect on learning: A randomized controlled trial. Postgraduate Medical Journal, 101(1195), 458–466.
Daniel, M., Rencic, J., Durning, S. J., Holmboe, E., Santen, S. A., Lang, V., Ratcliffe, T., Gordon, D., Heist, B., Lubarsky, S., Estrada, C. A., Ballard, T., Artino, A. R., Jr., Da Silva, A. S., Cleary, T., Stojan, J., & Gruppen, L. D. (2019). Clinical reasoning assessment methods: A scoping review and practical guidance. Medical Education, 53(11), 1084–1104.
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), Article 6.
Izquierdo-Condoy, J. S., Arias-Intriago, M., Tello-De-la-Torre, A., Busch, F., & Ortiz-Prado, E. (2025). Generative artificial intelligence in medical education: Enhancing critical thinking or undermining cognitive autonomy? Journal of Medical Internet Research, 27, e76340.
Mehta, N., Mehta, S., Rubenstein, A., & Wood, S. K. (2025). Not replaced, but reinvented: AI education pathways to prepare future physicians to lead healthcare transformation. Perspectives on Medical Education, 14(1), 849–859.
Qunaibi, E. A., et al. (2026). Effectiveness of informed AI use on clinical competence of general practitioners and internists: Pre-post intervention study. JMIR Medical Education, 12, e75534.
Wang, S., Zuo, Y., Zou, B., Liu, G., Zhou, J., Zheng, Y., Zhang, Z., Yuan, L., & Feng, R. (2024). Enhancing self-directed learning with custom GPT AI facilitation among medical students: A randomized controlled trial. Medical Teacher, 47(7), 1–8.
Zhou, X., Teng, D., & Al-Samarraie, H. (2024). The mediating role of generative AI self-regulation on students’ critical thinking and problem-solving. Education Sciences, 14(12), 1302.

