Deskilling and Automation Bias: A Cautionary Tale for Health Professions Educators
By: Eric Warm MD, MACP

In an era when artificial intelligence (AI) promises unprecedented transformation in medicine, health professions educators find themselves at a pivotal crossroads. While AI can improve diagnostic accuracy, personalize care, and streamline workflows, it also risks eroding the very expertise we aim to cultivate in future clinicians. Two closely intertwined threats—deskilling and automation bias—have emerged as central challenges that demand urgent attention.

What Are Deskilling and Automation Bias?

Deskilling refers to the erosion of clinical judgment, procedural competence, or diagnostic reasoning due to over-reliance on automated systems. Tasks once performed with deep skill become passively monitored or entirely delegated to machines. Think of it as cognitive and manual atrophy: skills fade not because they are unnecessary, but because they are no longer practiced.

Automation bias compounds this problem. It is the tendency to trust automated systems uncritically—even when they are wrong. This leads to two types of errors: errors of commission (acting on incorrect AI suggestions) and errors of omission (failing to act because the AI didn’t prompt action).

As AI systems become more deeply embedded in electronic health records, diagnostics, and decision-making tools, these risks are no longer speculative—they are already visible in clinical training and care delivery.

How Deskilling Manifests in Health Professions Training

Health professions education is already showing signs of subtle, yet significant, erosion in core competencies due to automation. Consider these real-world scenarios:

Radiology and Pathology: AI image analysis tools are now flagging abnormalities faster and more consistently than humans in many contexts. But the flip side? Residents risk becoming less skilled at interpreting subtle, atypical findings if the AI always sees them first. A well-trained radiologist must recognize rare or emerging patterns, especially those not yet captured in training data. AI may not help when the unknown emerges.

Anticoagulation Management: Internal medicine residents often no longer titrate heparin themselves; nurse-driven protocols or AI systems take the reins. Without first-hand experience, trainees may struggle to understand nuanced pharmacologic dynamics, reducing their ability to manage atypical or high-risk cases.

Clinical Documentation and Reasoning: Natural language processing tools generate clinical notes from voice inputs or data extraction. While this alleviates clerical burden, it risks displacing the critical skill of synthesizing patient data into a coherent narrative—foundational for sound diagnostic reasoning.

Decision Support Tools in Diagnosis: Online tools now generate differentials instantly. This expedites care, but when overused, they bypass the learner’s struggle—and growth—through ambiguity and complexity.

These examples point to an urgent trend: technology is not just changing how we teach, but what gets learned.

Automation Bias in Practice

The dangers of automation bias are just as pressing. Studies show that clinicians—even experienced ones—are vulnerable to over-trusting AI systems:
In mammography, a prospective study demonstrated that radiologists, regardless of experience, were significantly influenced by AI-suggested BI-RADS categories. When the AI provided incorrect suggestions, the accuracy of radiologists’ assessments dropped markedly, with less experienced readers being the most susceptible. This effect was attributed to automation bias, as clinicians deferred to the AI’s recommendation even when it contradicted their own judgment.1

In medication management, a study involving UK general practitioners found that clinicians changed their prescriptions in response to clinical decision support system (CDSS) advice in approximately 22.5% of cases. Critically, in 5.2% of all cases, clinicians switched from a correct to an incorrect prescription after receiving erroneous advice from the CDSS, directly demonstrating automation bias.2

In musculoskeletal imaging, a laboratory study evaluating AI-assisted diagnosis of anterior cruciate ligament (ACL) ruptures on MRI found that 45.5% of the total mistakes made by clinicians in the AI-assisted round were due to following incorrect AI recommendations. This effect was observed across all levels of clinical expertise, indicating that even experienced clinicians are not immune to automation bias.3

In pathology, the adoption of digital slide analysis and AI-augmented diagnostic tools has similarly raised concerns about the diminishment of manual microscopy skills and the ability to synthesize complex morphological and clinical data. As AI systems take on more of the routine diagnostic workload, pathologists may become less adept at recognizing rare disease variants or subtle histological patterns, especially if they begin to defer to AI recommendations without critical appraisal.4

Anesthesiology provides another salient example, where AI-driven decision support and closed-loop systems automate aspects of intraoperative management, such as hemodynamic monitoring and drug titration. While these systems can enhance patient safety by reducing human error, they also risk diminishing anesthesiologists’ situational awareness and manual skills, particularly in managing complex intraoperative scenarios without algorithmic support.5-7

Bias doesn’t stem from ignorance—it often arises from efficiency pressure, cognitive overload, or misplaced confidence that “the machine knows better.”

Lessons from Outside Healthcare: The Human-in-the-Loop Imperative

Mitigation begins with explicit human-in-the-loop (HITL) design—systems that require clinicians to review, interpret, and when necessary, override AI recommendations. The idea is hardly novel. In November 2024, Presidents Biden and Xi publicly agreed that “the decision to use nuclear weapons should remain under human control and not be delegated to artificial intelligence,” echoing long-standing U.S. and allied doctrine.8 If the world’s most destructive capability demands human judgment, surely we can insist on the same safeguard before letting a model manage insulin drips or certify a student’s competence.
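To make the human-in-the-loop idea concrete, here is a minimal sketch of how an order-entry workflow might gate an AI recommendation behind an explicit clinician decision and documented rationale. The class names, fields, and the heparin example are illustrative assumptions for this post, not a description of any particular vendor system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class Decision(Enum):
    ACCEPT = "accept"
    MODIFY = "modify"
    REJECT = "reject"


@dataclass
class AiRecommendation:
    """An AI suggestion awaiting human review (e.g., a heparin dose adjustment)."""
    patient_id: str
    suggestion: str
    model_confidence: float  # surfaced so low-certainty outputs invite scrutiny


@dataclass
class ClinicianReview:
    """The human-in-the-loop record: nothing is enacted without it."""
    reviewer: str
    decision: Decision
    rationale: str
    final_plan: str
    reviewed_at: datetime


def enact_plan(rec: AiRecommendation, review: ClinicianReview) -> str:
    """Commit a plan only after an explicit clinician decision with a rationale.

    The AI output is advisory; the clinician remains the final arbiter, and the
    reasoning is captured for later teaching and audit.
    """
    if not review.rationale.strip():
        raise ValueError("A rationale is required, especially when accepting AI advice.")
    if review.decision is Decision.ACCEPT:
        return rec.suggestion
    return review.final_plan  # a modified or clinician-authored plan replaces the AI draft


if __name__ == "__main__":
    rec = AiRecommendation(
        patient_id="MRN-0001",
        suggestion="Increase heparin infusion to 18 units/kg/hr",
        model_confidence=0.62,
    )
    review = ClinicianReview(
        reviewer="PGY-2 resident",
        decision=Decision.MODIFY,
        rationale="Recent supratherapeutic aPTT; a smaller increase is safer.",
        final_plan="Increase heparin infusion to 16 units/kg/hr and recheck aPTT in 6 hours",
        reviewed_at=datetime.now(timezone.utc),
    )
    print(enact_plan(rec, review))
```

The point of the sketch is the shape of the record, not the code itself: the AI output stays advisory, the clinician’s decision and reasoning are captured, and that meta-commentary becomes reviewable teaching material.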
A Utilitarian Counterpoint: Yes, AI Saves Lives

Despite these risks, AI offers substantial utilitarian benefits across patient outcomes, workforce satisfaction, and system efficiency. AI systems have demonstrated high accuracy in specific diagnostic tasks, such as melanoma detection and sepsis risk prediction, and have been associated with reduced hospital and ICU length of stay, lower in-hospital mortality, and improved management of chronic conditions.9-11

However, the same logic that applies to pilots and autopilot should apply to health professionals and AI. Just because autopilot can land the plane doesn’t mean we train pilots to be passengers. Medicine is no different. The utility of AI must be weighed not only against its successes but also against the risks when it fails and humans can no longer step in competently.

Actionable Strategies for Health Professions Educators

The question is not whether to adopt AI, but how to adopt it without hollowing out professional expertise. Here are key strategies:

Preserve deliberate practice. Allocate curricular time for teachers and learners to do the challenging thing themselves before consulting the tool. For instance, have students write a progress note unaided, then compare it with an LLM suggestion.

Surface machine uncertainty. Dashboards should display confidence ranges, prompting educators to interrogate low-certainty outputs rather than accept them as gospel. Requiring a brief justification—Why do you agree or disagree with the model?—reduces blind trust.

Teach AI literacy as a core competency. Health professions education should cover model limitations, data ethics, and prompt engineering.

Design for explainability and transparency. Advocate for AI tools that show why they recommend a particular course—not just what they recommend. Explainable AI (XAI) fosters understanding and guards against blind trust.

Re-skill, don’t de-skill. Use freed-up time for deeper mentorship, community engagement, or experimental pedagogy: activities that algorithms cannot emulate and that sustain professional identity.

Build human-in-the-loop systems. Mandate that clinicians remain the final arbiters of care. This preserves accountability, supports ethical decision-making, and mitigates automation bias. System workflows should require clinicians to document their reasoning, especially when accepting or rejecting AI suggestions.

What Implementation Could Look Like

In an internal-medicine clerkship, an AI tool drafts SOAP notes from structured data. Students must annotate each AI sentence as “accept,” “modify,” or “reject” with a rationale. Faculty review both the note and the meta-commentary, turning documentation into a clinical-reasoning exercise.

In a simulation center, nurse-anesthesia residents run scenarios where the closed-loop hemodynamic controller is occasionally blinded. Debriefing focuses on the signals that should trigger manual override.

At the CME level, a hospital builds a “dashboard of disagreement,” flagging cases where clinicians overrode AI triage or radiology reads. Monthly morbidity and mortality conferences explore whether the override reflected higher human insight or unhelpful bias; a sketch of what such a dashboard might track follows below.
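As a purely illustrative sketch of that last idea, the snippet below shows one way a “dashboard of disagreement” might record override events and tally whom the eventual outcome vindicated, so cases can be queued for M&M discussion. The field names, domains, and example cases are hypothetical assumptions, not a specification of any existing system.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class OverrideEvent:
    """One case where a clinician's final call differed from the AI output."""
    case_id: str
    domain: str               # e.g., "radiology read", "triage"
    ai_output: str
    clinician_call: str
    outcome_agreed_with: str  # "clinician", "ai", or "unclear" after case review


def disagreement_summary(events: list[OverrideEvent]) -> dict[str, Counter]:
    """Group overrides by domain and tally whom the eventual outcome supported.

    Cases where the AI turned out to be right prompt discussion of unwarranted
    distrust; cases where the clinician was right illustrate the value of human
    oversight. Both belong at morbidity and mortality conference.
    """
    summary: dict[str, Counter] = {}
    for event in events:
        summary.setdefault(event.domain, Counter())[event.outcome_agreed_with] += 1
    return summary


if __name__ == "__main__":
    events = [
        OverrideEvent("C1", "radiology read", "no acute findings", "subtle pneumothorax", "clinician"),
        OverrideEvent("C2", "triage", "low acuity", "high acuity", "ai"),
        OverrideEvent("C3", "radiology read", "mass suspected", "artifact", "clinician"),
    ]
    for domain, tally in disagreement_summary(events).items():
        print(domain, dict(tally))
```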
The Bottom Line

AI is neither savior nor saboteur. It is a tool. But like any powerful tool, it reshapes the human roles around it. If we train clinicians merely to supervise machines, we risk not just deskilling them—but losing the heart of what it means to care. As health professions educators, we must lead the way in designing an AI-enabled future that elevates, rather than erases, human expertise, so that education evolves effectively to meet future challenges without losing sight of timeless educational values.

References:

1. Automation Bias in Mammography: The Impact of Artificial Intelligence BI-RADS Suggestions on Reader Performance. Dratsch T, Chen X, Rezazade Mehrizi M, et al. Radiology. 2023;307(4):e222176. doi:10.1148/radiol.222176.
2. Automation Bias: Empirical Results Assessing Influencing Factors. Goddard K, Roudsari A, Wyatt JC. International Journal of Medical Informatics. 2014;83(5):368-75. doi:10.1016/j.ijmedinf.2014.01.001.
3. Artificial Intelligence Suppression as a Strategy to Mitigate Artificial Intelligence Automation Bias. Wang DY, Ding J, Sun AL, et al. Journal of the American Medical Informatics Association (JAMIA). 2023;30(10):1684-1692. doi:10.1093/jamia/ocad118.
4. AI in Pathology: What Could Possibly Go Wrong? Nakagawa K, Moukheiber L, Celi LA, et al. Seminars in Diagnostic Pathology. 2023;40(2):100-108. doi:10.1053/j.semdp.2023.02.006.
5. Promises and Perils of Artificial Intelligence in Neurosurgery. Panesar SS, Kliot M, Parrish R, et al. Neurosurgery. 2020;87(1):33-44. doi:10.1093/neuros/nyz471.
6. Autopilots in the Operating Room: Safe Use of Automated Medical Technology. Ruskin KJ, Corvin C, Rice SC, Winter SR. Anesthesiology. 2020;133(3):653-665. doi:10.1097/ALN.0000000000003385.
7. Decision-Making in Anesthesiology: Will Artificial Intelligence Make Intraoperative Care Safer? Duran HT, Kingeter M, Reale C, Weinger MB, Salwei ME. Current Opinion in Anaesthesiology. 2023;36(6):691-697. doi:10.1097/ACO.0000000000001318.
8. Biden, Xi Agreed That Humans, Not AI, Should Control Nuclear Weapons, White House Says. Reuters. November 16, 2024. https://www.reuters.com/world/biden-xi-agreed-that-humans-not-ai-should-control-nuclear-weapons-white-house-2024-11-16/
9. Transforming Healthcare: The Role of Artificial Intelligence. Aslani A, Pournik O, Abbasi SF, Arvanitis TN. Studies in Health Technology and Informatics. 2025;327:1363-1367. doi:10.3233/SHTI250625.
10. Artificial Intelligence in U.S. Health Care Delivery. Sahni NR, Carrus B. The New England Journal of Medicine. 2023;389(4):348-358. doi:10.1056/NEJMra2204673.
11. Benefits and Harms Associated With the Use of AI-related Algorithmic Decision-Making Systems by Healthcare Professionals: A Systematic Review. Wilhelm C, Steckelberg A, Rebitschek FG. The Lancet Regional Health - Europe. 2025;48:101145. doi:10.1016/j.lanepe.2024.101145.

The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of The University of Ottawa. For more details on our site disclaimers, please see our ‘About’ page.