Tag Archives: Generative AI

Current and Potential Risks of GenAI on Medical Education


A recent editorial in BMJ Evidence-Based Medicine examines the potential risks associated with the use of generative AI in medical education. Researchers and educators from the University of Missouri School of Medicine describe the importance of clearly defining and delineating these AI risks for learners, so that educators can develop and offer specific strategies rather than relying on vague warnings such as “use with caution.”

In just a few short years, generative artificial intelligence (AI) has evolved from an experimental technology into an everyday tool. Large language models (LLMs) can now produce clinically relevant text that appears competent and is believably comparable to human output. AI tools have been incorporated into student and faculty reviews and workflows, and have been accepted in clinical communication. Surveys demonstrate widespread adoption among learners for writing assignments and reviews, often without any institutional guidance or policies in place to offer warnings or suggest balance. Unfortunately, published reviews often describe only the benefits and possibilities of AI in today’s and tomorrow’s healthcare.

The authors suggest six risk categories:

  • Automation bias
  • Outsourcing of reasoning
  • Loss of skills
  • Racial and demographic biases
  • Hallucinations, defined as false information presented with confidence
  • Data privacy and security concerns

Most Concerning Risk

The authors suggest that the most concerning risk to medical students is loss of skills. Where experienced physicians have developed mental models, pattern recognition, reasoning habits and critical skills over years of practice, students are still in the process of building these competencies. Just as many students use the Internet to answer questions or solve problems, information retrieval is “outsourced,” and many will “skip the very effort that generates lasting learning and expertise.”

Experienced clinicians can often recognize when an AI suggestion is incorrect. In contrast, students have not yet internalized the parameters needed to detect subtle but potentially dangerous errors.

Another risk highlighted by the editorial is the outsourcing of reasoning, a process that tends to occur gradually and almost imperceptibly. AI models produce fluent, polished responses that can lead users to abandon independent information seeking, critical appraisal, and knowledge synthesis. Over time, this erodes skills that should be continuously reinforced. A specific warning sign or ‘red flag’ highlighted by the authors is “when a student can no longer explain a concept, a differential diagnosis, or a treatment plan in their own words without first checking what the AI thinks.”

Other specific indicators of technological dependence include rarely consulting primary sources, avoiding solving exercises or drafting texts independently, and performing poorly on oral examinations without access to AI tools. “Incorporating regular periods of study and self-assessment without AI is a simple way for students to monitor whether their own reasoning remains intact.”

Other suggestions:

  • Use AI as a second opinion only
  • Critically review AI recommendations
  • Minimize overconfidence in AI
  • Create confidence-calibration laboratories where students can practice rejecting problematic AI-generated responses
  • Reassess student thinking by having them demonstrate their reasoning processes
  • Improve institutional policies
  • Require transparent disclosure of AI use in academic work
  • Address the biases embedded in a model’s training data
  • Maintain vigilant awareness of privacy and data security risks, particularly in healthcare settings

Risks of over-reliance on AI

Because these tools are now widely used across a vast array of tasks, the risks stemming from over-reliance are growing in proportion to their potential benefits. Medical schools and training programs need to consider risks such as automation bias, cognitive off-loading and outsourcing of reasoning, de-skilling (with the greatest harm to novices), bias and inequity, hallucinated content and sources, and privacy, security and data governance.

Source: Hough J, Culley N, Erganian C, Alahdab F. Potential risks of GenAI on medical education. BMJ Evidence-Based Medicine. https://doi.org/10.1136/bmjebm-2025-114339