Accepted Papers of the 9th International Biomedicine Congress
Conversational AI in Medical Training: A Narrative Review of ChatGPT’s Role in Standardized Patient Development
Ali Madadi Mahani,1,*
1. Student Research Committee, School of Medicine, Kerman University of Medical Sciences, Kerman, Iran
Introduction: Standardized patients (SPs) are an essential pedagogical tool in medical education, offering structured opportunities for learners to practice clinical reasoning, patient-centered communication, and empathy in a safe and controlled setting. Yet, the traditional SP model faces persistent challenges: financial cost, logistical complexity, variability in performance, and limited scalability across diverse educational contexts. With the advent of conversational artificial intelligence, particularly large language models such as ChatGPT, new possibilities have emerged for simulating patient encounters that are dynamic, scalable, and adaptable. Unlike rule-based chatbots, ChatGPT demonstrates flexible, context-sensitive dialogue that can approximate authentic clinical interactions. This narrative review explores the role of ChatGPT in standardized patient development, synthesizing current evidence, educational experiences, and theoretical perspectives.
Methods: This narrative review was conducted by examining published studies available up to 2025. Studies were identified through the PubMed, Scopus, and Google Scholar databases, using keywords such as ChatGPT, large language models, standardized patients, medical education, and clinical skills training. Selected works included both original research and review articles. The review emphasized three domains: (1) feasibility of ChatGPT as a standardized patient substitute or adjunct, (2) pedagogical value and learner experience, and (3) ethical, practical, and future considerations.
Results: The reviewed literature reveals a growing interest in AI-driven SP alternatives, with ChatGPT positioned as one of the most versatile tools. Across case studies, ChatGPT has demonstrated the capacity to generate consistent patient narratives, adapt to diverse questioning styles, and provide immediate, contextually relevant responses. In simulated interviews, students report improved confidence and reduced anxiety, particularly when practicing sensitive conversations such as psychiatric histories or delivering bad news. ChatGPT’s capacity to mimic emotional tone—while limited in non-verbal cues—offers a foundation for early-stage communication training.
Educators note several strengths: (a) scalability, as multiple learners can engage simultaneously with customized cases; (b) cost-effectiveness, reducing reliance on trained human actors; and (c) adaptability, as prompts can generate endless patient variations across specialties. However, limitations remain. The absence of body language and paralinguistic cues restricts authenticity. ChatGPT occasionally introduces inaccuracies or overuses medical jargon, requiring careful prompt design and faculty oversight. Ethical concerns include over-reliance on AI, risks of misinformation, and ensuring that AI-driven encounters do not undermine empathy cultivation.
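The "careful prompt design" noted above can be made concrete with a minimal sketch. The helper below assembles a system prompt that constrains a model to stay in role, avoid jargon, and withhold history until elicited; every name and case field here is an illustrative assumption, not a template drawn from the reviewed studies.

```python
# Illustrative sketch of prompt construction for an AI standardized patient.
# The function name and case fields are hypothetical, not from the review.

def build_sp_prompt(name, age, chief_complaint, hidden_history, affect):
    """Assemble a system prompt that keeps the model in the patient role,
    in lay language, revealing history only when directly asked."""
    return (
        f"You are a standardized patient named {name}, aged {age}. "
        f"Your chief complaint is: {chief_complaint}. "
        f"History you reveal only when asked directly: {hidden_history}. "
        f"Emotional tone: {affect}. "
        "Speak in lay language, never use medical jargon, "
        "never break character, and never volunteer a diagnosis."
    )

# Example case: one of the "endless patient variations" a faculty member
# might parameterize for a cardiology interview exercise.
prompt = build_sp_prompt(
    name="Mr. Karimi",
    age=58,
    chief_complaint="chest tightness when climbing stairs",
    hidden_history="smokes one pack per day; father died of a heart attack",
    affect="anxious but cooperative",
)
print(prompt)
```

Keeping the case parameters separate from the fixed role constraints lets faculty review the constraint text once while generating many patient variations, which is one plausible way to operationalize the oversight the literature calls for.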
The synthesis also highlights hybrid models, where ChatGPT augments rather than replaces human SP programs. For instance, ChatGPT may serve as a preparatory step before live encounters, allowing students to rehearse and refine interviewing skills. It can also function as a formative feedback tool, scaffolding self-directed practice. Preliminary findings suggest that when integrated thoughtfully, ChatGPT can enrich the learning continuum without diminishing the irreplaceable human dimensions of clinical training.
Conclusion: This narrative review underscores the emerging role of ChatGPT in standardized patient development for medical education. Evidence indicates that ChatGPT is a promising adjunctive tool, capable of enhancing accessibility, flexibility, and learner engagement. While not a substitute for human SPs, it offers complementary strengths that align with modern educational demands, particularly in resource-limited settings and for preparatory training. Its integration should be guided by pedagogical frameworks, ethical safeguards, and empirical evaluation. Future directions include refining prompt engineering, embedding multimodal features (voice, affect recognition), and conducting longitudinal studies to assess learning outcomes. By merging the reliability of standardized scenarios with the adaptability of AI, ChatGPT has the potential to reshape how medical students practice patient encounters—ushering in an era where human expertise and artificial intelligence co-create meaningful, scalable educational experiences.
Keywords: ChatGPT, large language models, standardized patients, medical education, clinical skills training