Harnessing Predictive Models in Competency-Based Education: Opportunities and Challenges

By: Pedro Tanaka, MD, PhD

Competency-Based Education (CBE) in medicine aims to ensure that trainees achieve well-defined abilities before advancing, shifting focus from time-based training to demonstrated mastery. Predictive models, such as learning analytics, positive predictive value (PPV), and growth curves, have become increasingly important tools for supporting CBE frameworks. They offer the promise of early intervention, individualized learning plans, and improved program design, but also raise important questions about equity and over-reliance on data.

Learning Analytics: Tracking Progress Toward Competencies

Learning analytics involves systematically analyzing trainee performance data to identify patterns and predict outcomes. For example, ACGME Milestones assessments collected semiannually across residency can be analyzed longitudinally to track learner trajectories across domains like patient care, professionalism, and interpersonal skills.1,2 Studies in surgery and anesthesiology have shown that early milestone ratings can predict the likelihood of meeting graduation targets.3 Such analytics allow programs to identify struggling trainees early, enabling tailored remediation or coaching interventions that help ensure readiness for independent practice.

Yet learning analytics depend on the validity and reliability of the underlying data. If milestone ratings are biased, such as showing disparities by gender or race,4,5 predictive models risk reinforcing inequities. Institutions must therefore invest in faculty development to promote fair, consistent assessments, and integrate safeguards to monitor for unintended bias.

Positive Predictive Value (PPV): Evaluating Assessment Reliability

PPV is a statistical estimate of the probability that a trainee who scores below a threshold at one time point will fail to achieve the target level at graduation.
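As a concrete illustration, PPV is the fraction of trainees flagged by the early-warning threshold who actually go on to miss the graduation target. A minimal sketch, using invented counts chosen to mirror the roughly 70% figure reported for some subcompetencies (the counts themselves are hypothetical, not drawn from the cited studies):

```python
# Hypothetical illustration of how a PPV is computed from milestone data.
# "Flagged" = an early rating below the threshold; "missed target" = did not
# reach Level 4 at graduation. The counts below are invented for illustration.

flagged_and_missed_target = 35   # true positives: flagged early, missed Level 4
flagged_but_reached_target = 15  # false positives: flagged early, reached Level 4

ppv = flagged_and_missed_target / (flagged_and_missed_target + flagged_but_reached_target)
print(f"PPV = {ppv:.0%}")  # prints "PPV = 70%"
```

In practice the counts come from a cohort's longitudinal milestone records, and the threshold and time point must be set deliberately, since both drive the false-positive rate.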
Studies using milestone data have calculated PPVs for subcompetencies in surgery, showing that a low early rating can predict failure to reach Level 4 competency at graduation with substantial probability (up to 70% in some domains).1 PPVs can be invaluable for program directors: they transform raw ratings into actionable early-warning signals, supporting timely, evidence-based remediation. However, PPV is only as reliable as the underlying assessments. Without rigorous standard-setting, rater training, and attention to contextual factors, PPVs can misclassify learners, especially those from historically marginalized backgrounds.4

Growth Curves: Modeling Individual Learning Trajectories

Growth curve modeling tracks how individual trainees progress over time, revealing patterns of steady improvement, plateauing, or even decline. In anesthesiology, for example, growth curve analyses of milestone ratings have shown moderate reliability for detecting individual differences in learning rates, enabling programs to design individualized learning plans.2 These analyses reveal not only average growth patterns but also substantial variability among learners and programs, suggesting that some residents progress rapidly while others plateau or require additional support. By identifying distinct latent trajectory groups, including those at risk of not meeting competency targets, programs can tailor interventions such as enhanced mentoring, targeted clinical experiences, or remediation efforts. The findings also underscore the value of using milestone data to track progress over time, inform curriculum improvements, and support a programmatic approach to assessment that promotes both learner development and educational quality. Such models support quality improvement efforts, but they require large datasets and careful interpretation.
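The core idea of a growth curve can be sketched very simply: fit a trajectory to each trainee's semiannual ratings and compare learning rates. The sketch below uses invented ratings and an ordinary least-squares line per trainee; published analyses use more sophisticated mixed-effects and latent-trajectory models, so this is an illustration of the concept, not the cited methodology.

```python
import numpy as np

# Hypothetical growth-curve sketch: fit a linear trajectory to each trainee's
# semiannual milestone ratings (invented data) and project the rating at the
# end of a 36-month program. Real analyses use mixed-effects models.

time_points = np.array([0, 6, 12, 18, 24, 30])  # months into residency

ratings = {
    "trainee_A": np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0]),  # steady improver
    "trainee_B": np.array([1.5, 2.0, 2.5, 2.5, 2.5, 2.5]),  # plateaus early
}

for name, y in ratings.items():
    slope, intercept = np.polyfit(time_points, y, deg=1)  # least-squares line
    projected = intercept + slope * 36                    # extrapolate to 36 months
    flag = "" if projected >= 4.0 else "  <- projected below Level 4, consider support"
    print(f"{name}: slope={slope:.3f}/month, projected at 36 mo={projected:.2f}{flag}")
```

Even this toy version shows why trajectory shape matters more than any single rating: the plateauing trainee looks identical to the improver at month 12, and only the fitted slope separates them.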
Growth trajectories can be influenced by institutional culture, case exposure, and faculty support, all of which must be considered in any analysis to avoid simplistic conclusions.

Balancing Promise and Caution in Implementation

Predictive models hold transformative potential for CBE. They can enable personalized learning, ensure early identification of struggling trainees, and improve curriculum design. But they also demand caution: equity concerns, the risk of over-reliance on imperfect data, and the potential for misinterpretation must be thoughtfully addressed. Educational institutions can maximize benefits by adopting predictive tools transparently, investing in assessor training, monitoring for bias, and using predictive data as one component, rather than the sole determinant, of holistic learner assessment. In this way, predictive analytics can strengthen, rather than undermine, the core mission of competency-based medical education: preparing all trainees for safe, effective, and equitable independent practice.

About the Author: Pedro Tanaka, MD, PhD is a Clinical Professor of Anesthesiology at Stanford University. He serves as Associate Dean for Academic Affairs and Associate DIO for GME. With advanced training in education and coaching, he supports faculty development, curriculum improvement, and leadership growth, enhancing the effectiveness of clinician-educators in academic medicine.

References

1. Smith, Brigitte K., Kenji Yamazaki, Abigail Luman, Ara Tekian, Eric Holmboe, Erica L. Mitchell, Yoon Soo Park, and Stanley J. Hamstra. "Predicting performance at graduation from early ACGME milestone ratings: longitudinal learning analytics in professionalism and communication in vascular surgery." Journal of Surgical Education 80, no. 2 (2023): 235-246.
2. Sun, Ting, Yoon Soo Park, Fei Chen, Sean O. Hogan, and Pedro Tanaka. "Longitudinal reliability of Milestones learning trajectories during anesthesiology residency." Anesthesiology 142, no. 5 (2025): 918.
3. Smith, Brigitte K., Kenji Yamazaki, Ara Tekian, Eric Holmboe, Stanley J. Hamstra, Erica L. Mitchell, and Yoon Soo Park. "The use of learning analytics to enable detection of underperforming trainees: An analysis of national vascular surgery trainee ACGME Milestones assessment data." Annals of Surgery 277, no. 4 (2023): e971-e977.
4. Card, Dan, Lauren McPherson, Nicholas Marka, Benjamin Langworthy, Taj Mustapha, Claudio Violato, Robert Englander, and Rachel Stork Poeppelman. "Sex and racial bias in medical student EPA assessments: Findings and hypotheses for bias mitigation targets." Medical Teacher (2025): 1-10.
5. Sun, Ting, Yoon Soo Park, Fei Chen, Sean O. Hogan, Nikki Wilkinson, and Pedro Tanaka. "National Study of Milestone Evaluation Disparities by Gender and Race in Anesthesiology Residency Training." Manuscript submitted for publication, 2025.