Plast Reconstr Surg Glob Open. 2024 Sep 5;12(9):e6136. doi: 10.1097/GOX.0000000000006136. eCollection 2024 Sep.
ABSTRACT
BACKGROUND: ChatGPT, launched in 2022 and updated to Generative Pre-trained Transformer 4 (GPT-4) in 2023, is a large language model trained on extensive data, including medical information. This study compares ChatGPT’s performance on Plastic Surgery In-Service Examinations with medical residents nationally as well as its earlier version, ChatGPT-3.5.
METHODS: This study reviewed 1500 questions from the Plastic Surgery In-service Examinations administered from 2018 to 2023. After excluding image-based, unscored, and inconclusive questions, 1292 questions were analyzed. The question stem and each multiple-choice answer option were entered verbatim into ChatGPT-4.
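The exclusion-and-scoring protocol described above can be sketched as a short script. The question records, answer keys, and model answers below are hypothetical placeholders, not the study's data; the study's actual pipeline is not published in this abstract.

```python
# Sketch of the exclusion-and-scoring protocol: drop image-based, unscored,
# and inconclusive questions, then compute the fraction answered correctly.

def score_exam(questions, model_answers):
    """Return (number of included questions, fraction correct)."""
    included = [q for q in questions
                if not q["image_based"] and q["scored"] and q["conclusive"]]
    correct = sum(1 for q in included
                  if model_answers.get(q["id"]) == q["answer_key"])
    return len(included), correct / len(included)

# Toy example with made-up questions (not the study's data):
questions = [
    {"id": 1, "image_based": False, "scored": True, "conclusive": True, "answer_key": "B"},
    {"id": 2, "image_based": True,  "scored": True, "conclusive": True, "answer_key": "C"},
    {"id": 3, "image_based": False, "scored": True, "conclusive": True, "answer_key": "D"},
]
model_answers = {1: "B", 3: "A"}
n_included, accuracy = score_exam(questions, model_answers)
print(n_included, accuracy)  # 2 0.5  (question 2 excluded as image-based)
```

In the study itself, this filtering step reduced 1500 questions to the 1292 that were analyzed.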
RESULTS: ChatGPT-4 correctly answered 961 (74.4%) of the included questions. Performance by section was highest in core surgical principles (79.1% correct) and lowest in craniomaxillofacial (69.1%). ChatGPT-4 ranked between the 61st and 97th percentiles compared with all residents. ChatGPT-4 also significantly outperformed ChatGPT-3.5 on the 2018-2022 examinations (P < 0.001): ChatGPT-3.5 averaged 55.5% correct, whereas ChatGPT-4 averaged 74%, a mean difference of 18.54 percentage points. In 2021, ChatGPT-3.5 ranked in the 23rd percentile of all residents, whereas ChatGPT-4 ranked in the 97th percentile. ChatGPT-4 outperformed 80.7% of residents on average and scored above the 97th percentile among first-year residents. Its performance was comparable with that of sixth-year integrated residents, ranking in the 55.7th percentile on average. These results show significant improvements in ChatGPT-4's application of medical knowledge within six months of ChatGPT-3.5's release.
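The headline figures above can be checked with simple arithmetic from the reported counts. Note that the reported mean difference of 18.54 presumably reflects unrounded per-examination means; the rounded means quoted in the abstract give 18.5.

```python
# Overall accuracy from the reported counts: 961 correct of 1292 included.
correct, included = 961, 1292
accuracy_pct = round(correct / included * 100, 1)
print(accuracy_pct)  # 74.4

# Difference of the rounded 2018-2022 mean scores (GPT-4 vs GPT-3.5).
diff = round(74.0 - 55.5, 1)
print(diff)  # 18.5
```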
CONCLUSION: This study demonstrates ChatGPT-4's rapid advancement, progressing from a first-year resident's level to surpassing independent residents and matching a sixth-year resident's proficiency.
PMID:39239234 | PMC:PMC11377087 | DOI:10.1097/GOX.0000000000006136
ChatGPT-4 Surpasses Residents: A Study of Artificial Intelligence Competency in Plastic Surgery In-service Examinations and Its Advancements from ChatGPT-3.5