Evidence
CEUR Workshop Proc. 2023 Mar;3359(ITAH):48-57. Epub 2023 Mar 16.
ABSTRACT
Advances in computational behavior analysis via artificial intelligence (AI) promise to improve mental healthcare services by providing clinicians with tools to assist diagnosis or measurement of treatment outcomes. This potential has spurred an increasing number of studies in which automated pipelines predict diagnoses of mental health conditions. However, a fundamental question remains unanswered: How do the predictions of AI algorithms compare with those of humans? This is a critical question if AI technology is to be used as an assistive tool, because the utility of an AI algorithm would be negligible if it provides little information beyond what clinicians can readily infer. In this paper, we compare the performance of 19 human raters (8 autism experts and 11 non-experts) with that of an AI algorithm in predicting autism diagnosis from short (3-minute) videos of N = 42 participants in a naturalistic conversation. Results show that the AI algorithm achieves an average accuracy of 80.5%, which is comparable to that of clinicians with expertise in autism (83.1%) and clinical research staff without specialized expertise (78.3%). Critically, diagnoses that were inaccurately predicted by most humans, experts and non-experts alike, were typically correctly predicted by AI. Our results highlight the potential of AI as an assistive tool that can augment clinician diagnostic decision-making.
PMID:38037663 | PMC:PMC10687770
Comparison of Human Experts and AI in Predicting Autism from Facial Behavior