J Clin Psychol. 2025 May 9. doi: 10.1002/jclp.23808. Online ahead of print.
ABSTRACT
OBJECTIVE: This study aimed to evaluate the performance and proof-of-concept of psychological first aid (PFA) provided by two AI chatbots, ChatGPT-4 and Gemini.
METHODS: A mixed-methods cross-sectional analysis was conducted using validated PFA scenarios from the Institute for Disaster Mental Health. Five scenarios representing different disaster contexts were selected. Data were collected by prompting both chatbots to perform PFA based on these scenarios. Quantitative performance was assessed against the PFA principles of Look, Listen, and Link, with scores assigned using the IFRC's PFA scoring template. Qualitative analysis involved content analysis for AI hallucinations, response coding, and thematic analysis to identify key themes and subthemes.
RESULTS: ChatGPT-4 outperformed Gemini, achieving an overall score of 90% (CI: 86%-93%) compared to Gemini's 73% (CI: 67%-79%), a statistically significant difference (p = 0.01). In the Look domain, ChatGPT-4 scored higher (p = 0.02), while both performed equally in the Listen and Link domains. The content analysis of AI hallucinations revealed a relative frequency of 18.4% (CI: 12%-25%) for ChatGPT-4 versus 50.0% (CI: 26.6%-71.3%) for Gemini (p < 0.01). Five themes emerged from the qualitative analysis: Look, Listen, Link, Professionalism, and Mental Health and Psychosocial Support.
CONCLUSION: ChatGPT-4 demonstrated superior performance in providing PFA compared to Gemini. While AI chatbots show potential as supportive tools for PFA providers, concerns regarding AI hallucinations highlight the need for cautious implementation. Further research is necessary to enhance the reliability and safety of AI-assisted PFA, particularly by eliminating hallucinations, and to integrate the current advances in voice-based chatbot functionality.
PMID:40347026 | DOI:10.1002/jclp.23808