
The potential and pitfalls of using a large language model such as ChatGPT, GPT-4, or LLaMA as a clinical assistant


J Am Med Inform Assoc. 2024 Jul 17:ocae184. doi: 10.1093/jamia/ocae184. Online ahead of print.

ABSTRACT

OBJECTIVES: This study aims to evaluate the utility of large language models (LLMs) in healthcare, focusing on their applications in enhancing patient care through improved diagnostic and decision-making processes, and on their role as ancillary tools for healthcare professionals.

MATERIALS AND METHODS: We evaluated ChatGPT, GPT-4, and LLaMA in identifying patients with specific diseases using gold-labeled Electronic Health Records (EHRs) from the MIMIC-III database, covering two prevalent diseases, Chronic Obstructive Pulmonary Disease (COPD) and Chronic Kidney Disease (CKD), along with the rare condition Primary Biliary Cirrhosis (PBC) and the hard-to-diagnose condition cancer cachexia.
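
The abstract does not show the prompts used, but the task described is essentially a per-note yes/no classification over EHR text. The sketch below is a hypothetical illustration (not the authors' code): the function name `build_messages`, the system instruction, and the demo note are all assumptions, and the optional `shots` argument shows how few-shot examples could be prepended to turn a zero-shot prompt into a few-shot one. The resulting messages could be sent to any chat-completion endpoint.

```python
# Minimal sketch (illustrative only): framing patient identification as a
# yes/no classification prompt over a discharge note, zero-shot or few-shot.
from typing import Sequence, Tuple

def build_messages(note: str,
                   disease: str = "Chronic Obstructive Pulmonary Disease (COPD)",
                   shots: Sequence[Tuple[str, str]] = ()) -> list:
    """Build chat messages asking an LLM whether the note indicates the disease.

    `shots` is an optional sequence of (example_note, "Yes"/"No") pairs for
    few-shot prompting; leaving it empty yields a zero-shot prompt.
    """
    messages = [{
        "role": "system",
        "content": (f"You are a clinical assistant. Given a discharge note, "
                    f"answer 'Yes' or 'No': does the patient have {disease}?"),
    }]
    for example_note, label in shots:          # few-shot exemplars, if any
        messages.append({"role": "user", "content": example_note})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": note})  # the note to classify
    return messages

# Fabricated demo note; real evaluation would iterate over gold-labeled EHRs.
demo_note = "72M, 40 pack-year smoking history, chronic dyspnea, FEV1/FVC 0.55 ..."
for m in build_messages(demo_note):
    print(m["role"].upper(), ":", m["content"])
```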

RESULTS: In patient identification, GPT-4 performed comparably to or better than the corresponding disease-specific machine learning models (F1-score ≥ 85%) on COPD, CKD, and PBC. GPT-4 excelled in the PBC use case, achieving an F1-score 4.23% higher than that of the disease-specific traditional machine learning models. ChatGPT and LLaMA3 performed worse than GPT-4 across all diseases and almost all metrics. Few-shot prompting helped ChatGPT, GPT-4, and LLaMA3 achieve higher precision and specificity, but lower sensitivity and negative predictive value.
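
For readers less familiar with these metrics, the sketch below shows how they relate when the LLM's yes/no calls are tallied against gold labels. The function and the confusion-matrix counts are fabricated for illustration, not taken from the study; they simply make concrete the reported trade-off, where fewer false positives raise precision and specificity while additional misses lower sensitivity and negative predictive value.

```python
# Minimal sketch of the reported metrics, computed from a confusion matrix of
# Yes/No patient-identification calls against gold EHR labels (counts fabricated).
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    precision   = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # recall
    specificity = tn / (tn + fp) if tn + fp else 0.0
    npv         = tn / (tn + fn) if tn + fn else 0.0   # negative predictive value
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return {"precision": precision, "sensitivity": sensitivity,
            "specificity": specificity, "NPV": npv, "F1": f1}

print(classification_metrics(tp=80, fp=10, fn=20, tn=90))
```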

DISCUSSION: The study highlights the potential and limitations of LLMs in healthcare. Issues with errors, limited explanatory capacity, and ethical concerns such as data privacy and model transparency suggest that these models should serve as supplementary tools in clinical settings. Future studies should improve training datasets and model designs so that LLMs can offer better utility in healthcare.

CONCLUSION: The study shows that LLMs have the potential to assist clinicians with tasks such as patient identification, but false positives and false negatives must be mitigated before LLMs are adequate for real-world clinical assistance.

PMID:39018498 | DOI:10.1093/jamia/ocae184
