
Large language models for diabetes training: a prospective study

Title: Large language models for diabetes training: a prospective study
Authors: Li, H; Jiang, Z; Guan, Z; Bao, Y; Liu, Y; Hu, T; Li, J; Liu, R; Wu, L; Cheng, D; Ji, H; Wang, Y; Wang, YX; Cheung, CY; Zheng, Y; Wang, J; Li, Z; Wu, W; Lim, CC; Bee, YM; Tan, HC; Ekinci, EI; Klonoff, DC; Echouffo-Tcheugui, JB; Mathioudakis, N; Corsino, L; Simó, R; Sabanayagam, C; Tan, GSW; Cheng, CY; Wong, TY; Cai, C; Mao, L; Lim, LL; Tham, YC; Sheng, B; Jia, W
Publisher Information: Elsevier BV
Publication Year: 2025
Collection: The University of Melbourne: Digital Repository
Description: Diabetes poses a considerable global health challenge, and levels of diabetes knowledge vary among healthcare professionals, highlighting the importance of diabetes training. Large language models (LLMs) offer new possibilities for diabetes training, but their performance on diabetes-related queries remains uncertain, especially in languages other than English, such as Chinese. We first evaluated the performance of ten LLMs (ChatGPT-3.5, ChatGPT-4.0, Google Bard, LlaMA-7B, LlaMA2-7B, Baidu ERNIE Bot, Ali Tongyi Qianwen, MedGPT, HuatuoGPT, and Chinese LlaMA2-7B) on diabetes-related queries, based on the Chinese National Certificate Examination for Primary Diabetes Care in China (NCE-CPDC) and the English Specialty Certificate Examination in Endocrinology and Diabetes of the Membership of the Royal College of Physicians of the United Kingdom. Second, we assessed the performance of primary care physicians (PCPs) with and without the assistance of ChatGPT-4.0 on the NCE-CPDC examination to ascertain the reliability of LLMs as medical assistants. We found that ChatGPT-4.0 outperformed the other LLMs on the English examination, achieving a passing accuracy of 62.50%, significantly higher than that of Google Bard, LlaMA-7B, and LlaMA2-7B. On the NCE-CPDC examination, ChatGPT-4.0, Ali Tongyi Qianwen, Baidu ERNIE Bot, Google Bard, MedGPT, and ChatGPT-3.5 passed, whereas LlaMA2-7B, HuatuoGPT, Chinese LlaMA2-7B, and LlaMA-7B failed. ChatGPT-4.0 (84.82%) surpassed all PCPs and improved the scores of most PCPs on the NCE-CPDC examination (by 1%-6.13%). In summary, LLMs demonstrated outstanding competence on diabetes-related questions in both Chinese and English and hold great potential to assist future diabetes training for physicians worldwide.
Document Type: article in journal/newspaper
Language: English
ISSN: 2095-9273
Relation: https://hdl.handle.net/11343/366454
Availability: https://hdl.handle.net/11343/366454
Rights: https://creativecommons.org/licenses/by/4.0 ; CC BY
Accession Number: edsbas.7204E610
Database: BASE