Unveiling the effectiveness of Chat-GPT 4.0, an artificial intelligence conversational tool, for addressing common patient queries in gastrointestinal endoscopy / Calabrese, Giulio; Maselli, Roberta; Maida, Marcello; Barbaro, Federico; Morais, Rui; Nardone, Olga Maria; Sinagra, Emanuele; Di Mitri, Roberto; Sferrazza, Sandro. - In: IGIE. - ISSN 2949-7086. - 4:1(2025). [10.1016/j.igie.2025.01.012]
Unveiling the effectiveness of Chat-GPT 4.0, an artificial intelligence conversational tool, for addressing common patient queries in gastrointestinal endoscopy
Calabrese, Giulio; Nardone, Olga Maria
2025
Abstract
Background and aims: Chat Generative Pre-Trained Transformer (Chat-GPT) has proven effective in addressing patient inquiries related to gastrointestinal (GI) disease. We aimed to assess the effectiveness and reliability of Chat-GPT in answering common patient queries on GI endoscopy. Methods: Eighteen selected patient queries regarding GI endoscopy were rated on a Likert-type scale by 10 health professionals and 2 non-health professionals on the following features: reliability (1-6), accuracy (1-3), and comprehensibility (1-3). Results: The mean reliability, accuracy, and comprehensibility scores were 5.2 ± 1.7, 2.7 ± 0.4, and 2.9 ± 0.2, respectively. Overall, most answers were rated as having a solid level of reliability (94.4%) and accuracy (100%) and a fair level of comprehensibility (61.1%). The physicians considered the tool adequate for addressing questions related to clinical practice, except for inquiries regarding bowel preparation solutions, medications, and pacemaker management. Conclusions: Chat-GPT 4.0 demonstrated effectiveness in providing patients with informative content about GI endoscopy, although health professional support remains essential for a comprehensive approach.


