Version 1: Received: 15 September 2023 / Approved: 15 September 2023 / Online: 18 September 2023 (13:37:48 CEST)
How to cite:
Wojcik, S.; Rulkiewicz, A.; Pruszczyk, P.; Lisik, W.; Poboży, M.; Pilchowska, I.; Domienik-Karlowicz, J. Beyond Human Understanding: Benchmarking Language Models for Polish Cardiology Expertise. Preprints 2023, 2023091100. https://doi.org/10.20944/preprints202309.1100.v1
APA Style
Wojcik, S., Rulkiewicz, A., Pruszczyk, P., Lisik, W., Poboży, M., Pilchowska, I., & Domienik-Karlowicz, J. (2023). Beyond Human Understanding: Benchmarking Language Models for Polish Cardiology Expertise. Preprints. https://doi.org/10.20944/preprints202309.1100.v1
Chicago/Turabian Style
Wojcik, S., A. Rulkiewicz, P. Pruszczyk, W. Lisik, M. Poboży, I. Pilchowska, and J. Domienik-Karlowicz. 2023. "Beyond Human Understanding: Benchmarking Language Models for Polish Cardiology Expertise." Preprints. https://doi.org/10.20944/preprints202309.1100.v1
Abstract
The growing dependence on large language models (LLMs) highlights the urgent need to deepen trust in these technologies. Regular, rigorous validation of their expertise, especially in nuanced and intricate scenarios, is essential to ensure their readiness for clinical applications. Our study pioneers the exploration of LLM utility in the field of cardiology. We stand at the cusp of a transformative era in which mature AI and LLMs, notably ChatGPT, GPT-4, and Google Bard, are poised to influence healthcare significantly. Recently, we put three available LLMs, OpenAI's ChatGPT-3.5 and GPT-4.0 and Google's Bard, to the test against the Polish medical specialization licensing exam (PES). The exam covers the scope of completed specialist training, focusing on diagnostic and therapeutic procedures and excluding invasive medical procedures and interventions. In our analysis, GPT-4 consistently outperformed the others, ranking first, with Google Bard and ChatGPT-3.5 following, respectively. The performance metrics underscore GPT-4's notable potential in medical applications. Given a score improvement of over 23.5% between two AI models released just four months apart, clinicians must stay informed and up to date about these rapidly evolving tools and their potential applications to clinical practice. Our results provide a snapshot of the current capabilities of these models, highlighting the nuanced performance differences when confronted with identical questions.
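The abstract quantifies the gap between models as a score improvement of over 23.5% but does not give the underlying exam scores here. As a minimal illustrative sketch (not the authors' evaluation code), the comparison reduces to per-model accuracy on a shared multiple-choice question set plus a relative-change calculation; the function names and the scores below are placeholders, not the study's data.

# Illustrative sketch only; placeholder values, not the study's results.
def exam_accuracy(answers: list[str], key: list[str]) -> float:
    """Fraction of a model's multiple-choice answers matching the answer key."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

def relative_improvement(new_score: float, old_score: float) -> float:
    """Relative gain of new_score over old_score, e.g. 0.235 == 23.5%."""
    return (new_score - old_score) / old_score

# Hypothetical accuracies for two models on the same exam:
gpt35_score, gpt4_score = 0.60, 0.75
print(f"{relative_improvement(gpt4_score, gpt35_score):.1%}")  # prints 25.0%

A gain reported this way is relative to the older model's score, so the same percentage can correspond to different absolute point differences depending on the baseline.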
Keywords
ChatGPT; Google Bard; innovations; AI in medicine; health IT; artificial intelligence; large language model; medical education; language processing; virtual teaching assistant
Subject
Public Health and Healthcare, Public Health and Health Services
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.