Rodríguez-Cantelar, M.; Estecha-Garitagoitia, M.; D'Haro, L.F.; Matía, F.; Córdoba, R. Automatic Detection of Inconsistencies and Hierarchical Topic Classification for Open-Domain Chatbots. Appl. Sci. 2023, 13, 9055.
Abstract
Current state-of-the-art (SotA) chatbots can produce high-quality sentences, handle different conversation topics, and sustain longer interactions. Unfortunately, the generated responses depend strongly on the data on which the models were trained, the dialogue history and current turn used to guide the response, the internal decoding mechanisms, and the ranking strategies, among other factors. As a result, the chatbot may give different answers to semantically similar questions, which can be considered a form of hallucination and can cause confusion in long-term interactions. In this paper, we propose a novel methodology consisting of two main phases: (a) hierarchical automatic detection of topics and subtopics in dialogue interactions using a Zero-Shot learning approach, and (b) detection of inconsistent answers using K-Means clustering and the Silhouette coefficient. To evaluate the efficacy of topic and subtopic detection, we used a subset of the DailyDialog dataset and real dialogue interactions gathered during the Alexa Socialbot Grand Challenge 5 (SGC5). The proposed approach can detect up to 18 different topics and 102 subtopics. To detect inconsistencies, we manually generated multiple paraphrased questions and employed several pre-trained SotA chatbot models to generate responses. Our experimental results show weighted F1 scores of 0.34 for topic detection and 0.78 for subtopic detection on DailyDialog, and 81% and 62% accuracy for topic and subtopic classification on SGC5; finally, when predicting the number of distinct responses, we obtained a mean squared error (MSE) of 3.4 for smaller generative models and 4.9 for recent large language models.
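The inconsistency-detection phase described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the chatbot's answers to paraphrased questions have already been turned into sentence embeddings, then clusters them with K-Means and picks the cluster count that maximizes the Silhouette coefficient; more than one well-separated cluster suggests inconsistent answers. The function name and parameters are illustrative.

```python
# Hypothetical sketch of the inconsistency-detection step: cluster
# embeddings of chatbot answers to paraphrased questions with K-Means
# and choose the number of clusters k that maximizes the Silhouette
# coefficient. k > 1 distinct clusters indicates inconsistent answers.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def estimate_num_distinct_responses(embeddings: np.ndarray, max_k: int = 6) -> int:
    """Return the k in [2, max_k] with the highest Silhouette score."""
    best_k, best_score = 2, -1.0
    # Silhouette requires 2 <= k <= n_samples - 1.
    for k in range(2, min(max_k, len(embeddings) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
        score = silhouette_score(embeddings, labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```

For example, if the embeddings of ten responses form two tight, well-separated groups, the function returns 2, i.e., the model produced two semantically different answers to paraphrases of the same question.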
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.