Article
Version 1
Preserved in Portico. This version is not peer-reviewed.
Mitigating Large Language Model Bias: Automated Dataset Augmentation and Prejudice Quantification
Received: 5 May 2024 / Approved: 6 May 2024 / Online: 6 May 2024 (07:24:03 CEST)
A peer-reviewed article of this Preprint also exists.
Mondal, D.; Lipizzi, C. Mitigating Large Language Model Bias: Automated Dataset Augmentation and Prejudice Quantification. Computers 2024, 13, 141.
Abstract
Despite the growing capabilities of large language models, concerns remain about the biases they develop. In this paper, we propose a novel, automated mechanism for debiasing through specified dataset augmentation, viewed through the lens of bias producers, which can be useful in a variety of industries, especially "restricted" ones with limited data. We consider that bias can arise both from the intrinsic model architecture and from dataset quality; these two aspects are evaluated using two metrics we created. We show that our dataset augmentation algorithm reduces bias as measured by our metrics.
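The abstract does not detail the augmentation algorithm itself. As a generic, hypothetical illustration of bias-oriented dataset augmentation (not the authors' method), the sketch below augments a corpus with counterfactual variants in which demographic terms are swapped, so that a model trained on the result sees more balanced contexts; the term pairs and function names are illustrative assumptions.

```python
# Hypothetical sketch of counterfactual dataset augmentation for debiasing.
# This is NOT the paper's algorithm; it only illustrates the general idea of
# augmenting text data to balance demographic contexts. Term pairs are
# illustrative and case handling is deliberately simplistic.

SWAP_PAIRS = {"he": "she", "she": "he", "him": "her", "her": "him",
              "man": "woman", "woman": "man"}

def counterfactual_variants(sentence: str) -> list[str]:
    """Return the sentence plus, if any term matched, a swapped variant."""
    tokens = sentence.split()
    swapped = [SWAP_PAIRS.get(t.lower(), t) for t in tokens]
    variant = " ".join(swapped)
    return [sentence] if variant == sentence else [sentence, variant]

def augment(corpus: list[str]) -> list[str]:
    """Augment each sentence with its counterfactual variant (if it differs)."""
    return [v for s in corpus for v in counterfactual_variants(s)]

corpus = ["the doctor said he would call", "weather is nice"]
print(augment(corpus))
# adds "the doctor said she would call" alongside the originals
```

A real pipeline would of course need morphology-aware and context-aware swapping (e.g. "her" maps to either "him" or "his"), which is where an automated mechanism like the one the paper proposes becomes necessary.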
Keywords
natural language processing; large language models; dataset augmentation; computational social science
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.