Abstract
Generative AI, such as large language models, has undergone rapid development in recent years. As these models become increasingly available to the public, concerns arise that they may perpetuate and amplify harmful biases in downstream applications. Gender stereotypes, whether they take the form of misrepresentation or discrimination, can be harmful and limiting for the individuals they target. Recognizing gender bias as a pervasive societal construct, this paper studies how to uncover and quantify the presence of gender biases in generative language models. In particular, we derive generative AI analogues of three well-known non-discrimination criteria from classification: independence, separation and sufficiency. To demonstrate these criteria in action, we design prompts for each criterion with a focus on occupational gender stereotypes, specifically using a medical test to introduce ground truth in the generative AI context. Our results indicate the presence of occupational gender bias in such conversational language models. Our code is publicly available at https://github.com/sterlie/fairness-criteria-LLM.
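As background (not part of the publication record itself), the three criteria named in the abstract have standard formulations in the classification setting, with sensitive attribute A (e.g., gender), target Y, and prediction Ŷ; the paper derives generative AI analogues of these. A minimal LaTeX sketch of the classical definitions:

```latex
% Standard non-discrimination criteria in the classification setting
% (background only; the paper adapts these to generative language models).
% A = sensitive attribute, Y = ground-truth target, \hat{Y} = model prediction.
\begin{align*}
  \text{Independence:} \quad & \hat{Y} \perp A
    && \text{e.g. } P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1 \mid A=b) \\
  \text{Separation:}   \quad & \hat{Y} \perp A \mid Y
    && \text{equal true/false positive rates across groups} \\
  \text{Sufficiency:}  \quad & Y \perp A \mid \hat{Y}
    && \text{equal predictive values across groups}
\end{align*}
```

Separation and sufficiency both condition on an outcome, which is why a ground-truth signal (here introduced via a medical test) is needed before they can be evaluated in a generative setting.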
Original language | English |
---|---|
Title of host publication | Proceedings of the Fairness and ethics towards transparent AI: facing the chalLEnge through model Debiasing (FAILED) 2024 |
Number of pages | 27 |
Publisher | Springer |
Publication status | Accepted/In press - 2025 |
Event | Fairness and ethics towards transparent AI: facing the chalLEnge through model Debiasing: Workshop at ECCV 2024, Milano, Italy, 29 Sept 2024 → 29 Sept 2024 |
Workshop
Workshop | Fairness and ethics towards transparent AI: facing the chalLEnge through model Debiasing |
---|---|
Country/Territory | Italy |
City | Milano |
Period | 29/09/2024 → 29/09/2024 |