Towards Fairer Foundation Models: On Measures and Mitigation Strategies
KU Leuven
Thanks to transformer-based foundation models, natural language processing has recently been at the forefront of many AI-related innovations. These models learn the distribution of vast amounts of training data exceptionally well and can be reused and applied to a wide range of tasks, which is why they are considered foundational. However, these foundation models also pick up undesirable traits from their training data, such as reproducing ...