
Project

Fairness in Reinforcement Learning for Allocation Problems. (FWOSB142)

Fairness has become a central concern in the development of
automated decision-making systems for real-world applications.
Decisions in the context of allocation problems, such as loan
granting or vaccine distribution, have a high societal impact and require
careful fairness considerations regarding the affected individuals and
groups. Additionally, such problems can evolve over time. It is therefore
important to understand the long-term effects of decisions in these
settings and to adapt to any potential changes. We will study
fairness in a diverse set of dynamic allocation problems using
reinforcement learning (RL), some with immediate feedback (i.e., the
bandit setting) and others where decisions have a long-term impact or
a sequential aspect.
The main direction we propose here is to augment the initial decision
problem and to consider fairness in addition to the system’s
performance. To this end, we will take a multi-objective RL approach,
which will enable us to reach our overarching objective: to develop
RL techniques that are fair by design and offer a more transparent
perspective on the decision process. To evaluate our methods, we
consider two use cases: epidemic control and fraud detection. These
use cases differ from an RL point of view and involve different
fairness aspects, offering an extensive and diverse empirical
evaluation with knowledge that is transferable to other application
domains (e.g., border control, water distribution).
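To make the multi-objective framing concrete, the sketch below shows one possible (illustrative, not the project's actual method) bandit-style allocation loop in Python, where each decision returns a two-dimensional reward, one component for performance and one for fairness, combined by a simple linear scalarization. The toy reward model, the group setup, and the fairness_weight parameter are assumptions made only for this example.

```python
# Minimal sketch: fairness as an additional objective in a bandit-style
# allocation problem, with a (performance, fairness) vector reward and a
# linear scalarization. All quantities here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_groups = 3                                # groups competing for a scarce resource
fairness_weight = 0.5                       # assumed trade-off weight (not from the project)
true_utility = np.array([0.8, 0.5, 0.3])    # hypothetical per-group success rates
allocations = np.zeros(n_groups)            # how often each group has been served so far

def vector_reward(group: int) -> np.ndarray:
    """Return (performance, fairness) feedback for allocating to `group`."""
    performance = rng.binomial(1, true_utility[group])  # immediate utility signal
    share = allocations / max(allocations.sum(), 1)
    fairness = 1.0 - share[group]                       # reward under-served groups more
    return np.array([performance, fairness])

# Epsilon-greedy loop over the scalarized objective.
estimates = np.zeros((n_groups, 2))   # running mean of (performance, fairness) per group
counts = np.zeros(n_groups)
for t in range(1000):
    if rng.random() < 0.1:
        group = int(rng.integers(n_groups))             # explore
    else:
        scalarized = estimates @ np.array([1.0, fairness_weight])
        group = int(np.argmax(scalarized))              # exploit the combined objective
    r = vector_reward(group)
    counts[group] += 1
    estimates[group] += (r - estimates[group]) / counts[group]
    allocations[group] += 1

print("allocation shares:", allocations / allocations.sum())
print("estimated (performance, fairness) per group:\n", estimates)
```

Raising fairness_weight shifts allocations toward under-served groups at some cost in raw performance, which is the kind of explicit, tunable trade-off a multi-objective formulation makes transparent.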
Date: 1 Nov 2022 → Today
Keywords: Reinforcement Learning, Fairness by design, Multi-Objective Reinforcement Learning
Disciplines: Machine learning and decision making, Artificial intelligence not elsewhere classified