CIFRE contract with SAP Labs France “Privacy and fairness for ML” (December 2021 – December 2024)
- Participants: Caelin Kaplan, Giovanni Neglia
- Collaborators: Anderson Santana de Oliveira
There are increasing concerns among scholars and the public about bias, discrimination, and fairness in AI and machine learning. Decision support systems may exhibit biases that lead to unfair treatment of some categories of individuals, for instance by systematically assigning a high risk of recidivism in a criminal justice analysis system. Essentially, the analysis of whether an algorithm’s output is fair (e.g., does not disadvantage one group with respect to others) depends on substantial contextual information and often requires human intervention. Several fairness metrics have nonetheless been developed (an illustrative example is given below); they may rely on collecting additional sensitive attributes (e.g., ethnicity), which demands strong privacy guarantees before they can be used in any situation. It is known that differential privacy has the effect of hiding outliers from the data analysis, potentially compounding existing biases in certain situations. This project encompasses the search for strategies to mitigate this tension between privacy and fairness. The PhD thesis of Caelin Kaplan is funded by this project.
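As an illustration of such metrics (a standard example from the fairness literature, not necessarily the criterion adopted in this project), demographic parity requires a classifier’s positive-prediction rate to be independent of the sensitive attribute $A$ (e.g., ethnicity):

\[ \Pr[\hat{Y} = 1 \mid A = a] \;=\; \Pr[\hat{Y} = 1 \mid A = a'] \qquad \text{for all groups } a, a', \]

where $\hat{Y}$ denotes the model’s prediction. Auditing such a criterion requires access to the sensitive attribute $A$, which is precisely what creates the tension with privacy noted above.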
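On the privacy side, the standard definition is recalled here for completeness: a randomized mechanism $M$ is $\varepsilon$-differentially private if, for all pairs of datasets $D, D'$ differing in a single record and all sets of outputs $S$,

\[ \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S]. \]

Since the output distribution must be almost insensitive to any individual record, records from small or atypical groups have little influence on the learned model, which is the mechanism by which differential privacy can hide outliers and compound existing bias.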