Valda Seminar: Debabrota Basu

Debabrota Basu, Inria Lille.

11 March 2022, 10:30-11:30.

ENS S16.

Verifying and Explaining Unfairness of Machine Learning Algorithms

In recent years, machine learning (ML) algorithms have been deployed in safety-critical and high-stakes decision-making, where the fairness, or unbiasedness, of the algorithms bears significant importance. Fairness in ML centres on detecting the bias that an ML classifier induces towards certain demographic populations, and on algorithmic solutions that mitigate this bias with respect to different fairness definitions. In this talk, we focus on two problems in fair ML: verification and explanation.

— A fairness verifier computes the bias in the predictions of an ML classifier given the probability distribution of the input features, that is, beyond any finite dataset. To this end, we discuss two solutions to the verification problem, one based on stochastic satisfiability (SSAT) and another based on the stochastic subset-sum problem (S3P). Our SSAT-based framework can verify any classifier represented as a Boolean formula. In contrast, our S3P-based framework provides a scalable solution for linear classifiers. Both solutions overcome challenges in earlier SMT- and sampling-based verifiers by verifying multiple sensitive features while accounting for correlated features represented as a Bayesian network.
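To make the verification problem concrete, here is a minimal toy sketch, not the SSAT or S3P frameworks themselves: for a hypothetical classifier given as a Boolean formula over a sensitive feature `s` and two other features, the bias (statistical parity difference) is computed exactly from the feature distribution by enumerating assignments, rather than from a finite dataset. The classifier and the marginal probabilities are invented for illustration, and the features are assumed independent (no Bayesian network).

```python
from itertools import product

# Hypothetical classifier as a Boolean formula: (x1 AND x2) OR (s AND x1),
# where s is the sensitive feature. Purely illustrative.
def classifier(s, x1, x2):
    return (x1 and x2) or (s and x1)

# Assumed marginal probabilities of each Boolean feature being 1.
p = {"x1": 0.7, "x2": 0.4}

def positive_rate(s_value):
    """P(classifier = 1 | s = s_value), summing over the non-sensitive features."""
    total = 0.0
    for x1, x2 in product([0, 1], repeat=2):
        w = (p["x1"] if x1 else 1 - p["x1"]) * (p["x2"] if x2 else 1 - p["x2"])
        if classifier(s_value, x1, x2):
            total += w
    return total

# Statistical parity difference: gap in positive-prediction rates between groups.
spd = positive_rate(1) - positive_rate(0)
print(f"P(Y=1|s=1) = {positive_rate(1):.2f}, "
      f"P(Y=1|s=0) = {positive_rate(0):.2f}, SPD = {spd:.2f}")
```

The brute-force enumeration is exponential in the number of features; the point of the SSAT- and S3P-based solutions discussed in the talk is precisely to avoid this blow-up.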

— Our ongoing research is on explaining the unfairness of a classifier by attributing it among the input features and their interactions. To this end, we discuss a model-agnostic solution built on global sensitivity analysis and variance decomposition. The resulting explanation framework can identify influential features and feature interactions that potentially induce unfairness in an ML classifier.
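As a rough illustration of the variance-decomposition idea, the sketch below estimates first-order Sobol indices for a toy model using a standard pick-freeze Monte Carlo estimator. In the talk's setting such a decomposition would be applied to a fairness-related quantity; here, a plain model output stands in, and both the model and the uniform input distribution are assumptions made for the example.

```python
import random

# Toy model: x1 acts alone, while x2 and x3 matter only through
# their interaction, so their first-order indices are small.
def model(x1, x2, x3):
    return x1 + 0.5 * x2 * x3

def sobol_first_order(model, dim, n=50_000, seed=0):
    """Estimate first-order Sobol indices via a pick-freeze scheme
    with i.i.d. uniform [0, 1] inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [model(*a) for a in A]
    yB = [model(*b) for b in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        # "Freeze" all columns from B except column i, taken from A:
        # outputs then share only x_i with yA, isolating its main effect.
        yABi = [model(*[a[j] if j == i else b[j] for j in range(dim)])
                for a, b in zip(A, B)]
        s_i = sum(ya * (yabi - yb)
                  for ya, yabi, yb in zip(yA, yABi, yB)) / (n * var)
        indices.append(s_i)
    return indices

s1, s2, s3 = sobol_first_order(model, dim=3)
print(f"S1 = {s1:.2f}, S2 = {s2:.2f}, S3 = {s3:.2f}")
```

Because the first-order indices of `x2` and `x3` stay small while the total variance they contribute does not, the gap exposes their interaction, which is the kind of signal the explanation framework uses to flag feature interactions behind unfairness.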

The relevant articles on fairness verification are available at: https://arxiv.org/abs/2009.06516 and https://arxiv.org/abs/2109.09447.
