Learning fair predictors with Sensitive Subspace Robustness

06/28/2019
by Mikhail Yurochkin, et al.

We consider an approach to training machine learning systems that are fair in the sense that their performance is invariant under certain perturbations to the features. For example, the output of a resume screening system should be invariant under changes to the applicant's name or a swap of gender pronouns. We connect this intuitive notion of algorithmic fairness to individual fairness and study how to certify ML algorithms as algorithmically fair. We also demonstrate the effectiveness of our approach on three machine learning tasks that are susceptible to gender and racial biases.
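To make the invariance notion concrete, here is a minimal toy sketch (not the paper's actual training algorithm): a predictor is probed with perturbations of an input along a hypothetical "sensitive" direction, and the worst-case change in its output is measured. All variable names and the choice of a linear model are illustrative assumptions.

```python
import numpy as np

# Toy illustration of the invariance idea from the abstract: a predictor's
# output should not change when a feature vector is perturbed along a
# "sensitive" direction (e.g., one encoding gender). This is a hypothetical
# sketch, not the authors' method.

rng = np.random.default_rng(0)

d = 10                      # feature dimension
w = rng.normal(size=d)      # weights of a toy linear predictor
sensitive_dir = np.zeros(d)
sensitive_dir[0] = 1.0      # suppose feature 0 encodes a sensitive attribute


def predict(x, w):
    """Toy linear score; stands in for any ML model."""
    return x @ w


def max_sensitive_gap(x, w, v, radius=1.0, n_steps=50):
    """Largest change in the prediction when x is perturbed along the
    sensitive direction v with magnitude at most `radius`."""
    ts = np.linspace(-radius, radius, n_steps)
    scores = np.array([predict(x + t * v, w) for t in ts])
    return scores.max() - scores.min()


x = rng.normal(size=d)
print(f"gap along sensitive direction: {max_sensitive_gap(x, w, sensitive_dir):.3f}")

# One crude way to enforce invariance for this toy linear model: project the
# weights onto the orthogonal complement of the sensitive direction, so the
# prediction no longer depends on movement along it.
w_fair = w - (w @ sensitive_dir) * sensitive_dir
print(f"gap after projection: {max_sensitive_gap(x, w_fair, sensitive_dir):.3f}")
```

The projection step is only the simplest possible fix for a linear model; the paper's contribution is a robust training and certification approach that handles this kind of sensitive-subspace perturbation for general ML models.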
