Fair Balance: Mitigating Machine Learning Bias Against Multiple Protected Attributes With Data Balancing
This paper aims to improve machine learning fairness on multiple protected attributes. Machine learning fairness has attracted increasing attention as machine learning models are increasingly used for high-stakes and high-risk decisions. Most existing solutions for machine learning fairness target only one protected attribute (e.g., sex) at a time; they cannot produce a machine learning model that is fair with respect to every protected attribute (e.g., both sex and race) simultaneously. To solve this problem, we propose FairBalance, which balances the distribution of the training data across every protected attribute before training the machine learning model. Our results show that, under the assumption of unbiased ground-truth labels, FairBalance can significantly reduce the bias metrics AOD (Average Odds Difference), EOD (Equal Opportunity Difference), and SPD (Statistical Parity Difference) on every known protected attribute, with little if any damage to prediction performance.
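The abstract does not spell out the balancing scheme itself. Below is a minimal sketch of one plausible pre-processing of this kind, assuming per-sample reweighting by the inverse frequency of each (protected-group, label) cell so that every cell contributes equal total weight; the paper's exact weighting may differ. The helper `balance_weights`, the toy DataFrame, and its column names are hypothetical, for illustration only.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def balance_weights(df, protected_attrs, label_col):
    """Weight each row inversely to the size of its (protected-group, label)
    cell, normalized so the mean weight is 1 and each cell has equal total
    weight. Hypothetical sketch; not the paper's verified algorithm."""
    groups = df.groupby(protected_attrs + [label_col])
    counts = groups[label_col].transform("size")  # size of each row's cell
    return len(df) / (len(groups) * counts)

# Toy data with two protected attributes (sex, race), one feature, one label.
df = pd.DataFrame({
    "sex":   [0, 0, 1, 1, 0, 1, 1, 1],
    "race":  [0, 1, 0, 1, 0, 1, 0, 1],
    "x":     [0.2, 0.5, 0.1, 0.9, 0.4, 0.7, 0.3, 0.8],
    "label": [0, 1, 0, 1, 1, 0, 1, 0],
})

# Balance across BOTH protected attributes at once, then train as usual.
w = balance_weights(df, ["sex", "race"], "label")
model = LogisticRegression().fit(df[["x"]], df["label"], sample_weight=w)
```

Because the weights are computed over the joint groups of all protected attributes, the balancing applies to every attribute simultaneously rather than one at a time, which matches the multi-attribute goal stated in the abstract.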