Active Fairness Instead of Unawareness

09/14/2020
by Boris Ruf, et al.

The risk that AI systems could promote discrimination by reproducing and reinforcing unwanted bias in data has been broadly discussed in research and society. Many current legal standards demand the removal of sensitive attributes from data in order to achieve "fairness through unawareness". We argue that this approach is obsolete in the era of big data, where large datasets with highly correlated attributes are common. On the contrary, we propose the active use of sensitive attributes in order to observe and control any kind of discrimination, and thus to arrive at fair results.
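To illustrate what the "active use" of a sensitive attribute can mean in practice, the following minimal Python sketch measures a demographic parity gap, i.e. the difference in positive-prediction rates between two groups, which requires keeping the sensitive attribute available at evaluation time. This is an illustrative example only, not the authors' method; the function and variable names are assumptions.

    import numpy as np

    def demographic_parity_gap(y_pred, sensitive):
        # Hypothetical helper for illustration; names are assumptions, not from the paper.
        # Computes the absolute difference in positive-prediction rates between
        # the two groups defined by a binary sensitive attribute (0/1).
        y_pred = np.asarray(y_pred)
        sensitive = np.asarray(sensitive)
        rate_group_0 = y_pred[sensitive == 0].mean()  # positive rate in group 0
        rate_group_1 = y_pred[sensitive == 1].mean()  # positive rate in group 1
        return abs(rate_group_0 - rate_group_1)

    # Example: a gap near 0 indicates both groups receive positive predictions
    # at similar rates; a large gap signals potential disparate treatment.
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(demographic_parity_gap(y_pred, sensitive))  # prints 0.5

Removing the sensitive attribute would make such a check impossible, while correlated proxy attributes could still encode the same bias; retaining it makes the disparity observable and thus controllable.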
