Towards Auditing Unsupervised Learning Algorithms and Human Processes For Fairness

09/20/2022
by Ian Davidson et al.

Existing work on fairness typically focuses on making known machine learning algorithms fairer; fair variants of classification, clustering, outlier detection, and other styles of algorithms exist. However, auditing an algorithm's output to determine whether it is fair remains understudied. Prior work has explored the two-group classification setting with binary protected-status variables under standard definitions of statistical parity. Here we build upon the area of auditing by exploring the multi-group setting under more complex definitions of fairness.
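As a point of reference for the kind of audit the abstract describes, the sketch below computes a simple multi-group statistical-parity check over an algorithm's binary output. This is only a minimal illustration, not the paper's method: the function name, the choice of the maximum pairwise rate gap as the parity measure, and the example data are all assumptions introduced here.

```python
# Minimal sketch (assumed, not from the paper): audit binary predictions
# for statistical parity across multiple protected groups.
import numpy as np

def statistical_parity_audit(y_pred, groups):
    """Return each group's positive-outcome rate and the largest pairwise gap.

    y_pred : iterable of 0/1 outcomes (e.g., class labels, cluster membership,
             or outlier flags treated as binary)
    groups : iterable of protected-group labels, one per example
    """
    y_pred = np.asarray(y_pred)
    groups = np.asarray(groups)
    # Positive-outcome rate per protected group
    rates = {g: float(y_pred[groups == g].mean()) for g in np.unique(groups)}
    # A gap of 0 means exact statistical parity across all groups
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical example with three protected groups
y_pred = [1, 0, 1, 1, 0, 0, 1, 0, 1]
groups = ["a", "a", "a", "b", "b", "b", "c", "c", "c"]
rates, gap = statistical_parity_audit(y_pred, groups)
print(rates, gap)
```

In the multi-group setting the audit must summarize many pairwise comparisons at once; the maximum gap used above is one simple choice, and the paper considers richer fairness definitions beyond it.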
