A statistical approach to detect sensitive features in a group fairness setting

05/11/2023
by Guilherme Dean Pelegrina et al.

The use of machine learning models in decision support systems with high societal impact has raised concerns about unfair (disparate) results for different groups of people. When evaluating such unfair decisions, one generally relies on predefined groups determined by a set of features considered sensitive. However, this approach is subjective and guarantees neither that these features are the only ones that should be considered sensitive nor that they actually entail unfair (disparate) outcomes. In this paper, we propose a preprocessing step that automatically recognizes sensitive features without requiring a trained model to verify unfair results. Our proposal is based on the Hilbert-Schmidt independence criterion (HSIC), which measures the statistical dependence between variable distributions. We hypothesize that if the dependence between the label vector and a candidate sensitive feature is high, then the information provided by this feature will entail disparate performance measures between groups. Our empirical results support this hypothesis and show that several features considered sensitive in the literature do not necessarily entail disparate (unfair) results.
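The core dependence measure can be illustrated with a minimal sketch of a biased empirical HSIC estimate between a candidate feature and the label vector, using Gaussian (RBF) kernels. This is an illustrative implementation of the general HSIC statistic, not the paper's exact pipeline; the kernel choice, bandwidth `sigma`, and the synthetic data are assumptions.

```python
import numpy as np

def rbf_kernel(x, sigma=1.0):
    # Pairwise Gaussian kernel matrix for a 1-D sample vector x.
    sq_dists = (x[:, None] - x[None, :]) ** 2
    return np.exp(-sq_dists / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC estimate: trace(K H L H) / (n - 1)^2."""
    n = len(x)
    K = rbf_kernel(x, sigma)
    L = rbf_kernel(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy check: a feature that determines the label should score much
# higher than an unrelated feature.
rng = np.random.default_rng(0)
labels = rng.normal(size=500)
dependent_feature = (labels > 0).astype(float)   # fully determined by labels
independent_feature = rng.normal(size=500)       # unrelated to labels

print(hsic(labels, dependent_feature), hsic(labels, independent_feature))
```

In the preprocessing step described above, a feature whose HSIC with the label vector exceeds that of clearly unrelated features would be flagged as a candidate sensitive feature.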

Related research

- Taking Advantage of Multitask Learning for Fair Classification (10/19/2018)
- LimeOut: An Ensemble Approach To Improve Process Fairness (06/17/2020)
- Oblivious Data for Fairness with Kernels (02/07/2020)
- Approaching Machine Learning Fairness through Adversarial Network (09/06/2019)
- Towards Threshold Invariant Fair Classification (06/18/2020)
- Kernel Dependence Regularizers and Gaussian Processes with Applications to Algorithmic Fairness (11/11/2019)
- Feature Multi-Selection among Subjective Features (02/18/2013)
