Fairness Deconstructed: A Sociotechnical View of 'Fair' Algorithms in Criminal Justice

06/25/2021
by   Rajiv Movva, et al.

Early studies of risk assessment algorithms used in criminal justice revealed widespread racial biases. In response, machine learning researchers have developed methods for fairness, many of which rely on equalizing empirical metrics across protected attributes. Here, I draw on sociotechnical perspectives to delineate the significant gap between fairness in theory and in practice, focusing on criminal justice. I (1) illustrate how social context can undermine analyses that are restricted to an AI system's outputs, and (2) argue that much of the fair ML literature fails to account for epistemological issues with the underlying crime data. Instead of building AI that reifies power imbalances, like risk assessment algorithms, I ask whether data science can instead be used to understand the root causes of structural marginalization.
