Fairness Deconstructed: A Sociotechnical View of 'Fair' Algorithms in Criminal Justice
Early studies of risk assessment algorithms used in criminal justice revealed widespread racial biases. In response, machine learning researchers have developed methods for fairness, many of which rely on equalizing empirical metrics across protected attributes. Here, I draw on sociotechnical perspectives to delineate the significant gap between fairness in theory and in practice, focusing on criminal justice. I (1) illustrate how social context can undermine analyses that are restricted to an AI system's outputs, and (2) argue that much of the fair ML literature fails to account for epistemological issues with the underlying crime data. Instead of building AI that reifies power imbalances, as risk assessment algorithms do, I ask whether data science can instead be used to understand the root causes of structural marginalization.
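As a concrete illustration of what "equalizing empirical metrics across protected attributes" typically means, the sketch below computes a group-wise false positive rate gap, one common criterion in the fair ML literature. This is a generic, assumed example rather than code or data from the paper; the function names and toy arrays are placeholders.

import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of truly negative cases that the model flags as positive."""
    negatives = y_true == 0
    return float(np.mean(y_pred[negatives])) if negatives.any() else float("nan")

def fpr_gap(y_true: np.ndarray, y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in false positive rates between two groups (coded 0 and 1)."""
    rates = [false_positive_rate(y_true[group == g], y_pred[group == g]) for g in (0, 1)]
    return abs(rates[0] - rates[1])

# Toy placeholder data: a predictor satisfying this criterion drives the gap toward 0,
# but the metric says nothing about how the recorded outcomes themselves were produced.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])   # recorded (re)arrest outcomes
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1])   # model's risk flags
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute
print(f"FPR gap across groups: {fpr_gap(y_true, y_pred, group):.2f}")

Such output-level parity checks are exactly the kind of analysis the abstract argues can be undermined by social context and by epistemological problems with the crime data used to define y_true.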