Algorithmic Injustices: Towards a Relational Ethics
It has become trivial to point out that decision-making processes in various social, political, and economic spheres are assisted by automated systems. Improved efficiency, the hallmark of these systems, drives their mass-scale integration into daily life. However, as a robust body of research in the area of algorithmic injustice shows, algorithmic tools embed and perpetuate societal and historical biases and injustices. In particular, a persistent and recurring trend within the literature indicates that society's most vulnerable are disproportionately impacted. When algorithmic injustice and bias are brought to the fore, most of the solutions on offer 1) revolve around technical fixes and 2) do not centre disproportionately impacted groups. This paper zooms out and draws the bigger picture. It 1) argues that concerns surrounding algorithmic decision making and algorithmic injustice require fundamental rethinking above and beyond technical solutions, and 2) outlines a way forward that centres vulnerable groups through the lens of relational ethics.