Measurement and Fairness

12/11/2019
by Abigail Z. Jacobs, et al.

We introduce the language of measurement modeling from the quantitative social sciences as a framework for understanding fairness in computational systems. Computational systems often involve unobservable theoretical constructs, such as "creditworthiness," "teacher quality," or "risk to society," that cannot be measured directly and must instead be inferred from observable properties thought to be related to them—i.e., operationalized via a measurement model. This process introduces the potential for mismatch between the theoretical understanding of the construct purported to be measured and its operationalization. Indeed, we argue that many of the harms discussed in the literature on fairness in computational systems are direct results of such mismatches. Further complicating these discussions is the fact that fairness itself is an unobservable theoretical construct. Moreover, it is an essentially contested construct—i.e., it has many different theoretical understandings depending on the context. We argue that this contestedness underlies recent debates about fairness definitions: disagreements that appear to be about contradictory operationalizations are, in fact, disagreements about different theoretical understandings of the construct itself. By introducing the language of measurement modeling, we provide the computer science community with a process for making explicit and testing assumptions about unobservable theoretical constructs, thereby making it easier to identify, characterize, and even mitigate fairness-related harms.
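To make the idea of an operationalization mismatch more concrete, here is a minimal Python sketch (not from the paper; the proxy names and weights are entirely hypothetical). It shows two different measurement models for the same unobservable construct, "creditworthiness", built from observable proxies. Because the two operationalizations encode different assumptions, they can rank the same individuals differently, which is the kind of construct/operationalization gap the abstract describes.

```python
# Illustrative sketch only: two hypothetical operationalizations of the same
# unobservable construct ("creditworthiness") built from observable proxies.
# All proxy names and weights are made up to show how different measurement
# models can disagree about the same individuals.

from dataclasses import dataclass


@dataclass
class Applicant:
    # Observable properties thought to be related to the construct
    payment_history: float   # fraction of on-time payments, in [0, 1]
    debt_to_income: float    # debt-to-income ratio, lower is better
    years_of_credit: float   # length of credit history, in years


def score_model_a(a: Applicant) -> float:
    """Operationalization A: weights payment history most heavily."""
    return (0.7 * a.payment_history
            - 0.2 * a.debt_to_income
            + 0.1 * min(a.years_of_credit / 20, 1.0))


def score_model_b(a: Applicant) -> float:
    """Operationalization B: weights credit-history length most heavily,
    a proxy that may correlate with age and disadvantage younger applicants."""
    return (0.3 * a.payment_history
            - 0.2 * a.debt_to_income
            + 0.5 * min(a.years_of_credit / 20, 1.0))


applicants = {
    "long_history": Applicant(payment_history=0.85, debt_to_income=0.30, years_of_credit=18),
    "short_history": Applicant(payment_history=0.98, debt_to_income=0.25, years_of_credit=3),
}

for name, a in applicants.items():
    print(name, round(score_model_a(a), 3), round(score_model_b(a), 3))

# The two models rank the applicants in opposite orders: the construct is the
# same, but each measurement model embeds different (and contestable) assumptions.
```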
