Attesting Biases and Discrimination using Language Semantics

09/10/2019
by Xavier Ferrer Aran, et al.

AI agents are increasingly deployed to make automated decisions that affect our lives on a daily basis. It is imperative to ensure that these systems embed ethical principles and respect human values. We focus on how we can attest to whether AI agents treat users fairly, without discriminating against particular individuals or groups, through biases in language. In particular, we discuss human unconscious biases, how they are embedded in language, and how AI systems inherit those biases by learning from and processing human language. We then outline a roadmap for future research to better understand and attest to problematic AI biases derived from language.
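One common way such language biases are made measurable, in work related to the themes above, is by comparing how close words sit to one another in a word-embedding space. The sketch below is purely illustrative: the four-dimensional vectors are hypothetical toy values (real analyses use embeddings trained on large corpora, e.g. word2vec or GloVe), and the association score is a simplified WEAT-style difference of mean cosine similarities, not the authors' own method.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional embeddings with hypothetical values, chosen only to
# illustrate the idea; real word vectors come from models trained on text.
emb = {
    "engineer": np.array([0.9, 0.1, 0.3, 0.2]),
    "nurse":    np.array([0.2, 0.8, 0.4, 0.1]),
    "he":       np.array([0.8, 0.2, 0.3, 0.3]),
    "she":      np.array([0.3, 0.9, 0.4, 0.2]),
}

def association(word, attr_a, attr_b):
    """Mean similarity of `word` to attribute set A minus set B.
    A positive score means the word sits closer to A than to B."""
    sim_a = np.mean([cosine(emb[word], emb[a]) for a in attr_a])
    sim_b = np.mean([cosine(emb[word], emb[b]) for b in attr_b])
    return sim_a - sim_b

print(association("engineer", ["he"], ["she"]))  # positive: skews toward "he"
print(association("nurse", ["he"], ["she"]))     # negative: skews toward "she"
```

With these toy vectors, "engineer" scores positive (closer to "he") and "nurse" scores negative (closer to "she"), mirroring the kind of occupational gender bias that embeddings trained on human text have been shown to absorb.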
