The Linguistic Blind Spot of Value-Aligned Agency, Natural and Artificial

07/02/2022
by Travis LaCroix

The value-alignment problem for artificial intelligence (AI) asks how we can ensure that the 'values' (i.e., objective functions) of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication (natural language) is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems or, more loftily, to design robustly beneficial or ethical artificial agents.
