Minimum Levels of Interpretability for Artificial Moral Agents

07/02/2023
by   Avish Vijayaraghavan, et al.

As artificial intelligence (AI) models continue to scale up, they are becoming more capable and more integrated into various forms of decision-making systems. For models involved in moral decision-making, also known as artificial moral agents (AMAs), interpretability provides a way to trust and understand the agent's internal reasoning mechanisms for effective use and error correction. In this paper, we provide an overview of this rapidly evolving sub-field of AI interpretability, introduce the concept of the Minimum Level of Interpretability (MLI), and recommend an MLI for various types of agents to aid their safe deployment in real-world settings.
