Unexplainability and Incomprehensibility of Artificial Intelligence

06/20/2019
by Roman V. Yampolskiy, et al.
Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want, and frequently need, to understand how decisions impacting them are made. Similarly, it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions, and that for the decisions they could explain, people would not understand some of those explanations.
