Explanation from Specification

12/13/2020
by Harish Naik, et al.

Explainable components in XAI algorithms often come from a familiar set of models, such as linear models or decision trees. We formulate an approach in which the type of explanation produced is guided by a specification. Specifications are elicited from the user, possibly through interaction and with contributions from other areas. Areas where a specification could be obtained include forensic, medical, and scientific applications. Providing a menu of possible types of specifications in an area is an exploratory knowledge representation and reasoning task for the algorithm designer, aimed at understanding the possibilities and limitations of efficiently computable modes of explanation. Two examples are discussed: explanations for Bayesian networks using the theory of argumentation, and explanations for graph neural networks. The latter case illustrates the possibility of giving the user a representation formalism for specifying the type of explanation requested, for example, a chemical query language for classifying molecules. The approach is motivated by a theory of explanation in the philosophy of science, and it is related to current debates there on the role of machine learning.
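As a rough illustration of the idea (not the paper's algorithm), the sketch below lets a user-supplied specification select the model class in which an explanation is expressed, realized here as a surrogate fitted to a black-box classifier's predictions. The `explain_from_specification` helper, the `spec` dictionary format, and the choice of scikit-learn models are all assumptions made for this example.

```python
# A minimal sketch (not the paper's algorithm) of specification-guided
# explanation: the user's specification selects the model class in which
# the explanation is expressed, via a surrogate fitted to the black-box
# model's predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text


def explain_from_specification(black_box, X, spec):
    """Fit a surrogate in the model class named by the (hypothetical) spec."""
    y_hat = black_box.predict(X)  # explain the model's behaviour, not the raw labels
    if spec["explanation_type"] == "decision_tree":
        surrogate = DecisionTreeClassifier(max_depth=spec.get("max_depth", 3))
    elif spec["explanation_type"] == "linear":
        surrogate = LogisticRegression(max_iter=1000)
    else:
        raise ValueError(f"unsupported specification: {spec!r}")
    return surrogate.fit(X, y_hat)


X, y = make_classification(n_samples=500, n_features=8, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# A specification as it might be elicited from the user.
spec = {"explanation_type": "decision_tree", "max_depth": 2}
print(export_text(explain_from_specification(black_box, X, spec)))
```

In the paper's second example the specification would instead be expressed in a chemical query language, with the explanation returned as a matching query rather than a surrogate model.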
