Why X rather than Y? Explaining Neural Models' Predictions by Generating Intervention Counterfactual Samples

11/05/2019
by Thai Le, et al.

Even though explainable AI/ML is a very popular topic in the text and computer vision domains, most of the previous literature is not suitable for explaining black-box models' predictions on general data mining datasets. This is because these datasets usually come as high-dimensional feature vectors, which are not as friendly and comprehensible to end users as texts and images. In this paper, we combine the best of both worlds: "explanations by intervention" from causality and "explanations are contrastive" from philosophy and the social sciences, to explain neural models' predictions on tabular datasets. Specifically, given a model's prediction with label X, we propose a novel approach that intervenes on the input to generate a minimally modified contrastive sample classified as Y, which then yields a simple natural-language answer to the question "Why X rather than Y?". We carry out experiments on several datasets of different scales and compare our approach with other baselines on three criteria: fidelity, reasonableness, and explainability.
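The abstract does not spell out the optimization itself, but the intervention it describes is naturally cast as a minimal-edit counterfactual search. Below is a minimal sketch of such a search, assuming a differentiable PyTorch classifier `model` over tabular feature vectors; the function name `generate_counterfactual` and the hyperparameters `lam`, `lr`, and `steps` are illustrative choices, not the paper's.

```python
import torch
import torch.nn.functional as F

def generate_counterfactual(model, x, target_y, lam=0.1, lr=0.05, steps=500):
    """Search for a minimally modified sample x' = x + delta that the model
    classifies as target_y (the contrastive label Y), starting from the
    original input x (classified as X). x has shape (1, num_features).

    Sketch only: hyperparameters and the L1 sparsity penalty are assumptions,
    not the method from the paper.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_y])
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x + delta)
        # Push the prediction toward Y while keeping the edit small and
        # sparse, so only a few features change.
        loss = F.cross_entropy(logits, target) + lam * delta.abs().sum()
        loss.backward()
        optimizer.step()
    return (x + delta).detach()
```

The nonzero entries of `delta` identify which features had to change, and by how much, to flip the prediction from X to Y; those edits are the raw material for a natural-text answer to "Why X rather than Y?".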
