SAFE: Saliency-Aware Counterfactual Explanations for DNN-based Automated Driving Systems

07/28/2023
by amir-samadi, et al.

A counterfactual (CF) explainer identifies the minimum modifications to the input that would alter the model's output to its complement. In other words, a CF explainer computes the minimum modifications required to cross the model's decision boundary. Current deep generative CF models often work with user-selected features rather than focusing on the discriminative features of the black-box model. Consequently, such CF examples may not lie near the decision boundary, contradicting the definition of a CF. To address this issue, we propose in this paper a novel approach that leverages saliency maps to generate more informative CF explanations. Source code is available at: https://github.com/Amir-Samadi//Saliency_Aware_CF.
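To make the "minimum modification that crosses the decision boundary" idea concrete, the sketch below runs a generic Wachter-style gradient search on a toy linear classifier. This is not the paper's SAFE method (which uses a deep generative model guided by saliency maps); the model, weights, and parameter values here are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy "black-box": a fixed linear classifier with sigmoid output.
w = np.array([2.0, -1.0])
b = -0.5

def predict_proba(x):
    """Probability of the positive class under the toy linear model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=0.9, lam=10.0, lr=0.05, steps=500):
    """Gradient descent on the standard CF objective:
    minimize  lam * (f(x') - target)^2 + ||x' - x||^2  over x'.
    The first term pushes x' across the decision boundary toward `target`;
    the second keeps the modification minimal."""
    x_cf = x.copy()
    for _ in range(steps):
        p = predict_proba(x_cf)
        # Gradient of the prediction term; d/dx sigmoid(w.x + b) = p*(1-p)*w.
        grad_pred = 2.0 * lam * (p - target) * p * (1.0 - p) * w
        # Gradient of the proximity (L2 distance) term.
        grad_dist = 2.0 * (x_cf - x)
        x_cf -= lr * (grad_pred + grad_dist)
    return x_cf

x = np.array([-1.0, 1.0])          # original input, classified negative
x_cf = counterfactual(x)           # nearby input on the other side of the boundary
print(predict_proba(x), predict_proba(x_cf))
```

The trade-off weight `lam` controls how far past the boundary the CF lands: larger values favor a confident flip, smaller values favor a smaller perturbation, which is exactly the tension the abstract points at when CF examples drift away from the decision boundary.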

