Saliency Methods for Explaining Adversarial Attacks
In this work, we aim to explain the classifications of adversarial images using saliency methods. Saliency methods explain the individual classification decisions of a neural network by creating a saliency map for each input. Existing saliency methods were proposed to explain correct predictions, and recent research shows that many of them fail to do so reliably; notably, Guided Backpropagation (GuidedBP) has been argued to essentially perform (partial) image recovery rather than explanation. In contrast, our numerical analysis shows that the saliency maps created by GuidedBP do contain class-discriminative information. We propose a simple and efficient way to enhance these saliency maps. The resulting enhanced GuidedBP achieves state-of-the-art performance among saliency methods for explaining the classifications of adversarial examples.
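For reference, the sketch below shows how plain Guided Backpropagation, the baseline discussed above, is commonly computed: the gradient of the target logit is taken with respect to the input, while backward hooks on ReLU layers keep only positive gradients. The small PyTorch model and all names here are placeholders of our own, not the paper's network, and the paper's proposed enhancement is not reproduced here since the abstract does not specify it.

```python
import torch
import torch.nn as nn

# Toy classifier used only as a stand-in for a real network (hypothetical).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(8 * 4 * 4, 10),
)
model.eval()

# Guided Backpropagation: during the backward pass, gradients flowing through
# each ReLU are additionally clipped so that only positive gradients survive.
def guided_relu_hook(module, grad_input, grad_output):
    return (torch.clamp(grad_input[0], min=0.0),)

for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.register_full_backward_hook(guided_relu_hook)

def guidedbp_saliency(image: torch.Tensor, target_class: int) -> torch.Tensor:
    """GuidedBP saliency map: gradient of the target logit w.r.t. the input."""
    image = image.clone().requires_grad_(True)
    logits = model(image.unsqueeze(0))
    logits[0, target_class].backward()
    return image.grad.detach()

# Example: saliency map for a random 3x32x32 image and class index 3.
saliency = guidedbp_saliency(torch.rand(3, 32, 32), target_class=3)
print(saliency.shape)  # torch.Size([3, 32, 32])
```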