When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time

12/18/2017
by David J. Miller, et al.

A significant threat to the recent, wide deployment of machine learning-based systems, including deep neural networks (DNNs), across a host of application domains is adversarial learning (Adv-L) attacks. Our main focus here is on exploits applied against (DNN-based) classifiers at test time. While much work has been devoted to devising attacks that perturb a test pattern (e.g., an image) imperceptibly to a human and yet still induce a change in the classifier's decision, there is a relative paucity of work on defending against such attacks. Moreover, our thesis is that most existing defense approaches "miss the mark", seeking to robustify the classifier so that it makes "correct" decisions on perturbed patterns. While, unlike some prior works, we make explicit the motivation for such approaches, we argue that it is generally much more actionable to detect the attack than to "correctly classify" in the face of it. We hypothesize that, even if human-imperceptible, adversarial perturbations are machine-detectable. We propose a purely unsupervised anomaly detector (AD), based on suitable (null hypothesis) density models for the different DNN layers and a novel Kullback-Leibler "distance" AD test statistic. Tested on the MNIST and CIFAR10 image databases under the prominent attack strategy proposed by Goodfellow et al. [5], our approach achieves compelling ROC AUCs for attack detection: 0.992 on MNIST, 0.957 on noisy MNIST images, and 0.924 on CIFAR10. We also show that a simple detector that counts the number of white regions in the image achieves 0.97 AUC in detecting the attack on MNIST proposed by Papernot et al. [12].
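The detection idea can be illustrated with a minimal sketch. The snippet below assumes that clean-data activations at a chosen DNN layer are modeled with one Gaussian (null hypothesis) density per class, and computes a Kullback-Leibler divergence between the DNN's softmax posterior and a posterior derived from those densities. This is one plausible instantiation of a KL-based AD statistic, not the paper's exact construction; the function names, the Gaussian modeling choice, and the regularization constant are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import softmax

def fit_null_densities(clean_acts, clean_labels, num_classes):
    """Fit one Gaussian null-hypothesis density per class on clean layer activations."""
    models = []
    for c in range(num_classes):
        x = clean_acts[clean_labels == c]
        mean = x.mean(axis=0)
        # Small diagonal regularization keeps the covariance well-conditioned (assumed value).
        cov = np.cov(x, rowvar=False) + 1e-3 * np.eye(x.shape[1])
        models.append(multivariate_normal(mean=mean, cov=cov))
    return models

def kl_ad_statistic(layer_act, dnn_logits, models):
    """KL "distance" between the DNN's softmax posterior and a density-based
    class posterior; large values suggest the input may be attacked."""
    p_dnn = softmax(dnn_logits)
    log_lik = np.array([m.logpdf(layer_act) for m in models])
    p_null = softmax(log_lik)  # uniform class priors assumed
    eps = 1e-12
    return float(np.sum(p_dnn * (np.log(p_dnn + eps) - np.log(p_null + eps))))
```

The simple baseline for the Papernot et al. attack, counting white regions in the image, is likewise easy to sketch. The binarization threshold, connectivity, and decision threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def count_white_regions(image, intensity_thresh=0.5):
    """Count 8-connected components of high-intensity ("white") pixels in a
    grayscale image scaled to [0, 1]."""
    binary = image > intensity_thresh
    _, num_regions = ndimage.label(binary, structure=np.ones((3, 3), dtype=int))
    return num_regions

def flag_if_many_regions(image, region_thresh=3):
    """Salt-like perturbations tend to create extra isolated white blobs, so an
    unusually high region count is flagged as a suspected attack."""
    return count_white_regions(image) > region_thresh
```

In practice, the decision thresholds for both statistics would be calibrated on held-out clean data to achieve a target false-positive rate.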
