Structured Adversarial Attack: Towards General Implementation and Better Interpretability

08/05/2018
by Kaidi Xu, et al.

When generating adversarial examples to attack deep neural networks (DNNs), the ℓ_p norm of the added perturbation is usually used to measure the similarity between the original image and the adversarial example. However, such adversarial attacks may fail to capture key information hidden in the input. This work develops a more general attack model, i.e., the structured attack, which explores group sparsity in adversarial perturbations by sliding a mask through images, aiming to extract key spatial structures. An ADMM (alternating direction method of multipliers)-based framework is proposed that splits the original problem into a sequence of analytically solvable subproblems and can be generalized to implement other state-of-the-art attacks. Strong group sparsity is achieved in adversarial perturbations even at the same level of ℓ_p-norm distortion as state-of-the-art attacks. Extensive experimental results on MNIST, CIFAR-10 and ImageNet show that our attack can be much stronger (in terms of smaller ℓ_0 distortion) than existing ones, and that the better interpretability afforded by group-sparse structures aids in uncovering the origins of adversarial examples.
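To make the ADMM splitting concrete, here is a minimal Python sketch of the alternating updates for a group-sparse perturbation. It is not the authors' code: the toy linear "classifier", the hinge-style attack loss, the non-overlapping 4x4 pixel groups, and helper names such as group_prox and structured_attack are all illustrative assumptions. The z-update is the analytic group soft-thresholding (the prox of a group-lasso penalty), which is the kind of analytically solvable subproblem the abstract refers to.

```python
# Minimal sketch (assumptions, not the authors' implementation) of ADMM-style
# alternating updates for a group-sparse adversarial perturbation.
import numpy as np

def group_prox(v, thresh, group_size=4, img_side=8):
    """Prox of the group-lasso penalty: soft-threshold each non-overlapping
    group_size x group_size pixel block by its l2 norm (analytic z-update)."""
    v = v.reshape(img_side, img_side)
    out = np.zeros_like(v)
    for i in range(0, img_side, group_size):
        for j in range(0, img_side, group_size):
            block = v[i:i + group_size, j:j + group_size]
            norm = np.linalg.norm(block)
            if norm > thresh:
                out[i:i + group_size, j:j + group_size] = (1 - thresh / norm) * block
    return out.ravel()

def structured_attack(x, w, target=-1.0, lam=0.05, rho=1.0, steps=200, lr=0.1):
    """ADMM loop: gradient step on the attack loss for delta, analytic group
    soft-thresholding for z, then the (scaled) dual update for u."""
    d = np.zeros_like(x)   # perturbation delta
    z = np.zeros_like(x)   # auxiliary variable that carries the group sparsity
    u = np.zeros_like(x)   # scaled dual variable
    for _ in range(steps):
        # delta-update: descend on hinge attack loss + (rho/2)||delta - z + u||^2
        margin = target * (w @ (x + d))        # margin >= 1 means the attack succeeded
        grad_loss = -target * w if margin < 1 else np.zeros_like(x)
        d = d - lr * (grad_loss + rho * (d - z + u))
        # z-update: closed-form prox of the group-lasso term
        z = group_prox(d + u, lam / rho)
        # dual update
        u = u + d - z
    return z  # group-sparse perturbation

# Toy usage: an 8x8 "image" flattened to 64 pixels and a linear decision rule.
rng = np.random.default_rng(0)
x = rng.random(64)
w = rng.standard_normal(64)
delta = structured_attack(x, w)
print("nonzero perturbed pixels:", np.count_nonzero(delta))
```

The abstract states that all subproblems in the proposed framework are analytically solvable; the gradient step used for the delta-update above is only a simplification to keep the sketch short.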
