Nesterov Accelerated Gradient and Scale Invariance for Improving Transferability of Adversarial Examples

08/17/2019
by   Jiadong Lin, et al.

Recent evidence suggests that deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding human-imperceptible perturbations to legitimate examples. However, most existing adversarial attacks generate adversarial examples with weak transferability, making it difficult to evaluate the robustness of DNNs under the challenging black-box setting. To address this issue, we propose two methods to improve the transferability of adversarial examples: the Nesterov momentum iterative fast gradient sign method (N-MI-FGSM) and the scale-invariant attack method (SIM). N-MI-FGSM obtains a better optimizer by applying the idea of Nesterov accelerated gradient to the iterative gradient-based attack, so that each step looks ahead along the accumulated momentum before computing the gradient. SIM leverages the scale-invariant property of DNNs and optimizes the adversarial example over a set of scaled copies of the input image. Further, the two methods can be naturally combined into a stronger attack and used to enhance existing gradient-based attack methods. Empirical results on ImageNet and the NIPS 2017 adversarial competition show that the proposed methods generate adversarial examples with higher transferability than existing competing baselines.
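The combined attack can be sketched in a few lines. Below is a minimal PyTorch illustration of the two ideas working together, assuming a pretrained classifier `model`, an input batch `x` with pixel values in [0, 1], and ground-truth labels `y`; the function name `si_ni_fgsm` and the hyperparameter defaults (`eps`, `num_iter`, `mu`, and the number of scale copies `m`) are illustrative choices, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def si_ni_fgsm(model, x, y, eps=16 / 255, num_iter=10, mu=1.0, m=5):
    """Sketch of a Nesterov-momentum, scale-invariant iterative FGSM attack."""
    alpha = eps / num_iter           # per-step size
    g = torch.zeros_like(x)          # accumulated momentum gradient
    x_adv = x.clone().detach()

    for _ in range(num_iter):
        # Nesterov look-ahead: step along the accumulated momentum
        # before computing the gradient at the look-ahead point.
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)

        # Scale-invariant gradient: average gradients over scaled
        # copies x / 2^i of the input, exploiting the (approximate)
        # scale-invariant property of the model's loss.
        grad = torch.zeros_like(x)
        for i in range(m):
            loss = F.cross_entropy(model(x_nes / (2 ** i)), y)
            grad = grad + torch.autograd.grad(loss, x_nes)[0]
        grad = grad / m

        # Momentum update with an L1-normalized gradient, then a sign step.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv + alpha * g.sign()

        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
        x_adv = x_adv.clamp(0, 1).detach()

    return x_adv
```

Setting `m = 1` reduces the sketch to the pure Nesterov-momentum attack, while replacing the look-ahead point with `x_adv` itself recovers a plain momentum iterative FGSM, which makes the two proposed components easy to ablate independently.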
