Fast Gradient Non-sign Methods

10/25/2021
by Yaya Cheng, et al.

Adversarial attacks have proven successful at fooling DNNs, and among them, gradient-based algorithms have become one of the mainstream approaches. Based on the linearity hypothesis <cit.>, under the ℓ_∞ constraint, applying the sign operation to the gradients is a natural choice for generating perturbations. However, this operation has a side effect: it introduces a directional bias between the real gradients and the perturbations. In other words, current methods leave a gap between the real gradients and the actual noises, which leads to biased and inefficient attacks. Therefore, in this paper, the bias is analyzed theoretically via a Taylor expansion, and a correction, the Fast Gradient Non-sign Method (FGNM), is proposed. Notably, FGNM is a general routine that can seamlessly replace the conventional sign operation in gradient-based attacks at negligible extra computational cost. Extensive experiments demonstrate the effectiveness of our method: it outperforms sign-based attacks by 27.5% at most and 9.5% on average. Our anonymous code is publicly available: <https://git.io/mm-fgnm>.
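To illustrate the idea, the sketch below contrasts a conventional sign-based step with a non-sign step that rescales the raw gradient so its norm matches that of the sign vector. This is a minimal illustration of the scaling idea the abstract describes, not the paper's exact algorithm; the function names and the L2-matching scale factor are assumptions for demonstration.

```python
import numpy as np

def sign_step(grad, eps):
    # Conventional sign-based step (as in FGSM): the perturbation direction
    # is sign(grad), which can deviate from the true gradient direction.
    return eps * np.sign(grad)

def non_sign_step(grad, eps):
    # Hypothetical non-sign step: rescale the raw gradient so its L2 norm
    # matches that of the sign vector. The step magnitude stays comparable
    # to the sign-based step, while the gradient's direction is preserved.
    scale = np.linalg.norm(np.sign(grad)) / (np.linalg.norm(grad) + 1e-12)
    return eps * scale * grad
```

Under this scaling, both steps have (approximately) the same L2 norm, but the non-sign step points along the real gradient instead of its sign, which is the bias the abstract refers to.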
