L1-norm double backpropagation adversarial defense

03/05/2019
by Ismaïla Seck et al.

Adversarial examples are a challenging open problem for deep neural networks. In this paper we propose adding a penalization term that forces the decision function to be flat in some regions of the input space, so that it becomes, at least locally, less sensitive to attacks. Our proposal is theoretically motivated, and a first set of carefully conducted experiments shows that it behaves as expected when used alone and seems promising when coupled with adversarial training.
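The full paper is not reproduced here, but the title and abstract point to the classical double-backpropagation idea with an L1 norm: penalize the L1 norm of the gradient of the network's output with respect to its input, which requires a second differentiation pass through the first gradient. A minimal PyTorch sketch of that generic idea follows; the function name, the weight `lam`, and the choice to penalize the gradient of the cross-entropy loss (rather than, say, the logits) are my own assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def l1_double_backprop_loss(model, x, y, lam=0.1):
    """Cross-entropy plus an L1 penalty on the input gradient.

    Sketch of a double-backpropagation regularizer: the loss is
    differentiated w.r.t. the input with create_graph=True so the
    resulting gradient is itself differentiable, and its L1 norm is
    added to the training objective. `lam` is a hypothetical weight.
    """
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # First backward pass: gradient of the loss w.r.t. the input,
    # kept in the autograd graph so it can be differentiated again.
    (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
    # L1 norm of the input gradient, averaged over the batch.
    penalty = grad_x.abs().flatten(1).sum(dim=1).mean()
    return loss + lam * penalty
```

In a training loop, calling `.backward()` on the returned value differentiates through `grad_x`, which is the second backpropagation pass the name "double backpropagation" refers to. Making this penalty small pushes the input gradient toward zero, i.e. it flattens the decision function locally, as the abstract describes.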
