Advocating for Multiple Defense Strategies against Adversarial Examples

12/04/2020
by Alexandre Araujo, et al.

It has been empirically observed that defense mechanisms designed to protect neural networks against ℓ_∞ adversarial examples offer poor performance against ℓ_2 adversarial examples, and vice versa. In this paper, we conduct a geometrical analysis that validates this observation. We then provide a number of empirical insights to illustrate the effect of this phenomenon in practice. We also review some of the existing defense mechanisms that attempt to defend against multiple attacks by mixing defense strategies. Based on our numerical experiments, we discuss the relevance of this approach and state open questions for the adversarial examples community.
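The ℓ_∞/ℓ_2 trade-off discussed above concerns attacks constrained to different ℓ_p balls. As a minimal sketch (not the authors' code), the projected gradient descent (PGD) attack below illustrates how the two threat models differ only in the gradient step and the projection of the perturbation back onto the ε-ball. It assumes a PyTorch classifier and 4-D NCHW image batches; `model`, `eps`, `alpha`, and `steps` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps, norm="linf"):
    """Generic PGD sketch: the attack differs between threat models only
    in how the perturbation `delta` is stepped and projected back into
    the eps-ball after each gradient ascent step."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            g = delta.grad
            if norm == "linf":
                # Steepest ascent under l_inf, then clip each coordinate to [-eps, eps].
                delta += alpha * g.sign()
                delta.clamp_(-eps, eps)
            else:  # l2
                # Normalized gradient step, then project onto the l2 ball of radius eps.
                g_norm = g.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
                delta += alpha * g / g_norm
                d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
                delta *= (eps / d_norm).clamp(max=1.0)
        delta.grad.zero_()
    return (x + delta).detach()
```

Because the ℓ_∞ step perturbs every coordinate by the same magnitude while the ℓ_2 step concentrates the budget along the gradient direction, a model hardened against one geometry need not be robust to the other, which is the phenomenon the paper analyzes.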
