EvoBA: An Evolution Strategy as a Strong Baseline for Black-Box Adversarial Attacks

07/12/2021
by Andrei Ilie, et al.

Recent work has shown how easily white-box adversarial attacks can be applied to state-of-the-art image classifiers. However, real-life scenarios more closely resemble black-box conditions, which lack transparency and usually impose hard, natural constraints on the query budget. We propose EvoBA, a black-box adversarial attack based on a surprisingly simple evolutionary search strategy. EvoBA is query-efficient, minimizes L_0 adversarial perturbations, and does not require any form of training. EvoBA demonstrates efficiency and efficacy through results that are in line with much more complex state-of-the-art black-box attacks, such as AutoZOOM. It is more query-efficient than SimBA, a simple and powerful baseline black-box attack, while having a similar level of complexity. Therefore, we propose it both as a new strong baseline for black-box adversarial attacks and as a fast, general tool for gaining empirical insight into how robust image classifiers are with respect to L_0 adversarial perturbations. There exist fast and reliable L_2 black-box attacks, such as SimBA, and L_∞ black-box attacks, such as DeepSearch. We propose EvoBA as a query-efficient L_0 black-box adversarial attack which, together with the aforementioned methods, can serve as a generic tool for assessing the empirical robustness of image classifiers. The main advantages of such methods are that they run fast, are query-efficient, and can easily be integrated into image-classifier development pipelines. While our attack minimizes the L_0 adversarial perturbation, we also report L_2, and note that we compare favorably to the state-of-the-art L_2 black-box attack, AutoZOOM, and to the strong L_2 baseline, SimBA.
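To make the approach concrete, the sketch below outlines a minimal (1+λ) evolution strategy of the kind EvoBA builds on: each generation mutates a handful of random pixels in the current image, queries the black-box model, and keeps the offspring that most reduces the probability of the true class, which keeps the number of modified pixels (the L_0 perturbation) small. This is an illustrative sketch, not the authors' implementation; the `query_probs` interface, parameter names, and defaults are all assumptions.

```python
import numpy as np

def evo_attack(image, true_label, query_probs, generations=1000,
               offspring=20, pixels_per_step=1, seed=0):
    """Illustrative (1+lambda) evolution-strategy attack in the spirit of EvoBA.

    image:        HxWxC uint8 array with values in [0, 255]
    true_label:   index of the correct class
    query_probs:  hypothetical black-box interface, image -> class-probability
                  vector (the only access the attacker has to the model)
    Returns an adversarial image, or None if the query budget is exhausted.
    """
    rng = np.random.default_rng(seed)
    parent = image.copy()
    parent_fitness = query_probs(parent)[true_label]  # probability to minimize

    for _ in range(generations):
        best_child, best_fitness = None, parent_fitness
        for _ in range(offspring):
            child = parent.copy()
            # Mutate a few random pixels to random values: each generation
            # changes only pixels_per_step pixels, keeping L_0 small.
            for _ in range(pixels_per_step):
                y = rng.integers(child.shape[0])
                x = rng.integers(child.shape[1])
                child[y, x] = rng.integers(0, 256, size=child.shape[2])
            probs = query_probs(child)
            if probs.argmax() != true_label:
                return child  # misclassified: attack succeeded
            if probs[true_label] < best_fitness:
                best_child, best_fitness = child, probs[true_label]
        # Selection: keep the best offspring only if it improves on the parent.
        if best_child is not None:
            parent, parent_fitness = best_child, best_fitness
    return None
```

Note that the attack needs only output probabilities, no gradients and no training, which is why this family of methods fits naturally into a classifier development pipeline as a cheap robustness check.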
