Raising the Bar for Certified Adversarial Robustness with Diffusion Models

05/17/2023
by Thomas Altstidl, et al.

Certified defenses against adversarial attacks offer formal guarantees on the robustness of a model, making them more reliable than empirical methods such as adversarial training, whose effectiveness is often later reduced by unseen attacks. Still, the limited certified robustness that is currently achievable has been a bottleneck for their practical adoption. Gowal et al. and Wang et al. have shown that generating additional training data using state-of-the-art diffusion models can considerably improve the robustness of adversarial training. In this work, we demonstrate that a similar approach can substantially improve deterministic certified defenses. In addition, we provide a list of recommendations to scale the robustness of certified training approaches. One of our main insights is that the generalization gap, i.e., the difference between the training and test accuracy of the original model, is a good predictor of the magnitude of the robustness improvement when using additional generated data. Our approach achieves state-of-the-art deterministic robustness certificates on CIFAR-10 for the ℓ_2 (ϵ = 36/255) and ℓ_∞ (ϵ = 8/255) threat models, outperforming the previous best results by +3.95% and +1.39%, respectively. Furthermore, we report similar improvements for CIFAR-100.
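The generalization gap referenced above, i.e., the difference between training and test accuracy, can be computed trivially; the sketch below uses hypothetical accuracy values for illustration (they are not taken from the paper):

```python
def generalization_gap(train_acc: float, test_acc: float) -> float:
    """Difference between training and test accuracy of a model.

    A larger gap suggests (per the paper's insight) more headroom for
    robustness gains from additional diffusion-generated training data.
    """
    return train_acc - test_acc


# Hypothetical example values, not results from the paper:
gap = generalization_gap(train_acc=0.99, test_acc=0.92)
print(f"generalization gap: {gap:.2%}")
```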
