Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers

06/09/2019
by Hadi Salman, et al.

Recent works have shown the effectiveness of randomized smoothing as a scalable technique for building neural network-based classifiers that are provably robust to ℓ_2-norm adversarial perturbations. In this paper, we employ adversarial training to improve the performance of randomized smoothing. We design an adapted attack for smoothed classifiers, and we show how this attack can be used in an adversarial training setting to boost the provable robustness of smoothed classifiers. We demonstrate through extensive experimentation that our method consistently outperforms all existing provably ℓ_2-robust classifiers by a significant margin on ImageNet and CIFAR-10, establishing the state-of-the-art for provable ℓ_2-defenses. Our code and trained models are available at http://github.com/Hadisalman/smoothing-adversarial.
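For intuition, here is a minimal PyTorch sketch of the two ingredients the abstract describes: Monte Carlo prediction with a randomized-smoothing classifier, and a PGD-style ℓ_2 attack that differentiates through a noise-averaged softmax, in the spirit of the adapted attack on smoothed classifiers. All names and hyperparameters (`smoothed_predict`, `smooth_adv_attack`, `sigma`, `eps`, `n_noise`, etc.) are illustrative assumptions, not the authors' implementation; the real code is in the linked repository.

```python
import torch
import torch.nn.functional as F

def smoothed_predict(model, x, sigma, n_samples=100, num_classes=10):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P(f(x + noise) = c), for a single input x of shape (1, C, H, W).
    (Hypothetical sketch; not the paper's certified prediction procedure.)"""
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        for _ in range(n_samples):
            pred = model(x + sigma * torch.randn_like(x)).argmax(dim=1)
            counts[pred.item()] += 1
    return int(counts.argmax().item())

def smooth_adv_attack(model, x, y, sigma, eps=1.0, steps=10, n_noise=8):
    """PGD-style l2 attack on the *soft* smoothed classifier: each gradient step
    differentiates through a noise-averaged softmax rather than a single forward pass.
    (Illustrative assumption of the adapted attack, not the authors' exact objective.)"""
    delta = torch.zeros_like(x, requires_grad=True)
    step_size = 2.0 * eps / steps
    for _ in range(steps):
        # Monte Carlo estimate of the soft smoothed classifier's output distribution.
        probs = sum(F.softmax(model(x + delta + sigma * torch.randn_like(x)), dim=1)
                    for _ in range(n_noise)) / n_noise
        loss = F.nll_loss(torch.log(probs + 1e-12), y)
        (grad,) = torch.autograd.grad(loss, delta)
        # Ascend along the l2-normalized gradient, then project back onto the eps-ball.
        grad = grad / grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = delta + step_size * grad
        scale = (eps / delta.flatten(1).norm(dim=1).clamp_min(1e-12)).clamp(max=1.0)
        delta = (delta * scale.view(-1, 1, 1, 1)).detach().requires_grad_(True)
    return (x + delta).detach()
```

In this sketch, adversarial training would amount to generating such perturbed inputs on the fly with `smooth_adv_attack` and training the base classifier on them under Gaussian noise, so that the resulting smoothed classifier is harder to attack and certifies larger ℓ_2 radii.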
