Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples

11/20/2018
by Hajime Ono, et al.

How can we make machine learning provably robust against adversarial examples in a scalable way? Because certified defense methods, which guarantee ϵ-robustness, consume substantial computational resources, they can achieve only a small degree of robustness in practice. Lipschitz margin training (LMT) is a scalable certified defense, but it too achieves only limited robustness due to over-regularization. How can we make certified defenses more efficient? We present LC-LMT, a lightweight Lipschitz margin training that addresses this problem. Our method has the following properties: (a) efficiency: it achieves ϵ-robustness in early epochs, and (b) robustness: it has the potential to attain higher robustness than LMT. In the evaluation, we demonstrate the benefits of the proposed method. LC-LMT achieves the required robustness more than 30 epochs earlier than LMT on MNIST, and shows more than 90% accuracy on both legitimate and adversarial inputs.
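To illustrate the Lipschitz-margin idea the abstract builds on, here is a minimal, hypothetical PyTorch sketch, not the authors' code: during training, sqrt(2) * L * ϵ is added to every non-target logit, where L is an upper bound on the network's Lipschitz constant, so the loss becomes small only once the logit margin certifies ϵ-robustness. The feed-forward architecture, the crude spectral-norm-product bound, and names such as lipschitz_upper_bound and lmt_logits are our assumptions, not taken from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def lipschitz_upper_bound(model: nn.Sequential) -> torch.Tensor:
        """Crude global L2 Lipschitz bound: product of the spectral norms
        of the linear layers (ReLU is 1-Lipschitz)."""
        bound = torch.tensor(1.0)
        for layer in model:
            if isinstance(layer, nn.Linear):
                # largest singular value of the weight matrix
                bound = bound * torch.linalg.matrix_norm(layer.weight, ord=2)
        return bound

    def lmt_logits(logits, targets, lip, eps):
        """Add sqrt(2) * L * eps to every non-target logit so that the
        cross-entropy loss is small only when the margin already
        certifies eps-robustness."""
        margin = (2.0 ** 0.5) * lip * eps
        mask = torch.ones_like(logits)
        mask.scatter_(1, targets.unsqueeze(1), 0.0)  # leave the target logit alone
        return logits + margin * mask

    # Usage: train on the inflated logits instead of the raw ones.
    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    lip = lipschitz_upper_bound(model)
    loss = F.cross_entropy(lmt_logits(model(x), y, lip, eps=0.1), y)
    loss.backward()

Because the margin term is differentiable through the spectral norms, training under this loss simultaneously enlarges the classification margin and shrinks the Lipschitz bound; over-tightening this regularization is the effect the abstract refers to as over-regularization.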
