Lagrangian Objective Function Leads to Improved Unforeseen Attack Generalization in Adversarial Training
Recent improvements in deep learning models and their practical applications have raised concerns about the robustness of these models against adversarial examples. Adversarial training (AT) has been shown to be effective at producing models that are robust to the attack used during training. However, such models usually fail against other attacks, i.e., the model overfits to the training attack scheme. In this paper, we propose a simple modification to AT that mitigates this issue. More specifically, we minimize the ℓ_p norm of the perturbation while maximizing the classification loss, combining the two objectives in Lagrangian form. We argue that crafting adversarial examples based on this scheme results in improved generalization to unseen attacks in the learned model. We compare the robust accuracy of our final model against attacks that were not used during training with that of closely related state-of-the-art AT methods. This comparison demonstrates that our average robust accuracy against unseen attacks is 5.9% and 3.2% higher than that of the state-of-the-art methods on the two evaluated datasets. We also demonstrate that our attack is faster than other attack schemes designed for generalization to unseen attacks, and conclude that it is feasible for large-scale datasets.
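As a rough illustration of a Lagrangian-form attack of this kind, the sketch below performs gradient ascent on the classification loss minus a weighted perturbation norm. This is a minimal sketch under stated assumptions, not the authors' implementation: the function name lagrangian_attack and the parameters lam, p, step_size, and n_steps are placeholders, and inputs are assumed to lie in [0, 1].

import torch
import torch.nn.functional as F

def lagrangian_attack(model, x, y, lam=1.0, p=2, step_size=0.01, n_steps=40):
    # Hypothetical sketch: maximize cross-entropy loss while penalizing the
    # ℓ_p norm of the perturbation, combined in Lagrangian form with weight lam.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        logits = model(x + delta)
        # Lagrangian objective: classification loss minus lam * ||delta||_p.
        objective = F.cross_entropy(logits, y) - lam * delta.flatten(1).norm(p=p, dim=1).mean()
        grad, = torch.autograd.grad(objective, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()            # simple sign-gradient ascent step
            delta.copy_((x + delta).clamp(0, 1) - x)    # keep x + delta in [0, 1] (assumed input range)
    return (x + delta).detach()

In an AT loop, such adversarial examples would replace (or augment) the clean inputs at each training step; the weight lam controls the trade-off between loss maximization and perturbation size.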