Adaptive versus Standard Descent Methods and Robustness Against Adversarial Examples

11/09/2019
by Marc Khoury, et al.

Adversarial examples are a pervasive phenomenon of machine learning models: seemingly imperceptible perturbations to the input lead to misclassifications by otherwise statistically accurate models. In this paper we study how the choice of optimization algorithm influences the robustness of the resulting classifier to adversarial examples. Specifically, we exhibit a learning problem for which the solution found by adaptive optimization algorithms has qualitatively worse robustness properties against both L_2- and L_∞-adversaries than the solution found by non-adaptive algorithms. We then fully characterize the geometry of the loss landscape of L_2-adversarial training in least-squares linear regression. This geometry is subtle and has important consequences for optimization algorithms. Finally, we provide experimental evidence suggesting that non-adaptive methods consistently produce more robust models than adaptive methods.
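
For context, the inner maximization of L_2-adversarial training has a closed form in this linear setting. The display below is a standard derivation under the usual assumptions (linear predictor w, perturbation budget ε), shown for illustration rather than as the paper's exact formulation:

    \min_{w} \frac{1}{n} \sum_{i=1}^{n} \max_{\|\delta_i\|_2 \le \varepsilon} \left( w^\top (x_i + \delta_i) - y_i \right)^2
        \;=\; \min_{w} \frac{1}{n} \sum_{i=1}^{n} \left( \left| w^\top x_i - y_i \right| + \varepsilon \|w\|_2 \right)^2,

since the worst-case perturbation aligns δ_i with ±w and adds exactly ε‖w‖_2 to the magnitude of each residual. The resulting ‖w‖_2 term couples all coordinates of w, which is one way the adversarial loss landscape differs from that of ordinary least squares and why the choice of optimizer can matter.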
