Expressive Losses for Verified Robustness via Convex Combinations

05/23/2023
by Alessandro De Palma, et al.

In order to train networks for verified adversarial robustness, previous work typically over-approximates the worst-case loss over (subsets of) perturbation regions or induces verifiability on top of adversarial training. The key to state-of-the-art performance lies in the expressivity of the employed loss function, which should be able to match the tightness of the verifier to be employed post-training. We formalize a definition of expressivity and show that it can be satisfied via simple convex combinations between adversarial attacks and IBP bounds. We then show that the resulting algorithms, named CC-IBP and MTL-IBP, yield state-of-the-art results across a variety of settings in spite of their conceptual simplicity. In particular, for ℓ_∞ perturbations of radius 1/255 on TinyImageNet and downscaled ImageNet, MTL-IBP improves on the best standard and verified accuracies from the literature by 1.98 to 3.92 percentage points while relying only on single-step adversarial attacks.
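For intuition, here is a minimal PyTorch sketch of the two convex-combination losses described above, assuming a plain feed-forward Linear/ReLU network, a single-step (FGSM) attack, and that CC-IBP combines the two sets of logits inside the loss while MTL-IBP combines the two losses themselves. The helper names and exact details are illustrative, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ibp_worst_case_logits(model, x, eps, y):
    # Interval bound propagation (IBP) through an nn.Sequential of
    # Linear/ReLU layers: push the box [x - eps, x + eps] through the
    # network, then return worst-case logits (lower bound on the true
    # class, upper bound on every other class).
    lb, ub = x - eps, x + eps
    for layer in model:
        if isinstance(layer, nn.Linear):
            mid = F.linear((lb + ub) / 2, layer.weight, layer.bias)
            rad = F.linear((ub - lb) / 2, layer.weight.abs())
            lb, ub = mid - rad, mid + rad
        elif isinstance(layer, nn.ReLU):
            lb, ub = lb.clamp(min=0), ub.clamp(min=0)
    true_class = F.one_hot(y, ub.shape[-1]).bool()
    return torch.where(true_class, lb, ub)

def fgsm_logits(model, x, eps, y):
    # Single-step adversarial attack (the abstract notes that single-step
    # attacks suffice); the perturbation stays inside the eps-ball.
    x_adv = x.detach().clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
    return model((x + eps * grad.sign()).detach())

def cc_ibp_loss(model, x, y, eps, alpha):
    # CC-IBP: convex combination *inside* the loss, between adversarial
    # logits and IBP worst-case logits.
    z = (1 - alpha) * fgsm_logits(model, x, eps, y) \
        + alpha * ibp_worst_case_logits(model, x, eps, y)
    return F.cross_entropy(z, y)

def mtl_ibp_loss(model, x, y, eps, alpha):
    # MTL-IBP: convex combination *of* the two losses (multi-task view).
    l_adv = F.cross_entropy(fgsm_logits(model, x, eps, y), y)
    l_ver = F.cross_entropy(ibp_worst_case_logits(model, x, eps, y), y)
    return (1 - alpha) * l_adv + alpha * l_ver
```

In this sketch, α = 0 recovers standard adversarial training and α = 1 plain IBP training, with intermediate values interpolating between the two, which is how a single scalar can trade attack-based tightness against IBP-style over-approximation.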
