Robust Learning against Logical Adversaries

07/01/2020
by Yizhen Wang, et al.

Test-time adversarial attacks have posed serious challenges to the robustness of machine learning models, and in many settings the adversarial manipulation need not be bounded by small ℓ_p-norms. Motivated by semantic-preserving attacks in the security domain, we investigate logical adversaries, a broad class of attackers who create adversarial examples within the reflexive-transitive closure of a logical relation. We analyze the conditions for robustness and propose normalize-and-predict, a learning framework with provable robustness guarantees. We compare our approach with adversarial training and derive a unified framework that provides the benefits of both approaches. Driven by the theoretical findings, we apply our framework to malware detection: we use it to learn new detectors and propose two generic logical attacks to validate model robustness. Experimental results on a real-world dataset show that attacks using logical relations can evade existing detectors, and that our unified framework significantly enhances model robustness.
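To illustrate the normalize-and-predict idea at a high level, the sketch below wraps an arbitrary classifier with a normalization step that is assumed to map every input in the same reflexive-transitive closure of the logical relation to one canonical representative. The names `normalize_and_predict`, `toy_normalize`, and `toy_classifier` are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of normalize-and-predict, under the assumption that
# `normalize` maps all inputs related by the logical relation (and its
# reflexive-transitive closure) to the same canonical representative.

from typing import Any, Callable


def normalize_and_predict(
    x: Any,
    normalize: Callable[[Any], Any],
    classifier: Callable[[Any], int],
) -> int:
    """Predict on the canonical form of x rather than on x itself.

    If an attacker replaces x with any x' in the same equivalence class,
    normalize(x') == normalize(x), so the prediction is unchanged.
    """
    canonical_x = normalize(x)  # collapse the equivalence class to one point
    return classifier(canonical_x)


if __name__ == "__main__":
    # Toy relation (purely illustrative): strings that differ only in
    # letter case are treated as logically equivalent.
    toy_normalize = str.lower
    toy_classifier = lambda s: int("malicious" in s)

    assert normalize_and_predict("MALICIOUS payload", toy_normalize, toy_classifier) == \
           normalize_and_predict("malicious PAYLOAD", toy_normalize, toy_classifier)
```

Because every equivalent input collapses to the same canonical point before classification, robustness against the logical adversary reduces to correctness of the classifier on canonical forms.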
