ATRAS: Adversarially Trained Robust Architecture Search

06/13/2021
by Yigit Alparslan, et al.

In this paper, we explore the effect of architecture completeness on adversarial robustness. We train models with different architectures on the CIFAR-10 and MNIST datasets, varying the number of layers and the number of nodes per layer for each candidate. For every architecture candidate, we use the Fast Gradient Sign Method (FGSM) to generate untargeted adversarial attacks and apply adversarial training to defend against them. For each candidate, we report the pre-attack, post-attack, and post-defense accuracy along with the architecture parameters, and we analyze the impact of architecture completeness on model accuracy.
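To make the attack and defense loop described above concrete, here is a minimal PyTorch sketch of untargeted FGSM and an adversarial-training step. It is not the authors' code; the function names, the epsilon value, and the 50/50 clean/adversarial loss mix are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """Untargeted FGSM: perturb x along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to a valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """One training step on an equal mix of clean and FGSM-perturbed examples
    (the mixing ratio here is an assumption, not taken from the paper)."""
    model.train()
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Evaluating a model before the attack, on FGSM-perturbed inputs, and again after adversarial training would yield the pre-attack, post-attack, and post-defense accuracies reported for each architecture candidate.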
