Adversarial Token Attacks on Vision Transformers

10/08/2021
by Ameya Joshi, et al.

Vision transformers rely on a patch-token-based self-attention mechanism, in contrast to convolutional networks. We investigate fundamental differences between these two families of models by designing a block-sparsity-based adversarial token attack, in which perturbations are confined to a small set of patch tokens. We probe and analyze transformer as well as convolutional models with token attacks of varying patch sizes. We find that transformer models are more sensitive to token attacks than convolutional models, with ResNets outperforming transformer models by up to ∼30% in robust accuracy under single-token attacks.
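The abstract does not spell out the attack's exact formulation. As a rough sketch of the idea, a single-token attack can be written as a PGD-style ℓ∞ perturbation confined to one patch through a block-sparse mask, so that only the pixels belonging to a single token are modified. The function name `token_attack`, the 16-pixel patch size, and the budget/step parameters below are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def token_attack(model, x, y, patch=16, row=0, col=0,
                 eps=0.3, alpha=0.01, steps=40):
    """PGD restricted to a single patch token via a block-sparse mask.

    x: input images (N, C, H, W) in [0, 1]; y: true labels.
    (row, col) selects which patch token is attacked.
    """
    # Block-sparse mask: 1 inside the chosen patch, 0 elsewhere.
    mask = torch.zeros_like(x)
    mask[..., row * patch:(row + 1) * patch,
              col * patch:(col + 1) * patch] = 1.0

    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Loss is computed only through the masked perturbation,
        # so gradients outside the patch are identically zero.
        loss = F.cross_entropy(model(x + delta * mask), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # signed gradient ascent step
            delta.clamp_(-eps, eps)             # l_inf budget inside the token
            delta.grad.zero_()
    return (x + delta.detach() * mask).clamp(0.0, 1.0)
```

Measuring robust accuracy then amounts to running this attack over a test set and counting how often the model's prediction survives; sweeping `patch` over different sizes would mirror the paper's comparison of token attacks at varying granularities.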
