Exploring and Improving Robustness of Multi Task Deep Neural Networks via Domain Agnostic Defenses
In this paper, we explore the robustness of the Multi-Task Deep Neural Network (MT-DNN) against non-targeted adversarial attacks across Natural Language Understanding (NLU) tasks, as well as some possible ways to defend against them. Liu et al. have shown that the MT-DNN, due to the regularization effect produced by training on cross-task data, is more robust than a vanilla BERT model trained on only one task (a 1.1% absolute difference). We further show that although the MT-DNN has generalized better, making it easily transferable across domains and tasks, it can still be compromised: after only two attacks (1-character and 2-character), its accuracy drops by 42.05% on the evaluated tasks. Finally, we propose a domain-agnostic defense which restores the model's accuracy (by 36.75%), as opposed to a general-purpose defense or an off-the-shelf spell checker.
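To make the attack setting concrete, here is a minimal sketch of a 1-character adversarial perturbation of the kind the abstract alludes to. This is an illustrative example, not the paper's attack: the function `one_char_attack` and its swap heuristic (transposing two adjacent interior characters of a longer word, which tends to fool a tokenizer while staying readable to humans) are assumptions for demonstration only.

```python
import random

def one_char_attack(sentence: str, rng: random.Random) -> str:
    """Illustrative 1-character perturbation (not the paper's method):
    swap two adjacent interior characters of one word with >= 4 letters,
    keeping the first and last characters fixed so humans still read it."""
    words = sentence.split()
    candidates = [i for i, w in enumerate(words) if len(w) >= 4]
    if not candidates:
        return sentence  # nothing long enough to perturb
    i = rng.choice(candidates)
    w = words[i]
    j = rng.randrange(1, len(w) - 2)  # interior position, ends untouched
    words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

# Example: perturb an NLI-style input sentence.
rng = random.Random(0)
print(one_char_attack("the premise entails the hypothesis", rng))
```

A 2-character attack would simply apply such a perturbation twice; a spell-checker defense attempts to invert it before the sentence reaches the model.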