Backdoor Attacks to Graph Neural Networks

06/19/2020
by Zaixi Zhang, et al.

Node classification and graph classification are two basic graph analytics tasks. Node classification aims to predict a label for each node in a graph, while graph classification aims to predict a label for the entire graph. Existing studies of graph neural networks (GNNs) in adversarial settings have mainly focused on node classification, leaving GNN-based graph classification largely unexplored. We aim to bridge this gap in this work. Specifically, we propose a subgraph-based backdoor attack on GNN-based graph classification. In our backdoor attack, a GNN classifier predicts an attacker-chosen target label for a testing graph once the attacker injects a predefined subgraph into the testing graph. Our empirical results on three real-world graph datasets show that our backdoor attacks are effective while having a small impact on a GNN's prediction accuracy for clean testing graphs. We generalize a state-of-the-art randomized-smoothing-based certified defense to defend against our backdoor attacks. Our empirical results show that the defense is ineffective in some cases, highlighting the need for new defenses against our backdoor attacks.
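
The core mechanism described above is trigger injection: a fixed subgraph pattern is wired into a chosen set of nodes of a training or testing graph, and poisoned training graphs are relabeled with the attacker-chosen target label. Below is a minimal sketch of that injection step using networkx; the Erdős–Rényi trigger, the function name inject_trigger, and its parameters are illustrative assumptions rather than the paper's exact construction.

```python
import random
import networkx as nx

def inject_trigger(graph, trigger_size=5, trigger_prob=0.8, seed=None):
    """Return a copy of `graph` with a random-subgraph trigger injected.

    A small, dense Erdos-Renyi subgraph serves as the trigger (an
    assumption for illustration). Its edge pattern is copied onto
    `trigger_size` randomly chosen nodes of the input graph, replacing
    whatever edges previously connected those nodes.
    """
    rng = random.Random(seed)
    poisoned = graph.copy()

    # Generate the trigger subgraph.
    trigger = nx.erdos_renyi_graph(trigger_size, trigger_prob, seed=seed)

    # Choose which nodes of the input graph will carry the trigger.
    victim_nodes = rng.sample(list(poisoned.nodes()), trigger_size)

    # Rewire the edges among the chosen nodes to match the trigger.
    for i in range(trigger_size):
        for j in range(i + 1, trigger_size):
            u, v = victim_nodes[i], victim_nodes[j]
            if poisoned.has_edge(u, v):
                poisoned.remove_edge(u, v)
            if trigger.has_edge(i, j):
                poisoned.add_edge(u, v)

    return poisoned

# Hypothetical usage: poison a fraction of training graphs and relabel
# them with the attacker-chosen target label before training the GNN.
# for g, label in training_set:
#     if random.random() < poison_fraction:
#         g, label = inject_trigger(g), target_label
```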
