BlindSage: Label Inference Attacks against Node-level Vertical Federated Graph Neural Networks

08/04/2023
by   Marco Arazzi, et al.

Federated learning enables collaborative training of machine learning models while keeping the raw data of the involved workers private. One of its main objectives is to improve the models' privacy, security, and scalability. Vertical Federated Learning (VFL) offers an efficient cross-silo setting in which a few parties collaboratively train a model without sharing the same features: the parties hold different feature sets of the same samples. In such a scenario, classification labels are commonly considered sensitive information held exclusively by one (active) party, while the other (passive) parties use only their local information. Recent works have uncovered important flaws in VFL, leading to possible label inference attacks under the assumption that the attacker has some, even limited, background knowledge on the relation between labels and data. In this work, we are the first (to the best of our knowledge) to investigate label inference attacks on VFL using a zero-background-knowledge strategy. To concretely formulate our proposal, we focus on Graph Neural Networks (GNNs) as the target model for the underlying VFL. In particular, we consider node classification tasks, which are widely studied and for which GNNs have shown promising results. Our proposed attack, BlindSage, provides impressive results in the experiments, achieving nearly 100% accuracy in most cases. Even when the attacker has no information about the architecture used or the number of classes, the accuracy remained above 85%. Finally, we observe that well-known defenses cannot mitigate our attack without affecting the model's performance on the main classification task.
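To make the node-level VFL setting concrete, the following is a minimal sketch, assuming two passive parties that hold disjoint vertical slices of the node features and an active party that holds the labels. The toy graph, the mean-neighbour aggregation, and all names are illustrative assumptions, not the paper's actual architecture or protocol.

```python
# Toy undirected graph: 4 nodes, adjacency list shared by all parties.
ADJ = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}

def local_embedding(features):
    """One GCN-like step on a party's private feature slice:
    each node averages its own and its neighbours' features."""
    emb = {}
    for node, feats in features.items():
        neigh = [features[n] for n in ADJ[node]] + [feats]
        emb[node] = [sum(col) / len(neigh) for col in zip(*neigh)]
    return emb

# Passive parties: vertically split features (same nodes, different columns).
party_a = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0], 3: [0.0, 0.0]}
party_b = {0: [0.5], 1: [0.2], 2: [0.9], 3: [0.1]}

# Each passive party computes embeddings locally; raw features never leave.
emb_a = local_embedding(party_a)
emb_b = local_embedding(party_b)

# Active party: fuses the received embeddings and privately holds the labels
# that a label inference attack such as BlindSage tries to recover.
labels = {0: 0, 1: 1, 2: 0, 3: 1}  # visible only to the active party
joint = {n: emb_a[n] + emb_b[n] for n in ADJ}
print(joint[0])  # fused embedding for node 0
```

In training, the active party would apply its classifier to the fused embeddings and send gradients back to the passive parties; it is this exchanged information that label inference attacks exploit.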
