Privacy-Preserved Neural Graph Similarity Learning

10/21/2022
by Yupeng Hou, et al.

To develop effective and efficient graph similarity learning (GSL) models, a series of data-driven neural algorithms have been proposed in recent years. Although GSL models are frequently deployed in privacy-sensitive scenarios, user privacy protection for neural GSL models has received little attention. To understand the privacy issues comprehensively, we first introduce the concept of attackable representation to systematically characterize the privacy attacks each model may face. Inspired by these qualitative results, we propose a novel Privacy-Preserving neural Graph Matching network, named PPGM, for graph similarity learning. To prevent reconstruction attacks, the proposed model never communicates node-level representations between devices; instead, it learns multi-perspective graph representations based on learnable context vectors. To alleviate attacks on graph properties, only obfuscated features that mix information from both graphs are communicated, making the private properties of each graph difficult to infer. Because node-graph matching techniques are applied when computing the obfuscated features, PPGM also remains effective for similarity measurement. To quantitatively evaluate the privacy-preserving ability of neural GSL models, we further propose an evaluation protocol based on training supervised black-box attack models. Extensive experiments on widely used benchmarks demonstrate both the effectiveness and the strong privacy protection of the proposed PPGM. The code is available at: https://github.com/RUCAIBox/PPGM.
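To make the communication pattern concrete, below is a minimal PyTorch-style sketch of the two ideas the abstract describes: pooling node embeddings into a few graph-level vectors via learnable context vectors, and producing obfuscated features by matching local nodes against the other graph's pooled representation. Everything here (the class names MultiPerspectivePooling and ObfuscatedMatching, the embedding dimension, the number of perspectives, and the cosine-similarity scoring head) is an illustrative assumption rather than the paper's exact architecture; the authors' actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiPerspectivePooling(nn.Module):
    """Pool node embeddings into K graph-level vectors using K learnable
    context vectors as attention queries (hypothetical names/shapes).
    Only these K pooled vectors would ever be communicated between
    devices -- never the node-level embeddings themselves."""
    def __init__(self, dim: int, num_perspectives: int = 4):
        super().__init__()
        self.contexts = nn.Parameter(torch.randn(num_perspectives, dim))

    def forward(self, node_emb: torch.Tensor) -> torch.Tensor:
        # node_emb: (num_nodes, dim) -> perspectives: (K, dim)
        attn = torch.softmax(self.contexts @ node_emb.T, dim=-1)  # (K, N)
        return attn @ node_emb

class ObfuscatedMatching(nn.Module):
    """Match local node embeddings against the OTHER graph's pooled
    perspectives (node-graph matching), then pool into a single feature.
    The result mixes information from both graphs, so properties of
    either graph alone are harder to infer from it."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, node_emb, other_perspectives):
        # Cross-attention: local nodes (N, d) attend to the other
        # graph's pooled perspectives (K, d).
        scores = torch.softmax(node_emb @ other_perspectives.T, dim=-1)
        matched = scores @ other_perspectives               # (N, d)
        fused = self.proj(torch.cat([node_emb, matched], dim=-1))
        return fused.mean(dim=0)                            # (d,)

# Toy usage: h1, h2 stand in for GNN node embeddings on two devices.
pool, match = MultiPerspectivePooling(64), ObfuscatedMatching(64)
h1, h2 = torch.randn(10, 64), torch.randn(12, 64)
p1, p2 = pool(h1), pool(h2)            # exchanged: K pooled vectors only
f1, f2 = match(h1, p2), match(h2, p1)  # exchanged: obfuscated features
score = F.cosine_similarity(f1, f2, dim=0)  # similarity prediction
```

The design point mirrored here is that every exchanged tensor is either heavily pooled (K vectors per graph) or already entangled with the other graph, which is what frustrates node-level reconstruction attacks and single-graph property inference, respectively.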
