Multi-fidelity Stability for Graph Representation Learning

11/25/2021
by Yihan He, et al.

In the problem of structured prediction with graph representation learning (GRL for short), the hypothesis returned by the algorithm maps the set of features in the receptive field of the targeted vertex to its label. To understand the learnability of such algorithms, we introduce a weaker form of uniform stability termed multi-fidelity stability and give learning guarantees for weakly dependent graphs. We verify that <cit.>'s claim on the generalization of a single sample holds for GRL when the receptive field is sparse. In addition, we study the stability-induced bound for two popular algorithms: (1) stochastic gradient descent under convex and non-convex landscapes, where we provide non-asymptotic bounds that depend heavily on the sparsity of the receptive field constructed by the algorithm; and (2) the constrained regression problem on a one-layer linear equivariant GNN, where we present lower bounds on the discrepancy between the two types of stability, which justifies the multi-fidelity design.
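The uniform stability the abstract weakens is, informally, the worst-case change in the learned predictor when a single training sample is replaced. The following is a minimal sketch of how that quantity can be probed empirically for SGD on a convex (squared) loss; the linear model, learning rate, and the one-sample swap are illustrative assumptions, not the paper's construction, and the prediction-gap proxy stands in for the formal stability bound.

```python
# Hedged sketch: empirically probing uniform stability of SGD.
# We train on two datasets that differ in exactly one sample and
# measure how far apart the resulting predictors are on test points.
import numpy as np

rng = np.random.default_rng(0)

def sgd_linear(X, y, lr=0.01, epochs=50):
    """Plain SGD on squared loss for a linear model; fixed sample order."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in range(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5*(x·w - y)^2
            w -= lr * grad
    return w

n, d = 100, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Neighboring dataset: replace a single training example.
X2, y2 = X.copy(), y.copy()
X2[0] = rng.normal(size=d)
y2[0] = rng.normal()

w1 = sgd_linear(X, y)
w2 = sgd_linear(X2, y2)

# Empirical stability proxy: worst prediction gap over fresh test points.
X_test = rng.normal(size=(1000, d))
gap = float(np.max(np.abs(X_test @ w1 - X_test @ w2)))
print(f"max prediction gap after one-sample swap: {gap:.4f}")
```

Under convexity and a small enough step size, classical stability arguments predict this gap shrinks as the training set grows; a GRL analogue would additionally have the gap depend on how many receptive fields the swapped vertex participates in, which is where the sparsity dependence in the abstract's bounds enters.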
