Stability and Generalization of ℓ_p-Regularized Stochastic Learning for GCN
Graph convolutional networks (GCNs) are among the most popular variants of graph neural networks for learning over graph data and have shown strong performance in empirical studies. ℓ_2-based graph smoothing enforces global smoothness in GCNs, while (soft) ℓ_1-based sparse graph learning tends to promote signal sparsity at the cost of discontinuity. This paper quantifies the trade-off between smoothness and sparsity in GCNs via a general ℓ_p-regularized (1<p≤2) stochastic learning framework proposed herein. Prior work has provided stability-based generalization analyses for objective functions with second-order derivatives, but our ℓ_p-regularized learning scheme does not satisfy such a smoothness condition. To tackle this issue, we propose a novel proximal SGD algorithm for GCNs with an inexact proximal operator. For a single-layer GCN, we establish an explicit theoretical understanding of ℓ_p-regularized stochastic learning by analyzing the algorithmic stability of our proximal SGD algorithm. We conduct multiple empirical experiments to validate our theoretical findings.
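To make the scheme concrete, below is a minimal sketch (not the authors' implementation) of one proximal SGD update for a single-layer GCN with an ℓ_p penalty. All names (`prox_lp_inexact`, `prox_sgd_step`, `A_hat`, etc.) are hypothetical, and the Newton-based inner solver is one illustrative way to realize an inexact proximal operator; the paper's actual operator may differ.

```python
import numpy as np

def prox_lp_inexact(v, lam, p, n_iter=5, eps=1e-12):
    """Inexact elementwise proximal operator of w -> lam * |w|^p, 1 < p <= 2.

    For general p in (1, 2) the prox has no closed form, so we run a few
    Newton iterations on the scalar optimality condition
        w + lam * p * w^(p-1) = |v|,  w >= 0,
    then restore the sign of v. Truncating the iterations makes the
    operator inexact by construction (an assumption for this sketch).
    """
    u = np.abs(v)
    w = u.copy()  # the solution magnitude lies in [0, |v|]
    for _ in range(n_iter):
        g = w + lam * p * w ** (p - 1.0) - u                     # residual
        h = 1.0 + lam * p * (p - 1.0) * (w + eps) ** (p - 2.0)   # its derivative
        w = np.maximum(w - g / h, 0.0)                           # Newton step, keep w >= 0
    return np.sign(v) * w

def prox_sgd_step(W, A_hat, X, Y, lam, p, lr, batch):
    """One proximal SGD update on a single-layer GCN with softmax output.

    A_hat : normalized adjacency (n x n), X : node features (n x d),
    W : weights (d x c), Y : one-hot labels (n x c), batch : sampled node ids.
    """
    H = A_hat[batch] @ X                           # aggregated batch features
    logits = H @ W
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)      # row-wise softmax
    grad = H.T @ (probs - Y[batch]) / len(batch)   # cross-entropy gradient
    # Gradient step on the data-fit term, then inexact prox on the l_p penalty.
    return prox_lp_inexact(W - lr * grad, lr * lam, p)
```

Under this sketch, p=2 recovers a smooth ridge-type update and p→1 approaches soft-thresholding, so the single parameter p interpolates between the global-smoothness and sparsity regimes discussed above.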