Discrete and Continuous Deep Residual Learning Over Graphs

11/21/2019
by Pedro H. C. Avelar, et al.

In this paper we propose the use of continuous residual modules for graph kernels in Graph Neural Networks. We show how both discrete and continuous residual layers allow for more robust training, where continuous residual layers are those whose output is produced by integrating the layer's dynamics through an Ordinary Differential Equation (ODE) solver. We show experimentally that these residual modules achieve better results than their non-residual counterparts when many layers are stacked, mitigating the low-pass filtering effect of GCN-based models. Finally, we analyse the behaviour of these techniques and give pointers to how they may be useful in other domains by allowing more predictable behaviour under dynamic computation times.
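To make the discrete/continuous distinction concrete, the following is a minimal sketch (not the authors' code) of the two kinds of residual graph layer the abstract describes. It assumes PyTorch, torch_geometric's GCNConv as the graph kernel, and torchdiffeq's odeint as the black-box ODE solver; the ReLU dynamics, the hidden dimension `dim`, and the fixed integration interval [0, 1] are illustrative choices, not details from the paper.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv  # graph convolution kernel (assumed choice)
from torchdiffeq import odeint          # black-box ODE solver

class ResidualGCNLayer(nn.Module):
    """Discrete residual layer: x_{t+1} = x_t + f(x_t), as in ResNets."""
    def __init__(self, dim):
        super().__init__()
        self.conv = GCNConv(dim, dim)

    def forward(self, x, edge_index):
        return x + torch.relu(self.conv(x, edge_index))

class ODEFunc(nn.Module):
    """Defines the dynamics dx/dt = f(x, t); here f is a graph convolution."""
    def __init__(self, dim, edge_index):
        super().__init__()
        self.conv = GCNConv(dim, dim)
        self.edge_index = edge_index

    def forward(self, t, x):  # odeint calls func(t, state)
        return torch.relu(self.conv(x, self.edge_index))

class ContinuousResidualGCN(nn.Module):
    """Continuous residual layer: x(1) = x(0) + integral of f(x(t)) dt over [0, 1],
    computed by the ODE solver rather than a single explicit residual step."""
    def __init__(self, dim, edge_index):
        super().__init__()
        self.func = ODEFunc(dim, edge_index)

    def forward(self, x):
        t = torch.tensor([0.0, 1.0])
        return odeint(self.func, x, t)[-1]  # node features at t = 1
```

Under this framing, the discrete layer is one explicit Euler step of the same dynamics, while the continuous layer lets the solver choose its own step sizes, which is what makes behaviour more predictable when the available computation time varies.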
