Edge Contraction Pooling for Graph Neural Networks
Graph Neural Network (GNN) research has concentrated on improving convolutional layers, with little attention paid to developing graph pooling layers. Yet pooling layers can enable GNNs to reason over abstracted groups of nodes instead of single nodes. To close this gap, we propose a graph pooling layer relying on the notion of edge contraction: EdgePool learns a localized and sparse hard pooling transform. We show that EdgePool outperforms alternative pooling methods, can be easily integrated into most GNN models, and improves performance on both node and graph classification.
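To make the idea of a localized, sparse hard pooling transform concrete, below is a minimal sketch of edge-contraction pooling in plain PyTorch. The class and method names (`EdgeContractionPool`, the linear scoring layer, the greedy contraction loop) are illustrative assumptions and simplify the paper's method: edges are scored from endpoint features, non-overlapping high-scoring edges are contracted greedily, and each merged pair becomes one node in the coarsened graph.

```python
# Illustrative sketch of edge-contraction pooling; not the authors' exact
# implementation (e.g. score normalization and score gating are simplified).
import torch
import torch.nn as nn


class EdgeContractionPool(nn.Module):
    """Score every edge, then greedily contract the highest-scoring edges
    whose endpoints have not been merged yet."""

    def __init__(self, in_channels):
        super().__init__()
        # Score an edge from the concatenated features of its two endpoints.
        self.score = nn.Linear(2 * in_channels, 1)

    def forward(self, x, edge_index):
        # x: [num_nodes, in_channels]; edge_index: [2, num_edges]
        src, dst = edge_index
        raw = self.score(torch.cat([x[src], x[dst]], dim=-1)).squeeze(-1)
        score = torch.sigmoid(raw)  # simplified; the paper normalizes scores

        # Greedily pick non-overlapping edges in descending score order.
        order = torch.argsort(score, descending=True)
        merged = torch.zeros(x.size(0), dtype=torch.bool)
        cluster = torch.arange(x.size(0))  # node -> cluster id
        for e in order.tolist():
            u, v = src[e].item(), dst[e].item()
            if not merged[u] and not merged[v]:
                merged[u] = merged[v] = True
                cluster[v] = cluster[u]  # contract edge (u, v)

        # Relabel clusters consecutively and pool node features by summation
        # (the paper additionally gates merged features by the edge score).
        new_ids = torch.unique(cluster, return_inverse=True)[1]
        num_clusters = int(new_ids.max()) + 1
        pooled = torch.zeros(num_clusters, x.size(1)).index_add_(0, new_ids, x)

        # Remap edges to cluster ids; drop self-loops created by contraction.
        new_edge_index = new_ids[edge_index]
        keep = new_edge_index[0] != new_edge_index[1]
        return pooled, new_edge_index[:, keep]
```

Since most edges can be contracted, each application of such a layer roughly halves the number of nodes, so stacking it between convolution blocks yields progressively coarser graphs over which later layers can reason.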