Improving the Robustness of Graphs through Reinforcement Learning and Graph Neural Networks

Graphs can be used to represent and reason about real-world systems, and a variety of metrics have been devised to quantify their global characteristics. In general, however, prior work focuses on measuring the properties of existing graphs rather than on dynamically modifying them (for example, by adding edges) in order to improve the value of an objective function. In this paper, we present RNet-DQN, a solution for improving graph robustness that is based on Graph Neural Network architectures and Deep Reinforcement Learning. Robustness is a property relevant to infrastructure and communication networks; we capture it using two objective functions and use the changes in their values as the reward signal. Our experiments show that our approach can learn edge-addition policies that improve robustness significantly better than random additions and, in some cases, exceed the performance of a greedy baseline. Crucially, the learned policies generalize to different graphs, including graphs larger than those on which they were trained. This is important because the naive greedy solution can be prohibitively expensive to compute for large graphs; our approach offers an O(|V|^3) speed-up with respect to it.
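The reward formulation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the robustness proxy used here (mean fraction of nodes in the largest connected component after randomly removing half the nodes) and all function names are assumptions chosen for clarity, not the paper's exact objective functions.

```python
import random

def largest_cc_size(adj, kept):
    """Size of the largest connected component induced by `kept` nodes
    in the undirected graph `adj` (a dict mapping node -> set of neighbors)."""
    seen, best = set(), 0
    for s in kept:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v in kept and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

def robustness(adj, trials=50, seed=0):
    """Robustness proxy (an assumption, not the paper's objective):
    mean fraction of nodes in the largest connected component after
    randomly removing half of the nodes, averaged over `trials` runs."""
    rng = random.Random(seed)
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for _ in range(trials):
        kept = set(rng.sample(nodes, n - n // 2))
        total += largest_cc_size(adj, kept) / n
    return total / trials

def reward_for_edge(adj, u, v, **kw):
    """Reward signal: the change in the objective from adding edge (u, v)."""
    before = robustness(adj, **kw)
    adj2 = {k: set(vs) for k, vs in adj.items()}
    adj2[u].add(v)
    adj2[v].add(u)
    return robustness(adj2, **kw) - before
```

A greedy baseline would evaluate `reward_for_edge` for every candidate non-edge at each step, which is what makes it expensive on large graphs; a learned policy amortizes this cost by selecting edges directly.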
