Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence

01/30/2018
by Arslan Chaudhry, et al.

We study the incremental learning problem for the classification task, a key component in developing life-long learning systems. The main challenges while learning in an incremental manner are to preserve and update the knowledge of the model. In this work, we propose a generalization of Path Integral (Zenke et al., 2017) and EWC (Kirkpatrick et al., 2016) with a theoretically grounded KL-divergence based perspective. We show that, to preserve and update the knowledge, regularizing the model's likelihood distribution is more intuitive and provides better insights into the problem. To do so, we use KL-divergence as a measure of distance, which is equivalent to computing distance in a Riemannian manifold induced by the Fisher information matrix. Furthermore, to enhance the learning flexibility, the regularization is weighted by a parameter importance score that is accumulated along the entire training trajectory. In contrast to forgetting, as training progresses the regularized loss makes the network intransigent, i.e., unable to discriminate new tasks from the old ones. We show that this problem of intransigence can be addressed by storing a small subset of representative samples from previous datasets. In addition, to evaluate the performance of an incremental learning algorithm, we introduce two novel metrics that measure forgetting and intransigence. Experimental evaluation on incremental versions of the MNIST and CIFAR-100 classification datasets shows that our approach outperforms existing state-of-the-art baselines on all evaluation metrics.
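To make the abstract's regularization idea concrete, below is a minimal PyTorch-style sketch of an importance-weighted quadratic penalty of the kind described: a per-parameter importance combining a diagonal Fisher estimate (the KL/Riemannian view) with a score accumulated along the optimization path (the Path Integral view). The names `fisher`, `path_score`, and `theta_star`, and the exact update rule, are assumptions for illustration, not the paper's definitive implementation.

```python
import torch

def regularized_loss(model, task_loss, theta_star, fisher, path_score, lam=1.0):
    """Current-task loss plus an importance-weighted quadratic penalty
    that discourages moving parameters important for previous tasks.
    theta_star, fisher, path_score: dicts keyed by parameter name."""
    penalty = 0.0
    for name, p in model.named_parameters():
        # Combine the Fisher term (KL / Riemannian view) with the
        # path-integral score (loss change along the training trajectory).
        importance = fisher[name] + path_score[name]
        penalty = penalty + (importance * (p - theta_star[name]) ** 2).sum()
    return task_loss + lam * penalty

@torch.no_grad()
def accumulate_path_score(model, grads, deltas, fisher, path_score, eps=1e-3):
    """After each optimizer step, credit every parameter with the loss
    decrease (-g * delta) per unit of approximate KL distance moved,
    0.5 * F * delta^2 (a diagonal second-order approximation)."""
    for name, _ in model.named_parameters():
        num = -grads[name] * deltas[name]
        den = 0.5 * fisher[name] * deltas[name] ** 2 + eps
        path_score[name] += num / den
```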
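The two evaluation metrics can likewise be sketched from their informal description: forgetting as the drop from a task's best past accuracy to its current accuracy, and intransigence as the gap between the incremental model and a reference model assumed to have access to all data so far. The accuracy matrix `acc` (where `acc[k][j]` is accuracy on task j after training up to task k) and the reference accuracies `joint_acc` are hypothetical inputs; the paper's exact definitions may differ.

```python
def forgetting(acc, k):
    """Average forgetting after task k (k >= 1): for each earlier task j,
    the drop from its best accuracy at any point after it was learned
    to its accuracy under the current model."""
    drops = [max(acc[l][j] for l in range(j, k)) - acc[k][j] for j in range(k)]
    return sum(drops) / len(drops)

def intransigence(acc, joint_acc, k):
    """Gap between a jointly trained reference model and the incremental
    model on the k-th task: the inability to learn the new task."""
    return joint_acc[k] - acc[k][k]
```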
