Curriculum Learning by Transfer Learning: Theory and Experiments with Deep Networks

02/11/2018
by Daphna Weinshall, et al.

Our first contribution in this paper is a theoretical investigation of curriculum learning in the context of stochastic gradient descent when optimizing the least squares loss function. We prove that the rate of convergence of an ideal curriculum learning method is monotonically increasing with the difficulty of the examples, and that this increase in convergence rate is monotonically decreasing as training proceeds. Our second contribution is an analysis of curriculum learning in the context of training a CNN for image classification. Here a crucial problem is how to obtain a curriculum in the first place. We describe a method that infers the curriculum by way of transfer learning from another network, pre-trained on a different task. While this approach can only approximate the ideal curriculum, we empirically observe behavior similar to that predicted by the theory, namely, a significant boost in convergence speed at the beginning of training. When the task is made more difficult, we also observe improved generalization performance. Finally, curriculum learning exhibits robustness against unfavorable conditions such as strong regularization.
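To make the idea concrete, below is a minimal sketch of curriculum inference by transfer learning, under illustrative assumptions: a pre-trained "teacher" network scores each training example's difficulty as one minus its softmax confidence in the true label, the examples are sorted from easy to hard, and a simple linear pacing schedule exposes the student to progressively harder examples. The scoring rule, the pacing function, and all names here (`difficulty_scores`, `curriculum_order`, the toy data and models) are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch of curriculum learning via a pre-trained teacher network (PyTorch).
# Assumptions: difficulty = 1 - teacher's softmax probability of the true class;
# pacing = linear growth of the "active" pool from the easiest 25% to the full set.
import torch
import torch.nn as nn
import torch.nn.functional as F

def difficulty_scores(teacher, inputs, labels):
    """Score each example by 1 - teacher's confidence in the true label.
    Higher score = harder example."""
    teacher.eval()
    with torch.no_grad():
        probs = F.softmax(teacher(inputs), dim=1)
        true_prob = probs[torch.arange(len(labels)), labels]
    return 1.0 - true_prob

def curriculum_order(teacher, inputs, labels):
    """Return example indices sorted from easiest to hardest."""
    return torch.argsort(difficulty_scores(teacher, inputs, labels))

# --- toy usage on synthetic data ---
torch.manual_seed(0)
X = torch.randn(256, 20)           # 256 examples, 20 features
y = torch.randint(0, 4, (256,))    # 4 classes

teacher = nn.Linear(20, 4)         # stand-in for a network pre-trained elsewhere
student = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.SGD(student.parameters(), lr=0.1)

order = curriculum_order(teacher, X, y)

epochs, batch = 20, 32
for epoch in range(epochs):
    # Linearly grow the active pool from the easiest 25% to 100% of the data.
    frac = min(1.0, 0.25 + 0.75 * epoch / (epochs - 1))
    active = order[: max(batch, int(frac * len(order)))]
    perm = active[torch.randperm(len(active))]  # shuffle within the active pool
    for i in range(0, len(perm), batch):
        idx = perm[i : i + batch]
        loss = F.cross_entropy(student(X[idx]), y[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Sorting once up front fixes the curriculum before training begins; the pacing function, which controls how quickly harder examples enter the pool, is then a hyperparameter that interacts with the learning rate and schedule.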
