Overcoming the vanishing gradient problem in plain recurrent networks

01/18/2018
by Yuhuang Hu, et al.

Plain recurrent networks suffer greatly from the vanishing gradient problem, while Gated Neural Networks (GNNs) such as Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU) deliver promising results in many sequence learning tasks through sophisticated network designs. This paper shows how this problem can be addressed in a plain recurrent network by analyzing the gating mechanisms in GNNs. We propose a novel network called the Recurrent Identity Network (RIN), which allows a plain recurrent network to overcome the vanishing gradient problem while training very deep models without the use of gates. We compare this model with IRNNs and LSTMs on multiple sequence modeling benchmarks. The RINs demonstrate competitive performance and converge faster in all tasks. Notably, small RIN models produce 12% to 67% higher accuracy on the Sequential and Permuted MNIST datasets and reach state-of-the-art performance on the bAbI question answering dataset.
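The abstract does not spell out the RIN update equations. As a rough illustration only, the sketch below assumes that the "identity" in Recurrent Identity Network refers to an additive identity (residual) path on the hidden state, so the recurrent transition behaves as if the weight matrix were U + I; the function and variable names (rin_step, W_in, W_rec) are placeholders and not the authors' code.

```python
# Hypothetical sketch of one RIN step, assuming an additive identity
# shortcut on the hidden state (not the paper's exact formulation).
import numpy as np

def rin_step(x_t, h_prev, W_in, W_rec, b, act=np.tanh):
    """One step of a plain recurrent cell with an identity shortcut.

    Equivalent to using (W_rec + I) as the recurrent weight matrix, so the
    backpropagated gradient through time carries an additive identity term,
    which is the intuition for mitigating vanishing gradients without gates.
    """
    return act(x_t @ W_in + h_prev @ W_rec + h_prev + b)

# Toy usage on a random sequence (dimensions are illustrative only).
rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 8, 16, 5
W_in = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_rec = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)
for t in range(seq_len):
    x_t = rng.normal(size=input_dim)
    h = rin_step(x_t, h, W_in, W_rec, b)
print(h.shape)  # (16,)
```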
