High Order Recurrent Neural Networks for Acoustic Modelling

02/22/2018
by   Chao Zhang, et al.

Vanishing long-term gradients are a major issue in training standard recurrent neural networks (RNNs), which can be alleviated by long short-term memory (LSTM) models with memory cells. However, the extra parameters associated with the memory cells mean an LSTM layer has four times as many parameters as an RNN with the same hidden vector size. This paper addresses the vanishing gradient problem using a high order RNN (HORNN), which has additional connections from multiple previous time steps. Speech recognition experiments using British English multi-genre broadcast (MGB3) data showed that the proposed HORNN architectures for rectified linear unit and sigmoid activation functions reduced word error rates (WERs) by 4.2% relative to the corresponding RNNs, and gave similar WERs to a (projected) LSTM while using only 20% of its recurrent layer parameters.
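The core idea of a HORNN, as described above, is that the hidden state at time t is computed from the states at several previous time steps rather than only t-1, giving the gradient shorter paths back through time. The following is a minimal illustrative sketch of one such step; the function and variable names are assumptions for illustration, not the paper's notation or code.

```python
import numpy as np

def hornn_step(x_t, prev_h, W, Us, b, act=np.tanh):
    """One step of a high order RNN (HORNN) sketch: the new hidden state
    depends on several previous hidden states, not just the last one.

    x_t:    current input vector
    prev_h: list of past hidden states [h_{t-1}, h_{t-2}, ...], newest first
    W:      input weight matrix
    Us:     list of recurrent weight matrices, one per history step
    b:      bias vector
    (Illustrative names; the paper's exact formulation may differ.)
    """
    a = W @ x_t + b
    for U_k, h_k in zip(Us, prev_h):
        a = a + U_k @ h_k          # extra connections from earlier steps
    return act(a)

# Toy usage: input dim 4, hidden dim 3, order 2 (links to t-1 and t-2).
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))
Us = [rng.standard_normal((3, 3)) for _ in range(2)]
b = np.zeros(3)
h_hist = [np.zeros(3), np.zeros(3)]
for t in range(5):
    x_t = rng.standard_normal(4)
    h_t = hornn_step(x_t, h_hist, W, Us, b)
    h_hist = [h_t] + h_hist[:-1]   # slide the fixed-length history window
```

Note the parameter count: an order-K HORNN layer has K recurrent matrices versus the four weight sets of an LSTM gate structure, which is the trade-off the abstract highlights.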

