Compressing LSTM Networks by Matrix Product Operators

12/22/2020
by   Ze-Feng Gao, et al.

Long Short-Term Memory (LSTM) models are the building blocks of many state-of-the-art algorithms for Natural Language Processing (NLP). However, an LSTM model contains a large number of parameters, which requires substantial memory and computational resources for both training and inference. Here we propose an alternative LSTM model that significantly reduces the number of parameters by representing the weight matrices with matrix product operators (MPO), which are used in physics to characterize the local correlations in quantum states. We further compare, on sequence classification and sequence prediction tasks, compressed models based on the MPO-LSTM with models compressed by pruning. The experimental results show that our proposed MPO-based method outperforms the pruning method.
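To make the compression idea concrete, the sketch below illustrates the general MPO (tensor-train style) factorization of a single weight matrix in plain NumPy. It is not the authors' implementation; the dimensions, bond size, and function names are illustrative assumptions chosen only to show how a chain of small local tensors can stand in for a dense weight matrix.

```python
# A minimal sketch (assumed setup, not the paper's code) of representing a
# dense weight matrix as a matrix product operator (MPO) and contracting it back.
import numpy as np

# Factor a 256 x 256 weight matrix into 4 local tensors ("cores").
in_dims  = [4, 4, 4, 4]   # 4*4*4*4 = 256 input features
out_dims = [4, 4, 4, 4]   # 4*4*4*4 = 256 output features
bond     = 8              # bond dimension: trades compression against accuracy

# Core k has shape (left_bond, in_dims[k], out_dims[k], right_bond);
# the boundary bonds are 1.
bonds = [1] + [bond] * (len(in_dims) - 1) + [1]
cores = [np.random.randn(bonds[k], in_dims[k], out_dims[k], bonds[k + 1])
         for k in range(len(in_dims))]

def mpo_to_matrix(cores):
    """Contract the MPO cores into a dense (prod(in_dims), prod(out_dims)) matrix."""
    result = cores[0]
    for core in cores[1:]:
        # Contract the shared bond index between neighboring cores.
        result = np.tensordot(result, core, axes=([-1], [0]))
    # Shape is now (1, i1, o1, i2, o2, ..., 1); drop boundary bonds and
    # reorder to (i1, ..., iN, o1, ..., oN) before reshaping.
    result = result.squeeze(axis=(0, -1))
    n = len(cores)
    perm = [2 * k for k in range(n)] + [2 * k + 1 for k in range(n)]
    result = result.transpose(perm)
    return result.reshape(int(np.prod(in_dims)), int(np.prod(out_dims)))

W = mpo_to_matrix(cores)
dense_params = W.size                      # 65536 parameters in the dense matrix
mpo_params = sum(c.size for c in cores)    # 2304 parameters in the MPO cores
print(W.shape, dense_params, mpo_params)
```

In an MPO-compressed LSTM, the cores (rather than the dense matrix) are the trainable parameters, so the storage and gradient computation scale with the core sizes; the bond dimension controls how much expressiveness is retained.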
