Speech-XLNet: Unsupervised Acoustic Model Pretraining For Self-Attention Networks

10/23/2019
by   Xingchen Song, et al.

Self-attention networks (SANs) can benefit significantly from bi-directional representation learning through unsupervised pretraining paradigms such as BERT and XLNet. In this paper, we present an XLNet-like pretraining scheme, "Speech-XLNet", for unsupervised acoustic model pretraining to learn speech representations with a SAN. The pretrained SAN is fine-tuned under the hybrid SAN/HMM framework. We conjecture that by shuffling the speech frame orders, the permutation in Speech-XLNet serves as a strong regularizer that encourages the SAN to make inferences by focusing on global structures through its attention weights. In addition, Speech-XLNet allows the model to exploit bi-directional contexts for effective speech representation learning. Experiments on TIMIT and WSJ demonstrate that Speech-XLNet greatly improves SAN/HMM performance in terms of both convergence speed and recognition accuracy compared to a model trained from randomly initialized weights. Our best systems achieve a relative improvement of 11.9% on the TIMIT and WSJ tasks. In particular, the best system achieves a phone error rate (PER) of 13.3%, which, to the best of our knowledge, is the lowest PER obtained from a single system.
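The core idea of XLNet-style permutation pretraining can be illustrated with a small sketch. This is not the authors' code; it only shows, under simplified assumptions, how a random factorization order over speech frames induces an attention mask in which each frame may attend only to frames earlier in the sampled permutation, so that over many permutations the model effectively sees bi-directional context.

```python
import numpy as np

def permutation_attention_mask(num_frames, seed=None):
    """Illustrative sketch (hypothetical helper, not from the paper):
    sample a random factorization order over `num_frames` speech frames
    and build an XLNet-style attention mask.

    mask[i, j] is True when frame i may attend to frame j, i.e. when j
    occurs no later than i in the sampled permutation order.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(num_frames)        # random factorization order
    rank = np.empty(num_frames, dtype=int)
    rank[order] = np.arange(num_frames)        # rank[t] = position of frame t in the order
    # Frame i can attend to frame j iff rank[j] <= rank[i].
    mask = rank[None, :] <= rank[:, None]
    return order, mask

order, mask = permutation_attention_mask(5, seed=0)
# Row i of the mask has exactly rank[i] + 1 True entries: each frame
# attends to itself plus every frame earlier in the permutation.
```

Because a fresh permutation is drawn for each training step, any given frame is predicted from varying subsets of left and right context, which is the mechanism the abstract credits for both the regularization effect and the bi-directional representation learning.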
