A Comparison of Discrete Latent Variable Models for Speech Representation Learning

10/24/2020
by   Henry Zhou, et al.

Neural latent variable models enable the discovery of interesting structure in speech audio data. This paper compares two approaches that are broadly based on either predicting future time-steps or auto-encoding the input signal. Our study compares the representations learned by VQ-VAE and vq-wav2vec in terms of sub-word unit discovery and phoneme recognition performance. Results show that future time-step prediction with vq-wav2vec achieves better performance. The best system achieves an error rate of 13.22 on the ZeroSpeech 2019 ABX phoneme discrimination challenge.
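Both models discretize their learned representations with vector quantization: each continuous encoder frame is mapped to its nearest entry in a learned codebook, yielding discrete units. A minimal sketch of that lookup step (the codebook size, dimensionality, and variable names below are illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np

# Hypothetical setup: a learned codebook of 320 codes, 64-dim frames.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(320, 64))   # assumed codebook entries
frames = rng.normal(size=(100, 64))     # assumed encoder output frames

# Squared Euclidean distance from every frame to every codebook entry.
dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)

codes = dists.argmin(axis=1)            # one discrete unit index per frame
quantized = codebook[codes]             # quantized frame representations

print(codes.shape, quantized.shape)     # -> (100,) (100, 64)
```

The resulting discrete indices are what downstream evaluations such as ABX phoneme discrimination operate on.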
