Probing Acoustic Representations for Phonetic Properties

10/25/2020
by Danni Ma, et al.

Pre-trained acoustic representations such as wav2vec and DeCoAR have attained impressive word error rates (WER) on speech recognition benchmarks, particularly when labeled data is limited. But little is known about which phonetic properties these representations acquire, and how well they encode transferable features of speech. We compare features from two conventional and four pre-trained systems on a set of simple frame-level phonetic classification tasks, with classifiers trained on features from one version of the TIMIT dataset and tested on features from another. All contextualized representations offer some degree of transferability across domains, and models pre-trained on more audio data give better results; but overall, DeCoAR, the system with the simplest architecture, performs best. This type of benchmarking analysis can thus uncover the relative strengths of various proposed acoustic representations.
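The probing setup described above can be illustrated with a minimal sketch: a weak (linear) classifier is fit on frame-level features from one domain and scored on features from another, so that accuracy reflects what the representation encodes rather than what a powerful classifier can learn on top of it. The feature dimensionality, phone set size, and the random arrays standing in for extracted features below are hypothetical placeholders, not the paper's actual pipeline.

```python
# Minimal sketch of a cross-domain frame-level phone probe, assuming
# frame features and per-frame phone labels have already been extracted
# from two TIMIT variants (shapes and values here are synthetic stand-ins).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

N_PHONES = 39      # e.g. a folded TIMIT phone set (assumption)
FEAT_DIM = 768     # dimensionality of the probed representation (assumption)

rng = np.random.default_rng(0)

# Stand-ins for real features: rows are frames, columns are representation
# dimensions; labels are integer phone ids aligned to each frame.
train_feats = rng.normal(size=(5000, FEAT_DIM))    # features from TIMIT version A
train_labels = rng.integers(0, N_PHONES, size=5000)
test_feats = rng.normal(size=(1000, FEAT_DIM))     # features from TIMIT version B
test_labels = rng.integers(0, N_PHONES, size=1000)

# A linear probe keeps the classifier simple so the score reflects the
# information already linearly decodable from the representation.
probe = make_pipeline(
    StandardScaler(),
    LogisticRegression(max_iter=1000),
)
probe.fit(train_feats, train_labels)

# Cross-domain evaluation: train on one TIMIT variant, test on the other.
print("frame-level phone accuracy:", probe.score(test_feats, test_labels))
```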
