S-vectors: Speaker Embeddings based on Transformer's Encoder for Text-Independent Speaker Verification
X-vectors have become the standard for speaker embeddings in automatic speaker verification. X-vectors are obtained using a Time-Delay Neural Network (TDNN) with context over several frames. We have explored an architecture built on self-attention, which attends to all the frames of an utterance and hence better captures speaker-level characteristics. We have used the encoder of the Transformer, which is built on self-attention, as the base architecture and trained it on a speaker classification task. In this paper, we propose to derive speaker embeddings from the output of the trained Transformer encoder after appropriate statistics pooling to obtain utterance-level features. We name the speaker embeddings from this structure s-vectors. The proposed s-vectors outperform x-vectors with a relative improvement of 10% when trained on the Voxceleb-1-only and Voxceleb-1+2 datasets. We have also investigated the effect of deriving s-vectors from different layers of the model.
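As a rough illustration of the pipeline the abstract describes, the following PyTorch sketch feeds frame-level features through a Transformer encoder, pools the encoder outputs into utterance-level mean and standard-deviation statistics, and projects them to an embedding (the s-vector) trained with a speaker-classification head. The `SVectorModel` class name, all dimensions, and the layer counts are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class SVectorModel(nn.Module):
    """Minimal sketch of an s-vector-style pipeline: a Transformer encoder
    over frame-level features, statistics pooling to an utterance-level
    vector, and a speaker-classification head used only during training.
    All hyperparameters below are assumed, not taken from the paper."""

    def __init__(self, feat_dim=40, d_model=256, n_heads=4,
                 n_layers=6, emb_dim=512, n_speakers=1211):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)  # map features to model dim
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Statistics pooling concatenates mean and std, doubling the dim.
        self.embedding = nn.Linear(2 * d_model, emb_dim)   # s-vector layer
        self.classifier = nn.Linear(emb_dim, n_speakers)   # training-only head

    def forward(self, x):
        # x: (batch, frames, feat_dim) frame-level acoustic features
        h = self.encoder(self.proj(x))           # self-attend over all frames
        mean, std = h.mean(dim=1), h.std(dim=1)  # utterance-level statistics
        s_vector = self.embedding(torch.cat([mean, std], dim=1))
        return self.classifier(s_vector), s_vector

# Usage: train with cross-entropy on the logits, then keep s_vec
# as the speaker embedding for verification scoring.
model = SVectorModel()
logits, s_vec = model(torch.randn(8, 300, 40))  # 8 utterances, 300 frames
```

Because self-attention connects every frame to every other frame, the pooled statistics summarize the whole utterance rather than a fixed local context, which is the contrast the abstract draws against the TDNN used for x-vectors.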