Mask Proxy Loss for Text-Independent Speaker Recognition

11/09/2020
by Jiachen Lian, et al.

Open-set speaker recognition can be regarded as a metric learning problem, whose goal is to maximize inter-class variance and minimize intra-class variance. Supervised metric learning can be categorized into entity-based learning and proxy-based learning [Different from the definition in <cit.>, we use the term entity-based learning rather than pair-based learning to describe data-to-data relationships; an entity refers to a real data point.]. Most existing metric learning objectives, such as the Contrastive, Triplet, Prototypical, and GE2E losses, belong to the former category, whose performance is either highly dependent on the sample mining strategy or restricted by insufficient label information within the mini-batch. Proxy-based losses mitigate both shortcomings; however, fine-grained entity-to-entity connections are either ignored or only indirectly leveraged. This paper proposes a Mask Proxy (MP) loss that directly incorporates both proxy-based and entity-based relationships. We further propose a Multinomial Mask Proxy (MMP) loss that leverages the hardness of entity-to-entity pairs. These methods are evaluated on the VoxCeleb test set and achieve state-of-the-art Equal Error Rate (EER).
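As context for the contrast the abstract draws between proxy-based and entity-based learning, the sketch below shows a generic PyTorch objective that combines proxy-to-embedding similarities (a learnable proxy per speaker class) with entity-to-entity similarities inside the mini-batch. This is a minimal illustrative sketch under assumed names and hyperparameters (`ProxyEntityLoss`, `scale`); it is not the paper's MP or MMP formulation, which is not given in the abstract.

```python
# Illustrative sketch only: a generic proxy-based loss augmented with
# entity-to-entity (data-to-data) similarities. Not the MP/MMP loss itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyEntityLoss(nn.Module):
    """Hypothetical example combining proxy-to-embedding and
    embedding-to-embedding cosine similarities."""
    def __init__(self, num_classes, embed_dim, scale=30.0):
        super().__init__()
        # One learnable proxy vector per speaker class.
        self.proxies = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.scale = scale

    def forward(self, embeddings, labels):
        # Proxy-based term: softmax over scaled cosine similarities
        # between each embedding and every class proxy.
        emb = F.normalize(embeddings, dim=1)
        prx = F.normalize(self.proxies, dim=1)
        proxy_sim = self.scale * emb @ prx.t()                # (B, C)
        proxy_loss = F.cross_entropy(proxy_sim, labels)

        # Entity-based term: cosine similarities among batch embeddings.
        ent_sim = self.scale * emb @ emb.t()                  # (B, B)
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
        pos = ent_sim[same & ~eye]   # same-speaker pairs (excluding self)
        neg = ent_sim[~same]         # different-speaker pairs
        # Encourage positive pairs to score higher than negative pairs.
        entity_loss = F.softplus(neg.mean() - pos.mean()) if pos.numel() else 0.0

        return proxy_loss + entity_loss
```

In this toy combination, the proxy term supplies global class-level supervision without pair mining, while the entity term adds the fine-grained data-to-data signal that the abstract notes is missing or only indirect in purely proxy-based losses; the relative weighting and the handling of hard pairs are design choices the actual MP/MMP losses address in their own way.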
