Learning Audio-Visual embedding for Person Verification in the Wild

09/09/2022
by   Peiwen Sun, et al.

It has already been observed that audio-visual embeddings are more robust than single-modality embeddings for person verification. Here, we propose a novel audio-visual strategy that considers aggregators from a fusion perspective. First, we introduce weight-enhanced attentive statistics pooling, applied for the first time to face verification. We then find a strong correlation between the modalities during pooling, so we propose joint attentive pooling, which uses cycle consistency to learn implicit inter-frame weights. Finally, the modalities are fused with a gated attention mechanism to obtain a robust audio-visual embedding. All proposed models are trained on the VoxCeleb2 dev set, and the best system obtains 0.18% EER on the official VoxCeleb1 trial lists, which is, to our knowledge, the best published result for person verification.
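For intuition, here is a minimal PyTorch sketch of two building blocks the abstract names: attentive statistics pooling over frame-level features, and a gated attention fusion of the audio and visual embeddings. The module names, dimensions, and the exact gating form are illustrative assumptions, not the authors' implementation (in particular, the paper's weight-enhanced and joint variants add machinery beyond this baseline).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveStatsPooling(nn.Module):
    """Attentive statistics pooling (baseline sketch, assumed form):
    frame-level features are weighted by learned attention scores, then
    summarized by their weighted mean and standard deviation."""

    def __init__(self, feat_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, feat_dim)
        w = F.softmax(self.attn(x), dim=1)                      # per-frame weights
        mean = (w * x).sum(dim=1)                               # weighted mean
        var = (w * (x - mean.unsqueeze(1)) ** 2).sum(dim=1)     # weighted variance
        std = torch.sqrt(var.clamp(min=1e-8))                   # weighted std
        return torch.cat([mean, std], dim=-1)                   # (batch, 2 * feat_dim)


class GatedFusion(nn.Module):
    """Gated attention fusion (assumed form): a sigmoid gate decides, per
    dimension, how much of each modality enters the joint embedding."""

    def __init__(self, emb_dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * emb_dim, emb_dim)

    def forward(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([audio, visual], dim=-1)))
        return g * audio + (1 - g) * visual


# Usage with made-up shapes: pool 200 frames of 512-dim features,
# then fuse two 256-dim modality embeddings.
pool = AttentiveStatsPooling(feat_dim=512)
utterance_emb = pool(torch.randn(4, 200, 512))   # -> (4, 1024)
fuse = GatedFusion(emb_dim=256)
joint_emb = fuse(torch.randn(4, 256), torch.randn(4, 256))  # -> (4, 256)
```

The gate makes the fusion input-dependent: when one modality is unreliable (noisy audio, occluded face), the sigmoid can downweight it per dimension rather than averaging the two embeddings blindly.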
