MMER: Multimodal Multi-task Learning for Emotion Recognition in Spoken Utterances

03/31/2022
by Harshvardhan Srivastava, et al.

Emotion Recognition (ER) aims to classify human utterances into different emotion categories. In this paper, we propose a multimodal multi-task learning approach for ER from individual utterances in isolation, based on early fusion and self-attention-based multimodal interaction between the text and acoustic modalities. Experiments on the IEMOCAP benchmark show that our proposed model outperforms our re-implementation of the state-of-the-art and achieves better performance than all other unimodal and multimodal approaches in the literature. In addition, strong baselines and ablation studies demonstrate the effectiveness of our proposed approach. We make all our code publicly available on GitHub.
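
The abstract does not spell out implementation details, but the following is a minimal sketch of what early fusion plus self-attention-based multimodal interaction with a multi-task head could look like in PyTorch. The dimensions, the number of emotion classes, and the auxiliary head are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class EarlyFusionSelfAttention(nn.Module):
    """Illustrative sketch: concatenate text and acoustic token
    sequences (early fusion), then let self-attention model
    cross-modal interactions. All sizes are placeholders."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.emotion_head = nn.Linear(d_model, 4)  # e.g. 4 IEMOCAP emotion classes
        self.aux_head = nn.Linear(d_model, 1)      # hypothetical auxiliary task head

    def forward(self, text_feats, audio_feats):
        # Early fusion: join both modalities along the sequence axis
        # so attention can attend across text and acoustic tokens.
        fused = torch.cat([text_feats, audio_feats], dim=1)
        attended, _ = self.attn(fused, fused, fused)
        pooled = self.norm(attended).mean(dim=1)   # mean-pool over all tokens
        return self.emotion_head(pooled), self.aux_head(pooled)

# Toy usage with random features: batch=2, 10 text / 20 audio tokens, dim 256
model = EarlyFusionSelfAttention()
emo_logits, aux_out = model(torch.randn(2, 10, 256), torch.randn(2, 20, 256))
print(emo_logits.shape)  # torch.Size([2, 4])
```

In a multi-task setup like the one the abstract describes, the losses from the emotion head and any auxiliary heads would typically be combined (e.g. as a weighted sum) during training.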
