Unsupervised Time-Aware Sampling Network with Deep Reinforcement Learning for EEG-Based Emotion Recognition

12/14/2022
by Yongtao Zhang, et al.

Recognizing human emotions from complex, multivariate, and non-stationary electroencephalography (EEG) time series is essential for affective brain-computer interfaces. However, because continuously labeling ever-changing emotional states is not feasible in practice, existing methods can only assign a single fixed label to all EEG timepoints in a continuous emotion-evoking trial, which overlooks both the highly dynamic emotional states and the non-stationary nature of the EEG signal. To reduce the reliance on fixed labels and exploit time-varying information, in this paper we propose a time-aware sampling network (TAS-Net) based on deep reinforcement learning (DRL) for unsupervised emotion recognition, which detects key emotion fragments and disregards irrelevant and misleading parts. Extensive experiments are conducted on three public datasets (SEED, DEAP, and MAHNOB-HCI) using leave-one-subject-out cross-validation, and the results demonstrate the superiority of the proposed method over previous unsupervised emotion recognition methods.
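To make the core idea of DRL-based fragment selection concrete, the following is a minimal conceptual sketch, not the authors' implementation: a hypothetical policy network scores EEG segments, samples keep/drop actions, and is updated with REINFORCE. The feature dimension, reward definition, and all hyperparameters are assumptions for illustration only (the 310-dimensional input loosely mirrors SEED-style differential-entropy features).

```python
# Hypothetical sketch of time-aware segment selection with REINFORCE.
# The reward (compactness of kept-segment representations) is a stand-in
# for whatever unsupervised reward the paper actually uses.
import torch
import torch.nn as nn

class SegmentPolicy(nn.Module):
    def __init__(self, feat_dim=310, hidden=64):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)  # time-aware encoder
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                                # x: (trials, segments, feat_dim)
        h, _ = self.gru(x)
        return torch.sigmoid(self.head(h)).squeeze(-1)   # keep-probability per segment

policy = SegmentPolicy()
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)

eeg = torch.randn(8, 20, 310)                            # dummy batch: 8 trials, 20 segments
for step in range(100):
    probs = policy(eeg)
    dist = torch.distributions.Bernoulli(probs)
    actions = dist.sample()                              # 1 = keep fragment, 0 = discard
    kept = eeg * actions.unsqueeze(-1)
    # Placeholder unsupervised reward: prefer compact summaries of kept fragments.
    reward = -kept.sum(dim=1).var(dim=-1)                # one scalar reward per trial
    loss = -(dist.log_prob(actions).sum(dim=1) * reward).mean()  # REINFORCE objective
    optim.zero_grad()
    loss.backward()
    optim.step()
```

In this sketch the recurrent encoder gives each segment a context-dependent score, so the sampling decision depends on where the segment falls in the trial rather than on the segment in isolation, which is the general motivation behind time-aware sampling.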
