Learning generic feature representation with synthetic data for weakly-supervised sound event detection by inter-frame distance loss

11/02/2020
by Yuxin Huang, et al.

Due to the limited availability of strongly labeled sound event detection data, using synthetic data to improve sound event detection performance has become a new research focus. In this paper, we exploit synthetic data to improve feature representations. Based on metric learning, we propose an inter-frame distance loss function for domain adaptation and demonstrate its effectiveness on sound event detection. We also apply multi-task learning with synthetic data, and find that the best performance is achieved when the two methods are used together. Experiments on the DCASE 2018 task 4 test set and the DCASE 2019 task 4 synthetic set both show competitive results.
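The abstract does not spell out the exact form of the inter-frame distance loss. Below is a minimal, hypothetical sketch of what a frame-level metric-learning loss for domain adaptation could look like, assuming the loss pulls together frame embeddings of real and synthetic audio produced by a shared feature extractor; the function name, tensor shapes, and margin are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a frame-level distance loss between real and synthetic
# embeddings, in the spirit of metric learning for domain adaptation.
# Shapes and the margin value are assumptions for illustration only.
import torch
import torch.nn.functional as F


def inter_frame_distance_loss(real_frames: torch.Tensor,
                              synth_frames: torch.Tensor,
                              margin: float = 1.0) -> torch.Tensor:
    """Toy inter-frame distance loss.

    real_frames, synth_frames: (batch, time, feat) frame embeddings from a
    shared feature extractor for real and synthetic clips of the same class.
    Paired frames across domains are pulled together; clamping at a margin
    keeps the objective bounded, as in contrastive-style losses.
    """
    # Flatten (batch, time) so each frame pair contributes one distance term
    dist = F.pairwise_distance(real_frames.flatten(0, 1),
                               synth_frames.flatten(0, 1))
    return torch.clamp(dist, max=margin).mean()


if __name__ == "__main__":
    real = torch.randn(4, 100, 128)    # 4 clips, 100 frames, 128-dim embeddings
    synth = torch.randn(4, 100, 128)
    print(inter_frame_distance_loss(real, synth))
```

In a training loop, such a term would typically be added to the weakly supervised classification loss with a weighting coefficient, so the detector learns event labels while the feature extractor aligns real and synthetic frame representations.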
