Exploring the time-domain deep attractor network with two-stream architectures in a reverberant environment

07/01/2020
by   Hangting Chen, et al.

Despite the success of deep learning in speech signal processing, speaker-independent speech separation in reverberant environments remains challenging. The deep attractor network (DAN) performs speech separation with speaker attractors in the time-frequency domain. The recently proposed convolutional time-domain audio separation network (Conv-TasNet) surpasses ideal masks on anechoic mixture signals, but its architecture makes it difficult to separate signals with a variable number of speakers. Moreover, both models suffer performance degradation in reverberant environments. In this study, we propose a time-domain deep attractor network (TD-DAN) with two-stream convolutional networks that efficiently performs both dereverberation and separation under a variable number of speakers. The speaker encoding stream (SES) of the TD-DAN models speaker information and is explored with various waveform encoders. The speech decoding stream (SDS) accepts speaker attractors from the SES and learns to predict early reflections. Experimental results demonstrate that the TD-DAN achieves scale-invariant source-to-distortion ratio (SI-SDR) gains of 10.40/9.78 dB and 9.15/7.92 dB on the reverberant two- and three-speaker development/evaluation sets, exceeding Conv-TasNet by 1.55/1.33 dB and 0.94/1.21 dB, respectively.
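The SI-SDR metric reported above can be illustrated with a short sketch. The function below is a minimal NumPy implementation of the standard scale-invariant SDR definition (project the estimate onto the reference, then take the energy ratio of target to residual in dB); the function name and implementation details are illustrative, not taken from the paper.

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-invariant source-to-distortion ratio (SI-SDR) in dB.

    Both signals are zero-meaned, then the reference is optimally
    scaled toward the estimate so the metric ignores overall gain.
    """
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    # Optimal scaling factor for the reference (orthogonal projection)
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference          # scaled target component
    noise = estimate - target           # distortion/interference residual
    return 10.0 * np.log10(np.sum(target**2) / np.sum(noise**2))
```

Because of the projection step, rescaling the estimate by any nonzero constant leaves the score unchanged, which is what makes the metric scale-invariant.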
