Handling Background Noise in Neural Speech Generation

02/23/2021
by Tom Denton, et al.

Recent advances in neural-network based generative modeling of speech have shown great potential for speech coding. However, the performance of such models drops when the input is not clean speech, e.g., in the presence of background noise, preventing their use in practical applications. In this paper we examine the reason and discuss methods to overcome this issue. Placing a denoising preprocessing stage before feature extraction and targeting clean speech during training is shown to be the best-performing strategy.
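
The sketch below illustrates the general shape of that strategy: denoise the noisy input, extract conditioning features from the denoised signal, and train the generative model against the clean speech target. All module names, dimensions, and loss choices are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal PyTorch sketch of the strategy described above: a denoising stage
# runs before feature extraction, and the generative vocoder is trained
# against clean speech targets. Everything here is a placeholder assumption.
import torch
import torch.nn as nn

FRAME_DIM = 128  # assumed frame/feature size

denoiser = nn.Sequential(                    # placeholder denoising front end
    nn.Linear(FRAME_DIM, FRAME_DIM), nn.ReLU(),
    nn.Linear(FRAME_DIM, FRAME_DIM))
feature_extractor = nn.Linear(FRAME_DIM, FRAME_DIM)   # stand-in feature analyzer
vocoder = nn.GRU(FRAME_DIM, FRAME_DIM, batch_first=True)  # stand-in generative model
head = nn.Linear(FRAME_DIM, 1)               # maps hidden state to a waveform sample

optimizer = torch.optim.Adam(
    list(denoiser.parameters()) + list(feature_extractor.parameters())
    + list(vocoder.parameters()) + list(head.parameters()), lr=1e-4)

def training_step(noisy_frames, clean_waveform):
    """One step: denoise -> extract features -> generate -> match clean target."""
    denoised = denoiser(noisy_frames)         # denoising preprocessing stage
    features = feature_extractor(denoised)    # conditioning features from denoised input
    hidden, _ = vocoder(features)             # neural generative model
    prediction = head(hidden)                 # predicted waveform samples
    loss = nn.functional.l1_loss(prediction, clean_waveform)  # clean speech as target
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# Example usage with random tensors standing in for real audio frames.
noisy = torch.randn(4, 50, FRAME_DIM)         # batch of noisy input frames
clean = torch.randn(4, 50, 1)                 # corresponding clean target samples
print(training_step(noisy, clean))
```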
