Many-to-Many Voice Transformer Network

05/18/2020
by   Hirokazu Kameoka, et al.
This paper proposes a voice conversion (VC) method based on a sequence-to-sequence (S2S) learning framework, which makes it possible to simultaneously convert the voice characteristics, pitch contour, and duration of input speech. We previously proposed an S2S-based VC method using a transformer network architecture, which we call the "voice transformer network (VTN)". While the original VTN is designed to learn only a mapping of speech feature sequences from one domain into another, we extend it so that it can simultaneously learn mappings among multiple domains using only a single model. This allows the model to fully utilize available training data collected from multiple domains by capturing common latent features that can be shared across different domains. On top of this model, we further propose incorporating a training loss called the "identity mapping loss", which ensures that the input feature sequence remains unchanged when it already belongs to the target domain. Using this particular loss for model training has been found to be extremely effective in improving the performance of the model at test time. We conducted speaker identity conversion experiments and showed that the proposed model achieved higher sound quality and speaker similarity than baseline methods.
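The identity mapping loss described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function and parameter names (`convert`, `src_domain`, `tgt_domain`) are hypothetical stand-ins for the many-to-many VTN, and the feature dimensions are arbitrary. The idea is simply to penalize the distance between the converter's output and its input when the source and target domains coincide.

```python
import numpy as np

def identity_mapping_loss(convert, features, domain_id):
    """Mean absolute error between convert(x, d -> d) and x itself.

    When the source and target domains are the same, the converter
    should act as an identity map, so any deviation is penalized.
    """
    reconstructed = convert(features, src_domain=domain_id, tgt_domain=domain_id)
    return float(np.mean(np.abs(reconstructed - features)))

# Toy check: with a converter that is exactly the identity, the loss is zero.
dummy_convert = lambda x, src_domain, tgt_domain: x
x = np.random.randn(128, 80)  # e.g. 128 frames of 80-dim spectral features
loss = identity_mapping_loss(dummy_convert, x, domain_id=0)
```

In practice this term would be added, with some weight, to the main conversion loss during training, so that the model learns both to map across domains and to leave already-matching inputs untouched.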
