Multi-view Temporal Alignment for Non-parallel Articulatory-to-Acoustic Speech Synthesis

12/30/2020
by Jose A. Gonzalez-Lopez, et al.

Articulatory-to-acoustic (A2A) synthesis refers to the generation of audible speech from captured movements of the speech articulators. This technique has numerous applications, such as restoring oral communication to people who can no longer speak due to illness or injury. Most successful techniques so far adopt a supervised learning framework, in which time-synchronous recordings of articulatory movement and speech are used to train a machine learning algorithm that later maps articulator movements to speech. This, however, prevents the application of A2A techniques in cases where parallel data are unavailable, e.g., when a person has already lost his/her voice and only articulatory data can be captured. In this work, we propose a solution to this problem based on the theory of multi-view learning. The proposed algorithm attempts to find an optimal temporal alignment between pairs of non-aligned articulatory and acoustic sequences with the same phonetic content. It does so by projecting both sequences into a common latent space where the two views are maximally correlated, and then applying dynamic time warping. Several variants of this idea are discussed and explored. We show that the quality of speech generated in the non-aligned scenario is comparable to that obtained in the parallel scenario.
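To make the project-then-warp idea concrete, here is a minimal sketch in Python using plain CCA as the multi-view projection and a classic dynamic time warping pass on the latent sequences. The abstract mentions several variants, so this is only one illustrative instantiation: the toy data, dimensions, and the initial linear-interpolation pairing used to bootstrap the CCA fit are assumptions, not the authors' implementation.

```python
# Sketch of multi-view temporal alignment: project articulatory (X) and
# acoustic (Y) sequences into a maximally correlated latent space (CCA),
# then align them with dynamic time warping (DTW).
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cross_decomposition import CCA

def dtw_path(cost):
    """Classic DTW on a pairwise cost matrix; returns the warping path."""
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    # Backtrack from the end of both sequences to (0, 0).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Toy stand-ins: same utterance captured as two views of different lengths.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 12))   # e.g., articulatory sensor trajectories
Y = rng.standard_normal((150, 25))   # e.g., acoustic (spectral) frames

# CCA needs an initial frame pairing to fit; crude linear interpolation of
# the frame indices is one simple bootstrap (iterating this fit/align loop
# is a natural refinement).
idx = np.linspace(0, len(X) - 1, len(Y)).round().astype(int)
cca = CCA(n_components=8, max_iter=1000)
_, Zy = cca.fit_transform(X[idx], Y)

# Warp the correlated latent sequences against each other.
path = dtw_path(cdist(cca.transform(X), Zy))
print(f"warping path length: {len(path)}")
```

The warping path returned by `dtw_path` pairs each articulatory frame with an acoustic frame, yielding the time-synchronous data that a standard supervised A2A mapping can then be trained on.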


Related research

03/26/2020 ∙ Non-parallel Voice Conversion System with WaveNet Vocoder and Collapsed Speech Suppression
In this paper, we integrate a simple non-parallel voice conversion (VC) ...

03/18/2022 ∙ A^3T: Alignment-Aware Acoustic and Text Pretraining for Speech Synthesis and Editing
Recently, speech representation learning has improved many speech-relate...

06/04/2020 ∙ Hierarchical Optimal Transport for Robust Multi-View Learning
Traditional multi-view learning methods often rely on two assumptions: (...

10/13/2021 ∙ A Melody-Unsupervision Model for Singing Voice Synthesis
Recent studies in singing voice synthesis have achieved high-quality res...

11/05/2020 ∙ Semi-supervised Learning for Singing Synthesis Timbre
We propose a semi-supervised singing synthesizer, which is able to learn...

11/14/2016 ∙ Multi-view Recurrent Neural Acoustic Word Embeddings
Recent work has begun exploring neural acoustic word embeddings---fixed-...

09/04/2020 ∙ Silent Speech Interfaces for Speech Restoration: A Review
This review summarises the status of silent speech interface (SSI) resea...
