Articulatory speech synthesis from static context-aware articulatory targets

Speaker: Anastasiia Tsukanova

Date: April 6, 2017

Abstract:

In this talk I will present my work in articulatory speech synthesis, that is, synthesizing speech through simultaneous control over the articulators (the jaw, the tongue, the lips, the velum, the larynx and the epiglottis) and the source. The synthesis is based on static MRI data: 97 annotated images capturing the articulation of French vowels and of blocked consonant-vowel syllables. The rule-based control has to account for coarticulation and be flexible enough to vary speech production strategies. The results of the synthesis are evaluated visually, acoustically and perceptually, and the problems encountered are broken down by their origin: the dataset, its modelling, the algorithm for managing the vocal tract shapes, the translation of those shapes into area functions, and the acoustic simulation.
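
To give a concrete feel for the last two stages (area functions and acoustic simulation), here is a minimal sketch of a standard textbook approach, a lossless concatenated-tube (chain-matrix) model that maps an area function to a glottis-to-lips transfer function whose peaks approximate the formants. This is an illustration only, not necessarily the acoustic simulation used in the work; the function and parameter names are hypothetical.

    import numpy as np

    def tube_transfer_magnitude(areas_cm2, length_cm, freqs_hz,
                                c=35000.0, rho=0.00114):
        """|U_lips / U_glottis| for a lossless concatenated-tube model of the
        vocal tract (chain-matrix method), assuming an ideal open end at the
        lips (P_lips = 0) and an ideal volume-velocity source at the glottis."""
        areas = np.asarray(areas_cm2, dtype=float)
        seg = length_cm / len(areas)            # length of each tube section (cm)
        mag = np.empty(len(freqs_hz))
        for i, f in enumerate(freqs_hz):
            k = 2.0 * np.pi * f / c             # wavenumber (rad/cm)
            t = np.eye(2, dtype=complex)
            for a in areas:                     # chain the sections, glottis to lips
                z = rho * c / a                 # characteristic acoustic impedance
                ckl, skl = np.cos(k * seg), np.sin(k * seg)
                t = t @ np.array([[ckl, 1j * z * skl],
                                  [1j * skl / z, ckl]])
            # With P_lips = 0, U_glottis = t[1, 1] * U_lips, so H = 1 / t[1, 1].
            mag[i] = abs(1.0 / t[1, 1])
        return mag

    # Usage: a uniform 17 cm tube of 5 cm^2 (a rough neutral vowel); resonances
    # should appear near odd multiples of c / (4 L), i.e. about 515, 1545, 2575 Hz.
    freqs = np.arange(50.0, 4000.0, 5.0)
    mag = tube_transfer_magnitude([5.0] * 40, 17.0, freqs)
    formants = [freqs[j] for j in range(1, len(freqs) - 1)
                if mag[j] > mag[j - 1] and mag[j] > mag[j + 1]]
    print("Estimated formants (Hz):", [round(f) for f in formants[:3]])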

Then I will describe the recently acquired dynamic data (real-time MRI, rtMRI) and discuss our plans for it.