Category: Seminars

Generative FLOW for expressive speech synthesis

Speaker: Ajinkya Kulkarni
Date: 14 February 2019 at 10:30 – Room C005
Abstract: Generative flow-based architectures have recently been proposed for high-quality image generation. Two major challenges in the machine learning domain are the ability to learn representations from few data points and the ability to generate new data from learned …

Read more
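The abstract above alludes to the core idea of flow-based generative models: an invertible transform with a tractable Jacobian yields an exact log-likelihood via the change-of-variables formula. The sketch below is a minimal illustration of that idea with a single affine flow; the function names are hypothetical and this is not the architecture presented in the talk.

```python
import numpy as np

def affine_flow(x, scale, shift):
    """Invertible affine transform z = scale * x + shift.

    For an element-wise affine map, log|det(dz/dx)| = sum(log|scale|).
    """
    z = scale * x + shift
    log_det = np.sum(np.log(np.abs(scale)))
    return z, log_det

def log_likelihood(x, scale, shift):
    """Exact log-density of x under a standard-normal base distribution,
    via log p(x) = log p_Z(f(x)) + log|det df/dx|."""
    z, log_det = affine_flow(x, scale, shift)
    log_pz = -0.5 * np.sum(z ** 2) - 0.5 * len(z) * np.log(2 * np.pi)
    return log_pz + log_det
```

Real flow models (e.g. Glow) stack many such invertible layers with learned parameters, but training still maximizes exactly this kind of closed-form log-likelihood.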

SING: Symbol-to-Instrument Neural Generator

Speaker: Alexandre Défossez
Date: 10 January 2019 at 13:00 – Room B011
Abstract: Recent progress in deep learning for audio synthesis opens the way to models that directly produce the waveform, shifting away from the traditional paradigm of relying on vocoders or MIDI synthesizers for speech or music generation. Despite their successes, …

Read more

Deep learning-based speaker localization and speech separation from Ambisonics recordings

Speaker: Laureline Pérotin
Date: 22 November 2018 at 10:30 – Room C005
Abstract: Personal assistants are flourishing, but it is still hard to achieve voice control in adverse conditions, whenever noise, other speakers, reverberation or furniture reflections are present. Preprocessing steps such as speaker localization and speech enhancement have been shown to help automatic …

Read more

Analysis and development of speech enhancement features in cochlear implants

Speaker: Nicolas Furnon
Date: 18 October 2018 at 10:30 – Room C005
Abstract: Cochlear implants (CIs) are complex systems developed to restore hearing to people with profound auditory loss. These solutions are efficient in quiet environments, but adverse situations remain very challenging for CI users. A range of algorithms are …

Read more

Alpha-stable processes for signal processing

Speaker: Mathieu Fontaine
Date: 27 September 2018 at 10:30 – Room C005
Abstract: The scientific topic of sound source separation (SSS) aims to decompose audio signals into their constituent elements, for example separating the lead singer's voice from the musical accompaniment or from background noise. In …

Read more

Adversarial Neural Networks for Language Identification

Speaker: Raphaël Duroselle
Date: 12 July 2018 at 10:30 – Room C103
Abstract: Language identification systems are very common in speech processing and are used to classify the spoken language given a recorded audio sample. They are often used as a front-end for subsequent processing tasks such as automatic speech recognition or speaker …

Read more

Semi-supervised learning with deep neural networks for relative transfer function inverse regression

Speaker: Emmanuel Vincent
Date: 7 June 2018
Abstract: Prior knowledge of the relative transfer function (RTF) is useful in many applications but remains little studied. In this work, we propose a semi-supervised learning algorithm based on deep neural networks (DNNs) for RTF inverse regression, that is, to generate the full-band RTF vector …

Read more

Leveraging Word Contexts in Wikipedia for OOV Proper Nouns Recovery in Speech Recognition

Speaker: Badr Abdullah
Date: 31 May 2018
Abstract: Automatic Speech Recognition (ASR) systems are usually trained on static data and a finite vocabulary. When a spoken utterance contains Out-Of-Vocabulary (OOV) words, ASR systems misrecognize these words as in-vocabulary words with similar acoustic properties but entirely different meanings. The majority of OOV …

Read more

Speech/non-speech segmentation for speech recognition

Speaker: Odile Mella and Dominique Fohr
Date: 24 May 2018

Multiple-input neural network-based residual echo suppression

Speaker: Guillaume Carbajal
Date: 12 April 2018
Abstract: A residual echo suppressor (RES) aims to suppress the residual echo in the output of an acoustic echo canceler (AEC). Spectral-based RES approaches typically estimate the magnitude spectra of the near-end speech and the residual echo from a single input, that is either the …

Read more