Category: Seminars

Feb 06

Upcoming team seminars


Feb 02

Feedback on text analysis and emotion recognition in voice using deep learning

Speaker: Nicolas Turpault Date: February 15, 2018 Abstract: During my internship at a startup in London, I developed a system to recognise emotion in voice. In this work we extracted speech features (MFCCs) and then applied an RNN (LSTM) to predict the emotion in the voice. We used the SEMAINE and AVEC databases to …
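
The pipeline described here, MFCC features fed to an LSTM, starts from a standard front end. As an illustration only (this is not the speaker's actual code, and the frame sizes and filter counts are assumed values), a minimal NumPy sketch of MFCC extraction might look like this:

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_coeffs=13):
    """Minimal MFCC front end: frame -> FFT power -> mel filterbank -> log -> DCT."""
    # Frame the signal with a Hann window
    window = np.hanning(n_fft)
    frames = np.array([signal[s:s + n_fft] * window
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Triangular mel filterbank
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fbank[m - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[m - 1, k] = (r - k) / max(r - c, 1)
    log_mel = np.log(power @ fbank.T + 1e-10)
    # DCT-II to decorrelate; keep the first n_coeffs coefficients
    n = log_mel.shape[1]
    basis = np.cos(np.pi / n * (np.arange(n) + 0.5)[None, :]
                   * np.arange(n_coeffs)[:, None])
    return log_mel @ basis.T

# A one-second synthetic "utterance": the result is a (frames, coefficients) matrix
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = mfcc(sig)
print(feats.shape)  # (97, 13)
```

The resulting frame-by-frame feature matrix is the kind of sequence an LSTM would then consume.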


Jan 15

Biomechanical models of speech articulators to understand speech motor control

Speaker: Pascal Perrier (Gipsa-lab Grenoble) Date: January 18, 2018 Abstract: For the last 20 years we have been working on the development of 2D and 3D biomechanical models of speech articulators, with the aim of better understanding (1) how speech movements are constrained, and (2) which degrees of freedom speakers have to deal with the goals …


Nov 30

Arabic speech synthesis

Speaker: Amal Houidhek Date: November 30, 2017 Abstract: The first part of the presentation investigates statistical parametric speech synthesis (SPSS) of Modern Standard Arabic (MSA): a Hidden Markov Model (HMM)-based speech synthesis system relies on a description of speech segments corresponding to phonemes, with a large set of features representing phonetic, phonological, linguistic and contextual aspects. …
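
In HMM-based synthesis, that "large set of features" is typically encoded as a full-context label per phoneme, which selects the context-dependent model for each segment. Purely as an illustration (the label format and feature names below are invented, not taken from the talk):

```python
# A toy full-context label for one phoneme, in the spirit of HMM-based synthesis:
# the model for a segment is chosen from its phonetic and linguistic context.
def full_context_label(prev2, prev, phone, nxt, nxt2, pos_in_word, word_len):
    """Illustrative quinphone label with positional features (format invented)."""
    return f"{prev2}^{prev}-{phone}+{nxt}={nxt2}/pos:{pos_in_word}_{word_len}"

print(full_context_label("sil", "m", "a", "r", "h", 2, 5))
# sil^m-a+r=h/pos:2_5
```

Real systems use far richer contexts (syllable, word, and phrase-level features), but the principle is the same.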


Oct 05

An annihilation filter approach for the blind identification of speech excited SIMO acoustic systems

Speaker: Mathieu Hu Date: October 5, 2017 Abstract: The characterization of the room impulse responses via the cross-relation is reinterpreted for noisy conditions and exploited in this work to propose an approach for the blind identification of acoustic channels from reverberant noisy speech signals. In this novel approach, which aims to annihilate the speech content from …
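
The cross-relation mentioned in the abstract follows from each microphone observing the same source convolved with a different channel, so that x1 * h2 = x2 * h1 (both equal s * h1 * h2). A small NumPy check of the noiseless identity, on synthetic signals rather than the talk's data:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal(200)   # source (speech-like) signal
h1 = rng.standard_normal(8)    # impulse response to microphone 1
h2 = rng.standard_normal(8)    # impulse response to microphone 2
x1 = np.convolve(s, h1)        # observation at microphone 1
x2 = np.convolve(s, h2)        # observation at microphone 2

# Cross-relation: convolving each observation with the *other* channel
# gives the same signal, since convolution commutes and associates.
lhs = np.convolve(x1, h2)
rhs = np.convolve(x2, h1)
print(np.allclose(lhs, rhs))  # True in the noiseless case
```

Blind identification methods exploit this: the unknown channels are those that make the two sides agree, which is exactly what noise perturbs and what the annihilation-filter reinterpretation addresses.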


Sep 14

Dynamic out-of-vocabulary retrieval for automatic speech recognition

Speaker: Amélie Greiner Date: September 14, 2017 Abstract: To perform a transcription, a speech recognition system relies on a vocabulary that contains all the words that can be transcribed. In practice, it is impossible to include all the existing words in this vocabulary, which therefore contains only the most common words of the language. Out-of-vocabulary words …
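
A minimal way to see the out-of-vocabulary problem: any word outside the recogniser's vocabulary cannot be emitted and is typically mapped to an unknown token. A toy sketch, with a vocabulary and word list invented for illustration:

```python
# Toy vocabulary covering only common words; anything else is out-of-vocabulary
vocab = {"the", "cat", "sat", "on", "mat"}

def transcribe_tokens(words, vocab):
    """Map each word to itself if in-vocabulary, else to an <unk> placeholder."""
    return [w if w in vocab else "<unk>" for w in words]

print(transcribe_tokens(["the", "axolotl", "sat"], vocab))
# ['the', '<unk>', 'sat']
```

Dynamic OOV retrieval aims to do better than emitting `<unk>`, by recovering such words at decoding time.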


Sep 07

Virtual Acoustic Space Learning for Auditory Scene Geometry Estimation

Speaker: Antoine Deleforge (Researcher, INRIA Rennes) Date: September 7, 2017 Abstract: Most auditory scene analysis methods (source separation, denoising, dereverberation, etc.) rely on some geometrical information about the system: Where are the sources? Where are the microphones? What is around or between them? Since the geometrical configuration of real-world systems is often very complex, classical approaches …
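
One elementary geometric cue that such scene-analysis methods build on is the time difference of arrival (TDOA) between microphones, which can be estimated from the peak of a cross-correlation. A synthetic NumPy sketch (this is a classical baseline, not the speaker's learned virtual-acoustic-space method):

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(1000)                       # source signal
delay = 5                                           # true inter-microphone delay (samples)
x1 = s                                              # microphone 1
x2 = np.concatenate([np.zeros(delay), s[:-delay]])  # delayed copy at microphone 2

# Cross-correlate and take the lag of the peak as the TDOA estimate
corr = np.correlate(x2, x1, mode="full")
lag = np.argmax(corr) - (len(x1) - 1)
print(lag)  # 5
```

In a real room, reverberation and noise blur this peak, which is one reason learning-based geometry estimation is attractive.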


Aug 31

Anti-Spoofing Methods for Speaker Verification: Recent Advancements and Future Directions

Speaker: Md Sahidullah (Visiting Researcher) Date: August 31, 2017 Abstract: Automatic speaker verification (ASV) technology has recently been finding its way into end-user applications. This voice-based authentication technology shows promising recognition performance in controlled conditions. However, ASV technology is highly vulnerable to spoofing attacks, where an intruder uses a synthetic or recorded voice to get illegitimate …


Jun 29

Black-box Optimization of Deep Neural Networks for Acoustic Modeling

Speaker: Aman Zaid Berhe Date: June 29, 2017 Abstract: Deep neural networks are now the state of the art in acoustic modeling for automatic speech recognition. They allow obtaining robust and highly accurate acoustic models. However, these models have many hyper-parameters, and hyper-parameter optimization is a tedious yet essential task for successfully training very deep neural networks. We …
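
Random search is one of the simplest black-box strategies for such hyper-parameter optimization: the objective is only queried, never differentiated. A toy sketch with an invented surrogate for the validation error (a real run would train and evaluate an acoustic model at each point instead):

```python
import numpy as np

def validation_error(lr, layers):
    """Stand-in objective: pretends the best setting is lr=1e-3, 5 layers.
    (Invented surrogate; a real black-box call would train a DNN.)"""
    return (np.log10(lr) + 3) ** 2 + 0.1 * (layers - 5) ** 2

rng = np.random.default_rng(42)
best_hp, best_err = None, np.inf
for _ in range(100):
    # Sample from the search space: log-uniform learning rate, integer depth
    lr = 10.0 ** rng.uniform(-5, -1)
    layers = int(rng.integers(1, 10))
    err = validation_error(lr, layers)
    if err < best_err:
        best_hp, best_err = (lr, layers), err

print(best_hp, best_err)
```

More sample-efficient black-box methods (Bayesian optimization, evolutionary strategies) follow the same query-only interface.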


Jun 22

HRTF range extrapolation by spherical harmonics decomposition

Speaker: Lauréline Perotin Date: June 22, 2017 Abstract: In order to locate a sound in space, that is, to know from which direction it comes but also from how far away, our brain analyses all the distortions applied to the soundwave from its (estimated) origin to our eardrums. Those reflections, diffractions and other propagation-related transformations are contained in the …
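
The decomposition in the title expresses a directional function, such as an HRTF magnitude at one frequency, in a spherical-harmonic basis fitted from sampled directions. A first-order NumPy sketch with a toy directional pattern (the pattern and normalization conventions are assumed here for illustration):

```python
import numpy as np

def real_sh_basis(dirs):
    """First-order real spherical harmonics at unit direction vectors.
    Columns: Y_0^0, Y_1^{-1}, Y_1^0, Y_1^1 (one common normalization)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c0 = 0.5 / np.sqrt(np.pi)
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    return np.stack([c0 * np.ones_like(x), c1 * y, c1 * z, c1 * x], axis=1)

rng = np.random.default_rng(3)
dirs = rng.standard_normal((50, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions on the sphere

# Toy directional gain that is exactly first order (cardioid-like in z)
target = 1.0 + 0.5 * dirs[:, 2]

# Least-squares fit of SH coefficients, then reconstruction at the same directions
B = real_sh_basis(dirs)
coeffs, *_ = np.linalg.lstsq(B, target, rcond=None)
recon = B @ coeffs
print(np.allclose(recon, target))  # True: the pattern lies in the order-1 subspace
```

Once the coefficients are known, the same basis evaluated at new directions or, as in this talk, at new ranges, extrapolates the measured set.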
