Author's posts

Jan 31

Biosignal-based speech processing for communication rehabilitation

Seminar by Thomas Hueber, CNRS, GIPSA-lab, Grenoble. Thursday 8 February, 10:00 – 11:00, room F107, INRIA Montbonnot Saint-Martin. Abstract: Propelled by the progress of machine learning, speech technologies such as automatic speech recognition and text-to-speech synthesis have become advanced enough to be deployed in several consumer products and used in our daily lives. However, using …


Dec 04

A Bayesian Framework for Head Pose Estimation and Tracking

PhD defense by Vincent Drouard. Monday 18 December 2017, 11:00 – 12:00, Grand Amphithéâtre, INRIA Montbonnot Saint-Martin. In this thesis, we address the well-known problem of head-pose estimation in the context of human-robot interaction (HRI). We accomplish this task with a two-step approach. First, we focus on the estimation of the head pose from …


Dec 04

Towards automatic learning of gait signatures for people identification in video

Seminar by Manuel J. Marin-Jimenez, Universidad de Córdoba. Tuesday 19 December 2017, 11:00 – 12:00, room F107, INRIA Montbonnot Saint-Martin. Abstract: This talk targets people identification in video based on the way they walk (i.e., gait). While classical methods typically derive gait signatures from sequences of binary silhouettes, in this talk we present the use …


Oct 06

Audio-Visual Analysis for Human-Robot Interaction

Data Science Seminar Series, 12 October 2017. Radu Horaud. Abstract: Robots have gradually moved from factory floors to populated spaces. Therefore, there is a crucial need to endow robots with communicative skills. One of the prerequisites of human-robot communication (or, more generally, interaction) is the ability of robots to perceive their environment, to detect people, …


Oct 05

Reinforcement approaches for online learning of virtual vocal conversational agents

Seminar by Fabrice Lefèvre, Université d’Avignon. Room C207, 9:30 – 11:00. Abstract: Stochastic techniques, whether supervised, unsupervised, or reinforcement-based, are now widely used in all the modules of academic human-machine dialogue systems. Current research strives to further improve their quality in order to reach satisfactory and acceptable overall performance. A major shortcoming remains …


Sep 29

IEEE/RSJ IROS’17: Novel Technology Paper Award Finalist!

Yutong Ban (PhD student) and his co-authors, Xavier Alameda-Pineda, Fabien Badeig, and Radu Horaud, were among the five finalists of the “Novel Technology Paper Award for Amusement Culture” at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, September 2017, for their paper Tracking a Varying Number of People with a Visually-Controlled …


May 19

ERC Proof of Concept (PoC) Grant Awarded to Radu Horaud

As an ERC Advanced Grant holder, Radu Horaud was awarded a Proof of Concept grant for his project Vision and Hearing in Action Laboratory (VHIALab). The project will develop software packages enabling companion robots to robustly interact with multiple users.

Apr 06

IEEE HSCMA’17: Best Paper Award!

Israel Dejene Gebru (PhD student) and his co-authors, Christine Evers, Patrick Naylor (both from Imperial College London) and Radu Horaud, received the best paper award at the IEEE Fifth Joint Workshop on Hands-free Speech Communication and Microphone Arrays, San Francisco, USA, 1-3 March 2017, for their paper Audio-visual Tracking by Density Approximation in a Sequential …


Mar 10

Modeling Reverberant Mixtures for Multichannel Audio-Source Separation

Seminar by Simon Leglaive, Telecom ParisTech, Paris. Wednesday, 22 March 2017, 10:30 – 11:30 am, room F107, INRIA Montbonnot. Abstract: We tackle the problem of multichannel audio-source separation in under-determined reverberant mixtures. The aim of this talk is to present source separation approaches that can take advantage of prior knowledge on the mixing filters, …


Jan 26

Audio-visual diarization dataset now available for download

We have just made our new AVDIAR dataset publicly available. AVDIAR stands for “audio-visual diarization”. The dataset contains recordings of social gatherings, captured with two cameras and six microphones. Both the audio and visual data were carefully annotated, making it possible to evaluate the performance of various algorithms, such as person tracking, speech-source localization, speaker …
