Radu HORAUD

Author's posts

Oct 06

Audio-Visual Analysis for Human-Robot Interaction

Data Science Seminar Series, 12 October 2017. Radu Horaud. Abstract: Robots have gradually moved from factory floors to populated spaces. There is therefore a crucial need to endow robots with communicative skills. One of the prerequisites of human-robot communication (or, more generally, interaction) is the ability of robots to perceive their environment, to detect people, …

Continue reading »

Oct 05

Reinforcement approaches for online learning of virtual vocal conversational agents

Seminar by Fabrice Lefèvre, Université d’Avignon. Room C207, 9:30 – 11:00. Abstract: Stochastic techniques, whether supervised, unsupervised, or reinforcement-based, are now widespread across all the modules of academic human-machine dialog systems. Current research strives to improve their quality even further, in order to reach satisfactory and acceptable overall performance. A major shortcoming remains …

Continue reading »

Sep 29

IEEE/RSJ IROS’17: Novel Technology Paper Award Finalist!

Yutong Ban (PhD student) and his co-authors, Xavier Alameda-Pineda, Fabien Badeig, and Radu Horaud, were among the five finalists of the “Novel Technology Paper Award for Amusement Culture” at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, September 2017, for their paper Tracking a Varying Number of People with a Visually-Controlled …

Continue reading »

May 19

ERC Proof of Concept (PoC) Grant Awarded to Radu Horaud

As an ERC Advanced Grant holder, Radu Horaud was awarded a Proof of Concept grant for his project Vision and Hearing in Action Laboratory (VHIALab). The project will develop software packages enabling companion robots to robustly interact with multiple users.

Apr 06

IEEE HSCMA’17: Best Paper Award!

Israel Dejene Gebru (PhD student) and his co-authors, Christine Evers, Patrick Naylor (both from Imperial College London) and Radu Horaud, received the best paper award at the IEEE Fifth Joint Workshop on Hands-free Speech Communication and Microphone Arrays, San Francisco, USA, 1-3 March 2017, for their paper Audio-visual Tracking by Density Approximation in a Sequential …

Continue reading »

Mar 10

Modeling Reverberant Mixtures for Multichannel Audio-Source Separation

Wednesday, 22 March 2017, 10:30 – 11:30 am, room F107, INRIA Montbonnot. Seminar by Simon Leglaive, Telecom ParisTech, Paris. Abstract: We tackle the problem of multichannel audio-source separation in under-determined reverberant mixtures. The aim of this talk is to present source-separation approaches that can take advantage of prior knowledge on the mixing filters, …

Continue reading »
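The "under-determined" setting in the abstract above can be illustrated with a toy instantaneous (non-reverberant) mixing model, far simpler than the reverberant filters the seminar addresses: with fewer microphones than sources, the mixing matrix cannot be inverted, so naive least-squares unmixing fails and separation must rely on prior knowledge. All names and dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instantaneous mixing model: x(t) = A s(t).
# "Under-determined" means fewer microphones (rows of A) than sources
# (columns of A), so A has a nullspace and cannot be inverted.
n_sources, n_mics, n_samples = 3, 2, 1000
S = rng.standard_normal((n_sources, n_samples))  # source signals
A = rng.standard_normal((n_mics, n_sources))     # mixing matrix
X = A @ S                                        # microphone observations

# Least-squares "unmixing" via the pseudo-inverse cannot recover S:
# pinv(A) @ A projects onto the 2-D row space of A, losing one dimension.
S_hat = np.linalg.pinv(A) @ X
residual = np.linalg.norm(S_hat - S) / np.linalg.norm(S)
print(f"relative recovery error: {residual:.3f}")  # clearly nonzero
```

The nonzero residual is exactly the component of the sources lying in the nullspace of the mixing matrix; recovering it requires priors on the sources or filters, which is the subject of the talk.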

Jan 26

Audio-visual diarization dataset now available for download

We have just made public our new AVDIAR dataset. AVDIAR stands for “audio-visual diarization”. The dataset contains recordings of social gatherings made with two cameras and six microphones. Both the audio and the visual data were carefully annotated, so that it is possible to evaluate the performance of various algorithms, such as person tracking, speech-source localization, speaker …

Continue reading »

Dec 15

IEEE ICPR’16: Best Scientific Paper Award!

Xavier Alameda-Pineda and his co-authors from the University of Trento received the Intel best scientific paper award (INTEL BSPA) in the image, speech, signal, and video processing track at the 23rd IEEE International Conference on Pattern Recognition (ICPR’16), Cancun, Mexico, 4-8 December 2016, for their paper Multi-Paced Dictionary Learning for Cross-Domain Retrieval and Recognition. IEEE ICPR is …

Continue reading »

Dec 10

MSc. Project: Deep Learning for Voice Activity Detection

MSc project on “Deep Learning for Voice Activity Detection”. Duration: 6 months. Short description: Voice Activity Detection (VAD) is a technique that classifies a (possibly noisy) audio signal into speech and non-speech segments. It is an essential building block of many speech-based systems, such as speech recognition and spoken dialog for human-computer and human-robot interaction, …

Continue reading »
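As background for the project topic, the classical baseline that a deep model would be compared against is a simple short-term-energy detector. The sketch below is that generic heuristic, not the learned model the project aims to build; the frame length and threshold are illustrative choices.

```python
import numpy as np

def energy_vad(signal, sr, frame_ms=20, threshold_db=-30.0):
    """Label each frame as speech (True) or non-speech (False) by
    comparing its short-term log-energy to a fixed threshold."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.mean(frames ** 2, axis=1)
    log_energy_db = 10.0 * np.log10(energy + 1e-12)  # avoid log(0)
    return log_energy_db > threshold_db

# Synthetic example: silence, then a loud tone burst, then silence again.
sr = 16000
silence = np.zeros(sr // 2)  # 0.5 s of silence
burst = 0.5 * np.sin(2 * np.pi * 220 * np.arange(sr // 2) / sr)
decisions = energy_vad(np.concatenate([silence, burst, silence]), sr)
print(decisions.astype(int))  # 0s for silence frames, 1s for the burst
```

A fixed threshold breaks down in noise, which is precisely why the project proposes learning the speech/non-speech decision with a deep network.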

Nov 28

Augmented parametric shapes for real-time dense 3D modeling using an RGB-D camera

Friday, December 9, 2016, 11:00 am to 12:00 pm, room F108, INRIA Montbonnot. Seminar by Diego Thomas, Kyushu University, Fukuoka, Japan. Abstract: Consumer-grade RGB-D cameras such as the Kinect or the Asus Xtion Pro have become the commodity tools for building dense 3D models of indoor scenes. The volumetric Truncated Signed …

Continue reading »
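The volumetric representation the abstract begins to describe, the Truncated Signed Distance Function (TSDF), fuses depth measurements by running a weighted average per voxel. The sketch below is a one-dimensional, KinectFusion-style illustration of that standard update along a single camera ray, not the speaker's augmented-parametric-shapes method; all values are toy numbers.

```python
import numpy as np

def tsdf_update(tsdf, weight, voxel_depths, measured_depth, trunc=0.05):
    """Fuse one depth reading into the running TSDF by weighted averaging.
    Voxels far behind the observed surface (sdf <= -trunc) are left untouched."""
    sdf = measured_depth - voxel_depths          # signed distance to surface
    new_tsdf = np.clip(sdf / trunc, -1.0, 1.0)   # truncate to [-1, 1]
    mask = sdf > -trunc                          # observed region of the ray
    fused = tsdf.copy()
    fused[mask] = (weight[mask] * tsdf[mask] + new_tsdf[mask]) / (weight[mask] + 1)
    new_w = weight.copy()
    new_w[mask] += 1
    return fused, new_w

voxel_depths = np.linspace(0.0, 1.0, 11)   # voxel centers along the ray (meters)
tsdf = np.zeros(11)
weight = np.zeros(11)
for d in (0.52, 0.50, 0.48):               # three noisy readings of a 0.5 m surface
    tsdf, weight = tsdf_update(tsdf, weight, voxel_depths, d)

# The surface is extracted at the zero-crossing of the fused TSDF,
# which lands near the true depth of 0.5 m despite the noise.
observed = weight > 0
crossing = voxel_depths[observed][np.argmin(np.abs(tsdf[observed]))]
print(crossing)
```

Averaging truncated distances is what lets these systems denoise depth over time, at the cost of the fixed voxel grid whose limitations parametric-shape approaches try to overcome.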