I received the B.Sc. (2008) and M.Sc. (2010) engineering degrees in computer science and mathematics from the Ensimag engineering school (Grenoble, France), as well as a specialized research M.Sc. degree in computer graphics, vision and robotics from the Université Joseph Fourier (Grenoble, France). In November 2013, I received the Ph.D. degree in computer science and applied mathematics from the University of Grenoble (France), under the supervision of Prof. Radu Horaud. My PhD thesis (written in English) is available at this link.
Since January 1st, 2014, I have been working as a post-doctoral fellow at the Chair of Multimedia Communications and Signal Processing of the Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany) with Prof. Walter Kellermann. I work there within the EU project EARS (Embodied Audition for RobotS), which aims at exploring new algorithms to enhance the auditory capabilities of humanoid robots. From 2010 to 2013, I was involved in the European project HUMAVIPS (Humanoids with Auditory and Visual Abilities in Populated Spaces). See this page for recent work.
My research interests include machine learning, audio signal processing, sound source separation and localization, computational auditory scene analysis, and robot audition.
NEWS: My thesis was awarded the 2014 French thesis prize in Image, Signal and Vision: http://gretsi.fr/prix-de-these2014/resultats.php
- In audio signal processing, I developed new methods for supervised sound source localization and separation.
- In the field of statistical machine learning, I made theoretical and algorithmic contributions to probabilistic high-dimensional regression.
- I am the author of two Matlab toolboxes, for Gaussian Locally Linear Mapping and for Supervised Binaural Mapping.
- In the field of binaural audition, I introduced the concept of acoustic space: the manifold of all binaural cues perceivable by a system.
- I participated in the development of an integrated demonstrator on the humanoid robot NAO for real-time speaker detection. The audio localization module led to an APP software deposit of which I am a 75% contributor. Check out the video of the demonstrator.
Citations of my articles are available on my Google Scholar page. This paper by M. Bernard et al. builds on my work on acoustic space learning and uses the CAMIL dataset. The CAMIL dataset is also mentioned in a chapter of the reference book “The Technology of Binaural Listening”, edited by Jens Blauert.
These two videos illustrate some of the work I did in audio-visual acoustic space mapping. The first one is based on learning with the AVASM dataset. The second one is based on calibration data from the robot NAO.
- E-mail: deleforge(at)LNT(dot)de