Category: News

Book co-edited by Xavier Alameda-Pineda

A new book entitled “Multimodal Behavior Analysis in the Wild”, edited by Xavier Alameda-Pineda, Elisa Ricci and Nicu Sebe, has been published by Academic Press (Elsevier). The book gathers 20 chapters written by 75 researchers from all over the world.

ACM SIGMM Rising Star Award 2018

The 2018 winner of the prestigious ACM Special Interest Group on Multimedia (SIGMM) Rising Star Award is our colleague Dr. Xavier Alameda-Pineda. The award is given in recognition of his contribution to multimodal social behavior understanding. Congratulations Xavi!

(Closed) MSc. Project on Speaker identity modeling with deep learning for re-identification

Short description: Speaker identification is the task of determining which speaker has produced a given utterance [1]. Speaker verification, or re-identification, on the other hand, aims at determining whether a given speech utterance matches a target speaker …
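As a minimal sketch of the verification setting described above (not the project's actual method), the match between an utterance and a target speaker can be posed as thresholding the cosine similarity of fixed-length speaker embeddings. The function names, the threshold value, and the random vectors standing in for real embeddings below are illustrative assumptions.

```python
# Sketch: speaker verification as cosine-similarity scoring of speaker embeddings.
# Assumes some embedding extractor already maps an utterance to a fixed-length
# vector; random vectors are used here as stand-ins for such embeddings.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(utterance_emb: np.ndarray, target_emb: np.ndarray,
           threshold: float = 0.7) -> bool:
    """Accept the identity claim if the utterance embedding matches the target."""
    return cosine_similarity(utterance_emb, target_emb) >= threshold

# Toy usage: a perturbed copy of the target plays the role of a genuine trial,
# an independent vector plays the role of an impostor trial.
rng = np.random.default_rng(0)
target = rng.normal(size=256)                  # enrolment embedding of the target speaker
genuine = target + 0.1 * rng.normal(size=256)  # utterance from the same speaker
impostor = rng.normal(size=256)                # utterance from a different speaker
print(verify(genuine, target), verify(impostor, target))
```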


(Closed) MSc. Project on Coupled Audio-visual Multi-speaker Tracking

Short description: Multi-speaker tracking has been widely investigated, and the Perception team has contributed a consistent methodological framework based on variational Bayes techniques [1-4]. Audio-visual tracking methods often first map all auditory and visual information into a common space and then run a tracking algorithm. However, in …


(Closed) MSc. Project on Gazeable Objects

Duration: about 6 months. Short description: Gaze is the direction towards which a person is looking. The automatic estimation of gaze from a single image and from videos has been a hot research topic in recent years [1-4]. Researchers have often studied gaze from a human-centered perspective, trying to answer the …


Sparse representation, dictionary learning, and deep neural networks: their connections and new algorithms

Seminar by Mostafa Sadeghi, Sharif University of Technology, Tehran. Tuesday 19 June 2018, 14:30 – 15:30, room F107, INRIA Montbonnot Saint-Martin. Abstract: Over the last decade, sparse representation, dictionary learning, and deep artificial neural networks have dramatically impacted the signal processing and machine learning areas by yielding state-of-the-art results in a variety of tasks, …


Deep Regression Models and Computer Vision Applications for Multiperson Human-Robot Interaction

PhD defense by Stéphane Lathuilière. Tuesday 22nd May 2018, 11:00, Grand Amphithéâtre, INRIA Grenoble Rhône-Alpes, Montbonnot Saint-Martin. Abstract: In order to interact with humans, robots need to perform basic perception tasks such as face detection, human pose estimation or speech recognition. However, in order to have a natural interaction with humans, the robot needs to model …


Audio-Visual Analysis in the Framework of Humans Interacting with Robots

PhD defense by Israel D. Gebru. Friday 13 April 2018, 9:30 – 10:30, Grand Amphithéâtre, INRIA Grenoble Rhône-Alpes, Montbonnot Saint-Martin. In recent years, there has been a growing interest in human-robot interaction (HRI), with the aim of enabling robots to naturally interact and communicate with humans. Natural interaction implies that robots not only need to …


Plane Extraction from Depth Data

The following journal paper has just been published: Richard Marriott, Alexander Pashevich, and Radu Horaud. Plane Extraction from Depth Data Using a Gaussian Mixture Regression Model. Pattern Recognition Letters, vol. 110, pages 44-50, 2018. The paper is freely available for download from our publication page or directly from Elsevier.

Software engineer / Audio-visual perception for robotics

Context: The Perception team (https://team.inria.fr/perception), at INRIA Grenoble Rhône-Alpes and the Jean Kuntzmann Laboratory at Grenoble Alpes University, works on computational models for mapping images and sounds onto meaning and actions. The team members address the following challenging topics: computer vision, auditory signal processing and scene analysis, machine learning, and robotics. In particular, we develop methods for the …
