Category: Job Offers

(Closed) Job Offer: Signal and Image Processing Engineer

Fixed-term (CDD) job offer: expert development engineer in signal and image processing for robotics. Starting date and duration: October/November 2016, 12 months, renewable (up to 36 months). Mission: Within the ERC VHIA project, the PERCEPTION team (https://team.inria.fr/perception) of the INRIA Grenoble Rhône-Alpes research centre, located in Montbonnot Saint-Martin, …


(Closed) MSc Project: Visual Multi-speaker Recognition for Human-robot Interaction

MSc project on “Visual Multi-speaker Recognition for Human-robot Interaction”. Duration: 6 months (with a possible continuation as a PhD). Short description: The main goal of this project is to design and develop an automatic system able to characterize videos containing multiple speakers. This system will provide information useful for human-robot interaction, such as the number of …


(Closed) Master 1 Internship Project: Pose Estimation and Activity Recognition with Deep Learning

Internship offer of 2-3 months for M1 students or second-year engineering-school students. Topic: pose estimation and activity recognition with deep learning. Details of the internship topic

(Closed) Computer Science and Robotics Engineer

Fixed-term (CDD) job offer: expert development engineer in computer science and robotics. Starting date: January/February 2015. As part of a multi-year European collaboration, the PERCEPTION team (https://team.inria.fr/perception) of the INRIA Grenoble Rhône-Alpes research centre, located in Montbonnot Saint-Martin, is recruiting an expert development engineer in computer science and robotics. Mission: the engineer’s tasks will include …


(Closed) Master Project: Robot Walking in a Piecewise Planar World

Short Description: Companion robots, such as the humanoid robot NAO manufactured by Aldebaran Robotics, should be able to move on non-planar surfaces, for example when going up or down stairs. This requires the robot to perceive the floor and to extract piecewise planar surfaces, such as the stairs, independently of the floor type. This …
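One classical baseline for extracting piecewise planar surfaces from a robot's depth data is greedy RANSAC plane fitting on the point cloud. The sketch below illustrates that baseline only; the point-cloud array, thresholds, and iteration counts are assumptions made for illustration, not the method prescribed by the project.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=500, inlier_thresh=0.01, rng=None):
    """Fit one plane (n, d), with n.p + d = 0, to an (N, 3) point cloud via RANSAC."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

def extract_planes(points, max_planes=5, min_inliers=500):
    """Greedily extract several planes (e.g. the floor and individual steps)."""
    planes, remaining = [], points.copy()
    while len(planes) < max_planes and len(remaining) >= min_inliers:
        plane, inliers = fit_plane_ransac(remaining)
        if plane is None or inliers.sum() < min_inliers:
            break
        planes.append(plane)
        remaining = remaining[~inliers]      # keep only unexplained points
    return planes
```

On a staircase scan, each extracted plane would correspond to the floor or to an individual step; in practice each plane would be refined by a least-squares fit over its inliers.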


(Closed) Master Project: Spatio-temporal fusion of range and stereo data for high-quality depth sequences

Short Description: High-quality depth sequences are required by many applications, such as 3DTV and the film industry. Current range cameras are based on time of flight (TOF) and provide either low-resolution (e.g. Mesa SR4000) or mid-resolution (e.g. Kinect 2) depth maps. Commonly, these cameras are used together with a stereoscopic pair of color cameras …
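As context, a simple baseline for combining a low-resolution depth map with a high-resolution color image is joint bilateral upsampling, where color similarity guides the interpolation of depth. The sketch below is only an illustration of that baseline under assumed array shapes and parameter values; it is not the spatio-temporal fusion method the project targets.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, color_hi, sigma_s=2.0, sigma_r=10.0, radius=3):
    """Upsample a low-resolution depth map guided by a high-resolution grayscale
    image: nearby pixels with similar color receive higher interpolation weight."""
    H, W = color_hi.shape
    h, w = depth_lo.shape
    sy, sx = h / H, w / W                      # high-res -> low-res scale
    out = np.zeros((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yq, xq = y + dy, x + dx
                    if not (0 <= yq < H and 0 <= xq < W):
                        continue
                    yy, xx = int(yq * sy), int(xq * sx)   # nearest low-res sample
                    ws = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))
                    dc = float(color_hi[y, x]) - float(color_hi[yq, xq])
                    wr = np.exp(-(dc * dc) / (2.0 * sigma_r ** 2))
                    acc += ws * wr * depth_lo[yy, xx]
                    wsum += ws * wr
            out[y, x] = acc / wsum if wsum > 0 else 0.0
    return out
```

A real implementation would vectorize these loops, and a spatio-temporal variant would additionally weight samples from neighbouring frames.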


(Closed) Master Project: Visual Action Recognition with Fisher-Kernel Encoding of Time Series

Short Description: In this project we address the problem of recognizing human actions in a video sequence. Unlike previous approaches, we aim to develop a method able to continuously recognize and segment actions. For this purpose, a per-frame rather than a per-video representation is needed. This means that the data (short videos) are represented as …
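For background, a Fisher-kernel (Fisher-vector) encoding represents a set of local descriptors by the gradient of a Gaussian mixture model's log-likelihood, most often with respect to the component means. The per-frame sketch below, using scikit-learn for the GMM, is an illustrative assumption of how such an encoding can be computed, not the project's final design.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_background_gmm(descriptors, n_components=16):
    """Fit the GMM 'vocabulary' on descriptors pooled from training videos."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(descriptors)
    return gmm

def fisher_vector(frame_descriptors, gmm):
    """Encode one frame's local descriptors as gradients w.r.t. the GMM means."""
    X = np.atleast_2d(frame_descriptors)            # (T, D) descriptors
    T, D = X.shape
    q = gmm.predict_proba(X)                        # soft assignments, (T, K)
    means = gmm.means_                              # (K, D)
    sigmas = np.sqrt(gmm.covariances_)              # (K, D), diagonal covariances
    weights = gmm.weights_                          # (K,)
    grads = []
    for k in range(gmm.n_components):
        diff = (X - means[k]) / sigmas[k]           # whitened deviations, (T, D)
        g = (q[:, k:k + 1] * diff).sum(axis=0) / (T * np.sqrt(weights[k]))
        grads.append(g)
    fv = np.concatenate(grads)                      # (K * D,)
    # Power and L2 normalisation, as is standard for Fisher vectors.
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)
```

A sequence of such per-frame vectors can then feed a classifier that recognizes and segments actions continuously, rather than assigning one label per video.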


(Closed) Master Project: Audio-Visual Event Localization with the Humanoid Robot NAO

Short Description: The PERCEPTION team investigates the computational principles underlying human-robot interaction. Under this broad topic, this master project will investigate the use of computer vision and audio signal processing methods that enable a robot to localize events that are both seen and heard, such as a group of people engaged in a conversation or in an informal …


(Closed) Fully Funded PhD Positions to Start in September/October 2014

July 16th, 2014: all positions have been filled and the call for applicants is closed. The PERCEPTION group is seeking PhD students to develop advanced methods for activity recognition and human-robot interaction based on joint visual and auditory processing. We are particularly interested in students with a strong background in computer vision, statistical machine learning, and auditory …


(Closed) Master Project: Sound-source localization based on audio-visual mapping learning

Deadline for applications: 30 November 2013. Project proposed by Radu Horaud and Antoine Deleforge. We propose to address the challenging problem of sound-source localization. A sound source is generally localized using the time difference of arrival (TDOA) between pairs of microphones. In general, the microphones are arranged in a linear or circular array …
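For context on the conventional approach mentioned above, the TDOA between two microphones is commonly estimated with generalized cross-correlation with phase transform (GCC-PHAT). The sketch below illustrates that classical estimator on synthetic signals (all signal parameters are illustrative assumptions); it is not the audio-visual mapping-learning method this project proposes.

```python
import numpy as np

def gcc_phat_tdoa(sig, ref, fs, max_tau=None, interp=16):
    """Estimate the time difference of arrival between two microphone signals
    using generalized cross-correlation with phase transform (GCC-PHAT)."""
    n = len(sig) + len(ref)
    # Cross-power spectrum, whitened by its magnitude (the PHAT weighting).
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12
    cc = np.fft.irfft(R, n=interp * n)
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    # Re-centre the correlation so that lag 0 sits in the middle.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)          # delay in seconds

# Example: a white-noise source arriving at microphone 1 one millisecond
# later than at microphone 2 (synthetic, illustrative data).
fs = 16000
rng = np.random.default_rng(0)
src = rng.standard_normal(fs)
delay = int(0.001 * fs)
mic1 = np.concatenate((np.zeros(delay), src))
mic2 = np.concatenate((src, np.zeros(delay)))
print(gcc_phat_tdoa(mic1, mic2, fs, max_tau=0.005))  # close to 0.001 s
```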
