PoC VHIALab

Vision and Hearing In Action Laboratory

ERC Proof of Concept #767064

VHIALab develops audio-visual machine perception software for human-robot interaction

Principal Investigator: Radu Horaud | Duration: 1 February 2018 – 31 January 2019 (12 months) | ERC funding: 150,000 €

 

Summary: The objective of VHIALab is to develop and commercialize software packages that enable a robot companion to interact easily and naturally with people. The methods developed in ERC VHIA provide state-of-the-art solutions to human-robot interaction (HRI) problems in a general setting, based on audio-visual information. The goal of VHIALab is to build software packages on top of VHIA, thereby opening the door to commercially available multi-party, multi-modal human-robot interaction. The methodology investigated in VHIA can be viewed as a generalization of existing single-user spoken dialog systems: it enables a robot (i) to detect and locate speaking persons, (ii) to track several persons over time, (iii) to recognize their behavior, and (iv) to extract each person's speech signal for subsequent speech recognition and face-to-face dialog. These methods will be turned into software packages compatible with a large variety of companion robots. VHIALab will add strong valorization potential to VHIA by addressing new and emerging market sectors, and the industrial collaborations set up during VHIA will be strengthened.
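The four capabilities listed above suggest a staged audio-visual processing pipeline. The sketch below is a minimal, hypothetical Python illustration of how such stages could be chained; it is not the VHIA or VHIALab software, and all names in it (SpeakerDetector, MultiPersonTracker, BehaviorRecognizer, SpeechExtractor, run_pipeline, PersonTrack) are assumptions made purely for illustration.

from dataclasses import dataclass, field
from typing import List, Protocol


@dataclass
class AudioFrame:
    """Multi-channel audio samples for one time step (placeholder)."""
    samples: list


@dataclass
class VideoFrame:
    """Camera image for one time step (placeholder)."""
    pixels: list


@dataclass
class PersonTrack:
    """State of one tracked person (hypothetical structure)."""
    track_id: int
    position: tuple                  # estimated 3-D location
    is_speaking: bool = False
    behavior: str = "unknown"
    speech_signal: list = field(default_factory=list)


class PipelineStage(Protocol):
    def process(self, audio: AudioFrame, video: VideoFrame,
                tracks: List[PersonTrack]) -> List[PersonTrack]: ...


class SpeakerDetector:
    """(i) Detect and locate speaking persons from audio-visual cues."""
    def process(self, audio, video, tracks):
        # A real system would fuse sound-source localization with face
        # detection; this placeholder simply passes the tracks through.
        return tracks


class MultiPersonTracker:
    """(ii) Track several persons over time."""
    def process(self, audio, video, tracks):
        # Placeholder for data association and temporal filtering.
        return tracks


class BehaviorRecognizer:
    """(iii) Recognize each person's behavior."""
    def process(self, audio, video, tracks):
        for t in tracks:
            t.behavior = "speaking" if t.is_speaking else "listening"
        return tracks


class SpeechExtractor:
    """(iv) Extract each speaker's signal for recognition and dialog."""
    def process(self, audio, video, tracks):
        for t in tracks:
            if t.is_speaking:
                # Stand-in for per-speaker enhancement / separation.
                t.speech_signal.extend(audio.samples)
        return tracks


def run_pipeline(audio: AudioFrame, video: VideoFrame,
                 tracks: List[PersonTrack]) -> List[PersonTrack]:
    """Chain the four stages for one audio-visual frame."""
    stages: List[PipelineStage] = [
        SpeakerDetector(), MultiPersonTracker(),
        BehaviorRecognizer(), SpeechExtractor(),
    ]
    for stage in stages:
        tracks = stage.process(audio, video, tracks)
    return tracks


if __name__ == "__main__":
    people = [PersonTrack(track_id=0, position=(1.0, 0.0, 2.0), is_speaking=True)]
    people = run_pipeline(AudioFrame(samples=[0.0] * 16), VideoFrame(pixels=[]), people)
    print(people[0].behavior)   # -> "speaking"

The per-frame interface is only one possible design choice; a production system packaged for companion robots would more likely expose streaming, asynchronous components.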