Modelling face-to-face conversational interaction with robots

Thursday, October 6, 2016, 11:00 am to 12:00 pm, room F107, INRIA Montbonnot

Seminar by Gabriel Skantze, KTH, Stockholm, Sweden

Abstract: When humans interact and collaborate with each other, they coordinate their turn-taking behaviours using verbal and non-verbal signals, expressed in the face and voice. If robots of the future are supposed to engage in social interaction with humans, it is essential that they can generate and understand these behaviours. In this talk, I will give an overview of several studies done at KTH that show how humans in interaction with a human-like robot make use of the same coordination signals typically found in studies on human-human interaction, and that it is possible to automatically detect and combine these cues to facilitate real-time coordination. The studies also show that humans react naturally to such signals when used by a robot, without being given any special instructions. They follow the gaze of the robot to disambiguate referring expressions, they conform when the robot selects the next speaker using gaze, and they respond naturally to subtle cues, such as gaze aversion, breathing, facial gestures and hesitation sounds. I will also describe the research platforms we have developed at KTH: a framework for conversational multi-modal interaction called IrisTK, and the back-projected robot head Furhat.
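The abstract mentions that turn-taking cues can be automatically detected and combined to support real-time coordination. Purely as an illustrative sketch, and not the detection models or fusion method used in the KTH studies or in IrisTK, the Python snippet below shows one simple way such cues (pauses, intonation, gaze, hesitation sounds) could be weighted into an end-of-turn decision; the cue set, weights, and threshold are assumptions.

```python
# Hypothetical sketch of combining multimodal turn-taking cues.
# Cue names, weights, and threshold are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class TurnTakingCues:
    """Per-frame observations of the current (human) speaker."""
    pause_duration: float   # seconds of silence since last voice activity
    pitch_falling: bool     # final intonation drops (often signals a turn end)
    gaze_at_robot: bool     # speaker looks at the robot (hands over the turn)
    filled_pause: bool      # hesitation sound such as "uhm" (keeps the turn)


def end_of_turn_score(cues: TurnTakingCues) -> float:
    """Combine cues into a score in [0, 1]; higher means the turn is more likely over."""
    score = 0.0
    score += min(cues.pause_duration / 1.0, 1.0) * 0.4   # longer pause -> likely done
    score += 0.3 if cues.pitch_falling else 0.0          # falling pitch -> likely done
    score += 0.3 if cues.gaze_at_robot else 0.0          # gaze hand-over -> likely done
    score -= 0.5 if cues.filled_pause else 0.0           # hesitation -> speaker continues
    return max(0.0, min(1.0, score))


def robot_should_take_turn(cues: TurnTakingCues, threshold: float = 0.6) -> bool:
    """Decide whether the robot should start speaking now."""
    return end_of_turn_score(cues) >= threshold


if __name__ == "__main__":
    # Speaker pauses, pitch falls, and looks at the robot: take the turn.
    print(robot_should_take_turn(TurnTakingCues(0.8, True, True, False)))   # True
    # Speaker pauses but says "uhm" while looking away: keep waiting.
    print(robot_should_take_turn(TurnTakingCues(0.8, False, False, True)))  # False
```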
