Automatic analysis of a user's social cues during mediated communication

Speaker: Nathan Libermann

Date: March 16, 2017

Abstract:

In this talk I will present the results of my master's internship. I explore the social cues expressed by a user during mediated communication with either an embodied conversational agent or another human. For this purpose, I applied a machine learning method to identify the characteristics of facial and head social cues in each interaction type, and built a model that automatically determines whether the user is interacting with a virtual agent or with another human.
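As a rough illustration of this kind of classification task (a minimal sketch, not the speaker's actual method: the abstract names neither the features nor the learning algorithm, so the feature set of head-pose and facial-cue statistics, the random-forest classifier, and the synthetic data below are all illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-interaction feature vectors, e.g. means and standard
# deviations of head-pose angles (pitch, yaw, roll) and of a few facial
# action-unit intensities extracted over the interaction.
n_samples, n_features = 200, 12
X = rng.normal(size=(n_samples, n_features))  # placeholder for real features

# Labels: 0 = user interacting with a virtual agent, 1 = with another human.
y = rng.integers(0, 2, size=n_samples)

# Binary classifier predicting the interlocutor type from the social cues.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")

# Feature importances hint at which cues differ most between the two settings.
clf.fit(X, y)
print("most informative cue indices:", np.argsort(clf.feature_importances_)[::-1][:3])
```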

To conclude, I will briefly present my PhD topic: deep learning for musical structure analysis and generation.