Detecting social attention attractors in free-standing conversational groups through multimodal head and body pose estimation

Speaker: Xavier Alameda-Pineda

Date: December 3, 2015

Abstract:

During natural social gatherings, humans tend to organize themselves into so-called free-standing conversational groups (FCGs). Studying FCGs in unstructured social settings (e.g., a cocktail party) is gratifying due to the wealth of information available at the group level (mining social networks) and at the individual level (recognizing behavioural and personality traits). However, analysing social scenes involving FCGs is also highly challenging: crowdedness and extreme occlusions make it difficult to extract behavioural cues such as target locations, speaking activity and head/body pose. Importantly, the visual information typically obtained with a distributed camera network might not suffice to achieve the sought robustness. Along this line of thought, recent advances in wearable sensing technology open the door to multimodal and richer information flows. In this study we cast the head and body pose estimation problem as a matrix completion task. We introduce a framework able to fuse multimodal data emanating from a combination of distributed and wearable sensors, taking into account the temporal consistency, the head/body coupling and the noise inherent to the scenario. We report results on the novel and challenging SALSA dataset (http://tev.fbk.eu/salsa), containing visual, auditory and infra-red recordings of 18 people interacting in a regular indoor environment. We demonstrate the soundness of the proposed method and its usefulness for higher-level tasks such as the detection of F-formations and the discovery of social attention attractors.
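To make the matrix-completion formulation more concrete, below is a minimal sketch of low-rank completion via iterative singular-value soft-thresholding (the soft-impute scheme of Mazumder et al., 2010). This is an illustration under assumptions, not the framework presented in the talk: the actual method additionally encodes temporal consistency, head/body coupling and sensor noise, and its exact objective is not given in this abstract. Here the matrix stacks a pose feature per target and per frame, and the mask marks the entries actually observed by the distributed/wearable sensors; all names are hypothetical.

```python
# Hedged sketch: low-rank matrix completion by singular-value soft-thresholding.
# NOT the authors' algorithm; it only illustrates the core idea of recovering
# missing pose observations under a low-rank assumption.

import numpy as np

def soft_impute(X, M, lam=1.0, n_iters=100, tol=1e-4):
    """Fill the unobserved entries of X (where boolean mask M is False) with a
    low-rank estimate obtained by iterative SVD soft-thresholding."""
    Z = np.where(M, X, 0.0)                     # start with zeros at missing entries
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s_thr = np.maximum(s - lam, 0.0)        # soft-threshold the singular values
        Z_new = (U * s_thr) @ Vt                # low-rank reconstruction
        Z_new = np.where(M, X, Z_new)           # keep observed entries fixed
        if np.linalg.norm(Z_new - Z) / (np.linalg.norm(Z) + 1e-12) < tol:
            return Z_new
        Z = Z_new
    return Z

# Toy usage: 18 targets x 60 frames of a 1-D pose feature, 40% entries missing.
rng = np.random.default_rng(0)
X_true = np.outer(rng.normal(size=18), rng.normal(size=60))   # rank-1 ground truth
mask = rng.random(X_true.shape) > 0.4                          # ~60% observed
recovered = soft_impute(np.where(mask, X_true, 0.0), mask, lam=0.1)
print("RMSE on missing entries:",
      np.sqrt(np.mean((recovered[~mask] - X_true[~mask]) ** 2)))
```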
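The abstract also mentions F-formation detection as a downstream task. The sketch below follows a common o-space heuristic from the literature (akin to Hough-voting approaches), not necessarily the procedure evaluated in the talk: each person casts a "vote" for an o-space centre at a fixed stride in front of their body orientation, and people whose votes cluster together are grouped into one F-formation. The stride and radius values are illustrative assumptions.

```python
# Hedged sketch: F-formation grouping from positions and body orientations via
# o-space centre voting and single-linkage clustering. Illustrative only.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def detect_f_formations(pos, theta, stride=0.8, radius=0.6):
    """pos: (N, 2) ground-plane positions in metres; theta: (N,) body
    orientations in radians. Returns one cluster label per person."""
    # Each person votes for an o-space centre `stride` metres in front of them.
    votes = pos + stride * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    Z = linkage(votes, method="single")
    return fcluster(Z, t=radius, criterion="distance")

# Toy usage: two pairs facing each other, plus one person standing alone.
pos = np.array([[0.0, 0.0], [1.2, 0.0], [5.0, 5.0], [5.0, 6.2], [9.0, 0.0]])
theta = np.array([0.0, np.pi, np.pi / 2, -np.pi / 2, 0.0])
print(detect_f_formations(pos, theta))  # three groups: {0,1}, {2,3}, {4}
```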