
Audio-Visual Multi-Speaker Tracking via Variational Inference

Variational Bayesian Inference for Audio-Visual Tracking of Multiple Speakers

Yutong Ban,  Xavier Alameda-Pineda, Laurent Girin, and Radu Horaud

IEEE Transactions on Pattern Analysis and Machine Intelligence (Early access)

arXiv (full paper with appendixes) | Results | Acknowledgements

 

Abstract. In this paper we address the problem of tracking multiple speakers via the fusion of visual and auditory information. We propose to exploit the complementary nature of these two modalities in order to accurately estimate smooth trajectories of the tracked persons, to deal with the partial or total absence of one of the modalities over short periods of time, and to estimate the acoustic status – either speaking or silent – of each tracked person over time. We propose to cast the problem at hand into a generative audio-visual fusion (or association) model formulated as a latent-variable temporal graphical model. This may well be viewed as the problem of maximizing the posterior joint distribution of a set of continuous and discrete latent variables given the past and current observations, which is intractable. We propose a variational inference model which amounts to approximating the joint distribution with a factorized distribution. The solution takes the form of a closed-form expectation-maximization procedure. We describe the inference algorithm in detail, evaluate its performance, and compare it with several baseline methods. These experiments show that the proposed audio-visual tracker performs well in informal meetings involving a time-varying number of people.
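
For readers unfamiliar with the variational step, here is a minimal sketch of the kind of mean-field approximation the abstract refers to. The notation is ours and not necessarily the paper's: X_t denotes the continuous latent variables (speaker positions), Z_t the discrete latent variables (observation-to-speaker assignments and speaking status), and o_{1:t} the audio-visual observations up to time t. The intractable filtering posterior is approximated by a factorized distribution,

p(X_t, Z_t \mid o_{1:t}) \approx q(X_t)\, q(Z_t),

whose factors are updated in turn by taking expectations under each other:

\log q(X_t) = \mathbb{E}_{q(Z_t)}\big[\log p(X_t, Z_t, o_t \mid o_{1:t-1})\big] + \mathrm{const}, \qquad
\log q(Z_t) = \mathbb{E}_{q(X_t)}\big[\log p(X_t, Z_t, o_t \mid o_{1:t-1})\big] + \mathrm{const}.

Alternating these expectation steps with the maximization of the model parameters gives a closed-form variational EM procedure of the type mentioned in the abstract.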

Selected Results on the AVDIAR Dataset


Legend: 

Top left: tracking results. Each bounding box shows the estimated speaker position and the green line shows the speaker trajectory. The index denotes speaker identity: a red index means the speaker is estimated as silent, a green index means the speaker is estimated as speaking.

Top right: visual contribution to the tracking. Each ellipse shows the covariance associated with the visual contribution; different colors correspond to different speakers.

Bottom left: audio contribution to the tracking.

Bottom right: contribution of the state dynamics to the tracking. (A sketch of how these per-panel contributions combine is given right after this legend.)
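
For orientation only, here is a minimal sketch of how the visual, audio, and dynamics contributions visualized above can be fused into a single Gaussian posterior per speaker. This is a simplified illustration in our own notation, not the exact variational update of the paper (which additionally weights the observations by their assignment probabilities):

\Gamma_t = \big( \Sigma_{\mathrm{dyn}}^{-1} + \Sigma_{\mathrm{vis}}^{-1} + \Sigma_{\mathrm{aud}}^{-1} \big)^{-1}, \qquad
\mu_t = \Gamma_t \big( \Sigma_{\mathrm{dyn}}^{-1}\,\mu_{\mathrm{dyn}} + \Sigma_{\mathrm{vis}}^{-1}\,\mu_{\mathrm{vis}} + \Sigma_{\mathrm{aud}}^{-1}\,\mu_{\mathrm{aud}} \big),

where each (\mu, \Sigma) pair is the mean and covariance of the corresponding contribution; the ellipses in the last three panels visualize these covariances. The more certain a modality is, the smaller its ellipse and the larger its weight in the fused estimate.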

Two settings:

Full camera field of view (FFOV): the full camera view is used.

Partial camera field of view (PFOV): on both sides of the image, visual information is not used; only audio information is available.

Due to possible video codec incompatibilities, we suggest using Chrome to view the videos.

Seq08-3P-S1M1 (FFOV)

Seq13-4P-S2M1 (FFOV)

Seq28-3P-S1M1 (FFOV)

Seq32-4P-S1M1 (FFOV)

Seq03-1P-S0M1 (PFOV)

Seq22-1P-S0M1 (PFOV)

Seq19-2P-S1M1 (PFOV)

Seq20-2P-S1M1 (PFOV)

Acknowledgements. Funding from the European Union FP7 ERC Advanced Grant VHIA (#340113) is gratefully acknowledged.