Paper published in IEEE Transactions on PAMI

The paper Variational Bayesian Inference for Audio-Visual Tracking of Multiple Speakers has been published in the IEEE Transactions on Pattern Analysis and Machine Intelligence (a journal with one of the highest impact scores in the computational intelligence category). This work is part of the Ph.D. thesis of Yutong Ban, now with…


Deformations in Deep Models for Image and Video Generation

Seminar by Stéphane Lathuilière, Télécom Paris
Friday, 18 October 2019, 14:30 – 15:30, room F107, INRIA Montbonnot Saint-Martin

Abstract: Generating realistic images and videos has countless applications in different areas, ranging from photography technologies to e-commerce business. Recently, deep generative approaches have emerged as effective techniques for generation tasks. In this…


The Kinovis-MST Dataset

The Kinovis Multiple-Speaker Tracking Dataset
Data | pdf from arXiv | download | reference

The Kinovis multiple-speaker tracking (Kinovis-MST) dataset contains live acoustic recordings of multiple moving speakers in a reverberant environment. The data were recorded in the Kinovis multiple-camera laboratory at INRIA Grenoble Rhône-Alpes. The room size is 10.2…
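As a purely illustrative sketch of how one might read such a recording in Python (the file name and the WAV format are assumptions for the example, not details from the dataset page):

    from scipy.io import wavfile

    # Hypothetical file name; the actual Kinovis-MST file layout may differ.
    rate, audio = wavfile.read("kinovis_mst_seq01.wav")
    print(rate, audio.shape)   # sample rate in Hz, (n_samples, n_channels)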


Sparse representation, dictionary learning, and deep neural networks: their connections and new algorithms

Seminar by Mostafa Sadeghi, Sharif University of Technology, Tehran
Tuesday, 19 June 2018, 14:30 – 15:30, room F107, INRIA Montbonnot Saint-Martin

Abstract: Over the last decade, sparse representation, dictionary learning, and deep artificial neural networks have dramatically impacted the signal processing and machine learning areas by yielding state-of-the-art results…
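To make "sparse representation" concrete: a signal y is approximated as D x, where D is a dictionary and x is a coefficient vector with few non-zero entries. Below is a minimal NumPy sketch of sparse coding by iterative soft-thresholding (ISTA); the dictionary, signal, and parameter values are illustrative assumptions, not material from the talk.

    import numpy as np

    def ista(D, y, lam=0.1, n_iter=200):
        """Sparse coding: minimise 0.5*||y - D x||^2 + lam*||x||_1 via ISTA."""
        L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
        x = np.zeros(D.shape[1])
        for _ in range(n_iter):
            x = x - D.T @ (D @ x - y) / L        # gradient step on the quadratic term
            x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
        return x

    # Toy usage: a signal built from two atoms of a random unit-norm dictionary.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 128))
    D /= np.linalg.norm(D, axis=0)
    x_true = np.zeros(128)
    x_true[[5, 40]] = [1.0, -0.7]
    y = D @ x_true
    x_hat = ista(D, y, lam=0.05)
    print(np.flatnonzero(np.abs(x_hat) > 0.1))   # ≈ indices 5 and 40

Dictionary learning alternates such a sparse-coding step with updates of D itself; unrolling iterations like the one above into trainable layers (as in LISTA-style networks) is one of the known bridges between sparse coding and deep architectures.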


Plane Extraction from Depth-Data

The following journal paper has just been published: Richard Marriott, Alexander Pashevich, and Radu Horaud. Plane Extraction from Depth Data Using a Gaussian Mixture Regression Model. Pattern Recognition Letters, vol. 110, pages 44-50, 2018. The paper is freely available for download from our publication page or directly from Elsevier.
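The paper's estimator is a Gaussian mixture regression model; as a point of reference only, here is a minimal NumPy sketch of the plain least-squares plane fit that "plane extraction from depth data" reduces to in its simplest form. This is not the paper's method, and all names and values below are illustrative.

    import numpy as np

    def fit_plane_lstsq(points):
        """Least-squares fit of z = a*x + b*y + c to an (N, 3) point cloud."""
        A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
        coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
        return coeffs                               # (a, b, c)

    # Toy usage: noisy samples of the plane z = 0.5x - 0.2y + 1.
    rng = np.random.default_rng(1)
    xy = rng.uniform(-1.0, 1.0, size=(500, 2))
    z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 1.0 + 0.01 * rng.standard_normal(500)
    print(fit_plane_lstsq(np.c_[xy, z]))            # ≈ [0.5, -0.2, 1.0]

A single least-squares fit assumes all points belong to one plane; mixture-based approaches such as the one in the paper instead model several planar components at once and assign points to them probabilistically.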
