Mokameeting of December 5, 2018 / Julien RABIN and Mathurin MASSIAS

The next seminar of the Mokaplan team will take place on Wednesday, December 5 at 10:30 am at INRIA Paris (2 rue Simone Iff), in room Jacques-Louis Lions 1 (note the unusual room!).

We will have the pleasure of hearing Julien RABIN (Université de Caen) and Mathurin MASSIAS (Télécom PARISTECH / INRIA PARIETAL).

  • Talk by Julien RABIN:

Title: Semi-Discrete Optimal Transport in Patch Space for Enriching Gaussian Textures

Abstract: A bilevel texture model is proposed, based on a local transform of a Gaussian random field. The core of this method relies on the optimal transport of a continuous Gaussian distribution towards the discrete exemplar patch distribution. The synthesis then simply consists of a fast post-processing of a Gaussian texture sample, boiling down to an improved nearest-neighbor patch matching, while offering theoretical guarantees on statistical compliance.
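To give a flavor of the semi-discrete transport step described above (this is a generic illustration, not the authors' implementation; all sizes, names, and the plain stochastic weight update are hypothetical choices): the optimal map from a continuous source to a discrete target is a weighted nearest-neighbor assignment over Laguerre (power) cells, whose weights are calibrated so that each exemplar patch receives its prescribed mass. Once the weights are fixed, applying the map to a new Gaussian sample is just a fast lookup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a discrete "exemplar patch" distribution
# (n patches of dimension d) and a standard Gaussian source.
n, d = 50, 9                             # e.g. 3x3 patches, flattened
patches = rng.normal(size=(n, d))        # discrete target support
nu = np.full(n, 1.0 / n)                 # uniform target weights

def laguerre_assign(x, y, w):
    """Semi-discrete OT map: assign each source sample x_i to the target
    point y_j minimizing ||x_i - y_j||^2 - w_j, i.e. a weighted
    nearest-neighbor lookup over the Laguerre/power diagram."""
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1) - w[None, :]
    return cost.argmin(axis=1)

def update_weights(y, nu, w, n_samples=5000, step=1.0):
    """One stochastic ascent step on the Kantorovich dual: adjust the
    weights so each Laguerre cell captures mass nu_j under the source."""
    x = rng.normal(size=(n_samples, y.shape[1]))
    mass = np.bincount(laguerre_assign(x, y, w), minlength=len(nu)) / n_samples
    return w + step * (nu - mass)

w = np.zeros(n)
for _ in range(50):
    w = update_weights(patches, nu, w)

# After calibration, "transporting" a fresh Gaussian sample is just the
# weighted nearest-neighbor lookup -- the fast post-processing step.
x_new = rng.normal(size=(5, d))
print(laguerre_assign(x_new, patches, w))
```

With all weights at zero this reduces to plain nearest-neighbor patch matching; the calibrated weights are what enforce the statistical compliance with the target patch distribution.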

  • Talk by Mathurin MASSIAS:

Title: Dual extrapolation for sparse Generalized Linear models

Abstract: Generalized Linear Models (GLMs) are a wide class of regression and classification models, where the predicted variable is obtained from a linear combination of the input variables.
For statistical inference in high dimensions, sparsity-inducing regularization has proven useful while offering statistical guarantees.
However, solving the resulting optimization problems can be challenging: even for popular iterative algorithms such as coordinate descent, one needs to loop over a large number of variables.
To mitigate this, techniques known as screening rules and working sets diminish the size of the optimization problem at hand, either by progressively removing variables, or by solving a growing sequence of smaller problems.
For both of these techniques, significant variables are identified by convex duality.
In this talk, we show that the dual iterates of a GLM exhibit a Vector AutoRegressive (VAR) behavior after support identification, when the primal problem is solved with proximal gradient or cyclic coordinate descent.
Exploiting this regularity, one can construct dual points that offer tighter control of optimality, enhancing the performance of screening rules and helping to design a competitive working set algorithm.
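The idea above can be sketched on the simplest GLM, the Lasso (a minimal illustration under assumed choices, not the speaker's implementation: the instance, the number of stored residuals, and the Anderson-style combination are all hypothetical). If the dual iterates follow a VAR, an affine combination of past residuals with minimal norm gives a better dual feasible point than the last residual alone, hence a tighter duality gap for screening.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Lasso instance: min_w 0.5*||y - X w||^2 + alpha*||w||_1
n, p = 50, 100
X = rng.normal(size=(n, p))
y = rng.normal(size=n)
alpha = 0.5 * np.abs(X.T @ y).max()

def dual_point(r, X, alpha):
    """Rescale a residual r so that ||X.T theta||_inf <= alpha (dual feasible)."""
    return r / max(1.0, np.abs(X.T @ r).max() / alpha)

def extrapolate(R):
    """Given K past residuals (rows of R) assumed to follow a VAR, return
    the affine combination with minimal fixed-point residual norm
    (Anderson-style extrapolation on successive differences)."""
    U = np.diff(R, axis=0)
    try:
        z = np.linalg.solve(U @ U.T, np.ones(U.shape[0]))
    except np.linalg.LinAlgError:
        return R[-1]
    return (z / z.sum()) @ R[1:]

# Proximal gradient (ISTA) on the primal, storing a few past residuals.
L = np.linalg.norm(X, ord=2) ** 2
w = np.zeros(p)
residuals = []
for _ in range(200):
    grad_step = w + X.T @ (y - X @ w) / L
    w = np.sign(grad_step) * np.maximum(np.abs(grad_step) - alpha / L, 0.0)
    residuals.append(y - X @ w)
    residuals = residuals[-5:]              # keep K = 5 past residuals

def dual_obj(theta):
    return 0.5 * np.linalg.norm(y) ** 2 - 0.5 * np.linalg.norm(y - theta) ** 2

theta_plain = dual_point(residuals[-1], X, alpha)
theta_accel = dual_point(extrapolate(np.array(residuals)), X, alpha)
print(dual_obj(theta_plain), dual_obj(theta_accel))
```

Both constructed points are dual feasible by the rescaling, so each yields a valid (weak-duality) optimality gap; after support identification, the extrapolated point typically gives a noticeably tighter one.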
