4 papers accepted for ICML 2020

We are pleased to share the good news that four papers from our team have been accepted for ICML 2020 (International Conference on Machine Learning):

Debiased Sinkhorn barycenters
Hicham Janati (Inria and CREST/ENSAE), Marco Cuturi (Google and CREST/ENSAE), Alexandre Gramfort (Inria)

Implicit differentiation of Lasso-type models for hyperparameter optimization
Quentin Bertrand (Inria), Quentin Klopfenstein (Université de Bourgogne), Mathieu Blondel (NTT), Samuel Vaiter (CNRS), Alexandre Gramfort (Inria), Joseph Salmon (Université de Montpellier)

Aggregation of Multiple Knockoffs
Tuan-Binh Nguyen (Inria), Jerome-Alexis Chevalier (Inria), Sylvain Arlot (Université Paris-Sud), Bertrand Thirion (Inria)

Super-efficiency of automatic differentiation for functions defined as a minimum
Pierre Ablin (CNRS and ENS), Gabriel Peyré (CNRS and ENS), Thomas Moreau (Inria)

New Article in Nature Communications! Collaboration with the Stanford Cognitive & Systems Neuroscience Lab

Are brain structures of written word decoding and attention linked? Can we predict cognitive performance?

Find out in our article in Nature Communications.


While predominant models of visual word form area (VWFA) function argue for its specific role in decoding written language, other accounts propose a more general role of VWFA in complex visual processing. However, a comprehensive examination of structural and functional VWFA circuits and their relationship to behavior has been missing. Here, using high-resolution multimodal imaging data from a large Human Connectome Project cohort (N = 313), we demonstrate robust patterns of VWFA connectivity with both canonical language and attentional networks. Brain-behavior relationships revealed a striking pattern of double dissociation: structural connectivity of VWFA with lateral temporal language network predicted language, but not visuo-spatial attention abilities, while VWFA connectivity with dorsal fronto-parietal attention network predicted visuo-spatial attention, but not language abilities. Our findings support a multiplex model of VWFA function characterized by distinct circuits for integrating language and attention, and point to connectivity-constrained cognition as a key principle of human brain organization.

Congratulations to Patricio on his PhD defense!

“Statistical learning with high-cardinality string categorical variables”

The jury comprises:

Gaël Varoquaux, Inria (team Parietal), Palaiseau, France.

Laurent Charlin, HEC Montréal, Montréal, Canada.

Stéphane Gaïffas, Université Paris Diderot (LPSM), Paris, France.

Balázs Kégl, Huawei/CNRS, France.

Charles Bouveyron, Université Côte d’Azur, Nice, France.

Marc Schoenauer, Inria (team TAU), France.

Patrick Valduriez, Inria (LIRMM), Montpellier, France.

4 papers accepted for NeurIPS 2019

We are pleased to share the good news that four papers out of four submissions from our team have been accepted for NeurIPS 2019 (in alphabetical order):

1) Pierre Ablin, T. Moreau, M. Massias & A. Gramfort: “Learning step sizes for unfolded sparse coding” https://arxiv.org/abs/1905.11071

2) Quentin Bertrand, M. Massias, A. Gramfort and J. Salmon: “Concomitant Lasso with Repetitions (CLaR): beyond averaging multiple realizations of heteroscedastic noise” https://arxiv.org/abs/1902.02509

3) David Sabbagh, P. Ablin, G. Varoquaux, A. Gramfort and D. Engemann: “Manifold-regression to predict from MEG/EEG brain signals without source modeling” https://arxiv.org/abs/1906.02687

4) Meyer Scetbon & G. Varoquaux: “Comparing distributions: l1 geometry improves kernel two-sample testing” (selected as spotlight).

See you in Vancouver!

New PhD position available

High-performance simulation for the design of compressed sensing trajectories in high-resolution functional neuroimaging at 7 and 11.7 Tesla.

See https://team.inria.fr/parietal/files/2019/04/PhDThesis_4D-fMRI_SPARKLING.pdf