Paper accepted at ICLR 2023

We are glad to announce that the paper “Self-supervised learning with rotation-invariant kernels” (Léon Zheng, Gilles Puy, Elisa Riccietti, Patrick Pérez, Rémi Gribonval) has been accepted at the 11th International Conference on Learning Representations (ICLR, Kigali, May 2023). This project is a collaboration between the OCKHAM team and valeo.ai.

One paper accepted at Transactions on Machine Learning Research

We are glad to announce that the following paper with contributions from the Dante team has been accepted for publication in the Transactions on Machine Learning Research (TMLR) journal:

Three papers accepted at NeurIPS 2022

We are glad to announce that the following papers with contributions from the Dante team have been accepted for publication at the NeurIPS 2022 conference:

– Template based Graph Neural Network with Optimal Transport Distances, C. Vincent-Cuaz, R. Flamary, M. Corneli, T. Vayer & N. Courty.

– Benchopt: Reproducible, efficient and collaborative optimization benchmarks, T. Moreau, M. Massias, A. Gramfort and others.

– Beyond L1: Fast and better sparse models with skglm, Q. Bertrand, Q. Klopfenstein, P.-A. Bannier, G. Gidel & M. Massias.


Paper accepted for publication in SIMODS

We are pleased to announce that our work “Efficient Identification of Butterfly Sparse Matrix Factorizations” (Léon Zheng, Elisa Riccietti, Rémi Gribonval) has been accepted for publication in the SIAM Journal on Mathematics of Data Science.

This work studies identifiability aspects of sparse matrix factorizations with butterfly constraints, a structure associated with fast transforms and used in recent neural network compression methods for its expressiveness and complexity reduction properties. In particular, we show that the butterfly factorization algorithm from the article “Fast learning of fast transforms, with guarantees” (ICASSP 2022) is endowed with exact recovery guarantees.
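To give a concrete picture of the butterfly structure (a standard textbook example, not code from the paper): the DFT matrix is the prototypical butterfly-factorizable matrix, splitting into log2(N) sparse factors, each with at most two nonzero entries per row, plus a bit-reversal permutation. A minimal sketch for N = 4:

```python
import numpy as np

N = 4
P = np.eye(N)[[0, 2, 1, 3]]              # bit-reversal permutation
B2 = np.array([[1, 1], [1, -1]], dtype=complex)
D2 = np.diag([1, -1j])                    # twiddle factors for N = 4
B4 = np.block([[np.eye(2), D2],
               [np.eye(2), -D2]])         # outer butterfly factor

# Butterfly factorization: F_4 = B4 @ (I_2 kron B2) @ P
F4_butterfly = B4 @ np.kron(np.eye(2), B2) @ P

# Reference DFT matrix: F[j, k] = w^(j*k), w = exp(-2*pi*i/N)
w = np.exp(-2j * np.pi / N)
F4 = np.array([[w ** (j * k) for k in range(N)] for j in range(N)])

print(np.allclose(F4_butterfly, F4))     # the two matrices coincide
```

Each butterfly factor has only O(N) nonzeros, which is what yields the O(N log N) cost of fast transforms; the paper studies when such a factorization can be identified exactly from the dense matrix.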

Permanent Inria position: software development engineer specialized in scientific computing for machine learning and signal processing

This year our team has been granted a permanent Inria engineer position. The profile sought is a software development engineer specialized in scientific computing for machine learning and signal processing.

The first assignment within our team is for a renewable 4-year period. The recruited engineer will also join the institute’s community of permanent engineers, represented at each center by the Experimentation and Development Service (SED).

The position is initially open through civil-service mobility (application deadline: May 6, 2022), and may subsequently be filled through a recruitment competition.

For details on the position, contacts, and how to apply, see here.

[Portrait] Mathurin Massias, a new researcher in the Dante team

Sorry, this entry is only available in French.

[Seminar MLSP] Thomas Debarre. Total-Variation-Based Optimization: Theory and Algorithms for Minimal Sparsity

We will receive Thomas Debarre on Thursday, September 23rd, for a seminar.

Title:

Total-Variation-Based Optimization: Theory and Algorithms for Minimal Sparsity

Abstract:

The total-variation (TV) norm for measures as a regularizer for continuous-domain inverse problems has been the subject of many recent works, both on the theoretical and algorithmic sides. Its sparsity-promoting effect is now well understood, particularly in the context of Dirac recovery. In this talk, I will present some of our TV-related work in the context of spline recovery, i.e., in the presence of a differential regularization operator. My emphasis will be on the study of the solution set of such problems, which is typically non-unique, and more specifically on identifying their sparsest solution. I will also present algorithmic aspects and results for spline reconstruction.
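For context, problems of this type are often written in the standard generalized-TV form (notation mine, not necessarily the speaker's exact setup):

```latex
\min_{f} \; \| \mathrm{L} f \|_{\mathcal{M}}
\quad \text{subject to} \quad
\langle \nu_m, f \rangle = y_m, \quad m = 1, \dots, M,
```

where $\mathrm{L}$ is the differential regularization operator, $\|\cdot\|_{\mathcal{M}}$ is the TV norm on measures, and the $\nu_m$ are the measurement functionals. The extreme points of the solution set are $\mathrm{L}$-splines with few knots, and identifying the sparsest such solution is the focus of the talk.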

[Seminar MLSP] Alexandre Araujo. Building Compact and Robust Deep Neural Networks with Toeplitz Matrices

We will receive Alexandre Araujo on Thursday, July 1st, for a seminar.

Title: Building Compact and Robust Deep Neural Networks with Toeplitz Matrices

Abstract:

Deep neural networks are state-of-the-art in a wide variety of tasks, however, they exhibit important limitations which hinder their use and deployment in real-world applications. When developing and training neural networks, the accuracy should not be the only concern, neural networks must also be cost-effective and reliable. Although accurate, large neural networks often lack these properties. This work focuses on the problem of training neural networks which are not only accurate but also compact, easy to train, reliable and robust to adversarial examples. To tackle these problems, we leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
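To illustrate why Toeplitz structure yields compact layers (an illustrative sketch, not code from the talk): an n x n Toeplitz matrix is constant along its diagonals, so it is fully determined by 2n - 1 parameters instead of n^2.

```python
import numpy as np

def toeplitz_matrix(params):
    """Build an n x n Toeplitz matrix: W[i, j] = params[i - j + n - 1].

    Entries depend only on the diagonal offset i - j, so 2n - 1
    parameters suffice for the full n x n matrix."""
    n = (len(params) + 1) // 2
    return np.array([[params[i - j + n - 1] for j in range(n)]
                     for i in range(n)])

n = 4
rng = np.random.default_rng(0)
params = rng.standard_normal(2 * n - 1)   # 7 parameters instead of 16
W = toeplitz_matrix(params)
print(W.shape, "built from", len(params), "parameters")
```

Beyond the parameter savings, Toeplitz matrix-vector products can be computed in O(n log n) via the FFT, which is part of what makes such structured layers attractive for compact and cost-effective networks.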

Rémi Vaudaine. Contextual anomalies in graphs, detection and explanation

For the MLSP seminars we will receive Rémi Vaudaine, on Thursday, June 24th at 3:30pm, who will talk about anomaly detection in graphs:
Title: Contextual anomalies in graphs, detection and explanation
Abstract: Graph anomaly detection has proved very useful in a wide range of domains, for instance for detecting anomalous accounts (e.g. bots, terrorists, opinion spammers or social malwares) on online platforms, intrusions and failures in communication networks, or suspicious and fraudulent behaviors on social networks.
However, most existing methods rely on pre-selected features built from the graph, do not necessarily use local information, and do not consider context-based anomalies. To overcome these limits, we present a Context-Based Graph Anomaly Detector which makes use of local information to detect anomalous nodes of a graph in a semi-supervised way. We use Graph Attention Networks (GAT) with our custom attention mechanism to build local features, aggregate them, and classify unlabeled nodes as normal or anomalous.
Nevertheless, most models based on machine learning, particularly deep neural networks, are seen as black boxes whose output cannot be humanly related to the input in a simple way. This implies a lack of understanding of the underlying model and its results. We present a new method to explain, in a human-understandable fashion, the decision of a black-box model for anomaly detection on attributed graph data. We show that our method can recover the information that leads the model to label a node as anomalous.

Launching Dante PhDs-Postdocs Seminars

We are glad to announce the creation of a small discussion club for PhD students and postdocs in Dante! In our weekly meetings, we discuss research, science, and any topic of interest to young researchers in a small group. For instance, in our first session on February 23rd, 2021, we discussed “How to manage experiments in machine learning”, based on a presentation by Luc Giffon. As we all have different research backgrounds, there will be plenty of exciting topics to discuss in our future seminars!