Permanent Inria position: software development engineer specializing in scientific computing for machine learning and signal processing

This year our team benefits from a permanent Inria engineer position. The profile sought is a software development engineer specializing in scientific computing for machine learning and signal processing.

The first assignment within our team is for a renewable period of 4 years. The recruited engineer will also join the institute's community of permanent engineers, represented at the level of each centre by the Service d'Expérimentation et de Développement (SED).

The position is open initially through the French civil service mobility procedure (application deadline: May 6, 2022), and may subsequently be opened through a recruitment competition.

For details about the position, the contacts, and how to apply, see here.

[Portrait] Mathurin Massias, new researcher in the Dante team

Sorry, this entry is only available in French.

[Seminar MLSP] Thomas Debarre. Total-Variation-Based Optimization: Theory and Algorithms for Minimal Sparsity

We will receive Thomas Debarre on Thursday 23rd September for a seminar.

Title:

Total-Variation-Based Optimization: Theory and Algorithms for Minimal Sparsity

Abstract:

The total-variation (TV) norm for measures as a regularizer for continuous-domain inverse problems has been the subject of many recent works, both on the theoretical and algorithmic sides. Its sparsity-promoting effect is now well understood, particularly in the context of Dirac recovery. In this talk, I will present some of our TV-related work in the context of spline recovery, i.e., in the presence of a differential regularization operator. My emphasis will be on the study of the solution set of such problems, which is typically non-unique, and more specifically on identifying their sparsest solution. I will also present algorithmic aspects and results for spline reconstruction.
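
As a rough illustration of the class of problems discussed (the notation below is ours, not necessarily the speaker's), a TV-regularized continuous-domain inverse problem with a differential regularization operator L typically reads

\[
\min_{f} \; \sum_{m=1}^{M} E\big(y_m, \langle \nu_m, f \rangle\big) \;+\; \lambda\, \| \mathrm{L} f \|_{\mathcal{M}},
\]

where the y_m are the measurements, the \nu_m are the measurement functionals, E is a data-fidelity term, and \|\cdot\|_{\mathcal{M}} is the total-variation norm on Radon measures. Representer theorems show that such problems admit L-spline solutions, i.e., functions whose innovation Lf is a finite sum of Dirac impulses, which is the sense in which this regularization promotes sparsity.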

[Seminar MLSP] Alexandre ARAUJO. Building Compact and Robust Deep Neural Networks with Toeplitz Matrices

We will receive Alexandre ARAUJO on Thursday 1st July for a seminar.

Title: Building Compact and Robust Deep Neural Networks with Toeplitz Matrices

Abstract:

Deep neural networks are state-of-the-art in a wide variety of tasks; however, they exhibit important limitations which hinder their use and deployment in real-world applications. When developing and training neural networks, accuracy should not be the only concern: neural networks must also be cost-effective and reliable. Although accurate, large neural networks often lack these properties. This work focuses on the problem of training neural networks which are not only accurate but also compact, easy to train, reliable and robust to adversarial examples. To tackle these problems, we leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
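
As a minimal sketch of the compactness argument (our own illustration, not the speaker's code), a dense layer whose weight matrix is constrained to be Toeplitz is determined by a single vector of 2n - 1 values instead of n^2 independent entries:

import numpy as np

def toeplitz_matrix(c):
    # Build an n x n Toeplitz matrix from a vector c of length 2n - 1:
    # entry (i, j) only depends on the offset i - j.
    n = (len(c) + 1) // 2
    return np.array([[c[(i - j) + n - 1] for j in range(n)] for i in range(n)])

class ToeplitzLinear:
    # Linear layer y = W x with W constrained to be Toeplitz.
    # Only 2n - 1 parameters are stored instead of n * n, which is what
    # makes such structured layers compact; the robustness guarantees
    # discussed in the talk are beyond the scope of this sketch.
    def __init__(self, n, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        self.c = rng.standard_normal(2 * n - 1) / np.sqrt(n)

    def __call__(self, x):
        return toeplitz_matrix(self.c) @ x

layer = ToeplitzLinear(4)   # 7 parameters instead of 16
print(layer(np.ones(4)))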

Rémi Vaudaine. Contextual anomalies in graphs, detection and explanation

For the MLSP seminars, we will receive Rémi Vaudaine on Thursday 24th June at 3.30pm, who will talk about anomaly detection in graphs:
Title: Contextual anomalies in graphs, detection and explanation
Abstract: Graph anomaly detection has proved very useful in a wide range of domains, for instance for detecting anomalous accounts (e.g. bots, terrorists, opinion spammers or social malware) on online platforms, intrusions and failures in communication networks, or suspicious and fraudulent behaviors on social networks.
However, most existing methods rely on pre-selected features built from the graph, do not necessarily use local information, and do not consider context-based anomalies. To overcome these limits, we present a Context-Based Graph Anomaly Detector which makes use of local information to detect anomalous nodes of a graph in a semi-supervised way. We use Graph Attention Networks (GAT) with our custom attention mechanism to build local features, aggregate them, and classify unlabeled nodes as normal or anomalous.
Nevertheless, most machine-learning models, particularly deep neural networks, are seen as black boxes whose output cannot be related to the input by a human in a simple way. This implies a lack of understanding of the underlying model and its results. We present a new method to explain, in a human-understandable fashion, the decision of a black-box model for anomaly detection on attributed graph data. We show that our method can recover the information that leads the model to label a node as anomalous.
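
For readers unfamiliar with graph attention, here is a minimal, generic GAT-style layer (a simplified sketch of our own; it does not reproduce the custom attention mechanism mentioned above). A final linear layer and a sigmoid on top of such aggregated features would yield the normal/anomalous score for each node:

import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gat_layer(X, A, W, a):
    # X: (n, d) node features; A: (n, n) adjacency matrix with self-loops;
    # W: (d, h) shared projection; a: (2h,) attention parameters.
    H = X @ W
    h = H.shape[1]
    s = H @ a[:h]                      # contribution of the source node
    t = H @ a[h:]                      # contribution of the target node
    e = s[:, None] + t[None, :]        # raw score for every pair (i, j)
    e = np.where(e > 0, e, 0.2 * e)    # LeakyReLU
    e = np.where(A > 0, e, -1e9)       # keep scores of actual neighbors only
    alpha = softmax(e)                 # attention weights, normalized per node
    return alpha @ H                   # attention-weighted neighbor aggregation

# toy usage: 3 nodes on a path graph, 2-dimensional features, 2 hidden units
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 2))
A = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=float)
print(gat_layer(X, A, rng.standard_normal((2, 2)), rng.standard_normal(4)).shape)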

Launching Dante PhDs-Postdocs Seminars

We are glad to announce the creation of a small discussion club for PhD students and postdocs in Dante! In our weekly meetings, we discuss research, science, and any topic of interest for young researchers in a small group. For instance, for our first session, on February 23rd 2021, we discussed “How to manage experiments in machine learning”, based on a presentation by Luc Giffon. As we all have different research backgrounds, there will be plenty of exciting topics to discuss in our future seminars!

Talk « Mission-aware path planning of unmanned aerial vehicles » by Ahmed Boubrima from Rice University

We are pleased to virtually receive Ahmed Boubrima from Rice University, who will talk about « Mission-aware path planning of unmanned aerial vehicles » during the next session of our working group.

This session will virtually take place on Friday, March 12 at 3 PM.

You can join this session using the connection details below:

Meeting ID: 950 7726 8125
Passcode: 239383

Dial by your location:
        +33 1 7037 2246 France
        +33 1 7037 9729 France
        +33 1 7095 0103 France
        +33 1 7095 0350 France
        +33 1 8699 5831 France

Find your local number: https://zoom.us/u/aky9NrKYD

[Seminar MLSP] Pedro Rodrigues. Leveraging Global Parameters for Flow-based Neural Posterior Estimation

We will receive Pedro Rodrigues on Thursday 25th February at 10am.

Title: Leveraging Global Parameters for Flow-based Neural Posterior Estimation
Presenter: Pedro L. C. Rodrigues (postdoctoral researcher, Parietal team, INRIA-Saclay)

Abstract: Inferring the parameters of a stochastic model based on experimental observations is central to the scientific method. A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations. This arises in many practical situations, such as when inferring the distance and power of a radio source (is the source close and weak or far and strong?) or when estimating the amplifier gain and underlying brain activity of an electrophysiological experiment. In this work, we present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters. Our method extends recent developments in simulation-based inference (SBI) based on normalizing flows to Bayesian hierarchical models. We validate our proposal quantitatively on a motivating example amenable to analytical solutions and then apply it to invert a well-known non-linear model from computational neuroscience.
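
To fix ideas (the notation is ours, not necessarily the speaker's), the hierarchical setting can be written as a generative model in which the global parameters \beta are shared by the observation of interest x_0 and by the auxiliary observations x_1, ..., x_N:

\[
\beta \sim p(\beta), \qquad \theta_i \sim p(\theta), \qquad x_i \sim p(x \mid \theta_i, \beta), \quad i = 0, \dots, N,
\]

and the quantity approximated by the conditional normalizing flow is the posterior p(\theta_0, \beta \mid x_0, x_1, \dots, x_N), which the auxiliary observations help to constrain.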

Julien Fageot. TV-based methods for sparse reconstruction in continuous-domain

On Monday 8th February at 10:30am, we will welcome Julien Fageot, a postdoctoral researcher at McGill University, who will talk about sparse reconstruction in the continuous domain.

Title: TV-based methods for sparse reconstruction in continuous-domain

Abstract: We consider the problem of reconstructing an unknown function from finitely many, possibly corrupted, linear measurements. This is achieved by considering an optimization task using a sparsity-promoting regularization. More precisely, we consider the total-variation norm on Radon measures – which is the infinite-dimensional counterpart of the classic L1 norm used for sparse reconstruction in statistical learning and compressed sensing – and a regularization operator that controls the smoothness of the reconstruction. The goal of this presentation is to discuss some theoretical and computational aspects of this infinite-dimensional optimization problem (form of the solutions, connection with spline theory, uniqueness issues, algorithmic strategies) and to illustrate the potential of the method for continuous-domain signal reconstruction.
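
As a pointer to the "form of the solutions" mentioned above (stated informally here, under the usual assumptions of the continuous-domain representer theorems), extreme points of the solution set are L-splines,

\[
f^\star(x) \;=\; \sum_{k=1}^{K} a_k\, \rho_{\mathrm{L}}(x - \tau_k) \;+\; q(x), \qquad K \le M,
\]

where \rho_L is a Green's function of the regularization operator L, q belongs to its finite-dimensional null space, and M is the number of measurements; in other words, the reconstruction is an adaptive spline with at most as many knots as there are data points.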

PhD Seminar – Clément Lalanne’s mini lectures on Differential Privacy

Clément Lalanne, PhD student at Dante and UMPA, will give a series of mini-lectures on Differential Privacy.

Differential Privacy is a tractable property that aims to protect the privacy of the individuals composing a dataset.

  • December 9th 2020 at 3:30pm: Introductory lecture on Differential Privacy. In this first lecture, we will go through the basic definitions and properties of this concept and explain why it is appealing for real-world applications. We will also cover some modest examples. In the following lectures, Clément plans to cover general techniques to turn an existing algorithm into a differentially private one and to present advanced techniques to track the privacy loss through composition of algorithms.
  • December 16th 2020 at 3:45pm: We will focus on techniques that add noise to the output of an algorithm in order to enhance privacy (a minimal illustration of such a mechanism is sketched after this list).
  • January 13th 2021 at 9:00am: We will study the graphical representation of differential privacy in a hypothesis-testing setup and use it to deduce several properties, including the advanced sharp composition theorem for Differential Privacy.
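
As a minimal illustration of the noise-adding mechanisms mentioned in the second lecture (a standard textbook example, not material taken from the lectures themselves), the Laplace mechanism releases a query answer with noise calibrated to the query's sensitivity:

import numpy as np

def laplace_mechanism(query_answer, sensitivity, epsilon, rng=None):
    # Release `query_answer` with epsilon-differential privacy: the noise
    # scale sensitivity / epsilon matches the maximum change of the query
    # when one individual's record is added or removed.
    rng = np.random.default_rng() if rng is None else rng
    return query_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# toy example: privately release a counting query (sensitivity 1)
data = np.array([1, 0, 1, 1, 0, 1])   # hypothetical binary attribute
true_count = int(data.sum())
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(true_count, private_count)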


After the lectures, the slides and a detailed pdf will be available at https://clemlal.github.io/privacy.

Thank you!