[Seminar MLSP] Alexandre ARAUJO. Building Compact and Robust Deep Neural Networks with Toeplitz Matrices

We will receive Alexandre ARAUJO on Thursday 1st July for a seminar.

Title: Building Compact and Robust Deep Neural Networks with Toeplitz Matrices

Abstract:

Deep neural networks are state-of-the-art in a wide variety of tasks; however, they exhibit important limitations which hinder their use and deployment in real-world applications. When developing and training neural networks, accuracy should not be the only concern: neural networks must also be cost-effective and reliable. Although accurate, large neural networks often lack these properties. This work focuses on the problem of training neural networks which are not only accurate but also compact, easy to train, reliable and robust to adversarial examples. To tackle these problems, we leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
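What makes Toeplitz-family layers compact is that they are described by O(n) parameters and admit fast multiplication. As an illustrative sketch (not the authors' implementation), a circulant matrix — a member of the Toeplitz family — can be applied to a vector in O(n log n) via the FFT:

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by x in O(n log n),
    using the diagonalization C = F^* diag(F c) F, where F is the DFT."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Sanity check against the explicit n x n circulant matrix:
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
x = rng.standard_normal(n)
C = np.column_stack([np.roll(c, k) for k in range(n)])  # column k is c shifted by k
assert np.allclose(circulant_matvec(c, x), C @ x)
```

A layer built this way stores n weights instead of n², which is the kind of compactness structured matrices buy.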

Rémi Vaudaine. Contextual anomalies in graphs, detection and explanation

For the MLSP seminars, we will receive Rémi Vaudaine on Thursday 24th June at 3.30pm, who will talk about anomaly detection in graphs:
Title: Contextual anomalies in graphs, detection and explanation
Abstract: Graph anomaly detection has proved very useful in a wide range of domains: for instance, detecting anomalous accounts (e.g. bots, terrorists, opinion spammers or social malware) on online platforms, intrusions and failures on communication networks, or suspicious and fraudulent behaviors on social networks.
However, most existing methods rely on pre-selected features built from the graph, do not necessarily use local information and do not consider context-based anomalies. To overcome these limits, we present a Context-Based Graph Anomaly Detector which makes use of local information to detect anomalous nodes of a graph in a semi-supervised way. We use Graph Attention Networks (GAT) with our custom attention mechanism to build local features, aggregate them and classify unlabeled nodes as normal or anomalous.
Nevertheless, most models based on machine learning, particularly deep neural networks, are seen as black boxes whose output cannot easily be related to the input by a human. This implies a lack of understanding of the underlying model and its results. We present a new method to explain, in a human-understandable fashion, the decision of a black-box model for anomaly detection on attributed graph data. We show that our method can recover the information that leads the model to label a node as anomalous.
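As a rough illustration of attention-based neighbor aggregation in a GAT-style layer (a minimal single-head sketch, not the custom attention mechanism described in the talk):

```python
import numpy as np

def gat_layer(H, A, W, a, alpha=0.2):
    """One GAT-style attention head: score each edge, softmax the scores
    over each node's neighborhood, then aggregate projected features.

    H: (n, d_in) node features, A: (n, n) boolean adjacency (with self-loops),
    W: (d_in, d_out) projection, a: (2*d_out,) attention vector."""
    Z = H @ W                                   # projected node features
    n = A.shape[0]
    scores = np.full((n, n), -np.inf)           # -inf => zero attention weight
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                e = np.concatenate([Z[i], Z[j]]) @ a
                scores[i, j] = max(e, alpha * e)  # LeakyReLU (alpha < 1)
    att = np.exp(scores - scores.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)        # row-wise softmax over neighbors
    return att @ Z                               # attention-weighted aggregation

# Tiny usage example on a random graph with self-loops:
rng = np.random.default_rng(0)
n, d_in, d_out = 5, 3, 4
H = rng.standard_normal((n, d_in))
A = np.eye(n, dtype=bool) | (rng.random((n, n)) < 0.4)
out = gat_layer(H, A, rng.standard_normal((d_in, d_out)),
                rng.standard_normal(2 * d_out))
assert out.shape == (n, d_out)
```

In the detector, such aggregated local features would feed a classifier separating normal from anomalous nodes; the explanation method then works backwards from that decision.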

Launching Dante PhDs-Postdocs Seminars

We are glad to announce the creation of a small discussion club for PhD students and postdocs in Dante! In our weekly meetings, we discuss in a small group research, science, and any topic of interest to young researchers. For instance, for our first session on February 23rd 2021, we discussed “How to manage experiments in machine learning”, based on a presentation by Luc Giffon. As we all have different research backgrounds, there will be plenty of exciting topics to discuss in our future seminars!

Talk « Mission-aware path planning of unmanned aerial vehicles » by Ahmed Boubrima from Rice University

We are pleased to virtually receive Ahmed Boubrima from Rice University, who will talk about « Mission-aware path planning of unmanned aerial vehicles » during the next session of our working group.

This session will virtually take place on Friday, March 12 at 3 PM.

You can access this session using the connection details below:

Meeting ID: 950 7726 8125
Passcode: 239383

[Seminar MLSP] Pedro Rodrigues. Leveraging Global Parameters for Flow-based Neural Posterior Estimation

We will welcome Pedro Rodrigues on Thursday 25th February at 10am.

Title: Leveraging Global Parameters for Flow-based Neural Posterior Estimation
Presenter: Pedro L. C. Rodrigues (postdoctoral researcher, Parietal team, INRIA-Saclay)

Abstract: Inferring the parameters of a stochastic model based on experimental observations is central to the scientific method. A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations. This arises in many practical situations, such as when inferring the distance and power of a radio source (is the source close and weak or far and strong?) or when estimating the amplifier gain and underlying brain activity of an electrophysiological experiment. In this work, we present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters. Our method extends recent developments in simulation-based inference (SBI) based on normalizing flows to Bayesian hierarchical models. We quantitatively validate our proposal on a motivating example amenable to analytical solutions and then apply it to invert a well-known non-linear model from computational neuroscience.
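The distance/power indeterminacy mentioned in the abstract is easy to reproduce numerically (a toy illustration, not the model used in the work): received power from an isotropic source scales as p/d², so scaling the power by 4 and the distance by 2 leaves the observation unchanged.

```python
def received_power(p, d):
    """Inverse-square law: observed power of a source of power p at distance d."""
    return p / d ** 2

# Two distinct parameter sets, one identical observation:
assert received_power(1.0, 10.0) == received_power(4.0, 20.0)
```

No amount of data from this single observation can separate the two parameter sets; auxiliary observations sharing global parameters are what breaks the tie.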

Julien Fageot. TV-based methods for sparse reconstruction in continuous-domain

On Monday 8th February at 10.30am we will welcome Julien Fageot, a postdoctoral researcher at McGill University, who will talk about sparse reconstruction in the continuous domain.

Title: TV-based methods for sparse reconstruction in continuous-domain

Abstract: We consider the problem of reconstructing an unknown function from some finitely many and possibly corrupted linear measurements. This is achieved by considering an optimization task using a sparsity-promoting regularization. More precisely, we consider the total-variation norm on Radon measures – which is the infinite-dimensional counterpart of the classic L1 norm used for sparse reconstruction in sparse statistical learning and compressed sensing – and a regularization operator that controls the smoothness of the reconstruction. The goal of this presentation is to discuss some theoretical and computational aspects of this infinite-dimensional optimization problem (form of the solutions, connection with spline theory, uniqueness issues, algorithmic strategies) and to illustrate the potential of the method for continuous-domain signal reconstruction.

PhD Seminar – Clément Lalanne’s mini lectures on Differential Privacy

Clément Lalanne, PhD at Dante and UMPA, will give a series of mini lectures on Differential Privacy.

Differential Privacy is a tractable property that aims to protect the privacy of the individuals who make up a dataset.

  • December 9th 2020 at 3:30pm : Introductory lecture on Differential Privacy. In this first lecture we will go through the basic definitions and properties of this concept while explaining why it is appealing for real-world applications. We will also cover some modest examples. In later lectures, Clément plans to cover general techniques to turn an existing algorithm into a differentially private one, and to present advanced techniques to track the privacy loss through composition of algorithms.
  • December 16th 2020 at 3:45pm : We will focus on the techniques that add noise to the output of an algorithm in order to enhance privacy.
  • January 13th 2021 at 9:00am : We will study the graphical representation of differential privacy in a hypothesis-testing setup and use it to deduce some properties, including the advanced (sharp) composition theorem for Differential Privacy.
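The noise-addition approach covered in the second lecture can be sketched with the standard Laplace mechanism (a textbook illustration, assumed here rather than taken from the lectures):

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release value + Lap(sensitivity / epsilon).  For a query whose output
    changes by at most `sensitivity` when one individual is added or removed,
    the released value satisfies epsilon-differential privacy."""
    rng = np.random.default_rng() if rng is None else rng
    return value + rng.laplace(scale=sensitivity / epsilon)

# E.g. a counting query (sensitivity 1) released with epsilon = 1:
rng = np.random.default_rng(0)
private_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=1.0, rng=rng)
```

Smaller epsilon means stronger privacy but more noise; composing several such releases consumes privacy budget additively, which is what the composition theorems of the third lecture sharpen.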


After the lectures, the slides and a detailed pdf will be available at https://clemlal.github.io/privacy.

Thank you !

Mathurin Massias. Fast resolution of structured inverse problems: extrapolation and iterative regularization

This Thursday 14th January at 4pm we will welcome Mathurin Massias, a post-doctoral researcher at the University of Genova, who will present his work on structured inverse problems:

Title: Fast resolution of structured inverse problems: extrapolation and iterative regularization

Abstract: Overparametrization is common in linear inverse problems, which poses the question of stability and uniqueness of the solution. A remedy is to select a specific solution by minimizing a bias functional over all interpolating solutions. This functional is frequently neither smooth nor convex (e.g. L1, L2/L1, nuclear norm, TV). In the first part of the talk, we study fast solvers for the so-called Tikhonov approach, where the bilevel optimization problem is relaxed into “datafit + regularization” (e.g., the Lasso). We show that, for separable problems arising in ML, coordinate descent algorithms can be accelerated by Anderson extrapolation, which surpasses full gradient methods and inertial acceleration.
The Tikhonov approach can be costly, as it requires calibrating the regularization strength over a grid of values, and thus solving many optimization problems. In the second part of the talk, we present results on iterative regularization: a single optimization problem is solved, and the number of iterations acts as the regularizer. We derive novel results on the early-stopped iterates, in the case where the bias is convex but not strongly convex.
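The regularizing effect of early stopping can be seen already on a tiny ill-conditioned least-squares problem (a toy sketch with plain gradient descent, not the methods analyzed in the talk): the iterates first approach the true signal, then drift toward the noisy least-squares solution.

```python
import numpy as np

def gradient_descent(A, y, n_iter, step=1.0):
    """Gradient descent on 0.5*||Ax - y||^2: the iteration count,
    not a penalty term, controls the amount of regularization."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x -= step * A.T @ (A @ x - y)
    return x

A = np.diag([1.0, 0.01])                  # ill-conditioned forward operator
x_true = np.array([1.0, 1.0])
y = A @ x_true + np.array([0.01, 0.05])   # noisy measurements
err = lambda k: np.linalg.norm(gradient_descent(A, y, k) - x_true)

# Early stopping beats (near-)convergence to the noisy least-squares solution:
assert err(10) < err(100000)
```

The small singular value (0.01) amplifies the noise by a factor 100 at convergence, while after 10 iterations the corresponding component has barely moved; choosing the stopping time plays exactly the role of choosing the Tikhonov regularization strength.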

The presentation is based on:

– https://arxiv.org/abs/2011.10065 Anderson acceleration of coordinate descent
– https://arxiv.org/abs/2006.09859 Iterative regularization for convex regularizers
– https://arxiv.org/abs/1907.05830 Dual extrapolation for sparse generalized linear models

(joint works with Alexandre Gramfort, Joseph Salmon, Samuel Vaiter, Quentin Bertrand, Cesare Molinari, Lorenzo Rosasco and Silvia Villa)

PhD and Postdoc seminar: “Recent advances on solving the Mumford-Shah model for discrete images” by Marion Foare

Friday 11th February, 2018 at 11am – LIP Meeting room M7 (3rd floor)

Our next seminar will be given by Marion Foare, who will present on “Recent advances on solving the Mumford-Shah model for discrete images”.

Abstract: “Essential image processing and analysis tasks, such as image segmentation, simplification and denoising, can be conducted in a unified way by minimizing the Mumford-Shah functional. Although seductive, this minimization is in practice difficult because it requires jointly defining a sharp set of contours and a smooth version of the initial image. For this reason, various relaxations of the original formulation have been proposed, together with optimisation methods. In this talk, we propose several discrete approximations of the Mumford-Shah functional and their numerical resolution for image processing tasks. We compare the results with state-of-the-art convex relaxations of the Mumford-Shah functional, and show that the proposed methods lead to competitive denoising, restoration and segmentation results.”

PhD Seminar & DL Journal Club: Anomalous diffusion for graph-based data classification

Friday, October 26th, 2018 at 11am – LIP Meeting room M7

Esteban Bautista will present on “Anomalous diffusion for graph-based data classification”.

The talk will be a one-time merging of the PhD seminar and the DL Journal Club of the DANTE team.