Modal Seminar, 2021-2022 (13 sessions)

Usual day: Tuesday at 11.00.

Place: Inria Lille – Nord Europe.

How to get there: in French / in English.

Organizer: Hemant Tyagi

Calendar feed: iCalendar (hosted by the seminars platform of the University of Lille)

Most slides are available: check past sessions and archives.

Archives: 2020-2021, 2019-2020, 2018-2019, 2017-2018, 2016-2017, 2015-2016, 2014-2015, 2013-2014

Stephane Chretien

Date: June 7, 2022 (Tuesday) at 14.00 (plenary room)

Affiliation: University of Lyon 2
Webpage: Link
Title: Benign overfitting: analysis of the generalisation paradox
Abstract: Recent extensive numerical experiments in large-scale machine learning have uncovered a quite counterintuitive phase transition as a function of the ratio between the sample size and the number of parameters in the model. As the number of parameters p approaches the sample size n, the generalisation error (a.k.a. test error) increases, but in many cases it starts decreasing again past the threshold p = n. This surprising phenomenon, brought to the theoretical community's attention by Belkin and co-authors, has been thoroughly investigated lately, mostly for models simpler than deep neural networks, such as the linear model with the parameter taken to be the minimum-norm solution of the least-squares problem, and mostly in the asymptotic regime where p and n tend to infinity. In the present work, we propose a finite-sample analysis of non-linear models of ridge type, investigating the overparametrised regime of the double descent phenomenon for both the estimation and the prediction problems. Our results provide a precise analysis of the distance of the best estimator from the true parameter, as well as a generalisation bound that complements recent works of Bartlett and co-authors and Chinot and co-authors. Our analysis is based on efficient but elementary tools closely related to the continuous Newton method.
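As a quick illustration of the phenomenon described in the abstract, here is a minimal numpy sketch of double descent for the minimum-norm least-squares estimator in a misspecified linear model; the dimensions, noise level, and number of repetitions are illustrative choices, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_test_error(n, p, reps=20, d=400, n_test=200, noise=0.1):
    """Average test error of the minimum-norm least-squares fit that uses
    only the first p of d true features (misspecified linear model)."""
    errs = []
    for _ in range(reps):
        beta = rng.standard_normal(d) / np.sqrt(d)      # true parameter
        X = rng.standard_normal((n, d))
        y = X @ beta + noise * rng.standard_normal(n)
        Xt = rng.standard_normal((n_test, d))
        yt = Xt @ beta
        # np.linalg.pinv returns the minimum-norm solution when p > n
        beta_hat = np.linalg.pinv(X[:, :p]) @ y
        errs.append(np.mean((Xt[:, :p] @ beta_hat - yt) ** 2))
    return float(np.mean(errs))

n = 40
errors = {p: avg_test_error(n, p) for p in (10, 40, 400)}
# the error peaks near the interpolation threshold p = n and drops again
# in the overparametrised regime p >> n (double descent)
```

Averaging over repetitions smooths the heavy-tailed error near the interpolation threshold, where the design matrix is nearly singular.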

Sebastian Kühnert

Date: May 31, 2022 (Tuesday) at 11.00 (online seminar)

Affiliation: previously at University of Rostock, Rostock (GER)
Webpage: Link
Title:  On lagged Covariance and Cross-Covariance Operators in Cartesian Products of Abstract Hilbert Spaces
Abstract: A major task in Functional Time Series Analysis is measuring the dependence within and between processes, for which lagged covariance and cross-covariance operators have proven to be a practical tool in well-established spaces. The talk focuses on estimators for lagged covariance and cross-covariance operators of processes in abstract Hilbert spaces, and in particular of processes in Cartesian products of Hilbert spaces, obtained by successively stacking Hilbert space-valued elements. Our main focus is on these estimators' asymptotic properties for fixed and increasing lag and Cartesian powers. The processes are allowed to be non-centered, and to have values in different spaces when investigating the dependence between processes. Moreover, features of estimators for the principal components of our covariance operators for fixed and increasing Cartesian power are discussed.
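To make the central object concrete, here is a small numpy sketch of the empirical lagged cross-covariance operator for discretised functional time series; the grid size and the toy processes are illustrative assumptions, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

def lagged_cross_cov(X, Y, h):
    """Empirical lag-h cross-covariance operator between two discretised
    functional time series, stored as arrays of shape (n_time, n_grid).
    Pairs X_{t+h} with Y_t; the processes may be non-centred, so the
    sample means are subtracted first."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    # average of the rank-one operators (X_{t+h} - mean) (x) (Y_t - mean)
    return Xc[h:].T @ Yc[: n - h] / (n - h)

# toy example: Y is (roughly) X delayed by one step, so the lag-1
# cross-covariance of (Y, X) is close to the identity operator
n, m = 400, 20
X = rng.standard_normal((n, m))
Y = np.vstack([np.zeros((1, m)), X[:-1]]) + 0.1 * rng.standard_normal((n, m))
C1 = lagged_cross_cov(Y, X, 1)   # Cov(Y_{t+1}, X_t), roughly identity
C0 = lagged_cross_cov(Y, X, 0)   # Cov(Y_t, X_t), roughly zero
```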

David Coupier

Date: May 24, 2022 (Tuesday) at 11.00

Affiliation:  Institut Mines Télécom Nord Europe
Webpage: Link
Title:  How Percolation occurs on 5G telecommunication networks
Abstract: The 5G Device-to-Device (D2D) technology allows short-range direct communications (governed by microscopic rules) between two devices or users, without the signal needing to be routed through additional network infrastructure. Good connectivity in a D2D network, i.e. a long-range connection (a macroscopic property), can therefore be naturally interpreted as a percolation problem. Our goal is to model D2D networks using stochastic geometry tools, especially random mosaics for the urban network and point processes for the users, and then to study their percolation properties.

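A minimal simulation of the percolation viewpoint, assuming the simplest Boolean (Gilbert) model rather than the random mosaics of the talk: devices form a Poisson point process and two devices communicate when within a fixed radius.

```python
import numpy as np

rng = np.random.default_rng(2)

def largest_cluster_fraction(intensity, radius, box=10.0):
    """Boolean-model toy: devices are a Poisson point process in a box,
    two devices communicate if within `radius`. Returns the fraction of
    devices in the largest connected component (union-find)."""
    n = rng.poisson(intensity * box * box)
    pts = rng.uniform(0, box, size=(n, 2))
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        for j in range(i + 1, n):
            if d2[i, j] <= radius ** 2:
                parent[find(i)] = find(j)
    roots = [find(i) for i in range(n)]
    _, counts = np.unique(roots, return_counts=True)
    return counts.max() / n

# below vs. above the percolation threshold (illustrative parameters)
sub = largest_cluster_fraction(intensity=1.0, radius=0.3)
sup = largest_cluster_fraction(intensity=1.0, radius=1.5)
```

Below the threshold the largest cluster stays microscopic; above it a macroscopic (giant) component appears, which is exactly the long-range connectivity property of interest.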
Gaël Poux-Médard

Date: April 26, 2022 (Tuesday) at 11.00

Affiliation:   University Lyon 2
Webpage: Link
Title:  Dirichlet-Temporal point processes
Abstract: The textual content of a document and its publication date are intertwined. For instance, a scientific article builds on recent past works according to underlying temporal dynamics: older publications may have less influence on recent ones. Documents could also be tweets, news articles, reddit posts, etc. Most existing works on this topic use time as a way to slice the data, which is then fed to models trained sequentially. Other approaches model the content of documents first and analyse their publication dynamics afterwards. However, more recent advances have shown that simultaneously using documents' textual content and publication dates improves the modelling accuracy by a significant margin. Better still, they provide a method to explicitly model documents' publication dynamics along with their textual content: the Dirichlet-Hawkes process. In my talk, I will first explain how to build this process from scratch by merging Dirichlet processes and Hawkes processes. In the second part, I will discuss this work from a broader perspective, that of Dirichlet temporal point processes, and show how they can help generate automated dataflow summaries, infer temporal topic-topic interaction networks, and infer topic-dependent diffusion subnetworks at scale, among other things.
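To make the construction concrete, here is a minimal sketch of the Dirichlet-Hawkes allocation step, in which a new document joins an existing topic cluster with probability proportional to that cluster's Hawkes intensity, or opens a new cluster with probability proportional to the Dirichlet-process concentration; the exponential kernel and all parameter values are illustrative assumptions, not those of the paper.

```python
import numpy as np

def exp_kernel_intensity(t, times, alpha=1.0, beta=2.0):
    """Hawkes triggering intensity at time t given past event times,
    with an exponential kernel alpha * exp(-beta * (t - s))."""
    past = times[times < t]
    return alpha * np.exp(-beta * (t - past)).sum()

def dhp_cluster_probs(t, cluster_times, concentration=1.0):
    """Dirichlet-Hawkes allocation probabilities for a document at time t:
    existing clusters are weighted by their Hawkes intensity at t, and the
    last entry is the 'new cluster' option, weighted by the concentration."""
    weights = [exp_kernel_intensity(t, np.asarray(ts)) for ts in cluster_times]
    weights.append(concentration)
    w = np.array(weights)
    return w / w.sum()

# two existing clusters with event times [0.5, 2.8] and [2.9]; new doc at t = 3
probs = dhp_cluster_probs(3.0, [[0.5, 2.8], [2.9]], concentration=0.5)
```

The recency effect is visible in the toy numbers: the cluster whose last event is closest to t gets the highest weight.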

Sebastian Stich

Date: April 5, 2022 (Tuesday) at 11.00 (Online seminar)

Affiliation:  CISPA Helmholtz Center for Information Security, Germany
Webpage: Link
Title:  Introduction to federated optimization and the communication efficiency of local update methods
Abstract: Federated learning has emerged as an important paradigm in modern large-scale machine learning. Unlike in traditional centralized learning, where models are trained using large datasets stored on a central server, in federated learning the training data remains distributed over many clients, which may be phones, network sensors, hospitals, or other local information sources. A centralized model (referred to as the server model) is then trained without ever transmitting client data over the network, thereby ensuring a basic level of privacy.
In this talk, we (i) give an introduction to federated learning and optimization, (ii) discuss local update methods that reduce communication rounds through intermittent communication, and (iii) highlight the recent Scaffnew method, which improves over Scaffold (https://arxiv.org/abs/1910.06378).
The last part of the talk is based on https://arxiv.org/abs/2202.09357, joint work with K. Mishchenko, G. Malinovsky, and P. Richtárik.
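A minimal numpy sketch of the intermittent-communication idea behind local update methods (local SGD / FedAvg-style averaging on a toy least-squares problem); it does not implement Scaffold or Scaffnew, which add control variates on top of this basic scheme, and all problem sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_sgd(client_data, rounds=50, local_steps=10, lr=0.1):
    """Local update method for federated least squares: each client runs
    `local_steps` gradient steps on its own data, then the server averages
    the client models (one communication round per outer iteration)."""
    dim = client_data[0][0].shape[1]
    w = np.zeros(dim)
    for _ in range(rounds):
        local_models = []
        for X, y in client_data:
            w_local = w.copy()
            for _ in range(local_steps):
                grad = X.T @ (X @ w_local - y) / len(y)
                w_local -= lr * grad
            local_models.append(w_local)
        w = np.mean(local_models, axis=0)   # communication: average once per round
    return w

# toy heterogeneous clients sharing one underlying model
w_true = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(4):
    X = rng.standard_normal((40, 3)) + rng.standard_normal(3)  # client-specific shift
    y = X @ w_true + 0.01 * rng.standard_normal(40)
    clients.append((X, y))
w_hat = local_sgd(clients)
```

With 10 local steps per round, the server communicates 50 times instead of 500, which is the communication saving the talk analyses.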


Yann Issartel

Date: March 15, 2022 (Tuesday) at 11.00 (Online seminar)

Affiliation:   CREST-ENSAE
Webpage: Link
Title:  Seriation and 1D-localization in latent space models
Abstract: Motivated by applications in archeology for relative dating of objects, or in 2D-tomography for angular synchronization, we consider the problem of statistical seriation where one seeks to reorder a noisy disordered matrix of pairwise affinities. This problem can be recast in the powerful latent space terminology, where the affinity between a pair of items is modeled as a noisy observation of a function f(x_i,x_j) of the latent positions x_i, x_j of the two items on a one-dimensional space. This reformulation naturally leads to the problem of estimating the positions in the latent space. Under non-parametric assumptions on the affinity function f, we introduce a procedure that provably localizes all the latent positions with a maximum error of the order of the square root of log(n)/n. This rate is proven to be minimax optimal. Computationally efficient procedures are also analyzed, under some more restrictive assumptions. Our general results can be instantiated to the original problem of statistical seriation, leading to new bounds for the maximum error in the ordering.
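To illustrate the seriation problem, here is a small numpy sketch of the classical spectral heuristic (ordering items by the Fiedler vector of the affinity matrix); the affinity function and noise level are illustrative, and this is not the estimator analysed in the talk.

```python
import numpy as np

rng = np.random.default_rng(4)

def spectral_seriation(A):
    """Order items by the Fiedler vector (second-smallest eigenvector of
    the graph Laplacian) of the affinity matrix A - a classical spectral
    heuristic for seriation."""
    L = np.diag(A.sum(axis=1)) - A
    vals, vecs = np.linalg.eigh(L)   # eigh sorts eigenvalues ascending
    fiedler = vecs[:, 1]
    return np.argsort(fiedler)

# latent positions on [0,1]; affinity f(x_i, x_j) decreases with |x_i - x_j|
n = 60
x = rng.uniform(0, 1, n)
A = np.exp(-5 * np.abs(x[:, None] - x[None, :])) + 0.02 * rng.standard_normal((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)
order = spectral_seriation(A)
recovered = x[order]   # should be (close to) monotone, up to a global flip
```

The recovered sequence of latent positions is monotone up to noise and up to reversal, which is the intrinsic ambiguity of the seriation problem.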

Badih Ghattas

Date: Jan 25, 2022 (Tuesday) at 11.00 (Online seminar)

Affiliation:  Université d’Aix Marseille
Webpage: Link
Title: Medical image segmentation: issues and challenges

Abstract: Medical image segmentation is at first view a simple supervised classification task. A model may be learnt to predict, for each pixel of an input image X, a label Y from among J labels. A classical example is organ segmentation, the liver for instance: pixels corresponding to the liver have label 1 and the others have label -1. Labeling images needs a lot of time, often around 25 minutes per organ and per patient when the images are MRI or CT scans. For a long time researchers did not have labeled data and used unsupervised classification, adapting classical approaches like k-means to the context. A great effort is now being made to label images, and more and more labeled datasets are available. The most widely used approaches for supervised segmentation are without doubt deep neural networks. I will describe the main architectures used in this domain, showing the issues and challenges of this modelling through several recent works done in collaboration with radiologists from the Timone hospital in Marseille.
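As a small concrete complement to the per-pixel labeling framing above, the sketch below computes the Dice overlap between a predicted and a ground-truth binary mask, a standard evaluation metric for organ segmentation; the toy masks are of course illustrative.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap 2|P & T| / (|P| + |T|) between two binary masks,
    a standard metric for evaluating organ segmentation."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2 * inter / (pred.sum() + truth.sum())

truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1            # "organ" pixels labelled 1, background elsewhere
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 2:6] = 1             # prediction shifted down by one pixel
d = dice_score(pred, truth)    # 2*12 / (16 + 16) = 0.75
```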


Etienne Kronert

Date: Dec 7, 2021 (Tuesday) at 11.00 (Internal online seminar)
Affiliation:   Inria Lille
Webpage:
Title: Detecting anomalies in time series using reproducing kernels
Abstract: Computer systems play a crucial role in our modern world, which makes the consequences of attacks or failures on such systems critical. To detect these undesirable events and limit their impact, anomaly detection is a commonly used solution: anomalies are detected as observations that differ from the others. A great challenge is to adapt to normal changes; another difficulty, rarely dealt with, is the estimation of the tail distribution. In this presentation we will describe the use of kernel-based breakpoint detection methods that can adapt to all types of changes, and show how the classical kernel density estimator can be adapted to estimate the tail distribution more accurately.
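A minimal numpy sketch of the kind of kernel two-sample scan underlying kernel breakpoint detection; the Gaussian kernel, bandwidth, and window size are illustrative choices, not the method of the talk.

```python
import numpy as np

rng = np.random.default_rng(5)

def mmd_stat(x, y, bw=1.0):
    """Biased MMD^2 between 1-d samples x and y with a Gaussian kernel -
    the type of kernel two-sample statistic used for breakpoint detection."""
    def k(a, b):
        return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * bw ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def scan_change_point(series, window=50):
    """Slide a candidate split through the series and return the index
    where the MMD between the two adjacent windows is largest."""
    scores = []
    for t in range(window, len(series) - window):
        scores.append(mmd_stat(series[t - window:t], series[t:t + window]))
    return window + int(np.argmax(scores))

# a mean shift at index 150; the scan should locate it
series = np.concatenate([rng.normal(0, 1, 150), rng.normal(3, 1, 150)])
t_hat = scan_change_point(series)
```

Because the statistic is kernel-based, the same scan also reacts to changes in variance or shape, not only in the mean, which is what makes these methods adapt to many types of change.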
Guillaume Braun
Date: Nov 30, 2021 (Tuesday) at 11.00 (Online seminar)
Affiliation:   Inria Lille
Webpage: Link
Title:  An iterative clustering algorithm for the Contextual Stochastic Block Model with optimality guarantees
Abstract: Real-world networks often come with side information that can help to improve the performance of network analysis tasks such as clustering. Despite a large number of empirical and theoretical studies conducted on network clustering methods during the past decade, the added value of side information and the methods used to incorporate it optimally in clustering algorithms are relatively less understood. We propose a new iterative algorithm to cluster networks with side information for nodes (in the form of covariates) and show that our algorithm is optimal under the Contextual Symmetric Stochastic Block Model.
Our algorithm can be applied to general Contextual Stochastic Block Models and avoids hyperparameter tuning in contrast to previously proposed methods. We confirm our theoretical results on synthetic data experiments where our algorithm significantly outperforms other methods, and show that it can also be applied to signed graphs. Finally we demonstrate the practical interest of our method on real data.
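For intuition only, here is a toy spectral baseline that combines the adjacency matrix with a covariate similarity matrix on a simulated two-block contextual SBM; it is not the iterative algorithm of the paper, and the mixing weight gamma is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(6)

def contextual_spectral(A, X, gamma=1.0):
    """Toy spectral baseline for a network with node covariates: add a
    covariate similarity matrix to the adjacency matrix, project out the
    constant direction, and split along the leading eigenvector."""
    S = A + gamma * (X @ X.T) / X.shape[1]
    n = S.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # remove the constant direction
    vals, vecs = np.linalg.eigh(J @ S @ J)
    return (vecs[:, -1] > 0).astype(int)

# two balanced communities: denser within-block edges plus shifted covariates
n, d = 100, 50
z = np.array([0] * 50 + [1] * 50)
P = np.where(z[:, None] == z[None, :], 0.30, 0.05)
U = np.triu((rng.uniform(size=(n, n)) < P).astype(float), 1)
A = U + U.T
mu = np.where(z[:, None] == 0, 0.5, -0.5) * np.ones((n, d))
X = mu + rng.standard_normal((n, d))
pred = contextual_spectral(A, X)
acc = max((pred == z).mean(), (pred != z).mean())   # accuracy up to label flip
```

The point of the toy example is that graph and covariates each carry partial information about the communities, and combining them sharpens the recovered partition.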
Dorina Thanou
Date: Nov 23, 2021 (Tuesday) at 11.00 (Online seminar)
Affiliation:   EPFL, Switzerland
Webpage: Link
Title:  Learning over graphs: A signal processing complement
Abstract: The effective representation, processing, analysis, and visualization of large-scale structured data, especially those related to complex domains such as networks and graphs, are among the key questions in modern machine learning. Graph signal processing (GSP), a vibrant branch of signal processing models and algorithms that aims at handling data supported on graphs, opens new paths of research to address this challenge. In this talk, we will highlight how some GSP concepts and tools, such as graph filters and transforms, lead to the development of novel graph-based machine learning algorithms for representation learning and topology inference. Finally, we will show some illustrative applications in computer vision and healthcare.
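To make the notion of a graph filter concrete, here is a minimal numpy sketch of low-pass filtering in the graph Fourier domain; the path graph, the test signal, and the frequency response are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)

def graph_lowpass(A, signal, alpha=0.9):
    """A simple graph filter in the GSP sense: attenuate the high graph
    frequencies (large Laplacian eigenvalues) of a signal on the nodes."""
    L = np.diag(A.sum(axis=1)) - A            # combinatorial graph Laplacian
    vals, vecs = np.linalg.eigh(L)            # graph Fourier basis
    h = 1.0 / (1.0 + alpha * vals)            # low-pass frequency response
    return vecs @ (h * (vecs.T @ signal))     # filter in the spectral domain

# path graph: a smooth signal plus noise, then low-pass filtered
n = 40
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
smooth = np.sin(np.linspace(0, np.pi, n))
noisy = smooth + 0.3 * rng.standard_normal(n)
filtered = graph_lowpass(A, noisy)   # closer to the smooth signal than noisy
```

The eigenvectors of the Laplacian play the role of Fourier modes on the graph, so the same three lines implement denoising on any graph, not just the path.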
Romain Couillet
Date: Nov 9, 2021 (Tuesday) at 11.00 (Online seminar)
Affiliation:   University Grenoble-Alpes
Webpage: Link
Title:  Random matrices could steer the dangerous path taken by AI but even that is likely not enough
Abstract: Like most of our technologies today, AI dramatically increases the world’s carbon footprint, thereby strengthening the severity of the coming downfall of life on the planet. In this talk, I propose that recent advances in large dimensional mathematics, and especially random matrices, could help AI engage in the future economic degrowth. This being said, even those mitigating solutions are only temporary in regards to the imminence of collapse, which calls for drastically more decisive changes in the whole research world. I will discuss these aspects in a second part and hope to leave ample time for discussion.
Eglantine Karlé
Date: Oct 26, 2021 (Tuesday) at 11.00 (Online seminar)
Affiliation:  Inria Lille
Webpage:
Title:  Dynamic Ranking with the BTL Model: A Nearest Neighbor based Rank Centrality Method

Abstract:  Many applications such as recommendation systems or sports tournaments involve pairwise comparisons within a collection of $n$ items, the goal being to aggregate the binary outcomes of the comparisons in order to recover the latent strength and/or global ranking of the items. In recent years, this problem has received significant interest from a theoretical perspective with a number of methods being proposed, along with associated statistical guarantees under the assumption of a suitable generative model.

While these results typically treat the pairwise comparisons as one comparison graph $G$, in many applications (such as the outcomes of soccer matches during a tournament) the nature of the pairwise outcomes can evolve with time. Theoretical results for such a dynamic setting are relatively limited compared to the aforementioned static setting. We study an extension of the classic BTL (Bradley-Terry-Luce) model for the static setting to our dynamic setup, under the assumption that the probabilities of the pairwise outcomes evolve smoothly over the time domain $[0,1]$. Given a sequence of comparison graphs $(G_{t'})_{t' \in \mathcal{T}}$ on a regular grid $\mathcal{T} \subset [0,1]$, we aim at recovering the latent strengths of the items $w_t^* \in \mathbb{R}^n$ at any time $t \in [0,1]$. To this end, we adapt the Rank Centrality method, a popular spectral approach for ranking in the static case, by locally averaging the available data on a suitable neighborhood of $t$. When $(G_{t'})_{t' \in \mathcal{T}}$ is a sequence of Erdős-Rényi graphs, we provide non-asymptotic $\ell_2$ and $\ell_{\infty}$ error bounds for estimating $w_t^*$, which in particular establish the consistency of this method in terms of $n$ and the grid size $|\mathcal{T}|$. We also complement our theoretical analysis with experiments on real and synthetic data. (joint work with Hemant Tyagi)
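For the static building block, here is a minimal numpy sketch of Rank Centrality on three items; the win counts are toy numbers chosen to match BTL strengths (3, 2, 1), and the dynamic, locally-averaged version of the talk is not implemented here.

```python
import numpy as np

def rank_centrality(wins):
    """Static Rank Centrality: build a Markov chain whose i -> j transition
    is proportional to the empirical fraction of comparisons of i and j won
    by j; its stationary distribution estimates the (normalised) BTL
    strengths. wins[i][j] = number of comparisons of i and j won by i."""
    W = np.asarray(wins, dtype=float)
    n = W.shape[0]
    total = W + W.T
    frac = W.T / np.maximum(total, 1)     # rate at which j beats i
    P = frac / n                          # scale so each row sums to < 1
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))
    pi = np.full(n, 1.0 / n)
    for _ in range(1000):                 # power iteration for stationarity
        pi = pi @ P
    return pi / pi.sum()

# three items with BTL strengths w = (3, 2, 1): item i beats j with
# probability w_i / (w_i + w_j); the counts below match those rates
wins = [[0, 60, 75],
        [40, 0, 66],
        [25, 34, 0]]
pi = rank_centrality(wins)   # roughly (0.50, 0.33, 0.17)
```

The dynamic method of the talk applies the same spectral step, but to comparison data locally averaged over a neighborhood of the query time t.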

Marc Lelarge
Date: Oct 5, 2021 (Tuesday) at 11.00 (Online seminar)
Affiliation:  Inria Paris
Webpage: Link
Title:  Expressive Power of Invariant and Equivariant Graph Neural Networks
Abstract: Various classes of Graph Neural Networks (GNN) have been proposed and shown to be successful in a wide range of applications with graph structured data. In this talk, we propose a theoretical framework able to compare the expressive power of these GNN architectures. The current universality theorems only apply to intractable classes of GNNs. Here, we prove the first approximation guarantees for practical GNNs, paving the way for a better understanding of their generalization. (joint work with Waiss Azizian)
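As a small illustration of the equivariance property at the heart of this line of work, the sketch below checks that a basic message-passing layer commutes with node relabelling; the layer and the weights are arbitrary toy choices, not a specific architecture from the talk.

```python
import numpy as np

rng = np.random.default_rng(8)

def gnn_layer(A, H, W1, W2):
    """One message-passing layer: h_i' = relu(W1^T h_i + W2^T sum_j A_ij h_j).
    Such layers are permutation-equivariant: relabelling the nodes permutes
    the output node features in the same way."""
    return np.maximum(H @ W1 + A @ H @ W2, 0.0)

n, d = 6, 4
U = np.triu(rng.integers(0, 2, (n, n)), 1)
A = U + U.T                                  # random undirected graph
H = rng.standard_normal((n, d))              # input node features
W1, W2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))

perm = rng.permutation(n)
P = np.eye(n)[perm]                          # permutation matrix
out = gnn_layer(A, H, W1, W2)
out_perm = gnn_layer(P @ A @ P.T, P @ H, W1, W2)
# equivariance: permuting the input graph permutes the output features
```

The universality questions in the talk ask which functions on graphs can be approximated by stacking such equivariant (or invariant, after pooling) layers.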
