Modal Seminar (2021-2022)

Usual day: Tuesday at 11.00.

Place: Inria Lille – Nord Europe.

How to get there: en français / in English.

Organizers: Hemant Tyagi

Calendar feed: iCalendar (hosted by the seminars platform of the University of Lille)

Most slides are available: check past sessions and archives.

Archives: 2020-2021, 2019-2020, 2018-2019, 2017-2018, 2016-2017, 2015-2016, 2014-2015, 2013-2014

Upcoming

Guillaume Braun
Date: Nov 30, 2021 (Tuesday) at 11.00 (Online seminar)
Affiliation:   Inria Lille
Webpage: Link
Title:  An iterative clustering algorithm for the Contextual Stochastic Block Model with optimality guarantees
Abstract: Real-world networks often come with side information that can help to improve the performance of network analysis tasks such as clustering. Despite a large number of empirical and theoretical studies conducted on network clustering methods during the past decade, the added value of side information and the methods used to incorporate it optimally in clustering algorithms are relatively less understood. We propose a new iterative algorithm to cluster networks with side information for nodes (in the form of covariates) and show that our algorithm is optimal under the Contextual Symmetric Stochastic Block Model.
Our algorithm can be applied to general Contextual Stochastic Block Models and avoids hyperparameter tuning in contrast to previously proposed methods. We confirm our theoretical results on synthetic data experiments where our algorithm significantly outperforms other methods, and show that it can also be applied to signed graphs. Finally we demonstrate the practical interest of our method on real data.
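For readers new to the model, the following is a minimal sketch (in Python/NumPy) of how data from a two-community contextual symmetric SBM can be generated: a graph drawn from a symmetric SBM together with Gaussian node covariates whose mean depends on the community. The parameter values and the function name sample_contextual_sbm are illustrative choices, not taken from the talk.

    import numpy as np

    def sample_contextual_sbm(n=200, p=0.08, q=0.02, mu=1.0, d=5, seed=0):
        """Sample a 2-community contextual symmetric SBM (illustrative parameters).

        Returns the adjacency matrix A, node covariates X and true labels z."""
        rng = np.random.default_rng(seed)
        z = rng.choice([-1, 1], size=n)                   # hidden community labels
        probs = np.where(np.equal.outer(z, z), p, q)      # p within, q across communities
        A = (rng.random((n, n)) < probs).astype(float)
        A = np.triu(A, 1); A = A + A.T                    # symmetric, no self-loops
        v = rng.standard_normal(d); v /= np.linalg.norm(v)
        X = mu * np.outer(z, v) + rng.standard_normal((n, d))  # community-dependent covariates
        return A, X, z

A natural baseline clusters on the graph alone or on the covariates alone; the interest of contextual methods such as the one presented here is to combine both sources of information.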
Dorina Thanou
Date: Nov 23, 2021 (Tuesday) at 11.00 (Online seminar)
Affiliation:   EPFL, Switzerland
Webpage: Link
Title:  Learning over graphs: A signal processing complement
Abstract: The effective representation, processing, analysis, and visualization of large-scale structured data, especially data related to complex domains such as networks and graphs, is one of the key questions in modern machine learning. Graph signal processing (GSP), a vibrant branch of signal processing models and algorithms that aims at handling data supported on graphs, opens new paths of research to address this challenge. In this talk, we will highlight how some GSP concepts and tools, such as graph filters and transforms, lead to the development of novel graph-based machine learning algorithms for representation learning and topology inference. Finally, we will show some illustrative applications in computer vision and healthcare.
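As a concrete illustration of one of the GSP tools mentioned above, the sketch below applies a polynomial graph filter, i.e. a polynomial of the normalized graph Laplacian, to a signal living on the nodes of a graph. The filter coefficients are an arbitrary illustrative choice and are not taken from the talk.

    import numpy as np

    def polynomial_graph_filter(A, x, coeffs=(1.0, -0.5, 0.1)):
        """Apply h(L) = sum_k coeffs[k] * L^k to the graph signal x, where L is
        the symmetric normalized Laplacian of the adjacency matrix A
        (assumes no isolated nodes)."""
        deg = A.sum(axis=1)
        d_inv_sqrt = 1.0 / np.sqrt(deg)
        L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
        y, Lk_x = np.zeros(x.shape, dtype=float), x.astype(float)
        for c in coeffs:
            y += c * Lk_x        # accumulate coeffs[k] * L^k x
            Lk_x = L @ Lk_x      # next power of L applied to x
        return y

Such filters are a basic building block behind many graph-based representation learning methods, including spectral variants of graph neural networks.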
Romain Couillet
Date: Nov 9, 2021 (Tuesday) at 11.00 (Online seminar)
Affiliation:   University Grenoble-Alpes
Webpage: Link
Title:  Random matrices could steer the dangerous path taken by AI but even that is likely not enough
Abstract: Like most of our technologies today, AI dramatically increases the world’s carbon footprint, thereby strengthening the severity of the coming downfall of life on the planet. In this talk, I propose that recent advances in large dimensional mathematics, and especially random matrices, could help AI engage in the future economic degrowth. This being said, even those mitigating solutions are only temporary with regard to the imminence of collapse, which calls for drastically more decisive changes in the whole research world. I will discuss these aspects in a second part and hope to leave ample time for discussion.
Eglantine Karlé
Date: Oct 26, 2021 (Tuesday) at 11.00 (Online seminar)
Affiliation:  Inria Lille
Webpage:
Title:  Dynamic Ranking with the BTL Model: A Nearest Neighbor based Rank Centrality Method

Abstract:  Many applications such as recommendation systems or sports tournaments involve pairwise comparisons within a collection of $n$ items, the goal being to aggregate the binary outcomes of the comparisons in order to recover the latent strength and/or global ranking of the items. In recent years, this problem has received significant interest from a theoretical perspective with a number of methods being proposed, along with associated statistical guarantees under the assumption of a suitable generative model.

While these results typically collect the pairwise comparisons as one comparison graph $G$, in many applications, such as the outcomes of soccer matches during a tournament, the nature of pairwise outcomes can evolve with time. Theoretical results for such a dynamic setting are relatively limited compared to the aforementioned static setting. We study an extension of the classic BTL (Bradley-Terry-Luce) model for the static setting to our dynamic setup under the assumption that the probabilities of the pairwise outcomes evolve smoothly over the time domain $[0,1]$. Given a sequence of comparison graphs $(G_{t'})_{t' \in \mathcal{T}}$ on a regular grid $\mathcal{T} \subset [0,1]$, we aim at recovering the latent strengths of the items $w_t^* \in \mathbb{R}^n$ at any time $t \in [0,1]$. To this end, we adapt the Rank Centrality method, a popular spectral approach for ranking in the static case, by locally averaging the available data on a suitable neighborhood of $t$. When $(G_{t'})_{t' \in \mathcal{T}}$ is a sequence of Erdős-Rényi graphs, we provide non-asymptotic $\ell_2$ and $\ell_{\infty}$ error bounds for estimating $w_t^*$, which in particular establish the consistency of this method in terms of $n$ and the grid size $|\mathcal{T}|$. We also complement our theoretical analysis with experiments on real and synthetic data. (joint work with Hemant Tyagi)
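To make the construction concrete, here is a minimal sketch of the kind of estimator described above: Rank Centrality run on comparison data that has been averaged over a neighborhood of the query time $t$. The input format (a dictionary mapping grid times to $n \times n$ matrices of observed win fractions) and the window width delta are illustrative simplifications, not the exact estimator or tuning from the paper.

    import numpy as np

    def dynamic_rank_centrality(win_frac, t, delta=0.1):
        """Estimate item strengths at time t via Rank Centrality applied to
        comparison outcomes averaged over the grid times within delta of t.

        win_frac: dict {t_prime: (n, n) array}, entry [i, j] = fraction of the
        comparisons at time t_prime in which item j beat item i (zero where the
        pair was not compared)."""
        window = [M for t_prime, M in win_frac.items() if abs(t_prime - t) <= delta]
        Y = np.mean(window, axis=0)                  # locally averaged outcomes
        n = Y.shape[0]
        P = Y / n                                    # off-diagonal transition probabilities
        np.fill_diagonal(P, 0.0)
        np.fill_diagonal(P, 1.0 - P.sum(axis=1))     # make each row sum to one
        evals, evecs = np.linalg.eig(P.T)            # stationary distribution of the chain
        pi = np.abs(np.real(evecs[:, np.argmax(np.real(evals))]))
        return pi / pi.sum()                         # estimated strengths, up to scale

In the static case this reduces to the usual Rank Centrality spectral ranking; the local averaging is what exploits the smoothness of the pairwise probabilities over $[0,1]$.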

Marc Lelarge
Date: Oct 5, 2021 (Tuesday) at 11.00 (Online seminar)
Affiliation:  Inria Paris
Webpage: Link
Title:  Expressive Power of Invariant and Equivariant Graph Neural Networks
Abstract: Various classes of Graph Neural Networks (GNN) have been proposed and shown to be successful in a wide range of applications with graph structured data. In this talk, we propose a theoretical framework able to compare the expressive power of these GNN architectures. The current universality theorems only apply to intractable classes of GNNs. Here, we prove the first approximation guarantees for practical GNNs, paving the way for a better understanding of their generalization. (joint work with Waiss Azizian)
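To fix ideas about the objects whose expressive power is being compared, the sketch below implements one message-passing layer followed by a sum readout: the layer is permutation equivariant in the node ordering, and the readout makes the graph-level representation permutation invariant. The single-layer architecture and the weight shapes are illustrative and do not correspond to the specific GNN classes analyzed in the talk.

    import numpy as np

    def mpnn_layer(A, H, W_self, W_neigh):
        """One message-passing layer: each node combines its own features with
        the sum of its neighbors' features (permutation equivariant)."""
        return np.maximum(H @ W_self + A @ H @ W_neigh, 0.0)  # ReLU activation

    def graph_readout(H):
        """Sum over nodes, which makes the graph representation permutation invariant."""
        return H.sum(axis=0)

    # Illustrative usage on a random graph with 4-dimensional node features.
    rng = np.random.default_rng(0)
    n, d, d_out = 10, 4, 8
    A = rng.integers(0, 2, size=(n, n)); A = np.triu(A, 1); A = A + A.T
    H = rng.standard_normal((n, d))
    W_self, W_neigh = rng.standard_normal((d, d_out)), rng.standard_normal((d, d_out))
    embedding = graph_readout(mpnn_layer(A, H, W_self, W_neigh))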
