Seminars
For any questions, please contact Julie Mordacq.
2025
Monday, March 25th at 11AM (CEST)
Speaker: Johannes Lutzeyer (LIX, Ecole Polytechnique)
Title: An Analysis of Virtual Nodes in Graph Neural Networks
Abstract: In this talk, I will give an accessible introduction to Graph Neural Networks (GNNs), Graph Transformers and how virtual nodes can be incorporated into GNNs. I will then share recent work (ICLR’25), in which we attempt to understand the impact of the addition of virtual nodes to GNNs. I will (re-)introduce you to the concepts of 1) oversmoothing, 2) oversquashing and 3) node representation sensitivity analysis and we shall then observe GNNs with virtual nodes through these three lenses. This will allow us to compare GNNs with virtual nodes to both standard GNNs and Graph Transformers. Finally, I will aim to provide a quick review of the different ideas we have explored in the context of GNNs in the past years in the hope that some of our ideas are of use to the GeomeriX team members.
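The mechanics of a virtual node are easy to see in a toy message-passing example. The sketch below is purely illustrative (a hypothetical mean-aggregation scheme, not the speaker's model): a virtual node connected to every vertex shortens all information paths to two hops, which is exactly why it interacts with oversquashing.

```python
# Toy sketch (not the speaker's implementation): one mean-aggregation
# message-passing step on a path graph, with and without a virtual node.
def mp_step(features, neighbors):
    """Each node averages its own feature with those of its neighbors."""
    return [
        sum(features[j] for j in [i] + neighbors[i]) / (1 + len(neighbors[i]))
        for i in range(len(features))
    ]

def add_virtual_node(neighbors):
    """Append a virtual node connected to every real node."""
    n = len(neighbors)
    return [nbrs + [n] for nbrs in neighbors] + [list(range(n))]

# Path graph 0-1-2-3-4; only node 0 carries a signal.
neighbors = [[1], [0, 2], [1, 3], [2, 4], [3]]
feats = [1.0, 0.0, 0.0, 0.0, 0.0]

plain = mp_step(mp_step(feats, neighbors), neighbors)
vn = mp_step(mp_step(feats + [0.0], add_virtual_node(neighbors)),
             add_virtual_node(neighbors))

print(plain[4])   # still 0.0: node 4 is 4 hops away, unreachable in 2 steps
print(vn[4] > 0)  # True: the virtual node relays the signal in 2 steps
```

After two rounds, the plain GNN has moved nothing from node 0 to node 4, while the virtual node has already relayed a (heavily averaged) fraction of the signal.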
Monday, January 27th at 11AM (CEST)
Speaker: David Loiseaux (INRIA Saclay)
Title: Multiparameter Topological Persistence for Machine Learning
Abstract: Recent machine learning advancements have led to the development of huge datasets. Due to their sheer complexity and high dimensionality, such datasets can be very hard to handle in practice, and the need for dedicated techniques to extract meaningful information has become a critical topic. In this talk, I will propose an approach based on algebraic topology, Topological Data Analysis, to derive rich and interpretable descriptors from datasets conveying geometrical or topological information. These descriptors enjoy various desirable mathematical properties, such as stability or robustness to noise. They can naturally be applied to various kinds of data, such as point clouds, immunofluorescence images, nanomaterial design, time series, protein structures, etc. To be a bit more specific, I will focus on the multiparameter topological persistence case, which enables us to track the persistence of topological structures along multiple parameters (e.g., geometry together with the concentration of the sampling measure, or multiple protein markers at the same time). To do so, I will present a few multiparameter topological persistence invariants that my co-authors and I developed during my PhD, and explain some strategies for using them in practice in a statistical learning context.
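As background for the multiparameter setting discussed in the talk, the one-parameter case can be sketched in a few lines. The toy code below (an illustrative union-find implementation, not the speaker's software) computes the degree-0 persistence barcode of the sublevel-set filtration of a function sampled on a path: components are born at local minima and die at merges, with the elder rule keeping the older component alive.

```python
# One-parameter, 0-dimensional sublevel-set persistence on a path graph.
def sublevel_h0_barcode(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    parent, birth, bars = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in order:                 # add vertices in increasing value order
        parent[i] = i
        birth[i] = values[i]
        for j in (i - 1, i + 1):    # neighbors on the path graph
            if j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # elder rule: the younger (later-born) component dies here
                old, young = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                if birth[young] < values[i]:   # skip zero-length bars
                    bars.append((birth[young], values[i]))
                parent[young] = old
    roots = {find(i) for i in parent}
    bars.extend((birth[r], float("inf")) for r in roots)
    return sorted(bars)

# Two local minima (values 0 and 1) merging at the peak of value 3.
print(sublevel_h0_barcode([0.0, 3.0, 1.0]))  # [(0.0, inf), (1.0, 3.0)]
```

The talk's multiparameter setting replaces the single filtration value per simplex with a vector of values, which is where the tractable one-parameter picture above breaks down.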
2024
Wednesday, October 16th 2024 at 11AM (CEST)
Speaker: Marzieh Eidi (Max Planck Institute for Mathematics in the Sciences)
Title: Topology as Fluid Geometry: from Theory to Applications
Abstract: In this talk, I would like to present a view of how random walks can serve as the bridge between quantitative features of data, which are determined by geometric tools such as curvature, and qualitative features that can be detected by topological methods. After presenting some of the main ideas of geometric and topological data analysis via random walks on graphs, I will talk about the main challenges of extending these ideas to higher dimensions and I will discuss the known results as well as open problems in both theory and applications.
Thursday, October 10th 2024 at 11:15AM (CEST)
Speaker: Qixing Huang (UT Austin)
Title: Geometric Regularizations for 3D Shape Generation
Abstract: Generative models, which map a latent parameter space to instances in an ambient space, enjoy various applications in 3D Vision and related domains. A standard scheme of these models is probabilistic, which aligns the induced ambient distribution of a generative model from a prior distribution of the latent space with the empirical ambient distribution of training instances. While this paradigm has proven to be quite successful on images, its current applications in 3D generation encounter fundamental challenges in the limited training data and generalization behavior. The key difference between image generation and shape generation is that 3D shapes possess various priors in geometry, topology, and physical properties. Existing probabilistic 3D generative approaches do not preserve these desired properties, resulting in synthesized shapes with various types of distortions. In this talk, I will discuss recent work that seeks to establish a novel geometric framework for learning shape generators. The key idea is to model various geometric, physical, and topological priors of 3D shapes as suitable regularization losses by developing computational tools in differential geometry and computational topology. We will discuss the applications in deformable shape generation, latent space design, joint shape matching, and 3D man-made shape generation. This research is supported by NSF IIS 2413161.
Thursday, September 19th 2024 at 2PM (CEST)
Speaker: Omri Azencot and Ilan Naiman
Title: Representation Learning and Generative Modeling using Koopman-based Approaches
Abstract: The field of dynamical systems is undergoing a transformation driven by the integration of mathematical tools and algorithms from modern computing and data science. Traditional first-principles derivations and asymptotic reductions are increasingly being replaced by data-driven approaches that model systems using operator-theoretic or probabilistic frameworks. Over the past decade, Koopman spectral theory has gained prominence, offering a way to represent nonlinear dynamics through an infinite-dimensional linear operator acting on the space of all possible system measurements. This linearization of nonlinear dynamics holds great promise for enabling prediction, estimation, and control of nonlinear systems using methods traditionally applied to linear systems. In this talk, we will discuss background information on Koopman-based theory and practice, and we will present two recent works on representation learning and generative modeling, accepted to ICLR'23 and ICLR'24, respectively. The first work advances representation learning by achieving multifactor disentanglement, going beyond the usual two-factor approach. Namely, sequences are decomposed into multiple time-varying and time-invariant factors. Using a linear latent space assumption with Koopman autoencoder models, the authors introduce a spectral loss term to ensure disentanglement. The resulting unsupervised model is simple, effective, and enables new capabilities like swapping and transferring factors between characters. It outperforms other unsupervised methods and competes well with weakly- and self-supervised approaches. The second work presents Koopman VAE (KoVAE), a generative framework for realistic time series data that leverages VAE robustness and addresses GAN limitations. KoVAE's novel latent prior, inspired by Koopman theory, uses a linear map for dynamic representation, allowing domain knowledge integration and stability analysis. It outperforms leading GANs and VAEs on benchmarks, improving metrics even with irregular data. Code and data can be obtained via https://github.com/azencot-group.
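The core Koopman idea, trading nonlinear dynamics for a linear operator on a space of observables, can be illustrated on a classic toy system. The sketch below (an illustration of the lifting trick, unrelated to the KoVAE code) lifts a quadratic recurrence into three observables on which the dynamics are exactly linear.

```python
# Nonlinear system: x1' = A*x1, x2' = B*x2 + C*x1**2.
# Lifting with the extra observable x1**2 makes the dynamics exactly linear.
A, B, C = 0.9, 0.5, 1.0

def nonlinear_step(x1, x2):
    return A * x1, B * x2 + C * x1 ** 2

def lifted_step(z):
    # z = (x1, x2, x1**2); on z the dynamics are the linear map K:
    K = [(A, 0.0, 0.0),
         (0.0, B, C),
         (0.0, 0.0, A * A)]       # (A*x1)**2 = A**2 * x1**2
    return tuple(sum(k * zi for k, zi in zip(row, z)) for row in K)

x = (2.0, 1.0)
z = (x[0], x[1], x[0] ** 2)
for _ in range(5):                # the two evolutions stay in lockstep
    x = nonlinear_step(*x)
    z = lifted_step(z)

print(abs(z[0] - x[0]) < 1e-9 and abs(z[1] - x[1]) < 1e-9)  # True
```

Real systems rarely admit such a finite exact lifting, which is why learned encoders (as in Koopman autoencoders) are used to approximate the observables.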
Thursday, September 12th 2024 at 2PM (CEST)
Speaker: Gautam Pai (University of Twente)
Title: Optimal Transport on the Lie Group of Roto-Translation
Abstract: The roto-translation group SE(2) has been of active interest in image analysis due to methods that lift image data to multi-orientation representations defined in this Lie group. This has led to impactful applications of crossing-preserving flows for image denoising, geodesic tracking, and roto-translation equivariant deep learning. In this talk, I will present a computational framework for optimal transport over Lie groups, with a special focus on SE(2). I will describe several theoretical aspects such as the non-optimality of group actions as transport maps, invariance and equivariance of optimal transport, and the quality of the entropic-regularized optimal transport plan under geodesic distance approximations. Finally, I will illustrate a Sinkhorn-like algorithm that can be efficiently implemented using fast and accurate distance approximations of the Lie group and GPU-friendly group convolutions. We report advancements with experiments on 1) 2D shape/image barycenters, 2) interpolation of planar orientation fields, and 3) Wasserstein gradient flows on SE(2). We observe that our framework of lifting images to SE(2) and optimal transport with left-invariant anisotropic metrics leads to equivariant transport along dominant contours and salient line structures in the image, and to more meaningful interpolations than their counterparts on R^2.
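For readers unfamiliar with the Sinkhorn iteration mentioned above, here is a minimal generic version on a toy 1-D problem. The talk's contribution lies in running such scalings on SE(2) with group convolutions and geodesic distance approximations, none of which appear in this sketch; this is just the standard entropic-regularization loop.

```python
import math

# Generic Sinkhorn iteration: alternately rescale rows and columns of the
# Gibbs kernel K = exp(-cost/eps) until the plan has the prescribed marginals.
def sinkhorn(a, b, cost, eps=0.1, iters=200):
    K = [[math.exp(-c / eps) for c in row] for row in cost]
    u = [1.0] * len(a)
    v = [1.0] * len(b)
    for _ in range(iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(len(b)))
             for i in range(len(a))]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(len(a)))
             for j in range(len(b))]
    # transport plan: P_ij = u_i * K_ij * v_j
    return [[u[i] * K[i][j] * v[j] for j in range(len(b))]
            for i in range(len(a))]

a = [0.5, 0.5]                       # source histogram
b = [0.25, 0.75]                     # target histogram
cost = [[0.0, 1.0], [1.0, 0.0]]      # moving mass between bins costs 1
plan = sinkhorn(a, b, cost)
row_sums = [sum(row) for row in plan]
col_sums = [sum(plan[i][j] for i in range(2)) for j in range(2)]
print(row_sums, col_sums)  # each close to a and b respectively
```

On a group like SE(2), the expensive part is the matrix-vector products with K, which the speaker's framework replaces by fast group convolutions.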
Thursday, July 18th 2024 at 2PM
Speaker: Noam Aigerman (University of Montreal, MILA)
Title: Manipulating, Deforming and Controlling Geometry with Machine Learning
Abstract: Production of 3D content relies on the ability to manipulate 3D objects by “deforming” them, i.e., moving around 3D points on the object: each frame in an animation sequence is a deformation of a base model; alternatively, generation of 3D shapes often relies on “sculpting” the object from other shapes through deformation, or otherwise adding additional details to an existing object. Thus, enabling neural networks to directly deform 3D objects can automate and improve such applications, making learning of deformations a heavily-researched area. However, devising learning-based methods to accurately and robustly produce deformations that meet practical application needs is a challenging and unsolved task, especially when considering less-explicit 3D representations, such as NeRFs, SDFs and Gaussian Splats. In this talk I will discuss the specific challenges that need to be overcome for a practical framework for learning deformations, exemplified through the recent directions my work has taken to tackle them.
Thursday, May 30th 2024 at 3PM (CEST)
Speaker: Antoine Guédon (ENPC)
Title: Building Editable 3D Representations with Gaussian Splatting for Animation and Gaming
Abstract: Gaussian Splatting (Kerbl et al.) has recently gained popularity in Image-Based Rendering (IBR) as it provides realistic real-time rendering and is significantly faster to train than NeRFs. However, Gaussian Splatting relies on an unstructured representation based on an unordered point cloud. For use in animation or video games, 3D artists require an editable and explicit 3D representation of the geometry that can be easily sculpted, edited, rigged, or animated, such as 3D meshes. In this presentation, I will first briefly reintroduce IBR and explain how 3D Gaussian Splatting addresses this task. I will then present our recent work on using Gaussian Splatting to reconstruct explicit 3D representations from RGB images that can be edited, sculpted, relit, or animated using traditional software like Blender. Specifically, I will introduce our work SuGaR, the first method to extract accurate surface meshes from 3D Gaussian Splatting representations within minutes on a single GPU. I will also discuss how SuGaR and our latest work, Gaussian Frosting, merge Gaussian splats and surface meshes to generate editable representations from RGB images with high-quality rendering, suitable even for fuzzy materials and volumetric effects near the surface.
Thursday, May 16th 2024 at 2PM (CEST)
Speaker: Arianna Rampini (Autodesk)
Title: Make-a-Shape: Building a 10 Million Scale 3D Generative Model
Abstract: This presentation explores the development of Make-A-Shape, a 3D generative model trained on a dataset comprising 10 million shapes. I will begin by addressing the limitations of current 3D generation approaches, followed by a detailed look at the technical innovations we implemented to effectively manage this extensive dataset. Specifically, I will focus on our proposed 3D representation that allows for high-fidelity shape encoding, while being compatible with DDPM architectures. Our generative framework is versatile, capable of conditioning on various input modalities such as text, images, point clouds, and voxels. Finally, I will discuss the potential impacts and downstream applications of our approach.
Thursday, May 2nd 2024 at 2PM (CEST)
Speaker: Minhyuk Sung (KAIST)
Title: Visual Content Generation with Image Diffusion Models: From Distillation to Diffusion Synchronization
Abstract: Image diffusion models have been leveraged as powerful priors for various visual content generation, as demonstrated by previous work such as DreamFusion, which introduced Score Distillation Sampling (SDS). While SDS has been widely adapted to many generation tasks, it has been confined to generation only, not editing, and has yielded suboptimal results due to using generative models as discriminators. In this talk, I will introduce our two recent works that extend the capabilities of image diffusion models for visual data editing and realistic content generation via diffusion synchronization. Firstly, I will present a novel loss function, named Posterior Distillation Sampling (PDS), which adapts SDS to edit visual content based on text prompts while preserving the identity of the given object. Secondly, I will introduce a diffusion synchronization technique, called SyncTweedies, which enables various visual data generation by using the diffusion models “as-is,” performing reverse diffusion instead of loss-based backpropagation. Based on our study examining all possible approaches to synchronizing multiple diffusion processes, I will describe the most effective approach that achieves the broadest applicability with the highest quality. I will conclude my talk with future directions for enhancing knowledge distillation in diffusion models for various visual content creation.
Thursday, March 28th 2024 at 2PM (CEST)
Speaker: Davide Boscaini (Bruno Kessler Foundation)
Title: 3D object understanding on the shoulder of 2D foundation models
Abstract: The effective interaction and manipulation of objects require a comprehensive understanding of their 3D structure. For example, grasping a bottle requires identifying its rotation and translation within the environment, while holding a mug necessitates locating its handle. Traditionally, these tasks have been addressed using 3D deep learning methods. However, these methods demand a large amount of training data, and despite recent efforts, 3D datasets remain significantly smaller and less diverse compared to web-scale datasets of 2D images and text. In this talk, I will provide an overview of our recent work in the fields of 3D part segmentation and object 6D pose estimation. Specifically, I will demonstrate how we address these tasks by harnessing the knowledge of pre-trained 2D foundation models, without relying on any task-specific annotations or requiring any fine-tuning.
Wednesday, March 20th 2024 at 3PM (CEST)
Speaker: Miguel O’Malley (MPI MiS)
Title: Alpha magnitude and dimension
Abstract: (joint work with Sara Kalisnik and Nina Otter)
Magnitude, an isometric invariant of metric spaces, is known to bear rich connections to other desirable invariants, such as dimension, volume, and curvature. Connections between magnitude and persistent homology, a method to observe topological features in datasets, are well studied and fruitful. We leverage one such connection, persistent magnitude, to introduce alpha magnitude, a new invariant which shares many of the properties of magnitude. In particular, we show a strong connection to the Minkowski dimensions of compact subspaces of R^n and conjecture that this connection holds in general.
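For concreteness, the magnitude of a finite metric space can be computed directly: form the similarity matrix Z with entries exp(-d(i, j)), solve Z w = 1, and sum the weights w. The sketch below is standard background only (not the alpha magnitude construction of the talk); it verifies the closed form 2 / (1 + exp(-t)) for a two-point space at distance t.

```python
import math

# Magnitude of a finite metric space: solve Z w = 1 with Z_ij = exp(-d(i, j))
# and return sum(w), via a small Gauss-Jordan elimination.
def magnitude(dist):
    n = len(dist)
    # augmented matrix [Z | 1]
    M = [[math.exp(-dist[i][j]) for j in range(n)] + [1.0] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]          # partial pivoting
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return sum(M[i][n] / M[i][i] for i in range(n))

# Two points at distance t: magnitude is 2 / (1 + exp(-t)), interpolating
# between "effectively 1 point" (t -> 0) and "2 points" (t -> infinity).
t = 1.0
d = [[0.0, t], [t, 0.0]]
print(abs(magnitude(d) - 2 / (1 + math.exp(-t))) < 1e-12)  # True
```

This "effective number of points" behavior is what makes magnitude sensitive to scale, and hence to dimension, which is the thread the talk pulls on.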
Wednesday, March 20th 2024 at 1:30PM (CEST)
Speaker: Sara Kališnik (ETH Zürich)
Title: Persistent Homology for Ellipsoid Complexes
Abstract: (the first part is joint work with Davorin Lešnik, the second part is joint work with Bastian Rieck and Ana Zegarac)
A seminal result by Niyogi, Smale and Weinberger states that if a sample of a closed smooth submanifold of a Euclidean space is dense enough (relative to the reach of the manifold), there exists an interval of radii, for which the union of closed balls around sample points deformation retracts to the manifold. A tangent space is a good local approximation of a manifold, so we can expect that an object, elongated in the tangent direction, will better approximate the manifold than a ball. I will briefly review a result we proved with Davorin Lešnik that the union of ellipsoids of suitable size around sample points deformation retracts to the manifold while requiring much smaller density than in the case of union of balls. Then I will focus on work-in-progress with Ana Zegarac and Bastian Rieck on implementing ellipsoid complexes, and in particular, I will share some of the preliminary results.
Wednesday, March 13th 2024 at 10AM (CEST)
Speaker: Adrien Beaud and Alexandra Rören (CRESS, Inserm)
Title: Exploring methods for shoulder movement quality assessment using IMUs, in individuals with Subacromial Pain Syndrome
Thursday, February 29th 2024 at 2PM (CEST)
Speaker: Jarne Van den Herrewegen (Ghent University & Oqton)
Title: 3D SSL & Similarity: Findings, Challenges & Selected Works
Abstract: 3D deep learning made a leap forward in 2023, with the first foundation models appearing, new interest in multi-modality, and open-source data. In this talk I will highlight the challenges I find most interesting, present some of my work around these challenges, and give my perspective from industry. This includes data-centric ML, 3D representations, and 3D shape similarity through Self-Supervised Learning.
Thursday, February 8th 2024 at 11AM (CEST)
Speaker: Eric Goubault (LIX)
Title: Directed homology and persistence modules
Abstract: In this talk, we will try to give a self-contained account of a construction for a directed homology theory based on modules over algebras, linking it to both persistent homology and natural homology.
Persistence modules were originally introduced for topological data analysis, where the data set, seen at different "resolutions", is organized as a filtration of spaces. This has been further generalized to multidimensional persistence and "generalized" persistence, where a persistence module is defined to be any functor from a partially ordered set, or more generally a preordered set, to an arbitrary category (in general, a category of vector spaces).
Directed topology has its roots in a very different application area: concurrency and distributed systems theory rather than data analysis. Its goal is to study (multi-dimensional) dynamical systems, discrete (precubical sets, for applications to concurrency theory) or continuous-time (differential inclusions, e.g. for applications in control), that appear in the study of multi-agent systems. In this framework, topological spaces are "directed", meaning they have "preferred directions": for instance a cone in the tangent space, if we are considering a manifold, or the canonical local coordinate system in a precubical set. Natural homology, an invariant for directed topology, defines a natural system of modules, a further categorical generalization of (bi)modules, describing the evolution of the standard (simplicial, or singular) homology of certain path spaces, along their endpoints. Indeed, this is, in spirit, similar to persistent homology.
This talk will be concerned with a more "classical" construction of directed homology, mostly for precubical sets here, based on (bi)modules over (path) algebras, bringing it closer both to classical homology with values in modules over rings and to the techniques introduced for persistence modules. Still, this construction retains the essential information that natural homology unveils. Of particular interest will be the role of the restriction and extension of scalars functors, which will be central to the discussion of Künneth formulas and relative homology sequences. If time permits, we will also discuss some "tameness" issues relevant to practical calculations.
2023
Thursday, November 30th 2023 at 11AM (CEST)
Speaker: Kate Turner (Australian National University)
Title: Representing Vineyard Modules
Abstract: Time series of persistence diagrams, more commonly known as vineyards, are a useful way to capture how multi-scale topological features vary over time. However, since the persistent homology is calculated and considered at each time step independently, we lose significant information about how the individual persistent homology classes evolve over time. A natural algebraic version of vineyards is a time series of persistence modules equipped with interleaving maps between the persistence modules at different time stamps. Let's call this a vineyard module. I will set up the framework for representing a vineyard module via an indexed set of vines alongside a collection of matrices. Furthermore, I will outline an algorithmic way to transform the bases of the persistence modules at each time step within the vineyard module to make the matrices within this representation as simple as possible. With some reasonable assumptions (analogous to those in Cerf theory) on the vineyard modules, this simplified representation can be completely described (up to isomorphism) by the underlying vineyard and a vector of finite length. While this vector representation is not in general guaranteed to be unique, we can prove that it will always be zero when the vineyard module is isomorphic to a direct sum of vine modules.
Wednesday, November 22nd 2023 at 11AM (CEST)
Speaker: François Petit (CRESS, Inserm)
Title: Projected barcodes and distances for multi-parameter persistence modules
Abstract: In this talk, I will present the notion of projected barcodes and projected distances for multi-parameter persistence modules. Projected barcodes are defined as derived pushforwards of persistence modules onto R and provide descriptors for multi-parameter persistence modules. Projected distances come in two flavors: the integral sheaf metrics (ISM) and the sliced convolution distances (SCD). In the case where the persistence module considered is the sublevel-sets persistence module of a function f : X -> R^n, we will explain how, under mild conditions, the projected barcode of this module by a linear map u : R^n -> R is the collection of sublevel-sets barcodes of the composition uf. In particular, it can be computed using software dedicated to one-parameter persistence modules. This is joint work with Nicolas Berkouk.
Wednesday, November 8th 2023 at 11AM (CEST)
Speaker: Luis Scoccola (University of Oxford)
Title: What do we want from invariants of multiparameter persistence modules?
Abstract: Various constructions relevant to practical problems such as clustering and graph classification give rise to multiparameter persistence modules (MPPM), that is, linear representations of non-totally ordered sets. Much of the mathematical interest in multiparameter persistence comes from the fact that there exists no tractable classification of MPPM up to isomorphism, meaning that there is a lot of room for devising invariants of MPPM that strike a good balance between discriminating power and complexity of their computation. However, there is no consensus on what type of information we want these invariants to provide us with, and, in particular, there seems to be no good notion of “global” or “high persistence” features of MPPM.
Thursday, November 2nd 2023 at 2PM (CEST)
Speaker: Aymen Merrouche (INRIA Grenoble)
Title: Deformation-Guided Unsupervised Non-Rigid Shape Matching
Abstract: We present an unsupervised data-driven approach for non-rigid shape matching. Shape matching identifies correspondences between two shapes and is a fundamental step in many computer vision and graphics applications. Our approach is designed to be particularly robust when matching shapes digitized using 3D scanners that contain fine geometric detail and suffer from different types of noise, including topological noise caused by the coalescence of spatially close surface regions. We build on two strategies. First, using a hierarchical patch-based shape representation, we match shapes consistently in a coarse-to-fine manner, allowing for robustness to noise. This multi-scale representation drastically reduces the dimensionality of the problem when matching at the coarsest scale, rendering unsupervised learning feasible. Second, we constrain this hierarchical matching to be reflected in 3D by fitting a patch-wise near-rigid deformation model. Using this constraint, we leverage spatial continuity at different scales to capture global shape properties, resulting in matchings that generalize well to data with different deformations and noise characteristics. Experiments demonstrate that our approach obtains significantly better results on raw 3D scans than state-of-the-art methods, while performing on par on standard test scenarios.
Tuesday, May 23rd 2023 at 11AM (CEST)
Speaker: Shreyas N. Samaga (Purdue University)
Title: GRIL: A 2-parameter Persistence based Vectorization for Machine Learning
Abstract: 1-parameter persistent homology, a cornerstone in Topological Data Analysis (TDA), studies the evolution of topological features such as connected components and cycles hidden in data. It has been applied to enhance the representation power of deep learning models, such as Graph Neural Networks (GNNs). To enrich the representations of topological features, here we propose to study 2-parameter persistence modules induced by bi-filtration functions. In this talk, we will introduce a vectorization method (GRIL) for 2-parameter persistence modules. We will present an algorithm to compute this vectorization and demonstrate its use for Machine Learning tasks by comparing the performance with the existing methods on graph benchmark datasets.
Friday, May 12th 2023 at 11AM (CEST)
Speaker: Vadim Lebovici (Université Paris-Sud, Inria Saclay)
Title: Euler characteristic tools for topological data analysis
What geometric and topological information can one extract from data using Euler characteristic as a tool?
Abstract: Topological data analysis (TDA) aims at extracting geometric and topological information from data. Persistent homology does so through the application of homology to a 1-parameter filtered topological space built on the data at hand. In this talk, we argue for a different approach based on the pointwise computation of the Euler characteristic instead of homology. We show that this simple descriptor achieves state-of-the-art performance in supervised tasks at a very low computational cost. Inspired by signal analysis, we compute integral transforms mixing Euler characteristic techniques with Lebesgue integration (so-called hybrid transforms) to provide highly efficient compressors of topological signals. As a consequence, they show remarkable performance in unsupervised settings. If time permits, we will state some stability results for these descriptors as well as asymptotic guarantees in random settings.
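The pointwise Euler characteristic descriptor mentioned above is indeed cheap to compute: for a filtered simplicial complex it is an alternating count of the simplices present at each threshold, with no homology computation at all. A minimal illustrative sketch:

```python
# Euler characteristic curve of a filtered simplicial complex:
# chi(t) = #vertices - #edges + #triangles - ... present at value <= t.
def euler_curve(simplices, thresholds):
    # simplices: list of (filtration_value, dimension) pairs
    return [
        sum((-1) ** dim for val, dim in simplices if val <= t)
        for t in thresholds
    ]

# A filtered triangle: 3 vertices at t=0, 3 edges at t=1, the 2-cell at t=2.
simplices = [(0, 0), (0, 0), (0, 0), (1, 1), (1, 1), (1, 1), (2, 2)]
print(euler_curve(simplices, [0, 1, 2]))  # [3, 0, 1]
```

The curve [3, 0, 1] reads off the filtration: three components, then a single loop (chi = 0), then a filled disk (chi = 1). Each value is a single pass over the simplices, which is the "very low computational cost" the abstract refers to.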
Thursday, April 20th 2023 at 2PM (CEST)
Speaker: Emery Pierson (U. of Lille)
Title: A Riemannian approach for the analysis of registered and unregistered human body shapes
Abstract: In this talk, we will focus on the problem of shape analysis of unregistered human bodies. We solve this problem using a Riemannian approach. In a first step, we focus on the mapping of the human body surface to the space of metrics and normals. We equip this space with a family of Riemannian metrics, called Ebin (or DeWitt) metrics, and treat a human body surface as a point in a "shape space" equipped with a family of Riemannian metrics. The family of metrics is invariant under rigid motions and reparametrizations, hence it induces a metric on the "shape space" of surfaces. To assess the quality of the metric, we apply it to human bodies aligned to a given template and show that this family of metrics allows us to distinguish changes in shape and pose. We then present an efficient framework to compute geodesic paths between human shapes under the chosen metric, along with some basic tools for statistical shape analysis of human body surfaces. In a second step, we extend this work to human body scans and propose BaRe-ESA, a novel Riemannian framework for human body scan representation, interpolation and extrapolation. BaRe-ESA operates directly on unregistered meshes, i.e., without the need to establish prior point-to-point correspondences or to assume a consistent mesh structure. Our method relies on a latent space representation, which is equipped with an extended Riemannian metric associated to an invariant higher-order metric on the space of surfaces. Experimental results on the FAUST and DFAUST datasets show that BaRe-ESA brings significant improvements with respect to previous solutions in terms of shape registration, interpolation, and extrapolation. The efficiency and strength of our model are further demonstrated in applications such as motion transfer and random generation of body shape and pose.
2022
Thursday, November 3rd 2022 at 4PM (CEST)
Speakers: Derek Lim and Joshua Robinson (MIT CSAIL)
Title: Sign and Basis Invariant Networks for Spectral Graph Representation Learning
Abstract: We introduce SignNet and BasisNet — new neural architectures that are invariant to two key symmetries displayed by eigenvectors: (i) sign flips, since if v is an eigenvector then so is −v; and (ii) more general basis symmetries, which occur in higher dimensional eigenspaces with infinitely many choices of basis eigenvectors. We prove that under certain conditions our networks are universal, i.e., they can approximate any continuous function of eigenvectors with the desired invariances. When used with Laplacian eigenvectors, our networks are provably more expressive than existing spectral methods on graphs; for instance, they subsume all spectral graph convolutions, certain spectral graph invariants, and previously proposed graph positional encodings as special cases. Experiments show that our networks significantly outperform existing baselines on molecular graph regression, learning expressive graph representations, and learning neural fields on triangle meshes.
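The sign ambiguity and its remedy can be checked in a few lines. In the sketch below (a toy illustration, with an arbitrary stand-in function phi in place of a learned network), both v and -v are eigenvectors of a graph Laplacian, and symmetrizing phi over the flip, as SignNet does, yields a sign-invariant feature.

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Laplacian of the 2-node graph 1-2: eigenvector (1, -1)/sqrt(2), eigenvalue 2.
L = [[1.0, -1.0], [-1.0, 1.0]]
v = [2 ** -0.5, -(2 ** -0.5)]
neg_v = [-x for x in v]

# both v and -v satisfy L v = 2 v: the sign is not determined by L
assert all(abs(a - 2 * b) < 1e-12 for a, b in zip(matvec(L, v), v))
assert all(abs(a - 2 * b) < 1e-12 for a, b in zip(matvec(L, neg_v), neg_v))

def phi(v):
    # arbitrary position-dependent stand-in for a learned network
    return sum((i + 1) * (x + 0.3) ** 3 for i, x in enumerate(v))

def sign_invariant(v):
    # SignNet-style symmetrization: phi(v) + phi(-v)
    return phi(v) + phi([-x for x in v])

print(phi(v) == phi(neg_v))                        # False: phi is sign-sensitive
print(sign_invariant(v) == sign_invariant(neg_v))  # True: the flip cancels out
```

The basis symmetry handled by BasisNet is the higher-dimensional analogue: for a repeated eigenvalue, any orthogonal change of basis within the eigenspace must leave the features unchanged, not just a sign flip.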