Allegro Assai – deep sparsity & sketched learning

Dates : 2020 – 2025      Identifier: ANR-19-CHIA-0009

Algorithms, Approximations, Sparsity and Sketching for AI

AllegroAssai focuses on the design of machine learning techniques endowed with statistical guarantees (to ensure their performance, fairness, privacy, etc.), provable resource-efficiency (e.g. in terms of bytes and flops, which impact energy consumption and hardware costs), robustness in adversarial conditions for secure performance, and the ability to leverage domain-specific models and expert knowledge. The vision of AllegroAssai is that the versatile notion of sparsity, together with sketching techniques using random features, is key to harnessing these fundamental tradeoffs.

The first pillar of the project investigates sparsely connected deep networks, to understand the tradeoffs between the approximation capacity of a network architecture (ResNet, U-Net, etc.) and its "trainability" with provably good algorithms. A major endeavor is to design efficient regularizers promoting sparsely connected networks with provable robustness in adversarial settings.

The second pillar revolves around the design and analysis of provably good end-to-end sketching pipelines for versatile and resource-efficient large-scale learning, whose complexity is controlled by the structure of the data and of the task rather than by the dataset size.
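To make the "sketching with random features" idea concrete, here is a minimal, self-contained sketch (not the project's pipeline) of the classic random Fourier features construction: a Gaussian kernel evaluation is approximated by an inner product of low-cost random feature maps. The choice of `gamma` and the feature count `D` below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 2000          # input dimension, number of random features (illustrative)
gamma = 0.5             # Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)

# Random Fourier features: for the Gaussian kernel above, draw
# w ~ N(0, 2*gamma*I) and b ~ Uniform[0, 2*pi]; then
# E[phi(x).phi(y)] = k(x, y), with variance shrinking as D grows.
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)

def phi(x):
    """Map a d-dimensional point to D random cosine features."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-gamma * np.sum((x - y) ** 2))
approx = phi(x) @ phi(y)
# approx is close to exact for moderately large D
```

The point of the sketch is resource-efficiency: downstream learning can operate on the fixed-size features `phi(x)` instead of the full kernel matrix, so the cost is driven by `D` rather than by the dataset size.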

AllegroAssai Publications & Software

Software

The FAµST toolbox (C++ core, Python & MATLAB wrappers) provides algorithms and data structures to decompose a given dense matrix into a product of sparse matrices, reducing its computational complexity (both for …
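The idea behind such decompositions can be illustrated on a matrix that is known to factor exactly: the Walsh-Hadamard matrix. The NumPy sketch below (an illustration of the principle, not the FAµST API) writes the dense N × N Hadamard matrix as a product of log2(N) sparse factors with only 2 nonzeros per row, so a matrix-vector product costs O(N log N) instead of O(N²).

```python
import numpy as np
from functools import reduce

# 2 x 2 Hadamard building block
H2 = np.array([[1, 1], [1, -1]])

def dense_hadamard(n):
    """Dense N x N Walsh-Hadamard matrix, N = 2**n (n-fold Kronecker power of H2)."""
    M = np.array([[1]])
    for _ in range(n):
        M = np.kron(M, H2)
    return M

def sparse_factors(n):
    """Butterfly factorization: n factors, each with exactly 2 nonzeros per row,
    whose product equals dense_hadamard(n)."""
    return [np.kron(np.eye(2 ** k, dtype=int),
                    np.kron(H2, np.eye(2 ** (n - 1 - k), dtype=int)))
            for k in range(n)]

n = 3
H = dense_hadamard(n)
product = reduce(np.matmul, sparse_factors(n))
assert np.array_equal(H, product)
# multiplying by the n sparse factors costs 2*N per factor, i.e. O(N log N),
# versus N*N for the dense matrix
```

FAµST addresses the general version of this problem: given an arbitrary dense matrix, it searches for an approximate product of sparse factors with a similar complexity benefit.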