December 19, 2019. Alice Pellet-Mary (KU Leuven)

Place : LIX, room Philippe Flajolet

Title: An LLL Algorithm for Module Lattices

Abstract: A lattice is a discrete subgroup (i.e., a ZZ-module) of RR^n (where ZZ and RR are the sets of integers and real numbers). The LLL algorithm is a central algorithm for manipulating lattice bases. It takes as input a basis of a Euclidean lattice and, within a polynomial number of operations, it outputs another basis of the same lattice consisting of rather short vectors. Lattices are used in post-quantum cryptography, as finding a shortest vector of a lattice, given an arbitrary basis, is believed to be intractable even with a quantum computer. For efficiency reasons, many constructions based on lattices use structured lattices. Most of the time, these structured lattices are R-modules of small rank, where R is the ring of integers of some number field. As an example, 11 schemes among the 12 lattice-based candidates for the NIST post-quantum standardization process use module lattices, with modules of small rank. It is then tempting to try to adapt the LLL algorithm, which works over lattices (i.e., ZZ-modules), to these R-modules, as this would allow us to find rather short vectors of these module lattices. Previous works trying to extend the LLL algorithm to other rings of integers R focused on rings R that are Euclidean, as the LLL algorithm over ZZ crucially relies on Euclidean division. In this talk, I will describe an LLL algorithm which works in any ring of integers R. This algorithm is heuristic and runs in quantum polynomial time if given access to an oracle solving the closest vector problem in a fixed lattice depending only on the ring of integers R. This is joint work with Changmin Lee, Damien Stehlé and Alexandre Wallet.
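In dimension 2, the role Euclidean division plays in LLL is particularly transparent: the Lagrange-Gauss algorithm below (a minimal Python sketch for lattices in ZZ^2, not the module-lattice algorithm of the talk) repeatedly subtracts the nearest integer multiple of the shorter basis vector from the longer one, until the first vector is a shortest vector of the lattice.

```python
# Lagrange-Gauss reduction: the 2-dimensional case of LLL.
# Given a basis (u, v) of a lattice in Z^2, output a basis of the
# same lattice whose first vector is a shortest nonzero lattice vector.

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def lagrange_gauss(u, v):
    # Keep ||u|| <= ||v||, then subtract from v the integer multiple
    # of u closest to it (the analogue of Euclidean division).
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        m = round(dot(u, v) / dot(u, u))  # nearest-integer quotient
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v
        u, v = v, u

# A very skewed basis of Z^2 is reduced back to short vectors.
u, v = lagrange_gauss((1, 0), (10**6, 1))
```

The `while` loop mirrors the Euclidean algorithm on integers, which is exactly the structure that is missing when the ring of integers R is not Euclidean.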

December 3, 2019. Alessandro Neri (Grace, Marie Curie Fellowship)

Place : LIX, room Grace Hopper

Title : 3-Tensors: a natural way of representing rank-metric codes

Abstract : In coding theory, two important issues concern the encoding and the storage of a code. In the most general case, given a finite set A and a positive integer n, a block code C is a subset of A^n, endowed with a distance function (usually the Hamming one). The space of messages M is then embedded in A^n via an injective encoding map E such that E(M) = C. The need for fast encoding and efficient representation of codes led to imposing algebraic structures on all the defining objects (the alphabet, the space of messages and the code) and on the encoding map E. For this reason, one usually takes the alphabet to be a field of q elements, the space of messages to be the vector space F_q^k, and the encoding map to be a linear function into F_q^n, which leads to the study of linear codes. In this framework, we find the generator matrix of an [n, k] code C, which serves both as a representation for storing C and as an encoding map. Moreover, one can read the parameters of the code from its algebraic structure. A generator matrix can also be constructed for a vector rank-metric code, and in analogous ways one can extract information on the code from it.

In this talk we explain an analogous concept for rank-metric codes, which are considered as spaces of m × n matrices over a finite field F_q. This is the generator tensor, a similar object that one can use for storing, encoding and reading the parameters of a rank-metric code. Moreover, the tensor representation leads to the investigation of a new parameter, namely the tensor rank of an [n × m, k] code C, which gives a measure of the storage and encoding complexity of C. This also produces an interesting relation between rank-metric codes and algebraic complexity theory.

It is a joint work with Eimear Byrne, Alberto Ravagnani and John Sheekey.
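As a concrete illustration of the generator matrix serving both as storage and as encoding map, here is a minimal Python sketch encoding a 4-bit message with a generator matrix in standard form for the binary [7, 4] Hamming code (the matrix and message are illustrative choices, not from the talk):

```python
# Encoding a linear [n, k] code over F_2 via its generator matrix:
# the codeword is the message times G, with arithmetic mod 2.
# G is a generator matrix (in standard form [I | A]) of the
# [7, 4] binary Hamming code.

G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg, G):
    # One codeword coordinate per column of G: the F_2 inner
    # product of the message with that column.
    return [sum(m * g for m, g in zip(msg, col)) % 2
            for col in zip(*G)]

codeword = encode([1, 0, 1, 1], G)
```

Because G is in standard form, the first k coordinates of the codeword reproduce the message itself; the remaining n - k are parity checks.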

November 26, 2019. Isabella Panaccione (Grace)

Place : LIX, room Rosalind Franklin.

Title : The Power Error Locating Pairs algorithm

Abstract : In this talk, we will focus on particular decoding algorithms for Reed-Solomon codes. It is known that for these codes it is possible to correct a number of errors equal to, or even greater than, half the minimum distance. In general, though, beyond this bound the uniqueness of the solution is no longer guaranteed. Hence, given a vector y ∈ (F_q)^n, one can either use list decoding algorithms, like Sudan's algorithm and the Guruswami-Sudan algorithm, which return the list of all codewords closest to y, or unique decoding algorithms, like the Power Decoding algorithm, which return the closest codeword to y if it is unique, at the price of some failure cases.

In this scenario, I will present a new unique decoding algorithm, the Power Error Locating Pairs algorithm. Based on Pellikaan's Error Correcting Pairs algorithm, for Reed-Solomon codes it has the same decoding radius as Sudan's algorithm and the Power Decoding algorithm, but with an improved complexity. Moreover, like the Error Correcting Pairs algorithm, it can be run on any code endowed with a suitable special structure (among them, Reed-Solomon codes, algebraic geometry codes and cyclic codes).
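To make the decoding radius concrete, here is a minimal Python sketch of Reed-Solomon encoding by polynomial evaluation (the field and parameters are toy choices, not from the talk). Since two distinct polynomials of degree < k agree on at most k - 1 of the n evaluation points, the minimum distance is n - k + 1, and half of that is the classical unique decoding radius.

```python
# A Reed-Solomon [n, k] code over F_p: codewords are evaluations of
# polynomials of degree < k at n distinct field elements.  Here
# n - k + 1 = 5, so unique decoding is classically guaranteed up to
# floor((n - k) / 2) = 2 errors.

p, n, k = 13, 7, 3
points = list(range(n))  # 7 distinct evaluation points in F_13

def rs_encode(msg):
    # msg = (m_0, ..., m_{k-1}) encodes f(x) = sum_i m_i x^i.
    return [sum(m * pow(x, i, p) for i, m in enumerate(msg)) % p
            for x in points]

# Two distinct messages: their codewords must differ in at least
# n - k + 1 positions.
c1 = rs_encode([1, 2, 3])
c2 = rs_encode([1, 2, 4])
agree = sum(a == b for a, b in zip(c1, c2))
```

Sudan-type algorithms and Power Decoding push past the floor((n-k)/2) radius, which is where the uniqueness caveat in the abstract comes in.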

November 5, 2019 (with a follow-up on November 19). Maxime Romeas (Grace)

Place : LIX, room Nicole-Reine Lepaute.

Title : Constructive Cryptography: presentation of the model and application to the remote storage setting.

Abstract : Constructive Cryptography (CC) is a new paradigm, introduced by Maurer et al. in 2011, for defining the security of cryptographic primitives (encryption, signatures, key exchange, etc.) and for proving the security of protocols that use these primitives. In this model, a cryptographic scheme is defined as constructing a certain resource (a channel, a shared key, a client/server) with certain security guarantees from a resource of the same type that lacks these guarantees.

A distinctive feature of this model is that it is composable, in the sense that a protocol obtained by composing several secure constructions is itself secure. This composability property simplifies both the description of protocols and their security proofs. Although this constructive approach already exists in other models such as Universal Composability, Maurer's model is very generic and abstract, which gives it many advantages.

In this talk, we will first present the CC paradigm, and then revisit the examples of secure communication and of securing remotely stored data, to illustrate what is gained by recasting them in CC.

November 12, 2019. Simon Abelard (LIX).

Place : LIX, room Nicole-Reine Lepaute.

Title: Counting points on hyperelliptic curves defined over finite fields of
large characteristic: algorithms and complexity.

Abstract: Counting points on algebraic curves has drawn a lot of attention due to its
many applications from number theory and arithmetic geometry to cryptography
and coding theory. In this talk, we focus on counting points on hyperelliptic
curves over finite fields of large characteristic p. In this setting, the most
suitable algorithms are currently those of Schoof and Pila, because their
complexities are polynomial in log p. However, their dependency in the genus g
of the curve is exponential, and this is already painful even in genus 3.

Our contributions mainly consist of establishing new complexity bounds with a
smaller dependency in g in the exponent of log p. For hyperelliptic curves,
previous work showed that this exponent was quasi-quadratic in g, and we reduced
it to a linear dependency. Restricting to more special families of hyperelliptic
curves with explicit real multiplication (RM), we obtained a constant bound for
this exponent.

In genus 3, we proposed an algorithm based on those of Schoof and
Gaudry-Harley-Schost whose complexity is prohibitive in general, but turns out
to be reasonable when the input curves have explicit RM. In this more favorable
case, we were able to count points on a hyperelliptic curve defined over a
64-bit prime field.

In this talk, we will carefully reduce the problem of counting points to that
of solving polynomial systems. More precisely, we will see how our results are
obtained by considering either smaller or structured systems.
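For scale, the naive baseline that Schoof-like algorithms improve on is exponential in log p: enumerate every x in F_p and test whether f(x) is a square. A minimal Python sketch with a made-up genus-2 curve (not one from the talk):

```python
# Naive point counting on a hyperelliptic curve y^2 = f(x) over F_p:
# the exponential-in-log-p baseline that polynomial-in-log-p
# algorithms (Schoof, Pila, ...) improve on.
# Illustrative curve: y^2 = x^5 + 3x + 1 over F_101.

p = 101
f = lambda x: (pow(x, 5, p) + 3 * x + 1) % p

def count_affine_points(p, f):
    # Each x contributes 2 points if f(x) is a nonzero square,
    # 1 point if f(x) = 0 (the point with y = 0), and 0 otherwise.
    squares = {pow(y, 2, p) for y in range(p)}
    count = 0
    for x in range(p):
        v = f(x)
        if v == 0:
            count += 1
        elif v in squares:
            count += 2
    return count

n_points = count_affine_points(p, f)
```

This loop visits all p values of x, i.e. exponentially many in log p, which is exactly why the polynomial-in-log-p algorithms discussed in the talk are needed for cryptographic field sizes.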

Contains joint work with P. Gaudry and P.-J. Spaenlehauer.

October 17, 2019, 13h30. Thomas Debris (Royal Holloway, University of London)

Place : LIX, room Henri Poincaré

Title : Wave: A New Family of Trapdoor One-Way Preimage Sampleable Functions Based on Codes

Abstract : We present here a new family of trapdoor one-way functions that are Preimage Sampleable on Average (PSA) based on codes: the Wave-PSA family. Our trapdoor function is one-way under two computational assumptions: the hardness of generic decoding for high weights and the indistinguishability of generalized (U, U + V)-codes. Our proof follows the GPV strategy [GPV08]. By including rejection sampling, we ensure the proper distribution for the trapdoor inverse output. The domain sampling property of our family is ensured by using and proving a variant of the left-over hash lemma. We instantiate the new Wave-PSA family with ternary generalized (U, U + V)-codes to design a “hash-and-sign” signature scheme which achieves existential unforgeability under adaptive chosen message attacks (EUF-CMA) in the random oracle model. For 128 bits of classical security, signature sizes are on the order of 13 thousand bits, the public key size is on the order of 3 megabytes, and the rejection rate is limited to one rejection every 100 signatures.
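The rejection-sampling step can be illustrated generically: samples from an easy proposal distribution q are accepted with probability t(x) / (M q(x)) so that the accepted samples follow the target distribution t. The Python sketch below uses toy distributions for illustration, not Wave's actual ones:

```python
import random

# Generic rejection sampling: convert samples from a proposal
# distribution q into samples from a target distribution t by
# accepting a q-sample x with probability t(x) / (M * q(x)),
# where M is an upper bound on t(x) / q(x).

def rejection_sample(target, proposal_sample, proposal_pmf, M, rng):
    while True:
        x = proposal_sample(rng)
        if rng.random() < target(x) / (M * proposal_pmf(x)):
            return x

# Toy target t = {0: 0.5, 1: 0.25, 2: 0.25}, uniform proposal
# q = 1/3 on {0, 1, 2}, so M = max t(x)/q(x) = 1.5.
t = {0: 0.5, 1: 0.25, 2: 0.25}
rng = random.Random(1)
draws = [rejection_sample(lambda x: t[x],
                          lambda r: r.randrange(3),
                          lambda x: 1 / 3,
                          1.5, rng)
         for _ in range(10000)]
```

In the signature setting the point is the same: rejections trade a little efficiency (here, one rejection every 100 signatures) for an output distribution that leaks nothing about the trapdoor.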

May 21, 2019, 13h30. Adrien Koutsos (LSV, ENS Cachan)

Place: LIX, room Henri Poincaré

Title: The 5G-AKA Authentication Protocol Privacy

Abstract: We study the 5G-AKA authentication protocol described in the 5G mobile communication standards. This version of AKA tries to achieve better privacy than the 3G and 4G versions through the use of randomized asymmetric encryption. Nonetheless, we show that, except for the IMSI-catcher attack, all known attacks against 5G-AKA privacy still apply.
Next, we modify the 5G-AKA protocol to prevent these attacks, while satisfying 5G-AKA efficiency constraints as much as possible. We then formally prove that our protocol is $\sigma$-unlinkable. This is a new security notion, which allows for a fine-grained quantification of a protocol's privacy. Our security proof is carried out in the Bana-Comon indistinguishability logic. We also prove mutual authentication as a secondary result.

April 9, 2019, 13h30. Yann Rotella (Radboud University, Nijmegen)

Place: LIX, room Henri Poincaré

Title: Invariant attacks: how to protect against them

Abstract: In 2011, Gregor Leander and his co-authors described a new type of attack on block ciphers which exploits the existence of a vector space that is invariant under the components used in the cipher. These attacks were then generalized in 2015 and are called nonlinear invariant attacks.

Since then, these attacks have revealed new vulnerabilities in a large number of block ciphers, notably SPN-type (Substitution-Permutation Network) block ciphers in which the round keys are equal to the master key added to an often arbitrarily chosen round constant.

In this talk, we explain why these attacks are feasible on certain ciphers, and we derive from this a new design criterion for block ciphers. We will see how to choose the linear layer as well as the round constants so as to guarantee the absence of invariants.

This work comes at a crucial moment, since it helps cipher designers and since NIST is currently standardizing so-called lightweight ciphers. This work was published at CRYPTO 2017, in collaboration with Christof Beierle, Anne Canteaut and Gregor Leander: Proving Resistance against Invariant Attacks: How to Choose the Round Constants.
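The condition behind the design criterion can be pictured in miniature: an invariant attack needs a subspace W that the linear layer maps to itself and that contains the round constants. The Python sketch below checks this condition on a made-up 4-bit example; the matrix, subspace and constants are purely illustrative, not from the paper.

```python
# Toy invariant-subspace check over F_2: a subspace W is "invariant"
# for a linear layer L and a round constant c if L(W) = W and c is
# in W.  Vectors are lists of bits; the layer is a 0/1 matrix.

def apply_linear(matrix, v):
    # Matrix-vector product over F_2 (matrix given as rows).
    return [sum(r * x for r, x in zip(row, v)) % 2 for row in matrix]

def span(basis):
    # All F_2-linear combinations of the basis vectors.
    vecs = {tuple([0] * len(basis[0]))}
    for b in basis:
        vecs |= {tuple((x + y) % 2 for x, y in zip(v, b)) for v in vecs}
    return vecs

def is_invariant(matrix, basis, round_constant):
    W = span(basis)
    return (all(tuple(apply_linear(matrix, b)) in W for b in basis)
            and tuple(round_constant) in W)

# A linear layer that swaps bits 0 and 1 and fixes bits 2 and 3;
# W = span{1100, 0010} is mapped to itself by this layer.
L_layer = [[0, 1, 0, 0],
           [1, 0, 0, 0],
           [0, 0, 1, 0],
           [0, 0, 0, 1]]
W_basis = [[1, 1, 0, 0], [0, 0, 1, 0]]
```

With the round constant 1100 (inside W) the invariant survives, while the constant 1000 (outside W) breaks it, which is exactly the lever the CRYPTO 2017 criterion uses: choose round constants so that no such W exists.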

March 14, 2019. 13h30. Vincent Neiger (Univ. Limoges)

Place: LIX, room Henri Poincaré

Title: On the complexity of modular composition of generic polynomials

Abstract: This talk is about algorithms for modular composition of univariate polynomials, and for computing minimal polynomials. For two univariate polynomials a and g over a commutative field, modular composition asks to compute h(a) mod g for some given h, while the minimal polynomial problem is to compute h of minimal degree such that h(a) = 0 mod g. For generic g and a, we propose algorithms whose complexity bound improves upon previous algorithms and in particular upon Brent and Kung’s approach (1978); the new complexity bound is subquadratic in the degree of g and a even when using cubic-time matrix multiplication. Our improvement comes from the fast computation of specific bases of bivariate ideals, and from efficient operations with these bases thanks to fast univariate polynomial matrix algorithms.
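As a point of reference, here is a minimal Python sketch of modular composition by Horner's rule, which costs deg(h) multiplications of polynomials modulo g; Brent and Kung's method and the algorithms of the talk improve on this baseline. The prime and polynomials below are toy choices.

```python
# Modular composition h(a) mod g over F_p by Horner's rule.
# Polynomials are coefficient lists, lowest degree first; g is monic.

p = 7

def polymul_mod(f1, f2, g):
    # Multiply f1 * f2 over F_p, then reduce modulo the monic g.
    prod = [0] * (len(f1) + len(f2) - 1)
    for i, c1 in enumerate(f1):
        for j, c2 in enumerate(f2):
            prod[i + j] = (prod[i + j] + c1 * c2) % p
    d = len(g) - 1  # degree of g
    for i in range(len(prod) - 1, d - 1, -1):
        c = prod[i]
        if c:
            # Subtract c * x^(i-d) * g to kill the degree-i term.
            for j in range(len(g)):
                prod[i - d + j] = (prod[i - d + j] - c * g[j]) % p
    return prod[:d]

def modular_composition(h, a, g):
    # Horner: h(a) = (...(h_m * a + h_{m-1}) * a + ...) mod g.
    res = [0] * (len(g) - 1)
    for c in reversed(h):
        res = polymul_mod(res, a, g)
        res[0] = (res[0] + c) % p
    return res
```

For example, with g = x^2 + 1 and a = x over F_7, composing h = x^2 gives x^2 mod (x^2 + 1) = -1 = 6. The talk's algorithms replace this sequence of deg(h) modular multiplications with bivariate-ideal techniques that are subquadratic in the degree of g.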

Contains joint work with Seung Gyu Hyun, Bruno Salvy, Eric Schost, Gilles Villard.