March 31, 2020. Pierre Karpman

Title: Fast verification of masking schemes in characteristic two

Abstract: We revisit the matrix model for non-interference (NI) probing security of masking gadgets introduced by Belaïd et al. at CRYPTO 2017. This leads to two main results. 1) We generalise the theorems on which this model is based, so as to be able to apply them to masking schemes over any finite field —in particular GF(2)— and to be able to analyse the strong non-interference (SNI) security notion. We also follow Faust et al. (TCHES 2018) to additionally consider a robust probing model that takes hardware defects such as glitches into account. 2) We exploit this improved model to implement a very efficient verification algorithm that improves the performance of state-of-the-art software by three orders of magnitude. We show applications to variants of NI and SNI multiplication gadgets from Barthe et al. (EUROCRYPT 2017) which we verify to be secure up to order 11 after a significant parallel computation effort, whereas the previous largest proven order was 7; SNI refreshing gadgets (ibid.); and NI multiplication gadgets from Groß et al. (TIS@CCS 2016) secure in presence of glitches. We also reduce the randomness cost of some existing gadgets, notably for the implementation-friendly case of 8 shares, improving here the previous best results by 17% (resp. 19%) for SNI multiplication (resp. refreshing).
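To make the basic object of these gadgets concrete, here is a minimal additive (Boolean) sharing over GF(2): a secret bit is split into n random shares whose XOR recombines to the secret. This is a hedged illustrative sketch of masking itself, not the paper's verification algorithm; the function names are mine.

```python
import secrets

def share(bit, n):
    """Split a secret bit into n shares whose XOR equals the bit."""
    shares = [secrets.randbits(1) for _ in range(n - 1)]
    last = bit
    for s in shares:
        last ^= s          # force the XOR of all n shares to equal `bit`
    shares.append(last)
    return shares

def unshare(shares):
    """Recombine the shares by XOR."""
    out = 0
    for s in shares:
        out ^= s
    return out
```

Any n-1 of the shares are uniformly random and independent of the secret; the NI/SNI notions above formalise when this independence survives the computation of a gadget on shared inputs.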

March 25, 2020. Youssef El Housni

Title:  Optimized and secure pairing-friendly elliptic curves suitable for one layer proof composition

Abstract: A zero-knowledge proof is a method by which one can prove knowledge of general non-deterministic polynomial (NP) statements. SNARKs are in addition non-interactive, short and cheap to verify. This property makes them suitable for recursive proof composition, that is, proofs attesting to the validity of other proofs. Recursive proof composition has been empirically demonstrated for pairing-based SNARKs via tailored constructions of expensive elliptic curves. We thus construct on top of the curve BLS12-377 a new pairing-friendly elliptic curve which is STNFS-secure and optimized for one layer of composition. We show that it is at least five times faster to verify a composed SNARK proof on this curve compared to the previous state of the art. We propose to name the new curve BW6-761.

February 6, 2020. Ben Smith

Place : LIX, Salle Henri Poincaré

Title : The supersingular isogeny problem in genus 2 and beyond (with Craig Costello)

Abstract : Let A and A’ be superspecial principally polarized abelian varieties of dimension g > 1 over F_p. For any fixed prime \ell \neq p, we give an algorithm that finds a path from A to A’ in the (\ell,…,\ell)-isogeny graph in \softO(p^{g-1}) group operations on a classical computer, and O(p^{(g-1)/2}) calls to a Grover oracle on a quantum computer. The idea is to find paths from A and A’ to nodes corresponding to products of lower-dimensional abelian varieties, and to recurse down in dimension until an elliptic path-finding algorithm can be used to connect the paths in dimension g = 1. In the general case, where A and A’ are any two nodes in the graph, this algorithm presents an asymptotic improvement over all of the algorithms in the current literature. In the special case where A and A’ are a known and relatively small number of steps away from each other (as is the case in higher-dimensional analogues of SIDH), it gives an asymptotic improvement over quantum claw-finding algorithms and the classical van Oorschot–Wiener algorithm.

January 28, 2020. Dmitrii Koshelev

Place : LIX, Salle Henri Poincaré

Title : A new elliptic curve point compression method based on $\mathbb{F}_{\!p}$-rationality of some generalized Kummer surfaces

Abstract : We propose a new compression method (to $2\log_2(p) + 3$ bits) for the $\mathbb{F}_{\!p^2}$-points of an elliptic curve $E_b\!: y^2 = x^3 + b$ (for $b \in \mathbb{F}_{\!p^2}^*$) of $j$-invariant $0$. It is based on the $\mathbb{F}_{\!p}$-rationality of some generalized Kummer surface $GK_b$. This is the geometric quotient of the Weil restriction $R_b := \mathrm{R}_{\: \mathbb{F}_{\!p^2}/\mathbb{F}_{\!p}}(E_b)$ under the order-$3$ automorphism restricted from $E_b$. More precisely, we apply the theory of conic bundles (i.e., conics over the function field $\mathbb{F}_{\!p}(t)$) to obtain explicit and quite simple formulas for a birational $\mathbb{F}_{\!p}$-isomorphism between $GK_b$ and $\mathbb{A}^{\!2}$. Our point compression method consists in evaluating these formulas. To recover (in the decompression stage) the original point from $E_b(\mathbb{F}_{\!p^2}) = R_b(\mathbb{F}_{\!p})$, we find a preimage under the natural map $R_b \to GK_b$ of degree $3$, i.e., we extract a cubic $\mathbb{F}_{\!p}$-root. For $p \not\equiv 1 \: (\mathrm{mod} \ 27)$ this amounts to a single exponentiation in $\mathbb{F}_{\!p}$, hence the new method is expected to be much faster than the classical one using the $x$-coordinate, which requires two exponentiations in $\mathbb{F}_{\!p}$.
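The "single exponentiation" in the decompression step can be illustrated in the simplest case $p \equiv 2 \ (\mathrm{mod}\ 3)$, where cubing is a bijection on $\mathbb{F}_{\!p}$ and its inverse is again a power map. This is a hedged sketch of the general idea only; the case $p \not\equiv 1 \ (\mathrm{mod}\ 27)$ treated in the talk is handled along similar lines.

```python
def cube_root(a, p):
    """Cube root in F_p for p = 2 (mod 3), where x -> x^3 is a bijection."""
    assert p % 3 == 2
    # gcd(3, p-1) = 1, so 3 is invertible modulo the order of F_p^*
    e = pow(3, -1, p - 1)
    # one modular exponentiation, as in the decompression cost above
    return pow(a, e, p)
```

Correctness follows from Fermat's little theorem: $(a^3)^e = a^{3e} = a^{1 + k(p-1)} = a$ for $a \neq 0$, and the formula also returns $0$ on input $0$.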


December 19, 2019. Alice Pellet-Mary (KU Leuven)

Place : LIX, Salle Philippe Flajolet

Title: An LLL Algorithm for Module Lattices

Abstract: A lattice is a discrete subgroup (i.e., a ZZ-module) of RR^n (where ZZ and RR are the sets of integers and real numbers). The LLL algorithm is a central algorithm for manipulating lattice bases. It takes as input a basis of a Euclidean lattice, and, within a polynomial number of operations, it outputs another basis of the same lattice consisting of rather short vectors.

Lattices are used in post-quantum cryptography, as finding a shortest vector of a lattice, given an arbitrary basis, is believed to be intractable even with a quantum computer. For practical reasons, many constructions based on lattices use structured lattices in order to improve the efficiency of the schemes. Most of the time, these structured lattices are R-modules of small rank, where R is the ring of integers of some number field. As an example, 11 schemes among the 12 lattice-based candidates for the NIST post-quantum standardization process use module lattices, with modules of small rank.

It is then tempting to try and adapt the LLL algorithm, which works over lattices (i.e., ZZ-modules), to these R-modules, as this would allow us to find rather short vectors of these module lattices. Previous works trying to extend the LLL algorithm to other rings of integers R focused on rings R that are Euclidean, as the LLL algorithm over ZZ crucially relies on the Euclidean division. In this talk, I will describe an LLL algorithm which works in any ring of integers R. This algorithm is heuristic and runs in quantum polynomial time if given access to an oracle solving the closest vector problem in a fixed lattice, depending only on the ring of integers R. This is joint work with Changmin Lee, Damien Stehlé and Alexandre Wallet.
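The flavour of what LLL does can be seen in rank 2, where the Lagrange-Gauss algorithm reduces a basis of a planar lattice by repeated size-reduction and swaps. This is standard background over ZZ^2, not the module-LLL algorithm of the talk; the function name is mine.

```python
def lagrange_reduce(u, v):
    """Lagrange-Gauss reduction of a basis (u, v) of a lattice in Z^2.

    Returns a reduced basis of the same lattice in which the first vector
    is a shortest nonzero lattice vector. Assumes u, v are nonzero and
    linearly independent.
    """
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    while True:
        if dot(v, v) < dot(u, u):              # keep u the shorter vector
            u, v = v, u
        # size-reduce v against u (exact integer rounding is advisable
        # for very large entries; float rounding suffices for this toy)
        m = round(dot(u, v) / dot(u, u))
        if m == 0:
            return u, v
        v = (v[0] - m * u[0], v[1] - m * u[1])
```

The swap-then-size-reduce loop is exactly the pattern LLL iterates across consecutive basis vectors in higher rank; the Euclidean division hidden in `round` is the step that previous works needed the ring R to supply.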

December 3, 2019. Alessandro Neri (Grace, Marie Curie Fellowship)

Place : LIX, room Grace Hopper

Title : 3-Tensors: a natural way of representing rank-metric codes

Abstract : In coding theory, two important issues concern the encoding and the storage of a code. In the most general case, given a finite set A and a positive integer n, a block code C is a subset of A^n, endowed with a distance function (usually the Hamming one). The space of messages M is then embedded in A^n via an injective encoding map E, such that E(M) = C. The need for fast encoding and efficient representation of codes led to the study of algebraic structures on all the defining objects (the alphabet, the space of messages and the code), and on the encoding map E. For this reason, one usually considers the alphabet as a field of q elements, the space of messages as the vector space F_q^k, and the encoding map as a linear function into F_q^n, which leads to the study of linear codes. In this framework, we find the generator matrix of an [n, k] code C, which serves both as a representation in order to store C and as an encoding map. Moreover, one can read the parameters of the code from its algebraic structure. A generator matrix can also be constructed for a vector rank-metric code, and in analogous ways one can extract information about the code from it.

In this talk we explain an analogous concept for rank-metric codes, which are considered as spaces of m × n matrices over a finite field F_q. This is the generator tensor, a similar object that one can use for storing, encoding and reading the parameters of a rank-metric code. Moreover, the tensor representation leads to the investigation of a new parameter, namely the tensor rank of an [n × m, k] code C, which gives a measure of the storage and encoding complexity of C. This also produces an interesting relation between rank-metric codes and algebraic complexity theory.

This is joint work with Eimear Byrne, Alberto Ravagnani and John Sheekey.
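The Hamming-metric side of the story can be made concrete with a standard textbook example (not taken from the talk): encoding via a generator matrix for the binary [7, 4] Hamming code, the matrix analogue of the generator tensor described above.

```python
from itertools import product

# systematic generator matrix G = [I_4 | P] of the binary [7,4] Hamming code;
# its rows form a basis of the code as a subspace of F_2^7
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(msg):
    """The encoding map E: send a 4-bit message to msg * G over F_2."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

# the code C = E(M) is the image of all 2^4 messages
codewords = [encode(list(m)) for m in product((0, 1), repeat=4)]
```

As the abstract notes, parameters can be read from the algebraic structure: for a linear code the minimum distance equals the minimum weight of a nonzero codeword (here 3), and storing G is enough to store the whole code.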

November 26, 2019. Isabella Panaccione (Grace)

Place : LIX, room Rosalind Franklin.

Title : The Power Error Locating Pairs algorithm

Abstract : In this talk, we will focus on particular decoding algorithms for Reed-Solomon codes. It is known that for these codes, it is possible to correct an amount of errors equal to or even greater than half the minimum distance. In general though, beyond this bound, the uniqueness of the solution is no longer guaranteed. Hence, given a vector y ∈ (F_q)^n, one can either use list decoding algorithms, like the Sudan and Guruswami-Sudan algorithms, which return the list of all codewords sufficiently close to y, or unique decoding algorithms, like the Power Decoding algorithm, which return the closest codeword to y when it is unique, at the price of some failure cases.

In this scenario, I will present a new unique decoding algorithm, namely the Power Error Locating Pairs algorithm. Based on Pellikaan's Error Correcting Pairs algorithm, it has for Reed-Solomon codes the same decoding radius as the Sudan algorithm and the Power Decoding algorithm, but with an improvement in terms of complexity. Moreover, like the Error Correcting Pairs algorithm, it can be applied to all codes which possess a special structure (among them, Reed-Solomon codes, algebraic geometry codes and cyclic codes).
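The decoding-radius discussion can be grounded with a toy Reed-Solomon code over GF(7) and a brute-force nearest-codeword decoder. This is an illustrative sketch of the code and the unique-decoding regime only, not the Power Error Locating Pairs algorithm; with n = 6 and k = 2 the minimum distance is d = n - k + 1 = 5, so up to (d-1)/2 = 2 errors the closest codeword is unique.

```python
from itertools import product

q, n, k = 7, 6, 2
xs = range(1, n + 1)                       # evaluation points in GF(7)

def encode(f):
    """Evaluate the degree-<k polynomial f = (a0, a1) at the points xs."""
    return tuple((f[0] + f[1] * x) % q for x in xs)

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

# all q^k codewords, indexed back to their message
codebook = {encode(m): m for m in product(range(q), repeat=k)}

def decode(y):
    """Brute-force nearest-codeword decoding: unique up to 2 errors here."""
    return codebook[min(codebook, key=lambda c: hamming(c, y))]
```

Beyond 2 errors a received word may sit equally close to two codewords, which is exactly why list decoders return a list and unique decoders accept failure cases.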

November 5, 2019 (with a follow-up on November 19). Maxime Romeas (Grace)

Place : LIX, room Nicole-Reine Lepaute.

Title : Constructive Cryptography: an introduction to the model and an application to remote storage.

Abstract : Constructive Cryptography (CC) is a paradigm, introduced by Maurer et al. in 2011, for defining the security of cryptographic primitives (encryption, signatures, key exchange, etc.) and for proving the security of protocols that use these primitives. In this model, a cryptographic scheme is defined as constructing a certain resource (a channel, a shared key, a client/server) with certain security guarantees, starting from a resource of the same type that lacks these guarantees.

A distinctive feature of this model is that it is composable, in the sense that a protocol obtained by composing several secure constructions is itself secure. This composability property simplifies both the description of protocols and their security proofs. Although this constructive approach already exists in other models such as Universal Composability, Maurer's model is very generic and abstract, which gives it many advantages.

In this talk, we will first present the CC paradigm, and then revisit the examples of secure communication and of securing remote data to illustrate the gains brought by recasting them in CC.

November 12, 2019. Simon Abelard (LIX).

Place : LIX, Salle Nicole-Reine Lepaute.

Title: Counting points on hyperelliptic curves defined over finite fields of
large characteristic: algorithms and complexity.

Abstract:
Counting points on algebraic curves has drawn a lot of attention due to its
many applications from number theory and arithmetic geometry to cryptography
and coding theory. In this talk, we focus on counting points on hyperelliptic
curves over finite fields of large characteristic p. In this setting, the most
suitable algorithms are currently those of Schoof and Pila, because their
complexities are polynomial in log p. However, their dependency on the genus g
of the curve is exponential, which is already painful even in genus 3.

Our contributions mainly consist of establishing new complexity bounds with a
smaller dependency on g in the exponent of log p. For hyperelliptic curves,
previous work showed that this exponent was quasi-quadratic in g, and we
reduced it to a linear dependency. Restricting to more special families of
hyperelliptic curves with explicit real multiplication (RM), we obtained a
constant bound for this exponent.

In genus 3, we proposed an algorithm based on those of Schoof and
Gaudry-Harley-Schost whose complexity is prohibitive in general, but turns out
to be reasonable when the input curves have explicit RM. In this more favorable
case, we were able to count points on a hyperelliptic curve defined over a
64-bit prime field.

In this talk, we will carefully reduce the problem of counting points to that
of solving polynomial systems. More precisely, we will see how our results are
obtained by considering either smaller or structured systems.

This talk contains joint work with P. Gaudry and P.-J. Spaenlehauer.
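As a baseline for the problem of the talk, here is naive exhaustive point counting on an elliptic curve y^2 = x^3 + ax + b over F_p, together with a check of the Hasse bound |#E - (p + 1)| <= 2*sqrt(p). Its cost is exponential in log p, which is precisely what Schoof-style algorithms avoid; this is a genus-1 sketch for illustration, whereas the talk concerns hyperelliptic curves of genus >= 2.

```python
def count_points(a, b, p):
    """Count the F_p-points of y^2 = x^3 + a*x + b, including infinity."""
    # number of square roots of each residue class mod p
    sq = {}
    for y in range(p):
        sq[y * y % p] = sq.get(y * y % p, 0) + 1
    total = 1                                  # the point at infinity
    for x in range(p):
        # each x contributes as many points as x^3 + a*x + b has square roots
        total += sq.get((x ** 3 + a * x + b) % p, 0)
    return total
```

For instance, y^2 = x^3 + x + 1 over F_7 has 5 points, well within the Hasse interval around p + 1 = 8.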