Title: Counting points on hyperelliptic curves defined over finite fields of
large characteristic: algorithms and complexity.
Counting points on algebraic curves has drawn a lot of attention due to its
many applications from number theory and arithmetic geometry to cryptography
and coding theory. In this talk, we focus on counting points on hyperelliptic
curves over finite fields of large characteristic p. In this setting, the most
suitable algorithms are currently those of Schoof and Pila, because their
complexities are polynomial in log p. However, their dependency on the genus g
of the curve is exponential, which is already painful even in genus 3.
Our contributions mainly consist of establishing new complexity bounds with a
smaller dependency on g in the exponent of log p. For hyperelliptic curves,
previous work showed that this exponent was quasi-quadratic in g, and we
reduced it to a linear dependency. Restricting to more special families of
hyperelliptic curves with explicit real multiplication (RM), we obtained a
constant bound for this exponent.
In genus 3, we proposed an algorithm based on those of Schoof and
Gaudry-Harley-Schost whose complexity is prohibitive in general, but turns out
to be reasonable when the input curves have explicit RM. In this more favorable
case, we were able to count points on a hyperelliptic curve defined over a
64-bit prime field.
In this talk, we will carefully reduce the problem of counting points to that
of solving polynomial systems. More precisely, we will see how our results are
obtained by considering either smaller or structured systems.
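To make the counting problem concrete, here is the brute-force baseline: enumerate x and count square roots of f(x). It costs O(p) field operations, which is exactly what Schoof-Pila-style algorithms (polynomial in log p) avoid; the genus-2 curve below is a hypothetical example, not one from the talk.

```python
# Naive point count on a hyperelliptic curve y^2 = f(x) over F_p.
# Brute force, O(p) operations: shown only to fix the object being
# counted, not the talk's algorithm.

def naive_point_count(f_coeffs, p):
    """Count points on y^2 = f(x) over F_p (odd-degree f: one point at infinity)."""
    # Number of square roots of each residue: tabulate y^2 for all y.
    sqrt_count = [0] * p
    for y in range(p):
        sqrt_count[y * y % p] += 1
    total = 1  # the single point at infinity for odd-degree f
    for x in range(p):
        fx = 0
        for c in f_coeffs:  # Horner evaluation, coefficients high to low
            fx = (fx * x + c) % p
        total += sqrt_count[fx]
    return total

# Genus-2 example: y^2 = x^5 + 3x + 1 over F_101.
print(naive_point_count([1, 0, 0, 0, 3, 1], 101))
```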
This is joint work with P. Gaudry and P.-J. Spaenlehauer.
Title : Wave: A New Family of Trapdoor One-Way Preimage Sampleable Functions Based on Codes
Abstract : We present here a new family of trapdoor one-way functions that are Preimage Sampleable on Average (PSA) based on codes: the Wave-PSA family. Our trapdoor function is one-way under two computational assumptions: the hardness of generic decoding for high weights and the indistinguishability of generalized (U, U+V)-codes. Our proof follows the GPV strategy [GPV08]. By including rejection sampling, we ensure the proper distribution for the trapdoor inverse output. The domain-sampling property of our family is ensured by using and proving a variant of the leftover hash lemma. We instantiate the new Wave-PSA family with ternary generalized (U, U+V)-codes to design a “hash-and-sign” signature scheme which achieves existential unforgeability under adaptive chosen-message attacks (EUF-CMA) in the random oracle model. For 128 bits of classical security, signature sizes are on the order of 13 thousand bits, the public key size is on the order of 3 megabytes, and the rejection rate is limited to one rejection every 100 signatures.
Abstract: We study the 5G-AKA authentication protocol described in the 5G mobile communication standards. This version of AKA aims to achieve better privacy than the 3G and 4G versions through the use of asymmetric randomized encryption. Nonetheless, we show that, except for the IMSI-catcher attack, all known attacks against 5G-AKA privacy still apply.
Next, we modify the 5G-AKA protocol to prevent these attacks while satisfying the 5G-AKA efficiency constraints as much as possible. We then formally prove that our protocol is $\sigma$-unlinkable. This is a new security notion which allows for a fine-grained quantification of a protocol's privacy. Our security proof is carried out in the Bana-Comon indistinguishability logic. We also prove mutual authentication as a secondary result.
Title: Invariant attacks: how to guard against them
Abstract: In 2011, Gregor Leander and his co-authors described a new type of attack on block ciphers, which exploits the existence of a vector space left invariant by the components used in the cipher. These attacks were subsequently generalized in 2015 and are known as nonlinear invariant attacks.
Since then, these attacks have exposed new vulnerabilities in a large number of block ciphers, notably SPN (Substitution-Permutation Network) block ciphers in which the round keys are equal to the master key plus a round constant that is often chosen arbitrarily.
In this talk, we explain why these attacks are feasible on certain ciphers, and we derive from this a new design criterion for block ciphers. We will see how to choose the linear layer as well as the round constants so as to guarantee the absence of invariants.
This work comes at a pivotal moment, since it assists cipher designers while NIST is currently standardizing so-called lightweight ciphers. It was published at CRYPTO in 2017, in collaboration with Christof Beierle, Anne Canteaut and Gregor Leander: Proving Resistance against Invariant Attacks: How to Choose the Round Constants.
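A toy version of the criterion from the CRYPTO 2017 paper can be sketched as follows: for a linear layer L over GF(2)^n and a round-constant difference c, compute the dimension of the smallest L-invariant subspace containing c; if it equals n for the chosen constants, invariants of the kind used by these attacks are ruled out. The 4-bit rotation below is a hypothetical stand-in for a real linear layer.

```python
# Dimension of the smallest L-invariant subspace of GF(2)^n containing c.
# Vectors are bitmasks; the matrix is a list of row bitmasks.

def apply_linear(rows, v):
    """Multiply a GF(2) matrix (rows as bitmasks) by a vector (bitmask)."""
    out = 0
    for i, row in enumerate(rows):
        if bin(row & v).count("1") % 2:
            out |= 1 << i
    return out

def invariant_closure_dim(rows, c):
    """dim of the smallest subspace containing c and stable under the matrix."""
    pivots = {}  # leading-bit position -> reduced basis vector

    def reduce_add(v):
        # Gaussian elimination over GF(2): returns 0 if v is already in the span.
        while v:
            lead = v.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = v
                return v
            v ^= pivots[lead]
        return 0

    stack = [c]
    while stack:
        v = reduce_add(stack.pop())
        if v:
            stack.append(apply_linear(rows, v))  # close the span under L
    return len(pivots)

# 4-bit cyclic rotation: a single bit generates the whole space (dim 4),
# while the all-ones constant spans an invariant line (dim 1).
rot = [0b1000, 0b0001, 0b0010, 0b0100]
print(invariant_closure_dim(rot, 0b0001), invariant_closure_dim(rot, 0b1111))  # 4 1
```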
Title: On the complexity of modular composition of generic polynomials
Abstract: This talk is about algorithms for modular composition of univariate polynomials, and for computing minimal polynomials. For two univariate polynomials a and g over a commutative field, modular composition asks to compute h(a) mod g for some given h, while the minimal polynomial problem is to compute h of minimal degree such that h(a) = 0 mod g. For generic g and a, we propose algorithms whose complexity bound improves upon previous algorithms and in particular upon Brent and Kung’s approach (1978); the new complexity bound is subquadratic in the degree of g and a even when using cubic-time matrix multiplication. Our improvement comes from the fast computation of specific bases of bivariate ideals, and from efficient operations with these bases thanks to fast univariate polynomial matrix algorithms.
This is joint work with Seung Gyu Hyun, Bruno Salvy, Eric Schost, and Gilles Villard.
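For reference, here is the textbook baseline the talk improves upon: modular composition by Horner's rule, i.e., one multiplication modulo g per coefficient of h, roughly cubic in the degree overall. Polynomials are coefficient lists (lowest degree first), g is monic, q is prime; the example polynomials are hypothetical.

```python
# Baseline modular composition h(a) mod g over F_q via Horner's rule.

def poly_mulmod(a, b, g, q):
    """(a * b) mod g over F_q, with g monic."""
    d = len(g) - 1
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % q
    # Reduce each coefficient of x^k (k >= d) using the relation g(x) = 0.
    for k in range(len(prod) - 1, d - 1, -1):
        c = prod[k]
        if c:
            for j in range(d + 1):
                prod[k - d + j] = (prod[k - d + j] - c * g[j]) % q
    return (prod + [0] * d)[:d]

def modcomp(h, a, g, q):
    """h(a) mod g over F_q: one mulmod per coefficient of h (high to low)."""
    res = [0]
    for c in reversed(h):
        res = poly_mulmod(res, a, g, q)
        res[0] = (res[0] + c) % q
    return res

# x^2 composed with x, modulo x^2 + 1 over F_7: x^2 = -1 = 6.
print(modcomp([0, 0, 1], [0, 1], [1, 0, 1], 7))  # [6, 0]
```

Brent and Kung's baby-step/giant-step method, and the bivariate-ideal approach of the talk, both aim to beat the cost of this loop.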
Title : Jasmin: a workbench for high-assurance low-level programming.
Abstract : High-efficiency low-level programming — and in particular the implementation of cryptographic primitives — challenges formal verification methods for correctness and security of the implementations. To overcome this difficulty, the Jasmin language and compiler provide high-level abstractions and predictability of the generated assembly so as to write verifiable, efficient low-level programs. The compiler is also formally verified in Coq so as to enable reasoning about functional correctness at the source level. Moreover, we will present in this talk a proof methodology to enable reasoning about side-channel attack resistance at the source level: building on compiler correctness arguments, we show how to justify preservation, along the compilation process, of the constant-time property.
Title: Nearly linear time encodable codes beating the Gilbert-Varshamov bound.
Abstract: Error-correcting codes enable reliable transmission of information over an erroneous channel. One typically desires codes to transmit information at a high rate while still being able to correct a large fraction of errors. However, rate and relative distance (which quantifies the fraction of errors corrected) are competing quantities with a trade-off. The Gilbert-Varshamov bound assures that for every rate R, relative distance D, and alphabet size Q, there exists an infinite family of codes with R + H_Q(D) >= 1-\epsilon. Constructing codes meeting or beating the Gilbert-Varshamov bound remained a long-standing open problem, until the advent of algebraic geometry codes by Goppa. In a seminal paper, for prime power squares Q ≥ 7^2, Tsfasman-Vladut-Zink constructed algebraic geometry codes beating the Gilbert-Varshamov bound. A rare occasion where an explicit construction yields better parameters than guaranteed by randomized arguments! For codes to find use in practice, one often requires fast encoding and decoding algorithms in addition to satisfying a good trade-off between rate and minimum distance. A natural question, which remains unresolved, is whether there exist linear time encodable and decodable codes meeting or beating the Gilbert-Varshamov bound. In this talk, I shall present the first nearly linear time encodable codes beating the Gilbert-Varshamov bound, along with a nearly quadratic decoding algorithm. Applications to secret sharing, explicit construction of pseudorandom objects, Probabilistically Checkable Interactive Proofs and the like will also be discussed.
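The two bounds in the abstract are easy to compare numerically: the Gilbert-Varshamov rate is R_GV(D) = 1 - H_Q(D), while the Tsfasman-Vladut-Zink construction achieves R_TVZ(D) = 1 - D - 1/(sqrt(Q) - 1) when Q is a square prime power. For Q = 49 the TVZ rate already exceeds R_GV on a range of distances, which is the "beating the GV bound" phenomenon.

```python
# q-ary entropy and the GV vs. TVZ rate comparison from the abstract.
import math

def H(Q, D):
    """q-ary entropy H_Q(D), for 0 < D < 1 - 1/Q."""
    return (D * math.log(Q - 1, Q)
            - D * math.log(D, Q)
            - (1 - D) * math.log(1 - D, Q))

def gv_rate(Q, D):
    """Rate guaranteed by the Gilbert-Varshamov bound."""
    return 1 - H(Q, D)

def tvz_rate(Q, D):
    """Rate of the Tsfasman-Vladut-Zink codes (Q a square prime power)."""
    return 1 - D - 1 / (math.sqrt(Q) - 1)

print(gv_rate(49, 0.5), tvz_rate(49, 0.5))  # TVZ is the larger of the two
```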
Title : An efficient geometric algorithm for computing Riemann-Roch spaces.
Abstract : The effective computation of bases of Riemann-Roch spaces arises in
many practical areas, notably for arithmetic in Jacobians of curves and for
the construction of algebraic-geometric error-correcting codes. We propose a
probabilistic Las Vegas variant of the Brill-Noether algorithm described by
Goppa for computing a basis of the Riemann-Roch space L(D) associated with a
divisor D on a projective plane curve C of degree d defined over a
sufficiently large perfect field K. We prove that its complexity (measured in
the number of arithmetic operations in the field K) is in
O(max(d^(2\omega), deg(D_+)^(\omega))), where \omega is the linear-algebra
constant and D_+ is the smallest effective divisor greater than or equal to
D. This probabilistic algorithm may fail, but under a few conditions we prove
that its failure probability is bounded by O(max(d^4, deg(D_+)^2)/|E|), where
E is a finite subset of K from which elements of K can be drawn uniformly at
random. We implemented this algorithm in C++/NTL, and we will present
experimental results that seem to indicate an improvement in running times
compared to the implementation in the Magma computer algebra system.
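Two classical formulas give sanity checks for such a computation: the genus of a smooth plane curve of degree d is (d-1)(d-2)/2, and the Riemann-Roch theorem predicts dim L(D) = deg(D) - g + 1 once deg(D) > 2g - 2. A minimal sketch (the example numbers are illustrative, not from the talk):

```python
# Expected dimension of a Riemann-Roch space on a smooth plane curve.

def genus_smooth_plane_curve(d):
    """Genus of a smooth projective plane curve of degree d."""
    return (d - 1) * (d - 2) // 2

def riemann_roch_dim(deg_D, g):
    """dim L(D) for a divisor of degree deg_D > 2g - 2 on a genus-g curve."""
    assert deg_D > 2 * g - 2, "formula only exact beyond degree 2g - 2"
    return deg_D - g + 1

# A smooth plane quintic (d = 5) has genus 6; a degree-15 divisor on it
# yields a Riemann-Roch space of dimension 15 - 6 + 1 = 10.
print(riemann_roch_dim(15, genus_smooth_plane_curve(5)))  # 10
```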
Abstract : Oblivious linear-function evaluation (OLE) is a secure
two-party protocol allowing a receiver to learn any linear combination
of a pair of field elements held by a sender. OLE serves as a common
building block for secure computation of arithmetic circuits,
analogously to the role of oblivious transfer (OT) for boolean circuits.
A useful extension of OLE is vector OLE (VOLE), allowing the receiver to
learn any linear combination of two vectors held by the sender. In
several applications of OLE, one can replace a large number of instances
of OLE by a smaller number of long instances of VOLE. This motivates the
goal of amortizing the cost of generating long instances of VOLE.
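The correlation itself is simple to state: a trusted dealer would give the sender vectors u, v over a field and the receiver a scalar x together with w = u*x + v, so the receiver learns one linear combination of the sender's vectors. The talk's generators aim to produce exactly this correlation from short seeds without a dealer; the sketch below is only the ideal (trusted-dealer) functionality, with arbitrary field size and length.

```python
# Ideal (trusted-dealer) sampling of a random VOLE correlation over F_p.
import random

def sample_vole(p, n):
    """Return (sender's view, receiver's view) of a random VOLE instance."""
    u = [random.randrange(p) for _ in range(n)]
    v = [random.randrange(p) for _ in range(n)]
    x = random.randrange(p)
    w = [(ui * x + vi) % p for ui, vi in zip(u, v)]  # w = u*x + v
    return (u, v), (x, w)

p = 2**61 - 1  # a Mersenne prime, chosen only for illustration
(u, v), (x, w) = sample_vole(p, 8)
assert all(wi == (ui * x + vi) % p for ui, vi, wi in zip(u, v, w))
```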
We suggest a new approach for fast generation of pseudo-random instances
of VOLE via a deterministic local expansion of a pair of short
correlated seeds and no interaction. This provides the first example of
compressing a non-trivial and cryptographically useful correlation with
good concrete efficiency. Our VOLE generators can be used to enhance the
efficiency of a host of cryptographic applications. These include secure
arithmetic computation and noninteractive zero-knowledge proofs.
Our VOLE generators are based on a novel combination of function secret
sharing (FSS) for multi-point functions and linear codes in which
decoding is intractable. Their security can be based on variants of the
syndrome decoding assumption over large fields that resist known
attacks. We provide several constructions that offer tradeoffs between
different efficiency measures and the underlying intractability assumptions.
Toward the end of the talk, I will also discuss exciting recent
developments of this work regarding the compression of more general
(pseudo)random bilinear correlations.
Antonin Leroux, 13:30, student (école)
Efficient Proactive Multi-Party Computation
Secure Multi-Party Computation (MPC) allows a set of n mutually distrusting
parties to compute functions on their private inputs while guaranteeing
secrecy of the inputs and correctness of the computation. Most MPC protocols
achieve such security only against a minority of corrupted parties (i.e.,
assuming an honest majority of more than n/2 parties). Based on cryptographic
assumptions, security against dishonest majorities can be obtained, but it
requires more computation and communication. These levels of security are
often not sufficient in real life, especially against threats that require
long-term security against powerful, persistent attackers (e.g., so-called
Advanced Persistent Threats). In such cases, all the parties involved in the
protocol may become corrupted at some point.
Proactive MPC (PMPC) aims to address such mobile persistent threats: PMPC
guarantees privacy and correctness against an adversary allowed to change the
set of corrupted parties over time, as long as that set is bounded by a
threshold at any given instant. Until recently, PMPC protocols existed only
against a dishonest minority. The first generic PMPC protocol against a
dishonest majority was introduced in a recent work to be presented in
September 2018; it is a feasibility result demonstrating that proactive
security against a dishonest majority can be achieved, but at a high
communication cost.
This talk presents our most recent work, which develops an efficient generic
PMPC protocol secure against a dishonest majority. We improve the overall
communication complexity of generic PMPC from O(n^4) to O(n^2). Two necessary
stepping stones for generic PMPC are Proactive Secret Sharing (PSS) and a
secure distributed multiplication protocol. In this work we introduce a new
PSS scheme requiring only O(n^2) communication. We also present a
multiplication protocol secure against dishonest majorities in the proactive
setting; this protocol introduces a new, efficient way to perform
multiplication in the dishonest-majority setting without using
pre-computation.
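The classical honest-majority building block behind PSS can be sketched as Shamir sharing plus a proactive refresh: between epochs, the parties add a fresh degree-t sharing of zero, so shares stolen in an old epoch become useless while the secret is unchanged. The following is a minimal single-machine sketch over a small prime field, not the dishonest-majority scheme of the talk; the modulus and parameters are illustrative.

```python
# Shamir secret sharing with a proactive refresh step.
import random

P = 2**31 - 1  # prime field modulus (illustrative)

def share(secret, t, n):
    """Degree-t Shamir shares of secret for parties 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t+1 (or more) shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

def refresh(shares, t):
    """Proactive refresh: add a fresh degree-t sharing of 0 to every share."""
    zero_shares = share(0, t, len(shares))
    return [(x, (y + z) % P) for (x, y), (_, z) in zip(shares, zero_shares)]

s = share(1234, 2, 5)
assert reconstruct(refresh(s, 2)[:3]) == 1234  # secret survives the refresh
```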