March 12, Juan Garay: Cryptography in the Blockchain Era

The advent of blockchain protocols such as Bitcoin has ignited much excitement, not only for their realization of novel financial instruments, but also for offering alternative solutions to classical problems in fault-tolerant distributed computing and cryptographic protocols. Underlying many such protocols is a primitive known as “proof of work” (PoW), which for over 20 years has been liberally applied in the cryptography and security literature to a variety of settings, including spam mitigation, Sybil attacks, and denial-of-service protection; its role in the design of blockchain-based protocols, however, is arguably its most impactful application. At a high level, PoWs enable such protocols by slowing message passing down for all parties indiscriminately, thus generating opportunities for honest parties’ views to converge, under the assumption that their aggregate computational power exceeds that of the adversary.
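By way of illustration only (a minimal hash-puzzle sketch in the classic Bitcoin style, not one of the constructions analyzed in the talk; the use of SHA-256 and the difficulty parameter below are assumptions for the example), a PoW is moderately hard to produce but trivial to verify:

    import hashlib

    def prove(data: bytes, difficulty_bits: int) -> int:
        """Search for a nonce such that SHA-256(data || nonce) falls below a
        target; the expected cost is about 2**difficulty_bits hash evaluations."""
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    def verify(data: bytes, nonce: int, difficulty_bits: int) -> bool:
        """Checking a claimed solution costs a single hash evaluation."""
        target = 1 << (256 - difficulty_bits)
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < target

    # Example: roughly 2^16 hash attempts on average, verified instantly.
    nonce = prove(b"block header", difficulty_bits=16)
    assert verify(b"block header", nonce, difficulty_bits=16)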

In this talk we discuss the evolution of our understanding of the PoW primitive, where pinning down the exact properties sufficient to prove the security of the protocols it enables has been elusive. We first identify such properties, and then construct a protocol whose security can be reduced to them in the standard model, assuming a common reference string (CRS); all previous protocols rely on the “random oracle” methodology. Second, regarding the realizability of important problems in distributed computing and cryptographic protocols, such as Byzantine agreement and secure multiparty computation, we show how this type of primitive allows us to realize them in the presence of a larger number of corrupted parties, and under weaker setup assumptions, than classical impossibility results would seem to permit. We resolve this apparent contradiction with a new paradigm we call “Resource-Restricted Cryptography,” which hints at “how to do cryptography when cryptography is not possible.”

September 20, 2017, Simon Masson: The GLV method in dimension 4 on Q-curves

We study a GLV-type method in dimension 4, based on families of Q-curves with a CM endomorphism of small degree. For example, the curve FourQ, proposed by Costello and Longa (Microsoft) and offering 127 bits of security, uses this method to compute [k]P efficiently. In order to extend Costello and Longa's proposal to other security levels, we develop tools for the systematic search for suitable curves and the construction of the associated endomorphisms.

We obtain a curve offering 254 bits of security, equipped with an endomorphism acting as \sqrt{2} as well as a CM endomorphism acting as \sqrt{-11} on the group of points E(\mathbb{F}_{p^2}). We give explicit, simplified expressions for these endomorphisms. The field of definition of this curve has well-known efficient arithmetic, which further speeds up the scalar multiplication.
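As a generic sketch of how such endomorphisms are exploited (standard 4-dimensional GLV/GLS decomposition, not the talk's exact formulas): writing \phi for the endomorphism acting as \sqrt{2} and \psi for the CM endomorphism acting as \sqrt{-11}, with eigenvalues \lambda and \mu on the subgroup of prime order N, the scalar k is decomposed as

    k \equiv k_0 + k_1\lambda + k_2\mu + k_3\lambda\mu \pmod{N},  with |k_i| \approx N^{1/4},

so that

    [k]P = [k_0]P + [k_1]\phi(P) + [k_2]\psi(P) + [k_3](\phi\circ\psi)(P)

is computed with a single 4-way multi-scalar multiplication on quarter-length scalars, roughly quartering the number of doublings compared with a plain double-and-add.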

June 16th, 2017, Thomas Prest (Thalès): Sharper Bounds in Lattice-Based Cryptography using the Rényi Divergence

The Rényi divergence is a measure of divergence between distributions. It has recently found several applications in lattice-based cryptography. The contribution of this paper is twofold. First, we give theoretical results which render it more efficient and easier to use. This is done by providing two lemmas, which give tight bounds in very common situations: for distributions that are tailcut or have a bounded relative error. We then connect the Rényi divergence to the max-log distance. This allows the Rényi divergence to indirectly benefit from all the advantages of a distance. Second, we apply our new results to five practical use cases. This allows us to claim 256 bits of security with a floating-point precision of 53 bits, in cases that until now either required more than 150 bits of precision or were limited to 100 bits of security: rejection sampling, trapdoor sampling (61 bits of precision in this case), and a new sampler by Micciancio and Walter. We also propose a new and compact approach for table-based sampling, and squeeze the standard deviation of trapdoor samplers by a factor that provides a gain of 30 bits of security in practice.
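For reference (the usual definition in this line of work; conventions may differ slightly from the paper's), the Rényi divergence of order a \in (1, +\infty) between distributions P and Q with \mathrm{Supp}(P) \subseteq \mathrm{Supp}(Q) is

    R_a(P \| Q) = \Big( \sum_{x \in \mathrm{Supp}(P)} \frac{P(x)^a}{Q(x)^{a-1}} \Big)^{\frac{1}{a-1}},

and its usefulness comes from the multiplicative probability-preservation property Q(E) \ge P(E)^{a/(a-1)} / R_a(P \| Q) for any event E, in place of the additive loss incurred when arguing with the statistical distance.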

February 13, 2017, Joost Renes (Radboud Universiteit Nijmegen): Efficient compression of SIDH public keys

Supersingular-isogeny Diffie-Hellman (SIDH) is an attractive candidate for post-quantum key exchange, in large part due to its relatively small public key sizes. A recent paper showed that the public keys in the original SIDH scheme can be further compressed by around a factor of two, but reported that the performance penalty in utilizing this compression blew the overall SIDH runtime out by more than an order of magnitude. Given that the runtime of SIDH key exchange is currently its main drawback in relation to its lattice- and code-based post-quantum alternatives, an order of magnitude performance penalty for a factor of two improvement in bandwidth presents a trade-off that is unlikely to favor public-key compression in many scenarios.
This talk will introduce algorithms and techniques that accelerate SIDH public-key compression by more than an order of magnitude, making it roughly as fast as a round of standalone SIDH key exchange, while further reducing the size of the compressed public keys by 12.5%. These improvements enable the practical use of compression, achieve public keys of only 330 bytes for the concrete parameters used to target 128 bits of quantum security, and further strengthen SIDH as a promising post-quantum primitive.
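As a rough size check (assuming the 751-bit prime commonly used for the 128-bit quantum security target; the arithmetic is ours, not taken from the talk): the 12.5% improvement corresponds to shrinking the compressed representation from about 4 \log_2 p bits to (7/2) \log_2 p bits, and

    (7/2) \times 751 \approx 2629 bits \approx 330 bytes,

which matches the public-key size quoted above.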

January 5th, 2017, Nick Coxon: Systematic encoding of multiplicity codes

We describe a subquadratic algorithm for systematic encoding of multiplicity codes. These codes generalise the construction of Reed-Muller codes by additionally evaluating partial derivatives, allowing higher rates to be obtained while retaining sublinear local decoding. We base our encoding algorithm upon the multivariate interpolation and evaluation algorithms of van der Hoeven and Schost (2013), which we partially generalise to handle the presence of derivatives.
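As background (the standard definition of Kopparty, Saraf and Yekhanin, recalled here for context rather than taken from the talk): the multiplicity code of order s evaluates, at every point \mathbf{a} \in \mathbb{F}_q^m, all Hasse derivatives of order less than s of a polynomial f of bounded degree,

    c(f) = \big( f^{(\mathbf{i})}(\mathbf{a}) : |\mathbf{i}| < s \big)_{\mathbf{a} \in \mathbb{F}_q^m},

so that s = 1 recovers Reed-Muller codes, while s > 1 allows degrees exceeding q and hence higher rates. Systematic encoding asks that the message coordinates appear verbatim among the codeword symbols, which is what the interpolation step of the encoder must guarantee.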

October 3, 2016, Aurore Guillevic: Key-size updates for pairing-friendly elliptic curves

The building-block function in pairing-based cryptography is a Weil or Tate pairing on an elliptic (or hyperelliptic) curve defined over a finite field and of small embedding degree. The security relies on the hardness of discrete logarithm computation both in the elliptic curve and in a finite field: the target group of a cryptographic bilinear pairing lies in a finite field GF(p^n), where the embedding degree n is usually small, 1 ≤ n ≤ 24, and most of the time n = 12. (Small-characteristic finite fields are now avoided: there exists a quasi-polynomial-time algorithm to compute discrete logarithms there.) Until 2014 it was assumed that computing a discrete logarithm in a large-characteristic field GF(p^n) was at least as hard as in a prime field GF(q) with log q = log p^n (i.e., of the same total size). This is no longer true: the Number Field Sieve algorithm now has better variants for such finite fields. To maintain the same security, these finite fields should be enlarged. However, the asymptotic complexity of the NFS algorithm is not precise enough to derive tight key-size intervals for a given security level. We propose to study the ingredient of NFS that determines its complexity: the size of the elements (the norms) involved in the relation collection. Graphs of these sizes, computed experimentally, will be presented for 2 ≤ n ≤ 12, with quite interesting results in some cases.
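For orientation (standard asymptotics recalled here, not results from the talk): every NFS variant for discrete logarithms in GF(p^n), of total size Q = p^n, runs in time

    L_Q[1/3, c] = \exp\big( (c + o(1)) (\ln Q)^{1/3} (\ln \ln Q)^{2/3} \big),

where c = (64/9)^{1/3} \approx 1.92 is the classical prime-field constant, while the recent variants alluded to above lower c (down to (32/9)^{1/3} \approx 1.53 for primes of special form). Since the o(1) term is not explicit, these formulas alone cannot give tight key sizes, which is precisely why the norm sizes arising in the relation collection are examined experimentally.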