June 16th, 2017, Thomas Prest (Thalès): Sharper Bounds in Lattice-Based Cryptography using the Rényi Divergence

The Rényi divergence is a measure of divergence between distributions. It has recently found several applications in lattice-based cryptography. The contribution of this paper is twofold. First, we give theoretical results that render it more efficient and easier to use. This is done by providing two lemmas that give tight bounds in very common situations – for distributions that are tailcut or have a bounded relative error. We then connect the Rényi divergence to the max-log distance, which allows the Rényi divergence to indirectly benefit from all the advantages of a distance. Second, we apply our new results to five practical use cases. They allow us to claim 256 bits of security for a floating-point precision of 53 bits, in cases that until now either required more than 150 bits of precision or were limited to 100 bits of security: rejection sampling, trapdoor sampling (61 bits in this case) and a new sampler by Micciancio and Walter. We also propose a new and compact approach for table-based sampling, and squeeze the standard deviation of trapdoor samplers by a factor that provides a gain of 30 bits of security in practice.
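As a small illustration (not taken from the talk), the order-a Rényi divergence in the multiplicative form common in lattice-based security proofs, R_a(P‖Q) = (Σ_x P(x)^a / Q(x)^(a-1))^(1/(a-1)), can be computed directly for finite distributions; the example distribution with a small relative error from uniform is hypothetical:

```python
# Minimal sketch (not from the talk): the multiplicative Rényi divergence
# R_a(P || Q) = (sum_x P(x)^a / Q(x)^(a-1))^(1/(a-1)), as commonly used
# in lattice-based security proofs, for finite distributions given as lists.

def renyi_divergence(p, q, a):
    """Order-a Rényi divergence (multiplicative form), for a > 1.

    p, q: probability mass lists over the same support, with q[i] > 0
    wherever p[i] > 0.
    """
    assert a > 1
    s = sum(pi**a / qi**(a - 1) for pi, qi in zip(p, q) if pi > 0)
    return s ** (1.0 / (a - 1))

# Illustrative example: a distribution with small relative error delta
# from uniform, the kind of setting the "bounded relative error" lemma targets.
n, delta = 4, 0.01
q = [1.0 / n] * n
p = [(1 + delta) / n, (1 - delta) / n] + [1.0 / n] * (n - 2)
print(renyi_divergence(p, q, a=2))  # very close to 1 for small delta
```

For identical distributions the divergence is exactly 1, and it grows with the relative error, which is why a bounded relative error translates into a tight security bound.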

February 13, 2017, Joost Renes (Radboud Universiteit Nijmegen): Efficient compression of SIDH public keys

Supersingular-isogeny Diffie-Hellman (SIDH) is an attractive candidate for post-quantum key exchange, in large part due to its relatively small public key sizes. A recent paper showed that the public keys in the original SIDH scheme can be further compressed by around a factor of two, but reported that the performance penalty of this compression increased the overall SIDH runtime by more than an order of magnitude. Given that the runtime of SIDH key exchange is currently its main drawback in relation to its lattice- and code-based post-quantum alternatives, an order-of-magnitude performance penalty for a factor-of-two improvement in bandwidth presents a trade-off that is unlikely to favor public-key compression in many scenarios.
This talk will introduce algorithms and techniques that accelerate SIDH public-key compression by more than an order of magnitude, making it roughly as fast as a round of standalone SIDH key exchange, while further reducing the size of the compressed public keys by 12.5%. These improvements enable the practical use of compression, achieving public keys of only 330 bytes for the concrete parameters used to target 128 bits of quantum security, and further strengthen SIDH as a promising post-quantum primitive.

January 5th, 2017, Nick Coxon: Systematic encoding of multiplicity codes

We describe a subquadratic algorithm for systematic encoding of multiplicity codes. These codes generalise the construction of Reed-Muller codes by additionally evaluating partial derivatives, allowing higher rates to be obtained while retaining sublinear local decoding. We base our encoding algorithm upon the multivariate interpolation and evaluation algorithms of van der Hoeven and Schost (2013), which we partially generalise to handle the presence of derivatives.
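To give a feel for the object being encoded (this toy sketch is not the paper's subquadratic algorithm), a univariate multiplicity-2 code over a small prime field encodes a message polynomial f by recording both f(x) and its derivative f'(x) at every field element, generalising plain evaluation codes in the same way multiplicity codes generalise Reed-Muller codes; the field size and message below are illustrative choices:

```python
# Toy sketch (hypothetical parameters, not the paper's algorithm):
# a univariate multiplicity-2 code over F_p. A message polynomial f of
# low degree is encoded as the pair (f(x), f'(x)) for every x in F_p,
# so each point carries two symbols instead of one, allowing higher rates.

P = 17  # small prime field F_p, for illustration only

def horner(coeffs, x):
    """Evaluate a polynomial (coefficients low-to-high) at x, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def derivative(coeffs):
    """Formal derivative mod P; for first order this matches the Hasse derivative."""
    return [(i * c) % P for i, c in enumerate(coeffs)][1:]

def encode(coeffs):
    """Codeword: one symbol (f(x), f'(x)) per field element x."""
    d = derivative(coeffs)
    return [(horner(coeffs, x), horner(d, x)) for x in range(P)]

# Degree-4 message polynomial 1 + 2x + 3x^3 + 5x^4 over F_17.
codeword = encode([1, 2, 0, 3, 5])
print(codeword[:3])
```

Recording derivatives is what lets multiplicity codes support messages of degree beyond the field size while keeping local decoding sublinear; the encoding above is non-systematic, and making it systematic (message symbols appearing verbatim in the codeword) efficiently is exactly the problem the talk addresses.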

October 3rd, 2016, Aurore Guillevic: Key-size updates for pairing-friendly elliptic curves

The building-block function in bilinear pairing-based cryptography is a Weil or Tate pairing over an elliptic (or hyperelliptic) curve, defined over a finite field, and of small embedding degree. The security relies on the hardness of discrete logarithm computation both in an elliptic curve and in a finite field: the target group of a cryptographic bilinear pairing is a finite field GF(p^n), where the embedding degree n is usually small, 1 ≤ n ≤ 24, and most of the time n = 12. (Small-characteristic finite fields are now avoided: there exists a quasi-polynomial-time algorithm to compute discrete logarithms there.) Until 2014 it was assumed that computing a discrete logarithm in a large-characteristic field GF(p^n) was at least as hard as in a prime field GF(q) with log q = log p^n (i.e. of the same total size). This is no longer true: the Number Field Sieve algorithm now has better variants for such finite fields. To obtain the same security, these finite fields should be enlarged. The asymptotic complexity of the NFS algorithm is not tight enough to derive tight key-size intervals for a given security level. We propose to study the ingredient of NFS that determines the complexity: the size of the elements involved in the relation collection. Graphs of these sizes will be presented for 2 ≤ n ≤ 12, with quite interesting results in some cases; the norms were computed experimentally.
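As a hedged illustration of the embedding degree mentioned above (toy parameters, not from the talk): for a curve subgroup of prime order r over GF(p), the embedding degree is the smallest n ≥ 1 with r dividing p^n − 1, i.e. the multiplicative order of p modulo r, and the pairing's target group then lives in GF(p^n):

```python
# Minimal sketch (illustrative numbers, not from the talk): the embedding
# degree of a subgroup of prime order r on a curve over GF(p) is the
# smallest n >= 1 such that r divides p^n - 1, i.e. the order of p mod r.
# The target group of the pairing lives in GF(p^n).

def embedding_degree(p, r):
    """Multiplicative order of p modulo r (assumes gcd(p, r) == 1)."""
    n, acc = 1, p % r
    while acc != 1:
        acc = (acc * p) % r
        n += 1
    return n

# Toy example: r = 7 divides 19^6 - 1 but no smaller 19^n - 1.
print(embedding_degree(19, 7))  # -> 6
```

For pairing-friendly curves this n is small by construction (e.g. n = 12 for BN curves), which is precisely why the finite-field side GF(p^n), now weakened by the improved NFS variants, dictates the key-size updates.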