#### Faster elliptic and hyperelliptic curve cryptography

B. Smith contributes to the development of faster arithmetic on elliptic curves and genus 2 Jacobians. First, an extended and more detailed treatment of his ℚ-curve construction for endomorphism-accelerated elliptic curves (ASIACRYPT 2013, EUROCRYPT 2014) appeared in the Journal of Cryptology. Second, a simplified approach to essential precomputations was published in the proceedings of AGCT-14. Finally, with C. Costello and P.-N. Chung, he gave a new, efficient, uniform, and constant-time scalar multiplication algorithm for genus 2 Jacobians, exploiting fast Kummer surface arithmetic and the features of differential addition chains.
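The actual Kummer surface formulas are beyond a short sketch, but the structure of the differential addition chains involved can be illustrated by the Montgomery ladder, their simplest instance. The sketch below runs on plain integer addition (a stand-in for the real group law, which is an assumption of this illustration): every scalar bit costs exactly one addition and one doubling, which is the source of the uniform, constant-time behaviour, and the difference `r1 - r0` stays fixed, which is what permits x-only (Kummer-style) differential addition.

```python
def montgomery_ladder(k, x, add, dbl, zero):
    """Compute k*x with a Montgomery ladder, the simplest differential
    addition chain.  Each bit triggers exactly one add and one dbl
    (uniform execution), and the invariant r1 - r0 == x holds throughout."""
    r0, r1 = zero, x
    for bit in bin(k)[2:]:          # scan scalar bits, most significant first
        if bit == "1":
            r0, r1 = add(r0, r1), dbl(r1)
        else:
            r0, r1 = dbl(r0), add(r0, r1)
    return r0

# Toy instantiation: the additive group of integers.
print(montgomery_ladder(5, 7, lambda a, b: a + b, lambda a: 2 * a, 0))
```

On a real Kummer surface, `add` would be pseudo-addition (which needs the fixed difference `r1 - r0`) and `dbl` pseudo-doubling; the ladder structure is unchanged.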

#### Quantum factoring

Integer factorization via Shor’s algorithm is a benchmark problem for general quantum computers, but surprisingly little work has been done on optimizing the algorithm for use as a serious factoring tool once large quantum computers are built. Given the limited size of contemporary quantum computers and the practical difficulties involved in building them, any optimizations to quantum factoring algorithms can lead to significant practical improvements. In cooperation with physicists F. Grosshans and T. Lawson, F. Morain and B. Smith derived a simple new quantum factoring algorithm for cryptographic integers; its expected runtime is lower than that of Shor’s factoring algorithm, and it should also be easier to implement in practice.
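For context, the classical half of Shor's algorithm reduces factoring to order finding; only the order computation itself requires the quantum computer. The sketch below shows this standard reduction, with a brute-force classical order computation standing in for quantum period finding (it is not the new algorithm mentioned above):

```python
from math import gcd

def order(a, n):
    """Multiplicative order of a modulo n, by brute force.
    In Shor's algorithm this step is done by quantum period finding."""
    t, r = a % n, 1
    while t != 1:
        t = t * a % n
        r += 1
    return r

def shor_classical_part(n, a):
    """Try to extract a nontrivial factor of n from the order of a mod n.
    Returns a factor, or None if this choice of a fails."""
    g = gcd(a, n)
    if g != 1:
        return g                 # lucky: a already shares a factor with n
    r = order(a, n)
    if r % 2:
        return None              # odd order: pick another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None              # a^(r/2) = -1 mod n: pick another a
    f = gcd(y - 1, n)
    return f if 1 < f < n else None

print(shor_classical_part(15, 7))
```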

#### Cryptanalysis of code-based cryptosystems by filtration attacks

The McEliece encryption scheme based on binary Goppa codes was one of the first public-key encryption schemes. Its security rests on the difficulty of decoding an arbitrary code (message attack) or of recovering a hidden code (key recovery attack). A new style of key recovery attack allowed A. Couvreur, A. Otmani and J.-P. Tillich to break (in polynomial time) McEliece based on wild Goppa codes over quadratic extensions, and more recently to break the BBCRS cryptosystem. A. Couvreur, I. Márquez-Corbella, and R. Pellikaan broke McEliece based on algebraic geometry codes from curves of arbitrary genus, by reconstructing optimal polynomial-time decoding algorithms from the raw data of a generator matrix.

#### Quantum LDPC codes

Quantum codes are the analogues of error-correcting codes for a quantum computer. The well-known family of CSS codes (after Calderbank, Shor and Steane) can be represented by a pair of binary matrices (H_X, H_Z) such that H_X·H_Z^T = 0. As in classical coding theory, if these matrices are sparse, the code is said to be LDPC. An open problem in quantum coding theory is to obtain a family of quantum LDPC codes whose asymptotic minimum distance is in Ω(n^α) for some α > 1/2. No such family is known; indeed, only a few known families of quantum LDPC codes have a minimum distance tending to infinity. Benjamin Audoux (I2M, Marseille) and A. Couvreur study the behaviour of iterated tensor powers of CSS codes and prove in particular that such families always have a minimum distance tending to infinity. They also propose three families of LDPC codes whose minimum distance is in Ω(n^β) for all β < 1/2.
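The CSS condition H_X·H_Z^T = 0 over F_2 is easy to check in practice. As a small concrete instance, the parity-check matrix H of the [7,4] Hamming code satisfies H·H^T = 0 (mod 2), so the pair (H, H) defines a CSS code, namely the well-known Steane [[7,1,3]] code:

```python
import numpy as np

def is_css_pair(HX, HZ):
    """Check the CSS condition H_X * H_Z^T = 0 over F_2."""
    return not ((HX @ HZ.T) % 2).any()

# Parity-check matrix of the [7,4] Hamming code.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=int)

# (H, H) is a valid CSS pair: this is the Steane [[7,1,3]] code.
print(is_css_pair(H, H))
```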

#### Discrete Logarithm computations in finite fields with the NFS algorithm

The best discrete logarithm record computations in large-characteristic finite fields are obtained with the Number Field Sieve (NFS) algorithm. This algorithm is made of four steps: polynomial selection; relation collection; linear algebra (computing with a matrix of millions of rows and columns); and individual discrete logarithm computation. The two most time-consuming steps are relation collection and linear algebra. Polynomial selection is quite fast, but it determines the complexity of the whole rest of the algorithm. The final step, individual discrete logarithm computation, though thought to be quite fast, was shown by F. Morain and A. Guillevic to have a complexity that increases with the extension degree. A. Guillevic considerably reduced this complexity.

As a consequence, F. Morain and A. Guillevic, together with P. Gaudry and R. Barbulescu, announced a major discrete logarithm record in a quadratic finite field GF(p^2) of 180 decimal digits (dd), or 595 bits (EUROCRYPT 2015). The previous DL record in a prime field was held by the CARAMEL team of Nancy, set in 2014 in a 180 dd prime field. Surprisingly, the new record computation was much faster than the earlier DL computation in a prime field of the same global size of 180 dd, and even faster than the factorization of an RSA modulus of the same size.
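NFS itself is far too involved for a short sketch, but the problem these records address can be stated concretely: given g, h and p, find x with g^x = h (mod p). For small fields a generic baby-step/giant-step solver (shown below as an illustration; it is unrelated to NFS and runs in O(√p) time and memory, hopeless at 595-bit sizes) makes the task explicit:

```python
from math import isqrt

def bsgs(g, h, p):
    """Solve g^x = h (mod p) by baby-step/giant-step.
    Time and memory O(sqrt(p)): feasible only for toy field sizes."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j for j < m
    c = pow(g, -m, p)                            # g^(-m) mod p (Python >= 3.8)
    gamma = h % p
    for i in range(m):                           # giant steps: h * g^(-im)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * c % p
    return None

# Toy example: recover x = 37 in GF(101) with generator 2.
print(bsgs(2, pow(2, 37, 101), 101))
```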

#### Individual discrete logarithm computation

A big difference between prime fields and finite fields of small extension degree, such as GF(p^3), GF(p^4) and GF(p^6), is the complexity of the final step of the NFS algorithm: computing the individual discrete logarithm of the target, given the large table of discrete logarithms of *small* elements obtained at the end of the linear algebra step. The target needs to be decomposed into small enough elements whose discrete logarithms are in the table, so that the discrete logarithm of the target can be recomposed from them. This decomposition is quite fast for prime fields, but we realized that it becomes more and more time-consuming as the extension degree increases. A. Guillevic developed a new technique that considerably improves this step; the main idea is to exploit the structure of the finite field, namely its subfields. These improvements were presented at the ASIACRYPT 2015 conference in Auckland, New Zealand, and published in the proceedings.

#### Information sets of multiplicity codes

The codes we used in our PIR protocols, namely Reed-Muller codes and their generalization, multiplicity codes, are locally *correctable*: local decoding allows one to retrieve encoded symbols. In most applications, it is very desirable to retrieve *information* symbols instead. Another line of work on this topic was thus to find an encoding method for multiplicity codes that directly recovers an information symbol from local correction, rather than an encoded symbol. To do so, we defined information sets for multiplicity codes, and designed a systematic encoding based on such an information set. This work was presented at ISIT 2015 in Hong Kong.
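To make the distinction concrete, here is a minimal sketch of local *correction* for a bivariate Reed-Muller code (the polynomial, field size and degree below are illustrative choices, not parameters from the work above). A codeword symbol f(a) is recovered by querying d+1 other points on a random line through a and interpolating; without a systematic encoding, the recovered value is an encoded symbol, not directly an information symbol:

```python
import random

p, d = 13, 3   # toy field size and total degree, with d < p

def f(x, y):
    """The encoded data: evaluations of a fixed degree-3 polynomial."""
    return (x**3 + 2 * x * y + 5 * y + 1) % p

def lagrange_at_zero(pts):
    """Lagrange-interpolate points (t, v) over GF(p), evaluate at t = 0."""
    total = 0
    for i, (ti, vi) in enumerate(pts):
        num = den = 1
        for j, (tj, _) in enumerate(pts):
            if i != j:
                num = num * (-tj) % p
                den = den * (ti - tj) % p
        total = (total + vi * num * pow(den, p - 2, p)) % p
    return total

def local_correct(a):
    """Recover f(a) from d+1 queries on a random line t -> a + t*b.
    The restriction of f to the line has degree <= d in t."""
    ax, ay = a
    bx, by = random.randrange(1, p), random.randrange(1, p)
    ts = random.sample(range(1, p), d + 1)        # distinct nonzero t's
    pts = [(t, f((ax + t * bx) % p, (ay + t * by) % p)) for t in ts]
    return lagrange_at_zero(pts)                  # the line at t = 0 is a

print(local_correct((2, 5)) == f(2, 5))
```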

#### Weight distribution of Algebraic-Geometry codes

V. Ducet worked on the weight distribution of geometric codes, following a method initiated by Duursma. More precisely, he implemented the method in Magma and was able to compute the weight distribution of the geometric codes coming from two optimal curves of genus 2 and 3, over the finite fields of size 16 and 9 respectively. The aim is to compute the weight distribution of the Hermitian code over the finite field of size 16.

#### Rank metric codes over infinite fields

We have constructed optimal codes, similar to Gabidulin codes, over infinite fields. We use algebraic extensions, and we have determined the condition the extension must satisfy to enable this construction. For example, we can design codes with complex coefficients, using number fields and Galois automorphisms. In this setting, a channel introduces errors (a matrix of small rank is added to the codeword) and erasures (rows and columns of the matrix are erased). We have developed an algorithm to recover the original codeword in the presence of an error of small rank weight. As opposed to the finite field case, we are confronted with coefficient size growth; we solve this problem by computing modulo prime ideals.
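The rank-metric channel model is easy to state concretely. Codewords are matrices, and the distance between two matrices is the rank of their difference; a channel error of rank weight w is simply an added matrix of rank w. The small illustration below (over the reals, consistent with the infinite-field setting; the matrices are made-up examples) shows a rank-1 error leaving the received word at rank distance 1 from the codeword:

```python
import numpy as np

def rank_distance(A, B):
    """Rank distance between two matrices: the rank of their difference."""
    return int(np.linalg.matrix_rank(A - B))

C = np.array([[1., 2., 3.],
              [4., 5., 6.]])           # a codeword (toy example)
E = np.outer([1., 1.], [1., 0., 2.])   # rank-1 error added by the channel
R = C + E                              # received word

print(rank_distance(R, C))
```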

#### Hash function cryptanalysis

Cryptographic hash functions are versatile primitives that are used in many cryptographic protocols. A popular hash function is the SHA-1 algorithm. Although theoretical collision attacks against it were found in 2005, it is still being used in some applications (e.g. TLS certificates). In 2015, we improved the state of the art of SHA-1 analysis in two ways. T. Espitau, P.-A. Fouque and P. Karpman improved the previous preimage attacks on SHA-1, reaching up to 62 rounds (out of 80), up from 57 (CRYPTO 2015). P. Karpman, T. Peyrin and M. Stevens developed collision attacks on the compression function of SHA-1 (i.e. freestart collisions). The attack is practical, taking less than two weeks on a 64-GPU cluster (EUROCRYPT 2016).

#### Block cipher design and analysis

Block cipher analysis is a major research topic in cryptography. The community recently shifted its focus to the setting of *authenticated encryption*, where one specifies algorithms providing both encryption and authentication. During this year, we improved the state of the art in block cipher research. P. Karpman found a very efficient related-key attack on the CAESAR candidate Prøst-OTR. B. Minaud, P. Derbez, P.-A. Fouque and P. Karpman developed a family of attacks that breaks all the remaining unbroken instances of the ASASA construction (ASIACRYPT 2014). Using algebraic properties of the ciphers, the attacks recover a decryption algorithm in near-practical time. This applies to a multivariate public-key scheme, a classical block cipher, and small block ciphers used in white-box constructions (ASIACRYPT 2015, best paper award). P. Karpman also developed a compact 8-bit S-box with branch number three, which can be used to construct a lightweight block cipher on 8-bit microcontrollers.
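The S-box itself is not reproduced here, but the quantity it optimizes is easy to define and check exhaustively for small sizes: the differential branch number of an S-box S is the minimum, over distinct inputs x ≠ x', of wt(x ⊕ x') + wt(S(x) ⊕ S(x')). A sketch of this exhaustive check (on a trivial toy S-box, not Karpman's 8-bit design):

```python
def hamming_weight(x):
    return bin(x).count("1")

def branch_number(sbox, nbits):
    """Differential branch number: min over distinct x, x' of
    wt(x ^ x') + wt(S[x] ^ S[x'])."""
    n = 1 << nbits
    best = 2 * nbits
    for x in range(n):
        for xp in range(x + 1, n):
            w = hamming_weight(x ^ xp) + hamming_weight(sbox[x] ^ sbox[xp])
            best = min(best, w)
    return best

# The identity S-box achieves only the minimum possible value for a
# bijection, 2: any single-bit input difference passes through unchanged.
print(branch_number(list(range(16)), 4))
```

A higher branch number forces every differential (and, for the linear analogue, every linear) trail through the S-box to involve more active bits, which is what makes it a useful design criterion for lightweight ciphers.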