International Association for Cryptologic Research

CryptoDB

Thomas Debris-Alazard

Publications

2024 · EUROCRYPT · Reduction from sparse LPN to LPN, Dual Attack 3.0
The security of code-based cryptography relies primarily on the hardness of decoding generic linear codes. Until very recently, the best algorithms for solving the decoding problem were all information set decoders (ISD). However, a new algorithm called RLPN decoding, which relies on a completely different approach, was recently introduced, and RLPN has been shown to significantly outperform ISD decoders for a rather large range of rates. The RLPN decoder relies on two ingredients: first, reducing decoding to an underlying LPN problem, and second, efficiently computing many parity-checks of small weight when restricted to some positions. We revisit RLPN decoding by noticing that this algorithm in fact reduces decoding to a sparse-LPN problem, namely one whose secret has small Hamming weight. Our new approach makes an additional reduction from sparse-LPN to plain LPN with a coding approach inspired by coded-BKW. It significantly outperforms both ISD and RLPN for code rates smaller than 0.42. This algorithm can be viewed as the code-based cryptography cousin of recent dual attacks in lattice-based cryptography. We depart completely from the traditional analysis of this kind of algorithm, which uses a number of independence assumptions that have recently been strongly questioned in the lattice setting. We instead give a formula for the LPN noise relying on duality, which allows the behavior of the algorithm to be analyzed through the analysis of a certain weight distribution. Using only a minimal assumption, whose validity has been verified experimentally, we are able to justify the correctness of our algorithm. This key tool, the duality formula, can be readily adapted to the lattice setting and gives a simple explanation for some phenomena observed on dual attacks in lattices in [DP23].
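To make the first step concrete, here is a minimal toy sketch (ours, not the authors' code; all parameters are illustrative) of how small-weight parity-checks turn a decoding instance into LPN samples whose secret is the error restricted to a window P. The paper's contribution is the further reduction from this sparse-secret instance to plain LPN, which the toy does not attempt.

```python
import numpy as np

rng = np.random.default_rng(1)

n, k, w = 30, 15, 4    # toy [n, k] binary code with an error of weight w
s, t = 10, 6           # LPN secret window size, weight cap outside the window

H = rng.integers(0, 2, size=(n - k, n), dtype=np.int8)   # parity-check matrix

e = np.zeros(n, dtype=np.int8)                           # unknown sparse error
e[rng.choice(n, w, replace=False)] = 1

P, N = np.arange(s), np.arange(s, n)

# Enumerate the dual code (feasible only at toy size; the papers compute
# low-weight parity-checks cleverly) and keep checks h of weight <= t on N.
# For any codeword c, <h, y> = <h, e> = <h_P, e_P> + <h_N, e_N>: an LPN
# sample with secret e_P and a noise term <h_N, e_N> biased towards 0.
samples = []
for bits in range(1, 2 ** (n - k)):
    coeffs = np.array([(bits >> i) & 1 for i in range(n - k)], dtype=np.int8)
    h = coeffs @ H % 2
    if h[N].sum() <= t:
        a = h[P]                       # LPN vector
        b = int(h @ e % 2)             # LPN label <h, y>
        samples.append((a, b, int(h[N] @ e[N] % 2)))

noise = np.array([z for _, _, z in samples])
print(f"{len(samples)} LPN samples, empirical noise bias {1 - 2 * noise.mean():+.3f}")
```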
2023 · ASIACRYPT · Pseudorandomness of Decoding, Revisited: Adapting OHCP to Code-Based Cryptography
Maxime Bombar, Alain Couvreur, Thomas Debris-Alazard
Recent code-based cryptosystems rely, among other things, on the hardness of the decisional decoding problem. While the search version is well understood, both from practical and theoretical standpoints, the decision version has been less studied in the literature, and little is known about its relationship with the search version, especially for structured variants. In the world of Euclidean lattices, on the other hand, the situation is rather different, and many reductions exist, both for unstructured and structured versions of the underlying problems. For the structured versions, a powerful and very general tool called the OHCP framework (for Oracle with Hidden Center Problem) was introduced by Peikert et al. (STOC 2017) and has proved very useful as a black box inside reductions. In this work, we revisit this technique and extract its very essence, namely the Oracle Comparison Problem (OCP), to show how to recover the support of the error, solving an Oracle with Hidden Support Problem (OHSP) that is more suitable for code-based cryptography. This yields a new worst-case to average-case search-to-decision reduction for the decoding problem, as well as a new average-case to average-case reduction. We then turn to the structured versions and explain why this is not as straightforward as for Euclidean lattices. Although we do not obtain a search-to-decision reduction for structured codes, we believe that our work opens the way towards new reductions for structured codes, given how powerful the OHCP framework has proved in lattice-based cryptography. We also believe that this technique could be extended to codes endowed with other metrics, such as the rank metric, for which no reduction is known.
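As a rough illustration of the comparison primitive the abstract distills out of OHCP, the following toy (entirely ours; the oracle's exponential acceptance law and the one-dimensional search are stand-ins, not the paper's reduction) recovers a hidden center purely by comparing an oracle's behavior at pairs of query points:

```python
import math, random

random.seed(0)
c = 0.37                                   # hidden center

def oracle(x, trials=20000):
    """Accepts with probability exp(-|x - c|); we only ever compare rates."""
    p = math.exp(-abs(x - c))
    return sum(random.random() < p for _ in range(trials)) / trials

def closer_to_center(x0, x1):
    """Oracle Comparison: which of x0, x1 is nearer the hidden center?"""
    return x0 if oracle(x0) > oracle(x1) else x1

# Ternary search driven only by comparisons: the acceptance probability is
# unimodal in the distance to c, so comparing two interior points tells us
# which third of the interval cannot contain the center.
lo, hi = 0.0, 1.0
for _ in range(10):
    m0, m1 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if closer_to_center(m0, m1) == m0:
        hi = m1
    else:
        lo = m0
print(f"recovered center ~ {(lo + hi) / 2:.3f} (true value {c})")
```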
2022 · CRYPTO · On Codes and Learning with Errors over Function Fields
Maxime Bombar, Alain Couvreur, Thomas Debris-Alazard
It is a long-standing open problem to find search-to-decision reductions for structured versions of the decoding problem of linear codes. Such results in the lattice-based setting have been carried out using number fields: Polynomial-LWE, Ring-LWE, Module-LWE, and so on. We propose a function field version of the LWE problem. This new framework leads to another point of view on structured codes, e.g. quasi-cyclic codes, strengthening the connection between lattice-based and code-based cryptography. In particular, we obtain the first search-to-decision reduction for structured codes. Following the historical constructions in lattice-based cryptography, we instantiate our construction with function field analogues of cyclotomic fields, namely Carlitz extensions, leading to search-to-decision reductions for various versions of Ring-LPN, which have applications to secure multiparty computation and to an authentication protocol.
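The structured objects in question can be pictured through the classical quasi-cyclic case, where Ring-LPN lives in F_2[x]/(x^n - 1). The sketch below (ours; it shows only the familiar cyclic instantiation, not the paper's function field framework or Carlitz extensions) illustrates how one ring sample packs n correlated scalar LPN equations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                    # ring R = F_2[x] / (x^n - 1)

def ring_mul(a, b):
    """Multiplication in R: cyclic convolution of coefficient vectors mod 2."""
    c = np.zeros(n, dtype=np.int64)
    for i in range(n):
        if a[i]:
            c += np.roll(b, i)
    return (c % 2).astype(np.int8)

secret = rng.integers(0, 2, n, dtype=np.int8)

def ring_lpn_sample(noise_rate=0.05):
    """A Ring-LPN sample (a, a*secret + e) with sparse noise e."""
    a = rng.integers(0, 2, n, dtype=np.int8)
    e = (rng.random(n) < noise_rate).astype(np.int8)
    return a, (ring_mul(a, secret) + e) % 2

a, b = ring_lpn_sample()
# Row i of the circulant matrix of a gives <x^i * a, secret> = b[i] (+ noise):
# the quasi-cyclic structure is exactly this module view over R.
print(a[:8], b[:8])
```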
2022 · ASIACRYPT · Statistical Decoding 2.0: Reducing Decoding to LPN
The security of code-based cryptography relies primarily on the hardness of generic decoding of linear codes. The best generic decoding algorithms are all improvements of an old algorithm due to Prange and are known as information set decoders (ISD). A while ago, a generic decoding algorithm that does not belong to this family was proposed: statistical decoding. It is a randomized algorithm that requires computing a large set of parity-checks of moderate weight and uses some kind of majority voting on these equations to recover the error. This algorithm was long forgotten because even its best variants performed poorly compared to the simplest ISD algorithm. We revisit this old algorithm by using parity-check equations in a more general way. Here the parity-checks are used to obtain LPN samples whose secret is part of the error, and the LPN noise is related to the weight of the parity-checks we produce. The corresponding LPN problem is then solved by standard Fourier techniques. By properly choosing the method of producing these low-weight equations and the size of the LPN problem, we are able to significantly outperform information set decoding at code rates smaller than 0.3. For the first time in 60 years, this gives a better decoding algorithm, over a significant range of rates, that does not belong to the ISD family.
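The "standard Fourier techniques" step admits a compact illustration: with LPN samples in hand (simulated directly below rather than produced from parity-checks), a fast Walsh-Hadamard transform performs the majority vote over all candidate secrets at once. A minimal sketch, ours and with toy parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
s, m, tau = 12, 200_000, 0.40             # secret length, samples, noise rate

secret = rng.integers(0, 2, s, dtype=np.int8)

# LPN samples (a, <a, secret> + Bernoulli(tau)); in the paper these come
# from low-weight parity-checks, here they are simulated directly.
A = rng.integers(0, 2, size=(m, s), dtype=np.int8)
b = (A @ secret + (rng.random(m) < tau)) % 2

# Bucket samples by vector a: f[a] accumulates (-1)^b.
f = np.zeros(1 << s)
np.add.at(f, A @ (1 << np.arange(s)), 1.0 - 2.0 * b)

# In-place fast Walsh-Hadamard transform: afterwards
# f[x] = sum_i (-1)^{b_i + <a_i, x>}, which peaks at x = secret:
# Fourier-domain majority voting over all 2^s candidates at once.
h = 1
while h < len(f):
    for i in range(0, len(f), 2 * h):
        u, v = f[i:i + h].copy(), f[i + h:i + 2 * h].copy()
        f[i:i + h], f[i + h:i + 2 * h] = u + v, u - v
    h *= 2

guess = int(np.argmax(f))
print("recovered:", [(guess >> i) & 1 for i in range(s)])
print("secret:   ", secret.tolist())
```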
2020 · PKC · Tight and Optimal Reductions for Signatures Based on Average Trapdoor Preimage Sampleable Functions and Applications to Code-Based Signatures
André Chailloux, Thomas Debris-Alazard
The GPV construction [GPV08] presents a generic construction of signature schemes in the hash-and-sign paradigm and is used in some lattice-based signatures. This construction requires a family $\mathcal{F}$ of trapdoor preimage sampleable functions (TPSF). In this work we extend this notion to the weaker Average TPSF (ATPSF) and show that the GPV construction also holds for ATPSF in the Random Oracle Model (ROM). We also introduce the problem of finding a claw with a random function (Claw(RF)) and present a tight security reduction to the Claw(RF) problem. Our reduction is also optimal, meaning that an algorithm that solves the Claw(RF) problem breaks the scheme. We extend these results to the quantum setting and prove the same tight and optimal reduction in the QROM. Finally, we apply these results to code-based signatures, notably the Wave signature scheme, and prove its security in the ROM and the QROM, improving and extending the original analysis of [DST19a].
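The GPV paradigm the abstract extends is mechanically simple: hash into the range, use the trapdoor to sample a preimage, publish it. Below is a minimal sketch of that flow (ours; the random-permutation stand-in for the (A)TPSF has no hardness whatsoever, and a real ATPSF only guarantees the preimage distribution on average over the target):

```python
import hashlib, os, random

random.seed(0)
D = 1 << 16                               # toy domain/range

perm = list(range(D)); random.shuffle(perm)   # "public" one-way stand-in f
inv = [0] * D                                 # trapdoor: the inverse table
for x, y in enumerate(perm):
    inv[y] = x

def H(msg: bytes, salt: bytes) -> int:
    return int.from_bytes(hashlib.sha256(salt + msg).digest(), "big") % D

def sign(msg: bytes):
    salt = os.urandom(16)                 # fresh salt, as in GPV
    return salt, inv[H(msg, salt)]        # trapdoor preimage of the digest

def verify(msg: bytes, sig) -> bool:
    salt, x = sig
    return perm[x] == H(msg, salt)

sig = sign(b"hello")
print(verify(b"hello", sig), verify(b"bye", sig))   # True False
```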
2019 · ASIACRYPT · Wave: A New Family of Trapdoor One-Way Preimage Sampleable Functions Based on Codes
Thomas Debris-Alazard, Nicolas Sendrier, Jean-Pierre Tillich
We present a new family of trapdoor one-way functions that are Preimage Sampleable on Average (PSA) and based on codes: the Wave-PSA family. The trapdoor function is one-way under two computational assumptions: the hardness of generic decoding for high weights and the indistinguishability of generalized $(U, U+V)$-codes. Our proof follows the GPV strategy [28]. By including rejection sampling, we ensure the proper distribution of the trapdoor inverse output. The domain sampling property of our family is ensured by using and proving a variant of the leftover hash lemma. We instantiate the new Wave-PSA family with ternary generalized $(U, U+V)$-codes to design a "hash-and-sign" signature scheme that achieves existential unforgeability under adaptive chosen message attacks (EUF-CMA) in the random oracle model.
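The rejection-sampling step mentioned in the abstract can be pictured generically: the raw trapdoor inverter produces preimages under some distribution q that would leak the hidden $(U, U+V)$ structure, and resampling with probability p(x)/(M q(x)) makes the visible outputs follow the public target distribution p. The densities below are stand-ins of our own, purely for illustration:

```python
import math, random

random.seed(0)

def q_density(x):                         # raw (leaky) inverter distribution
    return math.exp(-(x - 0.3) ** 2 / 2) / math.sqrt(2 * math.pi)

def p_density(x):                         # public target distribution
    return math.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)

M = math.exp(0.3 ** 2 / 2 + 0.3 * 4)      # p <= M * q holds on [-4, 4]

def sample_target():
    """Draw from q but reveal only accepted values: these follow p."""
    while True:
        x = random.gauss(0.3, 1.0)        # one raw inverter output
        if random.random() < min(1.0, p_density(x) / (M * q_density(x))):
            return x

xs = [sample_target() for _ in range(50_000)]
print(f"empirical mean {sum(xs) / len(xs):+.3f} (target 0, raw inverter +0.3)")
```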
2018 · ASIACRYPT · Two Attacks on Rank Metric Code-Based Schemes: RankSign and an IBE Scheme
Thomas Debris-Alazard, Jean-Pierre Tillich
RankSign [30] is a code-based signature scheme submitted to the NIST competition for quantum-safe cryptography [5] and is, moreover, a fundamental building block of a new Identity-Based Encryption (IBE) scheme [26]. This signature scheme is based on the rank metric and enjoys remarkably small key sizes, about 10 kilobytes for an intended security level of 128 bits. Unfortunately, we show that all the parameters proposed for this scheme in [5] can be broken by an algebraic attack exploiting the fact that the augmented LRPC codes used in the scheme have very low weight codewords. Therefore, without RankSign, the IBE cannot be instantiated at this time. As a second contribution, we show that the problem is deeper than finding a new signature scheme in rank-based cryptography: we also found an attack on the generic problem upon which its security reduction relies. However, contrary to the RankSign scheme, it seems that the parameters of the IBE scheme could be chosen so as to avoid our attack. Finally, we show that if one replaces the rank metric in the IBE scheme of [26] with the Hamming metric, a devastating attack can be found.