IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
19 July 2025
Felix Carvalho Rodrigues, Décio Gazzoni Filho, Gora Adj, Isaac A. Canales-Martínez, Jorge Chávez-Saab, Julio López, Michael Scott, Francisco Rodríguez-Henríquez
Finite field arithmetic is central to several cryptographic algorithms on embedded devices like the ARM Cortex-M4, particularly for elliptic curve and isogeny-based cryptography. However, rapid algorithm evolution, driven by initiatives such as NIST’s post-quantum standardization, can quickly render hand-optimized implementations obsolete.
We address this challenge with m4-modarith, a library generating C code with inline assembly for the Cortex-M4 that rivals custom-tuned assembly,
enabling agile development in this ever-changing landscape.
Our generated modular multiplication routines achieve performance competitive with hand-optimized assembly implementations published in the literature, even outperforming some of them for Curve25519.
Two contributions are pivotal to this success.
First, we introduce a novel multiplication strategy that matches the memory access complexity of the operand caching method while being applicable to a larger cache size for Cortex-M4 implementations. Second, we generalize an efficient pseudo-Mersenne reduction strategy, and formally prove its correctness and applicability for most primes of cryptographic interest.
Our generator allowed agile optimization of SQIsign’s NIST PQC Round 2 submission, improving level 1 verification from 123 Mcycles to only 54 Mcycles, a $2.3\times$ speedup.
As an additional case study, we use our generator to improve performance of portable implementations of RFC~7748 by up to $2.2\times$.
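The speedups above come largely from the reduction step. The paper's generalized pseudo-Mersenne reduction and Cortex-M4 code generation are its contributions, but the underlying idea is easy to sketch for Curve25519's prime: since $2^{255} \equiv 19 \pmod{p}$, the high half of a product folds back in with a multiply by 19. A minimal Python sketch of this standard trick (not the paper's generator output, which emits C with inline assembly):

```python
P = 2**255 - 19

def reduce_p25519(x):
    """Pseudo-Mersenne reduction for p = 2^255 - 19.

    Uses 2^255 = 19 (mod p) to fold the high half of x back into
    the low half; each pass shrinks x until it fits in 255 bits.
    """
    while x >= 2**255:
        x = (x >> 255) * 19 + (x & (2**255 - 1))
    # x < 2^255 = p + 19, so at most one final subtraction remains
    return x - P if x >= P else x
```

Each folding step preserves the value modulo $p$, so the result equals the full product reduced mod $p$ without any general-purpose division.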
Thi Van Thao Doan, Olivier Pereira, Thomas Peters
Proving the validity of ballots is a central element of verifiable elections. Such proofs can however create challenges when one desires to make a protocol receipt-free.
We explore the challenges raised by validity proofs in the context of protocols where threshold receipt-freeness is obtained by secret sharing an encryption of a vote between multiple authorities.
In such contexts, previous solutions verified the validity of votes by decrypting them after passing them through a mix-net. This approach however creates subtle privacy risks, especially when invalid votes leak structural patterns that threaten receipt-freeness.
We propose a different approach to threshold receipt-free voting in which authorities re-randomize ballot shares and then jointly compute a ZK proof of ballot validity before letting the ballots enter a (possibly homomorphic) tallying phase. Our approach keeps voters' computational costs limited while offering verifiability and improving the ballot privacy of previous solutions.
We present two protocols that enable a group of servers to verify and publicly prove that encrypted votes satisfy some validity properties: Minimix, which preserves prior voter-side behavior with minimal overhead, and Homorand, which requires voters to submit auxiliary data to facilitate validation over large vote domains. We show how to use our two protocols within a threshold receipt-free voting framework. We provide formal security proofs and efficiency analyses to illustrate trade-offs in our designs.
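The share re-randomization step builds on a standard property of ElGamal-style encryption: multiplying a ciphertext by a fresh encryption of the identity yields an unlinkable ciphertext of the same vote. A toy Python sketch with illustrative small parameters (not the paper's protocols or parameter choices):

```python
import random

# Toy group: order-q subgroup of Z_p^*, with p = 2q + 1 a safe prime.
# Real systems use cryptographically sized groups or elliptic curves.
p, q, g = 2039, 1019, 4

x = random.randrange(1, q)      # secret key
h = pow(g, x, p)                # public key

def enc(m):
    r = random.randrange(1, q)
    return (pow(g, r, p), m * pow(h, r, p) % p)

def rerandomize(c):
    # Multiply by a fresh encryption of 1: same plaintext, fresh randomness.
    c1, c2 = c
    t = random.randrange(1, q)
    return (c1 * pow(g, t, p) % p, c2 * pow(h, t, p) % p)

def dec(c):
    c1, c2 = c
    # c1^{-x} = c1^{q-x}, since c1 lies in the order-q subgroup
    return c2 * pow(c1, q - x, p) % p
```

Decryption gives the same plaintext before and after re-randomization, while the two ciphertexts are unlinkable without knowing the re-randomization exponent.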
Dilara Toprakhisar, Svetla Nikova, Ventzislav Nikov
Physical attacks pose a major challenge to the secure implementation of cryptographic algorithms. Although significant progress has been made in countering passive attacks such as side-channel analysis (SCA), protection against fault attacks is still less developed. One reason for this is the broader and more complex nature of fault attacks, which makes it difficult to create standardized fault evaluation methodologies for countermeasures like those used for SCA. This makes it easier to overlook potential vulnerabilities that attackers could exploit. RS-Mask, published at HOST 2020, is one such countermeasure affected by the absence of a systematic analysis method. The fundamental concept behind the countermeasure is to maintain a uniform distribution of variables, regardless of whether they are faulty or correct. This property is particularly effective against Statistical Ineffective Fault Attacks (SIFA), which exploit the dependency between fault propagation and the secret data.
In this work, we present several fault scenarios involving single fault injections on the AES implementation protected with RS-Mask, where the fault propagation depends on the secret data. This happens because the random space mapping used in the RS-Mask countermeasure retains a dependency on the secret data, as it is derived from the S-box input. To address this, we propose a new countermeasure based on the core concept of RS-Mask, implementing a single mapping for all S-box inputs and involving an intrinsic duplication. Next, we evaluate the effectiveness of the new countermeasure against fault attacks by comparing the fault detection rate across all possible fault locations and values for every input. Additionally, we examine the differences between faulty and correct outputs for each input. Our results show that the detection rate is uniform for each input, which ensures security against statistical attacks utilizing both effective and ineffective faults. Moreover, the output differences being uniform for each input ensures security against differential fault attacks.
Edward Chen, Fraser Brown, Wenting Zheng
Homomorphic encryption (HE) offers strong privacy guarantees by enabling computation over encrypted data. However, the performance of tensor operations in HE is highly sensitive to how the plaintext data is packed into ciphertexts. Large tensor programs introduce numerous possible layout assignments, making it both challenging and tedious for users to manually write efficient HE programs.
In this paper, we present Rotom, a compilation framework that autovectorizes tensor programs into optimized HE programs. Rotom systematically explores a wide range of layout assignments, applies state-of-the-art optimizations, and automatically finds an equivalent, efficient HE program. At its core, Rotom utilizes a novel, lightweight ApplyRoll layout conversion operator to easily modify the underlying data layouts and unlock new avenues for performance gains. Our evaluation demonstrates that Rotom scalably compiles all benchmarks in under 5 minutes, reduces rotations in manually optimized protocols by up to 4×, and achieves up to 80× performance improvement over prior systems.
Yuval Efron, Ling Ren
The synchrony model allows Byzantine Agreement (BA) protocols to be deterministic, tolerate minority faults, and achieve the asymptotically optimal $O(n)$ rounds and $O(n^2)$ bits of communication, where $n$ is the number of parties.
We study the deterministic BA problem in a model in which every communication link is either synchronous or partially synchronous. Our main result for this model is that feasibility implies optimality: For every $\frac{n}{3}\leq f<\frac{n}{2}$, the minimal network conditions required for BA to be solvable against $f$ byzantine faults, are also sufficient for it to be solvable optimally, i.e., with $O(f)$ rounds and $O(f^2)$ communication. In particular, BA against minority byzantine faults can be solved when the synchronous links in the network form a mere path ($f$ synchronous links) as efficiently (up to constant factors) as when all communication links are synchronous ($\Omega(f^2)$ synchronous links).
Shokofeh VahidianSadegh, Alberto Ibarrondo, Lena Wiese
High-throughput technologies (e.g., the microarray) have fostered the rapid growth of gene expression data collection. These biomedical datasets, increasingly distributed among research institutes and hospitals, fuel various machine learning applications such as anomaly detection, prediction or clustering. In particular, unsupervised classification techniques based on biclustering like the Cheng and Church Algorithm (CCA) have proven to adapt particularly well to gene expression data. However, biomedical data is highly sensitive, hence its sharing across multiple entities introduces privacy and security concerns, with an ever-present threat of accidental disclosure or leakage of private patient information. To address this threat, this work introduces a novel, highly efficient privacy-preserving protocol based on secure multiparty computation (MPC) between two servers to compute CCA. Our protocol performs operations relying on additive secret sharing and function secret sharing, leading us to reformulate the steps of the CCA into MPC-friendly equivalents. Leveraging lightweight cryptographic primitives, our new technique named FunBic-CCA is the first to exploit the efficiency of function secret sharing to achieve fast evaluation of the CCA biclustering algorithm.
Julien Béguinot, Olivier Rioul, Loïc Masure, François-Xavier Standaert, Wei Cheng, Sylvain Guilley
Evaluating the security of a device against side-channel attacks is a difficult task. One prominent strategy for this purpose is to characterize the distribution of the rank of the correct key among the different key hypotheses produced by a maximum likelihood attack, depending on the number of measured traces. In practice, evaluators can estimate some statistics of the rank that are used as security indicators---e.g., the arithmetic and geometric mean rank, the median rank, the $\alpha$-marginal guesswork, or the success rate of level $L$. Yet, a direct estimation becomes time-consuming as security levels increase.
In this work, we provide new bounds on these figures of merit in terms of the mutual information between the secret and its side-channel leakages. These bounds provide theoretical insights on the evolution of the figures of merit in terms of noise level, computational complexity (how many keys are evaluated) and data complexity (how many side-channel traces are used for the attack). To the best of our knowledge, these bounds are the first to formally characterize security guarantees that depend on the computational power of the adversary, based on a measure of their informational leakages. It follows that our results enable fast shortcut formulas for the certification laboratories, potentially enabling them to speed up the security evaluation process. We demonstrate the tightness of our bounds on both synthetic traces (in a controlled environment) and real-world traces from two popular datasets (Aisylab/AES\_HD and SMAesH).
Yuntian Chen, Zhanyong Tang, Tianpei Lu, Bingsheng Zhang, Zhiying Shi, Zhiyuan Ning
Privacy-preserving machine learning (PPML) is critical for protecting sensitive data in domains like healthcare, finance, and recommendation systems. Fully Homomorphic Encryption (FHE) and Secure Multi-Party Computation (MPC) are key enablers of secure computation, yet existing hybrid approaches often suffer from fixed protocol assignments, resulting in inefficiencies across diverse network environments, such as LANs and WANs. To address this, we introduce CostSphere, a cost-model-driven framework that dynamically assigns FHE and MPC protocols to optimize computational efficiency under varying network conditions. Utilizing a predictive cost model based on MLIR’s TOSA-level dialect and an ILP-based solver, CostSphere ensures robust performance for Transformer-based models. Experimental results demonstrate that CostSphere delivers $6.68\times$ to $12.92\times$ improvements in inference runtime compared to state-of-the-art solutions like BumbleBee (NDSS ’25), enabling scalable and network-agnostic PPML across diverse computational scenarios.
Jianhua Wang, Tao Huang, Guang Zeng, Tianyou Ding, Shuang Wu, Siwei Sun
We introduce a concrete instance of the LRW+ paradigm: the Three-Hash Framework (THF), a mode for constructing tweakable block ciphers that employs three hash functions to process tweak inputs. Through a general practical cryptanalysis of THF in both single-tweak and multiple-tweak settings, and by comparing it with two representative modes, 4-LRW1 and 2-LRW2, we demonstrate that THF has the potential to achieve lower latency due to its more balanced resistance to both single- and multiple-tweak attacks.
Based on this framework, we design Blink, a family of tweakable block ciphers specifically optimized for minimal latency. The hash functions in Blink were carefully chosen to align with the low-latency requirements, ensuring both efficiency and strong security. A thorough cryptanalysis of Blink is provided, with an emphasis on its resistance to multiple-tweak attacks.
Finally, hardware evaluations highlight the exceptional low-latency performance of Blink, distinguishing it as one of the most efficient tweakable block ciphers.
Shuaishuai Li, Anyu Wang, Cong Zhang, Xiaoyun Wang
The field of private information retrieval (PIR) has made significant strides with a recent focus on protocols that offer sublinear online time, ensuring efficient access to public databases without compromising the privacy of the queries. The pioneering two-server PIR protocols developed by Corrigan-Gibbs and Kogan (EUROCRYPT 2020) enjoy the dual benefits of sublinear online time and statistical security. This allows their protocols to provide high efficiency and resist computationally unbounded adversaries. In this work, we extend this seminal work to the symmetric PIR (SPIR) context, where the protocol must ensure that the client is privy only to the requested database entries, with no knowledge of the remaining data. This enhancement aligns with scenarios where the confidentiality of non-requested information is as critical as the query itself. Our main result is the introduction of the first two-server SPIR protocols that achieve both sublinear online time and statistical security, together with an enhancement for achieving sublinear amortized time. Our protocols require a pragmatic level of shared randomness between the servers, which however is necessary for implementing statistical security in two-server SPIR, as shown by Gertner et al. (STOC 1998).
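For intuition on why statistical security is natural with two servers, recall the classic scheme of Chor et al.: the client sends each server a random index set, the two sets differing exactly at the queried position, and XORs the two one-bit answers. Each server alone sees a uniformly random set, so privacy is information-theoretic. A minimal Python sketch (note this is plain PIR, not SPIR: the individual answers also reveal parities of other entries, exactly the leakage the symmetric variant must prevent):

```python
import random

def query(n, i):
    # Two selection vectors, uniformly random individually,
    # differing only at index i.
    s1 = [random.randrange(2) for _ in range(n)]
    s2 = s1[:]
    s2[i] ^= 1
    return s1, s2

def answer(db, sel):
    # Each server XORs the database bits its selection vector picks out.
    a = 0
    for bit, s in zip(db, sel):
        a ^= bit & s
    return a

def retrieve(db, i):
    # XOR of the two answers cancels everything except db[i].
    s1, s2 = query(len(db), i)
    return answer(db, s1) ^ answer(db, s2)
```

The two answers differ only in whether bit $i$ was included, so their XOR is exactly `db[i]` regardless of the random choices.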
Gökçe Düzyol, Muhammed Said Gündoğan, Atakan Arslan
FrodoKEM is a post-quantum key encapsulation mechanism based on plain Learning With Errors (LWE). In contrast to module-lattice-based schemes, it relies on an unstructured variant of the LWE problem, providing more conservative and better-understood security guarantees. As a result, FrodoKEM has been recommended by European cybersecurity agencies such as BSI and ANSSI, and has also been proposed in international standardization efforts, including ISO and the IETF Internet-Draft process.
In this paper, we explore hardware-level parallelization techniques for FrodoKEM. To date, the only notable attempt to parallelize FrodoKEM in hardware was made by Howe et al. in 2021. In their work, the SHAKE function was identified as a performance bottleneck and replaced by the Trivium stream cipher. However, this replacement renders the implementation incompatible with standardized recommendations. In contrast, our work adheres strictly to the original FrodoKEM specification, including its use of SHAKE as the PRNG, and introduces a scalable architecture enabling high-throughput parallel execution.
For FrodoKEM-640, we present parallel architectures for key generation, encapsulation, and decapsulation. Our implementation achieves between 976 and 1077 operations per second, making it the fastest FrodoKEM hardware implementation reported to date. Furthermore, we propose a general architecture that offers a scalable area-throughput trade-off: by increasing the number of DSPs and proportionally scaling BRAM usage, our design can be scaled to achieve significantly higher performance beyond the reported implementation. This demonstrates that SHAKE is not inherently a barrier to parallel matrix multiplication, and that efficient, standard-compliant FrodoKEM implementations are achievable for high-speed cryptographic applications.
Dimitri Koshelev, Youssef El Housni, Georgios Fotiadis
A major challenge in elliptic curve cryptosystems consists in mitigating efficiently the small-subgroup attack. This paper explores batch subgroup membership testing (SMT) on pairing-friendly curves, particularly for the Barreto--Lynn--Scott family of embedding degree 12 (BLS12) due to its critical role in modern pairing-based cryptography. Our research introduces a novel two-step procedure for batch SMT to rapidly verify multiple points at once, cleverly combining the already existing tests based on the Tate pairing and a non-trivial curve endomorphism. We clarify why the invented technique is significantly faster (despite a negligible error probability) than testing each point individually. Moreover, it is applicable to prominent curves like BLS12-381 and BLS12-377, which are frequently employed in zero-knowledge applications. Nonetheless, to further enhance the speed (or reduce the error probability) of the proposed batch point validation, we have generated two new BLS12 curves that are specifically optimized for this purpose. We also provide an open-source high-speed software implementation in Go, explicitly showcasing the significant performance improvements achieved by our work.
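The pairing-based and endomorphism-based tests are the paper's machinery, but the batching principle itself is simple: check a random linear combination of the input points instead of each point separately, accepting a small error probability. A toy Python sketch in the additive group $\mathbb{Z}_{hr}$ as a stand-in for a pairing-friendly curve (parameters and the membership check are illustrative, not the paper's):

```python
import random

# Stand-in for the curve group: Z_n with n = h * r (cofactor h, prime r).
# The order-r subgroup is the set of multiples of h, and membership is
# the usual check r * P == 0.
h, r = 8, 1009
n = h * r

def in_subgroup(x):
    return (r * x) % n == 0

def batch_in_subgroup(points, reps=16):
    # One combined check per repetition instead of one check per point.
    # A point outside the subgroup survives a round only if its cofactor
    # component is annihilated by the random coefficient.
    for _ in range(reps):
        acc = sum(random.randrange(1, h) * x for x in points) % n
        if not in_subgroup(acc):
            return False
    return True
```

For $k$ points, the batch test replaces $k$ subgroup checks by a few cheap combinations plus one check per repetition, which is the source of the speedup the paper quantifies for the actual pairing-based test.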
El Hadji Mamadou DIA, Walid ARABI, Anis BKAKRIA, Reda YAICH
Decision trees are extensively employed in artificial intelligence and machine learning due to their interpretability, efficiency, and robustness, qualities that are particularly valued in sensitive domains such as healthcare, finance, and cybersecurity. In response to evolving data privacy regulations, there is an increasing demand for models that ensure data confidentiality during both training and inference. Homomorphic encryption emerges as a promising solution by enabling computations directly on encrypted data without exposing plaintext inputs. This survey provides a comprehensive review of privacy-preserving decision tree protocols leveraging homomorphic encryption. After introducing fundamental concepts and the adopted methodology, a dual-layer taxonomy is presented, encompassing system and data characteristics as well as employed processing techniques. This taxonomy facilitates the classification and comparison of existing protocols, evaluating their effectiveness in addressing key challenges related to privacy, efficiency, usability, and deployment. Finally, current limitations, emerging trends, and future research directions are discussed to enhance the security and practicality of homomorphic encryption frameworks for decision trees in privacy-sensitive applications.
Sengim Karayalcin, Marina Krcek, Stjepan Picek
Deep learning-based side-channel analysis (DLSCA) represents a powerful paradigm for running side-channel attacks. State-of-the-art DLSCA can break multiple targets with only a single attack trace, requiring minimal feature engineering. As such, DLSCA also represents an extremely active research domain for both industry and academia. At the same time, this very activity makes it more difficult to understand what the current trends and challenges are.
In this systematization of knowledge, we provide a critical outlook on a number of developments in DLSCA in the last year, allowing us to offer concrete suggestions. Moreover, we examine the reproducibility perspective, finding that many works still struggle to provide results that can be used by the community.
Elie Eid, Aurélien Greuet, Nathan Reboud, Rina Zeitoun
FrodoKEM is a conservative lattice-based KEM based on the Learning With Errors problem. While it was not selected for NIST standardization, it remains a strong candidate for high-security applications and is recommended by several national agencies, including BSI, ANSSI, and the EUCC. Its reliance on CDT-based Gaussian sampling presents a significant challenge for side-channel secure implementations. While recent work by Gérard and Guerreau [GG25] has shown that masking FrodoKEM is feasible, the Gaussian sampler remains a major bottleneck, accounting for between 34% and 65% of the execution time. In this work, we introduce a new high-order masking gadget for CDT sampling, provably secure in the ISW probing model and significantly more efficient than previous approaches. We instantiate and evaluate our design in the context of FrodoKEM, with a complete first-order implementation on Cortex-M3, reflecting the most relevant threat model in practice.
Compared with [GG25] at first order, the cost of the sampler is reduced by at least 82% and the number of random generations by at least 69%.
Higher-order security is also fully supported through a generic C implementation, with some selected gadgets hand-optimized in assembly to improve efficiency.
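The masking gadget is the paper's contribution; the underlying unmasked operation is ordinary CDT sampling: draw a uniform value, compare it against a cumulative table, and take the number of entries it exceeds as the sampled magnitude. A Python sketch with a made-up illustrative table (FrodoKEM's actual tables are in its specification, and a side-channel-secure version must make the comparisons constant-time and masked):

```python
import random

# Illustrative cumulative table over 15-bit probabilities for magnitudes
# 0..4 (values are invented for this sketch; real schemes derive them
# from the target error distribution). The last entry is 2^15 - 1.
CDT = [9000, 20000, 27000, 31500, 32767]

def sample_cdt():
    bits = random.getrandbits(16)
    sign = bits & 1
    val = bits >> 1                      # uniform 15-bit value
    z = sum(1 for t in CDT if val > t)   # magnitude = entries below val
    return -z if sign else z
```

The masked gadget must perform the `val > t` comparisons on shared values without ever recombining them, which is exactly where the cost reductions reported above apply.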
Tim Ruffing
As of November 2021, Bitcoin supports “Taproot” spending policies whose on-chain format is a single elliptic curve point. A transaction spending the funds associated with a Taproot policy can be authorized by interpreting the curve point either (a) as a public key of the Schnorr signature scheme and providing a suitable signature, or (b) as a commitment to alternative spending conditions and satisfying those.
Since a sufficiently powerful quantum adversary would be able to forge Schnorr signatures, an upgrade to Bitcoin may, at some point in the future, disable the ability to spend existing funds via Schnorr signatures in order to prevent the havoc created by leaving a large fraction of the currency supply prone to theft. However, to avoid irrevocably losing all funds not migrated in time to (yet to be added) post-quantum signature schemes, it will be desirable for an upgrade disabling Schnorr signatures to retain the ability to spend funds by interpreting the curve point in a Taproot policy as a commitment to alternative spending conditions.
This paper justifies such an upgrade strategy by demonstrating the post-quantum security of Taproot as a commitment scheme. Specifically, it provides concrete upper bounds on the probability that a quantum adversary making some number of queries to a quantum random oracle can break the binding or hiding property. Since the bounds follow from powerful existing results, which enable reasoning as if dealing with a classical adversary, the proofs are accessible without a background in quantum computing.
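The commitment at issue is the Taproot tweak $Q = P + \mathrm{hash}(P, \mathit{script}) \cdot G$: the internal key $P$ hides the alternative spending conditions, and the hash binds them. A toy Python sketch using a small multiplicative group in place of secp256k1 and plain SHA-256 in place of BIP-341's tagged hashes:

```python
import hashlib
import random

# Stand-in group: order-q subgroup of Z_p^* (real Taproot uses secp256k1).
p, q, g = 2039, 1019, 4

def tweak_hash(P, script):
    d = hashlib.sha256(str(P).encode() + script).digest()
    return int.from_bytes(d, "big") % q

def taproot_commit(P, script):
    # Multiplicative analogue of Q = P + H(P, script) * G.
    return P * pow(g, tweak_hash(P, script), p) % p

def taproot_open(Q, P, script):
    # Anyone given P and the script can verify the commitment.
    return Q == taproot_commit(P, script)
```

Spending path (b) amounts to revealing $P$ and the script and letting verifiers recompute the tweak, which is why the commitment's binding and hiding properties, not the signature scheme, are what must survive a quantum adversary.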
Yufei Yuan, Haiyi Xu, Lei Zhang, Wenling Wu
In this paper, we revisit the standard approach to constructing neural distinguishers in symmetric cryptanalysis and introduce a game-like model, the Coin-Tossing model, to generalize this methodology. From the perspective of Boolean functions, we show that many classical cryptanalytic techniques can be generalized as a specific family of Boolean functions, termed the CPF class. We further explore the connection between learning CPF Boolean functions in the Coin-Tossing model and the well-known Learning Parity with Noise (LPN) problem. Leveraging the theoretical analysis, we identify key attributes of CPF functions that significantly affect how effectively machine learning algorithms can learn them. To validate our conclusions, we also conduct extensive experiments based on machine learning algorithms. Incorporating our theoretical insights, we propose advanced 8-round and 9-round neural distinguishers for SPECK32/64 by reducing the problem complexity. Additionally, we propose a method based on the Goldreich-Levin algorithm to analyze and interpret what black-box distinguishers learn. Using this approach, we reinterpret several established neural distinguishers in terms of their Fourier expansion, resolving them into a small number of Fourier terms. Notably, we identify a new type of distinguisher from neural networks that had not previously been discovered by cryptanalysts, which can be considered a variant of the Differential-Linear distinguisher. We also demonstrate that the neural network not only learned the optimal Differential-Linear (DL) distinguishers found using existing MILP/MIQCP models, but also discovered even superior ones.
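The Fourier-expansion view is the standard Walsh-Hadamard analysis of Boolean functions: each coefficient $\hat{f}(S)$ measures the correlation of $f$ with the parity (linear mask) $S$, and the Goldreich-Levin algorithm finds the large coefficients without enumerating all of them. A brute-force Python sketch for small $n$:

```python
from itertools import product

def fourier_coefficients(f, n):
    """Walsh-Hadamard coefficients of f: {0,1}^n -> {0,1}.

    In the +/-1 convention: hat_f(S) = E_x[(-1)^(f(x) + S.x)],
    i.e. the correlation of f with the parity over the mask S.
    """
    coeffs = {}
    for S in product((0, 1), repeat=n):
        total = 0
        for x in product((0, 1), repeat=n):
            sx = sum(s * xi for s, xi in zip(S, x)) % 2
            total += (-1) ** (f(x) ^ sx)
        coeffs[S] = total / 2 ** n
    return coeffs
```

A distinguisher concentrated on a few large coefficients is exactly one that can be rewritten as a small number of Fourier terms, which is the interpretation technique described above; Goldreich-Levin replaces this $4^n$ brute force with a search that is efficient in $n$.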
Keewoo Lee
A Private Information Retrieval (PIR) scheme allows a client to retrieve data from a database hosted on a remote server without revealing which location is being accessed. In Doubly-Efficient PIR (DEPIR), the server preprocesses the database offline into a data structure that enables it to answer any client query in sublinear time with respect to the database size $N$. The breakthrough work of Lin-Mook-Wichs (STOC’23) presented the first DEPIR construction from the Ring-LWE assumption. This remains essentially the only known construction to date, and realizing DEPIR from alternative foundations is an open problem. In particular, constructing DEPIR from plain LWE is still open.
In this work, we present a simple LWE-based construction of DEPIR in the CRS model. Our construction is inspired by SimplePIR (USENIX’23) and leverages the matrix–vector multiplication preprocessing of Williams (SODA’07). By applying the compiler of Lin–Mook–Wichs (Eurocrypt’25), we also obtain an sk-DEPIR scheme, a keyed variant of DEPIR, based on LWE in the standard model. All existing sk-DEPIR constructions, except the one implied by LMW23 based on Ring-LWE, rely on non-standard code-based assumptions. While our constructions are only barely doubly-efficient, with server computation of $O(N / \log{N})$, it was previously unknown whether even such modest sublinear efficiency could be achieved from unstructured, plain LWE.
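The SimplePIR template the construction starts from works as follows: the server publishes a one-time hint $D \cdot A$, the client sends an LWE encryption of a scaled unit vector, and the answer $D \cdot (A s + e + \Delta u_i)$ decodes to column $i$ after subtracting $\mathrm{hint} \cdot s$ and rounding. A toy Python sketch with insecurely small parameters (the paper's Williams-style preprocessing of the online matrix-vector product is not shown):

```python
import random

q = 2**32           # LWE modulus
p_pt = 256          # plaintext space
delta = q // p_pt
n = 16              # LWE dimension (toy; real schemes use ~1024)
m = 8               # database is an m x m matrix of small entries

random.seed(0)
db = [[random.randrange(p_pt) for _ in range(m)] for _ in range(m)]
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]

# Offline phase: server publishes hint = db * A (mod q).
hint = [[sum(db[i][k] * A[k][j] for k in range(m)) % q
         for j in range(n)] for i in range(m)]

def query_col(i):
    # LWE encryption of delta * u_i under fresh secret s.
    s = [random.randrange(q) for _ in range(n)]
    qu = [(sum(A[row][j] * s[j] for j in range(n))
           + random.randrange(-3, 4)             # small LWE error
           + (delta if row == i else 0)) % q
          for row in range(m)]
    return s, qu

def answer(qu):
    # Online phase: one database-vector product.
    return [sum(db[row][k] * qu[k] for k in range(m)) % q
            for row in range(m)]

def recover(s, ans):
    # ans - hint * s = delta * (column i) + small noise; round it away.
    out = []
    for row in range(m):
        noisy = (ans[row] - sum(hint[row][j] * s[j] for j in range(n))) % q
        out.append(round(noisy / delta) % p_pt)
    return out
```

The online cost here is the full product $D \cdot \mathit{qu}$, which is linear in the database size; the construction's point is to preprocess $D$ so that this product runs in $O(N/\log N)$.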
Anders Lindman
Cascader, a novel key-exchange protocol based on an iterative multiplicative recurrence over a finite field, is introduced. In contrast to standard methods, e.g., traditional Diffie–Hellman and ECC, it replaces exponentiation and scalar multiplication with layered products, achieving commutativity and deterministic pseudorandom behavior.
Hugo Beeloo-Sauerbier Couvée, Antonia Wachter-Zeh, Violetta Weger
The seminal work of [Bardet et al., 2020] has shown the potential of algebraic modelings to solve the Rank Syndrome Decoding Problem (R-SDP). For most parameter ranges, their algorithm first needs to perform a guessing step, making it a hybrid approach between combinatorial and algebraic solvers. In this paper, we present a novel guessing technique, which, when applied to [Bardet et al., 2020], is able to attack the proposed parameters of RYDE, a second-round candidate for the additional NIST standardization process. In particular, we show that all of the proposed RYDE parameters have to be updated: We can reduce the security of RYDE-1 to 104 bits, RYDE-3 to 153 bits, and RYDE-5 to 210 bits.