IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
20 October 2025
Andrej Bogdanov, Rohit Chatterjee, Yunqi Li, Prashant Nalini Vasudevan
Prange's information set algorithm is a decoding algorithm for arbitrary linear codes. It decodes corrupted codewords of any $\mathbb{F}_2$-linear code $C$ of message length $n$ up to relative error rate $O(\log n / n)$ in $\mathsf{poly}(n)$ time. We show that the error rate can be improved to $O((\log n)^2 / n)$, provided: (1) the decoder has access to a polynomial-length advice string that depends on $C$ only, and (2) $C$ is $n^{-\Omega(1)}$-balanced.
As a consequence we improve the error tolerance in decoding random linear codes if inefficient preprocessing of the code is allowed. This reveals potential vulnerabilities in cryptographic applications of Learning Noisy Parities with low noise rate.
Our main technical result is that the Hamming weight of $Hw$, where $H$ is a random sample of *short dual* codewords, measures the proximity of a word $w$ to the code in the regime of interest. Given such $H$ as advice, our algorithm corrects errors by locally minimizing this measure. We show that for most codes, the error rate tolerated by our decoder is asymptotically optimal among all algorithms whose decision is based on thresholding $Hw$ for an arbitrary polynomial-size advice matrix $H$.
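As a rough illustration of this local-minimization idea, the sketch below decodes a small random systematic code by greedily flipping bits to reduce the Hamming weight of $Hw$. The code, dimensions, and the way the advice matrix is sampled are simplifications chosen for readability; the paper's construction relies on genuinely short dual codewords and the balancedness condition stated above.

```python
# A minimal toy sketch (not the paper's construction) of decoding by locally
# minimizing the Hamming weight of H @ w, where the rows of H are dual
# codewords serving as precomputed advice.  Here the code is a small random
# systematic code and H is built from its parity checks; the paper instead
# samples polynomially many *short* dual codewords, which is what makes this
# proximity measure reliable at error rate O((log n)^2 / n).
import numpy as np

rng = np.random.default_rng(2024)

k, n = 20, 60                                                  # toy [n, k] binary code
A = rng.integers(0, 2, size=(k, n - k))
G = np.concatenate([np.eye(k, dtype=int), A], axis=1)          # G = [I | A]
H0 = np.concatenate([A.T, np.eye(n - k, dtype=int)], axis=1)   # dual (parity-check) basis

# Advice matrix: the parity checks plus sums of random pairs of them.
pairs = rng.integers(0, n - k, size=(3 * (n - k), 2))
H = np.concatenate([H0, (H0[pairs[:, 0]] + H0[pairs[:, 1]]) % 2], axis=0)

def weight(w):
    """Proximity measure: Hamming weight of H w over F_2."""
    return int(((H @ w) % 2).sum())

def local_min_decode(w):
    """Steepest-descent bit flipping: flip the bit that lowers wt(H w) the most."""
    w = w.copy()
    while True:
        base = weight(w)
        scores = []
        for i in range(n):
            w[i] ^= 1
            scores.append(weight(w))
            w[i] ^= 1
        i_best = int(np.argmin(scores))
        if scores[i_best] >= base:
            return w                          # local minimum reached
        w[i_best] ^= 1

msg = rng.integers(0, 2, size=k)
c = (msg @ G) % 2
e = np.zeros(n, dtype=int)
e[rng.choice(n, size=2, replace=False)] = 1   # two random bit errors
decoded = local_min_decode((c + e) % 2)
# With such a small toy instance the greedy usually, but not always, recovers c.
print("recovered codeword:", bool(np.array_equal(decoded, c)))
```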
Shichang Wang, Meicheng Liu, Shiqi Hou, Dongdai Lin
At CHES 2017, Banik et al. proposed the lightweight block cipher GIFT, consisting of two versions, GIFT-64 and GIFT-128. Recently, many authenticated encryption schemes, such as GIFT-COFB and HyENA, have adopted GIFT-128 as their underlying primitive. To provide a comprehensive assessment of the soundness of these designs, we evaluate their security against differential-linear cryptanalysis.
For this purpose, automatic tools are developed to search for differential-linear approximations of S-box-based ciphers. With the assistance of these automatic tools, we find 13-round differential-linear approximations for GIFT-COFB and HyENA. Based on these distinguishers, 18-round key-recovery attacks are given for the message processing phase and the initialization phase of both ciphers. Moreover, the resistance of GIFT-64/128 against differential-linear cryptanalysis is also evaluated. We find 12-round and 17-round differential-linear approximations for GIFT-64 and GIFT-128, respectively, which lead to 18-round and 19-round key-recovery attacks. We stress that our attacks do not threaten the security of these ciphers.
Taiyu Wang, Cong Zhang, Hong-Sheng Zhou, Xin Wang, Pengfei Chen, Wenli Wang, Kui Ren, Chun Chen
The generic group model (GGM) is fundamental for evaluating the feasibility and limitations of group-based cryptosystems. Two prominent versions of the GGM exist in the literature: Shoup's GGM and Maurer's GGM. Zhandry (CRYPTO 2022) points out inherent limitations in Maurer's GGM by demonstrating that several textbook cryptographic primitives, which are provably secure in Shoup's GGM, cannot be proven secure in Maurer's model.
In this work, we further investigate Shoup's GGM and identify novel limitations that have been previously overlooked. Specifically, to prevent generic algorithms from generating valid group elements without querying the oracle, the model typically employs sufficiently large encoding lengths. This leads to sparse encodings, a setting referred to as the sparse generic group model (sparse GGM). We emphasize that this sparseness introduces several constraints:
--Groups with AE and Black-Box Separation: Shoup's GGM is typically instantiated with elliptic curve groups, which admit admissible encodings (AE)—functions mapping from Z_N to elliptic curve points. We establish a black-box separation, showing that the sparse GGM fails to capture cryptographic groups that are both (1) computational Diffie-Hellman (CDH) secure and (2) compatible with admissible encodings.
--Comparison with EC-GGM: We examine the relationship between the sparse GGM and the Elliptic Curve Generic Group Model (EC-GGM) introduced by Groth and Shoup (EUROCRYPT 2022), which inherently yields CDH-secure groups with admissible encodings. Within the framework of indifferentiability, we prove that EC-GGM is strictly stronger than sparse GGM.
--Dense Groups and Black-Box Separation: We revisit groups with dense encodings and establish a black-box separation between CDH-secure dense groups and the sparse GGM.
--Extension to Bilinear Settings: Our results naturally extend to the sparse Generic Bilinear Group Model (GBM), demonstrating that the aforementioned constraints still hold.
In conclusion, our findings indicate that both feasibility and impossibility results in Shoup's GGM should be reinterpreted in a fine-grained manner, encouraging further exploration of cryptographic constructions and black-box separations in EC-GGM or dense GGM.
Agha Aghayev, Nour-eddine Rahmani
Asymmetric cryptographic constructions based on number-theoretic hardness assumptions are rendered insecure by Shor's quantum algorithm and will be vulnerable to large-scale quantum computers. Hence, the transition to quantum-resistant cryptosystems is a major task, and digital signatures are a fundamental primitive in numerous applications. Recently, a new approach by Nguyen et al. [9] has claimed post-quantum security by basing the signature algorithm's security on a variant of the discrete logarithm problem. In this paper, we present a cryptanalysis of this construction and demonstrate a practical forgery attack that allows generating an unlimited number of valid signatures without access to a signing oracle.
Jonas Schupp, Marco Gianvecchio, Alessandro Barenghi, Patrick Karl, Gerardo Pelosi, Georg Sigl
Post-quantum cryptosystems are currently attracting significant research efforts due to the continuous improvements in quantum computing technologies, which led the National Institute of Standards and Technology (NIST) to open standardization competitions to foster proposals and public scrutiny of cryptosystems and digital signatures. While NIST has chosen three digital signature algorithms after four selection rounds, it has also opened a new selection process, as the chosen candidates either rely only on lattice-based computationally hard problems or have unsatisfactory performance figures. In this work, we propose two optimized implementations of the Codes and Restricted Objects Signature Scheme (CROSS) targeting the Cortex-M4 platform. One implementation targets the minimal possible stack size, while the other trades some memory space for performance, using vectorization for some performance-critical arithmetic operations. We show that all parameter sets fit within at most 24 kB of stack, which corresponds to a reduction by a factor of 15 to 45 with respect to the reference implementation. The memory footprint of our implementation, also taking the size of the binary and of the signature into account, is less than 128 kB. We additionally outline different stack-reduction options which allow for a fine-grained trade-off between the memory footprint and the performance of the algorithm. Notably, we also show that our memory optimizations alone have no significant impact on the signature verification of CROSS, while we even achieve a speed-up factor of up to 1.7 when taking both the stack and speed optimizations into account.
Diamante Simone CRESCENZO, Emanuele VALEA, Alberto BOSIO
Conventional cryptographic algorithms rely on hard mathematical problems to ensure an appropriate level of security. However, with the advent of quantum computing, classical cryptographic algorithms are expected to become vulnerable. For this reason, Post-Quantum Cryptography (PQC) algorithms have emerged as a response, being designed to resist quantum attacks. Most PQC algorithms rely on the Learning With Errors (LWE) problem, where generating pseudo-random controlled errors is crucial. A well-known solution is the use of hash functions followed by error samplers for specific error distributions, whose implementation is challenging. This paper provides a proof of concept demonstrating how Approximate Computing (AxC) can be exploited in LWE-based cryptographic algorithms to alleviate this implementation bottleneck. The main idea is to use AxC circuits to run some of the algorithm's operations, introducing the required error for free thanks to the approximation. Our key contribution is demonstrating how AxC techniques can be effectively applied to LWE-based algorithms, highlighting a novel approach to generating and introducing the error. This concept has proven effective in an approximate implementation of the FrodoKEM algorithm, where we achieve a 50.3% reduction in the need for Gaussian sampling. Additionally, we observe a performance improvement of 2.19%, which further supports the feasibility of this approach. Overall, this work introduces and validates a new design direction for LWE-based cryptography through AxC, opening the way for further research.
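As a toy illustration of this idea (not the FrodoKEM design itself), the sketch below generates LWE-style samples in which the error term is not drawn from an explicit sampler but arises from a deliberately truncated multiplication; the specific approximate multiplier and parameters are invented for readability.

```python
# A toy illustration (not FrodoKEM) of the Approximate Computing idea: instead
# of sampling an explicit error term e and computing b = <a, s> + e mod q, an
# approximate inner product is used and the arithmetic error itself plays the
# role of e.  The "approximate multiplier" here (dropping the low bits of one
# operand) is a stand-in chosen for readability, not the circuit from the paper,
# and the induced error is biased; the real design shapes the approximation so
# the error fits the required distribution (and only partially replaces sampling).
import numpy as np

q, n, TRUNC = 2**15, 64, 3                  # modulus, dimension, bits dropped
rng = np.random.default_rng(7)

def approx_mul(x, y):
    """Multiply with the low TRUNC bits of x dropped (a crude approximate unit)."""
    return (x & ~((1 << TRUNC) - 1)) * y

def lwe_sample(s):
    a = rng.integers(0, q, size=n)
    exact = int(np.dot(a, s)) % q
    approx = int(sum(approx_mul(int(ai), int(si)) for ai, si in zip(a, s))) % q
    return a, approx, (approx - exact) % q  # the implicit "error" term

s = rng.integers(0, 8, size=n)              # small secret
errs = [lwe_sample(s)[2] for _ in range(1000)]
errs = [(e if e < q // 2 else e - q) for e in errs]   # centered representatives
print("mean/std of induced error (vs modulus 2^15):", np.mean(errs), np.std(errs))
```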
Dimitri Koshelev
This article considers batch hashing to elliptic curves. This kind of hash function has found numerous applications in elliptic curve cryptography. In practice, a hash-to-curve function is often evaluated by the same entity at many different inputs at a time. It turns out that, under certain mild conditions, simultaneous evaluations can be carried out several times faster than separate ones. In this regard, the article introduces a new class of elliptic curves over finite fields that is more appropriate for multiple hashing to them. Moreover, two explicit hashing-friendly Montgomery/twisted Edwards curves (of $\approx 128$ security bits) have been generated: one of CM discriminant $-7$, i.e., a GLV-friendly curve, and one of huge CM discriminant, i.e., a CM-secure curve. The new elliptic curves are intentionally covered by so-called Klein's and Bring's curves of geometric genera $3$ and $4$, respectively. The latter are well studied in various algebraic geometry contexts, although, to the author's knowledge, they have not yet been (reasonably) applied in cryptography. Such a mathematical complication is justified, since conventional curves (from existing standards or of $j$-invariants $0$, $1728$) are seemingly less efficient for batch hashing.
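The abstract does not spell out the batching mechanism, so the following sketch only illustrates, in generic terms, why evaluating many instances simultaneously can be cheaper than evaluating them separately: Montgomery's batch-inversion trick replaces the per-input field inversion, a common bottleneck in hash-to-curve maps, by a single inversion plus a few multiplications. This is a standard technique and not necessarily the one underlying the paper's speed-up.

```python
# Montgomery's batch-inversion trick: invert m field elements with a single
# modular inversion plus O(m) multiplications.  A generic illustration of why
# simultaneous evaluations can beat separate ones, not the paper's method.
p = 2**255 - 19          # any prime field works; this one is just a familiar choice

def batch_invert(xs):
    """Invert all nonzero xs mod p using one modular inversion."""
    prefix = [1]
    for x in xs:
        prefix.append(prefix[-1] * x % p)      # prefix[i] = x_0 * ... * x_{i-1}
    inv_all = pow(prefix[-1], p - 2, p)        # single inversion of the product
    invs = [0] * len(xs)
    for i in reversed(range(len(xs))):
        invs[i] = inv_all * prefix[i] % p      # 1 / x_i
        inv_all = inv_all * xs[i] % p          # strip x_i from the running inverse
    return invs

xs = [3, 5, 7, 11, 2**200 + 1]
assert all(x * y % p == 1 for x, y in zip(xs, batch_invert(xs)))
print("batch inversion OK")
```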
Debranjan Pal, Anubhab Baksi, Surajit Mandal, Santanu Sarkar
A common way to perform classical cryptanalysis on symmetric-key ciphers is to encode the problem as an instance of the Mixed Integer Linear Programming (MILP) problem and then run the instance with an efficient solver. For this purpose, it is essential to model the components in a way that is compatible with the MILP formulation while preserving the characteristics of the cipher. In this work, we aim at the problem of efficiently encoding a substitution box (SBox for short). More specifically, we take the evaluation tables, namely the Difference Distribution Table (DDT) used to model differential cryptanalysis, the Linear Approximation Table (LAT) used to model linear cryptanalysis, the Division Property Table (DPT) used to model integral cryptanalysis, and the Boomerang Connectivity Table (BCT) used to model boomerang cryptanalysis.
At the current stage, the only model proposed/used in the literature looks for the non-zero entries in the evaluation table and encodes those (this ultimately leads to the minimum number of active SBoxes for the target cipher). In our first contribution, we experiment with the complementary concept of using the zero entries (i.e., treating the zero entries as non-zero entries and vice versa). In our second contribution, we observe that the top row and column of each table (which are always taken into consideration) are actually redundant, as they are inherently taken care of through additional constraints at another level (which we call the 'secondary structure').
We show the efficacy of our proposals by furnishing results on the evaluation tables on multiple SBoxes. In fact, the first proposal reduces the number of constraints for certain SBox evaluation tables (depending on the properties of the table). The second proposal, on the other hand, always reduces the number of constraints from the state-of-the-art results.
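For concreteness, the sketch below builds one of the evaluation tables in question, the DDT of a 4-bit S-box, and counts the zero and non-zero entries that the two proposals act on; the PRESENT S-box is used purely as a familiar example and is not tied to this paper's results.

```python
# A minimal sketch of one of the evaluation tables encoded into MILP: it builds
# the DDT of a 4-bit S-box and counts zero vs. non-zero entries, the quantities
# the two proposals act on (encoding the complement set of zero entries, and
# dropping the redundant first row and column).  The 4-bit PRESENT S-box is
# used here only as a familiar example.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD, 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for din in range(n):
        for x in range(n):
            table[din][sbox[x] ^ sbox[x ^ din]] += 1
    return table

T = ddt(SBOX)
inner = [T[i][j] for i in range(1, 16) for j in range(1, 16)]   # drop row/col 0
print("non-zero entries to encode:", sum(v != 0 for v in inner))
print("zero entries (complement model):", sum(v == 0 for v in inner))
print("trivial corner entry DDT[0][0] =", T[0][0], "(always 16 for a 4-bit S-box)")
```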
Benedikt Bünz, Kevin Choi, Chelsea Komlo
In this work, we present Golden, a non-interactive Distributed Key Generation (DKG) protocol. The core innovation of Golden is how it achieves public verifiability in a lightweight manner, allowing all participants to non-interactively verify that all other participants followed the protocol correctly. For this reason, Golden can be performed with only one round of (broadcast) communication. Non-interactive DKGs are important for distributed applications; as parties may go offline at any moment, reducing rounds of communication is a desirable feature. Golden outputs Shamir secret shares of a field element sk ∈ Zp to all participants, and a public key PK = g^sk that is a discrete-logarithm commitment to sk. Further, the security of Golden requires only the hardness of discrete-logarithm assumptions, and so it can be used over any elliptic curve where these assumptions hold.
Golden is more efficient than prior related schemes in both bandwidth and computation. For 50 participants, Golden requires only 223 kb bandwidth and 13.5 seconds of total runtime for each participant, in comparison to ElGamal-based non-interactive DKG, which requires 27.8 MB bandwidth and 40.5 seconds runtime per participant.
As a building block, we define a new exponent Verifiable Random Function (eVRF). Our eVRF uses a non-interactive key exchange (NIKE) as a building block to derive a Diffie-Hellman shared secret key, and proves correctness of this key with respect to the corresponding Diffie-Hellman public keys. Within Golden, participants use this eVRF in a pairwise manner to generate a one-time pad to encrypt Shamir secret shares to their respective recipients while ensuring public verifiability. As such, Golden avoids the use of public-key encryption schemes such as ElGamal, Paillier, or class groups, departing from prior schemes in the literature. Finally, our eVRF may be of independent interest to settings where publicly-verifiable encryption is desirable.
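The sketch below illustrates only the pairwise encryption pattern described above: a Diffie-Hellman shared secret hashed into a one-time pad that masks a Shamir share. It uses a small multiplicative group modulo a prime instead of an elliptic curve and omits the eVRF proofs that make Golden publicly verifiable, so it should be read as a conceptual toy rather than the protocol itself.

```python
# A toy sketch of the encryption pattern described above: each pair of
# participants derives a Diffie-Hellman shared secret from their long-term keys
# and hashes it into a one-time pad that masks the Shamir share sent between
# them.  This omits the eVRF proofs that make Golden publicly verifiable, and
# uses a small (insecure) multiplicative group mod p instead of an elliptic
# curve, purely for readability.
import hashlib, secrets

q = 1019               # small prime subgroup order (toy parameters only)
p = 2 * q + 1          # 2039 is prime, so the squares mod p form a group of order q
g = 4                  # a square mod p, hence a generator of that order-q subgroup

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def pad(shared_secret, i, j):
    """Hash the DH shared secret (and the pair of indices) into a pad in Z_q."""
    digest = hashlib.sha256(f"{shared_secret}|{i}|{j}".encode()).digest()
    return int.from_bytes(digest, "big") % q

def share_poly(secret, t):
    """Shamir polynomial of degree t with the given constant term."""
    coeffs = [secret] + [secrets.randbelow(q) for _ in range(t)]
    return lambda x: sum(c * pow(x, k, q) for k, c in enumerate(coeffs)) % q

sk_i, pk_i = keygen()      # dealer (participant 1)
sk_j, pk_j = keygen()      # recipient (participant 3)
f = share_poly(secret=123, t=2)

# Dealer encrypts the share f(3) under the pairwise DH pad; recipient removes it.
dh_i = pow(pk_j, sk_i, p)
ct = (f(3) + pad(dh_i, 1, 3)) % q
dh_j = pow(pk_i, sk_j, p)
share = (ct - pad(dh_j, 1, 3)) % q
print("recovered share matches:", share == f(3))
```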
19 October 2025
Felix Uhle, Nicolai Müller, Thorben Moos, Philipp Günther, Amir Moradi
Fault injection attacks target cryptographic primitives implemented in hardware by deliberately inducing faults, leading to abnormal behavior that can be exploited to extract sensitive information. To mitigate these threats, practical and low-overhead countermeasures, whether at the algorithmic or physical level, are essential. However, the real-world effectiveness of these countermeasures remains uncertain, as it is not always clear whether, or to what extent, their underlying security assumptions hold in practice. Therefore, a thorough evaluation under realistic attack scenarios, including recent techniques such as Electromagnetic Fault Injection (EMFI), is crucial. In this work, we demonstrate the resistance of a protected real-world target chip against EMFI. Specifically, our fabricated 65 nm CMOS ASIC employs concurrent error correction based on the techniques described in Impeccable Circuits II as an algorithm-level countermeasure. At the physical level, the chip integrates multiple coils of varying sizes and positions that serve as sensors for electromagnetic fields. We employ a practical and affordable attack setup featuring a commercial fault-injection probe mounted on an XYZ stage for precise positioning over the ASIC. This setup allows us to investigate the effects of various attack parameters, including the probe’s position, pulse polarity, and voltage level. Our results highlight that the coils serve as a lightweight and effective countermeasure for the practical detection of EMFI attempts. In contrast, for concurrent error correction, a gap between theory and practice emerges: the protection overhead actually makes such designs more susceptible to EMFI in real-world scenarios.
18 October 2025
Nils Fleischhacker, Guilherme Rito
Deniable authentication is a very desirable guarantee for secure messaging — and in particular Off-The-Record messaging: it allows a sender Alice to authenticate her messages while retaining plausible deniability on whether she authenticated any of her messages.
While deniable authentication is not achievable in standard Public-Key Infrastructures, Dodis et al. (TCC ’09) have shown that it is achievable in so-called Key-Registration with Knowledge setups. These guarantee that: 1. dishonest parties’ key-pairs are honestly generated; and 2. dishonest parties have access to their secret keys. However, these are expensive assumptions, and it is unclear how to securely realize them in practice.
In this paper we study the weakest setup requirements that are sufficient to enable deniable authentication. One of our main findings is that deniable authentication does require an expensive — but not necessarily impractical — Key-Registration procedure. Roughly, this procedure must guarantee that if a user’s key registration is successful, then the user can extract by themselves valid secret keys from their interaction with the registration authority. We show this setup is sufficient by capturing it via an appropriate security model and then proving the anonymity of a ring signature based on it. On the other hand, we explain why this setup seems inherently necessary by listing a series of attacks that void deniability as soon as a user manages to successfully register a public key for which it can convincingly claim it does not know a valid (corresponding) secret key.
The requirements we identify for key-registration protocols are expensive, but not necessarily impractical. Our second main contribution is showing how to amortize the per-user Key-Registration cost, which brings deniable authentication guarantees a step closer to practicality.
17 October 2025
Alper Çakan, Vipul Goyal, Fuyuki Kitagawa, Ryo Nishimaki, Takashi Yamakawa
Unclonable cryptography leverages the quantum no-cloning principle to copy-protect cryptographic functionalities. While most existing works address the basic single-copy security, the stronger notion of multi-copy security remains largely unexplored.
We introduce a generic compiler that upgrades collusion-resistant unclonable primitives to achieve multi-copy security, assuming only one-way functions. Using this framework, we obtain the first multi-copy secure constructions of public-key quantum money (termed quantum coins), single-decryptor encryption, unclonable encryption, and more. We also introduce an extended notion of quantum coins, called upgradable quantum coins, which allow weak (almost-public) verification under weaker assumptions and can be upgraded to full public verification under stronger assumptions by the bank simply publishing additional classical information.
Along the way, we give a generic compiler that upgrades single-copy secure single-decryptor encryption to a collusion-resistant one, assuming the existence of functional encryption, and construct the first multi-challenge secure unclonable encryption scheme, which we believe are of independent interest.
Aurélien Boeuf, Léo Perrin
Zero-Knowledge (ZK) protocols rely internally on hash functions for their security arguments. However, the hash functions that are the most efficient in this context differ substantially from e.g. SHA-3: their round function $R$ must enable an efficient arithmetization of its verification. In practice, this means that verifying whether $y = R(x)$ should involve as few finite field multiplications as possible. In turn, this design requirement implies a greater vulnerability to algebraic attacks. In fact, improvements of these attacks have proved devastating and imply the need to completely rethink the methods used to ensure security against them. In this paper, we show that it is possible to build a simple yet efficient security argument based on a precise estimate of the so-called “algebraic degree” of a system of equations. Furthermore, we show that the increase of this quantity across rounds is tightly connected to the cost of the hash function in two different arithmetizations, namely AIR and R1CS. We precisely quantify this relation by introducing ALgebraic Figures Of Merit (ALFOMs) that capture how efficient a specific primitive (and in fact its round function) is at increasing the security per unit of cost. This new insight allows us to better understand sometimes puzzling performance differences between state-of-the-art hash functions in the R1CS and AIR cases, and to provide a fair and simple comparison of their round functions in this context. Furthermore, we present a new group of round functions we call the Moirai, which allow us to explore what a round function providing an optimal performance/security trade-off could look like.
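To make the "few multiplications" requirement concrete, the toy sketch below counts field multiplications for the power map $x \mapsto x^5$, a round-function ingredient common to several arithmetization-friendly designs; the field, the instrumentation, and the rough identification of one multiplication with one R1CS constraint are illustrative assumptions, not figures from the paper.

```python
# A small illustration (not taken from the paper) of the "few multiplications"
# metric: a field-element wrapper that counts multiplications, applied to the
# power map x -> x^5 used by several arithmetization-friendly round functions.
# Each multiplication corresponds roughly to one R1CS constraint, which is the
# kind of per-round cost the ALFOMs relate to the growth of the algebraic degree.
P = 2**64 - 2**32 + 1          # the "Goldilocks" prime, a common choice in AIR settings

class Fp:
    muls = 0                   # class-wide multiplication counter
    def __init__(self, v):
        self.v = v % P
    def __mul__(self, other):
        Fp.muls += 1
        return Fp(self.v * other.v)
    def __add__(self, other):
        return Fp(self.v + other.v)

def pow5(x):
    x2 = x * x                 # 1 multiplication
    x4 = x2 * x2               # 2
    return x4 * x              # 3: the degree grows from d to 5d at a cost of 3 muls

y = pow5(Fp(123456789))
assert y.v == pow(123456789, 5, P)
print("multiplications (~ R1CS constraints) for x^5:", Fp.muls)
```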
Leandro Rometsch, Philipp-Florens Lehwalder, Anh-Tu Hoang, Dominik Kaaser, Stefan Schulte
Self-Sovereign Identity (SSI) frameworks enable individuals to receive and present digital credentials in a user-controlled way. Revocation mechanisms ensure that invalid or withdrawn credentials cannot be misused. These revocation mechanisms must be scalable (e.g., at national scale) and preserve core SSI principles such as privacy, user control, and interoperability. Achieving both is hard, and finding a suitable trade-off remains a key challenge in SSI research. This paper introduces UPPR, a revocation mechanism for One-Show Verifiable Credentials (oVCs) and unlinkable Anonymous Credentials (ACs). Revocations are managed using per-credential Verifiable Random Function (VRF) tokens, which are published in a Bloom filter cascade on a blockchain. Holders prove non-revocation via a VRF proof for oVCs or a single Zero-Knowledge Proof for ACs. The construction prevents revocation status tracking, allows holders to stay offline, and hides issuer revocation behavior. We analyze the privacy properties of UPPR and provide a prototype implementation on Ethereum. Our implementation enables off-chain verification at no cost. On-chain checks cost 0.56-0.84 USD, while issuers pay only 0.00002-0.00005 USD per credential to refresh the revocation state.
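The sketch below illustrates the published data structure at the heart of this construction, a Bloom filter cascade: each level records the false positives of the previous one, so membership of a token can be resolved exactly. The tokens here are random stand-ins for the per-credential VRF outputs, and filter sizes and hash counts are illustrative rather than UPPR's parameters.

```python
# A compact sketch of the Bloom filter cascade used for the published
# revocation state: level 1 contains the revoked tokens, level 2 the valid
# tokens that level 1 falsely reports, level 3 the revoked tokens that level 2
# falsely reports, and so on, until no false positives remain.  Tokens are
# random strings standing in for per-credential VRF outputs.
import hashlib, secrets

class Bloom:
    def __init__(self, m=4096, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)
    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}|{item}".encode()).digest()
            yield int.from_bytes(h, "big") % self.m
    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1
    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

def build_cascade(revoked, valid):
    levels, include, exclude = [], set(revoked), set(valid)
    while include:
        bf = Bloom()
        for t in include:
            bf.add(t)
        levels.append(bf)
        # The next level must encode the false positives of this one.
        include, exclude = {t for t in exclude if t in bf}, include
    return levels

def is_revoked(token, levels):
    for depth, bf in enumerate(levels):
        if token not in bf:
            return depth % 2 == 1    # dropping out at an even level means "not revoked"
    return len(levels) % 2 == 1

revoked = [secrets.token_hex(16) for _ in range(50)]
valid = [secrets.token_hex(16) for _ in range(500)]
cascade = build_cascade(revoked, valid)
assert all(is_revoked(t, cascade) for t in revoked)
assert not any(is_revoked(t, cascade) for t in valid)
print("cascade levels:", len(cascade))
```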
Shibam Ghosh, Bastien Michel, María Naya-Plasencia
ARADI is a low-latency block cipher introduced by the U.S. National Security Agency (NSA) for secure and efficient memory encryption applications. In contrast to most ciphers proposed in the academic community, the design rationale for ARADI has not been publicly disclosed, limiting external evaluation to independent cryptanalysis. Several such analyses have already been published, with the most effective attacks to date reaching up to 12 out of 16 rounds. In this work, we present a differential meet-in-the-middle attack on ARADI that incorporates several new optimizations and dedicated techniques, enabling, for the first time, an attack extending to 14 rounds of the cipher.
Thomas Marquet, Elisabeth Oswald
Deep learning has become a powerful tool for profiled side-channel analysis, especially for its ability to defeat masking countermeasures. However, obtaining a successful deep learning model remains a challenge when the attacker cannot access the internal randomness of the profiling device. The "plateau effect" hinders the training of the model, as the optimization stalls in flat regions of the loss landscape at initialization, which makes the outcome of a training run uncertain. Previous works showed that multi-task learning makes it possible to overcome this problem by leveraging redundant features across multiple tasks, such as shared randomness or a common masking flow. We continue this line of work by using belief propagation on a larger graph to guide the learning. We introduce a multi-task learning model that explicitly integrates a factor graph reflecting the algebraic dependencies among intermediates in the computation of Kyber's inverse Number Theoretic Transform (iNTT). Such a framework allows the model to learn a mutually beneficial joint representation of the related tasks and provides a mechanism to overcome such plateaus. For the first time, we show that one can perform belief propagation on the masked shares during training even without access to the internal randomness, potentially improving the performance of the attack greatly.
Ziyu Zhao, Antonio Sanso, Giuseppe Vitto, Jintai Ding
Poseidon and Poseidon2 are cryptographic hash functions crafted for efficient zero-knowledge proof systems and have seen wide adoption in practical applications. We introduce the use of the Graeffe transform in univariate polynomial solving within this line of work. The proposed method streamlines the root recovery process in interpolation attacks and achieves several orders of magnitude of acceleration in practical settings, enabling a new and more efficient class of attacks against Poseidon targeting round-reduced permutations and constrained input/output instances. We release open-source code and describe our method in detail, demonstrating substantial improvements over prior approaches: reductions in wall time by a factor of $2^{13}$ and in memory usage by a factor of $2^{4.5}$. Memory-access costs for NTTs turn out to be a dominant barrier in practice, and we prove that this cost increases at least as the $4/3$-power of the input size (up to logarithmic factors), which suggests that the commonly used pseudo-linear cost model may underestimate the true resource requirements. This behavior contrasts with multivariate equation solving, whose main bottleneck remains finite-field linear algebra. We argue that, when selecting parameters, designers should account for interpolation-based attacks explicitly, since their practical hardness is determined by different, and sometimes stronger, resource constraints than those of multivariate techniques.
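For readers unfamiliar with the Graeffe transform, the sketch below performs one root-squaring step over a small prime field: from $p(x)$ it derives $q(y)$ with $q(x^2) = p(x)\,p(-x)$, so the roots of $q$ are the squares of the roots of $p$. The toy prime and polynomial are chosen for readability and are unrelated to Poseidon's parameters.

```python
# A minimal sketch of one Graeffe iteration over a prime field: given p(x) with
# roots r_i, it returns a polynomial q(y) with q(x^2) = p(x) * p(-x), whose
# roots are the r_i^2.  Iterating this is the root-squaring step used to speed
# up root recovery; the tiny prime below is for illustration only.
P = 2**31 - 1                                # a small Mersenne prime

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def graeffe(p):
    """One root-squaring step: split p into even/odd parts and recombine them."""
    e, o = p[0::2], p[1::2]                  # p(x) = e(x^2) + x * o(x^2)
    q_e, q_o = poly_mul(e, e), poly_mul(o, o)
    q = q_e + [0] * (len(q_o) + 1 - len(q_e))
    for i, c in enumerate(q_o):
        q[i + 1] = (q[i + 1] - c) % P        # q(y) = e(y)^2 - y * o(y)^2
    return q

def evaluate(p, x):
    acc = 0
    for c in reversed(p):
        acc = (acc * x + c) % P
    return acc

# p(x) = (x - 3)(x - 5)(x - 7), coefficients in increasing degree order.
p = [(-3 * -5 * -7) % P, (3 * 5 + 3 * 7 + 5 * 7) % P, (-(3 + 5 + 7)) % P, 1]
q = graeffe(p)
assert all(evaluate(q, pow(r, 2, P)) == 0 for r in (3, 5, 7))
print("roots of q are the squares of the roots of p")
```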
Kyungbae Jang, Anubhab Baksi, Hwajeong Seo
Quantum depth plays a critical role in determining the performance of quantum implementations, yet quantum programming tools often fail to produce depth-optimal compilations of linear layers. In this work, we present a systematic and automated framework that reorders quantum gate sequences of linear layers to obtain depth-efficient quantum implementations. Our method consistently produces circuits with lower depth compared to prior implementations.
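A much-simplified illustration of why gate order matters for depth: if each CNOT is scheduled in the earliest layer where both of its qubits are free, two orderings of the same gates can yield different depths even though they implement the same linear map. The scheduling model and the example below are illustrative assumptions, not the framework from the paper.

```python
# A simplified illustration (not the paper's algorithm) of why reordering the
# CNOT sequence of a linear layer changes circuit depth: a gate is scheduled in
# the earliest layer in which both of its qubits are free, so an equivalent
# ordering of the same gates can pack into fewer layers.  The linear map of
# each sequence is checked over F_2 to confirm the two orderings are equivalent.
import numpy as np

N = 4                                        # number of qubits

def depth(gates):
    """ASAP layering: each CNOT goes one layer after the last use of its qubits."""
    last_layer = [0] * N
    d = 0
    for ctrl, tgt in gates:
        layer = max(last_layer[ctrl], last_layer[tgt]) + 1
        last_layer[ctrl] = last_layer[tgt] = layer
        d = max(d, layer)
    return d

def linear_map(gates):
    """The F_2 matrix computed by the CNOT sequence (CNOT adds ctrl into tgt)."""
    M = np.eye(N, dtype=int)
    for ctrl, tgt in gates:
        M[tgt] = (M[tgt] + M[ctrl]) % 2
    return M

seq_a = [(0, 1), (2, 1), (2, 3)]             # as written: depth 3
seq_b = [(2, 1), (0, 1), (2, 3)]             # same gates, commuting swap: depth 2
assert (linear_map(seq_a) == linear_map(seq_b)).all()
print("depth before/after reordering:", depth(seq_a), depth(seq_b))
```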
We apply the framework to a range of cryptographic operations, including the AES MixColumn, internal layers of the AES S-box, binary field squaring, and modular reduction in binary field multiplication. In all these cases, our method achieves meaningful reductions in quantum depth—for example, lowering the depth of the AES MixColumn and S-box circuits.
This work explores optimal quantum circuit designs for quantum programming tools, improves the accuracy of quantum resource estimation for cryptanalysis, and supports more realistic evaluations of post-quantum security.
Zhengjun Cao, Lihua Liu
We show that the authentication scheme (IEEE Internet Things J., 24310-24322, 2024) cannot be practically implemented, because it misuses the elliptic curve group law. Point multiplication is represented as $kP$, where $k$ is an integer and $P$ is a point on an elliptic curve, but the scheme uses the ill-defined expression $Q_iQ_j$ to construct its verification equations, where $Q_i$ and $Q_j$ are two points. Besides, we find that an adversary can retrieve the target relay device's secret key from messages intercepted over open channels.
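To make the observation concrete, the toy sketch below implements the group law on a small elliptic curve: point addition and scalar multiplication $kP$ are well defined, whereas no "product" of two points $Q_iQ_j$ exists in the group law. The curve and base point are arbitrary toy choices unrelated to the analyzed scheme.

```python
# A toy sketch of the point at issue: on an elliptic curve group the defined
# operations are point addition P + Q and scalar multiplication kP; there is no
# group-law "product" of two points, which is why verification equations built
# from Q_i Q_j cannot be implemented.  The curve y^2 = x^3 + 2x + 3 over F_97
# with base point (3, 6) is a toy example, unrelated to the analyzed scheme.
p = 97
a, b = 2, 3
O = None                                    # the point at infinity

def add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                            # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow((2 * y1) % p, -1, p) % p
    else:
        lam = (y2 - y1) * pow((x2 - x1) % p, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mul(k, P):
    """Double-and-add: the well-defined operation written kP above."""
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

P = (3, 6)                                  # satisfies y^2 = x^3 + 2x + 3 mod 97
Q = scalar_mul(5, P)
assert (Q[1] ** 2 - (Q[0] ** 3 + a * Q[0] + b)) % p == 0   # 5P lies on the curve
# There is no function mul_points(Q_i, Q_j) to write here: the group law never
# defines a product of two points, only addition and scalar multiplication.
print("5P =", Q)
```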
Liyan Chen, Cody Freitag, Zhengzhong Jin, Daniel Wichs
We construct the first unambiguous succinct non-interactive arguments (SNARGs) for P and incrementally verifiable computation (IVC) for P from the polynomial hardness of learning with errors (LWE). Unambiguity guarantees that it is computationally hard to find two distinct accepting proofs for the same statement.
As an application, we establish the first PPAD hardness result based on the polynomial hardness of LWE combined with a widely believed complexity assumption.
Central to our approach is a new notion of rate-1 witness-unambiguous batch arguments for NP, which we give the first construction from the polynomial hardness of LWE. This notion may be of independent interest.