IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
29 October 2025
Victor Delfour, Marc-Olivier Killijian
The growing need for secure computation has spurred interest in cryptographic techniques that operate on encrypted data without revealing its content. Fully Homomorphic Encryption (FHE), especially LWE-based schemes, enables such processing while preserving confidentiality. Decentralized computing offers scalable resources without requiring in-house servers, but it relies heavily on the confidentiality guarantees of underlying schemes. While many existing protocols successfully protect input privacy, function confidentiality remains a largely overlooked but crucial aspect of secure delegated computation.
In this work, we present a novel Oblivious Universal Function (OUF) scheme that enables a client to outsource computation of an arbitrary function to an untrusted server while hiding both the input data and the function being applied. Our construction leverages LWE-based FHE and a virtual-tape evaluation model to support composable, non-interactive, and reusable function execution. Crucially, the server remains oblivious not only to the encrypted inputs but also to the structure, type, and identity of the function it is executing. OUF thus bridges the gap between theoretical privacy guarantees and practical secure computation in decentralized environments.
Allison Bishop, Matthew Green, Yuval Ishai, Abhishek Jain, Paul Lou
In a secret sharing scheme for a monotone access structure $\mathcal{A}:\{0,1\}^n\rightarrow \{0,1\}$, a dealer can share a secret $s$ among $n$ parties such that any authorized subset of parties $A\in\mathcal{A}$ can recover $s$ while all other subsets learn nothing about $s$. In this work, we study fully anonymous secret sharing (FASS), which strengthens standard secret sharing by requiring the following properties:
- Share Anonymity. The shares belonging to any unauthorized set of parties not only hide the secret, but also all identifiable information such as party identities and whether or not the shares were generated together. In particular, it suffices that such shares be uniform and independent.
- Anonymous Reconstruction. The reconstruction algorithm does not need to know the reconstructing set of parties.
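For intuition, the simplest instance of these requirements is the $n$-out-of-$n$ access structure, where additive (XOR) sharing is already fully anonymous: every proper subset of shares is uniform and independent, and reconstruction is identity-oblivious. A minimal Python sketch (our illustration, not from the paper):

import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(secret: bytes, n: int) -> list[bytes]:
    # The first n-1 shares are fresh uniform strings; the last is chosen so
    # that all n shares XOR to the secret. Any proper subset is therefore
    # uniform and independent: it carries no identities and hides the secret.
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, secret))
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    # Identity-oblivious: XOR is commutative, so reconstruction never needs
    # to know which party contributed which share.
    return reduce(xor, shares)

s = b"attack at dawn"
assert reconstruct(share(s, 5)) == s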
Efficient FASS exists for threshold access structures. For general access structures, the only known construction relies on a monotone DNF representation of $\mathcal{A}$ and has per-party share size $\Omega(\ell n)$ where $\ell$ is the number of minterms of $\mathcal{A}$. This leaves an exponential gap between standard secret sharing and FASS even for simple access structures. Moreover, even in the threshold case, known schemes could not achieve optimal robust reconstruction when mixing shares of multiple secrets.
Motivated by a recent work of Eldridge et al. [USENIX'24], who demonstrated an application of FASS to stalker detection, we initiate a systematic study of FASS, obtaining the following main results.
- Near-Optimal Information-Theoretic FASS. We obtain strong lower bounds, showing that the dependence on the DNF size is generally inherent. In particular, the share size can be exponential in the number of parties or even in the minimum CNF size. This stands in sharp contrast to standard secret sharing, where no super-polynomial lower bounds are known, and where the share size is upper bounded by the CNF size. For DNF with $\ell$ small minterms, we improve the previous upper bound to $\tilde O(\ell)$, matching our lower bound up to a polylogarithmic factor.
- Computational FASS. We show that the above negative results can be circumvented in the computational setting, obtaining FASS schemes with succinct shares. Under the learning with errors (LWE) assumption, we present a general compiler from standard secret sharing to FASS that preserves the share size of the underlying scheme. For natural graph access structures, we directly construct succinct FASS from either one-way functions or bilinear maps.
- Robust FASS. We show that simple modifications of our computational FASS schemes can allow for robust reconstruction of a polynomially unbounded number of secrets from any mixture of their authorized shares.
Tim Seuré
We present SCORE, a modified version of the SlotToCoeff operation tailored to encrypted real vectors in the CKKS scheme, where SCORE stands for “SlotToCoeff Optimization for Real-Vector Encryption”. This approach accelerates CKKS bootstrapping algorithms that employ the SlotToCoeff operation as their final step, provided the inputs are encryptions of real vectors. We demonstrate its utility through proof-of-concept implementations for two such algorithms: the conventional bootstrapping algorithm and the SPRU algorithm. Our results indicate notable performance gains of up to 2× compared to the original formulations of the algorithms.
Alessandro Melloni, Martijn Stam, Øyvind Ytrehus
Anonymous communication networks (ACNs) aim to thwart an adversary, who controls or observes chunks of the communication network, from determining the respective identities of two communicating parties. We focus on low-latency ACNs such as Tor, which target a practical level of anonymity without incurring an unacceptable transmission delay.
While several definitions have been proposed to quantify the level of anonymity provided by high-latency, message-centric ACNs (such as mix-nets and DC-nets), this approach is less relevant to Tor, where user–destination pairs communicate over secure overlay circuits. Moreover, existing evaluation methods of traffic analysis attacks on Tor appear somewhat ad hoc and fragmented. We propose a fair evaluation framework for such attacks against onion routing systems by identifying and discussing the crucial components for evaluation, including how to consider various adversarial goals, how to factor in the adversarial ability to collect information relevant to the attack, and how these components combine into suitable metrics to quantify the adversary's success.
Anja Lehmann, Andrey Sidorenko, Alexandros Zacharakis
Anonymous credentials enable the unlinkable presentation of previously attested information, or even only predicates thereof. They are a versatile tool and currently enjoy attention in various real-world applications, ranging from the European Digital Identity project to Privacy Pass. While each application usually requires its own tailored variant of anonymous credentials, they all share the same common blueprint. So far, however, this has not been leveraged: the proposals currently undergoing standardization either target monolithic variants of core components, such as BBS signatures, or application-specific protocols. This is clearly not optimal, as the same work gets repeated multiple times, while still risking ending up with many slight modifications of the same main idea and protocols. In this work we present our vision of a modular approach to building anonymous credential systems: they are built from a core component – consisting of a commitment, signature and NIZK scheme – that can be extended with additional commitment-based modules in a plug-and-play manner. We sketch modules for pseudonyms, range proofs and device binding. Importantly, apart from the committed input, all modules are entirely independent of each other. We use this modularity to propose a concrete instantiation that uses BBS signatures for the core component and ECDSA signatures for device binding, addressing the need to bind modern credential schemes to legacy signatures in secure hardware elements.
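As a purely structural illustration of the plug-and-play idea (our own mock with hash stand-ins, not the paper's instantiation, which uses Pedersen-style commitments, BBS signatures and NIZKs), each module consumes only the shared commitment and its opening, independently of every other module:

import hashlib, secrets

def commit(attrs: bytes, opening: bytes) -> bytes:
    # stand-in for a hiding commitment (Pedersen-style in the real design)
    return hashlib.sha256(attrs + opening).digest()

class PseudonymModule:
    # example module: derives a per-context pseudonym from the committed input
    def present(self, com: bytes, opening: bytes, context: bytes) -> bytes:
        return hashlib.sha256(b"nym" + context + opening).digest()

class RangeModule:
    # placeholder for a range proof over the committed attributes
    def present(self, com: bytes, opening: bytes, context: bytes) -> bytes:
        return b"<range proof over com>"

def present_credential(attrs: bytes, modules, context: bytes):
    opening = secrets.token_bytes(32)
    com = commit(attrs, opening)
    # the core component would sign com (BBS) and prove knowledge of the
    # signature (NIZK); each module plugs in against the same commitment
    return com, [m.present(com, opening, context) for m in modules]

com, proofs = present_credential(b"age=34", [PseudonymModule(), RangeModule()], b"service-A")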
Vipul Goyal, Abhishek Jain, Aditi Partap
In a secret sharing scheme for a monotone access structure $\mathcal{A}$, one can share a secret among a set of parties such that all subsets of parties authorized by $\mathcal{A}$ can reconstruct the secret while all other subsets learn nothing. However, what if an unauthorized subset of parties collude and offer their shares for sale? Specifically, suppose that the parties pool their shares to create a reconstruction box that reveals the secret upon receiving enough additional shares as input. To deter this behavior, Goyal et al. (CRYPTO'21) introduced the notion of traceable secret sharing (TSS), where it is possible to provably trace reconstruction boxes containing leaked secret shares back to their respective parties. Goyal et al. and subsequent work presented definitions and constructions of TSS for the threshold access structure.
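To make the threat concrete, here is a toy Python sketch (ours, not the paper's construction) of a reconstruction box for 3-out-of-$n$ Shamir sharing: two colluders hard-code their shares into a box that recovers the secret from any one additional share.

import random

p = 2**31 - 1   # toy prime field

def share(secret, n):
    # 3-out-of-n Shamir: random degree-2 polynomial with f(0) = secret
    a1, a2 = random.randrange(p), random.randrange(p)
    f = lambda x: (secret + a1 * x + a2 * x * x) % p
    return [(i, f(i)) for i in range(1, n + 1)]

def interpolate_at_zero(points):
    # Lagrange interpolation of f(0) from three points
    s = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        s = (s + yi * num * pow(den, -1, p)) % p
    return s

shares = share(123456789, 5)

# Parties 1 and 2 pool their shares into a "reconstruction box": anyone holding
# a single further share learns the secret. Traceable secret sharing aims to
# trace such a box back to the colluding parties.
def reconstruction_box(extra_share):
    return interpolate_at_zero([shares[0], shares[1], extra_share])

assert reconstruction_box(shares[4]) == 123456789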
In this work, we revisit the notion of TSS.
- We identify shortcomings in previous formulations of TSS and present new, strengthened definitions that are not achieved by known constructions. We show that it is easy to build reconstruction boxes for which the tracing guarantees of all previous works fail.
- We extend the study of TSS beyond threshold to general access structures. Our new definitions are, in fact, necessary for this setting, as natural adaptations of existing definitions become vacuous for general access structures.
- We present new constructions of TSS that satisfy our definitions for all access structures captured by monotone circuits. One of our constructions relies solely on one-way functions, while the other additionally relies on indistinguishability obfuscation.
25 October 2025
Joe Doyle
Singh et al. recently uploaded a preprint describing a new hash function inspired by the Collatz Conjecture. Cursory tests indicate that the proposed function is completely unsuitable for cryptographic purposes and should not be used.
Amos Beimel, Yuval Ishai, Eyal Kushilevitz, Hanjun Li
We initiate a systematic study of information-theoretic cryptography with weak privacy (WP), only requiring that the adversary cannot rule out any possible secret. More generally, for a parameter $0 < p \le 1$, a scheme is $p$-weakly private ($p$-WP) if the adversary's view leaves every possible secret with probability at least a $p$ fraction of its a-priori probability; standard privacy corresponds to $p = 1$, and WP to any $p > 0$. We obtain the following main results.
Positive results. We present efficient WP constructions for generalized secret sharing, decomposable randomized encodings, and the related notions of garbling schemes and PSM protocols, as well as interactive secure multiparty computation protocols in the plain model and in the OT-hybrid model.
For secret sharing, we settle a question of Beimel and Franklin (TCC 2007), showing that every $n$-party access structure admits a WP scheme with per-party share size $O(n)$. When all unauthorized sets have constant size, we get a $p$-WP scheme with constant share size and $p \ge 1/\mathrm{poly}(n)$.
Negative result. For decomposable randomized encodings, we show that a previous lower bound technique of Ball et al. (ITCS 2020) applies also to the WP notion. Together with our upper bound, this shows that the optimal WP garbling size of the worst-case $f:\{0,1\}^n\to\{0,1\}$ is $\tilde{\Theta}(n^2)$.
Application. While WP may seem like an unrealistically weak security notion, we demonstrate its usefulness towards achieving traditional security guarantees. Concretely, under the standard LPN assumption, we show that any $p$-WP secret-sharing scheme with inverse-polynomial $p$ implies a computationally secure secret-sharing scheme for a related access structure. Together with our positive results for WP secret sharing, this implies a super-polynomial improvement of the share size for a natural class of access structures.
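For concreteness, the $p$-WP requirement described above can be stated as follows (our paraphrase of the definition):

$$\Pr[S = s \mid V = v] \;\ge\; p \cdot \Pr[S = s] \qquad \text{for every secret } s \text{ and every view } v \text{ of an unauthorized set,}$$

so $p = 1$ recovers standard privacy, while any $p > 0$ guarantees that no secret can ever be ruled out.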
Chandan Dey, Soumya Sahoo, Santanu Sarkar
The ARADI block cipher was developed by the U.S. National Security Agency (NSA) as part of its efforts to design secure and efficient cryptographic algorithms.
In this paper, we present the first detailed analysis of the lightweight block cipher ARADI under differential fault attacks. Although ARADI is structured around word-wise operations, its security fundamentally depends on bit-level properties, making it vulnerable to carefully crafted fault models. Bit-based fault models require a large number of faults, while word-based models reduce the number of faults but make precise identification of fault locations difficult. To overcome these limitations, we introduce a new nibble-based approach using permissible nibble differences (PNDs). This model provides an effective tradeoff between the number of required faults and the complexity of identifying fault positions. In our attack, the adversary randomly injects nibble faults before the last two rounds and constructs differential equations involving unknown key variables without knowing the exact fault values. Our results demonstrate that, with at most 108 random fault injections, the entire master key of ARADI can be recovered within a practical time complexity. This work reveals previously unexplored vulnerabilities of ARADI under fault attacks and highlights the need to evaluate lightweight cipher designs using nibble-level fault models, along with the traditional bit- and word-level analyses.
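The filtering step behind such attacks is easy to illustrate. Below is a generic last-round DFA sketch in Python on a toy 4-bit S-box cipher (our illustration, not ARADI itself): a fault flips the S-box input by a difference from a permissible set, and each correct/faulty ciphertext pair eliminates key guesses inconsistent with that set.

import random

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD, 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
INV = [SBOX.index(i) for i in range(16)]
PND = {1, 2, 4, 8}              # "permissible" input differences (single-bit here)

def last_round(x, k):           # toy last round: S-box, then key whitening
    return SBOX[x] ^ k

key = random.randrange(16)
candidates = set(range(16))
for _ in range(12):             # each correct/faulty pair filters key guesses
    x = random.randrange(16)                # unknown last-round input
    delta = random.choice(sorted(PND))      # injected fault
    c, c_f = last_round(x, key), last_round(x ^ delta, key)
    # keep only keys whose implied S-box input difference is permissible
    candidates = {k for k in candidates
                  if INV[c ^ k] ^ INV[c_f ^ k] in PND}
print("survivors:", candidates, "true key:", key)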
Surajit Mandal, Sandip Kumar Mondal, Raghvendra Rohit, Santanu Sarkar
This study investigates and improves the differential cryptanalysis of the ARADI block cipher, a low-latency cryptographic system developed by the United States National Security Agency for memory encryption. The preliminary security assessment of ARADI revealed deficiencies that require additional examination. This study revisits and corrects the key recovery attack introduced by Bellini et al. (Indocrypt 2024). By correcting these mistakes, the research presents a precise 11-round key recovery method with revised attack complexities. Additionally, the analysis is expanded to introduce a key recovery attack on 12 rounds of ARADI, reducing the security margin of the cipher to only 4 rounds.
Ignacio Amores-Sesar, Michelle Yeo
We propose a novel timestamping mechanism for consensus protocols that reliably assigns submission times to honest transactions while preventing adversarial transactions from forging their timestamps. Our mechanism remains secure even under asynchronous networks and in the presence of corrupted parties. We demonstrate how it can be integrated into the three main families of consensus protocols and show its compatibility with layer-2 solutions. This construction enables robust layer-2 implementations that do not rely on timing assumptions, remain resilient under network saturation, and prevent collusion between users and validators to censor honest users.
Peter Gaži, Saad Quader, Alexander Russell
Random beacons play a critical role in blockchain protocols by providing publicly verifiable, unpredictable randomness essential for secure assignment of protocol roles such as block producers and committee membership. In the interest of efficiency, many deployed blockchains adopt beacon algorithms that suffer from grinding: an adversarial attack in which a party exploits freedom given by the protocol to bias the outcome of the random beacon by resampling it several times and picking the most desirable outcome. To compound the problem, beacons often operate in an iterative manner, where the beacon output produced during one protocol epoch serves as the random seed for the beacon’s invocation in the next epoch. This amplifies the security threat, as such attacks may then aggregate their power over many epochs.
In this article, we formulate a generic framework for information-theoretic analysis of grinding in iterated randomness beacons. We define the natural grinding capacity of a beacon, intuitively corresponding to the amount of grinding it allows with a uniformly random seed. We then prove that sufficiently strong tail bounds on this quantity can be transformed into a guarantee on smooth min-entropy of the iterated beacon’s output, even conditioned on all past outputs and irrespective of the inner workings of the beacon. Such min-entropy guarantees can immediately be translated into corresponding statements about various applications of the beacon to committee selection, incentives, or underlying protocol security.
Our main technical result concerns conventional longest-chain protocols, where we establish that the combinatorial structure of the forest of longest chains can be leveraged to control grinding. Instantiating the generic framework with these grinding upper bounds, we establish that the randomness beacon of the Ouroboros Praos protocol is secure against adversaries controlling up to about 12% of stake—even without any assumptions bounding the adversarial computational power invested into grinding. This is a qualitatively new guarantee for the protocol.
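The core effect is easy to simulate. In the Python sketch below (ours, for intuition), an adversary with grinding capacity $g$ resamples a uniform beacon output $g$ times and keeps the most favorable one; the probability of landing in an adversarially chosen "bad" set of density $\varepsilon$ grows from $\varepsilon$ to roughly $1 - (1-\varepsilon)^g$, a loss of up to $\log_2 g$ bits of min-entropy.

import random

def grind(g, eps, trials=100_000):
    # adversary draws g candidate beacon outputs and wins if any lands in a
    # "bad" set of density eps (e.g. seeds electing an adversarial leader)
    wins = sum(any(random.random() < eps for _ in range(g)) for _ in range(trials))
    return wins / trials

eps = 0.01
for g in (1, 8, 64):
    print(g, round(grind(g, eps), 4), round(1 - (1 - eps) ** g, 4))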
Rutchathon Chairattana-Apirom, Dennis Hofheinz, Stefano Tessaro
This paper studies the concrete security of BBS signatures (Boneh, Boyen, Shacham, CRYPTO '04; Camenisch and Lysyanskaya, CRYPTO '04), a popular algebraic construction of digital signatures which underlies practical privacy-preserving authentication systems and is undergoing standardization by the W3C and IRTF.
Sch\"age (Journal of Cryptology '15) gave a tight standard-model security proof under the $q$-SDH assumption for a less efficient variant of the scheme, called BBS+--here, $q$ is the number of issued signatures. In contrast, the security proof for BBS (Tessaro and Zhu, EUROCRYPT '23), also under the $q$-SDH assumption, is \emph{not} tight. Nonetheless, this recent proof shifted both standardization and industry adoption towards the more efficient BBS, instead of BBS+, and for this reason, it is important to understand whether this tightness gap is inherent. Recent cryptanalysis by Chairattana-Apirom and Tessaro (ASIACRYPT '25) also shows that a tight reduction to $q$-SDH is the best we can hope for.
This paper closes this gap in two different ways. On the positive end, we show a novel tight reduction for BBS in the case where each message is signed at most once; this case covers in particular the common practical use case which derandomizes signing. On the negative end, we use a meta-reduction argument to prove that if we allow generating multiple signatures for the same message, then no algebraic reduction to $q$-SDH (and its variants) can be tight.
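For reference, in the standard presentation of BBS over a pairing-friendly group (as we recall it; see the paper for exact conventions), a signature on messages $m_1,\dots,m_\ell$ under secret key $x$ is a pair $(A, e)$ with

$$e \leftarrow \mathbb{Z}_p, \qquad A = \Big(g_1 \prod_{i=1}^{\ell} h_i^{m_i}\Big)^{1/(x+e)},$$

verified against the public key $X = g_2^x$ by the pairing equation $e(A,\, X \cdot g_2^{e}) = e\big(g_1 \prod_i h_i^{m_i},\, g_2\big)$. Derandomized signing derives $e$ deterministically from the message, so each message receives a single signature; this is exactly the one-signature-per-message setting in which the new tight reduction applies.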
Sch\"age (Journal of Cryptology '15) gave a tight standard-model security proof under the $q$-SDH assumption for a less efficient variant of the scheme, called BBS+--here, $q$ is the number of issued signatures. In contrast, the security proof for BBS (Tessaro and Zhu, EUROCRYPT '23), also under the $q$-SDH assumption, is \emph{not} tight. Nonetheless, this recent proof shifted both standardization and industry adoption towards the more efficient BBS, instead of BBS+, and for this reason, it is important to understand whether this tightness gap is inherent. Recent cryptanalysis by Chairattana-Apirom and Tessaro (ASIACRYPT '25) also shows that a tight reduction to $q$-SDH is the best we can hope for.
This paper closes this gap in two different ways. On the positive end, we show a novel tight reduction for BBS in the case where each message is signed at most once--this case covers in particular the common practical use case which derandomizes signing. On the negative end, we use a meta-reduction argument to prove that if we allow generating multiple signatures for the same message, then {\em no} algebraic reduction to $q$-SDH (and its variants) can be tight.
Palak, Thomas Haines
In this paper, we present formally verified proofs of the popular KZG polynomial commitment schemes (PCSs), including security proofs for the properties of correctness, polynomial binding, evaluation binding and hiding. Polynomial commitment schemes have various applications in cryptography and computer science, including verifiable computation, blockchain and cryptocurrencies, and secure multi-party computation, as well as in the construction of ZK-SNARKs. To validate security, we utilise EasyCrypt, an interactive theorem prover that allows for formal verification of cryptographic primitives and protocols. This approach enforces correct proofs that cover all required cases and formalises the underlying assumptions, reducing the risk of overlooked vulnerabilities. This formalisation validates the current understanding of the KZG PCSs as secure, while clarifying various issues in the original claims.
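As a quick reminder of the object being formalized (the standard discrete-log-based KZG scheme, in our notation): given a structured reference string $(g, g^{\tau}, \dots, g^{\tau^d})$ for a secret $\tau$, a polynomial $\phi$ of degree at most $d$ is committed to and opened at a point $z$ as

$$C = g^{\phi(\tau)}, \qquad q(X) = \frac{\phi(X) - \phi(z)}{X - z}, \qquad \pi = g^{q(\tau)},$$

and the verifier accepts the claimed value $v = \phi(z)$ iff $e(C \cdot g^{-v},\, g) = e(\pi,\, g^{\tau} \cdot g^{-z})$. Evaluation binding says no adversary can open $C$ to two different values at the same point; hiding requires a randomized (Pedersen-style) variant.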
24 October 2025
Antoine Sidem, Qingju Wang
The increasing use of multi-party computation (MPC) has spurred the design of symmetric key primitives specifically suited for MPC environments. Recently, weak pseudorandom functions (wPRFs) based on the alternating moduli paradigm have been proposed as a promising class of MPC-friendly primitives. The wPRF proposed at CRYPTO 2024, in its One-to-One parameter set, has been shown to be vulnerable to a key recovery attack dubbed Zeroed-Out, exploiting collisions in the queries.
In this paper, we identify that the aforementioned wPRFs conform to a specific structure, called a pointwise-keyed function, and we present a different, general key recovery attack. Applied to the wPRFs in the One-to-One parameter set attacked by Zeroed-Out, this method improves upon the complexity, achieving an attack below the birthday bound. For the first time, it succeeds in attacking one of the two Many-to-One parameter sets, and it stays effective against one of the proposed countermeasures. We also consider its applicability to the alternative wPRF of similar structure proposed by Boneh et al. at TCC 2018.
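For readers unfamiliar with the paradigm, here is a schematic Python model of a mod-2/mod-3 alternating-moduli wPRF in the style of Boneh et al. (TCC 2018); parameters and the exact output map are illustrative, not the CRYPTO 2024 specification:

import secrets

n, m = 256, 256
# secret key: an m x n binary matrix
K = [[secrets.randbelow(2) for _ in range(n)] for _ in range(m)]

def wprf(x):
    # layer 1: linear map over Z_2 (bitwise inner products with key rows)
    w = [sum(K[i][j] & x[j] for j in range(n)) & 1 for i in range(m)]
    # layer 2: the same bits re-read over Z_3, the "alternating moduli" step
    # that breaks the linear structure of either modulus alone
    return sum(w) % 3

x = [secrets.randbelow(2) for _ in range(n)]
print(wprf(x))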
Yi Chen, Xiaoyang Dong, Ruijie Ma, Yantian Shen, Anyu Wang, Hongbo Yu, Xiaoyun Wang
The machine learning problem of model extraction was first introduced in 1991 and gained prominence as a cryptanalytic challenge starting with Crypto 2020. For over three decades, research in this field has primarily focused on ReLU-based neural networks. In this work, we take the first step towards the cryptanalytic extraction of PReLU neural networks, which employ more complex nonlinear activation functions than their ReLU counterparts.
We propose a raw-output-based parameter recovery attack for PReLU networks and extend it to more restrictive scenarios where only the top-$m$ probability scores are accessible. Our attacks are rigorously evaluated through end-to-end experiments on diverse PReLU neural networks, including models trained on the MNIST dataset. To the best of our knowledge, this is the first practical demonstration of PReLU neural network extraction across three distinct attack scenarios.
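For context (our illustration), PReLU generalizes ReLU with a learned negative-side slope $a$, so an extraction attack must recover this extra per-channel parameter in addition to the weights and biases:

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def prelu(x, a):
    # PReLU: identity on the positive side, learned slope a on the negative
    # side; a = 0 recovers ReLU, a = 1 makes the layer fully linear
    return np.where(x > 0, x, a * x)

x = np.linspace(-2, 2, 5)
print(relu(x))          # [0. 0. 0. 1. 2.]
print(prelu(x, 0.25))   # [-0.5 -0.25 0. 1. 2.]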
22 October 2025
Rachel Thomas, Oliwia Kempinski, Hari Kailad, Emma Margaret Shroyer, Ian Miers, Gabriel Kaptchuk
We present cryptographic personas, an approach for facilitating access to pseudonymous speech within communities without enabling abuse. In systems equipped with cryptographic personas, users are able to authenticate to the service provider under new, unlinkable personas at will and post messages under those personas. When users violate community norms, their ability to post anonymously can be revoked. We develop two significant improvements to existing work on anonymous banning systems that make it possible to integrate cryptographic personas into real-time applications like group messaging: we show how to push expensive proof generation into an offline phase and find a way to optimize server-side overhead using recent proof folding techniques. We implement cryptographic personas, integrating them into a variety of settings, and show that they are concretely efficient enough for deployment.
Eshika Saxena, Alberto Alfarano, François Charton, Emily Wenger, Kristin Lauter
AI-powered attacks on Learning with Errors (LWE), an important hard math problem in post-quantum cryptography, rival or outperform "classical" attacks on LWE under certain parameter settings. Despite the promise of this approach, a dearth of accessible data limits AI practitioners' ability to study and improve these attacks. Creating LWE data for AI model training is time- and compute-intensive and requires significant domain expertise. To fill this gap and accelerate AI research on LWE attacks, we propose the TAPAS datasets, a Toolkit for Analysis of Post-quantum cryptography using AI Systems. These datasets cover several LWE settings and can be used off-the-shelf by AI practitioners to prototype new approaches to cracking LWE. This work documents TAPAS dataset creation, establishes attack performance baselines, and lays out directions for future work.
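For orientation (a generic sketch, not the TAPAS format), the raw material of such datasets is LWE samples $(\mathbf{A}, \mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e} \bmod q)$, which are cheap to generate individually; the compute-intensive part in practice is the reduction and preprocessing of the data that attacks actually train on:

import numpy as np

def lwe_samples(n=64, m=128, q=3329, sigma=3.2, rng=None):
    # One LWE instance: uniform A, small secret s, Gaussian-ish error e.
    rng = rng or np.random.default_rng()
    A = rng.integers(0, q, size=(m, n))
    s = rng.integers(-1, 2, size=n)                 # ternary secret {-1,0,1}
    e = np.rint(rng.normal(0, sigma, size=m)).astype(int)
    b = (A @ s + e) % q
    return A, b, s

A, b, s = lwe_samples()
print(A.shape, b.shape)   # (128, 64) (128,)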
Bing-Jyue Chen, Lilia Tang, David Heath, Daniel Kang
Permutation and multiset checks underpin many SNARKs, yet existing techniques either incur superlinear prover time or rely on auxiliary commitments with soundness error that grows linearly in the input size. We present new arguments with linear-time provers and logarithmic soundness, without auxiliary commitments.
Prior work achieving logarithmic soundness error arithmetizes the permutation as a product of several multilinear polynomials, a formulation chosen for compatibility with the classic Sumcheck PIOP. A simpler alternative treats permutations as multilinear extensions of their permutation matrices. While this formulation was previously believed to require quadratic prover time, we show that this overhead can be eliminated by taking a linear-algebraic perspective. This viewpoint has a key advantage: partially evaluating the multilinear polynomial of the permutation requires no additional field operations and amounts to applying the inverse permutation to the verifier's challenge vector. This makes the step essentially free in terms of algebraic cost, unlike in prior approaches. Compared to concurrent work BiPerm (Bünz et al., ePrint Archive, 2025), our scheme requires no permutation preprocessing and supports prover-supplied permutations.
We show a sparsity-aware PCS like Dory (Lee, TCC, 2021) can compile our PIOP to a SNARK such that the resulting SNARK prover still runs in time $O(n)$. Our construction is the first logarithmically-sound SNARK with an $O(n)$-time prover for both permutation and multiset checks. We further prove a matching optimal prover lower bound, and we identify specific permutations that can be evaluated by the verifier in $O(\mathrm{polylog}(n))$-time. The ability to evaluate these permutations in $O(\mathrm{polylog}(n))$ time allows the verifier to avoid relying on prover-supplied commitments or evaluation proofs. As a result, we obtain the first logarithmically sound, field-agnostic SNARK with an $O(n)$-time prover in this setting.
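The "free partial evaluation" observation can be checked directly: over the Boolean hypercube, the multilinear extension of a permutation matrix, partially evaluated at the verifier's challenge $r$, is just the $\mathrm{eq}(r,\cdot)$ tensor re-indexed through the permutation. A small Python check (our illustration, toy sizes):

import random

p = 2**61 - 1   # prime field for the check

def eq_table(r):
    # eq(r, x) for all x in {0,1}^k via the usual tensor-product build
    t = [1]
    for ri in r:
        t = [v * (1 - ri) % p for v in t] + [v * ri % p for v in t]
    return t

k = 3
n = 1 << k
sigma = list(range(n)); random.shuffle(sigma)      # toy permutation x -> sigma(x)
r = [random.randrange(p) for _ in range(k)]
eqr = eq_table(r)

# Naive partial evaluation of the matrix MLE M(y, x) = [y == sigma(x)] at y = r
M = [[1 if y == sigma[x] else 0 for x in range(n)] for y in range(n)]
naive = [sum(eqr[y] * M[y][x] for y in range(n)) % p for x in range(n)]
# Free version: no field multiplications, just re-index the challenge vector
free = [eqr[sigma[x]] for x in range(n)]
assert naive == free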
Zhuo Huang, Weijia Wang, Xiaogang Zhou, Yu Yu
HQC (Hamming Quasi-Cyclic) was selected as the fifth algorithm in the NIST suite of post-quantum cryptographic (PQC) standards. As the only code-based algorithm currently standardized by NIST, HQC offers a good balance between security assurance, performance, and implementation simplicity. Most existing power analyses against HQC are of the SPA style: they can recover secrets with a small number of traces, but can only tolerate limited noise. In this paper, we develop a chosen-ciphertext DPA-style attack methodology against HQC. We formalize a dedicated chosen-ciphertext setting in which the adversary selects $(\mathbf{u},\mathbf{v})$ to target the intermediate value $\mathbf{v}\oplus(\mathbf{u}\mathbf{y})$ over $\mathbb{F}_2[x]/(x^n-1)$. We further optimize the attack by reducing its computational complexity and generalizing it to target masked HQC implementations. The proposed approach is validated through both simulation and practical experiments. In noiseless simulations, full-key recovery is achieved with just $10$ traces, and the required number of traces increases linearly with 1/SNR. In practical evaluations on an STM32F4 microprocessor, the secret key can be recovered with $45$ traces without profiling and $10$ traces with profiling. When first-order masking is applied, key recovery on the same hardware target remains feasible by exploiting second-order features, requiring approximately $7{,}500$ traces without profiling. Our results establish a direct and analyzable connection between leakage on $\mathbf{v}\oplus\mathbf{u}\mathbf{y}$ and end-to-end key recovery, emphasizing the necessity of higher-order masking countermeasures for HQC implementations.
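For intuition about the targeted intermediate (a minimal sketch with toy sizes, not HQC parameters), $\mathbf{u}\mathbf{y}$ is a product in $\mathbb{F}_2[x]/(x^n - 1)$, i.e. a cyclic convolution of bit vectors, after which $\mathbf{v}$ is XORed in:

def cyclic_mul_gf2(u, y):
    # product in F_2[x]/(x^n - 1): cyclic convolution with XOR accumulation
    n = len(u)
    out = [0] * n
    for i, ui in enumerate(u):
        if ui:
            for j, yj in enumerate(y):
                out[(i + j) % n] ^= yj
    return out

def target_intermediate(v, u, y):
    # v XOR (u * y): the value whose leakage the chosen-ciphertext DPA exploits
    return [vi ^ wi for vi, wi in zip(v, cyclic_mul_gf2(u, y))]

n = 8
u = [1, 0, 0, 1, 0, 0, 0, 0]
y = [0, 1, 1, 0, 0, 0, 0, 0]   # stand-in for the secret key part y
v = [1] * n
print(target_intermediate(v, u, y))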