International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

07 August 2025

Jintong Yu, Yuxuan Wang, Shipei Qu, Yubo Zhao, Yipeng Shi, Pei Cao, Xiangjun Lu, Chi Zhang, Dawu Gu, Cheng Hong
ePrint Report
With the advancement of deep learning techniques, Deep Learning-based Non-profiled Side-Channel Analysis (DL-NSCA) can automatically learn and combine features, making it a promising method that can skip the manual and precise selection of Points of Interest (PoIs). Existing DL-NSCA methods assume that the attacker can identify a short leakage interval (usually fewer than 5,000 points) containing PoIs within raw traces (more than 100,000 points) and then feed this interval into the neural network to recover the key. In practice, however, the attacker often faces a black-box scenario with an unknown underlying implementation, which makes it challenging to locate the short interval in raw traces, especially when masking countermeasures are present. To address this issue, we propose a lightweight end-to-end DL-NSCA model called convWIN-MCR, which consists of a performance-optimizing component, convWIN, and an accelerator component, MCR. It can efficiently process raw traces without manual identification of a short leakage interval. On the public dataset ASCADv1, the state-of-the-art model Multi-Output Regression (MOR) requires 28,000 traces and 24 minutes to recover the key from a leakage interval of 1,400 feature points, whereas our framework requires only 6,000 traces and 13 minutes to directly analyze raw traces with 250,000 feature points. To further validate the practical applicability of our framework, we successfully crack a commercial USIM card by analyzing its raw traces and recovering its 128-bit AES key.
Simone Colombo, Damian Vizár
ePrint Report
A growing body of work addresses the security of cryptographic systems in the presence of mass surveillance, a threat made concrete by Snowden’s revelations and the widespread use of spyware against journalists and activists. In this paper, we investigate the security of symmetric encryption faced with simultaneous algorithm substitution attacks (ASAs) and key exfiltration (KE). The security of symmetric encryption in the presence of ASAs or KE alone has been established, but no result deals with their coordinated deployment. Yet this is a necessary step if we are to achieve actual security against mass surveillance. We formalize this setting and prove, by describing a realistic attack, that no scheme on its own stands a chance against coordinated ASAs and KE. We then describe a new class of schemes, which make use of externally supplied randomness. We formalize their security and give a construction that provably resists simultaneous ASAs and KE when paired with a verifiable source of randomness, with security bounds in the concrete-security spirit.
Jiping Yu, Kun Chen, Xiaoyu Fan, Yunyi Chen, Xiaowei Zhu, Wenguang Chen
ePrint Report
Encrypted matrix-vector multiplication is a fundamental component of a variety of applications that involve data privacy concerns. Current algorithms utilizing fully homomorphic encryption (FHE) generally use batching to enhance computational efficiency while neglecting the sparsity of the matrices, a characteristic that exists naturally in many practical situations. Alternatively, porting plaintext algorithms that address sparsity may fail to utilize batching and introduce additional privacy concerns.

We propose Lodia, an efficient outsourced sparse matrix-vector multiplication (SpMV) algorithm for batched FHE schemes that does not sacrifice privacy. It requires only $\Theta((n+m)\log(n+m)/s)$ FHE operations, where $n$ is the number of rows/columns, $m$ is the number of non-zero elements of the matrix, and $s$ is the batch size of the FHE scheme. This is optimal for $m=\Omega(n)$ and $m=O(n^\rho)$ for some $\rho<2$ (i.e., $an \le m \le bn^\rho$ asymptotically), which covers most practical cases. To our knowledge, no published method simultaneously requires fewer than $\Theta(n^2/s)$ FHE operations, works for arbitrary sparse matrices, and avoids additional privacy concerns.

Lodia relies on a novel low-diagonal decomposition, which decomposes a sparse matrix into a series of special matrices called low-diagonal matrices. Using a conventional encoding of the matrix in diagonal order, each low-diagonal matrix can be efficiently multiplied by a vector. This yields an efficient SpMV method suitable for any sparse matrix. Experiments show that Lodia achieves a speedup of up to $96\times$ over baselines that ignore matrix sparsity, and up to $3.6\times$ over implementations with weaker security guarantees. This is the first SpMV solution on encrypted data that can process a matrix with over 8 million rows/columns and 125 million non-zero elements.
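As context for the diagonal-order encoding mentioned above, here is a minimal plaintext sketch (ours, not the paper's code) of how a diagonally encoded matrix is multiplied by a vector; in a batched FHE scheme the list rotation would be a ciphertext slot rotation and the element-wise arithmetic would be performed homomorphically.

```python
# Plaintext sketch of diagonal-order matrix-vector multiplication, the
# conventional encoding referred to in the abstract. Illustrative only:
# `rotate` stands in for an FHE slot rotation, and the arithmetic would be
# performed homomorphically on batched ciphertexts.

def rotate(vec, k):
    """Cyclic left rotation by k positions (stand-in for an FHE rotation)."""
    k %= len(vec)
    return vec[k:] + vec[:k]

def diag_encode(matrix):
    """Encode an n x n matrix as its n generalized diagonals:
    diagonal d holds matrix[i][(i + d) % n] for i = 0..n-1."""
    n = len(matrix)
    return [[matrix[i][(i + d) % n] for i in range(n)] for d in range(n)]

def diag_matvec(diagonals, vec):
    """Compute M @ vec from the diagonal encoding as sum_d diag_d * rot(vec, d)."""
    n = len(vec)
    result = [0] * n
    for d, diag in enumerate(diagonals):
        rotated = rotate(vec, d)
        result = [acc + a * b for acc, a, b in zip(result, diag, rotated)]
    return result

M = [[1, 2, 0],
     [0, 3, 0],
     [4, 0, 5]]
v = [1, 1, 1]
assert diag_matvec(diag_encode(M), v) == [3, 3, 9]
```

Per the abstract, Lodia's low-diagonal matrices are exactly the pieces that this kind of encoding handles efficiently, which is where the savings for sparse matrices come from.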
Luke Beckwith, Andre Esser, Edoardo Persichetti, Paolo Santini, Floyd Zweydinger
ePrint Report
LESS is a signature scheme based on the code equivalence problem that has advanced to the second round of the NIST PQC standardization process. While promising, the scheme suffers from relatively large signatures and moderate to slow signing and verification times. Chou, Santini, and Persichetti recently introduced a variant of LESS relying on canonical forms to significantly reduce signature sizes. However, the overall performance impact of this approach remained largely unclear. In this work, we provide the first implementation of the new LESS variant and show that, in its original form, it performs poorly due to the overhead of computing canonical forms in a naïve way. We then introduce a series of algorithmic and implementation-level optimizations that reduce this overhead to about 10%, showing that the signature size reduction comes at minor cost. In addition, we present further improvements to the signature scheme as a whole, as well as a re-parameterization. The resulting scheme achieves speedups of 2.5× to 10× over the Round 1 NIST submission, while maintaining the reduced signature sizes.
Haotian Yin, Jie Zhang, Wanxin Li, Yuji Dong, Eng Gee Lim, Dominik Wojtczak
ePrint Report
Proxy re-encryption (PRE) is a cryptographic scheme enabling a delegator (user $i$) to delegate its decryption right to a valid delegatee (user $j$) through a proxy, who cannot extract any information about the message during the procedure. An important security notion is security against collusion between the proxy and the delegatee. In this case, the adversary holds the secret key of the delegatee, $\mathsf{sk}_j$, and the re-encryption key, $\mathsf{rk}_{i\to j}$. Master secret security was first formalised by Ateniese et al. (NDSS'05) to capture the secrecy of $i$'s secret key under such collusion. This notion was further formalised by Zhou et al. (ASIACRYPT'23) as the indistinguishability of re-encrypted ciphertexts against chosen-message attacks, called collusion safety, which implies master secret security. In this paper, we find that a previously proposed PRE scheme is not master secret secure as claimed, and that many other schemes are not master secret secure either. We then propose a generic construction that achieves collusion safety at the cost of doubling the key size of an IND-CPA-secure PRE, offering much better generality and efficiency than the existing secret-sharing-based technique.
Minka Mi Nguidjoi Thierry Emmanuel, Mani Onana Flavien Serge, Djotio Ndié Thomas, Atsa Etoundi Roger
ePrint Report
This article presents the architectural design of Zero Knowledge Non-Repudiation (ZK-NR), a layered cryptographic protocol enabling post-quantum secure, legally interpretable, and verifiably non-repudiable attestations. Built upon STARK-based zero-knowledge proofs, hybrid post-quantum signatures, and entropy-accumulating ledger anchoring, ZK-NR satisfies the structural properties of both the Q2CSI framework and the NIZK-E model. The protocol achieves semantic interpretability by structurally separating contextual proofs from bounded explanations, while maintaining cryptographic soundness under the Universal Composability framework. Formal UC proofs are deferred to Article A.2 in this series. This article constitutes the first entry in the ZK-NR series, focused specifically on the protocol's architectural and functional design. Subsequent articles will cover its mathematical foundations, implementation strategies, and operational validation using formal verification environments.

04 August 2025

Divesh Aggarwal, Pranjal Dutta, Saswata Mukherjee, Satyajeet Nagargoje, Maciej Obremski
ePrint Report
Randomness is a fundamental requirement in cryptographic systems, enabling secure encryption, commitments, and zero-knowledge proofs. However, real-world randomness sources often suffer from weaknesses that adversaries can exploit, leading to significant security vulnerabilities. While deterministic randomness extraction from a single min-entropy source is impossible, two-source extractors provide a robust solution by generating nearly uniform randomness from two independent weak sources. Moreover, cryptographic systems must also be resilient to leakage and tampering attacks, necessitating the development of non-malleable two-source extractors.

In this work, we construct a two-source non-malleable extractor in the Common Reference String (CRS) model, where a random low-degree polynomial is sampled once and made accessible to the independent random sources, the distinguisher, and the tamperer. Our extractor requires only linear min-entropy in both sources and does not rely on strong computational assumptions, in contrast to prior constructions requiring assumptions such as sub-exponential hardness of the Decisional Diffie-Hellman (DDH) problem. Notably, our construction builds upon the recent breakthrough proof of the polynomial Freiman-Ruzsa conjecture. A connection between the Freiman-Ruzsa conjecture and two-source extractors was considered in prior work [ZBS11, AGMR24], but those constructions did not achieve non-malleability.

Our results advance the state of non-malleable cryptographic primitives, with applications in secure storage, leakage-resilient cryptography, and privacy amplification. By eliminating the need for strong computational hardness assumptions, our techniques provide a more foundational and widely applicable method for randomness extraction.

We also show that the requirements on the CRS for our application are so mild that the CRS can be sampled via two-party computation even when one of the parties is malicious (a setting in which establishing unbiased coins is impossible).
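For reference, the standard information-theoretic definition behind the terminology above is the following; this is textbook material rather than anything specific to the paper, and the non-malleable variant additionally requires the output to stay close to uniform even given the extractor's output on tampered sources.

```latex
% Standard (k_1, k_2, eps) two-source extractor definition, for reference.
\mathsf{Ext}\colon \{0,1\}^{n}\times\{0,1\}^{n}\to\{0,1\}^{m}
\text{ is a } (k_1,k_2,\varepsilon)\text{-two-source extractor if for all independent }
X,Y \text{ with } H_\infty(X)\ge k_1,\ H_\infty(Y)\ge k_2:
\quad \Delta\bigl(\mathsf{Ext}(X,Y),\,U_m\bigr)\le\varepsilon .
```

Here $U_m$ is uniform on $\{0,1\}^m$, $\Delta$ denotes statistical distance, and "linear min-entropy" in the abstract corresponds to $k_1,k_2=\Omega(n)$.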
Sebastian Angel, Sofía Celi, Elizabeth Margolin, Pratyush Mishra, Martin Sander, Jess Woods
ePrint Report
We introduce Coral, a system for proving in zero-knowledge that a committed byte stream corresponds to a structured object in accordance with a context-free grammar. Once a prover establishes the validity of the parsed object with Coral, they can selectively prove facts about the object, such as fields in Web API responses or in JSON Web Tokens, to third parties or blockchains. Coral reduces the problem of correct parsing to a few simple checks over a left-child right-sibling tree and introduces a novel segmented memory abstraction that unifies and extends prior constructions for RAM in zkSNARKs. Our implementation of Coral runs on a standard laptop and non-interactively proves the parsing of real Web responses (JSON) and files (TOML and C) in seconds. The resulting proofs are small and cheap to verify.
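As an illustration of the left-child right-sibling (LCRS) representation that Coral's parsing checks operate over, here is a small Python sketch; the classes and the toy JSON fragment are ours and are not taken from Coral's implementation.

```python
# Illustrative left-child right-sibling (LCRS) encoding of a parse tree.
# Each node keeps only two links: its first child and its next sibling, so an
# arbitrary-arity parse tree becomes a binary-shaped structure that is easy to
# lay out as a flat table and check row by row.

from dataclasses import dataclass
from typing import Optional

@dataclass
class LCRSNode:
    symbol: str                                   # grammar symbol or token
    left_child: Optional["LCRSNode"] = None       # first child in the parse tree
    right_sibling: Optional["LCRSNode"] = None    # next sibling under the same parent

def make_node(symbol: str, children: list["LCRSNode"]) -> "LCRSNode":
    """Convert an ordinary (symbol, children) parse-tree node into LCRS form."""
    node = LCRSNode(symbol)
    prev = None
    for child in children:
        if prev is None:
            node.left_child = child
        else:
            prev.right_sibling = child
        prev = child
    return node

# Parse tree for the toy JSON fragment {"a": 1}:
pair = make_node("pair", [LCRSNode('"a"'), LCRSNode(":"), LCRSNode("1")])
obj = make_node("object", [LCRSNode("{"), pair, LCRSNode("}")])
```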
Jan Bormet, Arka Rai Choudhuri, Sebastian Faust, Sanjam Garg, Hussien Othman, Guru-Vamsi Policharla, Ziyan Qu, Mingyuan Wang
ePrint Report
Threshold encrypted mempools protect the privacy of transactions up until the point their inclusion on chain is confirmed. They are a promising approach to protection against front-running attacks on decentralized blockchains.

Recent works have introduced two key properties that an encryption scheme must satisfy in order to scale to large-scale decentralized blockchains such as Ethereum. Silent Setup [Garg-Kolonelos-Policharla-Wang, CRYPTO'24] demands that a threshold encryption scheme require no interaction during the setup phase and rely only on the existence of a Public Key Infrastructure. Batched Decryption [Choudhuri-Garg-Piet-Policharla, USENIX'24] demands that an entire block containing $B$ encrypted transactions can be decrypted using communication that is independent of (or sublinear in) $B$, without compromising the privacy of transactions that have not yet been confirmed.

While existing constructions achieve either Silent Setup or Batched Decryption independently, a truly decentralized and scalable encrypted mempool requires both properties to be satisfied simultaneously. In this work, we present the first "Batched Threshold Encryption scheme with Silent Setup" built using bilinear pairings. We provide formal definitions for the primitive, and prove security in the Generic Group Model. We provide several optimizations and implement our scheme to evaluate its performance. Our experiments demonstrate its efficiency for deployment in blockchain systems.
Nick Aquina, Simon Rommel, Idelfonso Tafur Monroy
ePrint Report
Cascader has been introduced as a new key exchange protocol based on an iterative multiplicative recurrence. This short note presents a practical shared-key recovery attack on the Cascader key exchange protocol. It also shows that Cascader, used as a hash function, is not collision resistant, gives a new upper bound on the size of Cascader's output space, and shows that a Cascader-based KDF is not secure against an Adaptive Chosen Public Inputs Attack (CPM).
Joshua Limbrey, Andrew Mendelsohn
ePrint Report
In Submission 2025/1391 to the IACR Cryptology ePrint Archive, the Inverse Discrete Logarithm Problem (IDLP) is introduced and used to build a key exchange protocol and a KEM. The author claims both classical and post-quantum security for IDLP and therefore for the proposed protocols. It is the purpose of this note to give an efficient quantum algorithm for IDLP, based on the algorithm of Shor. We give an implementation of our algorithm, replacing the use of Shor's algorithm with an oracle.
Juliane Krämer, Patrick Struck, Maximiliane Weishäupl
ePrint Report
In this note we analyze the various binding properties of combiners for KEMs. We show that several properties follow easily for the most general combiner—assuming a collision-resistant hash function—while more performance-oriented versions require the respective property from one or both of the underlying KEMs. Other properties are not obtained as directly and require either more properties of one underlying KEM or the respective property from both KEMs.
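For orientation, one textbook shape of a "most general" hash-based KEM combiner feeds both session keys and both ciphertexts into a collision-resistant hash. The sketch below is generic and illustrative (the `encaps`/`decaps` interface is a hypothetical one of ours), and the note's exact combiners may differ in which inputs they include.

```python
# Generic hash-based KEM combiner sketch: hash both keys and both ciphertexts.
# The kem1/kem2 objects and their encaps/decaps methods are a hypothetical
# interface used only for illustration.

import hashlib

def combined_encaps(kem1, pk1, kem2, pk2):
    """Encapsulate under both KEMs; derive one shared key from everything."""
    c1, k1 = kem1.encaps(pk1)
    c2, k2 = kem2.encaps(pk2)
    key = hashlib.sha3_256(k1 + k2 + c1 + c2).digest()
    return (c1, c2), key

def combined_decaps(kem1, sk1, kem2, sk2, ciphertext):
    """Decapsulate both components and re-derive the same shared key."""
    c1, c2 = ciphertext
    k1 = kem1.decaps(sk1, c1)
    k2 = kem2.decaps(sk2, c2)
    return hashlib.sha3_256(k1 + k2 + c1 + c2).digest()
```

Performance-oriented variants typically hash fewer of these inputs, which is consistent with the abstract's observation that they then need the corresponding property from one or both of the underlying KEMs.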
Seyoung Yoon, Gyeongju Song, Kyungbae Jang, Sangmin Cha, Hwajeong Seo
ePrint Report
As quantum computing technology rapidly advances, threats to existing symmetric-key and public-key cryptosystems are becoming increasingly real. In this study, we implement a SHA-1 quantum circuit that operates efficiently in a quantum computing environment. We optimize the quantum circuit with a focus on minimizing total circuit depth, a key performance indicator for quantum algorithms. Our SHA-1 quantum circuit implementation uses 985 qubits, with a measured circuit depth of 9,026. Furthermore, by integrating this optimized circuit with Grover's algorithm, we establish the foundation for an efficient quantum attack on SHA-1. This research is significant not only because it presents a resource-efficient SHA-1 quantum implementation but also because it enables accelerated attacks in a quantum computing environment.
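To put the reported depth in perspective, a back-of-the-envelope Grover estimate (ours, under the common assumptions that a preimage search over the 160-bit output space takes about $(\pi/4)\,2^{80}$ iterations and that each iteration costs roughly two SHA-1 evaluations, ignoring the diffusion operator) gives:

```latex
% Rough total attack depth: iterations x (two SHA-1 circuits per Grover oracle call).
D_{\mathrm{total}} \;\approx\; \frac{\pi}{4}\,2^{80}\times 2 \times 9{,}026 \;\approx\; 2^{93.8}.
```

The per-iteration circuit depth multiplies directly into this total, which is why minimizing depth is the metric targeted above.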
Dan Boneh, Joachim Neu, Valeria Nikolaenko, Aditi Partap
ePrint Report
Data availability sampling (DAS) is an important technique for horizontally scaling consensus protocols without compromising on the number of adversarial nodes that can be tolerated. DAS is on the technical roadmap of major blockchains such as Ethereum. A major challenge for DAS schemes, one that has not been formally studied in the literature, is how incomplete shares can be repaired. The need to repair data shares motivates key aspects of Ethereum's DAS-based sharding vision, called "Danksharding". In this work, we make two contributions. First, we provide a new definitional framework that formalizes the notion of repair, along with the security guarantees that a DAS scheme must provide. Second, we propose a new DAS scheme designed with efficient repair in mind, based on locally-correctable multiplicity codes. To facilitate the use of these codes, we introduce a new multivariate polynomial commitment scheme that (i) supports efficient openings of partial derivatives of a committed polynomial, (ii) supports fast batch generation of opening proofs at many points, and (iii) admits an algorithm to recompute (repair) opening proofs at a point from only a few other proofs. The proposed scheme improves upon the state-of-the-art Ethereum Fulu DAS scheme, slated for deployment in late 2025/early 2026, in storage overhead, repair bandwidth, and coordination, while only slightly increasing dispersal cost and sampling bandwidth. Our techniques readily carry over to data availability schemes based on verifiable information dispersal (VID).
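As brief background on the codes used above (standard multiplicity-code material, not specific to this paper): each codeword symbol packages the evaluations of the message polynomial and its low-order partial derivatives at one point, e.g. for a bivariate polynomial with multiplicity two,

```latex
% Codeword symbol of a bivariate multiplicity code (multiplicity s = 2) at (a, b).
c_{(a,b)}(f) \;=\; \Bigl(\, f(a,b),\ \tfrac{\partial f}{\partial X}(a,b),\ \tfrac{\partial f}{\partial Y}(a,b) \,\Bigr),
\qquad (a,b)\in\mathbb{F}_q^{2}.
```

This is why a polynomial commitment with efficient partial-derivative openings, item (i) above, pairs naturally with such codes.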
Matteo Campanelli, Dario Fiore, Mahak Pancholi
ePrint Report
Incrementally Verifiable Computation (IVC) allows one to prove the correctness of a computation of potentially unbounded length in an incremental way, while a computationally weak client can efficiently check its correctness in time sublinear in the computation's length. IVC is particularly useful in several real-world applications such as scalable blockchains, distributed computation, and verifiable machine learning. Yet, most existing IVC schemes are only provably secure for constant-depth computations. Arguing their security for computations of polynomial depth relies on heuristic assumptions, raising both theoretical and practical concerns. In this work, we delve into the security foundations of incremental proof systems, addressing two main questions.

First, we revisit the security analysis, in the unbounded-depth regime, of the canonical construction of IVC based on the recursive composition of SNARKs. We extend this analysis to include SNARKs that are straightline extractable in the algebraic group model (AGM) and some additional oracle model. As a consequence of our result, we obtain novel instantiations of IVC for unbounded-depth computations based on AGM-based SNARKs, such as Groth16 and Marlin, an important class of SNARKs not captured by similar analyses in prior work [Chiesa et al., TCC 2024].

Second, we consider incremental proof systems for arbitrary-depth computations in which full-blown extractability is not necessary. We study under what conditions they can be instantiated from the recursive composition of "plain" building blocks (SNARKs, folding, accumulation schemes), that is, without requiring special straightline extractability. We introduce incremental functional commitments (incremental FCs), a primitive that allows one to commit to large data $D$ and later prove a function $f(D)$. The key aspect is that both the committing and proving functionalities operate incrementally, processing $D$ in a streaming, piece-by-piece manner. Also, as in standard FCs, their security property is a form of evaluation binding, a notion that is weaker than knowledge soundness (it states that it is hard to produce two valid proofs for the same commitment and two distinct outputs). Our second main result is a construction of incremental FCs based on the recursive composition of SNARKs, together with a security analysis showing that arbitrarily deep compositions of primitives with non-straightline extractors do not suffer from inherent security limitations.
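Schematically, and in standard functional-commitment notation rather than the paper's own formalism, the evaluation-binding property described above can be written as:

```latex
% Evaluation binding, schematically: no efficient adversary can open one
% commitment C to two distinct outputs of the same function f.
\Pr\bigl[\,\mathsf{Verify}(C,f,y_1,\pi_1)=1 \;\wedge\; \mathsf{Verify}(C,f,y_2,\pi_2)=1 \;\wedge\; y_1\neq y_2
\;:\; (C,f,y_1,\pi_1,y_2,\pi_2)\leftarrow\mathcal{A}(1^\lambda)\,\bigr] \;\le\; \mathsf{negl}(\lambda).
```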

03 August 2025

Yalan Wang, Liqun Chen, Yangguang Tian, Long Meng, Christopher J.P. Newton
ePrint Report
The World Wide Web Consortium (W3C) has established standards for decentralized identities (DIDs) and verifiable credentials (VCs). A DID serves as a unique identifier for an entity, while a VC validates specific attributes associated with the DID holder. To prove ownership of credentials, users generate verifiable presentations (VPs). To enhance privacy, the W3C standards advocate for randomizable signatures in VC creation and zero-knowledge proofs for VP generation. However, these standards face a significant limitation: they cannot effectively verify cross-domain credentials while maintaining anonymity. In this paper, we present Anonymous Verifiable Presentations with Extended Usability (AVPEU), a novel framework that addresses this limitation through the introduction of a notary system. At the technical core of AVPEU lies our proposed randomizable message-hiding signature scheme. We provide both a generic construction of AVPEU and specific implementations based on Boneh-Boyen-Shacham (BBS), Camenisch-Lysyanskaya (CL), and Pointcheval-Sanders (PS) signatures. Our experimental results demonstrate the feasibility of these schemes.
Yalan Wang, Bryan Kumara, Harsh Kasyap, Liqun Chen, Sumanta Sarkar, Christopher J.P. Newton, Carsten Maple, Ugur Ilker Atmaca
ePrint Report
All-but-one Vector Commitments (AVCs) allow a committed vector to be verified by randomly opening all but one of the committed values. Typically, AVCs are instantiated using Goldreich-Goldwasser-Micali (GGM) trees. Generating these trees accounts for a significant part of the computational cost of AVCs due to the large number of hash function calls. Recently, correlated GGM (cGGM) trees were proposed to halve the number of hash calls, and Batched AVCs (BAVCs) based on one large GGM tree were integrated into FAEST to form the FAEST version 2 signature scheme, improving efficiency and reducing the signature size. However, further optimizations of BAVC schemes remain possible. Inspired by the large-GGM-based BAVC and the cGGM tree, this paper proposes BACON, a BAVC-with-aborts scheme built on a large cGGM tree. BACON executes multiple AVC instances in a single batch and uses an abort mechanism to probabilistically reduce the commitment size. We prove that BACON is secure in the ideal cipher model and the random oracle model. We also discuss a possible application of BACON, namely FAEST version 2. Furthermore, because a large cGGM tree requires half as many hash calls as a large GGM tree, BACON is, in theory, more efficient than the state-of-the-art BAVC scheme.
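As background on where the hash calls discussed above come from, here is a minimal sketch of plain GGM-style seed expansion (our illustration, not the paper's code); in a correlated GGM tree one of the two children is derived from an XOR-type correlation instead of a second hash, which is the halving referred to in the abstract.

```python
# Minimal GGM-style seed tree: each internal node expands into two child seeds
# via two hash calls, so a depth-d tree costs 2^(d+1) - 2 hash calls in total.
# Illustrative only; cGGM/half-tree constructions replace one of the two hash
# calls per node with an XOR-type correlation, roughly halving the count.

import hashlib

def expand(seed: bytes) -> tuple[bytes, bytes]:
    """Derive left and right child seeds from a parent seed (two hash calls)."""
    left = hashlib.sha3_256(seed + b"\x00").digest()
    right = hashlib.sha3_256(seed + b"\x01").digest()
    return left, right

def ggm_leaves(root: bytes, depth: int) -> list[bytes]:
    """Expand a root seed into 2**depth leaf seeds."""
    level = [root]
    for _ in range(depth):
        level = [child for seed in level for child in expand(seed)]
    return level

leaves = ggm_leaves(b"\x00" * 32, depth=4)   # 16 leaves from 30 hash calls
```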
Mirza Ahad Baig, Christoph Ullrich Günther, Krzysztof Pietrzak
ePrint Report
The blocks in the Bitcoin blockchain record the amount of work W that went into creating them through proofs of work. When honest parties control a majority of the work, consensus is achieved by picking the chain with the highest recorded weight. Resources other than work have been considered to secure such longest-chain blockchains. In Chia, blocks record the amount of disk-space S (via a proof of space) and sequential computational steps V (through a VDF).

In this paper, we ask what weight functions Γ(S,V,W) (that assign a weight to a block as a function of the recorded space, speed, and work) are secure in the sense that whenever the weight of the resources controlled by honest parties is larger than the weight of adversarial parties, the blockchain is secure against private double-spending attacks.

We completely classify such functions in an idealized “continuous” model: Γ(S,V,W) is secure against private double-spending attacks if and only if it is homogeneous of degree one in the timed resources V and W, i.e., αΓ(S,V,W)=Γ(S,α V, α W). This includes the Bitcoin rule Γ(S,V,W)=W and the Chia rule Γ(S,V,W) = S · V. In a more realistic model where blocks are created at discrete time-points, one additionally needs some mild assumptions on the dependency on S (basically, the weight should not grow too much if S is slightly increased, say linear as in Chia).

Our classification is more general and allows various instantiations of the same resource. It provides a powerful tool for designing new longest-chain blockchains. For example, consider combining different PoWs to counter centralization, say the Bitcoin PoW W_1 and a memory-hard PoW W_2. Previous work suggested using W_1+W_2 as the weight. Our results show that, e.g., √(W_1)·√(W_2) or min{W_1,W_2} are also secure, and we argue that in practice these are much better choices.
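As a quick check of the degree-one homogeneity condition for the combined-PoW weights suggested above (our arithmetic, following the condition stated in the abstract):

```latex
% Both example weights are homogeneous of degree one in the timed resources:
\sqrt{\alpha W_1}\cdot\sqrt{\alpha W_2} \;=\; \alpha\,\sqrt{W_1}\cdot\sqrt{W_2},
\qquad
\min\{\alpha W_1,\ \alpha W_2\} \;=\; \alpha\,\min\{W_1,\ W_2\}.
% By contrast, e.g. W_1 \cdot W_2 scales as \alpha^2 and is therefore ruled out.
```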
Sofiane Azogagh, Zelma Aubin Birba, Sébastien Gambs, Marc-Olivier Killijian
ePrint Report
While the use of homomorphic encryption (HE) for encrypted inference has received considerable attention, its application to the training of machine learning (ML) models remains comparatively underexplored, primarily due to the high computational overhead traditionally associated with fully homomorphic encryption (FHE). In this work, we address this challenge by leveraging the inherent connection between inference and training in the context of Extremely Randomized Trees (ERTs), thereby enabling efficient training directly over encrypted data. More precisely, we instantiate this approach by training ERTs within the TFHE framework. Our implementation demonstrates that it is possible to train ERTs on encrypted datasets with a runtime significantly lower than current state-of-the-art methods for training Random Forests in the encrypted domain, while achieving comparable predictive accuracy. This result highlights a promising direction for practical privacy-preserving machine learning using FHE. Our second main contribution leverages the properties of ERTs to create the first ML model that enables private unlearning. This approach makes the unlearning process indistinguishable from training, thus allowing clients to conceal the true nature of the operations being conducted on the model.
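For readers less familiar with Extremely Randomized Trees, the sketch below shows one standard way an ERT split is drawn in the clear (a random feature and a uniformly random threshold within its observed range, following Geurts et al.); this is textbook ERT background, not the paper's TFHE-based procedure.

```python
# Plaintext sketch of drawing one Extremely Randomized Trees (ERT) split:
# choose a random feature, then a uniformly random threshold within its range.
# Textbook ERT background only; the paper trains ERTs over encrypted data.

import random

def draw_random_split(rows: list[list[float]]) -> tuple[int, float]:
    """Return (feature_index, threshold) for a randomly drawn split."""
    n_features = len(rows[0])
    feature = random.randrange(n_features)
    values = [row[feature] for row in rows]
    threshold = random.uniform(min(values), max(values))
    return feature, threshold

def partition(rows, feature, threshold):
    """Split rows into the two children induced by the drawn test."""
    left = [r for r in rows if r[feature] <= threshold]
    right = [r for r in rows if r[feature] > threshold]
    return left, right
```

Because so little of this choice depends on the training data, building a tree closely resembles evaluating one, which gives an intuition for the inference/training connection mentioned above.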
Vincenzo Botta, Simone Bottoni, Matteo Campanelli, Emanuele Ragnoli, Alberto Trombetta
ePrint Report
Verifiable Databases (VDBs) enable clients to delegate storage to a provider without having to trust it: given a claimed response to a query, they can check its correctness by holding a short digest of the database and a small related certificate (a proof). The foundational role of databases and the increasing trend of storage delegation make this an important primitive.

Existing VDB approaches face fundamental tradeoffs, on which we improve in this work. One line of VDB designs has leveraged general-purpose proof systems (SNARKs). The resulting constructions are expressive, supporting a large class of queries, but require cumbersome intermediate representations, often rely on heuristic assumptions, and are overall very complex. Other prior approaches cleverly combine specialized authenticated data structures (e.g., set accumulators). These designs tend to be simple and elegant and rely on well-founded cryptographic assumptions; however, they have limited expressivity and some undesirable efficiency features (e.g., scaling quadratically in the number of columns).

We present $\mathsf{qedb}$, a novel construction for verifiable databases that addresses these limitations. $\mathsf{qedb}$ supports more expressive queries than previous approaches based on specialized data structures. At the same time, it preserves their simplicity and improves on several of their limitations: it removes the quadratic dependency on the number of columns present in state-of-the-art constructions, and its proof sizes are completely independent of the database size (without requiring any circuit representation).

One of our primary contributions is a foundational framework that cleanly separates VDB logic from cryptographic instantiations. At its essence, it resembles other common information-theoretic frameworks, such as Polynomial Interactive Oracle Proofs (PIOPs), while diverging from existing approaches by being slightly specialized for the database setting. We demonstrate how to instantiate our framework using modern pairing-based linear-map vector commitments and set accumulators. More generally, we show that our building blocks can be derived from extractable homomorphic polynomial commitments. Being modular, our approach permits alternative instantiations, such as lattice-based polynomial commitments enabling post-quantum security.

We implemented $\mathsf{qedb}$ in Rust and experimentally showed that it efficiently scales to datasets with millions of rows while maintaining competitive proving and verification times. This evidence indicates that our approach provides a foundation for practical, secure, and expressive verifiable database systems.