International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

04 August 2025

Divesh Aggarwal, Pranjal Dutta, Saswata Mukherjee, Satyajeet Nagargoje, Maciej Obremski
ePrint Report
Randomness is a fundamental requirement in cryptographic systems, enabling secure encryption, commitments, and zero-knowledge proofs. However, real-world randomness sources often suffer from weaknesses that adversaries can exploit, leading to significant security vulnerabilities. While deterministic randomness extraction from a single min-entropy source is impossible, two-source extractors provide a robust solution by generating nearly uniform randomness from two independent weak sources. Moreover, cryptographic systems must also be resilient to leakage and tampering attacks, necessitating the development of non-malleable two-source extractors.

In this work, we construct a two-source non-malleable extractor in the Common Reference String (CRS) model, where a random low-degree polynomial is sampled once and made accessible to the independent random sources, the distinguisher, and the tamperer. Our extractor requires only linear min-entropy in both sources and does not rely on strong computational assumptions, in contrast to prior constructions requiring assumptions such as sub-exponential hardness of the Decisional Diffie-Hellman (DDH) problem. Notably, our construction builds on the recent breakthrough proof of the polynomial Freiman-Ruzsa conjecture. A connection between the Freiman-Ruzsa conjecture and two-source extractors was considered in prior work [ZBS11], [AGMR24], but those constructions did not achieve non-malleability.

Our results advance the state of non-malleable cryptographic primitives, with applications in secure storage, leakage-resilient cryptography, and privacy amplification. By eliminating the need for strong computational hardness assumptions, our techniques provide a more foundational and widely applicable method for randomness extraction.

We also show that the requirements on the CRS for our application are so mild that the CRS can be sampled via two-party computation even when one of the parties is malicious (a setting in which establishing unbiased coins is impossible).
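To make the interface concrete, here is a minimal Python sketch of the classic Chor-Goldreich inner-product two-source extractor. It is not the paper's CRS-based non-malleable construction; it only illustrates what a two-source extractor does: two independent weak samples go in, a nearly uniform bit comes out.

```python
# Toy two-source extraction: the classic inner-product extractor of
# Chor and Goldreich. NOT the paper's CRS-based non-malleable
# construction; illustration of the interface only.
import secrets

def inner_product_extractor(x: bytes, y: bytes) -> int:
    """Output <x, y> mod 2 over GF(2), viewing inputs as bit vectors."""
    assert len(x) == len(y)
    acc = 0
    for xb, yb in zip(x, y):
        acc ^= xb & yb
    # parity of popcount(x AND y) equals popcount of the XOR-fold mod 2
    return bin(acc).count("1") & 1

# Two independent 32-byte samples standing in for weak sources.
x = secrets.token_bytes(32)
y = secrets.token_bytes(32)
print(inner_product_extractor(x, y))
```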
Sebastian Angel, Sofía Celi, Elizabeth Margolin, Pratyush Mishra, Martin Sander, Jess Woods
ePrint Report
We introduce Coral, a system for proving in zero-knowledge that a committed byte stream corresponds to a structured object in accordance with a Context Free Grammar. Once a prover establishes the validity of the parsed object with Coral, they can selectively prove facts about the object, such as fields in Web API responses or in JSON Web Tokens, to third parties or blockchains. Coral reduces the problem of correct parsing to a few simple checks over a left-child right-sibling tree and introduces a novel segmented memory abstraction that unifies and extends prior constructions for RAM in zkSNARKs. Our implementation of Coral runs on a standard laptop, and non-interactively proves the parsing of real Web responses (JSON) and files (TOML and C) in seconds. The resulting proofs are small and cheap to verify.
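For intuition about the tree representation Coral's checks operate over, below is a sketch of the standard left-child right-sibling encoding of an n-ary parse tree. The node labels and the JSON-like example are illustrative assumptions; the paper's actual checks run inside a zkSNARK, not in plain Python.

```python
# Left-child right-sibling (LCRS) encoding of an n-ary parse tree.
# Illustrative only; grammar and labels are made up for this example.
from dataclasses import dataclass, field

@dataclass
class NaryNode:
    label: str
    children: list = field(default_factory=list)

@dataclass
class LCRSNode:
    label: str
    left_child: "LCRSNode | None" = None      # first child
    right_sibling: "LCRSNode | None" = None   # next sibling

def to_lcrs(node: NaryNode) -> LCRSNode:
    """Rewrite an n-ary parse tree as a binary LCRS tree."""
    out = LCRSNode(node.label)
    prev = None
    for child in node.children:
        enc = to_lcrs(child)
        if prev is None:
            out.left_child = enc     # first child hangs off left_child
        else:
            prev.right_sibling = enc  # later children chain as siblings
        prev = enc
    return out

# Tiny parse tree for a JSON-like object {"a": 1}
tree = NaryNode("object", [NaryNode("pair", [NaryNode('"a"'), NaryNode("1")])])
print(to_lcrs(tree).left_child.label)  # "pair"
```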
Jan Bormet, Arka Rai Choudhuri, Sebastian Faust, Sanjam Garg, Hussien Othman, Guru-Vamsi Policharla, Ziyan Qu, Mingyuan Wang
ePrint Report
Threshold encrypted mempools protect the privacy of transactions until their inclusion on chain is confirmed. They are a promising approach to protecting against front-running attacks on decentralized blockchains.

Recent works have introduced two key properties that an encryption scheme must satisfy in order to scale to large decentralized blockchains such as Ethereum. Silent Setup [Garg-Kolonelos-Policharla-Wang, CRYPTO'24] demands that a threshold encryption scheme require no interaction during the setup phase and rely only on the existence of a Public Key Infrastructure. Batched Decryption [Choudhuri-Garg-Piet-Policharla, USENIX'24] demands that an entire block containing $B$ encrypted transactions be decryptable using communication that is independent of (or sublinear in) $B$, without compromising the privacy of transactions that have not yet been confirmed.

While existing constructions achieve either Silent Setup or Batched Decryption independently, a truly decentralized and scalable encrypted mempool requires both properties simultaneously. In this work, we present the first "Batched Threshold Encryption scheme with Silent Setup", built using bilinear pairings. We provide formal definitions for the primitive and prove security in the Generic Group Model. We describe several optimizations and implement our scheme to evaluate its performance. Our experiments demonstrate its efficiency for deployment in blockchain systems.
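As background on the underlying primitive, the sketch below shows plain t-of-n threshold ElGamal decryption over a toy group, with partial decryptions combined by Lagrange interpolation in the exponent. It is neither batched nor silent-setup and is not the paper's pairing-based scheme; all parameters are illustrative.

```python
# Minimal t-of-n threshold ElGamal: committee members issue partial
# decryptions c1^x_i, combined via Lagrange interpolation in the
# exponent. Toy group, illustrative only.
import random

P, Q, G = 23, 11, 2           # order-Q subgroup of Z_23^*, generator G=2
T, N = 2, 3                   # 2-of-3 threshold

def shamir_share(secret, t, n):
    coeffs = [secret] + [random.randrange(Q) for _ in range(t - 1)]
    return {i: sum(c * pow(i, k, Q) for k, c in enumerate(coeffs)) % Q
            for i in range(1, n + 1)}

def lagrange_at_zero(ids):
    lam = {}
    for i in ids:
        num = den = 1
        for j in ids:
            if j != i:
                num = num * (-j) % Q
                den = den * (i - j) % Q
        lam[i] = num * pow(den, -1, Q) % Q
    return lam

x = random.randrange(1, Q)                       # committee secret key
h = pow(G, x, P)                                 # public key
shares = shamir_share(x, T, N)

m = pow(G, 7, P)                                 # message in the group
r = random.randrange(1, Q)
c1, c2 = pow(G, r, P), m * pow(h, r, P) % P      # ElGamal ciphertext

ids = [1, 3]                                     # any T members suffice
lam = lagrange_at_zero(ids)
partials = {i: pow(c1, shares[i], P) for i in ids}
c1_x = 1
for i in ids:
    c1_x = c1_x * pow(partials[i], lam[i], P) % P  # reconstruct c1^x
print(m == c2 * pow(c1_x, -1, P) % P)              # True
```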
Nick Aquina, Simon Rommel, Idelfonso Tafur Monroy
ePrint Report
Cascader has been introduced as a new key exchange protocol based on an iterative multiplicative recurrence. This short note presents a practical shared-key recovery attack on the Cascader key exchange protocol. It also shows that Cascader, used as a hash function, is not collision resistant, presents a new upper bound on the output space of Cascader, and shows that a Cascader-based KDF is not secure against an Adaptive Chosen Public Inputs Attack (CPM).
Joshua Limbrey, Andrew Mendelsohn
ePrint Report
In Submission 2025/1391 to the IACR Cryptology ePrint Archive, the Inverse Discrete Logarithm Problem (IDLP) is introduced and used to build a key exchange protocol and a KEM. The author claims both classical and post-quantum security for IDLP, and therefore for the proposed protocols. The purpose of this note is to give an efficient quantum algorithm for IDLP, based on Shor's algorithm. We give an implementation of our algorithm, replacing the use of Shor's algorithm with an oracle.
Juliane Krämer, Patrick Struck, Maximiliane Weishäupl
ePrint Report
In this note we analyze the various binding properties of combiners for KEMs. We show that several properties follow easily for the most general combiner—assuming a collision-resistant hash function—while more performance-oriented versions require the respective property from one or both of the underlying KEMs. Other properties are not obtained as directly and require either more properties of one underlying KEM or the respective property from both KEMs.
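A minimal sketch of the general combiner shape described above: hash both shared secrets together with both ciphertexts, K = H(K1, c1, K2, c2), so the binding properties rest on the collision resistance of H. The RandomKEM placeholder below is an assumption for illustration, not a real KEM.

```python
# KEM combiner sketch: K = H(K1, c1, K2, c2) with a collision-resistant
# hash. RandomKEM is a stand-in, not a real KEM.
import hashlib, secrets

class RandomKEM:
    """Placeholder KEM: encapsulation returns a random key and ciphertext."""
    def keygen(self):
        sk = secrets.token_bytes(16)
        return sk, hashlib.sha256(sk).digest()   # (sk, pk)
    def encaps(self, pk):
        return secrets.token_bytes(16), secrets.token_bytes(32)  # (K, ct)

def combine(k1, ct1, k2, ct2):
    h = hashlib.sha256()
    for part in (k1, ct1, k2, ct2):
        h.update(len(part).to_bytes(2, "big") + part)  # length-prefixed
    return h.digest()

kem1, kem2 = RandomKEM(), RandomKEM()
_, pk1 = kem1.keygen()
_, pk2 = kem2.keygen()
k1, ct1 = kem1.encaps(pk1)
k2, ct2 = kem2.encaps(pk2)
print(combine(k1, ct1, k2, ct2).hex())
```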
Seyoung Yoon, Gyeongju Song, Kyungbae Jang, Sangmin Cha, Hwajeong Seo
ePrint Report
As quantum computing technology rapidly advances, threats to existing symmetric-key and public-key cryptosystems are becoming increasingly real. In this study, we implement a SHA-1 quantum circuit that operates efficiently in a quantum computing environment. We optimize the quantum circuit, focusing on minimizing total circuit depth, a key performance indicator of quantum algorithms. The SHA-1 quantum circuit implementation used 985 qubits, resulting in a measured circuit depth of 9,026. Furthermore, by integrating this optimized circuit with the Grover algorithm, we establish the foundation for an efficient quantum attack on the SHA-1 algorithm. This research is significant not only because it presents a resource-efficient SHA-1 quantum implementation but also because it enables accelerated attacks in a quantum computing environment.
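For scale, here is a back-of-envelope Grover cost estimate for a SHA-1 preimage using the reported circuit depth of 9,026. The assumption that each Grover iteration evaluates the hash circuit twice (compute and uncompute inside the oracle, diffusion layer ignored) is ours, not the paper's.

```python
# Back-of-envelope Grover cost for a SHA-1 preimage search.
# Assumption: 2 hash evaluations per Grover iteration, diffusion ignored.
import math

n_bits = 160                    # SHA-1 output length
depth_sha1 = 9_026              # reported SHA-1 circuit depth
iterations = math.floor(math.pi / 4 * math.sqrt(2 ** n_bits))
total_depth = iterations * 2 * depth_sha1
print(f"Grover iterations ~ 2^{math.log2(iterations):.1f}")
print(f"total depth       ~ 2^{math.log2(total_depth):.1f}")
```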
Dan Boneh, Joachim Neu, Valeria Nikolaenko, Aditi Partap
ePrint Report
Data availability sampling (DAS) is an important technique to horizontally scale consensus protocols without compromising on the number of adversarial nodes that can be tolerated. DAS is on the technical roadmap of major blockchains such as Ethereum. A major challenge for DAS schemes, one that has not been formally studied in the literature, is how incomplete shares can be repaired. The need for repairing data shares motivates key aspects of Ethereum's DAS-based sharding vision called "Danksharding".

In this work, we make two contributions. First, we provide a new definitional framework that formalizes the notion of repair, along with the security guarantees that a DAS scheme must provide. Second, we propose a new DAS scheme designed with efficient repair in mind, based on locally-correctable multiplicity codes. To facilitate using these codes, we introduce a new multivariate polynomial commitment scheme that (i) supports efficient openings of partial derivatives of a committed polynomial, (ii) supports fast batch opening proof generation at many points, and (iii) has an algorithm to recompute (repair) opening proofs at a point from only a few other proofs.

The proposed scheme improves upon the state-of-the-art Ethereum Fulu DAS scheme, slated for deployment in late 2025/early 2026, in storage overhead, repair bandwidth, and coordination, while only slightly increasing dispersal cost and sampling bandwidth. Our techniques readily carry over to data availability schemes based on verifiable information dispersal (VID).
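To illustrate what "repair" means for erasure-coded shares, here is a toy Reed-Solomon example: a lost evaluation of the data polynomial is recomputed from any k surviving shares by Lagrange interpolation. The paper's multiplicity-code scheme additionally repairs the opening proofs, which this sketch does not model.

```python
# Toy share repair for Reed-Solomon-coded data: rebuild a lost
# evaluation from any k surviving evaluations via Lagrange interpolation.
P = 2**31 - 1                      # a Mersenne prime field

def interpolate(points, x):
    """Evaluate the unique degree-<k polynomial through `points` at `x`."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

data = [314, 159, 265, 358]        # k = 4 data symbols = poly coefficients
poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(data)) % P
shares = {i: poly(i) for i in range(8)}             # n = 8 coded shares

lost = 5
survivors = [(i, shares[i]) for i in (0, 2, 3, 7)]  # any 4 suffice
print(interpolate(survivors, lost) == shares[lost])  # True
```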
Matteo Campanelli, Dario Fiore, Mahak Pancholi
ePrint Report
Incrementally Verifiable Computation (IVC) allows one to prove the correctness of a computation of potentially unbounded length in an incremental way, while a computationally weak client can efficiently check its correctness in time sublinear in the computation's length. IVC is particularly useful in several real-world applications such as scalable blockchains, distributed computation, and verifiable machine learning. Yet, most existing IVC schemes are only provably secure for constant-depth computations. Arguing their security for computations of polynomial depth relies on heuristic assumptions, raising both theoretical and practical concerns. In this work, we delve into the security foundations of incremental proof systems, addressing two main questions.

First, we revisit the security analysis, in the unbounded-depth regime, of the canonical construction of IVC based on the recursive composition of SNARKs. We extend this analysis to include SNARKs that are straightline extractable in the algebraic group model (AGM) and some additional oracle model. As a consequence of our result, we obtain novel instantiations of IVC for unbounded-depth computations based on AGM-based SNARKs, such as Groth16 or Marlin, an important class of SNARKs not captured by similar analyses in prior work [Chiesa et al. TCC 2024].

Second, we consider incremental proof systems for arbitrary-depth computations in which full-blown extractability is not necessary. We study under what conditions they can be instantiated from the recursive composition of "plain" building blocks (SNARKs, folding, accumulation schemes), that is, without requiring special straightline extractability. We introduce incremental functional commitments (incremental FCs), a primitive that allows one to commit to a large dataset $D$ and later prove a function $f(D)$. The key aspect is that both the committing and proving functionalities operate incrementally, processing $D$ in a streaming, piece-by-piece manner. Also, as in standard FCs, their security property is a form of evaluation binding, a notion weaker than knowledge soundness: it states that it is hard to produce two valid proofs for the same commitment and two distinct outputs. Our second main result is a construction of incremental FCs based on the recursive composition of SNARKs, together with a security analysis showing that arbitrarily deep compositions of primitives with non-straightline extractors do not suffer from inherent security limitations.
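A minimal sketch of the incremental-FC interface described above, with a hash chain as the commitment and a running sum as f: committing and evaluating both proceed piece-by-piece over the stream. This toy conveys the streaming interface only and provides none of the evaluation binding that the paper's SNARK-based construction proves.

```python
# Streaming interface of an incremental functional commitment (toy):
# hash-chain commitment plus a running evaluation of f(D) = sum(D).
import hashlib

def step(state: bytes, chunk: bytes) -> bytes:
    return hashlib.sha256(state + chunk).digest()

def commit_and_eval(stream):
    com, acc = b"\x00" * 32, 0                # (commitment state, running f(D))
    for chunk in stream:
        com = step(com, chunk)                # incremental commit
        acc += int.from_bytes(chunk, "big")   # incremental evaluation
    return com, acc

stream = [b"\x01", b"\x02", b"\x03"]
com, y = commit_and_eval(stream)
print(com.hex()[:16], y)                      # y = f(D) = 6
```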

03 August 2025

Yalan Wang, Liqun Chen, Yangguang Tian, Long Meng, Christopher J.P. Newton
ePrint Report
The World Wide Web Consortium (W3C) has established standards for decentralized identities (DIDs) and verifiable credentials (VCs). A DID serves as a unique identifier for an entity, while a VC validates specific attributes associated with the DID holder. To prove ownership of credentials, users generate verifiable presentations (VPs). To enhance privacy, the W3C standards advocate for randomizable signatures in VC creation and zero-knowledge proofs for VP generation. However, these standards face a significant limitation: they cannot effectively verify cross-domain credentials while maintaining anonymity. In this paper, we present Anonymous Verifiable Presentations with Extended Usability (AVPEU), a novel framework that addresses this limitation through the introduction of a notary system. At the technical core of AVPEU lies our proposed randomizable message-hiding signature scheme. We provide both a generic construction of AVPEU and specific implementations based on Boneh-Boyen-Shacham (BBS), Camenisch-Lysyanskaya (CL), and Pointcheval-Sanders (PS) signatures. Our experimental results demonstrate the feasibility of these schemes.
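To see the kind of randomizability such schemes rely on, here is a toy, Pointcheval-Sanders-style example over a tiny group: a signature (h, h^(x+y*m)) raised to a fresh exponent t still satisfies the same verification relation, yet looks unlinkable to the original. Real verification uses pairings and is omitted; all parameters are illustrative.

```python
# Toy PS-style signature randomization: sigma = (h, h^(x+y*m)),
# re-randomized as (h^t, h^(t*(x+y*m))). Pairing-based verification omitted.
import random

P, Q, G = 23, 11, 2                 # order-11 subgroup of Z_23^*

x, y = random.randrange(1, Q), random.randrange(1, Q)   # signing key
m = 5                                                   # message in Z_Q

h = pow(G, random.randrange(1, Q), P)
sigma = (h, pow(h, (x + y * m) % Q, P))                 # PS-style signature

t = random.randrange(1, Q)                              # randomizer
sigma_r = (pow(sigma[0], t, P), pow(sigma[1], t, P))    # randomized signature

# Both satisfy sigma2 == sigma1^(x + y*m).
check = lambda s: s[1] == pow(s[0], (x + y * m) % Q, P)
print(check(sigma), check(sigma_r))                     # True True
```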
Yalan Wang, Bryan Kumara, Harsh Kasyap, Liqun Chen, Sumanta Sarkar, Christopher J.P. Newton, Carsten Maple, Ugur Ilker Atmaca
ePrint Report
All-but-one Vector Commitments (AVCs) allow a committed vector to be verified by randomly opening all but one of the committed values. Typically, AVCs are instantiated using Goldreich-Goldwasser-Micali (GGM) trees. Generating these trees accounts for a significant part of the computational cost of AVCs, due to the large number of hash function calls. Recently, correlated GGM (cGGM) trees were proposed to halve the number of hash calls, and Batched AVCs (BAVCs) using one large GGM tree were integrated into FAEST to form the FAEST version 2 signature scheme, improving efficiency and reducing the signature size. However, further optimizations of BAVC schemes remain possible. Inspired by the large-GGM-based BAVC and the cGGM tree, this paper proposes BACON, a BAVC-with-aborts scheme that leverages a large cGGM tree. BACON executes multiple instances of AVC in a single batch and uses an abort mechanism to probabilistically reduce the commitment size. We prove that BACON is secure in the ideal cipher model and the random oracle model. We also discuss a possible application of BACON, namely FAEST version 2. Furthermore, because the number of hash calls in a large cGGM tree is half of that in a large GGM tree, BACON is, in theory, more efficient than the state-of-the-art BAVC scheme.
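For readers unfamiliar with the underlying mechanics, the sketch below expands a GGM tree from a root seed and produces an all-but-one opening (the copath to the hidden leaf). It spends two hash calls per internal node, exactly the cost the cGGM trick halves; the cGGM and batching optimizations themselves are not modeled.

```python
# GGM-tree mechanics behind (B)AVCs: expand a root seed to 2^d leaves,
# and open all leaves except one by revealing the copath seeds.
import hashlib

def expand(seed: bytes) -> tuple[bytes, bytes]:
    """One GGM node: derive left/right child seeds (two hash calls here)."""
    return (hashlib.sha256(seed + b"L").digest(),
            hashlib.sha256(seed + b"R").digest())

def leaves(seed: bytes, depth: int) -> list[bytes]:
    layer = [seed]
    for _ in range(depth):
        layer = [child for s in layer for child in expand(s)]
    return layer

def copath(seed: bytes, depth: int, hidden: int) -> list[bytes]:
    """Seeds enabling recomputation of every leaf except `hidden`."""
    out, node = [], seed
    for level in range(depth):
        left, right = expand(node)
        bit = (hidden >> (depth - 1 - level)) & 1
        out.append(left if bit else right)  # reveal the sibling
        node = right if bit else left       # descend toward hidden leaf
    return out

root = b"\x42" * 32
all_leaves = leaves(root, 4)
opening = copath(root, 4, hidden=5)
print(len(all_leaves), len(opening))        # 16 leaves, 4 copath seeds
```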
Mirza Ahad Baig, Christoph Ullrich Günther, Krzysztof Pietrzak
ePrint Report
The blocks in the Bitcoin blockchain record the amount of work W that went into creating them through proofs of work. When honest parties control a majority of the work, consensus is achieved by picking the chain with the highest recorded weight. Resources other than work have been considered to secure such longest-chain blockchains. In Chia, blocks record the amount of disk-space S (via a proof of space) and sequential computational steps V (through a VDF).

In this paper, we ask what weight functions Γ(S,V,W) (that assign a weight to a block as a function of the recorded space, speed, and work) are secure in the sense that whenever the weight of the resources controlled by honest parties is larger than the weight of adversarial parties, the blockchain is secure against private double-spending attacks.

We completely classify such functions in an idealized “continuous” model: Γ(S,V,W) is secure against private double-spending attacks if and only if it is homogeneous of degree one in the timed resources V and W, i.e., αΓ(S,V,W) = Γ(S,αV,αW). This includes the Bitcoin rule Γ(S,V,W) = W and the Chia rule Γ(S,V,W) = S·V. In a more realistic model where blocks are created at discrete time-points, one additionally needs some mild assumptions on the dependency on S (basically, the weight should not grow too much if S is slightly increased, say linear as in Chia).

Our classification is more general and allows various instantiations of the same resource. It provides a powerful tool for designing new longest-chain blockchains. For example, consider combining different PoWs to counter centralization, say the Bitcoin PoW W_1 and a memory-hard PoW W_2. Previous work suggested using W_1+W_2 as the weight. Our results show that, e.g., √(W_1)·√(W_2) or min{W_1,W_2} are also secure, and we argue that in practice these are much better choices, as the check below illustrates.
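The security criterion is easy to test numerically. This snippet checks homogeneity of degree one in the timed resources for the rules mentioned above, plus a deliberately failing one; the sampled ranges are arbitrary.

```python
# Numerical check of the homogeneity criterion:
# a * Gamma(S, V, W) == Gamma(S, a*V, a*W) for all a > 0.
import math, random

rules = {
    "Bitcoin: W":           lambda S, V, W: W,
    "Chia: S*V":            lambda S, V, W: S * V,
    "geo-mean: sqrt(V*W)":  lambda S, V, W: math.sqrt(V * W),
    "min(V, W)":            lambda S, V, W: min(V, W),
    "insecure: W**2":       lambda S, V, W: W ** 2,   # fails the criterion
}

for name, gamma in rules.items():
    ok = all(
        math.isclose(a * gamma(S, V, W), gamma(S, a * V, a * W))
        for S, V, W, a in (
            tuple(random.uniform(1, 10) for _ in range(4)) for _ in range(100)
        )
    )
    print(f"{name}: homogeneous of degree 1 in (V, W)? {ok}")
```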