International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

03 August 2025

Sofiane Azogagh, Zelma Aubin Birba, Sébastien Gambs, Marc-Olivier Killijian
ePrint Report
While the use of homomorphic encryption (HE) for encrypted inference has received considerable attention, its application for the training of machine learning (ML) models remains comparatively underexplored, primarily due to the high computational overhead traditionally associated with fully homomorphic encryption (FHE). In this work, we address this challenge by leveraging the inherent connection between inference and training in the context of Extremely Randomized Trees (ERT), thereby enabling efficient training directly over encrypted data. More precisely, we instantiate this approach by training ERTs within the TFHE framework. Our implementation demonstrates that it is possible to train ERTs on encrypted datasets with a runtime significantly lower than current state-of-the-art methods for training Random Forests in the encrypted domain while achieving comparable predictive accuracy. This result highlights a promising direction for practical privacy-preserving machine learning using FHE. Our second main contribution is to leverage the properties of ERTs to create the first ML model that enables private unlearning. This approach makes the unlearning process indistinguishable from training, thus allowing clients to conceal the true nature of the operations being conducted on the model.
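The property being exploited is that an Extremely Randomized Tree picks each split from randomness alone, with no data-dependent search over candidate splits, which keeps training structurally close to inference. A minimal plaintext sketch of that split rule (illustrative only; the paper trains over TFHE ciphertexts, and all names here are hypothetical):

```python
import random

def train_ert_node(X, y, depth=0, max_depth=3):
    """Extremely Randomized Tree: every split uses a random feature and a
    random threshold, with no data-dependent search over candidate splits."""
    if depth == max_depth or len(set(y)) <= 1:
        return max(set(y), key=y.count)          # leaf: majority label
    f = random.randrange(len(X[0]))              # random feature
    lo, hi = min(x[f] for x in X), max(x[f] for x in X)
    if lo == hi:
        return max(set(y), key=y.count)
    t = random.uniform(lo, hi)                   # random threshold
    left = [i for i, x in enumerate(X) if x[f] < t]
    right = [i for i, x in enumerate(X) if x[f] >= t]
    if not left or not right:
        return max(set(y), key=y.count)
    return (f, t,
            train_ert_node([X[i] for i in left], [y[i] for i in left], depth + 1, max_depth),
            train_ert_node([X[i] for i in right], [y[i] for i in right], depth + 1, max_depth))

def predict(node, x):
    while isinstance(node, tuple):               # internal nodes: (feature, threshold, left, right)
        f, t, l, r = node
        node = l if x[f] < t else r
    return node
```

Because the split choice never inspects labels, the same control flow can in principle be executed whether the data is in the clear or encrypted.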
Vincenzo Botta, Simone Bottoni, Matteo Campanelli, Emanuele Ragnoli, Alberto Trombetta
ePrint Report
Verifiable Databases (VDBs) enable clients to delegate storage to a provider without having to trust it: given a claimed response to a query, they can check its correctness by holding a short digest of the database and a small related certificate (a proof). The foundational role of databases and the increasing trend of storage delegation make this an important primitive. Existing VDB approaches face fundamental tradeoffs (on which we improve in this work). A line of work on VDB designs has leveraged general-purpose proof systems (SNARKs). The resulting constructions are expressive—they support a large class of queries—but require cumbersome intermediate representations, often rely on heuristic assumptions and are overall very complex. Other prior approaches adopted cleverly combined specialized authenticated data structures (e.g., set accumulators). These designs tend to be simple, elegant and rely on well-founded cryptographic assumptions; however, they have limited expressivity and some undesirable efficiency features (e.g., scaling quadratically in the number of columns). We present $\mathsf{qedb}$, a novel construction for verifiable databases that addresses these limitations. $\mathsf{qedb}$ supports more expressive queries than previous approaches based on specialized data structures. At the same time, it preserves their simplicity and improves on several of their limitations: it removes the quadratic dependency on the number of columns present in state-of-the-art constructions, and its proof sizes are completely independent of the database size (without requiring any circuit representation). One of our primary contributions is a foundational framework that cleanly separates VDB logic from cryptographic instantiations. At its essence, it resembles other common information-theoretic frameworks, such as Polynomial Interactive Oracle Proofs (PIOPs). At the same time, it diverges from existing approaches by being slightly specialized for the database setting.
We demonstrate how to instantiate our framework using modern pairing-based linear-map vector commitments and set accumulators. More generally, we show that our building blocks can be derived from extractable homomorphic polynomial commitments. Being modular, our approach permits alternative instantiations, such as with lattice-based polynomial commitments enabling post-quantum security. We implemented $\mathsf{qedb}$ in Rust and experimentally showed that it efficiently scales to datasets with millions of rows while maintaining competitive proving and verification times. This evidence indicates that our approach provides a foundation for practical, secure, and expressive verifiable database systems.
Florian Krieger, Florian Hirner, Ahmet Can Mert, Sujoy Sinha Roy
ePrint Report
Fully Homomorphic Encryption (FHE) and Post-Quantum Cryptography (PQC) involve polynomial multiplications, which are a common performance bottleneck. To resolve this bottleneck, polynomial multiplications are often accelerated in hardware using the Number-Theoretic Transform (NTT) or the Fast Fourier Transform (FFT). In particular, NTT operates over modular rings while FFT operates over complex numbers. NTT and FFT are widely deployed in applications with diverse parameter sets, leading to long design times for hardware accelerators. Existing hardware generation tools have limited functionality since they do not support generic on-the-fly twiddle factor generation or different memory-related optimizations. This paper improves the hardware design process and presents a generic and flexible tool to generate FFT and NTT architectures. In contrast to prior work, we combine on-the-fly twiddle factor generation and stall-free memory accesses. Moreover, we enhance hardware design flexibility through memory-optimized or routing-optimized design strategies. While our memory-optimized strategy minimizes twiddle factors in ROM, our routing-optimized strategy allows significantly higher clock frequencies on FPGAs. These optimization strategies allow effective customization of NTT/FFT architectures, spanning from low-end PQC to high-end FHE accelerators. Compared to existing works, we reach up to 15.9x lower latency and up to 7.4x improved ATP for FFT applications such as the Falcon signature scheme. Considering other NTT tools, we decrease latency by up to 1.8x and 2x for PQC and FHE parameter sets, respectively.
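For readers unfamiliar with the operation these generators target, here is a textbook recursive radix-2 NTT over a toy prime field, with parameters chosen purely for readability (q = 17, n = 8, and 9 a primitive 8th root of unity mod 17), not for any real scheme:

```python
def ntt(a, omega, q):
    """Recursive Cooley-Tukey NTT of a over Z_q, omega an n-th root of unity."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % q, q)
    odd = ntt(a[1::2], omega * omega % q, q)
    out = [0] * n
    w = 1
    for i in range(n // 2):                     # butterfly step
        t = w * odd[i] % q
        out[i] = (even[i] + t) % q
        out[i + n // 2] = (even[i] - t) % q
        w = w * omega % q
    return out

def intt(a, omega, q):
    """Inverse NTT: forward transform with omega^-1, scaled by n^-1."""
    n_inv = pow(len(a), -1, q)                  # Python 3.8+ modular inverse
    return [x * n_inv % q for x in ntt(a, pow(omega, -1, q), q)]
```

Pointwise multiplication in the NTT domain then realizes cyclic (mod $x^n - 1$) polynomial multiplication, which is exactly the kernel the hardware accelerates.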
Yifan Song, Xiaxi Ye
ePrint Report
In this work, we study the communication complexity of MPC achieving perfect security with optimal resilience ($t < n/3$).
On the positive side, we construct a perfectly secure MPC protocol for SIMD circuits with a communication complexity of $O(|C|)$ elements assuming preprocessing data of size $O(|C|)$, where the preprocessing data consists of packed Beaver triples over bivariate polynomials. Furthermore, we show that packed Beaver triples over bivariate polynomials can be prepared at an amortized cost of $O(1)$ elements plus $O(1)$ three-party Beaver triples per secret.

On the negative side, we establish a communication lower bound proving that preparing packed Beaver triples over bivariate polynomials requires at least $\Omega(n)$ elements of communication per secret. This lower bound is derived by first proving a communication lower bound for verifying the correctness of packed Beaver triples with perfect security with abort, and then efficiently reducing the task of verifying packed Beaver triples to preparing packed Beaver triples over bivariate polynomials. To match this bound, we give a concrete construction for packed Beaver triples over bivariate polynomials with $O(n)$ elements per secret, demonstrating the tightness of our lower bound.

Our proof technique also extends to show that for the task of computing the inner-product of two length-$|C|$ vectors, any MPC protocol that achieves perfect security with abort requires either $\Omega(|C|\cdot n)$ elements of communication or $\Omega(|C|)$ elements of preprocessing data.
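For readers unfamiliar with the preprocessing object at the center of these bounds, a plain (unpacked, single-secret) Beaver multiplication over additive shares can be sketched as follows; the packed triples over bivariate polynomials studied above generalize this idea:

```python
import random

P = 2**61 - 1  # field modulus (a Mersenne prime, chosen only for illustration)

def share(x, n=3):
    """Additively secret-share x among n parties."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Multiply shared x and y using a preprocessed triple (a, b, c = a*b).
    Parties open d = x - a and e = y - b, then derive shares of x*y locally:
    x*y = c + d*b + e*a + d*e."""
    d = reconstruct([(xs - sa) % P for xs, sa in zip(x_sh, a_sh)])
    e = reconstruct([(ys - sb) % P for ys, sb in zip(y_sh, b_sh)])
    z_sh = [(sc + d * sb + e * sa) % P for sc, sb, sa in zip(c_sh, b_sh, a_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P             # one designated party adds d*e
    return z_sh
```

Opening d and e leaks nothing because a and b are uniformly random; the cost of preparing (a, b, c) is precisely what the lower bound above constrains.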
Michele Ciampi, Yun Lu, Rafail Ostrovsky, Vassilis Zikas
ePrint Report
Common blockchain protocols are monolithic, i.e., their security relies on a single assumption, e.g., honest majority of hashing power (Bitcoin) or stake (Cardano, Algorand, Ethereum). In contrast, so-called optimistic approaches (Thunderella, Meshcash) rely on a combination of assumptions to achieve faster transaction liveness.

We revisit, redesign, and augment the optimistic paradigm to a tiered approach. Our design assumes a primary (Tier 1) and a secondary (Tier 2, also referred to as fallback) blockchain, and achieves full security also in a tiered fashion: If the assumption underpinning the primary chain holds, then we guarantee safety, liveness and censorship resistance, irrespective of the status of the fallback chain. And even if the primary assumption fails, all security properties are still satisfied (albeit with a temporary slow down) provided the fallback assumption holds. To our knowledge, no existing optimistic or tiered approach preserves both safety and liveness when any one of its underlying blockchains (assumptions) fails. The above is achieved by a new detection-and-recovery mechanism that links the two blockchains, so that any violation of safety, liveness, or censorship resistance on the (faster) primary blockchain is temporary—it is swiftly detected and recovered on the secondary chain—and thus cannot result in a persistent fork or halt of the blockchain ledger.

We instantiate the above paradigm using a primary chain based on proof of reputation (PoR) and a fallback chain based on proof of stake (PoS). Our construction uses the PoR and PoS blockchains in a mostly black-box manner—where rather than assuming a concrete construction we distill abstract properties on the two blockchains that are sufficient for applying our tiered methodology. In fact, choosing reputation as the resource of the primary chain opens the door to an incentive mechanism—which we devise and analyze—that tokenizes reputation in order to deter cheating and boost participation (on both the primary/PoR and the fallback/PoS blockchain). As we demonstrate, such tokenization in combination with interpreting reputation as a built-in system-wide credit score, allows for embedding in our two-tiered methodology a novel mechanism which provides collateral-free, multi-use payment-channel-like functionality where payments can be instantly confirmed.
Chen-Da Liu-Zhang, Christian Matt, Søren Eller Thomsen
ePrint Report
Message dissemination is a fundamental building block in distributed systems and guarantees that any message sent eventually reaches all parties. State-of-the-art provably secure protocols for disseminating messages have a per-party communication complexity that is linear in the inverse of the fraction of parties that are guaranteed to be honest in the worst case. Unfortunately, this per-party communication complexity arises even in cases where the actual fraction of parties that behave honestly is close to 1. In this paper, we propose an optimistic message dissemination protocol that adapts to the actual conditions in which it is deployed, with optimal worst-case per-party communication complexity. Our protocol cuts the complexity of prior provably secure protocols for 49% worst-case corruption almost in half under optimistic conditions and allows practitioners to combine efficient heuristics with secure fallback mechanisms.
Lianglin Yan, Pengfei Zeng, Peizhe Song, Mingsheng Wang
ePrint Report
CKKS bootstrapping requires a significant computational overhead and modulus consumption. In this work, we improve the homomorphic linear transformation algorithm with lower time complexity and less modulus consumption.

We first propose a novel rescaling operation, called level-conserving rescaling, that acts on CoeffsToSlots for saving moduli. Secondly, we reconstruct the rotation keys and merge the plaintext-ciphertext multiplication and rescaling operations into the key-switching procedure, which reduces the time complexity of matrix-vector multiplication for matrices with $\le$64 non-zero diagonals, albeit with increased space overhead. By combining the two methods in CoeffsToSlots in a non-trivial manner, we not only further accelerate the homomorphic linear transformations and save one level of moduli, but also reduce the total size of rotation keys.

Experiments demonstrate the practicability of our techniques. Compared to the state of the art (Bossuat et al., Eurocrypt’21), our approaches: (1) increase the remaining homomorphic capacity, allowing fewer bootstrapping operations in large-depth circuit evaluation; (2) accelerate the CoeffsToSlots by a factor of 1.17$\sim$1.23 and reduce its rotation key size by 11.8$\%\sim$15.0$\%$. Furthermore, for better efficiency, we can speed up the fastest state-of-the-art bootstrapping scheme by 1.28 times at the cost of moderate additional space. The bootstrapping precision and failure probability remain identical to the previous method.
Freja Elbro, Violetta Weger
ePrint Report
The Syndrome Decoding Problem (SDP) underpins the security of most code-based cryptographic schemes, and Information Set Decoding (ISD) algorithms are the fastest known solvers for most parameter sets. While ISD is well developed in the binary setting, the landscape for non-binary ISD is less mature. Most $q$-ary methods are straightforward generalizations of their binary counterparts, with the recent projective Stern algorithm being the only exception. However, no existing algorithm is designed to leverage the specific algebraic properties of extension fields. This research gap -- highlighted by the first-round NIST PQC proposal SDitH -- motivates our central question: is decoding over an extension field fundamentally easier than over a prime field of similar size?

This work explores whether the algebraic structure of extension fields can accelerate ISD. We analyze several techniques for translating the SDP to the base field, including the expansion map, subfield subcodes, and the trace map. We also develop new BJMM variants that restrict base list vectors to “small” field elements, aiming to counter the performance loss of advanced ISD when $q$ is large.

Contrary to our initial intuition, our results provide no evidence of an asymptotic speedup, suggesting that decoding over extension fields is not easier than over prime fields. Additionally, we make two contributions of independent interest: we show that a three-level BJMM algorithm gives a slight improvement over the two-level version for small fields, and we extend Meurer’s proof to show that the complexity of advanced ISD algorithms converges to Prange’s, even when parameters grow simultaneously.
Mohammad Vaziri
ePrint Report
In this paper, we present a simple meet-in-the-middle attack that requires low data and memory resources. To evaluate the complexity of the attack, we also propose an automated tool that calculates the time, data, and memory complexities based on the suggested matching points. Our method operates at the bit level and employs a known-plaintext attack, with no constraints on the attacker's choice of data. We apply our tool on various lightweight block ciphers, including CRAFT, Midori, WARP, PRESENT, and ARADI. For CRAFT, our tool successfully identified an attack targeting 15 rounds using 3 known plaintexts. In the case of Midori64 and Midori128, the tool proposed attacks on 5 rounds with 16 known plaintexts and 7 rounds with 3 known plaintexts, respectively. For WARP, the tool discovered an attack on 18 rounds utilizing 7 known plaintexts. Additionally, for PRESENT80, the tool identified an attack on 6 rounds with 18 known plaintexts, and for ARADI, an attack on 5 rounds with 28 known plaintexts was determined.
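To make the generic idea concrete, here is a toy meet-in-the-middle key recovery against double encryption with a hypothetical 8-bit cipher (everything here is illustrative and invented for the sketch; the tool above works at the bit level against real lightweight ciphers):

```python
def toy_enc(k, x):
    """A hypothetical invertible 8-bit toy cipher (illustration only)."""
    return ((x ^ k) * 7 + k) % 256

def toy_dec(k, y):
    return (((y - k) * 183) % 256) ^ k          # 183 = 7^(-1) mod 256

def mitm(pairs):
    """Recover keys for c = toy_enc(k2, toy_enc(k1, m)) from known
    plaintext/ciphertext pairs in ~2*2^8 cipher calls instead of 2^16."""
    m0, c0 = pairs[0]
    table = {}
    for k1 in range(256):                       # meet from the plaintext side
        table.setdefault(toy_enc(k1, m0), []).append(k1)
    for k2 in range(256):                       # meet from the ciphertext side
        for k1 in table.get(toy_dec(k2, c0), []):
            # filter candidates against the remaining known pairs
            if all(toy_enc(k2, toy_enc(k1, m)) == c for m, c in pairs[1:]):
                return k1, k2
```

The number of known plaintexts needed to filter false candidates is exactly the data-complexity quantity the tool computes for each matching point.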
Maxim Orlovsky
ePrint Report
The paper defines a novel type of consensus for a distributed smart contract system, named RGB, which is based on the concept of client-side validation, separating the contract state and operations from the blockchain. With this approach, contracts are sharded (each contract is a standalone shard), kept, and validated only by contract participants, providing native scalability and privacy mechanisms, exceeding all existing blockchain-based smart contract systems while not compromising on security or decentralization. The system is designed to operate on top of compatible layer-1 systems, such as a UTXO-based blockchain (e.g., Bitcoin), without relying on them for transaction ordering or state replication. Instead, RGB keeps the state client-side, operating as partially replicated state machines (PRiSM). It employs a novel SONIC (State machine with Ownership Notation Involving Capabilities) architecture, which provides capability-based access control to the contract state, individually owned and operated by well-defined contract parties via a novel single-use seal mechanism. RGB performs state validation using the zk-AluVM virtual machine, designed to support zk-STARK provers. It has a single security assumption, the collision resistance of a hash function, and is thus quantum-secure. The proposed RGB consensus is distinct from traditional blockchain-based smart contract systems; it is scalable, provably secure, and formally verifiable.
Afonso Arriaga, Manuel Barbosa, Stanislaw Jarecki
ePrint Report
Recent KEM-to-PAKE compilers follow the Encrypted Key Exchange (EKE) paradigm (or a variant thereof), where the KEM public key is password-encrypted. While constant-time implementations of KEMs typically avoid secret-dependent branches and memory accesses, this requirement does not usually extend to operations involving the expansion of the public key because public keys are generally assumed to be public. A notable example is $\mathsf{ML\textrm{-}KEM}$, which expands a short seed $\rho$ into a large matrix $\mathsf{A}$ of polynomial coefficients using rejection sampling---a process that is variable-time but usually does not depend on any secret. However, in PAKE protocols that password-encrypt the compressed public key, this introduces the risk of timing honest parties and mounting an offline dictionary attack against the measurement. This is particularly concerning given the well-known real-world impact of such attacks on PAKE protocols.

In this paper we show two approaches which yield $\mathsf{ML\textrm{-}KEM}$-based PAKEs that resist timing attacks. First, we explore constant-time alternatives to $\mathsf{ML\textrm{-}KEM}$ rejection sampling: one that refactors the original $\mathsf{SampleNTT}$ algorithm into constant-time style code, whilst preserving its functionality, and two that modify the matrix expansion procedure to abandon rejection sampling and rely instead on large-integer modular arithmetic. All the proposed constant-time algorithms are slower than the current rejection sampling implementations, but they are still reasonably fast in absolute terms. Our conclusion is that adopting constant-time methods will imply both performance penalties and difficulties in using off-the-shelf $\mathsf{ML\textrm{-}KEM}$ implementations. Alternatively, we present the first $\mathsf{ML\textrm{-}KEM}$-to-PAKE compiler that mitigates this issue by design: our proposal transmits the seed $\rho$ in the clear, decoupling password-dependent runtime variations from the matrix expansion step. This means that vanilla implementations of $\mathsf{ML\textrm{-}KEM}$ can be used as a black-box. Our new protocol $\mathsf{Tempo}$ builds on the ideas from $\mathsf{CHIC}$, which considered splitting the KEM public key, adopts the two-round Feistel approach for password encryption of the non-expandable part of the public key, and leverages the proof techniques from $\mathsf{NoIC}$ to show that, despite the malleability permitted by the two-round Feistel, it is sufficient for password extraction and protocol simulation in the UC framework.
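To see where the timing variability comes from, here is a simplified Python sketch of 12-bit rejection sampling from a seed-derived XOF stream, in the style of ML-KEM's SampleNTT (simplified for illustration; consult FIPS 203 for the exact algorithm):

```python
import hashlib

Q = 3329  # the ML-KEM modulus

def sample_poly(seed, n=256):
    """Rejection-sample n coefficients mod Q from a SHAKE-128 stream: each
    3 bytes yield two 12-bit candidates, and candidates >= Q are rejected.
    The number of bytes consumed (and hence the runtime) depends on the seed."""
    stream = hashlib.shake_128(seed).digest(2048)   # ample for this sketch
    coeffs, pos = [], 0
    while len(coeffs) < n:
        b0, b1, b2 = stream[pos], stream[pos + 1], stream[pos + 2]
        pos += 3
        d1 = b0 + 256 * (b1 % 16)                   # first 12-bit candidate
        d2 = (b1 // 16) + 16 * b2                   # second 12-bit candidate
        for d in (d1, d2):
            if d < Q and len(coeffs) < n:
                coeffs.append(d)
    return coeffs, pos                              # pos is seed-dependent
```

When the seed is itself derived from password-encrypted material, the seed-dependent loop count becomes the measurable quantity an offline dictionary attacker can test guesses against.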
Halil İbrahim Kaplan
ePrint Report
The advent of quantum computing threatens the security assumptions underpinning classical public-key cryptographic algorithms such as RSA and ECC. As a response, the cryptographic community has focused on developing quantum-resistant alternatives, with hash-based signature schemes emerging as a compelling option due to their reliance on well-understood hash functions rather than number-theoretic hardness assumptions. This paper presents a comprehensive review of hash-based signature schemes, including Lamport, WOTS, XMSS, XMSS^MT, and SPHINCS+, examining their structural design, key generation, signing, and verification processes. Emphasis is placed on their classification as stateful and stateless schemes, as well as their practical integration using Merkle trees and address structures. Furthermore, the paper analyzes several notable cryptanalytic attacks, such as intermediate value guessing, Antonov’s attack, multi-target attacks, and fault injection strategies, that pose risks to these constructions. By discussing both their strengths and vulnerabilities, this work highlights the viability of hash-based signatures as secure and efficient candidates for post-quantum digital signatures.
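As a reference point for the schemes surveyed, the simplest of them, a Lamport one-time signature, fits in a few lines of Python (a textbook sketch, not a hardened implementation):

```python
import hashlib
import secrets

def H(data):
    return hashlib.sha256(data).digest()

def keygen():
    """256 pairs of random preimages; the public key is their hashes."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(s0), H(s1)) for s0, s1 in sk]
    return sk, pk

def msg_bits(msg):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    """Reveal one preimage per digest bit. The key must never be reused."""
    return [sk[i][b] for i, b in enumerate(msg_bits(msg))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits(msg)))
```

The one-time restriction is what WOTS, XMSS, and SPHINCS+ progressively lift, via chaining, Merkle trees, and few-time signatures respectively.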

01 August 2025

Deirdre Connolly, Kathrin Hövelmanns, Andreas Hülsing, Stavros Kousidis, Matthias Meijers
ePrint Report
This work presents an exhaustive analysis of QSF, the KEM combiner used by X-Wing (Communications in Cryptology 1(1), 2024). While the X-Wing paper focuses on the applicability of QSF for combining ML-KEM-768 with X25519, we discuss its applicability for combining other post-quantum KEMs with other instantiations of ECDH.

To this end, we establish simple conditions that allow one to check whether a KEM is compatible with QSF by proving ciphertext second-preimage resistance (C2PRI) for several variants of the Fujisaki–Okamoto (FO) transform. Applying these results to post-quantum KEMs that are either standardized or under consideration for standardization, we show that QSF can also be used with all of these, including ML-KEM-1024, (e)FrodoKEM, HQC, Classic McEliece, and sntrup.

We also present QSI, a variation of QSF, and show that any two KEMs can be combined by hashing their concatenated keys. The result is a hybrid KEM which is IND-CCA-secure as long as one of the KEMs is IND-CCA-secure and the other C2PRI-secure.

Finally, we also analyze QSF and QSI regarding their preservation of the recently introduced family of binding properties for KEMs.
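The structural difference between the two combiner styles can be sketched as follows. This is only an illustrative shape: the label, input ordering, and domain separation are placeholders, and the paper (and X-Wing) specify the exact inputs:

```python
import hashlib

def qsf_style_combine(ss_pq, ss_ecdh, ct_ecdh, pk_ecdh, label=b"example-label"):
    """QSF-style: the post-quantum KEM's ciphertext is deliberately left out
    of the hash, which is sound only if that KEM is C2PRI-secure."""
    return hashlib.sha3_256(label + ss_pq + ss_ecdh + ct_ecdh + pk_ecdh).digest()

def qsi_style_combine(ss1, ss2, ct1, ct2):
    """QSI-style generic fallback: hash the concatenated shared keys together
    with both ciphertexts, so any two KEMs can be combined."""
    return hashlib.sha3_256(ss1 + ss2 + ct1 + ct2).digest()
```

Omitting the (large) post-quantum ciphertext from the hash is what makes QSF cheaper, and C2PRI is exactly the property that makes the omission safe.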
George Teseleanu
ePrint Report
Let $N = pq$ be the product of two balanced prime numbers $p$ and $q$. In 2023, Cotan and Teseleanu introduced a family of RSA-like cryptosystems based on the key equation $ed - k(p^n - 1)(q^n - 1) = 1$, where $n \geq 1$. Note that when $n = 1$, we obtain the classical RSA scheme, while $n = 2$ yields the variant proposed by Elkamchouchi, Elshenawy, and Shaban. In this paper, we present a novel attack that combines continued fractions with lattice-based methods for the case $n = 2^i$, where $i > 2$ is an integer. This represents a natural continuation of previous research, which successfully applied similar techniques for $n = 1, 2, 4$.
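The key equation in question can be checked directly. A toy instantiation for $n = 2$, with deliberately small balanced primes chosen only for illustration:

```python
from math import gcd

def keygen_variant(p, q, e, n=2):
    """Toy key generation for the RSA-like family with key equation
    e*d - k*(p^n - 1)*(q^n - 1) = 1; n = 1 recovers plain RSA."""
    phi_n = (p**n - 1) * (q**n - 1)
    assert gcd(e, phi_n) == 1
    d = pow(e, -1, phi_n)                   # private exponent (Python 3.8+)
    k = (e * d - 1) // phi_n
    assert e * d - k * phi_n == 1           # the key equation under attack
    return d, k

# Small balanced primes, purely for illustration:
d, k = keygen_variant(1009, 1013, 65537, n=2)
```

The attacks discussed above exploit precisely this relation: when $d$ (or $k$) is small relative to $N$, continued fractions and lattice methods can recover it from the public data.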
Dariush Abbasinezhad-Mood
ePrint Report
In smart grid (SG), key agreement protocols (KAPs) are used as one of the most prevalent means to establish secure data transmission channels between smart meters (SMs) and service providers (SPs). Quite recently, Wu et al. have indicated the vulnerability of Hu et al.'s KAP to key compromise impersonation (KCI) attack and proposed a security-enhanced one for secure communications of SMs and SPs in SG. Not to undermine the noteworthy contributions of their work, this comment demonstrates that their own KAP, i.e., Wu et al.'s scheme, is still vulnerable to the KCI attack. Accordingly, we suggest a simple modification to fix the KCI attack issue. Our attack procedure gives some delicate hints to scholars to protect their schemes against the KCI attack in future research.
Gilad Asharov, Anirudh Chandramouli, Ran Cohen, Yuval Ishai
ePrint Report
An important requirement in synchronous protocols is that, even when a party receives all its messages for a given round ahead of time, it must wait until the round officially concludes before sending its messages for the next round. In practice, however, implementations often overlook this waiting requirement. This leads to a mismatch between the security analysis and real-world deployments, giving adversaries a new, unaccounted-for capability: the ability to "peek into the future." Specifically, an adversary can force certain honest parties to advance to round $r+1$, observe their round $r+1$ messages, and then use this information to determine its remaining round $r$ messages. We refer to adversaries with this capability as "super-rushing" adversaries.

We initiate a study of secure computation in the presence of super-rushing adversaries. We focus on understanding the conditions under which existing synchronous protocols remain secure in the presence of super-rushing adversaries. We show that not all protocols remain secure in this model, highlighting a critical gap between theoretical security guarantees and practical implementations. Even worse, we show that security against super-rushing adversaries is not necessarily maintained under sequential composition.

Despite these limitations, we present a general positive result: secret-sharing based protocols in the perfect setting, such as BGW, or those that are based on multiplication triplets, remain secure against super-rushing adversaries. This general theorem effectively enhances the security of such protocols ``for free.'' It shows that these protocols do not require parties to wait for the end of a round, enabling potential optimizations and faster executions without compromising security. Moreover, it shows that there is no need to spend effort to achieve perfect synchronization when establishing the communication networks for such protocols.
Michael Schaller
ePrint Report
In this paper we introduce a rank $2$ lattice over a polynomial ring arising from the public key of the BIKE cryptosystem. The secret key is a sparse vector in this lattice. We study properties of this lattice and generalize the recovery of weak keys from "Weak keys for the quasi-cyclic MDPC public key encryption scheme". In particular, we show that they implicitly solved a shortest vector problem in the lattice we constructed. Rather than finding only a shortest vector, we obtain a reduced basis of the lattice which makes it possible to check for more weak keys.
Sergio Demian Lerner, Ariel Futoransky
ePrint Report
This paper presents FLEX (Fraud proofs with Lightweight Escrows for eXits), a garbled circuit-based protocol designed to facilitate two-party disputes on Bitcoin without requiring permanent security bonds. FLEX enables conditional security deposits that are only activated in the event of a dispute, reducing the financial overhead for both parties. The main goal of FLEX is to improve the capital efficiency of BitVM-based bridges in a permissioned challenge setting but can also be used to improve the security of any other fraud proof-based protocol such as payment channels. The paper also introduces enhancements that allow faster reimbursements in scenarios where one party's node is unavailable, while preserving security and minimizing race conditions.
Mikhail Suslov
ePrint Report
We introduce the Inverse Discrete Logarithm Problem (iDLP) framework, which inverts traditional discrete logarithm assumptions by making the exponent public but deliberately non-invertible modulo the group order, while hiding the base. This creates a many-to-one algebraic mapping that is computationally irreversible under both classical and quantum attack models.

Within this framework, we define three post-quantum cryptographic primitives: Inverse Discrete Diffie–Hellman (IDDH), Inverse Discrete Key Encapsulation (IDKE), and Inverse Discrete Data Encapsulation (IDDE). Using a 512-bit modulus (prime or semiprime), a random generator \( g \), and a public exponent \( y \) with \(\gcd(y, \varphi(m)) = 2\), the masking function \[ \mathsf{Mask}_{g,y}(x) := g^{x y} \bmod m \] induces a two-to-one mapping that renders discrete logarithm inversion infeasible.

Our security analysis shows that known quantum algorithms yield only multiple candidates, requiring exhaustive search among equivalence classes, which remains intractable at 512-bit parameters. We demonstrate efficient prototype implementations with sub-millisecond key operations and AES-GCM-level data throughput. Full source code and parameters are publicly available at \url{https://github.com/AdamaSoftware/InverseDiscrete/}.
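The claimed two-to-one behavior is easy to reproduce at toy scale: when $g$ has order $\varphi(m)$ and $\gcd(y, \varphi(m)) = 2$, inputs $x$ and $x + \varphi(m)/2$ collide. A sketch with a 5-bit modulus (the scheme itself specifies 512 bits):

```python
def mask(g, y, x, m):
    """The paper's masking function Mask_{g,y}(x) = g^(x*y) mod m."""
    return pow(g, x * y, m)

# Toy parameters: m = 23 with phi(m) = 22, g = 5 of full order 22,
# and y = 2 so that gcd(y, phi(m)) = 2.
m, g, y, phi = 23, 5, 2, 22
# Two-to-one: x and x + phi/2 map to the same value.
assert mask(g, y, 3, m) == mask(g, y, 3 + phi // 2, m)
assert mask(g, y, 3, m) != mask(g, y, 4, m)
```

The collision structure follows from $y(x - x') \equiv 0 \pmod{\varphi(m)}$ holding exactly when $x \equiv x' \pmod{\varphi(m)/2}$.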
Mehdi Beriane, Muhammed Ali Bingol
ePrint Report
Zero-knowledge rollups represent a critical scaling solution for Ethereum, yet their practical deployment faces significant challenges in on-chain verification costs. This paper presents a comprehensive implementation of the Tokamak zkEVM verifier, specifically optimized for the BLS12-381 elliptic curve operations introduced by EIP-2537. We detail the complete verification architecture, from EVM-compatible data formatting for pairing checks, multi-scalar multiplication (MSM), and elliptic curve addition, to the non-interactive protocol design between prover and verifier. Our key contribution lies in novel optimization techniques that substantially reduce on-chain verification costs. Through strategic polynomial aggregation and scalar factorization, we minimize G1 exponentiations from 40 to 31, achieving gas savings of 108,000 units per verification. Additionally, we introduce a dynamic barycentric interpolation method that replaces computationally intensive FFT operations, resulting in 92-95% gas reduction for sparse polynomial evaluations. We further present proof aggregation strategies that minimize precompile calls while maintaining the 128-bit security guarantees of BLS12-381. Our implementation demonstrates that careful protocol design and mathematical optimizations can make zk-rollup verification economically viable on Ethereum. The techniques presented are compatible with the upcoming Pectra upgrade and provide a blueprint for efficient on-chain verification of complex zero-knowledge proofs. Experimental results show total gas costs reduced from 857,200 to 748,450 units for complete proof verification, making our approach practical for high-throughput rollup deployments.
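The barycentric idea applied above can be sketched generically. This is the textbook second barycentric form over a small prime field, not the paper's dynamic BLS12-381 variant:

```python
def bary_eval(xs, ys, z, p):
    """Evaluate the unique polynomial through (xs[i], ys[i]) at z, mod p,
    via the second barycentric formula: O(n) per point once the weights
    are known, with no FFT required."""
    if z in xs:
        return ys[xs.index(z)]
    # Barycentric weights w_i = prod_{j != i} (x_i - x_j)^(-1)
    w = []
    for i, xi in enumerate(xs):
        prod = 1
        for j, xj in enumerate(xs):
            if i != j:
                prod = prod * (xi - xj) % p
        w.append(pow(prod, -1, p))
    num = den = 0
    for xi, yi, wi in zip(xs, ys, w):
        t = wi * pow(z - xi, -1, p) % p
        num = (num + t * yi) % p
        den = (den + t) % p
    return num * pow(den, -1, p) % p
```

For sparse evaluation sets, avoiding a full-domain FFT in favor of per-point formulas like this is what drives the reported gas savings.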