International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

04 August 2025

Juliane Krämer, Patrick Struck, Maximiliane Weishäupl
ePrint Report
In this note we analyze the various binding properties of combiners for KEMs. We show that several properties follow easily for the most general combiner—assuming a collision-resistant hash function—while more performance-oriented versions require the respective property from one or both of the underlying KEMs. Other properties are not obtained as directly and require either more properties of one underlying KEM or the respective property from both KEMs.
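For intuition, here is a minimal sketch of the general combiner shape discussed here, where a collision-resistant hash binds the derived key to both shared secrets and both ciphertexts (illustrative only; the function name and input encoding are ours, not the paper's):

```python
# Minimal sketch (not the paper's exact construction): the "most general"
# KEM combiner derives the shared key by hashing both shared secrets
# together with both ciphertexts, so a collision-resistant hash binds
# the output key to the full transcript.
import hashlib

def combine(k1: bytes, k2: bytes, ct1: bytes, ct2: bytes) -> bytes:
    """Hypothetical combiner: K = H(K1 || K2 || ct1 || ct2)."""
    h = hashlib.sha3_256()
    for part in (k1, k2, ct1, ct2):
        # length-prefix each input to avoid concatenation ambiguity
        h.update(len(part).to_bytes(4, "big"))
        h.update(part)
    return h.digest()
```

Performance-oriented variants drop some of these hash inputs, which is exactly where the extra assumptions on the underlying KEMs become necessary.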
Seyoung Yoon, Gyeongju Song, Kyungbae Jang, Sangmin Cha, Hwajeong Seo
ePrint Report
As quantum computing technology rapidly advances, threats to existing symmetric-key and public-key cryptosystems are becoming increasingly real. In this study, we implement a SHA-1 quantum circuit that operates efficiently in a quantum computing environment. We optimize the quantum circuit, focusing on minimizing total circuit depth, a key performance indicator of quantum algorithms. The SHA-1 quantum circuit implementation used 985 qubits, resulting in a measured circuit depth of 9,026. Furthermore, by integrating this optimized circuit with the Grover algorithm, we establish the foundation for an efficient quantum attack on the SHA-1 algorithm. This research is significant not only because it presents a resource-efficient SHA-1 quantum implementation but also because it enables accelerated attacks in a quantum computing environment.
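For context, a back-of-envelope estimate (ours, using only the depth quoted above) of what such a circuit implies for a full Grover preimage search on SHA-1, assuming roughly two circuit evaluations per Grover iteration:

```python
# Back-of-envelope sketch, not from the paper: estimate the total Grover
# depth for a SHA-1 preimage search, assuming each Grover iteration costs
# roughly two evaluations of the reported depth-9,026 SHA-1 circuit.
import math

SHA1_DEPTH = 9_026          # circuit depth reported in the abstract
SEARCH_SPACE_BITS = 160     # SHA-1 output length

iterations = math.floor(math.pi / 4 * math.sqrt(2 ** SEARCH_SPACE_BITS))
total_depth = iterations * 2 * SHA1_DEPTH  # crude per-iteration assumption

print(f"Grover iterations ~ 2^{math.log2(iterations):.1f}")
print(f"total depth       ~ 2^{math.log2(total_depth):.1f}")
```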
Dan Boneh, Joachim Neu, Valeria Nikolaenko, Aditi Partap
ePrint Report
Data availability sampling (DAS) is an important technique to horizontally scale consensus protocols without compromising on the number of adversarial nodes that can be tolerated. DAS is on the technical roadmap of major blockchains such as Ethereum. A major challenge for DAS schemes that has not been formally studied in the literature is how incomplete shares can be repaired. The need for repairing data shares motivates key aspects of Ethereum's DAS-based sharding vision called "Danksharding". In this work, we make two contributions. First, we provide a new definitional framework that formalizes the notion of repair, along with the security guarantees that a DAS scheme must provide. Second, we propose a new DAS scheme designed with efficient repair in mind, based on locally-correctable multiplicity codes. To facilitate using these codes, we introduce a new multivariate polynomial commitment scheme that (i) supports efficient openings of partial derivatives of a committed polynomial, (ii) supports fast batch opening proof generation at many points, and (iii) has an algorithm to recompute (repair) opening proofs at a point from only a few other proofs. The proposed scheme improves upon the state-of-the-art Ethereum Fulu DAS scheme, slated for deployment in late 2025/early 2026, in storage overhead, repair bandwidth and coordination, while only slightly increasing dispersal cost and sampling bandwidth. Our techniques readily carry over to data availability schemes based on verifiable information dispersal (VID).
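For background, a toy illustration of the repair idea using a plain Reed-Solomon code (the paper instead uses locally-correctable multiplicity codes, which repair from far fewer symbols; all parameters below are hypothetical):

```python
# Illustrative sketch only (plain Reed-Solomon, not the paper's
# locally-correctable multiplicity codes): data is interpreted as a
# degree-(k-1) polynomial over a prime field, shares are evaluations,
# and any k surviving shares repair a missing one by interpolation.
P = 2**31 - 1  # toy prime field modulus

def eval_poly(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def interpolate_at(points, x0):
    """Lagrange-interpolate the value at x0 from (x, y) pairs."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x0 - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

data = [5, 17, 42]                                     # k = 3 data symbols
shares = {x: eval_poly(data, x) for x in range(1, 7)}  # n = 6 shares
lost = 4
repaired = interpolate_at([(x, shares[x]) for x in (1, 2, 3)], lost)
assert repaired == shares[lost]   # any k shares repair a lost one
```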
Matteo Campanelli, Dario Fiore, Mahak Pancholi
ePrint Report
Incrementally Verifiable Computation (IVC) allows one to prove the correctness of a computation of potentially unbounded length in an incremental way, while a computationally weak client can efficiently check its correctness in time sublinear in the computation's length. IVC is particularly useful in several real-world applications such as scalable blockchains, distributed computation, and verifiable machine learning. Yet, most existing IVC schemes are only provably secure for constant-depth computations. Arguing their security for computations of polynomial depth relies on heuristic assumptions, raising both theoretical and practical concerns.

In this work, we delve into the security foundations of incremental proof systems, addressing two main questions.

First, we revisit the security analysis, in the unbounded-depth regime, of the canonical construction of IVC based on the recursive composition of SNARKs. We extend this analysis to include SNARKs that are straightline extractable in the algebraic group model (AGM) and some additional oracle model. As a consequence of our result, we obtain novel instantiations of IVC for unbounded-depth computations based on AGM-based SNARKs, such as Groth16 or Marlin, to name a few—an important class of SNARKs not captured by similar analyses in prior work [Chiesa et al. TCC 2024].

Second, we consider incremental proof systems for arbitrary depth computations in which full-blown extractability is not necessary. We study under what conditions they can be instantiated from the recursive composition of "plain" building blocks (SNARKs, folding, accumulation schemes), that is without requiring special straightline extractability. We introduce incremental functional commitments (incremental FC), a primitive that allows one to commit to a large data $D$ and later prove a function $f(D)$. The key aspect is that both the committing and proving functionalities operate incrementally, processing $D$ in a streaming, piece-by-piece manner. Also, like in standard FCs, their security property is a form of evaluation binding, a notion that is weaker than knowledge-soundness (it states that it is hard to produce two valid proofs for the same commitment and two distinct outputs).

Our second main result consists of a construction of incremental FCs based on recursive composition of SNARKs and its security analysis, which shows that arbitrarily deep compositions of primitives with non-straightline extractors do not suffer from inherent security limitations.
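To make the streaming aspect concrete, here is a minimal interface sketch (ours, not the paper's construction) of a commitment that absorbs the data $D$ piece by piece:

```python
# Interface sketch only (not the paper's construction): an "incremental
# commitment" processes the data D in a streaming fashion, folding one
# piece at a time into a running digest, so neither committing nor
# proving ever needs all of D in memory at once.
import hashlib

class IncrementalCommitment:
    def __init__(self):
        self.state = b"\x00" * 32   # initial commitment state

    def absorb(self, piece: bytes) -> None:
        # fold the next data piece into the running commitment
        self.state = hashlib.sha3_256(self.state + piece).digest()

    def commitment(self) -> bytes:
        return self.state

com = IncrementalCommitment()
for piece in (b"chunk-0", b"chunk-1", b"chunk-2"):   # stream over D
    com.absorb(piece)
print(com.commitment().hex())
```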

03 August 2025

Yalan Wang, Liqun Chen, Yangguang Tian, Long Meng, Christopher J.P. Newton
ePrint Report
The World Wide Web Consortium (W3C) has established standards for decentralized identities (DIDs) and verifiable credentials (VCs). A DID serves as a unique identifier for an entity, while a VC validates specific attributes associated with the DID holder. To prove ownership of credentials, users generate verifiable presentations (VPs). To enhance privacy, the W3C standards advocate for randomizable signatures in VC creation and zero-knowledge proofs for VP generation. However, these standards face a significant limitation: they cannot effectively verify cross-domain credentials while maintaining anonymity. In this paper, we present Anonymous Verifiable Presentations with Extended Usability (AVPEU), a novel framework that addresses this limitation through the introduction of a notary system. At the technical core of AVPEU lies our proposed randomizable message-hiding signature scheme. We provide both a generic construction of AVPEU and specific implementations based on Boneh-Boyen-Shacham (BBS), Camenisch-Lysyanskaya (CL), and Pointcheval-Sanders (PS) signatures. Our experimental results demonstrate the feasibility of these schemes.
Yalan Wang, Bryan Kumara, Harsh Kasyap, Liqun Chen, Sumanta Sarkar, Christopher J.P. Newton, Carsten Maple, Ugur Ilker Atmaca
ePrint Report
All-but-one Vector Commitments (AVCs) allow a committed vector to be verified by randomly opening all but one of the committed values. Typically, AVCs are instantiated using Goldwasser-Goldreich-Micali (GGM) trees. Generating these trees accounts for a significant share of the computational cost of AVCs due to the large number of hash function calls. Recently, correlated GGM (cGGM) trees were proposed to halve the number of hash calls, and Batched AVCs (BAVCs) using one large GGM tree were integrated into FAEST to form version 2 of the FAEST signature scheme, which improves efficiency and reduces the signature size. However, further optimizations of BAVC schemes remain possible. Inspired by the large-GGM-based BAVC and the cGGM tree, this paper proposes BACON, a BAVC-with-aborts scheme built on a large cGGM tree. BACON executes multiple instances of AVC in a single batch and enables an abort mechanism to probabilistically reduce the commitment size. We prove that BACON is secure in the ideal cipher model and the random oracle model. We also discuss a possible application of BACON, namely FAEST version 2. Furthermore, because the number of hash calls in a large cGGM tree is half that of a large GGM tree, BACON is theoretically more efficient than the state-of-the-art BAVC scheme.
Mirza Ahad Baig, Christoph Ullrich Günther, Krzysztof Pietrzak
ePrint Report
The blocks in the Bitcoin blockchain record the amount of work W that went into creating them through proofs of work. When honest parties control a majority of the work, consensus is achieved by picking the chain with the highest recorded weight. Resources other than work have been considered to secure such longest-chain blockchains. In Chia, blocks record the amount of disk-space S (via a proof of space) and sequential computational steps V (through a VDF).

In this paper, we ask what weight functions Γ(S,V,W) (that assign a weight to a block as a function of the recorded space, speed, and work) are secure in the sense that whenever the weight of the resources controlled by honest parties is larger than the weight of adversarial parties, the blockchain is secure against private double-spending attacks.

We completely classify such functions in an idealized “continuous” model: Γ(S,V,W) is secure against private double-spending attacks if and only if it is homogeneous of degree one in the timed resources V and W, i.e., αΓ(S,V,W)=Γ(S,α V, α W). This includes the Bitcoin rule Γ(S,V,W)=W and the Chia rule Γ(S,V,W) = S · V. In a more realistic model where blocks are created at discrete time-points, one additionally needs some mild assumptions on the dependency on S (basically, the weight should not grow too much if S is slightly increased, say linear as in Chia).

Our classification is more general and allows various instantiations of the same resource. It provides a powerful tool for designing new longest-chain blockchains. E.g., consider combining different PoWs to counter centralization, say the Bitcoin PoW W_1 and a memory-hard PoW W_2. Previous work suggested using W_1+W_2 as the weight. Our results show that, e.g., √(W_1)·√(W_2) or min{W_1,W_2} are also secure, and we argue that in practice these are much better choices. A numeric sanity check of the criterion appears below.
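The classification is easy to check numerically; this sketch (with arbitrary toy values of our choosing) verifies degree-one homogeneity in the timed resources for the rules mentioned above, treating the two PoWs W_1, W_2 as the timed inputs in the last two rules:

```python
# Numeric sanity check of the stated criterion: Gamma(S, V, W) is secure
# (in the idealized model) iff it is homogeneous of degree one in the
# timed resources, i.e. a * Gamma(S, V, W) = Gamma(S, a*V, a*W).
import math

rules = {
    "Bitcoin  W":        lambda S, V, W: W,
    "Chia     S*V":      lambda S, V, W: S * V,
    "sqrt(W1)*sqrt(W2)": lambda S, V, W: math.sqrt(V) * math.sqrt(W),  # V, W play W_1, W_2
    "min{W1, W2}":       lambda S, V, W: min(V, W),                    # V, W play W_1, W_2
}

S, V, W, a = 3.0, 5.0, 7.0, 2.5   # arbitrary toy values
for name, g in rules.items():
    homogeneous = math.isclose(a * g(S, V, W), g(S, a * V, a * W))
    print(f"{name:20s} homogeneous of degree one: {homogeneous}")
```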
Sofiane Azogagh, Zelma Aubin Birba, Sébastien Gambs, Marc-Olivier Killijian
ePrint Report
While the use of homomorphic encryption (HE) for encrypted inference has received considerable attention, its application to the training of machine learning (ML) models remains comparatively underexplored, primarily due to the high computational overhead traditionally associated with fully homomorphic encryption (FHE). In this work, we address this challenge by leveraging the inherent connection between inference and training in the context of Extremely Randomized Trees (ERT), thereby enabling efficient training directly over encrypted data. More precisely, we instantiate this approach by training ERTs within the TFHE framework. Our implementation demonstrates that it is possible to train ERTs on encrypted datasets with a runtime significantly lower than current state-of-the-art methods for training Random Forests in the encrypted domain while achieving comparable predictive accuracy. This result highlights a promising direction for practical privacy-preserving machine learning using FHE. Our second main contribution is to leverage the properties of ERTs to create the first ML model that enables private unlearning. This approach makes the unlearning process indistinguishable from training, thus allowing clients to conceal the true nature of the operations being conducted on the model.
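For background, the property of Extremely Randomized Trees that makes this connection plausible is that split thresholds are drawn at random rather than optimized against the data; a plaintext sketch (illustrative only; the paper performs this logic under TFHE):

```python
# Minimal sketch of the key ERT property: split thresholds are random,
# not optimized, so choosing a split needs no data-dependent search --
# the step that makes encrypted training (and training-indistinguishable
# unlearning) plausible. Plaintext illustration only.
import random

def random_split(points, labels, lo, hi):
    """Pick an extremely randomized split on a single feature."""
    threshold = random.uniform(lo, hi)          # random, not optimized
    left = [y for x, y in zip(points, labels) if x < threshold]
    right = [y for x, y in zip(points, labels) if x >= threshold]
    return threshold, left, right

xs = [0.1, 0.4, 0.35, 0.8, 0.9]
ys = [0, 0, 0, 1, 1]
t, left, right = random_split(xs, ys, min(xs), max(xs))
print(f"threshold={t:.2f} left={left} right={right}")
```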
Vincenzo Botta, Simone Bottoni, Matteo Campanelli, Emanuele Ragnoli, Alberto Trombetta
ePrint Report
Verifiable Databases (VDBs) enable clients to delegate storage to a provider without having to trust it: given a claimed response to a query, they can check its correctness by holding a short digest of the database and a small related certificate (a proof). The foundational role of databases and the increasing trend of storage delegation make this an important primitive. Existing VDB approaches face fundamental tradeoffs (on which we improve in this work). A line of work on VDB designs has leveraged general-purpose proof systems (SNARKs). The resulting constructions are expressive—they support a large class of queries—but require cumbersome intermediate representations, often rely on heuristic assumptions, and are overall very complex. Other prior approaches adopted cleverly combined specialized authenticated data structures (e.g., set accumulators). These designs tend to be simple, elegant, and based on well-founded cryptographic assumptions; however, they have limited expressivity and some undesirable efficiency features (e.g., scaling quadratically in the number of columns).

We present $\mathsf{qedb}$, a novel construction for verifiable databases that addresses these limitations. $\mathsf{qedb}$ supports more expressive queries than previous approaches based on specialized data structures. At the same time, it preserves their simplicity and improves on several of their limitations: it removes the quadratic dependency on the number of columns present in state-of-the-art constructions, and its proof sizes are completely independent of the database size (without requiring any circuit representation).

One of our primary contributions is a foundational framework that cleanly separates VDB logic from cryptographic instantiations. At its essence, it resembles other common information-theoretic frameworks, such as Polynomial Interactive Oracle Proofs (PIOPs). At the same time, it diverges from existing approaches by being slightly specialized for the database setting. We demonstrate how to instantiate our framework using modern pairing-based linear-map vector commitments and set accumulators. More generally, we show that our building blocks can be derived from extractable homomorphic polynomial commitments. Being modular, our approach permits alternative instantiations, such as with lattice-based polynomial commitments enabling post-quantum security.

We implemented $\mathsf{qedb}$ in Rust and experimentally showed that it efficiently scales to datasets with millions of rows while maintaining competitive proving and verification times. This evidence indicates that our approach provides a foundation for practical, secure, and expressive verifiable database systems.
Florian Krieger, Florian Hirner, Ahmet Can Mert, Sujoy Sinha Roy
ePrint Report
Fully Homomorphic Encryption (FHE) and Post-Quantum Cryptography (PQC) involve polynomial multiplications, which are a common performance bottleneck. To resolve this bottleneck, polynomial multiplications are often accelerated in hardware using the Number-Theoretic Transform (NTT) or the Fast Fourier Transform (FFT). In particular, the NTT operates over modular rings while the FFT operates over the complex numbers. NTT and FFT are widely deployed in applications with diverse parameter sets, leading to long design times for hardware accelerators. Existing hardware generation tools have limited functionality since they do not support generic on-the-fly twiddle factor generation or different memory-related optimizations. This paper improves the hardware design process and presents a generic and flexible tool to generate FFT and NTT architectures. In contrast to prior work, we combine on-the-fly twiddle factor generation and stall-free memory accesses. Moreover, we enhance hardware design flexibility through memory-optimized or routing-optimized design strategies. While our memory-optimized strategy minimizes twiddle factors in ROM, our routing-optimized strategy allows significantly higher clock frequencies on FPGAs. These optimization strategies allow effective customization of NTT/FFT architectures, spanning from low-end PQC to high-end FHE accelerators. Compared to existing works, we reach up to 15.9x lower latency and up to 7.4x improved ATP for FFT applications such as the Falcon signature scheme. Considering other NTT tools, we decrease latency by up to 1.8x and 2x for PQC and FHE parameter sets, respectively.
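To illustrate the on-the-fly twiddle-factor idea in software terms (a simplified sketch with toy parameters of our choosing, not the generated hardware): each NTT stage derives its twiddles by successive multiplication from a single stage constant instead of reading a precomputed ROM table.

```python
# Simplified software model of an NTT butterfly network with on-the-fly
# twiddle generation (toy parameters; the paper generates FPGA hardware).
def ntt(a, q, omega):
    n = len(a)
    j = 0
    for i in range(1, n):                  # bit-reverse permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        # on-the-fly twiddles: one pow() per stage, then successive
        # multiplications instead of a precomputed ROM table
        w_stage = pow(omega, n // length, q)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % q
                a[k], a[k + length // 2] = (u + v) % q, (u - v) % q
                w = w * w_stage % q
        length <<= 1
    return a

q, n = 257, 8
omega = pow(3, (q - 1) // n, q)            # primitive n-th root of unity mod q
a = [1, 2, 3, 4, 5, 6, 7, 8]
out = ntt(a[:], q, omega)
naive = [sum(x * pow(omega, i * k, q) for i, x in enumerate(a)) % q
         for k in range(n)]
assert out == naive                        # matches the O(n^2) definition
```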
Yifan Song, Xiaxi Ye
ePrint Report
In this work, we study the communication complexity of MPC achieving perfect security with optimal resilience ($t < n/3$).
On the positive side, we construct a perfectly secure MPC protocol for SIMD circuits with a communication complexity of $O(|C|)$ elements assuming preprocessing data of size $O(|C|)$, where the preprocessing data consists of packed Beaver triples over bivariate polynomials. Furthermore, we show that packed Beaver triples over bivariate polynomials can be prepared at an amortized cost of $O(1)$ elements plus $O(1)$ three-party Beaver triples per secret.

On the negative side, we establish a communication lower bound proving that preparing packed Beaver triples over bivariate polynomials requires at least $\Omega(n)$ elements of communication per secret. This lower bound is derived by first proving a communication lower bound for verifying the correctness of packed Beaver triples with perfect security with abort, and then efficiently reducing the task of verifying packed Beaver triples to preparing packed Beaver triples over bivariate polynomials. To match this bound, we give a concrete construction for packed Beaver triples over bivariate polynomials with $O(n)$ elements per secret, demonstrating the tightness of our lower bound.

Our proof technique also extends to show that for the task of computing the inner-product of two length-$|C|$ vectors, any MPC protocol that achieves perfect security with abort requires either $\Omega(|C|\cdot n)$ elements of communication or $\Omega(|C|)$ elements of preprocessing data.
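For readers unfamiliar with the preprocessing object, here is a sketch of the standard (unpacked, univariate) Beaver-triple multiplication that the packed bivariate triples above generalize:

```python
# Background sketch of what a Beaver triple buys you (the standard
# technique, not this paper's packed/bivariate construction): given
# preprocessed values (a, b, c) with c = a*b, two secrets x and y are
# multiplied by opening only the maskings e = x - a and d = y - b.
P = 2**61 - 1  # toy prime field

def beaver_multiply(x, y, a, b, c):
    e = (x - a) % P          # opened publicly
    d = (y - b) % P          # opened publicly
    # x*y = c + e*b + d*a + e*d  (all arithmetic over the field)
    return (c + e * b + d * a + e * d) % P

a, b = 123456, 789012
c = a * b % P                # the preprocessed triple
x, y = 31337, 42424242
assert beaver_multiply(x, y, a, b, c) == x * y % P
```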
Michele Ciampi, Yun Lu, Rafail Ostrovsky, Vassilis Zikas
ePrint Report
Common blockchain protocols are monolithic, i.e., their security relies on a single assumption, e.g., honest majority of hashing power (Bitcoin) or stake (Cardano, Algorand, Ethereum). In contrast, so-called optimistic approaches (Thunderella, Meshcash) rely on a combination of assumptions to achieve faster transaction liveness.

We revisit, redesign, and augment the optimistic paradigm to a tiered approach. Our design assumes a primary (Tier 1) and a secondary (Tier 2, also referred to as fallback) blockchain, and achieves full security also in a tiered fashion: If the assumption underpinning the primary chain holds, then we guarantee safety, liveness and censorship resistance, irrespective of the status of the fallback chain. And even if the primary assumption fails, all security properties are still satisfied (albeit with a temporary slow down) provided the fallback assumption holds. To our knowledge, no existing optimistic or tiered approach preserves both safety and liveness when any one of its underlying blockchains (assumptions) fails. The above is achieved by a new detection-and-recovery mechanism that links the two blockchains, so that any violation of safety, liveness, or censorship resistance on the (faster) primary blockchain is temporary—it is swiftly detected and recovered on the secondary chain—and thus cannot result in a persistent fork or halt of the blockchain ledger.

We instantiate the above paradigm using a primary chain based on proof of reputation (PoR) and a fallback chain based on proof of stake (PoS). Our construction uses the PoR and PoS blockchains in a mostly black-box manner—where rather than assuming a concrete construction we distill abstract properties on the two blockchains that are sufficient for applying our tiered methodology. In fact, choosing reputation as the resource of the primary chain opens the door to an incentive mechanism—which we devise and analyze—that tokenizes reputation in order to deter cheating and boost participation (on both the primary/PoR and the fallback/PoS blockchain). As we demonstrate, such tokenization in combination with interpreting reputation as a built-in system-wide credit score, allows for embedding in our two-tiered methodology a novel mechanism which provides collateral-free, multi-use payment-channel-like functionality where payments can be instantly confirmed.
Chen-Da Liu-Zhang, Christian Matt, Søren Eller Thomsen
ePrint Report
Message dissemination is a fundamental building block in distributed systems and guarantees that any message sent eventually reaches all parties. State-of-the-art provably secure protocols for disseminating messages have a per-party communication complexity that is linear in the inverse of the fraction of parties that are guaranteed to be honest in the worst case. Unfortunately, this per-party communication complexity arises even in cases where the actual fraction of parties that behave honestly is close to 1. In this paper, we propose an optimistic message dissemination protocol that adapts to the actual conditions in which it is deployed, with optimal worst-case per-party communication complexity. Our protocol cuts the complexity of prior provably secure protocols for 49% worst-case corruption almost in half under optimistic conditions and allows practitioners to combine efficient heuristics with secure fallback mechanisms.
Lianglin Yan, Pengfei Zeng, Peizhe Song, Mingsheng Wang
ePrint Report
CKKS bootstrapping requires a significant computational overhead and modulus consumption. In this work, we improve the homomorphic linear transformation algorithm with lower time complexity and less modulus consumption.

We first propose a novel rescaling operation, called level-conserving rescaling, that acts on CoeffsToSlots to save moduli. Second, we reconstruct the rotation keys and merge the plaintext-ciphertext multiplication and rescaling operations into the key-switching procedure, which reduces the time complexity of matrix-vector multiplication for matrices with $\le$64 non-zero diagonals, albeit with increased space overhead. By combining the two methods in CoeffsToSlots in a non-trivial manner, we not only further accelerate the homomorphic linear transformations and save one level of moduli, but also reduce the total size of rotation keys.

Experiments demonstrate the practicability of our techniques. Compared to the state of the art (Bossuat et al., Eurocrypt’21), our approaches: (1) increase the remaining homomorphic capacity, allowing fewer bootstrapping operations in large-depth circuit evaluation; (2) accelerate CoeffsToSlots by a factor of 1.17$\sim$1.23 and reduce its rotation key size by 11.8$\%\sim$15.0$\%$. Furthermore, for better efficiency, we can speed up the fastest state-of-the-art bootstrapping scheme by 1.28 times at the cost of moderate additional space. The bootstrapping precision and failure probability remain identical to those of previous methods.
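For context, here is the textbook rescaling operation that the proposed level-conserving variant modifies (a toy integer sketch of our own; real CKKS rescales polynomial coefficients under an RNS modulus chain):

```python
# Standard CKKS rescaling for context (the paper's "level-conserving
# rescaling" is a new variant; this is only the textbook operation):
# after a multiplication the scale grows from Delta to Delta^2, and
# rescaling divides the ciphertext by one modulus q_l to bring it back.
DELTA = 2**40            # encoding scale
Q_L = 2**40 + 2**10      # toy rescaling modulus (a prime in real CKKS)

def rescale(coeff: int, q_l: int) -> int:
    """Divide-and-round a ciphertext coefficient by q_l, dropping a level."""
    return (coeff + q_l // 2) // q_l

m1, m2 = 3.5, 2.25
c = round(m1 * DELTA) * round(m2 * DELTA)   # after mult: scale is DELTA^2
c = rescale(c, Q_L)                         # scale back to ~DELTA^2 / Q_L
print(c * Q_L / DELTA**2)                   # ~7.875 = m1 * m2
```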
Freja Elbro, Violetta Weger
ePrint Report
The Syndrome Decoding Problem (SDP) underpins the security of most code-based cryptographic schemes, and Information Set Decoding (ISD) algorithms are the fastest known solvers for most parameter sets. While ISD is well developed in the binary setting, the landscape for non-binary ISD is less mature. Most $q$-ary methods are straightforward generalizations of their binary counterparts, with the recent projective Stern algorithm being the only exception. However, no existing algorithm is designed to leverage the specific algebraic properties of extension fields. This research gap -- highlighted by the first-round NIST PQC proposal SDitH -- motivates our central question: is decoding over an extension field fundamentally easier than over a prime field of similar size?

This work explores whether the algebraic structure of extension fields can accelerate ISD. We analyze several techniques for translating the SDP to the base field, including the expansion map, subfield subcodes, and the trace map. We also develop new BJMM variants that restrict base list vectors to “small” field elements, aiming to counter the performance loss of advanced ISD when $q$ is large.

Contrary to our initial intuition, our results provide no evidence of an asymptotic speedup, suggesting that decoding over extension fields is not easier than over prime fields. Additionally, we make two contributions of independent interest: we show that a three-level BJMM algorithm gives a slight improvement over the two-level version for small fields, and we extend Meurer’s proof to show that the complexity of advanced ISD algorithms converges to Prange’s, even when parameters grow simultaneously.
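For background, here is a toy GF(2) implementation of Prange's algorithm, the baseline that BJMM-style algorithms (and the extension-field variants studied here) improve upon; all sizes are illustrative:

```python
# Background: Prange's information-set decoding over GF(2), the baseline
# that advanced ISD refines (toy sizes; rows of H are Python int bitmasks).
import random

def syndrome(rows, e):
    s = 0
    for k, row in enumerate(rows):
        s |= (bin(row & e).count("1") & 1) << k   # parity of <row, e>
    return s

def solve_gf2(rows, cols, s):
    """Solve H[:, cols] * x = s over GF(2); None if the submatrix is singular."""
    eqs = [(sum(((r >> c) & 1) << i for i, c in enumerate(cols)), (s >> k) & 1)
           for k, r in enumerate(rows)]
    for i in range(len(cols)):                    # Gauss-Jordan elimination
        piv = next((j for j in range(i, len(eqs)) if (eqs[j][0] >> i) & 1), None)
        if piv is None:
            return None
        eqs[i], eqs[piv] = eqs[piv], eqs[i]
        for j in range(len(eqs)):
            if j != i and (eqs[j][0] >> i) & 1:
                eqs[j] = (eqs[j][0] ^ eqs[i][0], eqs[j][1] ^ eqs[i][1])
    return sum(eqs[i][1] << i for i in range(len(cols)))

def prange(rows, n, s, w, tries=10_000):
    for _ in range(tries):
        cols = random.sample(range(n), len(rows))  # guessed error support
        x = solve_gf2(rows, cols, s)
        if x is not None and bin(x).count("1") <= w:
            e = 0
            for i, c in enumerate(cols):
                e |= ((x >> i) & 1) << c
            return e
    return None

random.seed(1)
n, r, w = 12, 6, 2
rows = [random.getrandbits(n) for _ in range(r)]
e_true = (1 << 3) | (1 << 9)                      # planted weight-2 error
s = syndrome(rows, e_true)
e = prange(rows, n, s, w)
assert e is not None and bin(e).count("1") <= w and syndrome(rows, e) == s
```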
Mohammad Vaziri
ePrint Report
In this paper, we present a simple meet-in-the-middle attack that requires low data and memory resources. To evaluate the complexity of the attack, we also propose an automated tool that calculates the time, data, and memory complexities based on the suggested matching points. Our method operates at the bit level and employs a known-plaintext attack, with no constraints on the attacker's choice of data. We apply our tool on various lightweight block ciphers, including CRAFT, Midori, WARP, PRESENT, and ARADI. For CRAFT, our tool successfully identified an attack targeting 15 rounds using 3 known plaintexts. In the case of Midori64 and Midori128, the tool proposed attacks on 5 rounds with 16 known plaintexts and 7 rounds with 3 known plaintexts, respectively. For WARP, the tool discovered an attack on 18 rounds utilizing 7 known plaintexts. Additionally, for PRESENT80, the tool identified an attack on 6 rounds with 18 known plaintexts, and for ARADI, an attack on 5 rounds with 28 known plaintexts was determined.
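For intuition, here is the classic meet-in-the-middle attack on double encryption, with a toy invertible 16-bit round function of our own (the paper's tool instead searches for bit-level matching points inside real ciphers):

```python
# The classic double-encryption meet-in-the-middle, for intuition only.
# The toy round function and keys below are ours, not from the paper.
MASK, INV5 = 0xFFFF, 52429               # 5 * 52429 = 1 (mod 2^16)

def enc(k, x): return (((x ^ k) * 5) + k) & MASK
def dec(k, y): return ((((y - k) & MASK) * INV5) & MASK) ^ k

k1, k2 = 0x1234, 0xBEEF
plain = 0x0042
cipher = enc(k2, enc(k1, plain))         # double encryption

forward = {}                             # intermediate value -> inner keys
for g in range(2**16):
    forward.setdefault(enc(g, plain), []).append(g)

matches = [(g1, g2) for g2 in range(2**16)
           for g1 in forward.get(dec(g2, cipher), [])]
assert (k1, k2) in matches               # ~2^17 work instead of 2^32
# a second plaintext/ciphertext pair would filter the surviving candidates
```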
Maxim Orlovsky
ePrint Report
The paper defines a novel type of consensus for a distributed smart contract system, named RGB, based on the concept of client-side validation, which separates the contract state and operations from the blockchain. With this approach, contracts are sharded (each contract is a standalone shard), kept, and validated only by contract participants, providing native scalability and privacy mechanisms that exceed those of all existing blockchain-based smart contract systems without compromising on security or decentralization. The system is designed to operate on top of compatible layer-1s, such as a UTXO-based blockchain (e.g., Bitcoin), without relying on them for transaction ordering or state replication. Instead, RGB keeps the state client-side, operating as partially replicated state machines (PRiSM). It employs a novel SONIC (State machine with Ownership Notation Involving Capabilities) architecture, which provides capability-based access control to the contract state, individually owned and operated by well-defined contract parties via a novel single-use-seal mechanism. RGB performs state validation using the zk-AluVM virtual machine, designed to support zk-STARK provers. Its single security assumption is the collision resistance of a hash function; thus, it is quantum-secure. The proposed RGB consensus is distinct from traditional blockchain-based smart contract systems; it is scalable, provably secure, and formally verifiable.
Afonso Arriaga, Manuel Barbosa, Stanislaw Jarecki
ePrint Report
Recent KEM-to-PAKE compilers follow the Encrypted Key Exchange (EKE) paradigm (or a variant thereof), where the KEM public key is password-encrypted. While constant-time implementations of KEMs typically avoid secret-dependent branches and memory accesses, this requirement does not usually extend to operations involving the expansion of the public key because public keys are generally assumed to be public. A notable example is $\mathsf{ML\textrm{-}KEM}$, which expands a short seed $\rho$ into a large matrix $\mathsf{A}$ of polynomial coefficients using rejection sampling---a process that is variable-time but usually does not depend on any secret. However, in PAKE protocols that password-encrypt the compressed public key, this introduces the risk of timing honest parties and mounting an offline dictionary attack against the measurement. This is particularly concerning given the well-known real-world impact of such attacks on PAKE protocols.

In this paper we show two approaches which yield $\mathsf{ML\textrm{-}KEM}$-based PAKEs that resist timing attacks. First, we explore constant-time alternatives to $\mathsf{ML\textrm{-}KEM}$ rejection sampling: one that refactors the original $\mathsf{SampleNTT}$ algorithm into constant-time style code, whilst preserving its functionality, and two that modify the matrix expansion procedure to abandon rejection sampling and rely instead on large-integer modular arithmetic. All the proposed constant-time algorithms are slower than the current rejection sampling implementations, but they are still reasonably fast in absolute terms. Our conclusion is that adopting constant-time methods will imply both performance penalties and difficulties in using off-the-shelf $\mathsf{ML\textrm{-}KEM}$ implementations. Alternatively, we present the first $\mathsf{ML\textrm{-}KEM}$-to-PAKE compiler that mitigates this issue by design: our proposal transmits the seed $\rho$ in the clear, decoupling password-dependent runtime variations from the matrix expansion step. This means that vanilla implementations of $\mathsf{ML\textrm{-}KEM}$ can be used as a black-box. Our new protocol $\mathsf{Tempo}$ builds on the ideas from $\mathsf{CHIC}$, which considered splitting the KEM public key, adopts the two-round Feistel approach for password encryption of the non-expandable part of the public key, and leverages the proof techniques from $\mathsf{NoIC}$ to show that, despite the malleability permitted by the two-round Feistel, it is sufficient for password extraction and protocol simulation in the UC framework.
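To see why matrix expansion leaks timing, here is a simplified sketch of ML-KEM-style rejection sampling (following the shape of $\mathsf{SampleNTT}$; the XOF usage is simplified), where the number of bytes consumed, and hence the runtime, depends on the seed $\rho$:

```python
# Simplified model of ML-KEM's SampleNTT rejection sampling (XOF usage
# simplified): runtime depends on how many candidates get rejected,
# which here is a deterministic function of the seed rho.
import hashlib

Q, N = 3329, 256

def sample_ntt(rho: bytes):
    stream = hashlib.shake_128(rho).digest(3 * 1024)  # one long squeeze
    coeffs, pos = [], 0
    while len(coeffs) < N:
        b0, b1, b2 = stream[pos], stream[pos + 1], stream[pos + 2]
        pos += 3
        d1 = b0 + 256 * (b1 % 16)          # two 12-bit candidates per 3 bytes
        d2 = b1 // 16 + 16 * b2
        for d in (d1, d2):
            if d < Q and len(coeffs) < N:  # rejection: data-dependent work
                coeffs.append(d)
    return coeffs, pos

for seed in (b"rho-0", b"rho-1", b"rho-2"):
    _, consumed = sample_ntt(seed)
    print(seed, "-> consumed", consumed, "XOF bytes")  # typically varies per seed
```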
Halil İbrahim Kaplan
ePrint Report
The advent of quantum computing threatens the security assumptions underpinning classical public-key cryptographic algorithms such as RSA and ECC. As a response, the cryptographic community has focused on developing quantum-resistant alternatives, with hash-based signature schemes emerging as a compelling option due to their reliance on well-understood hash functions rather than number-theoretic hardness assumptions. This paper presents a comprehensive review of hash-based signature schemes, including Lamport, WOTS, XMSS, XMSS^MT, and SPHINCS+, examining their structural design, key generation, signing, and verification processes. Emphasis is placed on their classification as stateful and stateless schemes, as well as their practical integration using Merkle trees and address structures. Furthermore, the paper analyzes several notable cryptanalytic attacks, such as intermediate value guessing, Antonov’s attack, multi-target attacks, and fault injection strategies, that pose risks to these constructions. By discussing both their strengths and vulnerabilities, this work highlights the viability of hash-based signatures as secure and efficient candidates for post-quantum digital signatures.
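As a concrete anchor for the survey, here is a minimal Lamport one-time signature, the simplest scheme in this lineage (WOTS, XMSS, and SPHINCS+ refine the same hash-preimage idea):

```python
# Minimal Lamport one-time signature sketch: the public key is hashes of
# secret preimages; signing reveals one preimage per message-digest bit.
import hashlib, secrets

H = lambda b: hashlib.sha256(b).digest()
MSG_BITS = 256

def keygen():
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(MSG_BITS)]
    pk = [[H(x) for x in pair] for pair in sk]
    return sk, pk

def sign(msg: bytes, sk):
    digest = int.from_bytes(H(msg), "big")
    # reveal one preimage per bit; the key must never be reused (stateful!)
    return [sk[i][(digest >> i) & 1] for i in range(MSG_BITS)]

def verify(msg: bytes, sig, pk) -> bool:
    digest = int.from_bytes(H(msg), "big")
    return all(H(sig[i]) == pk[i][(digest >> i) & 1] for i in range(MSG_BITS))

sk, pk = keygen()
sig = sign(b"post-quantum", sk)
assert verify(b"post-quantum", sig, pk)
```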

01 August 2025

Deirdre Connolly, Kathrin Hövelmanns, Andreas Hülsing, Stavros Kousidis, Matthias Meijers
ePrint Report
This work presents an exhaustive analysis of QSF, the KEM combiner used by X-Wing (Communications in Cryptology 1(1), 2024). While the X-Wing paper focuses on the applicability of QSF for combining ML-KEM-768 with X25519, we discuss its applicability for combining other post-quantum KEMs with other instantiations of ECDH.

To this end, we establish simple conditions that allow one to check whether a KEM is compatible with QSF by proving ciphertext second-preimage resistance (C2PRI) for several variants of the Fujisaki–Okamoto (FO) transform. Applying these results to post-quantum KEMs that are either standardized or under consideration for standardization, we show that QSF can also be used with all of these, including ML-KEM-1024, (e)FrodoKEM, HQC, Classic McEliece, and sntrup.

We also present QSI, a variation of QSF, and show that any two KEMs can be combined by hashing their concatenated keys. The result is a hybrid KEM that is IND-CCA-secure as long as one of the KEMs is IND-CCA-secure and the other is C2PRI-secure.

Finally, we also analyze QSF and QSI regarding their preservation of the recently introduced family of binding properties for KEMs.
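Schematically, QSF hashes the two shared secrets together with the ECDH ciphertext and public key while deliberately omitting the post-quantum ciphertext, which is exactly why C2PRI of the post-quantum KEM is needed. A sketch of that shape (ordering and labeling simplified; see the X-Wing specification for the exact input encoding):

```python
# Sketch of the QSF shape only: the post-quantum KEM's ciphertext is
# deliberately *not* hashed, which is sound only because that KEM is
# assumed C2PRI; the ECDH ciphertext and public key are hashed instead.
# Input order and (absent) domain-separation label are simplified here.
import hashlib

def qsf_combine(ss_pq: bytes, ss_dh: bytes, ct_dh: bytes, pk_dh: bytes) -> bytes:
    h = hashlib.sha3_256()
    for part in (ss_pq, ss_dh, ct_dh, pk_dh):
        h.update(part)
    return h.digest()
```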