IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
05 September 2025
Sedric Nkotto
Kyber, a.k.a. ML-KEM, has been standardized by NIST under FIPS-203 and will definitely be implemented in several commercial products in the coming years. However, the resilience of implementations against side-channel attacks is still an open and practical concern. One of the drawbacks of ongoing side-channel analysis research related to PQC schemes is the limited availability of open-source datasets. Luckily, some open-source datasets are starting to appear, for instance the one recently published by Rezaeezade et al. in [2]. This dataset captures power consumption during a pair-pointwise multiplication occurring in the course of the ML-KEM decapsulation process and involving the decapsulation (sub)key and ciphertexts. In this paper we present a template side-channel attack targeting that operation, which yields a complete recovery of the decapsulation secret (sub)key.
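For readers unfamiliar with template attacks, the sketch below illustrates the generic profiling-and-matching workflow on power traces. It is a minimal illustration on synthetic NumPy data, not the authors' attack on the pair-pointwise multiplication; all names and parameters are hypothetical.

    import numpy as np

    # Toy profiling data: traces grouped by the intermediate value they leak.
    # In a real attack these would be measured power traces of the target
    # operation for known (sub)key coefficients.
    rng = np.random.default_rng(0)
    num_classes, traces_per_class, trace_len = 4, 200, 50
    profiling = {v: rng.normal(loc=v, scale=1.0, size=(traces_per_class, trace_len))
                 for v in range(num_classes)}

    # Profiling phase: one template (here simply the mean trace) per class.
    templates = {v: traces.mean(axis=0) for v, traces in profiling.items()}

    def classify(trace):
        """Attack phase: return the class whose template is closest (least squares)."""
        return min(templates, key=lambda v: np.sum((trace - templates[v]) ** 2))

    # A fresh trace leaking value 2 should, with high probability, be matched to class 2.
    attack_trace = rng.normal(loc=2, scale=1.0, size=trace_len)
    print(classify(attack_trace))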
Gustavo Banegas, Anaëlle Le Dévéhat, Benjamin Smith
Many signature applications---such as root certificates, secure software updates, and authentication protocols---involve long-lived public keys that are transferred or installed once and then used for many verifications. This key longevity makes post-quantum signature schemes with conservative assumptions (e.g., structure-free lattices) attractive for long-term security. But many such schemes, especially those with short signatures, suffer from extremely large public keys. Even in scenarios where bandwidth is not a major concern, large keys increase storage costs and slow down verification.
We address this with a method to replace large public keys in GPV-style signatures with smaller, private verification keys. This significantly reduces verifier storage and runtime while preserving security. Applied to the conservative, short-signature schemes Wave and Squirrels, our method compresses Squirrels-I keys from 665 kB to 20.7 kB and Wave822 keys from 3.5 MB to 207.97 kB.
Ioannis Alexopoulos, Zeta Avarikioti, Paul Gerhart, Matteo Maffei, Dominique Schröder
Bitcoin secures over a trillion dollars in assets but remains largely absent from decentralized finance (DeFi) due to its restrictive scripting language. The emergence of BitVM, which enables verification of arbitrary off-chain computations via on-chain fraud proofs, opens the door to expressive Bitcoin-native applications without altering consensus rules. A key challenge for smart contracts executed on a public blockchain, however, is the privacy of data: for instance, bid privacy is crucial in auctions and transaction privacy is leveraged in MEV-mitigation techniques such as proposer-builder separation. While different solutions have been recently proposed for Ethereum, these are not applicable to Bitcoin.
In this work, we present BitPriv, the first Bitcoin-compatible protocol to condition payments based on the outcome of a secure two-party computation (2PC). The key idea is to let parties lock collateral on-chain and to evaluate a garbled circuit off-chain: a cut-and-choose mechanism deters misbehavior and any violation can be proven and punished on-chain via BitVM. This design achieves security against rational adversaries, ensuring that deviation is irrational under financial penalties.
We showcase the new class of applications enabled by BitPriv and evaluate its performance through a privacy-preserving double-blind marketplace in Bitcoin. In the optimistic case, settlement requires only two transactions and under $3 in fees; disputes are more expensive (approximately $507), with their cost tied to the specific BitVM implementation, but their mere feasibility acts as a strong deterrent. BitPriv provides a blueprint for building enforceable, privacy-preserving DeFi primitives on Bitcoin without trusted hardware, sidechains, or protocol changes.
Sebastian Kolby, Lawrence Roy, Jure Sternad, Sophia Yakoubov
A Private Information Retrieval (PIR) protocol allows a client to learn the $i$th row of a database held by one or more servers, without revealing $i$ to the servers. A Random-Index PIR (RPIR) protocol, introduced by Gentry et al. (TCC 2021), is a PIR protocol where, instead of being chosen by the client, $i$ is random. This has applications in e.g. anonymous committee selection. Both PIR and RPIR protocols are interesting only if the communication complexity is smaller than the database size; otherwise, the trivial solution where the servers send the entire database suffices.
Unlike PIR, where the client must send at least one message (to encode information about $i$), RPIR can be executed in a single round of server-to-client communication. In this paper, we study such one-round, information-theoretic RPIR protocols. The only known construction in this setting is SimpleMSRPIR (Gentry et al.), which requires the servers to communicate approximately $\frac{N}{2}$ bits, $N$ being the database size. We show an $\Omega(\sqrt{N})$ lower bound on communication complexity for one-round two-server information-theoretic RPIR, and a sublinear upper bound. Finally, we show how to use a sublinear amount of database-independent correlated randomness among multiple servers to get near-optimal online communication complexity (the size of one row plus the size of one index description per server).
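For context on why communication complexity is the quantity of interest, the sketch below shows the textbook two-server information-theoretic PIR: the client hides $i$ by sending each server a random-looking subset and XORs the two answers to recover row $i$. This is ordinary PIR (the client chooses $i$ and speaks first), not the one-round RPIR setting studied in the paper; the code and parameters are illustrative only.

    import secrets

    def xor_rows(db, subset):
        """A server's answer: XOR of the database rows indexed by the subset."""
        acc = bytes(len(db[0]))
        for j in subset:
            acc = bytes(a ^ b for a, b in zip(acc, db[j]))
        return acc

    # Toy database of N rows.
    db = [bytes([j]) * 4 for j in range(8)]
    i = 5  # index the client wants; hidden from each individual server

    # Client query: a uniformly random subset S for server 1, and S with i's
    # membership flipped for server 2.
    S1 = {j for j in range(len(db)) if secrets.randbits(1)}
    S2 = S1 ^ {i}   # symmetric difference

    # Each server sees only a uniformly random subset, so it learns nothing about i.
    answer1, answer2 = xor_rows(db, S1), xor_rows(db, S2)

    # The client XORs the answers; every row except row i cancels out.
    row_i = bytes(a ^ b for a, b in zip(answer1, answer2))
    assert row_i == db[i]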
03 September 2025
Yashvanth Kondi, Ian McQuoid, Kelsey Melissaris, Claudio Orlandi, Lawrence Roy, LaKyah Tyner
Strong Asymmetric Password-Authenticated Key Exchange (saPAKE) enables a client, holding only a low-entropy password, to repeatedly establish shared high-entropy session keys with a server holding a digest of the expected password. Crucially, the only online attacks afforded to the adversary are the inevitable impersonation and dictionary attacks. In contrast to previous modeling, saPAKE additionally requires that any offline password search against the server's storage takes place after adaptive server compromise.
We present OneTwoPAKE, the first saPAKE protocol to simultaneously: 1. realize the full (unweakened) strong aPAKE functionality; 2. not admit a speedup in an offline password search (i.e., have a simulation rate of 1); 3. use only a single round trip, with the client speaking first; and 4. avoid generic algebraic models.
Similar to prior work, we instantiate our saPAKE from an OPRF over insecure channels secure against adaptive server compromise. In contrast to prior work, our OPRF is online-extractable and input-committing, enabling our protocol to realize the full saPAKE functionality.
Of independent interest are our OPRF functionality and construction. We introduce the first formal model of such an OPRF, and our OPRF protocol is the first Dodis-Yampolskiy-based OPRF proven UC-secure against malicious adversaries without authenticated channels.
Our framework demonstrates the feasibility of achieving all of the above properties simultaneously. Though our constructions are not as efficient as those of prior work, our saPAKE boasts the minimal round complexity, achieves full security, and, in terms of idealized models, relies only on the random oracle model. As future work may further close the efficiency gap, our framework may lead to practically deployable solutions.
Sangmin Cha, GyeongJu Song, Seyoung Yoon, Hwajeong Seo
Quantum attacks such as Grover's algorithm reduce the security of classical hash functions such as MD5. In this paper, we present an efficient quantum circuit for the MD5 hash function and apply Grover's algorithm to perform an effective pre-image attack. Our MD5 quantum circuit primarily focuses on reducing circuit depth, while also considering the number of qubits. To this end, we applied Draper's carry-lookahead adder to the MD5 quantum circuit and modified its structure. Applying Draper's adder to the proposed MD5 structure yielded a reduction of approximately 81.4% compared to the version without it. As a result, the proposed MD5 quantum circuit uses only 858 qubits and has a depth of 7,598.
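As a reminder of why Grover's algorithm halves the effective pre-image security of a hash function, a standard back-of-the-envelope estimate (not a figure from the paper) for the optimal number of Grover iterations over a search space of size $2^n$ with a single target is:
\[
  R \approx \left\lfloor \frac{\pi}{4}\sqrt{2^{n}} \right\rfloor,
  \qquad\text{so for MD5's } n = 128:\quad
  R \approx \frac{\pi}{4}\cdot 2^{64} \approx 1.4\times 10^{19}.
\]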
Attribute-based Quantum Broadcast Encryption with Composite Policies via Symmetric Unitary t-Designs
Sayatan Ganguly, Shion Samadder Chaudhury
In an emerging era of quantum technologies, securing quantum communications is paramount. In this paper, we introduce two frameworks for attribute-based quantum broadcast encryption (AB-QBE), enabling fine-grained access control over sensitive quantum data. The first scheme, Multi-Policy Quantum Private Broadcast Encryption (MP-QPBE), leverages symmetric unitary \(t\)-designs to construct a protocol where decryption is possible upon satisfying a composite set of access policies simultaneously. This approach ensures that a user can only access the broadcasted quantum state if their attributes fulfill multiple predefined criteria. In our MP-QPBE, we first perform symmetrization of the initial quantum message for the encryption of the tensor product of distinct pure states, a scenario not directly addressed by previous quantum private broadcasting schemes. We demonstrate that this method is information-theoretically secure. The second scheme, Component-wise Independent Quantum Broadcast Encryption (CI-QBE), offers an alternative, lossless approach where each quantum message is encrypted independently using a unitary \(1\)-design. It provides greater flexibility and is applicable to arbitrary quantum states, including mixed states, without the information loss inherent in symmetrization. We provide a comprehensive security analysis for both constructions, proving their robustness against unauthorized users and adversaries with quantum side information. A comparative analysis highlights the trade-offs between the two schemes in terms of security guarantees, quantum resource requirements, and practical applicability, offering a nuanced perspective on designing secure multi-user quantum communication systems.
Sayatan Ganguly, Shion Samadder Chaudhury
Several subscription-based systems require fine-grained, dynamic access control, which traditional broadcast encryption schemes fail to handle efficiently. Attribute-based and policy-based access structures offer a flexible and scalable solution by encoding access rules directly into the encryption mechanism. With the advent and development of quantum networks, attribute-based and policy-based quantum broadcast encryption (PBQBE) becomes necessary for enforcing such controls securely in a post-quantum world. In this paper, we present a general construction of quantum broadcast encryption for multiple messages simultaneously, based on routing and traversing a given attribute-based and, consequently, a policy-based access tree when a user satisfies a unique policy. Our scheme can handle dynamic systems with frequently changing access patterns and can be deployed seamlessly with classical systems. Compared to existing constructions, our scheme offers hierarchical access, policy control, scalability, and revocation while maintaining provable security guarantees.
How Hard Can It Be to Formalize a Proof? Lessons from Formalizing CryptoBox Three Times in EasyCrypt
François Dupressoir, Andreas Hülsing, Cameron Low, Matthias Meijers, Charlotte Mylog, Sabine Oechsner
Provable security is a cornerstone of modern cryptography, aiming to provide formal and precise security guarantees. However, for various reasons, security proofs are not always properly verified, possibly leading to unwarranted security claims and, in the worst case, deployment of insecure constructions. To further enhance trust and assurance, machine-checked cryptography makes these proofs more formal and rigorous. Unfortunately, the complexity of writing machine-verifiable proofs remains prohibitively high in many interesting use-cases. In this paper, we investigate the sources of this complexity, specifically examining how the style of security definitions influences the difficulty of constructing machine-verifiable proofs in the context of game-playing security.
Concretely, we present a new security proof for the generic construction of a PKAE scheme from a NIKE and AE scheme, written in a code-based, game-playing style à la Bellare and Rogaway, and compare it to the same proof written in the style of state-separating proofs, a methodology for developing modular game-playing security proofs. Additionally, we explore a third “blended” style designed to avoid anticipated difficulties with the formalization. Our findings suggest that the choice of definition style impacts proof complexity—including, we argue, in detailed pen-and-paper proofs—with trade-offs depending on the proof writer’s goals.
Tsai Yi-Ju
This paper addresses the question of how many distinct Montgomery curves exist over a finite field $\mathbb{F}_p$ for a given order. We provide a complete answer by presenting precise counting formulas from two perspectives: (1) the number of $\mathbb{F}_p$-isomorphism classes, and (2) the number of coefficient pairs $(A,B)$. Additionally, we propose conjectural formulas that approximate the probability of a randomly chosen curve having an order with a specific, cryptographically relevant structure. As a byproduct of our methods, we also establish a novel identity for the Kronecker-Hurwitz class number.
Feixiang Zhao
Homomorphic attribute-based encryption (HABE) is a useful cryptographic primitive that supports both fine-grained access control and computation over ciphertexts. However, existing HABE schemes are limited to the homomorphic evaluation of circuits with either bounded depth or a restricted number of inputs. To address this problem, we introduce a bootstrappable, fully homomorphic attribute-based encryption (FHABE) scheme that supports computations of circuits with unbounded depth over cross-attribute ciphertexts. Compared to state-of-the-art schemes, the proposed scheme also has a more lightweight ciphertext and eliminates the reliance on the random oracle model. Additionally, we extend the FHABE to a proxy re-encryption setting, proposing an attribute-based homomorphic proxy re-encryption scheme that facilitates efficient sharing of encrypted data. This scheme is the first lattice-based multi-hop attribute-based proxy re-encryption scheme with unbounded hops of re-encryptions, which may be of independent interest.
Sebastian Faller, Guilhem Niot, Michael Reichle
Blind signatures are a central tool for privacy-preserving protocols. They allow users to obtain signatures from a signer without the signer seeing the signed message. For instance, it enables electronic cash: signatures correspond to coins which can be issued by the bank in a privacy-preserving manner via blind signing. To mitigate the risk of key compromise, threshold blind signatures allow the distribution of the signing key amongst N parties. While recent works have focused on improving the security of this primitive in the classical setting, no construction is known to date in the post-quantum setting. We present the first construction of a threshold blind signature secure in the post-quantum setting, based on lattices. We prove its security under an interactive variant of the SIS assumption introduced in [Agrawal et al., CCS'22]. Our construction has a reasonable overhead factor of roughly 1.4x to 2.5x in signature size over comparable non-threshold blind signatures over lattices under heuristic but natural assumptions.
Karla Friedrichs, Anja Lehmann, Cavit Özbay
Oblivious pseudorandom functions (OPRFs) allow the blind evaluation of a pseudorandom function, which makes them a versatile building block that enjoys usage in numerous applications. So far, security of OPRFs is predominantly captured in the Universal Composability (UC) framework, where an ideal functionality covers the expected security and privacy properties. While the OPRF functionality appears intuitive at first, the ideal-world paradigm also comes with a number of challenges: from imposing idealized building blocks when building OPRFs, to the lack of modularity, and requiring intricate UC knowledge to securely maneuver their usage. Game-based definitions are a simpler way to cover security properties. They model each property in a single game, which grants modularity in formalizing, proving, and using OPRFs. Interestingly, the few game-based works on OPRFs each re-invent the security model, with considerable variation. Thus, the advantages of the game-based approach remain out of reach: definitions are not easily accessible and comparability across works is low. In this work, we therefore systematize all existing notions into a clear, hierarchical framework. We unify or separate properties, making hidden relations explicit. This effort reveals the necessity of two novel properties: an intermediate privacy notion and a stronger unpredictability notion. Finally, we analyze the two most prominent constructions in our framework: HashDH and 2HashDH. The former does not achieve UC security, but has advantages in applications that require key rotation or updatability; yet it lacks a security analysis. We show that it achieves most security properties in our framework. We also observe that HashDH and 2HashDH do not satisfy our strongest privacy notion, indicating that the guarantees by the UC functionality are not as well understood as we might expect them to be. Overall, we hope that our framework facilitates the usage and design of OPRFs.
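As background on the two constructions analyzed, 2HashDH evaluates the PRF as $F_k(x) = H_2(x, H_1(x)^k)$ (HashDH omits the outer hash). The sketch below illustrates the blind-evaluation flow over a toy multiplicative group; the group, hash constructions, and parameters are illustrative assumptions, not the paper's instantiation.

    import hashlib
    import secrets

    # Toy prime-order group: quadratic residues mod a safe prime (illustration only;
    # a real OPRF would use a prime-order elliptic-curve group).
    p = 2 * 1019 + 1          # safe prime 2039
    q = 1019                  # order of the subgroup of quadratic residues
    g = 4                     # generator of that subgroup

    def hash_to_group(x: bytes) -> int:
        """H1: hash the input into the order-q subgroup (toy construction)."""
        e = int.from_bytes(hashlib.sha256(x).digest(), "big") % (q - 1)
        return pow(g, e + 1, p)   # exponent in [1, q-1], never the identity

    def finalize(x: bytes, elem: int) -> bytes:
        """H2: derive the PRF output from the input and the group element."""
        return hashlib.sha256(x + elem.to_bytes(2, "big")).digest()

    k = secrets.randbelow(q - 1) + 1               # server key

    x = b"password"
    r = secrets.randbelow(q - 1) + 1               # client blinding factor
    blinded = pow(hash_to_group(x), r, p)          # H1(x)^r, sent to the server
    evaluated = pow(blinded, k, p)                 # H1(x)^(r*k), sent back

    r_inv = pow(r, -1, q)
    unblinded = pow(evaluated, r_inv, p)           # H1(x)^k
    output = finalize(x, unblinded)                # 2HashDH output F_k(x)

    # Sanity check: matches a direct (non-blind) evaluation by the key holder.
    assert output == finalize(x, pow(hash_to_group(x), k, p))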
Aleck Nash, Christian Eduardo Terron Garcia, Henry Chimal-Dzul, Kim-Kwang Raymond Choo
Consensus protocols are an important building block in blockchain and blockchain-based systems. The recent focus on developing practical quantum computers reinforces the importance of designing quantum-resistant cryptographic protocols, and in the context of this paper, quantum-resistant consensus protocols. In this paper, we systematically review the extant literature on quantum-resistant consensus protocols published between 2019 and 2024. As part of the review, we identify a number of limitations and challenges and discuss potential future research directions.
Taehun Kang, Dongheo Heo, Jeonghwan Lee, Suhri Kim, Changmin Lee
Since supersingular isogeny Diffie-Hellman (SIDH) was broken by a polynomial-time attack, several countermeasures were proposed. Among them, terSIDH has been highlighted for its high performance, yet it exposes a side-channel vulnerability. The total isogeny degree depends on the private key, causing variation in isogeny computation times. This dependency makes terSIDH susceptible to timing attacks. The ratio between the worst- and the best-case execution times of terSIDH was about 32.67 times. In this work, we propose the first constant-time terSIDH. By adding dummy operations, we reduce the dependency of isogeny computations on the private key, ensuring that the algorithm’s execution time remains constant regardless of the private key. Since such countermeasures are generally slower than their non-constant-time counterparts, we applied additional optimizations to mitigate this overhead. Moreover, we present the first C implementation of terSIDH having 128-bit security. We compared our method against CSIDH-4096, a representative isogeny-based key exchange protocol with the same security level. For key generation, the worst-case of terSIDH, CSIDH-4096, and our constant-time method take 1.77 s, 5.18 s, and 1.58 s, respectively. These results demonstrate that our proposed constant-time method is faster than the worst-case of terSIDH while also being significantly more efficient than CSIDH-4096.
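The countermeasure described above, padding key-dependent work with dummy operations so that the total operation count is fixed, follows a standard constant-time pattern. The sketch below illustrates the idea generically; it is hypothetical Python, not the authors' terSIDH implementation, and real isogeny code would additionally need constant-time selection of real versus dummy operands.

    def real_step(state):
        """Placeholder for one real isogeny-related operation."""
        return state + 1

    def dummy_step(state):
        """Placeholder dummy operation with the same cost; its result is discarded."""
        _ = state + 1
        return state

    def process_constant_time(state, key_dependent_steps, max_steps):
        """Always execute max_steps operations, however many are really needed."""
        for i in range(max_steps):
            if i < key_dependent_steps:      # in real code this branch must itself be
                state = real_step(state)     # implemented with constant-time selection
            else:
                state = dummy_step(state)
        return state

    # Both calls perform the same number of operations, hiding the key-dependent count.
    assert process_constant_time(0, 3, 10) == 3
    assert process_constant_time(0, 7, 10) == 7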
Manuel Barbosa, Matthias J. Kannwischer, Thing-han Lim, Peter Schwabe, Pierre-Yves Strub
Decryption errors play a crucial role in the security of KEMs based on Fujisaki-Okamoto because the concrete security guarantees provided by this transformation directly depend on the probability of such an event being bounded by a small real number. In this paper we present an approach to formally verify the claims of statistical probabilistic bounds for incorrect decryption in lattice-based KEM constructions. Our main motivating example is the PKE encryption scheme underlying ML-KEM. We formalize the statistical event that is used in the literature to heuristically approximate ML-KEM decryption errors and confirm that the upper bounds given in the literature for this event are correct. We consider FrodoKEM as an additional example, to demonstrate the wider applicability of the approach and the verification of a correctness bound without heuristic approximations. We also discuss other (non-approximate) approaches to bounding the probability of ML-KEM decryption.
Maria Leslie, Ratna Dutta
In a t-out-of-n threshold secret sharing scheme, accountability is crucial when a subset of f < t servers collude to leak secret shares. Traceable Threshold Secret Sharing (TTSS) ensures that leaked shares can be traced back to the compromised servers while preventing false accusations through non-imputability. In Crypto’24, Boneh et al. proposed new definitions and more practical constructions for TTSS based on Shamir’s and Blakley’s secret sharing schemes, removing the practical limitation of existing TTSS.
Our work presents a traceable secret sharing scheme built upon an additive variant of the Asmuth-Bloom scheme, relying only on black-box access to the reconstruction box R. In our model, a subset of f < t colluding servers can construct a reconstruction box R that recovers the secret with the assistance of an additional t − f random shares. We note that integrating tracing in the standard (t, n)-Asmuth-Bloom Secret Sharing (ABSS) scheme exhibits a tracing leakage issue. We fix this limitation by introducing additive variants of ABSS, ABSS-I and ABSS-II that retain the security of the original scheme ABSS while splitting the secret s into t additive components and generating all shares from the additive components of s. Based on ABSS-I, we construct a TTSS scheme, TTSS-I, that introduces traceability into the framework and is proven to be universally traceable in the random oracle model, assuming R is a universally good reconstruction box. We integrate a tracing mechanism in ABSS-II and propose a second scheme, TTSS-II, which extends TTSS-I by additionally concealing partial information about the additive component of the secret s to introduce non-imputability to prevent the tracer from falsely accusing any honest party by fabricating evidence of its corruption. The security of TTSS-II is also in the random oracle model and relies on the hardness of the discrete logarithm problem.
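The "additive components" referred to above follow the usual additive-splitting pattern: the secret is written as a sum of random components that only reconstruct when all of them are combined. A minimal illustration of generic additive sharing modulo a public modulus (not the ABSS-I/ABSS-II constructions themselves):

    import secrets

    def split_additively(secret, t, modulus):
        """Split secret into t components that sum to secret modulo modulus."""
        components = [secrets.randbelow(modulus) for _ in range(t - 1)]
        components.append((secret - sum(components)) % modulus)
        return components

    def recombine(components, modulus):
        return sum(components) % modulus

    M = 2**61 - 1          # illustrative public modulus
    s = 123456789          # the secret
    parts = split_additively(s, t=5, modulus=M)
    assert recombine(parts, M) == s
    # Any proper subset of the components is uniformly distributed and reveals nothing about s.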
Yuhang Zeng, Zhixin Dong, Xian Xu
HotStuff has gained widespread application in scenarios such as consortium chains in recent years due to its linear view change and pipelined decision making mechanisms. Although there have been studies on the performance of this algorithm, there remains a lack of analysis and formal termination proofs regarding its composability. This paper, for the first time, constructs a comprehensive formal system for the HotStuff protocol in a partially synchronous network environment under the Universally Composable (UC) framework and proves the termination of the HotStuff protocol within the UC framework.
Specifically, we establish the ideal function and demonstrate that the HotStuff protocol UC realizes this ideal function, which guarantees the indistinguishability between the real protocol of HotStuff and the ideal function.
Notably, in the context of network delay attacks, this paper uses a phased time analysis method with an exponential backoff timeout mechanism to show that the protocol can still achieve consensus within a bounded amount of time.
The results of this work not only establish a formal proof paradigm for analyzing termination of BFT protocols in a composable framework, but also provide important theoretical foundations for ensuring reliable termination in industrial-grade blockchain systems (such as the Meta Diem project and Chang’an Chain that employ HotStuff).
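The exponential-backoff timeout mentioned above is the standard view-synchronization heuristic: each time a view fails to commit, the next view's timeout doubles, so once the network's unknown delay bound is exceeded all honest replicas eventually share a sufficiently long view. A hypothetical sketch of that mechanism (not taken from the paper or any HotStuff codebase):

    BASE_TIMEOUT = 1.0       # seconds; illustrative value
    MAX_EXPONENT = 16        # cap so the timeout cannot grow without bound

    def view_timeout(consecutive_failed_views: int) -> float:
        """Timeout for the next view: doubles with every consecutive failure."""
        return BASE_TIMEOUT * 2 ** min(consecutive_failed_views, MAX_EXPONENT)

    # After repeated failures the timeout eventually exceeds any finite network delay,
    # which is the intuition behind the bounded-termination argument.
    for failures in range(5):
        print(failures, view_timeout(failures))   # 1.0, 2.0, 4.0, 8.0, 16.0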
Michel Seck, Abdoul Aziz Ciss
Recently, Cotan and Teseleanu (NordSec 2023) published an RSA-like cryptosystem. While in RSA, the public exponent $e$ and the private exponent $d$ are related by the equation $ed - k(p-1)(q-1) = 1$, in their scheme, $e$ and $d$ are related to the equation $ed - k(p^n-1)(q^n-1) = 1$ for some positive integer $n$. Teseleanu (CSCML 2024) showed that one can factor the modulus $N$ using a lattice attack if $d$ is small. In this paper, we extend his attack by showing that if the private exponent is either too small or too large, one can factor $N$ in polynomial time by solving the generalized equation $eu - (p^n - 1)(q^n - 1)v = \pm 1$ using lattice reduction techniques.
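To make the key equation concrete, the toy sketch below generates a keypair satisfying $ed \equiv 1 \pmod{(p^n-1)(q^n-1)}$ for small parameters. The primes are far too small for security and the code only exhibits the key relation targeted by the lattice attack, not the attack itself.

    from math import gcd

    # Toy parameters (illustration only).
    p, q, n = 1009, 1013, 2
    N = p * q
    phi_n = (p**n - 1) * (q**n - 1)

    # Pick a public exponent coprime to (p^n - 1)(q^n - 1) and derive d.
    e = 65537
    assert gcd(e, phi_n) == 1
    d = pow(e, -1, phi_n)

    # The key equation of the scheme: e*d - k*(p^n - 1)*(q^n - 1) = 1 for some integer k.
    k = (e * d - 1) // phi_n
    assert e * d - k * phi_n == 1

    # The attack setting: when d is unusually small or large, lattice reduction on the
    # generalized equation e*u - (p^n - 1)(q^n - 1)*v = +-1 recovers enough to factor N.
    print(N, e, d, k)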
Hamza Abusalah, Gaspard Anthoine, Gennaro Avitabile, Emanuele Giunta
We study the inherent limitations of additive accumulators and updatable vector commitments (VCs) with constant-size digest (i.e., independent of the number of committed elements).
Specifically, we prove two lower bounds on the expected number of membership proofs that must be updated when a \emph{single} element is added (or updated) in such data structures. Our results imply that when the digest bit length approaches the concrete security level, then the expected number of proofs invalidated due to an append operation for a digest committing to $n$ elements is nearly maximal: $n-\mathsf{negl}(\lambda)$ in the case of exponential-size universes, and $n-o(n)$ for super-polynomial universes. Our results have significant implications for stateless blockchain designs relying on constant-size VCs, suggesting that the overhead of frequent proof updates may offset the benefits of reducing global state storage.