IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
20 August 2025
Arka Rai Choudhuri, Aarushi Goel, Aditya Hegde, Abhishek Jain
A homomorphic secret sharing (HSS) scheme allows a client to delegate a computation to a group of untrusted servers while achieving input privacy as long as at least one server is honest. In recent years, many HSS schemes have been constructed that have, in turn, found numerous applications to cryptography.
Prior work on HSS focuses on the setting where the servers are semi-honest. In this work we study HSS in the setting of malicious evaluators. We propose the notion of HSS with verifiable evaluation (ve-HSS) that guarantees correctness of output even when all the servers are corrupted. ve-HSS retains all the attractive features of HSS and adds the new feature of succinct public verification of output.
We present black-box constructions of ve-HSS by devising generic transformations for semi-honest HSS schemes (with negligible error). This provides a new non-interactive method for verifiable and private outsourcing of computation.
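HSS for general programs is involved, but the core share-and-locally-evaluate pattern is visible in the linear special case. Below is a toy Python sketch of my own (not the paper's construction): two servers hold additive shares of the input, each locally evaluates an affine function on its shares, and the output shares reconstruct to the function of the input.

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus

def share(x, n=2):
    """Split secret x into n additive shares mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def eval_linear(x_share, a, b_share):
    """Each server locally evaluates a*x + b on its shares."""
    return (a * x_share + b_share) % P

# Client shares input x and constant b; no single server sees x.
x, a, b = 42, 7, 5
x_shares, b_shares = share(x), share(b)
y_shares = [eval_linear(xs, a, bs) for xs, bs in zip(x_shares, b_shares)]
assert sum(y_shares) % P == (a * x + b) % P  # reconstructs 7*42 + 5 = 299
```

Real HSS schemes extend this pattern beyond linear functions; the paper's ve-HSS additionally lets anyone verify the servers' outputs succinctly.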
Sharath Pendyala, Rahul Magesh, Elif Bilge Kavun, Aydin Aysu
FALCON is a NIST-selected post-quantum digital signature scheme whose performance bottleneck lies in the SamplerZ subroutine for discrete Gaussian sampling. We present a throughput-optimized, full hardware implementation of SamplerZ that introduces several architectural and algorithmic innovations to significantly accelerate signature generation. Our design incorporates a datapath-aware floating-point arithmetic pipeline that strategically balances latency and resource utilization. We introduce a novel Estrin's Scheme-based polynomial evaluator to accelerate exponential approximation, and implement a constant-latency BerExp routine using floating-point exponentiation IP, thereby eliminating critical-path logic associated with fixed-point decomposition. Additionally, we optimize rejection handling through parallel sampling loops, full loop unrolling, and a speed-optimized flooring circuit, collectively enabling high-throughput discrete Gaussian sampling. As a result, these optimizations yield FPGA implementations of SamplerZ that achieve 55%-71% reduction in sampling latency, leading to a 36%-46% reduction in overall FALCON signature generation latency compared to the current state-of-the-art. Furthermore, our design achieves up to a 48% reduction in the Area-Time Product (ATP) of SamplerZ, setting a new benchmark for high-throughput and efficient discrete Gaussian sampling, advancing the practical deployment of post-quantum lattice-based signatures in high-performance cryptographic hardware.
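For readers unfamiliar with the technique named in the abstract: Estrin's scheme evaluates a polynomial by combining coefficients pairwise, level by level, exposing parallelism that a pipelined datapath can exploit (unlike strictly serial Horner evaluation). A small Python sketch of the general idea, not the paper's hardware design:

```python
def estrin(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i) via Estrin's scheme.

    Each level combines adjacent pairs, so the tree has log-depth;
    a hardware pipeline can evaluate the pairs of a level in parallel.
    """
    terms = list(coeffs)
    power = x
    while len(terms) > 1:
        if len(terms) % 2:                   # pad odd-length levels
            terms.append(0)
        # combine pairs: t[2i] + t[2i+1] * x^(2^level)
        terms = [terms[i] + terms[i + 1] * power
                 for i in range(0, len(terms), 2)]
        power *= power                       # square the stride each level
    return terms[0]

# Check against Horner: p(x) = 1 + 2x + 3x^2 + 4x^3 at x = 2
coeffs = [1, 2, 3, 4]
horner = 0
for c in reversed(coeffs):
    horner = horner * 2 + c
assert estrin(coeffs, 2) == horner  # both give 49
```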
Shlomi Dolev, Avraham Yagudaev, Moti Yung
Rekeying is an effective technique for protecting symmetric ciphers against side-channel and key-search attacks. Since its introduction, numerous rekeying schemes have been developed. We introduce Post-Quantum Stateless Auditable Rekeying (PQ-STAR), a novel post-quantum secure stateless rekeying scheme with audit support. PQ-STAR is presented in three variants of increasing security guarantees: (i) Plain PQ-STAR lets an authorized auditor decrypt and verify selected ciphertexts; (ii) Commitment-based PQ-STAR with the additional binding guarantee from the commitments, preventing a malicious sender from potentially claiming a random or wrong session key. (iii) Zero-knowledge PQ-STAR equips each session key with a signature-based zero-knowledge proof (ZKP), which proves that the session key was derived honestly, without ever revealing the secret preimage. We formally prove that all variants achieve key-uniqueness, index-hiding, and forward-secrecy, even if a probabilistic polynomial-time (PPT) adversary arbitrarily learns many past session keys. PQ-STAR provides a formally verified, stateless, and audit-capable rekeying primitive that can be seamlessly integrated as a post-quantum upgrade for existing symmetric-key infrastructures.
Ittai Abraham, Gilad Asharov
Asynchronous Byzantine agreement extension studies the message complexity of $L$-bit multivalued asynchronous Byzantine agreement given access to a binary asynchronous Byzantine agreement protocol.
We prove that asynchronous Byzantine agreement extension can be solved with perfect security and optimal resilience in $O(nL+n^2 \log n)$ total communication (in bits), in addition to a single call to a binary asynchronous Byzantine agreement protocol. For $L = O(n \log n)$, this gives an asymptotically optimal protocol, resolving a question that remained open for nearly two decades.
List decoding is a fundamental concept in theoretical computer science and cryptography, enabling error correction beyond the unique decoding radius and playing a critical role in constructing robust codes, hardness amplification, and secure cryptographic protocols. A key novelty of our perfectly secure and optimally resilient asynchronous Byzantine agreement extension protocol is that it uses list decoding, making a striking new connection between list decoding and asynchronous Byzantine agreement.
Anasuya Acharya, Carmit Hazay, Vladimir Kolesnikov, Manoj Prabhakaran
SCALES (Small Clients And Larger Ephemeral Servers) (Acharya et al., TCC 2022, CRYPTO 2024) is a recently proposed model for MPC with several attractive features, including resilience to adaptive corruption. Known SCALES constructions, while offering reasonable asymptotics for large-scale MPC, incur high concrete costs both in computation and communication.
As our primary contribution, we dramatically improve both asymptotic and concrete costs of SCALES for permutation branching programs (PBP), a well-motivated practical model of computation. We achieve linear cost in program length, input size, and the security parameter. Our instantiations of the building blocks may be of independent interest.
Further, we present generic transformations to extend any semi-honestly secure SCALES protocol to achieve (1) guaranteed output delivery in the presence of mixed adversaries (that corrupt servers maliciously and clients semi-honestly) in the all-but-one corruption setting; and (2) protocols for computing general functionalities where each server's computation scales sub-linearly in the function~size.
Avik Chakraborti, Bishwajit Chakraborty, Nilanjan Datta, Avijit Dutta, Ashwin Jha, Sougata Mandal, Hrithik Nandi, Mridul Nandi, Abishanka Saha
Construction of efficient and provably-secure (T)PRPs and (fixed/variable input-length) PRFs has been one of the central open problems in modern symmetric-key cryptography. Many Feistel-based constructions have been proposed and analysed to solve this problem. Inspired by some recent works, in this paper, we revisit the problem of constructing provably secure Feistel constructions using permutations as the round functions. More specifically, following the idea of Naor and Reingold, we try to reduce the number of inner permutations used by replacing them with cheap hash functions, without sacrificing optimal security. We affirmatively show that with the use of a suitable hash function along with a four-round Feistel construction, which uses only three independent permutations, one can achieve optimally secure (T)PRPs and PRFs.
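For context, here is a toy Python sketch of the generic object being analyzed: a four-round Feistel network whose round functions are table-based permutations. This illustrates only the basic structure and its invertibility, not the paper's specific three-permutation-plus-hash construction.

```python
import random

random.seed(1)
N = 256  # domain of each half (8-bit halves, purely illustrative)

def random_perm():
    """Sample a random permutation table on [0, N)."""
    p = list(range(N))
    random.shuffle(p)
    return p

perms = [random_perm() for _ in range(4)]  # one round function per round

def feistel_enc(l, r):
    # Round i: (L, R) -> (R, L xor F_i(R))
    for p in perms:
        l, r = r, l ^ p[r]
    return l, r

def feistel_dec(l, r):
    # Invert each round in reverse order: (L, R) -> (R xor F_i(L), L)
    for p in reversed(perms):
        l, r = r ^ p[l], l
    return l, r

l, r = 0xAB, 0xCD
assert feistel_dec(*feistel_enc(l, r)) == (l, r)
```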
Liam Eagen
Bitcoin is a decentralized, permissionless network for digital payments. Bitcoin also supports a limited set of smart contracts, which restrict how bitcoin can be spent, through bitcoin script. In order to support more expressive scripting functionality, Robin Linus introduced the BitVM family of protocols. These implement a weaker form of "optimistic" smart contracts, and for the first time allowed bitcoin to verify arbitrary computation. BitVM allows a challenger to publish a "fraud proof" that the computation was carried out incorrectly, which can be verified on chain even when the entire computation cannot. Jeremy Rubin introduced an alternative optimistic smart contract protocol called Delbrag. This protocol uses Garbled Circuits (GC) to replace the BitVM fraud proof by simply revealing a secret. He also introduced the Grug technique for malicious security.
We introduce a new formalization of GC-based optimistic techniques called Garbled Locks, or Glocks. Much like Delbrag, we use the GC to leak a secret and produce a signature as a fraud proof. We further propose the first concretely practical construction that does not require Grug. Like BitVM2 and Delbrag, Glock25 reduces verification of arbitrary bounded computation to verification of a SNARK. In Glock25, we use a designated-verifier version of a modified variant of the SNARK Pari with smaller proof size. We make Glock25 maliciously secure using a combination of Cut-and-Choose, Verifiable Secret Sharing (VSS), and Adaptor Signatures. These techniques reduce the communication, computational, and on-chain complexity of the protocol compared to other approaches to constructing a Glock, e.g. based on Groth16.
Krishnendu Chatterjee, Jan Matyáš Křišťan, Stefan Schmid, Jakub Svoboda, Michelle Yeo
Payment channel networks (PCNs) are a promising technology that alleviates blockchain scalability by shifting the transaction load from the blockchain to the PCN.
Nevertheless, the network topology has to be carefully designed to maximise the transaction throughput in PCNs. Additionally, users in PCNs also have to make optimal decisions on which transactions to forward and which to reject to prolong the lifetime of their channels.
In this work, we consider an input sequence of transactions over $p$ parties. Each transaction consists of a transaction size, source, and target, and can be either accepted or rejected (entailing a cost).
The goal is to design a PCN topology among the $p$ cooperating parties, along with the channel capacities, and then output a decision for each transaction in the sequence to minimise the cost of creating and augmenting channels,
as well as the cost of rejecting transactions.
Our main contribution is an $\mathcal{O}(p)$ approximation algorithm for the problem with $p$ parties.
We further show that with some assumptions on the distribution of transactions, we can reduce the approximation ratio to $\mathcal{O}(\sqrt{p})$.
We complement our theoretical analysis with an empirical study of our assumptions and approach in the context of the Lightning Network.
Yue Huang, Xin Wang, Haibin Zhang, Sisi Duan
Conventional Byzantine fault-tolerant protocols focus on the workflow within a group of nodes. In recent years, many applications of consensus involve communication across groups. Examples include communication between infrastructures running replicated state machines, sharding-based protocols, and cross-chain bridges. Unfortunately, little effort has been made to model the properties of communication across groups.
In this work, we propose a new primitive called cross-consensus reliable broadcast (XRBC). The XRBC primitive models the security properties of communication between two groups, where at least one group executes a consensus protocol. We provide three constructions of XRBC under different assumptions and present three different applications for our XRBC protocols: a cross-shard coordination protocol via a case study of Reticulum (NDSS 2024), a protocol for cross-shard transactions via a case study of Chainspace (NDSS 2018), and a solution for cross-chain bridges. Our evaluation results show that our protocols are highly efficient and benefit different applications. For example, in our case study on Reticulum, our approach achieves 61.16% lower latency than the vanilla approach.
Charlotte Bonte, Georgio Nicolas, Nigel P. Smart
We discuss how Fully Homomorphic Encryption (FHE), and in particular the TFHE scheme, can be used to define an e-voting scheme for the Alternative Vote (AV) election system. This system has a more complex tallying phase than traditional First-Past-The-Post (FPTP) election variants. Previous work on e-voting schemes that used homomorphic encryption has focused on FPTP systems only, and utilized mainly linearly homomorphic encryption. We show, by using FHE, that more complex electoral systems such as AV can also be supported by homomorphic encryption. We show security of our protocol by considering it as a simple MPC functionality, and we also show practicality by presenting some experimental runtimes using the tfhe-rs library.
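To see why AV tallying is harder than FPTP, here is a plaintext Python sketch of the instant-runoff elimination loop (reference logic only, with ties broken arbitrarily; the paper performs this kind of computation homomorphically under TFHE, which this sketch does not attempt):

```python
from collections import Counter

def av_tally(ballots):
    """Instant-runoff (Alternative Vote) tally over ranked ballots.

    Repeatedly eliminates the candidate with fewest first-choice votes.
    Each round re-examines every ballot, which is why AV needs more than
    the additive homomorphism that suffices for FPTP tallying.
    """
    candidates = {c for b in ballots for c in b}
    while True:
        counts = Counter(next(c for c in b if c in candidates)
                         for b in ballots
                         if any(c in candidates for c in b))
        total = sum(counts.values())
        leader, votes = counts.most_common(1)[0]
        if votes * 2 > total or len(candidates) == 1:
            return leader                       # majority reached
        candidates.remove(min(counts, key=counts.get))

ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
assert av_tally(ballots) == "C"  # B eliminated first; B-voters flow to C
```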
Gopal Anantharaman, Jintai Ding
A Symmetric Key Encryption scheme using Camera Zooming is presented using a familiar paper-pencil cipher. The camera can have a magnification/scaling up to some integer. The encrypter and decrypter are two hardware systems that are assumed to have the capability to zoom a given image with text from the resolution of a single character to a page by applying an appropriate scaling factor and an appropriate polynomial-time zoom algorithm. Using the symmetric key, the camera or a zooming-algorithm implementation repeatedly zooms on different boxes in an image filled with ciphertext and spurious redundant non-correlated pseudorandom data. Then, it decrypts the cipher using the Merkle-Hellman Knapsack Cryptosystem (MHKC) trapdoor algorithm, or variants, for efficient encryption/decryption. As shown, the MHKC algorithm's cryptanalytic vulnerability to lattice-based LLL and other density attacks would not affect this scheme, as long as the key is private.
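For background, the Merkle-Hellman knapsack trapdoor referenced above works as sketched below in Python, using the classic textbook parameters (a superincreasing private sequence disguised by modular multiplication); this is the standard scheme, not the paper's camera-zooming variant.

```python
# Private key: superincreasing sequence w, modulus q > sum(w),
# and multiplier r with gcd(r, q) = 1 (classic textbook parameters).
w = [2, 7, 11, 21, 42, 89, 180, 354]
q, r = 881, 588
b = [(r * wi) % q for wi in w]          # public key

def encrypt(bits):
    """Ciphertext is the subset sum of public weights selected by bits."""
    return sum(bi for bi, bit in zip(b, bits) if bit)

def decrypt(c):
    """Trapdoor: undo the modular disguise, then solve greedily."""
    s = (c * pow(r, -1, q)) % q         # now a superincreasing subset sum
    bits = []
    for wi in reversed(w):              # greedy solve, largest weight first
        bits.append(int(s >= wi))
        if s >= wi:
            s -= wi
    return bits[::-1]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert decrypt(encrypt(msg)) == msg
```

The greedy decryption works because the private sequence is superincreasing (each element exceeds the sum of all previous ones); the LLL-based attacks mentioned in the abstract recover the plaintext from the public subset-sum instance when its density is low.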
Ting-Yun Yeh
Kleptography was first proposed by Adam Young and Moti Yung in 1996, while the algorithm substitution attack was introduced by Mihir Bellare et al. as a variation of kleptography in 2014, after the Dual EC incident and the revelation of confidential documents by Edward Snowden. These two paradigms share a common goal: to enable attackers to embed covert capabilities into cryptographic implementations while maintaining the appearance of normal functionality. The goal of this paper is to consolidate existing research on kleptographic attacks, integrate it into a unified definition, and explore future directions for research in this field. The paper begins by introducing and comparing the two major branches of kleptographic attacks, traditional kleptography and the post-Snowden algorithm substitution attack, highlighting their theoretical distinctions, threat models, and historical development. Then, it analyzes the specific goals that attackers aim to achieve through such subversions and proposes a generalized definition of algorithm substitution attack that includes all these goals. The paper also presents practical examples framed within this definition and classifies prior research works as either strong or weak attacks based on their structure and undetectability. Finally, it discusses the current landscape of research in kleptographic attacks and suggests future directions from both the attack and defense perspectives.
Tianyao Gu, Afonso Tinoco, Sri Harish G Rajan, Elaine Shi
Making 2-party computation scale up to big datasets is a long-cherished dream of our community. More than a decade ago, a line of work implemented and optimized interactive RAM-model 2-party computation (2PC), achieving somewhat reasonable concrete performance on large datasets but unfortunately suffering $\widetilde{O}(T)$ roundtrips for a $T$-time computation. Garbled RAM promises to compress the number of roundtrips to $2$, and encouragingly, a line of recent work has designed concretely efficient Garbled RAM schemes whose asymptotic communication and computation costs almost match the best known interactive RAM-model 2PC, but still leaves $({\sf poly})\log\log$ gaps.
We present ${\sf PicoGRAM}$, a practical garbled RAM (GRAM) scheme that not only asymptotically matches the prior best RAM-model 2PC, but also achieves an order of magnitude concrete improvement in online time relative to interactive RAM-model 2PC, on a dataset of size $8$GB. Moreover, our work also gives the first Garbled RAM whose total cost (including bandwidth and computation) achieves an optimal dependency on the database size (up to an arbitrarily small super-constant factor).
Our work shows that for high-value real-life applications such as Signal, blockchains, and Meta that require oblivious accesses to large datasets, Garbled RAM is a promising direction towards eventually removing the trusted hardware assumptions that exist in production implementations today. Our open source code is available at https://github.com/picogramimpl/picogram.
Paul Gerhart, Davide Li Calsi, Luigi Russo, Dominique Schröder
Threshold Schnorr signatures enable $t$-out-of-$n$ parties to collaboratively produce signatures that are indistinguishable from standard Schnorr signatures, ensuring compatibility with existing verification systems. While static-secure constructions are well understood and achieve optimal round complexity, obtaining full adaptive security, i.e., withstanding up to $t-1$ dynamic corruptions, under standard assumptions has proven elusive: recent impossibility results (CRYPTO'25) either rule out known proof techniques for widely deployed schemes or require speculative assumptions and idealized models, while positive examples achieving full adaptivity from falsifiable assumptions incur higher round complexity (EUROCRYPT'25, CRYPTO'25).
We overcome these barriers with the first round-optimal threshold Schnorr signature scheme that, under a slightly relaxed security model, achieves full adaptive security from DDH in the random oracle model.
Our model is relaxed in the sense that the adversary may adaptively corrupt parties at any time, but each signer must refresh part of their public key after a fixed number of signing queries. These updates are executed via lightweight, succinct, stateless tokens, preserving the aggregated signature format. Our construction is enabled by a new proof technique, equivocal deterministic nonce derivation, which may be of independent interest.
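For context, a threshold scheme like the one above must jointly emit exactly the standard Schnorr $(R, z)$ signature format. A toy single-signer Python sketch over a small Schnorr group (illustrative parameters of my choosing, not the paper's threshold protocol):

```python
import hashlib, secrets

# Toy Schnorr group: p = 2q + 1 with q prime, g = 4 has order q.
p, q, g = 2039, 1019, 4

def H(*parts):
    """Fiat-Shamir challenge hash into Z_q."""
    h = hashlib.sha256(b"|".join(str(x).encode() for x in parts))
    return int.from_bytes(h.digest(), "big") % q

sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)

def sign(msg):
    k = secrets.randbelow(q - 1) + 1     # fresh per-signature nonce
    R = pow(g, k, p)
    c = H(R, pk, msg)
    z = (k + c * sk) % q
    return R, z

def verify(msg, R, z):
    c = H(R, pk, msg)
    return pow(g, z, p) == (R * pow(pk, c, p)) % p

R, z = sign("hello")
assert verify("hello", R, z)
```

In a $t$-out-of-$n$ scheme, the nonce $k$ and response $z$ are computed jointly from shares of $sk$, yet the verifier's check is unchanged; this is the compatibility property the abstract emphasizes.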
Sourav Das, Ling Ren, Ziling Yang
Threshold decryption schemes allow a group of decryptors, each holding a private key share, to jointly decrypt ciphertexts. Over the years, numerous threshold decryption schemes have been proposed for applications such as secure data storage, internet auctions, and voting, and recently as a tool to protect against miner-extractable value attacks in blockchain. Despite the importance and popularity of threshold decryption, many natural and practical threshold decryption schemes have only been proven secure against static adversaries.
In this paper, we present two threshold decryption schemes that withstand malicious adaptive corruption. Our first scheme is based on the standard ElGamal encryption scheme and is secure against chosen plaintext attack~(CPA). Our second scheme, based on the chosen ciphertext attack~(CCA) secure Shoup-Gennaro encryption scheme, is also CCA secure. Both of our schemes have non-interactive decryption protocols and comparable efficiency to their static secure counterparts. Building on the technique introduced by Das and Ren (CRYPTO 2024), our threshold ElGamal decryption scheme relies on the hardness of Decisional Diffie-Hellman and the random oracle model.
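The combining step of threshold ElGamal decryption can be sketched in a few lines: each server raises $c_1$ to its Shamir share of the secret key, and Lagrange coefficients are applied in the exponent, so the key is never reconstructed. A toy Python sketch with illustrative small-group parameters of my choosing (the paper's adaptively secure schemes add further machinery on top of this skeleton):

```python
import secrets

p, q, g = 2039, 1019, 4      # toy group: p = 2q + 1, g of order q
t, n = 2, 3                  # any t of the n servers can decrypt

# Dealer: Shamir-share the ElGamal secret key sk over Z_q (degree t-1).
sk = secrets.randbelow(q)
a1 = secrets.randbelow(q)
shares = {i: (sk + a1 * i) % q for i in range(1, n + 1)}
pk = pow(g, sk, p)

# Standard ElGamal encryption of a group element m under pk.
m = pow(g, 123, p)
r = secrets.randbelow(q)
c1, c2 = pow(g, r, p), (m * pow(pk, r, p)) % p

def lagrange(i, subset):
    """Lagrange coefficient for interpolating at 0, mod q."""
    num = den = 1
    for j in subset:
        if j != i:
            num = (num * -j) % q
            den = (den * (i - j)) % q
    return (num * pow(den, -1, q)) % q

# Each server i publishes its decryption share c1^{sk_i}; any t of
# them combine via Lagrange coefficients in the exponent.
subset = [1, 3]
d = 1
for i in subset:
    d = (d * pow(pow(c1, shares[i], p), lagrange(i, subset), p)) % p
assert (c2 * pow(d, -1, p)) % p == m   # recovers m; sk never reassembled
```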
Hanlin Liu, Xiao Wang, Kang Yang, Longhui Yin, Yu Yu
The Regular Syndrome Decoding (RSD) problem, introduced nearly two decades ago, is a regular version of the Syndrome Decoding (SD) problem, where the noise vector is divided into consecutive, equally sized blocks, each containing exactly one noisy coordinate. Recently, RSD has gained renewed attention for its applications in Pseudorandom Correlation Generator (PCG) and more. Very recently, several works presented the improved algebraic approach (AGB for short) and ISD approach (including regular-ISD and regular-RP) to utilize the regular structure of noise vectors and provide a more precise security evaluation for the RSD problem.
In this paper, we refine the AGB algorithm from a one-round process to a two-round process, and refer to the new algebraic algorithm as AGB 2.0. For each round, we guess a few noise-free positions, followed by a tailored partial XL algorithm. This interleaving strategy increases the probability of success by reducing the number of guessed noise-free positions and effectively lowers the problem's dimension in each round. By fine-tuning position guesses in each round and optimizing the aggregate running time, our AGB 2.0 algorithm reduces the concrete security of the RSD problem by up to $6$ bits for the parameter sets used in prior works, compared to the best-known attack. In particular, for a specific parameter set in Wolverine, the RSD security is $7$ bits below the $128$-bit target. We analyze the asymptotic complexity of algebraic attacks on the RSD problem over a finite field $\mathbb{F}$ with the field size $|\mathbb{F}|>2$, when the noise rate $\rho$ and code rate $R$ satisfy $\rho + R < 1$. If $n \rho R^2 = O(1)$ where $n$ is the noise length, the RSD problem over $\mathbb{F}$ can be solved in polynomial time, but it does not hold for the SD problem. We show that the ISD and its variants, including regular-ISD and regular-RP, are asymptotically less efficient than AGB for solving RSD problems with $R = o(1/\log(n))$ and $|\mathbb{F}| > e^{n\rho R}$.
Michael Adjedj, Geoffroy Couteau, Arik Galansky, Nikolaos Makriyannis, Oren Yomtov
The industry is moving away from passwords for authentication and authorization, with hardware devices for storing long-term cryptographic keys emerging as the leading alternative. However, these devices often have limited displays and remain vulnerable to theft, malware, or tricking users into signing malicious payloads. Current systems provide little fallback security in such cases. Any solution must also meet strict requirements: compatibility with industry standards, scalability to handle high request volumes, and high availability.
We present a novel design for authentication and authorization that meets these demands. Our approach virtualizes the authenticating/authorizing party via a two-party signing protocol with a helper entity, ensuring that keys remain secure even if a device is compromised and that every signed message conforms to a security policy.
We formalize the required properties for such protocols and show how they are met by existing schemes (e.g., FROST for Schnorr, Boneh–Haitner–Lindell-Segev'25 for ECDSA). Motivated by the widespread use of ECDSA (FIDO2/Passkeys, blockchains), we introduce a new, optimized two-party ECDSA protocol that is significantly more efficient than prior work. At its core is a new variant of exponent-VRF, improving on earlier constructions and of independent interest. We validate our design with a proof-of-concept virtual authenticator for the FIDO2 Passkeys framework.
Jonas Janneck, Jonas Meers, Massimo Ostuzzi, Doreen Riepel
An Authenticated Key Encapsulation Mechanism (AKEM) combines public-key encryption and digital signatures to provide confidentiality and authenticity.
AKEMs build the core of Hybrid Public Key Encryption (RFC 9180) and serve as a useful abstraction for messaging applications like the Messaging Layer Security (MLS) protocol (RFC 9420) and Signal's X3DH protocol. To date, most existing AKEM constructions either rely on classical (non post-quantum) assumptions or on unoptimized black-box approaches leading to suboptimal efficiency.
In this work, we choose a different abstraction level to combine KEMs and identification schemes more efficiently by leveraging randomness reuse. We construct a generic scheme and identify the necessary security requirements on the underlying KEM and identification scheme when reusing parts of their randomness. This allows for a concrete instantiation from isogenies based on the POKÉ KEM (EUROCRYPT'25) and the SQIsignHD identification scheme (EUROCRYPT'24). To be used in our black-box construction, the identification scheme requires the more advanced security property of response non-malleability. Hence, we further show that a slight modification of SQIsignHD satisfies this notion, which might be of independent interest.
Putting everything together, our final scheme yields the most compact AKEM from PQ assumptions with public keys of 366 bytes and ciphertexts of 216 bytes while fulfilling the strongest confidentiality and authenticity notions.
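To make the AuthEncap/AuthDecap interface (the shape defined in RFC 9180) and the randomness-reuse idea concrete, here is a minimal, deliberately insecure sketch over a toy 1019-element group. An ElGamal-style KEM stands in for POKÉ and Schnorr identification (made non-interactive via Fiat–Shamir) stands in for SQIsignHD; all parameters, names, and the particular way the seed is shared are illustrative assumptions, not the paper's construction.

```python
import hashlib
import secrets

# Toy group: the order-q subgroup of Z_p^* with p = 2q + 1 a safe prime.
# Parameters are tiny and for illustration only -- NOT secure.
p, q, g = 1019, 509, 4  # 4 = 2^2 is a quadratic residue, so it has order q

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()

def Hint(*parts: bytes) -> int:
    return int.from_bytes(H(*parts), "big") % q

# --- ElGamal-style KEM (placeholder for a real KEM) ---
def kem_keygen():
    y = secrets.randbelow(q - 1) + 1
    return y, pow(g, y, p)

def kem_encap(pk_r: int, r: int):
    ct = pow(g, r, p)
    ss = H(pow(pk_r, r, p).to_bytes(2, "big"))
    return ct, ss

def kem_decap(sk_r: int, ct: int):
    return H(pow(ct, sk_r, p).to_bytes(2, "big"))

# --- Schnorr identification, made non-interactive via Fiat-Shamir ---
def sig_keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def sign(sk_s: int, k: int, msg: bytes):
    R = pow(g, k, p)                        # commitment
    c = Hint(R.to_bytes(2, "big"), msg)     # challenge from hash
    s = (k + c * sk_s) % q                  # response
    return R, s

def verify(pk_s: int, msg: bytes, sig) -> bool:
    R, s = sig
    c = Hint(R.to_bytes(2, "big"), msg)
    return pow(g, s, p) == (R * pow(pk_s, c, p)) % p

# --- AKEM: AuthEncap / AuthDecap ---
def auth_encap(sk_s: int, pk_s: int, pk_r: int):
    seed = secrets.token_bytes(32)
    # Randomness reuse: a single seed feeds both the KEM coins and the
    # identification-scheme nonce (the structural idea; the paper's
    # concrete sharing for POKE/SQIsignHD is more involved).
    r = Hint(b"kem", seed)
    k = Hint(b"id", seed)
    ct, ss0 = kem_encap(pk_r, r)
    sig = sign(sk_s, k, ct.to_bytes(2, "big"))
    ss = H(ss0, pk_s.to_bytes(2, "big"), pk_r.to_bytes(2, "big"))
    return (ct, sig), ss

def auth_decap(sk_r: int, pk_r: int, pk_s: int, enc):
    ct, sig = enc
    assert verify(pk_s, ct.to_bytes(2, "big"), sig), "sender auth failed"
    ss0 = kem_decap(sk_r, ct)
    return H(ss0, pk_s.to_bytes(2, "big"), pk_r.to_bytes(2, "big"))
```

Note how the combined ciphertext is just (KEM ciphertext, identification transcript); shrinking this pair via shared randomness is what drives the 216-byte figure above.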
14 August 2025
Anubhav Baweja, Alessandro Chiesa, Elisabetta Fedele, Giacomo Fenzi, Pratyush Mishra, Tushar Mopuri, Andrew Zitek-Estrada
The sumcheck protocol is a fundamental building block in the design of probabilistic proof systems, and has become a key component of recent work on efficient succinct arguments.
We study time-space tradeoffs for the prover of the sumcheck protocol in the streaming model, and provide upper and lower bounds that tightly characterize the efficiency achievable by the prover.
$\bullet{}$ For sumcheck claims about a single multilinear polynomial we demonstrate an algorithm that runs in time $O(kN)$ and uses space $O(N^{1/k})$ for any $k \geq 1$. For non-adaptive provers (a class which contains all known sumcheck prover algorithms) we show this tradeoff is optimal.
$\bullet{}$ For sumcheck claims about products of multilinear polynomials, we describe a prover algorithm that runs in time $O(N(\log \log N + k))$ and uses space $O(N^{1/k})$ for any $k \geq 1$. We show that, conditioned on the hardness of a natural problem about multiplication of multilinear polynomials, any ``natural'' prover algorithm that uses space $O(N^{1/2 - \varepsilon})$ for some $\varepsilon > 0$ must run in time $\Omega(N(\log \log N + \log \varepsilon))$.
We implement and evaluate the prover algorithm for products of multilinear polynomials. We show that our algorithm consumes up to $120\times$ less memory compared to the linear-time prover algorithm, while incurring a time overhead of less than $2\times$.
The foregoing algorithms and lower bounds apply in the interactive proof model. We show that in the polynomial interactive oracle proof model one can in fact design a new protocol that achieves a better time-space tradeoff of $O(N^{1/k})$ space and $O(N(\log^* N + k))$ time for any $k \geq 1$.
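For orientation, the baseline linear-time sumcheck prover that these tradeoffs improve upon can be sketched as follows. This is the standard prover-verifier loop for a single multilinear polynomial given by its evaluation table over $\{0,1\}^n$, not the paper's streaming algorithm; the field modulus is an arbitrary choice for the sketch.

```python
import random

P = 2**61 - 1  # a Mersenne prime; any large prime field works for the sketch

def sumcheck_prove_and_verify(table, rng=random.Random(0)):
    """Run sumcheck for sum_{x in {0,1}^n} f(x), where f is the multilinear
    extension of `table` (length 2^n, entries taken mod P).  Prover and
    verifier are interleaved here for brevity."""
    n = (len(table) - 1).bit_length()
    evals = [t % P for t in table]
    claim = sum(evals) % P
    for _ in range(n):
        # Round message: the univariate g(X) obtained by fixing the first
        # free variable to X and summing the rest.  f is multilinear, so
        # deg(g) <= 1 and g(0), g(1) determine it.
        half = len(evals) // 2
        g0 = sum(evals[:half]) % P          # first variable = 0
        g1 = sum(evals[half:]) % P          # first variable = 1
        assert (g0 + g1) % P == claim, "verifier rejects"
        r = rng.randrange(P)                # verifier's random challenge
        claim = (g0 + r * (g1 - g0)) % P    # new claim: g(r)
        # Fold the table: fix the first variable to r.
        evals = [(evals[i] + r * (evals[half + i] - evals[i])) % P
                 for i in range(half)]
    # Final check: the claim must equal f(r_1, ..., r_n).
    assert claim == evals[0]
    return True
```

The folding step is what makes this prover linear-time but also $O(N)$-space: it keeps the whole halved table in memory each round, which is exactly the cost the streaming algorithms above trade against time.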
Katharina Boudgoust, Corentin Jeudy, Erkan Tairi, Weiqiang Wen
The Module Learning With Errors (M-LWE) problem has become a fundamental hardness assumption for lattice-based cryptography. It offers an attractive trade-off between strong security guarantees, sometimes based directly on worst-case lattice problems, and efficiency of the resulting cryptographic primitives. Different flavors of M-LWE have since been introduced to improve performance. Such variants consider different secret-error distributions and may allow for additional hints on the secret-error vector. Existing hardness results, however, only cover restricted classes of said distributions, or are tailored to specific leakage models. This lack of generality hinders the design of efficient and versatile cryptographic schemes, as each new distribution or leakage model requires a separate and nontrivial hardness evaluation.
In this work, we address this limitation by establishing the hardness of M-LWE under general distributions. As a first step, we show that M-LWE remains hard when the error vector follows an arbitrary bounded distribution with sufficient entropy, subject to some restriction on the number of samples. Building on this, we then reduce to the Hermite Normal Form (HNF) variant, in which the secret-error vector follows said arbitrary distribution. Overall, our result shows that the actual shape of the distribution does not matter, as long as it retains sufficient entropy.
To demonstrate the versatility of our framework, we further analyze a range of leakage scenarios. By examining the residual entropy given the leakage, we show that our results on M-LWE with general distributions encompass various types of leakage. More precisely, we cover exact and approximate linear hints, which are widely used in recent cryptographic designs, as well as quadratic and even non-algebraic forms, some of which were not previously covered by any theoretical hardness guarantee. The generality of our results aims to facilitate future cryptographic designs and security analyses.
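A minimal sketch of what an M-LWE sample looks like, and where a "general distribution" enters: the secret and error coefficients below are drawn from an arbitrary bounded sampler (here a skewed ternary one, not a Gaussian), echoing the abstract's point that only boundedness and entropy matter. Parameters are toy-sized and purely illustrative.

```python
import random

q, d, rank = 3329, 16, 2  # toy parameters: modulus, ring degree, module rank
rng = random.Random(0)

def poly_mul(a, b):
    """Negacyclic multiplication in R_q = Z_q[X] / (X^d + 1)."""
    res = [0] * d
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < d:
                res[k] = (res[k] + ai * bj) % q
            else:  # X^d = -1, so overflow terms wrap around with a sign flip
                res[k - d] = (res[k - d] - ai * bj) % q
    return res

def poly_add(a, b):
    return [(x + y) % q for x, y in zip(a, b)]

def skewed():
    """An arbitrary bounded, non-Gaussian coefficient distribution --
    the kind of distribution the hardness result above is meant to cover."""
    return rng.choice([-1, 0, 0, 1, 1])

def small_poly(sample):
    return [sample() % q for _ in range(d)]

def mlwe_sample(s):
    """One M-LWE sample: a uniform row a in R_q^rank and b = <a, s> + e."""
    a_row = [[rng.randrange(q) for _ in range(d)] for _ in range(rank)]
    e = small_poly(skewed)
    b = e
    for a_j, s_j in zip(a_row, s):
        b = poly_add(b, poly_mul(a_j, s_j))
    return a_row, b

s = [small_poly(skewed) for _ in range(rank)]  # secret from the same sampler
a_row, b = mlwe_sample(s)
```

Swapping `skewed` for any other bounded sampler with enough entropy leaves the sample-generation code unchanged, which is precisely the kind of design flexibility the generality result is meant to justify.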