IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
08 October 2025
Samuel Bouaziz-Ermann, Minki Hhan, Garazi Muguruza, Quoc-Huy Vu
There are various notions of quantum pseudorandomness, such as pseudorandom unitaries (PRUs), pseudorandom state generators (PRSGs), and pseudorandom function-like state generators (PRFSGs). Unlike the different notions of classical pseudorandomness, which are known to be existentially equivalent to each other, the relationships between these notions of quantum pseudorandomness have yet to be fully established.
We present some evidence suggesting that some forms of quantum pseudorandomness are unlikely to be constructed from others, or at least are hard to construct unless some conjectures are false. This indicates that quantum pseudorandomness could behave quite differently from classical pseudorandomness. We study new oracle worlds where one form of quantum pseudorandomness exists but another does not, under certain assumptions or constraints, and we provide potential directions toward achieving full black-box separations. More precisely:
- We give a unitary oracle relative to which PRFSGs exist, but PRUs that use no ancilla do not. This can be extended to general PRUs if a structural property of the PRU algorithm can be proven.
- Assuming a conjecture similar to an isoperimetric inequality, we show a unitary oracle world where log-length output PRFSGs exist, but proving the existence of quantum-computable pseudorandom generators (QPRGs) with negligible correctness error is as hard as proving that BQP ≠ QCMA. This result suggests that the inverse-polynomial error in the state-of-the-art construction of QPRGs from log-length PRSGs is inherent.
- Assuming the same conjecture, we prove that some natural methods of constructing super-log-length output PRSGs from log-length output PRFSGs are impossible. This partly complements the known hardness of shrinking PRSG output lengths. Along the way, we also discuss other potential approaches to extending PRSG output lengths.
All our worlds are based on (variants of) oracles that output Haar-random quantum states for each bit string, which can be viewed as a quantum version of the random oracle model, where output strings are replaced by quantum states.
Our results highlight technical difficulties when dealing with ancillary registers, measurements, and adaptivity in the quantum setting. As one of our key technical tools, we show an intriguing gentle behavior of intermediate measurements in algorithms producing outcome states with high purity, which may be of independent interest.
Yiting Liu, Biming Zhou, Haodong Jiang
In the post-quantum migration of traditional key establishment protocols, hybrid key encapsulation mechanisms (KEMs) are recommended by standards bodies and national security agencies, including NIST, ETSI, NCSC-UK, and BSI-Germany.
Recently, several CCA-secure hybrid KEMs, such as XOR-then-MAC, Dual-PRF, and X-Wing (being standardized by the IETF), have been proposed; they build on CCA-secure KEMs obtained by applying the complicated Fujisaki-Okamoto transform to public-key encryption (PKE) schemes.
In some cryptographic protocols, such as PQ-Noise and Signal, 1CCA security (similar to CCA security except that the adversary is restricted to a single decapsulation query) is required.
However, no scheme has been designed specifically to achieve 1CCA security (excluding schemes that aim for CCA security, which inherently encompasses 1CCA security).
In this paper, we propose CUKEM, a concise and unified hybrid KEM framework built directly on PKEs, and its variant CUKEM+, which achieves CCA security by replacing one PKE component with a nominal group. We prove that our schemes, equipped with different modules, achieve standard security notions in both the random oracle model and the quantum random oracle model, including IND-CPA, IND-1CCA, and IND-CCA. Compared to existing KEM-based constructions, CUKEM and CUKEM+ are more concise, as they simplify or even eliminate certain hash operations without compromising security. Our evaluation shows that the CCA-secure CUKEM+ achieves encapsulation and decapsulation speedups of up to 22.28% and 16.22%, respectively, over X-Wing, while the 1CCA-secure CUKEM attains gains of up to 13.97% and 104.31%.
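To fix intuition, here is a minimal sketch of the generic KEM-combiner pattern behind schemes such as X-Wing (an illustration of the pattern only, not the CUKEM construction; the label and input ordering are our own choices): the hybrid key is a hash of both shared secrets together with binding context.

```python
import hashlib

def combine(k1: bytes, k2: bytes, ct1: bytes, ct2: bytes, label: bytes) -> bytes:
    """Derive a hybrid shared secret from two KEMs' outputs.

    Hashing the ciphertexts alongside the shared secrets binds the derived
    key to this particular exchange; the hybrid key remains secure as long
    as at least one of the two KEMs is secure.
    """
    h = hashlib.sha3_256()
    for part in (label, k1, k2, ct1, ct2):
        h.update(len(part).to_bytes(2, "big") + part)  # length-prefix: unambiguous encoding
    return h.digest()

# Example: k1/ct1 from a post-quantum KEM, k2/ct2 from an elliptic-curve KEM.
hybrid_key = combine(b"k_pq", b"k_ec", b"ct_pq", b"ct_ec", b"hybrid-kem-v1")
```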
Lewis Glabush, Patrick Longa, Michael Naehrig, Chris Peikert, Douglas Stebila, Fernando Virdia
Large-scale quantum computers capable of implementing Shor's algorithm pose a significant threat to the security of the most widely used public-key cryptographic schemes. This risk has motivated substantial efforts by standards bodies and government agencies to identify and standardize quantum-safe cryptographic systems. Among the proposed solutions, lattice-based cryptography has emerged as the foundation for some of the most promising protocols.
This paper describes FrodoKEM, a family of conservative key-encapsulation mechanisms (KEMs) whose security is based on generic, "unstructured" lattices. FrodoKEM is proposed as an alternative to the more efficient lattice schemes that utilize algebraically structured lattices, such as the recently standardized ML-KEM scheme. By relying on generic lattices, FrodoKEM minimizes the potential for future attacks that exploit algebraic structures while enabling simple and compact implementations.
Our plain C implementations demonstrate that, despite its conservative design and parameterization, FrodoKEM remains practical. For instance, the full protocol at NIST security level 1 runs in approximately 0.97 ms on a server-class processor, and 4.98 ms on a smartphone-class processor.
FrodoKEM obtains (single-target) IND-CCA security using a variant of the Fujisaki-Okamoto transform, applied to an underlying public-key encryption scheme called FrodoPKE. In addition, using a new tool called the Salted Fujisaki-Okamoto (SFO) transform, FrodoKEM is also shown to tightly achieve multi-target security, without increasing the FrodoPKE message length and with a negligible performance impact, based on the multi-target IND-CPA security of FrodoPKE.
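For readers unfamiliar with the transform, here is a minimal sketch of the Fujisaki-Okamoto pattern (illustration only; the "PKE" below is a symmetric stand-in with pk = sk so the snippet runs, and FrodoKEM's actual transform differs in details such as implicit rejection): encryption is derandomized with coins G(m), the session key is H(m, c), and decapsulation re-encrypts to reject malformed ciphertexts.

```python
import hashlib, secrets

def G(m: bytes) -> bytes:            # derive encryption coins from the message
    return hashlib.sha256(b"coins" + m).digest()

def H(m: bytes, c: bytes) -> bytes:  # derive the session key
    return hashlib.sha256(b"key" + m + c).digest()

def pke_enc(pk: bytes, m: bytes, coins: bytes) -> bytes:
    # Deterministic given coins; the toy cipher XORs m with a coin-derived pad.
    pad = hashlib.sha256(pk + coins).digest()
    return coins + bytes(a ^ b for a, b in zip(m, pad))

def pke_dec(sk: bytes, c: bytes) -> bytes:
    coins, body = c[:32], c[32:]
    pad = hashlib.sha256(sk + coins).digest()
    return bytes(a ^ b for a, b in zip(body, pad))

def encaps(pk: bytes):
    m = secrets.token_bytes(32)
    c = pke_enc(pk, m, G(m))          # derandomized: coins = G(m)
    return c, H(m, c)

def decaps(sk: bytes, pk: bytes, c: bytes):
    m = pke_dec(sk, c)
    if pke_enc(pk, m, G(m)) != c:     # re-encryption check
        return None                   # real designs return an implicit-rejection key
    return H(m, c)

sk = pk = secrets.token_bytes(32)     # symmetric stand-in: pk equals sk
c, key = encaps(pk)
assert decaps(sk, pk, c) == key
```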
Theophilus Agama
We prove an extension of the lower bound due to Schönhage on addition chains.
Pierrick Dartois, Jonathan Komada Eriksen, Riccardo Invernizzi, Frederik Vercauteren
In this paper, we revisit the recent PEGASIS algorithm that computes an effective group action of the class group of any imaginary quadratic order $R$ on a set of supersingular elliptic curves primitively oriented by $R$. Although PEGASIS was the first algorithm showing the practicality of computing unrestricted class group actions at higher security levels, it is complicated and prone to failures, which leads to many rerandomizations.
In this work, we present a new algorithm, qt-Pegasis, which is much simpler yet faster, and, since it never fails, removes the need to rerandomize the ideal we want to act with. It leverages the main technique of the recent qlapoti approach. However, qlapoti solves a norm equation in a quaternion algebra, which corresponds to the full endomorphism ring of a supersingular elliptic curve. We show that the algorithm still applies in the quadratic setting, by embedding the quadratic ideal into a quaternion ideal using a technique similar to the one applied in KLaPoTi. This way, we can reinterpret the output of qlapoti as four equivalent quadratic ideals, instead of two equivalent quaternion ideals. We then show how to construct a Clapoti-like diagram in dimension $2$, which embeds the action of the ideal in a $4$-dimensional isogeny.
We implemented our qt-Pegasis algorithm in SageMath for the CSURF group action, and we achieve a speedup over PEGASIS of $1.8\times$ for the 500-bit parameters and $2.6\times$ for the 4000-bit parameters.
Anna Guinet, Carina Graw, Lukas Koletzko, Jan Richter-Brockmann, Holger Dette, Tim Güneysu
The random probing model is a theoretical model that abstracts the physical leakage of an embedded device running a cryptographic scheme, with more realistic assumptions than the threshold probing model. It assumes that the wires of the target device leak their assigned values with probability $p$, and the leaked values may reveal information about secret data, which could lead to a security violation. From this, one can compute the probability $\epsilon$ that a side-channel adversary learns secret data from a random combination of wires, as a function of the number of wire combinations that breach security and of the leakage rate $p$. This model is used to evaluate the security of masked cryptographic implementations, simply called circuits; so far, the research community has focused on approximating or estimating the probability $\epsilon$ for one circuit. Yet, no proposition has been made to quickly compare the probabilities $\epsilon$ of different circuits, e.g., a circuit and its optimized version. In this context, we present two statistical tests to make decisions about the level of security in the random probing model: the equivalence test compares the security of two circuits in terms of their $\epsilon$'s, and the superiority test decides whether the undetermined $\epsilon$ of one circuit falls below a security threshold $\epsilon_0$, both with quantified uncertainty about the computations of the probabilities $\epsilon$. The tests are proven mathematically sound and validated via empirical studies on small masked S-boxes.
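For intuition, a common formulation in the random probing literature (the paper's exact definitions may differ) expresses the breach probability as a polynomial in the leakage rate:

\[
\epsilon \;=\; f(p) \;=\; \sum_{i=1}^{s} c_i \, p^{i} (1-p)^{s-i},
\]

where $s$ is the number of wires in the circuit and $c_i$ counts the size-$i$ wire combinations whose joint values reveal the secret. Estimating the coefficients $c_i$ is the computationally expensive step, which is what motivates statistical tests with quantified uncertainty.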
André Chailloux, Paul Hermouet
Chen, Liu, and Zhandry [CLZ22] introduced the problems S|LWE⟩ and C|LWE⟩ as quantum analogues of the Learning with Errors problem, designed to construct quantum algorithms for the Inhomogeneous Short Integer Solution (ISIS) problem. Several later works have used this framework for constructing new quantum algorithms in specific cases. However, the general relation between all these problems is still unknown.
In this paper, we investigate the equivalence between S|LWE⟩ and ISIS. We present the first fully generic reduction from ISIS to S|LWE⟩, valid even in the presence of errors in the underlying algorithms. We then explore the reverse direction, introducing an inhomogeneous variant of C|LWE⟩, denoted IC|LWE⟩, and show that IC|LWE⟩ reduces to S|LWE⟩. Finally, we prove that, under certain recoverability conditions, an algorithm for ISIS can be transformed into one for S|LWE⟩. We instantiate this reverse reduction by tweaking a known algorithm for (I)SIS∞ in order to construct a quantum algorithm for S|LWE⟩ when the alphabet size q is a small power of 2, recovering some results of Bai et al. [BJK+25]. Our results thus clarify the landscape of reductions between S|LWE⟩ and ISIS, showing both their strong connection and the remaining barriers to a full equivalence.
Yuval Efron, Joachim Neu, Ling Ren, Ertem Nusret Tas
In the context of Byzantine consensus problems such as Byzantine broadcast (BB) and Byzantine agreement (BA), the good-case setting aims to study the minimal possible latency of a BB or BA protocol under certain favorable conditions, namely the designated leader being correct (for BB), or all parties having the same input value (for BA). We provide a full characterization of the feasibility and impossibility of good-case latency, for both BA and BB, in the synchronous sleepy model. Surprisingly to us, we find irrational resilience thresholds emerging: 2-round good-case BB is possible if and only if at all times, at least $\frac{1}{\varphi} \approx 0.618$ fraction of the active parties are correct, where $\varphi = \frac{1+\sqrt{5}}{2} \approx 1.618$ is the golden ratio; 1-round good-case BA is possible if and only if at least $\frac{1}{\sqrt{2}} \approx 0.707$ fraction of the active parties are correct.
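For reference, both thresholds have simple closed forms; the first follows from the golden ratio's defining identity $\varphi^2 = \varphi + 1$, which gives $1/\varphi = \varphi - 1$:

\[
\varphi = \frac{1+\sqrt{5}}{2} \approx 1.618, \qquad
\frac{1}{\varphi} = \varphi - 1 = \frac{\sqrt{5}-1}{2} \approx 0.618, \qquad
\frac{1}{\sqrt{2}} = \frac{\sqrt{2}}{2} \approx 0.707.
\]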
Prabhanjan Ananth, Eli Goldin
Quantum cryptographic definitions are often sensitive to the number of copies of the cryptographic states revealed to an adversary. Making definitional changes to the number of copies accessible to an adversary can drastically affect various aspects including the computational hardness, feasibility, and applicability of the resulting cryptographic scheme. This phenomenon appears in many places in quantum cryptography, including quantum pseudorandomness and unclonable cryptography.
To address this, we present a generic approach to boost single-copy security to multi-copy security and apply this approach to many settings. As a consequence, we obtain the following new results:
• One-copy stretch pseudorandom state generators (under mild assumptions) imply the existence of t-copy stretch pseudorandom state generators, for any fixed polynomial t.
• One-query pseudorandom unitaries with short keys (under mild assumptions) imply the existence of t-query pseudorandom unitaries with short keys, for any fixed polynomial t.
• Assuming indistinguishability obfuscation and other standard cryptographic assumptions, there exist identical-copy secure unclonable primitives such as public-key quantum money and quantum copy-protection.
Alisa Pankova, Jelizaveta Vakarjuk
The European Digital Identity (EUDI) Wallet aims to provide end users with a way to obtain attested credentials from issuers and present them to different relying parties. An important property mentioned in the regulatory frameworks is the possibility of revoking a previously issued credential. While it is possible to issue short-lived credentials, in some cases this may be inconvenient, and a separate revocation service that allows revoking a credential at any time may be necessary.
In this work, we propose a full end-to-end description of a generic credential revocation system, which technically relies on a single server and secure transmission channels between parties. We prove security of the proposed revocation functionality in the universal composability model, and estimate its efficiency based on a proof-of-concept implementation.
Antonin Leroux, Maxime Roméas
Updatable Encryption (UE) allows ciphertexts to be updated under new keys without decryption, enabling efficient key rotation. Constructing post-quantum UE with strong security guarantees is challenging: the only known CCA-secure scheme, COM-UE, uses bitwise encryption, resulting in large ciphertexts and high computational costs.
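As a toy illustration of the UE interface (in no way DINE's isogeny-based construction, and not secure: deterministic multiplicative masking is trivially breakable), the following shows the characteristic operation, re-keying a ciphertext with an update token and no decryption:

```python
import secrets

p = (1 << 127) - 1                            # Mersenne prime, toy modulus

def keygen() -> int:
    return secrets.randbelow(p - 2) + 2       # nonzero, invertible mod p

def enc(k: int, m: int) -> int:
    return (m * k) % p                        # toy: multiplicative masking

def dec(k: int, c: int) -> int:
    return (c * pow(k, -1, p)) % p

def token(k_old: int, k_new: int) -> int:
    return (k_new * pow(k_old, -1, p)) % p    # update token = k_new / k_old

def update(delta: int, c: int) -> int:
    return (c * delta) % p                    # server re-keys without decrypting

k1, k2 = keygen(), keygen()
c1 = enc(k1, 42)
c2 = update(token(k1, k2), c1)                # ciphertext now under k2
assert dec(k2, c2) == 42
```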
We introduce DINE, a CCA-secure, isogeny-based post-quantum UE scheme that is both compact and efficient. Each encryption, decryption, or update requires only a few power-of-2 isogeny computations in dimension 2 to encrypt 28B messages, yielding 320B ciphertexts and 224B update tokens at NIST security level 1---significantly smaller than prior constructions. Our full C implementation demonstrates practical performance: updates in 7ms, encryptions in 48ms, and decryptions in 86ms.
Our design builds on recent advances in isogeny-based cryptography, combining high-dimensional isogeny representations with the Deuring correspondence. We also introduce new algorithms for the Deuring correspondence which may be of independent interest. Moreover, the security of our scheme relies on new problems that might open interesting perspectives in isogeny-based cryptography.
Martin R. Albrecht, Joël Felderhoff, Russell W. F. Lai, Oleksandra Lapiha, Ivy K. Y. Woo
The Leftover Hash Lemma (LHL) states that \(\mathbf{X} \cdot \mathbf{v}\) for a Gaussian \(\mathbf{v}\) is an essentially independent Gaussian sample. It has seen numerous applications in cryptography for hiding sensitive distributions of \(\mathbf{v}\). We generalise the Gaussian LHL, initially stated over \(\mathbb{Z}\) by Agrawal, Gentry, Halevi, and Sahai (2013), to modules over number fields. Our results have a sub-linear dependency on the degree of the number field and require only polynomial norm growth \(\lVert\mathbf{v}\rVert/\lVert\mathbf{X}\rVert\). To this end, we also prove conditions under which \(\mathbf{X}\) is surjective (assuming the Generalised Riemann Hypothesis) and give bounds on the smoothing parameter of the kernel of \(\mathbf{X}\). We further establish when the resulting distribution is independent of the geometry of \(\mathbf{X}\), and prove the hardness of the \(k\)-SIS and \(k\)-LWE problems over modules (\(k\)-MSIS/\(k\)-MLWE) based on the hardness of SIS and LWE over modules (MSIS/MLWE), respectively, which was assumed without proof in prior works.
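Schematically (an informal rendering of the statement's shape, with all parameters and conditions omitted; not the paper's exact theorem): for \(\mathbf{v} \sim D_{\mathbb{Z}^m, s}\),

\[
\mathbf{X} \cdot \mathbf{v} \;\approx_{\varepsilon}\; D_{\mathbf{X}\mathbb{Z}^m,\; s\sqrt{\mathbf{X}\mathbf{X}^{\top}}},
\]

i.e., the output is statistically close to an ellipsoidal discrete Gaussian over the image lattice \(\mathbf{X}\mathbb{Z}^m\).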
Seunghyun Cho, Eunyoung Seo, Young-Sik Kim
We propose Locally Recoverable Data Availability Sampling (LR-DAS), which upgrades binary, threshold-based availability to graded verification by leveraging optimal locally recoverable codes (e.g., Tamo-Barg). Local groups of size $r+\alpha$ serve as atomic certification units: once $r$ verified openings fix a degree-$
Benedikt Bünz, Jessica Chen, Zachary DeStefano
Permutation and lookup arguments are at the core of most deployed SNARK protocols today. Most modern techniques for performing them require a grand product check. This requires either committing to large field elements (e.g., in Plonk) or using GKR (e.g., in Spartan), which has worse verifier cost and proof size. Sadly, both have a soundness error that grows linearly with the input size.
We present two permutation arguments that have $\text{polylog}(n)/|\mathbb{F}|$ soundness error -- for reasonable input size $n=2^{32}$ and field size $|\mathbb{F}|=2^{128}$, the soundness error improves significantly from $2^{-96}$ to $2^{-120}$. Moreover, the arguments achieve $\log(n)$ verification cost and proof size without ever needing to commit to anything beyond the witness. $\mathsf{BiPerm}$ only requires the prover to perform $O(n)$ field operations on top of committing to the witness, but at the cost of limiting the choices of PCS. We show a stronger construction, $\mathsf{MulPerm}$, which has no restriction on the PCS choice and its prover performs essentially linear field operations, $n\cdot \tilde O(\sqrt{\log(n)})$.
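To unpack the quoted figures (reading a polylog factor of about $2^{8}$ out of the numbers; the abstract does not state it explicitly):

\[
\frac{n}{|\mathbb{F}|} = \frac{2^{32}}{2^{128}} = 2^{-96},
\qquad
\frac{\mathrm{polylog}(n)}{|\mathbb{F}|} \approx \frac{2^{8}}{2^{128}} = 2^{-120}.
\]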
Our permutation arguments generalize to lookups. We demonstrate how our arguments can be used to improve SNARK systems such as HyperPlonk and Spartan, and build a GKR-based protocol for proving non-uniform circuits.
Kunming Jiang, Fraser Brown, Riad S. Wahby
General-purpose probabilistic proof systems operate on programs expressed as systems of arithmetic constraints, an unfriendly representation. There are two broad approaches in the literature to turning friendlier, high-level programs into constraints suitable for proof systems: direct translation and CPU emulation. Direct translators compile a program into highly optimized constraints; unfortunately, this process requires expressing all possible paths through the program, which results in compile times that scale with the program’s runtime rather than its size. In addition, the prover must pay the cost of every possible program path, even those untaken by a given input. In contrast, CPU emulators don’t compile programs to constraints; instead, they “execute” those programs, expressed as CPU instructions, on a CPU emulator that is itself expressed as constraints. As a result, this approach can’t perform powerful, program-specific optimizations, and may require thousands of constraints where direct translation could use a clever handful. Worse, CPU emulators inherit an impractically expensive program state representation from the CPUs they emulate.
This paper presents a compiler and proof system, CoBBl, that combines the benefits of CPU emulation and direct translation: it takes advantage of program-specific optimizations, but doesn’t pay for an unnecessary state representation or unexecuted computation. CoBBl outperforms CirC, a state-of-the-art direct translator, by $1$–$30\times$ on compile time and $26$–$350\times$ on prover time, and outperforms Jolt, a state-of-the-art CPU emulator, on prover time by $1.1$–$1.8\times$ on Jolt-friendly benchmarks, and by up to $100\times$ on other benchmarks.
Anindya Ganguly, Angshuman Karmakar, Suparna Kundu, Debranjan Pal, Sumanta Sarkar
Blind signatures (BS) allow a signer to produce a valid signature on a message without learning the message itself. They have niche applications in privacy-preserving protocols such as digital cash and electronic voting. Non-interactive blind signatures (NIBS) remove the need for interaction between the signer and the user. In the post-quantum era, lattice-based NIBS schemes are studied as candidates for long-term security.
In Asiacrypt 2024, Baldimtsi et al. proposed the first lattice-based NIBS construction, whose security relies on the random one-more inhomogeneous short integer solution (rOM-ISIS) assumption, which is considered non-standard. Later, Zhang et al. introduced another lattice-based construction in ProvSec 2024 and proved its security under the standard module short integer solution (MSIS) assumption. We analyse the security of the latter scheme. In the random oracle model, we show that it fails to achieve both nonce blindness and receiver blindness. We present explicit attacks where an adversary breaks both properties with probability 1. Our attack is based on a crucial observation that uncovers a flaw in the design. Specifically, this flaw allows an attacker to link a message-signature pair with its presignature-nonce pair. In addition, we also identify a flaw in the unforgeability proof. Finally, we suggest a modification to address the issue; it is similar to the Baldimtsi et al. construction, and its security again relies on the non-standard rOM-ISIS assumption. This work again raises the question of the feasibility of achieving NIBS from standard assumptions.
Konrad Hanff, Anja Lehmann, Cavit Özbay
Privacy Pass is an anonymous authentication protocol that was initially designed by Davidson et al. (PETS’18) to reduce the number of CAPTCHAs that Tor users must solve. It issues single-use authentication tokens with anonymous and unlinkable redemption guarantees. The issuer and verifier of the protocol share a symmetric key, and tokens are privately verifiable. The protocol has sparked interest from both academia and industry, which led to an Internet Engineering Task Force (IETF) standard. While Davidson et al. formally analyzed the original protocol, the IETF standard introduces several changes to their protocol. Thus, the standardized version’s formal security remains unexamined. We fill this gap by analyzing the IETF standard’s privately verifiable Privacy Pass protocol.
In particular, there are two main discrepancies between the analyzed and standardized versions. First, the IETF version introduces a redemption context, which can be used for blindly embedding a validity period into Privacy Pass tokens. We show that this variant differs significantly from the public metadata extension that has been proposed for the same purpose in the literature: the redemption context offers better privacy and security than public metadata. We capture both stronger guarantees through game-based security definitions and show that the currently considered one-more unforgeability notion for Privacy Pass is insufficient when a redemption context is used. Thus, we propose a new property, targeted context unforgeability, and prove its incomparability to one-more unforgeability. Second, Davidson et al. focused on a concrete Diffie-Hellman-based construction, whereas the IETF version is built generically from a verifiable oblivious pseudorandom function (VOPRF). Further, the analyzed protocol omitted the full redemption phase needed to prevent double-spending. We prove that the generic IETF construction satisfies the desired security and privacy guarantees covering the full life-cycle of tokens. Our analysis relies on natural security properties of VOPRFs, providing compatibility with any secure VOPRF instantiation. This enables crypto agility, e.g., allowing a switch to efficient quantum-safe VOPRFs when they become available.
Barbara Jiabao Benedikt, Marc Fischlin
Fiat-Shamir signatures replace the challenge in interactive identification schemes by a hash value over the commitment, the message, and possibly the signer's public key. This construction paradigm is well known and widely used in cryptography, for example, for Schnorr signatures and CRYSTALS-Dilithium. There is no ``general recipe'' for constructing Fiat-Shamir signatures regarding the inputs and their order for the hash computation, though, since the hash function is usually modeled as a monolithic random oracle. In practice, however, the hash function is based on the Merkle-Damgård or the sponge design.
Our work investigates whether there are advisable or imprudent input orders for hashing in Fiat-Shamir signatures. We examine Fiat-Shamir signatures with plain and nested hashing using Merkle-Damgård or sponge-based hash functions. We analyze these constructions in both classical and quantum settings. As part of our investigations, we introduce new security properties following the idea of quantum-annoyance of Eaton and Stebila (PQCrypto 2021), called annoyance for user exposure and signature forgeries. These properties ensure that an adversary against the hash function cannot gain a significant advantage when attempting to extend a successful attack on a single signature forgery to multiple users or to multiple forgeries of a single user. Instead, the adversary must create extra forgeries from scratch. Based on our analysis, we derive a simple rule: When using Fiat-Shamir signatures, one should hash the commitment before the message; all other inputs may be ordered arbitrarily.
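To make the derived rule concrete, here is a minimal toy Schnorr signature in which the Fiat-Shamir challenge hashes the commitment before the public key and message (toy group parameters chosen for illustration; real schemes use large groups, and the exact input orders analyzed in the paper vary):

```python
import hashlib, secrets

# Tiny Schnorr group: p = 2*q + 1, and g = 2 generates the order-q subgroup
# of Z_p^* (toy parameters for illustration only -- far too small to be secure).
p, q, g = 23, 11, 2

def H(*parts: bytes) -> int:
    """Fiat-Shamir challenge: hash the inputs in the given order."""
    h = hashlib.sha256()
    for part in parts:
        h.update(len(part).to_bytes(2, "big") + part)  # length-prefix the inputs
    return int.from_bytes(h.digest(), "big") % q

def sign(x: int, msg: bytes):
    k = secrets.randbelow(q - 1) + 1                   # ephemeral nonce in [1, q-1]
    R = pow(g, k, p)                                   # commitment
    y = pow(g, x, p)                                   # public key
    # The paper's rule: hash the commitment *before* the message.
    c = H(R.to_bytes(1, "big"), y.to_bytes(1, "big"), msg)
    s = (k + c * x) % q
    return R, s

def verify(y: int, msg: bytes, sig) -> bool:
    R, s = sig
    c = H(R.to_bytes(1, "big"), y.to_bytes(1, "big"), msg)
    return pow(g, s, p) == (R * pow(y, c, p)) % p

x = 7                                                  # secret key
y = pow(g, x, p)                                       # public key y = g^x
assert verify(y, b"hello", sign(x, b"hello"))
```

Per the abstract, only the commitment/message order is constrained; the remaining inputs may be ordered arbitrarily.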
Ganyuan Cao, Sylvain Chatel, Christian Knabenhans
On-the-fly multi-party computation (MPC), introduced by López-Alt, Tromer, and Vaikuntanathan (STOC 2012), enables clients to dynamically join a computation without remaining continuously online. Yet, the original proposal suffers from substantial efficiency and expressivity limitations hindering practical deployments. Even though various techniques have been proposed to mitigate these shortcomings, seeing on-the-fly MPC as a combination of independent building blocks jeopardizes the security of the original model.
Thus, we revisit on-the-fly MPC in light of recent advances and extend its formal framework to incorporate efficiency and expressivity improvements. Our approach is built around \emph{multi-group homomorphic encryption} (MGHE), which generalizes threshold and multi-key HE and serves as the core primitive for on-the-fly MPC. Our contributions are fourfold:
i) We propose new security notions for MGHE (e.g., IND-CPA with partial decryption, circuit privacy) and justify their suitability for on-the-fly MPC.
ii) We present the first ideal functionality for MGHE in the Universal Composability (UC) framework and characterize the conditions under which it can be realized, via reductions to our proposed security notions.
iii) We present a generic protocol that securely realizes our on-the-fly MPC functionality against a semi-malicious adversary from our MGHE functionality.
iv) Finally, we provide two generic compilers that lift these protocols to withstand a fully malicious adversary by leveraging zero-knowledge arguments.
Our analysis in the UC framework enables modular protocol analysis, where more efficient schemes can be seamlessly substituted as long as they meet the required security defined by the functionalities, retaining the security guarantees offered by the original construction.
Jonas Janneck
Following the announcement of the first winners of the NIST post-quantum cryptography standardization process in 2022, cryptographic protocols are now undergoing migration to the newly standardized schemes. In most cases, this transition is realized through a hybrid approach, in which algorithms based on classical hardness assumptions, such as the discrete logarithm problem, are combined with post-quantum algorithms that rely on quantum-resistant assumptions, such as the Short Integer Solution (SIS) problem.
A combiner for signature schemes can be obtained by simply concatenating the signatures of both schemes. This construction preserves unforgeability of the underlying schemes; however, it does not extend to stronger notions, such as strong unforgeability. Several applications, including authenticated key exchange and secure messaging, inherently require strong unforgeability, yet no existing combiner is known to achieve this property.
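For concreteness, the concatenation combiner looks as follows (a generic sketch, not one of the paper's three combiners; the MAC-based demo "signatures" are stand-ins): a combined signature verifies only if both components do, so existential unforgeability survives if either scheme is secure, but a forger who can maul the broken component's signature obtains a fresh valid pair on the same message, breaking strong unforgeability.

```python
import hashlib, hmac
from typing import Callable, Tuple

def combine_sign(sign1: Callable, sign2: Callable, sk1, sk2, msg: bytes) -> Tuple:
    # Sign independently with both schemes and concatenate.
    return (sign1(sk1, msg), sign2(sk2, msg))

def combine_verify(ver1: Callable, ver2: Callable, pk1, pk2, msg: bytes, sig: Tuple) -> bool:
    # Accept only if *both* component signatures verify.
    s1, s2 = sig
    return ver1(pk1, msg, s1) and ver2(pk2, msg, s2)

# Demo with MACs standing in for the two signature schemes (illustration only).
mac_sign = lambda k, m: hmac.new(k, m, hashlib.sha256).digest()
mac_verify = lambda k, m, s: hmac.compare_digest(mac_sign(k, m), s)

sig = combine_sign(mac_sign, mac_sign, b"sk1", b"sk2", b"msg")
assert combine_verify(mac_verify, mac_verify, b"sk1", b"sk2", b"msg", sig)
```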
This work introduces three practical combiners that preserve strong unforgeability and all BUFF (beyond unforgeability features) properties. Each combiner is tailored to a specific class of classical signature schemes, capturing all broadly used schemes that are strongly unforgeable. Remarkably, all combiners can be instantiated with any post-quantum signature scheme in a black-box way, making deployment practical and significantly less error-prone. The proposed solutions are furthermore highly efficient, with signatures at most the size of those produced by the (insecure) concatenation combiner. For instance, our most efficient combiner enables the combination of EdDSA with ML-DSA, yielding a signature size that is smaller than the sum of an individual EdDSA signature and an individual ML-DSA signature.
Additionally, we identify a novel signature property that we call random-message validity and show that it can be used to replace the BUFF transform with the more efficient Pornin-Stern transform. The notion may be of independent interest.