IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
08 October 2025
Michael Reichle, Zoé Reinke
Blind signatures are a versatile cryptographic primitive with many applications, especially in privacy-preserving technologies. Threshold blind signature schemes (TBS) enhance blind signatures with a signing procedure distributed among up to n signers to reduce the risk attached to the compromise of the secret key.
Blind signatures and TBS in pairing-free groups often rely on strong assumptions, e.g., the algebraic group model (AGM) or interactive assumptions. A recent line of work initiated by Chairattana-apirom, Tessaro and Zhu (Crypto'24), hereafter CTZ, manages to construct blind signatures in pairing-free groups in the random oracle model (ROM) without resorting to the AGM. While CTZ gives a construction from CDH, the scheme suffers from large signatures. Recent works have improved the efficiency, but at the cost of relying on a decisional assumption, namely DDH. In this work, we close this gap by giving an efficient blind signature in pairing-free groups proven secure under CDH in the ROM. Our signatures are 320 bytes in size, which is a 32× improvement over CTZ’s CDH-based construction. Further, we give the first TBS in pairing-free groups that does not rely on the AGM by thresholdizing our blind signature. Likewise, our TBS is proven secure under CDH in the ROM.
To achieve this, our starting point is the efficient scheme introduced by Klooß, Reichle and Wagner (Asiacrypt'24). We manage to avoid the DDH assumption in the security argument by carefully hiding critical information from the user during the signing phase. At the cost of only 3 additional $\mathbb{Z}_p$ elements in signature size, this allows us to prove security under CDH.
Jean-François Biasse, Fang Song
In this paper, we provide details on the proofs of the quantum polynomial time algorithm of Biasse and Song (SODA 16) for computing the $S$-unit group of a number field. This algorithm directly implies polynomial time methods to calculate class groups, $S$-class groups, relative class group and unit group, ray class groups, solve the principal ideal problem, solve certain norm equations, and decompose ideal classes in the ideal class group. Additionally, combined with a result of Cramer, Ducas, Peikert and Regev (Eurocrypt 2016), the resolution of the principal ideal problem allows one to find short generators of a principal ideal. Likewise, methods due to Cramer, Ducas and Wesolowski (Eurocrypt 2017) use the resolution of the principal ideal problem and the decomposition of ideal classes to find so-called ``mildly short vectors'' in ideal lattices of cyclotomic fields.
Chengrui Dang, Xv Zhou, Bei Liang
Fuzzy PSI is a variant of PSI which, on input a set of points from the receiver and the sender respectively, allows the receiver to learn which of the sender's points lie within a threshold distance $\delta$ of one of its own points under a specific distance metric.
Baarsen and Pu (EUROCRYPT'24) first proposed efficient fuzzy PSI protocols for general $L_{p}$ distances (where $p \in [1, \infty]$) in $d$-dimensional space, achieving communication complexity linear in the input size, $\delta$, and $2^d d$. However, they leave open the question of whether the prefix technique of Chakraborti et al. (USENIX Security'23) can further reduce the communication complexity of their fuzzy PSI protocols in both low and high dimensions.
In this work, we thoroughly explore using the prefix technique to reduce the complexity of fuzzy PSI. First, we propose fuzzy matching protocols for $L_{\infty}$ and $L_p$ distances, where the communication complexity is improved from $O(\delta d)$ to $O(\log\delta\, d)$ for $L_\infty$, and from $O(\delta^p)$ to $O((\log \delta)^d p)$ for $L_p$ distance. By applying our fuzzy matching protocol in conjunction with spatial hashing, we propose fuzzy PSI protocols for low-dimensional space. For high-dimensional space, we present the first fuzzy PSI protocols achieving communication and computation complexity that scales logarithmically in $\delta$ and linearly in dimension $d$ and input set sizes.
We implement our fuzzy PSI protocols and compare them with state-of-the-art protocols. Experimental results demonstrate that our protocols achieve superior performance for large $\delta$: for input size $N=2^8$, $d=5$, and $\delta=256$, our protocol requires $10$--$36\times$ less running time and $3$--$4.5\times$ lower communication than existing protocols.
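For intuition, here is a minimal, self-contained Python sketch of the one-dimensional prefix technique underlying this improvement (function and parameter names are illustrative assumptions, not taken from the protocol): the interval $[x-\delta, x+\delta]$ is covered by $O(\log \delta)$ binary prefixes instead of being enumerated as $O(\delta)$ explicit points.

def interval_to_prefixes(lo: int, hi: int, bits: int = 32) -> list[str]:
    # Cover the integer interval [lo, hi] with a minimal set of binary prefixes.
    prefixes = []
    while lo <= hi:
        # Largest power-of-two-aligned block starting at lo that still fits in [lo, hi].
        size = lo & -lo if lo > 0 else 1 << bits
        while size > hi - lo + 1:
            size //= 2
        shift = size.bit_length() - 1
        prefixes.append(format(lo >> shift, "b").zfill(bits - shift))
        lo += size
    return prefixes

# Example: a radius-256 neighbourhood around x = 1000 is covered by a handful of
# prefixes instead of 513 explicit values.
x, delta = 1000, 256
prefixes = interval_to_prefixes(x - delta, x + delta)
print(len(prefixes))

Matching each coordinate against such a prefix set, rather than against $2\delta+1$ explicit values, is what brings the per-dimension cost down from $O(\delta)$ to $O(\log\delta)$.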
David Kretzler, Yong Li
In anonymous token protocols, clients obtain access tokens by proving eligibility for the usage of a resource and later get access to the resource by redeeming the token. The server verifying eligibility and providing the resource cannot link the token issuance to its redemption. To prevent ineligible clients from accessing resources, it is crucial to prevent the transfer or sale of tokens. Durak et al. (CCS'24) propose binding tokens to valuable insurance secrets, which must be known to redeem the tokens. The value of the insurance secret deters the vendor from transferring the secret to the token buyer, who cannot redeem the token without the secret. However, the authors do not consider scenarios where the token vendor assists the buyer during token redemption. Their construction therefore falls short of guaranteeing non-transferability in the face of a vendor-aided token redemption.
We address this gap by introducing the concept of anonymous tokens with betrayability. Our notion ensures that a token buyer who is able to redeem a bought token either knows the insurance secret or is able to betray the vendor in a vendor-aided redemption. The betrayal allows the buyer to steal the insurance secret without being detected. This way, we make assistance in the token redemption equivalent to a transfer of the insurance secret and, hence, inherit the transfer deterrence of the insurance secret even when considering a vendor-aided token redemption. We formalize our new security notion, present a protocol for anonymous tokens with betrayability, prove its security, and provide an implementation and experimental evaluation.
Vincent Ehrmanntraut, Ulrike Meyer
Finding shortest paths in graphs is one of the fundamental combinatorial optimization problems with numerous applications. Privacy constraints in these applications have led to an extensive line of research on the so-called privacy-preserving (length of) shortest path problem. A Secure Multi-Party Computation (SMPC) protocol that solves this problem computes the lengths of shortest paths on a secret graph in a distributed fashion while ensuring that the graph remains secret. While many such protocols have been proposed in the past, they only compute the length of the shortest paths and not the paths themselves.
In this paper, we not only address this shortcoming but also propose a novel adaptable protocol design that chooses between different sub-protocols for the building blocks it uses depending on the input size, the network delay and bandwidth, and the security model the protocol operates in. Our protocol thus finds the runtime-optimal combination of sub-protocols for a given environment. We compare the resulting adaptive protocol to a less flexible baseline protocol in an extensive evaluation using the MP-SPDZ framework, spanning multiple network and security settings. Compared to the baseline protocol, the adaptivity of our optimized path construction leads to speedups from 4% to 32% in the overall time to find the shortest path.
Further, we find and fix a small leakage in two prior (length of) shortest path protocols, and report multiple observations that can be used to improve the practical runtime of other SMPC protocols.
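As a rough illustration of the adaptive idea (a hedged sketch, not the paper's actual MP-SPDZ integration; the candidate names, cost formulas, and example figures below are assumptions), one can pick, per building block, the sub-protocol variant whose predicted runtime is lowest in the current environment:

def predicted_time(rounds, bytes_sent, local_ops, delay_s, bytes_per_s, ops_per_s):
    # Simple additive cost model: round trips + transfer time + local computation.
    return rounds * delay_s + bytes_sent / bytes_per_s + local_ops / ops_per_s

def pick_variant(candidates, env):
    # candidates: {name: (rounds, bytes_sent, local_ops)}; env: measured environment.
    return min(candidates, key=lambda name: predicted_time(
        *candidates[name], env["delay"], env["bytes_per_s"], env["ops_per_s"]))

comparison_variants = {
    "constant_round_heavy": (4, 5_000_000, 1_000_000),
    "log_round_lean": (20, 200_000, 500_000),
}
lan = {"delay": 0.0002, "bytes_per_s": 1.25e9, "ops_per_s": 1e9}
wan = {"delay": 0.05, "bytes_per_s": 1.25e7, "ops_per_s": 1e9}
print(pick_variant(comparison_variants, lan))  # lean variant wins on a fast LAN
print(pick_variant(comparison_variants, wan))  # constant-round variant wins on a WAN

With these made-up figures, the low-latency LAN setting prefers the round-heavy but communication-lean variant, while the high-delay WAN setting prefers the constant-round one, mirroring the kind of environment-dependent choice the adaptive design makes.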
Ariel Gabizon, Nishat Koti
We give a formal analysis of the optimized variant of the $\mathsf{gemini}$ polynomial commitment scheme [BCHO22] used by the $\href{https://github.com/AztecProtocol/aztec-packages}{\text{Aztec Network}}$. Our work is motivated by an attack on a previous implementation [GL25].
Minjoo Sim, Subin Jo, Hyuntae Song, Eunseong Kim, Hwajeong Seo
The rapid advancement of quantum computing threatens the security of widely deployed public-key cryptosystems, creating an urgent need for practical migration to post-quantum cryptographic (PQC) standards. Although the U.S. National Institute of Standards and Technology (NIST) and Korea’s KpqC initiative have recently standardized PQC algorithms, integrating them into Transport Layer Security (TLS) 1.3 remains operationally challenging. Larger certificates, higher handshake costs, and incompatibility with legacy clients make naive deployment impractical in production environments.
While hot reload has long been supported in classical TLS deployments (e.g., Nginx, HAProxy), these mechanisms were designed for RSA/ECC contexts with small keys and certificates, and they do not address PQC-specific challenges.
This work presents a systematic demonstration of \emph{hot reload and rollback in a PQC-enabled TLS context}, incorporating a policy-driven state machine for staged migration and rollback under realistic constraints such as increased handshake latency and legacy-client compatibility. We propose a bridge-server-based framework that operates at the TLS library level, enabling zero-downtime migration across classical, hybrid, and PQC deployments. Experimental evaluation shows that PQC handshakes incur a $\approx$5.8--6.4$\times$ increase in latency in compute-bound settings relative to ECC baselines, while the relative overhead is significantly smaller in network-bound scenarios, highlighting the importance of deployment context. These findings provide a feasible path toward secure and incremental PQC integration in TLS infrastructures, contributing to broader strategies for achieving crypto-agility under evolving cryptographic standards.
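To make the policy-driven staging concrete, here is a hedged toy sketch in Python; the state names, metrics, and thresholds are illustrative assumptions, not the paper's actual policy engine or bridge-server API:

STATES = ["CLASSICAL", "HYBRID", "PQC_ONLY"]

def next_state(state, metrics, policy):
    # Promote one migration stage when observed metrics satisfy the policy,
    # otherwise roll back one stage (hot reload, no downtime).
    i = STATES.index(state)
    healthy = (metrics["handshake_ms_p95"] <= policy["max_handshake_ms"]
               and metrics["legacy_failure_rate"] <= policy["max_legacy_failures"])
    if healthy and i < len(STATES) - 1:
        return STATES[i + 1]
    if not healthy and i > 0:
        return STATES[i - 1]
    return state

policy = {"max_handshake_ms": 50, "max_legacy_failures": 0.01}
print(next_state("HYBRID", {"handshake_ms_p95": 38, "legacy_failure_rate": 0.002}, policy))
print(next_state("PQC_ONLY", {"handshake_ms_p95": 120, "legacy_failure_rate": 0.05}, policy))

The first call promotes the deployment from hybrid to PQC-only; the second rolls a struggling PQC-only deployment back to hybrid, which is the staged-migration-with-rollback behaviour described above.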
06 October 2025
Villanova University (Villanova, PA, USA)
Requirements
Preferred majors: CE/CS/EE; Mathematics-related majors are also desirable.
Proficiency in English, both speaking and writing. Proficiency in programming languages such as VHDL/Verilog, Python, C/C++, and so on. Great enthusiasm for research-oriented tasks. Excellent team member.
Degree: both BS and MS graduates or similar are warmly welcomed to apply.
Deadline: a Spring 2026 start is preferred, and it is always better to apply as early as possible. The position is open until filled. Note: Villanova admits on a rolling basis and admission is flexible!
Research Area: Post-quantum oriented security and cryptography, hardware acceleration and related security topics.
Brief Introduction of Villanova University. Villanova University is a private Catholic research university in Villanova, Pennsylvania, United States. Located in the western suburbs of Philadelphia, Villanova University is known for its excellent academic reputation and fantastic campus location. Villanova University is ranked No. 57 in the U.S. according to the US News Ranking Report 2026. Notable alumni include the new Pope Leo XIV and former First Lady Jill Biden.
Closing date for applications:
Contact: Dr. Jiafeng Harvest Xie
More information: https://www.ece.villanova.edu/~jxie02/
Karlsruhe Institute of Technology
The cryptocurrency Bitcoin operates without a central authority. This is made possible by (digital) proof of work, which, however, comes with enormous energy consumption and CO₂ emissions. Our project transfers this principle into the physical world: we are conducting research on physical one-way functions, reproducible physical processes that generate measurable outputs from input parameters, but for which solving the inverse problem is difficult. Based on this, we are developing a proof of physical work protocol that has the potential to significantly reduce global CO₂ emissions.
As an interdisciplinary junior research group, we are opening up a new field of research at the interface of cryptography, engineering, and natural sciences. The project runs for five years and is funded by the Carl-Zeiss-Stiftung.
Further information can be found here: https://doi.org/10.1002/advs.202409386
Closing date for applications:
Contact: Frank Rhein
More information: https://www.pse.kit.edu/english/karriere/joboffer.php?id=165449
03 October 2025
Minjoo Sim, Hyunjun Kim, Minwoo Lee, Hwajeong Seo
Polynomial multiplication over $\mathbb{F}_2[x]$ is a fundamental building block in code-based and lattice-based cryptography, particularly on lightweight embedded devices where dedicated carry-less multiply instructions are unavailable. This paper presents a high-speed, constant-time implementation of radix-16 polynomial multiplication on the ARM Cortex-M4, combining zero-padding with recursive Karatsuba layers. Building on the radix-16 decomposition proposed by Chen et al. in TCHES’21, we replace the conventional schoolbook inner multiplier with a multi-level Karatsuba scheme. This optimization reduces cycle counts on the ARM Cortex-M4 while preserving constant-time execution. To further optimize efficiency, the design minimizes packing and unpacking overhead by operating at 128-bit granularity and employs a five-stage pipeline—Decomposition, Padding, Multiplication, Unpadding, and Reassembly—implemented entirely with data-independent shifts, XORs, and masks. Experimental results on the Cortex-M4 show that our optimized $ct\_poly32\_mul\_64\_bit$ implementation achieves up to 31\% improvement over the best existing constant-time baseline, demonstrating the efficiency and scalability of recursive Karatsuba for resource-constrained cryptographic applications.
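For illustration, here is a minimal Python sketch of the core algorithmic idea, replacing a schoolbook carry-less multiplier with recursive Karatsuba over $\mathbb{F}_2[x]$, where bit $i$ of an integer encodes the coefficient of $x^i$; this mirrors the structure of the multiplier only, not the constant-time Cortex-M4 implementation, and the recursion threshold is an assumption.

import random

def clmul_schoolbook(a: int, b: int) -> int:
    # Carry-less multiplication: shift-and-XOR schoolbook.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def clmul_karatsuba(a: int, b: int, bits: int = 64, threshold: int = 16) -> int:
    # Recursive Karatsuba over GF(2)[x]; additions and subtractions are both XOR.
    if bits <= threshold:
        return clmul_schoolbook(a, b)
    half = bits // 2
    mask = (1 << half) - 1
    a0, a1 = a & mask, a >> half
    b0, b1 = b & mask, b >> half
    lo = clmul_karatsuba(a0, b0, half, threshold)
    hi = clmul_karatsuba(a1, b1, half, threshold)
    mid = clmul_karatsuba(a0 ^ a1, b0 ^ b1, half, threshold) ^ lo ^ hi
    return lo ^ (mid << half) ^ (hi << 2 * half)

# Sanity check against the schoolbook multiplier on random 64-bit operands.
a, b = random.getrandbits(64), random.getrandbits(64)
assert clmul_karatsuba(a, b) == clmul_schoolbook(a, b)

Because every operation is a data-independent shift, XOR, or mask, the same structure can be kept constant-time on platforms without a carry-less multiply instruction, which is the setting targeted above.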
Seyoung Yoon, Hyunji Kim, Hwajeong Seo
We propose CA-MCPQ, a context-aware post-quantum-secure extension of the Model Context Protocol (MCP). Unlike standard MCP, which leaves authentication, encryption, and authorization optional or implementation-specific, CA-MCPQ elevates them to mandatory protocol-level mechanisms. The design incorporates post-quantum mutual authentication, KEM-derived session keys, and authenticated sequencing to ensure session integrity and prevent replay. Role-based access control is enforced, while token-based authentication is eliminated entirely. AI dynamically infers the required security tier from contextual information and negotiates compatible PQC algorithms; each response returns a reliability score that quantifies alignment with the requested security level.
This architecture addresses critical vulnerabilities of MCP—including token misuse, session hijacking, impersonation, and quantum attacks—while preserving interoperability. Notably, our evaluation shows that the cryptographic and transport overheads are negligible compared to model computation, confirming that strong post-quantum assurances can be achieved without degrading system performance. Overall, CA-MCPQ provides a practical path toward secure-by-design AI agent ecosystems and lays the groundwork for future extensions such as agent–agent secure coordination.
Kamil Doruk Gur, Patrick Hough, Jonathan Katz, Caroline Sandsbråten, Tjerand Silde
We present Olingo, a framework for threshold lattice signatures that is the first to offer all desired properties for real-world implementations of quantum-secure threshold signatures: small keys and signatures, low communication and round complexity, non-interactive online signing, distributed key generation (DKG), and identifiable abort.
Our starting point is the framework of Gur, Katz, and Silde (PQCrypto 2024). We change the underlying signature scheme to Raccoon (Katsumata et al., Crypto 2024), remove the trapdoor commitments, and apply numerous improvements and optimizations to achieve all the above properties. We provide detailed proofs of security for our new framework and present concrete parameters and benchmarks.
At the $128$-bit security level, for up to $1024$ parties and supporting $2^{60}$ signatures, our scheme has $2.6$ KB public keys and $9.7$ KB signatures, while signing requires $953$ KB of communication per party. Using the LaBRADOR proof system (Beullens and Seiler, Crypto 2023), this can be further reduced to $596$ KB. An optimistic non-interactive version of our scheme requires only $83$ KB of communication per party.
Alexander May, Massimo Ostuzzi, Henrik Ressler
We propose a novel algorithm to solve underdetermined systems of multivariate quadratic (MQ) equations over finite fields. In modern MQ signature schemes such as MAYO, QR-UOV, and SNOVA, finding solutions to such systems is equivalent to signature forgery.
The current benchmark for estimating forgery bit complexity is Hashimoto’s algorithm, which transforms the original underdetermined MQ system $P$ into a more tractable system $\tilde{P}$. A hybrid combination of solving $\tilde{P}$ via Gröbner basis computation and exhaustive search eventually solves $P$.
We introduce a novel transformation that pushes the hybrid approach to its extreme. Specifically, we reduce the underdetermined MQ system to a sequence of quadratic equations in a single variable at the cost of a larger exhaustive search. As a consequence, signature forgery no longer relies on the hardness of MQ solving but becomes pure guessing via exhaustive search. This in turn implies that signature forgery is significantly more vulnerable against quantum attacks via Grover search.
We provide accurate estimates for the classical and quantum bit complexity of forging signatures for MAYO, QR-UOV, and SNOVA using our novel algorithm. We reduce the quantum security of all security levels of MAYO, QR-UOV, and SNOVA.
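For intuition (a back-of-envelope relation, not the paper's refined estimates): once forgery has been reduced to pure guessing over a search space of size $g$, the classical cost is roughly $g$ trials, i.e. $\log_2 g$ bits of security, whereas Grover search needs only about $\frac{\pi}{4}\sqrt{g}$ iterations, i.e. roughly $\frac{1}{2}\log_2 g$ bits, which is why such a reduction hits the quantum security levels particularly hard.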
Susumu Kiyoshima
We present a 4-round statistical non-malleable zero-knowledge (NMZK) argument in the plain model under standard hardness assumptions. Our construction can be based on any collision-resistant hash function and injective one-way function, and it guarantees simulation extractability in the delayed-input one-many setting. Before this work, 4-round constructions were known for computational NMZK but not for statistical NMZK.
Hyeongmin Choe, Jaehyung Kim, Damien Stehlé, Elias Suvanto
The CKKS fully homomorphic encryption (FHE) scheme enables computations on vectors of approximate complex numbers. A moderate precision of $\approx 20$ bits often suffices but, in many applications, a higher precision is required for functionality and/or security. Indeed, to obtain IND-CPA-D security [Li-Micciancio; Eurocrypt'21], secure threshold-FHE [Asharov et al; Eurocrypt'12] and circuit privacy [Gentry; STOC'09], all known approaches require a precision that supports noise flooding. This may lead to a precision of $\approx 80$ bits, or more. High-precision CKKS is hard to achieve, notably because of bootstrapping. The main difficulty is modulus consumption: every homomorphic multiplication consumes some, out of an overall modulus budget. Unfortunately, in high precision, most known bootstrapping algorithms consume so much modulus that one needs to increase the parameters to increase the budget. The state-of-the-art approach, Meta-BTS [Bae et al; CCS'22], performs moderate-precision bootstrapping several times to enable high-precision bootstrapping, with similar modulus consumption as the base bootstrapping it builds upon. However, it damages latency.
We introduce a new approach for high-precision CKKS bootstrapping, whose cost is almost independent of the precision (as opposed to Meta-BTS) and whose modulus consumption increases significantly more slowly than with classical bootstrapping algorithms. Our design relies on the EvalRound bootstrapping [Kim et al; Asiacrypt'22], which we improve in the high-precision context by leveraging and improving recent techniques for handling discrete data with CKKS. We obtain for the first time a non-iterative 80-bit precise bootstrapping algorithm which can be run in ring degree $N=2^{16}$, with 494 bits of remaining modulus for computations. In terms of throughput, and for 80-bit precision, our implementation shows an acceleration of 64\% compared to Meta-BTS.
Prabhanjan Ananth, Aditya Gulati, Yao-Ting Lin
Pseudorandom unitaries (PRUs), one of the key quantum pseudorandom notions, are efficiently computable unitaries that are computationally indistinguishable from Haar random unitaries. While there is evidence to believe that PRUs are weaker than one-way functions, so far their relationship with other quantum cryptographic primitives (that are plausibly weaker than one-way functions) has not been fully established.
In this work, we focus on quantum cryptographic primitives with classical communication, referred to as QCCC primitives. Our main result shows that QCCC bit commitments and QCCC key agreement cannot be constructed from pseudorandom unitaries in a black-box manner.
Our core technical contribution is to show (in a variety of settings) the difficulty of distinguishing identical versus independent Haar unitaries by separable channels. Our result strictly improves upon prior works which studied similar problems in the context of learning theory [Anshu, Landau, Liu, STOC 2022] and cryptography [Ananth, Gulati, Lin, TCC 2024].
Prabhanjan Ananth, John Bostanci, Aditya Gulati, Yao-Ting Lin
The quantum Haar random oracle model is an idealized model where every party has access to a single Haar random unitary and its inverse. We construct strong pseudorandom unitaries in the quantum Haar random oracle model. This strictly improves upon prior works, which either only prove the existence of pseudorandom unitaries in the inverseless quantum Haar random oracle model [Ananth, Bostanci, Gulati, Lin, EUROCRYPT 2025] or prove the existence of a weaker notion (implied by strong pseudorandom unitaries) in the quantum Haar random oracle model [Hhan, Yamada, 2024]. Our results also present a viable approach for building quantum pseudorandomness from random quantum circuits and analyzing pseudorandom objects in nature.
Cody Freitag, Jad Silbak, Daniel Wichs
Is it possible to efficiently convert any arbitrary source with sufficiently high (min-)entropy into nearly uniformly random bits? Information-theoretically, this is achievable using seeded extractors, provided the source is independent of the seed. However, in the absence of any such independence guarantee, no solution is possible for general computationally unbounded sources. Even for efficiently samplable sources, we cannot extract randomness that is statistically close to uniform, but can at best condense the randomness into an output that is only missing logarithmically many bits of entropy compared to the uniform distribution. Dodis, Ristenpart and Vadhan (TCC '12) constructed such seeded condensers for efficiently samplable sources that can depend arbitrarily on the seed assuming near-optimal security of collision-resistant hash functions. In this work, we investigate whether such condensers can be made seedless. We present several new results:
-- We construct seedless condensers for all efficiently samplable sources assuming near-optimal security of keyless multi-collision resistant hash functions. We justify this assumption by showing it holds in the auxiliary-input random oracle model.
-- We show that such seedless condensers cannot be proven secure via a black-box reduction from any standard game-based assumption, even if we assume optimal exact security. In fact, we give a general black-box separation that applies to a broad class of seedless primitives, with seedless condensers as a special case.
-- We consider the class of efficiently samplable distributed sources where two parties jointly sample randomness $x=(x_0,x_1)$, with one party honestly choosing $x_b$ uniformly at random while the other party adversarially chooses $x_{1-b}$ depending on $x_b$. We show how to construct seedless condensers for such sources assuming near-optimal security of game-based assumptions -- either: (1) adaptive one-way functions, or (2) a certain form of (single-input) correlation-intractable hash functions that can be instantiated from key-dependent-message (KDM) secure encryption.
Hamza Abusalah, Karen Azari, Dario Fiore, Chethan Kamath, Erkan Tairi
A verifiable delay function (VDF) [Boneh et al., CRYPTO 2018] is a function $f$ that is slow-to-compute, but is quickly verifiable given a short proof. This is formalised using two security requirements: i) sequentiality, which requires that a parallel adversary is unable to compute $f$ faster; and ii) soundness, which requires that an adversary is unable to generate a proof that verifies wrong outputs. A time-lock puzzle (TLP), on the other hand, is a puzzle that can be quickly generated, but is slow-to-solve. The security requirement here is sequentiality, which requires that a parallel adversary is unable to solve puzzles faster.
In this paper, we study the relationship between these two timed primitives. Our main result is a construction of ``one-time'' VDF from TLP using indistinguishability obfuscation (iO) and one-way functions (OWFs), where by ``one-time'' we mean that sequentiality of the VDF holds only against parallel adversaries that do not preprocess public parameters. Our VDF satisfies several desirable properties. For instance, we achieve perfectly sound and short proofs of $O(\lambda)$ bits, where $\lambda$ is the security parameter. Moreover, our construction is a trapdoor (one-time) VDF that can be easily extended to achieve interesting extra properties (defined in our paper) such as trapdoor-homomorphic and trapdoor-constrained evaluation.
Finally, when combined with the results of Bitansky et al., [ITCS 2016], this yields one-time VDFs from any worst-case non-parallelizing language, iO and OWF. To the best of our knowledge, this is the first such construction that only relies on polynomial security.
Guy Zyskind, Doron Zarchy, Max Leibovich, Chris Peikert
Threshold Fully Homomorphic Encryption (FHE) enables arbitrary computation on encrypted data, while distributing the decryption capability across multiple parties. A primary application of interest is low-communication multi-party computation (MPC), which benefits from a fast and secure threshold FHE decryption protocol.
Several works have addressed this problem, but all existing solutions rely on "noise flooding" for security. This incurs significant overhead and necessitates large parameters in practice, making it unsuitable for many real-world deployments. Some constructions have somewhat better efficiency, but at the cost of weaker, non-simulation-based security definitions, which limits their usability and composability.
In this work, we propose a novel threshold FHE decryption protocol that avoids "noise flooding" altogether, and provides simulation-based security. Rather than masking the underlying ciphertext noise, our technique securely removes it via an efficient MPC rounding procedure. The cost of this MPC is mitigated by an offline/online design that preprocesses special gates for secure comparisons in the offline phase, and has low communication and computation in the online phase. This approach is of independent interest, and should also benefit other MPC protocols (e.g., secure machine learning) that make heavy use of non-linear comparison operations.
We prove our protocol secure in the Universal Composability (UC) framework, and it can be generally instantiated for a variety of adversary models (e.g., security-with-abort against a dishonest majority, or guaranteed output delivery with honest majority). Compared to the state of the art, our protocol offers significant gains both in the adversary model (i.e., dishonest vs. honest majority) and practical performance: empirically, our online phase obtains approximately 20,000$\times$ better throughput, and up to a 37$\times$ improvement in latency.
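To see why rounding can replace flooding, consider a toy, single-party view of (R)LWE-style decryption in Python (all parameters are illustrative; in the protocol, this rounding is performed obliviously inside the MPC on shares of the decrypted phase rather than in the clear):

q, delta, t = 2**32, 2**20, 2**12   # ciphertext modulus, scaling factor, plaintext modulus
m = 1234                            # plaintext message, m < t
e = 777                             # ciphertext noise, |e| << delta / 2
phase = (m * delta + e) % q         # what the parties jointly reconstruct from their shares

# Rounding removes the noise exactly, so nothing about e has to be hidden by a huge mask.
assert (phase + delta // 2) // delta % t == m

# Noise flooding would instead add a statistically hiding mask of magnitude roughly
# 2**lambda * |e| before revealing the phase, forcing a much larger q and larger parameters.

Performing only the rounding, rather than releasing the noisy phase, is what lets the parameters stay small while still revealing nothing beyond the plaintext.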