International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

03 October 2025

Alexander May, Massimo Ostuzzi, Henrik Ressler
ePrint Report
We propose a novel algorithm to solve underdetermined systems of multivariate quadratic (MQ) equations over finite fields. In modern MQ signature schemes such as MAYO, QR-UOV, and SNOVA, finding solutions to such systems is equivalent to signature forgery.

The current benchmark for estimating forgery bit complexity is Hashimoto's algorithm, which transforms the original underdetermined MQ system $P$ into a more tractable system $\tilde{P}$. A hybrid combination of solving $\tilde{P}$ via Gröbner bases and exhaustive search eventually solves $P$.

We introduce a novel transformation that pushes the hybrid approach to its extreme. Specifically, we reduce the underdetermined MQ system to a sequence of quadratic equations in a single variable, at the cost of a larger exhaustive search. As a consequence, signature forgery no longer relies on the hardness of MQ solving but becomes pure guessing via exhaustive search. This in turn implies that signature forgery is significantly more vulnerable to quantum attacks via Grover search.

We provide accurate estimates for the classical and quantum bit complexity of forging signatures for MAYO, QR-UOV, and SNOVA using our novel algorithm. We reduce the quantum security of all security levels of MAYO, QR-UOV, and SNOVA.
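
The toy Python sketch below (our illustration, not the paper's algorithm; field size, variable count, and equation count are made up) conveys the flavor of the extreme hybrid approach: all but one variable is fixed by exhaustive search, so every equation collapses to a univariate quadratic that is trivial to solve, and the total cost is dominated by the outer guessing loop, which is exactly the regime where Grover search gives a quadratic quantum speedup.

# Toy illustration (not the paper's algorithm): exhaustive guessing reduces an
# underdetermined multivariate quadratic system over a tiny field to univariate
# quadratics in the last variable. All parameters are made up for demonstration.
import itertools
import random

q = 7          # toy prime field F_7
n, m = 4, 2    # n variables, m equations, n > m (underdetermined system)
random.seed(1)

def random_quadratic():
    # coefficients c[(i, j)] of x_i * x_j (i <= j), linear part l, constant c0
    c = {(i, j): random.randrange(q) for i in range(n) for j in range(i, n)}
    l = [random.randrange(q) for _ in range(n)]
    return c, l, random.randrange(q)

def evaluate(eq, x):
    c, l, c0 = eq
    quad = sum(v * x[i] * x[j] for (i, j), v in c.items())
    lin = sum(l[i] * x[i] for i in range(n))
    return (quad + lin + c0) % q

system = [random_quadratic() for _ in range(m)]

# Outer exhaustive search over the first n-1 variables; for each guess, every
# equation becomes a quadratic in the single remaining variable, solved here by
# simply scanning F_q.
for guess in itertools.product(range(q), repeat=n - 1):
    candidates = set(range(q))
    for eq in system:
        candidates &= {t for t in range(q) if evaluate(eq, list(guess) + [t]) == 0}
    if candidates:
        solution = list(guess) + [min(candidates)]
        assert all(evaluate(eq, solution) == 0 for eq in system)
        print("found solution:", solution)
        break
else:
    print("no solution in this toy instance")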

Susumu Kiyoshima
ePrint Report
We present a 4-round statistical non-malleable zero-knowledge (NMZK) argument in the plain model under standard hardness assumptions. Our construction can be based on any collision-resistant hash function and injective one-way function, and it guarantees simulation extractability in the delayed-input one-many setting. Before this work, 4-round constructions were known for computational NMZK but not for statistical NMZK.

Hyeongmin Choe, Jaehyung Kim, Damien Stehlé, Elias Suvanto
ePrint Report
The CKKS fully homomorphic encryption (FHE) scheme enables computations on vectors of approximate complex numbers. A moderate precision of $\approx 20$ bits often suffices but, in many applications, a higher precision is required for functionality and/or security. Indeed, to obtain IND-CPA-D security [Li-Micciancio; Eurocrypt'21], secure threshold-FHE [Asharov et al; Eurocrypt'12] and circuit privacy [Gentry; STOC'09], all known approaches require a precision that supports noise flooding. This may lead to a precision of $\approx 80$ bits, or more.

High-precision CKKS is hard to achieve, notably because of bootstrapping. The main difficulty is modulus consumption: every homomorphic multiplication consumes some, out of an overall modulus budget. Unfortunately, in high precision, most known bootstrapping algorithms consume so much modulus that one needs to increase the parameters to increase the budget. The state-of-the-art approach, Meta-BTS [Bae et al; CCS'22], performs moderate-precision bootstrapping several times to enable high-precision bootstrapping, with similar modulus consumption as the base bootstrapping it builds upon. It however damages latency.

We introduce a new approach for high-precision CKKS bootstrapping, whose cost is almost independent of the precision (as opposed to Meta-BTS) and whose modulus consumption increases significantly more slowly than with classical bootstrapping algorithms. Our design relies on the EvalRound bootstrapping [Kim et al; Asiacrypt'22], which we improve in the high-precision context by leveraging and improving recent techniques for handling discrete data with CKKS. We obtain for the first time a non-iterative 80-bit precise bootstrapping algorithm which can be run in ring degree $N=2^{16}$, with 494 bits of remaining modulus for computations. In terms of throughput, and for 80-bit precision, our implementation shows an acceleration of 64% compared to Meta-BTS.
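
For intuition on why noise flooding pushes the precision requirement so high (a standard smudging-style estimate, not an analysis taken from the paper): hiding an evaluation error $e$ with $|e| \le B$ up to statistical distance $2^{-s}$ requires adding fresh noise roughly $2^{s}$ times larger,
\[
  e_{\mathrm{flood}} \leftarrow U\!\left([-2^{s}B,\; 2^{s}B]\right),
  \qquad
  \Delta\!\left(e + e_{\mathrm{flood}},\; e_{\mathrm{flood}}\right) \le 2^{-s},
\]
so the plaintext must carry roughly $s$ extra bits beyond the $\approx 20$ bits of working precision; with $s \approx 60$ this already lands in the $\approx 80$-bit regime mentioned above.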

Prabhanjan Ananth, Aditya Gulati, Yao-Ting Lin
ePrint Report
Pseudorandom unitaries (PRUs), one of the key quantum pseudorandom notions, are efficiently computable unitaries that are computationally indistinguishable from Haar random unitaries. While there is evidence to believe that PRUs are weaker than one-way functions, so far their relationship with other quantum cryptographic primitives (that are plausibly weaker than one-way functions) has not been fully established.

In this work, we focus on quantum cryptographic primitives with classical communication, referred to as QCCC primitives. Our main result shows that QCCC bit commitments and QCCC key agreement cannot be constructed from pseudorandom unitaries in a black-box manner.

Our core technical contribution is to show (in a variety of settings) the difficulty of distinguishing identical versus independent Haar unitaries by separable channels. Our result strictly improves upon prior works which studied similar problems in the context of learning theory [Anshu, Landau, Liu, STOC 2022] and cryptography [Ananth, Gulati, Lin, TCC 2024].

Prabhanjan Ananth, John Bostanci, Aditya Gulati, Yao-Ting Lin
ePrint Report
The quantum Haar random oracle model is an idealized model where every party has access to a single Haar random unitary and its inverse. We construct strong pseudorandom unitaries in the quantum Haar random oracle model. This strictly improves upon prior works that either only prove the existence of pseudorandom unitaries in the inverseless quantum Haar random oracle model [Ananth, Bostanci, Gulati, Lin, EUROCRYPT 2025] or prove the existence of a weaker notion (implied by strong pseudorandom unitaries) in the quantum Haar random oracle model [Hhan, Yamada, 2024]. Our results also present a viable approach for building quantum pseudorandomness from random quantum circuits and analyzing pseudorandom objects in nature.

Cody Freitag, Jad Silbak, Daniel Wichs
ePrint Report
Is it possible to efficiently convert any arbitrary source with sufficiently high (min-)entropy into nearly uniformly random bits? Information-theoretically, this is achievable using seeded extractors, provided the source is independent of the seed. However, in the absence of any such independence guarantee, no solution is possible for general computationally unbounded sources. Even for efficiently samplable sources, we cannot extract randomness that is statistically close to uniform, but can at best condense the randomness into an output that is only missing logarithmically many bits of entropy compared to the uniform distribution. Dodis, Ristenpart and Vadhan (TCC '12) constructed such seeded condensers for efficiently samplable sources that can depend arbitrarily on the seed, assuming near-optimal security of collision-resistant hash functions. In this work, we investigate whether such condensers can be made seedless. We present several new results:

-- We construct seedless condensers for all efficiently samplable sources, assuming near-optimal security of keyless multi-collision-resistant hash functions. We justify this assumption by showing it holds in the auxiliary-input random oracle model.

-- We show that such seedless condensers cannot be proven secure via a black-box reduction from any standard game-based assumption, even if we assume optimal exact security. In fact, we give a general black-box separation that applies to a broad class of seedless primitives, with seedless condensers as a special case.

-- We consider the class of efficiently samplable distributed sources where two parties jointly sample randomness $x=(x_0,x_1)$, with one party honestly choosing $x_b$ uniformly at random while the other party adversarially chooses $x_{1-b}$ depending on $x_b$. We show how to construct seedless condensers for such sources assuming near-optimal security of game-based assumptions -- either: (1) adaptive one-way functions, or (2) a certain form of (single-input) correlation-intractable hash functions that can be instantiated from key-dependent-message (KDM) secure encryption.

Hamza Abusalah, Karen Azari, Dario Fiore, Chethan Kamath, Erkan Tairi
ePrint Report
A verifiable delay function (VDF) [Boneh et al., CRYPTO 2018] is a function $f$ that is slow-to-compute, but is quickly verifiable given a short proof. This is formalised using two security requirements: i) sequentiality, which requires that a parallel adversary is unable to compute $f$ faster; and ii) soundness, which requires that an adversary is unable to generate a proof that verifies wrong outputs. A time-lock puzzle (TLP), on the other hand, is a puzzle that can be quickly generated, but is slow-to-solve. The security requirement here is sequentiality, which requires that a parallel adversary is unable to solve puzzles faster.

In this paper, we study the relationship between these two timed primitives. Our main result is a construction of a "one-time" VDF from TLPs using indistinguishability obfuscation (iO) and one-way functions (OWFs), where by "one-time" we mean that sequentiality of the VDF holds only against parallel adversaries that do not preprocess the public parameters. Our VDF satisfies several desirable properties. For instance, we achieve perfectly sound and short proofs of $O(\lambda)$ bits, where $\lambda$ is the security parameter. Moreover, our construction is a trapdoor (one-time) VDF that can be easily extended to achieve interesting extra properties (defined in our paper) such as trapdoor-homomorphic and trapdoor-constrained evaluation.

Finally, when combined with the results of Bitansky et al. [ITCS 2016], this yields one-time VDFs from any worst-case non-parallelizing language, iO, and OWFs. To the best of our knowledge, this is the first such construction that relies only on polynomial security.

Guy Zyskind, Doron Zarchy, Max Leibovich, Chris Peikert
ePrint Report
Threshold Fully Homomorphic Encryption (FHE) enables arbitrary computation on encrypted data, while distributing the decryption capability across multiple parties. A primary application of interest is low-communication multi-party computation (MPC), which benefits from a fast and secure threshold FHE decryption protocol.

Several works have addressed this problem, but all existing solutions rely on "noise flooding" for security. This incurs significant overhead and necessitates large parameters in practice, making it unsuitable for many real-world deployments. Some constructions have somewhat better efficiency, but at the cost of weaker, non-simulation-based security definitions, which limits their usability and composability.

In this work, we propose a novel threshold FHE decryption protocol that avoids "noise flooding" altogether, and provides simulation-based security. Rather than masking the underlying ciphertext noise, our technique securely removes it via an efficient MPC rounding procedure. The cost of this MPC is mitigated by an offline/online design that preprocesses special gates for secure comparisons in the offline phase, and has low communication and computation in the online phase. This approach is of independent interest, and should also benefit other MPC protocols (e.g., secure machine learning) that make heavy use of non-linear comparison operations.

We prove our protocol secure in the Universal Composability (UC) framework, and it can be generally instantiated for a variety of adversary models (e.g., security-with-abort against a dishonest majority, or guaranteed output delivery with honest majority). Compared to the state of the art, our protocol offers significant gains both in the adversary model (i.e., dishonest vs. honest majority) and practical performance: empirically, our online phase obtains approximately 20,000$\times$ better throughput, and up to a 37$\times$ improvement in latency.
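
The toy Python sketch below (our illustration with made-up parameters; the actual protocol performs the rounding securely inside MPC using the preprocessed comparison gates, which is omitted here) shows why rounding, rather than noise flooding, suffices to strip the ciphertext noise when partial decryptions are combined:

# Toy illustration only: additively shared noisy plaintext, recombined and rounded
# to the nearest multiple of the scaling factor, recovers the message without any
# flooding noise. The secure (MPC) realization of the rounding step is omitted.
import random

random.seed(0)
Q = 2**20        # toy ciphertext modulus
DELTA = 2**12    # toy plaintext scaling factor
PARTIES = 3

message = 5
noise = random.randrange(-50, 51)                  # small ciphertext noise
noisy_plaintext = (message * DELTA + noise) % Q    # what plain decryption would reveal

# Additive secret sharing of the noisy plaintext (the parties' partial decryptions).
shares = [random.randrange(Q) for _ in range(PARTIES - 1)]
shares.append((noisy_plaintext - sum(shares)) % Q)

# Recombine and round to the nearest multiple of DELTA to remove the noise.
combined = sum(shares) % Q
centered = combined if combined <= Q // 2 else combined - Q
recovered = round(centered / DELTA) % (Q // DELTA)

assert recovered == message
print("recovered message:", recovered)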

Björn Kriepke, Gohar Kyureghyan
ePrint Report
Let $1$ be the all-one vector and $\odot$ denote the component-wise multiplication of two vectors in $\mathbb F_2^n$. We study the vector space $\Gamma_n$ over $\mathbb F_2$ generated by the functions $\gamma_{2k}:\mathbb F_2^n \to \mathbb F_2^n, k\geq 0$, where $$ \gamma_{2k} = S^{2k}\odot(1+S^{2k-1})\odot(1+S^{2k-3})\odot\ldots\odot(1+S) $$ and $S:\mathbb F_2^n\to\mathbb F_2^n$ is the cyclic left shift function. The functions in $\Gamma_n$ are shift-invariant, and the well-known $\chi$ function used in several cryptographic primitives is contained in $\Gamma_n$. For even $n$, we show that the permutations from $\Gamma_n$ form an Abelian group with respect to composition, which is isomorphic to the unit group of the residue ring $\mathbb F_2[X]/(X^n+X^{n/2})$. This isomorphism yields an efficient theoretical and algorithmic method for constructing and studying a rich family of shift-invariant permutations on $\mathbb F_2^n$ that are natural generalizations of $\chi$. To demonstrate this, we apply the obtained results to investigate the function $\gamma_0 +\gamma_2+\gamma_4$ on $\mathbb F_2^n$.
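
To make the definition concrete, here is a small Python sketch (our own illustration; it assumes the Keccak-style definition $\chi(x)_i = x_i + (1+x_{i+1})x_{i+2}$ and the left-shift convention $S(x)_i = x_{i+1}$) that evaluates the maps $\gamma_{2k}$ and checks that $\chi = \gamma_0 + \gamma_2$ indeed lies in $\Gamma_n$:

# Sketch of the shift-invariant maps gamma_{2k} on F_2^n and the relation
# chi = gamma_0 + gamma_2 (assuming the Keccak-style chi and left shift S(x)_i = x_{i+1}).
from itertools import product

def shift(x, t):
    # cyclic left shift by t positions: S^t(x)_i = x_{(i+t) mod n}
    n = len(x)
    return [x[(i + t) % n] for i in range(n)]

def gamma(x, k):
    # gamma_{2k} = S^{2k} (1 + S^{2k-1}) (1 + S^{2k-3}) ... (1 + S), component-wise over F_2
    out = shift(x, 2 * k)
    for j in range(1, 2 * k, 2):                  # odd exponents 1, 3, ..., 2k-1
        s = shift(x, j)
        out = [a & (1 ^ b) for a, b in zip(out, s)]
    return out

def chi(x):
    # chi(x)_i = x_i + (1 + x_{i+1}) * x_{i+2} over F_2
    n = len(x)
    return [x[i] ^ ((1 ^ x[(i + 1) % n]) & x[(i + 2) % n]) for i in range(n)]

n = 6
for bits in product([0, 1], repeat=n):
    x = list(bits)
    g02 = [a ^ b for a, b in zip(gamma(x, 0), gamma(x, 1))]   # gamma_0 + gamma_2
    assert g02 == chi(x)
print("chi = gamma_0 + gamma_2 verified on F_2^%d" % n)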

Luca Bonamino, Pierrick Méaux
ePrint Report
Algebraic Immunity (AI) is a fundamental property in the security analysis of stream ciphers and Boolean functions, measuring the resistance of a function against algebraic attacks. In this work, we focus on the computation of AI and, more specifically, on its generalization to restricted algebraic immunity, which considers annihilators constrained to specific subsets of $\mathbb{F}_2^n$. While the computation of AI has been studied using methods based on Reed-Muller codes and iterative rank-based algorithms, these approaches have not been formally adapted to the restricted setting. We address this gap by establishing the theoretical foundations required for the adaptation of these techniques and providing explicit algorithms for computing restricted algebraic immunity.

To assess the efficiency of our algorithms, we conduct practical experiments comparing the computational cost of the Reed-Muller and iterative approaches in terms of time and memory. As a case study, we analyze the algebraic immunity restricted to the slices of $\mathbb{F}_2^n$, i.e., the sets of binary vectors of fixed Hamming weight, denoted $\mathsf{AI}_k$. The slices are of particular interest in areas of cryptography, such as side-channel analysis and stream cipher design, where the input weight may be partially predictable. We further investigate restricted algebraic immunity for Weightwise (Almost) Perfectly Balanced (W(A)PB) functions, which have been extensively studied since 2017. Our results include an empirical analysis of $\mathsf{AI}_k$ distributions for WPB functions with $4$, $8$, and $16$ variables, as well as an evaluation of $\mathsf{AI}_k$ for various known function families.
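
To make the rank-based viewpoint concrete, the following naive Python sketch (our own illustration, not the paper's optimized algorithms) computes the algebraic immunity of a Boolean function restricted to the slice of Hamming weight $k$ by comparing ranks of monomial-evaluation matrices over $\mathbb{F}_2$:

# Naive rank-based sketch (illustrative only): the restricted AI on the slice E_{k,n}
# is the least degree d admitting an annihilator of f or f+1 of degree <= d that does
# not vanish identically on the slice.
from itertools import combinations, product

def monomials_up_to(n, d):
    return [s for r in range(d + 1) for s in combinations(range(n), r)]

def rank_gf2(rows):
    # Gaussian elimination over F_2
    rows = [row[:] for row in rows]
    rank, ncols = 0, (len(rows[0]) if rows else 0)
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def restricted_ai(f, n, k):
    slice_pts = [x for x in product([0, 1], repeat=n) if sum(x) == k]
    def rank_on(points, mons):
        return rank_gf2([[int(all(x[i] for i in mon)) for mon in mons] for x in points])
    for d in range(n + 1):
        mons = monomials_up_to(n, d)
        full = rank_on(slice_pts, mons)
        for value in (1, 0):              # annihilate f on the slice, then f + 1
            support = [x for x in slice_pts if f(x) == value]
            # a degree-<=d annihilator that is nonzero on the slice exists iff it can
            # vanish on the support without vanishing on the whole slice
            if rank_on(support, mons) < full:
                return d
    return n

f = lambda x: x[0] ^ (x[1] & x[2]) ^ (x[3] & x[4])    # toy 5-variable function
print([restricted_ai(f, 5, k) for k in range(6)])     # AI restricted to each slice weight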

Peigen Li, Hao Guo, Jintai Ding
ePrint Report
This article develops a unified framework for analyzing and enhancing a family of multivariate signature schemes based on UOV. We conduct a comparative study of three recent UOV-like schemes—QR-UOV, MAYO, and SNOVA—and identify a common design principle: employing tensor product constructions to enlarge the dimension of the oil subspace. Building on this perspective, we propose a new multivariate signature scheme called TSUOV that synthesizes these insights to provide improved key and signature sizes without compromising security.

Minwoo Lee, Minjoo Sim, Siwoo Eum, Gyeongju Song, Hwajeong Seo
ePrint Report
This paper presents an ARMv8/NEON-oriented, constant-time implementation of NCC-Sign and a careful head-to-head evaluation against a clean KPQClean baseline on Apple M2. We design lane-local SIMD kernels (e.g., a 4-lane Montgomery multiply-reduce, centered modular reduction, fused stage-0 butterfly, and streamlined radix-2/3 pipelines) that minimize cross-lane shuffles and keep memory access streaming. Using a reproducible harness with PMU-based cycle counting and identical toolchains for baseline and optimized builds, we observe consistent end-to-end gains across all parameter sets: geometric-mean speedups of 1.22$\times$ for key generation, 1.58$\times$ for signing, and 1.50$\times$ for verification, yielding an overall 1.42$\times$ improvement over NCC-Sign-1/3/5. The largest benefits appear in NTT-intensive signing/verification, and absolute savings scale with parameter size.
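
As a plain scalar reference for two of the kernels named above (our own Python sketch, not the NEON code; the modulus is an illustrative NTT-friendly prime rather than an NCC-Sign parameter), Montgomery multiply-reduce and centered modular reduction can be written as:

# Scalar reference (illustrative only) for Montgomery multiply-reduce and centered
# modular reduction; lane-local SIMD kernels apply the same arithmetic per lane.
Q = 8380417               # example NTT-friendly prime, used here only as a stand-in
R = 1 << 32               # Montgomery radix
QINV = pow(-Q, -1, R)     # -Q^{-1} mod R (requires Python 3.8+)

def montgomery_mul(a, b):
    # returns a * b * R^{-1} mod Q for 0 <= a, b < Q (textbook REDC)
    t = a * b
    m = (t * QINV) % R
    u = (t + m * Q) >> 32
    return u - Q if u >= Q else u

def to_mont(a):
    return (a * R) % Q            # enter the Montgomery domain

def from_mont(a):
    return montgomery_mul(a, 1)   # leave the Montgomery domain

def centered(a):
    # centered reduction: representative of a mod Q in (-Q/2, Q/2]
    r = a % Q
    return r - Q if r > Q // 2 else r

a, b = 123456, 654321
assert from_mont(montgomery_mul(to_mont(a), to_mont(b))) == (a * b) % Q
print(centered(Q - 3))            # prints -3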

Ryo Ohashi, Hiroshi Onuki
ePrint Report
In 2023, LeGrow, Ti, and Zobernig proposed a cryptographic hash function (we refer to it as the LTZ hash function in this paper) based on a certain (2,2)-isogeny graph between supersingular non-superspecial abelian surfaces over $\mathbb{F}_{p^4}$. The authors estimated that the time and space complexities required to find a collision in the LTZ hash function are both given by $\widetilde{O}(p^3)$. In this paper, we first propose a mathematical conjecture on the number of vertices defined over the smaller field $\mathbb{F}_{p^2}$ in the isogeny graphs used in the LTZ hash function. Based on this conjecture, we then construct a collision-finding algorithm for the LTZ hash function which preserves the time complexity $\widetilde{O}(p^3)$ while reducing the required memory to $O(\log(p)^2)$. We implemented this algorithm in Rust, and successfully found a collision for parameters claimed to provide 35-37 bits of security, using 26.2 MB of memory on average, in 20.2 hours.
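
For intuition about how collision search can trade memory for iteration, the sketch below shows a generic memoryless collision search via Floyd cycle finding on a toy 24-bit hash (a standard technique given only as an illustration; we do not claim this is the paper's algorithm, which walks the (2,2)-isogeny graph):

# Generic memoryless collision search (Floyd cycle finding) on a toy 24-bit hash.
# Illustration only; not the paper's algorithm. Assumes the starting point does not
# already lie on the rho cycle, which is overwhelmingly likely.
import hashlib

def h(x: int) -> int:
    # toy compression: 24-bit truncation of SHA-256, viewed as a random-looking map
    digest = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:3], "big")

def find_collision(seed: int = 1):
    # Phase 1: tortoise and hare meet somewhere on the cycle of the iterated map.
    tortoise, hare = h(seed), h(h(seed))
    while tortoise != hare:
        tortoise, hare = h(tortoise), h(h(hare))
    # Phase 2: restart the tortoise from the seed; just before the two pointers meet
    # at the cycle entrance, they form two distinct preimages of the same value.
    tortoise = seed
    while h(tortoise) != h(hare):
        tortoise, hare = h(tortoise), h(hare)
    return tortoise, hare

a, b = find_collision()
assert a != b and h(a) == h(b)
print(f"collision: h({a}) == h({b}) == {h(a):#08x}")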

Rama Yadavalli, Jeffery Solomon, Vrinda Sharma
ePrint Report
Cloud computing has matured into the default substrate for data processing, yet confidentiality demands often force a hard trade-off between the utility of outsourced computation and the privacy of sensitive inputs. Homomorphic encryption (HE) [1] promises to dissolve that trade-off by enabling computation directly on ciphertexts, returning encrypted results that only the data owner can decrypt. Despite remarkable progress in fully homomorphic encryption (FHE) and leveled variants suitable for bounded-depth circuits, deploying HE at cloud scale remains challenging. The cost of ciphertext arithmetic is orders of magnitude higher than plaintext compute; noise growth and rescaling impose algorithmic discipline; vectorization and rotation keys complicate program layout; and the lack of verifiability in bare HE obliges trust in a correct-but-curious cloud[?].

This paper develops a system perspective on how to apply modern HE in cloud environments without reducing it to a boutique feature. We introduce a reference architecture that marries approximate-arithmetic HE for analytics with exact-arithmetic HE for integrity-critical operations, composes HE with succinct proofs for verifiability, and integrates a cost model into the scheduler so that elastically provisioned serverless workers can meet latency objectives under price constraints. The design begins with a compiler that lowers dataflow graphs to operator sequences parameterized by multiplicative depth L and rotation sets; it then chooses schemes and parameters—CKKS for floating-point style analytics and signal processing, BFV/BGV for integer operations and counters, TFHE-style bootstrapping for comparisons—that minimize the total time-to-result under explicit error and security budgets. A cryptographic key service supports threshold issuance and rotation-key escrow without learning plaintexts, while a storage layer packs columns into ciphertext SIMD lanes to amortize cost across tenants.

For verifiability, we attach homomorphic message authentication tags to intermediate ciphertexts and wrap end-to-end executions in succinct non-interactive proofs specialized to the bilinear equations that certify correct key switching, rescaling, and bootstrapping. Analytically, we characterize latency by a linear model in the counts of core homomorphic primitives and show how to saturate GPUs or vector units with batched number-theoretic transforms to bend throughput toward practical regimes. Under realistic traces of analytic queries and encrypted inference, the architecture achieves sub-second P95 for circuits of depth six to eight with one or two bootstraps, while sustaining 128-bit security under RLWE. By treating HE not as an exotic afterthought but as a first-class cloud programming and scheduling primitive, the proposed approach demonstrates a path to confidential-by-default services in which the cloud never sees data in the clear yet remains efficient, elastic, and auditable.

Lui Zheng, Roger Zhu, Amit Agrawal, Carol Lamore
ePrint Report
Mutual Transport Layer Security (mTLS) underpins authenticated, confidential communication across modern service meshes, but its deployment stance in machine-learning platforms is typically static—fixed cipher suites, certificate lifetimes, and re-authentication schedules chosen for worst-case threats rather than observed risk. Large Language Model (LLM) serving pipelines exacerbate this rigidity: traffic is bursty, topologies reconfigure dynamically under autoscaling, and sensitive artifacts such as prompts, training features, and evaluation data traverse heterogeneous substrates. In this paper we argue that mTLS for LLM systems[1] should be governed by adaptive control rather than static policy. We formalize a feedback loop that ingests multi-modal telemetry—connection error codes, handshake latencies, anomaly scores from request semantics, workload attestation freshness, and service-level objective (SLO) drift—and outputs fine-grained adjustments to transport posture: client-certificate renewal cadence, certificate path length and key type selection, session resumption eligibility, early data gating, proof-of-possession challenges, and revocation propagation thresholds. The controller targets two coupled objectives: maintain cryptographic assurances (mutual authentication, forward secrecy, and replay resistance) while bounding the cost of security on tail latency and throughput during high-load inference.

Hamza Abusalah, Karen Azari, Chethan Kamath, Erkan Tairi, Maximilian von Consbruch
ePrint Report
This paper is concerned with the question of whether Verifiable Delay Functions (VDFs), as introduced by Boneh et al. [CRYPTO 2018], can be constructed in the plain Random Oracle Model (ROM) without any computational assumptions. A first partial answer to this question is due to Mahmoody, Smith, and Wu [ICALP 2020], and rules out the existence of perfectly unique VDFs in the ROM. Building on this result, Guan, Riazanov, and Yuan [CRYPTO 2025] very recently demonstrated that even VDFs with computational uniqueness are impossible under a public-coin setup. However, the case of computationally unique VDFs with private-coin setup remained open. We close this gap by showing that even a computationally expensive private-coin setup does not allow constructing VDFs in the ROM.

Pranav Garimidi, Joachim Neu, Max Resnick
ePrint Report
Traditional single-proposer blockchains suffer from miner extractable value (MEV), where validators exploit their serial monopoly on transaction inclusion and ordering to extract rents from users. While there have been many developments at the application layer to reduce the impact of MEV, these approaches largely require auctions as a subcomponent. Running auctions efficiently on chain requires two key properties of the underlying consensus protocol: selective-censorship resistance and hiding. These properties guarantee that an adversary can neither selectively delay transactions nor see their contents before they are confirmed. We propose a multiple concurrent proposer (MCP) protocol offering exactly these properties.

Foteini Baldimtsi, Rishab Goyal, Aayush Yadav
ePrint Report
Non-interactive blind signatures (NIBS; Eurocrypt '23) allow a signer to asynchronously generate presignatures for a recipient, ensuring that only the intended recipient can extract a "blinded" signature for a random message.

We introduce a new generalization called non-interactive batched blind signatures (NIBBS). Our goal is to reduce the computation and communication costs for signers and receivers by batching multiple blind signature queries. More precisely, we define the property of 'succinct communication', which requires that the communication cost from signer to receiver be independent of the batch size. This makes NIBBS well suited for large-scale deployments that require only minimal signer-side effort.

We design a NIBBS scheme and prove its security based on the hardness of lattice assumptions (in the random oracle model). When instantiated with the low-depth PRF candidate "Crypto Dark Matter" (TCC '18) and the succinct lattice-based proof system for rank-1 constraint systems (Crypto '23), our final signature size is 308 KB with <1 KB communication.

Aditya Singh Rawat, Mahabir Prasad Jhanwar
ePrint Report
The proposed $\mathsf{SL\text{-}DNSSEC}$ protocol (AsiaCCS'25) uses a quantum-safe KEM and a MAC to perform signature-less $\mathsf{(SL)}$ DNSSEC validations in a single UDP query / response style. In this paper, we give a formal analysis of $\mathsf{SL\text{-}DNSSEC}$ in the reductionist model.

Andrey Boris Khesin, Jonathan Z. Lu, Alexander Poremba, Akshar Ramkumar, Vinod Vaikuntanathan
ePrint Report
Random classical linear codes are widely believed to be hard to decode. While slightly sub-exponential time algorithms exist when the coding rate vanishes sufficiently rapidly, all known algorithms at constant rate require exponential time. By contrast, the complexity of decoding a random quantum stabilizer code has remained an open question for quite some time. This work closes the gap in our understanding of the algorithmic hardness of decoding random quantum versus random classical codes. We prove that decoding a random stabilizer code with even a single logical qubit is at least as hard as decoding a random classical code at constant rate—the maximally hard regime. This result suggests that the easiest random quantum decoding problem is at least as hard as the hardest random classical decoding problem, and shows that any sub-exponential algorithm decoding a typical stabilizer code, at any rate, would immediately imply a breakthrough in cryptography.

More generally, we also characterize many other complexity-theoretic properties of stabilizer codes. While classical decoding admits a random self-reduction, we prove significant barriers for the existence of random self-reductions in the quantum case. This result follows from new bounds on Clifford entropies and Pauli mixing times, which may be of independent interest. As a complementary result, we demonstrate various other self-reductions which are in fact achievable, such as between search and decision. We also demonstrate several ways in which quantum phenomena, such as quantum degeneracy, force several reasonable definitions of stabilizer decoding—all of which are classically identical—to have distinct or non-trivially equivalent complexity.