International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

03 October 2025

Björn Kriepke, Gohar Kyureghyan
ePrint Report
Let $1$ be the all-one vector and $\odot$ denote the component-wise multiplication of two vectors in $\mathbb F_2^n$. We study the vector space $\Gamma_n$ over $\mathbb F_2$ generated by the functions $\gamma_{2k}:\mathbb F_2^n \to \mathbb F_2^n, k\geq 0$, where $$ \gamma_{2k} = S^{2k}\odot(1+S^{2k-1})\odot(1+S^{2k-3})\odot\ldots\odot(1+S) $$ and $S:\mathbb F_2^n\to\mathbb F_2^n$ is the cyclic left shift function. The functions in $\Gamma_n$ are shift-invariant, and the well-known $\chi$ function used in several cryptographic primitives is contained in $\Gamma_n$. For even $n$, we show that the permutations from $\Gamma_n$ form an Abelian group with respect to composition, which is isomorphic to the unit group of the residue ring $\mathbb F_2[X]/(X^n+X^{n/2})$. This isomorphism yields an efficient theoretical and algorithmic method for constructing and studying a rich family of shift-invariant permutations on $\mathbb F_2^n$ that are natural generalizations of $\chi$. To demonstrate this, we apply the obtained results to investigate the function $\gamma_0 +\gamma_2+\gamma_4$ on $\mathbb F_2^n$.
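For a concrete handle on these maps, here is a small Python sketch with states as $n$-bit integers; the shift direction and bit-indexing conventions are illustrative assumptions, under which $\gamma_0 + \gamma_2$ recovers the usual $\chi$.
```python
# A minimal sketch (illustrative conventions, not the paper's code) of the
# shift-invariant maps gamma_{2k} on F_2^n, with states as n-bit integers.

def rotl(x, r, n):
    """Cyclic shift S^r of an n-bit state."""
    r %= n
    return ((x << r) | (x >> (n - r))) & ((1 << n) - 1)

def gamma(two_k, x, n):
    """gamma_{2k}(x) = S^{2k}x AND (1 + S^{2k-1}x) AND ... AND (1 + Sx)."""
    full = (1 << n) - 1
    out = rotl(x, two_k, n)
    for j in range(1, two_k, 2):       # odd shift amounts 1, 3, ..., 2k-1
        out &= rotl(x, j, n) ^ full    # component-wise complement: 1 + S^j x
    return out

def chi(x, n):
    """Under this convention, chi = gamma_0 + gamma_2 (addition is XOR)."""
    return gamma(0, x, n) ^ gamma(2, x, n)

def gamma024(x, n):
    """The generalization gamma_0 + gamma_2 + gamma_4 studied in the paper."""
    return gamma(0, x, n) ^ gamma(2, x, n) ^ gamma(4, x, n)
```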
Luca Bonamino, Pierrick Méaux
ePrint Report
Algebraic Immunity (AI) is a fundamental property in the security analysis of stream ciphers and Boolean functions, measuring the resistance of a function against algebraic attacks. In this work, we focus on the computation of AI and, more specifically, on its generalization to restricted algebraic immunity, which considers annihilators constrained to specific subsets of $\mathbb{F}_2^n$. While the computation of AI has been studied using methods based on Reed-Muller codes and iterative rank-based algorithms, these approaches have not been formally adapted to the restricted setting. We address this gap by establishing the theoretical foundations required for the adaptation of these techniques and providing explicit algorithms for computing restricted algebraic immunity.

To assess the efficiency of our algorithms, we conduct practical experiments comparing the computational cost of the Reed-Muller and iterative approaches in terms of time and memory. As a case study, we analyze the algebraic immunity restricted to the slices of $\mathbb{F}_2^n$, i.e., the sets of binary vectors of fixed Hamming weight, denoted $\mathsf{AI}_k$. The slices are of particular interest in areas of cryptography such as side-channel analysis and stream cipher design, where the input weight may be partially predictable. We further investigate restricted algebraic immunity for Weightwise (Almost) Perfectly Balanced (W(A)PB) functions, which have been extensively studied since 2017. Our results include an empirical analysis of $\mathsf{AI}_k$ distributions for WPB functions with $4$, $8$, and $16$ variables, as well as an evaluation of $\mathsf{AI}_k$ for various known function families.
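For reference, the quantity being computed can be obtained by brute force directly from the definition, via linear algebra over GF(2); the Python sketch below does exactly that (far less efficiently than the paper's Reed-Muller and iterative algorithms).
```python
# Brute-force AI_k from the definition: the least d such that some function
# of degree <= d is nonzero on the slice yet annihilates f or f+1 there.
from itertools import combinations, product

def gf2_rank(rows):
    """Rank of a GF(2) matrix whose rows are integer bitmasks."""
    basis = {}
    for row in rows:
        while row:
            top = row.bit_length() - 1
            if top not in basis:
                basis[top] = row
                break
            row ^= basis[top]
    return len(basis)

def restricted_ai(f, n, k):
    pts = [x for x in product((0, 1), repeat=n) if sum(x) == k]
    for d in range(n + 1):
        mons = [s for r in range(d + 1) for s in combinations(range(n), r)]
        def row(x):  # evaluations of all monomials of degree <= d at x
            return sum(1 << j for j, s in enumerate(mons)
                       if all(x[i] for i in s))
        rank_all = gf2_rank([row(x) for x in pts])
        for b in (1, 0):  # annihilate f, then f + 1
            supp = [row(x) for x in pts if f(x) == b]
            # an annihilator nonzero on the slice exists iff the kernel over
            # the support is strictly larger than the kernel over the slice
            if rank_all > gf2_rank(supp):
                return d
    return n

print(restricted_ai(lambda x: x[0] ^ x[1], 4, 2))  # -> 1 on the weight-2 slice
```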
Peigen Li, Hao Guo, Jintai Ding
ePrint Report
This article develops a unified framework for analyzing and enhancing a family of multivariate signature schemes based on UOV. We conduct a comparative study of three recent UOV-like schemes—QR-UOV, MAYO, and SNOVA—and identify a common design principle: employing tensor product constructions to enlarge the dimension of the oil subspace. Building on this perspective, we propose a new multivariate signature scheme called TSUOV that synthesizes these insights to provide improved key and signature sizes without compromising security.
Minwoo Lee, Minjoo Sim, Siwoo Eum, Gyeongju Song, Hwajeong Seo
ePrint Report
This paper presents an ARMv8/NEON-oriented, constant-time implementation of NCC-Sign and a careful head-to-head evaluation against a clean KPQClean baseline on Apple M2. We design lane-local SIMD kernels (e.g., a 4-lane Montgomery multiply-reduce, centered modular reduction, fused stage-0 butterfly, and streamlined radix-2/3 pipelines) that minimize cross-lane shuffles and keep memory access streaming. Using a reproducible harness with PMU-based cycle counting and identical toolchains for baseline and optimized builds, we observe consistent end-to-end gains across all parameter sets: geometric-mean speedups of 1.22$\times$ for key generation, 1.58$\times$ for signing, and 1.50$\times$ for verification, yielding an overall 1.42$\times$ improvement over NCC-Sign-1/3/5. The largest benefits appear in NTT-intensive signing/verification, and absolute savings scale with parameter size.
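For readers unfamiliar with the kernel being vectorized, here is a scalar Python model of a signed Montgomery multiply-reduce with $R = 2^{16}$; the modulus $q = 3329$ is a stand-in (Kyber's), not NCC-Sign's actual parameter, and the NEON kernel performs the same arithmetic on four lanes at once.
```python
# Scalar model of a signed Montgomery reduce, the building block of the
# 4-lane NEON multiply-reduce kernel. q below is a placeholder modulus.
R = 1 << 16

def montgomery_reduce(a, q, qinv):
    """Return a * R^{-1} mod q (result in (-q, q)), for |a| < q * R // 2."""
    t = ((a & (R - 1)) * qinv) & (R - 1)   # t = a * q^{-1} mod R
    if t >= R // 2:                        # centered representative of t
        t -= R
    return (a - t * q) >> 16               # exact: a - t*q is divisible by R

q = 3329                                   # stand-in modulus (Kyber's)
qinv = pow(q, -1, R)
a, b = 1234, 2345
a_mont = a * R % q                         # lift a into the Montgomery domain
assert montgomery_reduce(a_mont * b, q, qinv) % q == a * b % q
```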
Ryo Ohashi, Hiroshi Onuki
ePrint Report
In 2023, LeGrow, Ti, and Zobernig proposed a cryptographic hash function (which we refer to as the LTZ hash function in this paper) based on a certain (2,2)-isogeny graph between supersingular non-superspecial abelian surfaces over $\mathbb{F}_{p^4}$. The authors estimated that the time and space complexities required to find a collision in the LTZ hash function are both $\widetilde{O}(p^3)$. In this paper, we first propose a mathematical conjecture on the number of vertices defined over the smaller field $\mathbb{F}_{p^2}$ in the isogeny graphs used in the LTZ hash function. Based on this conjecture, we then construct a collision-finding algorithm for the LTZ hash function which preserves the time complexity $\widetilde{O}(p^3)$ while reducing the required memory to $O(\log(p)^2)$. We implemented this algorithm in Rust, and successfully found a collision for parameters claimed to provide 35-37 bits of security, using 26.2 MB of memory on average, in 20.2 hours.
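The memory-reduction pattern is familiar from generic collision finding: replace a stored birthday search by chains that halt at distinguished points, storing only endpoints. The toy Python sketch below shows that generic van Oorschot-Wiener technique on a truncated hash; the paper's algorithm for the LTZ graph is more involved.
```python
# Toy distinguished-point collision search: only chain endpoints are stored,
# so memory stays tiny while the work remains at the birthday bound.
import hashlib

def h(x, bits=28):
    d = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(d[:8], "big") & ((1 << bits) - 1)

def walk(start, dist_bits=8, cap=1 << 16):
    """Iterate h until the low dist_bits bits are zero; return (end, length)."""
    x, steps = start, 0
    while x & ((1 << dist_bits) - 1):
        x, steps = h(x), steps + 1
        if steps > cap:
            return None, None            # trapped in a cycle; discard chain
    return x, steps

def find_collision(n_chains=1 << 13):
    seen = {}                            # endpoint -> (start, chain length)
    for seed in range(n_chains):
        end, length = walk(seed)
        if end is None:
            continue
        if end in seen:
            (s1, l1), (s2, l2) = sorted([seen[end], (seed, length)],
                                        key=lambda t: t[1])
            x, y = s1, s2
            for _ in range(l2 - l1):     # align both chains with the endpoint
                y = h(y)
            if x == y:
                continue                 # one chain merely contains the other
            while h(x) != h(y):
                x, y = h(x), h(y)
            return x, y                  # distinct inputs with h(x) == h(y)
        seen[end] = (seed, length)
    return None

print(find_collision())
```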
Rama Yadavalli, Jeffery Solomon, Vrinda Sharma
ePrint Report
Cloud computing has matured into the default substrate for data processing, yet confidentiality demands often force a hard trade-off between the utility of outsourced computation and the privacy of sensitive inputs. Homomorphic encryption (HE) [1] promises to dissolve that trade-off by enabling computation directly on ciphertexts, returning encrypted results that only the data owner can decrypt. Despite remarkable progress in fully homomorphic encryption (FHE) and leveled variants suitable for bounded-depth circuits, deploying HE at cloud scale remains challenging. The cost of ciphertext arithmetic is orders of magnitude higher than plaintext compute; noise growth and rescaling impose algorithmic discipline; vectorization and rotation keys complicate program layout; and the lack of verifiability in bare HE obliges trust in a correct-but-curious cloud.

This paper develops a systems perspective on how to apply modern HE in cloud environments without reducing it to a boutique feature. We introduce a reference architecture that marries approximate-arithmetic HE for analytics with exact-arithmetic HE for integrity-critical operations, composes HE with succinct proofs for verifiability, and integrates a cost model into the scheduler so that elastically provisioned serverless workers can meet latency objectives under price constraints. The design begins with a compiler that lowers dataflow graphs to operator sequences parameterized by multiplicative depth L and rotation sets; it then chooses schemes and parameters—CKKS for floating-point-style analytics and signal processing, BFV/BGV for integer operations and counters, TFHE-style bootstrapping for comparisons—that minimize the total time-to-result under explicit error and security budgets. A cryptographic key service supports threshold issuance and rotation-key escrow without learning plaintexts, while a storage layer packs columns into ciphertext SIMD lanes to amortize cost across tenants.

For verifiability, we attach homomorphic message authentication tags to intermediate ciphertexts and wrap end-to-end executions in succinct non-interactive proofs specialized to the bilinear equations that certify correct key switching, rescaling, and bootstrapping. Analytically, we characterize latency by a linear model in the counts of core homomorphic primitives and show how to saturate GPUs or vector units with batched number-theoretic transforms to bend throughput toward practical regimes. Under realistic traces of analytic queries and encrypted inference, the architecture achieves sub-second P95 for circuits of depth six to eight with one or two bootstraps, while sustaining 128-bit security under RLWE. By treating HE not as an exotic afterthought but as a first-class cloud programming and scheduling primitive, the proposed approach demonstrates a path to confidential-by-default services in which the cloud never sees data in the clear yet remains efficient, elastic, and auditable.
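The scheduler-facing cost model can be pictured as follows; this Python sketch illustrates the "linear model in the counts of core homomorphic primitives", with invented placeholder coefficients rather than the paper's measured numbers.
```python
# Linear time-to-result model: latency ~= sum over core HE primitives of
# (count x unit cost). The coefficients below are illustrative stand-ins.
COST_US = {"add": 15, "mul": 900, "rotate": 850,
           "rescale": 400, "bootstrap": 250_000}

def predicted_latency_us(op_counts):
    """Estimate for one lowered circuit; a scheduler compares such estimates
    across candidate scheme/parameter choices to pick the cheapest feasible."""
    return sum(n * COST_US[op] for op, n in op_counts.items())

# e.g. a depth-7 analytic query with one bootstrap:
print(predicted_latency_us(
    {"mul": 120, "rotate": 64, "rescale": 7, "bootstrap": 1}))
```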
Lui Zheng, Roger Zhu, Amit Agrawal, Carol Lamore
ePrint Report
Mutual Transport Layer Security (mTLS) underpins authenticated, confidential communication across modern service meshes, but its deployment stance in machine-learning platforms is typically static—fixed cipher suites, certificate lifetimes, and re-authentication schedules chosen for worst-case threats rather than observed risk. Large Language Model (LLM) serving pipelines exacerbate this rigidity: traffic is bursty, topologies reconfigure dynamically under autoscaling, and sensitive artifacts such as prompts, training features, and evaluation data traverse heterogeneous substrates. In this paper we argue that mTLS for LLM systems [1] should be governed by adaptive control rather than static policy. We formalize a feedback loop that ingests multi-modal telemetry—connection error codes, handshake latencies, anomaly scores from request semantics, workload attestation freshness, and service-level objective (SLO) drift—and outputs fine-grained adjustments to transport posture: client-certificate renewal cadence, certificate path length and key type selection, session resumption eligibility, early data gating, proof-of-possession challenges, and revocation propagation thresholds. The controller targets two coupled objectives: maintain cryptographic assurances (mutual authentication, forward secrecy, and replay resistance) while bounding the cost of security on tail latency and throughput during high-load inference.
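As an illustration of the proposed control loop (a sketch under assumed telemetry fields and thresholds, not the paper's controller), consider:
```python
# Skeleton of a feedback controller mapping telemetry to mTLS posture.
from dataclasses import dataclass

@dataclass
class Telemetry:
    handshake_p95_ms: float
    anomaly_score: float        # from request-semantics scoring, in [0, 1]
    attestation_age_s: float
    slo_drift: float            # positive when the latency SLO is missed

@dataclass
class Posture:
    cert_lifetime_s: int = 86_400
    allow_resumption: bool = True
    allow_early_data: bool = False

def adjust(t: Telemetry, p: Posture) -> Posture:
    # Tighten cryptographic posture when risk signals rise ...
    if t.anomaly_score > 0.8 or t.attestation_age_s > 3_600:
        p.cert_lifetime_s = max(p.cert_lifetime_s // 2, 600)
        p.allow_resumption = False
        p.allow_early_data = False
    # ... and relax toward cheaper handshakes when SLOs are under pressure
    # and observed risk is low.
    elif t.slo_drift > 0 and t.anomaly_score < 0.2:
        p.allow_resumption = True
        p.cert_lifetime_s = min(p.cert_lifetime_s * 2, 7 * 86_400)
    return p

p = adjust(Telemetry(handshake_p95_ms=40, anomaly_score=0.9,
                     attestation_age_s=120, slo_drift=0.0), Posture())
```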
Hamza Abusalah, Karen Azari, Chethan Kamath, Erkan Tairi, Maximilian von Consbruch
ePrint Report
This paper is concerned with the question of whether Verifiable Delay Functions (VDFs), as introduced by Boneh et al. [CRYPTO 2018], can be constructed in the plain Random Oracle Model (ROM) without any computational assumptions. A first partial answer to this question is due to Mahmoody, Smith, and Wu [ICALP 2020], who rule out the existence of perfectly unique VDFs in the ROM. Building on this result, Guan, Riazanov, and Yuan [CRYPTO 2025] very recently demonstrated that even VDFs with computational uniqueness are impossible under a public-coin setup. However, the case of computationally unique VDFs with private-coin setup remained open. We close this gap by showing that even a computationally expensive private-coin setup does not allow the construction of VDFs in the ROM.
Pranav Garimidi, Joachim Neu, Max Resnick
ePrint Report ePrint Report
Traditional single-proposer blockchains suffer from miner extractable value (MEV), where validators exploit their serial monopoly on transaction inclusion and ordering to extract rents from users. While there have been many developments at the application layer to reduce the impact of MEV, these approaches largely require auctions as a subcomponent. Running auctions efficiently on chain requires two key properties of the underlying consensus protocol: selective-censorship resistance and hiding. These properties guarantee that an adversary can neither selectively delay transactions nor see their contents before they are confirmed. We propose a multiple concurrent proposer (MCP) protocol offering exactly these properties.
Foteini Baldimtsi, Rishab Goyal, Aayush Yadav
ePrint Report
Non-interactive blind signatures (NIBS; Eurocrypt '23) allow a signer to asynchronously generate presignatures for a recipient, ensuring that only the intended recipient can extract a "blinded" signature for a random message.

We introduce a new generalization called non-interactive batched blind signatures (NIBBS). Our goal is to reduce the computation and communication costs for signers and receivers by batching multiple blind signature queries. More precisely, we define the property of 'succinct communication', which requires that the communication cost from signer to receiver be independent of the batch size. This makes NIBBS well suited to large-scale deployments, as it requires only minimal signer-side effort.

We design a NIBBS scheme and prove its security based on the hardness of lattice assumptions (in the random oracle model). When instantiated with the low-depth PRF candidate "Crypto Dark Matter" (TCC '18) and the succinct lattice-based proof system for rank-1 constraint systems (Crypto '23), our final signature size is 308 KB with <1 KB communication.
Aditya Singh Rawat, Mahabir Prasad Jhanwar
ePrint Report
The proposed $\mathsf{SL\text{-}DNSSEC}$ protocol (AsiaCCS'25) uses a quantum-safe KEM and a MAC to perform \textit{signature-less} $\mathsf{(SL)}$ DNSSEC validations in a single UDP query / response style. In this paper, we give a formal analysis of $\mathsf{SL\text{-}DNSSEC}$ in the reductionist model.
Andrey Boris Khesin, Jonathan Z. Lu, Alexander Poremba, Akshar Ramkumar, Vinod Vaikuntanathan
ePrint Report
Random classical linear codes are widely believed to be hard to decode. While slightly sub-exponential time algorithms exist when the coding rate vanishes sufficiently rapidly, all known algorithms at constant rate require exponential time. By contrast, the complexity of decoding a random quantum stabilizer code has remained an open question for quite some time. This work closes the gap in our understanding of the algorithmic hardness of decoding random quantum versus random classical codes. We prove that decoding a random stabilizer code with even a single logical qubit is at least as hard as decoding a random classical code at constant rate—the maximally hard regime. This result suggests that the easiest random quantum decoding problem is at least as hard as the hardest random classical decoding problem, and shows that any sub-exponential algorithm decoding a typical stabilizer code, at any rate, would immediately imply a breakthrough in cryptography.

More generally, we also characterize many other complexity-theoretic properties of stabilizer codes. While classical decoding admits a random self-reduction, we prove significant barriers for the existence of random self-reductions in the quantum case. This result follows from new bounds on Clifford entropies and Pauli mixing times, which may be of independent interest. As a complementary result, we demonstrate various other self-reductions which are in fact achievable, such as between search and decision. We also demonstrate several ways in which quantum phenomena, such as quantum degeneracy, force several reasonable definitions of stabilizer decoding—all of which are classically identical—to have distinct or non-trivially equivalent complexity.

01 October 2025

Virtual event, Anywhere on Earth, 23 March - 26 March 2026
Event Calendar
Event date: 23 March to 26 March 2026
Submission deadline: 24 October 2025
Notification: 3 December 2025
University of Klagenfurt
Job Posting
Please use this link: https://jobs.aau.at/en/job/university-assistant-predoctoral-and-project-researcher-all-genders-welcome-2/
Level of employment: 100% (40 hours/week)
Minimum salary: € 52,007.20 per annum (gross); classification according to collective agreement: B1
Limited to: 4 years
Application deadline: November 5, 2025
Reference code: 702-1/24

Closing date for applications: November 5, 2025

Contact: Prof. Gerhard Friedrich, University of Klagenfurt

More information: https://jobs.aau.at/en/job/university-assistant-predoctoral-and-project-researcher-all-genders-welcome-2/


30 September 2025

Mingshu Cong, Tsz Hon Yuen, Siu-Ming Yiu
ePrint Report
We present DualMatrix, a zkSNARK solution for large-scale matrix multiplication. Classical zkSNARK protocols typically underperform in data-analytic contexts, hampered by the large size of datasets and the superlinear nature of matrix multiplication. DualMatrix excels in its scalability: its prover time scales linearly with the number of non-zero elements in the input matrices. For $n \times n$ matrix multiplication with $N$ non-zero elements across the three input matrices, DualMatrix employs a structured reference string (SRS) of size $O(n)$, and achieves RAM usage of $O(N+n)$, transcript size of $O(\log n)$, prover time of $O(N+n)$, and verifier time of $O(\log n)$. The prover time of $O(N+n)$, which surpasses all existing protocols, comprises $O(N+n)$ field multiplications and $O(n)$ exponentiations and pairings in bilinear groups. These efficiencies make DualMatrix effective for linear algebra on the large matrices common in real-world applications.

We evaluated DualMatrix on $2^{15} \times 2^{15}$ input matrices, each containing 1G non-zero integers, which necessitate 32T integer multiplications in naive matrix multiplication. DualMatrix recorded prover and verifier times of $150.84$s and $0.56$s, respectively. When applied to $1M \times 1M$ sparse matrices, each containing 1G non-zero integers, it demonstrated prover and verifier times of $1,384.45$s and $0.67$s. Our approach outperforms current zkSNARK solutions by successfully handling these large matrix multiplication tasks in our experiments.

We extend matrix operations from field matrices to group matrices, formalizing a group matrix algebra. This mathematical advancement brings notable symmetries beneficial for high-dimensional elliptic curve cryptography. By leveraging the bilinear properties of our group matrix algebra in the context of a two-tier commitment scheme, DualMatrix achieves efficiency gains over previous matrix multiplication arguments. To accomplish this, we extend and enhance Bulletproofs to construct an inner product argument featuring a transparent setup and logarithmic verifier time.
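For orientation, the 32T multiplication count above matches the naive cubic cost, reading G as $2^{30}$ and T as $2^{40}$ (note that a $2^{15} \times 2^{15}$ matrix has $n^2 = 2^{30}$ entries, so 1G non-zero entries means dense inputs):
$$n^3 = (2^{15})^3 = 2^{45} = 32 \cdot 2^{40} \approx 3.5 \times 10^{13}.$$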
Zhuo Wu, Xinxuan Zhang, Yi Deng, Yuanju Wei, Zhongliang Zhang, Liuyu Yang
ePrint Report
This paper introduces the first multilinear polynomial commitment scheme (PCS) over Galois rings achieving $O(\log^2 n)$ verification cost, with $O(n\log n)$ committing time and $O(n)$ evaluation opening prover time. This PCS can be used to construct zero-knowledge proofs for arithmetic circuits over Galois rings, facilitating verifiable computation in applications requiring proofs of polynomial ring operations (e.g., verifiable fully homomorphic encryption). First, we construct random foldable linear codes over Galois rings with sufficient code distance and present a distance preservation theorem over Galois rings. Second, we extend the $\textsf{Basefold}$ commitment (Zeilberger et al., Crypto 2024) to multilinear polynomials over Galois rings. Our approach reduces proof size and verifier time from $O(\sqrt{n})$ to $O(\log^2 n)$ compared to Wei et al. (PKC 2025). Furthermore, we give a batched multipoint opening protocol for the evaluation phase that collapses the proof size and verifier time for $N$ polynomials at $M$ points from $O(NM \log^2 n)$ to $O(\log^2 n)$, and the prover time from $O(NMn)$ to $O(n)$, further enhancing efficiency.
Marc Damie, Florian Hahn, Andreas Peter, Jan Ramon
ePrint Report
Distributed Point Functions (DPFs) enable sharing secret point functions across multiple parties, supporting privacy-preserving technologies such as Private Information Retrieval and anonymous communication. While 2-party PRG-based schemes with logarithmic key sizes have been known for a decade, extending these solutions to multi-party settings has proven challenging. In particular, PRG-based multi-party DPFs have historically struggled with practicality due to key sizes growing exponentially with the number of parties and the field size.

Our work addresses this efficiency bottleneck by optimizing the PRG-based multi-party DPF scheme of Boyle et al. (EUROCRYPT'15). By leveraging the honest-majority assumption, we eliminate the exponential factor present in this scheme. Our construction is the first PRG-based multi-party DPF scheme with practical key sizes, and provides keys up to $3\times$ smaller than those of the best known multi-party DPF. This work demonstrates that, with careful optimization, PRG-based multi-party DPFs can achieve practical performance, and even top performance.
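For contrast, the naive baseline that succinct DPFs improve upon simply XOR-shares the truth table of the point function, with keys linear in the domain size; a toy Python sketch:
```python
# Naive p-party "DPF": additively share (over F_2) the truth table of the
# point function f_{alpha,beta}. Key size is O(domain), exactly the cost
# that PRG-based constructions shrink to (poly)logarithmic keys.
import secrets

def share_point_function(alpha, beta, domain, p):
    keys = [[secrets.randbits(1) for _ in range(domain)]
            for _ in range(p - 1)]
    last = []
    for x in range(domain):
        bit = beta if x == alpha else 0
        for k in keys:
            bit ^= k[x]
        last.append(bit)
    return keys + [last]        # any p-1 keys look jointly uniform

keys = share_point_function(alpha=5, beta=1, domain=16, p=3)
for x in range(16):
    total = 0
    for k in keys:
        total ^= k[x]           # XOR of all evaluations reconstructs f
    assert total == (1 if x == 5 else 0)
```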
Jeffrey Champion, Fuyuki Kitagawa, Ryo Nishimaki, Takashi Yamakawa
ePrint Report
We initiate the study of untelegraphable encryption (UTE), founded on the no-telegraphing principle, which allows an encryptor to encrypt a message such that a binary string representation of the ciphertext cannot be decrypted by a user with the secret key, a task that is classically impossible. This is a natural relaxation of unclonable encryption (UE), inspired by the recent work of Nehoran and Zhandry (ITCS 2024), who showed a computational separation between the no-cloning and no-telegraphing principles.

In this work, we define and construct UTE information-theoretically in the plain model. Building off this, we give several applications of UTE and study the interplay of UTE with UE and well-studied tasks in quantum state learning, yielding the following contributions:

- A construction of collusion-resistant UTE from standard secret-key encryption (SKE). We additionally show that hyper-efficient shadow tomography (HEST) is impossible assuming collusion-resistant UTE exists. By considering a relaxation of collusion-resistant UTE, we are able to show the impossibility of HEST assuming only pseudorandom state generators (which may not imply one-way functions). This almost unconditionally answers an open inquiry of Aaronson (STOC 2018).

- A construction of UTE from a quasi-polynomially secure one-shot message authentication code (OSMAC) in the classical oracle model, such that there is an explicit attack that breaks UE security for an unbounded polynomial number of decryptors.

- A construction of everlasting secure collusion-resistant UTE, where the decryptor adversary can run in unbounded time, in the quantum random oracle model (QROM), and formal evidence that a construction in the plain model is a challenging task. We additionally show that HEST with unbounded post-processing time (which we call weakly-efficient shadow tomography) is impossible assuming everlasting secure collusion-resistant UTE exists.

- A construction of secret sharing for all polynomial-size policies that is resilient to joint and unbounded classical leakage, from collusion-resistant UTE and classical secret sharing for all policies.

- A construction (and definition) of collusion-resistant untelegraphable secret-key functional encryption (UTSKFE) from single-decryptor functional encryption and plain secret-key functional encryption, and a construction of collusion-resistant untelegraphable public-key functional encryption from UTSKFE, plain SKE, and plain public-key functional encryption.
Marcin Kostrzewa, Matthew Klein, Ara Adkins, Grzegorz Świrski, Wojciech Żmuda
ePrint Report
Keccak, the hash function at the core of the Ethereum ecosystem, is computationally expensive to reason about in SNARK circuits, creating a critical bottleneck in the ability of the ZK ecosystem to reason about blockchain state. The recent state of the art in proving Keccak permutations relies on proof systems that can perform lookup arguments, which—while exhibiting better performance than directly proving the hash operations in circuit—still typically require tens of thousands of constraints to prove a single Keccak-f permutation. This paper introduces a new method, termed keccacheck, which builds upon sum-check with influence from GKR to create circuits that can batch-verify Keccak permutations with fewer than 4000 constraints per instance. Keccacheck achieves this by exploiting the logarithmic scaling of recursive verification of the sum-check protocol, reducing the computational cost of verifying large enough batches to only slightly more than evaluating the multilinear extension of the input and output states. Its performance becomes competitive for a batch of 16 permutations and offers more than a 10$\times$ cost reduction for batches of 512 or more permutations. This approach enables new levels of efficiency for the ZK ecosystem, providing the performant storage proofs that are essential to light clients, cross-chain bridges, privacy-focused protocols, and roll-ups.
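For a concrete picture of the primitive being batched, here is a minimal Python sum-check for the sum of a multilinear polynomial given by its evaluations on $\{0,1\}^v$; prover and verifier are fused into one loop for brevity, and the Keccak constraint wiring and batching layer are omitted.
```python
# Minimal sum-check: reduces "sum of t over {0,1}^v" to one evaluation of
# the multilinear extension (MLE) of t at a random point, in v rounds.
import random

P = 2**61 - 1                  # illustrative prime field

def sumcheck(table, v):
    claimed, rs = sum(table) % P, []
    for _ in range(v):
        half = len(table) // 2
        g0, g1 = sum(table[:half]) % P, sum(table[half:]) % P
        assert (g0 + g1) % P == claimed        # verifier's round check
        r = random.randrange(P)
        rs.append(r)
        claimed = ((1 - r) * g0 + r * g1) % P  # g(r), the linear round poly
        table = [((1 - r) * table[i] + r * table[half + i]) % P
                 for i in range(half)]         # fix this variable to r
    return rs, claimed

def mle(vals, point):          # direct multilinear-extension evaluation
    t = list(vals)
    for r in point:
        half = len(t) // 2
        t = [((1 - r) * t[i] + r * t[half + i]) % P for i in range(half)]
    return t[0]

vals = [random.randrange(P) for _ in range(1 << 10)]   # v = 10 variables
rs, final = sumcheck(list(vals), 10)
assert mle(vals, rs) == final                  # the verifier's last check
```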

29 September 2025

Jonas Bertels, Ingrid Verbauwhede
ePrint Report
NIST recently selected Kyber as a standard for key encapsulation and decapsulation. As such, servers will soon need dedicated hardware for these encapsulation protocols. The computationally critical operation of Kyber is its Number Theoretic Transform (NTT), which is commonly accelerated by dedicated hardware such as FPGAs. This work presents an extremely high-throughput design for the Kyber NTT. By utilizing the LUT-based modular multiplication technique we introduced at CHES 2025, its area-delay product is generally one to two orders of magnitude better than that of similar designs in the literature. For instance, where the design of Yaman et al. with 16 Processing Elements requires 9500 LUTs and 16 DSPs to perform an NTT in 69 clock cycles, our design requires 67210 LUTs and no DSPs to perform an NTT every clock cycle.
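For reference, the operation being accelerated is the standard Kyber NTT (FIPS 203: $q = 3329$, $n = 256$, $\zeta = 17$); the plain-Python functional model below computes what the hardware evaluates once per clock cycle.
```python
# Functional model of the Kyber NTT; the design above evaluates this whole
# seven-layer butterfly network at a throughput of one NTT per clock cycle.
Q, ZETA = 3329, 17

def bitrev7(i):
    return int(f"{i:07b}"[::-1], 2)

ZETAS = [pow(ZETA, bitrev7(i), Q) for i in range(128)]

def ntt(a):
    """NTT of a 256-coefficient polynomial over Z_q (FIPS 203, Algorithm 9)."""
    a, k, length = list(a), 1, 128
    while length >= 2:
        for start in range(0, 256, 2 * length):
            z, k = ZETAS[k], k + 1
            for j in range(start, start + length):
                t = z * a[j + length] % Q          # twiddle multiply
                a[j + length] = (a[j] - t) % Q     # Cooley-Tukey butterfly
                a[j] = (a[j] + t) % Q
        length //= 2
    return a
```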