International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

07 August 2025

Michele Battagliola, Laura Mattiuz, Alessio Meneghetti
ePrint Report
The Vector Oblivious Linear Evaluation in the Head (VOLEitH) paradigm has proven to be a versatile tool for designing zero-knowledge proofs and signatures in post-quantum cryptography. In this paper, we propose three VOLE-friendly models for Proofs of Knowledge (PoK) of a solution to an instance of the Linear Code Equivalence Problem (LEP). For the first two schemes, we propose two new reductions from LEP to the Multivariate Quadratic (MQ) problem, which may be of independent interest for the cryptanalysis of LEP. The last model, instead, is obtained by generalizing a recent work by Bettaieb et al. to the context of monomial matrices instead of permutation matrices. While our proposed schemes exhibit larger signature sizes compared to LESS, they significantly improve the computational efficiency, reducing the overall complexity from $O(n^3)$ to $O(n^2)$, where $n$ is the code dimension.
Alessio Meneghetti, Federica Zanetti
ePrint Report
In this work we analyze a problem strictly linked with the Rational Reconstruction problem, which forms the foundation of some post-quantum Quasi-Cyclic Moderate-Density Parity-Check and Quasi-Cyclic Low-Density Parity-Check code-based schemes such as LEDAkem and BIKE. Given a polynomial in a cyclic ring as input, our aim is to recover two polynomials, with specific properties, whose ratio is the input one. The starting point of this work is the paper of Bardet, Dragoi, Luque, and Otmani, which describes some approaches, based on the Extended Euclidean Algorithm, that solve this problem in some specific cases.

In comparison to previous work, we define an additional setting in which the problem can be solved. We also provide an alternative approach to estimate the probability of success, by taking into account a requirement that was not considered in the original paper, thus getting a more precise estimation. Finally, we present a key-recovery attack on BIKE, evaluate its computational cost, and compare it with that of the most efficient known attacks. Although this last step is performed specifically on BIKE, the methodology can be extended to other schemes as well.
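As background, plain rational reconstruction over the integers recovers a small numerator and denominator from their ratio modulo $m$ by truncating the Extended Euclidean Algorithm; the following is a minimal sketch of this standard technique only, not of the cyclic-ring variant the paper studies:

```python
import math

def rational_reconstruction(c, m):
    """Recover (a, b) with a = b*c (mod m) and a, b small, by running the
    half-extended Euclidean algorithm on (m, c) and stopping once the
    remainder drops below sqrt(m).  The denominator may come out negative."""
    bound = math.isqrt(m)
    r0, t0 = m, 0
    r1, t1 = c % m, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    return r1, t1  # numerator, denominator

# Example: 3 * 7^{-1} = 87 (mod 101), and the pair (3, 7) is recovered.
print(rational_reconstruction(87, 101))  # -> (3, 7)
```

The invariant maintained is $r_1 \equiv t_1 \cdot c \pmod m$ at every step, so any truncation point yields a valid numerator/denominator pair; stopping near $\sqrt{m}$ balances their sizes.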
Manuel B. Santos, Dimitris Mouris, Xiang Xie, Miguel de Vega, Andrei Lapets
ePrint Report
Transport Layer Security (TLS) is the backbone of the web, allowing clients to establish secure and private channels with servers. DECO (CCS'20) and follow-up works proposed protocols that enable proving the provenance of a TLS response, i.e., that a payload came from a particular server, without needing server-side modifications. Unfortunately, these works are limited to proving Boolean statements over the payload (e.g., age $\ge$ 18) and cannot combine payloads from multiple clients.

We introduce TLShare, a framework that extracts authenticated data from a TLS connection and imports it into secure multiparty computation (MPC) or fully homomorphic encryption (FHE), without requiring server-side changes or exposing client credentials. Unlike prior work, TLShare allows the payload itself, not just a predicate about it, to serve as private input to secure downstream computation. TLShare supports combining verifiable inputs across multiple clients and servers, enabling new applications such as privacy-preserving financial risk assessment and collaborative analytics. We design three protocols for TLShare: one for MPC using verifiable secret sharing, and two for FHE using interactive and non-interactive zero-knowledge proofs, each ensuring input authenticity, integrity, and end-to-end privacy. We evaluate all three protocols of TLShare over both LAN and WAN settings, comparing their trade-offs and demonstrating their practicality.
Ruben Baecker, Paul Gerhart, Daniel Rausch, Dominique Schröder
ePrint Report
Oblivious Pseudorandom Functions (OPRFs) are fundamental cryptographic primitives essential for privacy-enhancing technologies such as private set intersection, oblivious keyword search, and password-based authentication protocols. We present the first fully adaptive, partially oblivious threshold pseudorandom function that supports proactive key refresh and provides composable security under the One-More Gap Diffie-Hellman assumption in the random oracle model.

Our construction is secure with respect to a new ideal functionality for OPRFs that addresses three critical shortcomings of previous models, specifically key refresh and non-verifiability issues that rendered them unrealizable. In addition, we identify a gap in a prior work's proof of partial obliviousness and develop a novel proof technique to salvage their scheme.
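For readers new to OPRFs, the blinded-evaluation flow common to many constructions (2HashDH-style) can be sketched in a toy multiplicative group; all parameters below are made up for illustration and this is not the threshold scheme of this paper:

```python
import hashlib
import secrets

# Toy parameters: p = 2q + 1 with q prime; g generates the order-q subgroup.
# (Illustrative only -- real OPRFs use large elliptic-curve groups.)
p, q, g = 23, 11, 4

def hash_to_group(x: bytes) -> int:
    # Toy hash-to-group: exponent derived from SHA-256.  NOT secure: the
    # discrete log of the output is known, which a real scheme must avoid.
    e = 1 + int.from_bytes(hashlib.sha256(x).digest(), "big") % (q - 1)
    return pow(g, e, p)

def blind(x: bytes):
    r = secrets.randbelow(q - 1) + 1          # client's blinding scalar
    return r, pow(hash_to_group(x), r, p)     # send H(x)^r to the server

def evaluate(k: int, blinded: int) -> int:
    return pow(blinded, k, p)                 # server applies its key k

def unblind(r: int, evaluated: int) -> int:
    r_inv = pow(r, -1, q)                     # inverse of r mod group order
    return pow(evaluated, r_inv, p)           # = H(x)^k

k = 7                                         # server's secret key
r, blinded = blind(b"password")
assert unblind(r, evaluate(k, blinded)) == pow(hash_to_group(b"password"), k, p)
```

The server learns nothing about the input `x` (it only sees a uniformly blinded group element), and the client learns only the PRF output $H(x)^k$, not the key $k$.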
Theophilus Agama
ePrint Report
We show that Brauer chains and a certain class of Hansen chains satisfy the requirements for an addition chain to be closed. This places these types of addition chains in the subfamily of so-called closed addition chains.
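As background, an addition chain starts at 1 and builds each element as a sum of two earlier ones, and a Brauer chain always reuses the immediately preceding element as one summand; here is a small sketch of these standard definitions (the paper's closedness condition is not modeled):

```python
def is_addition_chain(chain):
    """An addition chain starts at 1 and each later element is the sum
    of two (not necessarily distinct) earlier elements."""
    if not chain or chain[0] != 1:
        return False
    for i, v in enumerate(chain[1:], start=1):
        prefix = chain[:i]
        if not any(v - a in prefix for a in prefix):
            return False
    return True

def is_brauer_chain(chain):
    """A Brauer chain additionally requires each step to use the
    immediately preceding element as one of the two summands."""
    return is_addition_chain(chain) and all(
        chain[i] - chain[i - 1] in chain[:i] for i in range(1, len(chain)))

print(is_brauer_chain([1, 2, 3, 6, 12, 15]))  # -> True
print(is_addition_chain([1, 2, 5]))           # -> False (5 is unreachable)
```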
Sven Argo, Henk Corporaal, Alejandro Garza, Marc Geilen, Manil Dev Gomony, Tim Güneysu, Adrian Marotzke, Fouwad Mir, Christian Larmann, Jan Richter-Brockmann, Jeffrey Smith, Mottaqiallah Taouil, ...
ePrint Report
Artificial Intelligence (AI) has had a profound impact on our contemporary society, and it is indisputable that it will continue to play a significant role in the future. To further enhance AI experience and performance, a transition from large-scale server applications towards AI-powered edge devices is inevitable. In fact, current projections indicate that the market for Smart Edge Processors (SEPs) will grow beyond 70 billion USD by 2026 [1]. Such a shift comes with major challenges, as these devices have limited computing and energy resources yet need to be highly performant. Additionally, security mechanisms need to be implemented to protect against diverse attack vectors as attackers now have physical access to the device. Besides cryptographic keys, Intellectual Property (IP), including neural network weights, may also be potential targets. The CONVOLVE [2] project (currently in its intermediate stage) follows a holistic approach to address these challenges and establish the EU in a leading position in embedded, ultra-low-power and secure processors for edge computing. It encompasses novel hardware technologies, end-to-end integrated workflows, and a security-by-design approach. This paper highlights the security aspects of future edge-AI processors by illustrating challenges encountered in CONVOLVE, the solutions we pursue including some early results, and directions for future research.
Huina Li, Le He, Weidong Qiu
ePrint Report
\xoodyak is a finalist of the NIST lightweight cryptography competition, offering both keyed and hash modes. After several years of cryptanalysis, the question of the largest number of \xoodyak hash rounds for which actual collisions can be found had remained open. To the best of our knowledge, one of the most powerful collision attacks on hash functions based on the sponge construction is the differential-based attack using the S-box linearization technique proposed by Qiao \etal (EUROCRYPT 2017). However, the linearization technique requires a large number of degrees of freedom, making it challenging to apply to \xoodyak with its small outer part. On the other hand, the constraint-input and constraint-output conditions imposed on the differential trails of the \xoodoo permutation make the exhaustive search for such high-probability differential trails in collision attacks extremely costly.

In this paper, we present critical observations regarding the \xoodoo round function, particularly focusing on its unique $\theta$ and $\chi$ operations. These properties can be leveraged to manually design specific differential trails for the \xoodoo permutation, referred to as \textit{loop} differential trails. To efficiently find practical collisions for up to 3 rounds, we develop a SAT model based on these \textit{loop} trails. Finally, we present the first practical collision on 2 rounds and a practical semi-free-start collision on 3 rounds of the \xoodyak hash mode. Besides, we improve Dong \etal's (CRYPTO 2024) collision attack on 3-round \xoodyak-\hash from $2^{125.23}$ to $2^{100.93}$ using several linearization strategies. Since we focus on the analysis of collisions during the message-absorbing phase of the hash modes, our results are applicable to both \xoodyak-\hash and \xoodyak-\xof.
Liheng Ji, Yilei Chen
ePrint Report
The hardness of the learning with errors (LWE) problem increases as its noise rate grows. However, all existing LWE-based public-key encryption schemes require the noise rate to be no greater than $o(1/(\sqrt{n}\log n))$. Breaking through this limitation presents an intriguing challenge.

In this paper, we construct public-key encryption (PKE) schemes based on the sub-exponential hardness of decisional LWE with polynomial modulus and noise rate ranging from $O(1/\sqrt{n})$ to $o(1/\log n)$. More concretely, we demonstrate the existence of CPA-secure PKE schemes as long as one of the following three assumptions holds: (i) $(n^{\omega(1)},n^{-\omega(1)})$-hardness of decisional LWE with noise rate $O(1/\sqrt{n})$; (ii) $(2^{\omega(n^{1/c_1})},2^{-\omega(n^{1/c_1})})$-hardness of decisional LWE with noise rate $O(1/\sqrt{n^{1-1/c_1}\log n})$ for some constant $c_1>1$; (iii) $(2^{\omega(n/\log^{c_2}n)},2^{-\omega(n/\log^{c_2}n)})$-hardness of decisional LWE with noise rate $O(1/\sqrt{\log^{c_2+1} n})$ for some constant $c_2>0$. We also construct injective trapdoor function (iTDF) families based on the same hardness assumptions as our PKE. To achieve this, we give a generalization of Babai's nearest plane algorithm, which finds a ``common closest lattice point'' for a set of vectors.
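To fix intuition for the noise-rate parameter, here is a toy LWE sample generator in which noise is drawn with standard deviation $\alpha q$ for noise rate $\alpha$; the parameters below are illustrative only and unrelated to the paper's concrete assumptions:

```python
import math
import random

def lwe_samples(n, q, alpha, num, secret=None):
    """Toy LWE sample generator: each sample is (a, b) with
    b = <a, s> + e mod q, where the noise e is Gaussian with
    standard deviation alpha * q -- i.e. noise rate alpha."""
    s = secret if secret is not None else [random.randrange(q) for _ in range(n)]
    samples = []
    for _ in range(num):
        a = [random.randrange(q) for _ in range(n)]
        e = round(random.gauss(0, alpha * q))
        b = (sum(x * y for x, y in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples

# The regime reached here, O(1/sqrt(n)), versus the o(1/(sqrt(n) log n))
# rate that earlier LWE-based PKE schemes required (illustrative n only):
n = 256
rate_here = 1 / math.sqrt(n)                   # O(1/sqrt(n))
rate_prior = 1 / (math.sqrt(n) * math.log(n))  # ~1/(sqrt(n) log n)
assert rate_here > rate_prior                  # larger noise = harder LWE
```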

In addition, we propose a PKE based on the $(2^{\omega(n^{1/2})},2^{-\omega(n^{1/2})})$-hardness of constant noise learning parity with noise (LPN) problem. Our construction is simpler than the construction of Yu and Zhang [CRYPTO 2016] while achieving the same security.
Zhuo Cai
ePrint Report
The security of blockchain systems relies on the honest majority assumption. However, strategic mining threatens this assumption, because selfish miners can gain more block rewards than honest miners by attacks such as withholding blocks. Due to its significant implication, blockchain mining games have been studied in PoW and PoS under various settings using different methods. Nonetheless, this paper argues that the practical limitation of random beacons has not been exploited in strategic mining in PoS blockchains. Current PoS blockchains use random beacons to randomly select validators for each slot. However, the randomness is usually fixed for multiple slots, due to the latency of distributed random beacon protocols. This indicates that validators actually know some information about the election result in the future, which contrasts with the Markov process models in previous analyses. Using this information, this paper presents a close-to-optimal mining strategy based on an optimal interval scheduling algorithm for each epoch. For proof-of-stake protocols with no propagation delay, we show that a validator with an arbitrary proportion of stake can strictly benefit from strategic mining and get significantly higher block rewards than with previous strategies.
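The abstract refers to an optimal interval scheduling algorithm; as background, the textbook earliest-finish-time greedy rule for selecting a maximum set of non-overlapping intervals looks as follows (a generic building block, not the paper's epoch-level strategy):

```python
def max_nonoverlapping(intervals):
    """Greedy earliest-finish-time selection: returns a maximum-size set
    of pairwise non-overlapping (start, end) intervals."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:       # compatible with everything chosen so far
            chosen.append((start, end))
            last_end = end
    return chosen

print(max_nonoverlapping([(1, 3), (2, 5), (4, 7), (6, 8)]))  # -> [(1, 3), (4, 7)]
```

The greedy rule is provably optimal for the unweighted problem: picking the interval that ends earliest never excludes a better choice.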
Jintong Yu, Yuxuan Wang, Shipei Qu, Yubo Zhao, Yipeng Shi, Pei Cao, Xiangjun Lu, Chi Zhang, Dawu Gu, Cheng Hong
ePrint Report
With the advancement of deep learning techniques, Deep Learning-based Non-profiled Side-Channel Analysis (DL-NSCA) can automatically learn and combine features, making it a promising method that can skip the manual and precise selection of Points of Interest (PoIs). Existing DL-NSCA methods assume that the attacker can identify a short leakage interval (usually less than 5000 points) containing PoIs from raw traces (more than 100,000 points) and then feed the leakage interval into the neural network to recover the key. However, in practice, the attacker often faces a black-box scenario with unknown underlying implementations, making locating the short interval from raw traces challenging, especially when masking countermeasures exist. To address this issue, we propose a lightweight end-to-end DL-NSCA model called convWIN-MCR, which consists of a performance-optimizing component, convWIN, and an accelerator component, MCR. It can efficiently process raw traces without the need to manually identify the short leakage interval. On the public dataset ASCADv1, while the state-of-the-art model Multi-Output Regression (MOR) requires 28,000 traces and 24 minutes to recover the key from the leakage interval with 1,400 feature points, our framework only requires 6,000 traces in 13 minutes to directly analyze raw traces with 250,000 feature points. To further validate the practical applicability of our framework, we successfully crack a commercial USIM card by analyzing its raw traces and recovering its 128-bit AES key.
Simone Colombo, Damian Vizár
ePrint Report
A growing body of work addresses the security of cryptographic systems in the presence of mass surveillance, a threat made concrete by Snowden’s revelations and the widespread use of spyware against journalists and activists. In this paper, we investigate the security of symmetric encryption faced with simultaneous algorithm substitution attacks (ASAs) and key exfiltration (KE). The security of symmetric encryption in the presence of ASAs or KE alone has been established, but no result deals with their coordinated deployment. Yet, this is a necessary step if we are to achieve actual security against mass surveillance. We formalize this setting, and prove that no existing scheme alone stands a chance against coordinated ASA and KE, by describing a realistic attack. We then describe a new kind of scheme, which makes use of externally supplied randomness. We formalize its security and give a construction which provably resists simultaneous ASAs and KE when paired with a verifiable source of randomness, with security bounds in the concrete-security spirit.
Jiping Yu, Kun Chen, Xiaoyu Fan, Yunyi Chen, Xiaowei Zhu, Wenguang Chen
ePrint Report
Encrypted matrix-vector multiplication is a fundamental component of a variety of applications that involve data privacy concerns. Current algorithms utilizing fully homomorphic encryption (FHE) generally use batching to enhance computational efficiency while neglecting the sparsity of the matrices, a characteristic that exists naturally in many practical situations. Alternatively, porting plaintext algorithms that address sparsity may fail to utilize batching and introduce additional privacy concerns.

We propose Lodia, an efficient outsourced SpMV algorithm for batched FHE schemes without sacrificing privacy. It only requires $\Theta((n+m)\log(n+m)/s)$ FHE operations, where $n$ is the number of rows/columns, $m$ is the number of non-zero elements of the matrix, and $s$ is the batch size of the FHE scheme. This is optimal for $m=\Omega(n)$ and $m=O(n^\rho)$ for some $\rho<2$ (i.e., $an \le m \le bn^\rho$ asymptotically), covering most practical cases. To our knowledge, no method has been published with better than $\Theta(n^2/s)$ FHE operations, suitable for any sparse matrix, and without privacy concerns.

Lodia utilizes a novel low-diagonal decomposition, which decomposes a sparse matrix into a series of special matrices named low-diagonal matrices. Based on a conventional method encoding the matrix in diagonal order, each low-diagonal matrix can be efficiently multiplied by a vector. This results in an efficient SpMV method suitable for any sparse matrix. Experiments show that Lodia practically achieves a speedup of up to $96\times$ compared to baselines that ignore matrix sparsity, and up to $3.6\times$ compared to implementations even with fewer security guarantees. This is the first SpMV solution on encrypted data that can process a substantial matrix with over 8 million rows/columns and 125 million non-zero elements.
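As background for the diagonal-order encoding mentioned above, here is a plaintext sketch of the classic diagonal method for matrix-vector products, which skips all-zero generalized diagonals; this illustrates the underlying idea only and is not Lodia's low-diagonal decomposition:

```python
def rotate(v, k):
    """Cyclic left rotation -- the plaintext analogue of a batched-FHE slot rotation."""
    return v[k:] + v[:k]

def diagonal_matvec(M, v):
    """Compute M*v as a sum of elementwise products of M's generalized
    diagonals with rotations of v (the diagonal-order encoding).  Only
    nonzero diagonals contribute, which is what exploiting sparsity via
    a decomposition into few-diagonal matrices relies on."""
    n = len(v)
    result = [0] * n
    for d in range(n):
        diag = [M[i][(i + d) % n] for i in range(n)]  # d-th generalized diagonal
        if any(diag):                                  # skip all-zero diagonals
            rot = rotate(v, d)
            result = [r + a * b for r, a, b in zip(result, diag, rot)]
    return result

print(diagonal_matvec([[1, 2], [3, 4]], [5, 6]))  # -> [17, 39]
```

Correctness follows because as $d$ ranges over $0,\dots,n-1$, the index $(i+d) \bmod n$ covers every column $j$, so entry $i$ of the result accumulates $\sum_j M_{ij} v_j$.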
Luke Beckwith, Andre Esser, Edoardo Persichetti, Paolo Santini, Floyd Zweydinger
ePrint Report
LESS is a signature scheme based on the code equivalence problem that has advanced to the second round of the NIST PQC standardization process. While promising, the scheme suffers from relatively large signatures and moderate to slow signing and verification times. Chou, Santini, and Persichetti recently introduced a variant of LESS relying on canonical forms to significantly reduce signature sizes. However, the overall performance impact of this approach remained largely unclear. In this work, we provide the first implementation of the new LESS variant and show that, in its original form, it performs poorly due to the overhead of computing canonical forms in a naïve way. We then introduce a series of algorithmic and implementation-level optimizations that reduce this overhead to about 10%, showing that the signature size reduction comes at minor cost. In addition, we present further improvements to the signature scheme as a whole, as well as a re-parameterization. The resulting scheme achieves speedups of 2.5× to 10× over the Round 1 NIST submission, while maintaining the reduced signature sizes.
Haotian Yin, Jie Zhang, Wanxin Li, Yuji Dong, Eng Gee Lim, Dominik Wojtczak
ePrint Report
Proxy re-encryption (PRE) is a cryptographic scheme enabling a delegator (user $i$) to delegate its decryption right to a valid delegatee (user $j$) through a proxy, who cannot extract any information about the message during the procedure. An important security notion is security against collusion between the proxy and the delegatee. In this case, the adversary has the secret key of the delegatee, $\mathsf{sk}_j$, and the re-encryption key, $\mathsf{rk}_{i\to j}$. Master secret security was first formalised by Ateniese et al. (NDSS'05) to capture the secrecy of $i$'s secret key during collusion. This notion was further formalised by Zhou et al. (ASIACRYPT'23) as the indistinguishability of re-encrypted ciphertexts against chosen-message attacks, called collusion safety, which implies master secret security. In this paper, we find that a previously proposed PRE scheme is not master secret secure as claimed, and that many other schemes are not master secret secure either. We then propose a generic construction that achieves collusion safety at the cost of doubling the key size of an IND-CPA-secure PRE, enjoying much better generality and efficiency than the existing secret-sharing-based technique.
Minka Mi Nguidjoi Thierry Emmanuel, Mani Onana Flavien Serge, Djotio Ndié Thomas, Atsa Etoundi Roger
ePrint Report
This article presents the architectural design of Zero Knowledge Non-Repudiation (ZK-NR), a layered cryptographic protocol enabling post-quantum secure, legally interpretable, and verifiably non-repudiable attestations. Built upon STARK-based zero-knowledge proofs, hybrid post-quantum signatures, and entropy-accumulating ledger anchoring, ZK-NR satisfies the structural properties of both the Q2CSI framework and the NIZK-E model. The protocol achieves semantic interpretability by structurally separating contextual proofs from bounded explanations, while maintaining cryptographic soundness under the Universal Composability framework. Formal UC proofs are deferred to Article A.2 in this series. This article constitutes the first entry in the ZK-NR series, focused specifically on the protocol's architectural and functional design. Subsequent articles will cover its mathematical foundations, implementation strategies, and operational validation using formal verification environments.

04 August 2025

Divesh Aggarwal, Pranjal Dutta, Saswata Mukherjee, Satyajeet Nagargoje, Maciej Obremski
ePrint Report
Randomness is a fundamental requirement in cryptographic systems, enabling secure encryption, commitments, and zero-knowledge proofs. However, real-world randomness sources often suffer from weaknesses that adversaries can exploit, leading to significant security vulnerabilities. While deterministic randomness extraction from a single min-entropy source is impossible, two-source extractors provide a robust solution by generating nearly uniform randomness from two independent weak sources. Moreover, cryptographic systems must also be resilient to leakage and tampering attacks, necessitating the development of non-malleable two-source extractors.

In this work, we construct a two-source non-malleable extractor in the Common Reference String (CRS) model, where a random low-degree polynomial is sampled once and made accessible to the independent random sources, the distinguisher, and the tamperer. Our extractor requires only linear min-entropy in both sources and does not rely on strong computational assumptions, in contrast to prior constructions requiring, e.g., sub-exponential hardness of the Decisional Diffie-Hellman (DDH) problem. Notably, our construction builds upon and relies on the recent breakthrough proof of the polynomial Freiman-Ruzsa conjecture. A connection of the Freiman-Ruzsa conjecture with two-source extractors was considered in prior work [ZBS11], [AGMR24], but their construction did not achieve non-malleability.

Our results advance the state of non-malleable cryptographic primitives, with applications in secure storage, leakage-resilient cryptography, and privacy amplification. By eliminating the need for strong computational hardness assumptions, our techniques provide a more foundational and widely applicable method for randomness extraction.

We also show that the requirements on the CRS for our application are so mild that the CRS can be sampled with two-party computation even when one of the parties is malicious (a setting in which establishing unbiased coins is impossible).
Sebastian Angel, Sofía Celi, Elizabeth Margolin, Pratyush Mishra, Martin Sander, Jess Woods
ePrint Report
We introduce Coral, a system for proving in zero-knowledge that a committed byte stream corresponds to a structured object in accordance with a context-free grammar. Once a prover establishes the validity of the parsed object with Coral, they can selectively prove facts about the object, such as fields in Web API responses or in JSON Web Tokens, to third parties or blockchains. Coral reduces the problem of correct parsing to a few simple checks over a left-child right-sibling tree and introduces a novel segmented memory abstraction that unifies and extends prior constructions for RAM in zkSNARKs. Our implementation of Coral runs on a standard laptop and non-interactively proves the parsing of real Web responses (JSON) and files (TOML and C) in seconds. The resulting proofs are small and cheap to verify.
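As background, a left-child right-sibling tree stores an arbitrary-arity parse tree with exactly two pointers per node; the following is a minimal generic sketch of the data structure, not Coral's circuit encoding:

```python
class LCRS:
    """Left-child right-sibling node: arbitrary-arity trees with
    exactly two pointers per node."""
    def __init__(self, label):
        self.label = label
        self.child = None    # leftmost child
        self.sibling = None  # next sibling to the right

def from_nested(label, children=()):
    """Build an LCRS tree from (label, children) pairs."""
    node, prev = LCRS(label), None
    for sym, sub in children:
        n = from_nested(sym, sub)
        if prev is None:
            node.child = n       # first child hangs off the child pointer
        else:
            prev.sibling = n     # later children chain via sibling pointers
        prev = n
    return node

def preorder(node):
    """Yield labels in preorder by following the two pointers."""
    while node is not None:
        yield node.label
        yield from preorder(node.child)
        node = node.sibling

t = from_nested("obj", [("key", []), ("val", [("num", [])])])
print(list(preorder(t)))  # -> ['obj', 'key', 'val', 'num']
```

The fixed fan-out (two pointers per node) is what makes this shape convenient inside arithmetic circuits, where variable-arity children are awkward to address.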
Jan Bormet, Arka Rai Choudhuri, Sebastian Faust, Sanjam Garg, Hussien Othman, Guru-Vamsi Policharla, Ziyan Qu, Mingyuan Wang
ePrint Report
Threshold encrypted mempools protect the privacy of transactions up until the point their inclusion on chain is confirmed. They are a promising approach to protection against front-running attacks on decentralized blockchains.

Recent works have introduced two key properties that an encryption scheme must satisfy in order to scale to large decentralized blockchains such as Ethereum. Silent Setup [Garg-Kolonelos-Policharla-Wang, CRYPTO'24] demands that a threshold encryption scheme not require any interaction during the setup phase and rely only on the existence of a Public Key Infrastructure. Batched Decryption [Choudhuri-Garg-Piet-Policharla, USENIX'24] demands that an entire block containing $B$ encrypted transactions can be decrypted using communication that is independent of (or sublinear in) $B$, without compromising the privacy of transactions that have not yet been confirmed.

While existing constructions achieve either Silent Setup or Batched Decryption independently, a truly decentralized and scalable encrypted mempool requires both properties to be satisfied simultaneously. In this work, we present the first ``Batched Threshold Encryption scheme with Silent Setup'' built using bilinear pairings. We provide formal definitions for the primitive, and prove security in the Generic Group Model. We provide several optimizations and implement our scheme to evaluate its performance. Our experiments demonstrate its efficiency for deployment in blockchain systems.
Nick Aquina, Simon Rommel, Idelfonso Tafur Monroy
ePrint Report
Cascader has been introduced as a new key exchange protocol based on an iterative multiplicative recurrence. This short note presents a practical shared-key recovery attack on the Cascader key exchange protocol. It also shows that Cascader as a hash function is not collision resistant, presents a new upper bound on the output space of Cascader, and shows that a Cascader-based KDF is not secure against an Adaptive Chosen Public Inputs Attack (CPM).
Joshua Limbrey, Andrew Mendelsohn
ePrint Report
In Submission 2025/1391 to the IACR Cryptology ePrint Archive, the Inverse Discrete Logarithm Problem (IDLP) is introduced and used to build a key exchange protocol and a KEM. The author claims both classical and post-quantum security for IDLP and therefore for the proposed protocols. It is the purpose of this note to give an efficient quantum algorithm for IDLP, based on the algorithm of Shor. We give an implementation of our algorithm, replacing the use of Shor's algorithm with an oracle.