International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

09 November 2025

Benjamin Dowling, Britta Hale, Xisen Tian, Bhagya Wimalasiri
ePrint Report
Among standardization efforts for space and interplanetary network security, the Internet Engineering Task Force (IETF) is driving work on space network security, accounting for the unique properties of space environments that make space communication challenging. These include long, variable-length delays, packet loss, and intermittent end-to-end connectivity. Within these efforts, there is a focus on using IP-based protocols for security, and in particular the QUIC protocol. This is unsurprising given QUIC's growing popularity and its optimizations for reducing latency. However, QUIC uses the Transport Layer Security (TLS) key exchange handshake protocol, which was originally designed for 'connect and forget' style Internet connections at scale. It is also session-based: protocol participants must reestablish the session for each reconnection, a costly maneuver in the space setting. Furthermore, TLS by default does not achieve strong post-compromise security properties within sessions, exhibiting a risk under long-lived connections, and the need for synchronous handshakes to counteract this is at odds with the intermittent end-to-end connectivity of the space environment. We address both drawbacks of QUIC by introducing QUIC-MLS: a variant of QUIC which replaces the session-based, synchronous TLS handshake with the standardized continuous key agreement protocol, Messaging Layer Security (MLS), which achieves asynchronous forward secrecy and post-compromise security. In addition to the design itself, we implement it, provide benchmarks, and analyze our new construction in a formal cryptographic model.
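The continuous-key-agreement idea that MLS brings (and that a session-based handshake lacks) can be illustrated with a minimal hash-ratchet sketch. This is a hedged toy, not MLS itself: the derivation labels and single-party view below are illustrative assumptions.

```python
import hashlib

def ratchet(secret: bytes) -> tuple:
    """Derive the next epoch secret and a message key from the current secret.

    Deleting the old secret after each step yields forward secrecy: a later
    compromise does not reveal keys derived in earlier epochs. Advancing the
    ratchet needs no round trip, so it tolerates intermittent connectivity.
    """
    next_secret = hashlib.sha256(b"ratchet" + secret).digest()
    message_key = hashlib.sha256(b"msgkey" + secret).digest()
    return next_secret, message_key

# Both endpoints start from a shared secret and advance in lockstep,
# without a synchronous handshake per epoch.
secret = hashlib.sha256(b"initial shared secret").digest()
keys = []
for _ in range(3):
    secret, k = ratchet(secret)
    keys.append(k)
assert len(set(keys)) == 3  # each epoch gets a fresh, independent-looking key
```

Post-compromise security additionally requires injecting fresh entropy into the ratchet, which MLS provides through its update mechanism; the sketch above shows only the forward direction.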
Sulaiman Alhussaini, Sergeı̆ Sergeev
ePrint Report
We present a cryptanalysis of a multi-party key exchange protocol over a modified supertropical semiring, as proposed in a recent work of R. Ponmaheshkumar, J. Ramalingam, and R. Perumal. Building on the established methods for solving linear systems $A \otimes x=b$ over the tropical semiring, as well as on our recent work on solving such systems over layered semirings such as the symmetrized and supertropical semirings, we develop a method to compute a solution of $A \otimes x=b$ over the above mentioned modified supertropical semiring. This method enables the attacker to recover the shared secret key by solving the one-sided linear system derived from the public messages of the protocol. Our findings show that this modified supertropical platform does not provide the intended security and motivate further exploration of secure semiring-based constructions.
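For intuition, over the plain max-plus semiring (a simpler platform than the modified supertropical semiring attacked here), a one-sided system $A \otimes x = b$ is solved via the principal solution. A sketch of the kind of method the attack builds on:

```python
def tropical_matvec(A, x):
    # max-plus product: (A ⊗ x)_i = max_j (A[i][j] + x[j])
    return [max(a + xi for a, xi in zip(row, x)) for row in A]

def principal_solution(A, b):
    # x̄_j = min_i (b_i - A[i][j]); the greatest vector with A ⊗ x̄ <= b.
    # The system is solvable iff A ⊗ x̄ equals b exactly.
    n = len(A[0])
    return [min(b[i] - A[i][j] for i in range(len(A))) for j in range(n)]

A = [[0, 3], [2, 1]]
x_secret = [1, 2]
b = tropical_matvec(A, x_secret)        # the public message: [5, 3]
x_bar = principal_solution(A, b)
assert tropical_matvec(A, x_bar) == b   # the attacker recovers a valid solution
```

Any valid solution of the one-sided system suffices to reconstruct the shared key in such protocols, which is why solvability of $A \otimes x = b$ over the platform semiring is the crux of the cryptanalysis.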
Irene Di Muzio, Martin Feussner, Igor Semaev
ePrint Report
We propose a new multivariate digital signature scheme whose central mapping arises from the product of two one-variate polynomials over a finite field $\mathbb{F}_q$. The resulting quadratic transformation is efficiently invertible through polynomial factorization, defining the trapdoor mechanism. The public key comprises $m$ bilinear forms in $2n$ variables, obtained by masking the central map with secret linear transformations. A reference implementation targeting NIST security level 1 achieves a 24-byte signature and a 23-kilobyte public key. This signature size is among the smallest ever proposed for level 1 security and the scheme achieves verification efficiency comparable to the fastest existing designs. Security relies on the hardness of solving certain bilinear systems, for which it seems no efficient classical or quantum algorithms are known.
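Verification in a scheme of this shape reduces to evaluating the public bilinear forms. A toy sketch with hypothetical tiny parameters (the matrices, field size, and dimensions below are illustrative assumptions, not the scheme's):

```python
q = 31  # a toy field size; the actual scheme targets NIST level 1 parameters

def eval_bilinear(B, x, y, q):
    # x^T B y over F_q for a single public bilinear form B (an n x n matrix)
    return sum(x[i] * B[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y))) % q

def verify(public_forms, x, y, digest, q):
    # accept iff every public form evaluates to the matching digest entry
    return all(eval_bilinear(B, x, y, q) == h for B, h in zip(public_forms, digest))

B0 = [[1, 2], [0, 5]]
B1 = [[3, 0], [4, 1]]
x, y = [2, 3], [1, 4]  # a "signature": one coordinate block per variable set
digest = [eval_bilinear(B0, x, y, q), eval_bilinear(B1, x, y, q)]
assert verify([B0, B1], x, y, digest, q)
```

Verification costs only field multiplications and additions, which is consistent with the abstract's claim of verification efficiency comparable to the fastest designs.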
Kai-Chun Ning, Lars Ran, Simona Samardjiska
ePrint Report
Algebraic cryptanalysis is an important and versatile tool in the evaluation of the security of various cryptosystems, especially in multivariate cryptography. Its effectiveness can be determined by analyzing the Polynomial System Solving problem (PoSSo). However, the polynomial systems arising from cryptanalytic algebraic models often exhibit structure that is crucial for the solving complexity yet not well understood.

In this paper we turn our focus to multi-homogeneous systems, which very often arise in algebraic models. Despite their overwhelming presence, both the theory and the practical solving methods are incomplete. Our work fills this gap. We develop a theory for multi-homogeneous systems that extends the one for regular and semi-regular sequences. We define "border-regular" systems and provide exact statements about the rank of a specific submatrix of the Macaulay matrix that we associate to these systems. We then use our theoretical results to define Multi-homogeneous XL, an algorithm that extends XL to the multi-homogeneous case. We further provide a fully optimized implementation of Multi-homogeneous XL that uses sparse linear algebra and can handle a vast parameter range of multi-homogeneous systems. To the best of our knowledge this is the first implementation of its kind, and we make it publicly available.
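The XL idea being extended can be shown in miniature: multiply each polynomial by all low-degree monomials, collect the coefficient rows of a Macaulay matrix, and then treat each monomial as a fresh variable for linear algebra. A toy GF(2) sketch of plain XL (not the multi-homogeneous variant developed here):

```python
from itertools import combinations

# Polynomials over GF(2) with field equations x_i^2 = x_i: a polynomial is a
# set of multilinear monomials, and a monomial is a frozenset of variable indices.

def mul(poly, mono):
    # multiply a polynomial by a monomial; over GF(2) repeated monomials cancel
    out = set()
    for m in poly:
        out ^= {m | mono}
    return out

def macaulay(polys, nvars, D):
    # rows are the multiples m * p with deg(m * p) <= D, written over the
    # monomial basis; XL then runs (sparse) linear algebra on this matrix
    monos = [frozenset(c) for d in range(D + 1) for c in combinations(range(nvars), d)]
    rows = []
    for p in polys:
        dp = max(len(m) for m in p)
        for m in monos:
            if len(m) + dp <= D:
                prod = mul(p, m)
                rows.append([1 if t in prod else 0 for t in monos])
    return rows, monos

# p1 = x0*x1 + x0 and p2 = x0*x1 + x1 + 1 share the unique root (x0, x1) = (0, 1)
p1 = {frozenset({0, 1}), frozenset({0})}
p2 = {frozenset({0, 1}), frozenset({1}), frozenset()}
rows, monos = macaulay([p1, p2], nvars=2, D=3)
assert len(rows) == 6 and len(monos) == 4
```

Every row, viewed as a polynomial, vanishes on the common root; the rank behavior of this matrix as $D$ grows is exactly the kind of quantity the border-regularity theory pins down.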
Julien Devevey, Morgane Guerreau, Maxime Roméas
ePrint Report
The transition to post-quantum cryptography involves balancing the long-term threat of quantum adversaries with the need for post-quantum algorithms and their implementations to gain maturity safely. Hybridization, i.e. combining classical and post-quantum schemes, offers a practical and safe solution.

We introduce a new security notion for hybrid signatures, Hybrid EU-CMA, which captures cross-protocol, separability, and recombination attacks that may occur during the post-quantum transition, while encompassing standard unforgeability guarantees. Using this framework, we adapt the Fiat-Shamir (with or without aborts) transform to build hybrid signature schemes that satisfy our notion from two identification schemes. Compared to simple concatenation of signatures, our construction (i) has no separability issues, (ii) reduces signature size, (iii) runs faster, and (iv) remains easily implementable.

As a concrete application, we propose Silithium, a hybrid signature combining the identification schemes underlying EC-Schnorr and ML-DSA. Implementing Silithium requires only an ML-DSA implementation supporting the ``external $\mu$'' option during verification and an elliptic curve library. In the security analysis, we show that our scheme can be safely used along with ML-DSA and either EC-Schnorr or ECDSA. A proof-of-concept OpenSSL implementation demonstrates its practicality, simplicity, and performance.
Gyeongwon Cha, Dongjin Park, Yejin Choi, Eunji Park, Joon-Woo Lee
ePrint Report
Emotion recognition has been an actively researched topic in the field of HCI. However, multimodal datasets used for emotion recognition often contain sensitive personal information, such as physiological signals, facial images, and behavioral patterns, raising significant privacy concerns. Privacy becomes especially crucial in workplace settings because of risks such as surveillance and unauthorized data usage caused by the misuse of collected datasets. To address this issue, we propose an Encrypted Emotion Recognition (EER) framework that performs real-time inference on encrypted data using the CKKS homomorphic encryption (HE) scheme. We evaluated the proposed framework using the publicly available WESAD and Hide-and-Seek datasets, demonstrating successful stress/emotion recognition under encryption. Encrypted inference achieved accuracy similar to plaintext inference: 0.966 (plaintext) vs. 0.967 (ciphertext) on the WESAD dataset, and 0.868 for both on the Hide-and-Seek dataset. Encrypted inference was performed on a GPU, with average inference times of 333 milliseconds for the general model and 455 milliseconds for the personalized model. Furthermore, we validated the feasibility of semi-supervised learning and model personalization in encrypted environments, enhancing the framework's real-world applicability. Our findings suggest that the EER framework provides a scalable, privacy-preserving solution for emotion recognition in domains such as healthcare and workplace settings, where securing sensitive data is of critical importance.
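One standard ingredient of CKKS-based inference pipelines (a generic technique, not necessarily this paper's exact architecture) is replacing non-polynomial activations by low-degree polynomial approximations, since CKKS natively evaluates only additions and multiplications. A least-squares fit of the sigmoid by an odd cubic on $[-4, 4]$:

```python
import math

# sigma(x) - 0.5 is odd, so fit it by a*x + b*x^3 via the 2x2 normal equations.
xs = [-4 + 8 * k / 399 for k in range(400)]
ys = [1 / (1 + math.exp(-x)) - 0.5 for x in xs]

s2 = sum(x ** 2 for x in xs); s4 = sum(x ** 4 for x in xs); s6 = sum(x ** 6 for x in xs)
t1 = sum(x * y for x, y in zip(xs, ys)); t3 = sum(x ** 3 * y for x, y in zip(xs, ys))
det = s2 * s6 - s4 * s4
a = (t1 * s6 - t3 * s4) / det
b = (s2 * t3 - s4 * t1) / det

def sigmoid_poly(x):
    # the HE-friendly replacement: only additions and multiplications are used
    return 0.5 + a * x + b * x ** 3

max_err = max(abs(sigmoid_poly(x) - (y + 0.5)) for x, y in zip(xs, ys))
assert max_err < 0.1  # adequate on this range for many classification tasks
```

The trade-off is typical of encrypted inference: a slightly perturbed activation in exchange for a circuit the HE scheme can evaluate, which is consistent with the near-identical plaintext/ciphertext accuracies reported above.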
Seonhong Min, Guillaume Hanrot, Jai Hyun Park, Alain Passelègue, Damien Stehlé
ePrint Report
Threshold fully homomorphic encryption provides efficient multi-party computation with low round-complexity. Among fully homomorphic encryption schemes, CKKS (Cheon-Kim-Kim-Song) enables high-throughput computations on both approximate and exact data. As most interesting applications involve deep computations, they require bootstrapping, the most efficient variants of which rely on sparse ternary secret keys. Unfortunately, so far, key generation protocols for threshold-CKKS either assume a trusted dealer, or lead to dense and non-ternary secret keys that severely damage computational throughput. In the latter case, the impact is so large that one often considers off-loading bootstrapping to an interactive protocol [Mouchet et al., PETS'21].

We introduce a novel Distributed Key Generation (DKG) protocol for threshold-CKKS. At a high level, it consists in running the existing distributed key generation algorithm from Mouchet et al. resulting in large secret keys, and using it to homomorphically evaluate the sparse-secret key generation algorithm. At the end, the parties obtain additive shares of a sparse secret key. The main technical challenge is to obtain an algorithm for sampling sparse ternary vectors of prescribed Hamming weight that can be CKKS-evaluated in an efficient manner. In the process, we design a new sampler of one-hot vectors that outperforms the one from [Boneh et al., AFT'20]. We also design a rejection-sampling algorithm to map several one-hot vectors into a vector of prescribed Hamming weight. The whole process can be performed with only two CKKS bootstraps, even for a significant number of users.
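In the clear, the one-hot-plus-rejection structure described above can be sketched in a few lines (illustrative only; the paper's contribution is evaluating such a procedure homomorphically under CKKS, where data-dependent branching like the loop below is unavailable):

```python
import secrets

def sample_sparse_ternary(n: int, h: int) -> list:
    """Sample a ternary vector of length n with exactly h nonzero entries.

    Positions are chosen via one-hot draws, rejecting any position already
    used, and each selected position receives a random sign.
    """
    v = [0] * n
    placed = 0
    while placed < h:
        pos = secrets.randbelow(n)       # a one-hot choice of position
        if v[pos] != 0:
            continue                      # rejection: position already taken
        v[pos] = 1 if secrets.randbelow(2) else -1
        placed += 1
    return v

key = sample_sparse_ternary(n=64, h=16)
assert sum(1 for c in key if c != 0) == 16
assert all(c in (-1, 0, 1) for c in key)
```

Under CKKS, each party would instead hold an additive share of such a vector, and the rejection logic must be expressed arithmetically, which is where the two-bootstrap budget mentioned above comes in.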

We present several variants of the DKG protocol, with 2 to 4 communication rounds, as well as an extension to key generation delegation. We implemented the 4-round protocol; its computational components run in 2.13s on GPU (RTX4090).
Omri Shmueli, Mark Zhandry
ePrint Report
Quantum cryptography is a rapidly-developing area which leverages quantum information to accomplish classically-impossible tasks. In many of these protocols, quantum states are used as long-term cryptographic keys. Typically, this is to ensure the keys cannot be copied by an adversary, owing to the quantum no-cloning theorem. Unfortunately, due to quantum states' tendency to decohere, persistent quantum memory will likely be one of the most challenging resources for quantum computers. As such, it will be important to minimize persistent memory in quantum protocols.

In this work, we consider the case of one-shot signatures (OSS), and more general quantum signing tokens. These are important unclonable primitives, where quantum signing keys allow for signing a single message but not two. Naturally, these quantum signing keys would require storage in long-term quantum memory. Very recently, the first OSS was constructed in a classical oracle model and also in the standard model, but we observe that the quantum memory required for these protocols is quite large. In this work, we significantly decrease the quantum secret key size, in some cases achieving asymptotically optimal size. To do so, we develop novel techniques for proving the security of cryptosystems using coset states, which are one of the main tools used in unclonable cryptography.
Eli Ben-Sasson, Dan Carmon, Ulrich Haböck, Swastik Kopparty, Shubhangi Saraf
ePrint Report
This paper is about the proximity gaps phenomenon for Reed-Solomon codes. Very roughly, the proximity gaps phenomenon for a code $\mathcal C \subseteq \mathbb F_q^n$ says that for two vectors $f,g \in \mathbb F_q^n$, if sufficiently many linear combinations $f + z \cdot g$ (with $z \in \mathbb F_q$) are close to $\mathcal C$ in Hamming distance, then so are both $f$ and $g$, up to a proximity loss of $\varepsilon^*$.
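The quantities in this definition can be computed by brute force for a tiny Reed-Solomon code. A toy sketch (parameters chosen so all codewords can be enumerated, far from the regimes studied here):

```python
from itertools import product

p, n, k = 7, 6, 2          # tiny RS code: evaluations of degree-<2 polys over F_7
points = list(range(n))    # distinct evaluation points

def encode(coeffs):
    return [sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p for x in points]

codewords = [encode(c) for c in product(range(p), repeat=k)]

def dist_to_code(v):
    # Hamming distance to the nearest codeword (brute force; fine at this size)
    return min(sum(a != b for a, b in zip(v, c)) for c in codewords)

f = encode([1, 2]); f[0] = (f[0] + 1) % p   # a codeword with one corrupted symbol
g = encode([3, 5])                           # an uncorrupted codeword
for z in range(1, p):
    fz = [(a + z * b) % p for a, b in zip(f, g)]
    assert dist_to_code(fz) <= 1             # every combination stays 1-close
```

Here both $f$ and $g$ are close to the code, so every combination is too; the proximity gap phenomenon is the contrapositive direction, with the exceptional $z$'s counting how often that implication can fail.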

Determining the optimal quantitative form of proximity gaps for Reed--Solomon codes has recently become of great interest because of applications to interactive proofs and cryptography, and in particular, to scalable transparent arguments of knowledge (STARKs) and other modern hash based argument systems used on blockchains today.

Our main results show improved positive and negative results for proximity gaps for Reed-Solomon codes of constant relative distance $\delta \in (0,1)$.

1. For proximity gaps up to the unique decoding radius $\delta/2$, we show that arbitrarily small proximity loss $\varepsilon^* > 0$ can be achieved with only $O_{\varepsilon^*}(1)$ exceptional $z$'s (improving the previous bound of $O(n)$ exceptions).

2. For proximity gaps up to the Johnson radius $J(\delta)$, we show that proximity loss $\varepsilon^* = 0$ can be achieved with only $O(n)$ exceptional $z$'s (improving the previous bound of $O(n^2)$ exceptions). This significantly reduces the soundness error in the aforementioned argument systems.

3. In the other direction, we show that for some Reed--Solomon codes and some $\delta$, proximity gaps at or beyond the Johnson radius $J(\delta)$ with arbitrarily small proximity loss $\varepsilon^*$ need to have at least $\Omega(n^{1.99})$ exceptional $z$'s.

4. More generally, for all constants $\tau$, we show that for some Reed-Solomon codes and some $\delta = \delta(\tau)$, proximity gaps at radius $\delta - \Omega_{\tau}(1)$ with arbitrarily small proximity loss $\varepsilon^*$ need to have $n^{\tau}$ exceptional $z$'s.

5. Finally, for all Reed-Solomon codes, we show that improved proximity gaps imply improved bounds for their list-decodability. This shows that improved bounds on the list-decoding radius of Reed-Solomon codes is a prerequisite for any new proximity gaps results beyond the Johnson radius.
Rohan Goyal, Venkatesan Guruswami
ePrint Report
Reed-Solomon (RS) codes were recently shown to exhibit an intriguing $\textit{proximity gap}$ phenomenon. Specifically, given a collection of strings with some algebraic structure (such as belonging to a line or affine space), either all of them are $\delta$-close to RS codewords, or most of them are $\delta$-far from the code. Here $\delta$ is the proximity parameter which can be taken to be the Johnson radius $1-\sqrt{R}$ of the RS code ($R$ being the code rate), matching its best known list-decodability. Proximity gaps play a crucial role in the soundness analysis of Interactive Oracle Proof (IOP) protocols used in Succinct Non-Interactive Arguments of Knowledge (SNARKs) and the resulting proof sizes.

Proving proximity gaps beyond the Johnson radius, and in particular approaching $1-R$ (which is best possible), has been posed multiple times as a challenge with significant practical consequences to the efficiency of SNARKs. Here we prove that variants of RS codes, such as folded RS codes and univariate multiplicity codes, indeed have proximity gaps for $\delta$ approaching $1-R$. The result applies more generally to codes with a certain subspace-design property. Our proof hinges on a clean property we abstract called line (or more generally curve) decodability, which we establish leveraging and adapting techniques from recent progress on list-decoding such codes. Importantly, our analysis avoids the heavy algebraic machinery used in previous works, and requires a field size only linear in the block length.

The behavior of subspace-design codes w.r.t. ``local properties'' has recently been shown to be similar to random linear codes and random RS codes (where the evaluation points are chosen at random from the underlying field). We identify a local property that implies curve decodability, and thus also proximity gaps, and thereby conclude that random linear and random RS codes also exhibit proximity gaps up to the $1-R$ bound. Our results also establish the stronger (mutual) correlated agreement property which implies proximity gaps. Additionally, we also show a $\textit{slacked}$ proximity gap theorem for constant-sized fields using AEL-based constructions and local property techniques.
Shibam Ghosh, Anup Kumar Kundu, Dhiman Saha
ePrint Report
Fault attacks have historically been one of the most popular gray-box attacks in the cryptographic literature. In such attacks, an attacker tries to inject perturbations while executing a cipher and exploits the faulty outputs to recover the key. While the efficiency of such an attack is measured by the number of faults required and the size of the reduced key-space, another pivotal temporal parameter is the point of fault injection, which has not received considerable attention. In this work, we plug this gap for a special class of ciphers, namely DEFAULT and BAKSHEESH, which boast an SBox with one or more linear structures (LS). We make new observations which lead to the improvement of Division Property based Fault Attacks (DIFA) introduced by Kundu et al. at ACNS 2023. We show that these linear structures are particularly responsible for the higher fault penetration in these ciphers. We improve the state-of-the-art for BAKSHEESH from 30 to 28 rounds and for DEFAULT from 75 to 72 using the single-round random nibble fault model. While for BAKSHEESH we are able to uniquely recover the key, for DEFAULT we are able to reduce the key-space to $2^{64}$. This leads to the best fault attacks on BAKSHEESH and DEFAULT in terms of number of rounds penetrated for fault injection. Our work reiterates the fact that a property induced (in this case, LS in the SBox) for some particular cryptographic purpose (like fault attack resistance) may manifest orthogonally for another (increasing fault penetration), and thus adds value to block cipher design space exploration and fault attack countermeasure development.
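A linear structure of an SBox is a nonzero input difference whose output difference is the same for every input; it can be detected exhaustively. The 4-bit SBox below is a contrived toy built to contain one (it is not the DEFAULT or BAKSHEESH SBox):

```python
def linear_structures(sbox):
    """Input differences a for which S(x ^ a) ^ S(x) is constant over all x."""
    n = len(sbox)
    found = []
    for a in range(1, n):
        diffs = {sbox[x ^ a] ^ sbox[x] for x in range(n)}
        if len(diffs) == 1:
            found.append((a, diffs.pop()))
    return found

# Toy 4-bit SBox constructed so that S(x ^ 8) ^ S(x) == 15 for all x:
base = [0, 5, 3, 6, 1, 4, 2, 7]
sbox = base + [b ^ 15 for b in base]
assert (8, 15) in linear_structures(sbox)
```

For a fault attacker, such a structure means a known input difference propagates with a fully predictable output difference through the SBox, which is the mechanism behind the deeper fault penetration discussed above.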
Abdoul Ahad Fall
ePrint Report
The rapid advancements in quantum computing pose a significant threat to widely used cryptographic standards such as RSA and Elliptic-Curve Diffie-Hellman (ECDH), which are fundamental to securing digital communications and protecting sensitive data worldwide. The increasing feasibility of "harvest now, decrypt later" strategies, where adversaries collect encrypted data today with the intent of decrypting it once quantum computing reaches sufficient maturity, underscores the urgency of transitioning toward quantum-resistant cryptographic solutions. A pragmatic approach to maintaining security during this transitional period is the adoption of hybrid cryptographic techniques, which integrate traditional cryptographic mechanisms with post-quantum cryptography (PQC) and Quantum Key Distribution (QKD).

This paper presents a comprehensive review of hybrid cryptographic approaches, focusing on their incorporation into widely adopted security protocols such as TLS 1.3 and QUIC. We examine the key challenges associated with deploying hybrid cryptography, including performance trade-offs, security guarantees, and compatibility with existing infrastructure. Beyond protocol-level implementations, we explore the initiatives undertaken by global standardization bodies and leading technology firms to facilitate a seamless transition toward a quantum-secure future. By analyzing current strategies and insights from early adopters, we identify the critical factors that organizations must consider to effectively implement hybrid cryptographic solutions, ensuring resilience against emerging cryptographic threats.
Sarah Bordage, Alessandro Chiesa, Ziyi Guan, Ignacio Manzur
ePrint Report
A generator is a function that maps a random seed to a list of coefficients. We study generators that preserve distance to a linear code: the linear combination of any list of vectors using coefficients sampled by the generator has distance to the code no smaller than that of the original vectors, except for a small error. Distance preservation plays a central role in modern probabilistic proofs, and has been formalized in several ways. We study \emph{mutual correlated agreement}, the strongest known form of distance preservation.

We initiate the systematic study of mutual correlated agreement, aiming to characterize the class of generators with this property. Towards this, we study polynomial generators, a rich class that includes all examples of generators considered in the distance preservation literature. Our main result is that \emph{all polynomial generators guarantee mutual correlated agreement for every linear code}. This improves on prior work both in generality (the class of generators covered) and in parameters (the error bounds).

We additionally provide new results for the case where the linear code is a Reed--Solomon code, which is of particular interest in applications. We prove that all polynomial generators satisfy mutual correlated agreement for Reed–Solomon codes up to the Johnson bound. In particular, we improve upon the state-of-the-art by Ben-Sasson, Carmon, Ishai, Kopparty, and Saraf (FOCS 2020) and resolve a question posed by Arnon, Chiesa, Fenzi, and Yogev (Eurocrypt 2025).

Along the way, we develop a flexible and general toolbox for mutual correlated agreement, and we are the first to establish distance preservation for generators that lie beyond polynomial generators.
Sumesh Manjunath Ramesh, Hoda Alkhzaimi
ePrint Report
In our increasingly interconnected world, the security of embedded devices plays a critical role in protecting sensitive information. Evaluating this security requires a meticulous examination of how cryptographic processes are implemented within the hardware of these devices. One widely employed technique for this purpose is power side-channel analysis. At the heart of Correlation Power Analysis lies the power consumption model, which simulates power consumption while executing cryptographic operations on hardware. The power consumption predicted under a hypothetical secret key is correlated with the actual power consumption during the execution of a cryptographic operation under the unknown secret key. This approach enables the detection of potential vulnerabilities in cryptographic implementations. In this research, we introduce a novel power leakage model called the Technology Library Power Leakage Model, rooted in semiconductor technology. By aligning our model closely with a specific semiconductor technology, we achieve a more realistic representation of power consumption. Our study provides compelling evidence for the effectiveness of this leakage model: we successfully extracted secret keys from an AES implementation using Correlation Power Analysis (CPA). Importantly, our leakage model can be adapted to accommodate various semiconductor technologies, such as $55nm$ or $14nm$. In this work, we also compared the newly proposed leakage model with the Hamming Distance model and empirically evaluated that both models perform similarly when linear correlation techniques are used in CPA.
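The CPA workflow against which leakage models are compared can be sketched with simulated traces and the classic Hamming-weight model. This is a baseline sketch under toy assumptions (a 4-bit PRESENT SBox, Gaussian noise, leakage proportional to Hamming weight); the paper's model is instead derived from the technology library.

```python
import random

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT SBox

def hw(x):  # Hamming weight: the classic leakage model
    return bin(x).count("1")

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy)

random.seed(1)
true_key = 0x9
plaintexts = [random.randrange(16) for _ in range(1000)]
# simulated traces: leakage = HW(SBox(p ^ k)) plus Gaussian noise
traces = [hw(SBOX[p ^ true_key]) + random.gauss(0, 0.5) for p in plaintexts]

# CPA: the key guess whose predicted leakage correlates best with the traces
scores = {k: abs(pearson([hw(SBOX[p ^ k]) for p in plaintexts], traces))
          for k in range(16)}
assert max(scores, key=scores.get) == true_key
```

Swapping `hw` for a different prediction function is all it takes to plug in an alternative leakage model, which is exactly the comparison the abstract describes.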
Xinyu Mao, Jiapeng Zhang
ePrint Report
A $K$-multi-collision-resistant hash function ($K$-MCRH) is a shrinking keyed function for which it is computationally infeasible to find $K$ distinct inputs that map to the same output under a randomly chosen hash key; the case $K = 2$ coincides with the standard definition of collision-resistant hash function (CRH). A natural question is whether $K$-MCRH implies CRH for $K \geq 3$, as noted by Komargodski, Naor, and Yogev (EUROCRYPT 2018) and also by Jain, Li, Robere, and Xun (FOCS 2024).
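A heavily truncated hash makes multi-collisions easy to exhibit, which gives a concrete feel for the object (a toy: real MCRH output lengths make this brute-force search infeasible):

```python
import hashlib
from collections import defaultdict

def toy_hash(msg: bytes, key: bytes) -> int:
    # a keyed hash truncated to one byte, so collisions are abundant
    return hashlib.sha256(key + msg).digest()[0]

key = b"public hash key"
buckets = defaultdict(list)
i = 0
while all(len(v) < 3 for v in buckets.values()):
    m = i.to_bytes(4, "big")
    buckets[toy_hash(m, key)].append(m)
    i += 1

k3 = next(v for v in buckets.values() if len(v) == 3)  # a 3-way collision
assert len(set(k3)) == 3
assert len({toy_hash(m, key) for m in k3}) == 1
```

Finding a $K$-way collision requires roughly $2^{m(K-1)/K}$ queries for an $m$-bit hash, versus $2^{m/2}$ for an ordinary collision, which is why $K$-MCRH for larger $K$ is a strictly weaker-looking assumption and why the separations above are meaningful.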

We resolve this question for all constant $K$, showing that there is no black-box construction of $K$-MCRH from $(K + 1)$-MCRH for all constant $K \geq 2$. We also show that there is no black-box construction of distributional CRH (which is another relaxation of CRH) from 3-MCRH, answering an open question posed by Komargodski and Yogev (CRYPTO 2018) and also by Berman, Degwekar, Rothblum, and Vasudevan (EUROCRYPT 2018). Besides applications in cryptography, our separation also implies black-box separations between TFNP search problems, which are related to problems in proof complexity and other areas.

06 November 2025

Amit Agarwal, Kushal Babel, Sourav Das, Babak Poorebrahim Gilkalaye
ePrint Report
We introduce time-lock encrypted storage (tTLES), a storage service provided by blockchains. In tTLES, clients store encrypted values towards a future decryption time $\tau_{tgt}$ (measured in block height). The security of tTLES requires that a value is decrypted only if (i) the encrypted value is included in the blockchain, and (ii) the time $\tau_{tgt}$ has passed. This is crucially different from existing schemes, which enforce only one of these conditions but not both. We formalize tTLES and present an efficient protocol that relies, in a black-box manner, on a threshold identity-based encryption scheme and a recent batch threshold decryption scheme. Finally, we discuss various applications that will benefit from tTLES.
Weiqi Feng, Xinle Cao, Adam O'Neill, Chuanhui Yang
ePrint Report
Obliviousness has been regarded as an essential property in encrypted databases (EDBs) for mitigating leakage from access patterns. Yet despite decades of work, practical oblivious graph processing remains an open problem. In particular, all existing approaches fail to support index-free adjacency (IFA), i.e., having each vertex store the physical positions of its neighbors. However, IFA has been widely recognized as necessary for efficient graph processing and is fundamental in native graph databases (e.g., Neo4j).

In this work, we propose a core technique named delayed duplication to resolve the conflict between IFA and obliviousness. To the best of our knowledge, we are the first to address this conflict with both practicality and strict security. Based on the new technique, we utilize elaborate data structures to develop a new EDB named Grove for processing expressive graph queries. The experimental results demonstrate that incorporating IFA makes Grove impressively outperform the state-of-the-art work across multiple graph-processing tasks, such as the well-known neighbor query and $t$-hop query.
Bengaluru, India, 2 June 2026
Event Calendar
Event date: 2 June 2026
Submission deadline: 13 February 2026
Notification: 16 March 2026
Bangalore, India, 2 June 2026
Event Calendar
Event date: 2 June 2026
Submission deadline: 20 January 2026
Notification: 9 March 2027
ENS Lyon, France
Job Posting

The candidate will work on (quantum-)computational and mathematical aspects of lattice-based or isogeny-based cryptography. They will join the Number Theory team at ENS de Lyon, supported by grant ANR-22-PNCQ-0002 (the HQI initiative).

The candidate should hold a PhD degree in Mathematics or Computer Science and have a strong research record in any of the following areas: number theory, quantum computing, lattice-based cryptography, or isogeny-based cryptography.

Applications should be sent to Benjamin Wesolowski at postdoc.hqi.wiring373@passmail.net (including a CV, cover letter, and list of references).

Closing date for applications:

Contact: Benjamin Wesolowski, postdoc.hqi.wiring373@passmail.net
