International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

12 February 2025

IBM Research Zurich, Switzerland
Job Posting

We have openings for two PhD positions (one starting in September 2025, one starting in 2026) and one postdoc position (starting September 2025) in the Foundational Cryptography group at IBM Research Zurich. The PhD positions are fully funded for 4 years. The Foundational Cryptography team currently consists of 9 permanent researchers and 7 PhD students.

The research project that the students and postdoc will be working on is about developing post-quantum cryptographic algorithms for human authentication, PIN-based protocols, and the IoT. A background in any of lattice-based cryptography, multi-party computation, or password-authenticated key exchange is helpful but not a requirement. We will explore both theoretical limitations and usable solutions, and depending on the interest of the applicant, either a more foundational or practical direction can be taken.

Applicants need to have a passion for the mathematical analysis of algorithms in general and cryptography in particular. A master's degree in mathematics or computer science and fluent written and spoken English are required.

The IBM Research Zurich lab is located on beautiful Lake Zurich, close to the Swiss Alps. Research divisions include IT Security, Quantum, AI, and Hybrid Cloud Systems. IBM fosters an inclusive and diverse working environment. Applicants from minorities are particularly encouraged to apply.

Closing date for applications:

Contact: Julia Hesse


11 February 2025

Tim Beyne, Michiel Verbauwhede
ePrint Report
It is shown that the stream cipher proposed by Carlet and Sarkar in ePrint report 2025/160 is insecure. More precisely, one bit of the key can be deduced from a few keystream bytes. This property extends to an efficient key-recovery attack. For example, for the proposal with 80-bit keys, a few kilobytes of keystream material are sufficient to recover half of the key.
Dimitri Koshelev, Antonio Sanso
ePrint Report
The present article is a natural extension of the previous one on the GLV method of accelerating (multi-)scalar multiplication on elliptic curves of moderate CM discriminants $D < 0$. In comparison with the first article, much greater magnitudes of $D$ (in absolute value) are achieved, although the base finite fields of the curves have to be fairly large. This becomes feasible by resorting to quite powerful algorithmic tools developed primarily in the context of lattice-based and isogeny-based cryptography. Curiously, pre-quantum cryptography here borrows research outcomes that were originally obtained in the search for quantum-resistant solutions or for attacks on them.

For instance, a certain $2$-cycle of pairing-friendly MNT curves (with $-D \approx 100{,}000{,}000$, i.e., $\log_2(-D) \approx 26.5$) is relevant for the result of the current article. This $2$-cycle was generated some time ago by Guillevic to provide $\approx 128$ bits of security, hence it was close to application in real-world zk-SNARKs. Another, more performant MNT $2$-cycle (with a slightly smaller security level, but with much larger $D$) was actually employed in the protocol Coda (now Mina) until zero-knowledge proof systems on significantly faster pairing-free (or half-pairing) $2$-cycles were invented. It is also shown in the present work that further lollipop curves, recently proposed by Costello and Korpal to replace MNT ones, are now covered by the GLV technique.
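For background, the GLV method exploits an efficiently computable endomorphism $\varphi$ of the curve that acts on the prime-order subgroup of order $n$ as multiplication by an integer $\lambda$. A standard sketch of the decomposition (generic, not specific to the curves considered above):

$$k \equiv k_1 + k_2 \lambda \pmod{n}, \qquad |k_1|, |k_2| = O(\sqrt{n}), \qquad [k]P = [k_1]P + [k_2]\varphi(P),$$

after which the right-hand side is computed as a double-scalar multiplication (e.g., via the Straus-Shamir trick), roughly halving the number of point doublings.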
Paco Azevedo-Oliveira, Andersson Calle Viera, Benoît Cogliati, Louis Goubin
ePrint Report
In Dilithium, the rejection sampling step is crucial for the proof of security and correctness of the scheme. However, to our knowledge, there is no attack in the literature that takes advantage of an attacker knowing rejected signatures. The aim of this paper is to create a practical black-box attack against Dilithium with a weakened rejection sampling. We succeed in showing that an adversary with enough rejected signatures can recover Dilithium's secret key in less than half an hour on a desktop computer. There is one possible application for this result: by physically preventing one of the rejection sampling tests from happening, we obtain two fault attacks against Dilithium.
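For readers unfamiliar with the step being weakened, the toy Python sketch below imitates the flavour of Dilithium's infinity-norm rejection test on the response $z = y + c s_1$. It is a single-polynomial simplification with illustrative parameters, not the actual scheme or the attack of the paper.

import random

gamma1 = 2**17   # range of the masking vector y (illustrative)
beta = 78        # bound on the infinity norm of c*s1 (illustrative)
n = 256          # number of coefficients in the toy polynomial

def inf_norm(v):
    return max(abs(c) for c in v)

def try_sign_once(cs1):
    """One rejection-sampling iteration: sample the mask y, form z, keep z only if it is small."""
    y = [random.randint(-(gamma1 - 1), gamma1 - 1) for _ in range(n)]
    z = [yi + ci for yi, ci in zip(y, cs1)]
    if inf_norm(z) >= gamma1 - beta:
        return None   # rejected: such a z could leak information about s1
    return z

cs1 = [random.randint(-beta, beta) for _ in range(n)]   # stand-in for c*s1
attempts, z = 0, None
while z is None:
    z = try_sign_once(cs1)
    attempts += 1
print(f"accepted after {attempts} attempt(s)")

As the abstract notes, an adversary who sees enough candidates rejected by a weakened version of this test can recover the secret key.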
Sarisht Wadhwa, Julian Ma, Thomas Thiery, Barnabe Monnot, Luca Zanolini, Fan Zhang, Kartik Nayak
ePrint Report
The decentralized nature of blockchains is touted to provide censorship resistance. However, in reality, the ability of proposers to completely control the contents of a block makes censorship relatively fragile. To combat this, a notion of inclusion lists has been proposed in the blockchain community. This paper presents the first formal study of inclusion lists. Our inclusion list design leverages multiple proposers to propose transactions and improve censorship resistance. The design has two key components. The first component is a utility-maximizing input list creation mechanism that allows rational proposers to achieve a correlated equilibrium while prioritizing high-value transactions. The second component, AUCIL (auction-based inclusion list), is a mechanism for aggregating the input lists from the proposers to output an inclusion list.
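As a toy illustration of the two-stage flow described above (per-proposer input-list creation followed by aggregation), the Python sketch below has each proposer greedily pick its highest-value transactions and then merges the lists. It is only a schematic stand-in; the paper's actual mechanisms (the correlated-equilibrium list-creation game and the AUCIL auction) are considerably more involved.

from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    txid: str
    fee: int          # stand-in for the transaction's value to a proposer

def make_input_list(view, k):
    """Each proposer proposes its k highest-value transactions from its own view."""
    return sorted(view, key=lambda t: t.fee, reverse=True)[:k]

def aggregate(input_lists):
    """Merge per-proposer lists into one deduplicated inclusion list."""
    seen, out = set(), []
    for lst in input_lists:
        for tx in lst:
            if tx.txid not in seen:
                seen.add(tx.txid)
                out.append(tx)
    return out

mempool = [Tx("a", 5), Tx("b", 9), Tx("c", 2), Tx("d", 7)]
views = [mempool, mempool[:3], mempool[1:]]   # proposers may see different transactions
inclusion_list = aggregate([make_input_list(v, 2) for v in views])
print([tx.txid for tx in inclusion_list])     # e.g. ['b', 'd', 'a']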
Julien Béguinot, Loïc Masure
ePrint Report
We exhibit a gap between the average random probing model, as defined by Dziembowski et al. at Eurocrypt 2015, and the same model, as defined in the recent paper of Brian et al. at Eurocrypt 2024. Whereas any noisy leakage can be tightly reduced to the former one, we show in this paper that it cannot be tightly reduced to the latter one, unless requiring extra assumptions, e.g., if the noisy leakage is deterministic. As a consequence, the reduction from noisy leakages to random probings — without field size loss — remains unproven.
Shivam Bhasin, Dirmanto Jap, Marina Krček, Stjepan Picek, Prasanna Ravi
ePrint Report
Machine learning (ML) has been widely deployed in various applications, with many applications being in critical infrastructures. One recent paradigm is edge ML, an implementation of ML on embedded devices for Internet-of-Things (IoT) applications. In this work, we have conducted a practical experiment on Intel Neural Compute Stick (NCS) 2, an edge ML device, with regard to fault injection (FI) attacks. More precisely, we have employed electromagnetic fault injection (EMFI) on NCS 2 to evaluate the practicality of the attack on a real target device. We have investigated multiple fault parameters with a low-cost pulse generator, aiming to achieve misclassification at the output of the inference. Our experimental results demonstrated the possibility of achieving practical and repeatable misclassifications.
Cruz Barnum, David Heath
ePrint Report
It is often desirable to break cryptographic primitives into two components: an input-independent offline component, and a cheap online component used when inputs arrive. Security of such online/offline primitives must be proved in the input-adaptive setting: the adversary chooses its input adaptively, based on what it sees in the offline-phase. Proving security in the input-adaptive setting can be difficult, particularly when one wishes to achieve simulation security and avoid idealized objects like a random oracle (RO).

This work proposes a framework for reasoning about input-adaptive primitives: adaptive distributional security (ADS). Roughly, an ADS primitive provides security when it is used with inputs drawn from one of two distributions that are themselves hard to distinguish. ADS is useful as a framework for the following reasons:
- An ADS definition can often circumvent impossibility results imposed on the corresponding simulation-based definition. This allows us to decrease the online-cost of primitives, albeit by using a weaker notion of security.
- With care, one can typically upgrade an ADS-secure object into a simulation-secure object (by increasing cost in the online-phase).
- ADS is robust, in the sense that (1) it enables a form of composition and (2) interesting ADS primitives are highly interconnected in terms of which objects imply which other objects.
- Many useful ADS-secure objects are plausibly secure from straightforward symmetric-key cryptography.
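As a loose schematic only (one plausible reading of the informal description above, not the paper's actual definition), an ADS-style requirement for an online/offline primitive $(\mathsf{Offline}, \mathsf{Online})$ could be pictured as

$$\mathcal{D}_0 \approx_c \mathcal{D}_1 \;\Longrightarrow\; \big(\mathsf{off},\, \mathsf{Online}(\mathsf{off}, x_0)\big)_{x_0 \leftarrow \mathcal{D}_0} \;\approx_c\; \big(\mathsf{off},\, \mathsf{Online}(\mathsf{off}, x_1)\big)_{x_1 \leftarrow \mathcal{D}_1}, \qquad \mathsf{off} \leftarrow \mathsf{Offline}(1^\lambda),$$

i.e., security is only demanded for inputs drawn from one of two computationally indistinguishable distributions.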

We start by defining the notion of an ADS encryption (ADE) scheme. A notion of input-adaptive encryption can be easily achieved from RO, and the ADE definition can be understood as capturing the concrete property provided by RO that is sufficient to achieve input-adaptivity. From there, we use ADE to achieve ADS variants of garbled circuits and oblivious transfer, to achieve simulation-secure garbled circuits, oblivious transfer, and two-party computation, and prove interconnectedness of these primitives. In sum, this results in a family of objects with extremely cheap online-cost.

10 February 2025

Madrid, Spain, 3 April - 3 May 2025
Event Calendar
Event date: 3 April to 3 May 2025
Submission deadline: 14 March 2025
Jad Silbak, Daniel Wichs
ePrint Report
We study error detection and correction in a computationally bounded world, where errors are introduced by an arbitrary $\textit{polynomial-time}$ adversarial channel. Our focus is on $\textit{seeded}$ codes, where the encoding and decoding procedures can share a public random seed, but are otherwise deterministic. We can ask for either $\textit{selective}$ or $\textit{adaptive}$ security, depending on whether the adversary can choose the message being encoded before or after seeing the seed. For large alphabets, a recent construction achieves essentially optimal rate versus error tolerance trade-offs under minimal assumptions, surpassing information-theoretic limits. However, for the binary alphabet, the only prior improvement over information theoretic codes relies on non-standard assumptions justified via the random oracle model. We show the following:

$\textbf{Selective Security under LWE:}$ Under the learning with errors (LWE) assumption, we construct selectively secure codes over the binary alphabet. For error detection, our codes achieve essentially optimal rate $R \approx 1$ and relative error tolerance $\rho \approx \frac{1}{2}$. For error correction, they can uniquely correct $\rho < 1/4$ relative errors with a rate $R$ that essentially matches that of the best list-decodable codes with error tolerance $\rho$. Both cases provide significant improvements over information-theoretic counterparts. The construction relies on a novel form of 2-input correlation intractable hash functions that we construct from LWE.

$\textbf{Adaptive Security via Crypto Dark Matter:}$ Assuming the exponential security of a natural collision-resistant hash function candidate based on the ``crypto dark matter'' approach of mixing linear functions over different moduli, we construct adaptively secure codes over the binary alphabet, for both error detection and correction. They achieve essentially the same trade-offs between error tolerance $\rho$ and rate $R$ as above, with the caveat that for error-correction they only do so for sufficiently small values of $\rho$.
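For context, if ``the best list-decodable codes'' is read as codes approaching the binary list-decoding capacity (a standard benchmark, stated here as background rather than as a claim of the paper), the rate in question is approximately

$$R \approx 1 - H(\rho), \qquad H(\rho) = -\rho \log_2 \rho - (1-\rho)\log_2(1-\rho),$$

where $H$ is the binary entropy function.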
Madhurima Mukhopadhyay
ePrint Report
We study the problem of finding a path between conjugate supersingular elliptic curves over $\mathbb{F}_{p^2}$ for a prime $p$, which is important for cycle finding in supersingular isogeny graphs. We observe that for any given $p$, there is some $l$ corresponding to $p$ that accelerates the process of conjugate path-finding. Also, time-wise, the most efficient way to survey the graph is to look $i(=3)$ steps ahead at once. We outline methods by which the next vertex of a pseudo-random walk should be chosen to reach the conjugate vertex faster. We experimentally investigate the paths between Frobenius conjugates for wide ranges of small primes $l$. We also introduce sets to experimentally learn about the structure of the isogeny graphs when short cycles are present.
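As background (standard facts about conjugate curves, not part of the paper's contribution): the Frobenius conjugate of a curve $E/\mathbb{F}_{p^2}$ in short Weierstrass form is obtained by raising its coefficients to the $p$-th power,

$$E: y^2 = x^3 + ax + b \;\longmapsto\; E^{(p)}: y^2 = x^3 + a^p x + b^p, \qquad a, b \in \mathbb{F}_{p^2},$$

and the $p$-power Frobenius $\pi: E \to E^{(p)}$, $(x, y) \mapsto (x^p, y^p)$, is an inseparable isogeny of degree $p$. Composing an $l$-isogeny path $E^{(p)} \to E$ with $\pi$ yields a non-scalar endomorphism of $E$ (its degree $p \cdot l^e$ is not a square), which is why conjugate paths feed directly into cycle finding.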
Nan Wang, Qianhui Wang, Dongxi Liu, Muhammed F. Esgin, Alsharif Abuadbba
ePrint Report
RingCT signatures are essential components of Ring Confidential Transaction (RingCT) schemes on blockchain platforms, enabling anonymous transaction spending and significantly impacting the scalability of these schemes. This paper makes two primary contributions:

We provide the first thorough analysis of a recently developed Any-out-of-N proof in the discrete logarithm (DLOG) setting and the associated RingCT scheme, introduced by ZGSX23 (S&P '23). The proof conceals the number of secrets to offer greater anonymity than K-out-of-N proofs, and uses an efficient "K-Weight" technique for its construction. However, we identify for the first time several limitations of using Any-out-of-N proofs, such as increased transaction sizes, heightened cryptographic complexity, and potential security risks. These limitations prevent them from effectively mitigating the longstanding scalability bottleneck.

We then continue to explore the potential of using K-out-of-N proofs to enhance scalability of RingCT schemes. Our primary innovation is a new DLOG-based RingCT signature that integrates a refined "K-Weight"-based K-out-of-N proof and an entirely new tag proof. The latter is the first to efficiently enable the linkability of RingCT signatures derived from the former, effectively resisting double-spending attacks.

Finally, we identify and patch a linkability flaw in ZGSX23's signature. We benchmark our scheme against this patched one to show that our scheme achieves a boost in scalability, marking a promising step forward.
Huck Bennett, Drisana Bhatia, Jean-François Biasse, Medha Durisheti, Lucas LaBuff, Vincenzo Pallozzi Lavorante, Philip Waitkevich
ePrint Report
We present several new provable algorithms for two variants of the code equivalence problem on linear error-correcting codes, the Linear Code Equivalence Problem (LCE) and the Permutation Code Equivalence Problem (PCE). Specifically, for arbitrary codes of block length $n$ and dimension $k$ over any finite field $\mathbb{F}_q$, we show:
1) A deterministic algorithm running in $2^{n + o(n+q)}$ time for LCE.
2) A randomized algorithm running in $2^{n/2 + o(n+q)}$ time for LCE and PCE.
3) A quantum algorithm running in $2^{n/3 + o(n+q)}$ time for LCE and PCE.
The first algorithm complements the deterministic roughly $2^n$-time algorithm of Babai (SODA 2011) for PCE. The latter two algorithms improve on recent work of Nowakowski (PQCrypto 2025), which gave algorithms with similar running times, but only for code equivalence on \emph{random} codes and only over fields of order $q \geq 7$.
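For reference, the two notions are the standard ones (stated here in their usual textbook form): codes $C_1, C_2 \subseteq \mathbb{F}_q^n$ are permutation equivalent (PCE) if

$$C_2 = \{(c_{\pi(1)}, \ldots, c_{\pi(n)}) : c \in C_1\} \quad \text{for some permutation } \pi \in S_n,$$

and linearly equivalent (LCE) if, in addition, each coordinate may be scaled by a nonzero constant, i.e., $C_2 = \{(\alpha_1 c_{\pi(1)}, \ldots, \alpha_n c_{\pi(n)}) : c \in C_1\}$ for some $\pi \in S_n$ and $\alpha_1, \ldots, \alpha_n \in \mathbb{F}_q^{\times}$.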
Maher Mamah
ePrint Report
In this paper, we investigate several computational problems motivated by post-quantum cryptosystems based on isogenies and ideal class group actions on oriented elliptic curves. Our main technical contribution is an efficient algorithm for embedding the ring of integers of an imaginary quadratic field \( K \) into some maximal order of the quaternion algebra \( B_{p,\infty} \) ramified at a prime \( p \) and infinity. Assuming the Generalized Riemann Hypothesis (GRH), our algorithm runs in probabilistic polynomial time, improving upon previous results that relied on heuristics or required the factorization of \( \textnormal{disc}(K) \). Notably, this algorithm may be of independent interest. Our approach enhances the work of Love and Boneh \citep{LB20} on computing isogenies between \( M \)-small elliptic curves by eliminating heuristics and improving computational efficiency. Furthermore, given a quadratic order \( \mathfrak{O} \) in \( K \), we show that our algorithm reduces the computational endomorphism ring problem of \( \mathfrak{O} \)-oriented elliptic curves to the Vectorization problem in probabilistic polynomial time, assuming the conductor of \( \mathfrak{O} \) can be efficiently factorized. Previously, the best known result required the full factorization of \( \textnormal{disc}(\mathfrak{O}) \), which may be exponentially large. Additionally, when the conductor of \( \mathfrak{O} \) can be efficiently factorized, we establish a polynomial-time equivalence between the Quaternion Order Embedding Problem, which asks to embed a quadratic order \( \mathfrak{O} \) into a maximal order in \( B_{p,\infty} \), and computing horizontal isogenies between \( \mathfrak{O} \)-oriented elliptic curves. Leveraging this reduction, we propose a rigorous algorithm, under GRH, that solves the quaternion order embedding problem in time \( \tilde{O}(|\mathrm{disc}(\mathfrak{O})|^{1/2}) \), improving upon previous methods that required higher asymptotic time and relied on several heuristics.
Marcel Nageler, Shibam Ghosh, Marlene Jüttler, Maria Eichlseder
ePrint Report
Differential cryptanalysis is one of the main methods of cryptanalysis and has been applied to a wide range of ciphers. While it is very successful, it also relies on certain assumptions that do not necessarily hold in practice. One of these is the hypothesis of stochastic equivalence, which states that the probability of a differential characteristic behaves similarly for all keys. Several works have demonstrated examples where this hypothesis is violated, impacting the attack complexity and sometimes even invalidating the investigated prior attacks. Nevertheless, the hypothesis is still typically taken for granted. In this work, we propose AutoDiVer, an automatic tool that allows one to thoroughly verify differential characteristics. First, the tool supports calculating the expected probability of differential characteristics while considering the key schedule of the cipher. Second, the tool supports estimating the size of the space of keys for which the characteristic permits valid pairs, and deducing conditions for these keys. AutoDiVer implements a custom SAT modeling approach and takes advantage of a combination of features of advanced SAT solvers, including approximate model counting and clause learning. To show applicability to many different kinds of block ciphers like strongly aligned, weakly aligned, and ARX ciphers, we apply AutoDiVer to GIFT, PRESENT, RECTANGLE, SKINNY, WARP, SPECK, and SPEEDY.
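The key dependence that such verification targets can be seen even in a miniature setting. The brute-force Python sketch below (in no way the SAT-based approach of AutoDiVer; it uses an arbitrary toy S-box and the toy round $E_k(x) = S[(x + k) \bmod 16]$) enumerates all one-round XOR characteristics and reports the one whose pair count varies most across the 16 keys, i.e., the clearest toy-scale violation of the hypothesis of stochastic equivalence.

# Arbitrary 4-bit S-box chosen for illustration only.
S = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
     0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

def round_fn(x, k):
    # Modular key addition followed by the S-box; the interaction between
    # modular addition and XOR differences is what makes probabilities key-dependent.
    return S[(x + k) % 16]

def per_key_counts(din, dout):
    """For each key, count input pairs with difference din mapping to difference dout."""
    return [sum(1 for x in range(16)
                if round_fn(x, k) ^ round_fn(x ^ din, k) == dout)
            for k in range(16)]

best = max(((din, dout) for din in range(1, 16) for dout in range(16)),
           key=lambda d: max(per_key_counts(*d)) - min(per_key_counts(*d)))
print("characteristic", best, "per-key pair counts:", per_key_counts(*best))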
Orfeas Stefanos Thyfronitis Litos, Zhaoxuan Wu, Alfredo Musumeci, Songyun Hu, James Helsby, Michael Breza, William Knottenbelt
ePrint Report
Blockchains enable decentralised applications that withstand Byzantine failures and do not need a central authority. Unfortunately, their massive replication requirements preclude their use on constrained devices.

We propose a novel blockchain-based data structure which forgoes replication without affecting the append-only nature of blockchains, making it suitable for maintaining data integrity over networks of storage-constrained devices. Our solution does not provide consensus, which is not required by our motivating application, namely securely storing sensor data of containers in cargo ships.

We elucidate the practical promise of our technique by following a multi-faceted approach: we (i) formally prove the security of our protocol in the Universal Composition (UC) setting, (ii) provide a small-scale proof-of-concept implementation, (iii) present a performance simulation for large-scale deployments, which showcases a reduction in storage of more than $1000$x compared to traditional blockchains, and (iv) run a resilience simulation that predicts the practical effects of network jamming attacks.
Apostolos Mavrogiannakis, Xian Wang, Ioannis Demertzis, Dimitrios Papadopoulos, Minos Garofalakis
ePrint Report
We introduce oblivious parallel operators designed for both non-foreign key and foreign key equi-joins. Obliviousness ensures nothing is revealed about the data besides input/output sizes, even against a strong adversary that can observe memory access patterns. Our solution achieves this by combining trusted hardware with efficient oblivious primitives for compaction and sorting, and two oblivious algorithms: (i) an oblivious aggregation tree, which can be described as a variation of the parallel prefix sum, customized for trusted hardware, and (ii) a novel algorithm for obliviously expanding the elements of a relation. In the sequential setting, our oblivious join performs $4.6\times$- $5.14\times$ faster than the prior state-of-the-art solution (Krastnikov et al., VLDB 2020) on data sets of size $n=2^{24}$. In the parallel setting, our algorithm achieves a speedup of up to roughly $16\times$ over the sequential version, when running with 32 threads (becoming up to $80\times$ compared to the sequential algorithm of Krastnikov et al.). Finally, our oblivious operators can be used independently to support other oblivious relational database queries, such as oblivious selection and oblivious group-by.
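Since the oblivious aggregation tree is described as a variation of the parallel prefix sum, the following sequential Python simulation of the textbook Blelloch exclusive scan may help fix ideas; it is only the classic algorithm, not the authors' data-oblivious, TEE-customized version.

def blelloch_exclusive_scan(a):
    """Work-efficient parallel prefix sum (exclusive scan), simulated sequentially."""
    n = len(a)                      # assumed to be a power of two
    x = list(a)
    # Up-sweep (reduce) phase: accumulate partial sums up a binary tree.
    d = 1
    while d < n:
        for i in range(0, n, 2 * d):
            x[i + 2 * d - 1] += x[i + d - 1]
        d *= 2
    # Down-sweep phase: push prefixes back down the tree.
    x[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(0, n, 2 * d):
            t = x[i + d - 1]
            x[i + d - 1] = x[i + 2 * d - 1]
            x[i + 2 * d - 1] += t
        d //= 2
    return x

print(blelloch_exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))
# [0, 3, 4, 11, 11, 15, 16, 22]

In a parallel implementation each inner loop runs its iterations concurrently, giving $O(\log n)$ depth and $O(n)$ work, and the memory access pattern depends only on $n$, which is what makes prefix-sum-style aggregation a natural fit for oblivious operators.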
Zhiyuan An, Fangguo Zhang
ePrint Report
We introduce an enhanced requirement of deniable public key encryption that we call dual-deniability. It asks that a coerced sender should be able to produce fake randomness that explains the target ciphertext as the encryption of any alternative message under any valid key of their choosing. Compared with the original notion of deniability (Canetti et al. in CRYPTO ’97, hereafter named message-deniability), this notion additionally shields the anonymity of the receiver against coercion attacks. We first give a formal definition of dual-deniability, along with its weak-mode variant. For conceptual understanding, we then show that dual-deniability implies semantic security and anonymity against CPA, is separated from full robustness, and even contradicts key-less or mixed robustness, while it is (constructively) implied by key-deniability and full robustness under a minor assumption for bit encryption. As for the availability of dual-deniability, our main scheme is a generic approach from ciphertext-simulatable PKE, where we devise a subtle multi-encryption schema to hide the true message within random masking ciphertexts in the plan-ahead setting. Further, we leverage the weak model to present a more efficient scheme with negligible detection probability and constant ciphertext size. Besides, we revisit the notable scheme of Sahai and Waters (STOC ’14) and show that it is inherently dual-deniable. Finally, we extend the Boneh-Katz transform to capture CCA security, deriving dual-deniable and CCA-secure PKE from any selectively dual-deniable IBE in the multi-TA setting. Overall, our work establishes the feasibility of anonymous messaging against coercion attacks.
Max Duparc, Mounir Taha
ePrint Report
In this paper, we build upon the blinding methods introduced in recent years to enhance the protection of lattice-based cryptographic schemes against side-channel and fault injection attacks. Specifically, we propose a cost-efficient blinded Number Theoretic Transform (NTT) that impedes the convergence of Soft Analytical Side-Channel Attacks (SASCA), even with limited randomness sampling. Additionally, we extend the blinding mechanism based on the Chinese Remainder Theorem (CRT) and Redundant Number Representation (RNR) introduced by Heiz and Pöppelmann by reducing the randomness sampling overhead and accelerating the verification phase.

These two blinding mechanisms are nicely compatible with each other and, when combined, provide enhanced resistance against side-channel attacks, both classical and soft analytical, as well as fault injection attacks, while maintaining high performance and low overhead, making the approach well-suited for practical applications, particularly in resource-constrained IoT environments.
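As a generic aside on why blinding composes well with the NTT (this is only the textbook linearity argument, not the specific cost-efficient construction of the paper): the NTT is $\mathbb{Z}_q$-linear, so an additively shared polynomial can be transformed share-wise and recombined afterwards. A minimal sketch in Python with a naive $O(n^2)$ cyclic NTT over $\mathbb{Z}_{257}$:

import random

q, n = 257, 16
omega = pow(3, (q - 1) // n, q)     # 3 generates Z_257^*, so omega has multiplicative order 16
assert pow(omega, n, q) == 1 and pow(omega, n // 2, q) != 1

def ntt(poly):
    """Naive cyclic NTT: evaluate the polynomial at the powers of omega."""
    return [sum(c * pow(omega, i * j, q) for j, c in enumerate(poly)) % q
            for i in range(n)]

a = [random.randrange(q) for _ in range(n)]   # secret polynomial
r = [random.randrange(q) for _ in range(n)]   # fresh random mask
shares = ([(x - y) % q for x, y in zip(a, r)], r)

# Transform each share independently (as a masked implementation would),
# then recombine: by linearity the result equals NTT(a).
recombined = [(u + v) % q for u, v in zip(ntt(shares[0]), ntt(shares[1]))]
assert recombined == ntt(a)
print("share-wise NTT recombines correctly:", recombined == ntt(a))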
Di Wu, Shoupeng Ren, Yuman Bai, Lipeng He, Jian Liu, Wu Wen, Kui Ren, Chun Chen
ePrint Report
Payment channels have emerged as a promising solution to address the performance limitations of cryptocurrency payments, enabling efficient off-chain transactions while maintaining security guarantees. However, existing payment channel protocols, including the widely deployed Lightning Network and the state-of-the-art Sleepy Channels, suffer from a fundamental vulnerability: non-atomic state transitions create race conditions that can lead to unexpected financial losses. We first formalize current protocols into a common paradigm and prove that this vulnerability is fundamental: any protocol following this paradigm cannot guarantee balance security due to the inherent race conditions in its design. To address this limitation, we propose a novel atomic paradigm for payment channels that ensures atomic state transitions, effectively eliminating race conditions while maintaining all desired functionalities. Based on this paradigm, we introduce Ultraviolet, a secure and efficient payment channel protocol that achieves both atomicity and high performance, while avoiding the introduction of unimplemented Bitcoin features. Ultraviolet reduces the number of required messages per transaction by half compared to existing solutions, while maintaining comparable throughput. We formally prove the security of Ultraviolet under the universal composability framework and demonstrate its practical efficiency through extensive evaluations across multiple regions. This results in a 37% and 52% reduction in latency compared to the Lightning Network and Sleepy Channels, respectively. Regarding throughput, Ultraviolet achieves performance comparable to the Lightning Network and delivers 2× the TPS of Sleepy Channels.