International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

19 May 2025

Oğuz Yayla, Yunus Emre Yılmaz
ePrint Report
Phase-locked loops (PLLs) embedded within field-programmable gate arrays (FPGAs) or system-on-chip FPGAs (SoCs) present a promising methodology for the generation of random numbers. Their widespread integration across these platforms, combined with their isolated operational characteristics and robust entropy generation, as substantiated by prior research, positions PLL-based true random number generators (PLL-TRNGs) as highly effective solutions for this purpose. The present study focuses on the implementation of PLL-TRNGs utilizing the ZC702 Rev1.1 Evaluation Board, which incorporates the Zynq-7020 SoC from Xilinx. For experimental validation, a configuration involving three such boards is employed. The parameters governing the PLL-TRNG are optimized through a backtracking algorithm. Additionally, a novel, platform-adaptive technique is proposed to enhance the rate of random data bit generation while preserving entropy characteristics. The system's performance is rigorously evaluated against the criteria established by the German Federal Office for Information Security (BSI) AIS-20/31 tests, with a detailed account of the implementation process provided. Furthermore, the study demonstrates the minimal resource utilization of the PLL-TRNG design within an SoC, thereby underscoring its suitability for Internet-of-Things (IoT) applications, where logic resources are often highly constrained.
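The abstract does not spell out the search constraints, but the flavor of a backtracking search over PLL multiplier/divider settings can be sketched as follows. Every constant below (reference clock, frequency window, coprimality and sensitivity conditions) is an illustrative assumption, not the Zynq-7020 configuration used in the paper.

    # Illustrative sketch only: search over integer PLL multiplier/divider pairs
    # for a two-PLL TRNG, pruning settings that violate (assumed) constraints.
    from math import gcd

    F_REF = 125e6                   # assumed reference clock (Hz)
    F_OUT = (260e6, 600e6)          # assumed legal PLL output frequency range (Hz)
    MULTS = range(2, 33)            # assumed multiplier range
    DIVS = range(1, 33)             # assumed divider range
    MIN_SENSITIVITY = 1000          # assumed lower bound on KM*KD (entropy proxy)

    def candidates():
        """All (M, D, f) settings whose output frequency lies in F_OUT."""
        return [(m, d, F_REF * m / d) for m in MULTS for d in DIVS
                if F_OUT[0] <= F_REF * m / d <= F_OUT[1]]

    def search():
        """Maximize the bit rate f1/KD while keeping KM*KD large, where
        f2/f1 = KM/KD in lowest terms (the usual PLL-TRNG sensitivity ratio)."""
        best, cands = None, candidates()
        for m1, d1, f1 in cands:
            for m2, d2, f2 in cands:
                km, kd = m2 * d1, m1 * d2
                g = gcd(km, kd)
                km, kd = km // g, kd // g
                if kd == 1 or km * kd < MIN_SENSITIVITY:
                    continue                    # prune this branch and backtrack
                rate = f1 / kd                  # candidate output bit rate
                if best is None or rate > best[0]:
                    best = (rate, (m1, d1), (m2, d2), (km, kd))
        return best

    print(search())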
Mahdi Mahdavi, Ehsan Meamari, Emad Heydari Beni, Maryam Sheikhi
ePrint Report
Homomorphic encryption is a powerful tool that enables computation on encrypted data without requiring decryption. While many Fully Homomorphic Encryption schemes, supporting arbitrary computations on encrypted data, have been developed using lattice-based and AGCD-based approaches, progress in composite groups has been limited to Partial Homomorphic Encryption schemes, which only support either addition or multiplication. This paper introduces the first $\ell$-leveled homomorphic encryption schemes over composite groups, based on the factoring problem, that achieve both multiplicative and additive homomorphic properties. Additionally, a modified version of Rothblum’s transformation technique is developed to provide public-key variants of the symmetric schemes. Our novel design eliminates the need for the relinearization operation, which is common in LWE-based HE schemes, and removes the requirement for the circular security assumption. For applications where the traffic must be indistinguishable as encrypted, a scrambled scheme is designed using a labeling technique. While the initial schemes offer an expanded message space, the introduction of the double-sized message technique enables the encryption of messages up to twice the size without increasing the ciphertext size. Implementation results show that our schemes significantly outperform existing solutions, particularly in multiplication operations, achieving speeds up to 1000 times faster than well-known schemes such as BFV, BGV, CKKS, and TFHE.
Mahdi Mahdavi, Helena Rifà-Pous
ePrint Report
Homomorphic Encryption (HE) is a fundamental Privacy-Enhancing Technology (PET) that enables computations on encrypted data without decryption. Despite its utility, designing an efficient and secure HE scheme is highly complex, requiring sophisticated cryptographic techniques. This paper introduces a novel approach to achieving homomorphic properties—supporting either one addition or one multiplication—within composite groups. While the proposed technique only achieves one-wayness, it has good potential to serve as a foundational building block for constructing cryptosystems with indistinguishability security. This work contributes to the advancement of homomorphic encryption by providing a new perspective on secure computation within structured algebraic settings.
Bin Hu, Jianwei Liu, Zhenliang Lu, Qiang Tang, Zhuolun Xiang, Zongyang Zhang
ePrint Report
Dynamic-committee Proactive Secret Sharing (DPSS) has gained increased attention for its ability to dynamically update shareholder committees and refresh secret shares, even under adversaries that gradually corrupt all nodes. However, existing state-of-the-art asynchronous DPSS protocols suffer from significant $\mathcal{O}(n^3)$ message complexity and $\mathcal{O}(\lambda n^3)$ communication complexity, where $\lambda$ denotes the security parameter and $n$ is the committee size.

In this paper, under the trusted setup assumption, we propose the first asynchronous DPSS protocol with $\mathcal{O}(n^2)$ message complexity in all scenarios. Additionally, our protocol achieves $\mathcal{O}(\lambda n^2)$ communication complexity in the optimistic case, where all nodes are honest and the network is synchronous, and $\mathcal{O}(\lambda n^3)$ communication complexity in the worst case. Without a trusted setup, in the optimistic case, the message complexity is $\mathcal{O}(n^2)$, and the communication complexity is $\mathcal{O}(\lambda n^2 \log n)$. In the worst case, our protocol preserves state-of-the-art message and communication complexities, i.e., $\mathcal{O}(n^3)$ and $\mathcal{O}(\lambda n^3)$, respectively. Moreover, our protocol supports batch amortization and accommodates high thresholds. For committee sizes of 4 to 400, the estimated concrete communication cost of our DPSS is 19--100x (resp., 8--14x) smaller in the optimistic case (resp., worst case) compared to the state-of-the-art. Experiments on AWS show that our DPSS achieves a latency of 1.9--8 seconds for committee sizes from 4 to 64. Single-machine benchmarks reveal a (computational) runtime reduction of up to 44%.
Michał Osadnik, Darya Kaviani, Valerio Cini, Russell W. F. Lai, Giulio Malavolta
ePrint Report
A verifiable delay function (VDF) requires a specified number of sequential steps to compute, yet the validity of its output can be verified efficiently, much faster than recomputing the function from scratch. VDFs are a versatile cryptographic tool, with many industrial applications, such as blockchain consensus protocols, lotteries, and verifiable randomness. Unfortunately, without exception, all known practical VDF constructions are broken by quantum algorithms. In this work, we investigate the practicality of VDFs with plausible post-quantum security. We propose Papercraft, a working implementation of a VDF based entirely on lattice techniques and thus plausibly post-quantum secure. Our VDF is based on new observations on lattice-based succinct argument systems with many low-level optimisations, yielding the first lattice-based VDF that is implementable on today's hardware. As an example, our Papercraft implementation can verify a computation of almost 6 minutes in just 7 seconds. Overall, our work demonstrates that lattice-based VDFs are not just a theoretical construct, paving the way for their practical deployment.
Fabrice Benhamouda, Shai Halevi, Panos Kampanakis, Hugo Krawczyk
ePrint Report
We examine the use of blockcipher-based key derivation beyond the birthday bound, arguing that the PRP/PRF switching step of the analysis can be eliminated in many cases. To support this, we consider a modified ``ideal model'' for keying cryptographic applications in the multi-instance setting, where keys are chosen to be random \emph{but distinct}, rather than completely independent.

Our analysis shows that typical cryptographic applications remain secure in this model. One consequence is that it is typically safe to derive close to $2^n$ keys using an $n$-bit blockcipher in counter mode. In particular, considering the practice of nonce-derived keys for authenticated encryption, our results imply that modes such as XAES-256-GCM that use CMAC-based key derivation are safe to use with more than $2^{64}$ distinct nonces.
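As a rough illustration of counter-mode key derivation with a blockcipher, the sketch below derives distinct 256-bit subkeys by encrypting successive counter blocks under AES. The block layout is an assumption made for illustration; it is not the CMAC-based derivation of XAES-256-GCM analysed in the paper. Because AES under a fixed key is a permutation, distinct counters never produce colliding blocks, which is exactly the "random but distinct" keying model considered above.

    # Minimal sketch of blockcipher-based key derivation in counter mode
    # (illustrative layout, not the XAES-256-GCM construction).
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    MASTER_KEY = os.urandom(32)     # long-term 256-bit key

    def derive_key(index: int) -> bytes:
        """Derive the index-th 256-bit subkey as AES_K(2*index) || AES_K(2*index + 1)."""
        enc = Cipher(algorithms.AES(MASTER_KEY), modes.ECB()).encryptor()
        lo = enc.update((2 * index).to_bytes(16, "big"))
        hi = enc.update((2 * index + 1).to_bytes(16, "big"))
        return lo + hi

    # Distinct indices give distinct (though not independent) subkeys.
    k0, k1 = derive_key(0), derive_key(1)
    assert k0 != k1 and len(k0) == 32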
Nibesh Shrestha, Aniket Kate
ePrint Report
Directed Acyclic Graph (DAG)-based BFT consensus protocols often suffer from limited throughput and scalability due to bandwidth-intensive data replication to all participants. However, it is sufficient to replicate data to a smaller sub-committee of parties that holds an honest majority with high probability.

In this work, we introduce tribe-assisted reliable broadcast, a novel primitive that ensures reliable broadcast (RBC) properties within a smaller honest-majority sub-committee—referred to as a clan—drawn from the entire network, called the tribe. Leveraging this primitive, we develop two efficient DAG-based BFT consensus protocols. First, we present a single-clan protocol, in which a single clan is elected from the tribe, and data is disseminated exclusively to this designated clan using tribe-assisted RBC. We then extend this design to a multi-clan setting, where multiple clans are elected and data is distributed within each respective clan via the same mechanism. Experimental results demonstrate that both protocols offer substantial improvements in throughput and latency over existing DAG-based BFT protocols, even at moderately large scales.
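The viability of a clan rests on the sampled sub-committee holding an honest majority with high probability. The sketch below computes that probability exactly from the hypergeometric distribution; the tribe size, corruption count, and clan sizes are illustrative assumptions only.

    # Probability that a uniformly sampled clan of size k, drawn from a tribe of
    # n parties containing f Byzantine ones, has an honest majority.
    from math import comb

    def honest_majority_prob(n: int, f: int, k: int) -> float:
        """P[fewer than k/2 of the k sampled parties are Byzantine]."""
        total = comb(n, k)
        good = sum(comb(f, b) * comb(n - f, k - b) for b in range((k - 1) // 2 + 1))
        return good / total

    # Example: tribe of 301 parties, 100 Byzantine, clans of increasing size.
    for k in (31, 61, 101):
        print(k, honest_majority_prob(301, 100, k))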
Jake Januzelli, Mike Rosulek, Lawrence Roy
ePrint Report
We establish new lower bounds on the size of practical garbled circuits, which hold against any scheme satisfying the following simple properties: (1) Its security is based on symmetric-key cryptography only. More formally, security holds in Minicrypt, a model in which a random oracle is the only available source of cryptography. (2) The evaluation algorithm makes non-adaptive queries to the random oracle. (3) The evaluation algorithm "knows" which of its oracle queries are made by which other input combinations. These restrictions are reasonable for garbling single gates. In particular, unlike prior attempts at lower bounds, we make no assumptions about the internal behavior of the garbling algorithm --- i.e., how it uses random oracle outputs and wire labels to compute the garbled gate, etc.

We prove separate lower bounds depending on whether the scheme uses the free-XOR technique (Kolesnikov & Schneider, ICALP 2008). In the free-XOR case, we prove that a garbled AND-gate requires $1.5\lambda$ bits; thus, the garbling scheme of Rosulek & Roy (Crypto 2022) is optimal. In the non-free-XOR case, we prove that a garbled AND-gate requires $2\lambda$ bits and a garbled XOR-gate requires $\lambda$ bits; thus, the garbling scheme of Gueron, Lindell, Nof, and Pinkas (CCS 2015) is optimal.

We prove our lower bounds using tools from information theory. A garbling scheme can be characterized as a joint distribution over various quantities: wire labels, garbled gate information, random oracle responses. We show that different properties of a garbling scheme imply certain Shannon-type information inequalities about this distribution. We then use an automated theorem prover for Shannon-type inequalities to prove that our inequalities imply lower bounds on the entropy---hence, size---of the garbled gate information.
Mohammed Rahmani, Abderrahmane Nitaj
ePrint Report
In 2024, based on the cubic Pell curve, Nitaj and Seck proposed a variant of the RSA cryptosystem where the modulus is of the form $N=p^rq^s$, and the public key $e$ and private key $d$ satisfy the equation $ed\equiv 1\pmod{(p-1)^2(q-1)^2}$. They showed that $N$ can be factored when $d$ is less than a certain bound that depends on $r$ and $s$ in the case $rs\geq 2$, which does not extend to $r=s=1$. In this paper, we propose a cryptanalysis of this scheme in the case $r=s=1$ and give an explicit bound on $d$ that makes the scheme insecure. Our approach is based on Coppersmith's method and lattice reduction.
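For concreteness, a toy key pair satisfying the targeted key equation in the $r=s=1$ case can be generated as follows (tiny primes for illustration only; the attack itself, which applies Coppersmith's method to this equation, is not reproduced here).

    # Toy sketch of the key setup for r = s = 1: N = p*q and
    # e*d = 1 (mod (p-1)^2 * (q-1)^2). Tiny primes for illustration only.
    from sympy import randprime

    p, q = randprime(2**15, 2**16), randprime(2**15, 2**16)
    N = p * q
    phi2 = (p - 1) ** 2 * (q - 1) ** 2      # modulus of the key equation
    e = 65537
    d = pow(e, -1, phi2)                    # private exponent

    # The cryptanalysis exploits e*d = 1 + k*(p-1)^2*(q-1)^2 for some integer k,
    # recovering the factorization of N when d is small enough.
    assert (e * d) % phi2 == 1
    k = (e * d - 1) // phi2
    print(N, d.bit_length(), k.bit_length())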
Jiaqi Liu, Yan Wang, Fang-Wei Fu
ePrint Report
We initiate the study of multi-authority attribute-based functional encryption for noisy inner-product functionality, and propose two new primitives: (1) multi-authority attribute-based (noisy) inner-product functional encryption (MA-AB(N)IPFE), and (2) multi-authority attribute-based evasive inner-product functional encryption (MA-ABevIPFE). The MA-AB(N)IPFE primitive generalizes the existing multi-authority attribute-based inner-product functional encryption schemes by Agrawal et al. [AGT21], by enabling approximate inner-product computation under decentralized attribute-based control. This newly proposed notion combines the approximate function evaluation of noisy inner-product functional encryption (IPFE) with the decentralized key-distribution structure of multi-authority attribute-based encryption. To better capture noisy functionalities within a flexible security framework, we formulate the MA-ABevIPFE primitive under a generic-model view, inspired by the evasive IPFE framework by Hsieh et al. [HLL24]. It shifts the focus from pairwise ciphertext indistinguishability to a more relaxed pseudorandomness-based game.

To support the above notions, we introduce two variants of lattice-based computational assumptions:

- The evasive IPFE assumption (evIPFE): it generalizes the assumption introduced in [HLL24] to the multi-authority setting and admits a reduction from the evasive LWE assumption proposed by Waters et al. [WWW22];

- The indistinguishability-based evasive IPFE assumption (IND-evIPFE): it is an indistinguishability-based variant of the evasive IPFE assumption designed to capture the stronger security guarantees required by our MA-AB(N)IPFE scheme.

We present concrete lattice-based constructions for both primitives supporting subset policies, building upon the framework of [WWW22]. Our schemes are proven to be statically secure in the random oracle model under the standard LWE assumption and the newly introduced assumptions. Additionally, we demonstrate that our MA-AB(N)IPFE scheme can be transformed, via standard modulus switching, into a noiseless MA-ABIPFE scheme that supports exact inner-product functionality consistent with the MA-IPFE syntax in [AGT21,DP23]. This yields the first lattice-based construction of such a primitive. All our schemes support arbitrary polynomial-size attribute policies and are secure in the random oracle model under lattice assumptions with a sub-exponential modulus-to-noise ratio, making them practical candidates for noise-tolerant, fine-grained access control in multi-authority settings.
Vladimir Sarde, Nicolas Debande
ePrint Report
Mitaka is a variant of Falcon, one of the three post-quantum signature schemes selected by NIST for standardization. Introduced in 2021, Mitaka offers a simpler, parallelizable, and maskable version of Falcon. Our article focuses on its physical security, and our results are threefold. First, we enhance a known profiled side-channel attack on an unprotected Mitaka implementation by a factor of 512. Second, we analyze the masked implementation of Mitaka described in the reference article, which incorporates a different sampler and a protective gadget. We expose the first side-channel flaw in this sampler, which makes it possible to break the masked implementation with a first-order side-channel attack. In this scenario, the key can be recovered using only three times more traces than the attack on the unprotected implementation. Finally, we discuss and recommend new countermeasures to mitigate these attacks.
Rafael del Pino, Guilhem Niot
ePrint Report
Threshold signatures improve upon digital signatures by splitting the trust and robustness among multiple parties. In a (T, N) threshold signature, any set of T parties can produce a signature, but no set of fewer than T parties can do so. Many such constructions are now available in the pre-quantum setting, but post-quantum threshold schemes remain heavyweight, with the state of the art exhibiting signature sizes that are still an order of magnitude larger than post-quantum digital signatures.

We propose a novel, very efficient threshold signature scheme with a signature size close to that of a single Dilithium signature for any threshold T of at most 8 users. Our construction reduces to well-studied problems (MLWE and SelfTargetMSIS) and does not need any heavy machinery, essentially consisting of just T parallel executions of the Dilithium signature scheme. Though the resulting scheme is remarkably simple, many technical difficulties, such as sharing a secret in small shares, or simulating rejecting transcripts, have kept such an efficient threshold signature out of reach until now.
Rafael del Pino, Thomas Espitau, Guilhem Niot, Thomas Prest
ePrint Report
We introduce simple yet efficient lattice-based threshold signatures with identifiable aborts, secure under the MLWE assumption. Central to our construction are novel Distributed Key Generation with Short Shares (sDKG) protocols over lattices, ensuring short shares, small reconstruction coefficients, and linear evaluation of honest shares. This uniquely realizes the "threshold designer's dream": signature shares double as valid signatures under the corresponding secret key shares. With two concrete instantiations (ramp and replicated secret sharings), our schemes match the compact ~10 kB signatures of Threshold Raccoon (del Pino et al., EUROCRYPT 2024). Further, we unveil "Death Star Detection", a new algorithm that strengthens identifiable aborts by efficiently spotting adversarial correlations among short vectors, which is of interest beyond threshold signatures.
Yiwen Gao, Dongliang Cai, Yang Xu, Haibin Kan
ePrint Report
Proximity testing for linear codes is a fundamental problem in coding theory with critical applications in cryptographic protocols, blockchain, and distributed storage systems. This work addresses the proximity gaps for linear codes, a crucial aspect for efficiently verifying whether a batch of codewords is close to a given code. We present a general framework for deriving proximity gaps from the list-decodability properties of the underlying linear code.

Our main result shows that if a code $C\subseteq \mathbb{F}_q^n$ is $(p,L)$-list-decodable, then the probability that a random combination of a batch of $t$ codewords containing a $\delta$-far codeword (for $\delta\le 1-\sqrt{1-p+\varepsilon}$) remains $\delta$-far from $C$ is bounded by $O(\frac{tL^2pn}{q}+\frac{t}{\varepsilon q})$. This result also establishes a form of (mutual) correlated agreement for linear codes, which can be used to strengthen soundness analyses in protocols that rely on proximity testing, thereby reducing query complexity and enabling practical instantiations over smaller finite fields. In particular, we apply our main result to randomly punctured Reed–Solomon codes and folded Reed–Solomon codes—both of which are known to achieve list-decodability up to capacity—and derive linear proximity gaps for these families under the Johnson bound.
Michał Wroński, Łukasz Dzierzkowski, Mateusz Leśniak, Ewa Syta
ePrint Report
We introduce a unified approach to quantum cryptanalysis that reduces all \emph{standard abelian hidden subgroup problems} (SAHSPs), including integer factorization, discrete logarithm in finite fields (DLP), and discrete logarithm on elliptic curves, to a single core problem: the \emph{elliptic curve discrete logarithm problem} (ECDLP). This reduction forms the foundation of a framework for quantum cryptanalysis based on ECDLP. At the core of this framework is a \emph{semi-agnostic quantum algorithm} for solving ECDLP, which requires only partial knowledge of the problem's group structure. By operating on singular curves, we can translate problems such as integer factorization and finite field DLP into the ECDLP. Leveraging the semi-agnostic algorithm for ECDLP, we propose a \emph{universal, reconfigurable quantum circuit} design that can solve any SAHSP instance by executing the ECDLP solver with appropriate curve and field parameters, without requiring problem-specific customization. We prove that solving ECDLP in this model is sufficient to solve all instances of SAHSPs, establishing ECDLP as a \emph{complete problem} for this class. These results unify the structure of Shor's algorithm across all SAHSPs, eliminating the need for custom quantum circuits for each problem. This demonstrates the practical feasibility of universal quantum cryptanalysis and underscores the urgency of transitioning to cryptographic systems that remain secure against quantum-capable adversaries.
Sicheng Wei, Jingwei Hu
ePrint Report
This paper presents an efficient protocol for private set intersection in a setting with multiple set owners and a semi-honest cloud server. The core idea is to reduce the intersection computation to secure operations over Bloom filters, enabling both scalability and efficiency. By leveraging this transformation, our protocols achieve strong privacy guarantees while minimizing computation and communication overhead.
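To make the Bloom-filter reduction concrete, the plaintext view of the computation is sketched below: each owner's set is encoded as a Bloom filter, and intersection becomes a batch of membership tests. The protocol in the paper performs the analogous operations obliviously at the semi-honest server, which is not modelled here; the filter parameters are illustrative assumptions.

    # Plaintext sketch of the Bloom-filter view of set intersection.
    import hashlib

    M, K = 1 << 12, 4               # assumed filter size (bits) and hash count

    def positions(item: bytes):
        """K filter positions for an item, derived from a single hash."""
        h = hashlib.sha256(item).digest()
        return [int.from_bytes(h[4 * i:4 * i + 4], "big") % M for i in range(K)]

    def bloom(items):
        bits = bytearray(M)
        for it in items:
            for pos in positions(it):
                bits[pos] = 1
        return bits

    def probable_intersection(my_items, their_filter):
        """Elements whose positions are all set in the other owner's filter
        (false positives possible, no false negatives)."""
        return [it for it in my_items if all(their_filter[p] for p in positions(it))]

    alice = {b"apple", b"pear", b"plum"}
    bob = {b"pear", b"plum", b"cherry"}
    print(probable_intersection(alice, bloom(bob)))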
Baraq Ghaleb, William J Buchanan
ePrint Report
Homomorphic encryption provides many opportunities for privacy-aware processing, including methods related to machine learning. Many existing cryptographic methods have been shown to be susceptible to side-channel attacks, where the implementation of the cryptographic method can reveal information about the private keys used, the result, or even the original plaintext. An example is the processing of the RSA exponent using the Montgomery method, where 0 and 1 bits differ in their processing time during modular exponentiation. FHE typically relies on lattice methods, which can present particular problems in their implementation with respect to side-channel leakage. This paper outlines a range of weaknesses in FHE implementations related to side-channel analysis, providing a categorization of side-channel attacks, some case studies, and mitigation strategies.
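The exponentiation leak mentioned above stems from secret-dependent control flow. A minimal square-and-multiply sketch (a generic illustration, not Montgomery arithmetic or any particular library's code) makes the asymmetry between 0 and 1 bits visible.

    # The multiply step runs only for 1 bits of the exponent, so the running
    # time and the operation trace depend on the secret exponent.
    def modexp_leaky(base: int, exp: int, mod: int) -> int:
        result, ops = 1, []
        for bit in bin(exp)[2:]:
            result = (result * result) % mod        # square: always executed
            ops.append("S")
            if bit == "1":
                result = (result * base) % mod      # multiply: 1 bits only
                ops.append("M")
        print("operation trace:", "".join(ops))     # trace reveals the exponent
        return result

    assert modexp_leaky(7, 0b101101, 1009) == pow(7, 0b101101, 1009)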
Weishen Zou, Bruno Martin, Thomas Prévost
ePrint Report
We explore the application of the QUBO and Ising models to the integer factorization problem, with implications for the security of public-key algorithms such as RSA. A key contribution is a program that applies existing algorithms to parameterize and simulate integer factorization through an Ising model, replicating previous work. Due to limited access to quantum hardware, we use classical heuristic methods to approximate solutions.
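As an illustration of the kind of encoding involved, a toy QUBO-style cost function for factoring writes candidate odd factors in binary and penalizes $(N - pq)^2$; a brute-force minimization stands in for the annealer or heuristic solver. The target $N$ and the bit widths are toy choices, not the parameterization used in the paper.

    # Toy illustration of casting factoring as minimization of (N - p*q)^2 over
    # binary variables. Brute force stands in for the Ising/QUBO solver.
    from itertools import product

    N = 143                             # 11 * 13, toy semiprime
    P_BITS, Q_BITS = 3, 3               # bits encoding (p-1)/2 and (q-1)/2

    def decode(bits):
        """Map binary variables to a pair of odd candidate factors."""
        p = 1 + 2 * sum(b << i for i, b in enumerate(bits[:P_BITS]))
        q = 1 + 2 * sum(b << i for i, b in enumerate(bits[P_BITS:]))
        return p, q

    def cost(bits):
        p, q = decode(bits)
        return (N - p * q) ** 2         # zero exactly at a valid factorization

    best = min(product((0, 1), repeat=P_BITS + Q_BITS), key=cost)
    print(decode(best), cost(best))     # e.g. (11, 13) with cost 0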
Yanpei Guo, Alex Luoyuan Xiong, Wenjie Qu, Jiaheng Zhang
ePrint Report
Scalable data availability (DA) is essential for high-throughput, decentralized blockchains, enabling lightweight nodes to verify block availability without incurring the prohibitive costs of full data replication. Reed-Solomon (RS) code commitment schemes underpin modern DA protocols by ensuring that dispersed data fragments can be verified as part of a valid codeword, even in the presence of malicious block producers. However, state-of-the-art schemes such as FRIDA (Crypto'24), while computationally efficient, incur substantial per-node communication overhead at the scale of thousands of network nodes, often 5.7$\times$ the size of the actual data fragment.

In this work, we introduce CONDA, a new interleaved RS code commitment scheme that significantly reduces the communication overhead while retaining FRIDA's prover efficiency. At its core is a novel evaluation consolidation technique for polynomial commitment schemes (PCS) that reduces the problem of proving $n$ evaluations at fixed points (one per verifier) to a single evaluation at a random point, using logarithmic communication. This technique is lightweight, hash-based, and compatible with any multilinear PCS.

To further optimize for DA applications, we introduce LightLigero, a new multilinear PCS that improves upon DeepFold (Sec'25) with $O(\log n)$ reduction in proof size and only $30\%$ slowdown in prover time. Combining CONDA and LightLigero yields an efficient DA scheme for thousands of nodes.

Our implementation demonstrates a 4$\times$ reduction in communication cost compared to FRIDA, while incurring only a 25\% increase in prover time. It also achieves near-best prover time and near-best communication cost simultaneously among all code commitment schemes. CONDA also offers at least $3\times$ smaller proofs and $4\times$ faster provers than state-of-the-art verifiable secret sharing constructions such as ZXH+22 (Sec'22) and PolyFRIM (Sec'24).
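To ground the role of the RS code in data availability: the block is encoded as a polynomial, each node stores one evaluation, and any $k$ fragments suffice to rebuild the data. The sketch below shows plain, unauthenticated Reed-Solomon dispersal over a small prime field; the commitment and proof layer that CONDA actually contributes is not modelled.

    # Plain Reed-Solomon dispersal: interpret k data symbols as evaluations of a
    # degree-(k-1) polynomial at 1..k, hand node i the evaluation at i, and
    # rebuild the data from any k fragments by Lagrange interpolation.
    P = 2**31 - 1                       # small prime field for illustration

    def lagrange_eval(points, x):
        """Evaluate the unique degree-<k polynomial through `points` at x (mod P)."""
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    def encode(data, n):
        """Produce n fragments (i, f(i)) from k data symbols."""
        pts = list(zip(range(1, len(data) + 1), data))
        return [(x, lagrange_eval(pts, x)) for x in range(1, n + 1)]

    def reconstruct(fragments, k):
        """Any k fragments determine the polynomial; read the data back at 1..k."""
        pts = fragments[:k]
        return [lagrange_eval(pts, x) for x in range(1, k + 1)]

    data = [7, 11, 13, 17]                          # k = 4 data symbols
    frags = encode(data, n=8)                       # disperse to 8 nodes
    assert reconstruct(frags[3:7], k=4) == data     # any 4 fragments suffice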
Hiroki Okada, Rachel Player, Simon Pohmann
ePrint Report
Implementations of modern FHE schemes are available in various highly-optimized libraries. Many of these libraries are designed to allow developers who may not have deep expertise in FHE to build fast and secure privacy-preserving applications. To support such users, the API of these libraries often hides the internals of the schemes in question from the user. However, this design choice makes it hard for users of these libraries to modify existing schemes, or implement new ones; work that is often valuable when conducting research on FHE schemes.

We present our new Rust library Fheanor, which aims to facilitate such research on FHE schemes. The core target user is an FHE researcher, rather than an application developer. Most importantly, the design of Fheanor is very modular, and mirrors the mathematical structure of the available FHE schemes. By exposing the mathematical structure, but still hiding implementation details, it is easy to modify or extend the functionality of FHE schemes implemented in the library and still preserve high performance. Indeed, Fheanor demonstrates performance that is close to that of HElib or SEAL, with the potential for optimizations in the future.

Fheanor implements several features that have not, or have only rarely, been implemented in previous libraries. These include non-power-of-two cyclotomic rings, single-RNS based ring arithmetic, the CLPX/GBFV scheme, and bootstrapping for BFV and BGV.

In addition, this paper presents new theoretical contributions that are also implemented in Fheanor. The first is an extension of optimal digit extraction circuits, used in BFV/BGV bootstrapping, to the case $2^{23}$. The second is a more efficient algorithm for computing the trace in the non-power-of-two cyclotomic setting.