International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

24 September 2023

Nina Bindel, Xavier Bonnetain, Marcel Tiepelt, Fernando Virdia
ePrint Report
In 2018, Aono et al. (ASIACRYPT 2018) proposed using quantum backtracking algorithms (Montanaro, TOC 2018; Ambainis and Kokainis, STOC 2017) to speed up lattice point enumeration. Quantum lattice sieving algorithms had already been proposed (Laarhoven et al., PQCRYPTO 2013) and shown to provide an asymptotic speedup over their classical counterparts, but also to lose competitiveness at dimensions relevant to cryptography once practical considerations about quantum computer architecture were taken into account (Albrecht et al., ASIACRYPT 2020). Aono et al.’s work argued that quantum walk speedups can be applied to lattice enumeration, achieving at least a quadratic asymptotic speedup à la Grover search while not requiring exponential amounts of quantum-accessible classical memory, as is the case for sieving. In this work, we explore how to lower bound the cost of using Aono et al.’s techniques on lattice enumeration with extreme cylinder pruning, assuming a limit on the maximum depth that a quantum computation can reach without decohering, with the objective of better understanding the practical applicability of quantum backtracking in lattice cryptanalysis.
Nilanjan Datta, Avijit Dutta, Samir Kundu
ePrint Report
In ASIACRYPT'17, Naito proposed a beyond-birthday-bound variant of the LightMAC construction, called LightMAC_Plus, which is built on three independently keyed $n$-bit block ciphers, and showed that the construction achieves $2n/3$-bit PRF security. Later, Kim et al. claimed (without giving any formal proof) that its security bound can be improved to $2^{3n/4}$. In FSE'18, Datta et al. proposed a two-keyed variant of the LightMAC_Plus construction, called 2k-LightMAC_Plus, which is built on two independently keyed $n$-bit block ciphers, and showed that the construction achieves $2n/3$-bit PRF security. In this paper, we show a tight security bound on the 2k-LightMAC_Plus construction. In particular, we show that it provably achieves security up to $2^{3n/4}$ queries. We also exhibit a matching attack on the construction with the same query complexity, thereby establishing the tightness of the security bound. To the best of our knowledge, this is the first work that provably shows a message-length-independent $3n/4$-bit tight security bound on a block cipher based variable input length PRF with two block cipher keys.
Long Chen, Hui Guo, Ya-Nan Li, Qiang Tang
ePrint Report
Periodic key rotation is a widely used technique to enhance key compromise resilience. Updatable encryption (UE) schemes provide an efficient approach to key rotation, ensuring post-compromise security for both confidentiality and integrity. However, these UE techniques cannot be directly applied to frequently updated databases due to the risk of a malicious server inducing the client to accept an outdated version of a file instead of the latest one.

To address this issue, we propose a scheme called Updatable Secure Storage (USS), which provides a secure and key updatable solution for dynamic databases. USS ensures both data confidentiality and integrity, even in the presence of key compromises. By using efficient key rotation and file update procedures, the communication costs of these operations are independent of the size of the database. This makes USS particularly well-suited for managing large and frequently updated databases with secure version control. Unlike existing UE schemes, the integrity provided by USS holds even when the server learns the current secret key and intentionally violates the key update protocol.
Gil Segev, Amit Sharabi, Eylon Yogev
ePrint Report
We propose a new notion of knowledge soundness, denoted rogue-instance security, for interactive and non-interactive batch knowledge proofs. Our notion, inspired by the standard notion of rogue-key security for multi-signature schemes, considers a setting in which a malicious prover is provided with an honestly-generated instance $x_1$, and may then be able to maliciously generate related "rogue" instances $x_2,\ldots,x_k$ for convincing a verifier in a batch knowledge proof of corresponding witnesses $w_1,\ldots,w_k$ for all $k$ instances - without actually having knowledge of the witness $w_1$ corresponding to the honestly-generated instance. This setting provides a powerful security guarantee for batch versions of a wide variety of practically-relevant protocols, such as Schnorr's protocol and similar ones.

We present a highly efficient generic construction of a batch proof-of-knowledge applicable to any algebraic Sigma protocol. The algebraic property refers to a homomorphic structure of the underlying group, and the class includes Schnorr's protocol among others. We provide an almost tight security analysis for our generic batch protocol, which significantly improves upon the previously known security bounds even for the specific case of the batch Schnorr protocol. We extend our results beyond algebraic Sigma protocols. We analyze the rogue-instance security of a general batch protocol with plus-one special soundness (a generalization of standard special soundness) and achieve improved security bounds in the generic case.

Our results use a particular type of high-moment assumptions introduced by Rotem and Segev (CRYPTO 2021). These assumptions consider the hardness of a relation against algorithms with bounded expected running time. Although Rotem and Segev introduced these assumptions, they did not provide evidence to support their hardness. To substantiate and validate the high-moment assumptions, we present a new framework for assessing the concrete hardness of cryptographic problems against oracle algorithms with bounded expected runtime. Our framework covers generic models, including the generic group model, random oracle model, and more. Utilizing our framework, we achieve the first hardness result for these high-moment assumptions. In particular, we establish the second-moment hardness of the discrete-logarithm problem against expected-time algorithms in the generic group model.
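For readers less familiar with the underlying object, the following is a minimal sketch of the single-instance Schnorr identification protocol, the prototypical algebraic Sigma protocol that the batch constructions above generalize. The toy group parameters and variable names are chosen purely for illustration and are not taken from the paper.

# Minimal sketch of the (single-instance) Schnorr Sigma protocol over a
# prime-order subgroup of Z_p^*. Toy parameters, for illustration only.
import secrets

p = 23                      # toy modulus (p = 2q + 1)
q = 11                      # prime order of the subgroup
g = 4                       # generator of the order-q subgroup mod p

x = secrets.randbelow(q)    # prover's witness (discrete log)
X = pow(g, x, p)            # public instance X = g^x mod p

# Commit: prover picks a random nonce r and sends R = g^r.
r = secrets.randbelow(q)
R = pow(g, r, p)

# Challenge: verifier sends a random c.
c = secrets.randbelow(q)

# Respond: prover sends z = r + c*x mod q.
z = (r + c * x) % q

# Verify: g^z == R * X^c (mod p).
assert pow(g, z, p) == (R * pow(X, c, p)) % p
print("Schnorr proof verified")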
Yincen Chen, Nana Zhang, Xuanyu Liang, Ling Song, Qianqian Yang, Zhuohui Feng
ePrint Report
GIFT is a family of lightweight block ciphers based on the SPN structure and composed of two versions named GIFT-64 and GIFT-128. In this paper, we reevaluate the security of GIFT-64 against the rectangle attack under the related-key setting. Investigating the previous rectangle key recovery attack on GIFT-64, we obtain the core idea for improving the attack: trading off the time complexity of each attack phase. We flexibly guess part of the involved subkey bits to balance the time cost of each phase so that the overall time complexity of the attack is reduced. Moreover, the reused subkey bits are identified according to the linear key schedule of GIFT-64 and bring additional advantages to our attacks. Furthermore, we incorporate the above ideas and propose a dedicated MILP model for finding the best rectangle key recovery attack on GIFT-64. As a result, we obtain improved rectangle attacks on 26-round GIFT-64, which are the best attacks on it in terms of time complexity so far.
Karim Eldafrawy, Nicholas Genise, Stanislaw Jarecki
ePrint Report
Von Ahn, Hopper and Langford introduced the notion of steganographic, a.k.a. covert, computation to capture distributed computation in which attackers must not be able to distinguish honest parties from entities emitting random bitstrings. This indistinguishability should hold for the duration of the computation, except for what is revealed by the intended outputs of the computed functionality. An important case of covert computation is mutually authenticated key exchange, a.k.a. mutual authentication. Mutual authentication is a fundamental primitive that often precedes more complex secure protocols used for distributed computation. However, standard authentication implementations are not covert, which allows a network adversary to target or block parties who engage in authentication. Therefore, mutual authentication is one of the premier use cases of covert computation and has numerous real-world applications, e.g., enabling authentication over steganographic channels in a network controlled by a discriminatory entity.

We improve on the state of the art in covert authentication by presenting a protocol that retains covertness and security under concurrent composition, has minimal message complexity, and reduces protocol bandwidth by an order of magnitude compared to previous constructions. To model the security of our scheme we develop a UC model which captures standard features of secure mutual authentication but extends them to covertness. We prove our construction secure in this UC model. We also provide a proof-of-concept implementation of our scheme.
Qun Liu, Bart Preneel, Zheng Zhao, Meiqin Wang
ePrint Report
Quantum computers hold the potential to solve problems that are intractable for classical computers, thereby driving increased interest in the development of new cryptanalytic techniques. In NIST's post-quantum standardization process, the security categories are defined by the cost of quantum key search against AES. However, the cost estimates provided by Grassl et al. for this search are high. NIST has acknowledged that these initial classifications should be approached cautiously, since the costs of the most advanced attacks can be significantly reduced. Therefore, accurate resource estimates are crucial for evaluating the security of ciphers against quantum adversaries.

This paper presents a set of generic techniques for implementing AES quantum oracles, which are essential for quantum attacks such as Grover's algorithm. Firstly, we introduce the mixing-XOR technique to reuse ancilla qubits. At ASIACRYPT 2022, Huang et al. proposed an S-box structure with 120 ancilla qubits. We are able to reduce the number of ancilla qubits to 83 without increasing the T-depth. Secondly, we propose a combined pipeline architecture with the share technique to combine the S-box and its reverse, realizing both with only 98 ancilla qubits, a significant reduction of 59% compared to the independent structure. Thirdly, we use a general algorithm to determine the depth of quantum circuits and find an in-place circuit for AES MixColumns with depth 16.

Applying these improvements, we achieve lower quantum depths for the AES circuits, obtaining more precise resource estimates for Grover's algorithm. For AES-128, -192, and -256, we require depths of only 730, 876, and 1,018, respectively. Recently, the community has also focused on the time-space trade-off of quantum circuits for AES. In this regard, we present quantum implementations of AES circuits with a lower DW-cost on the zig-zag architecture. Compared with the circuit proposed by Huang et al., the DW-cost is reduced by 35%.
Helger Lipmaa
ePrint Report
Gentry and Wichs proved that adaptively sound SNARGs for hard languages need non-falsifiable assumptions. Lipmaa and Pavlyk claimed Gentry-Wichs is tight by constructing a non-adaptively sound zk-SNARG FANA for NP from falsifiable assumptions. We show that FANA is flawed. We define and construct a fully algebraic $F$-position-binding vector commitment scheme VCF. We construct a concretely efficient commit-and-prove zk-SNARK Punic, a version of FANA with an additional VCF commitment to the witness. Punic satisfies semi-adaptive black-box $G$-knowledge-soundness, a new natural knowledge-soundness notion for commit-and-prove SNARKs. We use a new proof technique to achieve global consistency using a functional somewhere-extractable commitment scheme to extract vector commitment's local proofs.
Jonathan Bootle, Sebastian Faller, Julia Hesse, Kristina Hostáková, Johannes Ottenhues
ePrint Report
Fuzzy Password-Authenticated Key Exchange (fuzzy PAKE) allows cryptographic keys to be generated from authentication data that is both fuzzy and of low entropy. The strong protection against offline attacks offered by fuzzy PAKE opens an interesting avenue towards secure biometric authentication, typo-tolerant password authentication, and automated IoT device pairing. Previous constructions of fuzzy PAKE are either based on Error Correcting Codes (ECC) or generic multi-party computation techniques such as Garbled Circuits. While ECC-based constructions are significantly more efficient, they rely on multiple special properties of error correcting codes such as maximum distance separability and smoothness. We contribute to the line of research on fuzzy PAKE in two ways. First, we identify a subtle but devastating gap in the security analysis of the currently most efficient fuzzy PAKE construction (Dupont et al., Eurocrypt 2018), allowing a man-in-the-middle attacker to test individual password characters. Second, we provide a new fuzzy PAKE scheme based on ECC and PAKE that provides a built-in protection against individual password character guesses and requires fewer, more standard properties of the underlying ECC. Additionally, our construction offers better error correction capabilities than previous ECC-based fuzzy PAKEs.
Yi Chen, Zhenzhen Bao, Hongbo Yu
ePrint Report
The differential-linear attack is one of the most effective attacks against ARX ciphers. However, two technical problems prevent it from being more effective and having more applications: (1) there is no efficient method to search for good differential-linear approximations, as existing methods either have many constraints or are currently inefficient; (2) the partitioning technique has great potential to reduce the time complexity of the key-recovery attack, but there is no general tool to construct partitions for ARX ciphers. In this work, we make progress on both problems. First, we propose a novel idea for generating new good differential-linear approximations from known ones, based on which we design new search algorithms. Second, we propose a general tool named the partition tree for constructing partitions for ARX ciphers. Based on these new techniques, we present better attacks on two ISO/IEC standards, namely LEA and Speck. For LEA, we present the first 17-round distinguisher, which is 1 round longer than the previous best distinguisher. Furthermore, we present the first key recovery attacks on 17-round LEA-128, 18-round LEA-192, and 18-round LEA-256, which attack 3, 4, and 3 rounds more than the previous best attacks, respectively. For Speck, we find better differential-linear distinguishers for Speck48 and Speck64. The first differential-linear distinguishers for Speck96 and Speck128 are also presented.
Xiang Liu, Ying Gao
ePrint Report
Multi-party private set union (MPSU) allows \(k(k\geq 3)\) parties, each holding a dataset of known size, to compute the union of their sets without revealing any additional information. Although two-party PSU has made rapid progress in recent years, its effective techniques cannot be directly extended to the multi-party setting, since doing so would cause information leakage. Existing MPSU protocols heavily rely on computationally expensive public-key operations or generic secure multi-party computation techniques, which are not scalable.

In this work, we present a new efficient framework of MPSU from multi-party secret-shared shuffle and a newly introduced protocol called multi-query secret-shared private membership test (mq-ssPMT). Our MPSU is mainly based on symmetric-key operations and is secure against any semi-honest adversary that does not corrupt the leader and clients simultaneously. We also propose new frameworks for computing other multi-party private set operations (MPSO), such as the intersection, and the cardinality of the union and the intersection, meeting the same security requirements.

We demonstrate the scalability of our MPSU protocol with an implementation and a comparison with the state-of-the-art MPSU. Experiments show that when computing on datasets of \(2^{10}\) elements, our protocol is \(109\times\) faster than the state-of-the-art MPSU, and the improvement becomes more significant as the set size increases. To the best of our knowledge, ours is the first work to report experiments at large set sizes. For 7 parties with datasets of \(2^{20}\) elements each, our protocol requires only 46 seconds.
Zhuang Shan, Leyou Zhang, Qing Wu, Qiqi Lai
ePrint Report
The main focus of this article is an open problem, namely the Ring-SIS reduction problem. We first utilize a spatial isomorphism approach to reduce the Ring-SIS problem to the classic SIS problem in lattices, thereby indirectly reducing it to the classic SIVP in lattices. This provides, to some extent, theoretical assurance of the hardness of the Ring-SIS problem and its resistance against quantum attacks.

Additionally, we reduce the Ring-LWE problem to the Ring-SIS problem, which guarantees the security of encryption schemes based on Ring-LWE to a certain degree. Finally, this article proves that the difficulty of the Ring-SIS problem and the Ring-LWE problem is relatively average with respect to the spatial dimension or polynomial degree.
Xuan-Thanh Do, Dang-Truong Mac, Quoc-Huy Vu
ePrint Report
Succinct non-interactive zero-knowledge arguments of knowledge (zk-SNARKs) are a type of non-interactive proof system enabling efficient privacy-preserving proofs of membership for NP languages. A great deal of work has studied candidate constructions that are secure against quantum attackers, based either on lattice assumptions or on post-quantum collision-resistant hash functions. In this paper, we propose a code-based zk-SNARK scheme whose security is based on the rank support learning (RSL) problem, a variant of the random linear code decoding problem in the rank metric. Our construction follows the general framework of Gennaro et al. (CCS'18), which is based on square span programs (SSPs). Due to the fundamental differences between the hardness assumptions, our proof of security cannot apply the techniques from the lattice-based constructions; indeed, it distinguishes itself by the use of techniques from coding theory. We also provide the scheme with a set of concrete parameters.
Ali Şah Özcan, Erkay Savaş
ePrint Report
The number theoretic transform (NTT) provides a very efficient method to perform multiplication of very large degree polynomials, which is the most time-consuming operation in fully homomorphic encryption (FHE) schemes and in a class of non-interactive succinct zero-knowledge proof systems such as zk-SNARKs. Efficient modular arithmetic plays an important role in the performance of the NTT and has therefore been studied extensively. The memory access pattern, on the other hand, may play a much greater role, as the NTT execution time is mostly memory-bound due to the large degree polynomials. In this paper, we propose two algorithms for fast computation of the NTT on a class of graphics processing units (GPUs) by optimizing the memory access patterns. We present an approach i) to optimize the number of accesses to slow global memory for thread synchronization, and ii) to make better use of spatial locality in global memory accesses. It turns out that by controlling certain parameters of the CUDA platform for general-purpose GPU computing (GPGPU), such as kernel count, block size, and block shape, we can affect the performance of the NTT. To the best of our knowledge, this work is unique in that it suggests a recipe for selecting optimal CUDA parameters to obtain the best NTT performance for a given polynomial degree. Our implementation results on various GPU devices for all power-of-two polynomial degrees from $2^{12}$ to $2^{28}$ show that our algorithms compare favorably with the other state-of-the-art GPU implementations in the literature with the optimum selection of these three CUDA parameters.
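For context, the sketch below shows the transform whose memory behaviour is being optimized: a naive $O(n^2)$ reference NTT over a toy NTT-friendly prime. The parameters and code are illustrative assumptions only and are unrelated to the paper's GPU implementation.

# Naive reference number theoretic transform (NTT) over Z_q for a toy
# parameter set; real FHE/zk-SNARK settings use much larger q and n.
# Forward transform: A[j] = sum_i a[i] * w^(i*j) mod q, where w is a
# primitive n-th root of unity modulo q (requires n | q - 1).

q = 17          # toy NTT-friendly prime (n = 8 divides q - 1 = 16)
n = 8
w = 2           # 2 has multiplicative order 8 modulo 17

def ntt(a, root):
    return [sum(a[i] * pow(root, i * j, q) for i in range(n)) % q
            for j in range(n)]

def intt(A):
    n_inv = pow(n, -1, q)
    return [(n_inv * x) % q for x in ntt(A, pow(w, -1, q))]

a = [3, 1, 4, 1, 5, 9, 2, 6]
assert intt(ntt(a, w)) == a      # round-trip check
print(ntt(a, w))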
Jonas Meers, Julian Nowakowski
ePrint Report
We define and analyze the Commutative Isogeny Hidden Number Problem which is the natural analogue of the Hidden Number Problem in the CSIDH and CSURF setting. In short, the task is as follows: Given two supersingular elliptic curves \(E_A\), \(E_B\) and access to an oracle that outputs some of the most significant bits of the \({\mathsf{CDH}}\) of two curves, an adversary must compute the shared curve \(E_{AB}={\mathsf{CDH}}(E_A,E_B)\). We show that we can recover \(E_{AB}\) in polynomial time by using Coppersmith's method as long as the oracle outputs \({\frac{13}{24}} + \varepsilon \approx 54\%\) (CSIDH) and \({\frac{31}{41}} + \varepsilon \approx 76\%\) (CSURF) of the most significant bits of the \({\mathsf{CDH}}\), where $\varepsilon > 0$ is an arbitrarily small constant. To this end, we give a purely combinatorial restatement of Coppersmith's method, effectively concealing the intricate aspects of lattice theory and allowing for near-complete automation. By leveraging this approach, we attain recovery attacks with $\varepsilon$ close to zero within a few minutes of computation.
Jianhua Wang, Lu Qin, Baofeng Wu
ePrint Report
In this paper, we improve the cube attack by exploiting low-degree factors of the superpoly w.r.t. certain "special" index sets of cubes (ISoCs). This can be viewed as a special case of the correlation cube attack proposed at Eurocrypt 2018, but under our framework more beneficial equations on the key variables can be obtained in the key-recovery phase. To mount our attack, one faces two challenging problems: (1) effectively recovering the algebraic normal form of the superpoly and extracting its low-degree factors; and (2) efficiently searching for a large quantity of good ISoCs. We bring in new techniques to solve both of them.

First, we propose the variable substitution technique for the middle rounds of a cipher, in which polynomials on the key variables in the algebraic expressions of internal states are substituted by new variables. This improves the computational complexity of superpoly recovery and promises more compact superpolys that can be easily decomposed with respect to the new variables. Second, we propose the vector numeric mapping technique, which seeks a tradeoff between the efficiency of the numeric mapping technique (Crypto 2019) and the accuracy of the monomial prediction technique (Asiacrypt 2020) in the degree evaluation of superpolys. Combined with this technique, a fast pruning method is given and modeled by MILP to filter good ISoCs whose algebraic degree satisfies some fixed threshold. Thanks to automated MILP solvers, it becomes practical to comprehensively search for good cubes across the entire search space.

To illustrate the power of our techniques, we apply all of them to the Trivium stream cipher. As a result, we have recovered the superpolys of three cubes given by Kesarwani et al. in 2020, only to find that they do not have the zero-sum property up to 842 rounds as claimed in their paper. To our knowledge, the previous best practical key recovery attack was on 820-round Trivium with complexity $2^{53.17}$. We put forward 820-, 825- and 830-round practical key-recovery attacks, in which there are $\mathbf{2^{80}\times 87.8\%}$, $\mathbf{2^{80}\times 83\%}$ and $\mathbf{2^{80}\times 65.7\%}$ keys that could be practically recovered, respectively, if we consider $\mathbf{2^{60}}$ as the upper bound for practical computational complexity. Besides, even for computers with computational power not exceeding $\mathbf{2^{52}}$ (resp. $\mathbf{2^{55}}$), we can still recover $\mathbf{58\%}$ (resp. $\mathbf{46.6\%}$) of the keys in the key space for 820 rounds (resp. 830 rounds). Our attacks reach 10 more rounds than the previous best practical attack.
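As a quick sanity check on the figures above, the following snippet merely reproduces the number of recoverable keys implied by the quoted fractions of the $2^{80}$ key space; it is illustrative arithmetic, not part of the attack itself.

# Number of 80-bit Trivium keys recoverable according to the fractions
# reported above (under the practical complexity bound of 2^60).
keyspace = 2 ** 80
for rounds, frac in [(820, 0.878), (825, 0.83), (830, 0.657)]:
    print(f"{rounds} rounds: ~{frac * keyspace:.3e} keys "
          f"({frac:.1%} of the key space)")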
Jingwei Hu
ePrint Report
In this paper, we consider how to use fully homomorphic encryption (FHE) to solve the problem of secure computation over a set intersection, where one party holds a relatively large set of size $N_s$, the other party holds a relatively small set of size $N_r$, and the two collaboratively compute some functionality over their set intersection without revealing any other information. This problem has many applications for online collaboration, for example, fingerprint matching, online dating, and ride sharing.
George Kadianakis, Mary Maller, Andrija Novakovic
ePrint Report
This paper introduces Sigmabus, a technique designed to enhance the efficiency of zero-knowledge circuits by relocating computationally expensive operations outside the circuit. Specifically, Sigmabus focuses on moving elliptic curve group operations, typically proven with expensive non-native field arithmetic, to external computations. By leveraging Sigma protocols, elliptic curve group operations are proven outside the circuit, while additional constraints are applied to the circuit to ensure correct execution of the Sigma protocol. This approach can achieve significant performance improvements in zero-knowledge circuits. This paper presents the Sigmabus protocol along with its security proofs, and demonstrates its practical implications through various use cases.
Valerio Cini, Russell W. F. Lai, Giulio Malavolta
ePrint Report
Succinct arguments allow a prover to convince a verifier of the validity of any statement in a language, with minimal communication and verifier's work. Among other approaches, lattice-based protocols offer solid theoretical foundations, post-quantum security, and a rich algebraic structure. In this work, we present some new approaches to constructing efficient lattice-based succinct arguments. Our main technical ingredient is a new commitment scheme based on vanishing polynomials, a notion borrowed from algebraic geometry. We analyse the security of such a commitment scheme, and show how to take advantage of the additional algebraic structure to build new lattice-based succinct arguments. A few highlights amongst our results are:

- The first recursive folding (i.e. Bulletproofs-like) protocol for linear relations with polylogarithmic verifier runtime. Traditionally, the verifier runtime has been the efficiency bottleneck for such protocols (regardless of the underlying assumptions).
- The first verifiable delay function (VDF) based on lattices, building on a recently introduced sequential relation.
- The first lattice-based \emph{linear-time prover} succinct argument for NP, in the preprocessing model. The soundness of the scheme is based on the (knowledge)-k-R-ISIS assumption [Albrecht et al., CRYPTO'22].
Charlotte Hoffmann, Pavel Hubáček, Chethan Kamath, Tomáš Krňák
ePrint Report
Lucas sequences are constant-recursive integer sequences with a long history of applications in cryptography, both in the design of cryptographic schemes and cryptanalysis. In this work, we study the sequential hardness of computing Lucas sequences over an RSA modulus.

First, we show that modular Lucas sequences are at least as sequentially hard as the classical delay function given by iterated modular squaring proposed by Rivest, Shamir, and Wagner (MIT Tech. Rep. 1996) in the context of time-lock puzzles. Moreover, there is no obvious reduction in the other direction, which suggests that the assumption of sequential hardness of modular Lucas sequences is strictly weaker than that of iterated modular squaring. In other words, the sequential hardness of modular Lucas sequences might hold even in the case of an algorithmic improvement violating the sequential hardness of iterated modular squaring. Moreover, we note that modular Lucas sequences also yield a time-lock puzzle, similar to the classical construction of Rivest, Shamir and Wagner.

Second, we demonstrate the feasibility of constructing practically-efficient verifiable delay functions based on the sequential hardness of modular Lucas sequences. Our construction builds on the work of Pietrzak (ITCS 2019) by leveraging the intrinsic connection between the problem of computing modular Lucas sequences and exponentiation in an appropriate extension field.
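To make the two sequential operations concrete, here is a minimal sketch comparing iterated modular squaring with the evaluation of a modular Lucas sequence at a power-of-two index, using the standard doubling identity $V_{2k} = V_k^2 - 2Q^k$ with $Q = 1$. The toy modulus and the choice $Q = 1$ are assumptions made only for illustration and are not taken from the paper.

# Minimal sketch of the two sequential operations discussed above, over a
# toy RSA modulus (real moduli are 2048+ bits). Both take T inherently
# sequential steps:
#   - iterated squaring:      x -> x^2 (mod N), yielding x^(2^T) mod N
#   - Lucas sequence (Q = 1): v -> v^2 - 2 (mod N), yielding V_{2^T}(P, 1) mod N,
#     via the doubling identity V_{2k} = V_k^2 - 2*Q^k.

N = 3233          # toy modulus 61 * 53
T = 20            # number of sequential steps

def iterated_squaring(x, T, N):
    for _ in range(T):
        x = (x * x) % N
    return x

def lucas_power_of_two(P, T, N):
    v = P % N                     # V_1 = P
    for _ in range(T):
        v = (v * v - 2) % N       # V_{2k} = V_k^2 - 2 when Q = 1
    return v

print(iterated_squaring(5, T, N))     # x^(2^T) mod N
print(lucas_power_of_two(5, T, N))    # V_{2^T}(5, 1) mod N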