International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

17 August 2022

Luan Luan, Chunxiang Gu, Yonghui Zheng, Yanan Shi
ePrint Report
Lattice enumeration is a linear-space algorithm for solving the shortest lattice vector problem (SVP). Extreme pruning is a practical technique for accelerating lattice enumeration, with mature theoretical analysis and practical implementations; comparable work remains to be done for discrete pruning. In this paper, we improve discrete pruned enumeration (DP enumeration) and give a solution to the problem proposed by Leo Ducas and Damien Stehle about the cost estimation of discrete pruning. Our contribution covers the following three aspects:

First, we refine the algorithm from both theoretical and practical aspects. Discrete pruning using the natural number representation relies on a randomness assumption about the distribution of lattice points, and the original analysis contains an obvious paradox. We rectify this assumption to fix the problem and correspondingly modify some details of DP enumeration. We also improve the binary search algorithm for the cell enumeration radius so that it runs in polynomial time, and we refine the cell decoding algorithm. Besides, we propose to use a truncated lattice reduction algorithm, k-tours-BKZ, as the reprocessing method when a round of enumeration fails.

Second, we propose a cost-estimation simulator for DP enumeration. Based on an investigation of lattice basis stability during reprocessing, we give a method to quickly simulate the squared lengths of the Gram-Schmidt orthogonalized basis vectors, and we derive fitted cost-estimation formulae for the sub-algorithms, in CPU cycles, through intensive experiments. The success probability model is also modified based on the rectified assumption. We verify the cost-estimation simulator on medium-size SVP challenge instances, and the simulation results are very close to the actual performance of DP enumeration.
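As a point of reference for what the simulator predicts, the following is a minimal numpy sketch (ours, not code from the paper) that computes the squared Gram-Schmidt lengths $\|\mathbf{b}_i^*\|^2$ directly from a basis; the simulator's job is to forecast how these quantities evolve under reprocessing without recomputing them like this.

import numpy as np

def gram_schmidt_sq_norms(B):
    """Squared lengths ||b_i*||^2 of the Gram-Schmidt vectors of the rows of B."""
    # Reference computation only; the paper's simulator predicts these without recomputing them.
    B = np.array(B, dtype=float)
    n = B.shape[0]
    Bstar = np.zeros_like(B)
    sq = np.zeros(n)
    for i in range(n):
        v = B[i].copy()
        for j in range(i):
            # Subtract the projection of B[i] onto the j-th Gram-Schmidt vector.
            v -= (np.dot(B[i], Bstar[j]) / sq[j]) * Bstar[j]
        Bstar[i] = v
        sq[i] = np.dot(v, v)
    return sq

# Toy 3-dimensional basis (rows are basis vectors).
print(gram_schmidt_sq_norms([[3, 1, 0], [1, 2, 1], [0, 1, 4]]))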

Third, we give a method to calculate the optimal parameter setting to minimize the running time of DP enumeration. We compare the efficiency of our optimized DP enumeration with extreme pruning enumeration in solving SVP challenge instances. The experimental results in medium dimension and simulation results in high dimension both show that the discrete pruning method could outperform extreme pruning. An open-source implementation of DP enumeration with its simulator is also provided.
Peyman Momeni, Sergey Gorbunov, Bohan Zhang
ePrint Report
While blockchain systems are quickly gaining popularity, front-running remains a major obstacle to fair exchange. In this paper, we show how to apply identity-based encryption (IBE) to prevent front-running with minimal bandwidth overhead. In our approach, to decrypt a block of N transactions, the number of messages sent across the network grows only linearly with the size of the decrypting committee, S. That is, to decrypt a set of N transactions sequenced at a specific block, a committee only needs to exchange S decryption shares (independent of N). In comparison, previous solutions are based on threshold decryption schemes, where each transaction in a block must be decrypted separately by the committee, resulting in a bandwidth overhead of N × S. Along the way, we present a model for fair block processing and build a prototype implementation. We show that on a sample of 1000 messages with 1000 validators our system saves 42.53 MB of bandwidth, which is 99.6% less than the standard threshold decryption paradigm.
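A back-of-the-envelope comparison makes the claimed scaling concrete. The share size below is an assumed placeholder (the paper's exact 42.53 MB / 99.6% figures depend on its concrete instantiation), so this sketch only illustrates the S versus N × S gap:

# Rough bandwidth comparison: committee-wide decryption (S shares per block)
# versus per-transaction threshold decryption (N * S shares per block).
# The 48-byte share size is an assumed placeholder, not a figure from the paper.
N = 1000          # transactions in the block
S = 1000          # size of the decrypting committee
SHARE_BYTES = 48  # assumed size of one decryption share

per_block = S * SHARE_BYTES             # one share per committee member for the whole block
per_tx_threshold = N * S * SHARE_BYTES  # one share per member per transaction

saved = per_tx_threshold - per_block
print(f"block-level: {per_block/1e6:.3f} MB, per-transaction: {per_tx_threshold/1e6:.2f} MB")
print(f"saved: {saved/1e6:.2f} MB ({100*saved/per_tx_threshold:.1f}% less)")
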
Öznur MUT SAĞDIÇOĞLU, Serhat Sağdıçoğlu, Ebru Küçükkubaş
ePrint Report
Differential cryptanalysis is one of the most effective methods for evaluating the security level of block ciphers. To apply it, an attacker tries to find a differential or a characteristic with a high probability that distinguishes a block cipher from a random permutation, and then uses it to obtain the secret key. Although it is theoretically possible to compute the probability of a differential for a block cipher, two problems arise when computing it in practice. The first is that it is computationally infeasible to compute the differential probability by trying all plaintext pairs. The second is that the probability of a differential over all choices of the plaintext and key might differ from the probability of the differential over all plaintexts for a fixed key. Thus, to evaluate security against differential cryptanalysis, one must assume both the hypothesis of stochastic equivalence and the Markov model. However, the hypothesis of stochastic equivalence does not hold in general. Indeed, we show on simple ciphers that the hypothesis of stochastic equivalence does not hold. Moreover, we observe that the differential probability is not equal to the expected differential probability. For these results, we study plateau characteristics for a 4-bit cipher and a 16-bit super box. As a result, when considering differential cryptanalysis, one must be careful about the gap between the theoretical and the practical security of block ciphers.
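Readers who want to reproduce the fixed-key phenomena discussed above can start from the difference distribution table of a small S-box. The sketch below computes the DDT of a 4-bit S-box; the example S-box (PRESENT's) is used purely as a familiar stand-in and is not one of the ciphers analysed in the paper.

def ddt(sbox):
    """Difference distribution table: ddt[a][b] = #{x : S(x) ^ S(x ^ a) = b}."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for a in range(n):
        for x in range(n):
            b = sbox[x] ^ sbox[x ^ a]
            table[a][b] += 1
    return table

# 4-bit S-box of PRESENT, used here only as a familiar example.
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

table = ddt(PRESENT_SBOX)
# Best non-trivial transition and its probability, averaged over keys.
best = max((table[a][b], a, b) for a in range(1, 16) for b in range(16))
print(f"best transition: {best[1]:#x} -> {best[2]:#x} with probability {best[0]}/16")
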
Ruiqi Mi, Haodong Jiang, Zhenfeng Zhang
ePrint Report
Resistance to key misuse attacks is a vital property for key encapsulation mechanisms (KEMs) in the NIST PQC standardization process. In a key mismatch attack, the adversary recovers a reused secret key with the help of an oracle $\mathcal{O}$ that indicates whether the shared key matches or not. A key mismatch attack is more powerful when fewer oracle queries are required. A series of works has tried to reduce the number of queries; Qin et al. [ASIACRYPT 2021] gave a systematic approach to finding a lower bound on the number of oracle queries for a category of KEMs, including NIST's third-round candidates Kyber and Saber.

In this paper, we find that the aforementioned bound can be bypassed by combining Qin et al.'s (ASIACRYPT 2021) key mismatch attack with a standard lattice attack. In particular, we explicitly build the relationship between the number of queries to the oracle and the bit security of lattice-based KEMs. Our attack is inspired by the fact that each oracle query reveals partial information about the reused secret, and affects the mean and the covariance parameters of the secret, making the lattice attack easier. In addition, we quantify this effect in theory and estimate the security loss for all NIST second-round candidate KEMs. Specifically, our improved attack reduces the number of queries for Kyber512 by 34%, from 1312 queries with bit security 107 to 865 queries with bit security 32. For Kyber768 and Kyber1024, our improved attack reduces the number of queries by 29% and 27%, respectively, with bit security 32.
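The query model can be pictured with a toy mismatch oracle. The sketch below is a deliberately simplified abstraction of our own (a threshold comparison on one small secret coefficient, not the actual Kyber/Saber oracle): each query leaks roughly one bit of partial information about the reused secret, which is the effect the paper quantifies and then feeds into a lattice attack.

import random

# Toy secret coefficient drawn from a small centered range, as in LWE-style KEMs.
RANGE = list(range(-2, 3))          # {-2, ..., 2}
secret = random.choice(RANGE)

def mismatch_oracle(threshold):
    # Simplified abstraction: the adversary's crafted ciphertext makes the shared
    # keys match exactly when secret >= threshold. Real key mismatch oracles are
    # more intricate; this only illustrates "one bit of information per query".
    return secret >= threshold

# Binary search over the candidate range using the oracle.
lo, hi = min(RANGE), max(RANGE)
queries = 0
while lo < hi:
    mid = (lo + hi + 1) // 2
    queries += 1
    if mismatch_oracle(mid):
        lo = mid
    else:
        hi = mid - 1

print(f"recovered coefficient {lo} (true value {secret}) with {queries} oracle queries")
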
Hao Chung, Elisaweta Masserova, Elaine Shi, Sri AravindaKrishnan Thyagarajan
ePrint Report
The recent work of Chung et al. suggested new formal foundations for side-contract-resilient fair exchange protocols. In this paper, we suggest new constructions that achieve a coalition-resistant Nash equilibrium; moreover, our constructions improve over Chung et al. in the following senses: 1) we achieve optimistic responsiveness; and 2) we reduce the amount of collateral required from the parties.
Haiyan Wang, Penghui (Owen) Liu, Xiaoxiong Zhong, Weizhe Zhang
ePrint Report
Time-sequence-based outsourcing creates continually growing demands on the related access control in cloud computing. In this paper, we propose a practical password-based access control framework for such media cloudization, relying on content control based on the time-sequence attribute and designed over prime-order groups. First, the scheme supports simultaneous multi-keyword search in any monotonic boolean formula, and enables the media owner to control the content encryption key for different time periods using an updatable password. Second, the scheme supports key self-retrievability of the content encryption key, which is more suitable for cloud-based media applications with massive numbers of users. Then, we show that the proposed scheme is provably secure in the standard model. Finally, a detailed performance evaluation shows that the proposed scheme is efficient and practical for cloud-based media applications.
Ray Perlner, John Kelsey, David Cooper
ePrint Report
SPHINCS$^+$ is a stateless hash-based signature scheme that has been selected for standardization as part of the NIST post-quantum cryptography (PQC) standardization process. Its security proof relies on the distinct-function multi-target second-preimage resistance (DM-SPR) of the underlying keyed hash function. The SPHINCS$^+$ submission offered several instantiations of this keyed hash function, including one based on SHA-256. A recent observation by Sydney Antonov on the PQC mailing list demonstrated that the construction based on SHA-256 did not have DM-SPR at NIST category five, for several of the parameter sets submitted to NIST; however, it remained an open question whether this observation leads to a forgery attack. We answer this question in the affirmative by giving a complete forgery attack that reduces the concrete classical security of these parameter sets by approximately 40 bits of security.

Our attack works by applying Antonov's technique to the WOTS$^+$ public keys in SPHINCS$^+$, leading to a new one-time key that can sign a very limited set of hash values. From that key, we construct a slightly altered version of the original hypertree with which we can sign arbitrary messages, yielding signatures that appear valid.
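The generic phenomenon being exploited, namely that (second-)preimage resistance degrades when many targets can be attacked at once, is easy to demonstrate at toy scale. The sketch below is only an illustration of that multi-target speed-up with a truncated hash; it is not the SHA-256-specific structure Antonov observed, nor the hypertree forgery itself.

import hashlib, os

BITS = 20
MASK = (1 << BITS) - 1

def h(x: bytes) -> int:
    """SHA-256 truncated to BITS bits; a toy hash used only to count trials."""
    return int.from_bytes(hashlib.sha256(x).digest()[:4], "big") & MASK

# Build T target digests (as if from T different one-time keys / messages).
# Generic multi-target illustration; not the attack from the paper.
T = 256
targets = {h(b"target-%d" % i) for i in range(T)}

trials = 0
while True:
    trials += 1
    if h(os.urandom(16)) in targets:
        break

print(f"hit one of {T} targets after {trials} trials "
      f"(expected about 2^{BITS}/{T} = {2**BITS // T})")
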
Elette Boyle, Niv Gilboa, Yuval Ishai, Victor I. Kolobov
ePrint Report
A distributed point function (DPF) is a cryptographic primitive that enables compressed additive sharing of a secret unit vector across two or more parties. Despite growing ubiquity within applications and notable research efforts, the best 2-party DPF construction to date remains the tree-based construction from (Boyle et al., CCS'16), with no significantly new approaches since.
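To fix the functionality: the trivial, uncompressed way to additively share a point function $f_{\alpha,\beta}$ over a small domain looks as follows (this toy sketch of ours deliberately ignores key size, which is the entire point of a DPF; real constructions compress these keys).

import random

P = 2**31 - 1  # a prime modulus for the additive group of shares

def naive_dpf_share(alpha, beta, domain_size):
    """Additively share the point function f(alpha)=beta, f(x)=0 elsewhere.
    This is the trivial O(domain_size) sharing that real DPFs compress."""
    k0 = [random.randrange(P) for _ in range(domain_size)]
    k1 = [(-k0[x]) % P for x in range(domain_size)]
    k1[alpha] = (k1[alpha] + beta) % P
    return k0, k1

def eval_share(key, x):
    return key[x]

k0, k1 = naive_dpf_share(alpha=5, beta=42, domain_size=16)
for x in range(16):
    assert (eval_share(k0, x) + eval_share(k1, x)) % P == (42 if x == 5 else 0)
print("shares reconstruct the point function on every input")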

We present a new framework for 2-party DPF construction, which applies in the setting of feasible (polynomial-size) domains. This captures in particular all DPF applications in which the keys are expanded to the full domain. Our approach is motivated by a strengthened notion we put forth, of a programmable DPF (PDPF), in which a short, input-independent "offline" key can be reused for sharing many point functions.

* PDPF from OWF: We construct a PDPF for feasible domains from the minimal assumption that one-way functions exist, where the second "online" key size is polylogarithmic in the domain size $N$.

Our approach offers multiple new efficiency features and applications:

* Privately puncturable PRFs: Our PDPF gives the first OWF-based privately puncturable PRFs (for feasible domains) with sublinear keys.

* $O(1)$-round distributed DPF Gen: We obtain a (standard) DPF with polylog-size keys that admits an analog of Doerner-shelat (CCS'17) distributed key generation, requiring only $O(1)$ rounds (versus $\log N$).

* PCG with 1 short key: Compressing useful correlations for secure computation, where one key is of minimal size. This provides up to exponential communication savings in some application scenarios.
Diana Davidova, Nikolay Kaleyski
ePrint Report
We describe how any function over a finite field $\mathbb{F}_{p^n}$ can be represented in terms of the values of its derivatives. In particular, we observe that a function of algebraic degree $d$ can be represented uniquely through the values of its derivatives of order $(d-1)$ up to the addition of terms of algebraic degree strictly less than $d$. We identify a set of elements of the finite field, which we call the degree $d$ extension of the basis, with the property that for any choice of values for the elements in this set, there exists a function of algebraic degree $d$ whose values match the given ones. We discuss how to reconstruct a function from the values of its derivatives, and discuss the complexity involved in transitioning between the truth table of the function and its derivative representation. We then specialize to the case of quadratic functions, and show how to directly convert between the univariate and derivative representations without going through the truth table. We thus generalize the matrix representation of quadratic vectorial Boolean functions due to Yu et al. to the case of arbitrary characteristic. We also show how to characterize quadratic planar functions using the derivative representation. Based on this, we adapt the method of Yu et al. for searching for quadratic APN functions with prime field coefficients to the case of planar DO functions. We use this method to find all such functions (up to CCZ-equivalence) over $\mathbb{F}_{3^n}$ for $n \le 7$. We conclude that the currently known planar DO polynomials cover all possible cases for $n \le 7$. We find representatives simpler than the known ones for the Zhou-Pott, Dickson, and Lunardon-Marino-Polverino-Trombetti-Bierbrauer families for $n = 6$. Finally, we discuss the computational resources that would be needed to push this search to higher dimensions.
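The basic object throughout is the discrete derivative $D_a f(x) = f(x+a) - f(x)$. The sketch below computes first- and second-order derivatives of a function given by its truth table over the prime field $\mathbb{Z}_p$, used here as a simple stand-in for $\mathbb{F}_{p^n}$ (a genuine extension field would additionally need arithmetic in a chosen basis); it is enough to check, for example, that a quadratic function has constant second-order derivatives.

P = 3  # work over the prime field Z_3 as a stand-in for F_{3^n}

def derivative(f, a):
    """First-order derivative D_a f(x) = f(x + a) - f(x), truth-table form over Z_P."""
    return [(f[(x + a) % P] - f[x]) % P for x in range(P)]

def second_derivative(f, a, b):
    """Second-order derivative D_b D_a f."""
    return derivative(derivative(f, a), b)

# f(x) = 2x^2 + x over Z_3, given as a truth table.
f = [(2 * x * x + x) % P for x in range(P)]

for a in range(1, P):
    for b in range(1, P):
        d2 = second_derivative(f, a, b)
        # For a function of algebraic degree 2, every second-order derivative is constant.
        assert len(set(d2)) == 1
print("all second-order derivatives of the quadratic example are constant")
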
Zhenzhen Bao, Jian Guo, Shun Li, Phuong Pham
ePrint Report
In this work, we evaluate the security of Merkle-Damgård (MD) hash functions and their combiners (the XOR and concatenation combiners) in quantum settings. Two main quantum scenarios are considered: one where a substantial amount of cheap quantum random access memory (qRAM) is available, and one where qRAM is limited and expensive to access. We present generic quantum attacks on the MD hash functions and hash combiners, and carefully analyze the complexities under both quantum scenarios. The securities considered are fundamental requirements for hash functions, including resistance against collision and (second-)preimage attacks. The results are consistent with the conclusions in the classical setting, that is, the considered resistances of the MD hash functions and their combiners are far less than ideal, despite the significant differences in the expected security bounds between the classical and quantum settings. In particular, the generic attacks can be improved significantly using quantum computers under both scenarios. These results serve as an indication that classical hash constructions require careful security re-evaluation before being deployed in post-quantum cryptographic schemes.
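For concreteness, the two combiners studied are simply the following. The sketch instantiates $H_1$ and $H_2$ with SHA-256 and SHA3-256 from the standard library purely as placeholders; the paper's analysis concerns generic MD hash functions rather than these particular instances.

import hashlib

# H1, H2 below are placeholders only; the paper treats generic MD hash functions.
def xor_combiner(m: bytes) -> bytes:
    """XOR combiner: H1(M) xor H2(M)."""
    h1 = hashlib.sha256(m).digest()
    h2 = hashlib.sha3_256(m).digest()
    return bytes(a ^ b for a, b in zip(h1, h2))

def concat_combiner(m: bytes) -> bytes:
    """Concatenation combiner: H1(M) || H2(M)."""
    return hashlib.sha256(m).digest() + hashlib.sha3_256(m).digest()

msg = b"example message"
print("XOR combiner:   ", xor_combiner(msg).hex())
print("concat combiner:", concat_combiner(msg).hex())
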
Jian Guo, Shun Li, Guozhen Liu, Phuong Pham
ePrint Report
In ToSC'20, a new approach combining a Mixed-Integer Linear Programming (MILP) tool and a Constraint Programming (CP) tool to search for boomerang distinguishers was proposed, and it was later used for rebound attacks at ASIACRYPT'21 and CRYPTO'22. In this work, we extend these techniques to mount collision attacks on the SKINNY-128-256 MMO hashing mode in classical and quantum settings. The first 17-round (and 15-round) free-start collision attacks on this variant of the SKINNY hashing mode are presented. Moreover, one more round of the inbound phase is covered, leading to the best existing classical free-start collision attack, on 19 rounds of the SKINNY-128-384 MMO hashing mode.
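The hashing mode in question is Matyas-Meyer-Oseas (MMO) chaining, $H_i = E_{H_{i-1}}(M_i) \oplus M_i$. The sketch below shows only that chaining structure; the "block cipher" is a keyed SHA-256-based stand-in (not SKINNY, and not invertible), since the mode-level structure is what matters for following the attack setting.

import hashlib

BLOCK = 16  # 128-bit blocks

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for E_K(X): a keyed PRF built from SHA-256, truncated to one block.
    # Not invertible and not a real block cipher; only the MMO chaining matters here.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def mmo_hash(message: bytes, iv: bytes = bytes(BLOCK)) -> bytes:
    # Pad with a single 0x80 byte and zeros up to a block boundary (toy padding).
    padded = message + b"\x80" + bytes((-len(message) - 1) % BLOCK)
    h = iv
    for i in range(0, len(padded), BLOCK):
        m = padded[i:i + BLOCK]
        # MMO compression: H_i = E_{H_{i-1}}(M_i) xor M_i
        h = bytes(a ^ b for a, b in zip(toy_block_cipher(h, m), m))
    return h

print(mmo_hash(b"hello MMO mode").hex())
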
Jonathan Bootle, Alessandro Chiesa, Ziyi Guan, Siqi Liu
ePrint Report
Interactive oracle proofs (IOPs) are a generalization of probabilistically checkable proofs that can be used to construct succinct arguments. Improvements in the efficiency of IOPs lead to improvements in the efficiency of succinct arguments. The key efficiency goals are achieving provers that run in linear time and verifiers that run in sublinear time, where the time complexity is with respect to the (nondeterministic) time complexity of the proved computation.

For any given finite field $\mathbb{F}$, we construct IOPs for the correctness of (nondeterministic) arithmetic computations over $\mathbb{F}$ with linear-time prover and polylogarithmic query complexity. Specifically, our IOPs work for the NP-complete language R1CS, which in particular captures arithmetic circuit satisfiability, and for the algebraic automata problem. The IOPs imply succinct arguments for (nondeterministic) arithmetic computations over any finite field with linear-time proving (given black-box access to a linear-time collision-resistant hash function). The argument for algebraic automata also achieves sublinear verification time.
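For readers unfamiliar with R1CS: an instance is given by matrices $A, B, C$ over $\mathbb{F}$, and a vector $z$ (a leading 1 followed by the public values and witness) satisfies it when $(Az) \circ (Bz) = Cz$ entrywise. A minimal satisfiability check, on a tiny instance of our own encoding $x \cdot x = 9$, looks as follows.

import numpy as np

P = 97  # a small prime field for the toy example

def r1cs_satisfied(A, B, C, z):
    """Check (A z) * (B z) == C z entrywise over F_P."""
    Az = (A @ z) % P
    Bz = (B @ z) % P
    Cz = (C @ z) % P
    return np.array_equal((Az * Bz) % P, Cz)

# Constraints over z = (1, x, y): x * x = y, and y * 1 = 9 (the public value).
A = np.array([[0, 1, 0], [0, 0, 1]])
B = np.array([[0, 1, 0], [1, 0, 0]])
C = np.array([[0, 0, 1], [9, 0, 0]])
z = np.array([1, 3, 9])   # witness x = 3

print(r1cs_satisfied(A, B, C, z))                     # True
print(r1cs_satisfied(A, B, C, np.array([1, 4, 9])))   # False: 4*4 != 9
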

The construction leverages recent applications of reverse-multiplication-friendly embeddings and precomputation techniques to amortize the cost of expensive operations. These tools enable us to overcome a key limitation in prior works that required the field $\mathbb{F}$ to be large.
Sayandeep Saha, Mustafa Khairallah, Thomas Peyrin
ePrint Report
Implementation-based attacks are a major concern for modern cryptography. For symmetric-key cryptography, a significant amount of exploration has taken place in this regard for primitives such as block ciphers. Concerning symmetric-key operating modes, such as Authenticated Encryption with Associated Data (AEAD), the state of the art mainly addresses passive Side-Channel Attacks (SCA) in the form of leakage-resilient cryptography. So far, only a handful of works address Fault Attacks (FA) in the context of AEADs, especially concerning the fundamental properties – integrity and confidentiality. In this paper, we address this gap by exploring mode-level issues arising due to FAs. We emphasize that FAs can be fatal even in cases where the adversary does not aim to extract the long-term secret, but rather tries to violate the basic security requirements (integrity and confidentiality). Notably, we show novel integrity attack examples on state-of-the-art AEAD constructions and even on a prior fault-resilient AEAD construction called SIV$. On the constructive side, we first present new security notions of fault-resilience for PRFs (frPRF), MACs (frMAC) and AEADs (frAE); the latter can be seen as an improved version of the notion introduced by Fischlin and Günther at CT-RSA'20. Then, we propose new constructions to turn a frPRF into a fault-resilient MAC frMAC (hash-then-frPRF) and into a fault-resilient AEAD frAE (MAC-then-Encrypt-then-MAC, or MEM).
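To make the mode-level discussion concrete, here is one generic reading of a MAC-then-Encrypt-then-MAC (MEM) composition, built from HMAC and a toy counter-mode keystream. It is a structural sketch under our own simplifying assumptions, not the paper's frAE construction or its security argument.

import hmac, hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream from SHA-256 (illustrative only, not a vetted cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def mem_encrypt(k_mac1, k_enc, k_mac2, nonce, ad, msg):
    # MAC-then-Encrypt-then-MAC: inner tag over the plaintext, outer tag over the ciphertext.
    t1 = hmac.new(k_mac1, nonce + ad + msg, hashlib.sha256).digest()
    c = bytes(a ^ b for a, b in zip(msg, keystream(k_enc, nonce, len(msg))))
    t2 = hmac.new(k_mac2, nonce + ad + c + t1, hashlib.sha256).digest()
    return c, t1, t2

keys = [b"k1" * 16, b"k2" * 16, b"k3" * 16]
c, t1, t2 = mem_encrypt(*keys, b"nonce-01", b"associated data", b"attack at dawn")
print(c.hex(), t1.hex()[:16], t2.hex()[:16])
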
Tako Boris Fouotsa
ePrint Report
We propose a countermeasure to the Castryck-Decru attack on SIDH. The attack heavily relies on the images of torsion points. The main idea of our countermeasure consists in masking the torsion point images in SIDH in such a way that they are not exploitable in the attack, but can still be used to complete the key exchange. This comes with a change in the form of the field characteristic and a considerable increase in the parameter sizes.
Onur Gunlu, Rafael F. Schaefer, Holger Boche, H. Vincent Poor
ePrint Report
The distributed source coding problem is extended by positing that noisy measurements of a remote source are the correlated random variables that should be reconstructed at another terminal. We consider a secure and private distributed lossy source coding problem with two encoders and one decoder such that (i) all terminals noncausally observe a noisy measurement of the remote source; (ii) a private key is available to each legitimate encoder and all private keys are available to the decoder; (iii) rate-limited noiseless communication links are available between each encoder and the decoder; (iv) the amount of information leakage to an eavesdropper about the correlated random variables is defined as secrecy leakage, and privacy leakage is measured with respect to the remote source; and (v) two passive attack scenarios are considered, where a strong eavesdropper can access both communication links and a weak eavesdropper can choose only one of the links to access. Inner and outer bounds on the rate regions defined under secrecy, privacy, communication, and distortion constraints are derived for both passive attack scenarios. When one or both sources should be reconstructed reliably, the rate region bounds are simplified.
Thomas Pornin
ePrint Report
Double-odd curves are curves with order equal to 2 modulo 4. A prime order group with complete formulas and a canonical encoding/decoding process could previously be built over a double-odd curve. In this paper, we reformulate such curves as a specific case of the Jacobi quartic. This allows using slightly faster formulas for point operations, as well as defining a more efficient encoding format, so that decoding and encoding have the same cost as classic point compression (decoding is one square root, encoding is one inversion). We define the prime-order groups jq255e and jq255s as the application of that modified encoding to the do255e and do255s groups. We furthermore define an optimized signature mechanism on these groups, that offers shorter signatures (48 bytes instead of the usual 64 bytes, for 128-bit security) and makes signature verification faster (down to less than 83000 cycles on an Intel x86 Coffee Lake core).
Henri Devillez, Olivier Pereira, Thomas Peters
ePrint Report
The verifiable encryption of bits is the main computational step that is needed to prepare ballots in many practical voting protocols. Its computational load can also be a practical bottleneck, preventing the deployment of some protocols or requiring the use of computing clusters.

We investigate the question of producing many verifiably encrypted bits in an efficient and portable way, using as a baseline the protocol that is in use in essentially all modern voting systems and libraries supporting homomorphic voting, including ElectionGuard, a state-of-the-art open-source voting SDK deployed in government elections. Combining fixed-base exponentiation techniques and new encryption and ZK proof mechanisms, we obtain speed-ups of more than one order of magnitude over standard implementations. Our exploration requires balancing conflicting optimization strategies, and the use of asymptotically less efficient protocols that turn out to be very effective in practice. Several of our proposed improvements are now on the ElectionGuard roadmap.
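The fixed-base idea underlying part of the speed-up is easy to sketch: when the same base $g$ is reused for many exponentiations (as it is for every encrypted bit in a ballot), one precomputes $g^{2^i}$ once and then pays only multiplications per exponent. The modulus and exponent length below are placeholders for illustration, not the groups or parameters ElectionGuard actually uses.

import random

P = 2**127 - 1          # a prime modulus (placeholder; far too small for real use)
G = 3                   # fixed base reused across many exponentiations
EXP_BITS = 126

# One-time precomputation: G^(2^i) mod P for every bit position.
POWERS = []
acc = G % P
for _ in range(EXP_BITS):
    POWERS.append(acc)
    acc = (acc * acc) % P

def fixed_base_pow(exponent: int) -> int:
    """Compute G^exponent mod P using the precomputed table (multiplications only)."""
    result = 1
    i = 0
    while exponent:
        if exponent & 1:
            result = (result * POWERS[i]) % P
        exponent >>= 1
        i += 1
    return result

e = random.getrandbits(EXP_BITS)
assert fixed_base_pow(e) == pow(G, e, P)
print("fixed-base result matches pow(G, e, P)")
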
Héctor Masip Ardevol, Jordi Baylina Melé, Daniel Lubarov, José L. Muñoz-Tapia
ePrint Report
SNARKs for standard cryptographic primitives tend to involve many SNARK-unfriendly operations such as XOR. Previous protocols such as [GW20] worked around this problem by introducing lookup arguments. However, these protocols were only applicable within the same circuit. RapidUp is a protocol that removes this limitation by unfolding the grand-product polynomial into two (equivalent) polynomials of the same size. Moreover, a generalization of previous protocols is presented through the introduction of selectors.
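The algebraic fact behind grand-product-based lookup and permutation arguments is a randomized multiset-equality check: two columns are equal as multisets iff $\prod_i(\gamma + a_i) = \prod_i(\gamma + b_i)$ at a random $\gamma$, except with small probability. A minimal sketch of that check (our own toy illustration of the underlying principle, not the RapidUp protocol itself):

import random

P = 2**61 - 1  # prime field for the toy check

def grand_product_check(a, b):
    """Probabilistic multiset-equality test via grand products at a random point."""
    gamma = random.randrange(P)
    prod_a = prod_b = 1
    for x in a:
        prod_a = prod_a * ((gamma + x) % P) % P
    for x in b:
        prod_b = prod_b * ((gamma + x) % P) % P
    return prod_a == prod_b

print(grand_product_check([1, 2, 2, 7], [2, 7, 1, 2]))  # True: same multiset
print(grand_product_check([1, 2, 2, 7], [1, 2, 3, 7]))  # False (w.h.p.)
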
Jiewen Yao, Krystian Matusiewicz, Vincent Zimmer
ePrint Report
The Security Protocol and Data Model (SPDM) defines flows to authenticate the hardware identity of a computing device. It also allows for establishing a secure session for confidential and integrity-protected data communication between two devices. The present version of SPDM, namely version 1.2, relies on traditional asymmetric cryptographic algorithms that are known to be vulnerable to quantum attacks. This paper describes the means by which support for post-quantum (PQ) cryptography can be added to the SPDM protocol in order to prepare SPDM for the upcoming world of quantum computing. We examine the SPDM 1.2 protocol and discuss how to negotiate the use of post-quantum cryptographic (PQC) algorithms, how to support device identity reporting, means to authenticate the device, and how to establish a secure session when using PQC algorithms. We consider so-called hybrid modes, where both classical and PQC algorithms are used to achieve security properties, as these modes are important during the transition period. We also share our experience with implementing PQ-SPDM and provide benchmarks for some of the winning NIST PQC algorithms.
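A common way to realise the hybrid modes mentioned above is to run both key exchanges and feed the concatenated shared secrets through a single KDF, so the session key remains secure as long as either secret does. The sketch below shows that combiner pattern with a minimal HKDF built from the standard library; it is a generic illustration of hybrid key combination, not the exact construction proposed for SPDM.

import hmac, hashlib, os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # Single-block HKDF-Expand (sufficient for one 32-byte session key).
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

# Placeholder shared secrets standing in for the outputs of the two key exchanges.
classical_secret = os.urandom(32)   # e.g. from an ECDHE exchange
pq_secret        = os.urandom(32)   # e.g. from a PQC KEM decapsulation

# Hybrid combiner: both secrets enter the KDF, so compromising one is not enough.
prk = hkdf_extract(salt=b"spdm-hybrid-demo", ikm=classical_secret + pq_secret)
session_key = hkdf_expand(prk, info=b"session key")
print(session_key.hex())
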
Ngoc Khanh Nguyen, Gregor Seiler
ePrint Report
We propose a practical sublinear-size zero-knowledge proof system for Rank-1 Constraint Satisfaction (R1CS) based on lattices. The proof size scales asymptotically with the square root of the witness size. Concretely, the size becomes $2$-$3$ times smaller than Ligero (ACM CCS 2017), which also exhibits square root scaling, for large instances of R1CS. At the core lies an interactive variant of the Schwartz-Zippel Lemma that might be of independent interest.
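As background, the classical (non-interactive) Schwartz-Zippel lemma says that a nonzero polynomial of degree $d$ vanishes at a uniformly random field point with probability at most $d/|\mathbb{F}|$, which gives identity testing by random evaluation. The sketch below illustrates only this standard form, not the interactive variant introduced in the paper.

import random

P = 2**61 - 1  # prime field

def eval_poly(coeffs, x):
    """Evaluate a polynomial given by its coefficients (lowest degree first) at x mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def probably_equal(f, g, trials=3):
    """Schwartz-Zippel identity test; error probability at most (degree / P) per trial."""
    for _ in range(trials):
        r = random.randrange(P)
        if eval_poly(f, r) != eval_poly(g, r):
            return False
    return True

# (x + 1)^2 versus x^2 + 2x + 1 versus x^2 + 2x + 2
print(probably_equal([1, 2, 1], [1, 2, 1]))   # True
print(probably_equal([2, 2, 1], [1, 2, 1]))   # False w.h.p.
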