IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage.

17 August 2022

Luan Luan, Chunxiang Gu, Yonghui Zheng, Yanan Shi
ePrint Report
Lattice enumeration is a linear-space algorithm for solving the shortest lattice vector problem (SVP). Extreme pruning is a practical technique for accelerating lattice enumeration, with mature theoretical analysis and practical implementations. However, such analysis remains to be done for discrete pruning. In this paper, we improve discrete pruned enumeration (DP enumeration), and give a solution to the problem posed by Léo Ducas and Damien Stehlé about the cost estimation of discrete pruning. Our contributions are on the following three aspects:

First, we refine the algorithm from both theoretical and practical aspects. Discrete pruning using the natural number representation relies on a randomness assumption about the lattice point distribution, which leads to an obvious paradox in the original analysis. We rectify this assumption to fix the problem, and correspondingly modify some details of DP enumeration. We also improve the binary search algorithm for the cell enumeration radius to polynomial time complexity, and refine the cell decoding algorithm. In addition, we propose to use a truncated lattice reduction algorithm -- k-tours-BKZ -- as a reprocessing method when a round of enumeration fails.

Second, we propose a cost estimation simulator for DP enumeration. Based on an investigation of lattice basis stability during reprocessing, we give a method to quickly simulate the squared lengths of the Gram-Schmidt orthogonalized basis, and give fitted cost estimation formulae for the sub-algorithms in CPU cycles through intensive experiments. The success probability model is also modified based on the rectified assumption. We verify the cost estimation simulator on medium-size SVP challenge instances, and the simulation results are very close to the actual performance of DP enumeration.

Third, we give a method to calculate the optimal parameter setting to minimize the running time of DP enumeration. We compare the efficiency of our optimized DP enumeration with extreme pruning enumeration in solving SVP challenge instances. The experimental results in medium dimension and simulation results in high dimension both show that the discrete pruning method could outperform extreme pruning. An open-source implementation of DP enumeration with its simulator is also provided.
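As an illustration (not code from the paper), the polynomial-time binary search for a cell enumeration radius can be sketched generically: given a monotone count function, halve the search interval until the smallest radius reaching a target count is isolated. Here `cell_count` is a hypothetical stand-in for the abstract's cell-counting subroutine.

```python
def binary_search_radius(cell_count, target, lo=0.0, hi=1e6, iters=60):
    """Find (to high precision) the smallest radius r with
    cell_count(r) >= target, assuming cell_count is monotonically
    non-decreasing in r."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if cell_count(mid) >= target:
            hi = mid      # mid is feasible; search below it
        else:
            lo = mid      # mid is infeasible; search above it
    return hi

# Toy monotone model: the number of cells grows like r**2 (hypothetical).
r = binary_search_radius(lambda r: r * r, target=100.0)
```

Each iteration halves the interval, so 60 iterations over [0, 1e6] pin the radius down to well below floating-point noise.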
Peyman Momeni, Sergey Gorbunov, Bohan Zhang
ePrint Report
While blockchain systems are quickly gaining popularity, front-running remains a major obstacle to fair exchange. In this paper, we show how to apply identity-based encryption (IBE) to prevent front-running with minimal bandwidth overhead. In our approach, to decrypt a block of N transactions, the number of messages sent across the network grows only linearly with the size of the decrypting committee, S. That is, to decrypt a set of N transactions sequenced at a specific block, a committee only needs to exchange S decryption shares (independent of N). In comparison, previous solutions are based on threshold decryption schemes, where each transaction in a block must be decrypted separately by the committee, resulting in a bandwidth overhead of N × S. Along the way, we present a model for fair block processing and build a prototype implementation. We show that on a sample of 1000 messages with 1000 validators our system saves 42.53 MB of bandwidth, which is 99.6% less than the standard threshold decryption paradigm.
Öznur MUT SAĞDIÇOĞLU, Serhat Sağdıçoğlu, Ebru Küçükkubaş
ePrint Report
Differential cryptanalysis is one of the most effective methods for evaluating the security level of block ciphers. For this, an attacker tries to find a differential or a characteristic with a high probability that distinguishes a block cipher from a random permutation to obtain the secret key. Although it is theoretically possible to compute the probability of a differential for a block cipher, there are two problems to compute it practically. The first problem is that it is computationally impossible to compute differential probability by trying all plaintext pairs. The second problem is that the probability of a differential over all choices of the plaintext and key might be different from the probability of the differential over all plaintexts for a fixed key. Thus, to evaluate the security against the differential cryptanalysis, one must assume both the hypothesis of stochastic equivalence and the Markov model. However, the hypothesis of stochastic equivalence does not hold in general. Indeed, we show on simple ciphers that the hypothesis of stochastic equivalence does not hold. Moreover, we observe that the differential probability is not equal to the expected differential probability. For these results, we study plateau characteristics for a 4-bit cipher and a 16-bit super box. As a result, when considering differential cryptanalysis, one must be careful about the gap between the theoretical and the practical security of block ciphers.
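For small components such as a 4-bit S-box, the differential probabilities discussed above can be computed exhaustively via the difference distribution table (DDT). This sketch (not from the paper) uses the well-known PRESENT 4-bit S-box as a concrete example.

```python
def ddt(sbox):
    """Difference distribution table: table[dx][dy] counts inputs x
    with sbox(x) ^ sbox(x ^ dx) == dy."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for dx in range(n):
        for x in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]
            table[dx][dy] += 1
    return table

# The 4-bit PRESENT S-box, as an example permutation.
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
T = ddt(SBOX)
# The differential probability of (dx -> dy) is T[dx][dy] / 16,
# averaged over inputs -- exactly the quantity that the hypothesis of
# stochastic equivalence assumes also holds per fixed key.
```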
Ruiqi Mi, Haodong Jiang, Zhenfeng Zhang
ePrint Report
Resistance to key misuse attacks is a vital property for key encapsulation mechanisms (KEMs) in the NIST-PQC standardization process. In a key mismatch attack, the adversary recovers a reused secret key with the help of an oracle $\mathcal{O}$ that indicates whether the shared key matches or not. A key mismatch attack is more powerful when fewer oracle queries are required. A series of works has tried to reduce the number of queries; Qin et al. [ASIACRYPT 2021] gave a systematic approach to finding a lower bound on oracle queries for a category of KEMs, including NIST's third-round candidates Kyber and Saber.

In this paper, we show that the aforementioned bound can be bypassed by combining Qin et al. (ASIACRYPT 2021)'s key mismatch attack with a standard lattice attack. In particular, we explicitly build the relationship between the number of queries to the oracle and the bit security of lattice-based KEMs. Our attack is inspired by the fact that each oracle query reveals partial information about the reused secret, and affects the mean and covariance parameters of the secret, making the lattice attack easier. In addition, we quantify this effect theoretically and estimate the security loss for all NIST second-round candidate KEMs. Specifically, our improved attack reduces the number of queries for Kyber512 by 34%, from 1312 queries with bit security 107 to 865 with bit security 32. For Kyber768 and Kyber1024, our improved attack reduces the number of queries by 29% and 27%, respectively, with bit security 32.
Hao Chung, Elisaweta Masserova, Elaine Shi, Sri AravindaKrishnan Thyagarajan
ePrint Report
The recent work of Chung et al. suggested new formal foundations for side-contract-resilient fair exchange protocols. In this paper, we suggest new constructions that achieve a coalition-resistant Nash equilibrium, and moreover, our constructions improve over Chung et al. in the following senses: 1) we achieve optimistic responsiveness; and 2) we reduce the amount of collateral required from the parties.
Haiyan Wang, Penghui (Owen) Liu, Xiaoxiong Zhong, Weizhe Zhang
ePrint Report
The growth of time sequence-based outsourcing places ever-increasing demands on access control in cloud computing. In this paper, we propose a practical password-based access control framework for such media cloudization, relying on content control based on the time-sequence attribute and designed over prime-order groups. First, the scheme supports multi-keyword search in any monotonic boolean formula, and enables the media owner to control the content encryption key for different time periods using an updatable password. Second, the scheme supports self-retrievability of the content encryption key, which is well suited to cloud-based media applications with massive numbers of users. Then, we show that the proposed scheme is provably secure in the standard model. Finally, a detailed performance evaluation shows that the proposed scheme is efficient and practical for cloud-based media applications.
Ray Perlner, John Kelsey, David Cooper
ePrint Report
SPHINCS$^+$ is a stateless hash-based signature scheme that has been selected for standardization as part of the NIST post-quantum cryptography (PQC) standardization process. Its security proof relies on the distinct-function multi-target second-preimage resistance (DM-SPR) of the underlying keyed hash function. The SPHINCS$^+$ submission offered several instantiations of this keyed hash function, including one based on SHA-256. A recent observation by Sydney Antonov on the PQC mailing list demonstrated that the construction based on SHA-256 did not have DM-SPR at NIST category five, for several of the parameter sets submitted to NIST; however, it remained an open question whether this observation leads to a forgery attack. We answer this question in the affirmative by giving a complete forgery attack that reduces the concrete classical security of these parameter sets by approximately 40 bits of security.

Our attack works by applying Antonov's technique to the WOTS$^+$ public keys in SPHINCS$^+$, leading to a new one-time key that can sign a very limited set of hash values. From that key, we construct a slightly altered version of the original hypertree with which we can sign arbitrary messages, yielding signatures that appear valid.
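Since the attack targets WOTS$^+$ public keys, a minimal sketch of the underlying Winternitz hash-chain idea may help. This toy (not the SPHINCS$^+$ construction) omits the checksum digits and the keyed, tweakable hashing that SPHINCS$^+$ actually uses; all names and parameters here are illustrative.

```python
import hashlib

def chain(x, steps):
    # Iterate the hash function `steps` times starting from x.
    for _ in range(steps):
        x = hashlib.sha256(x).digest()
    return x

w = 16                                      # Winternitz parameter (toy)
sk = [bytes([i]) * 32 for i in range(4)]    # 4 chains: toy one-time secret key
pk = [chain(s, w - 1) for s in sk]          # public key: the chain ends

def sign(digits, sk):
    # digits: the message digest split into base-w digits, one per chain.
    return [chain(s, d) for s, d in zip(sk, digits)]

def verify(digits, sig, pk):
    # Walking each signature element to the chain end must hit the public key.
    return all(chain(s, w - 1 - d) == p
               for s, d, p in zip(sig, digits, pk))

sig = sign([3, 0, 15, 7], sk)
```

The checksum omitted here is what normally prevents signing a second, "smaller" digest with the same key, which is why second-preimage-style observations on the hash construction matter for forgeries.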
Elette Boyle, Niv Gilboa, Yuval Ishai, Victor I. Kolobov
ePrint Report
A distributed point function (DPF) is a cryptographic primitive that enables compressed additive sharing of a secret unit vector across two or more parties. Despite growing ubiquity within applications and notable research efforts, the best 2-party DPF construction to date remains the tree-based construction from (Boyle et al., CCS'16), with no significantly new approaches since.

We present a new framework for 2-party DPF construction, which applies in the setting of feasible (polynomial-size) domains. This captures in particular all DPF applications in which the keys are expanded to the full domain. Our approach is motivated by a strengthened notion we put forth, of programmable DPF (PDPF): in which a short, input-independent "offline" key can be reused for sharing many point functions.

* PDPF from OWF: We construct a PDPF for feasible domains from the minimal assumption that one-way functions exist, where the second "online" key size is polylogarithmic in the domain size $N$.

Our approach offers multiple new efficiency features and applications:

* Privately puncturable PRFs: Our PDPF gives the first OWF-based privately puncturable PRFs (for feasible domains) with sublinear keys.

* $O(1)$-round distributed DPF Gen: We obtain a (standard) DPF with polylog-size keys that admits an analog of Doerner-shelat (CCS'17) distributed key generation, requiring only $O(1)$ rounds (versus $\log N$).

* PCG with 1 short key: Compressing useful correlations for secure computation, where one key is of minimal size. This provides up to exponential communication savings in some application scenarios.
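To see what a DPF compresses, here is a naive (non-compressed) additive sharing of a point function over a small domain, written as an illustration rather than as the paper's construction: the two keys below are full-domain vectors, whereas a DPF replaces them with short keys that expand to these vectors.

```python
import secrets

MOD = 2**32  # toy additive group Z_{2^32}

def share_point_function(alpha, beta, domain):
    """Additively share f with f(alpha) = beta and f(x) = 0 elsewhere.
    key0 and key1 are full-domain vectors; a DPF compresses them."""
    key0 = [secrets.randbelow(MOD) for _ in range(domain)]
    key1 = [(beta - k0) % MOD if x == alpha else (-k0) % MOD
            for x, k0 in enumerate(key0)]
    return key0, key1

k0, k1 = share_point_function(alpha=3, beta=7, domain=8)
recon = [(a + b) % MOD for a, b in zip(k0, k1)]
# recon is the secret unit vector: zero everywhere except position 3.
```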
Diana Davidova, Nikolay Kaleyski
ePrint Report
We describe how any function over a finite field $\mathbb{F}_{p^n}$ can be represented in terms of the values of its derivatives. In particular, we observe that a function of algebraic degree $d$ can be represented uniquely through the values of its derivatives of order $(d-1)$ up to the addition of terms of algebraic degree strictly less than $d$. We identify a set of elements of the finite field, which we call the degree $d$ extension of the basis, which has the property that for any choice of values for the elements in this set, there exists a function of algebraic degree $d$ whose values match the given ones. We discuss how to reconstruct a function from the values of its derivatives, and discuss the complexity involved in transitioning between the truth table of the function and its derivative representation. We then specialize to the case of quadratic functions, and show how to directly convert between the univariate and derivative representations without going through the truth table. We thus generalize the matrix representation of quadratic vectorial Boolean functions due to Yu et al. to the case of arbitrary characteristic. We also show how to characterize quadratic planar functions using the derivative representation. Based on this, we adapt the method of Yu et al. for searching for quadratic APN functions with prime field coefficients to the case of planar DO functions. We use this method to find all such functions (up to CCZ-equivalence) over $\mathbb{F}_{3^n}$ for $n \le 7$. We conclude that the currently known planar DO polynomials cover all possible cases for $n \le 7$. We find representatives simpler than the known ones for the Zhou-Pott, Dickson, and Lunardon-Marino-Polverino-Trombetti-Bierbrauer families for $n = 6$. Finally, we discuss the computational resources that would be needed to push this search to higher dimensions.
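The discrete derivative used above is $D_a f(x) = f(x+a) - f(x)$, and a function is planar precisely when every derivative of nonzero order is a bijection. A tiny illustration (not the paper's algorithm), checking the classic planar monomial $x^2$ over the prime field GF(3) as a stand-in for $\mathbb{F}_{3^n}$:

```python
p = 3  # the prime field GF(3); a toy stand-in for GF(3^n)

def f(x):
    return (x * x) % p  # x^2, the classic planar (DO) monomial in odd char

def derivative(func, a):
    # Discrete derivative D_a f(x) = f(x + a) - f(x) over GF(p).
    return [(func((x + a) % p) - func(x)) % p for x in range(p)]

# Planarity: D_a f must be a bijection for every nonzero a.
planar = all(sorted(derivative(f, a)) == list(range(p))
             for a in range(1, p))
```

For $f(x) = x^2$ one has $D_a f(x) = 2ax + a^2$, which is linear in $x$ with nonzero slope when $a \neq 0$, hence a bijection.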
Zhenzhen Bao, Jian Guo, Shun Li, Phuong Pham
ePrint Report
In this work, we evaluate the security of Merkle-Damgård (MD) hash functions and their combiners (XOR and concatenation combiners) in quantum settings. Two main quantum scenarios are considered: one where a substantial amount of cheap quantum random access memory (qRAM) is available, and one where qRAM is limited and expensive to access. We present generic quantum attacks on the MD hash functions and hash combiners, and carefully analyze the complexities under both quantum scenarios. The considered securities are fundamental requirements for hash functions, including resistance against collision and (second-)preimage attacks. The results are consistent with the conclusions in the classical setting, that is, the considered resistances of the MD hash functions and their combiners are far less than ideal, despite the significant differences in the expected security bounds between the classical and quantum settings. In particular, the generic attacks can be improved significantly using quantum computers under both scenarios. These results serve as an indication that classical hash constructions require careful security re-evaluation before being deployed in post-quantum cryptography schemes.
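For readers unfamiliar with the constructions being attacked, here is a toy Merkle-Damgård iteration and XOR combiner (purely illustrative: the compression function is modeled with SHA-256 over state||block, and the length-encoding of proper MD strengthening is omitted).

```python
import hashlib

def md_hash(message, iv=b"\x00" * 32, block=64):
    """Toy Merkle-Damgard iteration: pad, split into fixed-size blocks,
    and chain them through a compression function. Real MD hashes also
    append the message length (MD strengthening), omitted here."""
    msg = message + b"\x80" + b"\x00" * (-(len(message) + 1) % block)
    state = iv
    for i in range(0, len(msg), block):
        state = hashlib.sha256(state + msg[i:i + block]).digest()
    return state

def xor_combiner(msg):
    # XOR combiner of two "independent" MD hashes (modeled via distinct IVs):
    # intended to stay secure while either component is secure.
    h1 = md_hash(msg, iv=b"\x11" * 32)
    h2 = md_hash(msg, iv=b"\x22" * 32)
    return bytes(a ^ b for a, b in zip(h1, h2))
```

It is exactly this iterated chaining structure that the generic classical attacks exploit, and that the abstract shows quantum computers exploit even further.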
Jian Guo, Shun Li, Guozhen Liu, Phuong Pham
ePrint Report
In ToSC'20, a new approach combining a Mixed-Integer Linear Programming (MILP) tool and a Constraint Programming (CP) tool to search for boomerang distinguishers was proposed, and later used for rebound attacks in ASIACRYPT'21 and CRYPTO'22. In this work, we extend these techniques to mount collision attacks on the SKINNY-128-256 MMO hashing mode in classical and quantum settings. The first 17-round (and 15-round) free-start collision attacks on this variant of the SKINNY hashing mode are presented. Moreover, one more round of the inbound phase is covered, leading to the best existing classical free-start collision attack, covering 19 rounds of the SKINNY-128-384 MMO hashing mode.
Jonathan Bootle, Alessandro Chiesa, Ziyi Guan, Siqi Liu
ePrint Report
Interactive oracle proofs (IOPs) are a generalization of probabilistically checkable proofs that can be used to construct succinct arguments. Improvements in the efficiency of IOPs lead to improvements in the efficiency of succinct arguments. The key efficiency goals are achieving provers that run in linear time and verifiers that run in sublinear time, where the time complexity is with respect to the (nondeterministic) time complexity of the proved computation.

For any given finite field $\mathbb{F}$, we construct IOPs for the correctness of (nondeterministic) arithmetic computations over $\mathbb{F}$ with linear-time prover and polylogarithmic query complexity. Specifically, our IOPs work for the NP-complete language R1CS, which in particular captures arithmetic circuit satisfiability, and for the algebraic automata problem. The IOPs imply succinct arguments for (nondeterministic) arithmetic computations over any finite field with linear-time proving (given black-box access to a linear-time collision-resistant hash function). The argument for algebraic automata also achieves sublinear verification time.

The construction leverages recent applications of reverse-multiplication-friendly embeddings and precomputation techniques to amortize the cost of expensive operations. These tools enable us to overcome a key limitation in prior works that required the field $\mathbb{F}$ to be large.
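R1CS, the NP-complete language the IOPs above are built for, has a simple satisfiability condition: for matrices $A, B, C$ over $\mathbb{F}$ and assignment vector $z$, every constraint row $i$ must satisfy $\langle A_i, z\rangle \cdot \langle B_i, z\rangle = \langle C_i, z\rangle$. A generic checker (an illustration of the language, not of the paper's prover or verifier):

```python
def r1cs_satisfied(A, B, C, z, p):
    """Check an R1CS instance over GF(p): for every constraint row i,
    <A[i], z> * <B[i], z> == <C[i], z> (mod p)."""
    dot = lambda row, v: sum(r * x for r, x in zip(row, v)) % p
    return all(dot(a, z) * dot(b, z) % p == dot(c, z)
               for a, b, c in zip(A, B, C))

# Toy instance encoding the constraint x * x = 9 over GF(13),
# with the assignment vector z = (1, x, x*x).
A = [[0, 1, 0]]
B = [[0, 1, 0]]
C = [[0, 0, 1]]
```

This captures arithmetic circuit satisfiability because each multiplication gate becomes one such row; the IOPs in the abstract prove satisfiability with a linear-time prover rather than by revealing $z$.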