International Association for Cryptologic Research


IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

21 August 2022

Alexandre Belling, Azam Soleimanian, Olivier Bégassat
ePrint Report
SNARKs are a well-known family of cryptographic tools increasingly used in the field of computational integrity at scale. In this area, multiple works have introduced SNARK-friendly cryptographic primitives - hashing, but also encryption and signature verification - which are used in many applications. Despite all the efforts to create cryptographic primitives that can be proved faster, proving them remains a major performance bottleneck in practice. In this paper, we present a recursive technique that improves the efficiency of the prover by an order of magnitude compared to proving MiMC hashes (a SNARK-friendly hash function, Albrecht et al. 2016) with a Groth16 (Eurocrypt 2016) proof. We use GKR (a well-known public-coin argument system by Goldwasser et al., STOC 2008) to prove the integrity of hash computations and embed the GKR verifier inside a SNARK circuit. The challenge comes from the fact that GKR is a public-coin interactive protocol, and applying Fiat-Shamir naively may result in worse performance than applying existing techniques directly. This is because Fiat-Shamir itself involves hashing a large string. We take advantage of a property that SNARK schemes commonly have to build a protocol in which the Fiat-Shamir hashes have very short inputs. The technique we present is generic, can be applied to most known SNARK schemes, and can use any public-coin argument system in place of GKR. It outputs an augmented proof system combining the performance of the two.
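To make the Fiat-Shamir cost issue concrete, here is a minimal sketch (not the protocol from the paper) contrasting the naive transform, which hashes the entire transcript accumulated so far, with a variant whose challenges are derived only from the previous challenge and a short commitment to the new prover message; the function names and the use of SHA-256 are illustrative assumptions.

    import hashlib

    def fs_challenge_naive(transcript: bytes) -> int:
        # Naive Fiat-Shamir: the challenge hashes the full transcript so far.
        # Emulating this hash inside a SNARK circuit is expensive because the
        # input can be very long.
        return int.from_bytes(hashlib.sha256(transcript).digest(), "big")

    def fs_challenge_short(prev_challenge: bytes, short_commitment: bytes) -> int:
        # Short-input variant: the challenge depends only on the previous
        # challenge and a constant-size commitment to the new prover message,
        # so the in-circuit hash input stays small.
        data = prev_challenge + short_commitment
        return int.from_bytes(hashlib.sha256(data).digest(), "big")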
Yuhei Watanabe, Hideki Yamamoto, Hirotaka Yoshida
ePrint Report
This paper presents the results of a performance evaluation of NIST Lightweight Cryptography standardization finalists that we implemented ourselves. Our implementation method targets reduced RAM consumption on embedded devices. Our target microcontrollers are the AVR ATmega128 and the ARM Cortex-M3. We apply our implementation method to five AEAD schemes, including four finalists of the NIST lightweight cryptography standardization, and evaluate their performance on the target microcontrollers. In terms of RAM size, we achieve a 117-byte TinyJAMBU-128 implementation on the ATmega128 and a 140-byte TinyJAMBU-128 implementation on the ARM Cortex-M3. Our implementation of TinyJAMBU-128 has the smallest RAM footprint of all the target AEAD schemes.
Tuong Ngoc Nguyen, Anh The Ta, Huy Quoc Le, Dung Hoang Duong, Willy Susilo, Fuchun Guo, Kazuhide Fukushima, Shinsaku Kiyomoto
ePrint Report
Unique ring signatures (URS) were introduced by Franklin and Zhang (FC 2012) as a unification of linkable and traceable ring signatures. In URS, each member within a ring can produce, on behalf of the ring, at most one signature for a message. Potential applications of URS include e-voting and e-token systems; in blockchain technology, URS has been implemented for mixing contracts. However, existing URS schemes are based on the Discrete Logarithm Problem, which is insecure in the post-quantum setting. In this paper, we design a new lattice-based URS scheme whose signature size is logarithmic in the number of ring members. The proposed URS exploits a Merkle tree-based accumulator as a building block in the lattice setting. Our scheme is secure under the Short Integer Solution and Learning With Rounding assumptions in the random oracle model.
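As a rough illustration of why a Merkle tree-based accumulator yields logarithmic-size membership witnesses, here is a toy accumulator sketch; it uses SHA-256 and assumes a power-of-two number of leaves, whereas the paper's accumulator is built from a lattice-based hash.

    import hashlib

    def _h(*parts: bytes) -> bytes:
        return hashlib.sha256(b"".join(parts)).digest()

    def merkle_root(leaves):
        # Accumulate all ring members' keys into one short value.
        level = [_h(leaf) for leaf in leaves]
        while len(level) > 1:
            level = [_h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def merkle_path(leaves, index):
        # Membership witness for leaf `index`: one sibling hash per level,
        # i.e. log2(#members) hashes, which is why the signature size is
        # logarithmic in the ring size.
        level, path = [_h(leaf) for leaf in leaves], []
        while len(level) > 1:
            path.append(level[index ^ 1])
            level = [_h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
            index //= 2
        return path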
Marten van Dijk, Chenglu Jin
ePrint Report
Analysis of advanced Physical Unclonable Function (PUF) applications and protocols relies on the assumption that a PUF behaves like a random oracle; that is, upon receiving a challenge, a uniform random response with replacement is selected, measurement noise is added, and the resulting response is returned. In order to justify such an assumption, we need to rely on digital interface computation that to some extent remains confidential -- otherwise, information about PUF challenge-response pairs leaks, with which the adversary can train a prediction model for the PUF.

We introduce a theoretical framework that allows the adversary to have a prediction model (with a typical accuracy of 75% for predicting response bits of state-of-the-art silicon PUF designs). We do not require any confidential digital computation or digital secrets, yet we can still prove rigorous statements about the bit security of a system that interfaces with the PUF. In particular, we prove the bit security of a PUF-based random oracle construction; this merges the PUF framework with fuzzy extractors.
Damien Robert
ePrint Report
Let $f : E \to E'$ be an $N$-isogeny between elliptic curves (or abelian varieties) over a finite field $\mathbb{F}_q$. We show that there always exists an efficient representation of $f$ that takes polylogarithmic $O(\log^{O(1)} N \log q)$ space and which can evaluate $f$ at any point $P \in E(\mathbb{F}_{q^k})$ in polylogarithmic $O(\log^{O(1)} N)$ arithmetic operations in $\mathbb{F}_{q^k}$. Furthermore, this efficient representation can be computed by evaluating $f$ on $O(\log N)$ points defined over extensions of degree $O(\log N)$ over $\mathbb{F}_q$. In particular, if $f$ is represented by the equation $h(x) = 0$ of its kernel $K$, then using Vélu’s formula the efficient representation can be computed in time $\tilde{O}(N \log q + \log^2 q)$.

17 August 2022

Luan Luan, Chunxiang Gu, Yonghui Zheng, Yanan Shi
ePrint Report
Lattice enumeration is a linear-space algorithm for solving the shortest lattice vector problem (SVP). Extreme pruning is a practical technique for accelerating lattice enumeration that has mature theoretical analysis and practical implementations. However, comparable results remain to be established for discrete pruning. In this paper, we improve discrete pruned enumeration (DP enumeration) and give a solution to the problem posed by Léo Ducas and Damien Stehlé about the cost estimation of discrete pruning. Our contribution covers the following three aspects:

First, we refine the algorithm from both theoretical and practical aspects. Discrete pruning using the natural number representation relies on a randomness assumption about the distribution of lattice points, which leads to an obvious paradox in the original analysis. We rectify this assumption to fix the problem and correspondingly modify some details of DP enumeration. We also improve the binary search algorithm for the cell enumeration radius so that it runs in polynomial time, and we refine the cell decoding algorithm. Besides, we propose using a truncated lattice reduction algorithm, k-tours-BKZ, as the reprocessing method when a round of enumeration fails.

Second, we propose a cost estimation simulator for DP enumeration. Based on an investigation of lattice basis stability during reprocessing, we give a method to quickly simulate the squared lengths of the Gram-Schmidt orthogonalized basis vectors, and we fit cost estimation formulae for the sub-algorithms, in CPU cycles, through intensive experiments. The success probability model is also modified based on the rectified assumption. We verify the cost estimation simulator on medium-size SVP challenge instances, and the simulation results are very close to the actual performance of DP enumeration.

Third, we give a method to calculate the optimal parameter setting that minimizes the running time of DP enumeration. We compare the efficiency of our optimized DP enumeration with extreme pruning enumeration in solving SVP challenge instances. The experimental results in medium dimensions and the simulation results in high dimensions both show that the discrete pruning method can outperform extreme pruning. An open-source implementation of DP enumeration with its simulator is also provided.
Peyman Momeni, Sergey Gorbunov, Bohan Zhang
ePrint Report
While blockchain systems are quickly gaining popularity, front-running remains a major obstacle to fair exchange. In this paper, we show how to apply identity-based encryption (IBE) to prevent front-running with minimal bandwidth overhead. In our approach, to decrypt a block of N transactions, the number of messages sent across the network grows only linearly with the size of the decrypting committee, S. That is, to decrypt a set of N transactions sequenced at a specific block, a committee only needs to exchange S decryption shares (independent of N). In comparison, previous solutions are based on threshold decryption schemes, where each transaction in a block must be decrypted separately by the committee, resulting in a bandwidth overhead of N × S. Along the way, we present a model for fair block processing and build a prototype implementation. We show that, on a sample of 1000 messages with 1000 validators, our system saves 42.53 MB of bandwidth, which is 99.6% less than the standard threshold decryption paradigm.
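A quick back-of-the-envelope comparison of the two scaling behaviours; the 48-byte share size is an arbitrary placeholder for illustration, not a figure from the paper.

    # Per-transaction threshold decryption sends one share per
    # (transaction, committee member) pair, i.e. N * S shares; the batched
    # IBE approach sends only S shares per block.
    N = 1000            # transactions in the block
    S = 1000            # size of the decrypting committee
    SHARE_BYTES = 48    # assumed size of one decryption share (illustrative)

    threshold_bytes = N * S * SHARE_BYTES
    batched_bytes = S * SHARE_BYTES
    print(f"threshold: {threshold_bytes / 1e6:.2f} MB, batched: {batched_bytes / 1e6:.3f} MB")
    print(f"reduction: {100 * (1 - batched_bytes / threshold_bytes):.1f}%")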
Öznur MUT SAĞDIÇOĞLU, Serhat Sağdıçoğlu, Ebru Küçükkubaş
ePrint Report
Differential cryptanalysis is one of the most effective methods for evaluating the security level of block ciphers. In it, an attacker tries to find a differential or a characteristic with high probability that distinguishes a block cipher from a random permutation, in order to obtain the secret key. Although it is theoretically possible to compute the probability of a differential for a block cipher, there are two obstacles to computing it in practice. The first is that it is computationally infeasible to compute the differential probability by trying all plaintext pairs. The second is that the probability of a differential over all choices of the plaintext and key might differ from the probability of the differential over all plaintexts for a fixed key. Thus, to evaluate security against differential cryptanalysis, one must assume both the hypothesis of stochastic equivalence and the Markov model. However, the hypothesis of stochastic equivalence does not hold in general. Indeed, we show on simple ciphers that the hypothesis of stochastic equivalence does not hold. Moreover, we observe that the differential probability is not equal to the expected differential probability. For these results, we study plateau characteristics for a 4-bit cipher and a 16-bit super box. As a result, when considering differential cryptanalysis, one must be careful about the gap between the theoretical and the practical security of block ciphers.
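A toy illustration of the gap behind the hypothesis of stochastic equivalence, using a made-up two-round construction over PRESENT's 4-bit S-box (not a cipher studied in the paper): the expected differential probability averaged over all keys can differ noticeably from the probability under a single fixed key.

    from itertools import product

    # PRESENT's 4-bit S-box, reused here only as a convenient example.
    S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
         0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

    def enc(x, k0, k1):
        # Toy two-round cipher: E_k(x) = S(S(x ^ k0) ^ k1).
        return S[S[x ^ k0] ^ k1]

    def dp_fixed_key(a, b, k0, k1):
        # Probability over all 16 plaintexts that input difference a maps to b.
        hits = sum(1 for x in range(16)
                   if enc(x, k0, k1) ^ enc(x ^ a, k0, k1) == b)
        return hits / 16

    a, b = 0x1, 0x3  # an arbitrary input/output difference
    probs = [dp_fixed_key(a, b, k0, k1) for k0, k1 in product(range(16), repeat=2)]
    print("expected DP over all keys:", sum(probs) / len(probs))
    print("fixed-key DP ranges from", min(probs), "to", max(probs))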
Ruiqi Mi, Haodong Jiang, Zhenfeng Zhang
ePrint Report
Resistance to key misuse attacks is a vital property for key encapsulation mechanisms (KEMs) in the NIST PQC standardization process. In a key mismatch attack, the adversary recovers a reused secret key with the help of an oracle $\mathcal{O}$ that indicates whether the shared key matches or not. A key mismatch attack is more powerful when fewer oracle queries are required. A series of works has tried to reduce the number of queries; Qin et al. [ASIACRYPT 2021] gave a systematic approach to finding a lower bound on the number of oracle queries for a category of KEMs, including NIST's third-round candidates Kyber and Saber.

In this paper, we find that the aforementioned bound can be bypassed by combining Qin et al.'s (ASIACRYPT 2021) key mismatch attack with a standard lattice attack. In particular, we explicitly establish the relationship between the number of queries to the oracle and the bit security of lattice-based KEMs. Our attack is inspired by the fact that each oracle query reveals partial information about the reused secret and affects the mean and covariance of the secret, making the lattice attack easier. In addition, we quantify this effect in theory and estimate the security loss for all NIST second-round candidate KEMs. Specifically, our improved attack reduces the number of queries for Kyber512 by 34%, from 1312 queries with bit security 107 to 865 with bit security 32. For Kyber768 and Kyber1024, our improved attack reduces the number of queries by 29% and 27%, respectively, with bit security 32.
Hao Chung, Elisaweta Masserova, Elaine Shi, Sri AravindaKrishnan Thyagarajan
ePrint Report
The recent work of Chung et al. suggested new formal foundations for side-contract-resilient fair exchange protocols. In this paper, we suggest new constructions that achieve a coalition-resistant Nash equilibrium; moreover, our constructions improve over Chung et al. in the following senses: 1) we achieve optimistic responsiveness; and 2) we reduce the amount of collateral required from the parties.
Haiyan Wang, Penghui (Owen) Liu, Xiaoxiong Zhong, Weizhe Zhang
ePrint Report
Time-sequence-based outsourcing creates continuously growing demands for access control in cloud computing. In this paper, we propose a practical password-based access control framework for such cloud-based media, which relies on content control based on the time-sequence attribute and is designed over prime-order groups. First, the scheme supports simultaneous multi-keyword search in any monotonic Boolean formula and enables the media owner to control the content encryption key for different time periods using an updatable password. Second, the scheme supports self-retrievability of the content encryption key, which makes it better suited to cloud-based media applications with massive numbers of users. We then show that the proposed scheme is provably secure in the standard model. Finally, a detailed performance evaluation shows that the proposed scheme is efficient and practical for cloud-based media applications.
Ray Perlner, John Kelsey, David Cooper
ePrint Report
SPHINCS$^+$ is a stateless hash-based signature scheme that has been selected for standardization as part of the NIST post-quantum cryptography (PQC) standardization process. Its security proof relies on the distinct-function multi-target second-preimage resistance (DM-SPR) of the underlying keyed hash function. The SPHINCS$^+$ submission offered several instantiations of this keyed hash function, including one based on SHA-256. A recent observation by Sydney Antonov on the PQC mailing list demonstrated that the construction based on SHA-256 did not have DM-SPR at NIST category five for several of the parameter sets submitted to NIST; however, it remained an open question whether this observation leads to a forgery attack. We answer this question in the affirmative by giving a complete forgery attack that reduces the concrete classical security of these parameter sets by approximately 40 bits.

Our attack works by applying Antonov's technique to the WOTS$^+$ public keys in SPHINCS$^+$, leading to a new one-time key that can sign a very limited set of hash values. From that key, we construct a slightly altered version of the original hypertree with which we can sign arbitrary messages, yielding signatures that appear valid.
Elette Boyle, Niv Gilboa, Yuval Ishai, Victor I. Kolobov
ePrint Report
A distributed point function (DPF) is a cryptographic primitive that enables compressed additive sharing of a secret unit vector across two or more parties. Despite growing ubiquity within applications and notable research efforts, the best 2-party DPF construction to date remains the tree-based construction of Boyle et al. (CCS'16), with no significantly new approaches proposed since.
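For intuition, the toy sketch below shares the object a DPF hides, a point function $f_{\alpha,\beta}$, by plain additive secret sharing over the full domain; an actual DPF, such as the tree-based construction, replaces these full-domain vectors with short keys.

    import secrets

    MOD = 2**32  # illustrative additive group

    def share_point_function(alpha, beta, domain_size):
        # f(alpha) = beta and f(x) = 0 elsewhere; each party gets one share
        # per domain point such that the shares add up to f pointwise.
        f = [beta if x == alpha else 0 for x in range(domain_size)]
        k0 = [secrets.randbelow(MOD) for _ in range(domain_size)]
        k1 = [(fx - r) % MOD for fx, r in zip(f, k0)]
        return k0, k1

    k0, k1 = share_point_function(alpha=5, beta=7, domain_size=16)
    assert all((k0[x] + k1[x]) % MOD == (7 if x == 5 else 0) for x in range(16))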

We present a new framework for 2-party DPF construction, which applies in the setting of feasible (polynomial-size) domains. This captures in particular all DPF applications in which the keys are expanded to the full domain. Our approach is motivated by a strengthened notion we put forth, programmable DPF (PDPF), in which a short, input-independent "offline" key can be reused for sharing many point functions.

* PDPF from OWF: We construct a PDPF for feasible domains from the minimal assumption that one-way functions exist, where the second "online" key size is polylogarithmic in the domain size $N$.

Our approach offers multiple new efficiency features and applications:

* Privately puncturable PRFs: Our PDPF gives the first OWF-based privately puncturable PRFs (for feasible domains) with sublinear keys.

* $O(1)$-round distributed DPF Gen: We obtain a (standard) DPF with polylog-size keys that admits an analog of Doerner-shelat (CCS'17) distributed key generation, requiring only $O(1)$ rounds (versus $\log N$).

* PCG with 1 short key: Compressing useful correlations for secure computation, where one key is of minimal size. This provides up to exponential communication savings in some application scenarios.
Diana Davidova, Nikolay Kaleyski
ePrint Report
We describe how any function over a finite field $\mathbb{F}_{p^n}$ can be represented in terms of the values of its derivatives. In particular, we observe that a function of algebraic degree $d$ can be represented uniquely through the values of its derivatives of order $(d-1)$ up to the addition of terms of algebraic degree strictly less than $d$. We identify a set of elements of the finite field, which we call the degree $d$ extension of the basis, which has the property that for any choice of values for the elements in this set, there exists a function of algebraic degree $d$ whose values match the given ones. We discuss how to reconstruct a function from the values of its derivatives, and the complexity involved in transitioning between the truth table of the function and its derivative representation. We then specialize to the case of quadratic functions, and show how to directly convert between the univariate and derivative representation without going through the truth table. We thus generalize the matrix representation of quadratic vectorial Boolean functions due to Yu et al. to the case of arbitrary characteristic. We also show how to characterize quadratic planar functions using the derivative representation. Based on this, we adapt the method of Yu et al. for searching for quadratic APN functions with prime field coefficients to the case of planar DO functions. We use this method to find all such functions (up to CCZ-equivalence) over $\mathbb{F}_{3^n}$ for $n \le 7$. We conclude that the currently known planar DO polynomials cover all possible cases for $n \le 7$. We find representatives simpler than the known ones for the Zhou-Pott, Dickson, and Lunardon-Marino-Polverino-Trombetti-Bierbrauer families for $n = 6$. Finally, we discuss the computational resources that would be needed to push this search to higher dimensions.
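For reference, the derivatives in question are the standard discrete derivatives; this restatement is ours, added for convenience, and is not an excerpt from the paper:
\[ D_a f(x) = f(x+a) - f(x), \qquad D_{a_1,\dots,a_k} f = D_{a_1}\left(D_{a_2,\dots,a_k} f\right). \]
A function of algebraic degree $d$ has all of its derivatives of order $d$ constant; in particular, every first-order derivative $D_a f$ of a quadratic function $f$ is affine.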
Zhenzhen Bao, Jian Guo, Shun Li, Phuong Pham
ePrint Report
In this work, we evaluate the security of Merkle-Damgård (MD) hash functions and their combiners (XOR and concatenation combiners) in quantum settings. Two main quantum scenarios are considered: one where a substantial amount of cheap quantum random access memory (qRAM) is available, and one where qRAM is limited and expensive to access. We present generic quantum attacks on the MD hash functions and hash combiners, and carefully analyze the complexities under both quantum scenarios. The security properties considered are fundamental requirements for hash functions, including resistance against collision and (second-)preimage attacks. The results are consistent with the conclusions in the classical setting, that is, the considered resistances of the MD hash functions and their combiners are far less than ideal, despite the significant differences in the expected security bounds between the classical and quantum settings. In particular, the generic attacks can be improved significantly using quantum computers under both scenarios. These results serve as an indication that classical hash constructions require careful security re-evaluation before being deployed in post-quantum cryptographic schemes.
Jian Guo, Shun Li, Guozhen Liu, Phuong Pham
ePrint Report
In ToSC'20, a new approach combining a Mixed-Integer Linear Programming (MILP) tool and a Constraint Programming (CP) tool to search for boomerang distinguishers was proposed, and it was later used for rebound attacks in ASIACRYPT'21 and CRYPTO'22. In this work, we extend these techniques to mount collision attacks on the SKINNY-128-256 MMO hashing mode in classical and quantum settings. The first 17-round (and 15-round) free-start collision attacks on this variant of the SKINNY hashing mode are presented. Moreover, one more round of the inbound phase is covered, leading to the best existing classical free-start collision attack, on 19 rounds, of the SKINNY-128-384 MMO hashing mode.
Jonathan Bootle, Alessandro Chiesa, Ziyi Guan, Siqi Liu
ePrint Report
Interactive oracle proofs (IOPs) are a generalization of probabilistically checkable proofs that can be used to construct succinct arguments. Improvements in the efficiency of IOPs lead to improvements in the efficiency of succinct arguments. The key efficiency goals are achieving provers that run in linear time and verifiers that run in sublinear time, where the time complexity is with respect to the (nondeterministic) time complexity of the proved computation.

For any given finite field $\mathbb{F}$, we construct IOPs for the correctness of (nondeterministic) arithmetic computations over $\mathbb{F}$ with linear-time prover and polylogarithmic query complexity. Specifically, our IOPs work for the NP-complete language R1CS, which in particular captures arithmetic circuit satisfiability, and for the algebraic automata problem. The IOPs imply succinct arguments for (nondeterministic) arithmetic computations over any finite field with linear-time proving (given black-box access to a linear-time collision-resistant hash function). The argument for algebraic automata also achieves sublinear verification time.

The construction leverages recent applications of reverse-multiplication-friendly embeddings and precomputation techniques to amortize the cost of expensive operations. These tools enable us to overcome a key limitation in prior works that required the field $\mathbb{F}$ to be large.
Sayandeep Saha, Mustafa Khairallah, Thomas Peyrin
ePrint Report
Implementation-based attacks are major concerns for modern cryptography. For symmetric-key cryptography, a significant amount of exploration has taken place in this regard for primitives such as block ciphers. Concerning symmetric-key operating modes, such as Authenticated Encryption with Associated Data (AEAD), the state of the art mainly addresses passive Side-Channel Attacks (SCA) in the form of leakage-resilient cryptography. So far, only a handful of works address Fault Attacks (FA) in the context of AEADs, especially concerning the fundamental properties of integrity and confidentiality. In this paper, we address this gap by exploring mode-level issues arising from FAs. We emphasize that FAs can be fatal even in cases where the adversary does not aim to extract the long-term secret, but rather tries to violate the basic security requirements (integrity and confidentiality). Notably, we show novel integrity attack examples on state-of-the-art AEAD constructions and even on a prior fault-resilient AEAD construction called SIV$. On the constructive side, we first present new security notions of fault resilience for PRFs (frPRF), MACs (frMAC), and AEADs (frAE), the latter of which can be seen as an improved version of the notion introduced by Fischlin and Gunther at CT-RSA'20. Then, we propose new constructions to turn a frPRF into a fault-resilient MAC frMAC (hash-then-frPRF) and into a fault-resilient AEAD frAE (MAC-then-Encrypt-then-MAC, or MEM).
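As a point of reference for the layering named above, here is a generic MAC-then-Encrypt-then-MAC composition sketched with standard library primitives; it is only meant to show the three layers and their inputs, and it is not the paper's frAE construction (which is built from a fault-resilient PRF and differs in its details).

    import hmac, hashlib

    def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
        # HMAC-based counter-mode keystream, used here only to keep the
        # sketch self-contained.
        out, counter = b"", 0
        while len(out) < length:
            out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                            hashlib.sha256).digest()
            counter += 1
        return out[:length]

    def mem_encrypt(k_mac1, k_enc, k_mac2, nonce, ad, msg):
        # A real scheme would encode lengths to keep the inputs unambiguous.
        inner = hmac.new(k_mac1, nonce + ad + msg, hashlib.sha256).digest()         # MAC
        ct = bytes(m ^ k for m, k in
                   zip(msg, _keystream(k_enc, nonce, len(msg))))                    # Encrypt
        outer = hmac.new(k_mac2, nonce + ad + ct + inner, hashlib.sha256).digest()  # MAC
        return ct, inner, outer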
Tako Boris Fouotsa
ePrint Report
We propose a countermeasure to the Castryck-Decru attack on SIDH. The attack heavily relies on the images of torsion points. The main idea of our countermeasure is to mask the torsion point images in SIDH in such a way that they cannot be exploited in the attack but can still be used to complete the key exchange. This comes with a change in the form of the field characteristic and a considerable increase in the parameter sizes.
Onur Gunlu, Rafael F. Schaefer, Holger Boche, H. Vincent Poor
ePrint Report
The distributed source coding problem is extended by positing that noisy measurements of a remote source are the correlated random variables that should be reconstructed at another terminal. We consider a secure and private distributed lossy source coding problem with two encoders and one decoder such that (i) all terminals noncausally observe a noisy measurement of the remote source; (ii) a private key is available to each legitimate encoder and all private keys are available to the decoder; (iii) rate-limited noiseless communication links are available between each encoder and the decoder; (iv) the amount of information leakage to an eavesdropper about the correlated random variables is defined as secrecy leakage, and privacy leakage is measured with respect to the remote source; and (v) two passive attack scenarios are considered, where a strong eavesdropper can access both communication links and a weak eavesdropper can choose only one of the links to access. Inner and outer bounds on the rate regions defined under secrecy, privacy, communication, and distortion constraints are derived for both passive attack scenarios. When one or both sources should be reconstructed reliably, the rate region bounds are simplified.