International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

20 February 2022

Simon Holmgaard Kamp, Jesper Buus Nielsen, Søren Eller Thomsen, Daniel Tschudi
ePrint Report
We present two new provably secure finality layers for Nakamoto-style blockchains. One is for partially synchronous networks and the other is for networks with periods of synchrony. Both protocols are player-replaceable and therefore enjoy protection against denial-of-service attacks when run with a proof-of-stake lottery to elect the parties. The finality layers are proven secure when run on top of any Nakamoto-style blockchain that has a property called ``finality friendliness''. Both finality layers improve on all existing provably secure finality layers in terms of communication complexity or security. A proof-of-stake finality layer has $v$-validity if, whenever it declares a block $B$ final, honest parties holding a fraction $v$ of the stake had $B$ on their longest chain. Validity is important to prevent the finality layer from finalising blocks that were not ``good'' according to the Nakamoto-style blockchain. We prove upper bounds on the achievable validity in partially synchronous networks and in networks with periods of synchrony. Both our finality layers match these upper bounds.
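
In display form (our paraphrase of the prose definition above, not the paper's formal statement), $v$-validity reads:

\[ \mathsf{Final}(B) \;\Longrightarrow\; \sum_{\substack{P\ \text{honest},\ B \in \mathsf{longestChain}(P)}} \mathsf{stake}(P) \;\ge\; v \cdot \mathsf{totalStake}. \]
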
Akshayaram Srinivasan
ePrint Report
The round complexity of secure two-party computation is a long-studied problem with matching upper and lower bounds for the case of black-box simulators (i.e., simulators that use the adversary as a black box). In this work, we focus on going beyond this black-box barrier via non-black-box techniques. Specifically, based on standard cryptographic assumptions, we give a construction of a 3-round two-party computation protocol for computing inputless functionalities (such as coin-tossing) that satisfies standard security against malicious senders and $\epsilon$-security against malicious receivers. Prior to our work, such protocols were only known for the case of (weak) zero-knowledge.
Giang Linh Duc Nguyen, Dung Hoang Duong, Huy Quoc Le, Willy Susilo
ePrint Report
Nowadays, amid rapid technological advancement, billions of interconnected devices are constantly collecting data around us. Consequently, privacy protection has become a major concern. Data must be in encrypted form before being stored on cloud servers. As a result, the cloud servers are unable to perform computations on encrypted data, such as searching and matching keywords. In the setting of public key encryption with multi-ciphertext equality test (PKE-MET), a cloud server can perform an equality test on a number of ciphertexts that are encrypted with the same designated number. In this paper, we propose, for the first time, an efficient construction of a quantum-safe PKE-MET system based on the hardness of the Learning with Errors (LWE) problem in the lattice setting. Furthermore, we also discuss the first lattice-based public key encryption with flexible multi-ciphertext equality test (PKE-FMET) constructions, which allow performing an equality test on multiple ciphertexts whose designated numbers are less than a threshold number. Our proposed schemes are proven secure in the standard model.
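
To make the PKE-MET functionality concrete, here is a minimal interface sketch in Python; all names and the tag-based comparison are illustrative placeholders, not the paper's lattice-based construction:

from dataclasses import dataclass
from typing import List

@dataclass
class Ciphertext:
    payload: bytes            # encryption of the message (placeholder)
    tag: bytes                # trapdoor-derived comparison handle (placeholder)
    designated_number: int    # k: exactly k such ciphertexts must join a test

def equality_test(cts: List[Ciphertext]) -> bool:
    """Server-side test: defined only when exactly k ciphertexts, each with
    designated number k, are supplied; returns True iff all tags agree."""
    k = len(cts)
    if any(ct.designated_number != k for ct in cts):
        raise ValueError("every designated number must equal the group size")
    return len({ct.tag for ct in cts}) == 1  # stands in for the real LWE-based test
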
Yongwoo Lee, Daniele Micciancio, Andrey Kim, Rakyong Choi, Maxim Deryabin, Jieun Eom, Donghoon Yoo
ePrint Report
The FHEW fully homomorphic encryption scheme (Ducas and Micciancio, Eurocrypt 2015) and its TFHE variant (Chillotti et al., Asiacrypt 2016) are the best-known methods to perform bit-level homomorphic computations on encrypted data. There are two competing bootstrapping approaches for FHEW-like schemes: the AP bootstrapping method (Alperin-Sheriff and Peikert, Crypto 2014), which is the basis of the original FHEW scheme, and the GINX bootstrapping method (Gama et al., Eurocrypt 2016), adopted by TFHE. An attractive feature of the AP/FHEW method is that it supports arbitrary secret key distributions, which is both critical for a number of important applications (like threshold and some multi-key homomorphic encryption (HE) schemes) and provides better security guarantees. On the other hand, GINX/TFHE bootstrapping uses much smaller evaluation keys, but it is directly applicable only to binary secret keys, which are less standard and restrict the scheme's applicability. (Extensions of GINX/TFHE bootstrapping to arbitrary keys are known, but they incur a substantial performance penalty.) In this paper, we present a new bootstrapping procedure for FHEW-like schemes that combines the best features of both approaches and further improves on them: support for arbitrary secret key distributions at no additional runtime cost, while using small evaluation keys (even smaller than GINX). As an added benefit, our new bootstrapping procedure results in smaller noise growth than both AP and GINX, regardless of the key distribution. Our improvements are both theoretically significant (offering asymptotic savings, up to an $O(\log n)$ multiplicative factor, in either the running time or the public evaluation key size) and practically relevant. We demonstrate the practicality of the proposed methods by building a prototype implementation within the PALISADE open-source HE library. Our experimental results show that our methods have minimal overhead and improve on the practical performance of previous schemes by factors ranging from small constants (already in the case of ternary keys) to an order of magnitude or more (e.g., when comparing to the FHEW evaluation key size for arbitrary secret keys). We illustrate the benefits of our method by providing a simple construction of threshold HE based on FHEW.
Charles Bouillaguet
ePrint Report
This paper discusses the implications of choosing a computational model to study the cost of cryptographic attacks and thereby quantify how dangerous they are. This choice is often made unconsciously, and the chosen model itself usually remains implicit; yet it has repercussions on security evaluations.

We compare three reasonable computational models: $i$) the usual Random Access Machine (RAM) model; $ii$) the ``Expensive Memory Model'' explicitly introduced by several third-round submissions to the NIST Post-Quantum competition (it states that a single access to a large memory costs as much as many local operations); $iii$) the venerable VLSI model using the Area-Time cost measure.

It is well known that costs in the RAM model are lower than costs in the other two models. The latter have been claimed to be more realistic, and therefore to lead to more precise security evaluations. The main technical contribution of this paper is to show that these last two models are incomparable. We identify a situation where the expensive memory model overestimates costs compared to the (presumably even more realistic) VLSI model.

In addition, optimizing the cost in each model is a distinct objective that leads to different attack parameters, and it raises the question of what is the ``best'' way to proceed for an eventual attacker. We illustrate these discrepancies by studying several generic attacks against hash functions and Feistel networks in the three models.
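
As a toy illustration of how the three models can disagree (our own numbers and access-cost assumption, not the paper's analysis), consider an attack doing $2^{80}$ operations, $2^{70}$ of which touch a memory of $2^{60}$ cells; the snippet prices it in each model, assuming an access to a memory of size $M$ costs $\sqrt{M}$ local operations in the expensive memory model:

log_T, log_accesses, log_M = 80, 70, 60   # toy attack parameters (base-2 logs)

ram = log_T                                        # RAM: every operation costs 1
expensive = max(log_T, log_accesses + log_M / 2)   # assumed: one access ~ sqrt(M) ops
vlsi_at = log_M + log_T                            # VLSI: Area x Time, area >= M

for model, cost in [("RAM", ram), ("expensive memory", expensive), ("VLSI (AT)", vlsi_at)]:
    print(f"{model:>16} cost: 2^{cost:.0f}")
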
Ariana Goh, Lim Chu Wee, Yan Bo Ti
ePrint Report
In this paper we generalise Ti's fault attack and the loop-abort fault attacks on supersingular isogeny cryptosystems (genus one) to genus two. Genus two isogeny-based cryptosystems are generalisations of their genus one counterparts; as such, attacks on the latter are believed to generalise to the former.

Fault attacks on supersingular elliptic curve isogeny cryptography have been shown to be practical. We show in this paper that fault attacks continue to be practical in genus two, albeit with a few additional traces required.
Richard Allen, Ratip Emin Berker, Sílvia Casacuberta, Michael Gul
ePrint Report
In this paper, we provide a comprehensive overview of a recent debate over the quantum versus classical solvability of bounded distance decoding (BDD). Specifically, we review the work of Eldar and Hallgren [EH22], [Hal21] demonstrating a quantum algorithm solving $\lambda_1 2^{-\Omega(\sqrt{k \log q})}$-BDD in polynomial time for lattices of periodicity $q$, finite group rank $k$, and shortest lattice vector length $\lambda_1$. Subsequently, we prove the results of [DvW21a], [DvW21b] with far greater detail and elaboration than in the original work. Namely, we show that there exists a deterministic, classical algorithm achieving the same result.
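
For reference, the promise problem in question has the following standard form; the result above corresponds to a decoding radius $\alpha\,\lambda_1$ with $\alpha = 2^{-\Omega(\sqrt{k \log q})}$:

\[ \alpha\text{-BDD}:\ \text{given a basis of a lattice } \mathcal{L} \text{ and a target } t \text{ with } \mathrm{dist}(t,\mathcal{L}) \le \alpha\,\lambda_1(\mathcal{L}),\ \text{find the lattice point closest to } t. \]
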
Senyang Huang, Orr Dunkelman, Orna Agmon Ben-Yehuda, Alexander Maximov
ePrint Report
The Keccak sponge function family, designed by Bertoni et al. in 2007, was selected by the U.S. National Institute of Standards and Technology (NIST) in 2012 as the next generation of Secure Hash Algorithm (SHA-3). Due to its theoretical and practical importance, cryptanalysis of SHA-3 has attracted increasing attention. To the best of our knowledge, the most powerful collision attack on SHA-3 to date is the linearisation technique proposed by Jian Guo et al. However, that technique is infeasible for variants with a smaller input space, such as SHA3-384.

In this work we improve previous results with three ideas that were not used in prior work against SHA-3. First, in order to reduce constraints and increase flexibility in our solutions, we use 2-block messages instead of 1-block messages. Second, we reduce the connectivity problem to a satisfiability (SAT) problem, instead of applying the linearisation technique. Finally, we consider two new non-random properties of the Keccak non-linear layer and propose an efficient deduce-and-sieve algorithm based on these properties.

The resulting collision-finding algorithm on 4-round SHA3-384 has a practical time complexity of $2^{59.64}$ (and memory complexity of $2^{45.93}$). This greatly improves on the previous best known collision attack by Dinur et al., whose $2^{147}$ time complexity was far from practical. Although our attack does not threaten the security margin of the SHA-3 hash function, the tools developed in this paper could be useful in future analysis of other cryptographic primitives and also in the development of new and faster SAT solvers.
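
To give a flavour of such a SAT reduction (a toy of ours, unrelated to the actual Keccak constraint system), the snippet below encodes the consistency constraint $a \oplus b = 1$ in CNF and queries a solver through the python-sat package:

from pysat.solvers import Glucose3

# a XOR b = 1 in CNF: (a OR b) AND (NOT a OR NOT b); variable 1 is a, 2 is b
solver = Glucose3()
solver.add_clause([1, 2])
solver.add_clause([-1, -2])

if solver.solve():
    print("satisfying assignment:", solver.get_model())  # e.g. [1, -2], i.e. a=1, b=0
solver.delete()
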
Adithya Bhat, Aniket Kate, Kartik Nayak, Nibesh Shrestha
ePrint Report
Distributed random beacons publish random numbers at regular intervals, which anyone can obtain and verify. The design of public distributed random beacons has been an exciting research direction with significant implications for blockchains, voting, and beyond. Random beacons, in addition to being bias-resistant and unpredictable, also need to have low communication cost, low latency, and ease of reconfigurability. Existing works on synchronous random beacons sacrifice one or more of these properties.

In this work, we design an efficient unpredictable synchronous random beacon protocol, OptRand, with quadratic (in the number $n$ of system nodes) communication complexity per beacon output. First, we innovate by employing a novel combination of bilinear pairing-based publicly verifiable secret sharing and non-interactive zero-knowledge proofs to build a linear-sized (in $n$) publicly verifiable random sharing. Second, we develop a state machine replication protocol with linear-sized inputs that is also optimistically responsive, i.e., it can progress responsively at actual network speed during optimistic conditions, despite the synchrony assumption, and thus incurs low latency. In addition, we present an efficient reconfiguration mechanism for OptRand that allows nodes to leave and join the system.
Lawrence Roy
ePrint Report
Given a small number of base oblivious transfers (OTs), how does one generate a large number of extended OTs as efficiently as possible? The answer has long been the seminal work of IKNP (Ishai et al., Crypto 2003) and the family of protocols it inspired, which only use Minicrypt assumptions. Recently, Boyle et al. (Crypto 2019) proposed the Silent-OT technique that improves on IKNP, but at the cost of a much stronger, non-Minicrypt assumption: the learning parity with noise (LPN) assumption. We present SoftSpokenOT, the first OT extension to improve on IKNP's communication cost in the Minicrypt model. While IKNP requires security parameter $\lambda$ bits of communication for each OT, SoftSpokenOT only needs $\lambda / k$ bits, for any $k$, at the expense of requiring $2^{k-1} / k$ times the computation. For small values of $k$, this tradeoff is favorable since IKNP-style protocols are network-bound. We implemented SoftSpokenOT and found that our protocol gives almost a $5 \times$ speedup over IKNP in the LAN setting.
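
The communication/computation tradeoff quoted above is easy to tabulate (direct evaluation of the two formulas, with $\lambda = 128$; the choice of $k$ is left to the protocol designer):

lam = 128  # security parameter, in bits
for k in range(1, 9):
    comm = lam / k            # bits of communication per extended OT
    comp = 2 ** (k - 1) / k   # computational factor relative to IKNP
    print(f"k={k}: {comm:6.1f} bits/OT, {comp:6.2f}x computation")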

Our technique is based on a novel silent protocol for vector oblivious linear evaluation (VOLE) over polynomial-sized fields. We created a framework to build maliciously secure 1-of-N OT extension from this VOLE, revisiting the existing work for each step. Along the way, we found several flaws in the existing work, including a practical attack against the consistency check of Patra et al. (NDSS 2017), while also making some improvements.
Andrew Park, Wei-Kai Lin, Elaine Shi
ePrint Report
We propose a new garbled RAM construction called NanoGRAM, which achieves an amortized cost of $\widetilde{O}(\lambda \cdot (W \log N + \log^3 N))$ bits per memory access, where $\lambda$ is the security parameter, $W$ is the block size, $N$ is the total number of blocks, and $\widetilde{O}(\cdot)$ hides $\mathrm{poly}\log\log$ factors. For sufficiently large blocks where $W = \Omega(\log^2 N)$, our scheme achieves $\widetilde{O}(\lambda \cdot W \log N)$ cost per memory access, where the dependence on $N$ is optimal (barring $\mathrm{poly}\log\log$ factors) in terms of the evaluator's runtime. Our asymptotic performance matches even the {\it interactive} state of the art (modulo $\mathrm{poly}\log\log$ factors), namely Circuit ORAM running atop garbled circuits, and yet we remove the logarithmic number of interactions that this baseline requires. Furthermore, we achieve an asymptotic improvement over the recent work of Heath et al. Our scheme adopts the same assumptions as the mainstream literature on practical garbled circuits, i.e., circular correlation-robust hashes or a random oracle. We evaluate the concrete performance of NanoGRAM and compare it with a couple of asymptotically less efficient baselines. We show that NanoGRAM starts to outperform the naive linear-scan garbled RAM at a memory size of $N = 2^9$ and starts to outperform the recent construction of Heath et al. at $N = 2^{13}$.

Finally, as a by-product, we also show the existence of a garbled RAM scheme assuming only one-way functions, with an amortized cost of $\widetilde{O}(\lambda^2 \cdot (W \log N + \log^3 N))$ per memory access. Again, the dependence on $N$ is nearly optimal for blocks of size $W = \Omega(\log^2 N)$ bits.
Arasu Arun, Joseph Bonneau, Jeremy Clark
ePrint Report
We introduce the short-lived proof, a non-interactive proof of knowledge with a novel feature: after a specified period of time, the proof is no longer convincing. This time-delayed loss of soundness happens "naturally" without further involvement from the prover or any third party. We propose formal definitions for short-lived proofs as well as the special case of short-lived signatures. We show several practical constructions built using verifiable delay functions (VDFs). The key idea in our approach is to allow any party to forge any proof by executing a large sequential computation. Some constructions achieve a stronger property called reusable forgeability in which one sequential computation allows forging an arbitrary number of proofs of different statements. Our work also introduces two novel types of VDFs, re-randomizable VDFs and zero-knowledge VDFs, which may be of independent interest.
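
Schematically, verification of a short-lived proof accepts a disjunction: a genuine proof of the statement, or the output of a long sequential computation seeded by some fresh public randomness (e.g., a beacon value). The sketch below is ours and purely illustrative; the toy sub-verifiers stand in for the paper's actual NIZK and VDF checks:

# Placeholder sub-verifiers (hypothetical names, trivial toy logic).
def verify_statement_proof(statement, proof, beacon_value):
    return proof.get("kind") == "real" and proof.get("stmt") == statement

def verify_sequential_forgery(proof, beacon_value, delay_T):
    return proof.get("kind") == "forged" and proof.get("steps", 0) >= delay_T

def verify_short_lived(statement, proof, beacon_value, delay_T):
    # Either branch convinces the verifier; since anyone can complete the
    # second branch after ~delay_T sequential steps, the proof stops being
    # convincing once that much time has elapsed since the beacon appeared.
    return (verify_statement_proof(statement, proof, beacon_value)
            or verify_sequential_forgery(proof, beacon_value, delay_T))

print(verify_short_lived("x in L", {"kind": "real", "stmt": "x in L"}, b"beacon", 1 << 20))
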
André Schrottenloher, Marc Stevens
ePrint Report
Meet-in-the-middle (MITM) is a general paradigm where internal states are computed along two independent paths ('forwards' and 'backwards') that are then matched. Over time, MITM attacks have improved through more refined techniques that exploit additional freedoms and structure, which makes it more involved to find and optimize such attacks. This has led to the use of detailed attack models for generic solvers to automatically search for improved attacks, notably a MILP model developed by Bao et al. at EUROCRYPT 2021. In this paper, we study a simpler MILP modeling that combines a greatly reduced attack representation, used as input to the generic solver, with a theoretical analysis that, for any solution, proves the existence and complexity of a detailed attack. This modeling allows us to find both classical and quantum attacks on a broad class of cryptographic permutations. First, Present-like constructions, with the permutations of the Spongent hash functions: we improve the MITM step in distinguishers by up to 3 rounds. Second, AES-like designs: despite being much simpler than Bao et al.'s, our model allows us to recover the best previous results. The only limitation is that we do not use degrees of freedom from the key schedule. Third, we show that the model can be extended to target more permutations, like Feistel networks. In this context we give new guess-and-determine attacks on reduced Simpira v2 and Sparkle. Finally, using our model, we find several new quantum preimage and pseudo-preimage attacks (e.g., on Haraka v2, Simpira v2, ...) targeting the same number of rounds as the classical attacks.
Thibauld Feneuil, Antoine Joux, Matthieu Rivain
ePrint Report
Zero-knowledge proofs of knowledge are useful tools for designing signature schemes. The ongoing effort to build a quantum computer urges the cryptography community to develop new secure cryptographic protocols based on quantum-hard cryptographic problems. One of the few directions is code-based cryptography, for which the strongest problem is syndrome decoding (SD) for random linear codes. This problem is known to be NP-hard, and the state of the art of its cryptanalysis has been stable for many years. A zero-knowledge protocol for this problem was pioneered by Stern in 1993. Since its publication, many articles have proposed optimizations, implementations, or variants.

In this paper, we introduce a new zero-knowledge proof for the syndrome decoding problem on random linear codes. Instead of using permutations like most of the existing protocols, we rely on the MPC-in-the-head paradigm, in which we reduce the task of proving the low Hamming weight of the SD solution to proving some relations between specific polynomials. Specifically, we propose a 5-round zero-knowledge protocol that proves knowledge of a vector $x$ such that $y = Hx$ and $\mathrm{wt}(x) \le w$, and that achieves a soundness error close to $1/N$ for an arbitrary $N$.
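
The relation being proven is easy to state concretely; the snippet below simply checks a witness for a toy binary SD instance (illustrative parameters, not the scheme's):

import numpy as np

rng = np.random.default_rng(1)
n, m, w = 16, 8, 3                      # toy code length, syndrome length, weight bound
H = rng.integers(0, 2, size=(m, n))     # parity-check matrix over F_2
x = np.zeros(n, dtype=int)
x[rng.choice(n, w, replace=False)] = 1  # candidate witness of Hamming weight w
y = H @ x % 2                           # its syndrome

assert (H @ x % 2 == y).all() and x.sum() <= w
print("witness satisfies y = Hx and wt(x) <= w")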

While turning this protocol into a signature scheme, we achieve a signature size of 11-12 KB for 128-bit security when relying on the hardness of the SD problem on binary fields. Using larger fields (like $\mathbb{F}_{2^8}$), we can produce fast signatures of around 8 KB. This allows us to outperform Picnic3 and to be competitive with SPHINCS+, both post-quantum signature candidates in the ongoing NIST standardization effort. Moreover, our scheme outperforms all the existing code-based signature schemes for the common ``signature size + public key size'' metric.
Sebastian Kolby, Divya Ravi, Sophia Yakoubov
ePrint Report
YOSO MPC (Gentry et al., Crypto 2021) is a new MPC framework where each participant can speak at most once. This models an adaptive adversary’s ability to watch the network and corrupt or destroy parties it deems significant based on their communication. By using private channels to anonymous receivers (e.g. by encrypting to a public key whose owner is unknown), the communication complexity of YOSO MPC can scale sublinearly with the total number $N$ of available parties, even when the adversary’s corruption threshold is linear in $N$ (e.g. just under $N/2$). It was previously an open problem whether YOSO MPC can achieve guaranteed output delivery in a constant number of rounds without relying on trusted setup. In this work, we show that this can indeed be accomplished. Using linearly homomorphic encryption and secret sharing, we construct YOSO-LHE, which is the first realistically efficient YOSO MPC protocol that achieves guaranteed output delivery without trusted setup. YOSO-LHE is not itself constant-round; it takes $O(d)$ rounds of communication, where $d$ is the depth of the circuit being computed. However, YOSO-LHE can be used to bootstrap any constant-round YOSO protocol that requires setup, by generating that setup within YOSO-LHE. As long as the complexity of the setup is independent of the circuit to be evaluated, the bootstrapped protocol will be constant-round.
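
To illustrate the ``linearly homomorphic'' ingredient, here is a toy Paillier-style scheme in which multiplying ciphertexts adds the underlying plaintexts (textbook Paillier with insecure toy parameters; the paper does not prescribe this particular scheme):

from math import gcd
import random

p, q = 1009, 1013                             # toy primes (insecure sizes)
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)                          # precomputed decryption constant

def enc(m):
    r = random.randrange(2, n)                # randomness; gcd(r, n) = 1 w.h.p.
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

a, b = enc(20), enc(22)
print("dec(enc(20) * enc(22)) =", dec(a * b % n2))  # 42: the plaintexts added up
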
Seunghwan Lee, Dong-Joon Shin
ePrint Report
We propose a floating-point fully homomorphic encryption (FPFHE) scheme based on torus fully homomorphic encryption equipped with programmable bootstrapping. Specifically, FPFHE is implemented for 32-bit and 64-bit floating-point messages, which is state-of-the-art in terms of precision. A ciphertext is also constructed to check whether an overflow occurred while evaluating arithmetic circuits with FPFHE, which is useful when the message space or arithmetic circuit is too complex to estimate a bound on the outputs, as in deep learning applications.
Nir Bitansky, Sapir Freizeit
ePrint Report
We construct a two-message oblivious transfer protocol with statistical sender privacy (SSP OT) based on the Learning Parity with Noise (LPN) assumption and a standard Nisan-Wigderson-style derandomization assumption. Beyond being of interest on their own, SSP OT protocols have proven to be a powerful tool toward minimizing the round complexity in a wide array of cryptographic applications, from proof systems, through secure computation protocols, to hard problems in statistical zero knowledge (SZK). The protocol is plausibly post-quantum secure. The only other construction with plausible post-quantum security is that of Brakerski and Döttling (TCC '18), based on the Learning with Errors (LWE) assumption. Lacking the geometric structure of LWE, our construction and analysis rely on a different set of techniques. Technically, we first construct an SSP OT protocol in the common random string model from LPN alone, and then derandomize the common random string. Most of the technical difficulty lies in the first step. Here we prove a robustness property of the inner-product randomness extractor against a certain type of linear splitting attacks. A caveat of our construction is that it relies on the so-called low-noise regime of LPN. This aligns with our current complexity-theoretic understanding of LPN, which is known to imply hardness in SZK only in the low-noise regime.
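
For concreteness, a low-noise LPN instance of the kind the construction relies on can be sampled as follows (toy dimensions; in the low-noise regime the Bernoulli rate is on the order of $1/\sqrt{n}$):

import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 512                        # secret length and number of samples (toy sizes)
tau = 1 / n ** 0.5                     # low-noise Bernoulli rate ~ n^{-1/2}

s = rng.integers(0, 2, n)              # secret vector over F_2
A = rng.integers(0, 2, (m, n))         # public random matrix
e = (rng.random(m) < tau).astype(int)  # sparse Bernoulli(tau) noise
b = (A @ s + e) % 2                    # LPN samples: distinguish (A, b) from uniform

print(f"noise weight: {int(e.sum())} out of {m} samples")
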
Jian Guo, Guozhen Liu, Ling Song, Yi Tu
ePrint Report
In this work, we focus on collision attacks against instances of the SHA-3 hash family in both classical and quantum settings. Since the 5-round collision attacks on SHA3-256 and other variants proposed by Guo et al. at JoC 2020, no other essential progress has been published. With a thorough investigation, we identify that the challenge in extending such collision attacks on SHA-3 to more rounds lies in the inefficiency of differential trail search. To overcome this obstacle, we develop a SAT automatic search toolkit. The tool is used in multiple intermediate steps of the collision attacks and exhibits surprisingly high efficiency in differential trail search and other optimization problems encountered in the process. As a result, we present the first 6-round classical collision attack on SHAKE128 with time complexity $2^{123.5}$, which also forms a quantum collision attack with quantum time $2^{67.25}/\sqrt{S}$, and the first 6-round quantum collision attacks on SHA3-224 and SHA3-256 with quantum time $2^{97.75}/\sqrt{S}$ and $2^{104.25}/\sqrt{S}$ respectively, where $S$ denotes the hardware resources of the quantum computer; all attacks have negligible requirements of classical and quantum memory. The fact that classical collision attacks do not apply to 6-round SHA3-224 and SHA3-256 shows the higher coverage of quantum collision attacks, which is consistent with that on SHA-2 observed by Hosoyamada and Sasaki at CRYPTO 2021.
Liu Zhang, Zilong Wang, Boyang Wang
ePrint Report
In CRYPTO 2019, Gohr proposed a new cryptanalysis strategy using machine learning algorithms. Combining a neural distinguisher with a differential path and an advanced key recovery procedure, Gohr achieved a 12-round key recovery attack on Speck32/64. Chen and Yu improved the accuracy of the neural distinguisher by deriving features from multiple-ciphertext pairs instead of single-ciphertext pairs. Bao et al. presented the concept of generalized neutral bits, improved the 12-round attack, and devised a practical 13-round key recovery attack on Speck32/64 by enhancing the differential path prepended on top of the neural distinguisher. To capture more dimensional information, inspired by the Inception blocks in GoogLeNet, we use multiple parallel convolutional layers as the core of the network structure to train neural distinguishers. For Speck32/64, we improve the accuracy of the (5-8)-round neural distinguishers and train a new 9-round neural distinguisher. For Simon32/64, we improve the accuracy of the (7-11)-round neural distinguishers and train a new 12-round neural distinguisher. In addition, we extend the idea of neutral bits in Gohr's work to satisfy the requirement that the multiple-plaintext pairs in the key recovery attack follow the same distribution. Under the combined effect of these improvements, the time complexity of our (11-13)-round key recovery attacks on Speck32/64 is reduced. Notably, the success rate of our 12-round key recovery attack reaches $100\%$. For Simon32/64, we increase the success rate and decrease the time complexity of the 16-round key recovery attack, and we also successfully implement a 17-round key recovery attack. However, because the accuracy of the 9-round neural distinguisher for Speck32/64 and the 12-round neural distinguisher for Simon32/64 is lower, they cannot be used for key recovery attacks on more rounds. The source code is available at https://drive.google.com/drive/folders/17AWYupor_bX2rEe000tPuppdNH4dlmHm?usp=sharing.
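
The architectural change described above boils down to running several convolutions with different kernel sizes in parallel over the same input and concatenating their feature maps, in the spirit of GoogLeNet's Inception block. A minimal PyTorch sketch (layer sizes are ours, not the paper's):

import torch
import torch.nn as nn

class ParallelConvBlock(nn.Module):
    """Several 1-D convolutions with different kernel sizes run in parallel;
    their feature maps are concatenated along the channel dimension."""
    def __init__(self, in_ch=1, out_ch=32, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes]
        )

    def forward(self, x):
        return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)

# Ciphertext pairs encoded as bit sequences: batch of 64, 1 channel, 64 bits
x = torch.randint(0, 2, (64, 1, 64)).float()
print(ParallelConvBlock()(x).shape)  # torch.Size([64, 128, 64])
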
Si Gao, Elisabeth Oswald
ePrint Report
Non-specific leakage detection (initially introduced as “Test Vector Leakage Assessment”, TVLA for short) plays a vital role in practice because it detects (potential) leaks independently of assumptions about the leakage model and any specific attack vector. However, its non-specific nature means detected leaks might not be exploitable, and thus the current state of the art is to employ a battery of specific attacks to confirm the detection outcomes.

We propose a novel leakage assessment framework which makes it possible to link non-specific leakage detection outcomes with the size of the key guess necessary to exploit them. We thereby solve the problem of deciding whether or not a leak is exploitable without the need for specific attacks. Our methodology furthermore reveals, for a detected leak, the specific key bytes involved, and with that it allows the construction of confirmatory attacks. This novel approach is enabled by casting the leakage detection problem as the statistical task of building key-dependent regression models: if such a model exists, then we know that the point leaks. Depending on the size and nature of the model, we can further judge the exploitability and provide a concrete attack vector.
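
A toy version of the key-dependent regression idea, on synthetic traces (the helper construction and parameters are ours, greatly simplified from the paper's framework): for each key guess, regress the measured traces on a model derived from that guess and keep the guess whose model fits best.

import numpy as np

rng = np.random.default_rng(2)
true_key = 0x3C
pt = rng.integers(0, 256, 2000)                          # random plaintext bytes
hw = np.array([bin(v).count("1") for v in range(256)])   # Hamming-weight table
traces = hw[pt ^ true_key] + rng.normal(0, 1, pt.size)   # leakage = HW + noise

scores = []
for guess in range(256):
    x = hw[pt ^ guess].astype(float)                     # key-dependent predictor
    X = np.column_stack([np.ones_like(x), x])            # intercept + slope model
    _, res, *_ = np.linalg.lstsq(X, traces, rcond=None)
    scores.append(res[0] if res.size else float("inf"))  # residual sum of squares
print(f"best key guess: {int(np.argmin(scores)):#04x}")  # expect 0x3c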