International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

15 October 2019

Vipul Goyal, Silas Richelson
ePrint Report
We give the first construction of three-round non-malleable commitments from the almost minimal assumption of injective one-way functions. Combined with the lower bound of Pass (TCC 2013), our result is almost the best possible w.r.t. standard polynomial-time hardness assumptions (at least w.r.t. black-box reductions). Our results rely on a novel technique which we call bidirectional Goldreich-Levin extraction.

Along the way, we also obtain the first rewind secure delayed-input witness indistinguishable (WI) proofs from only injective one-way functions. We also obtain the first construction of a distributionally extractable commitment scheme from injective one-way functions. We believe both of these to be of independent interest. In particular, as a direct corollary of our rewind secure WI construction, we are able to obtain a construction of 3-round promise zero-knowledge from only injective one-way functions.
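As background for the Goldreich-Levin extraction mentioned above, the sketch below shows the classical Goldreich-Levin hardcore predicate and the textbook bit commitment it yields from an injective one-way function. This is only an illustration of the classical tool, not of the paper's bidirectional technique, and the stand-in function used to make it runnable is a placeholder rather than a real one-way function.

```python
# Minimal sketch of the classical Goldreich-Levin hardcore predicate and the
# textbook bit commitment built from it; background only, NOT the paper's
# bidirectional extraction technique.
import secrets

def gl_predicate(x_bits, r_bits):
    """Inner product <x, r> over GF(2): the GL hardcore bit of x w.r.t. r."""
    assert len(x_bits) == len(r_bits)
    return sum(a & b for a, b in zip(x_bits, r_bits)) & 1

def commit_bit(f, x_bits, r_bits, b):
    """Toy bit commitment: publish (f(x), r, b XOR <x, r>).
    Hiding rests on f being one-way, binding on f being injective."""
    return f(x_bits), tuple(r_bits), b ^ gl_predicate(x_bits, r_bits)

# Example with a stand-in function (the identity is NOT one-way; it only
# makes the sketch runnable).
n = 8
x = [secrets.randbelow(2) for _ in range(n)]
r = [secrets.randbelow(2) for _ in range(n)]
print(commit_bit(lambda bits: tuple(bits), x, r, 1))
```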
Michel Abdalla, Manuel Barbosa
ePrint Report
SPAKE2 is a balanced password-authenticated key exchange (PAKE) protocol, proposed by Abdalla and Pointcheval at CT-RSA 2005. Due to its simplicity and efficiency, SPAKE2 is one of the balanced PAKE candidates currently under consideration for standardization by the CFRG, together with SPEKE, CPace, and J-PAKE. In this paper, we show that SPAKE2 achieves perfect forward security in the random-oracle model under the Gap Diffie-Hellman assumption. Unlike prior results, which either did not consider forward security or only proved a weak form of it, our results guarantee the security of the derived keys even for sessions that were created with the active involvement of the attacker, as long as the parties involved in the protocol are not corrupted when these sessions take place. Finally, our proofs also demonstrate that SPAKE2 is flexible with respect to the generation of its global parameters M and N. This includes the case where M is a uniform group element and M=N, as well as the case where M and N are chosen as outputs of a random oracle.
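As a rough illustration of where the global parameters M and N enter, here is a toy sketch of the SPAKE2 message flow over a small multiplicative group. The group, the parameter choices, and the key-derivation hash are illustrative only and do not follow the CFRG instantiation.

```python
# Toy sketch of the SPAKE2 message flow; parameters and hashing here are
# illustrative, not the standardized instantiation.
import hashlib
import secrets

p = 2**127 - 1          # toy Mersenne prime; real deployments use standardized groups
g = 3                   # assumed generator for this toy group
M = pow(g, 17, p)       # in practice M and N are fixed, verifiably generated elements
N = pow(g, 23, p)

def h_int(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

pw = h_int("shared-password") % (p - 1)

# Party A masks g^x with M^pw
x = secrets.randbelow(p - 1)
X_star = (pow(g, x, p) * pow(M, pw, p)) % p    # A -> B

# Party B masks g^y with N^pw
y = secrets.randbelow(p - 1)
Y_star = (pow(g, y, p) * pow(N, pw, p)) % p    # B -> A

# Both sides unmask the peer's message and derive the same secret g^{xy}
K_A = pow(Y_star * pow(pow(N, pw, p), -1, p) % p, x, p)
K_B = pow(X_star * pow(pow(M, pw, p), -1, p) % p, y, p)
assert K_A == K_B
session_key = hashlib.sha256(str((X_star, Y_star, pw, K_A)).encode()).digest()
```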
Panagiotis Grontas, Aris Pagourtzis, Alexandros Zacharakis
ePrint Report
We propose security models for everlasting privacy, a property that protects the content of the votes cast in electronic elections against future and powerful adversaries. Initially, everlasting privacy was treated synonymously with information-theoretic privacy and did not take into account the information available to the adversary or the adversary's behavior during or after the election. More recent works provided variations of the concept, limiting the view of the future adversary to publicly available data. We consider an adversary that potentially has insider access to private election data as well. We formally express our adversarial model in game-based definitions built on top of a generic voting scheme. This allows us to define a stronger version of everlasting privacy and to contrast the two main proposals for achieving it, namely perfectly hiding commitment schemes and anonymous channels.
Daniel Berend, Dor Bitan, Shlomi Dolev
ePrint Report
One of the most interesting research topics in cryptography is finding efficient homomorphic encryption schemes, preferably information-theoretically secure ones that are not based on unproven computational hardness assumptions. The most significant breakthrough in this field was made by Craig Gentry in 2009, and there have been various developments since then.

We suggest here an information-theoretically secure secret sharing scheme that efficiently supports one homomorphic multiplication of secrets in addition to homomorphic additions of, practically, any number of such multiplied secrets. In particular, our scheme enables sharing a dynamic set of secrets amongst N participants, using polynomials of degree N-1. Quadratic functions and 2-CNF circuits over the set of secrets can then be homomorphically evaluated, while no information is revealed to any single participant, both before and after the computation. Our scheme is statistically secure against coalitions of fewer than N-1 participants. One possible application of our scheme is the secure homomorphic evaluation of multi-variate quadratic functions and 2-CNF circuits.
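For context on why a single homomorphic multiplication is the interesting case, the background sketch below shows standard Shamir sharing, where additions of shares are free but a multiplication doubles the polynomial degree. This is generic background, not the authors' construction.

```python
# Background sketch of standard Shamir sharing (NOT the authors' scheme):
# additions of shares keep the degree, one multiplication doubles it.
import secrets

P = 2**61 - 1  # prime field for the toy example

def share(secret, n, deg):
    """Shares = evaluations of a random degree-`deg` polynomial with constant term `secret`."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(deg)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at 0."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

n = 5
a_shares = share(10, n, deg=2)
b_shares = share(7, n, deg=2)
# Addition: add shares pointwise; degree stays 2, so any 3 shares reconstruct.
add_shares = [(x, (ya + yb) % P) for (x, ya), (_, yb) in zip(a_shares, b_shares)]
assert reconstruct(add_shares[:3]) == 17
# Multiplication: multiply shares pointwise; degree jumps to 4, so all 5 shares are needed.
mul_shares = [(x, (ya * yb) % P) for (x, ya), (_, yb) in zip(a_shares, b_shares)]
assert reconstruct(mul_shares) == 70
```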
Maura B. Paterson, Douglas R. Stinson
ePrint Report
In this paper, we show a "direct" equivalence between certain authentication codes and robust secret sharing schemes. It was previously known that authentication codes and robust secret sharing schemes are closely related to similar types of designs, but direct equivalences had not been considered in the literature. Our new equivalences motivate the consideration of a certain "key-substitution attack." We study this attack and analyze it in the setting of "dual authentication codes." We also show how this viewpoint provides a nice way to prove properties and generalizations of some known constructions.
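For readers less familiar with the objects involved, the sketch below shows a textbook unconditionally secure authentication code over a prime field, with a one-time key (a, b) and tag a*m + b mod p. It is included only as a classical example of an authentication code, not as one of the paper's constructions.

```python
# Textbook unconditionally secure authentication code over a prime field:
# tag(m) = a*m + b mod p with a one-time key (a, b). Background only.
import secrets

p = 2**61 - 1

def keygen():
    return secrets.randbelow(p), secrets.randbelow(p)   # one-time key (a, b)

def tag(key, message):
    a, b = key
    return (a * message + b) % p

def verify(key, message, t):
    return tag(key, message) == t

key = keygen()
m = 42
t = tag(key, m)
assert verify(key, m, t)
# Substituting a different message without the key succeeds with probability 1/p.
```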
Fulei Ji, Wentao Zhang, Tianyou Ding
ePrint Report
Automatic search methods have been widely used for the cryptanalysis of block ciphers, especially for the most classic cryptanalysis methods -- differential and linear cryptanalysis. However, the automatic search methods, whether based on MILP, SMT/SAT or CP techniques, can be inefficient when the search space is too large. In this paper, we improve Matsui's branch-and-bound search algorithm, which is known as the first generic algorithm for finding the best differential and linear trails, by proposing three new methods. The three methods, named Reconstructing DDT and LAT According to Weight, Executing Linear Layer Operations in Minimal Cost and Merging Two 4-bit S-boxes into One 8-bit S-box respectively, speed up the search by shrinking the search space as much as possible and by reducing the cost of executing linear layer operations. We apply our improved algorithm to DESL and GIFT, which remain hard instances for automatic search methods. As a result, we find the best differential trails for DESL (up to 14 rounds) and GIFT-128 (up to 19 rounds). The best linear trails for DESL (up to 16 rounds), GIFT-128 (up to 10 rounds) and GIFT-64 (up to 15 rounds) are also found. To the best of our knowledge, these security bounds for DESL and GIFT in the single-key scenario are given for the first time, and they correspond to the longest exploitable (differential or linear) trails for DESL and GIFT. Furthermore, benefiting from the efficiency of the improved algorithm, we carry out experiments demonstrating that the clustering effect of differential trails is weak for both 13-round DES and DESL.
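Of the three methods, "Merging Two 4-bit S-boxes into One 8-bit S-box" is the easiest to picture: precompute a combined 256-entry table so that one lookup processes a whole byte. The sketch below uses made-up 4-bit S-boxes, not those of DESL or GIFT, and only illustrates the merged-table idea; the paper's gains come from running the trail search over the merged table.

```python
# Merging two 4-bit S-boxes into one 8-bit S-box: one table access per byte.
# The S-boxes here are placeholders, not DESL's or GIFT's.
S0 = [(7 * x + 3) % 16 for x in range(16)]   # placeholder 4-bit S-box (bijective)
S1 = [(5 * x + 1) % 16 for x in range(16)]   # placeholder 4-bit S-box (bijective)

# Merged 8-bit table: high nibble through S1, low nibble through S0.
MERGED = [(S1[b >> 4] << 4) | S0[b & 0xF] for b in range(256)]

def apply_split(byte):
    return (S1[byte >> 4] << 4) | S0[byte & 0xF]

def apply_merged(byte):
    return MERGED[byte]

assert all(apply_split(b) == apply_merged(b) for b in range(256))
```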
Joël Alwen, Sandro Coretti, Yevgeniy Dodis, Yiannis Tselekounis
ePrint Report
Secure messaging (SM) protocols allow users to communicate securely over untrusted infrastructure. In contrast to most other secure communication protocols (such as TLS, SSH, or Wireguard), SM sessions may be long-lived (e.g., years) and highly asynchronous. In order to deal with likely state compromises of users during the lifetime of a session, SM protocols not only protect authenticity and privacy, but also guarantee forward secrecy (FS) and post-compromise security (PCS). The former ensures that messages sent and received before a state compromise remain secure, while the latter ensures that users can recover from state compromise as a consequence of normal protocol usage.

SM has received considerable attention in the two-party case, where prior work has studied the well-known double-ratchet paradigm in particular and SM as a cryptographic primitive in general. Unfortunately, this paradigm does not scale well to the problem of secure group messaging (SGM). In order to address the lack of satisfactory SGM protocols, the IETF has launched the message-layer security (MLS) working group, which aims to standardize an eponymous SGM protocol. In this work we analyze the TreeKEM protocol, which is at the core of the SGM protocol proposed by the MLS working group.

On a positive note, we show that TreeKEM achieves PCS in isolation (and slightly more). However, we observe that the current version of TreeKEM does not provide an adequate form of FS. More precisely, our work proceeds by formally capturing the exact security of TreeKEM as a so-called continuous group key agreement (CGKA) protocol, which we believe to be a primitive of independent interest. To address the insecurity of TreeKEM, we propose a simple modification to TreeKEM inspired by recent work of Jost et al. (EUROCRYPT '19) and an idea due to Kohbrok (MLS Mailing List). We then show that the modified version of TreeKEM comes with almost no efficiency degradation but achieves optimal (according to MLS specification) CGKA security, including FS and PCS. Our work also lays out how a CGKA protocol can be used to design a full SGM protocol.

Finally, we propose and motivate an extensive list of potential future research directions for the area.
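As a very rough picture of the mechanism analyzed here, the sketch below shows a hash-ratchet view of a TreeKEM-style path update: a member refreshes its leaf secret and derives new secrets up its direct path, with the root secret acting as the group key. The encryption of path secrets to the other members, which is the bulk of actual TreeKEM and of the MLS specification, is deliberately omitted.

```python
# Highly simplified sketch of a TreeKEM-style path update: fresh leaf
# randomness is ratcheted up the direct path and the root secret becomes the
# new group key. Encrypting path secrets to the rest of the group is omitted.
import hashlib
import secrets

def kdf(secret: bytes, label: str) -> bytes:
    return hashlib.sha256(label.encode() + secret).digest()

def path_update(depth: int):
    """Return the chain of secrets from a fresh leaf secret up to the root."""
    leaf_secret = secrets.token_bytes(32)      # fresh randomness enables post-compromise security
    path = [leaf_secret]
    for level in range(depth):
        path.append(kdf(path[-1], f"node-{level}"))
    return path  # path[-1] plays the role of the new root / group secret

old = path_update(depth=3)
new = path_update(depth=3)   # a later update from the same (or another) leaf
assert old[-1] != new[-1]    # fresh leaf randomness refreshes the group key
```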
Dario Pasquini, Ankit Gangwal, Giuseppe Ateniese, Massimo Bernaschi, Mauro Conti
ePrint Report
Learning useful representations from unstructured data is one of the core challenges, as well as a driving force, of modern data-driven approaches. Deep learning has demonstrated the broad advantages of learning and harnessing such representations. In this paper, we introduce a GAN-based representation learning approach for password guessing. We show that an abstract password representation naturally offers compelling and versatile properties that can be used to open new directions in the extensively studied, and yet presently active, password guessing field. These properties can establish novel password generation techniques that are neither feasible nor practical with the existing probabilistic and non-probabilistic approaches. Based on these properties, we introduce: (1) A framework for password guessing for practical scenarios where partial knowledge about target passwords is available and (2) an Expectation Maximization-inspired framework that can dynamically adapt the estimated password distribution to match the distribution of the attacked password set, leading to an optimal guessing strategy.
Orr Dunkelman, Léo Perrin
ePrint Report
While designers of cryptographic algorithms are rarely considered as potential adversaries, past examples, such as the standardization of the Dual EC PRNG, highlight that the story might be more complicated.

To prevent the existence of backdoors, the concept of rigidity was introduced in the specific context of curve generation. The idea is to first state a strict scope statement for the properties that the curve needs to have, and then to pick, for example, the candidate with the smallest parameters. The aim is to ensure that the designers did not have degrees of freedom that would allow the insertion of a trapdoor.

In this paper, we apply this approach to symmetric algorithms. The task is challenging because the corresponding primitives are more complex: they consist of several sub-components of different types, and the properties required by these sub-components to achieve the desired security level are not as clearly defined. Furthermore, security often comes in this case from the interplay between these components rather than from their individual properties.

In this paper, we argue that it is nevertheless necessary to demand that symmetric algorithms have a similar but, due to their different nature, more complex property which we call "unswervingness". We motivate this need via a study of the literature on symmetric "kleptography" and via the study of some real-world standards. We then suggest some guidelines that could be used to leverage the unswervingness of a symmetric algorithm to standardize a highly trusted and equally safe variant of it.
Mahabir Prasad Jhanwar, Pratyush Ranjan Tiwari
ePrint Report
Merkle-type trees are widely used to design cryptographic accumulators. The primary advantage of using Merkle trees for accumulators is that they only assume the existence of collision-resistant hash functions. Merkle-tree-based accumulators produce constant-size accumulation values, but the size of the witness is always logarithmic in the number of values accumulated, as opposed to the constant-size witnesses exhibited by some other popular accumulators that rely on number-theoretic techniques and problems. Surprisingly, there exists no Merkle-tree-based accumulator that provides a trade-off between accumulation size and witness size. Such a trade-off is warranted, as argued in this paper, in situations where witnesses are stored on memory-constrained devices and are being moved around continuously, as opposed to the accumulation values, which remain stationary, often on devices with moderate storage capacity. In this paper we propose a Merkle-tree-based accumulator scheme assuming only that collision-resistant hash functions exist. Our scheme allows witnesses whose size is in general strictly less than logarithmic in the number of values accumulated, and in some cases reduces to constant size. The cost of the trade-off is an increased accumulation size.
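For reference, the sketch below shows the standard Merkle-tree accumulator that the paper starts from: the root is the constant-size accumulation value and a membership witness is the logarithmic-size authentication path. The trade-off construction proposed in the paper is not reproduced here.

```python
# Minimal Merkle-tree accumulator sketch: the root is the accumulation value,
# the witness is the authentication path (logarithmic in the set size).
import hashlib

def H(b):
    return hashlib.sha256(b).digest()

def build_levels(values):
    level = [H(v) for v in values]           # assume len(values) is a power of two
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def witness(levels, index):
    path = []
    for level in levels[:-1]:
        sibling = index ^ 1
        path.append((level[sibling], index & 1))  # (sibling hash, am I the right child?)
        index >>= 1
    return path

def verify(root, value, path):
    node = H(value)
    for sibling, is_right in path:
        node = H(sibling + node) if is_right else H(node + sibling)
    return node == root

values = [b"a", b"b", b"c", b"d"]
levels = build_levels(values)
root = levels[-1][0]                         # constant-size accumulation value
assert verify(root, b"c", witness(levels, 2))
```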
David Butler, Andreas Lochbihler, David Aspinall, Adria Gascon
ePrint Report
Machine-checked proofs of security are important for increasing the rigour of provable security. In this work we present a formalised theory of two fundamental two-party cryptographic primitives: $\Sigma$-protocols and commitment schemes. $\Sigma$-protocols allow a prover to convince a verifier that they possess some knowledge without leaking information about that knowledge. Commitment schemes allow a committer to commit to a message and keep it secret until revealing it at a later time.

We use CryptHOL to formalise both primitives and prove secure multiple examples, namely the Schnorr, Chaum-Pedersen and Okamoto $\Sigma$-protocols, a construction that allows for compound (AND and OR) $\Sigma$-protocols, and the Pedersen and Rivest commitment schemes.

A highlight of the work is a formalisation of the construction of commitment schemes from $\Sigma$-protocols. We formalise this proof at an abstract level using the modularity available in Isabelle/HOL and CryptHOL. This way, the proofs of the instantiations come for free.
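For concreteness, here is a plain (non-formalised) toy run of the Schnorr $\Sigma$-protocol, one of the examples verified in the paper: the prover shows knowledge of x with h = g^x. The parameters are tiny and the run is honest only; the paper's machine-checked treatment in Isabelle/CryptHOL is of course far more rigorous.

```python
# Toy honest run of the Schnorr Sigma-protocol with tiny parameters.
import secrets

q = 1019                 # prime subgroup order
p = 2 * q + 1            # 2039, a safe prime
g = pow(2, 2, p)         # element of the order-q subgroup (assumed generator here)

x = secrets.randbelow(q)          # prover's witness
h = pow(g, x, p)                  # public statement: h = g^x

# Commit
r = secrets.randbelow(q)
a = pow(g, r, p)
# Challenge (chosen by the verifier)
e = secrets.randbelow(q)
# Respond
z = (r + e * x) % q
# Verify: g^z == a * h^e
assert pow(g, z, p) == (a * pow(h, e, p)) % p
```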
Andrey Jivsov
ePrint Report
This work provides an instantiation of the Bulletproof zero-knowledge algorithm over fields of integers modulo a prime. The primary motivation for this work is to help readers understand the steps of the Bulletproof protocol.
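As a pointer to the kind of object the protocol is built on, the sketch below gives a minimal Pedersen commitment over a prime-order subgroup, the commitment underlying Bulletproof range proofs and inner-product arguments. The parameters are toy-sized and the second generator is derived with a known exponent purely for runnability, which would break binding in practice.

```python
# Minimal Pedersen commitment over a prime-order subgroup (toy parameters).
import secrets

p, q = 2039, 1019           # safe prime p = 2q + 1 (toy sizes)
g = 4                       # order-q element
h = pow(g, 123, p)          # in real use, log_g(h) must be unknown (nothing-up-my-sleeve)

def commit(value, blinding):
    return (pow(g, value % q, p) * pow(h, blinding % q, p)) % p

v = 57
r = secrets.randbelow(q)
C = commit(v, r)
# Additively homomorphic: commitments to v1 and v2 multiply to a commitment to v1 + v2.
assert (commit(5, 1) * commit(7, 2)) % p == commit(12, 3)
```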
Ran Cohen, Juan Garay, Vassilis Zikas
ePrint Report
An intensive effort by the cryptographic community to minimize the round complexity of secure multi-party computation (MPC) has recently led to optimal two-round protocols from minimal assumptions. Most of the proposed solutions, however, make use of a broadcast channel in every round, and it is unclear whether the broadcast channel can be replaced by standard point-to-point communication in a round-preserving manner, and if so, at what cost to the resulting security.

In this work, we provide a complete characterization of the trade-off between number of broadcast rounds and achievable security level for two-round MPC tolerating arbitrarily many active corruptions. Specifically, we consider all possible combinations of broadcast and point-to-point rounds against the three standard levels of security for maliciously secure MPC protocols, namely, security with identifiable, unanimous, and selective abort. For each of these notions and each combination of broadcast and point-to-point rounds, we provide either a tight feasibility or an infeasibility result of two-round MPC. Our feasibility results hold assuming two-round OT in the CRS model, whereas our impossibility results hold given any correlated randomness.

14 October 2019

Zagreb, Croatia, 9 May - 10 May 2020
Event Calendar
Event date: 9 May to 10 May 2020
Submission deadline: 25 January 2020
Notification: 4 March 2020

10 October 2019

Serge Fehr, Chen Yuan
ePrint Report
We show the first robust secret sharing scheme for a maximal threshold $t < n/2$ that features an optimal overhead in share size, offers security against a rushing adversary, and runs in polynomial time. Previous robust secret sharing schemes for $t < n/2$ either suffered from a suboptimal overhead, offered no (provable) security against a rushing adversary, or ran in superpolynomial time.
Myrto Arapinis, Mahshid Delavar, Mina Doosti, Elham Kashefi
ePrint Report
Physical Unclonable Functions (PUFs) are physical devices with unique behavior that are hard to clone. A variety of PUF schemes have been considered in theoretical studies as well as practical implementations of several security primitives such as identification and key generation. Recently, the inherent unclonability of quantum states has been exploited for defining (a partial) quantum analogue to classical PUFs (against limited adversaries). There are also a few proposals for quantum implementations of classical optical PUFs. However, none of these attempts provides a comprehensive study of Quantum Physical Unclonable Functions (QPUFs) with quantum cryptographic tools, as we present in this paper. We formally define QPUFs, encapsulating all requirements of classical PUFs as well as introducing new ones inherent to the quantum setting, such as testability. We develop a quantum game-based security framework for our analysis and define a new class of quantum attacks, called General Quantum Emulation Attacks. This class of attacks exploits previously captured valid challenge-response pairs to emulate the action of an unknown quantum transformation on a new input. We devise a concrete attack based on an existing quantum emulation algorithm and use it to show that a family of quantum cryptographic primitives relying on unknown unitary transformations does not provide existential unforgeability, while it does provide selective unforgeability. We then express these results for the case where the QPUF is an unknown unitary transformation.
Pierre-Alain Fouque, Paul Kirchner, Mehdi Tibouchi, Alexandre Wallet, Yang Yu
ePrint Report
In this paper, we initiate the study of side-channel leakage in hash-and-sign lattice-based signatures, with particular emphasis on the two efficient implementations of the original GPV lattice-trapdoor paradigm for signatures, namely NIST second-round candidate Falcon and its simpler predecessor DLP. Both of these schemes implement the GPV signature scheme over NTRU lattices, achieving great speed-ups over the general lattice case. Our results are mainly threefold.

First, we identify a specific source of side-channel leakage in most implementations of those schemes. Signing in lattice-based hash-and-sign schemes involves sampling a lattice point according to a Gaussian distribution. This reduces to sampling several one-dimensional discrete Gaussian distributions with standard deviations determined by the Gram–Schmidt norms of the secret lattice basis. Our observation is that those norms often leak through timing side-channels in the implementation of the one-dimensional Gaussian samplers.

Second, we elucidate the link between this leakage and the secret key, by showing that the entire secret key can be efficiently reconstructed solely from those Gram–Schmidt norms. The result makes heavy use of the algebraic structure of the corresponding schemes, which work over a power-of-two cyclotomic field. To establish it, we propose efficient algorithms of independent interest which, given the leading principal minors of the matrix associated to a totally positive field element (in the power basis for DLP and the bit-reversed order basis for Falcon) recover the element up to conjugation. In the case of those schemes, that element is $f\bar f + g\bar g$, where $(f,g)$ is the NTRU-style secret key. We then show that this element combined with the verification key suffices to recover the entire secret efficiently.

Third, we concretely demonstrate the side-channel attack against DLP. The challenge is that timing information only provides an approximation of the Gram–Schmidt norms (with an accuracy increasing with the number of traces), and our algebraic recovery technique needs to be combined with pruned tree search in order to apply it to approximated values. Experimentally, we show that around $2^{35}$ DLP traces are enough to reconstruct the entire key with good probability. Carrying out a similar experiment against Falcon is left as an open problem, however, since the recursive nature of our bit-reversed order recovery algorithm does not accommodate approximate inputs easily. Nevertheless, our results do underscore the importance of constant-time implementations, particularly for schemes using Gaussian sampling.
Ron Steinfeld, Amin Sakzad, Raymond K. Zhao
ePrint Report
Middle-Product Learning With Errors (MP-LWE) is a variant of the LWE problem introduced at CRYPTO 2017 by Rosca et al. [RSSS17]. Asymptotically, the theoretical results of [RSSS17] suggest that MP-LWE gives lattice-based public-key cryptosystems offering a ‘security-risk vs. efficiency’ trade-off: higher performance than cryptosystems based on unstructured lattices (LWE problem) and lower risk than cryptosystems based on structured lattices (Polynomial/Ring LWE problem). However, although promising in theory, [RSSS17] left the practical implications of MP-LWE for lattice-based cryptography unclear.

In this paper, we show how to build practical public-key cryptosystems with strong security guarantees based on MP-LWE. On the implementation side, we present optimised fast algorithms for computing the middle-product operation over polynomial rings $Z_q[x]$, the dominant computation for MP-LWE-based cryptosystems. On the security side, we show how to obtain a nearly tight security proof for MP-LWE from the hardest Polynomial LWE problem over a large family of rings, improving on the loose reduction of [RSSS17]. We also show and analyze an optimised cryptanalysis of MP-LWE that narrows the complexity gap to the above security proof. To evaluate the practicality of MP-LWE, we apply our results to construct, implement and optimise parameters for a practical MP-LWE-based public-key cryptosystem, Titanium, and compare its benchmarks to other lattice-based systems. Our results show that MP-LWE offers a new ‘security-risk vs. efficiency’ trade-off in lattice-based cryptography in practice, not only asymptotically in theory.
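The middle-product operation itself is easy to state: for polynomials a and b with d_a + d_b - 1 = d + 2k coefficients in total, it keeps coefficients k through k+d-1 of the product a*b modulo q. The sketch below uses naive schoolbook multiplication and follows the common indexing from Rosca et al.; it does not reflect the optimised algorithms developed in the paper.

```python
# Middle product of two polynomials over Z_q[x]: keep the middle d
# coefficients of a*b. Naive schoolbook multiplication, for illustration only.
def middle_product(a, b, d, q):
    da, db = len(a), len(b)
    assert (da + db - 1 - d) % 2 == 0
    k = (da + db - 1 - d) // 2
    full = [0] * (da + db - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            full[i + j] = (full[i + j] + ai * bj) % q
    return full[k:k + d]

# Example: d_a = 3, d_b = 5, so d = 3 gives k = 2.
print(middle_product([1, 2, 3], [1, 1, 1, 1, 1], d=3, q=97))   # -> [6, 6, 6]
```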
Gang Wang, Zhijie Jerry Shi, Mark Nixon, Song Han
ePrint Report
Blockchain is a distributed and decentralized ledger for recording transactions. It is maintained and shared among the participating nodes by utilizing cryptographic primitives. A consensus protocol ensures that all nodes agree on a unique order in which records are appended. However, current blockchain solutions are facing scalability issues. Many methods, such as off-chain and Directed Acyclic Graph (DAG) solutions, have been proposed to address the issue. However, they have inherent drawbacks, e.g., forming parasite chains. Performance, such as throughput and latency, is also important to a blockchain system. Sharding has emerged as a good candidate that can overcome both the scalability and performance problems in blockchain. To date, there is no systematic work that analyzes sharding protocols. To bridge this gap, this paper provides a systematic and comprehensive review of blockchain sharding techniques. We first present a general design flow of sharding protocols and then discuss key design challenges. For each challenge, we analyze and compare the techniques in state-of-the-art solutions. Finally, we discuss several potential research directions in blockchain sharding.
Mary Maller, Noah Vesely
ePrint Report
We present a new public-coin setup protocol for aggregating BLS signatures on distinct messages. For $n$ messages the verifier computes just $6$ pairings and $6(n+\log(n))$ exponentiations, an improvement on previous aggregate schemes in which the verifier computes $n+1$ pairings. Our aggregate signature is logarithmic in size. This result uses an $\textit{inner pairing product argument}$ of knowledge that can be used to prove membership in pairing-based languages.
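For comparison, the baseline being improved on is the standard aggregate-BLS check requiring n+1 pairings: e(g1, aggregated signature) must equal the product of e(pk_i, H(m_i)). The structural sketch below only shows that equation; the pairing, hash-to-group, and group-element types are placeholders for a real pairing library.

```python
# Structural sketch of baseline aggregate-BLS verification with n + 1 pairings.
# `pairing`, `hash_to_g2`, and `gt_mul` are placeholders, not a real library API.
from functools import reduce

def verify_aggregate(pairing, g1, hash_to_g2, gt_mul, pks, msgs, agg_sig):
    """Check e(g1, agg_sig) == prod_i e(pk_i, H(m_i)): n + 1 pairings in total."""
    lhs = pairing(g1, agg_sig)
    rhs = reduce(gt_mul, (pairing(pk, hash_to_g2(m)) for pk, m in zip(pks, msgs)))
    return lhs == rhs
```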