International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available via email and via RSS feed.

15 October 2019

Dario Pasquini, Ankit Gangwal, Giuseppe Ateniese, Massimo Bernaschi, Mauro Conti
ePrint Report
Learning useful representations from unstructured data is one of the core challenges, as well as a driving force, of modern data-driven approaches. Deep learning has demonstrated the broad advantages of learning and harnessing such representations. In this paper, we introduce a GAN-based representation learning approach for password guessing. We show that an abstract password representation naturally offers compelling and versatile properties that can open new directions in the extensively studied, and yet still active, password guessing field. These properties enable novel password generation techniques that are neither feasible nor practical with existing probabilistic and non-probabilistic approaches. Based on these properties, we introduce: (1) a framework for password guessing in practical scenarios where partial knowledge about the target passwords is available, and (2) an Expectation-Maximization-inspired framework that can dynamically adapt the estimated password distribution to match the distribution of the attacked password set, leading to an optimal guessing strategy.
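As a concrete illustration of the adaptive framework, below is a minimal Expectation-Maximization-style sketch over a latent password distribution. The generator function, latent dimension, and update rule are illustrative assumptions, not the paper's implementation:

    import numpy as np

    def em_adapt_guessing(generate, target_set, dim=128, batch=1000, rounds=10):
        """EM-flavored latent adaptation (sketch): re-fit the latent prior on
        the latents whose generated guesses hit the attacked password set.
        'generate' is a hypothetical pretrained latent-to-password decoder."""
        mu, sigma = np.zeros(dim), np.ones(dim)
        cracked = set()
        for _ in range(rounds):
            z = mu + sigma * np.random.randn(batch, dim)  # E-step: sample latents
            guesses = [generate(zi) for zi in z]
            hits = [i for i, g in enumerate(guesses) if g in target_set]
            cracked.update(guesses[i] for i in hits)
            if hits:                                      # M-step: re-fit the prior
                mu = z[hits].mean(axis=0)
                sigma = z[hits].std(axis=0) + 1e-3
        return cracked

Each round concentrates the sampling distribution around latent regions that produced successful guesses, mirroring the dynamic adaptation described above.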
Orr Dunkelman, Léo Perrin
ePrint Report
While designers of cryptographic algorithms are rarely considered as potential adversaries, past examples, such as the standardization of the Dual EC PRNG, highlight that the story can be more complicated.

To prevent the existence of backdoors, the concept of rigidity was introduced in the specific context of curve generation. The idea is to first commit to a strict scope statement for the properties that the curve needs to have, and then to pick, e.g., the candidate with the smallest parameters. The aim is to ensure that the designers did not have the degrees of freedom that would allow the insertion of a trapdoor.
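A minimal sketch of the rigidity idea, with a toy scope statement (the criteria and parameter range below are hypothetical):

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def rigid_pick(candidates, criteria):
        """Scan candidates in a fixed, public order and return the first one
        meeting every pre-committed criterion: no hidden freedom of choice."""
        for params in candidates:
            if all(check(params) for check in criteria):
                return params
        raise ValueError("no candidate satisfies the stated criteria")

    # toy scope statement: "the smallest prime p > 2^16 with p = 3 (mod 4)"
    print(rigid_pick(range(2**16 + 1, 2**17), [is_prime, lambda p: p % 4 == 3]))

Because the order and the criteria are fixed before the search, anyone can re-run the selection and confirm that no other choice was available to the designer.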

In this paper, we apply this approach to symmetric algorithms. The task is challenging because the corresponding primitives are more complex: they consist of several sub-components of different types, and the properties required of these sub-components to achieve the desired security level are not as clearly defined. Furthermore, security in this case often comes from the interplay between these components rather than from their individual properties.

In this paper, we argue that it is nevertheless necessary to demand that symmetric algorithms have a similar but, due to their different nature, more complex property which we call "unswervingness". We motivate this need via a study of the literature on symmetric "kleptography" and via the study of some real-world standards. We then suggest some guidelines that could be used to leverage the unswervingness of a symmetric algorithm to standardize a highly trusted and equally safe variant of it.
Mahabir Prasad Jhanwar, Pratyush Ranjan Tiwari
ePrint Report
Merkle-type trees are widely used to design cryptographic accumulators. The primary advantage of using Merkle trees for accumulators is that they only assume the existence of collision-resistant hash functions. Merkle tree based accumulators produce constant-size accumulation values, but the size of the witness is always logarithmic in the number of accumulated values, as opposed to the constant-size witnesses exhibited by some other popular accumulators built from number-theoretic techniques and assumptions. Surprisingly, there exists no Merkle tree based accumulator that provides a trade-off between accumulation size and witness size. Such a trade-off is warranted, as argued in this paper, in situations where witnesses are stored on memory-constrained devices and are moved around continuously, as opposed to the accumulation values, which remain stationary, often on devices with moderate storage capacity. In this paper we propose a Merkle-tree-based accumulator scheme assuming only that collision-resistant hash functions exist. Our scheme allows witnesses whose size is in general strictly less than logarithmic in the number of accumulated values, and in some cases constant. The cost of the trade-off is an increased accumulation size.
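For reference, here is a minimal sketch of the classical baseline that this paper generalizes: the accumulation value is the Merkle root (constant size) and a membership witness is the authentication path (logarithmic size). This is the standard construction, not the new trade-off scheme:

    import hashlib

    def h(x):
        return hashlib.sha256(x).digest()

    def build_tree(leaves):
        """Bottom-up Merkle tree; the accumulation value is levels[-1][0]."""
        levels = [[h(v) for v in leaves]]
        while len(levels[-1]) > 1:
            lvl = levels[-1]
            if len(lvl) % 2:
                lvl = lvl + [lvl[-1]]          # duplicate last node if odd
            levels.append([h(l + r) for l, r in zip(lvl[::2], lvl[1::2])])
        return levels

    def witness(levels, i):
        """Authentication path for leaf i: one sibling per level (log n hashes)."""
        path = []
        for lvl in levels[:-1]:
            if len(lvl) % 2:
                lvl = lvl + [lvl[-1]]
            path.append(lvl[i ^ 1])            # sibling of the current node
            i //= 2
        return path

    def verify(root, value, i, path):
        node = h(value)
        for sib in path:
            node = h(node + sib) if i % 2 == 0 else h(sib + node)
            i //= 2
        return node == root

    leaves = [b"a", b"b", b"c", b"d"]
    levels = build_tree(leaves)
    assert verify(levels[-1][0], b"c", 2, witness(levels, 2))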
David Butler, Andreas Lochbihler, David Aspinall, Adria Gascon
ePrint Report
Machine-checked proofs of security are important to increase the rigour of provable security. In this work we present a formalised theory of two fundamental two-party cryptographic primitives: $\Sigma$-protocols and commitment schemes. $\Sigma$-protocols allow a prover to convince a verifier that they possess some knowledge without leaking information about it. Commitment schemes allow a committer to commit to a message and keep it secret until revealing it at a later time.

We use CryptHOL to formalise both primitives and prove secure multiple instantiations: the Schnorr, Chaum-Pedersen and Okamoto $\Sigma$-protocols, a construction for compound (AND and OR) $\Sigma$-protocols, and the Pedersen and Rivest commitment schemes.
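For orientation, a minimal sketch of the first of these instantiations, the Schnorr $\Sigma$-protocol, over a toy group (the parameters below are illustrative only and far too small for real security):

    import secrets

    # toy Schnorr group: p prime, q = 103 divides p - 1 = 2266, and
    # g = 2^((p-1)/q) mod p has order q
    p, q, g = 2267, 103, 354

    def schnorr_run(x):
        """One honest run proving knowledge of x = log_g(h) mod p."""
        h = pow(g, x, p)                  # public statement
        r = secrets.randbelow(q)          # prover: random commitment
        a = pow(g, r, p)
        e = secrets.randbelow(q)          # verifier: random challenge
        z = (r + e * x) % q               # prover: response
        return pow(g, z, p) == (a * pow(h, e, p)) % p   # verifier's check

    assert schnorr_run(secrets.randbelow(q))

Completeness follows from $g^z = g^{r+ex} = a \cdot h^e$; special soundness and honest-verifier zero-knowledge are the properties the formalisation machine-checks.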

A highlight of the work is a formalisation of the construction of commitment schemes from $\Sigma$-protocols. We formalise this proof at an abstract level using the modularity available in Isabelle/HOL and CryptHOL. This way, the proofs of the instantiations come for free.
Andrey Jivsov
ePrint Report
This work provides an instantiation of the Bulletproofs zero-knowledge protocol over fields of prime order, i.e., with arithmetic modulo a prime number. The primary motivation for this work is to help readers understand the steps of the Bulletproofs protocol.
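To give a flavour of the protocol's core step, here is a sketch of one inner-product folding round over $\mathbb{Z}_p$, on scalars only; the actual protocol folds Pedersen commitments alongside the vectors, which is omitted here:

    # requires Python 3.8+ for pow(x, -1, p)
    p = 2**255 - 19                      # any large prime works for the arithmetic

    def ip(a, b):
        return sum(x * y for x, y in zip(a, b)) % p

    def fold(a, b, x):
        """Halve the vectors; the cross terms L, R let the verifier track <a,b>."""
        n = len(a) // 2
        xi = pow(x, -1, p)
        aL, aR, bL, bR = a[:n], a[n:], b[:n], b[n:]
        a2 = [(l * x + r * xi) % p for l, r in zip(aL, aR)]
        b2 = [(l * xi + r * x) % p for l, r in zip(bL, bR)]
        return a2, b2, ip(aL, bR), ip(aR, bL)

    a, b, x = [3, 1, 4, 1], [2, 7, 1, 8], 5
    a2, b2, L, R = fold(a, b, x)
    assert ip(a2, b2) == (ip(a, b) + L * pow(x, 2, p) + R * pow(x, -2, p)) % p

Repeating the round log(n) times reduces an n-dimensional inner-product claim to a single scalar equation, which is what keeps Bulletproofs proofs logarithmic in size.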
Ran Cohen, Juan Garay, Vassilis Zikas
ePrint Report
An intensive effort by the cryptographic community to minimize the round complexity of secure multi-party computation (MPC) has recently led to optimal two-round protocols from minimal assumptions. Most of the proposed solutions, however, make use of a broadcast channel in every round, and it is unclear whether the broadcast channel can be replaced by standard point-to-point communication in a round-preserving manner, and if so, at what cost to the resulting security.

In this work, we provide a complete characterization of the trade-off between number of broadcast rounds and achievable security level for two-round MPC tolerating arbitrarily many active corruptions. Specifically, we consider all possible combinations of broadcast and point-to-point rounds against the three standard levels of security for maliciously secure MPC protocols, namely, security with identifiable, unanimous, and selective abort. For each of these notions and each combination of broadcast and point-to-point rounds, we provide either a tight feasibility or an infeasibility result of two-round MPC. Our feasibility results hold assuming two-round OT in the CRS model, whereas our impossibility results hold given any correlated randomness.

14 October 2019

Zagreb, Croatia, 9 May - 10 May 2020
Event Calendar
Event date: 9 May to 10 May 2020
Submission deadline: 25 January 2020
Notification: 4 March 2020

10 October 2019

Serge Fehr, Chen Yuan
ePrint Report
We show the first robust secret sharing scheme for a maximal threshold $t < n/2$ that features an optimal overhead in share size, offers security against a rushing adversary, and runs in polynomial time. Previous robust secret sharing schemes for $t < n/2$ either suffered from a suboptimal overhead, offered no (provable) security against a rushing adversary, or ran in superpolynomial time.
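For context, robustness means that reconstruction succeeds even when up to $t$ of the $n$ shares are adversarially modified. The sketch below shows only the plain (non-robust) Shamir baseline that such schemes strengthen; parameters are illustrative:

    import secrets

    # requires Python 3.8+ for pow(den, -1, p)
    p = 2**127 - 1                          # a Mersenne prime; any prime > n works

    def share(secret, t, n):
        """Shamir sharing: a random degree-t polynomial f with f(0) = secret;
        share i is (i, f(i)). Any t+1 shares reconstruct; t reveal nothing."""
        coeffs = [secret] + [secrets.randbelow(p) for _ in range(t)]
        f = lambda x: sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p
        return [(i, f(i)) for i in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at 0, assuming all provided shares are honest."""
        s = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % p
                    den = den * (xi - xj) % p
            s = (s + yi * num * pow(den, -1, p)) % p
        return s

    shs = share(42, t=2, n=5)
    assert reconstruct(shs[:3]) == 42       # t+1 = 3 shares suffice

The robust setting replaces the honesty assumption on shares with error correction, which is where the overhead in share size comes from.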
Myrto Arapinis, Mahshid Delavar, Mina Doosti, Elham Kashefi
ePrint Report
Physical Unclonable Functions (PUFs) are physical devices with unique behavior that are hard to clone. A variety of PUF schemes have been considered in theoretical studies as well as practical implementations of several security primitives such as identification and key generation. Recently, the inherent unclonability of quantum states has been exploited to define a (partial) quantum analogue of classical PUFs (against limited adversaries). There are also a few proposals for quantum implementations of classical optical PUFs. However, none of these attempts provides a comprehensive study of Quantum Physical Unclonable Functions (QPUFs) with quantum cryptographic tools as we present in this paper. We formally define QPUFs, encapsulating all requirements of classical PUFs as well as introducing new ones inherent to the quantum setting, such as testability. We develop a quantum game-based security framework for our analysis and define a new class of quantum attacks, called General Quantum Emulation Attacks. This class of attacks exploits previously captured valid challenge-response pairs to emulate the action of an unknown quantum transformation on a new input. We devise a concrete attack based on an existing quantum emulation algorithm and use it to show that a family of quantum cryptographic primitives relying on unknown unitary transformations does not provide existential unforgeability, although it does provide selective unforgeability. We then express our results for the case where the QPUF is an unknown unitary transformation.
Pierre-Alain Fouque, Paul Kirchner, Mehdi Tibouchi, Alexandre Wallet, Yang Yu
ePrint Report
In this paper, we initiate the study of side-channel leakage in hash-and-sign lattice-based signatures, with particular emphasis on the two efficient implementations of the original GPV lattice-trapdoor paradigm for signatures, namely NIST second-round candidate Falcon and its simpler predecessor DLP. Both of these schemes implement the GPV signature scheme over NTRU lattices, achieving great speed-ups over the general lattice case. Our results are mainly threefold.

First, we identify a specific source of side-channel leakage in most implementations of those schemes. Signing in lattice-based hash-and-sign schemes involves sampling a lattice point according to a Gaussian distribution. This reduces to sampling several one-dimensional discrete Gaussian distributions with standard deviations determined by the Gram–Schmidt norms of the secret lattice basis. Our observation is that those norms often leak through timing side-channels in the implementation of the one-dimensional Gaussian samplers.
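A toy simulation of this effect, assuming (purely for illustration) that the sampler's running time tracks the magnitude of the sampled value, as in a table or CDT walk; this model is not the attacked implementations' actual code:

    import math
    import random

    def sample_time(sigma):
        """Sample z from a discrete Gaussian by rejection and return |z| as the
        toy 'time': a table walk visits about |z| entries, so the expected
        time grows linearly with sigma (hence with the Gram-Schmidt norm)."""
        bound = int(10 * sigma)
        while True:
            z = random.randint(-bound, bound)
            if random.random() < math.exp(-z * z / (2 * sigma * sigma)):
                return abs(z)

    def estimate(sigma, n_traces=10000):
        """Averaging the timing channel over many signatures recovers a
        monotone function of sigma."""
        return sum(sample_time(sigma) for _ in range(n_traces)) / n_traces

    # a larger Gram-Schmidt norm means a larger sigma, hence longer sampling
    assert estimate(2.0) < estimate(8.0)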

Second, we elucidate the link between this leakage and the secret key, by showing that the entire secret key can be efficiently reconstructed solely from those Gram–Schmidt norms. The result makes heavy use of the algebraic structure of the corresponding schemes, which work over a power-of-two cyclotomic field. To establish it, we propose efficient algorithms of independent interest which, given the leading principal minors of the matrix associated to a totally positive field element (in the power basis for DLP and the bit-reversed order basis for Falcon), recover the element up to conjugation. In the case of those schemes, that element is $f\bar f + g\bar g$, where $(f,g)$ is the NTRU-style secret key. We then show that this element, combined with the verification key, suffices to recover the entire secret efficiently.

Third, we concretely demonstrate the side-channel attack against DLP. The challenge is that timing information only provides an approximation of the Gram–Schmidt norms (with an accuracy increasing with the number of traces), and our algebraic recovery technique needs to be combined with pruned tree search in order to apply it to approximated values. Experimentally, we show that around $2^{35}$ DLP traces are enough to reconstruct the entire key with good probability. Carrying out a similar experiment against Falcon is left as an open problem, however, since the recursive nature of our bit-reversed order recovery algorithm does not accommodate approximate inputs easily. Nevertheless, our results do underscore the importance of constant-time implementations, particularly for schemes using Gaussian sampling.
Ron Steinfeld, Amin Sakzad, Raymond K. Zhao
ePrint Report
Middle-Product Learning With Errors (MP-LWE) is a variant of the LWE problem introduced at CRYPTO 2017 by Rosca et al. [RSSS17]. Asymptotically, the theoretical results of [RSSS17] suggest that MP-LWE gives lattice-based public-key cryptosystems offering a ‘security-risk vs. efficiency’ trade-off: higher performance than cryptosystems based on unstructured lattices (the LWE problem) and lower risk than cryptosystems based on structured lattices (the Polynomial/Ring LWE problem). However, although promising in theory, [RSSS17] left the practical implications of MP-LWE for lattice-based cryptography unclear.

In this paper, we show how to build practical public-key cryptosystems with strong security guarantees based on MP-LWE. On the implementation side, we present optimised fast algorithms for computing the middle-product operation over polynomial rings $Z_q[x]$, the dominant computation for MP-LWE-based cryptosystems. On the security side, we show how to obtain a nearly tight security proof for MP-LWE from the hardest Polynomial LWE problem over a large family of rings, improving on the loose reduction of [RSSS17]. We also present and analyse an optimised cryptanalysis of MP-LWE that narrows the complexity gap to the above security proof. To evaluate the practicality of MP-LWE, we apply our results to construct, implement and optimise parameters for a practical MP-LWE-based public-key cryptosystem, Titanium, and compare its benchmarks to those of other lattice-based systems. Our results show that MP-LWE offers a new ‘security-risk vs. efficiency’ trade-off in lattice-based cryptography in practice, not only asymptotically in theory.
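For concreteness, a sketch of the middle-product operation itself, using one common index convention from [RSSS17] (treat the exact convention as an assumption):

    # For a with deg < d_a and b with deg < d_b, where d_a + d_b - 1 = d + 2k,
    # MP_d(a, b) keeps the d middle coefficients of the full product a*b.
    def poly_mul(a, b, q):
        c = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                c[i + j] = (c[i + j] + ai * bj) % q
        return c

    def middle_product(a, b, d, q):
        c = poly_mul(a, b, q)
        k = (len(c) - d) // 2            # drop k low and k high coefficients
        return c[k:k + d]

    # d_a = 4, d_b = 7: the full product has 10 coefficients, the middle 4 survive
    print(middle_product([1, 2, 3, 4], [1, 0, 2, 0, 1, 0, 3], 4, q=17))

The schoolbook version above is quadratic; the paper's optimised algorithms accelerate exactly this operation, since it dominates the cost of MP-LWE-based encryption.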
Gang Wang, Zhijie Jerry Shi, Mark Nixon, Song Han
ePrint Report
Blockchain is a distributed and decentralized ledger for recording transactions. It is maintained and shared among the participating nodes by utilizing cryptographic primitives. A consensus protocol ensures that all nodes agree on a unique order in which records are appended. However, current blockchain solutions face scalability issues. Many methods, such as off-chain and Directed Acyclic Graph (DAG) solutions, have been proposed to address them, but these have inherent drawbacks of their own, e.g., forming parasite chains. Performance, such as throughput and latency, is also important to a blockchain system. Sharding has emerged as a good candidate for overcoming both the scalability and performance problems of blockchains. To date, there is no systematic work that analyzes sharding protocols. To bridge this gap, this paper provides a systematic and comprehensive review of blockchain sharding techniques. We first present a general design flow for sharding protocols and then discuss the key design challenges. For each challenge, we analyze and compare the techniques in state-of-the-art solutions. Finally, we discuss several potential research directions in blockchain sharding.
Mary Maller, Noah Vesely
ePrint Report
We present a new public-coin setup protocol for aggregating BLS signatures on distinct messages. For $n$ messages the verifier computes just $6$ pairings and $6(n+\log n)$ exponentiations—an improvement on previous aggregate schemes in which the verifier computes $n+1$ pairings. Our aggregate signature is logarithmic in size. This result uses an $\textit{inner pairing product argument}$ of knowledge that can be used to prove membership in pairing-based languages.
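A back-of-envelope comparison of the two verifiers, under an assumed, purely illustrative pairing-to-exponentiation cost ratio (real ratios depend on the curve and the implementation):

    import math

    PAIRING = 10.0        # assumed cost of one pairing, in exponentiation units

    def naive(n):         # previous schemes: n + 1 pairings
        return (n + 1) * PAIRING

    def aggregated(n):    # this scheme: 6 pairings + 6(n + log n) exponentiations
        return 6 * PAIRING + 6 * (n + math.log2(n))

    for n in (2**6, 2**10, 2**14):
        print(n, round(naive(n)), round(aggregated(n)))

Both costs stay linear in $n$, but the expensive pairings drop from $n+1$ to a constant $6$, and the aggregate signature itself shrinks to logarithmic size.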
Eric Brier, David Naccache
ePrint Report
This paper presents an efficient deterministic algorithm for computing 13th-power residue symbols in the cyclotomic field $\mathbb{Q}(\zeta_{13})$, where $\zeta_{13}$ is a primitive 13th root of unity.

The new algorithm finds applications in the implementation of certain cryptographic schemes and closes a gap in the corpus of algorithms for computing power residue symbols.
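As a baseline illustration (not the paper's cyclotomic algorithm), the $m$-th power residue character over a prime field $\mathbb{F}_p$ with $p \equiv 1 \pmod{m}$ can be computed by a generalized Euler criterion:

    def power_residue_symbol(x, p, m):
        """x^((p-1)/m) mod p is an m-th root of unity mod p; it equals 1
        exactly when x is an m-th power residue (generalized Euler criterion)."""
        assert (p - 1) % m == 0
        return pow(x, (p - 1) // m, p)

    p, m = 53, 13                        # 53 = 4*13 + 1, so 13 divides p - 1
    print(sorted({power_residue_symbol(x, p, m) for x in range(1, p)}))
    # prints the 13 distinct 13th roots of unity modulo 53

The cyclotomic case treated in the paper works in $\mathbb{Q}(\zeta_{13})$ rather than $\mathbb{F}_p$, where such a one-line criterion is no longer available.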
Laura Blackstone, Seny Kamara, Tarik Moataz
ePrint Report
Encrypted search algorithms (ESAs) are cryptographic algorithms that support search over encrypted data. ESAs can be designed with various primitives, including searchable/structured symmetric encryption (SSE/STE) and oblivious RAM (ORAM). Leakage-abuse attacks attempt to recover client queries using knowledge of the client’s data. An important parameter for any leakage-abuse attack is its known-data rate; that is, the fraction of client data that must be known to the adversary.

In this work, we revisit leakage-abuse attacks in several ways. We first highlight some practical limitations and assumptions underlying the well-known IKK (Islam et al., NDSS ’12) and Count (Cash et al., CCS ’15) attacks. We then design four new leakage-abuse attacks that rely on much weaker assumptions. Three of these attacks are volumetric in the sense that they only exploit leakage related to document sizes. In particular, this means that they work not only against SSE/STE-based ESAs but also against ORAM-based solutions. We also introduce two volumetric injection attacks which use adversarial file additions to recover queries even from ORAM-based solutions. As far as we know, these are the first attacks of their kind.
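A minimal sketch of the volumetric idea, on hypothetical data: the adversary, knowing part of the document collection, matches the observed total response volume of a query against each candidate keyword's known volume. The paper's attacks are considerably more refined:

    def volumetric_attack(observed_volume, known_docs, keywords):
        """known_docs: list of (size, keyword_set) pairs the adversary knows.
        Returns the candidate keyword whose known volume best matches what
        the server was observed to return."""
        def known_volume(kw):
            return sum(size for size, kws in known_docs if kw in kws)
        return min(keywords, key=lambda kw: abs(known_volume(kw) - observed_volume))

    docs = [(120, {"tax"}), (340, {"tax", "audit"}), (90, {"audit"})]
    print(volumetric_attack(430, docs, ["tax", "audit"]))   # -> "audit" (340 + 90)

Since only response sizes are used, the same matching works even when the access pattern itself is hidden, e.g., behind ORAM.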

We evaluated all our attacks empirically and considered many experimental settings, including different data collections, query selectivities, known-data rates, and query space sizes and compositions. From our experiments, we observed that the only setting that resulted in reasonable recovery rates under practical assumptions was the case of high-selectivity queries with a leakage profile that includes the response identity pattern (i.e., the identifiers of the matching documents) and the volume pattern (i.e., the sizes of the matching documents). All other attack scenarios either failed or relied on unrealistic assumptions (e.g., very high known-data rates). For this specific setting, we propose several countermeasures, including the use of schemes like PBS (Kamara et al., CRYPTO ’18) and VLH/AVLH (Kamara and Moataz, Eurocrypt ’19), or the use of padding techniques like the ones recently proposed by Bost and Fouque (IACR ePrint 2017/1060).
Borja Gómez
ePrint Report
Asymmetric cryptography is moving towards new cryptosystems based on well-known open problems that, to date, guarantee security in the sense that they are not solvable under suitably chosen parameters. This paper presents novel research, mostly in the field of Multivariate Public Key Cryptography, focused on sharing a pre-master key between Alice and Bob using quadratic multivariate polynomials as the public key. What makes this scheme special is a private construction involving polynomial factorization that allows Alice to recover the secret sent by Bob.
Giuseppe Ateniese, Danilo Francati, Bernardo Magri, Daniele Venturi
ePrint Report
We seek constructions of general-purpose immunizers that take arbitrary cryptographic primitives and transform them into ones that withstand a powerful “malicious but proud” adversary, who attempts to break security by possibly subverting the implementation of all algorithms (including the immunizer itself!), while trying not to be detected. This question is motivated by recent evidence of cryptographic schemes being intentionally weakened, or designed together with hidden backdoors, e.g., with the aim of mass surveillance. Our main result is a subversion-secure immunizer in the plain model that works for a fairly large class of deterministic primitives, i.e., schemes where a secret (but tamperable) random source is used to generate the keys and the public parameters, whereas all other algorithms are deterministic. The immunizer relies on an additional independent source of public randomness, which is used to sample a public seed. Assuming the public source is untamperable, and that the subversion of the algorithms is chosen independently of the seed, we can instantiate our immunizer from any one-way function. In case the subversion is allowed to depend on the seed, and the public source is still untamperable, we obtain an instantiation from collision-resistant hash functions. In the more challenging scenario where the public source is also tamperable, we additionally need to assume that the initial cryptographic primitive has sub-exponential security. Previous work in the area only obtained subversion-secure immunization for very restricted classes of primitives, often in weaker models of subversion, relying on random oracles, or leveraging a higher number of independent random sources.
Mingming Wang, Qianhong Wu
ePrint Report
Blockchains enable decentralized applications that coordinate correct computation without pre-established trust. However, existing scalable on-chain frameworks cannot cope with intensive validation. On the one hand, the duplicated-execution pattern limits throughput and incurs unacceptable expense. On the other hand, there is a lack of fair and secure incentive mechanisms that allocate rewards according to the actual workload of validators, which creates dilemmas for rational participants and invites effective attacks from shrewd adversaries. Most solutions rely on off-chain patterns to sidestep these constraints, but this introduces further issues of applicability, fairness and brittle dependence on interactive cooperation; the intrinsic bottleneck of the backbone chain has never been decisively broken.

This work presents Lever, the first scalable on-chain framework that supports intensive validation while achieving validity, incentive compatibility and cost-efficiency, tolerating f < n/4 Byzantine participants. Lever first integrates an evaluation of complexity into the correctness of transactions, thoroughly decoupling intensive validation from regular Byzantine consensus. It then achieves significant scalability by running a few rounds of a novel validation-challenge game between potential adversaries and rational stakeholders; a compelling incentive mechanism transfers the deposits of the adversary into rewards for honest validators, allowing users to lever sufficient endorsement for verification at minimum cost. Combined with game-theoretic insights, a backstop protocol ensures the finality and validity of the framework, breaking through the famous Verifier’s Dilemma. Finally, we streamline Lever within an efficient sharding architecture, which proves robust against conceivable attacks on validation and shows an outstanding ability to weed out Byzantine participants. Experimental results show that Lever vastly improves throughput and reduces the expense of intensive validation, with only a slight compromise in latency.
Laura Luzzi, Roope Vehkalahti, Cong Ling
ePrint Report
Despite several works on secrecy coding for fading and MIMO wiretap channels from an error probability perspective, the construction of information-theoretically secure codes over such channels remains an open problem. In this paper, we consider a fading wiretap channel model where the transmitter has only partial statistical channel state information. Our channel model includes static channels, i.i.d. block fading channels, and ergodic stationary fading with fast decay of large deviations for the eavesdropper's channel.

We extend the flatness factor criterion from the Gaussian wiretap channel to fading and MIMO wiretap channels, and establish a simple design criterion: the normalized product distance (resp. minimum determinant) of the lattice and of its dual should be maximized simultaneously.
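For reference, one standard scale-invariant normalization of the product distance is as follows (the paper's exact convention may differ): for a full-rank lattice $\Lambda \subset \mathbb{R}^n$,

$$d_{p,\min}(\Lambda) = \min_{\mathbf{x} \in \Lambda \setminus \{\mathbf{0}\}} \prod_{i=1}^{n} |x_i|, \qquad \text{normalized as } \frac{d_{p,\min}(\Lambda)}{\operatorname{Vol}(\Lambda)},$$

and the criterion asks that this quantity be large simultaneously for $\Lambda$ and its dual $\Lambda^*$, with the minimum determinant playing the analogous role in the MIMO case.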

Moreover, we propose concrete lattice codes satisfying this design criterion, which are built from algebraic number fields with constant root discriminant in the single-antenna case, and from division algebras centered at such number fields in the multiple-antenna case.
Iggy van Hoof
ePrint Report
Multiplication is an essential step in many computations. In this paper we look at the multiplication of two binary polynomials of degree at most $n-1$, modulo an irreducible polynomial of degree $n$, with $2n$ input and $n$ output qubits and no ancillary qubits, assuming no errors. Straightforward schoolbook methods would require a quadratic number of Toffoli gates and a linear number of CNOT gates. This paper introduces a new algorithm that uses the same space but, by utilizing space-efficient variants of Karatsuba multiplication, requires only $O(n^{\log_2(3)})$ Toffoli gates at the cost of a higher CNOT gate count: up to $O(n^2)$ in theory, though in concrete examples the CNOT gate count is considerably lower.
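For intuition, here is the classical (non-reversible) analogue of the recursion, with binary polynomials packed into Python integers; the paper's contribution is the ancilla-free reversible circuit, which this sketch does not capture:

    def clmul(a, b):
        """Schoolbook carry-less multiplication in GF(2)[x]: O(n^2) bit ops."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            b >>= 1
        return r

    def karatsuba_gf2(a, b, n):
        """Multiply binary polynomials of degree < n using three half-size
        products; addition and subtraction are both XOR in GF(2)."""
        if n <= 4:
            return clmul(a, b)
        h = n // 2
        mask = (1 << h) - 1
        a0, a1 = a & mask, a >> h
        b0, b1 = b & mask, b >> h
        lo = karatsuba_gf2(a0, b0, h)
        hi = karatsuba_gf2(a1, b1, n - h)
        mid = karatsuba_gf2(a0 ^ a1, b0 ^ b1, n - h) ^ lo ^ hi
        return lo ^ (mid << h) ^ (hi << 2 * h)

    a, b = 0b10110111, 0b11001010
    assert karatsuba_gf2(a, b, 8) == clmul(a, b)

The three recursive calls are what drive the $O(n^{\log_2(3)})$ Toffoli count in the quantum setting; the extra XORs correspond to the increased CNOT count.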