International Association for Cryptologic Research

IACR News
03 June 2016

Viet Tung Hoang, Stefano Tessaro
ePrint Report
The best existing bounds on the concrete security of key-alternating ciphers (Chen and Steinberger, EUROCRYPT '14) are only asymptotically tight, and the quantitative gap with the best existing attacks remains numerically substantial for concrete parameters. Here, we prove exact bounds on the security of key-alternating ciphers and extend them to XOR cascades, the most efficient construction for key-length extension. Our bounds essentially match, for any possible query regime, the advantage achieved by the best existing attack.

Our treatment also extends to the multi-user regime. We show that the multi-user security of key-alternating ciphers and XOR cascades is very close to the single-user case, i.e., given enough rounds, it does not substantially decrease as the number of users increases. On the way, we also provide the first explicit treatment of multi-user security for key-length extension, which is particularly relevant given the significant security loss of block ciphers (even if ideal) in the multi-user setting. The common denominator behind our results are new techniques for information-theoretic indistinguishability proofs that both extend and refine existing proof techniques like the H-coefficient method.
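To fix notation, here is a minimal Python sketch of the two constructions under discussion: an r-round key-alternating cipher built from public permutations, and an XOR cascade used for key-length extension. The toy 8-bit permutations are placeholders standing in for the ideal components of the analysis, not secure designs, and the parameter names are ours.

# Minimal sketch (toy parameters) of the constructions analyzed above:
# a key-alternating cipher over public permutations P_1..P_r, and an XOR
# cascade that whitens and chains an existing block cipher E for key-length
# extension.  The 8-bit "permutations" below are placeholders, not designs.

def key_alternating_cipher(x, round_keys, permutations):
    # KAC: XOR key, apply public permutation, repeat; r permutations, r+1 keys.
    assert len(round_keys) == len(permutations) + 1
    y = x ^ round_keys[0]
    for P, k in zip(permutations, round_keys[1:]):
        y = P(y) ^ k
    return y

def xor_cascade(x, whitening_keys, cipher_keys, E):
    # XOR cascade: whiten, encrypt under an independent cipher key, whiten, ...
    assert len(whitening_keys) == len(cipher_keys) + 1
    y = x ^ whitening_keys[0]
    for kw, kc in zip(whitening_keys[1:], cipher_keys):
        y = E(kc, y) ^ kw
    return y

# Toy 8-bit example (placeholder permutations over Z_256).
P1 = lambda v: (167 * v + 13) % 256
P2 = lambda v: (3 * (v ^ 0x5A)) % 256
print(key_alternating_cipher(0x42, [0x11, 0x22, 0x33], [P1, P2]))
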
Jean Paul Degabriele, Kenneth G. Paterson, Jacob C. N. Schuldt, Joanne Woodage
ePrint Report
Inspired by the Dual EC DRBG incident, Dodis et al. (Eurocrypt 2015) initiated the formal study of backdoored PRGs, showing that backdoored PRGs are equivalent to public key encryption schemes, giving constructions for backdoored PRGs (BPRGs), and showing how BPRGs can be ``immunised'' by careful post-processing of their outputs. In this paper, we continue the foundational line of work initiated by Dodis et al., providing both positive and negative results.

We first revisit the backdoored PRG setting of Dodis et al., showing that PRGs can be more strongly backdoored than was previously envisaged. Specifically, we give efficient constructions of BPRGs for which, given a single generator output, Big Brother can recover the initial state and, therefore, all outputs of the BPRG. Moreover, our constructions are forward-secure in the traditional sense for a PRG, resolving an open question of Dodis et al. in the negative.

We then turn to the question of the effectiveness of backdoors in robust PRNGs with input (cf. Dodis et al., ACM-CCS 2013): generators in which the state can be regularly refreshed using an entropy source, and in which, provided sufficient entropy has been made available since the last refresh, the outputs will appear pseudorandom. The presence of a refresh procedure might suggest that Big Brother could be defeated, since he would not be able to predict the values of the PRNG state backwards or forwards through the high-entropy refreshes. Unfortunately, we show that this intuition is not correct: we are also able to construct robust PRNGs with input that are backdoored in a backwards sense. Namely, given a single output, Big Brother is able to rewind through a number of refresh operations to earlier ``phases'', and recover all the generator's outputs in those earlier phases.
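For readers unfamiliar with the syntax, the sketch below illustrates the refresh/next interface of a PRNG with input that these backdooring results concern; the hash-based state transitions are our stand-in chosen only for readability and carry no security claim, and are not the generators constructed or attacked in the paper.

import hashlib, os

# Illustrative-only sketch of the "PRNG with input" interface (refresh/next)
# referred to above; SHA-256-based transitions are placeholders.

class ToyPRNGWithInput:
    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(b"init" + seed).digest()

    def refresh(self, entropy: bytes) -> None:
        # Mix fresh entropy into the state; "phases" are delimited by refreshes.
        self.state = hashlib.sha256(b"refresh" + self.state + entropy).digest()

    def next(self) -> bytes:
        # Emit one output block, then update the state.
        out = hashlib.sha256(b"out" + self.state).digest()
        self.state = hashlib.sha256(b"upd" + self.state).digest()
        return out

gen = ToyPRNGWithInput(os.urandom(32))
gen.refresh(os.urandom(32))   # high-entropy refresh starts a new phase
block = gen.next()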

Finally, and ending on a positive note, we give an impossibility result: we provide a bound on the number of previous phases that Big Brother can compromise as a function of the state-size of the generator: smaller states provide more limited backdooring opportunities for Big Brother.
Gilad Asharov, Alon Rosen, Gil Segev
ePrint Report
We prove that indistinguishability obfuscation (iO) and one-way functions do not naturally reduce to any language within $NP \cap coNP$. This is proved within the framework introduced by Asharov and Segev (FOCS '15) that captures the vast majority of techniques that have been used so far in iO-based constructions.

Our approach is based on a two-fold generalization of a classic result due to Impagliazzo and Naor (Structure in Complexity Theory '88) on non-deterministic computations via decision trees. First, we generalize their approach from the rather restrictive model of decision trees to arbitrary computations. Then, we further generalize the argument within the Asharov-Segev framework.

Our result explains why iO does not seem to suffice for certain applications, even when combined with the assumption that one-way functions exist. In these cases it appears that additional, more structured, assumptions are necessary. In addition, our result suggests that any attempt to rule out the existence of iO by reducing it "ad absurdum" to a potentially efficiently-decidable language may encounter significant difficulties.
Ethan Heilman, Foteini Baldimtsi, Leen Alshenibr, Alessandra Scafuro, Sharon Goldberg
ePrint Report
This paper presents TumbleBit, a new anonymous payments scheme that is fully compatible with today's Bitcoin protocol. TumbleBit allows parties to make payments through an untrusted Tumbler. No-one, not even the Tumbler, can tell which payer paid which payee during a TumbleBit epoch. TumbleBit consists of two interleaved fair-exchange protocols that prevent theft of bitcoins by cheating users or a malicious Tumbler. Our protocol combines fast cryptographic computations (performed off the blockchain) with standard bitcoin scripting functionalities (on the blockchain). We prove the security of TumbleBit using the ideal/real world paradigm and the random oracle model. Security follows from the standard RSA assumption. We have implemented our protocol and used it to mix payments from several participants on the blockchain. Because our off-blockchain computations run in less than a second, TumbleBit's performance is limited only by the time it takes to confirm three blocks on the blockchain.
Nir Bitansky, Akshay Degwekar, Vinod Vaikuntanathan
ePrint Report
Cryptography relies on the computational hardness of structured problems. While one-way functions, the most basic cryptographic object, do not seem to require much structure, as we advance up the ranks into public-key cryptography and beyond, we seem to require that certain structured problems are hard. For example, factoring, quadratic residuosity, discrete logarithms, and approximate shortest and closest vectors in lattices all have considerable algebraic structure. This structure, on the one hand, enables useful applications such as public-key and homomorphic encryption, but on the other, also puts their hardness in question. Their structure is exactly what puts them in low complexity classes such as SZK or NP$\cap$coNP, and is in fact the reason behind (sub-exponential or quantum) algorithms for these problems. The question is whether such structure is inherent to other cryptographic primitives, rendering them inherently easier to break.

We study the relationship between two structured complexity classes, statistical zero-knowledge (SZK) and NP$\cap$coNP, and cryptography. To frame the question in a meaningful way, we rely on the language of black-box constructions and separations.

Our results are the following:

1. Cryptography vs. Structured Hardness: Our two main results show that there are no black-box constructions of hard problems in SZK or NP$\cap$coNP starting from one of a wide variety of cryptographic primitives such as one-way and trapdoor functions, one-way and trapdoor permutations (in the case of SZK), public-key encryption, oblivious transfer, deniable encryption, functional encryption, and even indistinguishability obfuscation;

2. Complexity-theoretic Implications: As a corollary of our result, we show a separation between SZK and NP$\cap$coNP and the class PPAD that captures the complexity of computing Nash Equilibria; and

3. Positive Results: We construct collision-resistant hashing from a strong form of SZK-hardness and indistinguishability obfuscation. It was previously known that indistinguishability obfuscation by itself does not imply collision-resistant hashing in a black-box way; we show that it does if one adds SZK-hardness as a “catalyst”. Our black-box separations are derived using indistinguishability obfuscation as a “gateway”, by first showing a (separation) result for indistinguishability obfuscation and then leveraging the fact that indistinguishability obfuscation can be used to construct the above variety of cryptographic primitives and hard PPAD instances in a black-box manner.
Stefan Dziembowski, Sebastian Faust, Gottfried Herold, Anthony Journault, Daniel Masny, Francois-Xavier Standaert
ePrint Report
Most leakage-resilient cryptographic constructions aim at limiting the information adversaries can obtain about secret keys. In the case of asymmetric algorithms, this is usually obtained by secret sharing (aka masking) the key, which is made easy by their algebraic properties. In the case of symmetric algorithms, it is rather key evolution that is exploited. While more efficient, this second solution is limited in scope to stateful primitives that easily allow for key evolution, such as stream ciphers. Unfortunately, it seems generally hard to avoid the need for (at least one) execution of a stateless primitive, both for encryption and authentication protocols. As a result, fresh re-keying has emerged as an alternative solution, in which a block cipher that is hard to protect against side-channel attacks is re-keyed with a stateless function that is easy to mask. While previous proposals in this direction were all based on heuristic arguments, we propose two new constructions that, for the first time, allow a more formal treatment of fresh re-keying. More precisely, we reduce the security of our re-keying schemes to two building blocks that can be of independent interest. The first one is an assumption of Learning Parity with Leakage, which leverages the noise that is available in side-channel measurements. The second one is based on the Learning With Rounding assumption, which can be seen as an alternative solution for low-noise implementations. Both constructions are efficient and easy to mask, since they are key homomorphic or almost key homomorphic.
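The overall data flow of fresh re-keying is easy to state; the sketch below shows it with placeholder primitives (a hash-based re-keying function and a toy block cipher chosen only so the example runs), whereas the paper instantiates the re-keying function from Learning Parity with Leakage or Learning With Rounding and uses a standard block cipher.

import hashlib, os

# Structural sketch of fresh re-keying.  Both primitives below are hash-based
# placeholders so the example is self-contained; they are not the LPN/LWR-based
# re-keying functions proposed in the paper, nor a real block cipher.

def rekey(master_key: bytes, nonce: bytes) -> bytes:
    # Stateless re-keying function g: easy to mask, evaluated on every call.
    return hashlib.sha256(b"rekey" + master_key + nonce).digest()[:16]

def toy_block_cipher(session_key: bytes, block: bytes) -> bytes:
    # Placeholder for a 128-bit block cipher such as AES.
    pad = hashlib.sha256(b"enc" + session_key + block).digest()[:16]
    return bytes(a ^ b for a, b in zip(block, pad))

def encrypt_block(master_key: bytes, block: bytes):
    nonce = os.urandom(16)                     # fresh public randomness
    session_key = rekey(master_key, nonce)     # only g touches the long-term key
    return nonce, toy_block_cipher(session_key, block)

print(encrypt_block(os.urandom(16), b"0123456789abcdef"))
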
Jean-Sebastien Coron, Aurelien Greuet, Emmanuel Prouff, Rina Zeitoun
ePrint Report
We describe a new technique for improving the efficiency of the masking countermeasure against side-channel attacks. Our technique is based on using common shares between secret variables, in order to reduce the number of finite field multiplications. Our algorithms are proven secure in the ISW probing model with $n \geq t+1$ shares against $t$ probes. For AES, we get an equivalent of $2.8$ non-linear multiplications for every SBox evaluation, instead of $4$ in the Rivain-Prouff countermeasure. We obtain similar improvements for other block-ciphers. Our technique is easy to implement and performs relatively well in practice, with roughly a 20% speed-up compared to existing algorithms.
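As background for the countermeasure being optimized, the following is a minimal sketch of plain Boolean masking with an ISW-style multiplication of single bits, i.e. the baseline whose multiplication count the common-shares technique reduces; it is not the paper's improved algorithm and makes no leakage claims.

import secrets

# Baseline n-share Boolean masking with ISW-style multiplication over GF(2),
# shown on single bits for clarity.

def share(x, n):
    # Split bit x into n shares whose XOR equals x.
    shares = [secrets.randbits(1) for _ in range(n - 1)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]

def unshare(shares):
    out = 0
    for s in shares:
        out ^= s
    return out

def isw_and(a, b):
    # Multiply (AND) two shared bits; quadratic in the number of shares n,
    # which is exactly the cost that sharing values across secrets reduces.
    n = len(a)
    c = [a[i] & b[i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbits(1)
            c[i] ^= r
            c[j] ^= r ^ (a[i] & b[j]) ^ (a[j] & b[i])
    return c

x, y = 1, 1
assert unshare(isw_and(share(x, 4), share(y, 4))) == (x & y)
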
Romain Poussier, François-Xavier Standaert, Vincent Grosso
ePrint Report
The main contribution of this paper is a new key enumeration algorithm that combines the conceptual simplicity of the rank estimation algorithm of Glowacz et al. (FSE 2015) and the parallelizability of the enumeration algorithms of Bogdanov et al. (SAC 2015) and Martin et al. (ASIACRYPT 2015). Our new algorithm is based on histograms. It allows obtaining simple bounds on the (small) rounding errors that it introduces and leads to straightforward parallelization. We further show that it can minimize the bandwidth of distributed key testing by selecting parameters that maximize the factorization of the lists of key candidates produced by the enumeration, which can be highly beneficial, e.g. if these tests are performed by a hardware coprocessor. We also put forward that the conceptual simplicity of our algorithm translates into efficient implementations (that slightly improve the state-of-the-art). As an additional consolidating effort, we finally describe an open source implementation of this new enumeration algorithm, combined with the FSE 2015 rank estimation one, which we make available with the paper.
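To make the histogram idea concrete, here is a simplified sketch of histogram-based rank estimation in the spirit of the FSE 2015 algorithm referenced above: per-subkey log-likelihoods are binned, histograms of independent subkeys are combined by convolution, and the correct key's rank is read off the combined histogram. The parameter and helper names are ours, and this is not the new enumeration algorithm itself.

import numpy as np

# Simplified histogram-based rank estimation (FSE 2015 flavour).  Inputs: one
# array of log-likelihoods per independent subkey, plus the index of the
# correct candidate for each subkey.  The bin width controls the rounding error.

def rank_estimate(log_probs_per_subkey, correct_indices, n_bins=512):
    lo = min(float(lp.min()) for lp in log_probs_per_subkey)
    hi = max(float(lp.max()) for lp in log_probs_per_subkey)
    width = (hi - lo) / n_bins or 1.0

    hists, correct_bin = [], 0
    for lp, idx in zip(log_probs_per_subkey, correct_indices):
        bins = np.minimum(((lp - lo) / width).astype(int), n_bins - 1)
        hists.append(np.bincount(bins, minlength=n_bins).astype(float))
        correct_bin += int(bins[idx])

    combined = hists[0]
    for h in hists[1:]:
        combined = np.convolve(combined, h)   # bin of a full key ~ sum of subkey bins

    # Estimated rank: number of full keys landing in strictly higher bins.
    return combined[correct_bin + 1:].sum()

# Toy example with two 8-bit subkeys and random scores.
rng = np.random.default_rng(0)
scores = [rng.normal(size=256), rng.normal(size=256)]
print(rank_estimate(scores, correct_indices=[17, 42]))
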
Masayuki Abe, Fumitaka Hoshino, Miyako Ohkubo
ePrint Report
Bilinear type conversion converts cryptographic schemes designed over symmetric groups instantiated with imperilled curves into ones that run over more secure and efficient asymmetric groups. In this paper we introduce a novel type conversion method called {\em IPConv} using 0-1 Integer Programming. Instantiated with a widely available IP solver, it instantly converts existing intricate schemes, and can process large-scale schemes that involve more than a thousand variables and hundreds of pairings.

Such a quick and scalable method allows a new approach in designing cryptographic schemes over asymmetric bilinear groups. Namely, designers can work without paying much attention to the asymmetry of computation, and the converted scheme nevertheless runs well in the asymmetric setting. We demonstrate the usefulness of conversion-aided design by presenting somewhat counter-intuitive examples where converted DLIN-based Groth-Sahai proofs are more compact than manually built SXDH-based proofs.
Kimmo Järvinen, Andrea Miele, Reza Azarderakhsh, Patrick Longa
ePrint Report
We present fast and compact implementations of FourQ (ASIACRYPT 2015) on field-programmable gate arrays (FPGAs), and demonstrate, for the first time, the high efficiency of this new elliptic curve on reconfigurable hardware. By adapting FourQ's algorithms to hardware, we design FPGA-tailored architectures that are significantly faster than any other ECC alternative over large prime characteristic fields. For example, we show that our single-core and multi-core implementations can compute at a rate of 6389 and 64730 scalar multiplications per second, respectively, on a Xilinx Zynq-7020 FPGA, which represent factor-2.5 and 2 speedups in comparison with the corresponding variants of the fastest Curve25519 implementation on the same device. These results show the potential of deploying FourQ on hardware for high-performance and embedded security applications. All the presented implementations exhibit regular, constant-time execution, protecting against timing and simple side-channel attacks.
Tobias Boelter, Rishabh Poddar, Raluca Ada Popa
ePrint Report
We present the first one-roundtrip protocol for performing range, range-aggregate, and order-by-limit queries over encrypted data that both provides semantic security and is efficient. We accomplish this task by chaining garbled circuits over a search tree, using branch-chained garbled circuits, as well as carefully designing garbled circuits. We then show how to build a database index that can answer order comparison queries. We implemented and evaluated our index. We demonstrate that queries as well as inserts and updates are efficient, and that our index outperforms previous interactive constructions. This index is part of the Arx database system, whose source code will be released in the near future.
Takashi Yamakawa, Shota Yamada, Goichiro Hanaoka, Noboru Kunihiro
ePrint Report
Lossy trapdoor functions (LTDFs), proposed by Peikert and Waters (STOC'08), are known to have a number of applications in cryptography. They have been constructed based on various assumptions, including the quadratic residuosity (QR) and decisional composite residuosity (DCR) assumptions, which are factoring-based {\it decision} assumptions. However, there is no known construction of an LTDF based on the factoring assumption or other factoring-related search assumptions. In this paper, we first define a notion of {\it adversary-dependent lossy trapdoor functions} (ad-LTDFs) that is a weaker variant of LTDFs. Then we construct an ad-LTDF based on the hardness of factoring RSA moduli of a special form called semi-smooth RSA subgroup (SS) moduli proposed by Groth (TCC'05). Moreover, we show that ad-LTDFs can replace LTDFs in many applications. In particular, we obtain the first factoring-based deterministic encryption scheme that satisfies the security notion defined by Boldyreva et al. (CRYPTO'08) without relying on a decision assumption. Besides direct applications of ad-LTDFs, by a similar technique, we construct a chosen ciphertext secure public key encryption scheme whose ciphertext overhead is the shortest among existing schemes based on the factoring assumption w.r.t. SS moduli.
Michele Ciampi, Rafail Ostrovsky, Luisa Siniscalchi, Ivan Visconti
ePrint Report
The round complexity of commitment schemes secure against man-in-the-middle attacks has been the focus of extensive research for about 25 years. The recent breakthrough of Goyal, Pandey and Richelson [STOC 2016] showed that 3 rounds are sufficient for (one-left, one-right) non-malleable commitments. This result matches a lower bound of [Pas13]. This state of affairs still leaves open the intriguing problem of constructing 3-round concurrent non-malleable commitment schemes. In this paper we solve the above open problem by showing how to transform any 3-round (one-left, one-right) non-malleable commitment scheme (with some extractability property) into a 3-round concurrent non-malleable commitment scheme. Our transform makes use of complexity leveraging and, when instantiated with the construction of [GPR16], gives a 3-round concurrent non-malleable commitment scheme from one-way permutations secure w.r.t. subexponential-time adversaries. We also show how our 3-round concurrent non-malleable commitment scheme can be used for 3-round arguments of knowledge and in turn for 3-round identification schemes secure against concurrent man-in-the-middle attacks.
Andrej Bogdanov, Yuval Ishai, Emanuele Viola, Christopher Williamson
ePrint Report
Motivated by cryptographic applications, we study the notion of {\em bounded indistinguishability}, a natural relaxation of the well studied notion of bounded independence.

We say that two distributions $\mu$ and $\nu$ over $\Sigma^n$ are {\em $k$-wise indistinguishable} if their projections to any $k$ symbols are identical. We say that a function $f\colon \Sigma^n \to \{0,1\}$ is {\em $\epsilon$-fooled by $k$-wise indistinguishability} if $f$ cannot distinguish with advantage $\epsilon$ between any two $k$-wise indistinguishable distributions $\mu$ and $\nu$ over $\Sigma^n$.
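The definition can be checked directly for small examples; the sketch below (our illustration, using a toy 2-out-of-2 XOR sharing) tests whether two explicitly given distributions over $\Sigma^n$ are $k$-wise indistinguishable by comparing all $k$-symbol marginals.

from collections import defaultdict
from itertools import combinations

# Direct check of the definition above for small, explicitly given
# distributions (mappings from outcomes in Sigma^n to probabilities).

def project(dist, coords):
    marg = defaultdict(float)
    for outcome, p in dist.items():
        marg[tuple(outcome[i] for i in coords)] += p
    return marg

def k_wise_indistinguishable(mu, nu, n, k, tol=1e-12):
    for coords in combinations(range(n), k):
        pm, pn = project(mu, coords), project(nu, coords)
        if any(abs(pm[key] - pn[key]) > tol for key in set(pm) | set(pn)):
            return False
    return True

# Toy example: XOR-shares of secret 0 vs secret 1 over {0,1}^2 are 1-wise
# indistinguishable, but looking at both symbols reveals the secret.
mu = {(0, 0): 0.5, (1, 1): 0.5}
nu = {(0, 1): 0.5, (1, 0): 0.5}
assert k_wise_indistinguishable(mu, nu, n=2, k=1)
assert not k_wise_indistinguishable(mu, nu, n=2, k=2)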

We are interested in characterizing the class of functions that are fooled by $k$-wise indistinguishability. While the case of $k$-wise independence (corresponding to one of the distributions being uniform) is fairly well understood, the more general case remained unexplored.

When $\Sigma = \{0,1\}$, we observe that whether $f$ is fooled is closely related to its approximate degree. For larger alphabets $\Sigma$, we obtain several positive and negative results. Our results imply the first efficient secret sharing schemes with a high secrecy threshold in which the secret can be reconstructed in AC$^0$. More concretely, we show that for every $0 < \sigma < \rho \leq 1$ it is possible to share a secret among $n$ parties so that any set of fewer than $\sigma n$ parties can learn nothing about the secret, any set of at least $\rho n$ parties can reconstruct the secret, and where both the sharing and the reconstruction are done by constant-depth circuits of size $\mathrm{poly}(n)$. We present additional cryptographic applications of our results to low-complexity secret sharing, visual secret sharing, leakage-resilient cryptography, and protecting against ``selective failure'' attacks.
Mihir Bellare, Bjoern Tackmann
ePrint Report
We initiate the study of multi-user (mu) security of authenticated encryption (AE) schemes as a way to rigorously formulate, and answer, questions about the "randomized nonce" mechanism proposed for the use of the AE scheme GCM in TLS 1.3. We (1) give definitions of mu ind (indistinguishability) and mu kr (key recovery) security for AE; (2) characterize the intent of nonce randomization as being improved mu security, as a defense against mass surveillance; (3) cast the method as a (new) AE scheme RGCM; (4) analyze and compare the mu security of both GCM and RGCM in the model where the underlying block cipher is ideal, showing that the mu security of the latter is indeed superior in many practical contexts to that of the former; and (5) propose an alternative AE scheme XGCM having the same efficiency as RGCM but better mu security and a simpler, more modular design.
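For context, the "randomized nonce" mechanism in question derives each record nonce by XORing the record sequence number into a secret per-connection random value, in the style of TLS 1.3. The sketch below shows that derivation only; the AEAD call is left as a placeholder, and this follows our reading of the mechanism rather than the paper's formal RGCM syntax.

import os

# Sketch of TLS 1.3-style nonce randomization: the per-record nonce is the
# XOR of a secret per-connection random IV with the padded sequence number.

NONCE_LEN = 12  # 96-bit GCM nonce

def record_nonce(iv_mask: bytes, seq_num: int) -> bytes:
    padded = seq_num.to_bytes(NONCE_LEN, "big")
    return bytes(a ^ b for a, b in zip(iv_mask, padded))

iv_mask = os.urandom(NONCE_LEN)        # secret, chosen once per connection
n0 = record_nonce(iv_mask, 0)
n1 = record_nonce(iv_mask, 1)
# ciphertext = gcm_encrypt(key, n0, plaintext, aad)   # placeholder AEAD call
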
Carmen Kempka, Ryo Kikuchi, Susumu Kiyoshima, Koutarou Suzuki
ePrint Report
We provide a garbling scheme which creates garbled circuits of a very small constant size (four bits per gate) for circuits with fan-out one (formulas). For arbitrary fan-out, we additionally need only two ciphertexts per additional connection of each gate output wire. We make use of a trapdoor permutation for which we define a generalized notion of correlation robustness. We show that our notion is implied by PRIV-security, a notion for deterministic (searchable) encryption. We prove our scheme secure in the programmable random oracle model.
Daniel Apon, Xiong Fan, Feng-Hao Liu
ePrint Report
Deniable encryption (Canetti et al. CRYPTO '97) is an intriguing primitive that provides a security guarantee not only against eavesdropping attacks, as required by semantic security, but also against stronger coercion attacks performed after the fact. The concept of deniability has since proven useful and powerful in many other contexts, such as leakage resilience, adaptive security of protocols, and security against selective opening attacks. Despite its conceptual usefulness, our understanding of how to construct deniable primitives under standard assumptions remains limited.

In particular, from standard lattice assumptions, i.e. Learning with Errors (LWE), we have only flexibly deniable and non-negligible-advantage deniable public-key encryption schemes, whereas with the much stronger assumption of indistinguishability obfuscation, we can obtain at least fully sender-deniable PKE and computation. How to achieve deniability for other more advanced encryption schemes under standard assumptions remains an interesting open question.

In this work, we construct a flexibly bi-deniable Attribute-Based Encryption (ABE) scheme for all polynomial-size Branching Programs from LWE. Our techniques involve new ways of manipulating Gaussian noise that may be of independent interest, and lead to a significantly sharper analysis of noise growth in Dual Regev type encryption schemes. We hope these ideas give insight into achieving deniability and related properties for further, advanced cryptographic systems from lattice assumptions.
Baiyu Li, Daniele Micciancio
ePrint Report
We present two general constructions that can be used to combine any two functional encryption (FE) schemes (supporting a bounded number of key queries) into a new functional encryption scheme supporting a larger number of key queries. By using these constructions iteratively, we transform any primitive FE scheme that supports a single functional key query (from a sufficiently general class of functions) and has certain weak compactness properties into a collusion-resistant FE scheme with the same or slightly weaker compactness properties. Together with previously known reductions, this shows that the compact, weakly compact, collusion-resistant, and weakly collusion-resistant versions of FE are all equivalent under polynomial time reductions. These are all FE variants known to imply the existence of indistinguishability obfuscation, and were previously thought to offer slightly different avenues toward the realization of obfuscation from general assumptions. Our results show that they are indeed all equivalent, improving our understanding of the minimal assumptions on functional encryption required to instantiate indistinguishability obfuscation.
Itai Dinur, Orr Dunkelman, Nathan Keller, Adi Shamir
ePrint Report
One of the most common tasks in cryptography and cryptanalysis is to find some interesting event (a needle) in an exponentially large collection (haystack) of $N=2^n$ possible events, or to demonstrate that no such event is likely to exist. In particular, we are interested in finding needles which are defined as events that happen with an unusually high probability of $p \gg 1/N$ in a haystack which is an almost uniform distribution on $N$ possible events. When the search algorithm can only sample values from this distribution, the best known time/memory tradeoff for finding such an event requires $O(1/(Mp^2))$ time given $O(M)$ memory.

In this paper we develop much faster needle searching algorithms in the common cryptographic setting in which the distribution is defined by applying some deterministic function $f$ to random inputs. Such a distribution can be modelled by a random directed graph with $N$ vertices in which almost all the vertices have $O(1)$ predecessors while the vertex we are looking for has an unusually large number of $O(pN)$ predecessors. When we are given only a constant amount of memory, we propose a new search methodology which we call \textbf{NestedRho}. As $p$ increases, such random graphs undergo several subtle phase transitions, and thus the log-log dependence of the time complexity $T$ on $p$ becomes a piecewise linear curve which bends four times. Our new algorithm is faster than the $O(1/p^2)$ time complexity of the best previous algorithm in the full range of $1/N<p<1$, and in particular it improves the previous time complexity by a significant factor of $\sqrt{N}$ for any $p$ in the range $N^{-0.75}<p< N^{-0.5}$. When we are given more memory, we show how to combine the \textbf{NestedRho} technique with the parallel collision search technique in order to further reduce its time complexity. Finally, we show how to apply our new search technique to more complicated distributions with multiple peaks when we want to find all the peaks whose probabilities are higher than $p$.
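As background for the rho-type techniques named above, here is the classical constant-memory collision-finding building block (Floyd's cycle finding on iterates of $f$); it is standard machinery, not the NestedRho algorithm itself.

# Classical rho building block: iterating f from a start point enters a cycle,
# and Floyd's algorithm finds a collision f(x) = f(y) with x != y using O(1)
# memory.  Background only; NestedRho and parallel collision search build on
# top of such primitives.

def floyd_collision(f, start):
    tortoise, hare = f(start), f(f(start))
    while tortoise != hare:                    # locate a point on the cycle
        tortoise, hare = f(tortoise), f(f(hare))
    x, y = start, tortoise
    if x == y:
        return None                            # start already lies on the cycle
    while f(x) != f(y):                        # walk to the colliding pair
        x, y = f(x), f(y)
    return x, y

f = lambda v: (v * v + 1) % 251                # toy random-looking function
print(floyd_collision(f, 3))
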
Yfke Dulek, Christian Schaffner, Florian Speelman
ePrint Report
We present a new scheme for quantum homomorphic encryption which is compact and allows for efficient evaluation of arbitrary polynomial-sized quantum circuits. Building on the framework of Broadbent and Jeffery and recent results in the area of instantaneous non-local quantum computation, we show how to construct quantum gadgets that allow perfect correction of the errors which occur during the homomorphic evaluation of T gates on encrypted quantum data. Our scheme can be based on any classical (leveled) fully homomorphic encryption (FHE) scheme and requires no computational assumptions besides those already used by the classical scheme. The size of our quantum gadget depends on the space complexity of the classical decryption function, which aligns well with the current efforts to minimize the complexity of the decryption function. Our scheme (or slight variants of it) offers a number of additional advantages such as ideal compactness, the ability to supply gadgets "on demand", circuit privacy for the evaluator against passive adversaries, and a three-round scheme for blind delegated quantum computation which puts only very limited demands on the quantum abilities of the client.