International Association for Cryptologic Research

CryptoDB

Itay Berman

Publications

Year / Venue / Title
2020 / CRYPTO / A Tight Parallel Repetition Theorem for Partially Simulatable Interactive Arguments via Smooth KL-Divergence 📺
Hardness amplification is a central problem in the study of interactive protocols. While the "natural" parallel repetition transformation is known to reduce the soundness error of some special cases of interactive arguments, namely three-message protocols (Bellare, Impagliazzo, and Naor [FOCS '97]) and public-coin protocols (Hastad, Pass, Wikstrom, and Pietrzak [TCC '10]; Chung and Lu [TCC '10]; Chung and Pass [TCC '15]), it fails to do so in the general case (Bellare et al. above; see also Pietrzak and Wikstrom [TCC '07]). The only known round-preserving approach that applies to all interactive arguments is Haitner's random-terminating transformation [SICOMP '13], which reduces the soundness error at a weak exponential rate under parallel repetition: if the original $m$-round protocol has soundness error $1-\varepsilon$, then the $n$-fold parallel repetition of its random-terminating variant has soundness error $(1-\varepsilon)^{\varepsilon n/m^4}$ (omitting constant factors). Hastad et al. generalized this result to so-called partially simulatable interactive arguments. In this work we prove that parallel repetition of random-terminating arguments reduces the soundness error at a much stronger exponential rate: the soundness error of the $n$-fold parallel repetition is $(1-\varepsilon)^{n/m}$, only an $m$ factor from the optimal rate of $(1-\varepsilon)^n$ achievable in public-coin and three-message arguments. The result generalizes to partially simulatable arguments. We achieve this by presenting a tight bound on a relaxed variant of the KL-divergence between the distribution induced by our reduction and its ideal variant, a result whose scope extends beyond parallel repetition proofs. We prove that the above bound is tight for random-terminating arguments by presenting a matching protocol.
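A minimal numeric sketch (not part of the paper) may help make the gap between these rates concrete: it simply evaluates the three exponential bounds quoted above for illustrative parameter values, ignoring the omitted constant factors.

```python
# Numeric comparison of the soundness-error bounds discussed above
# (constant factors omitted, as in the abstract). Parameter values
# below are illustrative, not taken from the paper.

def optimal_bound(eps: float, n: int) -> float:
    """Optimal rate, achievable for public-coin and three-message arguments."""
    return (1 - eps) ** n

def haitner_bound(eps: float, n: int, m: int) -> float:
    """Haitner's rate for random-terminating arguments [SICOMP '13]."""
    return (1 - eps) ** (eps * n / m ** 4)

def new_bound(eps: float, n: int, m: int) -> float:
    """The rate proved in this work: only an m factor from optimal."""
    return (1 - eps) ** (n / m)

if __name__ == "__main__":
    eps, m = 0.1, 5  # soundness error 1 - eps = 0.9 for a 5-round protocol
    for n in (100, 1000, 10000):
        print(f"n={n:6d}  optimal={optimal_bound(eps, n):.3e}  "
              f"new={new_bound(eps, n, m):.3e}  "
              f"haitner={haitner_bound(eps, n, m):.3e}")
```

Even for a modest number of rounds, the $m^4$ factor in the exponent makes the earlier bound dramatically weaker; the paper closes that gap up to a single factor of $m$.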
2019 / JOFC / Hardness-Preserving Reductions via Cuckoo Hashing
The focus of this work is hardness-preserving transformations of somewhat limited pseudorandom function families (PRFs) into ones with more versatile characteristics. Consider the problem of domain extension of pseudorandom functions: given a PRF that takes as input elements of some domain $\mathcal{U}$, we would like to come up with a PRF over a larger domain. Can we do it with little work and without significantly impacting the security of the system? One approach is to first hash the larger domain into the smaller one and then apply the original PRF. Such a reduction, however, is vulnerable to a "birthday attack": after $\sqrt{|\mathcal{U}|}$ queries to the resulting PRF, a collision (i.e., two distinct inputs having the same hash value) is very likely to occur. As a consequence, the resulting PRF is insecure against an attacker making this number of queries. In this work, we show how to go beyond the aforementioned birthday-attack barrier by replacing the above simple hashing approach with a variant of cuckoo hashing, a hashing paradigm that resolves collisions in a table by using two hash functions and two tables, cleverly assigning each element to one of the two tables. We use this approach to obtain: (i) a domain extension method that requires just two calls to the original PRF, can withstand as many queries as the original domain size, and has a distinguishing probability that is exponentially small in the amount of non-cryptographic work; and (ii) a security-preserving reduction from non-adaptive to adaptive PRFs.
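The following is a minimal sketch of the high-level two-hash idea, not the paper's actual construction: HMAC-SHA256 stands in for the small-domain PRF, keyed BLAKE2b stands in for the two hash functions, and the carefully chosen cuckoo-hashing families and the security analysis of the paper are entirely elided.

```python
# Illustrative sketch: PRF domain extension via two hash functions and
# two PRF calls, in the spirit of the cuckoo-hashing approach described
# above. All primitives here are stand-ins, not the paper's construction.
import hashlib
import hmac
import os

SMALL_DOMAIN_BITS = 32  # size of the original PRF's domain U (illustrative)

def small_prf(key: bytes, x: bytes) -> bytes:
    """Stand-in for the original PRF over the small domain U."""
    return hmac.new(key, x, hashlib.sha256).digest()

def hash_into_u(hash_key: bytes, x: bytes) -> bytes:
    """Stand-in hash mapping the large domain into U."""
    d = hashlib.blake2b(x, key=hash_key, digest_size=SMALL_DOMAIN_BITS // 8)
    return d.digest()

class ExtendedPRF:
    """Large-domain PRF built from two hashes and two small-PRF calls."""

    def __init__(self) -> None:
        self.prf_key1 = os.urandom(32)
        self.prf_key2 = os.urandom(32)
        self.hash_key1 = os.urandom(32)
        self.hash_key2 = os.urandom(32)

    def evaluate(self, x: bytes) -> bytes:
        # Each input is hashed into U twice (the "two tables"); the two
        # PRF outputs are XORed, so a collision under one hash alone does
        # not collapse the combined output.
        y1 = small_prf(self.prf_key1, hash_into_u(self.hash_key1, x))
        y2 = small_prf(self.prf_key2, hash_into_u(self.hash_key2, x))
        return bytes(a ^ b for a, b in zip(y1, y2))

if __name__ == "__main__":
    f = ExtendedPRF()
    print(f.evaluate(b"an input from a much larger domain").hex())
```

The design intuition: with a single hash and a single PRF call, one collision $h(x)=h(x')$ immediately yields identical outputs, which is exactly the birthday attack; here two inputs produce identical outputs only if a joint collision event occurs across both hashes, which is the intuition for pushing security beyond the birthday bound.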
2019 / TCC / Statistical Difference Beyond the Polarizing Regime
The polarization lemma for statistical distance ($\mathrm{SD}$), due to Sahai and Vadhan (JACM, 2003), is an efficient transformation taking as input a pair of circuits $(C_0,C_1)$ and an integer $k$ and outputting a new pair of circuits $(D_0,D_1)$ such that if $\mathrm{SD}(C_0,C_1) \ge \alpha$ then $\mathrm{SD}(D_0,D_1) \ge 1-2^{-k}$, and if $\mathrm{SD}(C_0,C_1) \le \beta$ then $\mathrm{SD}(D_0,D_1) \le 2^{-k}$. The polarization lemma is known to hold for any constant values $\beta < \alpha^2$, but extending the lemma to the regime $\alpha^2 \le \beta < \alpha$ has remained elusive. The focus of this work is on the latter regime of parameters. Our main results are:

1. Polarization lemmas for different notions of distance, such as triangular discrimination ($\mathrm{TD}$) and Jensen-Shannon divergence ($\mathrm{JS}$), which enable polarization for some problems where the statistical distance satisfies $\alpha^2 < \beta < \alpha$. We also derive a polarization lemma for statistical distance with any inverse-polynomially small gap between $\alpha^2$ and $\beta$ (rather than a constant gap).

2. The average-case hardness of the statistical difference problem (i.e., determining whether the statistical distance between two given circuits is at least $\alpha$ or at most $\beta$), for any values $\beta < \alpha$, implies the existence of one-way functions. Such a result was previously only known for $\beta < \alpha^2$.

3. A (direct) constant-round interactive proof for estimating the statistical distance between any two distributions (up to any inverse-polynomial error), given circuits that generate them. Proofs of closely related statements have appeared in the literature, but we give a new proof that we find cleaner and more direct.
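The sketch below (not from the paper) computes the three distance notions the abstract refers to for a pair of explicit distributions, and demonstrates the basic amplification step underlying polarization: the $k$-fold direct product drives the statistical distance of a fixed pair toward 1.

```python
# Numeric sketch of the distance notions mentioned above (statistical
# distance, triangular discrimination, Jensen-Shannon divergence) and of
# how k-fold direct products amplify statistical distance. The example
# distributions are illustrative, not from the paper.
from itertools import product
from math import log2

def sd(p, q):
    """Statistical (total variation) distance."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def td(p, q):
    """Triangular discrimination: sum of (p - q)^2 / (p + q)."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(p, q) if a + b > 0)

def js(p, q):
    """Jensen-Shannon divergence (base 2, so it lies in [0, 1])."""
    def kl(r, m):
        return sum(a * log2(a / b) for a, b in zip(r, m) if a > 0)
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def direct_product(p, k):
    """The k-fold product distribution, over tuples of outcomes."""
    probs = []
    for tup in product(range(len(p)), repeat=k):
        pr = 1.0
        for i in tup:
            pr *= p[i]
        probs.append(pr)
    return probs

if __name__ == "__main__":
    P, Q = [0.6, 0.4], [0.4, 0.6]  # SD(P, Q) = 0.2
    print(f"SD={sd(P, Q):.3f}  TD={td(P, Q):.3f}  JS={js(P, Q):.3f}")
    for k in (1, 2, 5, 10):
        Pk, Qk = direct_product(P, k), direct_product(Q, k)
        print(f"k={k:2d}  SD(P^k, Q^k)={sd(Pk, Qk):.4f}")
```

Note that direct products increase the statistical distance of the far pair and the close pair alike; this is why polarization must combine them with a complementary operation that drives the distance of close pairs toward 0, and why the relationship between $\alpha$ and $\beta$ determines the regime in which polarization is known to work.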
2018 / EUROCRYPT
2018 / CRYPTO / From Laconic Zero-Knowledge to Public-Key Cryptography 📺
Since its inception, public-key encryption ($\mathsf{PKE}$) has been one of the main cornerstones of cryptography. A central goal in cryptographic research is to understand the foundations of public-key encryption and, in particular, to base its existence on a natural and generic complexity-theoretic assumption. An intriguing candidate for such an assumption is the existence of a cryptographically hard language $\mathcal{L} \in \mathsf{NP} \cap \mathsf{SZK}$.

In this work we prove that public-key encryption can be based on the foregoing assumption, as long as the (honest) prover in the zero-knowledge protocol is efficient and laconic. That is, the messages that the prover sends should be efficiently computable (given the witness) and short (i.e., of sufficiently sub-logarithmic length). Actually, our result is stronger and only requires the protocol to be zero-knowledge for an honest verifier and sound against computationally bounded cheating provers.

Languages in $\mathsf{NP}$ with such laconic zero-knowledge protocols are known from a variety of computational assumptions (e.g., Quadratic Residuosity, Decisional Diffie-Hellman, Learning with Errors, etc.). Thus, our main result can also be viewed as giving a unifying framework for constructing $\mathsf{PKE}$ which, in particular, captures many of the assumptions that were already known to yield $\mathsf{PKE}$.

We also show several extensions of our result. First, a certain weakening of our assumption on laconic zero-knowledge is actually equivalent to $\mathsf{PKE}$, thereby giving a complexity-theoretic characterization of $\mathsf{PKE}$. Second, a mild strengthening of our assumption also yields a (2-message) oblivious transfer protocol.
2015 / JOFC
2013 / TCC
2012 / TCC