International Association for Cryptologic Research

CryptoDB
Ron D. Rothblum

Affiliation: MIT and Northeastern University

Publications

Year · Venue · Title

2019 · PKC · Towards Non-Interactive Zero-Knowledge for NP from LWE
Non-interactive zero-knowledge ($\mathsf{NIZK}$) is a fundamental primitive that is widely used in the construction of cryptographic schemes and protocols. Despite this, general purpose constructions of $\mathsf{NIZK}$ proof systems are only known under a rather limited set of assumptions that are either number-theoretic (and can be broken by a quantum computer) or are not sufficiently well understood, such as obfuscation. Thus, a basic question that has drawn much attention is whether it is possible to construct general-purpose $\mathsf{NIZK}$ proof systems based on the learning with errors ($\mathsf{LWE}$) assumption.

Our main result is a reduction from constructing $\mathsf{NIZK}$ proof systems for all of $\mathbf{NP}$ based on $\mathsf{LWE}$, to constructing a $\mathsf{NIZK}$ proof system for a particular computational problem on lattices, namely a decisional variant of the Bounded Distance Decoding ($\mathsf{BDD}$) problem. That is, we show that assuming $\mathsf{LWE}$, every language $L \in \mathbf{NP}$ has a $\mathsf{NIZK}$ proof system if (and only if) the decisional $\mathsf{BDD}$ problem has a $\mathsf{NIZK}$ proof system. This (almost) confirms a conjecture of Peikert and Vaikuntanathan (CRYPTO, 2008).

To construct our $\mathsf{NIZK}$ proof system, we introduce a new notion that we call prover-assisted oblivious ciphertext sampling ($\mathsf{POCS}$), which we believe to be of independent interest. This notion extends the idea of oblivious ciphertext sampling, which allows one to sample ciphertexts without knowing the underlying plaintext. Specifically, we augment the oblivious ciphertext sampler with access to an (untrusted) prover to help it accomplish this task. We show that the existence of encryption schemes with a $\mathsf{POCS}$ procedure, as well as some additional natural requirements, suffices for obtaining $\mathsf{NIZK}$ proofs for $\mathbf{NP}$. We further show that such encryption schemes can be instantiated based on $\mathsf{LWE}$, assuming the existence of a $\mathsf{NIZK}$ proof system for the decisional $\mathsf{BDD}$ problem.
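
For orientation, one common LWE-style phrasing of the decisional $\mathsf{BDD}$ problem is sketched below; the bounds $\beta$ and $\gamma$ and the lattice notation $\Lambda_q(\mathbf{A})$ are illustrative choices made here for intuition, and the paper's exact parametrization may differ.

```latex
% Illustrative phrasing of decisional BDD over a q-ary lattice (for intuition only).
\[
  \Lambda_q(\mathbf{A}) = \{\mathbf{A}^{\!\top}\mathbf{s} \bmod q : \mathbf{s} \in \mathbb{Z}_q^{n}\},
  \qquad
  \text{decide: }\;
  \operatorname{dist}\big(\mathbf{t}, \Lambda_q(\mathbf{A})\big) \le \beta
  \quad\text{vs.}\quad
  \operatorname{dist}\big(\mathbf{t}, \Lambda_q(\mathbf{A})\big) > \gamma .
\]
```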

2019 · EUROCRYPT · Reusable Designated-Verifier NIZKs for all NP from CDH
Willy Quach, Ron D. Rothblum, Daniel Wichs
Non-interactive zero-knowledge proofs (NIZKs) are a fundamental cryptographic primitive. Despite a long history of research, we only know how to construct NIZKs under a few select assumptions, such as the hardness of factoring or using bilinear maps. Notably, there are no known constructions based on either the computational or decisional Diffie-Hellman (CDH/DDH) assumption without relying on a bilinear map.

In this paper, we study a relaxation of NIZKs in the designated verifier setting (DV-NIZK), in which the public common-reference string is generated together with a secret key that is given to the verifier in order to verify proofs. In this setting, we distinguish between one-time and reusable schemes, depending on whether they can be used to prove only a single statement or arbitrarily many statements. For reusable schemes, the main difficulty is to ensure that soundness continues to hold even when the malicious prover learns whether various proofs are accepted or rejected by the verifier. One-time DV-NIZKs are known to exist for general NP statements assuming only public-key encryption. However, prior to this work, we did not have any construction of reusable DV-NIZKs for general NP statements from any assumption under which we didn’t already also have standard NIZKs.

In this work, we construct reusable DV-NIZKs for general NP statements under the CDH assumption, without requiring a bilinear map. Our construction is based on the hidden-bits paradigm, which was previously used to construct standard NIZKs. We define a cryptographic primitive called a hidden-bits generator (HBG), along with a designated-verifier variant (DV-HBG), which modularly abstract out how to use this paradigm to get both standard NIZKs and reusable DV-NIZKs. We construct a DV-HBG scheme under the CDH assumption by relying on techniques from the Cramer-Shoup hash-proof system, and this yields our reusable DV-NIZK for general NP statements under CDH.

We also consider a strengthening of DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK) where the setup consists of an honestly generated common random string and the verifier then gets to choose his own (potentially malicious) public/secret key pair to generate/verify proofs. We construct MDV-NIZKs under the “one-more CDH” assumption without relying on bilinear maps.
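
To make the designated-verifier syntax concrete, here is a minimal, hypothetical interface for a reusable DV-NIZK as described above; the class and method names are invented for the sketch and are not the paper's notation.

```python
# A minimal, hypothetical interface for a reusable designated-verifier NIZK.
# Names and types are illustrative only; they are not taken from the paper.
from typing import Any, Tuple


class DVNIZK:
    def setup(self, security_param: int) -> Tuple[Any, Any]:
        """Trusted setup: returns (crs, sk). The CRS is public; sk goes only to the verifier."""
        raise NotImplementedError

    def prove(self, crs: Any, statement: Any, witness: Any) -> Any:
        """The prover uses only the public CRS, never the verifier's secret key."""
        raise NotImplementedError

    def verify(self, crs: Any, sk: Any, statement: Any, proof: Any) -> bool:
        """Verification uses sk. Reusability means soundness must hold even if a
        cheating prover observes the accept/reject outcome of many queries."""
        raise NotImplementedError
```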

2019 · CRYPTO · New Constructions of Reusable Designated-Verifier NIZKs
Non-interactive zero-knowledge arguments (NIZKs) for $\mathsf{NP}$ are an important cryptographic primitive, but we currently only have instantiations under a few specific assumptions. Notably, we are missing constructions from the learning with errors (LWE) assumption, the Diffie-Hellman (CDH/DDH) assumption, and the learning parity with noise (LPN) assumption.

In this paper, we study a relaxation of NIZKs to the designated-verifier setting (DV-NIZK), where a trusted setup generates a common reference string together with a secret key for the verifier. We want reusable schemes, which allow the verifier to reuse the secret key to verify many different proofs, and soundness should hold even if the malicious prover learns whether various proofs are accepted or rejected. Such reusable DV-NIZKs were recently constructed under the CDH assumption, but it was open whether they can also be constructed under LWE or LPN.

We also consider an extension of reusable DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK). In this setting, the only trusted setup consists of a common random string. However, there is also an additional untrusted setup in which the verifier chooses a public/secret key needed to generate/verify proofs, respectively. We require that zero-knowledge holds even if the public key is chosen maliciously by the verifier. Such reusable MDV-NIZKs were recently constructed under the “one-more CDH” assumption, but constructions under CDH/LWE/LPN remained open.

In this work, we give new constructions of (reusable) DV-NIZKs and MDV-NIZKs using generic primitives that can be instantiated under CDH, LWE, or LPN.
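
The MDV-NIZK trust model described above differs from the plain DV-NIZK interface sketched earlier only in where the verifier's key comes from; the hypothetical sketch below highlights that difference (all names are illustrative, not the paper's notation).

```python
# Hypothetical sketch of the MDV-NIZK trust model: the only trusted setup is a
# uniformly random string, and the verifier generates its own key pair, which a
# malicious verifier may choose arbitrarily. Names are illustrative only.
import os
from typing import Any, Tuple


def trusted_setup(security_param_bits: int) -> bytes:
    """The only trusted step: publish a uniformly random common string."""
    return os.urandom(security_param_bits // 8)


class MDVNIZKVerifier:
    def keygen(self, crs: bytes) -> Tuple[Any, Any]:
        """Untrusted setup: the verifier picks (pk, sk) itself. Zero-knowledge must
        hold even when pk is produced adversarially instead of by this honest procedure."""
        raise NotImplementedError
```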

2019 · TCC · Statistical Difference Beyond the Polarizing Regime
The polarization lemma for statistical distance ($\mathrm{SD}$), due to Sahai and Vadhan (JACM, 2003), is an efficient transformation taking as input a pair of circuits $(C_0,C_1)$ and an integer $k$ and outputting a new pair of circuits $(D_0,D_1)$ such that if $\mathrm{SD}(C_0,C_1) \ge \alpha$ then $\mathrm{SD}(D_0,D_1) \ge 1-2^{-k}$ and if $\mathrm{SD}(C_0,C_1) \le \beta$ then $\mathrm{SD}(D_0,D_1) \le 2^{-k}$. The polarization lemma is known to hold for any constant values $\beta < \alpha^2$, but extending the lemma to the regime in which $\alpha^2 \le \beta < \alpha$ has remained elusive. The focus of this work is in studying the latter regime of parameters. Our main results are:

1. Polarization lemmas for different notions of distance, such as Triangular Discrimination ($\mathrm{TD}$) and Jensen-Shannon Divergence ($\mathrm{JS}$), which enable polarization for some problems where the statistical distance satisfies $\alpha^2 < \beta < \alpha$. We also derive a polarization lemma for statistical distance with any inverse-polynomially small gap between $\alpha^2$ and $\beta$ (rather than a constant).

2. The average-case hardness of the statistical difference problem (i.e., determining whether the statistical distance between two given circuits is at least $\alpha$ or at most $\beta$), for any values of $\beta < \alpha$, implies the existence of one-way functions. Such a result was previously only known for $\beta < \alpha^2$.

3. A (direct) constant-round interactive proof for estimating the statistical distance between any two distributions (up to any inverse polynomial error) given circuits that generate them. Proofs of closely related statements have appeared in the literature but we give a new proof which we find to be cleaner and more direct.
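
As background for why the $\beta < \alpha^2$ threshold appears, the two classical facts that Sahai and Vadhan combine are recalled below in hedged form; the constants are as usually quoted, and their paper should be consulted for the exact statements.

```latex
% XOR lemma: from (C_0, C_1), let D_b sample bits b_1,\dots,b_\ell uniformly with
% b_1 \oplus \dots \oplus b_\ell = b and output one sample from each C_{b_i}; then
\[
  \mathrm{SD}(D_0, D_1) \;=\; \mathrm{SD}(C_0, C_1)^{\ell}.
\]
% Direct product: for \ell independent repetitions, with \delta = \mathrm{SD}(C_0,C_1),
\[
  1 - 2e^{-\ell\delta^2/2} \;\le\; \mathrm{SD}\!\big(C_0^{\otimes \ell}, C_1^{\otimes \ell}\big) \;\le\; \ell\,\delta .
\]
% Roughly, the XOR step maps the promise (\alpha, \beta) to (\alpha^\ell, \beta^\ell); only when
% \beta < \alpha^2 does repetition then drive the two cases toward 1 and toward 0
% simultaneously, which is why the regime \alpha^2 \le \beta < \alpha is harder.
```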

2019 · TCC · On the (In)security of Kilian-Based SNARGs
The Fiat-Shamir transform is an incredibly powerful technique that uses a suitable hash function to reduce the interaction of general public-coin protocols. Unfortunately, there are known counterexamples showing that this methodology may not be sound (no matter what concrete hash function is used). Still, these counterexamples are somewhat unsatisfying, as the underlying protocols were specifically tailored to make Fiat-Shamir fail. This raises the question of whether this transform is sound when applied to natural protocols.

One of the most important protocols for which we would like to reduce interaction is Kilian’s four-message argument system for all of $\mathsf{NP}$, based on collision resistant hash functions ($\mathsf{CRHF}$) and probabilistically checkable proofs ($\mathsf{PCP}$s). Indeed, an application of the Fiat-Shamir transform to Kilian’s protocol is at the heart of both theoretical results (e.g., Micali’s CS proofs) as well as leading practical approaches of highly efficient non-interactive proof-systems (e.g., $\mathsf{SNARK}$s and $\mathsf{STARK}$s).

In this work, we show significant obstacles to establishing soundness of (what we refer to as) the “Fiat-Shamir-Kilian-Micali” ($\mathsf{FSKM}$) protocol. More specifically:

- We construct a (contrived) $\mathsf{CRHF}$ for which $\mathsf{FSKM}$ is unsound for a very large class of $\mathsf{PCP}$s and for any Fiat-Shamir hash function. The collision-resistance of our $\mathsf{CRHF}$ relies on very strong but plausible cryptographic assumptions. The statement is “tight” in the following sense: any $\mathsf{PCP}$ outside the scope of our result trivially implies a $\mathsf{SNARK}$, eliminating the need for $\mathsf{FSKM}$ in the first place.

- Second, we consider a known extension of Kilian’s protocol to an interactive variant of $\mathsf{PCP}$s called probabilistically checkable interactive proofs ($\mathsf{PCIP}$) (also known as interactive oracle proofs or $\mathsf{IOP}$s). We construct a particular (contrived) $\mathsf{PCIP}$ for $\mathsf{NP}$ for which the $\mathsf{FSKM}$ protocol is unsound no matter what $\mathsf{CRHF}$ and Fiat-Shamir hash function is used. This result is unconditional (i.e., does not rely on any cryptographic assumptions).

Put together, our results show that the soundness of $\mathsf{FSKM}$ must rely on some special structure of both the $\mathsf{CRHF}$ and $\mathsf{PCP}$ that underlie Kilian’s protocol. We believe these negative results may cast light on how to securely instantiate the $\mathsf{FSKM}$ protocol by a synergistic choice of the $\mathsf{PCP}$, $\mathsf{CRHF}$, and Fiat-Shamir hash function.
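
For readers unfamiliar with the FSKM flow, the toy sketch below shows the one step the Fiat-Shamir transform replaces: deriving the verifier's PCP query positions from a hash of the prover's Merkle commitment. The function name, the use of SHA-256, and the query-derivation details are assumptions made for the sketch, not the paper's construction.

```python
# Toy illustration of the Fiat-Shamir step in FSKM: the PCP query positions are
# derived by hashing the prover's Merkle-tree commitment (its root), replacing
# the verifier's random challenge. Everything here is schematic.
import hashlib


def fs_queries(merkle_root: bytes, num_queries: int, pcp_length: int) -> list[int]:
    """Derive num_queries positions in a PCP of length pcp_length from the root."""
    queries = []
    counter = 0
    while len(queries) < num_queries:
        digest = hashlib.sha256(merkle_root + counter.to_bytes(4, "big")).digest()
        queries.append(int.from_bytes(digest, "big") % pcp_length)
        counter += 1
    return queries


# Example: positions the prover would then open, with authentication paths.
print(fs_queries(b"\x00" * 32, num_queries=3, pcp_length=1 << 20))
```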

2018 · EUROCRYPT
2018 · EUROCRYPT

2018 · CRYPTO · From Laconic Zero-Knowledge to Public-Key Cryptography
Since its inception, public-key encryption ($\mathsf{PKE}$) has been one of the main cornerstones of cryptography. A central goal in cryptographic research is to understand the foundations of public-key encryption and in particular, base its existence on a natural and generic complexity-theoretic assumption. An intriguing candidate for such an assumption is the existence of a cryptographically hard language $L \in \mathbf{NP}$ that has a zero-knowledge protocol.

In this work we prove that public-key encryption can be based on the foregoing assumption, as long as the (honest) prover in the zero-knowledge protocol is efficient and laconic. That is, messages that the prover sends should be efficiently computable (given the witness) and short (i.e., of sufficiently sub-logarithmic length). Actually, our result is stronger and only requires the protocol to be zero-knowledge for an honest-verifier and sound against computationally bounded cheating provers.

Languages in $\mathbf{NP}$ with such laconic zero-knowledge protocols are known from a variety of computational assumptions (e.g., Quadratic Residuosity, Decisional Diffie-Hellman, Learning with Errors, etc.). Thus, our main result can also be viewed as giving a unifying framework for constructing $\mathsf{PKE}$ which, in particular, captures many of the assumptions that were already known to yield $\mathsf{PKE}$.

We also show several extensions of our result. First, that a certain weakening of our assumption on laconic zero-knowledge is actually equivalent to $\mathsf{PKE}$, thereby giving a complexity-theoretic characterization of $\mathsf{PKE}$. Second, a mild strengthening of our assumption also yields a (2-message) oblivious transfer protocol.

2017 · CRYPTO
2017 · CRYPTO
2016 · CRYPTO
2015 · CRYPTO
2013 · TCC
2013 · CRYPTO

2013 · JOFC · Enhancements of Trapdoor Permutations
Oded Goldreich, Ron D. Rothblum
We take a closer look at several enhancements of the notion of trapdoor permutations. Specifically, we consider the notions of enhanced trapdoor permutation (Goldreich, Foundations of Cryptography: Basic Applications, 2004) and doubly enhanced trapdoor permutation (Goldreich, Computational Complexity: A Conceptual Perspective, 2011), as well as intermediate notions (Rothblum, A Taxonomy of Enhanced Trapdoor Permutations, 2010). These enhancements arose in the study of Oblivious Transfer and NIZK, but they address natural concerns that may also arise in other applications of trapdoor permutations. We clarify why these enhancements are needed in such applications, and show that they actually suffice for these needs.
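
As a quick reminder of what the enhancements require, here is an informal paraphrase of the standard textbook formulations (the paper should be consulted for the precise definitions); $S$ denotes the collection's domain sampler, $r$ its random coins, and $\alpha$ the permutation index.

```latex
\begin{itemize}
  \item \textbf{Standard hardness:} given the index $\alpha$ and a random domain
        element $y = S(\alpha; r)$, it is infeasible to find $f_\alpha^{-1}(y)$.
  \item \textbf{Enhanced:} inversion remains infeasible even when the adversary is
        also given the sampling coins $r$ that produced $y$.
  \item \textbf{Doubly enhanced:} in addition, one can efficiently sample coins $r$
        together with a preimage $x$ such that $f_\alpha(x) = S(\alpha; r)$.
\end{itemize}
```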

2011 · TCC

Program Committees

Eurocrypt 2020
Crypto 2018
TCC 2017