## CryptoDB

### Vinod Vaikuntanathan

#### Affiliation: MIT

#### Publications


**EUROCRYPT 2020: Extracting Randomness from Extractor-Dependent Sources**

We revisit the well-studied problem of extracting nearly uniform randomness from an arbitrary source of sufficient min-entropy. Strong seeded extractors solve this problem by relying on a public random seed, which is unknown to the source. Here, we consider a setting where the seed is reused over time and the source may depend on prior calls to the extractor with the same seed. Can we still extract nearly uniform randomness?
In more detail, we assume the seed is chosen randomly, but the source can make arbitrary oracle queries to the extractor with the given seed before outputting a sample. We require that the sample has entropy and differs from any of the previously queried values. The extracted output should look uniform even to a distinguisher that gets the seed. We consider two variants of the problem, depending on whether the source only outputs the sample, or whether it can also output some correlated public auxiliary information that preserves the sample's entropy. Our results are:
* Without Auxiliary Information: We show that every pseudo-random function (PRF) with a sufficiently high security level is a good extractor in this setting, even if the distinguisher is computationally unbounded. We further show that the source necessarily needs to be computationally bounded and that such extractors imply one-way functions.
* With Auxiliary Information: We construct secure extractors in this setting, as long as both the source and the distinguisher are computationally bounded. We give several constructions based on different intermediate primitives, yielding instantiations based on the DDH, DLIN, LWE or DCR assumptions. On the negative side, we show that one cannot prove security against computationally unbounded distinguishers in this setting under any standard assumption via a black-box reduction. Furthermore, even when restricting to computationally bounded distinguishers, we show that there exist PRFs that are insecure as extractors in this setting and that a large class of constructions cannot be proven secure via a black-box reduction from standard assumptions.
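
The first result above says that, without auxiliary information, any PRF with a sufficiently high security level already works as an extractor with a reusable public seed. A minimal sketch of the syntax, using HMAC-SHA256 as a stand-in PRF (an illustrative choice, not the paper's construction):

```python
import hmac
import hashlib
import os

def extract(seed: bytes, sample: bytes) -> bytes:
    """PRF-as-extractor: key the PRF with the public seed and evaluate
    it on the source's sample.  HMAC-SHA256 stands in here for a PRF
    with a sufficiently high security level."""
    return hmac.new(seed, sample, hashlib.sha256).digest()

seed = os.urandom(32)    # public seed, reused across extractions
sample = os.urandom(64)  # source output: must retain min-entropy and
                         # differ from the source's earlier oracle queries
output = extract(seed, sample)
assert len(output) == 32
```

Note that the source may query `extract` with the same seed before producing its sample; the guarantee only requires that the final sample has entropy and differs from those queries.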

**EUROCRYPT 2020: Statistical ZAPR Arguments from Bilinear Maps**

Dwork and Naor (FOCS '00) defined ZAPs as 2-message witness-indistinguishable proofs that are public-coin. We relax this to *ZAPs with private randomness* (ZAPRs), where the verifier can use private coins to sample the first message (independently of the statement being proved), but the proof must remain publicly verifiable given only the protocol transcript. In particular, ZAPRs are *reusable*, meaning that the first message can be reused for multiple proofs without compromising security.

Known constructions of ZAPs from trapdoor permutations or bilinear maps are only computationally WI (and statistically sound). Two recent results of Badrinarayanan-Fernando-Jain-Khurana-Sahai and Goyal-Jain-Jin-Malavolta [EUROCRYPT '20] construct the first *statistical ZAP arguments*, which are statistically WI (and computationally sound), from the quasi-polynomial LWE assumption. Here, we construct *statistical ZAPR arguments* from the quasi-polynomial decision-linear (DLIN) assumption on groups with a bilinear map. Our construction relies on a combination of several tools, including Groth-Ostrovsky-Sahai NIZK and NIWI [EUROCRYPT '06, CRYPTO '06, JACM '12], "sometimes-binding statistically hiding commitments" [Kalai-Khurana-Sahai, EUROCRYPT '18], and the "MPC-in-the-head" technique [Ishai-Kushilevitz-Ostrovsky-Sahai, STOC '07].

**EUROCRYPT 2019: Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing**

We present a worst case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires an extremely high noise rate of $1/2-1/\mathrm{poly}(n)$. Thus we can only show that "very hard" LPN is harder than some "very mildly hard" worst case problem. We note that LPN with noise $1/2-1/\mathrm{poly}(n)$ already implies symmetric cryptography.

Specifically, we consider the (n, m, w)-nearest codeword problem ((n, m, w)-NCP), which takes as input a generating matrix for a binary linear code in m dimensions and rank n, and a target vector which is very close to the code (Hamming distance at most w), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $w/m \approx \log^2 n / n$, (n, m, w)-NCP can be solved given oracle access to an LPN distinguisher with noise ratio $1/2-1/\mathrm{poly}(n)$.

Our proof relies on a smoothing lemma for codes which we show to have further implications: We show that (n, m, w)-NCP with the aforementioned parameters lies in the complexity class $\mathrm{Search}\text{-}\mathcal{BPP}^{\mathcal{SZK}}$ (i.e., it is reducible to a problem that has a statistical zero knowledge protocol), implying that it is unlikely to be $\mathcal{NP}$-hard. We then show that the hardness of LPN with very low noise rate $\log^2(n)/n$ implies the existence of collision resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in $\mathcal{BPP}^{\mathcal{SZK}}$).
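
For concreteness, the LPN distinguisher used in the reduction sees samples of the form $(a, \langle a, s\rangle + e \bmod 2)$ with a Bernoulli error bit. A small sketch of the two sample distributions the abstract contrasts (parameter choices here are illustrative):

```python
import math
import random

def lpn_samples(secret, m, noise_rate, rng):
    """Return m LPN samples (a, <a, secret> + e mod 2): a is a uniform
    bit vector and the error bit e is 1 with probability noise_rate."""
    n = len(secret)
    out = []
    for _ in range(m):
        a = [rng.randrange(2) for _ in range(n)]
        e = 1 if rng.random() < noise_rate else 0
        b = (sum(x * y for x, y in zip(a, secret)) + e) % 2
        out.append((a, b))
    return out

rng = random.Random(0)
n = 1024
secret = [rng.randrange(2) for _ in range(n)]
# the worst-case reduction needs the extreme rate 1/2 - 1/poly(n) ...
high_noise = lpn_samples(secret, 100, 0.5 - 1.0 / n**2, rng)
# ... while the hashing result concerns the very low rate log^2(n)/n
low_noise = lpn_samples(secret, 100, math.log2(n)**2 / n, rng)
```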

**CRYPTO 2019: Reusable Non-Interactive Secure Computation**

We consider the problem of Non-Interactive Two-Party Secure Computation (NISC), where Rachel wishes to publish an encryption of her input x, in such a way that any other party, who holds an input y, can send her a single message which conveys to her the value f(x, y), and nothing more. We demand security against malicious parties. While such protocols are easy to construct using garbled circuits and general non-interactive zero-knowledge proofs, this approach inherently makes a non-black-box use of the underlying cryptographic primitives and is infeasible in practice.

Ishai et al. (Eurocrypt 2011) showed how to construct NISC protocols that only use parallel calls to an ideal oblivious transfer (OT) oracle, and additionally make only a black-box use of any pseudorandom generator. Combined with the efficient 2-message OT protocol of Peikert et al. (Crypto 2008), this leads to a practical approach to NISC that has been implemented in subsequent works. However, a major limitation of all known OT-based NISC protocols is that they are subject to selective failure attacks that allow a malicious sender to entirely compromise the security of the protocol when the receiver's first message is reused.

Motivated by the failure of the OT-based approach, we consider the problem of basing reusable NISC on parallel invocations of a standard arithmetic generalization of OT known as oblivious linear-function evaluation (OLE). We obtain the following results:

* We construct an information-theoretically secure reusable NISC protocol for arithmetic branching programs and general zero-knowledge functionalities in the OLE-hybrid model. Our zero-knowledge protocol only makes an absolute constant number of OLE calls per gate in an arithmetic circuit whose satisfiability is being proved. We also get reusable NISC in the OLE-hybrid model for general Boolean circuits using any one-way function.
* We complement this by a negative result, showing that reusable NISC is impossible to achieve in the OT-hybrid model. This provides a formal justification for the need to replace OT by OLE.
* We build a universally composable 2-message reusable OLE protocol in the CRS model that can be based on the security of Paillier encryption and requires only a constant number of modular exponentiations. This provides the first arithmetic analogue of the 2-message OT protocols of Peikert et al. (Crypto 2008).
* By combining our NISC protocol in the OLE-hybrid model and the 2-message OLE protocol, we get protocols with new attractive asymptotic and concrete efficiency features. In particular, we get the first (designated-verifier) NIZK protocols for NP where, following a statement-independent preprocessing, both proving and verifying are entirely "non-cryptographic" and involve only a constant computational overhead. Furthermore, we get the first statistical designated-verifier NIZK argument for NP under an assumption related to factoring.
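
Since the whole approach rests on replacing OT by its arithmetic generalization OLE, it may help to see the ideal functionality: the sender inputs field elements $(a, b)$, the receiver inputs $x$ and learns only $a \cdot x + b$. A toy sketch over a prime field (the modulus is an arbitrary illustrative choice):

```python
P = 2**61 - 1  # illustrative prime modulus

def ideal_ole(x: int, a: int, b: int) -> int:
    """Ideal OLE functionality: the receiver holds x, the sender holds
    (a, b); the receiver learns a*x + b mod P and nothing else."""
    return (a * x + b) % P

# OT falls out as the special case where x is a bit: encoding the two
# messages as a = m1 - m0, b = m0 makes the receiver's output m_x.
m0, m1 = 1234, 5678
assert ideal_ole(0, (m1 - m0) % P, m0) == m0
assert ideal_ole(1, (m1 - m0) % P, m0) == m1
```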

**TCC 2019: Lattice Trapdoors and IBE from Middle-Product LWE**

Middle-product learning with errors (MP-LWE) was recently introduced by Rosca, Sakzad, Steinfeld and Stehlé (CRYPTO 2017) as a way to combine the efficiency of Ring-LWE with the more robust security guarantees of plain LWE. While Ring-LWE is at the heart of efficient lattice-based cryptosystems, it involves the choice of an underlying ring which is essentially arbitrary; in other words, the effect of this choice on the security of Ring-LWE is poorly understood. On the other hand, Rosca et al. showed that a new LWE variant, called MP-LWE, is as secure as Polynomial-LWE (another variant of Ring-LWE) over any of a broad class of number fields. They also demonstrated the usefulness of MP-LWE by constructing an MP-LWE based public-key encryption scheme whose efficiency is comparable to Ring-LWE based public-key encryption.

In this work, we take this line of research further by showing how to construct Identity-Based Encryption (IBE) schemes that are secure under a variant of the MP-LWE assumption. Our IBE schemes match the efficiency of Ring-LWE based IBE, including a scheme in the random oracle model with keys and ciphertexts of size $\tilde{O}(n)$ (for n-bit identities).

We construct our IBE scheme following the lattice trapdoors paradigm of [Gentry, Peikert, and Vaikuntanathan, STOC '08]; our main technical contributions are introducing a new leftover hash lemma and instantiating a new variant of lattice trapdoors compatible with MP-LWE. This work demonstrates that the efficiency/security tradeoff gains of MP-LWE can be extended beyond public-key encryption to more complex lattice-based primitives.
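
The middle product at the heart of MP-LWE keeps only the middle coefficients of a polynomial product, which is what lets the assumption avoid committing to one ring. A minimal sketch (the indexing convention below is one common choice; the paper fixes its own exact parameters):

```python
def poly_mul(a, b):
    """Schoolbook product of coefficient lists (index = degree)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def middle_product(a, b, d):
    """Return d 'middle' coefficients of a*b: those of degrees
    len(a)-1 .. len(a)+d-2."""
    c = poly_mul(a, b)
    return c[len(a) - 1 : len(a) - 1 + d]

# An MP-LWE sample pairs a random polynomial a with
# b = middle_product(a, s) + e for a longer secret polynomial s.
a = [1, 2, 3]
s = [4, 5, 6, 7]
print(middle_product(a, s, 2))  # → [28, 34]
```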

**TCC 2019: Matrix PRFs: Constructions, Attacks, and Applications to Obfuscation**

We initiate a systematic study of pseudorandom functions (PRFs) that are computable by simple matrix branching programs; we refer to these objects as "matrix PRFs". Matrix PRFs are attractive due to their simplicity, strong connections to complexity theory and group theory, and recent applications in program obfuscation. Our main results are:

* We present constructions of matrix PRFs based on the conjectured hardness of computational problems pertaining to matrix products.
* We show that any matrix PRF that is computable by a read-c, width w branching program can be broken in time poly$(w^c)$; this means that any matrix PRF based on constant-width matrices must read each input bit $\omega(\log(\lambda))$ times. Along the way, we simplify the "tensor switching lemmas" introduced in previous iO attacks.
* We show that a subclass of the candidate local-PRG proposed by Barak et al. [Eurocrypt 2018] can be broken using simple matrix algebra.
* We show that augmenting the CVW18 iO candidate with a matrix PRF provably immunizes the candidate against all known algebraic and statistical zeroizing attacks, as captured by a new and simple adversarial model.
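
Evaluating a matrix branching program just means multiplying out one matrix per read, selected by the input bit read at that step. A toy sketch of the evaluation syntax (random 0/1 matrices here are purely illustrative and are of course not a secure PRF key):

```python
import random

def mat_mul(A, B):
    """Multiply two square integer matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def eval_matrix_bp(key, x, read_order):
    """Evaluate a matrix branching program: at step t, read input bit
    x[read_order[t]] and multiply in key[t][bit].  A matrix PRF keys
    the program by the matrix pairs themselves."""
    w = len(key[0][0])
    acc = [[int(i == j) for j in range(w)] for i in range(w)]  # identity
    for t, (B0, B1) in enumerate(key):
        acc = mat_mul(acc, B1 if x[read_order[t]] else B0)
    return acc

rng = random.Random(0)
w, n, c = 3, 4, 2                # width 3, 4 input bits, read-2
read_order = list(range(n)) * c  # each input bit is read c times
key = [([[rng.randrange(2) for _ in range(w)] for _ in range(w)],
        [[rng.randrange(2) for _ in range(w)] for _ in range(w)])
       for _ in read_order]
out = eval_matrix_bp(key, [1, 0, 1, 1], read_order)
assert len(out) == w
```

The attack in the second bullet runs in time poly$(w^c)$ in exactly these parameters w (width) and c (reads per bit).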

**TCC 2019: Optimal Bounded-Collusion Secure Functional Encryption**

We construct private-key and public-key functional encryption schemes in the bounded-key setting; that is, secure against adversaries that obtain an a-priori bounded number of functional keys (also known as the collusion bound).

An important metric considered in the literature on bounded-key functional encryption schemes is the dependence of the running time of the encryption algorithm on the collusion bound $Q=Q(\lambda)$ (where $\lambda$ is the security parameter). It is known that a bounded-key functional encryption scheme with encryption complexity growing with $Q^{1-\varepsilon}$, for any constant $\varepsilon > 0$, implies indistinguishability obfuscation. On the other hand, in the public-key setting, it was previously unknown whether we could achieve encryption complexity growing linearly with Q, also known as optimal bounded-key FE, based on well-studied assumptions.

In this work, we give the first construction of an optimal bounded-key public-key functional encryption scheme under the minimal assumption of the existence of any public-key encryption scheme. Moreover, our scheme supports the class of all polynomial-size circuits.

Our techniques also extend to the private-key setting. We achieve a construction of an optimal bounded-key functional encryption scheme in the private-key setting based on the minimal assumption of one-way functions, instead of learning with errors as achieved in prior works.

**CRYPTO 2018: GGH15 Beyond Permutation Branching Programs: Proofs, Attacks, and Candidates**

We carry out a systematic study of the GGH15 graded encoding scheme used with general branching programs. This is motivated by the fact that general branching programs are more efficient than permutation branching programs and also substantially more expressive in the read-once setting. Our main results are as follows:

* **Proofs.** We present new constructions of private constrained PRFs and lockable obfuscation, for constraints (resp. functions to be obfuscated) that are computable by general branching programs. Our constructions are secure under LWE with subexponential approximation factors. Previous constructions of this kind crucially rely on the permutation structure of the underlying branching programs. Using general branching programs allows us to obtain more efficient constructions for certain classes of constraints (resp. functions), while posing new challenges in the proof, which we overcome using new proof techniques.
* **Attacks.** We extend the previous attacks on indistinguishability obfuscation (iO) candidates that use GGH15 encodings. The new attack simply uses the rank of a matrix as the distinguisher, so we call it a "rank attack". The rank attack breaks, among others, the iO candidate for general read-once branching programs by Halevi, Halevi, Shoup and Stephens-Davidowitz (CCS 2017).
* **Candidate Witness Encryption and iO.** Drawing upon insights from our proofs and attacks, we present simple candidates for witness encryption and iO that resist the existing attacks, using GGH15 encodings. Our candidate for witness encryption crucially exploits the fact that formulas in conjunctive normal form (CNFs) can be represented by general, read-once branching programs.

**TCC 2018: Traitor-Tracing from LWE Made Simple and Attribute-Based**

A traitor tracing scheme is a public key encryption scheme for which there are many secret decryption keys. Any of these keys can decrypt a ciphertext; moreover, even if a coalition of users collude, put together their decryption keys, and attempt to create a new decryption key, there is an efficient algorithm to trace the new key to at least one of the colluders.

Recently, Goyal, Koppula and Waters (GKW, STOC 18) provided the first traitor tracing scheme from LWE with ciphertext and secret key sizes that grow polynomially in $\log n$, where n is the number of users. The main technical building block in their construction is a strengthening of (bounded collusion secure) secret-key functional encryption which they refer to as mixed functional encryption (FE).

In this work, we improve upon and extend the GKW traitor tracing scheme:

* We provide simpler constructions of mixed FE schemes based on the LWE assumption. Our constructions improve upon the GKW construction in terms of expressiveness, modularity, and security.
* We provide a construction of attribute-based traitor tracing for all circuits based on the LWE assumption.

**EUROCRYPT 2014**

**JOFC 2013: Round-Optimal Password-Based Authenticated Key Exchange**

We show a general framework for constructing password-based authenticated key-exchange protocols with optimal round complexity—one message per party, sent simultaneously—in the standard model, assuming the existence of a common reference string. When our framework is instantiated using bilinear-map-based cryptosystems, the resulting protocol is also (reasonably) efficient. Somewhat surprisingly, our framework can be adapted to give protocols in the standard model that are universally composable while still using only one (simultaneous) round.

**TCC 2012**

**EUROCRYPT 2012**

**EPRINT 2010: i-Hop Homomorphic Encryption and Rerandomizable Yao Circuits**

Homomorphic encryption (HE) schemes enable computing functions on encrypted data, by means of a public $\mathsf{Eval}$ procedure that can be applied to ciphertexts. But the evaluated ciphertexts so generated may differ from freshly encrypted ones. This brings up the question of whether one can keep computing on evaluated ciphertexts. An *$i$-hop* homomorphic encryption scheme is one where $\mathsf{Eval}$ can be called on its own output up to $i$ times, while still being able to decrypt the result. A *multi-hop* homomorphic encryption scheme is one which is $i$-hop for all $i$. In this work we study $i$-hop and multi-hop schemes in conjunction with the properties of function-privacy (i.e., $\mathsf{Eval}$'s output hides the function) and compactness (i.e., the output of $\mathsf{Eval}$ is short). We provide formal definitions and describe several constructions.

First, we observe that "bootstrapping" techniques can be used to convert any (1-hop) homomorphic encryption scheme into an $i$-hop scheme for any $i$, and the result inherits the function-privacy and/or compactness of the underlying scheme. However, if the underlying scheme is not compact (such as schemes derived from Yao circuits) then the complexity of the resulting $i$-hop scheme can be as high as $k^{O(i)}$.

We then describe a specific DDH-based multi-hop homomorphic encryption scheme that does not suffer from this exponential blowup. Although not compact, this scheme has complexity linear in the size of the composed function, independently of the number of hops. The main technical ingredient in this solution is a *re-randomizable* variant of Yao circuits: namely, given a garbled circuit, anyone can re-garble it in such a way that even the party that generated the original garbled circuit cannot recognize it. This construction may be of independent interest.

**EPRINT 2010: A Simple BGN-type Cryptosystem from LWE**

We construct a simple public-key encryption scheme that supports polynomially many additions and one multiplication, similar to the cryptosystem of Boneh, Goh, and Nissim (BGN). Security is based on the hardness of the learning with errors (LWE) problem, which is known to be as hard as certain worst-case lattice problems.
Some features of our cryptosystem include support for a large message space, an easy way of achieving formula-privacy, a better message-to-ciphertext expansion ratio than BGN, and an easy way of multiplying two encrypted polynomials. The scheme can also be made identity-based and leakage-resilient (at the cost of a higher message-to-ciphertext expansion ratio).

**EPRINT 2010: One-Round Password-Based Authenticated Key Exchange**

We show a general framework for constructing password-based authenticated key exchange protocols with optimal round complexity --- one message per party, sent simultaneously --- in the standard model, assuming the existence of a common reference string. When our framework is instantiated using bilinear-map cryptosystems, the resulting protocol is also (reasonably) efficient. Somewhat surprisingly, our framework can be adapted to give protocols (still in the standard model) that are universally composable, while still using only one (simultaneous) round.

**EPRINT 2010: Cryptography Resilient to Continual Memory Leakage**

In recent years, there has been a major effort to design cryptographic schemes that remain secure even if part of the secret key is leaked. This is due to a recent proliferation of side channel attacks which, through various physical means, can recover part of the secret key. We explore the possibility of achieving security even with continual leakage, i.e., even if some information is leaked each time the key is used.

We show how to securely update a secret key while information is leaked: we construct schemes that remain secure even if an attacker, *at each time period*, can probe the entire memory (containing a secret key) and "leak" up to a $(1-o(1))$ fraction of the secret key. The attacker may also probe the memory during the updates, and leak $O(\log k)$ bits, where $k$ is the security parameter (relying on subexponential hardness allows $k^\epsilon$ bits of leakage during each update process). All of the above is achieved without restricting the model as is done in previous works (e.g., by assuming that "only computation leaks information" [Micali-Reyzin, TCC04]).

Specifically, under the decisional linear assumption on bilinear groups (which allows for a leakage rate of $(1/2-o(1))$) or the symmetric external Diffie-Hellman assumption (which allows for a leakage rate of $(1-o(1))$), we achieve the above for public-key encryption, identity-based encryption, and signature schemes. Prior to this work, it was not known how to construct public-key encryption schemes even in the more restricted model of [MR].

The main contributions of this work are (1) showing how to securely update a secret key while information is leaked (in the more general model) and (2) giving public-key encryption (and IBE) schemes that are resilient to continual leakage.

**ASIACRYPT 2009**

**EPRINT 2007: A Framework for Efficient and Composable Oblivious Transfer**

We propose a simple, general, and unified framework for constructing oblivious transfer (OT) protocols that are *efficient*, *universally composable*, and *generally realizable* from a variety of standard number-theoretic assumptions, such as the decisional Diffie-Hellman assumption and the Quadratic Residuosity assumption. Most interestingly, we can also instantiate our framework with *worst-case* complexity assumptions relating to *lattices*.

Our OT protocols are round-optimal (one message each way), efficient in the parties' communication and local computation, and use only one reference string for an unbounded number of executions. Furthermore, the protocols can provide *unconditional* security to either the sender or receiver, simply by changing the distribution of the reference string. (For several versions of the protocol, even a common *random* string suffices.)

One of our key technical contributions is a simple and novel abstraction that we call a *dual-mode* cryptosystem. We implement dual-mode cryptosystems by taking a unified view of several cryptosystems in the literature that have what we call "message-lossy" public keys, whose defining property is that a ciphertext produced under such a key carries *no information* (even statistically) about the encrypted message.

As a contribution of independent interest, we also provide a multi-bit version of Regev's lattice-based cryptosystem (STOC 2005) whose time and space efficiency are improved by a linear factor. In particular, the amortized runtime per message bit is only $\tilde{O}(n)$ bit operations, and the ciphertext expansion can be made as small as a constant.

**EPRINT 2007: Trapdoors for Hard Lattices and New Cryptographic Constructions**

We show how to construct a variety of "trapdoor" cryptographic tools assuming the worst-case hardness of standard lattice problems (such as approximating the shortest nonzero vector to within small factors). The applications include trapdoor functions with *preimage sampling*, simple and efficient "hash-and-sign" digital signature schemes, universally composable oblivious transfer, and identity-based encryption.

A core technical component of our constructions is an efficient algorithm that, given a basis of an arbitrary lattice, samples lattice points from a Gaussian-like probability distribution whose standard deviation is essentially the length of the longest vector in the basis. In particular, the crucial security property is that the output distribution of the algorithm is oblivious to the particular geometry of the given basis.
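
The Gaussian sampling at the core of this algorithm can be illustrated in one dimension: rejection-sample an integer with probability proportional to $e^{-z^2/(2\sigma^2)}$. A toy sketch (the paper's algorithm performs such sampling relative to an arbitrary lattice basis, not just over $\mathbb{Z}$):

```python
import math
import random

def discrete_gaussian(sigma, rng, tail=10):
    """Sample z with probability proportional to exp(-z^2 / (2 sigma^2))
    over the integers, by rejection from a bounded range.  Cutting the
    tail at tail*sigma discards only a negligible fraction of the mass."""
    bound = math.ceil(tail * sigma)
    while True:
        z = rng.randint(-bound, bound)
        if rng.random() < math.exp(-z * z / (2 * sigma * sigma)):
            return z

rng = random.Random(0)
samples = [discrete_gaussian(4.0, rng) for _ in range(1000)]
assert all(abs(z) <= 40 for z in samples)
```

The security-critical property in the paper is that the output distribution depends only on the lattice and the Gaussian width, not on which basis was used to sample.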

#### Program Committees

- TCC 2018
- Eurocrypt 2018
- TCC 2016
- Crypto 2014
- TCC 2014
- PKC 2013
- Asiacrypt 2013
- TCC 2012
- Eurocrypt 2012
- Crypto 2012
- Asiacrypt 2010
- TCC 2010
- Crypto 2010

#### Coauthors

- Shweta Agrawal (4)
- Adi Akavia (1)
- Prabhanjan Ananth (2)
- Gilad Asharov (1)
- Nir Bitansky (7)
- Dan Boneh (2)
- Xavier Boyen (1)
- Zvika Brakerski (11)
- Ran Canetti (3)
- Nishanth Chandran (1)
- Melissa Chase (2)
- Hao Chen (1)
- Yilei Chen (3)
- Aloni Cohen (3)
- Ronald Cramer (2)
- Dana Dachman-Soled (1)
- Akshay Degwekar (2)
- Yevgeniy Dodis (4)
- Cynthia Dwork (1)
- Sebastian Faust (1)
- David Freeman (1)
- Craig Gentry (8)
- Shafi Goldwasser (8)
- Sergey Gorbunov (7)
- S. Dov Gordon (1)
- Robbert de Haan (1)
- Shai Halevi (7)
- Goichiro Hanaoka (1)
- Minki Hhan (1)
- Dennis Hofheinz (1)
- Susan Hohenberger (2)
- Justin Holmgren (1)
- Hideki Imai (1)
- Yuval Ishai (1)
- Abhishek Jain (2)
- Yael Tauman Kalai (4)
- Jonathan Katz (7)
- Eike Kiltz (1)
- Daniel Kraschewski (1)
- Huijia Lin (1)
- Tianren Liu (5)
- Alex Lombardi (4)
- Adriana López-Alt (1)
- Vadim Lyubashevsky (1)
- Moni Naor (1)
- Valeria Nikolaenko (2)
- Rafail Ostrovsky (1)
- Omkant Pandey (1)
- Omer Paneth (2)
- Bryan Parno (1)
- Rafael Pass (4)
- Chris Peikert (5)
- Raluca A. Popa (1)
- Tal Rabin (1)
- Srinivasan Raghuraman (1)
- Mariana Raykova (1)
- Leonid Reyzin (1)
- Silas Richelson (1)
- Guy N. Rothblum (4)
- Gil Segev (4)
- Abhi Shelat (5)
- Stefano Tessaro (1)
- Eran Tromer (2)
- Rotem Tsabary (1)
- Marten van Dijk (1)
- Prashant Nalini Vasudevan (3)
- Dhinakaran Vinayagamurthy (2)
- Panagiotis Voulgaris (1)
- Thuy Duong Vuong (1)
- Brent Waters (4)
- Hoeteck Wee (12)
- Daniel Wichs (6)
- Nickolai Zeldovich (1)