## CryptoDB

### Amit Sahai

#### Affiliation: UCLA

#### Publications


**Cryptographic Sensing** (CRYPTO 2019)

Is it possible to measure a physical object in a way that makes the measurement signals unintelligible to an external observer? Alternatively, can one learn a natural concept by using a contrived training set that makes the labeled examples useless without the line of thought that has led to their choice? We initiate a study of “cryptographic sensing” problems of this type, presenting definitions, positive and negative results, and directions for further research.

**Sum-of-Squares Meets Program Obfuscation, Revisited** (EUROCRYPT 2019)

We develop attacks on the security of variants of pseudo-random generators computed by quadratic polynomials. In particular, we give a general condition for breaking the one-way property of mappings where every output is a quadratic polynomial (over the reals) of the input. As a corollary, we break the degree-2 candidates for security assumptions recently proposed for constructing indistinguishability obfuscation by Ananth, Jain and Sahai (ePrint 2018) and Agrawal (ePrint 2018). We present conjectures that would imply our attacks extend to a wider variety of instances, and in particular offer experimental evidence that they break the assumption of Lin and Matt (ePrint 2018). Our algorithms use semidefinite programming, and in particular, results on low-rank recovery (Recht, Fazel, Parrilo 2007) and matrix completion (Gross 2009).

**How to Leverage Hardness of Constant-Degree Expanding Polynomials over $\mathbb{R}$ to Build $i\mathcal{O}$** (EUROCRYPT 2019)

In this work, we introduce and construct D-restricted Functional Encryption (FE) for any constant $D \ge 3$, based only on the SXDH assumption over bilinear groups. This generalizes the notion of 3-restricted FE recently introduced and constructed by Ananth et al. (ePrint 2018) in the generic bilinear group model. A $D=(d+2)$-restricted FE scheme is a secret-key FE scheme that allows an encryptor to efficiently encrypt a message of the form $M=(\mathbf{x},\mathbf{y},\mathbf{z})$. Here, $\mathbf{x}\in \mathbb{F}_{p}^{d\times n}$ and $\mathbf{y},\mathbf{z}\in \mathbb{F}_{p}^n$. Function keys can be issued for a function $f=\Sigma_{\mathbf{I}=(i_1,\ldots,i_d,j,k)}\ c_{\mathbf{I}}\cdot \mathbf{x}[1,i_1] \cdots \mathbf{x}[d,i_d] \cdot \mathbf{y}[j]\cdot \mathbf{z}[k]$, where the coefficients $c_{\mathbf{I}}\in \mathbb{F}_{p}$. Knowing the function key and the ciphertext, one can learn $f(\mathbf{x},\mathbf{y},\mathbf{z})$ if this value is bounded in absolute value by some polynomial in the security parameter and n. The security requirement is that the ciphertext hides $\mathbf{y}$ and $\mathbf{z}$, although it is not required to hide $\mathbf{x}$. Thus $\mathbf{x}$ can be seen as a public attribute. D-restricted FE allows for useful evaluation of constant-degree polynomials, while only requiring the SXDH assumption over bilinear groups. As such, it is a powerful tool for leveraging hardness that exists in constant-degree expanding families of polynomials over $\mathbb{R}$. In particular, we build upon the work of Ananth et al. to show how to build indistinguishability obfuscation ($i\mathcal{O}$) assuming only SXDH over bilinear groups, LWE, and assumptions relating to weak pseudorandom properties of constant-degree expanding polynomials over $\mathbb{R}$.

**Simultaneous Amplification: The Case of Non-interactive Zero-Knowledge** (CRYPTO 2019)

In this work, we explore the question of simultaneous privacy and soundness amplification for non-interactive zero-knowledge argument systems (NIZK). We show that any $\delta_s$-sound and $\delta_z$-zero-knowledge NIZK candidate satisfying $\delta_s+\delta_z=1-\epsilon$, for any constant $\epsilon>0$, can be turned into a computationally sound and zero-knowledge candidate with the only extra assumption of a subexponentially secure public-key encryption. We develop novel techniques to leverage the leakage simulation lemma (Jetchev and Pietrzak, TCC 2014) to argue amplification. A crucial component of our result is a new notion of secret sharing $\mathsf{NP}$ instances. We believe that this may be of independent interest. To achieve this result we analyze the following two transformations:

- Parallel repetition: We show that using parallel repetition, any $\delta_s$-sound and $\delta_z$-zero-knowledge NIZK candidate can be turned into a (roughly) $\delta_s^n$-sound and $1-(1-\delta_z)^n$-zero-knowledge candidate. Here n is the repetition parameter.
- MPC-based repetition: We propose a new transformation that amplifies zero-knowledge in the same way that parallel repetition amplifies soundness. We show that using this, any $\delta_s$-sound and $\delta_z$-zero-knowledge NIZK candidate can be turned into a (roughly) $1-(1-\delta_s)^n$-sound and $2\cdot\delta_z^n$-zero-knowledge candidate.

We then show that using these transformations in a zig-zag fashion we can obtain our result. Finally, we also present a simple transformation which directly turns any NIZK candidate satisfying $\delta_s,\delta_z<1/3-1/\mathsf{poly}(\lambda)$ into a secure one.
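As a quick numeric illustration of the two transformations above, the following evaluates the error formulas quoted in the abstract. This is only a sketch of the tradeoff, not the paper's construction; the paper's zig-zag argument chooses the repetition parameters far more carefully.

```python
# Error formulas quoted in the abstract, evaluated numerically.

def parallel_repetition(ds, dz, n):
    """Soundness error drops to ds^n; ZK error grows to 1-(1-dz)^n."""
    return ds ** n, 1 - (1 - dz) ** n

def mpc_repetition(ds, dz, n):
    """Soundness error grows to 1-(1-ds)^n; ZK error drops to 2*dz^n."""
    return 1 - (1 - ds) ** n, 2 * dz ** n

# A candidate that is already very zero-knowledge but barely sound:
ds, dz = parallel_repetition(0.5, 0.01, n=10)
print(ds, dz)   # soundness error ~0.001, ZK error ~0.096

# The mirror image: very sound but barely zero-knowledge:
ds, dz = mpc_repetition(0.01, 0.5, n=10)
print(ds, dz)   # soundness error ~0.096, ZK error ~0.002
```

Each transformation drives one error toward zero exponentially fast while degrading the other only mildly, which is what makes alternating them productive.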

**Indistinguishability Obfuscation Without Multilinear Maps: New Paradigms via Low Degree Weak Pseudorandomness and Security Amplification** (CRYPTO 2019)

The existence of secure indistinguishability obfuscators ($i\mathcal{O}$) has far-reaching implications, significantly expanding the scope of problems amenable to cryptographic study. All known approaches to constructing $i\mathcal{O}$ rely on d-linear maps. While secure bilinear maps are well established in cryptographic literature, the security of candidates for $d>2$ is poorly understood. We propose a new approach to constructing $i\mathcal{O}$ for general circuits. Unlike all previously known realizations of $i\mathcal{O}$, we avoid the use of d-linear maps of degree $d \ge 3$. At the heart of our approach is the assumption that a new weak pseudorandom object exists. We consider two related variants of these objects, which we call perturbation resilient generator ($\Delta$RG) and pseudo flawed-smudging generator ($\mathrm{PFG}$), respectively. At a high level, both objects are polynomially expanding functions whose outputs partially hide (or smudge) small noise vectors when added to them. We further require that they are computable by a family of degree-3 polynomials over $\mathbb{Z}$. We show how they can be used to construct functional encryption schemes with weak security guarantees. Finally, we use novel amplification techniques to obtain full security. As a result, we obtain $i\mathcal{O}$ for general circuits assuming:

- subexponentially secure LWE,
- bilinear maps,
- $\mathrm{poly}(\lambda)$-secure 3-block-local PRGs, and
- $\Delta$RGs or $\mathrm{PFG}$s.

**From FE Combiners to Secure MPC and Back** (TCC 2019)

Cryptographic combiners allow one to combine many candidates for a cryptographic primitive, possibly based on different computational assumptions, into another candidate with the guarantee that the resulting candidate is secure as long as at least one of the original candidates is secure. While the original motivation of cryptographic combiners was to reduce trust in existing candidates, in this work we study a rather surprising implication of combiners for constructing secure multiparty computation protocols. Specifically, we initiate the study of functional encryption combiners and show their connection to secure multiparty computation.

Functional encryption (FE) has incredible applications towards computing on encrypted data. However, constructing the most general form of this primitive has remained elusive. Although some candidate constructions exist, they rely on nonstandard assumptions, and thus their security has been questioned. An FE combiner attempts to make use of these candidates while minimizing the trust placed on any individual FE candidate. Informally, an FE combiner takes in a set of FE candidates and outputs a secure FE scheme if at least one of the candidates is secure.

Another fundamental area in cryptography is secure multi-party computation (MPC), which has been extensively studied for several decades. In this work, we initiate a formal study of the relationship between functional encryption combiners and secure multi-party computation. In particular, we show implications in both directions between these primitives. As a consequence of these implications, we obtain the following main results.

- A two-round semi-honest MPC protocol in the plain model secure against up to $n-1$ corruptions, with communication complexity proportional only to the depth of the circuit being computed, assuming learning with errors (LWE). Prior two-round protocols based on standard assumptions that achieved this communication complexity required trust assumptions, namely a common reference string.
- A functional encryption combiner based on pseudorandom generators (PRGs) in $\mathsf{NC}^1$. This is a weak assumption, as such PRGs are implied by many concrete intractability problems commonly used in cryptography, such as ones related to factoring, discrete logarithm, and lattice problems [11]. Previous constructions of FE combiners, implicit in [7], were known only from LWE. Using this result, we build a universal construction of functional encryption: an explicit construction of functional encryption based only on the assumptions that functional encryption exists and that PRGs in $\mathsf{NC}^1$ exist.

**Output Compression, MPC, and iO for Turing Machines** (ASIACRYPT 2019)

In this work, we study the fascinating notion of output-compressing randomized encodings for Turing Machines, in a shared randomness model. In this model, the encoder and decoder have access to a shared random string, and the efficiency requirement is that the size of the encoding must be independent of the running time and output length of the Turing Machine on the given input, while the length of the shared random string is allowed to grow with the length of the output. We show how to construct output-compressing randomized encodings for Turing machines in the shared randomness model, assuming iO for circuits and any assumption in the set $\{$LWE, DDH, $N^{th}$ Residuosity$\}$. We then show interesting implications of the above result to basic feasibility questions in the areas of secure multiparty computation (MPC) and indistinguishability obfuscation (iO):

1. **Compact MPC for Turing Machines in the Random Oracle Model.** In the context of MPC, we consider the following basic feasibility question: does there exist a malicious-secure MPC protocol for Turing Machines whose communication complexity is independent of the running time and output length of the Turing Machine when executed on the combined inputs of all parties? We call such a protocol a compact MPC protocol. Hubáček and Wichs [HW15] showed, via an incompressibility argument, that even for the restricted setting of circuits it is impossible to construct a malicious-secure two-party computation protocol in the plain model where the communication complexity is independent of the output length. In this work, we show how to evade this impossibility by compiling any (non-compact) MPC protocol in the plain model to a compact MPC protocol for Turing Machines in the Random Oracle Model, assuming output-compressing randomized encodings in the shared randomness model.
2. **Succinct iO for Turing Machines in the Shared Randomness Model.** In all existing constructions of iO for Turing Machines, the size of the obfuscated program grows with a bound on the input length. In this work, we show how to construct an iO scheme for Turing Machines in the shared randomness model where the size of the obfuscated program is independent of a bound on the input length, assuming iO for circuits and any assumption in the set $\{$LWE, DDH, $N^{th}$ Residuosity$\}$.

**Private Circuits: A Modular Approach** (CRYPTO 2018)

We consider the problem of protecting general computations against constant-rate random leakage. That is, the computation is performed by a randomized boolean circuit that maps a randomly encoded input to a randomly encoded output, such that even if the value of every wire is independently leaked with some constant probability $p>0$, the leakage reveals essentially nothing about the input. In this work we provide a conceptually simple, modular approach for solving the above problem, providing a simpler and self-contained alternative to previous constructions of Ajtai (STOC 2011) and Andrychowicz et al. (Eurocrypt 2016). We also obtain several extensions and generalizations of this result. In particular, we show that for every leakage probability $p<1$, there is a finite basis $\mathbb{B}$ such that leakage-resilient computation with leakage probability p can be realized using circuits over the basis $\mathbb{B}$. We obtain similar positive results for the stronger notion of leakage tolerance, where the input is not encoded, but the leakage from the entire computation can be simulated given random $p'$-leakage of input values alone, for any $p<p'<1$. Finally, we complement this by a negative result, showing that for every basis $\mathbb{B}$ there is some leakage probability $p<1$ such that for any $p'<1$, leakage tolerance as above cannot be achieved in general. We show that our modular approach is also useful for protecting computations against worst-case leakage. In this model, we require that leakage of any $t$ (adversarially chosen) wires reveal nothing about the input. By combining our construction with a previous derandomization technique of Ishai et al. (ICALP 2013), we show that security in this setting can be achieved with $O(t^{1+\varepsilon})$ random bits, for every constant $\varepsilon>0$. This (near-optimal) bound significantly improves upon previous constructions that required more than $t^3$ random bits.
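The random-probing model above (every wire leaks independently with probability $p$) can be illustrated with a toy Monte-Carlo experiment. This is an illustrative sketch of the leakage model only, not the paper's construction: if a bit is encoded as the XOR of $k$ random shares, the leakage pins down the bit only when all $k$ shares leak, which happens with probability $p^k$.

```python
import random

def full_leak_rate(k, p, trials=100_000, seed=0):
    """Estimate the probability that p-random leakage reveals all k
    shares of an XOR encoding (the only event that fixes the bit)."""
    rng = random.Random(seed)
    hits = sum(all(rng.random() < p for _ in range(k))
               for _ in range(trials))
    return hits / trials

# With leakage probability p = 0.25 and k = 8 shares, all shares
# leak with probability 0.25**8, roughly 1.5e-5.
print(full_leak_rate(8, 0.25))
```

Increasing the number of shares drives the revealing event's probability down exponentially, which is the intuition behind tolerating a constant leakage rate.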

**Promise Zero Knowledge and Its Applications to Round Optimal MPC** (CRYPTO 2018)

We devise a new partitioned simulation technique for MPC where the simulator uses different strategies for simulating the view of aborting adversaries and non-aborting adversaries. The protagonist of this technique is a new notion of promise zero knowledge (ZK) where the ZK property only holds against non-aborting verifiers. We show how to realize promise ZK in three rounds in the simultaneous-message model assuming polynomially hard DDH (or QR or $N^{th}$-Residuosity). We demonstrate the following applications of our new technique:

- We construct the first round-optimal (i.e., four-round) MPC protocol for general functions based on polynomially hard DDH (or QR or $N^{th}$-Residuosity).
- We further show how to overcome the four-round barrier for MPC by constructing a three-round protocol for "list coin-tossing", a slight relaxation of coin-tossing that suffices for most conceivable applications, based on polynomially hard DDH (or QR or $N^{th}$-Residuosity). This result generalizes to randomized input-less functionalities.

Previously, four-round MPC protocols required sub-exponential-time hardness assumptions, and no multi-party three-round protocols were known for any relaxed security notions with polynomial-time simulation against malicious adversaries. In order to base security on polynomial-time standard assumptions, we also rely upon a leveled rewinding security technique that can be viewed as a polynomial-time alternative to leveled complexity leveraging for achieving "non-malleability" across different primitives.

**Threshold Cryptosystems from Threshold Fully Homomorphic Encryption** (CRYPTO 2018)

We develop a general approach to adding a threshold functionality to a large class of (non-threshold) cryptographic schemes. A threshold functionality enables a secret key to be split into a number of shares, so that only a threshold of parties can use the key, without reconstructing the key. We begin by constructing a threshold fully-homomorphic encryption scheme (ThFHE) from the learning with errors (LWE) problem. We next introduce a new concept, called a universal thresholdizer, from which many threshold systems are possible. We show how to construct a universal thresholdizer from our ThFHE. A universal thresholdizer can be used to add threshold functionality to many systems, such as CCA-secure public-key encryption (PKE), signature schemes, pseudorandom functions, and other primitives. In particular, by applying this paradigm to a (non-threshold) lattice signature system, we obtain the first single-round threshold signature scheme from LWE.
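The idea of splitting a secret key into shares so that only a threshold of parties can use it is classically realized by Shamir secret sharing. The sketch below is standard Shamir over a prime field, shown purely for the share/reconstruct mechanics; it is not the paper's threshold-FHE construction, which additionally lets parties compute under the shared key.

```python
import random

P = 2 ** 61 - 1  # a Mersenne prime; the field the shares live in

def share(secret, t, n, rng=random):
    """Split `secret` into n shares; any t of them reconstruct it."""
    # Random degree-(t-1) polynomial with constant term `secret`.
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(12345, t=3, n=5)
assert reconstruct(shares[:3]) == 12345   # any 3 shares suffice
assert reconstruct(shares[2:]) == 12345
```

Fewer than $t$ shares reveal nothing about the secret information-theoretically, which is the property the threshold schemes above build on.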

**Upgrading to Functional Encryption** (TCC 2018)

The notion of Functional Encryption (FE) has recently emerged as a strong primitive with several exciting applications. In this work, we initiate the study of the following question: can existing public-key encryption schemes be “upgraded” to Functional Encryption schemes without changing their public keys or the encryption algorithm? We call a public-key encryption scheme with this property FE-compatible. Indeed, assuming ideal obfuscation, it is easy to see that every CCA-secure public-key encryption scheme is FE-compatible. Despite the recent success in using indistinguishability obfuscation to replace ideal obfuscation for many applications, we show that this phenomenon most likely will not apply here. We show that, assuming fully homomorphic encryption and the learning with errors (LWE) assumption, there exists a CCA-secure encryption scheme that is provably not FE-compatible. We also show that a large class of natural CCA-secure encryption schemes proven secure in the random oracle model are not FE-compatible in the random oracle model.

Nevertheless, we identify a key structure that, if present, is sufficient to provide FE-compatibility. Specifically, we show that, assuming sub-exponentially secure iO and sub-exponentially secure one-way functions, there exists a class of public-key encryption schemes, which we call Special-CCA secure encryption schemes, that are in fact FE-compatible. In particular, each of the following popular CCA-secure encryption schemes (some of which existed even before the notion of FE was introduced) falls into the class of Special-CCA secure encryption schemes and is thus FE-compatible:

1. [CHK04] when instantiated with the IBE scheme of [BB04].
2. [CHK04] when instantiated with any Hierarchical IBE scheme.
3. [PW08] when instantiated with any Lossy Trapdoor Function.

**Exploring Crypto Dark Matter:** (TCC 2018)

Pseudorandom functions (PRFs) are one of the fundamental building blocks in cryptography. Traditionally, there have been two main approaches for PRF design: the “practitioner’s approach” of building concretely-efficient constructions based on known heuristics and prior experience, and the “theoretician’s approach” of proposing constructions and reducing their security to a previously-studied hardness assumption. While both approaches have their merits, the resulting PRF candidates vary greatly in terms of concrete efficiency and design complexity.

In this work, we depart from these traditional approaches by exploring a new space of plausible PRF candidates. Our guiding principle is to maximize simplicity while optimizing complexity measures that are relevant to cryptographic applications. Our primary focus is on weak PRFs computable by very simple circuits, specifically depth-2 $\mathsf{ACC}^0$ circuits. Concretely, our main weak PRF candidate is a “piecewise-linear” function that first applies a secret mod-2 linear mapping to the input, and then a public mod-3 linear mapping to the result. We also put forward a similar depth-3 strong PRF candidate.

The advantage of our approach is twofold. On the theoretical side, the simplicity of our candidates enables us to draw many natural connections between their hardness and questions in complexity theory or learning theory (e.g., learnability of $\mathsf{ACC}^0$ and width-3 branching programs, interpolation and property testing for sparse polynomials, and new natural proof barriers for showing super-linear circuit lower bounds). On the applied side, the piecewise-linear structure of our candidates lends itself nicely to applications in secure multiparty computation (MPC). Using our PRF candidates, we construct protocols for distributed PRF evaluation that achieve better round complexity and/or communication complexity (often both) compared to protocols obtained by combining standard MPC protocols with PRFs like AES, LowMC, or Rasta (the latter two are specialized MPC-friendly PRFs).

Finally, we introduce a new primitive we call an encoded-input PRF, which can be viewed as an interpolation between weak PRFs and standard (strong) PRFs. As we demonstrate, an encoded-input PRF can often be used as a drop-in replacement for a strong PRF, combining the efficiency benefits of weak PRFs and the security benefits of strong PRFs. We conclude by showing that our main weak PRF candidate can plausibly be boosted to an encoded-input PRF by leveraging standard error-correcting codes.
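The main weak PRF candidate described above (a secret mod-2 linear map followed by a public mod-3 linear map) fits in a few lines of code. In this sketch the public mod-3 map is taken to be the coordinate sum, one natural instantiation; treat it as an illustration of the shape of the candidate, not a vetted implementation.

```python
import random

def keygen(n, m, rng=random):
    """Secret key: an m-by-n binary matrix A (the secret mod-2 map)."""
    return [[rng.randrange(2) for _ in range(n)] for _ in range(m)]

def weak_prf(A, x):
    """F_A(x): apply the secret mod-2 linear map A to x, then a public
    mod-3 linear map (here: sum of the 0/1 output bits, mod 3)."""
    y = [sum(a * b for a, b in zip(row, x)) % 2 for row in A]  # A·x mod 2
    return sum(y) % 3                                          # public map mod 3

rng = random.Random(0)
A = keygen(n=16, m=16, rng=rng)
x = [rng.randrange(2) for _ in range(16)]
print(weak_prf(A, x))  # a value in {0, 1, 2}
```

The mixing of incompatible moduli (2 and 3) is exactly what keeps the function out of reach of simple linear-algebraic attacks, per the connections to $\mathsf{ACC}^0$ learnability discussed above.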

EUROCRYPT 2017

EUROCRYPT 2017

CRYPTO 2017

ASIACRYPT 2017

CRYPTO 2016

CRYPTO 2016

JOFC 2015

ePrint 2015

TCC 2015

EUROCRYPT 2015

CRYPTO 2013

CRYPTO 2013

CRYPTO 2012

EUROCRYPT 2010

**Interactive Locking, Zero-Knowledge PCPs, and Unconditional Cryptography** (ePrint 2010)

Motivated by the question of basing cryptographic protocols on stateless tamper-proof hardware tokens, we revisit the question of unconditional two-prover zero-knowledge proofs for $NP$. We show that such protocols exist in the *interactive PCP* model of Kalai and Raz (ICALP '08), where one of the provers is replaced by a PCP oracle. This strengthens the feasibility result of Ben-Or, Goldwasser, Kilian, and Wigderson (STOC '88), which requires two stateful provers. In contrast to previous zero-knowledge PCPs of Kilian, Petrank, and Tardos (STOC '97), in our protocol both the prover and the PCP oracle are efficient given an $NP$ witness.

Our main technical tool is a new primitive that we call *interactive locking*, an efficient realization of an unconditionally secure commitment scheme in the interactive PCP model. We implement interactive locking by adapting previous constructions of *interactive hashing* protocols to our setting, and also provide a direct construction which uses a minimal amount of interaction and improves over our interactive-hashing-based constructions.

Finally, we apply the above results towards showing the feasibility of basing unconditional cryptography on *stateless* tamper-proof hardware tokens, and obtain the following results:

- We show that if tokens can be used to encapsulate other tokens, then there exist unconditional and statistically secure (in fact, UC-secure) protocols for general secure computation.
- Even if token encapsulation is not possible, there are unconditional and statistically secure commitment protocols and zero-knowledge proofs for $NP$.
- Finally, if token encapsulation is not possible, then no protocol can realize statistically secure oblivious transfer.

**Fully Secure Functional Encryption: Attribute-Based Encryption and (Hierarchical) Inner Product Encryption** (ePrint 2010)

In this paper, we present two fully secure functional encryption schemes. Our first result is a fully secure attribute-based encryption (ABE) scheme. Previous constructions of ABE were only proven to be selectively secure. We achieve full security by adapting the dual system encryption methodology recently introduced by Waters and previously leveraged to obtain fully secure IBE and HIBE systems. The primary challenge in applying dual system encryption to ABE is the richer structure of keys and ciphertexts. In an IBE or HIBE system, keys and ciphertexts are both associated with the same type of simple object: identities. In an ABE system, keys and ciphertexts are associated with more complex objects: attributes and access formulas. We use a novel information-theoretic argument to adapt the dual system encryption methodology to the more complicated structure of ABE systems. We construct our system in composite order bilinear groups, where the order is a product of three primes. We prove the security of our system from three static assumptions. Our ABE scheme supports arbitrary monotone access formulas.
Our second result is a fully secure (attribute-hiding) predicate encryption (PE) scheme for inner-product predicates. As for ABE, previous constructions of such schemes were only proven to be selectively secure. Security is proven under a non-interactive assumption whose size does not depend on the number of queries. The scheme is comparably efficient to existing selectively secure schemes. We also present a fully secure hierarchical PE scheme under the same assumption. The key technique used to obtain these results is an elaborate combination of the dual system encryption methodology (adapted to the structure of inner product PE systems) and a new approach on bilinear pairings using the notion of dual pairing vector spaces (DPVS) proposed by Okamoto and Takashima.

**Founding Cryptography on Tamper-Proof Hardware Tokens** (ePrint 2010)

A number of works have investigated using tamper-proof hardware tokens as tools to achieve a variety of cryptographic tasks. In particular, Goldreich and Ostrovsky considered the goal of software protection via oblivious RAM. Goldwasser, Kalai, and Rothblum introduced the concept of *one-time programs*: in a one-time program, an honest sender sends a set of *simple* hardware tokens to a (potentially malicious) receiver. The hardware tokens allow the receiver to execute a secret program specified by the sender's tokens exactly once (or, more generally, up to a fixed $t$ times). A recent line of work initiated by Katz examined the problem of achieving UC-secure computation using hardware tokens. Motivated by the goal of unifying and strengthening these previous notions, we consider the general question of basing secure computation on hardware tokens. We show that the following tasks, which cannot be realized in the "plain" model, become feasible if the parties are allowed to generate and exchange tamper-proof hardware tokens.

- Unconditional non-interactive secure computation: We show that by exchanging simple stateful hardware tokens, any functionality can be realized with unconditional security against malicious parties. In the case of two-party functionalities $f(x,y)$ which take their inputs from a sender and a receiver and deliver their output to the receiver, our protocol is non-interactive and only requires a unidirectional communication of simple stateful tokens from the sender to the receiver. This strengthens previous feasibility results for one-time programs both by providing unconditional security and by offering general protection against malicious senders. As is typically the case for unconditionally secure protocols, our protocol is in fact UC-secure. This improves over previous works on UC-secure computation based on hardware tokens, which provided computational security under cryptographic assumptions.
- Interactive secure computation from stateless tokens based on one-way functions: We show that stateless hardware tokens are sufficient to base general secure (in fact, UC-secure) computation on the existence of one-way functions. One cannot hope for security against unbounded adversaries with stateless tokens, since an unbounded adversary could query the token multiple times to "learn" the functionality it contains.
- Non-interactive secure computation from stateless tokens: We consider the problem of designing non-interactive secure computation from stateless tokens for stateless oblivious reactive functionalities, i.e., reactive functionalities which allow unlimited queries from the receiver (these are the only functionalities one can hope to realize non-interactively with stateless tokens). By building on recent techniques from resettably secure computation, we give a general positive result for stateless oblivious reactive functionalities under standard cryptographic assumptions. This result generalizes the notion of (unlimited-use) obfuscation by providing security against a malicious sender, and also provides the first general feasibility result for program obfuscation using stateless tokens.

CRYPTO 2009

EUROCRYPT 2008

**Revocation Systems with Very Small Private Keys** (ePrint 2008)

In this work, we design a new public-key broadcast encryption system, and we focus on a critical parameter of device key size: the amount of cryptographic key material that must be stored securely on the receiving devices. Our new scheme has ciphertext size overhead $O(r)$, where $r$ is the number of revoked users, and the size of public and private keys is only a constant number of group elements from an elliptic-curve group of prime order. All previous work, even in the restricted case of systems based on symmetric keys, required at least $\lg(n)$ keys stored on each device. In addition, we show that our techniques can be used to realize Attribute-Based Encryption (ABE) systems with non-monotonic access formulas, where key storage is significantly more efficient than in previous solutions. Our results are in the standard model under a new, but non-interactive, assumption.

**Secure Arithmetic Computation with No Honest Majority** (ePrint 2008)

We study the complexity of securely evaluating arithmetic circuits over finite rings. This question is motivated by natural secure computation tasks. Focusing mainly on the case of *two-party* protocols with security against *malicious* parties, our main goals are to: (1) only make black-box calls to the ring operations and standard cryptographic primitives, and (2) minimize the number of such black-box calls as well as the communication overhead.

We present several solutions which differ in their efficiency, generality, and underlying intractability assumptions. These include:

- An *unconditionally secure* protocol in the OT-hybrid model which makes black-box use of an arbitrary ring $R$, but where the number of ring operations grows linearly with (an upper bound on) $\log|R|$.
- Computationally secure protocols in the OT-hybrid model which make black-box use of an underlying ring, and in which the number of ring operations does not grow with the ring size. The protocols rely on variants of previous intractability assumptions related to linear codes. In the most efficient instance of these protocols, applied to a suitable class of fields, the (amortized) communication cost is a constant number of field elements per multiplication gate, and the computational cost is dominated by $O(\log k)$ field operations per gate, where $k$ is a security parameter. These results extend a previous approach of Naor and Pinkas for secure polynomial evaluation (*SIAM J. Comput.*, 35(5), 2006).
- A protocol for the rings $\mathbb{Z}_m=\mathbb{Z}/m\mathbb{Z}$ which only makes black-box use of a homomorphic encryption scheme. When $m$ is prime, the (amortized) number of calls to the encryption scheme for each gate of the circuit is constant.

All of our protocols are in fact *UC-secure* in the OT-hybrid model and can be generalized to *multiparty* computation with an arbitrary number of malicious parties.

2007

EPRINT

Private Locally Decodable Codes
Abstract

We consider the problem of constructing efficient locally decodable
codes in the presence of a computationally bounded adversary.
Assuming the existence of one-way functions, we construct {\em
efficient} locally decodable codes with positive information rate
and \emph{low} (almost optimal) query complexity which can correctly
decode any given bit of the message from constant channel error rate
$\rho$. This compares favorably to the current state of knowledge for
locally decodable codes without cryptographic assumptions. For all
our constructions, against any polynomial-time adversary, the
probability that the decoding algorithm incorrectly decodes any bit
of the message is negligible in the security parameter.

2007

EPRINT

Attribute-Based Encryption with Non-Monotonic Access Structures
Abstract

We construct an Attribute-Based Encryption (ABE) scheme
that allows a user's private key to be expressed in terms
of any access formula over attributes. Previous ABE
schemes were limited to expressing only monotonic access structures. We provide a proof of security for our scheme based on the
Decisional Bilinear Diffie-Hellman (BDH) assumption.
Furthermore, the performance of our new scheme compares
favorably with existing, less-expressive schemes.

2007

EPRINT

Efficient Non-interactive Proof Systems for Bilinear Groups
Abstract

Non-interactive zero-knowledge proofs and non-interactive witness-indistinguishable proofs have played a significant role in the theory of cryptography. However, lack of efficiency has prevented them from being used in practice. One of the roots of this inefficiency is that non-interactive zero-knowledge proofs have been constructed for general NP-complete languages such as Circuit Satisfiability, causing an expensive blowup in the size of the statement when reducing it to a circuit. The contribution of this paper is a general methodology for constructing very simple and efficient non-interactive zero-knowledge proofs and non-interactive witness-indistinguishable proofs that work directly for groups with a bilinear map, without needing a reduction to Circuit Satisfiability.
Groups with bilinear maps have enjoyed tremendous success in the field of cryptography in recent years and have been used to construct a plethora of protocols. This paper provides non-interactive witness-indistinguishable proofs and non-interactive zero-knowledge proofs that can be used in connection with these protocols. Our goal is to spread the use of non-interactive cryptographic proofs from mainly theoretical purposes to the large class of practical cryptographic protocols based on bilinear groups.

2007

EPRINT

New Constructions for UC Secure Computation using Tamper-proof Hardware
Abstract

The Universal Composability framework was introduced by Canetti to study the security of protocols which are concurrently executed with other protocols in a network environment. Unfortunately it was shown that in the so called plain model, a large class of functionalities cannot be securely realized. These severe impossibility results motivated the study of other models involving some sort of setup assumptions, where general positive results can be obtained. Until recently, all the setup assumptions which were proposed required some trusted third party (or parties).
Katz recently proposed using a \emph{physical setup} to avoid such trusted setup assumptions. In his model, the physical setup phase includes the parties exchanging tamper proof hardware tokens implementing some functionality. The tamper proof hardware is modeled so as to assume that the receiver of the token can do nothing more than observe its input/output characteristics. It is further assumed that the sender \emph{knows} the program code of the hardware token which it distributed. Based on the DDH assumption, Katz gave general positive results for universally composable multi-party computation tolerating any number of dishonest parties making this model quite attractive.
In this paper, we present new constructions for UC secure computation using tamper proof hardware (in a stronger model). Our results represent an improvement over the results of Katz in several directions using substantially different techniques. Interestingly, our security proofs do not rely on being able to rewind the hardware tokens created by malicious parties. This means that we are able to relax the assumptions that the parties \emph{know} the code of the hardware token which they distributed. This allows us to model real life attacks where, for example, a party may simply pass on the token obtained from one party to the other without actually knowing its functionality. Furthermore, our construction models the interaction with the tamper-resistant hardware as a simple request-reply protocol. Thus, we show that the hardware tokens used in our construction can be \emph{resettable}. In fact, it suffices to use token which are completely stateless (and thus cannot execute a multi-round protocol). Our protocol is also based on general assumptions (namely enhanced trapdoor permutations).

2007

EPRINT

Predicate Encryption Supporting Disjunctions, Polynomial Equations, and Inner Products
Abstract

Predicate encryption is a new paradigm generalizing, among other
things, identity-based encryption. In a predicate encryption scheme,
secret keys correspond to predicates and ciphertexts are associated
with attributes; the secret key SK_f corresponding to the predicate
f can be used to decrypt a ciphertext associated with attribute I if and only if f(I)=1. Constructions of such schemes are currently known for relatively few classes of predicates.
We construct such a scheme for predicates corresponding to the evaluation of inner products over N (for some large integer N).
This, in turn, enables constructions in which predicates correspond to the evaluation of disjunctions, polynomials, CNF/DNF formulae, or threshold predicates (among others). Besides serving as what we feel is a significant step forward in the theory of predicate encryption,
our results lead to a number of applications that are interesting in their own right.
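The reduction from richer predicates to inner products can be sketched concretely: a predicate becomes "inner product equals zero mod N". The encoding below (our illustration, with illustrative names and a stand-in modulus; no encryption is performed) shows how a polynomial predicate, and hence a disjunction, becomes an inner-product test.

```python
# Toy illustration of encoding predicates as inner products (no
# encryption here): the predicate "f(I) = 1" becomes "<v_f, v_I> = 0
# (mod N)". The modulus below is a small stand-in for the scheme's N.
N = 10**9 + 7

def attr_vector(x, d):
    """Encode attribute x as its powers (1, x, x^2, ..., x^d) mod N."""
    return [pow(x, i, N) for i in range(d + 1)]

def poly_key(coeffs):
    """Key vector for the predicate 'p(x) = 0', p given by coefficients."""
    return [c % N for c in coeffs]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v)) % N

# Disjunction "x = 3 OR x = 7" as the polynomial (x-3)(x-7) = x^2 - 10x + 21.
key = poly_key([21, -10, 1])
assert inner(key, attr_vector(3, 2)) == 0   # predicate satisfied
assert inner(key, attr_vector(7, 2)) == 0   # predicate satisfied
assert inner(key, attr_vector(5, 2)) != 0   # predicate not satisfied
```

An equality predicate "x = a" is the degree-1 special case with coefficients (-a, 1), and CNF/DNF formulae compose such polynomial encodings.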

2007

EPRINT

Precise Concurrent Zero Knowledge
Abstract

\emph{Precise zero knowledge} introduced by Micali and Pass
(STOC'06) guarantees that the view of any verifier $V$ can be
simulated in time closely related to the \emph{actual} (as opposed
to worst-case) time spent by $V$ in the generated view. We provide
the first constructions of precise concurrent zero-knowledge
protocols. Our constructions have essentially optimal precision;
consequently this improves also upon the previously tightest
non-precise concurrent zero-knowledge protocols by Kilian and
Petrank (STOC'01) and Prabhakaran, Rosen and Sahai (FOCS'02) whose
simulators have a quadratic worst-case overhead.
Additionally, we achieve a statistically-precise concurrent
zero-knowledge property---which requires simulation of unbounded verifiers participating in an unbounded number of concurrent executions;
as such we obtain the first (even non-precise)
concurrent zero-knowledge protocols which handle verifiers
participating in a super-polynomial number of concurrent executions.

2006

EPRINT

Fully Collusion Resistant Traitor Tracing
Abstract

We construct the first fully collusion resistant tracing traitors
system with sublinear size ciphertexts and constant size private keys.
More precisely, let $N$ be the total number of users. Our system
generates ciphertexts of size $O(\sqrt{N})$ and private keys of size
$O(1)$. We build our system by first building a simpler primitive
called private linear broadcast encryption (PLBE). We then show
that any PLBE gives a tracing traitors system with the same
parameters. Our system uses bilinear maps in groups of composite
order.

2006

EPRINT

Sequential Aggregate Signatures and Multisignatures without Random Oracles
Abstract

We present the first aggregate signature, the first multisignature,
and the first verifiably encrypted signature provably secure without
random oracles. Our constructions derive from a novel application
of a recent signature scheme due to Waters. Signatures in our
aggregate signature scheme are sequentially constructed, but
knowledge of the order in which messages were signed is not necessary
for verification. The aggregate signatures obtained are shorter than
the sequential aggregates of Lysyanskaya et~al. and can be verified
more efficiently than the aggregates of Boneh et~al. We also consider
applications to secure routing and proxy signatures.

2006

EPRINT

Cryptography from Anonymity
Abstract

There is a vast body of work on {\em implementing} anonymous
communication. In this paper, we study the possibility of using
anonymous communication as a {\em building block}, and show that
one can leverage on anonymity in a variety of cryptographic
contexts. Our results go in two directions.
\begin{itemize}
\item{\bf Feasibility.} We show that anonymous communication
over {\em insecure} channels can be used to implement
unconditionally secure point-to-point channels, and hence
general multi-party protocols with unconditional security in the
presence of an honest majority. In contrast, anonymity cannot be
generally used to obtain unconditional security when there is no
honest majority.
\item{\bf Efficiency.} We show that anonymous channels can yield
substantial efficiency improvements for several natural secure
computation tasks. In particular, we present the first solution
to the problem of private information retrieval (PIR) which can
handle multiple users while being close to optimal with respect
to {\em both} communication and computation. A key observation
that underlies these results is that {\em local randomization}
of inputs, via secret-sharing, when combined with the {\em
global mixing} of the shares, provided by anonymity, allows one to
carry out useful computations on the inputs while keeping the
inputs private.
\end{itemize}
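The "local randomization plus global mixing" observation can be sketched in a few lines (our toy illustration, with illustrative names): each party additively shares its input, and the anonymous channel delivers all shares in shuffled order, hiding which share came from whom while preserving the sum.

```python
import random

# Toy illustration of "local randomization + global mixing": each party
# additively shares its input mod M; the anonymous channel shuffles the
# shares, hiding their origin, yet the sum of the shares still equals
# the sum of the inputs.
M = 2**32

def shares_of(x, k=3):
    r = [random.randrange(M) for _ in range(k - 1)]
    return r + [(x - sum(r)) % M]

inputs = [10, 20, 12]
mixed = [s for x in inputs for s in shares_of(x)]
random.shuffle(mixed)                     # the anonymity "mixing"
assert sum(mixed) % M == sum(inputs) % M  # the sum survives the mixing
```

Since every share short of a full set is uniformly distributed, the shuffled multiset reveals nothing about any individual input beyond the computed sum.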

2006

EPRINT

Non-Malleable Encryption: Equivalence between Two Notions, and an Indistinguishability-based Characterization
Abstract

We prove the equivalence of two definitions of non-malleable
encryption, one based on the simulation approach of Dolev, Dwork and Naor and the other based on the comparison approach of Bellare,
Desai, Pointcheval and Rogaway.
Our definitions are slightly stronger than the original ones.
The equivalence relies on a new
characterization of non-malleable encryption in terms of the standard
notion of indistinguishability of Goldwasser and Micali. We show that
non-malleability is equivalent to indistinguishability under a
``parallel chosen ciphertext attack,'' this being a new kind of chosen
ciphertext attack we introduce, in which the adversary's decryption
queries are not allowed to depend on answers to previous queries, but
must be made all at once. This characterization simplifies both the
notion of non-malleable encryption and its usage, and enables one to
see more easily how it compares with other notions of encryption. The
results here apply to non-malleable encryption under any form of
attack, whether chosen-plaintext, chosen-ciphertext, or adaptive
chosen-ciphertext.
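The "parallel chosen ciphertext attack" interface can be made concrete with a schematic sketch (ours, not from the paper; the class name and the toy XOR "encryption" are illustrative assumptions): the adversary gets exactly one batch of decryption queries, so no query can depend on the answer to another.

```python
# Schematic sketch of the parallel-CCA interface: all decryption
# queries are submitted in one batch; the challenge ciphertext itself
# is never decrypted. This is an interface demo, not a security proof.
class ParallelCCAOracle:
    def __init__(self, decrypt, challenge_ct):
        self.decrypt = decrypt
        self.challenge_ct = challenge_ct
        self.used = False

    def query(self, ciphertexts):
        assert not self.used, "parallel attack: only one batch allowed"
        self.used = True
        return [None if c == self.challenge_ct else self.decrypt(c)
                for c in ciphertexts]

# Toy "encryption": XOR with a secret byte (for the interface demo only).
key = 0x5A
oracle = ParallelCCAOracle(lambda c: c ^ key, challenge_ct=0x13)
answers = oracle.query([0x00, 0x13, 0xFF])
assert answers == [0x5A, None, 0xA5]
```

A second call to `query` raises, which is exactly the restriction distinguishing the parallel attack from full adaptive CCA.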

2006

EPRINT

Attribute-Based Encryption for Fine-Grained Access Control of Encrypted Data
Abstract

As more sensitive data is shared and stored by third-party sites on
the Internet, there will be a need to encrypt data stored at these
sites. One drawback of encrypting data is that it can be
selectively shared only at a coarse-grained level (i.e., giving
another party your private key). We develop a new cryptosystem for
fine-grained sharing of encrypted data that we call Key-Policy
Attribute-Based Encryption (KP-ABE). In our cryptosystem, ciphertexts
are labeled with sets of attributes and private keys are associated
with access structures that control which ciphertexts a user is able
to decrypt. We demonstrate the applicability of our construction to
sharing of audit-log information and broadcast encryption. Our
construction supports delegation of private keys which subsumes
Hierarchical Identity-Based Encryption (HIBE).
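The KP-ABE access model itself (setting the cryptography aside) can be sketched in a few lines; the formula representation below is our own illustrative choice, not the paper's.

```python
# Toy illustration of the KP-ABE access model (no cryptography): a key
# carries an access structure, here a monotone formula over attributes,
# and a ciphertext labeled with an attribute set can be decrypted iff
# the formula is satisfied by that set.
def satisfies(policy, attrs):
    """policy: an attribute string, or a tuple ('and'|'or', subpolicy, ...)."""
    if isinstance(policy, str):
        return policy in attrs
    op, *subs = policy
    results = (satisfies(p, attrs) for p in subs)
    return all(results) if op == 'and' else any(results)

key_policy = ('and', 'audit-log', ('or', 'date:2006', 'date:2005'))
assert satisfies(key_policy, {'audit-log', 'date:2006'})
assert not satisfies(key_policy, {'audit-log', 'date:2004'})
```

Monotonicity is visible here: adding attributes to a ciphertext's label can only enlarge the set of keys that decrypt it.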

2006

EPRINT

Concurrent Non-Malleable Zero Knowledge
Abstract

We provide the first construction of a
concurrent and non-malleable zero knowledge argument for every
language in NP. We stress that our construction is in the plain
model with no common random string, trusted parties, or
super-polynomial simulation. That is, we construct a zero knowledge
protocol $\Pi$ such that for every polynomial-time adversary that
can adaptively and concurrently schedule polynomially many
executions of $\Pi$, and corrupt some of the verifiers and some of
the provers in these sessions, there is a polynomial-time simulator
that can simulate a transcript of the entire execution, along with
the witnesses for all statements proven by a corrupt prover to an
honest verifier.
Our security model is the traditional model for concurrent zero
knowledge, where the statements to be proven by the honest provers
are fixed in advance and do not depend on the previous history (but
can be correlated with each other); corrupted provers, of course,
can choose the statements adaptively. We also prove that there exists
some functionality F (a combination of zero knowledge and
oblivious transfer) such that it is impossible to obtain a
concurrent non-malleable protocol for F in this model.
Previous impossibility results for composable protocols ruled out
existence of protocols for a wider class of functionalities
(including zero knowledge!) but only if these protocols were
required to remain secure when executed concurrently with
arbitrarily chosen different protocols (Lindell, FOCS 2003) or if
these protocols were required to remain secure when the honest
parties' inputs in each execution are chosen adaptively based on the
results of previous executions (Lindell, TCC 2004).
We obtain an $\Tilde{O}(n)$-round protocol under the assumption that
one-to-one one-way functions exist. This can be improved to
$\Tilde{O}(k\log n)$ rounds under the assumption that there exist
$k$-round statistically hiding commitment schemes. Our protocol is a
black-box zero knowledge protocol.

2006

EPRINT

Concurrent Statistical Zero-Knowledge Arguments for NP from One Way Functions
Abstract

In this paper we show a general transformation from any honest
verifier statistical zero-knowledge argument to a concurrent
statistical zero-knowledge argument. Our transformation relies only on the existence of one-way functions. It is known that the existence of zero-knowledge systems for any non-trivial language implies one way functions. Hence our transformation \emph{unconditionally} shows that concurrent statistical zero-knowledge arguments for a non-trivial language exist if and only if standalone secure statistical zero-knowledge arguments for that language exist.
Further, applying our transformation to the recent statistical zero-knowledge argument system of Nguyen et al (STOC'06) yields the first concurrent statistical zero-knowledge argument system for all languages in \textbf{NP} from any one way function.

2005

EPRINT

How To Play Almost Any Mental Game Over The Net --- Concurrent Composition via Super-Polynomial Simulation
Abstract

We construct a secure protocol for any multi-party functionality
that remains secure (under a relaxed definition of security) when
executed concurrently with multiple copies of itself and other
protocols. We stress that we do *not* use any assumptions on
existence of trusted parties, common reference string, honest
majority or synchronicity of the network. The relaxation of
security, introduced by Prabhakaran and Sahai (STOC '04), is
obtained by allowing the ideal-model simulator to run in
*quasi-polynomial* (as opposed to polynomial) time.
Quasi-polynomial simulation suffices to ensure security for most
applications of multi-party computation. Furthermore, Lindell (FOCS
'03, TCC' 04) recently showed that such a protocol is
*impossible* to obtain under the more standard definition of
*polynomial-time* simulation by an ideal adversary.
Our construction is the first such protocol under reasonably
standard cryptographic assumptions. That is, existence of a hash
function collection that is collision-resistant with respect to
circuits of subexponential size, and existence of trapdoor
permutations that are secure with respect to circuits of
quasi-polynomial size.
We introduce a new technique: ``protocol condensing''. That is,
taking a protocol that has strong security properties but requires
*super-polynomial* communication and computation, and then
transforming it into a protocol with *polynomial* communication
and computation, that still inherits the strong security properties
of the original protocol. Our result is obtained by combining this
technique with previous techniques of Canetti, Lindell, Ostrovsky,
and Sahai (STOC '02) and Pass (STOC '04).

2005

EPRINT

Concurrent Zero Knowledge without Complexity Assumptions
Abstract

We provide unconditional constructions of concurrent statistical zero-knowledge proofs for a variety of non-trivial problems (not known to have probabilistic polynomial-time algorithms). The problems include Graph Isomorphism, Graph Nonisomorphism, Quadratic Residuosity, Quadratic Nonresiduosity, a restricted version of Statistical Difference, and approximate versions of the (coNP forms of the) Shortest Vector Problem and Closest Vector Problem in lattices.
For some of the problems, such as Graph Isomorphism and Quadratic Residuosity, the proof systems have provers that can be implemented in polynomial time (given an NP witness) and have \tilde{O}(log n) rounds, which is known to be essentially optimal for black-box simulation.
To the best of our knowledge, these are the first constructions of concurrent zero-knowledge protocols in the asynchronous model (without timing assumptions) that do not require complexity assumptions (such as the existence of one-way functions).

2005

EPRINT

Perfect Non-Interactive Zero Knowledge for NP
Abstract

Non-interactive zero-knowledge (NIZK) systems are fundamental cryptographic primitives used in many constructions, including CCA2-secure cryptosystems, digital signatures, and various cryptographic protocols. What makes them especially attractive, is that they work equally well in a concurrent setting, which is notoriously hard for interactive zero-knowledge protocols. However, while for interactive zero-knowledge we know how to construct statistical zero-knowledge argument systems for all NP languages, for non-interactive zero-knowledge, this problem remained open since the inception of NIZK in the late 1980's. Here we resolve two problems regarding NIZK:
- we construct the first perfect NIZK argument system for any NP language.
- we construct the first UC-secure NIZK protocols for any NP language in the presence of a dynamic/adaptive adversary.
While it was already known how to construct efficient prover computational NIZK proofs for any NP language, the known techniques yield large common reference strings and large NIZK proofs. As an additional implication of our techniques, we considerably reduce both the size of the common reference string and the size of the proofs.

2004

EPRINT

Positive Results and Techniques for Obfuscation
Abstract

Informally, an obfuscator $\Obf$ is an efficient, probabilistic ``compiler''
that transforms a program $P$ into a new program $\Obf(P)$ with the same
functionality as $P$, but such that $\Obf(P)$ protects any secrets that may
be built into and used by $P$. Program obfuscation, if possible, would have
numerous important cryptographic applications, including: (1) ``Intellectual
property'' protection of secret algorithms and keys in software, (2) Solving
the long-standing open problem of homomorphic public-key encryption, (3)
Controlled delegation of authority and access, (4) Transforming Private-Key
Encryption into Public-Key Encryption, and (5) Access Control Systems.
Unfortunately however, program obfuscators that work on arbitrary programs
cannot exist [Barak et al]. No positive results for program obfuscation
were known prior to this work.
In this paper, we provide the first positive results in program obfuscation.
We focus on the goal of access control, and give several provable
obfuscations for complex access control functionalities, in the random
oracle model. Our results are obtained through non-trivial compositions of
obfuscations; we note that general composition of obfuscations is
impossible, and so developing techniques for composing obfuscations is an
important goal. Our work can also be seen as making initial progress toward
the goal of obfuscating finite automata or regular expressions, an important
general class of machines which are not ruled out by the impossibility
results of Barak et al. We also note that our work provides the first
formal proof techniques for obfuscation, which we expect to be useful in
future work in this area.

2004

EPRINT

Fuzzy Identity Based Encryption
Abstract

We introduce a new type of Identity-Based Encryption
(IBE) scheme that we call Fuzzy Identity-Based Encryption.
In Fuzzy IBE we view an identity as set of descriptive attributes.
A Fuzzy IBE scheme allows for a private key for an identity, $\omega$, to decrypt a ciphertext encrypted with an identity, $\omega'$, if
and only if the identities $\omega$ and $\omega'$ are close to each
other as measured by the ``set overlap'' distance metric.
A Fuzzy IBE scheme can be applied to enable encryption
using biometric inputs as identities; the error-tolerance property
of a Fuzzy IBE scheme is precisely what allows for the use of
biometric identities, which inherently will have some noise each
time they are sampled. Additionally, we show that Fuzzy-IBE can
be used for a type of application that we term ``attribute-based encryption''.
In this paper we present two constructions of Fuzzy IBE schemes. Our
constructions can be viewed as an Identity-Based Encryption of a
message under several attributes that compose a (fuzzy) identity.
Our IBE schemes are both error-tolerant and secure against collusion
attacks. Additionally, our basic construction does not use random
oracles. We prove the security of our schemes under the Selective-ID
security model.
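The error-tolerant decryption condition at the heart of Fuzzy IBE can be stated in one line of code (our toy illustration with made-up biometric feature names; no cryptography is performed): decryption succeeds iff the set overlap meets a threshold $d$.

```python
# Toy illustration of Fuzzy IBE's decryption condition (no
# cryptography): a key for identity w decrypts a ciphertext encrypted
# under identity w' iff the "set overlap" |w ∩ w'| is at least d.
def can_decrypt(key_identity, ct_identity, d):
    return len(set(key_identity) & set(ct_identity)) >= d

enrolled = {'ridge3', 'ridge7', 'whorl2', 'arch1', 'loop5'}
scanned  = {'ridge3', 'ridge7', 'whorl2', 'arch1', 'loop9'}  # noisy rescan
assert can_decrypt(enrolled, scanned, d=4)      # tolerates one noisy feature
assert not can_decrypt(enrolled, scanned, d=5)  # threshold too strict here
```

This overlap threshold is exactly what accommodates the sampling noise of biometric identities: a fresh scan need not reproduce the enrolled attribute set exactly, only closely.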

2004

EPRINT

New Notions of Security: Achieving Universal Composability without Trusted Setup
Abstract

We propose a modification to the framework of Universally Composable (UC) security [Canetti'01]. Our new notion involves comparing the protocol executions with an ideal execution involving ideal functionalities (just as in UC-security), but allows the environment and adversary access to some super-polynomial computational power. We argue the meaningfulness of the new notion, which in particular subsumes many of the traditional notions of security.
We generalize the Universal Composition theorem of [Canetti'01] to the new setting. Then under new computational assumptions, we realize secure multi-party computation (for static adversaries) without a common reference string or any other set-up assumptions, in the new framework. This is known to be impossible under the UC framework.

2002

EPRINT

A Unified Methodology For Constructing Public-Key Encryption Schemes Secure Against Adaptive Chosen-Ciphertext Attack
Abstract

We introduce a new methodology for achieving security against
adaptive chosen-ciphertext attack (CCA) for
public-key encryption schemes, which we call
the {\em oblivious decryptors model}. The oblivious decryptors model
generalizes both the two-key model of Naor and Yung,
as well the Cramer--Shoup encryption schemes.
The key ingredient in our new paradigm is Sahai's notion of
Simulation-Sound NIZK proofs.
Our methodology is easy to use: First, construct an
encryption scheme which satisfies the ``bare'' oblivious-decryptors
model: This can be done quite easily, with simple proofs
of security. Then, by adding a Simulation-Sound NIZK proof,
the scheme becomes provably CCA-secure. Note that this paradigm
allows for the use of {\em efficient} special-purpose Simulation-Sound
NIZK proofs, such as those recently put forward by Cramer and Shoup.
We also show how to present all known
efficient (provably secure) CCA-secure public-key encryption schemes
as special cases of our model.

2002

EPRINT

Concurrent Zero Knowledge Proofs with Logarithmic Round-Complexity
Abstract

We consider the problem of constructing Concurrent Zero Knowledge
Proofs, in which the fascinating and useful ``zero
knowledge'' property is guaranteed even in situations where
multiple concurrent proof sessions are executed with many
colluding dishonest verifiers. Canetti et al.
show that black-box concurrent zero knowledge proofs for
non-trivial languages require $\tilde\Omega(\log k)$ rounds where
$k$ is the security parameter. Till now the best known upper
bound on the number of rounds for NP languages was $\omega(\log^2
k)$, due to Kilian, Petrank and Richardson. We establish an
upper bound of $\omega(\log k)$ on the number of rounds for NP
languages, thereby closing the gap between the upper and lower
bounds, up to a $\omega(\log\log k)$ factor.

2002

EPRINT

Universally Composable Two-Party and Multi-Party Secure Computation
Abstract

We show how to securely realize any two-party and multi-party functionality in a {\em universally composable} way, regardless of the number of corrupted participants. That is, we consider an asynchronous multi-party network with open communication and an
adversary that can adaptively corrupt as many parties as it wishes. In this setting, our protocols allow any subset of the parties (with pairs of parties being a special case) to securely realize any desired functionality of their local inputs, and be guaranteed that security is preserved regardless of the activity in the rest of the network. This implies that security is preserved under concurrent composition of an unbounded number of protocol executions, it implies non-malleability with respect to arbitrary protocols, and more. Our constructions are in the common reference string model and rely on standard intractability assumptions.

2001

EPRINT

On the (Im)possibility of Obfuscating Programs
Abstract

Informally, an {\em obfuscator} $O$ is an (efficient,
probabilistic) ``compiler'' that takes as input a program (or
circuit) $P$ and produces a new program $O(P)$ that has the
same functionality as $P$ yet is ``unintelligible'' in some sense.
Obfuscators, if they exist, would have a wide variety of
cryptographic and complexity-theoretic applications, ranging from
software protection to homomorphic encryption to
complexity-theoretic analogues of Rice's theorem. Most of these
applications are based on an interpretation of the
``unintelligibility'' condition in obfuscation as meaning that
$O(P)$ is a ``virtual black box,'' in the sense that anything
one can efficiently compute given $O(P)$, one could also
efficiently compute given oracle access to $P$.
In this work, we initiate a theoretical investigation of
obfuscation. Our main result is that, even under very
weak formalizations of the above intuition, obfuscation
is impossible. We prove this by constructing a family of
functions $F$ that are {\em inherently
unobfuscatable} in the following sense: there is a
property $\pi : F \rightarrow \{0,1\}$ such that (a) given {\em
any program} that computes a function $f\in F$, the
value $\pi(f)$ can be efficiently computed, yet (b) given
{\em oracle access} to a (randomly selected) function
$f\in F$, no efficient algorithm can compute $\pi(f)$
much better than random guessing.
We extend our impossibility result in a number of ways, including
even obfuscators that (a) are not necessarily computable in
polynomial time, (b) only {\em approximately} preserve the
functionality, and (c) only need to work for very restricted
models of computation ($TC_0$). We also rule out several
potential applications of obfuscators, by constructing
``unobfuscatable'' signature schemes, encryption schemes, and
pseudorandom function families.

2000

EPRINT

A Complete Problem for Statistical Zero Knowledge
Abstract

We present the first complete problem for SZK, the class of
(promise) problems
possessing statistical zero-knowledge proofs (against an
honest verifier).
The problem, called STATISTICAL DIFFERENCE,
is to decide whether two efficiently samplable distributions
are either statistically close or far apart. This
gives a new characterization of
SZK that makes no reference to interaction or zero knowledge.
We propose the use of complete problems to unify and
extend the study of statistical zero knowledge. To this end,
we examine several
consequences of our Completeness Theorem and its proof, such as:
(1) A way to make every (honest-verifier) statistical
zero-knowledge proof very communication efficient,
with the prover sending only one bit
to the verifier (to achieve soundness error 1/2).
(2) Simpler proofs of many of the previously known
results about statistical zero knowledge, such as
the Fortnow and Aiello--Håstad upper bounds on the complexity of SZK
and
Okamoto's result that SZK is closed under complement.
(3) Strong closure properties of SZK which amount to
constructing statistical zero-knowledge proofs for complex assertions
built out of simpler assertions already shown to be in SZK.
(4) New results about the various measures of "knowledge complexity,"
including a collapse in the hierarchy corresponding
to knowledge complexity in the "hint" sense.
(5) Algorithms for manipulating the statistical difference
between efficiently samplable distributions, including transformations
which "polarize" and "reverse" the statistical relationship
between a pair of distributions.
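For intuition, the quantity at the core of STATISTICAL DIFFERENCE is easy to compute when the two distributions are given as explicit probability tables (in the problem itself they are only given as sampling circuits, which is what makes it hard); the function name below is our own.

```python
# Toy illustration of the STATISTICAL DIFFERENCE quantity: the
# statistical (total variation) distance between two distributions,
# here given as explicit probability tables rather than as the
# sampling circuits used in the complete problem.
def statistical_difference(p, q):
    support = set(p) | set(q)
    return sum(abs(p.get(x, 0) - q.get(x, 0)) for x in support) / 2

uniform2 = {0: 0.5, 1: 0.5}
biased   = {0: 0.9, 1: 0.1}
assert statistical_difference(uniform2, uniform2) == 0.0
assert abs(statistical_difference(uniform2, biased) - 0.4) < 1e-12
```

The completeness result says deciding whether this distance is close to 0 or close to 1, for distributions specified by efficient samplers, captures all of SZK.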

1999

CRYPTO

1999

EPRINT

Non-Malleable Encryption: Equivalence between Two Notions, and an Indistinguishability-Based Characterization
Abstract

We prove the equivalence of two definitions of non-malleable
encryption appearing in the literature--- the original one of Dolev, Dwork
and Naor and the later one of Bellare, Desai, Pointcheval and Rogaway. The
equivalence relies on a new characterization of non-malleable encryption in
terms of the standard notion of indistinguishability of Goldwasser and
Micali. We show that non-malleability is equivalent to indistinguishability
under a ``parallel chosen ciphertext attack,'' this being a new kind of
chosen ciphertext attack we introduce, in which the adversary's decryption
queries are not allowed to depend on answers to previous queries, but must be
made all at once. This characterization simplifies both the notion of
non-malleable encryption and its usage, and enables one to see more easily
how it compares with other notions of encryption. The results here apply to
non-malleable encryption under any form of attack, whether chosen-plaintext,
chosen-ciphertext, or adaptive chosen-ciphertext.

1999

EPRINT

Concurrent Zero-Knowledge
Abstract

One of the toughest challenges in designing
cryptographic protocols is to design them so that they will remain
secure even when composed. For example, concurrent executions of a
zero-knowledge protocol by a single prover (with one or more
verifiers) may leak information and may not be zero-knowledge in
toto. In this work we:
(1) Suggest time as a mechanism to design concurrent cryptographic
protocols and in particular maintaining zero-knowledge under
concurrent execution.
(2) Introduce the notion of Deniable Authentication
and connect it to the problem of concurrent zero-knowledge.
We do not assume global synchronization; however, we assume an
(alpha,beta) timing constraint: for any two processors $P_1$
and $P_2$, if $P_1$ measures alpha elapsed time on its local
clock and $P_2$ measures beta elapsed time on its local clock, and
$P_2$ starts after $P_1$ does, then $P_2$ will finish after
$P_1$ does. We show that for an adversary controlling all the
processors clocks (as well as their communication channels) but
which is constrained by an (alpha,beta) constraint
there exist four-round almost concurrent zero-knowledge interactive proofs
and perfect concurrent zero-knowledge arguments for every language in NP.
We also address the more specific problem of Deniable Authentication,
for which we propose several particularly efficient solutions.
Deniable Authentication is of independent interest, even in the
sequential case; our concurrent solutions yield sequential
solutions, without recourse to timing, i.e., in the standard model.
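The (alpha,beta) timing constraint stated above can be made concrete with a small check over start and finish times (the function, names, and numbers below are hypothetical illustrations, not part of the paper):

```python
# Toy illustration of the (alpha, beta) timing constraint: whenever P1
# measures at least alpha elapsed time on its local clock, P2 measures at
# least beta on its local clock, and P2 starts after P1, the constraint
# requires that P2 also finishes after P1.

def satisfies_timing_constraint(p1_start, p1_end, p2_start, p2_end,
                                alpha, beta):
    """Check one pair of executions against the (alpha, beta) constraint."""
    if p1_end - p1_start < alpha or p2_end - p2_start < beta:
        return True  # premise not met; the constraint imposes nothing
    if p2_start <= p1_start:
        return True  # P2 did not start after P1
    return p2_end > p1_end  # P2 must finish after P1

# An adversarial schedule that violates the constraint: P2 starts after P1
# but is "sped up" enough to finish first.
violating = satisfies_timing_constraint(0, 10, 2, 8, alpha=10, beta=6)

# A schedule the constraint permits: P2 starts after P1 and finishes after.
permitted = satisfies_timing_constraint(0, 10, 2, 13, alpha=10, beta=6)
```

The adversary in the result above may control all local clocks, but only within schedules for which this check passes.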

1998

EPRINT

Many-to-one Trapdoor Functions and their Relation to Public-key Cryptosystems
Abstract

The heart of the task of building public key cryptosystems is viewed
as that of "making trapdoors;" in fact, public key cryptosystems and
trapdoor functions are often discussed as synonymous. How accurate is
this view? In this paper we endeavor to get a better understanding of
the nature of ``trapdoorness'' and its relation to public key
cryptosystems, by broadening the scope of the investigation: we look
at general trapdoor functions; that is, functions that are not
necessarily injective (i.e., one-to-one). Our first result is somewhat
surprising: we show that non-injective trapdoor functions (with
super-polynomial pre-image size) can be constructed from any one-way
function (and hence it is unlikely that they suffice for public key
encryption). On the other hand, we show that trapdoor functions with
polynomial pre-image size are sufficient for public key encryption.
Together, these two results indicate that the pre-image size is a
fundamental parameter of trapdoor functions. We then turn our
attention to the converse, asking what kinds of trapdoor functions can
be constructed from public key cryptosystems. We take a first step by
showing that in the random-oracle model one can construct injective
trapdoor functions from any public key cryptosystem.
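The dividing line in the results above is the pre-image size of the function. A toy sketch of what that parameter measures (the functions below are hypothetical examples for counting pre-images, not the paper's constructions):

```python
# Toy illustration of "pre-image size": an injective function has exactly
# one pre-image per image, while a many-to-one function can have many.
# The abstract's key distinction is whether the pre-image size is
# polynomial or super-polynomial in the input length n.

def injective_f(x, n):
    """One-to-one on n-bit inputs: each output has exactly 1 pre-image."""
    return x  # identity, purely for illustration

def many_to_one_g(x, n):
    """Drops the low n//2 bits: each output has 2**(n//2) pre-images,
    a quantity super-polynomial in n."""
    return x >> (n // 2)

def preimage_count(f, y, n):
    """Brute-force count of the n-bit pre-images of y under f."""
    return sum(1 for x in range(2 ** n) if f(x, n) == y)

n = 8
count_f = preimage_count(injective_f, 5, n)    # exactly 1
count_g = preimage_count(many_to_one_g, 5, n)  # 2**(n//2) = 16
```

In these terms, the results above say that functions like `many_to_one_g` (huge pre-image sets) are trapdoor-able from any one-way function, whereas keeping the pre-image count polynomially bounded is what makes a trapdoor function strong enough to yield public key encryption.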

#### Program Committees

- Eurocrypt 2019
- TCC 2017
- Crypto 2014
- TCC 2013
- TCC 2012
- Eurocrypt 2010
- Crypto 2008
- Crypto 2007
- TCC 2005
- Asiacrypt 2005
- Eurocrypt 2001

#### Coauthors

- Shweta Agrawal (3)
- Shashank Agrawal (1)
- Prabhanjan Vijendra Ananth (1)
- Prabhanjan Ananth (12)
- Benny Applebaum (1)
- Saikrishna Badrinarayanan (11)
- Boaz Barak (7)
- Mihir Bellare (5)
- Allison Bishop (3)
- Nir Bitansky (1)
- Dan Boneh (9)
- Elette Boyle (1)
- Zvika Brakerski (1)
- Ran Canetti (3)
- David Cash (1)
- Nishanth Chandran (3)
- Jean-Sébastien Coron (2)
- Giovanni Di Crescenzo (1)
- Yi Deng (1)
- Yevgeniy Dodis (2)
- Cynthia Dwork (2)
- Edith Elkind (1)
- Dengguo Feng (1)
- Rex Fernando (2)
- Sanjam Garg (11)
- Daniel Genkin (1)
- Rosario Gennaro (1)
- Craig Gentry (8)
- Steven Goldfeder (1)
- Oded Goldreich (3)
- Shafi Goldwasser (1)
- S. Dov Gordon (2)
- Vipul Goyal (24)
- Jens Groth (6)
- Divya Gupta (5)
- Shai Halevi (9)
- Dennis Hofheinz (1)
- Susan Hohenberger (2)
- Samuel B. Hopkins (1)
- Russell Impagliazzo (2)
- Yuval Ishai (29)
- Tibor Jager (1)
- Abhishek Jain (15)
- Aayush Jain (10)
- Yael Tauman Kalai (6)
- Bhavana Kanukurthi (1)
- Jonathan Katz (3)
- Dakshita Khurana (10)
- Sam Kim (1)
- Ilan Komargodski (1)
- Venkata Koppula (2)
- Pravesh Kothari (1)
- Daniel Kraschewski (3)
- Ravi Kumar (1)
- Abishek Kumarasubramanian (2)
- Eyal Kushilevitz (7)
- Tancrède Lepoint (3)
- Kevin Lewi (1)
- Dongdai Lin (1)
- Huijia Lin (3)
- Yehuda Lindell (1)
- Feng-Hao Liu (1)
- Steve Lu (2)
- Benjamin Lynn (1)
- Ben Lynn (1)
- Mohammad Mahmoody (3)
- Hemanta K. Maji (8)
- Nathan Manohar (1)
- Christian Matt (2)
- Daniele Micciancio (2)
- Eric Miles (6)
- Ilya Mironov (2)
- Tal Moran (1)
- Ryan Moriarty (2)
- Pratyay Mukherjee (1)
- Moni Naor (2)
- Tatsuaki Okamoto (2)
- Shien Jin Ong (2)
- Claudio Orlandi (3)
- Rafail Ostrovsky (19)
- Omkant Pandey (8)
- Omer Paneth (3)
- Rafael Pass (3)
- Alain Passelègue (1)
- Chris Peikert (2)
- Giuseppe Persiano (1)
- Manoj Prabhakaran (20)
- Sridhar Rajagopalan (1)
- Vanishree Rao (4)
- Peter M. R. Rasmussen (2)
- Mariana Raykova (3)
- Steven Rudich (2)
- Alfredo De Santis (1)
- Dominique Schröder (1)
- Hakan Seyalioglu (2)
- Hovav Shacham (2)
- Elaine Shi (1)
- Adam Smith (2)
- Akshayaram Srinivasan (1)
- Katsuyuki Takashima (2)
- Mehdi Tibouchi (3)
- Eran Tromer (1)
- Wei-Lung Dustin Tseng (2)
- Dominique Unruh (1)
- Salil P. Vadhan (8)
- Ramarathnam Venkatesan (2)
- Muthuramakrishnan Venkitasubramaniam (2)
- Ivan Visconti (2)
- Akshay Wadia (3)
- David Wagner (2)
- Brent Waters (24)
- David J. Wu (3)
- Jürg Wullschleger (1)
- Ke Yang (2)
- Eylon Yogev (2)
- Ching-Hua Yu (1)
- Moti Yung (1)
- Mark Zhandry (6)
- Hong-Sheng Zhou (1)
- Joe Zimmerman (1)