## CryptoDB

### Oded Goldreich

#### Publications


**2013, JOFC:** Enhancements of Trapdoor Permutations

We take a closer look at several enhancements of the notion of trapdoor permutations. Specifically, we consider the notions of enhanced trapdoor permutation (Goldreich, Foundations of Cryptography: Basic Applications, 2004) and doubly enhanced trapdoor permutation (Goldreich, Computational Complexity: A Conceptual Perspective, 2011) as well as intermediate notions (Rothblum, A Taxonomy of Enhanced Trapdoor Permutations, 2010). These enhancements arose in the study of Oblivious Transfer and NIZK, but they address natural concerns that may arise also in other applications of trapdoor permutations. We clarify why these enhancements are needed in such applications, and show that they actually suffice for these needs.

**2006, EPRINT:** On Probabilistic versus Deterministic Provers in the Definition of Proofs Of Knowledge

This note points out a gap between two natural formulations of
the concept of a proof of knowledge, and shows that in all
natural cases (e.g., NP-statements) this gap can be closed.
The aforementioned formulations differ by whether they refer to
(all possible) probabilistic or deterministic prover strategies.
Unlike in the rest of cryptography, in the current context,
the obvious transformation of probabilistic strategies
to deterministic strategies does not seem to suffice per se.

**2006, EPRINT:** On Post-Modern Cryptography

This essay relates to a recent article of Koblitz & Menezes
(Cryptology ePrint Report 2004/152)
that ``criticizes several typical `provable security' results''
and argues that the ``theorem-proof paradigm of theoretical
mathematics is often of limited relevance'' to cryptography.
Although it feels ridiculous to answer such a claim,
we undertake to do so in this essay.
In particular, we point out some of the fundamental philosophical flaws
that underlie the said article and some of its misconceptions regarding
theoretical research in Cryptography in the last quarter of a century.

**2006, EPRINT:** On Expected Probabilistic Polynomial-Time Adversaries -- A suggestion for restricted definitions and their benefits

This paper concerns the possibility of developing a coherent
theory of security when feasibility is associated
with expected probabilistic polynomial-time (expected PPT).
The source of difficulty is that
the known definitions of expected PPT strategies
(i.e., expected PPT interactive machines)
do not support natural results of the type presented below.
To overcome this difficulty,
we suggest new definitions of expected PPT strategies,
which are more restrictive than the known definitions
(but nevertheless
extend the notion of expected PPT non-interactive algorithms).
We advocate the conceptual adequacy of these definitions,
and point out their technical advantages.
Specifically, identifying a natural subclass of black-box simulators,
called normal, we prove the following two results:
(1) Security proofs that refer to all strict PPT adversaries
(and are proven via normal black-box simulators), extend to
provide security with respect to all adversaries that satisfy
the restricted definitions of expected PPT.
(2) Security composition theorems of the type known for strict PPT
hold for these restricted definitions of expected PPT,
where security means simulation by normal black-box simulators.
Specifically, a normal black-box simulator is required
to make an expected polynomial number of steps,
when given oracle access to any strategy,
where each oracle call is counted as a single step.
This natural property is satisfied by most known simulators
and is easy to verify.

**2004, EPRINT:** The Power of Verification Queries in Message Authentication and Authenticated Encryption

This paper points out that, contrary to popular belief,
allowing a message authentication adversary multiple verification attempts
towards forgery is NOT equivalent to allowing it a single one, so that
the notion of security that most message authentication schemes are proven to
meet does not guarantee their security in practice. We then show, however, that
the equivalence does hold for STRONG unforgeability. Based on this we
recover security of popular classes of message authentication schemes such as
MACs (including HMAC and PRF-based MACs) and CW-schemes. Furthermore, in many
cases we do so with a TIGHT security reduction, so that in the end
the news we bring is surprisingly positive given the initial negative result.
Finally, we show analogous results for authenticated encryption.

**2003, EPRINT:** On the random-oracle methodology as applied to length-restricted signature schemes

In earlier work, we described a ``pathological'' example of a signature scheme that is secure in the random-oracle model, but for which no secure implementation exists. For that example, however, it was crucial that the scheme is able to sign "long messages" (i.e., messages whose
length is not a-priori bounded). This left open the possibility that
the Random Oracle Methodology is sound with respect to signature schemes
that sign only "short" messages (i.e., messages of a-priori bounded length, smaller than the length of the keys in use), and are "memoryless" (i.e., the only thing kept between different signature
generations is the initial signing-key). In this work, we extend our negative result to address such signature schemes. A key ingredient in our proof is a new type of interactive proof systems, which may be of independent interest.

**2002, EPRINT:** The GGM Construction does NOT yield Correlation Intractable Function Ensembles

We consider the function ensembles emerging from the
construction of Goldreich, Goldwasser and Micali (GGM),
when applied to an arbitrary pseudorandom generator.
We show that, in general, such functions
fail to yield correlation intractable ensembles.
Specifically, it may happen that, given a description of such a function,
one can easily find an input that is mapped to zero under this function.
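For orientation, the GGM construction referenced here builds a pseudorandom function from a length-doubling pseudorandom generator by walking a binary tree. A minimal sketch (the names `prg` and `ggm_prf` are ours, and SHA-256 is only a heuristic stand-in for a provable PRG):

```python
import hashlib

def prg(seed: bytes) -> tuple:
    """Length-doubling 'PRG' G(s) = (G_0(s), G_1(s)).
    SHA-256 is a heuristic stand-in, not a proven PRG."""
    return (hashlib.sha256(b"\x00" + seed).digest(),
            hashlib.sha256(b"\x01" + seed).digest())

def ggm_prf(seed: bytes, x: str) -> bytes:
    """GGM evaluation: descend the binary tree rooted at `seed`,
    taking the left or right half of the PRG output per bit of x."""
    v = seed
    for bit in x:
        v = prg(v)[int(bit)]
    return v
```

The abstract's point is that this tree structure, applied to an arbitrary PRG, does not by itself rule out easily findable inputs mapping to a fixed value such as zero.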

**2002, EPRINT:** On Chosen Ciphertext Security of Multiple Encryptions

We consider the security of multiple and possibly related
plaintexts in the context of a chosen ciphertext attack.
That is, in addition to (and concurrently with) obtaining encryptions
of multiple plaintexts under the same key,
the attacker may issue encryption, decryption,
and partial-information queries.
Loosely speaking, an encryption scheme is considered
secure under such attacks if all that the adversary can learn from
such attacks about the selected plaintexts can be obtained from the
corresponding partial information queries.
The above definition extends the definition of semantic security
under chosen ciphertext attacks (CCAs),
which is also formulated in this work.
The extension is in considering the security of multiple plaintexts
rather than the security of a single plaintext. We prove that both these
formulations are equivalent to the standard formulation of CCA,
which refers to indistinguishability of encryptions. The good news
is that any encryption scheme that is secure in the standard CCA
sense is in fact secure in the extended model.
The treatment holds both for public-key and private-key
encryption schemes.

**2002, EPRINT:** Zero-Knowledge twenty years after its invention

Zero-knowledge proofs are proofs that are both convincing and yet
yield nothing beyond the validity of the assertion being proven.
Since their introduction about twenty years ago,
zero-knowledge proofs have attracted a lot of attention
and have, in turn, contributed to the development of other
areas of cryptography and complexity theory.
We survey the main definitions and results regarding
zero-knowledge proofs.
Specifically, we present the basic definitional approach
and its variants, results regarding the power of zero-knowledge proofs
as well as recent results regarding questions such as
the composability of zero-knowledge proofs
and the use of the adversary's program
within the proof of security (i.e., non-black-box simulation).

**2001, EPRINT:** Resettably-Sound Zero-Knowledge and its Applications

Resettably-sound proofs and arguments remain sound even when the
prover can reset the verifier, and so force it to use the same
random coins in repeated executions of the protocol. We show that
resettably-sound zero-knowledge {\em arguments} for NP exist
if collision-resistant hash functions exist. In contrast,
resettably-sound zero-knowledge {\em proofs} are possible only
for languages in P/poly.
We present two applications of resettably-sound zero-knowledge
arguments. First, we construct resettable zero-knowledge arguments
of knowledge for NP, using a natural relaxation of the
definition of arguments (and proofs) of knowledge. We note that,
under the standard definition of proofs of knowledge, it is
impossible to obtain resettable zero-knowledge arguments of
knowledge for languages outside BPP. Second, we construct a
constant-round resettable zero-knowledge argument for NP in the
public-key model, under the assumption that collision-resistant
hash functions exist. This improves upon the sub-exponential
hardness assumption required by previous constructions.
We emphasize that our results use non-black-box zero-knowledge
simulations. Indeed, we show that some of the results are {\em
impossible} to achieve using black-box simulations. In
particular, only languages in BPP have resettably-sound
arguments that are zero-knowledge with respect to black-box
simulation.

**2001, EPRINT:** On the (Im)possibility of Obfuscating Programs

Informally, an {\em obfuscator} $O$ is an (efficient,
probabilistic) ``compiler'' that takes as input a program (or
circuit) $P$ and produces a new program $O(P)$ that has the
same functionality as $P$ yet is ``unintelligible'' in some sense.
Obfuscators, if they exist, would have a wide variety of
cryptographic and complexity-theoretic applications, ranging from
software protection to homomorphic encryption to
complexity-theoretic analogues of Rice's theorem. Most of these
applications are based on an interpretation of the
``unintelligibility'' condition in obfuscation as meaning that
$O(P)$ is a ``virtual black box,'' in the sense that anything
one can efficiently compute given $O(P)$, one could also
efficiently compute given oracle access to $P$.
In this work, we initiate a theoretical investigation of
obfuscation. Our main result is that, even under very
weak formalizations of the above intuition, obfuscation
is impossible. We prove this by constructing a family of
functions $F$ that are {\em inherently
unobfuscatable} in the following sense: there is a
property $\pi : F \rightarrow \{0,1\}$ such that (a) given {\em
any program} that computes a function $f\in F$, the
value $\pi(f)$ can be efficiently computed, yet (b) given
{\em oracle access} to a (randomly selected) function
$f\in F$, no efficient algorithm can compute $\pi(f)$
much better than random guessing.
We extend our impossibility result in a number of ways, including
even obfuscators that (a) are not necessarily computable in
polynomial time, (b) only {\em approximately} preserve the
functionality, and (c) only need to work for very restricted
models of computation ($TC_0$). We also rule out several
potential applications of obfuscators, by constructing
``unobfuscatable'' signature schemes, encryption schemes, and
pseudorandom function families.

**2001, EPRINT:** Concurrent Zero-Knowledge With Timing, Revisited

Following Dwork, Naor, and Sahai (30th STOC, 1998),
we consider concurrent execution of protocols in a
semi-synchronized network. Specifically, we assume that each party
holds a local clock such that a constant bound on the relative rates
of these clocks is a-priori known, and consider protocols that
employ time-driven operations
(i.e., time-out in-coming messages and delay out-going messages).
We show that the constant-round zero-knowledge proof for NP
of Goldreich and Kahan (Jour. of Crypto., 1996)
preserves its security when polynomially-many independent copies
are executed concurrently under the above timing model.
We stress that
our main result establishes zero-knowledge of interactive proofs,
whereas the results of Dwork et al. are
either for zero-knowledge arguments
or for a weak notion of zero-knowledge (called $\epsilon$-knowledge) proofs.
Our analysis identifies two extreme schedulings
of concurrent executions under the above timing model:
the first is the case of parallel execution of polynomially-many copies,
and the second is of concurrent execution of polynomially-many
copies such that the number of copies that are simultaneously active
at any time is bounded by a constant (i.e., bounded simultaneity).
Dealing with each of these extreme cases is of independent interest,
and the general result (regarding concurrent executions under
the timing model) is obtained by combining the two treatments.

**2001, EPRINT:** Universal Arguments and their Applications

We put forward a new type of
computationally-sound proof systems, called universal-arguments,
which are related but different from both CS-proofs (as defined
by Micali) and arguments (as defined by Brassard, Chaum and
Crepeau). In particular, we adopt the instance-based
prover-efficiency paradigm of CS-proofs, but follow the
computational-soundness condition of argument systems (i.e., we
consider only cheating strategies that are implementable by
polynomial-size circuits).
We show that universal-arguments can be constructed based on
standard intractability assumptions that refer to polynomial-size
circuits (rather than assumptions referring to
subexponential-size circuits as used in the construction of
CS-proofs). As an application of universal-arguments, we weaken
the intractability assumptions used in the recent non-black-box
zero-knowledge arguments of Barak. Specifically, we only utilize
intractability assumptions that refer to polynomial-size circuits
(rather than assumptions referring to circuits of some ``nice''
super-polynomial size).

**2000, EPRINT:** On Security Preserving Reductions -- Revised Terminology

Many of the results in Modern Cryptography are actually
transformations of a basic computational phenomenon (i.e., a
basic primitive, tool or assumption) to a more complex
phenomenon (i.e., a higher level primitive or
application). The transformation is explicit and is always
accompanied by an explicit reduction of the violation of the
security of the former phenomenon to the violation of the
latter. A key aspect is the efficiency of the reduction. We
discuss and slightly modify the hierarchy of reductions
originally suggested by Levin.

**2000, EPRINT:** On the Security of Modular Exponentiation with Application to the Construction of Pseudorandom Generators

Assuming the intractability of factoring, we show that
the output of the exponentiation modulo a composite function
$f_{N,g}(x)=g^x\bmod N$ (where $N=P\cdot Q$) is pseudorandom,
even when its input is restricted to be half the size.
This result is equivalent to the simultaneous hardness of the upper
half of the bits of $f_{N,g}$,
proven by Hastad, Schrift and Shamir.
Yet, we supply a different proof that is significantly simpler
than the original one. In addition, we suggest a pseudorandom generator
which is more efficient than all previously known factoring
based pseudorandom generators.
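In this light, the function can be read as a generator mapping a half-length seed to a full-length pseudorandom value. A toy sketch (the name `modexp_prg` is ours, and the parameters below are far too small to mean anything; real use requires a large random $N=P\cdot Q$):

```python
def modexp_prg(N: int, g: int, seed: int) -> int:
    """f_{N,g}(x) = g^x mod N, with the input restricted to half
    the length of N, so the output stretches the seed."""
    k = N.bit_length()
    assert seed < 2 ** (k // 2), "seed must be half the modulus length"
    return pow(g, seed, N)

# Toy example only: a secure instantiation needs a large random N = P*Q.
N = 7 * 11
out = modexp_prg(N, 2, 5)   # seed 5 fits in half of N's 7 bits
```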

**2000, EPRINT:** Session-Key Generation using Human Passwords Only

We present session-key generation protocols in a model where the
legitimate parties share {\em only} a human-memorizable
password, and there is no additional setup assumption in the
network. Our protocol is proven secure under the assumption that
trapdoor permutations exist. The security guarantee holds with
respect to probabilistic polynomial-time adversaries that control
the communication channel (between the parties), and may omit,
insert and modify messages at their choice. Loosely speaking, the
effect of such an adversary that attacks an execution of our
protocol is comparable to an attack in which an adversary is only
allowed to make a constant number of queries of the form ``is $w$
the password of Party $A$''. We stress that the result holds also
in case the passwords are selected at random from a small
dictionary so that it is feasible (for the adversary) to scan the
entire dictionary. We note that prior to our result, it was not
known whether or not such protocols were attainable without the
use of random oracles or additional setup assumptions.

**2000, EPRINT:** Candidate One-Way Functions Based on Expander Graphs

We suggest a candidate one-way function using combinatorial
constructs such as expander graphs. These graphs are used to
determine a sequence of small overlapping subsets of input bits,
to which a hard-wired random predicate is applied.
Thus, the function is extremely easy to evaluate:
all that is needed is to take multiple projections of the
input bits, and to use these as entries to a look-up table.
It is feasible for the adversary to scan the look-up table,
but we believe it would be infeasible to find an input that fits a given
sequence of values obtained for these overlapping projections.
The conjectured difficulty of inverting the suggested function
does not seem to follow from any well-known assumption.
Instead, we propose the study of the complexity of inverting
this function as an interesting open problem,
with the hope that further research will provide evidence to our
belief that the inversion task is intractable.
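The evaluation really is just projections plus a table lookup. A sketch of such a function (all names are ours; random subsets merely stand in for the expander-derived ones the abstract calls for):

```python
import random

def make_candidate_owf(n: int, d: int, seed: int = 0):
    """Fix n overlapping d-element subsets of the input positions and a
    hard-wired random predicate stored as a 2^d-entry lookup table.
    Random subsets here stand in for expander-derived ones."""
    rng = random.Random(seed)
    subsets = [rng.sample(range(n), d) for _ in range(n)]
    table = [rng.randrange(2) for _ in range(2 ** d)]

    def f(x):  # x: list of n bits
        out = []
        for S in subsets:
            idx = 0
            for j in S:               # projection of x onto the subset S
                idx = (idx << 1) | x[j]
            out.append(table[idx])    # apply the hard-wired predicate
        return out
    return f

f = make_candidate_owf(n=16, d=5)
y = f([0, 1] * 8)   # evaluation is trivial; inverting is the conjectured hard part
```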

**1999, CRYPTO**

**1999, EPRINT:** Chinese Remaindering with Errors

The Chinese Remainder Theorem states that a positive integer m is
uniquely specified by its remainder modulo k relatively prime integers
p_1,...,p_k, provided m < \prod_{i=1}^k p_i. Thus the residues of m
modulo relatively prime integers p_1 < p_2 < ... < p_n form a
redundant representation of m if m <= \prod_{i=1}^k p_i and k <
n. This suggests a number-theoretic construction of an
``error-correcting code'' that has been implicitly considered often in
the past. In this paper we provide a new algorithmic tool to go with
this error-correcting code: namely, a polynomial-time algorithm for
error-correction. Specifically, given n residues r_1,...,r_n and an
agreement parameter t, we find a list of all integers m <
\prod_{i=1}^k p_i such that (m mod p_i) = r_i for at least t values of
i in {1,...,n}, provided t = Omega(sqrt{kn (log p_n)/(log p_1)}). We
also give a simpler algorithm to decode from a smaller number of
errors, i.e., when t > n - (n-k)(log p_1)/(log p_1 + \log p_n). In
such a case there is a unique integer which has such agreement with
the sequence of residues.
One consequence of our result is a strengthening of the
relationship between average-case complexity of computing the
permanent and its worst-case complexity. Specifically we show that if
a polynomial time algorithm is able to guess the permanent of a random
n x n matrix on 2n-bit integers modulo a random n-bit prime with
inverse polynomial success rate, then #P=BPP. Previous results of
this nature typically worked over a fixed prime moduli or assumed very
small (though non-negligible) error probability (as opposed to small
but non-negligible success probability).
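The redundancy underlying this code is the Chinese Remainder Theorem itself: any k of the n residues already determine m. A sketch of plain CRT reconstruction (the paper's polynomial-time list decoder for noisy residues is well beyond this snippet):

```python
from math import prod

def crt(residues, moduli):
    """Reconstruct the unique m < prod(moduli) with m % p_i == r_i,
    for pairwise coprime moduli, via the standard CRT formula."""
    M = prod(moduli)
    m = 0
    for r, p in zip(residues, moduli):
        Mi = M // p
        m += r * Mi * pow(Mi, -1, p)   # pow(Mi, -1, p) = Mi^{-1} mod p
    return m % M

# Redundant representation: n = 5 residues of m, of which any k = 3
# suffice as long as m < p_1 * p_2 * p_3.
primes = [5, 7, 11, 13, 17]
m = 123                                 # 123 < 5 * 7 * 11 = 385
residues = [m % p for p in primes]
assert crt(residues[:3], primes[:3]) == m
```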

**1999, EPRINT:** Resettable Zero-Knowledge

We introduce the notion of Resettable Zero-Knowledge
(rZK), a new security measure for cryptographic protocols which
strengthens the classical notion of zero-knowledge. In essence, an
rZK protocol is one that remains zero knowledge even if an adversary
can interact with the prover many times, each time resetting the
prover to its initial state and forcing him to use the same random
tape.
Under general complexity assumptions, which hold for example if the
Discrete Logarithm Problem is hard, we construct (1) rZK proof-systems
for NP; (2) constant-round resettable witness-indistinguishable
proof-systems for NP; and (3) constant-round rZK arguments for NP in
the public key model where verifiers have fixed, public keys
associated with them.
In addition to shedding new light on what makes zero knowledge
possible (by constructing ZK protocols that use randomness in a
dramatically weaker way than before), rZK has great relevance to
applications. Firstly, we show that rZK protocols are closed under
parallel and concurrent execution and thus are guaranteed to be secure
when implemented in fully asynchronous networks, even if an adversary
schedules the arrival of every message sent. Secondly, rZK protocols
enlarge the range of physical ways in which provers of ZK protocols
can be securely implemented, including devices which cannot reliably
toss coins on line, nor keep state between invocations. (For
instance, because ordinary smart cards with secure hardware are
resettable, they could not be used to implement securely the provers
of classical ZK protocols, but can now be used to implement securely
the provers of rZK protocols.)

**1998, EPRINT:** The Graph Clustering Problem has a Perfect Zero-Knowledge Proof

The input to the Graph Clustering Problem
consists of a sequence of integers $m_1,...,m_t$
and a sequence of $\sum_{i=1}^{t}m_i$ graphs.
The question is whether the equivalence classes,
under the graph isomorphism relation,
of the input graphs have sizes which match the input sequence of integers.
In this note we show that this problem has a (perfect) zero-knowledge
interactive proof system.
This result improves over <a href="http:../1996/96-14.html">record 96-14</a>,
where a parametrized (by the sequence of integers)
version of the problem was studied.
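For intuition, the decision problem itself is easy to state in code. A brute-force sketch for tiny graphs (all names are ours; the zero-knowledge proof is, of course, the actual contribution):

```python
from itertools import permutations

def isomorphic(g, h, n):
    """Brute-force: does some relabeling of g's vertices yield h?
    Graphs are sets of frozenset edges over vertices 0..n-1.
    Exponential time; fine only for tiny examples."""
    return any({frozenset((p[u], p[v])) for (u, v) in map(tuple, g)} == h
               for p in permutations(range(n)))

def graph_clustering(ms, graphs, n):
    """YES iff the isomorphism classes of `graphs` have sizes
    matching the multiset of integers `ms`."""
    classes = []
    for g in graphs:
        for c in classes:
            if isomorphic(g, c[0], n):
                c.append(g)
                break
        else:
            classes.append([g])
    return sorted(len(c) for c in classes) == sorted(ms)

path1 = {frozenset((0, 1)), frozenset((1, 2))}
path2 = {frozenset((0, 2)), frozenset((2, 1))}   # a relabeled path
tri = {frozenset((0, 1)), frozenset((1, 2)), frozenset((0, 2))}
assert graph_clustering([2, 1], [path1, path2, tri], n=3)
```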

**1998, EPRINT:** On the possibility of basing Cryptography on the assumption that $P \neq NP$

Recent works by Ajtai and by Ajtai and Dwork
bring to light the old (general) question of whether it is
at all possible to base the
security of cryptosystems on the assumption that $P \neq NP$.
We discuss this question and in particular review and extend
a two-decade old result of Brassard regarding this question.
Our conclusion is that the question remains open.

**1998, EPRINT:** The Random Oracle Methodology, Revisited

We take a critical look at the relationship between the security of
cryptographic schemes in the Random Oracle Model, and the security of the
schemes that result from implementing the random oracle by so called
"cryptographic hash functions".
The main result of this paper is a negative one: There exist signature and
encryption schemes that are secure in the Random Oracle Model, but for which
any implementation of the random oracle results in insecure schemes.
In the process of devising the above schemes, we consider possible definitions
for the notion of a "good implementation" of a random oracle, pointing out
limitations and challenges.

**1998, EPRINT:** Comparing Entropies in Statistical Zero-Knowledge with Applications to the Structure of SZK

We consider the following (promise) problem, denoted ED (for Entropy
Difference): The input is a pair of circuits, and YES instances
(resp., NO instances) are such pairs in which the first (resp.,
second) circuit generates a distribution with noticeably higher
entropy.
On one hand we show that any language having an (honest-verifier)
statistical zero-knowledge proof is Karp-reducible to ED. On the other
hand, we present a public-coin (honest-verifier) statistical
zero-knowledge proof for ED. Thus, we obtain an alternative proof of
Okamoto's result by which HVSZK (i.e., Honest-Verifier Statistical
Zero-Knowledge) equals public-coin HVSZK. The new proof is much simpler
than the original one. The above also yields a trivial proof that HVSZK
is closed under complementation (since ED easily reduces to its
complement). Among the new results obtained is an equivalence of a weak
notion of statistical zero-knowledge to the standard one.
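Treating a "circuit" as a function of uniform input bits, the quantity at stake is the ordinary Shannon entropy of the output distribution. A brute-force sketch (our names; exhaustive enumeration, so only for toy input lengths):

```python
from collections import Counter
from itertools import product
from math import log2

def output_entropy(circuit, n_inputs):
    """Shannon entropy of circuit(x) when x is uniform over {0,1}^n,
    computed by exhaustive enumeration (toy input lengths only)."""
    counts = Counter(circuit(x) for x in product((0, 1), repeat=n_inputs))
    total = 2 ** n_inputs
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A YES instance of ED: the first circuit's output has noticeably
# higher entropy than the second's.
c1 = lambda x: x              # identity on 3 bits: entropy 3
c2 = lambda x: (x[0], 0, 0)   # depends on one bit only: entropy 1
assert output_entropy(c1, 3) > output_entropy(c2, 3) + 1
```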

**1997, EPRINT:** A Probabilistic Error-Correcting Scheme

In the course of research in Computational Learning Theory,
we found ourselves in need of an error-correcting encoding scheme for which
few bits in the codeword yield no information about the plain message.
Being unaware of a previous solution,
we came up with the scheme presented here.
Since this scheme may be of interest to people working in Cryptography,
we thought it may be worthwhile to ``publish'' this part of our work
within the Cryptography community.
Clearly, a scheme as described above cannot be deterministic.
Thus, we introduce a probabilistic coding scheme which,
in addition to the standard coding theoretic requirements,
has the feature that any constant fraction
of the bits in the (randomized) codeword yields no information about
the message being encoded.
This coding scheme is also used to obtain efficient constructions for
the Wire-Tap Channel Problem.

**1997, EPRINT:** Self-Delegation with Controlled Propagation - or - What If You Lose Your Laptop

We introduce delegation schemes wherein a user may delegate rights to
himself, i.e., to other public keys he owns, but may
not safely delegate those rights to others, i.e., to their
public keys. In our motivating application, a user
has a primary (long-term) key that receives rights, such as access
privileges, that may not be delegated to others, yet the user may
reasonably wish to delegate these rights to new
secondary (short-term) keys he creates to use on his laptop when
traveling, to avoid having to store his primary secret key on the
vulnerable laptop.
We propose several cryptographic schemes, both generic and practical,
that allow such self-delegation while providing strong motivation for
the user not to delegate rights that he only obtained for personal use
to other parties.

**1996, EPRINT:** Collision-Free Hashing from Lattice Problems

Recently Ajtai described a construction of one-way functions whose
security is equivalent to the difficulty of some well known approximation
problems in lattices. We show that essentially the same
construction can also be used to obtain collision-free hashing.
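The construction in question is a random linear map over $Z_q$: hashing a long bit-vector x to $A\cdot x \bmod q$, so that a collision $x \neq x'$ yields the short vector $x - x'$ in the lattice of solutions to $A\cdot z = 0 \bmod q$. A toy sketch (our names; the parameters below are far too small to be meaningful):

```python
import random

def ajtai_hash(A, x, q):
    """h_A(x) = A*x mod q, hashing an m-bit vector x down to n values
    mod q; compressing whenever m > n * log2(q)."""
    return tuple(sum(row[j] * x[j] for j in range(len(x))) % q for row in A)

rng = random.Random(0)
n, m, q = 4, 32, 11                   # toy parameters only
A = [[rng.randrange(q) for _ in range(m)] for _ in range(n)]
x = [rng.randrange(2) for _ in range(m)]
digest = ajtai_hash(A, x, q)          # 4 values mod 11 vs. 32 input bits
```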

**1996, EPRINT:** The Graph Clustering Problem has a Perfect Zero-Knowledge Proof

The Graph Clustering Problem is parameterized by a sequence
of positive integers, $m_1,...,m_t$.
The input is a sequence of $\sum_{i=1}^{t}m_i$ graphs,
and the question is whether the equivalence classes
under the graph isomorphism relation have sizes which match
the sequence of parameters.
In this note
we show that this problem has a (perfect) zero-knowledge
interactive proof system.

**1996, EPRINT:** Public-Key Cryptosystems from Lattice Reduction Problems

We present a new proposal for a trapdoor one-way function, from which
we derive public-key encryption and digital signatures.
The security of the new construction is based on the
conjectured computational difficulty of lattice-reduction problems,
providing a possible alternative to existing
public-key encryption algorithms
and digital signatures such as RSA and DSS.

**1986, CRYPTO**

#### Program Committees

- Crypto 1992
- Crypto 1988
- Crypto 1985

#### Coauthors

- Boaz Barak (4)
- Mihir Bellare (5)
- Michael Ben-Or (1)
- Ran Canetti (4)
- Benny Chor (2)
- Giovanni Di Crescenzo (1)
- Ivan Damgård (1)
- S. Decatur (1)
- Shimon Even (7)
- David Freeman (1)
- Shafi Goldwasser (11)
- Shai Halevi (7)
- Johan Håstad (1)
- Russell Impagliazzo (2)
- Ariel Kahan (1)
- Joe Kilian (1)
- Eike Kiltz (1)
- Hugo Krawczyk (3)
- Eyal Kushilevitz (2)
- Abraham Lempel (1)
- Yehuda Lindell (4)
- Michael Luby (1)
- Yoad Lustig (1)
- Silvio Micali (6)
- Anton Mityagin (1)
- Moni Naor (1)
- Tatsuaki Okamoto (1)
- Yair Oren (1)
- Giuseppe Persiano (1)
- Birgit Pfitzmann (2)
- Ronald L. Rivest (2)
- Phillip Rogaway (1)
- Dana Ron (2)
- Vered Rosen (2)
- Alon Rosen (1)
- Ron D. Rothblum (1)
- Steven Rudich (2)
- Amit Sahai (3)
- Alfredo De Santis (1)
- Gil Segev (1)
- Adi Shamir (1)
- Madhu Sudan (1)
- Salil P. Vadhan (4)
- Ronen Vainish (1)
- Avi Wigderson (2)
- Ke Yang (2)