## CryptoDB

### Ran Canetti

#### Affiliation: Tel Aviv University and Boston University

#### Publications

**2018 (TCC):** Certifying Trapdoor Permutations, Revisited

The modeling of trapdoor permutations has evolved over the years. Indeed, finding an appropriate abstraction that bridges between the existing candidate constructions and the needs of applications has proved to be challenging. In particular, the notions of certifying permutations (Bellare and Yung, 96) and enhanced and doubly enhanced trapdoor permutations (Goldreich, 04, 08, 11; Goldreich and Rothblum, 13) were added to bridge the gap between the modeling of trapdoor permutations and the needs of applications.

We identify an additional gap in the current abstraction of trapdoor permutations: previous works implicitly assumed that it is easy to recognize elements in the domain, as well as to uniformly sample from it, even for illegitimate function indices. We demonstrate this gap by using the (Bitansky-Paneth-Wichs, 16) doubly-enhanced trapdoor permutation family to instantiate the Feige-Lapidot-Shamir (FLS) paradigm for constructing non-interactive zero-knowledge (NIZK) protocols, and show that the resulting proof system is unsound.

To close the gap, we propose a general notion of certifiably injective doubly enhanced trapdoor functions (DECITDFs), which provides a way of certifying that a given key defines an injective function over the domain defined by it, even when that domain is not efficiently recognizable and sampleable. We show that DECITDFs suffice for instantiating the FLS paradigm; more generally, we argue that certifiable injectivity is needed whenever the generation process of the function is not trusted. We then show two very different ways to construct DECITDFs: one via the traditional method of RSA/Rabin with the Bellare-Yung certification mechanism, and the other using indistinguishability obfuscation and injective pseudorandom generators. In particular, the latter is the first candidate injective trapdoor function, from assumptions other than factoring, that suffices for the FLS paradigm.
Finally, we observe that a similar gap also appears in other paths proposed in the literature for instantiating the FLS paradigm, specifically via verifiable pseudorandom generators and verifiable pseudorandom functions. Closing the gap there can be done in ways similar to the ones proposed here.

**2015 (CRYPTO)**

**2014 (CRYPTO)**

**2014 (EPRINT)**

**2010 (EPRINT):** On Symmetric Encryption and Point Obfuscation

We show tight connections between several cryptographic primitives, namely encryption with weakly random keys, encryption with key-dependent messages (KDM), and obfuscation of point functions with multi-bit output (which we call multi-bit point functions, or MBPFs, for short). These primitives, which have been studied mostly separately in recent works, bear some apparent similarities, both in the flavor of their security requirements and in the flavor of their constructions and assumptions. Still, rigorous connections have not been drawn.

Our results can be interpreted as indicating that MBPF obfuscators imply a very strong form of encryption that simultaneously achieves security for weakly-random keys and key-dependent messages as special cases. Similarly, each one of the other primitives implies a certain restricted form of MBPF obfuscation. Our results carry both constructions and impossibility results from one primitive to others. In particular:

- The recent impossibility result for KDM security of Haitner and Holenstein (TCC 09) carries over to MBPF obfuscators.
- The Canetti-Dakdouk construction of MBPF obfuscators based on a strong variant of the DDH assumption (EC 08) gives an encryption scheme which is secure w.r.t. any weak key distribution of super-logarithmic min-entropy (and in particular also has very strong leakage-resilience properties).
- All the recent constructions of encryption schemes that are secure w.r.t. weak keys imply a weak form of MBPF obfuscators.
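For intuition, a multi-bit point function I_{p,m} returns m on input p and nothing on any other input. A minimal hash-based "digital locker" sketch conveys the flavor (this is a random-oracle-style toy of our own, not the DDH-based Canetti-Dakdouk construction discussed in the abstract):

```python
import hashlib, os

def H(tag: bytes, r: bytes, x: bytes) -> bytes:
    # Domain-separated hash, standing in for a random oracle.
    return hashlib.sha256(tag + len(r).to_bytes(2, "big") + r + x).digest()

def obfuscate_mbpf(point: bytes, output: bytes):
    # Locker for I_{point, output}: reveals `output` only on input `point`.
    assert len(output) == 32
    r = os.urandom(16)
    check = H(b"check", r, point)                       # recognizes the point
    pad = H(b"pad", r, point)                           # one-time pad for output
    return r, check, bytes(a ^ b for a, b in zip(pad, output))

def evaluate(obf, x: bytes):
    r, check, masked = obf
    if H(b"check", r, x) != check:
        return None                                     # wrong point: learn nothing
    pad = H(b"pad", r, x)
    return bytes(a ^ b for a, b in zip(pad, masked))
```

The "locker" metaphor is visible in the code: without the point, both the check value and the pad look random, so the stored output stays hidden.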

**2010 (EPRINT):** Composable Security Analysis of OS Services

We provide an analytical framework for analyzing basic integrity properties of file systems, namely the binding of files to filenames and writing capabilities. A salient feature of our modeling and analysis is that it is *composable*: In spite of the fact that we analyze the filesystem in isolation, security is guaranteed even when the file system operates as a component within an arbitrary, and potentially adversarial system. Such secure composability properties seem essential when trying to assert the security of large systems.
Our results are obtained by adapting the *Universally Composable* (UC) security framework to the analysis of software systems. Originally developed for cryptographic protocols, the UC framework allows the analysis of simple components in isolation, and provides assurance that these components maintain their behavior when combined in a large system, potentially under adversarial conditions.

**2010 (EPRINT):** Universally Composable Symbolic Analysis of Diffie-Hellman based Key Exchange

Canetti and Herzog (TCC'06) show how to efficiently perform fully automated, computationally sound security analysis of key exchange protocols with an unbounded number of sessions. A key tool in their analysis is *composability*, which allows deducing security of the multi-session case from the security of a single session. However, their framework only captures protocols that use public-key encryption as their only cryptographic primitive, and only handles static corruptions.

We extend the [CH'06] modeling in two ways. First, we also handle protocols that use digital signatures and Diffie-Hellman exchange. Second, we also handle forward secrecy under fully adaptive party corruptions. This allows us to automatically analyze systems that use an unbounded number of sessions of realistic key exchange protocols such as ISO 9798-3 or TLS.

A central tool in our treatment is a new abstract modeling of plain Diffie-Hellman key exchange. Specifically, we show that plain Diffie-Hellman securely realizes an idealized version of key encapsulation.
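The final claim, that plain Diffie-Hellman behaves as a key encapsulation mechanism (KEM), can be made concrete. A toy sketch (deliberately tiny, insecure parameters; SHA-256 standing in for an assumed key-derivation function):

```python
import hashlib, random

# Toy group: p = 2q + 1 is a safe prime, g generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)            # (a, g^a)

def encap(pk):
    # Sender picks r; ciphertext is g^r, shared key is KDF(pk^r) = KDF(g^{a*r}).
    r = random.randrange(1, q)
    shared = pow(pk, r, p)
    return pow(g, r, p), hashlib.sha256(shared.to_bytes(4, "big")).digest()

def decap(sk, ct):
    # Receiver recomputes the same secret: ct^a = g^{r*a}.
    shared = pow(ct, sk, p)
    return hashlib.sha256(shared.to_bytes(4, "big")).digest()
```

Both sides derive the same key without the sender ever learning the receiver's secret exponent, which is exactly the KEM interface the abstract idealizes.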

**2010 (EPRINT):** On Strong Simulation and Composable Point Obfuscation

The Virtual Black Box (VBB) property for program obfuscators provides a strong guarantee: anything computable by an efficient adversary given the obfuscated program can also be computed by an efficient simulator with only oracle access to the program. However, we know how to achieve this notion only for very restricted classes of programs.

This work studies a simple relaxation of VBB: allow the simulator unbounded computation time, while still allowing only polynomially many queries to the oracle. We then demonstrate the viability of this relaxed notion, which we call Virtual Grey Box (VGB), in the context of fully composable obfuscators for point programs: it is known that, w.r.t. VBB, if such obfuscators exist then there exist multi-bit point obfuscators (a.k.a. "digital lockers") and subsequently also very strong variants of encryption that are resilient to various attacks, such as key leakage and key-dependent messages. However, no composable VBB obfuscators for point programs have been shown. We show fully composable *VGB* obfuscators for point programs under a strong variant of the Decisional Diffie-Hellman assumption. We show they suffice for the above applications, and even for extensions to the public-key setting as well as for encryption schemes with resistance to certain related-key attacks (RKA).

**2008 (EPRINT):** Non-Malleable Obfuscation

Existing definitions of program obfuscation do not rule out malleability attacks, where an adversary that sees an obfuscated program is able to generate another (potentially obfuscated) program that is related to the original one in some way.
We formulate two natural flavors of non-malleability requirements for program obfuscation, and show that they are incomparable in general. We also construct non-malleable obfuscators of both flavors for some program families of interest. Some of our constructions are in the Random Oracle model, whereas another one is in the common reference string model. We also define the notion of verifiable obfuscation which is of independent interest.

**2008 (EPRINT):** How to Protect Yourself without Perfect Shredding

Erasing old data and keys is an important tool in cryptographic protocol design. It is useful in many settings, including proactive security, adaptive security, forward security, and intrusion resilience. Protocols for all these settings typically assume the ability to perfectly erase information. Unfortunately, as amply demonstrated in the systems literature, perfect erasures are hard to implement in practice.
We propose a model of partial erasures where erasure instructions leave almost all the data to be erased intact, thus giving the honest players only a limited capability for disposing of old data. Nonetheless, we provide a general compiler that transforms any secure protocol using perfect erasures into one that maintains the same security properties when only partial erasures are available. The key idea is a new redundant representation of secret data which can still be computed on, and yet is rendered useless when partially erased. We prove that any such compiler must incur a cost in additional storage, and that our compiler is near optimal in terms of its storage overhead.
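The compiler itself is involved, but the core idea of a redundant representation that is destroyed by incomplete erasure can be loosely illustrated with plain XOR sharing (our toy analogy; the paper handles erasures that are only partially effective on each stored location, which this sketch does not model):

```python
import os

def redundant_share(secret: bytes, k: int = 8):
    # Store `secret` as the XOR of k blocks: the first k-1 are uniformly
    # random, the last is chosen so that all k XOR back to the secret.
    # Destroying any single whole block leaves the rest statistically
    # independent of the secret.
    shares = [os.urandom(len(secret)) for _ in range(k - 1)]
    last = secret
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def reconstruct(shares):
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

The point of the redundancy is that "mostly erasing" the representation can be as good as perfectly erasing the secret, at the price of extra storage, which matches the storage-overhead lower bound the abstract mentions.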

**2008 (EPRINT):** Modeling Computational Security in Long-Lived Systems, Version 2

For many cryptographic protocols, security relies on the assumption that adversarial entities have limited computational power. This type of security degrades progressively over the lifetime of a protocol. However, some cryptographic services, such as timestamping services or digital archives, are long-lived in nature; they are expected to be secure and operational for a very long time (i.e., super-polynomial). In such cases, security cannot be guaranteed in the traditional sense: a computationally secure protocol may become insecure if the attacker has a super-polynomial number of interactions with the protocol.

This paper proposes a new paradigm for the analysis of long-lived security protocols. We allow entities to be active for a potentially unbounded amount of real time, provided they perform only a polynomial amount of work per unit of real time. Moreover, the space used by these entities is allocated dynamically and must be polynomially bounded.

We propose a new notion of long-term implementation, which is an adaptation of computational indistinguishability to the long-lived setting. We show that long-term implementation is preserved under polynomial parallel composition and exponential sequential composition. We illustrate the use of this new paradigm by analyzing some security properties of the long-lived timestamping protocol of Haber and Kamat.

**2007 (EPRINT):** On the Role of Scheduling in Simulation-Based Security

In a series of papers, Küsters et al. investigated the relationships between various notions of simulation-based security. Two main factors, the placement of a "master process" and the existence of "forwarder processes", were found to affect the relationship between different definitions. In this extended abstract, we add a new dimension to the analysis of simulation-based security, namely, the scheduling of concurrent processes. We show that, when we move from sequential scheduling (as used in previous studies) to task-based nondeterministic scheduling, the same syntactic definition of security gives rise to incomparable semantic notions of security. Under task-based scheduling, the hierarchy based on the placement of a "master process" is no longer relevant, because no such designation is necessary to obtain meaningful runs of a system. On the other hand, the existence of "forwarder processes" remains an important factor.

**2007 (EPRINT):** Chosen-Ciphertext Secure Proxy Re-Encryption

In a proxy re-encryption (PRE) scheme, a proxy is given special information that allows it to translate a ciphertext under one key into a ciphertext of the same message under a different key. The proxy cannot, however, learn anything about the messages encrypted under either key. PRE schemes have many practical applications, including distributed storage, email, and DRM. Previously proposed re-encryption schemes achieved only semantic security; in contrast, applications often require security against chosen-ciphertext attacks. We propose a definition of security against chosen-ciphertext attacks for PRE schemes, and present a scheme that satisfies the definition. Our construction is efficient and based only on the Decisional Bilinear Diffie-Hellman assumption in the standard model. We also formally capture CCA security for PRE schemes via both a game-based definition and simulation-based definitions that guarantee universally composable security. We note that, simultaneously with our work, Green and Ateniese proposed a CCA-secure PRE, discussed herein.
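For intuition about the re-encryption mechanism itself, the earlier, merely semantically secure kind of scheme the abstract contrasts with can be sketched as a toy ElGamal variant in the style of Blaze, Bleumer and Strauss (tiny insecure parameters; here the re-key is computed directly from both secret keys, which real schemes avoid):

```python
import random

p, q, g = 2039, 1019, 4   # p = 2q+1 safe prime; g generates the order-q subgroup

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)

def enc(pk, m):
    # m must lie in the subgroup generated by g.
    r = random.randrange(1, q)
    return (m * pow(g, r, p)) % p, pow(pk, r, p)      # (m*g^r, g^{a*r})

def dec(sk, ct):
    c1, c2 = ct
    gr = pow(c2, pow(sk, -1, q), p)                   # (g^{a*r})^{1/a} = g^r
    return (c1 * pow(gr, -1, p)) % p

def rekey(sk_a, sk_b):
    return (sk_b * pow(sk_a, -1, q)) % q              # rk = b/a mod q

def reenc(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, p)                         # g^{a*r} -> g^{b*r}
```

The proxy only exponentiates one ciphertext component by rk, so it never sees the message; yet ciphertexts here are malleable, which is exactly why such schemes fall short of the chosen-ciphertext security the paper targets.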

**2007 (EPRINT):** How to Model Bounded Computation in Long-Lived Systems

In most interesting cases, the security of cryptographic protocols relies on the assumption that adversarial entities have limited computational power, and it is generally accepted that security degrades progressively over time. However, some cryptographic services (e.g., time-stamping services or digital archives) are long-lived in nature; that is, their lifetime need not be bounded by a polynomial. In such cases, it is impossible to guarantee security in the traditional sense: even information-theoretically secure protocols can fail if the attacker is given sufficient run time.

This work proposes a new paradigm for long-lived computation, where computational restrictions are stated in terms of space and processing rates. In this setting, entities may live for an unbounded amount of real time, subject to the condition that only a polynomial amount of work can be done per unit of real time. Moreover, the space used by these entities is allocated dynamically and must be polynomially bounded. We propose a key notion of approximate implementation, which is an adaptation of computational indistinguishability to the long-lived setting. We show that approximate implementation is preserved under polynomial parallel composition, and under exponential sequential composition. This provides core foundations for an exciting new area, namely, the analysis of long-lived cryptographic systems.

**2007 (EPRINT):** Secure Computation Without Authentication

Research on secure multiparty computation has mainly concentrated on the case where the parties can authenticate each other and the communication between them. This work addresses the question of what security can be guaranteed when authentication is not available. We consider a completely unauthenticated setting, where all messages sent by the parties may be tampered with and modified by the adversary without the honest parties being able to detect this fact. In this model, it is not possible to achieve the same level of security as in the authenticated-channel setting. Nevertheless, we show that meaningful security guarantees can be provided: essentially, all the adversary can do is partition the network into disjoint sets, where in each set the computation is secure in itself, and also independent of the computation in the other sets.

In the basic setting our construction provides, for the first time, non-trivial security guarantees in a model with no set-up assumptions whatsoever. We also obtain similar results while guaranteeing universal composability, in some variants of the common reference string model. Finally, our protocols can be used to provide conceptually simple and unified solutions to a number of problems that were studied separately in the past, including password-based authenticated key exchange and non-malleable commitments. As an application of our results, we study the question of constructing secure protocols in partially authenticated networks, where some of the links are authenticated and some are not (as is the case in most networks today).

**2007 (EPRINT):** Obtaining Universally Composable Security: Towards the Bare Bones of Trust

A desirable goal for cryptographic protocols is to guarantee security when the protocol is composed with other protocol instances. Universally Composable (UC) security provides this guarantee in a strong sense: a UC-secure protocol maintains its security properties even when composed concurrently with an unbounded number of instances of arbitrary protocols. However, many interesting cryptographic tasks are provably impossible to realize with UC security in the standard, "plain" model of computation. Impossibility holds even if ideally authenticated communication channels are provided. In contrast, it has been demonstrated that general secure computation can be obtained in a number of idealized models. Each one of these models represents a form of trust that is put in some of the system's components.

This survey examines and compares some of these trust models, both from the point of view of their sufficiency for building UC-secure protocols, and from the point of view of their practical realizability. We start with the common reference string (CRS) model, and then describe several relaxations and alternatives, including the Defective CRS model, the key registration models, the hardware token model, the global and augmented CRS models, and a timing assumption. Finally, we briefly touch upon trust models for obtaining authenticated communication.

**2006 (TCC)**

**2006 (JOFC)**

**2006 (EPRINT):** Mitigating Dictionary Attacks on Password-Protected Local Storage

We address the issue of encrypting data in local storage using a key that is derived from the user's password. The typical solution in use today is to derive the key from the password using a cryptographic hash function. This solution provides relatively weak protection, since an attacker that gets hold of the encrypted data can mount an off-line dictionary attack on the user's password, thereby recovering the key and decrypting the stored data.
We propose an approach for limiting off-line dictionary attacks in this setting without relying on secret storage or secure hardware. In our proposal, the process of deriving a key from the password requires the user to solve a puzzle that is presumed to be solvable only by humans (e.g., a CAPTCHA). We describe a simple protocol using this approach: many different puzzles are stored on the disk, the user's password is used to specify which of them need to be solved, and the encryption key is derived from the password and the solutions of the specified puzzles. Completely specifying and analyzing this simple protocol, however, raises a host of modeling and technical issues, such as new properties of human-solvable puzzles and some seemingly hard combinatorial problems. Here we analyze this protocol in some interesting special cases.
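The protocol outline reads almost directly as code. A minimal sketch, with SHA-256 standing in for the key-derivation function and an index-selection rule that is our own assumption (the paper leaves these instantiations to the analysis):

```python
import hashlib

def puzzle_indices(password: bytes, n_puzzles: int, k: int):
    # The password deterministically selects which k of the n stored
    # puzzles must be solved (hypothetical selection rule).
    digest = hashlib.sha256(b"select:" + password).digest()
    return [int.from_bytes(digest[2*i:2*i+2], "big") % n_puzzles
            for i in range(k)]

def derive_key(password: bytes, solutions) -> bytes:
    # The key depends on the password AND the human-provided solutions,
    # so an offline dictionary attacker must also solve puzzles per guess.
    h = hashlib.sha256(password)
    for s in solutions:
        h.update(hashlib.sha256(s).digest())
    return h.digest()
```

The security argument hinges on the selection step: each password guess points the attacker at a different puzzle subset, so every guess costs human puzzle-solving effort rather than just a hash evaluation.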

**2006 (EPRINT):** Universally Composable Security with Global Setup

Cryptographic protocols are often designed and analyzed under some trusted setup assumptions, namely in settings where the participants have access to global information that is trusted to have some basic security properties. However, current modeling of security in the presence of such setup falls short of providing the expected security guarantees. A quintessential example of this phenomenon is the deniability concern: there exist natural protocols that meet the strongest known composable security notions, and are still vulnerable to bad interactions with rogue protocols that use the same setup.
We extend the notion of universally composable (UC) security in a way that re-establishes its original intuitive guarantee even for protocols that use globally available setup. The new formulation prevents bad interactions even with adaptively chosen protocols that use the same setup. In particular, it guarantees deniability. While for protocols that use no setup the proposed requirements are the same as in traditional UC security, for protocols that use global setup the proposed requirements are significantly stronger. In fact, realizing Zero Knowledge or commitment becomes provably impossible, even in the Common Reference String model. Still, we propose reasonable alternative setup assumptions and protocols that allow realizing practically any cryptographic task under standard hardness assumptions even against adaptive corruptions.

**2006 (EPRINT):** Security and Composition of Cryptographic Protocols: A Tutorial

What does it mean for a cryptographic protocol to be "secure"? Capturing the security requirements of cryptographic tasks in a meaningful way is a slippery business: on the one hand, we want security criteria that prevent "all feasible attacks" against a protocol; on the other hand, we want our criteria to not be overly restrictive; that is, we want them to accept those protocols that do not succumb to "feasible attacks".

This tutorial studies a general methodology for defining security of cryptographic protocols. The methodology, often dubbed the "trusted party paradigm", allows for defining the security requirements of practically any cryptographic task in a unified and natural way. We first review a basic formulation that captures security in isolation from other protocol instances. Next we address the secure composition problem, namely the vulnerabilities resulting from the often unexpected interactions among different protocol instances that run alongside each other in the same system. We demonstrate the limitations of the basic formulation and review a formulation that guarantees security of protocols even in general composite systems.

**2005 (EPRINT):** Universally Composable Password-Based Key Exchange

We propose and realize a definition of security for password-based key exchange within the framework of universal composability (UC), thus providing security guarantees under arbitrary composition with other protocols. In addition, our definition captures some aspects of the problem that were not adequately addressed by most prior notions. For instance, our definition does not assume any underlying probability distribution on passwords, nor does it assume independence between passwords chosen by different parties. We also formulate a definition of password-based secure channels, and show how to realize such channels given any password-based key exchange protocol.
The password-based key exchange protocol shown here is in the common reference string model and relies on standard number-theoretic assumptions. The components of our protocol can be instantiated to give a relatively efficient solution which is conceivably usable in practice. We also show that it is impossible to satisfy our definition in the "plain" model (e.g., without a common reference string).

**2005 (EPRINT):** Using Probabilistic I/O Automata to Analyze an Oblivious Transfer Protocol

The Probabilistic I/O Automata framework of Lynch, Segala and Vaandrager provides tools for precisely specifying protocols and reasoning about their correctness using multiple levels of abstraction, based on implementation relationships between these levels. We enhance this framework to allow analyzing protocols that use cryptographic primitives. This requires resolving and reconciling issues such as nondeterministic behavior and scheduling, randomness, resource-bounded computation, and computational hardness assumptions. The enhanced framework allows for more rigorous and systematic analysis of cryptographic protocols. To demonstrate the use of this framework, we present an example analysis that we have done for an Oblivious Transfer protocol.

**2004 (EPRINT):** On the Limitations of Universally Composable Two-Party Computation Without Set-up Assumptions

The recently proposed *universally composable (UC) security* framework for analyzing security of cryptographic protocols provides very strong security guarantees. In particular, a protocol proven secure in this framework is guaranteed to maintain its security even when run concurrently with arbitrary other protocols. It has been shown that if a majority of the parties are honest, then universally composable protocols exist for essentially any cryptographic task in the *plain model* (i.e., with no setup assumptions beyond that of authenticated communication). When an honest majority is not guaranteed, general feasibility results are known only given trusted set-up, such as in the common reference string model. Little was known regarding the existence of universally composable protocols in the plain model without an honest majority, and in particular regarding the important special case of two-party protocols.

We study the feasibility of universally composable two-party *function evaluation* in the plain model. Our results show that in this setting, very few functions can be securely computed in the framework of universal composability. We demonstrate this by providing broad impossibility results that apply to large classes of deterministic and probabilistic functions. For some of these classes, we also present full characterizations of what can and cannot be securely realized in the framework of universal composability. Specifically, our characterizations are for the classes of deterministic functions in which (a) both parties receive the same output, (b) only one party receives output, and (c) only one party has input.

**2004 (EPRINT):** Adaptively-Secure, Non-Interactive Public-Key Encryption

Adaptively-secure encryption schemes ensure secrecy even in the presence of an adversary who can corrupt parties in an adaptive manner based on public keys, ciphertexts, and secret data of already-corrupted parties. Ideally, an adaptively-secure encryption scheme should, like standard public-key encryption, allow arbitrarily many parties to use a single encryption key to securely encrypt arbitrarily many messages to a given receiver who maintains only a single short decryption key. However, it is known that these requirements are impossible to achieve: no non-interactive encryption scheme that supports encryption of an unbounded number of messages and uses a single, unchanging decryption key can be adaptively secure. Impossibility holds even if secure data erasure is possible.

We show that this limitation can be overcome by updating the decryption key over time and making some mild assumptions about the frequency of communication between parties. Using this approach, we construct adaptively-secure, completely non-interactive encryption schemes supporting secure encryption of arbitrarily many messages from arbitrarily many senders. Our schemes additionally provide forward security and security against chosen-ciphertext attacks.

**2004 (EPRINT):** Hardness amplification of weakly verifiable puzzles

Is it harder to solve many puzzles than it is to solve just one? This question has different answers, depending on how you define puzzles. For the case of inverting one-way functions, it was shown by Yao that solving many independent instances simultaneously is indeed harder than solving a single instance (cf. the transformation from weak to strong one-way functions). The known proofs of that result, however, use in an essential way the fact that for one-way functions, verifying candidate solutions to a given puzzle is easy. We extend this result to the case where solutions are efficiently verifiable only by the party that generated the puzzle. We call such puzzles weakly verifiable. That is, for weakly verifiable puzzles we show that if no efficient algorithm can solve a single puzzle with probability more than ε, then no efficient algorithm can solve n independent puzzles simultaneously with probability more than ε^n. We also demonstrate that when the puzzles are not even weakly verifiable, solving many puzzles may be no harder than solving a single one.

Hardness amplification of weakly verifiable puzzles turns out to be closely related to the reduction of soundness error under parallel repetition in computationally sound arguments. Indeed, the proof of Bellare, Impagliazzo and Naor that parallel repetition reduces soundness error in three-round argument systems implies a result similar to our first result, albeit with considerably worse parameters. Also, our second result is an adaptation of their proof that parallel repetition of four-round systems may not reduce the soundness error.
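As a sanity check on the ε^n bound: a solver that attacks each of n independent puzzles separately succeeds on all of them with probability exactly ε^n, and the theorem says no efficient solver can do noticeably better. A quick simulation of the independent-attempts baseline (purely illustrative, ours):

```python
import random

random.seed(0)
eps, n, trials = 0.5, 8, 100_000

def solve_one():
    # Stand-in solver that succeeds on a fresh puzzle with probability eps.
    return random.random() < eps

# Fraction of trials in which all n independent puzzles are solved at once.
joint = sum(all(solve_one() for _ in range(n)) for _ in range(trials)) / trials
# joint comes out close to eps**n = 0.00390625
```

The hard part of the paper is not this baseline but ruling out cleverer solvers that correlate their work across puzzles, which is where weak verifiability is essential.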

**2004 (EPRINT):** Universally Composable Symbolic Analysis of Cryptographic Protocols (The case of encryption-based mutual authentication and key exchange)

Symbolic analysis of cryptographic protocols is dramatically simpler than full-fledged cryptographic analysis. In particular, it is readily amenable to automation. However, symbolic analysis does not a priori carry any cryptographic soundness guarantees. Following recent work on cryptographically sound symbolic analysis, we demonstrate how Dolev-Yao style symbolic analysis can be used to assert the security of cryptographic protocols within the universally composable (UC) security framework. Consequently, our methods enable security analysis that is completely symbolic, and at the same time cryptographically sound with strong composability properties.
More specifically, we define a mapping from a class of cryptographic protocols to Dolev-Yao style symbolic protocols. For this mapping, we show that the symbolic protocol satisfies a certain symbolic criterion if and only if the corresponding cryptographic protocol is UC-secure. We concentrate on mutual authentication and key-exchange protocols that use public-key encryption as their only cryptographic primitive. For mutual authentication, our symbolic criterion is similar to the traditional Dolev-Yao criterion. For key exchange, we demonstrate that the traditional Dolev-Yao style symbolic criterion is insufficient, and formulate an adequate symbolic criterion.
Finally, to demonstrate the viability of the treatment, we use an existing automated verification tool to assert UC security of some prominent key exchange protocols.

**2003 (EUROCRYPT)**

**2003 (EPRINT):** A Forward-Secure Public-Key Encryption Scheme

Cryptographic computations are often carried out on insecure devices
for which the threat of key exposure represents a serious and
realistic concern. In an effort to mitigate the damage caused by
exposure of secret keys stored on such devices, the paradigm of
\emph{forward security} was introduced. In a forward-secure scheme,
secret keys are updated at regular periods of time; exposure of the
secret key corresponding to a given time period does not enable an
adversary to ``break'' the scheme (in the appropriate sense) for
any \emph{prior} time period. A number of constructions of
forward-secure digital signature schemes, key-exchange protocols,
and symmetric-key schemes are known.
We present the first non-trivial constructions of (non-interactive)
forward-secure public-key encryption schemes. Our main construction
achieves security against chosen-plaintext attacks under the decisional
bilinear Diffie-Hellman assumption in the standard model. This
scheme is practical, and all parameters grow at most logarithmically
with the total number of time periods. We also give a slightly more
efficient scheme in the random oracle model. Both our schemes can be
extended to achieve security against chosen-ciphertext attacks and to
support an unbounded number of time periods.
Toward our goal, we introduce the notion of \emph{binary tree
encryption} and show how to construct a binary tree encryption scheme
in the standard model. This new primitive may be of independent
interest. In particular, we use it to construct the first known example
of a (hierarchical) identity-based encryption scheme that is secure
in the standard model. (Here, however, the notion of security we
achieve is slightly weaker than what is achieved in some previous constructions
in the random oracle model.)
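
The paper's construction is public-key and pairing-based, which is beyond a short sketch; but the forward-security property itself can be illustrated with a toy symmetric hash chain (a stand-in for the real scheme, not the scheme itself): the period-(i+1) key is a one-way function of the period-i key, and the old key is erased on update, so exposing the current key reveals nothing about prior periods.

```python
import hashlib

class ForwardSecureKey:
    """Toy symmetric illustration of forward security: the key for
    period i+1 is SHA-256 of the key for period i, and the old key is
    erased on update. Exposing the current key lets an adversary derive
    *future* keys, but recovering any *prior* period's key requires
    inverting the hash. (The paper's scheme is public-key and
    pairing-based; this sketch only shows the security goal.)"""

    def __init__(self, seed: bytes):
        self.period = 0
        self._key = hashlib.sha256(b"fs-key:" + seed).digest()

    def update(self) -> None:
        # One-way step: the previous key is overwritten and unrecoverable.
        self._key = hashlib.sha256(self._key).digest()
        self.period += 1

    def key(self) -> bytes:
        return self._key

k = ForwardSecureKey(b"initial seed")
k0 = k.key()
k.update()
k1 = k.key()
assert k1 == hashlib.sha256(k0).digest() and k1 != k0
```

Note that in this symmetric toy the sender must track the receiver's current period; the point of the paper is to achieve the same one-way key evolution non-interactively in the public-key setting, with all parameters growing at most logarithmically in the number of periods.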

2003

EPRINT

On the random-oracle methodology as applied to length-restricted signature schemes
Abstract

In earlier work, we described a ``pathological'' example of a signature scheme that is secure in the random-oracle model, but for which no secure implementation exists. For that example, however, it was crucial that the scheme is able to sign ``long messages'' (i.e., messages whose
length is not a priori bounded). This left open the possibility that
the Random Oracle Methodology is sound with respect to signature schemes
that sign only ``short'' messages (i.e., messages of a priori bounded length, smaller than the length of the keys in use) and are ``memoryless'' (i.e., the only thing kept between different signature
generations is the initial signing key). In this work, we extend our negative result to address such signature schemes. A key ingredient in our proof is a new type of interactive proof system, which may be of independent interest.

2003

EPRINT

Relaxing Chosen-Ciphertext Security
Abstract

Security against adaptive chosen ciphertext attacks (or, CCA security) has been accepted as the standard requirement from encryption schemes that need to withstand active attacks. In particular, it is regarded as the appropriate security
notion for encryption schemes used as components within general protocols and applications. Indeed, CCA security was shown to suffice in a large variety of contexts. However, CCA security often appears to be somewhat {\em too strong:} there exist encryption schemes (some of which come up naturally in practice) that are not CCA secure, but seem sufficiently secure ``for most practical purposes.''
We propose a relaxed variant of CCA security, called {\sf Replayable CCA (RCCA)} security. RCCA security accepts as secure the non-CCA (yet arguably secure) schemes mentioned above; furthermore, it suffices for most existing applications of CCA security.
We provide three formulations of RCCA security. The first one follows the spirit of semantic security and is formulated via an ideal functionality in the universally composable security framework. The other two are formulated following the indistinguishability and non-malleability approaches,
respectively. We show that the three formulations are equivalent in most interesting cases.

2003

EPRINT

Chosen-Ciphertext Security from Identity-Based Encryption
Abstract

We show how to construct a CCA-secure public-key encryption scheme
from any CPA-secure identity-based encryption (IBE) scheme. Our
conversion from IBE to a CCA-secure scheme is simple,
efficient, and provably secure in the standard model (i.e., security
of the resulting scheme does not rely on the random oracle model).
In addition, the resulting scheme achieves CCA security even if the
underlying IBE scheme satisfies only a ``weak'' notion of security
which is known to be achievable in the standard model based on the
bilinear Diffie-Hellman assumption. Thus, our results yield a new
construction of CCA-secure public-key encryption in the
standard model. Interestingly, the resulting scheme avoids any
non-interactive proofs of ``well-formedness'' which were shown to
underlie all previously-known constructions.
We also extend our technique to obtain a simple and reasonably efficient
method for securing any BTE scheme against adaptive chosen-ciphertext
attacks. This, in turn, yields more efficient constructions of CCA-secure
(hierarchical) identity-based and forward-secure encryption schemes in the
standard model.
Our results --- building on previous black-box separations ---
rule out black-box constructions of IBE from CPA-secure public-key encryption.
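
The conversion is simple enough to sketch: encrypt under an IBE ``identity'' equal to (a hash of) a freshly generated one-time signature verification key, sign the ciphertext, and have the decryptor reject unless the signature verifies. The sketch below uses deliberately insecure toy stand-ins (a keyed-XOR ``IBE'' and a Lamport one-time signature); the names `ToyIBE`, `LamportOTS`, `cca_encrypt`, and `cca_decrypt` are illustrative only, and in the actual scheme the receiver would extract a per-identity decryption key rather than use the master key directly.

```python
import hashlib
import os

H = lambda data: hashlib.sha256(data).digest()

class LamportOTS:
    """A real (if bulky) one-time signature: commit to 2x256 random
    preimages, reveal one per bit of the hashed message."""
    def __init__(self):
        self.sk = [[os.urandom(32), os.urandom(32)] for _ in range(256)]
        self.vk = [[H(x) for x in pair] for pair in self.sk]

    def sign(self, msg):
        bits = [(byte >> i) & 1 for byte in H(msg) for i in range(8)]
        return [self.sk[j][b] for j, b in enumerate(bits)]

    @staticmethod
    def verify(vk, msg, sig):
        bits = [(byte >> i) & 1 for byte in H(msg) for i in range(8)]
        return all(H(s) == vk[j][b] for j, (s, b) in enumerate(zip(sig, bits)))

class ToyIBE:
    """Insecure keyed-XOR stand-in for an IBE scheme (real ones use
    bilinear maps). Decryption here uses the master key directly;
    the actual scheme extracts a per-identity decryption key."""
    def __init__(self):
        self.msk = os.urandom(32)

    def _pad(self, identity, n):
        out, ctr = b"", 0
        while len(out) < n:
            out += H(self.msk + identity + bytes([ctr]))
            ctr += 1
        return out[:n]

    def encrypt(self, identity, data):
        return bytes(a ^ b for a, b in zip(data, self._pad(identity, len(data))))

    decrypt = encrypt  # the XOR pad is its own inverse

def vk_to_identity(vk):
    return H(b"".join(h for pair in vk for h in pair))

def cca_encrypt(ibe, msg):
    ots = LamportOTS()                       # fresh one-time key pair per ciphertext
    c = ibe.encrypt(vk_to_identity(ots.vk), msg)
    return ots.vk, c, ots.sign(c)            # identity = hash of the OTS vk

def cca_decrypt(ibe, vk, c, sig):
    if not LamportOTS.verify(vk, c, sig):
        return None                          # reject modified ciphertexts
    return ibe.decrypt(vk_to_identity(vk), c)

ibe = ToyIBE()
vk, c, sig = cca_encrypt(ibe, b"attack at dawn")
assert cca_decrypt(ibe, vk, c, sig) == b"attack at dawn"
tampered = c[:-1] + bytes([c[-1] ^ 1])
assert cca_decrypt(ibe, vk, tampered, sig) is None
```

The structural point is visible even in the toy: an adversary who mauls the ciphertext either invalidates the signature (and is rejected) or must forge under a verification key it did not choose, so decryption queries on modified ciphertexts are useless.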

2003

EPRINT

Universally Composable Signatures, Certification and Authentication
Abstract

Recently some efforts were made towards capturing the security requirements from digital signature schemes as an ideal functionality within a
composable security framework. This modeling of digital signatures
potentially has some significant analytical advantages (such as enabling component-wise analysis of complex systems that use signature schemes, as well as symbolic and automatable analysis of such systems). However, it turns out that
formulating ideal functionalities that capture the properties
expected from signature schemes in a way that is both sound and
enjoys the above advantages is not a trivial task.
This work has several contributions. We first correct some flaws in the definition of the ideal signature functionality of Canetti (2001) and subsequent formulations. Next we provide a minimal
formalization of ``ideal certification authorities'' and
show how authenticated communication can be obtained using ideal signatures and an ideal certification authority. This is done while guaranteeing full modularity (i.e., each component is analyzed as stand-alone), and in an unconditional and errorless way.
This opens the door to symbolic and
automated analysis of protocols for these tasks, in a way that is
both modular and cryptographically sound.

2002

EPRINT

Security Analysis of IKE's Signature-based Key-Exchange Protocol
Abstract

We present a security analysis of the Diffie-Hellman
key-exchange protocols authenticated with digital signatures
used by the Internet Key Exchange (IKE) standard, and of the more
comprehensive SIGMA family of key exchange protocols.
The analysis is based on an adaptation of the key-exchange security
model from [Canetti and Krawczyk, Eurocrypt'01] to the setting
where peer identities are not necessarily known or disclosed
from the start of the protocol. This is a common practical setting,
which includes the case of IKE and other protocols that provide
confidentiality of identities over the network. The rigorous study
of this ``post-specified peer'' model is a further contribution of
this paper.

2002

EPRINT

Universal Composition with Joint State
Abstract

Cryptographic systems often involve running multiple concurrent instances of some protocol, where the instances have some amount of joint state and randomness. (Examples include systems where multiple protocol instances use the same public-key infrastructure, or the same common reference string.) Rather than attempting to analyze the entire system as a single unit, we would like to be able to analyze each such protocol instance as stand-alone, and then use a general composition theorem to deduce the security of the entire system. However, no known composition theorem applies in this setting, since they all assume that the composed protocol instances have disjoint internal states, and that the internal random choices in the various instances are independent.
We propose a new composition operation that can handle the case where
different components have some amount of joint state and randomness,
and demonstrate sufficient conditions for when the new operation preserves security. The new operation, which is called {\em universal composition with joint state} (and is based on the recently proposed universal composition operation), turns out to be very useful in a number of quite different scenarios such as those mentioned above.

2002

EPRINT

Universally Composable Notions of Key Exchange and Secure Channels
Abstract

Recently, Canetti and Krawczyk (Eurocrypt 2001) formulated a notion of
security for key-exchange (KE) protocols, called SK-security, and
showed that this notion suffices for constructing secure channels.
Their model and proofs, however, do not suffice for proving more general
composability properties of SK-secure KE protocols.
We show that while the notion of SK-security is strictly
weaker than a fully-idealized notion of key exchange security, it
is sufficiently robust for providing secure composition with
arbitrary protocols. In particular, SK-security guarantees the security
of the key for any application that desires to set-up secret keys
between pairs of parties. We also provide new definitions of
secure-channels protocols with similarly strong composability properties,
and show that SK-security suffices for obtaining these definitions.
To obtain these results we use the recently proposed framework of
"universally composable (UC) security."
We also use a new tool, called "non-information
oracles," which will probably find applications beyond the present case.
These tools allow us to bridge between seemingly limited
indistinguishability-based definitions such as SK-security and
more powerful, simulation-based definitions, such as UC-security,
where general composition theorems can be proven.
Furthermore, based on such composition theorems we reduce the
analysis of a full-fledged multi-session key-exchange protocol to the
(simpler) analysis of individual, stand-alone,
key-exchange sessions.

2002

EPRINT

Universally Composable Two-Party and Multi-Party Secure Computation
Abstract

We show how to securely realize any two-party and multi-party functionality in a {\em universally composable} way, regardless of the number of corrupted participants. That is, we consider an asynchronous multi-party network with open communication and an
adversary that can adaptively corrupt as many parties as it wishes. In this setting, our protocols allow any subset of the parties (with pairs of parties being a special case) to securely realize any desired functionality of their local inputs, and be guaranteed that security is preserved regardless of the activity in the rest of the network. This implies that security is preserved under concurrent composition of an unbounded number of protocol executions, it implies non-malleability with respect to arbitrary protocols, and more. Our constructions are in the common reference string model and rely on standard intractability assumptions.

2001

EPRINT

Black-Box Concurrent Zero-Knowledge Requires $\tilde\Omega(\log n)$ Rounds
Abstract

We show that any concurrent zero-knowledge protocol for a non-trivial
language (i.e., for a language outside BPP), whose security is
proven via black-box simulation, must use at least
$\tilde\Omega(\log n)$ rounds of interaction. This result achieves a
substantial improvement over previous lower bounds, and is the first
bound to rule out the possibility of constant-round concurrent
zero-knowledge when proven via black-box simulation. Furthermore, the
bound is polynomially related to the number of rounds in the best
known concurrent zero-knowledge protocol for languages in NP.

2001

EPRINT

On adaptive vs. non-adaptive security of multiparty protocols
Abstract

Security analysis of multiparty cryptographic protocols distinguishes
between two types of adversarial settings: In the non-adaptive
setting, the set of corrupted parties is chosen in advance, before the
interaction begins. In the adaptive setting, the adversary chooses
who to corrupt during the course of the computation. We study the
relations between adaptive security (i.e., security in the adaptive
setting) and non-adaptive security, according to two definitions and
in several models of computation. While affirming some prevailing
beliefs, we also obtain some unexpected results. Some highlights of
our results are:
- According to the definition of Dodis-Micali-Rogaway (which is set in
the information-theoretic model), adaptive and non-adaptive security
are equivalent. This holds for both honest-but-curious and Byzantine
adversaries, and for any number of parties.
- According to the definition of Canetti, for honest-but-curious
adversaries, adaptive security is equivalent to non-adaptive
security when the number of parties is logarithmic, and is strictly
stronger than non-adaptive security when the number of parties is
super-logarithmic. For Byzantine adversaries, adaptive security is
strictly stronger than non-adaptive security, for any number of
parties.

2001

EPRINT

Analysis of Key-Exchange Protocols and Their Use for Building Secure Channels
Abstract

We present a formalism for the analysis of key-exchange protocols
that combines previous definitional approaches and results in a definition
of security that enjoys some important analytical benefits:
(i) any key-exchange protocol that satisfies the security definition
can be composed with symmetric encryption and authentication functions
to provide provably secure communication channels;
and
(ii) the definition allows for simple modular proofs of security:
one can design and prove security of key-exchange protocols in an
idealized model where the communication links are perfectly authenticated,
and then translate them using general tools to obtain security in
the realistic setting of adversary-controlled links.
We exemplify the usability of our results by applying them to obtain the
proof of two main classes of key-exchange protocols, Diffie-Hellman and
key-transport, authenticated via symmetric or asymmetric techniques.
Further contributions of the paper include the formalization of
``secure channels'' in the context of key-exchange protocols, and
establishing sufficient conditions on the symmetric encryption and
authentication functions to realize these channels.

2001

EPRINT

Universally Composable Commitments
Abstract

We propose a new security measure for commitment protocols, called
{\em universally composable} (UC) commitment. The measure
guarantees that commitment protocols behave like an "ideal commitment
service," even when concurrently composed with an arbitrary set of
protocols. This is a strong guarantee: it implies that security is
maintained even when an unbounded number of copies of the scheme are
running concurrently, it implies non-malleability (not only with
respect to other copies of the same protocol but even with respect
to other protocols), it provides resilience to selective
decommitment, and more.
Unfortunately two-party UC commitment protocols do not exist in
the plain model. However, we construct two-party UC commitment
protocols, based on general complexity assumptions, in the
{\em common reference string model}, where all parties have
access to a common string taken from a predetermined distribution.
The protocols are non-interactive, in the sense that both the
commitment and the opening phases consist of a single message
from the committer to the receiver.

2000

EPRINT

Universally Composable Security: A New Paradigm for Cryptographic Protocols
Abstract

We present a general framework for representing cryptographic protocols and analyzing their security. The framework
allows specifying the security requirements of practically any
cryptographic task in a unified and systematic way.
Furthermore, in this framework the security of protocols
is maintained under a general protocol composition operation, called
universal composition.
The proposed framework, with its security-preserving composition property, allows for modular design and analysis of complex cryptographic protocols from relatively simple building blocks.
Moreover, within this framework, protocols are guaranteed to maintain their security in any context, even in the presence of an unbounded number of arbitrary protocol instances that run concurrently in an adversarially controlled manner.
This is a useful guarantee that allows arguing about the security of
cryptographic protocols in complex and unpredictable environments such
as modern communication networks.

1999

EUROCRYPT

1999

EPRINT

Resettable Zero-Knowledge
Abstract

We introduce the notion of Resettable Zero-Knowledge
(rZK), a new security measure for cryptographic protocols which
strengthens the classical notion of zero-knowledge. In essence, an
rZK protocol is one that remains zero knowledge even if an adversary
can interact with the prover many times, each time resetting the
prover to its initial state and forcing him to use the same random
tape.
Under general complexity assumptions, which hold for example if the
Discrete Logarithm Problem is hard, we construct (1) rZK proof-systems
for NP; (2) constant-round resettable witness-indistinguishable
proof-systems for NP; and (3) constant-round rZK arguments for NP in
the public key model where verifiers have fixed, public keys
associated with them.
In addition to shedding new light on what makes zero knowledge
possible (by constructing ZK protocols that use randomness in a
dramatically weaker way than before), rZK has great relevance to
applications. Firstly, we show that rZK protocols are closed under
parallel and concurrent execution and thus are guaranteed to be secure
when implemented in fully asynchronous networks, even if an adversary
schedules the arrival of every message sent. Secondly, rZK protocols
enlarge the range of physical ways in which provers of ZK protocols
can be securely implemented, including devices which cannot reliably
toss coins on-line, nor keep state between invocations. (For
instance, because ordinary smart cards with secure hardware are
resettable, they could not be used to implement securely the provers
of classical ZK protocols, but can now be used to implement securely
the provers of rZK protocols.)

1998

EPRINT

A Modular Approach to the Design and Analysis of Authentication and Key Exchange Protocols
Abstract

We present a general framework for constructing and analyzing authentication
protocols in realistic models of communication networks. This framework
provides a sound formalization for the authentication problem and suggests
simple and attractive design principles for general authentication and key
exchange protocols. The key element in our approach is a modular treatment of
the authentication problem in cryptographic protocols; this applies to the
definition of security, to the design of the protocols, and to their analysis.
In particular, following this modular approach, we show how to systematically
transform solutions that work in a model of idealized authenticated
communications into solutions that are secure in the realistic setting of
communication channels controlled by an active adversary.
Using these principles we construct and prove the security of simple and
practical authentication and key-exchange protocols. In particular, we provide
a security analysis of some well-known key exchange protocols (e.g.
authenticated Diffie-Hellman key exchange), and of some of the techniques
underlying the design of several authentication protocols that are currently
being deployed on a large scale for the Internet Protocol and other
applications.

1998

EPRINT

The Random Oracle Methodology, Revisited
Abstract

We take a critical look at the relationship between the security of
cryptographic schemes in the Random Oracle Model, and the security of the
schemes that result from implementing the random oracle by so called
"cryptographic hash functions".
The main result of this paper is a negative one: There exist signature and
encryption schemes that are secure in the Random Oracle Model, but for which
any implementation of the random oracle results in insecure schemes.
In the process of devising the above schemes, we consider possible definitions
for the notion of a "good implementation" of a random oracle, pointing out
limitations and challenges.

1998

EPRINT

Maintaining Authenticated Communication in the Presence of Break-ins
Abstract

We study the problem of maintaining authenticated communication over untrusted
communication channels, in a scenario where the communicating parties may be
occasionally and repeatedly broken into for transient periods of time. Once
a party is broken into, its cryptographic keys are exposed and perhaps
modified. Yet, we want parties whose security is thus compromised to regain
their ability to communicate in an authenticated way aided by other parties.
In this work we present a mathematical model for this highly adversarial
setting, exhibiting salient properties and parameters, and then describe
a practically-appealing protocol for the task of maintaining authenticated
communication in this model.
A key element in our solution is devising {\em proactive distributed signature
(PDS) schemes} in our model. Although PDS schemes are known in the literature,
they are all designed for a model where authenticated communication and
broadcast primitives are available. We therefore show how these schemes can be
modified to work in our model, where no such primitives are available a priori.
In the process of devising the above schemes, we also present a new definition
of PDS schemes (and of distributed signature schemes in general). This
definition may be of independent interest.

1998

EPRINT

Randomness versus Fault-Tolerance
Abstract

We investigate the relations between two major requirements of multiparty
protocols: {\em fault tolerance} (or {\em resilience}) and {\em randomness}.
Fault-tolerance is measured in terms of the maximum number of colluding faulty
parties, t, that a protocol can withstand and still maintain the privacy of the inputs and the correctness of the outputs (of the honest parties). Randomness
is measured in terms of the total number of random bits needed by the parties
in order to execute the protocol.
Previously, the upper bound on the amount of randomness required by general
constructions for securely computing any non-trivial function f was polynomial
both in $n$, the total number of parties, and the circuit-size C(f). This was
the state of knowledge even for the special case t=1 (i.e., when there is at
most one faulty party). In this paper, we show that for any linear-size
circuit, and for any number t < n/3 of faulty parties, O(poly(t) * log n)
randomness is sufficient. More generally, we show that for any function f
with circuit-size C(f), we need only O(poly(t) * log n + poly(t) * C(f)/n)
randomness in order to withstand any coalition of size at most t.
Furthermore, in our protocol only t+1 parties flip coins and the rest of
the parties are deterministic. Our results generalize to the case of adaptive
adversaries as well.

1998

EPRINT

Security and Composition of Multi-party Cryptographic Protocols
Abstract

We present general definitions of security for multiparty cryptographic
protocols and show that, using these definitions, security is preserved
under a natural composition method.
The definitions follow the general paradigm of known definitions;
yet some substantial modifications and simplifications are introduced. In
particular, `black-box simulation' is no longer required. The composition
method is essentially the natural `subroutine substitution' method suggested
by Micali and Rogaway.
We first present the general definitional approach. Next we consider several
settings for multiparty protocols. These include the cases of non-adaptive
and adaptive adversaries, as well as the information-theoretic and the
computational models.

1997

EPRINT

Towards realizing random oracles: Hash functions that hide all partial information
Abstract

The random oracle model is a very convenient setting for designing
cryptographic protocols. In this idealized model all parties have access
to a common, public random function, called a random oracle.
Protocols in this model are often very simple and efficient; also the
analysis is often clearer. However, we do not have a general mechanism for
transforming protocols that are secure in the random oracle model into
protocols that are secure in real life. In fact, we do not even know how
to meaningfully specify the properties required from such a mechanism.
Instead, it is a common practice to simply replace - often without
mathematical justification - the random oracle with a `cryptographic hash
function' (e.g., MD5 or SHA). Consequently, the resulting protocols have
no meaningful proofs of security.

1996

EPRINT

Deniable Encryption
Abstract

Consider a situation in which the transmission of encrypted
messages is intercepted by an adversary who can later
ask the sender to reveal
the random choices (and also the secret key, if one exists)
used in generating
the ciphertext, thereby exposing the cleartext.
An encryption scheme is **deniable** if the sender can generate
`fake random choices' that will make the ciphertext `look like'
an encryption of a different cleartext, thus keeping the
real cleartext private.
Analogous requirements can be formulated with respect to
attacking the receiver and with respect to attacking both parties.
In this paper we introduce deniable encryption and propose
constructions of schemes with polynomial deniability. In addition to
being interesting by itself, and having several applications, deniable
encryption provides a simplified and elegant construction of
**adaptively secure** multiparty computation.
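
The simplest illustration of the deniability idea comes from the one-time pad in the shared-key setting: any intercepted ciphertext can be ``explained'' as any equal-length plaintext by presenting suitably faked key material. (The paper's contribution is achieving such deniability for public-key encryption, where this trivial trick is unavailable.)

```python
import os

def otp_encrypt(key: bytes, msg: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(key, msg))

def fake_key(ciphertext: bytes, claimed_msg: bytes) -> bytes:
    # A "key" under which the intercepted ciphertext decrypts to the
    # claimed plaintext rather than the real one.
    return bytes(a ^ b for a, b in zip(ciphertext, claimed_msg))

real_msg = b"attack at dawn"
key = os.urandom(len(real_msg))
c = otp_encrypt(key, real_msg)            # intercepted by the coercer

claimed = b"buy more bread"               # any plaintext of equal length
k_fake = fake_key(c, claimed)
assert otp_encrypt(k_fake, c) == claimed  # coercer sees the fake plaintext
assert otp_encrypt(key, c) == real_msg    # the real plaintext stays private
```

Since the one-time pad key is uniformly random, the faked key is distributed identically to a genuine one, so the coercer cannot distinguish the two openings; the public-key setting removes this freedom, which is what makes the constructions in the paper nontrivial.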

#### Program Committees

- Crypto 2019
- TCC 2017
- Crypto 2017
- Eurocrypt 2016
- Crypto 2014
- Crypto 2013
- Crypto 2012
- Eurocrypt 2010
- TCC 2008
- TCC 2007
- TCC 2004
- Crypto 2001
- Crypto 2000

#### Coauthors

- Gilad Asharov (2)
- Boaz Barak (4)
- Mihir Bellare (2)
- Nir Bitansky (12)
- Suresh Chari (1)
- Yilei Chen (5)
- Ling Cheung (4)
- Alessandro Chiesa (2)
- Asaf Cohen (1)
- Henry Cohn (1)
- Dana Dachman-Soled (1)
- Ronny Ramzi Dakdouk (2)
- Ivan Damgård (3)
- Yevgeniy Dodis (3)
- Cynthia Dwork (2)
- Stefan Dziembowski (3)
- Dror Eiger (1)
- Marc Fischlin (2)
- Benjamin Fuller (2)
- Sebastian Gajek (1)
- Rosario Gennaro (1)
- Oded Goldreich (4)
- Shafi Goldwasser (8)
- Vipul Goyal (2)
- Shai Halevi (22)
- Carmit Hazay (2)
- Amir Herzberg (3)
- Jonathan Herzog (3)
- Susan Hohenberger (1)
- Justin Holmgren (3)
- Yuval Ishai (3)
- Abhishek Jain (3)
- Stanislaw Jarecki (1)
- Yael Tauman Kalai (9)
- Jonathan Katz (9)
- Dilsun Kaynar (3)
- Joe Kilian (1)
- Hugo Krawczyk (11)
- Eyal Kushilevitz (6)
- Amit Lichtenberg (1)
- Dah-Yoh Lim (1)
- Huijia Lin (4)
- Yehuda Lindell (11)
- Moses Liskov (1)
- Nancy Lynch (4)
- Philip D. MacKenzie (2)
- Tal Malkin (4)
- Silvio Micali (1)
- Moni Naor (2)
- Jesper Buus Nielsen (2)
- Kobbi Nissim (1)
- Rafail Ostrovsky (5)
- Omer Paneth (12)
- Dimitrios Papadopoulos (1)
- Rafael Pass (5)
- Olivier Pereira (4)
- Erez Petrank (1)
- Birgit Pfitzmann (1)
- Oxana Poburinnaya (3)
- Tal Rabin (6)
- Srinivasan Raghuraman (1)
- Mariana Raykova (2)
- Leonid Reyzin (5)
- Silas Richelson (2)
- Ronald L. Rivest (1)
- Alon Rosen (3)
- Adi Rosén (2)
- Ron D. Rothblum (1)
- Guy N. Rothblum (2)
- Arnab Roy (1)
- Aviad Rubinstein (2)
- Amit Sahai (3)
- Roberto Segala (1)
- Daniel Shahaf (2)
- Adam Smith (1)
- Michael Steiner (5)
- Madhu Sudan (1)
- Stefano Tessaro (1)
- Luca Trevisan (1)
- Nikos Triandopoulos (1)
- Eran Tromer (2)
- Salil P. Vadhan (1)
- Vinod Vaikuntanathan (3)
- Margarita Vald (2)
- Mayank Varia (5)
- Wietse Venema (1)
- Muthuramakrishnan Venkitasubramaniam (1)
- Shabsi Walfish (2)
- Hoeteck Wee (2)
- Daniel Wichs (2)