## CryptoDB

### Dennis Hofheinz

#### Affiliation: KIT, Germany

#### Publications


**2019, PKC:** On Tightly Secure Primitives in the Multi-instance Setting

We initiate the study of general tight reductions in cryptography. There already exist a variety of works that offer tight reductions for a number of cryptographic tasks, ranging from encryption and signature schemes to proof systems. However, our work is the first to provide a universal definition of a tight reduction (for arbitrary primitives), along with several observations and results concerning primitives for which tight reductions have not been known.

Technically, we start from the general notion of reductions due to Reingold, Trevisan, and Vadhan (TCC 2004), and equip it with a quantification of the respective reduction loss and a canonical multi-instance extension to primitives. We then revisit several standard reductions whose tight security has not yet been considered. For instance, we revisit a generic construction of signature schemes from one-way functions, and show how to tighten the corresponding reduction by assuming collision resistance of the one-way function used. We also obtain tightly secure pseudorandom generators (by using suitable rerandomisable hard-core predicates), and tightly secure lossy trapdoor functions.
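Among the revisited primitives are pseudorandom generators built from hard-core predicates. As a point of reference only, the classic Blum-Micali construction (not the tightened variant studied in the paper) can be sketched as follows; the modulus and base are purely illustrative toy parameters, far too small for any real security:

```python
# Toy Blum-Micali-style PRG: illustrative sketch only, with hypothetical
# toy parameters -- not the tightly secure construction from the paper.
P = 2_147_483_647          # Mersenne prime 2^31 - 1 (toy modulus)
G = 7                      # toy base, assumed to generate a large subgroup

def hardcore_bit(x: int) -> int:
    # Classic Blum-Micali hard-core predicate: is x in the upper half?
    return 1 if x > (P - 1) // 2 else 0

def prg(seed: int, n_bits: int) -> list[int]:
    """Stretch a seed into n_bits bits by iterating the (assumed one-way)
    map x -> G^x mod P and emitting one hard-core bit per iteration."""
    x, out = seed % P, []
    for _ in range(n_bits):
        out.append(hardcore_bit(x))
        x = pow(G, x, P)
    return out
```

The security loss of the classical hybrid argument for this construction grows with the output length, which is exactly the kind of loss the paper's rerandomisation technique addresses.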

**2019, EUROCRYPT:** Designated-Verifier Pseudorandom Generators, and Their Applications

We provide a generic construction of non-interactive zero-knowledge (NIZK) schemes. Our construction is a refinement of Dwork and Naor’s (FOCS 2000) implementation of the hidden bits model using verifiable pseudorandom generators (VPRGs). Our refinement simplifies their construction and relaxes the necessary assumptions considerably.

As a result of this conceptual improvement, we obtain interesting new instantiations:

- A designated-verifier NIZK (with unbounded soundness) based on the computational Diffie-Hellman (CDH) problem. If a pairing is available, this NIZK becomes publicly verifiable. This constitutes the first fully secure CDH-based designated-verifier NIZK (and, more generally, the first fully secure designated-verifier NIZK from a non-generic assumption that does not already imply publicly verifiable NIZKs), and it answers an open problem recently raised by Kim and Wu (CRYPTO 2018).
- A NIZK based on the learning with errors (LWE) assumption, assuming a non-interactive witness-indistinguishable (NIWI) proof system for bounded distance decoding (BDD). This simplifies and improves upon a recent NIZK from LWE that assumes a NIZK for BDD (Rothblum et al., PKC 2019).

**2019, ASIACRYPT:** Dual-Mode NIZKs from Obfuscation

Two standard security properties of a non-interactive zero-knowledge (NIZK) scheme are soundness and zero-knowledge. But while standard NIZK systems can only provide one of those properties against unbounded adversaries, dual-mode NIZK systems allow one to choose dynamically and adaptively which of these properties holds unconditionally. The only known dual-mode NIZK schemes are Groth-Sahai proofs (which have proved extremely useful in a variety of applications), and the FHE-based NIZK constructions of Canetti et al. and Peikert et al., which are concurrent with and independent of this work. However, all these constructions rely on specific algebraic settings.

Here, we provide a generic construction of dual-mode NIZK systems for all of NP. The public parameters of our scheme can be set up in one of two indistinguishable ways. One way provides unconditional soundness, while the other provides unconditional zero-knowledge. Our scheme relies on subexponentially secure indistinguishability obfuscation and subexponentially secure one-way functions, but otherwise only on comparatively mild and generic computational assumptions. These generic assumptions can be instantiated under any one of the DDH, $$k$$-LIN, DCR, or QR assumptions.

As an application, we reduce the assumptions necessary for several recent obfuscation-based constructions of multilinear maps. Combined with previous work, our scheme can be used to construct multilinear maps from obfuscation and a group in which the strong Diffie-Hellman assumption holds. We also believe that our work adds to the understanding of the construction of NIZK systems, as it provides a conceptually new way to achieve dual-mode properties.

**2018, CRYPTO:** On Tightly Secure Non-Interactive Key Exchange

We consider the reduction loss of security reductions for non-interactive key exchange (NIKE) schemes. Currently, no tightly secure NIKE schemes exist, and in fact Bader et al. (EUROCRYPT 2016) provide a lower bound (of $$\varOmega (n^2)$$, where $$n$$ is the number of parties an adversary interacts with) on the reduction loss for a large class of NIKE schemes.

We offer two results. First, we give the first NIKE scheme with a reduction loss of $$n/2$$; it circumvents the lower bound of Bader et al., but is of course still far from tightly secure. Second, we provide a generalization of Bader et al.’s lower bound to a larger class of NIKE schemes (one that also covers our NIKE scheme), with an adapted lower bound of $$n/2$$ on the reduction loss. Hence, in that sense, the reduction for our NIKE scheme is optimal.
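For readers unfamiliar with the NIKE abstraction, the basic (non-tight) Diffie-Hellman NIKE can be sketched as follows. All parameters here are illustrative toy values, and this is emphatically not the paper's scheme:

```python
import hashlib
import secrets

# Toy Diffie-Hellman NIKE sketch: each party publishes one public key
# once; any two parties then derive a shared key without interaction.
P = 2**127 - 1   # Mersenne prime used as a toy modulus (not a vetted group)
G = 3            # toy base

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def shared_key(my_sk: int, their_pk: int, my_pk: int) -> str:
    # Both parties compute g^{xy}; hashing in the sorted public keys
    # binds the derived key to this specific pair of parties.
    dh = pow(their_pk, my_sk, P)
    transcript = b"|".join(str(v).encode() for v in sorted((my_pk, their_pk)))
    return hashlib.sha256(transcript + str(dh).encode()).hexdigest()
```

A security reduction for such a scheme must typically guess which of the $$n$$ parties the adversary attacks, which is the source of the reduction loss the paper analyzes.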

**2018, PKC:** Interactively Secure Groups from Obfuscation

We construct a mathematical group in which an interactive variant of the very general Uber assumption holds. Our construction uses probabilistic indistinguishability obfuscation, fully homomorphic encryption, and a pairing-friendly group in which a mild and standard computational assumption holds. While our construction is not practical, it constitutes a feasibility result: it shows that, under one strong but generic assumption and one mild standard assumption, groups exist in which very general computational assumptions hold. We believe that this grants additional credibility to the Uber assumption.

**2018, PKC:** Graded Encoding Schemes from Obfuscation

We construct a graded encoding scheme (GES), an approximate form of graded multilinear maps. Our construction relies on indistinguishability obfuscation, and a pairing-friendly group in which (a suitable variant of) the strong Diffie–Hellman assumption holds. As a result of this abstract approach, our GES has a number of advantages over previous constructions. Most importantly:
- We can prove that the multilinear decisional Diffie–Hellman (MDDH) assumption holds in our setting, assuming the ingredients used are secure (in a well-defined and standard sense). Hence, our GES does not succumb to so-called "zeroizing" attacks if the underlying ingredients are secure.
- Encodings in our GES do not carry any noise. Thus, unlike in previous GES constructions, there is no upper bound on the number of operations one can perform with our encodings. Hence, our GES essentially realizes what Garg et al. (EUROCRYPT 2013) call the "dream version" of a GES.
Technically, our scheme extends a previous, non-graded approximate multilinear map scheme due to Albrecht et al. (TCC 2016-A). To introduce a graded structure, we develop a new view of encodings at different levels as polynomials of different degrees.

**2018, ASIACRYPT:** Identity-Based Encryption Tightly Secure Under Chosen-Ciphertext Attacks

We propose the first identity-based encryption (IBE) scheme that is (almost) tightly secure against chosen-ciphertext attacks. Our scheme is efficient, in the sense that its ciphertext overhead is only seven group elements, three group elements more than that of the state-of-the-art passively (almost) tightly secure IBE scheme. Our scheme is secure in a multi-challenge setting, i.e., in the face of an arbitrary number of challenge ciphertexts. The security of our scheme is based on the standard symmetric external Diffie-Hellman assumption in pairing-friendly groups, but we also consider (less efficient) generalizations under weaker assumptions.

**2016, TCC**

**2015, JOFC**

**2015, EPRINT**

**2013, JOFC:** Polynomial Runtime and Composability

We devise a notion of polynomial runtime suitable for the simulation-based security analysis of multi-party cryptographic protocols. Somewhat surprisingly, straightforward notions of polynomial runtime lack expressivity for reactive tasks and/or lead to an unnatural simulation-based security notion. Indeed, the problem has been recognized in previous works, and several notions of polynomial runtime have already been proposed. However, our new notion, dubbed reactive polynomial time, is the first to combine the following properties:

- It is simple enough to support straightforward security/runtime analyses.
- It is intuitive in the sense that all intuitively feasible protocols and attacks (and only those) are considered polynomial-time.
- It supports secure composition of protocols in the sense of a universal composition theorem.

We work in the Universal Composability (UC) protocol framework. We remark that while the UC framework already features a universal composition theorem, we develop new techniques to prove secure composition in the case of reactively polynomial-time protocols and attacks.

**2012, JOFC:** Bonsai Trees, or How to Delegate a Lattice Basis

We introduce a new lattice-based cryptographic structure called a bonsai tree, and use it to resolve some important open problems in the area. Applications of bonsai trees include an efficient, stateless ‘hash-and-sign’ signature scheme in the standard model (i.e., no random oracles), and the first hierarchical identity-based encryption (HIBE) scheme (also in the standard model) that does not rely on bilinear pairings. Interestingly, the abstract properties of bonsai trees seem to have no known realization in conventional number-theoretic cryptography.

**2012, JOFC:** Programmable Hash Functions and Their Applications

We introduce a new combinatorial primitive called programmable hash functions (PHFs). PHFs can be used to program the output of a hash function such that it contains solved or unsolved discrete logarithm instances with a certain probability. This is a technique originally used for security proofs in the random oracle model. We give a variety of standard-model realizations of PHFs (with different parameters).

The programmability makes PHFs a suitable tool to obtain black-box proofs of cryptographic protocols when considering adaptive attacks. We propose generic digital signature schemes from the strong RSA problem and from certain hardness assumptions on bilinear maps that can be instantiated with any PHF. Our schemes offer various improvements over known constructions. In particular, for a reasonable choice of parameters, we obtain short standard-model digital signatures over bilinear maps.
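A well-known standard-model example in the PHF literature is the multi-generator (Waters-style) hash, whose trapdoor is the vector of exponents behind the public group elements. The sketch below uses illustrative toy parameters and is an independent illustration of the concept, not code from the paper:

```python
import secrets

# Toy multi-generator hash over a toy group: H(m) = h_0 * prod_{m_i=1} h_i.
# The trapdoor (the exponents a_i) is what "programming" manipulates.
P = 2**127 - 1   # toy prime modulus (not a vetted group)
G = 3            # toy base

def phf_keygen(msg_len: int):
    exps = [secrets.randbelow(P - 1) for _ in range(msg_len + 1)]
    pub = [pow(G, a, P) for a in exps]
    return pub, exps            # pub is the hash key, exps the trapdoor

def phf_eval(pub, bits):
    h = pub[0]
    for hi, b in zip(pub[1:], bits):
        if b:
            h = (h * hi) % P
    return h

def phf_trapdoor_log(exps, bits):
    # With the trapdoor, the discrete log of H(m) is directly computable;
    # choosing the a_i cleverly yields "solved"/"unsolved" instances.
    return (exps[0] + sum(a for a, b in zip(exps[1:], bits) if b)) % (P - 1)
```

Programming here means sampling the exponents so that, with noticeable probability, the hash of adversarial messages lands on instances whose discrete log the reduction knows, while challenge messages land on unsolved ones.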

**2009, EUROCRYPT**

**2009, EPRINT:** A general framework for computational soundness proofs - or - The computational soundness of the applied pi-calculus

We provide a general framework for conducting computational soundness proofs of symbolic models. Our framework considers arbitrary sets of constructors, deduction rules, and computational implementations, and it abstracts away from many details that are not core to proving computational soundness, such as message scheduling, corruption models, and even the internal structure of a protocol. We identify several properties of a so-called simulator such that the existence of a simulator with all these properties already entails computational soundness, in the sense of preservation of trace properties, in our framework. This simulator-based characterization allows for proving soundness results in a conceptually modular and generic way.

We exemplify the usefulness of our framework by proving the first computational soundness result for the full-fledged applied $\pi$-calculus under active attacks. Concretely, we embed the applied $\pi$-calculus into our framework and give a sound implementation of public-key encryption.

**2009, EPRINT:** Polynomial Runtime and Composability

To prove security of a multi-party cryptographic protocol, one often reduces attacks on the protocol to attacks on a suitable computational problem. Thus, if the computational problem is hard, then the protocol is secure. But to allow for a security reduction, the protocol itself and the attack on the protocol must be efficient, i.e., polynomial-time. Of course, the obvious way to enforce an overall polynomial runtime of the protocol is to require each individual protocol machine and adversarial entity to be polynomial-time. However, as the specific case of zero-knowledge protocols demonstrates, an a priori polynomial-time bound on all entities may not be an optimal choice because the running time of some machines needs to depend on that of others. As we want to be able to model arbitrary protocol tasks, we work in the Universal Composability framework (UC). This framework additionally provides strong composability guarantees. We will point out that in the UC setting, finding a useful notion of polynomial-time for the analysis of general protocols is a highly non-trivial task.
Our goal in this work is to find a good and useful definition of polynomial-time for multi-party protocols in the UC setting that matches the intuition of what is feasible. A good definition should have the following properties:
* Flexibility: All "intuitively feasible" protocols and protocol tasks should be considered polynomial-time.
* Soundness: All "intuitively feasible" attacks (i.e., adversaries) should be considered polynomial-time.
* Completeness: Only "intuitively feasible" attacks should be considered polynomial-time. In particular, this implies that the security of protocols can be reduced to computational hardness assumptions.
* Composability: The induced security notion should support secure (universal) composition of protocols.
* Simplicity: The notion should be easy to formulate, and for all practical cases, it should be easy to decide whether a protocol or attack runs in polynomial time.
The problem of finding a good definition of polynomial time in the UC framework has been considered in a number of works, but no definition satisfying the five criteria above had been found so far. This seemingly simple problem is surprisingly elusive, and it is hard to come up with a definition that does not involve many technical artifacts.
In this contribution, we give a definition of polynomial time for cryptographic protocols in the UC model, called reactively polynomial, that satisfies all five properties. Our notion is simple and easy to verify. We argue for its flexibility, completeness and soundness with practical examples that are problematic with previous approaches. We give a very general composition theorem for reactively polynomial protocols. The theorem states that arbitrarily many instances of a secure protocol can be used in any larger protocol without sacrificing security. Our proof is technically different from and substantially more involved than proofs for previous protocol composition theorems (for previous definitions of polynomial runtime). We believe that it is precisely this additional proof complexity, which appears only once and for all in the proof of the composition theorem, that makes a useful definition as simple as ours possible.

**2008, EPRINT:** Possibility and impossibility results for selective decommitments

The *selective decommitment problem* can be described as follows: assume an adversary receives a number of commitments and then may request openings of, say, half of them. Do the unopened commitments remain secure? Although this question arose more than twenty years ago, no satisfactory answer has been presented so far. We answer the question in several ways:
- If simulation-based security is desired (i.e., if we demand that the adversary's output can be simulated by a machine that does not see the unopened commitments), then security is *not achievable* for non-interactive or perfectly binding commitment schemes via black-box reductions to standard cryptographic assumptions. *However,* we show how to achieve security in this sense with interaction and a non-black-box reduction to one-way permutations.
- If only indistinguishability of the unopened commitments from random commitments is desired, then security is *not achievable* for (interactive or non-interactive) perfectly binding commitment schemes, via black-box reductions to standard cryptographic assumptions. *However,* any statistically hiding scheme *does* achieve security in this sense.
Our results give an almost complete picture when and how security under selective openings can be achieved. Applications of our results include:
- Essentially, an encryption scheme *must* be non-committing in order to achieve provable security against an adaptive adversary.
- When implemented with our commitment scheme, the interactive proof for graph 3-coloring due to Goldreich et al. becomes black-box zero-knowledge under parallel composition.
On the technical side, we develop a technique to show very general impossibility results for black-box proofs.

**2007, EPRINT:** Secure Hybrid Encryption from Weakened Key Encapsulation

We put forward a new paradigm for building hybrid encryption schemes from constrained chosen-ciphertext secure (CCCA) key-encapsulation mechanisms (KEMs) plus authenticated symmetric encryption. Constrained chosen-ciphertext security is a new security notion for KEMs that we propose. CCCA has less demanding security requirements than standard chosen-ciphertext (CCA) security (since it requires the adversary to have certain plaintext knowledge when making a decapsulation query), yet we can prove that CCCA is sufficient for secure hybrid encryption.

Our notion is not only useful to express the Kurosawa-Desmedt public-key encryption scheme and its generalizations to hash-proof systems in an abstract KEM/DEM security framework. It also has a very constructive appeal, which we demonstrate with a new encryption scheme whose security relies on a class of intractability assumptions that we show (in the generic group model) to be strictly weaker than the Decision Diffie-Hellman (DDH) assumption. This appears to be the first practical public-key encryption scheme in the literature based on an algebraic assumption strictly weaker than DDH.
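The KEM/DEM paradigm itself can be sketched generically. The toy ElGamal-style KEM and hash-based authenticated DEM below are illustrative stand-ins (with toy parameters and a one-block "stream cipher"), not the paper's CCCA-based construction:

```python
import hashlib
import hmac
import secrets

# Toy KEM/DEM hybrid: KEM fixes a session key, authenticated DEM encrypts.
P = 2**127 - 1   # toy prime modulus
G = 3            # toy base

def kem_keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def kem_encap(pk):
    r = secrets.randbelow(P - 2) + 1
    ct = pow(G, r, P)                                       # encapsulation
    k = hashlib.sha256(str(pow(pk, r, P)).encode()).digest()  # session key
    return ct, k

def kem_decap(sk, ct):
    return hashlib.sha256(str(pow(ct, sk, P)).encode()).digest()

def dem_encrypt(k: bytes, msg: bytes):
    assert len(msg) <= 32, "toy DEM: one 32-byte block only"
    pad = hashlib.sha256(k + b"enc").digest()
    c = bytes(m ^ p for m, p in zip(msg, pad))
    tag = hmac.new(k, c, hashlib.sha256).digest()  # authenticate ciphertext
    return c, tag

def dem_decrypt(k: bytes, c: bytes, tag: bytes) -> bytes:
    if not hmac.compare_digest(tag, hmac.new(k, c, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    pad = hashlib.sha256(k + b"enc").digest()
    return bytes(x ^ p for x, p in zip(c, pad))
```

The authenticated DEM is what lets the paper relax the KEM's requirements: ill-formed ciphertexts are rejected by the DEM's tag check rather than by the KEM.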

**2007, EPRINT:** Towards Key-Dependent Message Security in the Standard Model

Standard security notions for encryption schemes do not guarantee any security if the encrypted messages depend on the secret key. Yet it is exactly the stronger notion of security in the presence of *key-dependent* messages (KDM security) that is required in a number of applications: most prominently, KDM security plays an important role in analyzing cryptographic multi-party protocols in a formal calculus. But although often assumed, the mere existence of KDM secure schemes is an open problem. The only previously known construction was proven secure in the random oracle model.
We present symmetric encryption schemes that are KDM secure in the standard model (i.e., without random oracles). The price we pay is that we achieve only a relaxed (but still useful) notion of key-dependent message security. Our work answers (at least partially) an open problem posed by Black, Rogaway, and Shrimpton. More concretely, our contributions are as follows:
- We present a (stateless) symmetric encryption scheme that is information-theoretically secure in face of a *bounded* number and length of encryptions for which the messages depend in an arbitrary way on the secret key.
- We present a stateful symmetric encryption scheme that is computationally secure in face of an arbitrary number of encryptions for which the messages depend only on the respective *current* secret state/key of the scheme. The underlying computational assumption is minimal: we assume the existence of one-way functions.
- We give evidence that the only previously known KDM secure encryption scheme cannot be proven secure in the standard model (i.e., without random oracles).
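The key-evolution idea behind a stateful scheme of this kind, where each state is derived one-way from the previous one so that messages may only depend on the current state, can be illustrated generically. SHA-256 stands in for an arbitrary one-way function here; this is a sketch of the general idea, not the paper's construction:

```python
import hashlib

# Sketch of stateful encryption with one-way key evolution: after each
# encryption the state moves forward irreversibly, so earlier pads cannot
# be recomputed from the current state. Toy code, not a secure scheme.
class StatefulCipher:
    def __init__(self, key: bytes):
        self.state = key

    def _pad(self) -> bytes:
        return hashlib.sha256(self.state + b"pad").digest()

    def _evolve(self) -> None:
        # One-way state update: SHA-256 stands in for a one-way function.
        self.state = hashlib.sha256(self.state + b"next").digest()

    def encrypt(self, msg: bytes) -> bytes:
        assert len(msg) <= 32, "toy: one 32-byte block per state"
        ct = bytes(m ^ p for m, p in zip(msg, self._pad()))
        self._evolve()
        return ct

    def decrypt(self, ct: bytes) -> bytes:
        msg = bytes(c ^ p for c, p in zip(ct, self._pad()))
        self._evolve()
        return msg
```

Sender and receiver start from the same key and evolve their states in lockstep, so each ciphertext is decrypted under the matching state.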

**2006, EPRINT:** The Kurosawa-Desmedt Key Encapsulation is not Chosen-Ciphertext Secure

At CRYPTO 2004, Kurosawa and Desmedt presented a hybrid public-key encryption scheme that is chosen-ciphertext secure in the standard model. Until now it was unknown whether the key-encapsulation part of the Kurosawa-Desmedt scheme by itself is still chosen-ciphertext secure or not. In this short note we answer this question in the negative: we present a simple chosen-ciphertext attack on the Kurosawa-Desmedt key encapsulation mechanism.

**2006, EPRINT:** Simulatable Security and Polynomially Bounded Concurrent Composition

Simulatable security is a security notion for multi-party protocols that implies strong composability features. The main definitional flavours of simulatable security are standard simulatability, universal simulatability, and black-box simulatability. All three come in "computational," "statistical" and "perfect" subflavours indicating the considered adversarial power. Universal and black-box simulatability, in all of their subflavours, are already known to guarantee that the concurrent composition even of a polynomial number of secure protocols stays secure.
We show that computational standard simulatability does not allow for secure concurrent composition of polynomially many protocols, but we also show that statistical standard simulatability does. The first result assumes the existence of an interesting cryptographic tool (namely time-lock puzzles), and its proof employs a cryptographic multi-party computation in an interesting and unconventional way.

**2006, EPRINT:** Conditional Reactive Simulatability

Simulatability has established itself as a salient notion for defining and proving the security of cryptographic protocols, since it entails strong security and compositionality guarantees, which are achieved by universally quantifying over all environmental behaviors of the analyzed protocol. As a consequence, however, protocols that are secure except for certain environmental behaviors are not simulatable, even if these behaviors are efficiently identifiable and thus can be prevented by the surrounding protocol.

We propose a relaxation of simulatability by conditioning the permitted environmental behaviors, i.e., simulation is only required for environmental behaviors that fulfill explicitly stated constraints. This yields a more fine-grained security definition that is achievable (i) for several protocols for which unconditional simulatability is too strict a notion, or (ii) at lower cost for the underlying cryptographic primitives. Although imposing restrictions on the environment destroys unconditional composability in general, we show that the composition of a large class of conditionally simulatable protocols yields protocols that are again simulatable under suitable conditions. This even holds for the case of cyclic assume-guarantee conditions, where protocols only guarantee suitable behavior if they themselves are offered certain guarantees. Furthermore, composing several commonly investigated protocol classes with conditionally simulatable subprotocols yields protocols that are again simulatable in the standard, unconditional sense.

**2006, EPRINT:** On the (Im-)Possibility of Extending Coin Toss

We consider the cryptographic two-party protocol task of extending a given coin toss. The goal is to generate $n$ common random coins from a single use of an ideal functionality which gives $m<n$ common random coins to the parties. In the framework of Universal Composability we show the impossibility of securely extending a coin toss for statistical and perfect security. On the other hand, for computational security the existence of a protocol for coin toss extension depends on the number $m$ of random coins which can be obtained "for free".
For the case of stand-alone security, i.e., a simulation-based security definition without an environment, we present a novel protocol for unconditionally secure coin toss extension. The new protocol works for superlogarithmic $m$, which is optimal, as we show the impossibility of statistically secure coin toss extension for smaller $m$.
Combining our results with already known results, we obtain a (nearly) complete characterization under which circumstances coin toss extension is possible.

**2006, EPRINT:** KEM/DEM: Necessary and Sufficient Conditions for Secure Hybrid Encryption

The KEM/DEM hybrid encryption paradigm combines the efficiency and large message space of secret-key encryption with the advantages of public-key cryptography. Due to its simplicity and flexibility, the approach has gained widespread popularity and has been successfully adopted in encryption standards.

In hybrid public key encryption (PKE), a key encapsulation mechanism (KEM) is first used to fix a random session key that is then fed into a highly efficient data encapsulation mechanism (DEM) to encrypt the actual message. A composition theorem states that if both the KEM and the DEM have the highest level of security (i.e., security against chosen-ciphertext attacks), then so does the hybrid PKE scheme. It is not known whether these strong security requirements on the KEM and DEM are also necessary, nor whether such general composition theorems exist for weaker levels of security. In this work we study necessary and sufficient conditions on the security of the KEM and the DEM in order to guarantee a hybrid PKE scheme with a certain given level of security. More precisely, using nine different security notions for KEMs, ten for DEMs, and six for PKE schemes, we completely characterize which combinations lead to a secure hybrid PKE scheme (by proving a composition theorem) and which do not (by providing counterexamples).
Furthermore, as an independent result, we revisit and extend prior work on the relation among security notions for KEMs and DEMs.

**2006, EPRINT:** A Note on Bounded Chosen Ciphertext Security from Black-box Semantical Security

Designing public-key encryption schemes withstanding chosen-ciphertext attacks, which is the highest security level for such schemes, is generally perceived as a delicate and intricate task, and for good reason. In the standard model, there are essentially three well-known but quite involved approaches.

This state of affairs is to be contrasted with the situation for semantically secure encryption schemes, a much weaker security notion that only guarantees security in the absence of active attacks but that appears to be much easier to fulfill, both conceptually and practically. Thus, the boundary between passive and active attacks seems to make up the dividing line between which security levels are relatively easily achieved and which are not. Our contributions are two-fold.

First, we show a simple, efficient black-box construction of a public-key encryption scheme withstanding chosen-ciphertext attacks from any given semantically secure one. Our scheme is $q$-bounded in the sense that security is only guaranteed if the adversary makes at most $q$ adaptive chosen-ciphertext queries. Here, $q$ is an arbitrary polynomial that is fixed in advance at key generation. Our work thus shows that the dividing line is whether or not the number of active, adversarial queries is known in advance, and not passive versus active attacks. In recent work, Gertner, Malkin and Myers show that such black-box reductions are impossible if instead $q$ is a polynomial that only depends on the adversary. Thus, in a sense, our result appears to be the best black-box result one can hope for.

Second, we give a non-black-box reduction from bounded chosen-ciphertext security to semantic security in which the length of the public/secret keys and ciphertexts drops from quadratic to linear in $q$, compared to our black-box construction. This latter scheme, however, is only of theoretical interest, as it uses general NP-reductions, and our black-box construction is in fact much more practical.

**2006, EPRINT:** Obfuscation for Cryptographic Purposes

An obfuscation O of a function F should satisfy two requirements: firstly, using O it should be possible to evaluate F; secondly, O should not reveal anything about F that cannot be learnt from oracle access to F. Several definitions for obfuscation exist. However, most of them are either too weak for or incompatible with cryptographic applications, or have been shown impossible to achieve, or both.
We give a new definition of obfuscation and argue for its reasonability and usefulness. In particular, we show that it is strong enough for cryptographic applications, yet it still allows for interesting positive results. We illustrate this with the following two results:
- If the encryption algorithm of a secure secret-key encryption scheme can be obfuscated according to our definition, then the result is a secure public-key encryption scheme.
- A uniformly random point function can be easily obfuscated according to our definition, by simply applying a one-way permutation. Previous obfuscators for point functions, under varying notions of security, are either probabilistic or in the random oracle model (but work for arbitrary distributions on the point function).
On the negative side, we show that
- Following Hada and Wee, any family of deterministic functions that can be obfuscated according to our definition must already be "approximately learnable." Thus, many deterministic functions cannot be obfuscated. However, a probabilistic functionality such as a probabilistic secret-key encryption scheme can potentially be obfuscated. In particular, this is possible for a public-key encryption scheme when viewed as a secret-key scheme.
- There exists a secure probabilistic secret-key encryption scheme that cannot be obfuscated according to our definition. Thus, we cannot hope for a general-purpose cryptographic obfuscator for encryption schemes.
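The positive result for random point functions is simple enough to sketch: publish a one-way image of the point, and evaluate by comparing images. In the toy below, SHA-256 stands in for the one-way permutation of the abstract result (an assumption made purely for illustration; SHA-256 is a hash, not a permutation):

```python
import hashlib

# Toy point-function "obfuscation": the obfuscation of I_x is just a
# one-way image of x. Evaluation checks whether a candidate maps to it.
def obfuscate_point(x: bytes) -> bytes:
    # Ideally reveals nothing about x beyond oracle access to I_x.
    return hashlib.sha256(x).digest()

def eval_point(obf: bytes, candidate: bytes) -> bool:
    return hashlib.sha256(candidate).digest() == obf
```

The obfuscation is a single group element (here, a digest), and correctness is immediate; the security argument in the abstract rests on the one-wayness of the permutation and the uniform choice of the point.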

**2005, EPRINT:** On the Notion of Statistical Security in Simulatability Definitions

We investigate the definition of statistical security (i.e., security against unbounded adversaries) in the framework of reactive simulatability. This framework makes it possible to formulate and analyze multi-party protocols modularly by providing a composition theorem for protocols. However, we show that the notion of statistical security, as defined by Backes, Pfitzmann and Waidner for the reactive simulatability framework, does not allow for secure composition of protocols. This in particular invalidates the proof of the composition theorem.

We give evidence that the reason for the non-composability of statistical security is not an artifact of the framework itself, but of the particular formulation of statistical security. Therefore, we give a modified notion of statistical security in the reactive simulatability framework, and we prove that this notion allows for secure composition of protocols.

Since, to the best of our knowledge, no formal definition of statistical security has been fixed for Canetti's universal composability framework, we believe that our observations and results can also help to avoid potential pitfalls there.

**2005, EPRINT:** A Practical Attack on the Root Problem in Braid Groups

Using a simple heuristic approach to the root problem in braid groups, we show that cryptographic parameters proposed in this context must be considered as insecure. In our experiments we can, often within seconds, extract the secret key of an authentication system based on the root problem in braid groups.

**2005, EPRINT:** On Fairness in Simulatability-based Cryptographic Systems

Simulatability constitutes the cryptographic notion of a secure refinement and has asserted its position as one of the fundamental concepts of modern cryptography. Although simulatability carefully captures the idea that a distributed protocol does not behave any worse than an ideal specification, it does not, however, capture any form of liveness guarantees, i.e., that something good eventually happens in the protocol.
We show how one can extend the notion of simulatability to comprise liveness guarantees by imposing specific fairness constraints on the adversary. As the common notion of fairness based on infinite runs and eventual message delivery is not suited for reasoning about polynomial-time, cryptographic systems, we propose a new definition of fairness that enforces the delivery of messages after a polynomial number of steps. We provide strengthened variants of this definition by granting the protocol parties explicit guarantees on the maximum delay of messages. The variants thus capture fairness with explicit timeout signals, and we further distinguish between fairness with local timeouts and fairness with global timeouts.
We compare the resulting notions of fair simulatability, and provide separating examples that help to classify the strengths of the definitions and that show that the different definitions of fairness imply different variants of simulatability.

2004

EPRINT

A Synchronous Model for Multi-Party Computation and the Incompleteness of Oblivious Transfer
Abstract

This work develops a composable notion of security in a synchronous communication network to analyze cryptographic primitives and protocols in a reliable network with guaranteed delivery. In such a synchronous model the abort of protocols must be handled explicitly. It is shown that a version of *global bit commitment*, which allows one to identify parties that did not give proper input, cannot be securely realized with the primitives *oblivious transfer* and *broadcast*. This proves that the primitives oblivious transfer and broadcast are not complete in our synchronous model of security.
In the synchronous model presented, ideal functionalities as well as parties can be equipped with a "shell" which can delay communication until the adversary allows delivery, or until the number of rounds since the shell received the message exceeds a specified threshold. This additionally allows asynchronous specification of ideal functionalities and allows one to model a network where messages are not necessarily delivered in the right order. If these latency times are chosen to be infinite, the network is no longer reliable and becomes completely asynchronous. It is shown that secure protocols in the setting of [Canetti01] or [CLOS02] can be transformed into secure realizations in the new model if latency times are chosen to be infinite.

2003

EPRINT

On Modeling IND-CCA Security in Cryptographic Protocols
Abstract

Two common notions of security for public key encryption schemes are shown to be equivalent: we prove that indistinguishability against chosen-ciphertext attacks (IND-CCA) is in fact polynomially equivalent to (yet "slightly" weaker than) securely realizing the ideal functionality F_PKE in the general modeling of cryptographic protocols of [http://eprint.iacr.org/2000/067]. This disproves in particular the claim that security in the sense of IND-CCA strictly implies security in the sense of realizing F_PKE (see [http://eprint.iacr.org/2000/067]). Moreover, we give concrete reductions among such security notions and show that these relations hold for both uniform and non-uniform adversarial entities.

2003

EPRINT

Initiator-Resilient Universally Composable Key Exchange
Abstract

Key exchange protocols in the setting of universal composability are investigated. First we show that the ideal functionality F_KE of [CK02] cannot be realized in the presence of adaptive adversaries, thereby disproving a claim in [CK02]. We proceed to propose a modification F_KE^(i,j), which is proven to be realizable by two natural protocols for key exchange. Furthermore, sufficient conditions for securely realizing this modified functionality are given. Two notions of key exchange are introduced that allow for security statements even when one party is corrupted. Two natural key exchange protocols are proven to fulfill the "weaker" of these notions, and a construction for deriving protocols that satisfy the "stronger" notion is given.

2003

EPRINT

How to Break and Repair a Universally Composable Signature Functionality
Abstract

Canetti and Rabin recently proposed a universally composable ideal functionality F_SIG for digital signatures. We show that this functionality cannot be securely realized by *any* signature scheme, thereby disproving their result that any signature scheme that is existentially unforgeable under adaptive chosen-message attack is a secure realization.
Next, an improved signature functionality is presented. We show that our improved functionality can be securely realized by precisely those signature schemes that are secure against existential forgery under adaptive chosen-message attacks.

#### Program Committees

- TCC 2019
- Eurocrypt 2018
- Crypto 2017
- PKC 2017
- Eurocrypt 2016
- TCC 2015
- TCC 2014
- Crypto 2013
- TCC 2012
- Eurocrypt 2012
- Asiacrypt 2011
- Asiacrypt 2010
- TCC 2008

#### Coauthors

- Masayuki Abe (1)
- Thomas Agrikola (1)
- Martin R. Albrecht (2)
- Michael Backes (4)
- Christoph Bader (1)
- Boaz Barak (1)
- Mihir Bellare (2)
- Florian Böhl (4)
- David Cash (2)
- Geoffroy Couteau (1)
- Ronald Cramer (3)
- Gareth T. Davies (1)
- Markus Duermuth (1)
- Pooya Farshim (3)
- Serge Fehr (1)
- Eduarda S. V. Freire (3)
- Romain Gay (3)
- Anja Groch (1)
- Iftach Haitner (1)
- Goichiro Hanaoka (1)
- Gottfried Herold (2)
- Javier Herranz (2)
- Julia Hesse (7)
- Kathrin Hövelmanns (1)
- Hideki Imai (1)
- Yuval Ishai (1)
- Tibor Jager (10)
- Dingding Jia (1)
- Dakshita Khurana (1)
- Eike Kiltz (20)
- Edward Knapp (1)
- Jessica Koch (4)
- Lisa Kohl (3)
- Daniel Kraschewski (1)
- Ralf Küsters (1)
- Enrique Larraia (3)
- Yong Li (1)
- John Malone-Lee (3)
- Christian Matt (2)
- Ueli Maurer (2)
- Jörn Müller-Quade (9)
- Ngoc Khanh Nguyen (1)
- Ryo Nishimaki (1)
- Miyako Ohkubo (1)
- Jiaxin Pan (3)
- Rafael Pass (1)
- Kenneth G. Paterson (4)
- Chris Peikert (2)
- Carla Ràfols (2)
- Vanishree Rao (2)
- Andy Rupp (6)
- Amit Sahai (1)
- Jae Hong Seo (1)
- Abhi Shelat (1)
- Victor Shoup (1)
- Martijn Stam (3)
- Rainer Steinwandt (4)
- Christoph Striecks (5)
- Dominique Unruh (11)
- Bogdan Ursu (1)
- Vinod Vaikuntanathan (1)
- Brent Waters (1)
- Hoeteck Wee (2)
- Daniel Wichs (2)
- Scott Yilek (1)
- Mark Zhandry (1)