## CryptoDB

### Yevgeniy Dodis

#### Publications


**CRYPTO 2021** · No Time to Hash: On Super-Efficient Entropy Accumulation

Real-world random number generators (RNGs) cannot afford to use (slow) cryptographic hashing every time they refresh their state R with a new entropic input X. Instead, they use "super-efficient" simple entropy-accumulation procedures, such as
R <- rot_{alpha, n}(R) XOR X
where rot_{alpha,n} rotates an n-bit state R by some fixed number alpha. For example, Microsoft's RNG uses alpha=5 for n=32 and alpha=19 for n=64. Where do these numbers come from? Are they good choices? Should rotation be replaced by a better permutation pi of the input bits?
In this work we initiate a rigorous study of these pragmatic questions, by modeling the sequence of successive entropic inputs X_1,X_2, ... as independent (but otherwise adversarial) samples from some natural distribution family D. We show a simple but surprisingly powerful connection between entropy accumulation and understanding the Fourier spectrum of distributions in D. Our contribution is as follows.
- We define 2-monotone distributions as a rich family D that includes relevant real-world distributions (Gaussian, exponential, etc.), but avoids trivial impossibility results.
- For any alpha with gcd(alpha,n)=1, we show that rotation accumulates Omega(n) bits of entropy from n independent samples X_1,...,X_n from any (unknown) 2-monotone distribution with entropy k > 1.
- However, we also show some choices of alpha perform much better than others for a given n. E.g., we show alpha=19 is one of the best choices for n=64; in contrast, alpha=5 is good, but generally worse than alpha=7, for n=32.
- More generally, given a permutation pi and k > 1, we define a simple parameter, the covering number C_{pi,k}, and show that it characterizes the number of steps before the rule
(R_1,...,R_n) <- (R_{pi(1)},..., R_{pi(n)}) XOR X
accumulates nearly n bits of entropy from independent, 2-monotone samples of min-entropy k each.
- We build a simple permutation pi^*, which achieves nearly optimal C_{pi^*,k} ≈ n/k for all values of k simultaneously, and experimentally validate that it compares favorably with all rotations rot_{alpha,n}.
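The rotate-and-XOR rule above is easy to make concrete. The following is a minimal Python sketch, not the code of any production RNG; the sample values fed in are stand-ins for entropic inputs, and the parameters alpha=19, n=64 follow the Microsoft example cited in the abstract:

```python
def rotl(x: int, alpha: int, n: int) -> int:
    """Rotate the n-bit word x left by alpha positions."""
    mask = (1 << n) - 1
    return ((x << alpha) | ((x & mask) >> (n - alpha))) & mask

def accumulate(samples, alpha: int, n: int, state: int = 0) -> int:
    """Apply the rule R <- rot_{alpha,n}(R) XOR X for each entropic sample X."""
    mask = (1 << n) - 1
    for x in samples:
        state = rotl(state, alpha, n) ^ (x & mask)
    return state

# Parameters from the abstract: alpha=19 for a 64-bit state.
r = accumulate([0x1, 0x2, 0x3], alpha=19, n=64)
```

Because gcd(19, 64) = 1, this choice falls in the regime where the paper proves Omega(n) entropy accumulation for 2-monotone sources.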

**EUROCRYPT 2020** · Extracting Randomness from Extractor-Dependent Sources

We revisit the well-studied problem of extracting nearly uniform randomness from an arbitrary source of sufficient min-entropy. Strong seeded extractors solve this problem by relying on a public random seed, which is unknown to the source. Here, we consider a setting where the seed is reused over time and the source may depend on prior calls to the extractor with the same seed. Can we still extract nearly uniform randomness?
In more detail, we assume the seed is chosen randomly, but the source can make arbitrary oracle queries to the extractor with the given seed before outputting a sample. We require that the sample has entropy and differs from any of the previously queried values. The extracted output should look uniform even to a distinguisher that gets the seed. We consider two variants of the problem, depending on whether the source only outputs the sample, or whether it can also output some correlated public auxiliary information that preserves the sample's entropy. Our results are:
* Without Auxiliary Information: We show that every pseudo-random function (PRF) with a sufficiently high security level is a good extractor in this setting, even if the distinguisher is computationally unbounded. We further show that the source necessarily needs to be computationally bounded and that such extractors imply one-way functions.
* With Auxiliary Information: We construct secure extractors in this setting, as long as both the source and the distinguisher are computationally bounded. We give several constructions based on different intermediate primitives, yielding instantiations based on the DDH, DLIN, LWE or DCR assumptions. On the negative side, we show that one cannot prove security against computationally unbounded distinguishers in this setting under any standard assumption via a black-box reduction. Furthermore, even when restricting to computationally bounded distinguishers, we show that there exist PRFs that are insecure as extractors in this setting and that a large class of constructions cannot be proven secure via a black-box reduction from standard assumptions.
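The first result (without auxiliary information) says that a sufficiently secure PRF, keyed with the public seed, already acts as an extractor in this setting. A minimal sketch of that idea, using HMAC-SHA256 as a stand-in PRF (an illustrative choice, not the paper's exact construction):

```python
import hmac
import hashlib
import os

def prf_extract(seed: bytes, sample: bytes) -> bytes:
    """Extract by evaluating the PRF, keyed with the public seed, on the sample."""
    return hmac.new(seed, sample, hashlib.sha256).digest()

# The seed is chosen once and then reused; the source may have queried the
# extractor with this same seed, but must output a fresh, entropic sample.
seed = os.urandom(32)
out = prf_extract(seed, b"fresh entropic sample, distinct from all prior queries")
```

Per the abstract, security of this approach requires the source to be computationally bounded, even though the distinguisher may be unbounded.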

**CRYPTO 2020** · Security Analysis and Improvements for the IETF MLS Standard for Group Messaging

Secure messaging (SM) protocols allow users to communicate securely
over untrusted infrastructure. In contrast to most other secure
communication protocols (such as TLS, SSH, or Wireguard), SM
sessions may be long-lived (e.g., years) and highly asynchronous.
In order to deal with likely state compromises of users during the
lifetime of a session, SM protocols do not only protect authenticity
and privacy, but they also guarantee forward secrecy (FS) and
post-compromise security (PCS). The former ensures that
messages sent and received before a state compromise remain secure,
while the latter ensures that users can recover from state
compromise as a consequence of normal protocol usage.
SM has received considerable attention in the two-party
case, where prior work has studied the well-known double-ratchet
paradigm, in particular, and SM as a cryptographic primitive, in
general. Unfortunately, this paradigm does not scale well to the
problem of secure group messaging (SGM). In order to address
the lack of satisfactory SGM protocols, the IETF has launched the
message-layer security (MLS) working group, which aims to
standardize an eponymous SGM protocol.
In this work we analyze the TreeKEM protocol, which is at the
core of the SGM protocol proposed by the MLS working group.
On a positive note, we show that TreeKEM achieves PCS in isolation
(and slightly more). However, we observe that the current version
of TreeKEM does not provide an adequate form of FS. More precisely,
our work proceeds by formally capturing the exact security of
TreeKEM as a so-called continuous group key agreement (CGKA)
protocol, which we believe to be a primitive of independent
interest. To address the insecurity of TreeKEM, we propose a simple
modification to TreeKEM inspired by recent work of Jost et al.
(EUROCRYPT '19) and an idea due to Kohbrok (MLS Mailing List). We
then show that the modified version of TreeKEM comes with almost no
efficiency degradation but achieves optimal (according to MLS
specification) CGKA security, including FS and PCS. Our work also
lays out how a CGKA protocol can be used to design a full SGM
protocol.

**TCC 2020** · Towards Defeating Backdoored Random Oracles: Indifferentiability with Bounded Adaptivity

In the backdoored random-oracle (BRO) model, besides access to a random function $H$, adversaries are provided with a backdoor oracle that can compute arbitrary leakage functions $f$ of the function table of $H$. Thus, an adversary would be able to invert points, find collisions, test for membership in certain sets, and more. This model was introduced in the work of Bauer, Farshim, and Mazaheri (Crypto 2018) and extends the auxiliary-input idealized models of Unruh (Crypto 2007), Dodis, Guo, and Katz (Eurocrypt 2017), Coretti et al. (Eurocrypt 2018), and Coretti, Dodis, and Guo (Crypto 2018). It was shown that certain security properties, such as one-wayness, pseudorandomness, and collision resistance, can be re-established by combining two independent BROs, even if the adversary has access to both backdoor oracles.
In this work we further develop the technique of combining two or more independent BROs to render their backdoors useless in a more general sense. More precisely, we study the question of building an *indifferentiable* and backdoor-free random function by combining multiple BROs. Achieving full indifferentiability in this model seems very challenging at the moment. We however make progress by showing that the XOR combiner goes well beyond security against preprocessing attacks and offers indifferentiability as long as the adaptivity of queries to different backdoor oracles remains logarithmic in the input size of the BROs. We even show that an extractor-based combiner of three BROs can achieve indifferentiability with respect to a linear adaptivity of backdoor queries. Furthermore, a natural restriction of our definition gives rise to a notion of *indifferentiability with auxiliary input*, for which we give two positive feasibility results.
To prove these results we build on and refine techniques by Göös et al. (STOC 2015) and Kothari et al. (STOC 2017) for decomposing distributions with high entropy into distributions with more structure and show how they can be applied in the more involved adaptive settings.

**TCC 2020** · On the Price of Concurrency in Group Ratcheting Protocols

Post-Compromise Security, or PCS, refers to the ability of a given protocol to recover—by means of normal protocol operations—from the exposure of local states of its (otherwise honest) participants. While PCS in the two-party setting has attracted a lot of attention recently, the problem of achieving PCS in the group setting—called group ratcheting here—is much less understood. On the one hand, one can achieve excellent security by simply executing, in parallel, a two-party ratcheting protocol (e.g., Signal) for each pair of members in a group. However, this incurs O(n) communication overhead for every message sent, where n is the group size. On the other hand, several related protocols were recently developed in the context of the IETF Messaging Layer Security (MLS) effort that improve the communication overhead per message to O(log n). However, this reduction of communication overhead comes with a significant restriction: group members are not allowed to send messages and recover from exposures concurrently, so reaching PCS can be delayed by up to n communication time slots (potentially even more).
In this work we formally study the trade-off between PCS, concurrency, and communication overhead in the context of group ratcheting. Since our main result is a lower bound, we define the cleanest and most restrictive setting where the tension already occurs: static groups equipped with a synchronous (and authenticated) broadcast channel, where up to t arbitrary parties can concurrently send messages in any given round. Already in this setting, we show in a symbolic execution model that PCS requires Omega(t) communication overhead per message. Our symbolic model permits as building blocks black-box use of (even "dual") PRFs, (even key-updatable) PKE (which in our symbolic definition is at least as strong as HIBE), and broadcast encryption, covering all tools used in previous constructions, but prohibiting the use of exotic primitives.
To complement our result, we also prove an almost matching upper bound of O(t(1+log(n/t))), which smoothly increases from O(log n) with no concurrency, to O(n) with unbounded concurrency, matching the previously known protocols.
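The shape of the upper bound can be checked numerically. This is a small sketch of the asymptotic formula only; the constants hidden by the O(.) notation are ignored:

```python
import math

def overhead(t: int, n: int) -> float:
    """Per-message communication bound t * (1 + log2(n/t)) for t concurrent
    senders in a group of size n (asymptotic shape only, constants ignored)."""
    return t * (1 + math.log2(n / t))

n = 1024
low = overhead(1, n)    # no concurrency: 1 * (1 + log2(1024)) = 11, i.e. ~log n
high = overhead(n, n)   # unbounded concurrency: 1024 * (1 + 0) = 1024, i.e. ~n
```

As t grows from 1 to n, the bound interpolates smoothly between the O(log n) of the sequential MLS-style protocols and the O(n) of pairwise ratcheting, matching the abstract's claim.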

**JOFC 2020** · Non-malleable Encryption: Simpler, Shorter, Stronger

One approach toward basing public-key encryption (PKE) schemes on weak and credible assumptions is to build “stronger” or more general schemes generically from “weaker” or more restricted ones. One particular line of work in this context was initiated by Myers and Shelat (FOCS ’09) and continued by Hohenberger, Lewko, and Waters (Eurocrypt ’12), who provide constructions of multi-bit CCA-secure PKE from single-bit CCA-secure PKE. It is well known that encrypting each bit of a plaintext string independently is not CCA-secure—the resulting scheme is malleable. We therefore investigate whether this malleability can be dealt with using the conceptually simple approach of applying a suitable non-malleable code (Dziembowski et al., ICS ’10) to the plaintext and subsequently encrypting the resulting codeword bit by bit. We find that an attacker’s ability to ask multiple decryption queries requires that the underlying code be continuously non-malleable (Faust et al., TCC ’14). Since, as we show, this flavor of non-malleability can only be achieved if the code is allowed to “self-destruct,” the resulting scheme inherits this property and therefore only achieves a weaker variant of CCA security. We formalize this new notion of so-called indistinguishability under self-destruct attacks (IND-SDA) as CCA security with the restriction that the decryption oracle stops working once the attacker submits an invalid ciphertext. We first show that the above approach based on non-malleable codes yields a solution to the problem of domain extension for IND-SDA-secure PKE, provided that the underlying code is continuously non-malleable against (a reduced form of) bit-wise tampering. Then, we prove that the code of Dziembowski et al. is actually already continuously non-malleable against bit-wise tampering.
We further investigate the notion of security under self-destruct attacks and combine IND-SDA security with non-malleability under chosen-plaintext attacks (NM-CPA) to obtain the strictly stronger notion of non-malleability under self-destruct attacks (NM-SDA). We show that NM-SDA security can be obtained from basic IND-CPA security by means of a black-box construction based on the seminal work by Choi et al. (TCC ’08). Finally, we provide a domain extension technique for building a multi-bit NM-SDA scheme from a single-bit NM-SDA scheme. To achieve this goal, we define and construct a novel type of continuous non-malleable code, called secret-state NMC, since, as we show, standard continuous NMCs are insufficient for the natural “encode-then-encrypt-bit-by-bit” approach to work.

**EUROCRYPT 2019** · The Double Ratchet: Security Notions, Proofs, and Modularization for the Signal Protocol

Signal is a famous secure messaging protocol used by billions of people through many secure text messaging applications, including Signal itself, WhatsApp, Facebook Messenger, Skype, and Google Allo. At its core it uses the concept of “double ratcheting,” where every message is encrypted and authenticated using a fresh symmetric key; it has many attractive properties, such as forward security, post-compromise security, and “immediate (no-delay) decryption,” which had never been achieved in combination by prior messaging protocols.
While the formal analysis of the Signal protocol, and ratcheting in general, has attracted a lot of recent attention, we argue that none of the existing analyses is fully satisfactory. To address this problem, we give a clean and general definition of secure messaging, which clearly indicates the types of security we expect, including forward security, post-compromise security, and immediate decryption. We are the first to explicitly formalize and model the immediate decryption property, which implies (among other things) that parties seamlessly recover if a given message is permanently lost—a property not achieved by any of the recent “provable alternatives to Signal.”
We build a modular “generalized Signal protocol” from the following components: (a) continuous key agreement (CKA), a clean primitive we introduce which can be easily and generically built from public-key encryption (not just Diffie-Hellman as is done in the current Signal protocol) and roughly models “public-key ratchets”; (b) forward-secure authenticated encryption with associated data (FS-AEAD), which roughly captures “symmetric-key ratchets”; and (c) a two-input hash function that is a pseudorandom function (resp. generator with input) in its first (resp. second) input, which we term a PRF-PRNG.
As a result, in addition to instantiating our framework in a way that yields the existing, widely used Diffie-Hellman-based Signal protocol, we can easily obtain post-quantum security and avoid relying on random oracles in the analysis.

**CRYPTO 2019** · Seedless Fruit Is the Sweetest: Random Number Generation, Revisited

The need for high-quality randomness in cryptography makes random-number generation one of its most fundamental tasks. A recent important line of work (initiated by Dodis et al., CCS ’13) focuses on the notion of robustness for pseudorandom number generators (PRNGs) with inputs. These are primitives that use various sources to accumulate sufficient entropy into a state, from which pseudorandom bits are extracted. Robustness ensures that PRNGs remain secure even under state compromise and adversarial control of entropy sources. However, the achievability of robustness inherently depends on a seed, or, alternatively, on an ideal primitive (e.g., a random oracle), independent of the source of entropy. Both assumptions are problematic: seed generation requires randomness to start with, and it is arguable whether the seed or the ideal primitive can be kept independent of the source.
This paper resolves this dilemma by putting forward new notions of robustness which enable both (1) seedless PRNGs and (2) primitive-dependent adversarial sources of entropy. To bypass obvious impossibility results, we make a realistic compromise by requiring that the source produce sufficient entropy even given its evaluations of the underlying primitive. We also provide natural, practical, and provably secure constructions based on hash-function designs from compression functions, block ciphers, and permutations. Our constructions can be instantiated with minimal changes to industry-standard hash functions SHA-2 and SHA-3, or key derivation function HKDF, and can be downgraded to (online) seedless randomness extractors, which are of independent interest. On the way we consider both a computational variant of robustness, where attackers only make a bounded number of queries to the ideal primitive, as well as a new information-theoretic variant, which dispenses with this assumption to a certain extent, at the price of requiring a high rate of injected weak randomness (as is, e.g., plausible on Intel’s on-chip RNG). The latter notion enables applications such as everlasting security. Finally, we show that the CBC extractor, used by Intel’s on-chip RNG, is provably insecure in our model.

**CRYPTO 2019** · Reusable Non-Interactive Secure Computation

We consider the problem of Non-Interactive Two-Party Secure Computation (NISC), where Rachel wishes to publish an encryption of her input x, in such a way that any other party, who holds an input y, can send her a single message which conveys to her the value f(x, y), and nothing more. We demand security against malicious parties. While such protocols are easy to construct using garbled circuits and general non-interactive zero-knowledge proofs, this approach inherently makes a non-black-box use of the underlying cryptographic primitives and is infeasible in practice. Ishai et al. (Eurocrypt 2011) showed how to construct NISC protocols that only use parallel calls to an ideal oblivious transfer (OT) oracle, and additionally make only a black-box use of any pseudorandom generator. Combined with the efficient 2-message OT protocol of Peikert et al. (Crypto 2008), this leads to a practical approach to NISC that has been implemented in subsequent works. However, a major limitation of all known OT-based NISC protocols is that they are subject to selective failure attacks that allow a malicious sender to entirely compromise the security of the protocol when the receiver’s first message is reused. Motivated by the failure of the OT-based approach, we consider the problem of basing reusable NISC on parallel invocations of a standard arithmetic generalization of OT known as oblivious linear-function evaluation (OLE). We obtain the following results:
- We construct an information-theoretically secure reusable NISC protocol for arithmetic branching programs and general zero-knowledge functionalities in the OLE-hybrid model. Our zero-knowledge protocol only makes an absolute constant number of OLE calls per gate in an arithmetic circuit whose satisfiability is being proved. We also get reusable NISC in the OLE-hybrid model for general Boolean circuits using any one-way function.
- We complement this by a negative result, showing that reusable NISC is impossible to achieve in the OT-hybrid model. This provides a formal justification for the need to replace OT by OLE.
- We build a universally composable 2-message reusable OLE protocol in the CRS model that can be based on the security of Paillier encryption and requires only a constant number of modular exponentiations. This provides the first arithmetic analogue of the 2-message OT protocols of Peikert et al. (Crypto 2008).
- By combining our NISC protocol in the OLE-hybrid model and the 2-message OLE protocol, we get protocols with new attractive asymptotic and concrete efficiency features. In particular, we get the first (designated-verifier) NIZK protocols for NP where, following a statement-independent preprocessing, both proving and verifying are entirely “non-cryptographic” and involve only a constant computational overhead. Furthermore, we get the first statistical designated-verifier NIZK argument for NP under an assumption related to factoring.

**CRYPTO 2018** · Non-Uniform Bounds in the Random-Permutation, Ideal-Cipher, and Generic-Group Models

The random-permutation model (RPM) and the ideal-cipher model (ICM) are idealized models that offer a simple and intuitive way to assess the conjectured standard-model security of many important symmetric-key and hash-function constructions. Similarly, the generic-group model (GGM) captures generic algorithms against assumptions in cyclic groups by modeling encodings of group elements as random injections, and allows one to derive simple bounds on the advantage of such algorithms. Unfortunately, both well-known attacks, e.g., based on rainbow tables (Hellman, IEEE Transactions on Information Theory ’80), and more recent ones, e.g., against the discrete-logarithm problem (Corrigan-Gibbs and Kogan, EUROCRYPT ’18), suggest that the concrete security bounds one obtains from such idealized proofs are often completely inaccurate if one considers non-uniform or preprocessing attacks in the standard model. To remedy this situation, this work
- defines the auxiliary-input (AI) RPM/ICM/GGM, which capture both non-uniform and preprocessing attacks by allowing an attacker to leak an arbitrary (bounded-output) function of the oracle’s function table;
- derives the first non-uniform bounds for a number of important practical applications in the AI-RPM/ICM, including constructions based on the Merkle-Damgård and sponge paradigms, which underlie the SHA hashing standards, and for AI-RPM/ICM applications with computational security; and
- using simpler proofs, recovers the AI-GGM security bounds obtained by Corrigan-Gibbs and Kogan against preprocessing attackers, for a number of assumptions related to cyclic groups, such as discrete logarithms and Diffie-Hellman problems, and provides new bounds for two assumptions.
An important step in obtaining these results is to port the tools used in recent work by Coretti et al. (EUROCRYPT ’18) from the ROM to the RPM/ICM/GGM, resulting in very powerful and easy-to-use tools for proving security bounds against non-uniform and preprocessing attacks.

**CRYPTO 2018** · Provable Security of (Tweakable) Block Ciphers Based on Substitution-Permutation Networks

Substitution-Permutation Networks (SPNs) refer to a family of constructions which build a wn-bit block cipher from n-bit public permutations (often called S-boxes), alternating keyless and “local” substitution steps utilizing such S-boxes with keyed and “global” permutation steps which are non-cryptographic. Many widely deployed block ciphers are constructed based on SPNs, but there are essentially no provable-security results about SPNs.
In this work, we initiate a comprehensive study of the provable security of SPNs as (possibly tweakable) wn-bit block ciphers, when the underlying n-bit permutation is modeled as a public random permutation. When the permutation step is linear (which is the case for most existing designs), we show that 3 SPN rounds are necessary and sufficient for security. On the other hand, even 1-round SPNs can be secure when non-linearity is allowed. Moreover, 2-round non-linear SPNs can achieve “beyond-birthday” security (up to $2^{2n/3}$ adversarial queries), and, as the number of non-linear rounds increases, our bounds are meaningful for a number of queries approaching $2^n$. Finally, our non-linear SPNs can be made tweakable by incorporating the tweak into the permutation layer, and provide good multi-user security.
As an application, our construction can turn two public n-bit permutations (or fixed-key block ciphers) into a tweakable block cipher working on wn-bit inputs, with a 6n-bit key and an n-bit tweak (for any $w \ge 2$); the tweakable block cipher provides security up to $2^{2n/3}$ adversarial queries in the random permutation model, while only requiring w calls to each permutation and 3w field multiplications for each wn-bit input.
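The alternation the abstract describes (a keyed, non-cryptographic "global" step followed by a keyless "local" S-box substitution) can be sketched in toy form. The 4-bit S-box below is the public S-box of the PRESENT cipher and the XOR key mixing is an illustrative linear choice; this is not the paper's construction:

```python
# 4-bit S-box of the PRESENT cipher, standing in for a public permutation.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def spn_round(blocks, round_key):
    """One toy SPN round on w words of n=4 bits: a keyed, non-cryptographic
    'global' step (XOR with the round key), then the keyless 'local'
    substitution step applying the public S-box to each word."""
    mixed = [b ^ k for b, k in zip(blocks, round_key)]  # keyed permutation step
    return [SBOX[m & 0xF] for m in mixed]               # keyless S-box step

# w = 4 words of 4 bits each, i.e. a wn = 16-bit toy block.
state = spn_round([0x0, 0x1, 0x2, 0x3], round_key=[0x5, 0x5, 0x5, 0x5])
```

With a linear key-mixing step like this XOR, the paper's result says three such rounds are necessary and sufficient for security; a real wide-block design would iterate the round with independent round keys.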

**CRYPTO 2018** · Fast Message Franking: From Invisible Salamanders to Encryptment

Message franking enables cryptographically verifiable reporting of abusive messages in end-to-end encrypted messaging. Grubbs, Lu, and Ristenpart recently formalized the needed underlying primitive, what they call compactly committing authenticated encryption (AE), and analyze the security of a number of approaches. But all known secure schemes are still slow compared to the fastest standard AE schemes. For this reason Facebook Messenger uses AES-GCM for franking of attachments such as images or videos. We show how to break Facebook’s attachment franking scheme: a malicious user can send an objectionable image to a recipient, but that recipient cannot report it as abuse. The core problem stems from the use of fast but non-committing AE, and so we build the fastest compactly committing AE schemes to date. To do so we introduce a new primitive, called encryptment, which captures the essential properties needed. We prove that, unfortunately, schemes with a performance profile similar to AES-GCM won’t work. Instead, we show how to efficiently transform Merkle-Damgård-style hash functions into secure encryptments, and how to efficiently build compactly committing AE from encryptment. Ultimately our main construction allows franking using just a single computation of SHA-256 or SHA-3. Encryptment proves useful for a variety of other applications, such as remotely keyed AE and concealments, and our results imply the first single-pass schemes in these settings as well.

**CRYPTO 2016**

**EPRINT 2015**

**CRYPTO 2014**

**EPRINT 2014**

**EPRINT 2010** · Efficient Public-Key Cryptography in the Presence of Key Leakage

We study the design of cryptographic primitives resistant to a large class of side-channel attacks, called "memory attacks", where an attacker can repeatedly and adaptively learn information about the secret key, subject *only* to the constraint that the *overall amount* of such information is bounded by some parameter $\ell$. Although the study of such primitives was initiated only recently by Akavia et al. [AGV09], subsequent work already produced many such "leakage-resilient" primitives [NS09,ADW09,KV09], including signature, encryption, identification (ID) and authenticated key agreement (AKA) schemes. Unfortunately, every existing scheme, for any of the four fundamental primitives above, fails to satisfy at least one of the following desirable properties:
- Efficiency. While the construction may be generic, it should have some *efficient* instantiations, based on standard cryptographic assumptions, and without relying on random oracles.
- Strong Security. The construction should satisfy the strongest possible definition of security (even in the presence of leakage). For example, encryption schemes should be secure against chosen *ciphertext* attack (CCA), while signatures should be *existentially* unforgeable.
- Leakage Flexibility. It should be possible to set the parameters of the schemes so that the leakage bound $\ell$ can come arbitrarily close to the size of the secret key $sk$.
In this work we design the first signature, encryption, ID and AKA schemes which overcome these limitations, and satisfy all the properties above. Moreover, all our constructions are generic, in several cases elegantly simplifying and generalizing the prior constructions (which did not have any efficient instantiations).
We also introduce several tools of independent interest, such as the abstraction (and constructions) of *simulation extractable* NIZK arguments, and a new *deniable* DH-based AKA protocol based on any CCA-secure encryption.

**EPRINT 2010** · Cryptography Against Continuous Memory Attacks

We say that a cryptographic scheme is Continuous Leakage-Resilient (CLR), if it allows users to refresh their secret keys, using only fresh local randomness, such that:
1. The scheme remains functional after any number of key refreshes, although the public key never changes. Thus, the outside world is neither affected by these key refreshes, nor needs to know about their frequency.
2. The scheme remains secure even if the adversary can continuously leak arbitrary information about the current secret-key of the system, as long as the amount of leaked information is bounded in between any two successive key refreshes. There is no bound on the total amount of information that can be leaked during the lifetime of the system.
In this work, we construct a variety of practical CLR schemes, including CLR one-way relations, CLR signatures, CLR identification schemes, and CLR authenticated key agreement protocols. For each of the above, we give general constructions, and then show how to instantiate them efficiently using a well established assumption on bilinear groups, called the K-Linear assumption (for any constant K >= 1).
Our constructions are highly modular, and we develop many interesting techniques and building-blocks along the way, including: leakage-indistinguishable re-randomizable relations, homomorphic NIZKs, and leakage-of-ciphertext non-malleable encryption schemes.
Prior to our work, no truly CLR schemes were known, as previous leakage-resilient schemes suffer from one or more of the following drawbacks: (a) restrictions are placed on the type of allowed leakage, such as the axiom that only computation leaks information; (b) the overall amount of key leakage is bounded a priori for the lifetime of the system and there is no method for refreshing keys; (c) the efficiency of the scheme degrades proportionally with the number of refreshes; (d) the key updates require an additional leak-free master secret key to be stored securely; (e) the scheme is only proven secure under a strong non-standard assumption.

**EPRINT 2010** · Robust Fuzzy Extractors and Authenticated Key Agreement from Close Secrets

Consider two parties holding samples from correlated distributions W and W', respectively, that are within distance t of each other in some metric space. These parties wish to agree on a uniformly distributed secret key R by sending a single message over an insecure channel controlled by an all-powerful adversary. We consider both the keyless case, where the parties share no additional secret information, and the keyed case, where the parties share a long-term secret SK that they can use to generate a sequence of session keys {R_j} using multiple pairs {W_j, W'_j}. The former has applications to, e.g., biometric authentication, while the latter arises in, e.g., the bounded storage model with errors.
Our results improve upon previous work in several respects:
-- The best previous solution for the keyless case with no errors (i.e., t=0) requires the min-entropy of W to exceed 2n/3, where n is the bit-length of W. Our solution applies whenever the min-entropy of W exceeds the minimal possible threshold n/2, and yields a longer key.
-- Previous solutions for the keyless case in the presence of errors (i.e., t>0) required random oracles. We give the first constructions (for certain metrics) in the standard model.
-- Previous solutions for the keyed case were stateful. We give the first stateless solution.

2009

EPRINT

Proofs of Retrievability via Hardness Amplification
Abstract

Proofs of Retrievability (PoR), introduced by Juels and Kaliski, allow the client to store a file $F$ on an untrusted server, and later run an efficient audit protocol in which
the server proves that it (still) possesses the client's data.
Constructions of PoR schemes attempt to minimize the client and
server storage, the communication complexity of an audit, and even
the number of file-blocks accessed by the server during the audit.
In this work, we identify several different variants of the problem
(such as bounded-use vs. unbounded-use, knowledge-soundness vs.
information-soundness), and give nearly optimal PoR schemes for
each of these variants. Our constructions either improve (and
generalize) the prior PoR constructions, or give the first known PoR
schemes with the required properties. In particular, we
\begin{itemize}
\item Formally prove the security of an (optimized) variant of the
bounded-use scheme of Juels and Kaliski~\cite{JuelsK07}, without
making any simplifying assumptions on the behavior of the
adversary.
\item Build the first unbounded-use PoR scheme where the communication
complexity is linear in the security parameter and which does not rely on Random Oracles, resolving an open
question of Shacham and Waters~\cite{ShachamW08}.
\item Build the first bounded-use scheme with {\em
information-theoretic} security.
\end{itemize}
The main insight of our work comes from a simple
connection between PoR schemes and the notion of {\em hardness
amplification}, extensively studied in complexity theory. In
particular, our improvements come from first abstracting a purely
information-theoretic notion of {\em PoR codes},
and then building nearly optimal PoR codes using state-of-the-art
tools from coding and complexity theory.
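To make the audit idea concrete, here is a minimal bounded-use-flavored sketch (hypothetical helper names, with HMAC-SHA256 as the tag; the paper's actual schemes are considerably more refined): the client MACs each file block together with its index, keeps only the short key, and later spot-checks randomly chosen blocks.

```python
import hashlib
import hmac

def tag_file(key: bytes, blocks: list[bytes]) -> list[bytes]:
    """Client-side setup: MAC each block bound to its index; tags go to the server."""
    return [hmac.new(key, i.to_bytes(8, "big") + blk, hashlib.sha256).digest()
            for i, blk in enumerate(blocks)]

def audit(key: bytes, challenge: list[int],
          answer: list[tuple[bytes, bytes]]) -> bool:
    """Verify the server's (block, tag) answers for the challenged indices."""
    return all(
        hmac.compare_digest(tag, hmac.new(key, i.to_bytes(8, "big") + blk,
                                          hashlib.sha256).digest())
        for i, (blk, tag) in zip(challenge, answer))
```

A server that has discarded or corrupted a challenged block cannot forge the corresponding tag, so repeated random challenges detect large-scale data loss with high probability.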

2008

EUROCRYPT

2008

EPRINT

Detection of Algebraic Manipulation with Applications to Robust Secret Sharing and Fuzzy Extractors
Abstract

Consider an abstract storage device $\Sigma(\G)$ that can hold a
single element $x$ from a fixed, publicly known finite group $\G$.
Storage is private in the sense that an adversary does not have read
access to $\Sigma(\G)$ at all. However, $\Sigma(\G)$ is non-robust in the sense
that the adversary can modify its contents by adding some offset $\Delta \in \G$.
Due to the privacy of the storage device, the value $\Delta$ can only depend on an adversary's {\em a priori} knowledge of $x$. We introduce a new primitive called an {\em
algebraic manipulation detection} (AMD) code, which encodes a source $s$ into a value $x$ stored on $\Sigma(\G)$ so that any tampering
by an adversary will be detected, except with a small error probability $\delta$. We give a nearly optimal construction of AMD codes,
which can flexibly accommodate arbitrary choices for the length of the source $s$ and security level $\delta$. We use this construction in two applications:
\begin{itemize}
\item We show how to efficiently convert any linear secret sharing
scheme into a {\em robust secret sharing scheme}, which ensures that
no \emph{unqualified subset} of players can modify their shares and cause
the reconstruction of some value $s'\neq s$.
\item
We show how to build nearly optimal {\em robust fuzzy
extractors} for several natural metrics. Robust fuzzy extractors enable one to reliably extract and later recover random keys from noisy and non-uniform secrets,
such as biometrics, by relying only on {\em non-robust public storage}. In the past, such constructions were known only in the random oracle model, or required the entropy rate of the secret to be greater than half. Our construction relies on a randomly chosen common reference string (CRS) available to all parties.
\end{itemize}
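For intuition, a toy degree-1 instance of an AMD-style code over a prime field can be sketched as follows (illustrative parameters, not the paper's general construction): encode $s$ as $(s, x, x^3 + s\cdot x)$ for a fresh random $x$; a fixed additive offset then passes verification with probability only $O(1/p)$.

```python
import secrets

P = (1 << 61) - 1  # a prime p > 3; Z_p^3 plays the role of the storage group G

def amd_encode(s: int) -> tuple[int, int, int]:
    """Encode source s in Z_p as (s, x, x^3 + s*x) for a fresh random x."""
    x = secrets.randbelow(P)
    return (s % P, x, (pow(x, 3, P) + s * x) % P)

def amd_verify(word: tuple[int, int, int]) -> bool:
    s, x, sigma = word
    return (pow(x, 3, P) + s * x) % P == sigma

def add_offset(word, delta):
    """Additive tampering: the adversary adds a fixed offset to the stored word."""
    return tuple((a + d) % P for a, d in zip(word, delta))
```

For example, the offset (0, 0, 1) is always detected, and a short case analysis over the tampering polynomial shows any nonzero offset survives with probability at most 2/p.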

2008

EPRINT

One-Round Authenticated Key Agreement from Weak Secrets
Abstract

We study the question of information-theoretically secure authenticated key
agreement from weak secrets. In this setting, Alice and Bob share an $n$-bit
secret $W$, which might \emph{not} be uniformly random but the adversary has
at least $k$ bits of uncertainty about it (formalized using conditional
min-entropy). Alice and Bob wish to use $W$ to agree on a nearly uniform
secret key $R$, over a public channel controlled by an \emph{active} adversary
Eve. We show that non-interactive (single-message) protocols do not work when
$k\le \frac{n}{2}$, and require poor parameters even when $\frac{n}{2} < k\ll
n$.
On the other hand, for arbitrary values of $k$, we design a communication
efficient {\em two-message (i.e., one-round!)} protocol extracting nearly $k$
random bits. This dramatically improves the only previously known protocol of
Renner and Wolf~\cite{RennerW03}, which required $O(\lambda)$ rounds where
$\lambda$ is the security parameter. Our solution takes a new approach by
studying and constructing \emph{``non-malleable'' seeded randomness
extractors} --- if an attacker sees a random seed $X$ and comes up with an
arbitrarily related seed $X'$, then we bound the relationship between $R=
\Ext(W;X)$ and $R' = \Ext(W;X')$.
We also extend our one-round key agreement protocol to the ``fuzzy'' setting,
where Alice and Bob share ``close'' (but not equal) secrets $W_A$ and $W_B$,
and to the Bounded Retrieval Model (BRM) where the size of the secret $W$ is
huge.

2007

EPRINT

Optimistic Fair Exchange in a Multi-user Setting
Abstract

This paper addresses the security of optimistic fair exchange in a multi-user setting. While the security of public key encryption and public key signature schemes in a single-user setting guarantees their security in a multi-user setting, we show that the situation is different for optimistic fair exchange. First, we show how to break, in the multi-user setting, an optimistic fair exchange scheme provably secure in the single-user setting. This example separates the security of optimistic fair exchange in the single-user setting from that in the multi-user setting. We then define a formal security model of optimistic fair exchange in the multi-user setting, which is the first complete such model. We prove the existence of a generic construction meeting our multi-user security based on one-way functions in the random oracle model and on trapdoor one-way permutations in the standard model. Finally, we revisit two well-known methodologies of optimistic fair exchange, based on verifiably encrypted signatures and on sequential two-party multisignatures, respectively. Our results show that these paradigms remain valid in the multi-user setting.

2006

EPRINT

Threshold and Proactive Pseudo-Random Permutations
Abstract

We construct a reasonably efficient threshold and proactive pseudo-random permutation (PRP). Our protocol needs only O(1) communication rounds. It tolerates up to (n-1)/2 of n dishonest servers in the semi-honest environment. Many protocols that use PRPs (e.g., a CBC block cipher mode) can now be translated into the distributed setting. Our main technique for constructing invertible threshold PRPs is a distributed Luby-Rackoff construction where both the secret keys *and* the input are shared among the servers. We also present protocols for obliviously computing pseudo-random functions by Naor-Reingold and Dodis-Yampolskiy with shared input and keys.

2006

EPRINT

Verifiable Random Permutations
Abstract

Pseudorandom Functions (PRFs), introduced by Goldreich, Goldwasser and Micali, allow one to efficiently simulate the computation of a function which is indistinguishable from a truly random function. A seemingly stronger primitive is that of a (strong) pseudorandom permutation (PRP), which allows one to efficiently simulate a truly random permutation (and its inverse). The celebrated result of Luby and Rackoff shows that these primitives are, in fact, equivalent: four rounds of the Feistel transform are necessary and sufficient to turn a PRF into a (strong) PRP.
In this paper we study a similar conversion for the verifiable analogs of PRFs and PRPs, called Verifiable Random Functions (VRFs) and Verifiable Random Permutations (VRPs). VRFs, introduced by Micali, Rabin and Vadhan, extend the notion of a PRF to allow the owner of the secret key for the VRF to prove to the outside parties that a given VRF value was correctly (and uniquely!) computed. Yet, such proofs do not violate the pseudorandomness of the remaining, yet ``unopened'' values. VRPs, introduced in this paper, similarly extend the notion of PRPs. We notice that the result of Luby and Rackoff no longer applies to converting VRFs into VRPs, since the VRP proofs must reveal
the VRF outputs (and proofs) of the intermediate rounds. Indeed, we
show that even logarithmic (in the security parameter) number of rounds is not enough for this conversion. Our main result, however, shows that super-logarithmic number of rounds of the Feistel transform suffice to build a VRP out of an arbitrary VRF.
As an application, we give a construction of non-interactive zero-knowledge (NIZK) proofs with efficient provers for any NP language from any VRF. The result is obtained from our VRF--->VRP conversion, by noticing that VRPs easily yield ``invariant signatures'' of Goldwasser and Ostrovsky, which are known to imply NIZK. (We also notice that the detour through VRPs seems necessary for this implication, since using VRFs in place of invariant signatures is provably insufficient for the NIZK construction of Goldwasser et al. to go through.)

2006

EPRINT

Does Privacy Require True Randomness?
Abstract

Most cryptographic primitives require randomness (for example, to
generate their secret keys). Usually, one assumes that perfect
randomness is available, but, conceivably, such primitives might be
built under weaker, more realistic assumptions. This is known to be
true for many authentication applications, when entropy alone is
typically sufficient. In contrast, all known techniques for achieving
privacy seem to fundamentally require (nearly) perfect randomness. We
ask the question whether this is just a coincidence, or, perhaps,
privacy inherently requires true randomness?
We completely resolve this question for the case of
(information-theoretic) private-key encryption, where parties wish to
encrypt a b-bit value using a shared secret key sampled from some
imperfect source of randomness S. Our main result shows that if such an n-bit source S allows for a secure encryption of b bits, where b>log n, then one can deterministically extract nearly b almost perfect random bits from S. Further, the restriction that b>log n is nearly tight: there exist sources S allowing one to perfectly encrypt (log n - loglog n) bits, but not to deterministically extract even a single slightly unbiased bit.
Hence, to a large extent, *true randomness is inherent for encryption*: either the key length must be exponential in the message length b, or one can deterministically extract nearly b almost unbiased random bits from the key. In particular, the *one-time pad scheme is essentially universal*.
Our technique also extends to related *computational* primitives which
are perfectly-binding, such as perfectly-binding commitment and
computationally secure private- or public-key encryption, showing the
necessity to efficiently extract almost b *pseudorandom* bits.

2006

EPRINT

Universally Composable Security with Global Setup
Abstract

Cryptographic protocols are often designed and analyzed under some trusted setup assumptions, namely in settings where the participants have access to global information that is trusted to have some basic security properties. However, current modeling of security in the presence of such setup falls short of providing the expected security guarantees. A quintessential example of this phenomenon is the deniability concern: there exist natural protocols that meet the strongest known composable security notions, and are still vulnerable to bad interactions with rogue protocols that use the same setup.
We extend the notion of universally composable (UC) security in a way that re-establishes its original intuitive guarantee even for protocols that use globally available setup. The new formulation prevents bad interactions even with adaptively chosen protocols that use the same setup. In particular, it guarantees deniability. While for protocols that use no setup the proposed requirements are the same as in traditional UC security, for protocols that use global setup the proposed requirements are significantly stronger. In fact, realizing Zero Knowledge or commitment becomes provably impossible, even in the Common Reference String model. Still, we propose reasonable alternative setup assumptions and protocols that allow realizing practically any cryptographic task under standard hardness assumptions even against adaptive corruptions.

2005

EPRINT

Minimal Assumptions for Efficient Mercurial Commitments
Abstract

Mercurial commitments were introduced by Chase et al. (Eurocrypt 2005)
and form a key building block for constructing zero-knowledge sets
(introduced by Micali, Rabin and Kilian (FOCS'03)). Unlike regular
commitments, which are strictly binding, mercurial commitments allow
for a certain amount of (limited) freedom. The notion of Chase et al.
also required that mercurial commitments should be equivocable given a
certain trapdoor, although the notion is interesting even without this
condition. While trivially implying regular (trapdoor) commitments, it
was not clear from the prior work if the converse was true: Chase et
al. gave several constructions of mercurial commitments from various
incompatible assumptions, leaving open if they can be built from any
(trapdoor) commitment scheme, and, in particular, from any one-way
function. We give an affirmative answer to this question, by giving
two simple constructions of mercurial commitments from any trapdoor
bit commitment scheme. By plugging in various (trapdoor) bit
commitment schemes, we get *all* the efficient constructions from
Chase et al. and Micali et al., as well as several immediate new
constructions.
Our results imply that (a) ** mercurial commitments can be viewed as
surprisingly simple variations of regular (trapdoor) commitments **
(and, thus, can be built from one-way functions and, more efficiently,
from a variety of other assumptions); and (b) ** the existence of
zero-knowledge sets is equivalent to the existence of
collision-resistant hash functions ** (moreover, the former can be
efficiently built from the latter and trapdoor commitments). Of
independent interest, we also give a stronger and yet much simpler
definition of mercurial commitments than that of Chase et al. (which
is also met by our constructions). Overall, we believe that our
results elucidate the notion of mercurial commitments, and better
explain the rationale behind the previous constructions of
mercurial commitments.

2004

EPRINT

Optimal Signcryption from Any Trapdoor Permutation
Abstract

We build several highly-practical and optimized signcryption
constructions directly from trapdoor permutations, in the random oracle model. All our constructions share features such as
simplicity, efficiency, generality, near-optimal exact security, flexible and ad-hoc key management, key reuse for sending/receiving data, optimally-low message expansion, "backward" use for plain signature/encryption, long message and associated data support, the strongest-known qualitative security (so-called IND-CCA and sUF-CMA) and, finally, complete compatibility with the PKCS#1 infrastructure. While some of these features are present in previous works to various extents, we believe that our schemes improve on earlier proposals in at least several dimensions, making the overall difference quite noticeable in practice.
Concretely, we present three methods generally based on what we call
Parallel, Sequential, and eXtended sequential Padding schemes (P-Pad,
S-Pad, X-Pad). P-Pad offers parallel "signing" and "encrypting",
optimal exact security, and a minimum ciphertext length twice the length of a TDP, while still maintaining optimal bandwidth.
S-Pad loses parallelism and some exact security, but has minimal
ciphertext length equal to that of a TDP. Any S-Pad can also be
used as a "universal padding" scheme. X-Pad is similar to S-Pad,
but regains optimal exact security at the expense of a
marginally-longer minimum ciphertext length. Moreover, to unify
various padding options, we construct a single versatile padding
scheme PSEP (Probabilistic Signature-Encryption Padding) which, by simply adjusting the lengths of the parameters, can work optimally as either a P-Pad, S-Pad or X-Pad.

2004

EPRINT

Scalable Public-Key Tracing and Revoking
Abstract

Traitor Tracing Schemes constitute a very useful tool against
piracy in the context of digital content broadcast. In such
multi-recipient encryption schemes, each decryption key is
fingerprinted and when a pirate decoder is discovered, the
authorities can trace the identities of the users that
contributed in its construction (called traitors).
Public-key traitor tracing schemes allow for a multitude of
non-trusted content providers using the same set of keys, which
makes the scheme ``server-side scalable.''
To make such schemes also ``client-side scalable,'' i.e. long lived and usable for a large population of subscribers that changes dynamically over time, it is crucial to implement efficient Add-user and Remove-user operations.
Previous work on public-key traitor tracing did not address this
dynamic scenario thoroughly, and there is no efficient scalable public
key traitor tracing scheme that allows an increasing number of Add-user and Remove-user operations.
To address these issues, we introduce the model of Scalable
Public-Key Traitor Tracing, and present the first construction of such
a scheme. Our model mandates deterministic traitor tracing and an
unlimited number of efficient Add-user operations and Remove-user operations.
A scalable system achieves an unlimited number of revocations
while retaining a high level of efficiency by dividing the run-time of
the system into periods. Each period has a saturation level for the
number of revocations. When a period becomes saturated, an _efficient_ New-period operation is issued by the system server that resets the saturation level.
We present a formal adversarial model for our system taking into
account its periodic structure, and we prove our construction secure, both against adversaries that attempt to cheat the revocation mechanism as well as against adversaries that attempt to cheat the traitor tracing mechanism.

2004

EPRINT

ID-Based Encryption for Complex Hierarchies with Applications to Forward Security and Broadcast Encryption
Abstract

A forward-secure encryption scheme protects secret keys from exposure by evolving the keys with time. Forward security has several unique requirements in a Hierarchical Identity-Based Encryption (HIBE) scheme: (1) users join dynamically; (2) encryption
is joining-time-oblivious; (3) users evolve secret keys autonomously.
We present a scalable forward-secure HIBE scheme satisfying the above properties. Note that a naive combination of the Gentry-Silverberg HIBE scheme with the forward-secure Public-Key Encryption scheme by Canetti, Halevi and Katz would not meet the requirements. We also show how our fs-HIBE scheme can be
used to construct a forward-secure public-key Broadcast Encryption
scheme, which protects the secrecy of prior transmissions in the Broadcast Encryption setting. We further generalize fs-HIBE into a collusion-resistant Multiple Hierarchical ID-Based Encryption scheme, which can be used for secure communications with entities having multiple roles in Role-Based Access Control. The security of our schemes is based on the Bilinear Diffie-Hellman assumption in the random oracle model.

2004

EPRINT

Entropic Security and the Encryption of High Entropy Messages
Abstract

Russell and Wang (Eurocrypt 2002) recently introduced an elegant,
information-theoretic notion called entropic security of
encryption: they required that the cipher text leak no predicate of
the plaintext (similar to semantic security (Goldwasser and Micali,
JCSS 1984)) but only as long as the distribution on messages has high
entropy from the adversary's point of view. They show that this
notion of security can be achieved with very short keys for
entropically rich message spaces. Canetti, Micciancio and Reingold
(STOC, 1998) had previously constructed hash functions which satisfy a
similar entropic security condition. The output of such a hash function
leaks no partial information about the input, provided the input has
sufficiently high entropy. This paper studies entropic security in
general, and its application to the encryption of high-entropy
messages.
(1) We elucidate the notion of entropic security. Our results apply to
all entropically-secure primitives, including both encryption and hash
functions. We strengthen the formulations of Canetti et al. and of
Russell and Wang: we require that an entropically-secure primitive leak no
function whatsoever of its input, while the previous works focus only
on predicates. This stronger formulation is much closer to the
original notion of semantic security. We also show that entropic
security is equivalent to indistinguishability on pairs of input
distributions of sufficiently high entropy. This equivalence
generalizes the conventional equivalence between indistinguishability
and semantic security. Indistinguishability, in turn, is related to
randomness extraction from non-uniform distributions. The proof of
the equivalence of hiding predicates to hiding all functions is quite
involved, and requires very different techniques from those of
Goldwasser and Micali.
(2) We use the equivalence above, and the connection to randomness
extraction, to prove several new results on entropically-secure
encryption. We obtain:
(a) Two general frameworks for constructing entropically secure
encryption schemes, one based on expander graphs and the other on
XOR-universal hash functions. These schemes generalize the schemes of
Russell and Wang, yielding simpler constructions and proofs as well as
improved parameters. The best scheme uses a key of length
k=n-t+2log(1/eps)+O(1), where eps is a measure of leakage and
t is the initial min-entropy of the message.
(b) Lower bounds on the key length k for entropic security and
indistinguishability. In particular, we show near tightness of
Russell-Wang constructions: k > n-t. (In fact, for a large class
of schemes k >= n-t + log(1/eps).)
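A toy instance of the XOR-universal-hash framework from (a), with illustrative parameters n=8 rather than the paper's: the family h_i(K) = i*K over GF(2^8) is XOR-universal, and a ciphertext is the public seed i together with the message padded by h_i(K).

```python
import secrets

def gf_mul(a: int, b: int) -> int:
    """Multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B  # reduce by the irreducible polynomial
    return r

def enc(key: int, m: int) -> tuple[int, int]:
    i = secrets.randbelow(256)      # fresh public seed of the hash family
    return (i, gf_mul(i, key) ^ m)  # pad = h_i(key) = i * key

def dec(key: int, ct: tuple[int, int]) -> int:
    i, c = ct
    return gf_mul(i, key) ^ c
```

Decryption always inverts the pad; the entropic-security guarantee (that the ciphertext hides every function of a high-entropy message even though the key is shorter than the message) is what the paper proves for such families at scale.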

2004

EPRINT

A Verifiable Random Function With Short Proofs and Keys
Abstract

We give a simple and efficient construction of a verifiable random function (VRF) on bilinear groups. Our construction is direct.
In contrast to prior VRF constructions [MRV99, Lys02], it avoids using an inefficient Goldreich-Levin transformation, thereby saving several factors in security. Our proofs of security are based on a decisional bilinear Diffie-Hellman inversion assumption, which seems reasonable given the current state of knowledge. For small message spaces, our VRF's proofs and keys have constant size. By utilizing a collision-resistant hash function, our VRF can also be used with arbitrary message spaces. We show that our scheme can be instantiated with an elliptic curve group of very reasonable size.
Furthermore, it can be made distributed and proactive.

2003

EPRINT

Parallel Signcryption with OAEP, PSS-R, and other Feistel Paddings
Abstract

We present a new, elegant composition method for joint signature
and encryption, also referred to as signcryption. The new
method, which we call *Padding-based Parallel Signcryption*
(PbPS), builds an efficient signcryption scheme from any family of
trapdoor permutations, such as RSA. Each user U generates a single
public/secret key pair f_U/f^{-1}_U used for both sending and
receiving the data. To signcrypt a message m to a recipient with key
f_{rcv}, a sender with key f_{snd} efficiently transforms m into
a pair {w|s}, and simply sends { f_{rcv}(w) | f^{-1}_{snd}(s) }.
PbPS enjoys many attractive properties: simplicity, efficiency,
generality, parallelism of ``encrypting''/``signing'', optimal exact
security, flexible and ad-hoc key management, key reuse for
sending/receiving data, optimally-low message expansion, long message
and associated data support, and, finally, complete compatibility with
the PKCS#1 infrastructure.
The pairs {w|s} sufficient for the security of PbPS are
called *universal two-padding schemes*. Using one round of the
Feistel transform, we give a very general construction of such
schemes. Interestingly, we notice that all popular padding schemes
with message recovery used for plain signature or encryption, such as
OAEP, OAEP+, PSS-R, and ``scramble all, encrypt small'', naturally
consist of two pieces {w|s}. Quite remarkably, we show that all such
pairs become special cases of our construction. As a result, we find
a natural generalization of all conventional padding schemes, and
show that any such padding can be used for signcryption with PbPS.
However, none of these paddings gives optimal message bandwidth. For
that purpose and of independent interest, we define a new ``hybrid''
between PSS-R and OAEP, which we call *Probabilistic
Signature-Encryption Padding* (PSEP). We recommend using PbPS with
PSEP to achieve the most flexible and secure signcryption scheme
to date. To justify this point, we provide a detailed practical
comparison of PbPS/PSEP with other previously-proposed signcryption
candidates.
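The transport step of PbPS — apply the receiver's forward permutation to w and the sender's inverse permutation to s — can be sketched with textbook toy RSA parameters (no padding, and the same toy key pair reused for both parties, purely for illustration; real PbPS relies on the universal two-padding {w|s} described above):

```python
# Toy RSA key pair (p=61, q=53): n = 3233, e = 17, d = 2753.
# SND/RCV share parameters here only to keep the sketch short.
SND = RCV = (3233, 17, 2753)   # (modulus, public exponent, private exponent)

def signcrypt(w: int, s: int) -> tuple[int, int]:
    n_r, e_r, _ = RCV
    n_s, _, d_s = SND
    return (pow(w, e_r, n_r),   # f_rcv(w): only the receiver can invert
            pow(s, d_s, n_s))   # f_snd^{-1}(s): only the sender can compute

def unsigncrypt(c1: int, c2: int) -> tuple[int, int]:
    n_r, _, d_r = RCV
    n_s, e_s, _ = SND
    return (pow(c1, d_r, n_r), pow(c2, e_s, n_s))
```

The asymmetry is the point: the receiver's private key recovers w, while anyone holding the sender's public key can check that s was produced by the sender, and the padding that binds w and s to the message m supplies the actual security.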

2003

EPRINT

Concealment and its Applications to Authenticated Encryption
Abstract

We introduce a new cryptographic primitive we call **concealment**,
which is related, but quite different from the notion of commitment.
A concealment is a publicly known randomized transformation, which,
on input m, outputs a *hider* h and a *binder* b. Together, h and b
allow one to recover m, but separately, (1) the hider h reveals
"no information" about m, while (2) the binder b can be
"meaningfully opened" by at most one hider h. While setting
b=m, h=empty is a trivial concealment, the challenge is to make
|b|<<|m|, which we call a "non-trivial" concealment. We show that
non-trivial concealments are equivalent to the existence of
collision-resistant hash functions. Moreover, our construction of
concealments is extremely simple, optimal, and yet very general,
giving rise to a multitude of efficient implementations.
We show that concealments have natural and important applications in
the area of **authenticated encryption**. Specifically, let AE be an
authenticated encryption scheme (either public- or symmetric-key)
designed to work on short messages. We show that concealments are
**exactly** the right abstraction allowing one to use AE for
encrypting long messages. Namely, to encrypt long m, one uses a
concealment scheme to get h and b, and outputs authenticated
ciphertext (AE(b),h). More surprisingly, the above paradigm leads
to a very simple and general solution to the problem of
**remotely keyed (authenticated) encryption** (RKAE).
In this problem, one wishes to split the task of high-bandwidth
authenticated encryption between a secure, but
low-bandwidth/computationally limited device, and an insecure, but
computationally powerful host. We give formal definitions for RKAE,
which we believe are simpler and more natural than all the previous
definitions. We then show that our composition paradigm satisfies
our (very strong) definition. Namely, for authenticated encryption,
the host simply sends a short value b to the device (which stores
the actual secret key for AE), gets back AE(b), and outputs (AE(b),h)
(authenticated decryption is similar). Finally, we also observe that
several previous RKAE proposals are all special examples of our
general paradigm.
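One natural non-trivial concealment consistent with the description above can be sketched as follows (an illustrative instantiation, assuming a SHA-256-based stream cipher and hash, not the paper's exact scheme): the hider is the masked message, and the binder is the short key plus a collision-resistant digest of the hider, so |b| stays 48 bytes however long m is.

```python
import hashlib
import secrets

def _pad(key: bytes, length: int) -> bytes:
    """SHA-256 counter-mode keystream (stand-in stream cipher)."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def conceal(m: bytes) -> tuple[bytes, bytes]:
    k = secrets.token_bytes(16)
    h = bytes(a ^ b for a, b in zip(m, _pad(k, len(m))))  # hider
    binder = k + hashlib.sha256(h).digest()               # 48 bytes total
    return h, binder

def open_concealment(h: bytes, binder: bytes) -> bytes:
    k, tag = binder[:16], binder[16:]
    if hashlib.sha256(h).digest() != tag:
        raise ValueError("binder does not open this hider")
    return bytes(a ^ b for a, b in zip(h, _pad(k, len(h))))
```

Hiding follows because h is a one-time-padded message, and binding follows from collision resistance: a binder that opened two distinct hiders would exhibit a SHA-256 collision. In the RKAE application, only the short binder is sent to the low-bandwidth device for authenticated encryption.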

2003

EPRINT

Public Key Trace and Revoke Scheme Secure against Adaptive Chosen Ciphertext Attack
Abstract

A (public key) Trace and Revoke Scheme combines the functionality
of broadcast encryption with the capability of traitor tracing.
Specifically, (1) a trusted center publishes a single public key
and distributes individual secret keys to the users of the system;
(2) anybody can encrypt a message so that all but a specified
subset of ``revoked'' users can decrypt the resulting ciphertext;
and (3) if a (small) group of users combine their secret keys to
produce a ``pirate decoder'', the center can trace at least one of
the ``traitors'' given access to this decoder.
We construct the first chosen ciphertext (CCA2) secure Trace and
Revoke Scheme based on the DDH assumption. Our scheme is also the
first adaptively secure scheme, allowing the adversary to corrupt
players at any point during execution, while prior works (e.g.,
[NP00,TT01]) only achieve a very weak form of non-adaptive
security even against chosen plaintext attacks. In fact, no CCA2
scheme was known even in the symmetric setting.
Of independent interest, we present a slightly simpler
construction that shows a ``natural separation'' between the
classical notion of CCA2 security and the recently proposed
[Sho01,ADR02] relaxed notion of gCCA2 security.

2003

EPRINT

Breaking and Repairing Optimistic Fair Exchange from PODC 2003
Abstract

In PODC 2003, Park, Chong, Siegel and Ray [PCSR03] proposed an optimistic protocol for fair exchange, based on RSA signatures. We show that their protocol is *totally breakable* already in the registration phase: the honest-but-curious arbitrator can easily determine the signer's secret key.
On a positive note, the authors of [PCSR03] informally introduced a connection between fair exchange and "sequential two-party multisignature schemes" (which we call two-signatures), but used an insecure two-signature scheme in their actual construction. Nonetheless, we show that this connection *can* be properly formalized to imply *provably secure* fair exchange protocols. By utilizing the state-of-the-art non-interactive two-signature of Boldyreva (PKC 2003), we obtain an efficient and provably secure (in the random oracle model) fair exchange protocol, which is based on GDH signatures of Boneh, Lynn and Shacham (Asiacrypt 2001).
Of independent interest, we introduce a unified model for non-interactive fair exchange protocols, which results in a new primitive we call *verifiably committed signatures*. Verifiably committed signatures generalize (non-interactive) verifiably encrypted signatures and two-signatures, both of which are sufficient for fair exchange.

2003

EPRINT

Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data
Abstract

We provide formal definitions and efficient secure techniques for
-- turning noisy information into keys usable for any cryptographic application, and, in particular,
-- reliably and securely authenticating biometric data.
Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a fuzzy extractor reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A secure sketch produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them.
We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of "closeness" of input data, such as Hamming distance, edit distance, and set difference.
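The secure-sketch idea can be illustrated with the classic code-offset construction for Hamming distance. Below is a minimal, hedged Python sketch using a 3-fold repetition code as a toy error-correcting code; the names `REP`, `sketch`, `recover`, and `extract` are illustrative, not from the paper, and SHA-256 stands in for a proper randomness extractor.

```python
import secrets
import hashlib

# Code-offset secure sketch for Hamming distance (toy parameters).
# Each data bit is repeated REP times; the code corrects < REP/2 flips per block.
REP = 3

def encode(bits):
    # Repetition-code encoder: each bit becomes REP copies.
    return [b for b in bits for _ in range(REP)]

def decode(bits):
    # Majority-vote decoder over blocks of REP bits.
    return [1 if sum(bits[i:i + REP]) * 2 > REP else 0
            for i in range(0, len(bits), REP)]

def sketch(w):
    # SS(w) = w XOR c for a random codeword c; publishing s reveals
    # little about w (entropy loss is bounded by the code's redundancy).
    c = encode([secrets.randbelow(2) for _ in range(len(w) // REP)])
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, s):
    # Rec(w', s): w' XOR s = c XOR e; decode back to c, then re-offset by s.
    c = encode(decode([wi ^ si for wi, si in zip(w_noisy, s)]))
    return [ci ^ si for ci, si in zip(c, s)]

def extract(w):
    # Toy stand-in for a strong extractor applied to the recovered w.
    return hashlib.sha256(bytes(w)).hexdigest()
```

A real instantiation would match the error-correcting code to the expected noise rate of the biometric source and pair the sketch with a genuine randomness extractor to produce the key R.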

2002

EPRINT

On the Security of Joint Signature and Encryption
Abstract

We formally study the notion of a joint signature and encryption in
the public-key setting. We refer to this primitive as {\em
signcryption}, adapting the terminology of Zheng [Zhe97]. We present
two definitions for the security of signcryption depending on whether
the adversary is an outsider or a legal user of the system. We then
examine generic sequential composition methods of building
signcryption from a signature and encryption scheme. Contrary to
what recent results in the symmetric setting [BN00,Kra01] might
lead one to expect, we show that classical ``encrypt-then-sign''
(EtS) and ``sign-then-encrypt'' (StE) methods are both {\em secure}
composition methods in the public-key setting.
We also present a new composition method which we call
``commit-then-encrypt-and-sign'' (CtE&S). Unlike the generic
sequential composition methods, CtE&S applies the expensive
signature and encryption operations {\em in parallel}, which could
imply a gain in efficiency over the StE and EtS schemes. We
also show that the new CtE&S method elegantly combines with the
recent ``hash-sign-switch'' technique of Shamir and Tauman [ST01],
leading to efficient {\em on-line/off-line} signcryption.
Finally and of independent interest, we discuss the {\em definitional}
inadequacy of the standard notion of chosen-ciphertext (CCA)
security. Motivated by our applications to signcryption, we show
that the notion of CCA-security is syntactically ill-defined, and
leads to artificial examples of ``secure'' encryption schemes which
do not meet the formal definition of CCA-security. We suggest a
natural and very slight relaxation of CCA-security, which we call
generalized CCA-security (gCCA). We show that gCCA-security suffices
for all known uses of CCA-secure encryption, while no longer
suffering from the definitional shortcomings of the latter.
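The composition orders compared in the paper are easy to see in code. The following sketch illustrates encrypt-then-sign (EtS) only structurally: HMAC stands in for the signature scheme and a hash-derived keystream for the encryption scheme, purely to show the order of operations; the paper's setting uses public-key signature and encryption schemes.

```python
import hashlib
import hmac
import os

# Toy stand-ins for illustration only (NOT the paper's public-key primitives).
def encrypt(ek: bytes, m: bytes) -> bytes:
    nonce = os.urandom(16)
    stream = hashlib.sha256(ek + nonce).digest()  # one-block toy keystream
    assert len(m) <= len(stream)
    return nonce + bytes(a ^ b for a, b in zip(m, stream))

def decrypt(ek: bytes, c: bytes) -> bytes:
    nonce, body = c[:16], c[16:]
    stream = hashlib.sha256(ek + nonce).digest()
    return bytes(a ^ b for a, b in zip(body, stream))

def sign(sk: bytes, m: bytes) -> bytes:
    return hmac.new(sk, m, hashlib.sha256).digest()

def verify(sk: bytes, m: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(sk, m), tag)

# Encrypt-then-Sign (EtS): encrypt first, then sign the ciphertext.
def ets_signcrypt(sk, ek, m):
    c = encrypt(ek, m)
    return c, sign(sk, c)

def ets_unsigncrypt(sk, ek, c, tag):
    if not verify(sk, c, tag):
        return None          # reject before decrypting
    return decrypt(ek, c)
```

Sign-then-encrypt (StE) simply swaps the two steps, signing the message and encrypting message plus signature; CtE&S replaces the sequential dependency with a commitment so the two expensive operations can run in parallel.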

2002

EPRINT

Key-Insulated Public-Key Cryptosystems
Abstract

Cryptographic computations (decryption, signature generation, etc.)
are often performed on a relatively insecure device (e.g., a mobile
device or an Internet-connected host) which cannot be trusted to
maintain secrecy of the private key. We propose and investigate the
notion of \emph{key-insulated security} whose goal is to minimize the damage
caused by secret-key exposures. In our model, the secret key(s)
stored on the insecure device are refreshed at discrete time periods
via interaction with a physically-secure --- but
computationally-limited --- device which stores a ``master key''. All
cryptographic computations are still done on the insecure device, and
the public key remains unchanged. In a (t, N)-key-insulated scheme, an
adversary who compromises the insecure device and obtains secret keys
for up to t periods of his choice is unable to violate the security
of the cryptosystem for \emph{any} of the remaining N-t periods.
Furthermore, the scheme remains secure (for \emph{all} time periods)
against an adversary who compromises \emph{only} the physically-secure
device.
We notice that key-insulated schemes significantly improve the security
guarantee of forward-secure schemes [A97,BM99], in which exposure
of the secret key at even a single time period (necessarily)
compromises the security of the system for all future time
periods. This improvement is achieved with minimal cost: infrequent
key updates with a (possibly untrusted) secure device.
We focus primarily on key-insulated public-key encryption. We construct a
(t,N)-key-insulated encryption scheme based on any (standard) public-key
encryption scheme, and give a more efficient construction based on the
DDH assumption. The latter construction is then extended to achieve
chosen-ciphertext security.

2002

EPRINT

On the Power of Claw-Free Permutations
Abstract

Probabilistic Signature Scheme (PSS), Full Domain Hash (FDH) and
several of their variants are widely used signature schemes, which can
be formally analyzed in the random oracle model. These schemes output
a signature of the form (f^{-1}(y),pub), where y somehow depends
on the message signed (and pub) and f is some public trapdoor
permutation (typically RSA). Interestingly, all these signature
schemes can be proven {\em asymptotically} secure for an {\em
arbitrary} trapdoor permutation f, but their {\em exact} security
seems to be significantly better for {\em special} trapdoor
permutations like RSA. This leads to two natural questions:
(1) can the asymptotic security analysis be improved with general
trapdoor permutations?; and, if not, (2) what {\em general
cryptographic assumption} on f --- enjoyed by specific functions
like RSA --- is ``responsible'' for the improved security?
We answer both these questions. First, we show that if f is a
``black-box'' trapdoor permutation, then the poor exact security is
unavoidable. More specifically, the ``security loss'' for general
trapdoor permutations is \Omega(q_hash), where q_hash is the
number of random oracle queries made by the adversary (which could be
quite large). On the other hand, we show that all the security
benefits of the RSA-based variants come into effect once f comes
from a family of {\em claw-free permutation} pairs. Our results
significantly narrow the current ``gap'' between general trapdoor
permutations and RSA to the ``gap'' between trapdoor permutations and
claw-free permutations. Additionally, they can be viewed as the first
{\em security/efficiency} separation between these basic cryptographic
primitives. In other words, while it was already believed that certain
cryptographic objects {\em can} be built from claw-free permutations
but {\em not} from general trapdoor permutations, we show that certain
important schemes (like FDH and PSS) provably work with {\em either},
but enjoy a much better tradeoff between security and efficiency when
deployed with claw-free permutations.
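The FDH signature shape discussed above, sigma = f^{-1}(H(m)), is easy to sketch with RSA as the trapdoor permutation. The following toy uses the textbook parameters p=61, q=53, e=17 (far too small for real security) purely to make the structure runnable.

```python
import hashlib

# Toy RSA-FDH sketch; real FDH uses a full-size RSA modulus.
p, q = 61, 53                 # toy primes (illustration only)
n = p * q                     # 3233
e = 17                        # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, d*e = 1 mod phi(n)

def H(m: bytes) -> int:
    # "Full domain" hash: map the message into Z_n (toy version).
    return int.from_bytes(hashlib.sha256(m).digest(), 'big') % n

def fdh_sign(m: bytes) -> int:
    # sigma = f^{-1}(H(m)): invert the trapdoor permutation on the hash.
    return pow(H(m), d, n)

def fdh_verify(m: bytes, sigma: int) -> bool:
    # Accept iff f(sigma) = H(m).
    return pow(sigma, e, n) == H(m)
```

RSA is exactly the kind of claw-free-style trapdoor permutation for which the paper shows the tight security bounds kick in, whereas a generic black-box f provably loses a factor of Omega(q_hash).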

2002

EPRINT

Efficient Construction of (Distributed) Verifiable Random Functions
Abstract

We give the first simple and efficient construction of {\em verifiable
random functions} (VRFs). VRFs, introduced by Micali et al. [MRV99],
combine the properties of regular pseudorandom functions (PRFs)
[GGM86] (i.e., indistinguishability from a random function) and
digital signatures [GMR88] (i.e., one can provide an unforgeable proof
that the VRF value is correctly computed). The efficiency of our VRF
construction is only slightly worse than that of a regular PRF
construction of Naor and Reingold [NR97]. In contrast to ours, the
previous VRF constructions [MRV99,Lys02] all involved an expensive
generic transformation from verifiable unpredictable functions (VUFs),
while our construction is simple and direct.
We also provide the first construction of {\em distributed} VRFs. Our
construction is more efficient than the only known construction of
distributed (non-verifiable) PRFs [Nie02], but has more applications
than the latter. For example, it can be used to distributively
implement the random oracle model in a {\em publicly verifiable}
manner, which by itself has many applications (e.g., constructing
threshold signature schemes).
Our main construction is based on a new variant of decisional
Diffie-Hellman (DDH) assumption on certain groups where the regular
DDH assumption does {\em not} hold. We do not make any claims about
the validity of our assumption (which we call {\em sum-free} DDH, or
sf-DDH). However, this assumption seems to be plausible based on our
{\em current} understanding of certain candidate elliptic and
hyper-elliptic groups which were recently proposed for use in
cryptography [JN01,Jou00]. We hope that the demonstrated power of our
sf-DDH assumption will serve as a motivation for its closer study.
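The PRF core that the construction builds on, the Naor-Reingold function F_a(x) = g^{a_0 * prod_{x_i=1} a_i}, can be sketched in a few lines. This toy omits the proofs that make the function *verifiable* (the paper's actual contribution) and uses tiny, hypothetical group parameters.

```python
import secrets

# Toy Naor-Reingold PRF core (no verifiability proofs; illustration only).
p = 1019                 # small prime with (p-1)/2 = 509 also prime
q = (p - 1) // 2         # order of the subgroup we work in
g = 4                    # generator of the order-q subgroup (a square mod p)
L = 8                    # input length in bits

# Secret key: exponents a_0, ..., a_L, each a nonzero element of Z_q.
key = [secrets.randbelow(q - 1) + 1 for _ in range(L + 1)]

def nr_prf(key, x_bits):
    # F_a(x) = g^(a_0 * prod of a_i over positions where x_i = 1) mod p.
    exp = key[0]
    for a_i, bit in zip(key[1:], x_bits):
        if bit:
            exp = (exp * a_i) % q
    return pow(g, exp, p)
```

In the VRF, evaluating this function is accompanied by a proof (secure under the paper's sf-DDH assumption) that the output was computed correctly under the committed key.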

#### Program Committees

- Eurocrypt 2018
- Eurocrypt 2016
- TCC 2015 (Program chair)
- Crypto 2014
- TCC 2014
- Crypto 2012
- Crypto 2011
- Asiacrypt 2010
- Crypto 2008
- TCC 2008
- Eurocrypt 2006
- Eurocrypt 2005
- Crypto 2004
- Eurocrypt 2003

#### Coauthors

- Divesh Aggarwal (1)
- Shweta Agrawal (1)
- Joël Alwen (4)
- Jee Hea An (4)
- Elena Andreeva (1)
- Boaz Barak (1)
- Alexander Bienstock (1)
- Allison Bishop (1)
- Andrey Bogdanov (1)
- Carl Bosley (2)
- Xavier Boyen (1)
- Ran Canetti (3)
- David Cash (1)
- Dario Catalano (1)
- Melissa Chase (1)
- Rahul Chatterjee (1)
- Benoît Cogliati (1)
- Sandro Coretti (8)
- Jean-Sébastien Coron (2)
- Ronald Cramer (2)
- Yan Zong Ding (1)
- Pooya Farshim (1)
- Nelly Fazio (4)
- Serge Fehr (2)
- Michael J. Freedman (2)
- Chaya Ganesh (1)
- Rosario Gennaro (1)
- Shafi Goldwasser (1)
- Alexander Golovnev (1)
- Paul Grubbs (1)
- Siyao Guo (4)
- Iftach Haitner (1)
- Shai Halevi (3)
- Kristiyan Haralambiev (3)
- Johan Håstad (1)
- Russell Impagliazzo (1)
- Yuval Ishai (1)
- Zahra Jafargholi (1)
- Abhishek Jain (1)
- Ragesh Jaiswal (1)
- Stanislaw Jarecki (1)
- Ari Juels (2)
- Valentine Kabanets (1)
- Yael Tauman Kalai (1)
- Harish Karthikeyan (1)
- Jonathan Katz (10)
- Aggelos Kiayias (3)
- Eike Kiltz (1)
- Daniel Kraschewski (1)
- Hugo Krawczyk (2)
- Eyal Kushilevitz (1)
- Wenke Lee (1)
- Pil Joong Lee (2)
- Jooyoung Lee (1)
- Richard J. Lipton (1)
- Tianren Liu (3)
- Adriana López-Alt (4)
- Anna Lysyanskaya (1)
- Cécile Malinaud (1)
- Tal Malkin (1)
- Avradip Mandal (1)
- Ueli Maurer (1)
- Sogol Mazaheri (1)
- Bart Mennink (1)
- Silvio Micali (2)
- Eric Miles (1)
- Ilya Mironov (3)
- Tal Moran (1)
- Moni Naor (1)
- Antonio Nicolosi (1)
- Roberto Oliveira (1)
- Rafail Ostrovsky (3)
- Carles Padró (2)
- Rafael Pass (2)
- Chris Peikert (1)
- Olivier Pereira (1)
- Krzysztof Pietrzak (8)
- Bartosz Przydatek (1)
- Prashant Puniya (5)
- Tal Rabin (4)
- Leonid Reyzin (8)
- Thomas Ristenpart (6)
- Ronald L. Rivest (1)
- Paul Rösler (1)
- Ron D. Rothblum (1)
- Amit Sahai (2)
- Gil Segev (1)
- Yannick Seurin (1)
- Adi Shamir (2)
- Emily Shen (1)
- Victor Shoup (2)
- Thomas Shrimpton (1)
- Adam Smith (9)
- Martijn Stam (2)
- François-Xavier Standaert (1)
- John P. Steinberger (8)
- Noah Stephens-Davidowitz (5)
- Björn Tackmann (3)
- Aris Tentes (1)
- Stefano Tessaro (3)
- Aishwarya Thiruvengadam (1)
- Yiannis Tselekounis (1)
- Salil P. Vadhan (4)
- Vinod Vaikuntanathan (4)
- Daniele Venturi (3)
- Ivan Visconti (1)
- Shabsi Walfish (8)
- Daniel Wichs (18)
- Joanne Woodage (2)
- Zhiye Xie (1)
- Shouhuai Xu (3)
- Aleksandr Yampolskiy (4)
- Yanqing Yao (2)
- Danfeng Yao (1)
- Yu Yu (2)
- Dae Hyun Yum (2)
- Moti Yung (7)
- Zhe Zhang (1)