International Association for Cryptologic Research

CryptoDB

Moni Naor

Publications

Year · Venue · Title
2024 · CRYPTO · MPC for Tech Giants (GMPC): Enabling Gulliver and the Lilliputians to Cooperate Amicably
In the current digital world, large organizations (sometimes referred to as tech giants) provide services to extremely large numbers of users. The service provider is often interested in computing various data analyses over the private data of its users, who in turn have an incentive to cooperate but do not necessarily trust the service provider. In this work, we introduce the Gulliver multi-party computation model (GMPC) to realistically capture this scenario. The GMPC model considers a single highly powerful party, called the server or Gulliver, that is connected to $n$ users over a star-topology network (alternatively formulated as a full network in which the server can block any message). The users are significantly less powerful than the server and, in particular, should have computation and communication complexities that are polylogarithmic in $n$. Protocols in the GMPC model should be secure against malicious adversaries that may corrupt a subset of the users and/or the server. Designing protocols in the GMPC model is a delicate task, since a user can only hold information about (and, in particular, communicate with) $\mathrm{polylog}(n)$ other users. In addition, the server can block any message between any pair of honest parties, so reaching agreement becomes a challenging task. Nevertheless, we design generic protocols in the GMPC model, assuming that at most an $\alpha<1/8$ fraction of the users may be corrupted (in addition to the server). Our main contribution is a variant of Feige's committee election protocol [FOCS 1999] that is secure in the GMPC model; a background sketch of the underlying election idea follows below. Given this tool we show:
1. Assuming fully homomorphic encryption (FHE), any computationally efficient function with $O(n\cdot\mathrm{polylog}(n))$-size output can be securely computed in the GMPC model.
2. Any function that can be computed by a circuit of $O(\mathrm{polylog}(n))$ depth, $O(n\cdot\mathrm{polylog}(n))$ size, and bounded fan-in and fan-out can be securely computed in the GMPC model assuming vector commitment schemes (without assuming FHE).
3. In particular, sorting can be securely computed in the GMPC model assuming vector commitment schemes. This has important applications for the shuffle model of differential privacy, and resolves an open question of Bell et al. [CCS 2020].
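As background for the committee-election tool, here is a minimal sketch of the lightest-bin election idea from Feige's protocol [FOCS 1999] in a plain, fully connected model. The paper's actual variant must additionally survive a server that can block messages and users that see only $\mathrm{polylog}(n)$ peers, none of which this toy addresses; the function name and parameters are illustrative.

```python
import random
from collections import defaultdict

def lightest_bin_committee(n, committee_size, rng=random):
    """Feige-style lightest-bin election: each party announces a uniformly
    random bin, and the parties in the least-populated bin become the
    committee. Corrupted parties may choose bins adversarially, but since
    the winning bin is the *lightest*, they cannot pack it: if at most an
    alpha fraction of all parties is corrupted, the corrupted fraction
    inside the committee stays close to alpha with high probability."""
    num_bins = max(1, n // committee_size)
    bins = defaultdict(list)
    for party in range(n):
        bins[rng.randrange(num_bins)].append(party)
    return min(bins.values(), key=len)

committee = lightest_bin_committee(n=100_000, committee_size=50)
print(len(committee))  # roughly committee_size parties
```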
2024 · CRYPTO · That’s not my signature! Fail-stop signatures for a post-quantum world
The Snowden revelations kick-started a community-wide effort to develop cryptographic tools against mass surveillance. In this work, we propose to add another primitive to that toolbox: fail-stop signatures (FSS) [EC'89]. FSS are digital signatures enhanced with a forgery-detection mechanism that can protect a computationally bounded signer from more powerful attackers. Despite the fascinating concept, research in this area stalled after the '90s. However, the ongoing transition to post-quantum cryptography, with its hiccups due to the novelty of the underlying assumptions, has become the perfect use case for FSS. This paper aims to reboot research on FSS with practical use in mind: our framework for FSS includes "fine-grained" security definitions (that assume a powerful but bounded adversary, e.g., one that can break 128 bits of security but not 256). As an application, we show new FSS constructions for the post-quantum setting. We show that FSS are equivalent to standard, provably secure digital signatures that do not require rewinding or programming random oracles, and that this implies lattice-based FSS. Our main construction is an FSS version of SPHINCS, which required building FSS versions of all its building blocks: WOTS, XMSS, and FORS. In the process, we identify and provide generic solutions for two fundamental issues that arise when deriving a large number of private keys from a single seed and when building FSS for hash-and-sign-based signatures.
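For intuition about the forgery-detection mechanism, here is a toy one-time fail-stop signature in the classical van Heijst-Pedersen style from the EC'89 line of work; it is discrete-log-based with deliberately insecure toy parameters, shown purely to illustrate the fail-stop property (the paper's post-quantum constructions are instead hash-based, built from SPHINCS components). Many signing keys map to the same verification key, so even an unbounded forger almost surely signs with a different key, and the difference hands the honest signer a discrete logarithm she could not have computed herself: the proof of forgery.

```python
import random

q, p = 11, 23          # toy subgroup order q and prime p = 2q + 1 (NOT secure)
g, h = 4, 9            # two order-q generators; log_g(h) = 8, unknown to the signer

def keygen():
    x1, x2, y1, y2 = (random.randrange(q) for _ in range(4))
    pk = (pow(g, x1, p) * pow(h, x2, p) % p,
          pow(g, y1, p) * pow(h, y2, p) % p)
    return (x1, x2, y1, y2), pk

def sign(sk, m):
    x1, x2, y1, y2 = sk
    return ((x1 + m * y1) % q, (x2 + m * y2) % q)

def verify(pk, m, sig):
    s1, s2 = sig
    return pow(g, s1, p) * pow(h, s2, p) % p == pk[0] * pow(pk[1], m, p) % p

def prove_forgery(sk, m, forged):
    """Any valid forgery differing from the signer's own signature on m
    reveals log_g(h); exhibiting that discrete log proves a forgery."""
    s1, s2 = sign(sk, m)
    f1, f2 = forged
    if (f1, f2) == (s1, s2):
        return None                               # forger guessed the signer's signature (prob 1/q)
    return (s1 - f1) * pow(f2 - s2, -1, q) % q    # = log_g(h)

sk, pk = keygen()
m = 5
sig = sign(sk, m)
assert verify(pk, m, sig)
s1, s2 = sig
forged = ((s1 - 8) % q, (s2 + 1) % q)             # an unbounded forger (knowing log_g(h)) can find this
assert verify(pk, m, forged) and forged != sig
print(prove_forgery(sk, m, forged))               # prints 8: the proof of forgery
```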
2022 · CRYPTO · CHIP and CRISP: Protecting All Parties Against Compromise through Identity-Binding PAKEs
Recent advances in password-based authenticated key exchange (PAKE) protocols can offer stronger security guarantees for globally deployed security protocols. Notably, the OPAQUE protocol [EUROCRYPT 2018] realizes strong asymmetric PAKE (saPAKE), strengthening the protection that aPAKE offers against compromised servers: after compromising an saPAKE server, the adversary still has to perform a full brute-force search to recover any passwords or impersonate users. However, (s)aPAKEs do not protect client storage, and can only be applied in the so-called asymmetric setting, in which some parties, such as servers, do not communicate with each other using the protocol. Nonetheless, passwords are also widely used in symmetric settings, where a group of parties share a password and all communicate with each other (e.g., Wi-Fi with client devices, routers, and mesh nodes, or industrial IoT scenarios). In these settings, the (s)aPAKE techniques cannot be applied, and the state of the art still involves handling plaintext passwords. In this work, we propose the notions of (strong) identity-binding PAKEs, which improve this situation: they protect against the compromise of any party, and can also be applied in the symmetric setting. We propose counterparts to state-of-the-art security notions from the asymmetric setting in the UC model, and construct protocols that provably realize them. Our constructions bind the local storage of all parties to abstract identities, building on ideas from identity-based key exchange, but without requiring a third party. Our first protocol, CHIP, generalizes the security of aPAKE protocols to all parties, forcing the adversary to perform a brute-force search to recover passwords or impersonate others. Our second protocol, CRISP, additionally renders any adversarial pre-computation useless, thereby offering saPAKE-like guarantees for all parties, instead of only the server. We evaluate prototype implementations of our protocols and show that even though they offer stronger security for real-world use cases, their performance is in line with, or even better than, that of state-of-the-art protocols.
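To see what "rendering pre-computation useless" buys, here is a toy, protocol-agnostic illustration (explicitly not CHIP or CRISP, and not a PAKE at all): against unsalted password storage an attacker can precompute a dictionary table before any compromise, whereas a secret random salt forces the brute-force search to begin only after the storage leaks. CRISP's guarantee is this saPAKE-style, post-compromise-only search, extended to every party.

```python
import hashlib, os

DICTIONARY = ["hunter2", "correcthorse", "password1"]

# Pre-computation attack on unsalted storage: the table is built in advance,
# so a stolen digest is inverted instantly.
table = {hashlib.sha256(pw.encode()).hexdigest(): pw for pw in DICTIONARY}
stolen_unsalted = hashlib.sha256(b"hunter2").hexdigest()
print(table[stolen_unsalted])             # instant recovery after compromise

# Salted storage: the dictionary search can only start once (salt, digest)
# leaks, i.e., a full brute-force effort *after* the compromise.
salt = os.urandom(16)
stolen_salted = hashlib.sha256(salt + b"hunter2").hexdigest()
recovered = next(pw for pw in DICTIONARY
                 if hashlib.sha256(salt + pw.encode()).hexdigest() == stolen_salted)
print(recovered)                          # found, but only post-compromise
```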
2022 · CRYPTO · Low Communication Complexity Protocols, Collision Resistant Hash Functions and Secret Key-Agreement Protocols
Shahar Cohen, Moni Naor
We study communication complexity in computational settings where bad inputs may exist, but should be hard to find for any computationally bounded adversary. We define a model where there is a source of public randomness but the inputs are chosen by a computationally bounded adversarial participant after seeing the public randomness. We show that breaking some known communication lower bounds in this model is closely connected to known cryptographic assumptions. We consider the simultaneous messages model and the interactive communication model and show that for any non-trivial predicate (such as equality):
1. Breaking the $\Omega(\sqrt{n})$ bound in the simultaneous messages case or the $\Omega(\log n)$ bound in the interactive communication case implies the existence of distributional collision-resistant hash functions (dCRH). Note that with a CRH the lower bounds can be broken (see the sketch below). This is shown using techniques from Babai and Kimmel [CCC 1997].
2. There are no constant-communication protocols in this preset-randomness setting (unlike in the plain public-randomness model).
The other model we study is that of a stateful "free talk", where participants can communicate freely before the inputs are chosen and may maintain a state, and the communication complexity is measured only afterwards. We show that efficient protocols for equality in this model imply secret key-agreement protocols (in a constructive sense). On the other hand, secret key-agreement protocols imply protocols for equality that are optimal in terms of error.
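A minimal sketch of the direction the abstract mentions in passing: with a collision-resistant hash, equality can be decided in the simultaneous messages model with far less than $\sqrt{n}$ communication, because an efficient adversary who makes the referee err must have found a collision. SHA-256 stands in here for a hash whose key would be derived from the public randomness fixed before the adversary picks the inputs; message length is independent of the input length $n$.

```python
import hashlib

def player_message(inp: bytes) -> bytes:
    # Each player sends only a 256-bit digest, regardless of n = 8 * len(inp).
    return hashlib.sha256(inp).digest()

def referee(msg_a: bytes, msg_b: bytes) -> bool:
    # Declares "equal" iff the digests match. A wrong answer on x != y
    # requires h(x) == h(y), i.e., a hash collision, which a computationally
    # bounded input-choosing adversary cannot find.
    return msg_a == msg_b

x = y = b"a" * 1_000_000                  # n = 8,000,000 input bits
print(referee(player_message(x), player_message(y)))   # True
```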
2022 · TCC · Bet-or-Pass: Adversarially Robust Bloom Filters
Moni Naor, Noa Oved
A Bloom filter is a data structure that maintains a succinct and probabilistic representation of a set of elements from a universe and supports approximate membership queries. The price of the succinctness is allowing some error, namely false positives: for any element not in the set, it might answer 'Yes', but only with a small (non-negligible) probability. When dealing with such data structures in adversarial settings, we need to define the correctness guarantee and formalize the requirement that bad events happen infrequently and that false positives are appropriately distributed. Recently, several papers investigated this topic, suggesting different robustness definitions. In this work, we try to unify this line of research and propose several robustness notions for Bloom filters that allow queries to be chosen adaptively. The goal is that a robust Bloom filter should behave like a random biased coin even against an adaptive adversary. The robustness definitions are formalized by the type of test the Bloom filter should withstand. We then explore the relationships between these notions and highlight the notion of Bet-or-Pass as capturing the desired properties of such a data structure.
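For reference, here is a textbook Bloom filter, the object whose adversarial robustness the paper pins down (parameters illustrative; the k hash functions are simulated by salting a single hash). In the Bet-or-Pass view, an adaptive adversary who has made many queries may either pass or bet that a fresh element will be a false positive; a robust filter keeps the bet no better than the prescribed error rate, matching the "random biased coin" goal above.

```python
import hashlib

class BloomFilter:
    """Standard Bloom filter: m bits, k salted hash positions per item."""
    def __init__(self, m_bits=1 << 16, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _indexes(self, item: bytes):
        for i in range(self.k):
            d = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(d[:8], "big") % self.m

    def add(self, item: bytes):
        for ix in self._indexes(item):
            self.bits[ix // 8] |= 1 << (ix % 8)

    def query(self, item: bytes) -> bool:
        # 'False' is always correct; 'True' may be a false positive, which
        # is exactly the event an adaptive adversary tries to predict.
        return all(self.bits[ix // 8] >> (ix % 8) & 1 for ix in self._indexes(item))

bf = BloomFilter()
bf.add(b"alice")
print(bf.query(b"alice"), bf.query(b"mallory"))   # True, almost surely False
```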
2019 · JOFC · Hardness-Preserving Reductions via Cuckoo Hashing
The focus of this work is hardness-preserving transformations of somewhat limited pseudorandom function families (PRFs) into ones with more versatile characteristics. Consider the problem of domain extension of pseudorandom functions: given a PRF that takes as input elements of some domain $\mathcal{U}$, we would like to come up with a PRF over a larger domain. Can we do it with little work and without significantly impacting the security of the system? One approach is to first hash the larger domain into the smaller one and then apply the original PRF. Such a reduction, however, is vulnerable to a “birthday attack”: after $\sqrt{|\mathcal{U}|}$ queries to the resulting PRF, a collision (i.e., two distinct inputs having the same hash value) is very likely to occur. As a consequence, the resulting PRF is insecure against an attacker making this number of queries. In this work, we show how to go beyond the aforementioned birthday attack barrier by replacing the above simple hashing approach with a variant of cuckoo hashing, a hashing paradigm that resolves collisions in a table by using two hash functions and two tables, cleverly assigning each element to one of the two tables (see the sketch below). We use this approach to obtain: (i) a domain extension method that requires just two calls to the original PRF, can withstand as many queries as the original domain size, and has a distinguishing probability that is exponentially small in the amount of non-cryptographic work; and (ii) a security-preserving reduction from non-adaptive to adaptive PRFs.
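Here is a minimal sketch of plain cuckoo hashing (Pagh and Rodler), the collision-resolution paradigm the paper adapts; the PRF domain extension built on top of it (two PRF calls per query, one per table) is not shown, and Python's built-in hash stands in for the hash family.

```python
import random

class CuckooHash:
    """Two tables, two hash functions; every key lives in exactly one of its
    two candidate slots, so a lookup probes at most two positions."""
    def __init__(self, size=64):
        self.size = size
        self.tables = [[None] * size, [None] * size]
        self.seeds = [random.randrange(2**32) for _ in range(2)]

    def _slot(self, t, key):
        return hash((self.seeds[t], key)) % self.size

    def insert(self, key, max_kicks=64):
        t = 0
        for _ in range(max_kicks):
            ix = self._slot(t, key)
            if self.tables[t][ix] is None:
                self.tables[t][ix] = key
                return True
            # Slot occupied: evict the resident and re-home it in the other table.
            self.tables[t][ix], key = key, self.tables[t][ix]
            t = 1 - t
        return False    # probable cycle; a real table rebuilds with fresh seeds

    def lookup(self, key):
        return any(self.tables[t][self._slot(t, key)] == key for t in (0, 1))

ch = CuckooHash()
ok = all(ch.insert(k) for k in range(40))        # load ~0.31; cycles are rare
print(ok, all(ch.lookup(k) for k in range(40)))
```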
2019 · TCC · Incrementally Verifiable Computation via Incremental PCPs
If I commission a long computation, how can I check that the result is correct without redoing the computation myself? This is the question that efficient verifiable computation deals with. In this work, we address the issue of verifying the computation as it unfolds. That is, at any intermediate point in the computation, I would like to see a proof that the current state is correct. Ideally, these proofs should be short, non-interactive, and easy to verify. In addition, the proof at each step should be generated efficiently by updating the previous proof, without recomputing the entire proof from scratch. This notion, known as incrementally verifiable computation, was introduced by Valiant [TCC 2008] about a decade ago. Existing solutions follow the approach of recursive proof composition and can be based on strong and non-falsifiable cryptographic assumptions (so-called “knowledge assumptions”).

In this work, we present a new framework for constructing incrementally verifiable computation schemes in both the publicly verifiable and designated-verifier settings. Our designated-verifier scheme is based on somewhat homomorphic encryption (which can be based on Learning with Errors), and our publicly verifiable scheme is based on the notion of zero-testable homomorphic encryption, which can be constructed from ideal multilinear maps [Paneth and Rothblum, TCC 2017].

Our framework is anchored around the new notion of a probabilistically checkable proof (PCP) with incremental local updates. An incrementally updatable PCP proves the correctness of an ongoing computation, where after each computation step, the value of every symbol can be updated locally without reading any other symbol. This update results in a new PCP for the correctness of the next step in the computation. Our primary technical contribution is constructing such an incrementally updatable PCP. We show how to combine updatable PCPs with recently suggested (ordinary) verifiable computation schemes to obtain our results.
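The paper's incrementally updatable PCP is well beyond a listing sketch, but the "local update of a succinct certificate" flavor can be illustrated with a standard Merkle tree over the computation's tape (explicitly not the paper's construction): when one step changes one cell, the commitment is refreshed by recomputing only the O(log n) hashes on that cell's root path, never reading the rest of the tree.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

class MerkleTape:
    """Merkle tree over a tape of n leaves (n a power of two). The root is a
    succinct commitment; update() refreshes it with log(n) hash calls."""
    def __init__(self, leaves):
        self.n = len(leaves)
        self.nodes = [b""] * self.n + [H(x) for x in leaves]
        for i in range(self.n - 1, 0, -1):
            self.nodes[i] = H(self.nodes[2 * i], self.nodes[2 * i + 1])

    def update(self, index, value):
        i = self.n + index
        self.nodes[i] = H(value)
        while i > 1:                       # recompute only the root path
            i //= 2
            self.nodes[i] = H(self.nodes[2 * i], self.nodes[2 * i + 1])

    @property
    def root(self):
        return self.nodes[1]

tape = MerkleTape([b"0"] * 8)
tape.update(3, b"1")                       # one computation step, local update
print(tape.root.hex()[:16])
```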
2018 · EUROCRYPT
2018 · TCC
2018 · TCC · The Security of Lazy Users in Out-of-Band Authentication
Moni Naor, Lior Rotem, Gil Segev
Faced with the threats posed by man-in-the-middle attacks, messaging platforms rely on “out-of-band” authentication, assuming that users have access to an external channel for authenticating one short value. For example, assuming that users who recognize each other’s voice can authenticate a short value, Telegram and WhatsApp ask their users to compare 288-bit and 200-bit values, respectively. The existing protocols, however, do not take into account the plausible behavior of users who may be “lazy” and only compare parts of these values (rather than their entirety).

Motivated by such security-critical user behavior, we study the security of lazy users in out-of-band authentication. We start by showing that both the protocol implemented by WhatsApp and the statistically optimal protocol of Naor, Segev and Smith (CRYPTO ’06) are completely vulnerable to man-in-the-middle attacks when the users consider only half of the out-of-band authenticated value. In this light, we put forward a framework that captures the behavior and security of lazy users. Our notions of security consider both statistical security and computational security, and for each flavor we derive a lower bound on the tradeoff between the number of positions that are considered by the lazy users and the adversary’s forgery probability.

Within our framework we then provide two authentication protocols. First, in the statistical setting, we present a transformation that converts any out-of-band authentication protocol into one that is secure even when executed by lazy users. Instantiating our transformation with a new refinement of the protocol of Naor et al. results in a protocol whose tradeoff essentially matches our lower bound in the statistical setting. Then, in the computational setting, we show that the computationally optimal protocol of Vaudenay (CRYPTO ’05) is secure even when executed by lazy users, and its tradeoff matches our lower bound in the computational setting.
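Back-of-the-envelope numbers, with constants omitted (see the paper for the exact statistical and computational bounds): a lazy user who compares only k of the l authenticated bits can hope for a forgery probability on the order of 2^-k at best, and the abstract's negative results show that the deployed protocols do far worse, failing completely at k = l/2.

```python
# Ideal tradeoff only: comparing k of l out-of-band bits caps security near
# 2^-k. The paper shows WhatsApp's protocol and the Naor-Segev-Smith protocol
# deliver none of this once k = l/2 (a complete man-in-the-middle break).
for app, l in (("Telegram", 288), ("WhatsApp", 200)):
    for k in (l, l // 2, l // 4):
        print(f"{app}: compare {k}/{l} bits -> forgery probability at best ~ 2^-{k}")
```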
2017 · JOFC
2016 · CRYPTO
2016 · CRYPTO
2016 · TCC
2016 · JOFC
2015 · TCC
2015 · TCC
2015 · CRYPTO
2015 · ASIACRYPT
2014 · CRYPTO
2014 · ASIACRYPT
2013 · TCC
2010 · EUROCRYPT
2009 · TCC
2009 · TCC
2009 · ASIACRYPT
2009 · CRYPTO
2008 · TCC
2006 · CRYPTO
2006 · CRYPTO
2006 · EUROCRYPT
2006 · EUROCRYPT
2006 · JOFC
2005 · CRYPTO
2005 · EUROCRYPT
2005 · TCC
2005 · JOFC
2004 · EUROCRYPT
2003 · CRYPTO
2003 · CRYPTO
2002 · CRYPTO
2002 · JOFC
2001 · CRYPTO
2000 · ASIACRYPT
2000 · CRYPTO
1999 · CRYPTO
1999 · EUROCRYPT
1999 · JOFC
1998 · CRYPTO
1998 · CRYPTO
1998 · EUROCRYPT
1998 · EUROCRYPT
1998 · JOFC
1998 · JOFC
1997 · CRYPTO
1997 · CRYPTO
1996 · JOFC
1994 · CRYPTO
1994 · CRYPTO
1994 · EUROCRYPT
1993 · CRYPTO
1993 · CRYPTO
1992 · CRYPTO
1992 · CRYPTO
1992 · CRYPTO
1991 · JOFC
1989 · CRYPTO
1988 · CRYPTO

Program Committees

TCC 2015
TCC 2008
Eurocrypt 2007 (Program chair)
Crypto 2006
Crypto 2005
TCC 2004 (Program chair)
Eurocrypt 2003
PKC 2003
Asiacrypt 2000
Eurocrypt 2000
Crypto 1997
Crypto 1995
Crypto 1991