International Association for Cryptologic Research

CryptoDB

Rafail Ostrovsky

Affiliation: UCLA

Publications

Year
Venue
Title
2021
EUROCRYPT
Alibi: A Flaw in Cuckoo-Hashing based Hierarchical ORAM Schemes and a Solution
There once was a table of hashes
That held extra items in stashes
It all seemed like bliss
But things went amiss
When the stashes were stored in the caches

The first Oblivious RAM protocols introduced the "hierarchical solution" (STOC '90), where the server stores a series of hash tables of geometrically increasing capacities. Each ORAM query would read a small number of locations from each level of the hierarchy, and each level of the hierarchy would be reshuffled and rebuilt at geometrically increasing intervals to ensure that no single query was ever repeated twice at the same level. This yielded an ORAM protocol with polylogarithmic overhead. Later works extended and improved the hierarchical solution, replacing traditional hashing with cuckoo hashing (ICALP '11) and cuckoo hashing with a combined stash (Goodrich et al., SODA '12). In this work, we identify a subtle flaw in the protocol of Goodrich et al. (SODA '12) that uses cuckoo hashing with a stash in the hierarchical ORAM solution. We give a concrete distinguishing attack against this type of hierarchical ORAM that uses cuckoo hashing with a \emph{combined} stash. This security flaw has propagated to at least 5 subsequent hierarchical ORAM protocols, including the recent optimal ORAM scheme, OptORAMa (Eurocrypt '20). In addition to our attack, we identify a simple fix that does not increase the asymptotic complexity. We note, however, that our attack only affects the more recent \emph{hierarchical ORAMs}; it does not affect the early protocols that predate the use of cuckoo hashing, or other types of ORAM solutions (e.g. Path ORAM or Circuit ORAM).
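The levels in this hierarchy are cuckoo hash tables with a stash. Purely as background (our own non-oblivious sketch, not the paper's construction), the following Python toy shows the lookup pattern such a table induces: two hash positions plus the stash, which is exactly the access pattern the hierarchical schemes must make oblivious.

```python
import random

class CuckooWithStash:
    """Toy cuckoo hash table with a small stash (illustrative only;
    a real hierarchical ORAM must hide which positions are probed)."""

    def __init__(self, capacity, stash_size=4, max_kicks=50):
        self.capacity = capacity
        self.tables = [[None] * capacity, [None] * capacity]
        self.stash = []                      # overflow items live here
        self.stash_size = stash_size
        self.max_kicks = max_kicks
        self.seeds = (random.random(), random.random())

    def _pos(self, key, which):
        return hash((self.seeds[which], key)) % self.capacity

    def insert(self, key, value):
        item, which = (key, value), 0
        for _ in range(self.max_kicks):
            p = self._pos(item[0], which)
            if self.tables[which][p] is None:
                self.tables[which][p] = item
                return
            # position occupied: evict its occupant and retry in the other table
            self.tables[which][p], item = item, self.tables[which][p]
            which = 1 - which
        if len(self.stash) < self.stash_size:
            self.stash.append(item)          # overflow goes to the stash
        else:
            raise RuntimeError("rebuild needed")

    def lookup(self, key):
        # A lookup probes exactly two table positions plus the stash.
        for which in (0, 1):
            entry = self.tables[which][self._pos(key, which)]
            if entry is not None and entry[0] == key:
                return entry[1]
        for k, v in self.stash:
            if k == key:
                return v
        return None

t = CuckooWithStash(capacity=8)
for i in range(6):
    t.insert(f"k{i}", i)
assert t.lookup("k3") == 3
```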
2021
EUROCRYPT
Threshold Garbled Circuits and Ad Hoc Secure Computation
Michele Ciampi Vipul Goyal Rafail Ostrovsky
Garbled Circuits (GCs) represent fundamental and powerful tools in cryptography, and many variants of GCs have been considered since their introduction. An important property of garbled circuits is that they can be evaluated securely if and only if exactly one key for each input wire is obtained: no fewer and no more. In this work we study the case when: 1) some of the wire-keys are missing, but we are still interested in computing the output of the garbled circuit, and 2) the evaluator of the GC might have both keys for a constant number of wires. We begin by studying this question in terms of non-interactive multi-party computation (NIMPC), which is strongly connected with GCs. In this notion, there is a fixed number of parties (n) that can get correlated information from a trusted setup. These parties can then send an encoding of their input to an evaluator, which can compute the output of the function. Similarly to the notion of ad hoc secure computation proposed by Beimel et al. [ITCS 2016], we consider the case where fewer than n parties participate in the online phase, and in addition we let these parties collude with the evaluator. We refer to this notion as Threshold NIMPC. We show that when the number of parties participating in the online phase is a fixed threshold l <= n, it is possible to securely evaluate any l-input function. We build our result on top of a new secret-sharing scheme (which may be of independent interest) and on the results proposed by Benhamouda, Krawczyk and Rabin [Crypto 2017]. Our protocol can be used to compute any function in NC1 in the information-theoretic setting and any function in P assuming one-way functions. As a second (and main) contribution, we consider a slightly different notion of security in which the number of parties that can participate in the online phase is not specified and can be any number c above the threshold l (in this case the evaluator cannot collude with the other parties). We resolve an open question of Beimel, Ishai and Kushilevitz [Eurocrypt 2017], showing how to build a secure protocol for the case when c is constant, under the Learning with Errors assumption.
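The abstract says the construction rests on a new secret-sharing scheme, which it does not spell out. Purely as generic background on threshold secret sharing (our own illustration, not the paper's scheme), here is a minimal Shamir l-out-of-n sharing over a prime field:

```python
import random

P = 2**61 - 1  # a Mersenne prime; the field the shares live in

def share(secret, l, n):
    """Split `secret` into n shares so that any l of them reconstruct it."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(l - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any l shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, l=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[1:4]) == 123456789
```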
2020
TCC
Round Optimal Secure Multiparty Computation from Minimal Assumptions 📺
We construct a four round secure multiparty computation (MPC) protocol in the plain model that achieves security against any dishonest majority. The security of our protocol relies only on the existence of four round oblivious transfer. This culminates the long line of research on constructing round-efficient MPC from minimal assumptions (at least w.r.t. black-box simulation).
2020
EUROCRYPT
Resource-Restricted Cryptography: Revisiting MPC Bounds in the Proof-of-Work Era 📺
Traditional bounds on synchronous Byzantine agreement (BA) and secure multi-party computation (MPC) establish that, in the absence of a private correlated-randomness setup such as a PKI, protocols can tolerate up to $t<n/3$ of the parties being malicious. The introduction of "Nakamoto-style" consensus, based on Proof-of-Work (PoW) blockchains, put forth a somewhat different flavor of BA, showing that even a majority of corrupted parties can be tolerated as long as the majority of the computation resources remain in honest hands. This assumption of an honest majority of some resource was also extended to other resources, such as stake, space, etc., upon which blockchains achieving Nakamoto-style consensus were built that violate the $t<n/3$ bound in terms of the number of corrupted parties. The above state of affairs raises the question of whether the seeming mismatch is due to different goals and models, or whether the resource-restricting paradigm can be generically used to circumvent the $n/3$ lower bound. In this work we study this question and formally demonstrate how the above paradigm changes the rules of the game in cryptographic definitions. First, we abstract the core properties that the resource-restricting paradigm offers by means of a functionality {\em wrapper}, in the UC framework, which when applied to a standard point-to-point network restricts the ability (of the adversary) to send new messages. We show that such a wrapped network can be implemented using the resource-restricting paradigm---concretely, using PoWs and an honest majority of computing power---and that the traditional $t<n/3$ impossibility results fail when the parties have access to such a network. Our construction is in the {\em fresh} Common Reference String (CRS) model---i.e., it assumes a CRS which becomes available to the parties at the same time as to the adversary. We then present constructions for BA and MPC which, given access to such a network, tolerate $t<n/2$ corruptions without assuming a private correlated randomness setup. We also show how to remove the freshness assumption from the CRS by leveraging the power of a random oracle. Our MPC protocol achieves the standard notion of MPC security, where parties might have dedicated roles, as is for example the case in Oblivious Transfer protocols. This is in contrast to existing solutions basing MPC on PoWs, which associate roles to pseudonyms but do not link these pseudonyms with the actual parties.
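The wrapped network is realized from Proofs of Work and an honest majority of computing power. As a rough illustration of the underlying resource restriction only (our own sketch, not the paper's wrapper functionality), a hash-based PoW makes producing each new message cost a tunable amount of hashing work:

```python
import hashlib
import os

DIFFICULTY = 16  # leading zero bits required; tune to the desired work factor

def solve_pow(message: bytes) -> bytes:
    """Grind nonces until SHA-256(message || nonce) has DIFFICULTY leading zero bits."""
    while True:
        nonce = os.urandom(8)
        digest = hashlib.sha256(message + nonce).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
            return nonce

def verify_pow(message: bytes, nonce: bytes) -> bool:
    digest = hashlib.sha256(message + nonce).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

nonce = solve_pow(b"hello")
assert verify_pow(b"hello", nonce)
```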
2020
CRYPTO
On Succinct Arguments and Witness Encryption from Groups 📺
Succinct non-interactive arguments (SNARGs) enable proofs of NP statements with very low communication. Recently, there has been significant work in both theory and practice on constructing SNARGs with very short proofs. Currently, the state-of-the-art in succinctness is due to Groth (Eurocrypt 2016) who constructed a SNARG from bilinear maps where the proof consists of just 3 group elements. In this work, we first construct a concretely-efficient designated-verifier (preprocessing) SNARG with inverse polynomial soundness, where the proof consists of just 2 group elements in a standard (generic) group. This leads to a 50% reduction in concrete proof size compared to Groth's construction. We follow the approach of Bitansky et al. (TCC 2013) who describe a compiler from linear PCPs to SNARGs in the preprocessing model. Our improvement is based on a new linear PCP packing technique that allows us to construct 1-query linear PCPs which can then be compiled into a SNARG (using ElGamal encryption over a generic group). An appealing feature of our new SNARG is that the verifier can precompute a statement-independent lookup table in an offline phase; verifying proofs then only requires 2 exponentiations and a single table lookup. This makes our new designated-verifier SNARG appealing in settings that demand fast verification and minimal communication. We then turn to the question of constructing arguments where the proof consists of a single group element. Here, we first show that any (possibly interactive) argument for a language L where the verification algorithm is "generic" (i.e., only performs generic group operations) and the proof consists of a single group element, implies a witness encryption scheme for L. We then show that under a yet-unproven, but highly plausible, hypothesis on the hardness of approximating the minimal distance of linear codes, we can construct a 2-message laconic argument for NP where the proof consists of a single group element. Under the same hypothesis, we obtain a witness encryption scheme for NP in the generic group model. Along the way, we show that under a conceptually-similar but proven hardness of approximation result, there is a 2-message laconic argument for NP with negligible soundness error where the prover's message consists of just 2 group elements. In both settings, we obtain laconic arguments (and linear PCPs) with linear decision procedures. Our constructions circumvent a previous lower bound by Groth on such argument systems with linear decision procedures by relying on imperfect completeness. Namely, our constructions have vanishing but not negligible completeness error, while the lower bound of Groth implicitly assumes negligible completeness error of the underlying argument. Our techniques thus highlight new avenues for designing linear PCPs, succinct arguments, and witness encryption schemes.
2020
TCC
Efficient Range-Trapdoor Functions and Applications: Rate-1 OT and More
Sanjam Garg Mohammad Hajiabadi Rafail Ostrovsky
Substantial work on trapdoor functions (TDFs) has led to many powerful notions and applications. However, despite tremendous work and progress, all known constructions have prohibitively large public keys. In this work, we introduce new techniques for realizing so-called range-trapdoor hash functions with short public keys. This notion, introduced by Döttling et al. [Crypto 2019], allows for encoding a range of indices into a public key in such a way that the public key leaks no information about the range, yet an associated trapdoor enables recovery of the corresponding input part. We give constructions of range-trapdoor hash functions where, for a given range $I$, the public key consists of $O(n)$ group elements, improving upon the $O(n |I|)$ achieved by Döttling et al. Moreover, by designing our evaluation algorithm in a special way involving Toeplitz matrix multiplication, and by showing how to perform fast Fourier transforms in the exponent, we arrive at $O(n \log n)$ group operations for evaluation, improving upon the $O(n^2)$ required by previous constructions. Our constructions rely on power-DDH assumptions in pairing-free groups. As applications of our results we obtain:
- The first construction of (rate-1) lossy TDFs with public keys consisting of a linear number of group elements (without pairings).
- Rate-1 string OT with receiver communication complexity of $O(n)$ group elements, where $n$ is the sender's message size, improving upon $O(n^2)$ [Crypto 2019]. This leads to a similar result in the context of private information retrieval (PIR).
- Semi-compact homomorphic encryption for branching programs: a construction of homomorphic encryption for branching programs, with ciphertexts consisting of $O(\lambda n d)$ group elements, improving upon $O(\lambda^2 n d)$. Here $\lambda$ denotes the security parameter, $n$ the input size and $d$ the depth of the program.
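The $O(n \log n)$ evaluation cost comes from casting evaluation as a Toeplitz matrix-vector product and computing it with FFTs. Below is that standard trick over the complex numbers in NumPy, as our own sketch; the paper performs the analogous computation "in the exponent" of a group, and all variable names here are ours.

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply the Toeplitz matrix T (given by its first column and first row,
    with first_col[0] == first_row[0]) by x in O(n log n) via circulant embedding."""
    n = len(x)
    # Embed T into a 2n x 2n circulant whose first column is
    # [c_0, ..., c_{n-1}, 0, r_{n-1}, ..., r_1].
    circ_col = np.concatenate([first_col, [0], first_row[:0:-1]])
    x_padded = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(circ_col) * np.fft.fft(x_padded))
    return y[:n].real

n = 512
c = np.random.randn(n)                                  # first column of T
r = np.concatenate([[c[0]], np.random.randn(n - 1)])    # first row of T
x = np.random.randn(n)
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)] for i in range(n)])
assert np.allclose(T @ x, toeplitz_matvec(c, r, x))
```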
2020
JOFC
Oblivious Sampling with Applications to Two-Party k-Means Clustering
Paul Bunn Rafail Ostrovsky
The k-means clustering problem is one of the most explored problems in data mining. With the advent of protocols that have proven to be successful in performing single-database clustering, the focus has shifted in recent years to the question of how to extend the single-database protocols to a multiple-database setting. To date, there have been numerous attempts to create specific multiparty k-means clustering protocols that protect the privacy of each database, but according to the standard cryptographic definitions of “privacy-protection”, so far all such attempts have fallen short of providing adequate privacy. In this paper, we describe a Two-Party k-Means Clustering Protocol that guarantees privacy against an honest-but-curious adversary, and is more efficient than utilizing a general multiparty “compiler” to achieve the same task. In particular, a main contribution of our result is a way to efficiently compute multiple iterations of k-means clustering without revealing the intermediate values. To achieve this, we describe a technique for performing two-party division securely and also introduce a novel technique allowing two parties to securely sample uniformly at random from an unknown domain size. The resulting Division Protocol and Random Value Protocol are of use to any protocol that requires the secure computation of a quotient or random sampling. Our techniques can be realized based on the existence of any semantically secure homomorphic encryption scheme. For concreteness, we describe our protocol based on the Paillier homomorphic encryption scheme (see Paillier, Advances in Cryptology, EUROCRYPT '99, LNCS 1592, pp. 223-238, 1999). We also demonstrate that our protocol is efficient in terms of communication, remaining competitive with existing protocols (such as Jagannathan and Wright, KDD '05, pp. 593-599, 2005) that fail to protect privacy.
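The concrete instantiation uses Paillier's additively homomorphic encryption. A toy Paillier sketch in Python (deliberately insecure parameters, our own illustration) shows the additive homomorphism that the Division and Random Value protocols rely on:

```python
import math
import random

# Toy Paillier with tiny primes: illustrates the homomorphism only,
# not a secure parameter choice.
p, q = 1009, 1013
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m):
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

def add(c1, c2):
    """Homomorphic addition: Enc(m1) * Enc(m2) decrypts to m1 + m2 (mod n)."""
    return (c1 * c2) % n2

c = add(enc(41), enc(1))
assert dec(c) == 42
```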
2019
CRYPTO
Cryptographic Sensing 📺
Is it possible to measure a physical object in a way that makes the measurement signals unintelligible to an external observer? Alternatively, can one learn a natural concept by using a contrived training set that makes the labeled examples useless without the line of thought that has led to their choice? We initiate a study of “cryptographic sensing” problems of this type, presenting definitions, positive and negative results, and directions for further research.
2019
EUROCRYPT
Private Anonymous Data Access 📺
We consider a scenario where a server holds a huge database that it wants to make accessible to a large group of clients. After an initial setup phase, clients should be able to read arbitrary locations in the database while maintaining privacy (the server does not learn which locations are being read) and anonymity (the server does not learn which client is performing each read). This should hold even if the server colludes with a subset of the clients. Moreover, the run-time of both the server and the client during each read operation should be low, ideally only poly-logarithmic in the size of the database and the number of clients. We call this notion Private Anonymous Data Access (PANDA). PANDA simultaneously combines aspects of Private Information Retrieval (PIR) and Oblivious RAM (ORAM). PIR has no initial setup, and allows anybody to privately and anonymously access a public database, but the server’s run-time is linear in the data size. On the other hand, ORAM achieves poly-logarithmic server run-time, but requires an initial setup after which only a single client with a secret key can access the database. The goal of PANDA is to get the best of both worlds: allow many clients to privately and anonymously access the database as in PIR, while having an efficient server as in ORAM. In this work, we construct bounded-collusion PANDA schemes, where the efficiency scales linearly with a bound on the number of corrupted clients that can collude with the server, but is otherwise poly-logarithmic in the data size and the total number of clients. Our solution relies on standard assumptions, namely the existence of fully homomorphic encryption, and combines techniques from both PIR and ORAM. We also extend PANDA to settings where clients can write to the database.
2019
CRYPTO
Trapdoor Hash Functions and Their Applications 📺
We introduce a new primitive, called trapdoor hash functions (TDH), which are hash functions $\mathsf{H}: \{0,1\}^n \rightarrow \{0,1\}^\lambda$ with additional trapdoor function-like properties. Specifically, given an index $i \in [n]$, TDHs allow for sampling an encoding key $\mathsf{ek}$ (that hides i) along with a corresponding trapdoor. Furthermore, given $\mathsf{H}(x)$, a hint value $\mathsf{E}(\mathsf{ek},x)$, and the trapdoor corresponding to $\mathsf{ek}$, the $i^{th}$ bit of x can be efficiently recovered. In this setting, one of our main questions is: How small can the hint value $\mathsf{E}(\mathsf{ek},x)$ be? We obtain constructions where the hint is only one bit long based on DDH, QR, DCR, or LWE. This primitive opens a floodgate of applications for low-communication secure computation. We mainly focus on two-message protocols between a receiver and a sender, with private inputs x and y, resp., where the receiver should learn f(x, y). We wish to optimize the (download) rate of such protocols, namely the asymptotic ratio between the size of the output and the sender’s message. Using TDHs, we obtain:
1. The first protocols for (two-message) rate-1 string OT based on DDH, QR, or LWE. This has several useful consequences, such as:
(a) The first constructions of PIR with communication cost poly-logarithmic in the database size based on DDH or QR. These protocols are in fact rate-1 when considering block PIR.
(b) The first constructions of a semi-compact homomorphic encryption scheme for branching programs, where the encrypted output grows only with the program length, based on DDH or QR.
(c) The first constructions of lossy trapdoor functions with input to output ratio approaching 1 based on DDH, QR or LWE.
(d) The first constant-rate LWE-based construction of a 2-message “statistically sender-private” OT protocol in the plain model.
2. The first rate-1 protocols (under any assumption) for n parallel OTs and matrix-vector products from DDH, QR or LWE.
We further consider the setting where f evaluates a RAM program y with running time $T \ll |x|$ on x. We obtain the first protocols with communication sublinear in the size of x, namely $T \cdot \sqrt{|x|}$ or $T \cdot \sqrt[3]{|x|}$, based on DDH or, resp., pairings (and correlated-input secure hash functions).
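The TDH syntax above can be pinned down as a small interface. The following Python stub (names are ours; no secure instantiation is given) simply records the four algorithms and their correctness contract:

```python
from dataclasses import dataclass
from typing import Protocol, Tuple

# Interface-only sketch of a trapdoor hash function (TDH), following the syntax
# described in the abstract; the naming is ours and nothing here is instantiated.

@dataclass
class EncodingKey:      # must computationally hide the index i
    data: bytes

@dataclass
class Trapdoor:         # lets its holder recover bit i from (H(x), hint)
    data: bytes

class TrapdoorHash(Protocol):
    def hash(self, x: bytes) -> bytes:
        """H: {0,1}^n -> {0,1}^lambda."""
        ...

    def sample_key(self, i: int) -> Tuple[EncodingKey, Trapdoor]:
        """Sample (ek, td) for index i; ek must hide i."""
        ...

    def encode(self, ek: EncodingKey, x: bytes) -> bytes:
        """The hint E(ek, x); the constructions above make this a single bit."""
        ...

    def decode(self, td: Trapdoor, digest: bytes, hint: bytes) -> int:
        """Recover the i-th bit of x from H(x), the hint, and the trapdoor."""
        ...
```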
2019
CRYPTO
Universally Composable Secure Computation with Corrupted Tokens 📺
We introduce the corrupted token model. This model generalizes the tamper-proof token model proposed by Katz (EUROCRYPT ’07), relaxing the trust assumption on the honest behavior of tokens. Our model is motivated by the real-world practice of outsourcing hardware production to possibly corrupted manufacturers. We capture the malicious behavior of token manufacturers by allowing the adversary to corrupt the tokens of honest players at the time of their creation. We show that under minimal complexity assumptions, i.e., the existence of one-way functions, it is possible to UC-securely realize (a variant of) the tamper-proof token functionality of Katz in the corrupted token model with n stateless tokens, assuming that the adversary corrupts at most $n-1$ of them (for any $n>0$). We apply this result to existing multi-party protocols in Katz’s model to achieve UC-secure MPC in the corrupted token model assuming only the existence of one-way functions. Finally, we show how to obtain the above results using tokens of small size that take only short inputs. The technique in this result can also be used to improve the assumption of UC-secure hardware obfuscation recently proposed by Nayak et al. (NDSS ’17). While their construction requires the existence of collision-resistant hash functions, we can obtain the same result from only one-way functions. Moreover, using our main result we can improve the trust assumption on the tokens as well.
2019
CRYPTO
Reusable Non-Interactive Secure Computation 📺
We consider the problem of Non-Interactive Two-Party Secure Computation (NISC), where Rachel wishes to publish an encryption of her input x, in such a way that any other party, who holds an input y, can send her a single message which conveys to her the value f(x, y), and nothing more. We demand security against malicious parties. While such protocols are easy to construct using garbled circuits and general non-interactive zero-knowledge proofs, this approach inherently makes a non-black-box use of the underlying cryptographic primitives and is infeasible in practice. Ishai et al. (Eurocrypt 2011) showed how to construct NISC protocols that only use parallel calls to an ideal oblivious transfer (OT) oracle, and additionally make only a black-box use of any pseudorandom generator. Combined with the efficient 2-message OT protocol of Peikert et al. (Crypto 2008), this leads to a practical approach to NISC that has been implemented in subsequent works. However, a major limitation of all known OT-based NISC protocols is that they are subject to selective failure attacks that allow a malicious sender to entirely compromise the security of the protocol when the receiver’s first message is reused. Motivated by the failure of the OT-based approach, we consider the problem of basing reusable NISC on parallel invocations of a standard arithmetic generalization of OT known as oblivious linear-function evaluation (OLE). We obtain the following results:
- We construct an information-theoretically secure reusable NISC protocol for arithmetic branching programs and general zero-knowledge functionalities in the OLE-hybrid model. Our zero-knowledge protocol only makes an absolute constant number of OLE calls per gate in an arithmetic circuit whose satisfiability is being proved. We also get reusable NISC in the OLE-hybrid model for general Boolean circuits using any one-way function.
- We complement this by a negative result, showing that reusable NISC is impossible to achieve in the OT-hybrid model. This provides a formal justification for the need to replace OT by OLE.
- We build a universally composable 2-message reusable OLE protocol in the CRS model that can be based on the security of Paillier encryption and requires only a constant number of modular exponentiations. This provides the first arithmetic analogue of the 2-message OT protocols of Peikert et al. (Crypto 2008).
- By combining our NISC protocol in the OLE-hybrid model and the 2-message OLE protocol, we get protocols with new attractive asymptotic and concrete efficiency features. In particular, we get the first (designated-verifier) NIZK protocols for NP where, following a statement-independent preprocessing, both proving and verifying are entirely “non-cryptographic” and involve only a constant computational overhead. Furthermore, we get the first statistical designated-verifier NIZK argument for NP under an assumption related to factoring.
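For reference, the OLE functionality that these protocols invoke in parallel can be stated in a few lines. This is our own trusted-party sketch of the standard functionality over a prime field, not anything specific to the paper:

```python
P = 2**61 - 1  # prime field for the OLE functionality

def ideal_ole(sender_a, sender_b, receiver_x):
    """Ideal oblivious linear-function evaluation: the receiver, holding x,
    learns a*x + b for the sender's secret line (a, b) and nothing else;
    the sender learns nothing about x."""
    return (sender_a * receiver_x + sender_b) % P

# Example: one parallel OLE call delivers one evaluation of the sender's line.
assert ideal_ole(3, 5, 10) == 35
```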
2019
TCC
Lower and Upper Bounds on the Randomness Complexity of Private Computations of AND
We consider multi-party information-theoretic private protocols, and specifically their randomness complexity. The randomness complexity of private protocols is of interest both because random bits are considered a scarce resource, and because of the relation between that complexity measure and other complexity measures of boolean functions such as the circuit size or the sensitivity of the function being computed [12, 17]. More concretely, we consider the randomness complexity of the basic boolean function AND, which serves as a building block in the design of many private protocols. We show that AND cannot be privately computed using a single random bit, thus giving the first non-trivial lower bound on the 1-private randomness complexity of an explicit boolean function, $f: \{0,1\}^n \rightarrow \{0,1\}$. We further show that the function AND, on any number of inputs n (one input bit per player), can be privately computed using 8 random bits (and 7 random bits in the special case of $n=3$ players), improving the upper bound of 73 random bits implicit in [17]. Together with our lower bound, we thus approach the exact determination of the randomness complexity of AND. To the best of our knowledge, the exact randomness complexity of private computation is not known for any explicit function (except for XOR, which is trivially 1-random, and for several degenerate functions).
2019
ASIACRYPT
UC-Secure Multiparty Computation from One-Way Functions Using Stateless Tokens
We revisit the problem of universally composable (UC) secure multiparty computation in the stateless hardware token model. We construct a three round multi-party computation protocol for general functions based on one-way functions where each party sends two tokens to every other party. Relaxing to the two-party case, we also construct a two round protocol based on one-way functions where each party sends a single token to the other party, and at the end of the protocol, both parties learn the output. One of the key components in the above constructions is a new two-round oblivious transfer protocol based on one-way functions using only one token, which can be reused an unbounded polynomial number of times. All prior constructions required either stronger complexity assumptions, a larger number of rounds, or a larger number of tokens.
2018
PKC
On the Message Complexity of Secure Multiparty Computation
Yuval Ishai Manika Mittal Rafail Ostrovsky
We study the minimal number of point-to-point messages required for general secure multiparty computation (MPC) in the setting of computational security against semi-honest, static adversaries who may corrupt an arbitrary number of parties. We show that for functionalities that take inputs from n parties and deliver outputs to k parties, $2n+k-3$ messages are necessary and sufficient. The negative result holds even when given access to an arbitrary correlated randomness setup. The positive result can be based on any 2-round MPC protocol (which in turn can be based on 2-message oblivious transfer), or on a one-way function given a correlated randomness setup.
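As a quick sanity check of the bound (our own arithmetic, not taken from the abstract): with n = 4 input parties and a single output recipient, k = 1, the formula gives 2*4 + 1 - 3 = 6 point-to-point messages as necessary and sufficient.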
2018
CRYPTO
Adaptive Garbled RAM from Laconic Oblivious Transfer
We give a construction of an adaptive garbled RAM scheme. In the adaptive setting, a client first garbles a “large” persistent database which is stored on a server. Next, the client can provide garblings of multiple adaptively and adversarially chosen RAM programs that execute and modify the stored database arbitrarily. The garbled database and the garbled programs should reveal nothing more than the running time and the output of the computation. Furthermore, the sizes of the garbled database and the garbled program grow only linearly in the size of the database and the running time of the executed program, respectively (up to polylogarithmic factors). The security of our construction is based on the assumption that laconic oblivious transfer (Cho et al., CRYPTO 2017) exists. Previously, such adaptive garbled RAM constructions were only known using indistinguishability obfuscation or in the random oracle model. As an additional application, we note that this work yields the first constant-round secure computation protocol for persistent RAM programs in the malicious setting from standard assumptions. Prior works did not support persistence in the malicious setting.
2018
CRYPTO
Continuously Non-Malleable Codes in the Split-State Model from Minimal Assumptions 📺
At ICS 2010, Dziembowski, Pietrzak and Wichs introduced the notion of non-malleable codes, a weaker form of error-correcting codes guaranteeing that the decoding of a tampered codeword either corresponds to the original message or to an unrelated value. The last few years established non-malleable codes as one of the recently invented cryptographic primitives with the highest impact and potential, with very challenging open problems and applications. In this work, we focus on so-called continuously non-malleable codes in the split-state model, as proposed by Faust et al. (TCC 2014), where a codeword is made of two shares and an adaptive adversary makes a polynomial number of attempts to tamper with the target codeword, where each attempt is allowed to modify the two shares independently (yet arbitrarily). Achieving continuous non-malleability in the split-state model has so far been very hard. Indeed, the only known constructions require strong setup assumptions (i.e., the existence of a common reference string) and strong complexity-theoretic assumptions (i.e., the existence of non-interactive zero-knowledge proofs and collision-resistant hash functions). As our main result, we construct a continuously non-malleable code in the split-state model without setup assumptions, requiring only one-to-one one-way functions (i.e., essentially optimal computational assumptions). Our result introduces several new ideas that make progress towards understanding continuous non-malleability, and shows interesting connections with protocol-design and proof-approach techniques used in other contexts (e.g., look-ahead simulation in zero-knowledge proofs, non-malleable commitments, and leakage resilience).
2018
TCC
Round Optimal Black-Box “Commit-and-Prove”
Motivated by theoretical and practical considerations, an important line of research is to design secure computation protocols that only make black-box use of cryptography. An important component in nearly all black-box secure computation constructions is a black-box commit-and-prove protocol. A commit-and-prove protocol allows a prover to commit to a value and prove a statement about this value while guaranteeing that the committed value remains hidden. A black-box commit-and-prove protocol implements this functionality while only making black-box use of cryptography. In this paper, we build several tools that enable constructions of round-optimal, black-box commit-and-prove protocols. In particular, assuming injective one-way functions, we design the first round-optimal, black-box commit-and-prove arguments of knowledge satisfying strong privacy against malicious verifiers, namely: zero-knowledge in four rounds and witness indistinguishability in three rounds. Prior to our work, the best known black-box protocols achieving commit-and-prove required more rounds. We additionally ensure that our protocols can be used, if needed, in the delayed-input setting, where the statement to be proven is decided only towards the end of the interaction. We also observe simple applications of our protocols towards achieving black-box four-round constructions of extractable and equivocal commitments. We believe that our protocols will provide a useful tool enabling several new constructions and easy round-efficient conversions from non-black-box to black-box protocols in the future.
2018
TCC
Information-Theoretic Broadcast with Dishonest Majority for Long Messages
Wutichai Chongchitmate Rafail Ostrovsky
Byzantine broadcast is a fundamental primitive for secure computation. In a setting with n parties in the presence of an adversary controlling at most t parties, while a lot of progress in optimizing communication complexity has been made for $t < n/2$, little progress has been made for the general case $t < n$, especially for information-theoretic security. In particular, all information-theoretically secure broadcast protocols for $\ell$-bit messages and $t < n$ with optimal round complexity $\mathcal{O}(n)$ have, so far, required a communication complexity of $\mathcal{O}(\ell n^2)$. A broadcast extension protocol allows a long message to be broadcast more efficiently using a small number of single-bit broadcasts. Through broadcast extension, so far, the best achievable round complexity for the $t < n$ setting with the optimal communication complexity of $\mathcal{O}(\ell n)$ is $\mathcal{O}(n^4)$ rounds. In this work, we construct a new broadcast extension protocol for $t < n$ with information-theoretic security. Our protocol improves the round complexity to $\mathcal{O}(n^3)$ while maintaining the optimal communication complexity for long messages. Our result shortens the gap between the information-theoretic setting and the computational setting, and between the optimal communication protocol and the optimal round protocol in the information-theoretic setting for $t < n$.
2018
ASIACRYPT
Non-interactive Secure Computation from One-Way Functions
The notion of non-interactive secure computation (NISC), first introduced in the work of Ishai et al. [EUROCRYPT 2011], studies the following problem: Suppose a receiver R wishes to publish an encryption of her secret input y so that any sender S with input x can then send a message m that reveals f(x, y) to R (for some function f). Here, m can be viewed as an encryption of f(x, y) that can be decrypted by R. NISC requires security against both malicious senders and receivers, and also requires the receiver’s message to be reusable across multiple computations (w.r.t. a fixed input of the receiver). All previous solutions to this problem necessarily rely upon OT (or specific number-theoretic assumptions), even in the common reference string model or the random oracle model, or in order to achieve weaker notions of security such as super-polynomial-time simulation. In this work, we construct a NISC protocol based on the minimal assumption of one-way functions, in the stateless hardware token model. Our construction achieves UC security and requires a single token sent by the receiver to the sender.
2017
EUROCRYPT
2017
PKC
2017
CRYPTO
2017
CRYPTO
2017
CRYPTO
2017
TCC
2017
TCC
2017
TCC
2016
EUROCRYPT
2016
CRYPTO
2016
CRYPTO
2016
TCC
2015
JOFC
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
TCC
2015
TCC
2015
EUROCRYPT
2015
CRYPTO
2015
CRYPTO
2015
CRYPTO
2015
CRYPTO
2014
CRYPTO
2014
CRYPTO
2014
EUROCRYPT
2014
PKC
2014
PKC
2014
TCC
2014
TCC
2014
TCC
2014
EPRINT
2014
EPRINT
2014
EPRINT
2014
EPRINT
2014
EPRINT
2014
EPRINT
2014
JOFC
2014
JOFC
2013
PKC
2013
TCC
2013
TCC
2013
TCC
2013
TCC
2013
CRYPTO
2013
ASIACRYPT
2013
ASIACRYPT
2013
EUROCRYPT
2013
EUROCRYPT
2012
TCC
2012
TCC
2012
TCC
2012
EUROCRYPT
2012
CRYPTO
2012
CRYPTO
2012
PKC
2012
PKC
2012
PKC
2011
CRYPTO
2011
CRYPTO
2011
EUROCRYPT
2011
ASIACRYPT
2010
TCC
2010
TCC
2010
EPRINT
Homomorphic Encryption Over Cyclic Groups Implies Chosen-Ciphertext Security
Brett Hemenway Rafail Ostrovsky
Chosen-Ciphertext (IND-CCA) security is generally considered the right notion of security for a cryptosystem. Because of its central importance much effort has been devoted to constructing IND-CCA secure cryptosystems. In this work, we consider the problem of constructing IND-CCA secure cryptosystems from (group) homomorphic encryption. Our main results are natural and efficient constructions of IND-CCA secure cryptosystems from any homomorphic encryption scheme that satisfies weak cyclic properties, either in the plaintext, ciphertext or randomness space. Our results have the added benefit of simple and elegant proofs.
2010
CRYPTO
2010
CRYPTO
2010
EUROCRYPT
2010
EPRINT
Correlated Product Security From Any One-Way Function and the New Notion of Decisional Correlated Product Security
Brett Hemenway Steve Lu Rafail Ostrovsky
It is well-known that the k-wise product of one-way functions remains one-way, but it may no longer be one-way when the k inputs are correlated. At TCC 2009, Rosen and Segev introduced a new notion known as Correlated Product secure functions. These functions have the property that a k-wise product of them remains one-way even under correlated inputs. Rosen and Segev gave a construction of injective trapdoor functions which are correlated product secure, from the existence of Lossy Trapdoor Functions (introduced by Peikert and Waters in STOC 2008). The first main result of this work shows the surprising fact that a family of correlated product secure functions can be constructed from any one-way function. Because correlated product secure functions are trivially one-way, this shows an equivalence between the existence of these two cryptographic primitives. In the second main result of this work, we consider a natural decisional variant of correlated product security. Roughly, a family of functions is Decisional Correlated Product (DCP) secure if $f_1(x_1),\ldots,f_k(x_1)$ is indistinguishable from $f_1(x_1),\ldots,f_k(x_k)$ when $x_1,\ldots,x_k$ are chosen uniformly at random. We argue that the notion of Decisional Correlated Product security is a very natural one. To this end, we draw a parallel between the Discrete Log and Decision Diffie-Hellman problems on the one hand, and Correlated Product security and its decisional variant on the other. This intuition gives very simple constructions of PRGs and IND-CPA encryption from DCP secure functions. Furthermore, we strengthen our first result by showing that the existence of DCP secure one-way functions is also equivalent to the existence of any one-way function. When considering DCP secure functions with trapdoors, we give a construction based on Lossy Trapdoor Functions, and show that any DCP secure function family with trapdoors satisfies the security requirements for Deterministic Encryption as defined by Bellare, Boldyreva and O'Neill in CRYPTO 2007. In fact, we also show that, definitionally, DCP secure functions with trapdoors are a strict subset of Deterministic Encryption functions, by showing an example of a Deterministic Encryption function which, according to the definition, is not a DCP secure function.
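To make the two distributions in the DCP definition concrete, here is a small sampler in Python (our own illustration; SHA-256 keyed by the function index is only a syntactic stand-in for the function family, not a proven one-way function):

```python
import hashlib
import os

def f(index, x):
    """Stand-in for the index-th function in the family (syntax only)."""
    return hashlib.sha256(bytes([index]) + x).digest()

def correlated_sample(k):
    """The 'correlated product' distribution: all k functions on the SAME input."""
    x1 = os.urandom(16)
    return [f(i, x1) for i in range(k)]

def independent_sample(k):
    """The comparison distribution: each function on an independent input."""
    return [f(i, os.urandom(16)) for i in range(k)]

# DCP security asks that these two k-tuples be computationally indistinguishable.
print(correlated_sample(3))
print(independent_sample(3))
```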
2010
EPRINT
Throughput-Optimal Routing in Unreliable Networks
Paul Bunn Rafail Ostrovsky
We demonstrate the feasibility of throughput-efficient routing in a highly unreliable network. Modeling a network as a graph with vertices representing nodes and edges representing the links between them, we consider two forms of unreliability: unpredictable edge-failures, and deliberate deviation from protocol specifications by corrupt nodes. The first form of unpredictability represents networks with dynamic topology, whose links may be constantly going up and down; while the second form represents malicious insiders attempting to disrupt communication by deliberately disobeying routing rules, by e.g. introducing junk messages or deleting or altering messages. We present a robust routing protocol for end-to-end communication that is simultaneously resilient to both forms of unreliability, achieving provably optimal throughput performance. Our proof proceeds in three steps: 1) We use competitive-analysis to find a lower-bound on the optimal throughput-rate of a routing protocol in networks susceptible to only edge-failures (i.e. networks with no malicious nodes); 2) We prove a matching upper bound by presenting a routing protocol that achieves this throughput rate (again in networks with no malicious nodes); and 3) We modify the protocol to provide additional protection against malicious nodes, and prove the modified protocol performs (asymptotically) as well as the original.
2010
EPRINT
Position-Based Quantum Cryptography
In this work, we initiate the study of position-based cryptography in the quantum setting. The aim of position-based cryptography is to use the geographical position of a party as its only credential. This has interesting applications, e.g., it enables two military bases to talk to each other over insecure (i.e. neither private nor authenticated) channels and without having any pre-shared key, with the guarantee that only parties within the bases learn the content of the conversation. We present schemes for several important position-based cryptographic tasks: positioning, authentication, and key exchange, and we prove them unconditionally secure, i.e., without assuming any restriction on the adversaries (beyond the laws of quantum mechanics). At the core of our security proofs lies the strong complementary information tradeoff recently introduced by Renes and Boileau. An attractive feature of all our schemes is that they only involve ``simple'' quantum operations, namely to prepare, communicate and measure-upon-arrival individual qubits. We stress that the above position-based tasks are impossible in the classical setting without limiting the adversary. Therefore, our work shows that position-based quantum cryptography is one of the rare examples besides QKD for which there is such a strong separation between classical and quantum cryptography. Besides the schemes for which we give rigorous security proofs, we also present a couple of significantly more efficient schemes for which we can merely conjecture security; proving them secure remains an interesting challenge. Our results open a fascinating new direction for position-based security in cryptography where security of protocols is solely based on the laws of physics and proofs of security do not require any pre-existing infrastructure.
2009
TCC
2009
TCC
2009
CRYPTO
2009
EPRINT
Re-randomizable Encryption implies Selective Opening Security
Brett Hemenway Rafail Ostrovsky
In this paper, we present new general constructions of commitments and encryptions secure against a Selective Opening Adversary (SOA). Although it was recognized almost twenty years ago that SOA security was important, it was not until the recent breakthrough works of Hofheinz (H08) and Bellare, Hofheinz and Yilek (BHY09) that any progress was made on this fundamental problem. The Selective Opening problem is as follows: suppose an adversary receives $n$ commitments (or encryptions) of (possibly) correlated messages, and now the adversary can choose $n/2$ of the messages, and receive decommitments (or decryptions \emph{and} the randomness used to encrypt them). Do the unopened commitments (encryptions) remain secure? A protocol which achieves this type of security is called \emph{secure against a Selective Opening Adversary (SOA)}. This question arises naturally in the context of Byzantine Agreement and Secure Multiparty Computation, where an active adversary is able to eavesdrop on all the wires, and then choose a subset of players to corrupt. Unfortunately, the traditional definitions of security (IND-CPA, IND-CCA) do not guarantee security in this setting. In this paper:
- We formally define re-randomizable encryption and show that every re-randomizable encryption scheme gives rise to efficient encryptions secure against a selective opening adversary. (Very informally, an encryption scheme is re-randomizable if a ciphertext can be publicly transformed into a fresh, identically distributed encryption of the same message.)
- We formally define re-randomizable one-way functions and show that every re-randomizable one-way function family gives rise to efficient commitments secure against a Selective Opening Adversary.
- Applying our constructions to the known cryptosystems of El-Gamal, Paillier, and Goldwasser and Micali, we obtain IND-SO secure commitments and encryptions from the Decisional Diffie-Hellman (DDH), Decisional Composite Residuosity (DCR) and Quadratic Residuosity (QR) assumptions, that are either simpler or more efficient than the existing constructions of Bellare, Hofheinz and Yilek. Applying our general results to the Paillier cryptosystem, we demonstrate the first cryptosystem to achieve Semantic Selective Opening security from the DCR assumption.
- We give black-box constructions of Perfectly Binding SOA secure commitments, which is surprising given the negative results of Bellare, Hofheinz and Yilek.
- We define the notion of adaptive chosen ciphertext security (CCA-2) in the selective opening setting, and describe the first encryption scheme which is CCA-2 secure (and simultaneously SOA-secure).
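As a concrete instance of the re-randomization property discussed above, here is textbook ElGamal over a small prime-order subgroup (toy parameters, our own code, not the paper's construction): anyone can turn a ciphertext into a fresh encryption of the same message, without the secret key, by multiplying in a random encryption of 1.

```python
import random

# Toy group: the order-q subgroup of quadratic residues in Z_p^*, p = 2q + 1.
q = 1019                   # prime
p = 2 * q + 1              # 2039, a safe prime
g = 4                      # 2^2 generates the order-q subgroup

sk = random.randrange(1, q)
h = pow(g, sk, p)          # public key

def encrypt(m):
    r = random.randrange(1, q)
    return (pow(g, r, p), m * pow(h, r, p) % p)

def rerandomize(ct):
    """Multiply by a fresh encryption of 1, i.e. (g^s, h^s); no secret key needed."""
    c1, c2 = ct
    s = random.randrange(1, q)
    return (c1 * pow(g, s, p) % p, c2 * pow(h, s, p) % p)

def decrypt(ct):
    c1, c2 = ct
    return c2 * pow(c1, q - sk, p) % p   # c1^(q - sk) = c1^(-sk), since c1 has order q

m = pow(g, 7, p)                          # messages live in the order-q subgroup
ct = encrypt(m)
ct2 = rerandomize(ct)
assert ct2 != ct and decrypt(ct2) == m
```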
2008
EUROCRYPT
2008
CRYPTO
2008
CRYPTO
2008
CRYPTO
2008
EPRINT
Constant-Round Concurrent Non-Malleable Commitments and Decommitments
Rafail Ostrovsky Giuseppe Persiano Ivan Visconti
In this paper we consider commitment schemes that are secure against concurrent poly-time man-in-the-middle (cMiM) attacks. Under such attacks, two possible notions of security for commitment schemes have been proposed in the literature: concurrent non-malleability with respect to commitment and concurrent non-malleability with respect to decommitment (i.e., opening). After the original notion of non-malleability introduced by [Dolev, Dwork and Naor STOC 91], which is based on the independence of the committed and decommitted message, a new and stronger notion of non-malleability was given in [Pass and Rosen STOC 05] by requiring that for any man-in-the-middle adversary there is a stand-alone adversary that succeeds with the same probability. Under this stronger security notion, a constant-round commitment scheme that is concurrent non-malleable only with respect to commitment was given in [Pass and Rosen FOCS 05] for the plain model, thus leaving as an open problem the construction of constant-round concurrent non-malleable commitments with respect to decommitment. In other words, in [Pass and Rosen FOCS 05] security against adversaries that mount concurrent man-in-the-middle attacks is guaranteed only during the commitment phase (under their stronger notion of non-malleability). The main result of this paper is a commitment scheme that is concurrent non-malleable with respect to both commitment and decommitment, under the stronger notion of [Pass and Rosen STOC 05]. This property protects against cMiM attacks mounted during both commitments and decommitments, which is a crucial security requirement in several applications, as in some digital auctions, in which players have to perform both commitments and decommitments. Our scheme uses a constant number of rounds of interaction in the plain model and is the first scheme that enjoys all these properties under the definitions of [Pass and Rosen FOCS 05]. We stress that, exactly as in [Pass and Rosen FOCS 05], we assume that commitments and decommitments are performed in two distinct phases that do not overlap in time.
2008
EPRINT
Public-Key Encryption with Efficient Amortized Updates
Searching and modifying public-key encrypted data (without having the decryption key) has received a lot of attention in recent literature. In this paper we revisit this important problem and achieve much better amortized communication-complexity bounds. Our solution resolves the main open question posed by Boneh et al. [BKOS07]. First, we consider the following much simpler to state problem (which turns out to be central for the above): A server holds a copy of Alice's database that has been encrypted under Alice's public key. Alice would like to allow other users in the system to replace a bit of their choice in the server's database by communicating directly with the server, despite other users not having Alice's private key. However, Alice requires that the server should not know which bit was modified. Additionally, she requires that the modification protocol should have "small" communication complexity (sub-linear in the database size). This task is referred to as private database modification, and is a central tool in building a more general protocol for modifying and searching over public-key encrypted data with small communication complexity. The problem was first considered by Boneh et al. [BKOS07]. The protocol of [BKOS07] to modify $1$ bit of an $N$-bit database has communication complexity $\mathcal{O}(\sqrt N)$. Naturally, one can ask if we can improve upon this. Unfortunately, [OS08] gives evidence to the contrary, showing that using current algebraic techniques this is not possible. In this paper, we ask the following question: what is the communication complexity when modifying $L$ bits of an $N$-bit database? Of course, one can achieve a naive communication complexity of $\mathcal{O}(L\sqrt N)$ by simply repeating the protocol of [BKOS07] $L$ times. Our main result is a private database modification protocol to modify $L$ bits of an $N$-bit database that has communication complexity $\mathcal{O}(\sqrt{NL^{1+\alpha}}\ \mathrm{polylog}\ N)$, where $0<\alpha<1$ is a constant. (We remark that, in contrast with the recent work of Lipmaa [L08] on the same topic, our database size \emph{does not grow} with every update, and stays exactly the same size.) As sample corollaries to our main result, we obtain the following:
- First, we apply our private database modification protocol to answer the main open question of [BKOS07]. More specifically, we construct a public-key encryption scheme supporting PIR queries that allows every message to have a non-constant number of keywords associated with it.
- Second, we show that one can apply our techniques to obtain more efficient communication complexity when parties wish to increment or decrement multiple cryptographic counters (formalized by Katz et al. [KMO01]).
We believe that "public-key encrypted" amortized database modification is an important cryptographic primitive in its own right and will be useful in other applications.
2008
EPRINT
Authenticated Adversarial Routing
Yair Amir Paul Bunn Rafail Ostrovsky
The aim of this paper is to demonstrate the feasibility of authenticated throughput-efficient routing in an unreliable and dynamically changing synchronous network in which a majority of the nodes may be malicious insiders that try to destroy and alter messages or disrupt communication in any way. More specifically, in this paper we seek to answer the following question: Given a network in which the majority of nodes are controlled by a malicious adversary and whose topology is changing every round, is it possible to develop a protocol with polynomially-bounded memory per processor that guarantees throughput-efficient and correct end-to-end communication? We answer the question affirmatively for extremely general corruption patterns: we only request that the topology of the network and the corruption pattern of the adversary leave at least one path each round connecting the sender and receiver through honest nodes (though this path may change at every round). Our construction works in the public-key setting and enjoys bounded memory per processor (which does not depend on the amount of traffic and is polynomial in the network size). Our protocol achieves optimal transfer rate with negligible decoding error. We stress that our protocol assumes no knowledge of which nodes are corrupted nor which path is reliable at any round, and is also fully distributed, with nodes making decisions locally, so that they need not know the topology of the network at any time. The optimality that we prove for our protocol is very strong. Given any routing protocol, we evaluate its efficiency (rate of message delivery) in the "worst case," that is, with respect to the worst possible graph and against the worst possible (polynomially bounded) adversarial strategy (subject to the above mentioned connectivity constraints). Using this metric, we show that there does not exist any protocol that can be asymptotically superior (in terms of throughput) to ours in this setting. We remark that the aim of our paper is to demonstrate via explicit example the feasibility of throughput-efficient authenticated adversarial routing. However, we stress that our protocol is not intended to provide a practical solution, as due to its complexity, no attempt thus far has been made to make the protocol practical by reducing constants or the large (though polynomial) memory requirements per processor. Our result is related to recent work of Barak, Goldberg and Xiao in 2008, who studied fault localization in networks assuming a private-key trusted setup setting. Our work, in contrast, assumes a public-key PKI setup and aims at not only fault localization, but also transmission optimality. Among other things, our work answers one of the open questions posed in the Barak et al. paper regarding fault localization on multiple paths. The use of a public-key setting to achieve strong error-correction results in networks was inspired by the work of Micali, Peikert, Sudan and Wilson, who showed that classical error-correction against a polynomially-bounded adversary can be achieved with surprisingly high precision. Our work is also related to an interactive coding theorem of Rajagopalan and Schulman, who showed that in noisy-edge static-topology networks a constant overhead in communication can also be achieved (provided none of the processors are malicious), thus establishing an optimal-rate routing theorem for static-topology networks.
Finally, our work is closely related to, and builds upon, the problem of End-to-End Communication in distributed networks, studied by Afek and Gafni; Awerbuch, Mansour, and Shavit; and Afek, Awerbuch, Gafni, Mansour, Rosen, and Shavit, though none of these papers consider or ensure correctness in the setting of a malicious adversary controlling the majority of the network.
2007
ASIACRYPT
2007
CRYPTO
2007
CRYPTO
2007
PKC
2007
EPRINT
Private Locally Decodable Codes
Rafail Ostrovsky Omkant Pandey Amit Sahai
We consider the problem of constructing efficient locally decodable codes in the presence of a computationally bounded adversary. Assuming the existence of one-way functions, we construct \emph{efficient} locally decodable codes with positive information rate and \emph{low} (almost optimal) query complexity which can correctly decode any given bit of the message from a constant channel error rate $\rho$. This compares favorably with the current state of knowledge for locally decodable codes without cryptographic assumptions. For all our constructions, and for any polynomial-time adversary, the probability that the decoding algorithm incorrectly decodes any bit of the message is negligible in the security parameter.
2007
EPRINT
A Survey of Single Database PIR: Techniques and Applications
Rafail Ostrovsky William E. Skeith III
In this paper we survey the notion of Single-Database Private Information Retrieval (PIR). The first Single-Database PIR was constructed in 1997 by Kushilevitz and Ostrovsky and since then Single-Database PIR has emerged as an important cryptographic primitive. For example, Single-Database PIR turned out to be intimately connected to collision-resistant hash functions, oblivious transfer and public-key encryptions with additional properties. In this survey, we give an overview of many of the constructions for Single-Database PIR (including an abstract construction based upon homomorphic encryption) and describe some of the connections of PIR to other primitives.
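As a toy end-to-end illustration of the abstract homomorphic-encryption-based construction the survey covers (in the style of the Kushilevitz-Ostrovsky scheme), the sketch below arranges the database as a square matrix and sends an additively homomorphic encryption of a unit selection vector. The Paillier instantiation uses deliberately tiny, insecure parameters, and all names are ours.

```python
import math
import random

# Toy additively homomorphic encryption (Paillier with insecure tiny primes),
# standing in for the abstract homomorphic scheme in the survey's construction.
p_, q_ = 1009, 1013
n = p_ * q_
n2, g = n * n, n + 1
lam = math.lcm(p_ - 1, q_ - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    return pow(g, m, n2) * pow(random.randrange(1, n), n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

# Single-database PIR with O(sqrt(N)) communication: view the N-bit database as
# an s x s matrix; the client sends s ciphertexts encrypting the unit vector
# that selects its row, the server returns one ciphertext per column, and the
# client decrypts only the column it cares about.
s = 4
db = [[random.randrange(2) for _ in range(s)] for _ in range(s)]
row, col = 2, 3                                               # the bit the client wants

query = [enc(1 if r == row else 0) for r in range(s)]         # client -> server
reply = [math.prod(pow(query[r], db[r][c], n2) for r in range(s)) % n2
         for c in range(s)]                                   # server -> client
assert dec(reply[col]) == db[row][col]                        # client decrypts
```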
2007
EPRINT
Attribute-Based Encryption with Non-Monotonic Access Structures
Rafail Ostrovsky Amit Sahai Brent Waters
We construct an Attribute-Based Encryption (ABE) scheme that allows a user's private key to be expressed in terms of any access formula over attributes. Previous ABE schemes were limited to expressing only monotonic access structures. We provide a proof of security for our scheme based on the Decisional Bilinear Diffie-Hellman (BDH) assumption. Furthermore, the performance of our new scheme compares favorably with existing, less-expressive schemes.
2007
EPRINT
Algebraic Lower Bounds for Computing on Encrypted Data
Rafail Ostrovsky William E. Skeith III
In cryptography, there has been tremendous success in building primitives out of homomorphic semantically-secure encryption schemes, using homomorphic properties in a black-box way. A few notable examples of such primitives include private information retrieval schemes and collision-resistant hash functions. In this paper, we illustrate a general methodology for determining what types of protocols can be implemented in this way and which cannot. This is accomplished by analyzing the computational power of various algebraic structures which are preserved by existing cryptosystems. More precisely, we demonstrate lower bounds for algebraically generating generalized characteristic vectors over certain algebraic structures, and subsequently we show how to directly apply these abstract algebraic results to put lower bounds on algebraic constructions of a number of cryptographic protocols, including PIR-writing and private keyword search protocols. We hope that this work will provide a simple "litmus test" of feasibility for use by other cryptographic researchers attempting to develop new protocols that require computation on encrypted data. Additionally, a precise mathematical language for reasoning about such problems is developed in this work, which may be of independent interest.
2007
EPRINT
Public Key Encryption that Allows PIR Queries
Consider the following problem: Alice wishes to maintain her email using a storage-provider Bob (such as a Yahoo! or Hotmail e-mail account). This storage-provider should provide for Alice the ability to collect, retrieve, search and delete emails but, at the same time, should learn neither the content of messages sent from the senders to Alice (with Bob as an intermediary), nor the search criteria used by Alice. A trivial solution is that messages will be sent to Bob in encrypted form and Alice, whenever she wants to search for some message, will ask Bob to send her a copy of the entire database of encrypted emails. This, however, is highly inefficient. We will be interested in solutions that are communication-efficient and, at the same time, respect the privacy of Alice. In this paper, we show how to create a public-key encryption scheme for Alice that allows PIR searching over encrypted documents. Our solution provides a theoretical solution to an open problem posed by Boneh, DiCrescenzo, Ostrovsky and Persiano on ``Public-key Encryption with Keyword Search'', providing the first scheme that does not reveal any partial information regarding the user's search (including the access pattern) in the public-key setting and with non-trivially small communication complexity. The main technique of our solution also allows for Single-Database PIR writing with sub-linear communication complexity, which we consider of independent interest.
2007
EPRINT
Public Key Encryption Which is Simultaneously a Locally-Decodable Error-Correcting Code
Brett Hemenway Rafail Ostrovsky
In this paper, we introduce the notion of a Public-Key Encryption (PKE) Scheme that is also a Locally-Decodable Error-Correcting Code. In particular, our construction simultaneously satisfies all of the following properties: \begin{itemize} \item Our Public-Key Encryption is semantically secure under a certain number-theoretic hardness assumption (a specific variant of the $\Phi$-hiding assumption). \item Our Public-Key Encryption function has \emph{constant expansion}: it maps plaintexts of length $n$ (for any $n$ polynomial in $k$, where $k$ is a security parameter) to ciphertexts of size $O(n+k)$. The size of our Public Key is also linear in $n$ and $k$. \item Our Public-Key Encryption is also a \emph{constant rate} binary error-correcting code against any polynomial-time Adversary. That is, we allow a polynomial-time Adversary to read the entire ciphertext, perform any polynomial-time computation, and change an arbitrary (i.e. adversarially chosen) constant fraction of {\em all} bits of the ciphertext. The goal of the Adversary is to cause an error in the decoding of any bit of the plaintext. Nevertheless, the decoding algorithm can decode {\bf all} bits of the plaintext (given the corrupted ciphertext) while making a mistake on \emph{any} bit of the plaintext with error probability negligible in $k$. \item Our Decoding algorithm has a {\bf Local Decodability} property. That is, given a corrupted ciphertext of $E(x)$, the decryption algorithm, for any $1 \le i \le n$, can recover the $i$'th bit of $x$ (i.e., $x_i$) with overwhelming probability reading at most $O(k^2)$ bits of the corrupted ciphertext and performing computation polynomial in $k$. Thus, for large plaintext messages, our Public Key Decryption algorithm can decode and error-correct any $x_i$ with sublinear (in $|x|$) computation. \end{itemize} We believe that the tools and techniques developed in this paper will be of independent interest in other settings.
2007
EPRINT
Secure Two-Party k-Means Clustering
Paul Bunn Rafail Ostrovsky
The k-Means Clustering problem is one of the most-explored problems in data mining to date. With the advent of protocols that have proven to be successful in performing single database clustering, the focus has changed in recent years to the question of how to extend the single database protocols to a multiple database setting. To date there have been numerous attempts to create specific multiparty k-means clustering protocols that protect the privacy of each database, but according to the standard cryptographic definitions of ``privacy-protection,'' so far all such attempts have fallen short of providing adequate privacy. In this paper we describe a Two-Party k-Means Clustering Protocol that guarantees privacy, and is more efficient than utilizing a general multiparty ``compiler'' to achieve the same task. In particular, a main contribution of our result is a way to efficiently compute multiple iterations of k-means clustering without revealing the intermediate values. To achieve this, we use novel techniques to perform two-party division and to sample uniformly at random from a domain of unknown size. Our techniques are quite general and can be realized based on the existence of any semantically secure homomorphic encryption scheme. For concreteness, we describe our protocol based on the Paillier homomorphic encryption scheme (see [Pa]). We will also demonstrate that our protocol is efficient in terms of communication, remaining competitive with existing protocols (such as [JW]) that fail to protect privacy.
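For orientation, the plaintext computation that the protocol must emulate on encrypted data is the standard Lloyd iteration sketched below (our own Python sketch, not part of the paper); the division in the centroid update is exactly the kind of intermediate value the paper computes with a dedicated two-party subprotocol instead of revealing it.

    # One Lloyd iteration of k-means in the clear (the secure protocol hides all intermediates).
    def lloyd_iteration(points, centroids):
        k, dim = len(centroids), len(points[0])
        sums, counts = [[0.0] * dim for _ in range(k)], [0] * k
        for p in points:
            # assign p to the nearest centroid (squared Euclidean distance)
            j = min(range(k), key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            counts[j] += 1
            sums[j] = [s + a for s, a in zip(sums[j], p)]
        # centroid update: the division below is a value the secure protocol must not reveal
        return [[s / counts[j] for s in sums[j]] if counts[j] else list(centroids[j])
                for j in range(k)]

    points = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.1), (4.8, 5.3)]
    centroids = [(0.0, 0.0), (5.0, 5.0)]
    for _ in range(3):
        centroids = lloyd_iteration(points, centroids)
    print(centroids)        # roughly [[0.1, 0.05], [4.9, 5.2]]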
2007
EPRINT
Almost-everywhere Secure Computation
Juan A. Garay Rafail Ostrovsky
Secure multi-party computation (MPC) is a central problem in cryptography. Unfortunately, it is well known that MPC is possible if and only if the underlying communication network has very large connectivity---specifically, $\Omega(t)$, where $t$ is the number of potential corruptions in the network. This impossibility result renders existing MPC results far less applicable in practice, since most deployed networks have in fact a very small degree. In this paper, we show how to circumvent this impossibility result and achieve meaningful security guarantees for graphs with small degree (such as expander graphs and several other topologies). In fact, the notion we introduce, which we call {\em almost-everywhere MPC}, building on the notion of almost-everywhere agreement due to Dwork, Peleg, Pippenger and Upfal, allows the degree of the network to be much smaller than the total number of allowed corruptions. In essence, our definition allows the adversary to {\em implicitly} wiretap some of the good nodes by corrupting sufficiently many nodes in the ``neighborhood'' of those nodes. We show protocols that satisfy our new definition, retaining both correctness and privacy for most nodes despite small connectivity, no matter how the adversary chooses his corruptions. Instrumental in our constructions is a new model and protocol for the {\em secure message transmission} (SMT) problem, which we call {\em SMT by public discussion}, and which we use for the establishment of pairwise secure channels in limited connectivity networks.
2006
CRYPTO
2006
EUROCRYPT
2006
EUROCRYPT
2006
EPRINT
Sequential Aggregate Signatures and Multisignatures without Random Oracles
We present the first aggregate signature, the first multisignature, and the first verifiably encrypted signature provably secure without random oracles. Our constructions derive from a novel application of a recent signature scheme due to Waters. Signatures in our aggregate signature scheme are sequentially constructed, but knowledge of the order in which messages were signed is not necessary for verification. The aggregate signatures obtained are shorter than the sequential aggregates of Lysyanskaya et~al. and can be verified more efficiently than the aggregates of Boneh et~al. We also consider applications to secure routing and proxy signatures.
2006
EPRINT
Cryptography from Anonymity
There is a vast body of work on {\em implementing} anonymous communication. In this paper, we study the possibility of using anonymous communication as a {\em building block}, and show that one can leverage anonymity in a variety of cryptographic contexts. Our results go in two directions. \begin{itemize} \item{\bf Feasibility.} We show that anonymous communication over {\em insecure} channels can be used to implement unconditionally secure point-to-point channels, and hence general multi-party protocols with unconditional security in the presence of an honest majority. In contrast, anonymity cannot be generally used to obtain unconditional security when there is no honest majority. \item{\bf Efficiency.} We show that anonymous channels can yield substantial efficiency improvements for several natural secure computation tasks. In particular, we present the first solution to the problem of private information retrieval (PIR) which can handle multiple users while being close to optimal with respect to {\em both} communication and computation. A key observation that underlies these results is that {\em local randomization} of inputs, via secret-sharing, when combined with the {\em global mixing} of the shares, provided by anonymity, allows one to carry out useful computations on the inputs while keeping the inputs private. \end{itemize}
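The observation in the last sentence can be made concrete with a small toy sketch (ours, not one of the paper's protocols): each party additively secret-shares its input, the shares are pooled and mixed the way an anonymous channel would mix them, and the sum of all inputs is computable from the pool even though no share can be attributed to a party.

    # Additive secret-sharing plus anonymous mixing: the sum is public, the inputs are not.
    import random

    MOD = 2 ** 32

    def share(x, pieces=3):
        parts = [random.randrange(MOD) for _ in range(pieces - 1)]
        parts.append((x - sum(parts)) % MOD)       # the pieces sum to x modulo MOD
        return parts

    inputs = [17, 42, 5]                           # each party's private input
    pool = [s for x in inputs for s in share(x)]
    random.shuffle(pool)                           # the "global mixing" an anonymous channel provides
    print(sum(pool) % MOD)                         # 64, with no share attributable to any party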
2006
EPRINT
Searchable Symmetric Encryption: Improved Definitions and Efficient Constructions
Searchable symmetric encryption (SSE) allows a party to outsource the storage of his data to another party in a private manner, while maintaining the ability to selectively search over it. This problem has been the focus of active research and several security definitions and constructions have been proposed. In this paper we review existing security definitions, pointing out their shortcomings, and propose two new stronger definitions which we prove equivalent. We then present two constructions that we show secure under our new definitions. Interestingly, in addition to satisfying stronger security guarantees, our constructions are more efficient than all previous constructions. Further, prior work on SSE only considered the setting where only the owner of the data is capable of submitting search queries. We consider the natural extension where an arbitrary group of parties other than the owner can submit search queries. We formally define SSE in this multi-user setting, and present an efficient construction.
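As a flavor-only illustration (our toy sketch, not one of the paper's constructions, and without their security guarantees), a searchable index can be built by replacing each keyword with a PRF output: the server answers a search given only the per-keyword token and never sees the keyword itself. Real constructions also protect the stored document identifiers and account for the leakage this toy ignores.

    # Keyword tokens via a PRF (HMAC); the server's index is keyed by tokens, not keywords.
    import hmac, hashlib, os

    def token(key, keyword):
        return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

    key = os.urandom(32)
    documents = {1: ["crypto", "pir"], 2: ["pir", "oram"], 3: ["oram"]}

    index = {}                                     # built by the client, outsourced to the server
    for doc_id, words in documents.items():
        for w in words:
            index.setdefault(token(key, w), []).append(doc_id)

    # Searching for "pir": the client sends only the token.
    print(index.get(token(key, "pir"), []))        # [1, 2]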
2006
EPRINT
Concurrent Statistical Zero-Knowledge Arguments for NP from One Way Functions
In this paper we show a general transformation from any honest-verifier statistical zero-knowledge argument to a concurrent statistical zero-knowledge argument. Our transformation relies only on the existence of one-way functions. It is known that the existence of zero-knowledge systems for any non-trivial language implies one-way functions. Hence our transformation \emph{unconditionally} shows that concurrent statistical zero-knowledge arguments for a non-trivial language exist if and only if standalone secure statistical zero-knowledge arguments for that language exist. Further, applying our transformation to the recent statistical zero-knowledge argument system of Nguyen et al. (STOC'06) yields the first concurrent statistical zero-knowledge argument system for all languages in \textbf{NP} from any one-way function.
2006
EPRINT
Cryptography in the Multi-string Model
Jens Groth Rafail Ostrovsky
The common random string model permits the construction of cryptographic protocols that are provably impossible to realize in the standard model. In this model, a trusted party generates a random string and gives it to all parties in the protocol. However, the introduction of such a third party should set off alarm bells: Who is this trusted party? Why should we trust that the string is random? Even if the string is uniformly random, how do we know it does not leak private information to the trusted party? The very point of doing cryptography in the first place is to prevent us from trusting the wrong people with our secrets. In this paper, we propose the more realistic multi-string model. Instead of having one trusted authority, we have several authorities that generate random strings. We do not trust any single authority; we only assume that a majority of them generate their strings honestly. We demonstrate the use of this model for two fundamental cryptographic tasks. We define non-interactive zero-knowledge in the multi-string model and construct NIZK proofs in the multi-string model. We also consider multi-party computation and show that any functionality can be securely realized in the multi-string model.
2005
CRYPTO
2005
EUROCRYPT
2005
TCC
2005
JOFC
2005
EPRINT
Perfect Non-Interactive Zero Knowledge for NP
Jens Groth Rafail Ostrovsky Amit Sahai
Non-interactive zero-knowledge (NIZK) systems are fundamental cryptographic primitives used in many constructions, including CCA2-secure cryptosystems, digital signatures, and various cryptographic protocols. What makes them especially attractive is that they work equally well in a concurrent setting, which is notoriously hard for interactive zero-knowledge protocols. However, while for interactive zero-knowledge we know how to construct statistical zero-knowledge argument systems for all NP languages, for non-interactive zero-knowledge this problem had remained open since the inception of NIZK in the late 1980s. Here we resolve two problems regarding NIZK: (1) we construct the first perfect NIZK argument system for any NP language, and (2) we construct the first UC-secure NIZK protocols for any NP language in the presence of a dynamic/adaptive adversary. While it was already known how to construct efficient-prover computational NIZK proofs for any NP language, the known techniques yield large common reference strings and large NIZK proofs. As an additional implication of our techniques, we considerably reduce both the size of the common reference string and the size of the proofs.
2004
CRYPTO
2004
EUROCRYPT
2004
EPRINT
Efficient Consistency Proofs for Generalized Queries on a Committed Database
Rafail Ostrovsky Charles Rackoff Adam Smith
A *consistent query protocol* (CQP) allows a database owner to publish a very short string $c$ which *commits* her and everybody else to a particular database $D$, so that any copy of the database can later be used to answer queries and give short proofs that the answers are consistent with the commitment $c$. Here *commits* means that there is at most one database $D$ that anybody can find (in polynomial time) which is consistent with $c$. (Unlike in some previous work, this strong guarantee holds even for owners who try to cheat while creating $c$.) Efficient CQPs for membership and one-dimensional range queries are known \cite{BRW02,K98,MR}: given a query pair $a,b\in \mathbb{R}$, the server answers with all the keys in the database which lie in the interval $[a,b]$ and a proof that the answer is correct. This paper explores CQPs for more general types of databases. We put forward a general technique for constructing CQPs for any type of query, assuming the existence of a data structure/algorithm with certain inherent robustness properties that we define (called a *data robust algorithm*). We illustrate our technique by constructing an efficient protocol for *orthogonal range queries*, where the database keys are points in $\mathbb{R}^d$ and a query asks for all keys in a rectangle $[a_1,b_1]\times\ldots \times [a_d,b_d]$. Our data-robust algorithm is within a $O(\log N)$ factor of the best known standard data structure (a range tree, due to Bentley (1980)). We modify our protocol so that it is also private, that is, the proofs leak no information about the database beyond the query answers. We show a generic modification to ensure privacy based on zero-knowledge proofs, and also give a new, more efficient protocol tailored to hash trees.
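The ``hash trees'' in the last sentence are Merkle-style hash trees; the standard membership-proof mechanism they provide is sketched below (a generic building block, not the paper's orthogonal-range or privacy-preserving protocol): a short root acts as the commitment c, and each answer comes with a logarithmic-size authentication path.

    # Merkle hash tree: short commitment (the root) plus logarithmic membership proofs.
    import hashlib

    def H(data):
        return hashlib.sha256(data).digest()

    def build_tree(leaves):                        # leaves: byte strings, power-of-two count
        level = [H(b"leaf:" + x) for x in leaves]
        tree = [level]
        while len(level) > 1:
            level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            tree.append(level)
        return tree                                # tree[-1][0] is the commitment c

    def prove(tree, idx):
        proof = []
        for level in tree[:-1]:
            sibling = idx ^ 1
            proof.append((level[sibling], sibling < idx))   # (hash, is_left_sibling)
            idx //= 2
        return proof

    def verify(root, leaf, proof):
        h = H(b"leaf:" + leaf)
        for sib, sib_is_left in proof:
            h = H(sib + h) if sib_is_left else H(h + sib)
        return h == root

    leaves = [b"key-%d" % i for i in range(8)]
    tree = build_tree(leaves)
    print(verify(tree[-1][0], b"key-5", prove(tree, 5)))    # True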
2003
EUROCRYPT
2003
EPRINT
Public Key Encryption with Keyword Search
We study the problem of searching on data that is encrypted using a public key system. Consider user Bob who sends email to user Alice encrypted under Alice's public key. An email gateway wants to test whether the email contains the keyword `urgent' so that it could route the email accordingly. Alice, on the other hand, does not wish to give the gateway the ability to decrypt all her messages. We define and construct a mechanism that enables Alice to provide a key to the gateway that enables the gateway to test whether the word `urgent' is a keyword in the email without learning anything else about the email. We refer to this mechanism as Public Key Encryption with Keyword Search. As another example, consider a mail server that stores various messages publicly encrypted for Alice by others. Using our mechanism Alice can send the mail server a key that will enable the server to identify all messages containing some specific keyword, but learn nothing else. We define the concept of public key encryption with keyword search and give several constructions.
2003
EPRINT
Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data
We provide formal definitions and efficient secure techniques for turning noisy information into keys usable for any cryptographic application and, in particular, for reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a fuzzy extractor reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A secure sketch produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of "closeness" of input data, such as Hamming distance, edit distance, and set difference.
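A secure sketch for the Hamming metric can be illustrated with the code-offset idea over a trivial 3-repetition code (our minimal toy; practical instantiations use efficient error-correcting codes and layer a randomness extractor on top to obtain the full fuzzy extractor):

    # Code-offset secure sketch with a 3-repetition code: tolerates 1 flipped bit per 3-bit block.
    import os, random

    def encode(msg_bits):                          # repeat every bit three times
        return [b for b in msg_bits for _ in range(3)]

    def decode(code_bits):                         # majority vote per block
        return [1 if sum(code_bits[i:i + 3]) >= 2 else 0 for i in range(0, len(code_bits), 3)]

    def sketch(w):                                 # SS(w) = w XOR C(x) for a random message x
        x = [random.randrange(2) for _ in range(len(w) // 3)]
        return [wi ^ ci for wi, ci in zip(w, encode(x))]

    def recover(w_noisy, s):                       # Rec(w', s): decode w' XOR s, re-encode, XOR back
        x = decode([a ^ b for a, b in zip(w_noisy, s)])
        return [ci ^ si for ci, si in zip(encode(x), s)]

    w = [b & 1 for b in os.urandom(12)]            # a 12-bit stand-in for a biometric reading
    s = sketch(w)                                  # public helper data
    w_noisy = list(w); w_noisy[7] ^= 1             # the re-reading differs in one position
    print(recover(w_noisy, s) == w)                # True: the original reading is reproduced exactly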
2002
EPRINT
Universally Composable Two-Party and Multi-Party Secure Computation
We show how to securely realize any two-party and multi-party functionality in a {\em universally composable} way, regardless of the number of corrupted participants. That is, we consider an asynchronous multi-party network with open communication and an adversary that can adaptively corrupt as many parties as it wishes. In this setting, our protocols allow any subset of the parties (with pairs of parties being a special case) to securely realize any desired functionality of their local inputs, and be guaranteed that security is preserved regardless of the activity in the rest of the network. This implies that security is preserved under concurrent composition of an unbounded number of protocol executions, it implies non-malleability with respect to arbitrary protocols, and more. Our constructions are in the common reference string model and rely on standard intractability assumptions.
2001
CRYPTO
2001
CRYPTO
2001
EUROCRYPT
2001
EUROCRYPT
2001
EUROCRYPT
2001
EPRINT
Efficient Password-Authenticated Key Exchange Using Human-Memorable Passwords
Jonathan Katz Rafail Ostrovsky Moti Yung
We present an efficient password-authenticated key exchange protocol which is secure against off-line dictionary attacks even when users choose passwords from a very small space (say, a dictionary of English words). We prove security in the standard model under the decisional Diffie-Hellman assumption, assuming public parameters generated by a trusted party. Compared to the recent work of Goldreich and Lindell (which was the first to give a secure construction, under general assumptions, in the standard model), our protocol requires only 3 rounds and is efficient enough to be used in practice.
2001
EPRINT
Efficient and Non-Interactive Non-Malleable Commitment
We present new constructions of non-malleable commitment schemes, in the public parameter model (where a trusted party makes parameters available to all parties), based on the discrete logarithm or RSA assumptions. The main features of our schemes are: they achieve near-optimal communication for arbitrarily-large messages and are non-interactive. Previous schemes either required (several rounds of) interaction or focused on achieving non-malleable commitment based on general assumptions and were thus efficient only when committing to a single bit. Although our main constructions are for the case of perfectly-hiding commitment, we also present a communication-efficient, non-interactive commitment scheme (based on general assumptions) that is perfectly binding.
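For context, the kind of perfectly-hiding, discrete-log-based commitment such constructions start from is a Pedersen commitment, sketched below with deliberately tiny, insecure parameters (this is the underlying primitive only, not the paper's non-malleable scheme):

    # Pedersen commitment: perfectly hiding, binding under the discrete-log assumption.
    import random

    q = 1019                   # toy prime; p = 2q + 1 = 2039 is also prime
    p = 2 * q + 1
    g, h = 4, 9                # squares mod p, hence generators of the order-q subgroup;
                               # in practice h is chosen so that log_g h is unknown

    def commit(m, r=None):
        r = random.randrange(q) if r is None else r
        return pow(g, m % q, p) * pow(h, r, p) % p, r

    def verify(c, m, r):
        return c == pow(g, m % q, p) * pow(h, r, p) % p

    c, r = commit(123)         # publish c now; reveal (123, r) later to open
    print(verify(c, 123, r), verify(c, 124, r))    # True False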
2001
JOFC
2000
EUROCRYPT
2000
EUROCRYPT
2000
EPRINT
Fast Verification of Any Remote Procedure Call: Short Witness-Indistinguishable One-Round Proofs for NP
The paper is withdrawn. As communicated to us by C. Dwork, M. Langberg, M. Naor and K. Nissim [1] the protocol as presented in the paper is not sufficient to prove the claims. We gratefully acknowledge the authors of [1] for pointing out this error to us. REFERENCES: [1] C. Dwork, M. Langberg, M. Naor, and K. Nissim, "Succinct Proofs for NP and Spooky Interactions" private communication, July 4, 2000.
2000
JOFC
1999
CRYPTO
1999
EUROCRYPT
1998
CRYPTO
1998
EPRINT
Universal Service Providers for Database Private Information Retrieval
We consider the question of private information retrieval in the so-called ``commodity-based'' model. This model was recently proposed by Beaver for practically-oriented service-provider internet applications. In this paper, we show the following, somewhat surprising, results regarding this model for the problem of private-information retrieval: (1) the service-provider model makes it possible to dramatically reduce the overall communication involving the user, using off-line pre-processing messages from ``service-providers'' to databases, where the service-providers need to know neither the database contents nor the user's future requests; (2) our service-provider solutions are resilient against more than a majority (in fact, all-but-one) coalitions of service-providers; and (3) these results hold for {\em both} the computational and the information-theoretic setting. More specifically, we exhibit a service-provider algorithm which can ``sell'' (i.e., generate and send) ``commodities'' to users and databases, that subsequently allow users to retrieve database contents in a way which hides from the database which particular item the user retrieves. The service-providers need not know anything about the contents of the databases nor the nature of the user's requests in order to generate commodities. Our commodity-based solution significantly improves the communication complexity of the users (i.e., counting both the size of commodities bought by the user from the service-providers and the subsequent communication with the databases) compared to all previously known on-line private information retrieval protocols (i.e., without the help of the service-providers). Moreover, we show how commodities from different service-providers can be {\em combined} in such a way that even if ``all-but-one'' of the service-providers collude with the database, the user's privacy remains intact. Finally, we show how to re-use commodities in case of multiple requests (i.e., in the amortized sense), how to ``check'' commodity-correctness, and how some of the solutions can be extended to the related problem of {\em Private Information Storage}.
1998
EPRINT
Randomness versus Fault-Tolerance
We investigate the relations between two major requirements of multiparty protocols: {\em fault tolerance} (or {\em resilience}) and {\em randomness}. Fault-tolerance is measured in terms of the maximum number of colluding faulty parties, t, that a protocol can withstand and still maintain the privacy of the inputs and the correctness of the outputs (of the honest parties). Randomness is measured in terms of the total number of random bits needed by the parties in order to execute the protocol. Previously, the upper bound on the amount of randomness required by general constructions for securely computing any non-trivial function f was polynomial both in $n$, the total number of parties, and the circuit-size C(f). This was the state of knowledge even for the special case t=1 (i.e., when there is at most one faulty party). In this paper, we show that for any linear-size circuit, and for any number t < n/3 of faulty parties, O(poly(t) * log n) randomness is sufficient. More generally, we show that for any function f with circuit-size C(f), we need only O(poly(t) * log n + poly(t) * C(f)/n) randomness in order to withstand any coalition of size at most t. Furthermore, in our protocol only t+1 parties flip coins and the rest of the parties are deterministic. Our results generalize to the case of adaptive adversaries as well.
1998
JOFC
1997
CRYPTO
1997
CRYPTO
1997
CRYPTO
1996
EPRINT
Deniable Encryption
Consider a situation in which the transmission of encrypted messages is intercepted by an adversary who can later ask the sender to reveal the random choices (and also the secret key, if one exists) used in generating the ciphertext, thereby exposing the cleartext. An encryption scheme is \emph{deniable} if the sender can generate `fake random choices' that will make the ciphertext `look like' an encryption of a different cleartext, thus keeping the real cleartext private. Analogous requirements can be formulated with respect to attacking the receiver and with respect to attacking both parties. In this paper we introduce deniable encryption and propose constructions of schemes with polynomial deniability. In addition to being interesting by itself, and having several applications, deniable encryption provides a simplified and elegant construction of \emph{adaptively secure} multiparty computation.
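To see the goal in its most forgiving setting (a toy illustration only, not the paper's construction, whose point is to achieve deniability in far harder settings such as public-key encryption, where honest-looking randomness must also be explained), note that a one-time pad is trivially sender-deniable: an intercepted ciphertext can be ``opened'' to any same-length message by handing over a suitably fabricated key.

    # One-time-pad deniability: fabricate a key that opens the ciphertext to a fake message.
    import os

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    real_msg = b"meet at noon"
    key = os.urandom(len(real_msg))
    ciphertext = xor(real_msg, key)                # what the coercer intercepted

    fake_msg = b"buy the milk"                     # same length as the real message
    fake_key = xor(ciphertext, fake_msg)           # the 'fake random choices' handed over
    print(xor(ciphertext, fake_key) == fake_msg)   # True: the ciphertext 'opens' to the lie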
1996
EPRINT
Private Information Storage
Rafail Ostrovsky Victor Shoup
We consider the setting of hiding information through the use of multiple databases that do not interact with one another. Previously, in this setting, efficient solutions were given for the retrieval of data, where a user achieves this by interacting with all the databases. We consider the case of both writing and reading. While the case of reading was well studied before, the case of writing was previously completely open. In this paper, we show how to implement both read and write operations. As in the previous papers, we measure, as a function of k and n, the amount of communication required between a user and all the databases for a single read/write operation, and achieve efficient read/write schemes. Moreover, we show a general reduction from any retrieval-only database scheme to a reading-and-writing database scheme, with the following guarantees: for any k, given a retrieval-only k-database scheme with communication complexity R(k,n), we obtain a (k+1)-database reading-and-writing scheme with total communication complexity O(R(k,n) * (log n)^{O(1)}). It should be stressed that prior to the current paper no non-trivial (i.e., sub-linear) bounds for private information storage were known.
1993
EUROCRYPT
1992
CRYPTO
1992
CRYPTO
1991
ASIACRYPT
1989
CRYPTO
1989
CRYPTO

Program Committees

Eurocrypt 2019
Eurocrypt 2017
PKC 2016
Eurocrypt 2011
TCC 2010
Eurocrypt 2009
PKC 2007
PKC 2006
Eurocrypt 2005
TCC 2005
Crypto 2004
Crypto 2003
Crypto 2002
Crypto 1998

Coauthors

A. Aiello (1)
William Aiello (1)
Joël Alwen (1)
Yair Amir (3)
Prabhanjan Ananth (1)
Saikrishna Badrinarayanan (3)
Joshua Baron (1)
Ohad Barta (1)
Eli Ben-Sasson (1)
S. Bhatt (1)
Nir Bitansky (1)
Dan Boneh (5)
Xavier Boyen (1)
Harry Buhrman (1)
Paul Bunn (6)
Ran Canetti (5)
Alfonso Cevallos (1)
Nishanth Chandran (9)
Melissa Chase (2)
Alessandro Chiesa (1)
Chongwon Cho (3)
Wutichai Chongchitmate (5)
Arka Rai Choudhuri (1)
Kai-Min Chung (1)
Michele Ciampi (6)
Giovanni Di Crescenzo (9)
Reza Curtmola (1)
Ivan Damgård (1)
Giovanni Di-Crescenzo (1)
Yevgeniy Dodis (3)
Shlomi Dolev (1)
Nico Döttling (1)
Cynthia Dwork (2)
Karim Eldefrawy (1)
Brett Hemenway Falk (1)
Serge Fehr (4)
Joan Feigenbaum (1)
Matthias Fitzi (2)
Matthew K. Franklin (1)
Juan A. Garay (10)
Sanjam Garg (8)
Ran Gelles (3)
Craig Gentry (1)
Clint Givens (1)
Shafi Goldwasser (2)
S. Dov Gordon (1)
Vipul Goyal (12)
Jens Groth (6)
Mohammad Hajiabadi (1)
Shai Halevi (2)
Mike Hamburg (1)
Ariel Hamlin (1)
Brett Hemenway (15)
William E. Skeith III (8)
Yuval Ishai (18)
Zahra Jafargholi (1)
Abhishek Jain (6)
Ari Juels (1)
Seny Kamara (1)
Bhavana Kanukurthi (2)
Jonathan Katz (8)
Dakshita Khurana (2)
Aggelos Kiayias (1)
Joe Kilian (1)
Daniel Kraschewski (1)
Abishek Kumarasubramanian (2)
Eyal Kushilevitz (12)
Joshua Lampkins (1)
Chen-Kuei Lee (1)
Benoît Libert (1)
Yehuda Lindell (1)
Richard J. Lipton (1)
Tianren Liu (1)
Sachin Lodha (1)
Steve Lu (11)
Michael Luby (1)
Giulio Malavolta (1)
Tal Malkin (1)
Ueli Maurer (2)
Silvio Micali (1)
Manika Mittal (1)
Tal Moran (1)
Ryan Moriarty (3)
Tamer Mour (1)
Steven Myers (1)
Moni Naor (4)
Jesper Buus Nielsen (1)
Daniel Noble (1)
Claudio Orlandi (2)
Giorgos Panagiotakos (1)
Omkant Pandey (3)
Omer Paneth (1)
Anat Paskin-Cherniavsky (2)
Beni Paskin-Cherniavsky (1)
Rafael Pass (1)
Giuseppe Persiano (7)
Manoj Prabhakaran (2)
Emmanuel Prouff (1)
Yuval Rabani (1)
Charles Rackoff (1)
Sivaramakrishnan Rajagopalan (1)
S. Rajagopalan. (1)
Vanishree Rao (3)
Mariana Raykova (1)
Leonid Reyzin (1)
Silas Richelson (5)
Alon Rosen (3)
Adi Rosén (4)
Amit Sahai (19)
Alfredo De Santis (1)
Alessandra Scafuro (9)
Christian Schaffner (1)
Leonard J. Schulman (1)
Hakan Seyalioglu (1)
Hovav Shacham (2)
Victor Shoup (1)
Luisa Siniscalchi (4)
Adam Smith (6)
Akshayaram Srinivasan (2)
Adrian Thillard (1)
Vinod Vaikuntanathan (1)
Ramarathnam Venkatesan (3)
Muthuramakrishnan Venkitasubramaniam (3)
Daniele Venturi (1)
Damien Vergnaud (2)
Ivan Visconti (26)
Akshay Wadia (3)
Brent Waters (3)
Mor Weiss (1)
Daniel Wichs (3)
David J. Wu (1)
Jürg Wullschleger (1)
Moti Yung (5)
Hong-Sheng Zhou (1)
Vassilis Zikas (7)