IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
28 February 2019
Divesh Aggarwal, Kai-Min Chung, Han-Hsuan Lin, Thomas Vidick
In privacy amplification, two mutually trusted parties aim to amplify the secrecy of an initial shared secret X in order to establish a shared private key K by exchanging messages over an insecure communication channel. If the channel is authenticated, the task can be solved in a single round of communication using a strong randomness extractor; choosing a quantum-proof extractor allows one to establish security against quantum adversaries.
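As a rough illustration of the authenticated-channel case, here is a minimal Python sketch (not from the paper) of one-round privacy amplification using a random-binary-matrix universal hash as the seeded extractor; the toy sizes and the classical (non-quantum) setting are our own simplifying assumptions.

```python
import secrets

def hash_gf2(seed_matrix, x_bits):
    """Inner products over GF(2): multiply bit-vector x by a random 0/1 matrix."""
    return [sum(r & b for r, b in zip(row, x_bits)) % 2 for row in seed_matrix]

n, m = 64, 16                                   # toy input/output lengths
x = [secrets.randbits(1) for _ in range(n)]     # shared weak secret X

# Alice samples a fresh seed and sends it over the authenticated channel.
seed = [[secrets.randbits(1) for _ in range(n)] for _ in range(m)]

# Both parties apply the extractor; by the leftover hash lemma, the key K
# is close to uniform given the seed, provided X has enough min-entropy.
key_alice = hash_gf2(seed, x)
key_bob = hash_gf2(seed, x)
assert key_alice == key_bob
```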
In the case that the channel is not authenticated, this simple solution is no longer secure. Nevertheless, Dodis and Wichs (STOC'09) showed that the problem can be solved in two rounds of communication using a non-malleable extractor, a stronger pseudo-random construction than a strong extractor.
We give the first construction of a non-malleable extractor that is secure against quantum adversaries. The extractor is based on a construction by Li (FOCS'12), and is able to extract from sources of min-entropy rate larger than 1/2. Combining this construction with a quantum-proof variant of the reduction of Dodis and Wichs, due to Cohen and Vidick (unpublished), we obtain the first privacy amplification protocol secure against active quantum adversaries.
Arka Rai Choudhuri, Vipul Goyal, Abhishek Jain
We study the foundations of secure computation in the blockchain-hybrid model, where a blockchain -- modeled as a global functionality -- is available as an oracle to all the participants of a cryptographic protocol. We demonstrate both destructive and constructive applications of blockchains:
- We show that classical rewinding-based simulation techniques used in many security proofs fail against blockchain-active adversaries that have read and post access to a global blockchain. In particular, we show that zero-knowledge (ZK) proofs with black-box simulation are impossible against blockchain-active adversaries.
- Nevertheless, we show that achieving security against blockchain-active adversaries is possible if the honest parties are also blockchain active. We construct an $\omega(1)$-round ZK protocol with black-box simulation. We show that this result is tight by proving the impossibility of constant-round ZK with black-box simulation.
- Finally, we demonstrate a novel application of blockchains to overcome the known impossibility results for concurrent secure computation in the plain model. We construct a concurrent self-composable secure computation protocol for general functionalities in the blockchain-hybrid model based on standard cryptographic assumptions.
We develop a suite of techniques for constructing secure protocols in the blockchain-hybrid model that we hope will find applications to future research in this area.
Hamza Abusalah, Chethan Kamath, Karen Klein, Krzysztof Pietrzak, Michael Walter
Proofs of sequential work (PoSW) are proof systems where a prover, upon receiving a statement $\chi$ and a time parameter $T$, computes a proof $\phi(\chi,T)$ which is efficiently and publicly verifiable. The proof can be computed in $T$ sequential steps, but not much less, even by a malicious party having large parallelism. A PoSW thus serves as a proof that $T$ units of time have passed since $\chi$ was received.
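For intuition, the following toy Python sketch (not from the paper) illustrates only the sequentiality requirement: iterated hashing takes $T$ inherently sequential steps, but verifying this naive chain also costs $T$ steps. Making verification efficient is precisely what the tree structure of [CP18] and the skip-list structure of this work provide.

```python
import hashlib

def sequential_work(statement: bytes, T: int) -> bytes:
    h = statement
    for _ in range(T):                 # T inherently sequential hash calls
        h = hashlib.sha256(h).digest()
    return h

chi = b"statement"
phi = sequential_work(chi, 100_000)              # prover's work
assert phi == sequential_work(chi, 100_000)      # naive O(T) verification
```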
PoSW were introduced by Mahmoody, Moran and Vadhan [MMV11]; a simple and practical construction was only recently proposed by Cohen and Pietrzak [CP18].
In this work we construct a new simple PoSW in the random permutation model which is almost as simple and efficient as [CP18] but conceptually very different.
Whereas the structure underlying [CP18] is a hash tree, our construction is based on skip lists and has the interesting property that computing the PoSW is a reversible computation.
The fact that the construction is reversible can potentially be used for new applications like constructing \emph{proofs of replication}. We also show how to ``embed'' the sloth function of Lenstra and Wesolowski [LW17] into our PoSW to get a PoSW where one additionally can verify correctness of the output much more efficiently than recomputing it (though recent constructions of ``verifiable delay functions'' subsume most of the applications this construction was aiming at).
T-H. Hubert Chan, Rafael Pass, Elaine Shi
State Machine Replication (SMR) is an important abstraction for a set of nodes to agree on an ever-growing, linearly-ordered log of transactions. In decentralized cryptocurrency applications, we would like to design SMR protocols that 1) resist adaptive corruptions; and 2) achieve small bandwidth and small confirmation time. All past approaches towards constructing SMR fail to achieve either small confirmation time or small bandwidth under adaptive corruptions (without resorting to strong assumptions such as the erasure model or proof-of-work).
We propose a novel paradigm for reaching consensus that departs significantly from classical approaches. Our protocol is inspired by a social phenomenon called herding, where people tend to make choices considered the social norm. In our consensus protocol, leader election and voting are coalesced into a single (randomized) process: in every round, every node tries to cast a vote for what it views as the {\it most popular} item so far; such a voting attempt is not always successful, but rather succeeds with a certain probability. Importantly, the probability that a node is elected to vote for $v$ is independent of the probability that it is elected to vote for $v' \neq v$. We show how to realize such a distributed, randomized election process using appropriate, adaptively secure cryptographic building blocks.
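The following toy Python simulation (ours, not the paper's protocol) illustrates the herding-style election: in each round every node independently wins the right to vote with probability p and, if elected, votes for the currently most popular item. For simplicity all nodes here share one global view, and the cryptographic election mechanism is abstracted away.

```python
import random
from collections import Counter

def herd(num_nodes=100, items=("a", "b", "c"), p=0.1, rounds=20):
    votes = Counter({i: random.randint(0, 3) for i in items})  # initial votes
    for _ in range(rounds):
        most_popular = votes.most_common(1)[0][0]
        for _ in range(num_nodes):
            if random.random() < p:    # node wins the election this round
                votes[most_popular] += 1
    return votes

print(herd())   # the initially leading item rapidly accumulates all votes
```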
We show that, amazingly, not only can this new paradigm achieve consensus (e.g., on a batch of unconfirmed transactions in a cryptocurrency system), but it also allows us to derive the first SMR protocol which, even under adaptive corruptions, requires only polylogarithmically many rounds and polylogarithmically many honest messages to be multicast to confirm each batch of transactions; and importantly, we attain these guarantees under standard cryptographic assumptions.
Lucas Schabhüser, Denis Butin, Johannes Buchmann
In cloud computing, delegated computing raises the security issue of guaranteeing data authenticity during a remote computation. In this context, the recently introduced function-dependent commitments (FDCs) are the only approach providing fast correctness verification, information-theoretic input-output privacy, and strong unforgeability.
Homomorphic authenticators, the established approach to this problem, do not provide information-theoretic privacy and always reveal the computation's result upon verification, thus violating output privacy.
Since many homomorphic authenticator schemes already exist, we investigate the relation between them and FDCs to clarify how existing schemes can be supplemented with information-theoretic output privacy.
Specifically, we present a generic transformation turning any structure-preserving homomorphic authenticator scheme into an FDC scheme.
This facilitates the design of multi-party computation schemes with full information-theoretic privacy.
We also introduce a new structure-preserving, linearly homomorphic authenticator scheme suitable for our transformation.
It is the first homomorphic authenticator scheme that is both context hiding and structure-preserving.
Our scheme is also the first structure-preserving homomorphic authenticator scheme to achieve efficient verification.
Srimanta Bhattacharya, Mridul Nandi
Let $\sigma$ be some positive integer and $\mathcal{C} \subseteq \{(i,j): 1 \leq i < j \leq \sigma \}$. The theory behind finding a lower bound on the number of distinct blocks $P_1, \ldots, P_{\sigma} \in \{0,1\}^n$ satisfying a set of linear equations $\{ P_i \oplus P_j = c_{i,j} : (i,j) \in \mathcal{C} \}$ for some $c_{i,j} \in \{0,1\}^n$ is called {\em mirror theory}. Patarin introduced mirror theory and provided a proof of it. However, the proof, even for a special class of equations, is complex and contains several non-trivial gaps. As an application of mirror theory, $XORP[w]$ (known as the XOR construction), which returns a $(w-1)$-block output, is a {\em pseudorandom function} (PRF) for some parameter $w$, called the {\em width}.
The XOR construction can be seen as a basic structure of some encryption algorithms, e.g., the CENC encryption and the CHM authenticated encryption, proposed by Iwata in 2006. Due to the potential applications of $XORP[w]$ and the nontrivial gaps in the proof of mirror theory, an alternative simpler analysis of the PRF-security of $XORP[w]$ would be highly desirable. Recently (in CRYPTO 2017), Dai {\em et al.} introduced a tool, called the $\chi^2$ method, for analyzing PRF-security. Using this tool, the authors provided a proof of the PRF-security of $XORP[2]$ without relying on mirror theory. In this paper, we resolve the general case; we apply the $\chi^2$ method to obtain {\em a simpler security proof of $XORP[w]$ for any $w \geq 2$}. For $w = 2$, we obtain {\em a tighter bound for a wider range of parameters} than that of Dai {\em et al.}. Moreover, we consider the variable-width construction $XORP[*]$ (in which the widths are chosen by the adversary adaptively), and provide a {\em variable output length pseudorandom function} (VOLPRF) security analysis for it. As an application of VOLPRF, we propose {\em an authenticated encryption scheme which is a simple variant of CHM or AES-GCM and provides much higher security} than those at the cost of one extra blockcipher call for every message.
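As a concrete illustration, here is a minimal Python sketch of the XOR construction in a counter-based formulation as used in CENC: $w$ blockcipher calls produce $w-1$ output blocks $E_K(I) \oplus E_K(I+j)$ for $j = 1, \ldots, w-1$. AES via pycryptodome and the exact input encoding are illustrative choices, not the paper's.

```python
from Crypto.Cipher import AES   # pip install pycryptodome

def xorp(key: bytes, ctr: int, w: int):
    E = AES.new(key, AES.MODE_ECB).encrypt
    base = E(ctr.to_bytes(16, "big"))
    return [bytes(a ^ b for a, b in zip(base, E((ctr + j).to_bytes(16, "big"))))
            for j in range(1, w)]    # w-1 pseudorandom blocks from w calls

blocks = xorp(bytes(16), ctr=0, w=3)   # two 16-byte output blocks
```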
Ting Li, Yao Sun
We present new preimage attacks on standard Keccak-224 and Keccak-256 reduced to 3 and 4 rounds. An allocating approach is used in the attacks: the whole complexity is allocated across two stages, such that fewer constraints are considered and the complexity is lowered in each stage. Specifically, we try to find a 2-block preimage, instead of a 1-block one, for a given hash value, and the first and second message blocks are found in two stages, respectively. Both message blocks are constrained by a set of newly proposed conditions on the middle state, which are weaker than those brought by the initial values and the hash values. Thus, the complexities in the two stages are both lower than that of finding a 1-block preimage directly. Together with the basic allocating approach, an improved method is given to balance the complexities of the two stages and hence obtain the optimal attacks. As a result, we present the best theoretical preimage attacks on Keccak-224 and Keccak-256 reduced to 3 and 4 rounds. Moreover, we found in practice a (second) preimage for 3-round Keccak-224 with a complexity of $2^{39.39}$.
Andrea Coladangelo, Alex B. Grilo, Stacey Jeffery, Thomas Vidick
The problem of reliably certifying the outcome of a computation performed by a quantum device is rapidly gaining relevance. We present two protocols for a classical verifier to verifiably delegate a quantum computation to two non-communicating but entangled quantum provers. Our protocols have near-optimal complexity in terms of the total resources employed by the verifier and the honest provers, with the total number of operations of each party, including the number of entangled pairs of qubits required of the honest provers, scaling as $O(g \log g)$ for delegating a circuit of size $g$. This is in contrast to previous protocols, whose overhead in terms of resources employed, while polynomial, is far beyond what is feasible in practice. Our first protocol requires a number of rounds that is linear in the depth of the circuit being delegated, and is blind, meaning neither prover can learn the circuit or its input. The second protocol is not blind, but requires only a constant number of rounds of interaction.
Our main technical innovation is an efficient rigidity theorem which allows a verifier to test that two entangled provers perform measurements specified by an arbitrary $m$-qubit tensor product of single-qubit Clifford observables on their respective halves of $m$ shared EPR pairs, with a robustness that is independent of $m$. Our two-prover classical-verifier delegation protocols are obtained by combining this rigidity theorem with a single-prover quantum-verifier protocol for the verifiable delegation of a quantum computation, introduced by Broadbent.
Serge Fehr, Chen Yuan
Robust secret sharing enables the reconstruction of a secret-shared message in the presence of up to $t$ (out of $n$) {\em incorrect} shares. The most challenging case is when $n = 2t+1$, which is the largest $t$ for which the task is still possible, but only up to a small error probability $2^{- \kappa}$ and with some overhead in the share size.
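For background, the following Python sketch (ours) shows plain Shamir sharing with $n = 2t+1$; it recovers the secret from any $t+1$ correct shares but offers no protection against wrong shares, which is exactly the gap that robust schemes close at the cost of the overheads discussed below.

```python
import random

P = 2**61 - 1                      # a prime field (toy choice)

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]  # degree-t poly
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):           # Lagrange interpolation at 0
    s = 0
    for x_i, y_i in shares:
        num = den = 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        s = (s + y_i * num * pow(den, P - 2, P)) % P
    return s

t = 3; n = 2 * t + 1
shares = share(1234, t, n)
assert reconstruct(shares[: t + 1]) == 1234   # any t+1 honest shares suffice
```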
Recently, Bishop, Pastro, Rajaraman and Wichs proposed a scheme with an (almost) optimal overhead of $\widetilde{O}(\kappa)$. This seems to answer the open question posed by Cevallos et al., who proposed a scheme with overhead of $\widetilde{O}(n+\kappa)$ and asked whether the linear dependency on $n$ was necessary or not. However, a subtle issue with Bishop et al.'s solution is that it (implicitly) assumes a {\em non-rushing} adversary, and thus it satisfies a {\em weaker} notion of security compared to the scheme by Cevallos et al. or to the classical scheme by Rabin and Ben-Or.
In this work, we almost close this gap. We propose a new robust secret sharing scheme that offers full security against a rushing adversary, and that has an overhead of $O(\kappa n^\varepsilon)$, where $\varepsilon > 0$ is arbitrary but fixed. This $n^\varepsilon$-factor is obviously worse than the $\mathrm{polylog}(n)$-factor hidden in the $\widetilde{O}$ notation of the scheme of Bishop et al., but it greatly improves on the linear dependency on $n$ of the best known scheme that features security against a rushing adversary.
A small variation of our scheme has the same $\widetilde{O}(\kappa)$ overhead as the scheme of Bishop et al.\ {\em and} achieves security against a rushing adversary, but suffers from a (slightly) superpolynomial reconstruction complexity.
Albert Cheu, Adam Smith, Jonathan Ullman, David Zeber, Maxim Zhilyaev
We consider the problem of designing scalable, robust protocols for computing statistics about sensitive data. Specifically, we look at how best to design differentially private protocols in a distributed setting, where each user holds a private datum. The literature has mostly considered two models: the "central" model, in which a trusted server collects users' data in the clear, which allows greater accuracy; and the "local" model, in which users individually randomize their data, and need not trust the server, but accuracy is limited. Attempts to achieve the accuracy of the central model without a trusted server have so far focused on variants of cryptographic multiparty computation (MPC), which limits scalability.
In this paper, we initiate the analytic study of a shuffled model for distributed differentially private algorithms, which lies between the local and central models. This simple-to-implement model, a special case of the ESA framework of Bittau et al. (SOSP 2017), augments the local model with an anonymous channel that randomly permutes a set of user-supplied messages.
For sum queries, we show that this model provides the power of the central model while avoiding the need to trust a central server and the complexity of cryptographic secure function evaluation. More generally, we give evidence that the power of the shuffled model lies strictly between those of the central and local models: for a natural restriction of the model, we show that shuffled protocols for a widely studied selection problem require exponentially higher sample complexity than do central-model protocols.
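A minimal Python sketch (ours, with illustrative parameters) of a shuffled-model sum protocol: each user locally randomizes a private bit, the anonymous channel shuffles the messages, and the analyst debiases the sum.

```python
import random

def randomize(bit, p=0.25):        # flip with probability p (local noise)
    return bit ^ (random.random() < p)

users = [random.randint(0, 1) for _ in range(10_000)]
messages = [randomize(b) for b in users]
random.shuffle(messages)           # the anonymous channel: order carries no ID

p = 0.25
noisy_sum = sum(messages)
estimate = (noisy_sum - p * len(users)) / (1 - 2 * p)  # unbiased estimator
print(estimate, sum(users))
```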
Viet Tung Hoang, David Miller, Ni Trieu
We improve the attack of Durak and Vaudenay (CRYPTO'17) on NIST Format-Preserving Encryption standard FF3, reducing the running time from $O(N^5)$ to $O(N^{17/6})$ for domain $\mathbb{Z}_N \times \mathbb{Z}_N$. Concretely, DV's attack needs about $2^{50}$ operations to recover encrypted 6-digit PINs, whereas ours only spends about $2^{30}$ operations. In realizing this goal, we provide a pedagogical example of how to use distinguishing attacks to speed up slide attacks. In addition, we improve the running time of DV's known-plaintext attack on 4-round Feistel of domain $\mathbb{Z}_N \times \mathbb{Z}_N$ from $O(N^3)$ time to just $O(N^{5/3})$ time. We also generalize our attacks to a general domain $\mathbb{Z}_M \times \mathbb{Z}_N$, allowing one to recover encrypted SSNs using about $2^{50}$ operations. Finally, we provide some proof-of-concept implementations to empirically validate our results.
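A quick back-of-the-envelope check of the claimed concrete numbers, under our reading that 6-digit PINs correspond to the domain $\mathbb{Z}_N \times \mathbb{Z}_N$ with $N = 10^3$ (constants hidden by the $O$-notation account for the remaining gap):

```python
import math

N = 10**3
print(math.log2(N**5))        # ~49.8 -> DV's O(N^5), about 2^50
print(math.log2(N**(17/6)))   # ~28.2 -> O(N^{17/6}), about 2^30 with constants
```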
Akinori Hosoyamada, Tetsu Iwata
The Luby-Rackoff construction, or the Feistel construction, is one of the most important approaches to constructing secure block ciphers from secure pseudorandom functions. The 3-round and 4-round Luby-Rackoff constructions are proven to be secure against chosen-plaintext attacks (CPAs) and chosen-ciphertext attacks (CCAs), respectively, in the classical setting. However, Kuwakado and Morii showed that a quantum superposed chosen-plaintext attack (qCPA) can distinguish the 3-round Luby-Rackoff construction from a random permutation in polynomial time. In addition, a recent work by Ito et al. showed a quantum superposed chosen-ciphertext attack (qCCA) that distinguishes the 4-round Luby-Rackoff construction. Since the result of Kuwakado and Morii, it has been a question of much interest how many rounds are sufficient to achieve provable security against quantum query attacks. This paper answers this fundamental question by showing that 4 rounds suffice against qCPAs. Concretely, we prove that the 4-round Luby-Rackoff construction is secure up to $O(2^{n/6})$ quantum queries. We also show that our bound is tight by giving a distinguishing qCPA with $O(2^{n/6})$ quantum queries. Our result is the first one that shows security of a typical block-cipher construction against quantum query attacks, without any algebraic assumptions. To give security proofs, we introduce a proof technique which is a modification of Zhandry's compressed oracle technique.
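For reference, a minimal Python sketch of the (classical) 4-round Luby-Rackoff construction; instantiating the round functions with HMAC-SHA256 is an illustrative choice of PRF, and this sketch does not model the quantum query setting the paper analyzes.

```python
import hmac, hashlib

def F(key: bytes, x: bytes) -> bytes:
    """Round function: a keyed PRF, here HMAC-SHA256 truncated to 16 bytes."""
    return hmac.new(key, x, hashlib.sha256).digest()[:16]

def feistel_encrypt(keys, left: bytes, right: bytes):
    for k in keys:                 # each round: (L, R) -> (R, L xor F_k(R))
        left, right = right, bytes(a ^ b for a, b in zip(left, F(k, right)))
    return left, right

def feistel_decrypt(keys, left: bytes, right: bytes):
    for k in reversed(keys):       # invert: (L', R') -> (R' xor F_k(L'), L')
        left, right = bytes(a ^ b for a, b in zip(right, F(k, left))), left
    return left, right

keys = [bytes([i]) * 16 for i in range(4)]        # 4 independent round keys
ct = feistel_encrypt(keys, b"A" * 16, b"B" * 16)
assert feistel_decrypt(keys, *ct) == (b"A" * 16, b"B" * 16)
```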
Alex Lombardi, Willy Quach, Ron D. Rothblum, Daniel Wichs, David J. Wu
Non-interactive zero-knowledge arguments (NIZKs) for NP are an important cryptographic primitive, but we currently only have instantiations under a few specific assumptions. Notably, we are missing constructions from the plain learning with errors (LWE) assumption or the Diffie-Hellman (CDH/DDH) assumption.
In this paper, we study a relaxation of NIZKs to the designated-verifier setting (DV-NIZK), where a trusted setup generates a common reference string together with a secret key for the verifier. We want reusable schemes, which allow the verifier to reuse the secret key to verify many different proofs, and soundness should hold even if the malicious prover learns whether various proofs are accepted or rejected. Such reusable DV-NIZKs were recently constructed under the CDH assumption, but it was open whether they can also be constructed under LWE. In this work, we give a new construction using generic primitives that can be instantiated under CDH or LWE.
We also consider an extension of reusable DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK). In this setting, the only trusted setup consists of a common random string. However, there is also an additional untrusted setup in which the verifier chooses a public/secret key needed to generate/verify proofs, respectively. We require that zero-knowledge holds even if the public key is chosen maliciously by the verifier. Such reusable MDV-NIZKs were recently constructed under the ``one-more CDH'' assumption. In this work, we give a new construction using generic primitives that can be instantiated under DDH or LWE.
Benny Pinkas, Thomas Schneider, Oleksandr Tkachenko, Avishay Yanai
We present a new protocol for computing a circuit which implements the private set intersection functionality (PSI). Using circuits for this task is advantageous over specialized PSI protocols, since many applications of PSI do not need to compute the intersection itself but rather functions based on the items in the intersection.
Our protocol is the first circuit-based PSI protocol to achieve linear communication complexity. It is also concretely more efficient than all previous circuit-based PSI protocols. For example, for sets of size $2^{20}$ it improves the communication of the recent work of Pinkas et al. (EUROCRYPT'18) by more than 10 times, and improves the run time by a factor of 2.8 in the LAN setting, and by a factor of 5.8 in the WAN setting.
Our protocol is based on the usage of a protocol for computing oblivious programmable pseudo-random functions (OPPRF), and more specifically on our technique to amortize the cost of batching together multiple invocations of OPPRF.
Vipul Goyal, Yifan Song
In this paper, we consider the setting where a party uses correlated random tapes across multiple executions of a cryptographic algorithm. We ask if the security properties could still be preserved in such a setting. As examples, we introduce the notions of correlated-tape zero knowledge and correlated-tape multi-party computation, where the zero-knowledge property and the ideal/real model security must still be preserved even if a party uses correlated random tapes in multiple executions.
Our constructions are based on a new type of randomness extractor which we call correlated-source extractors. Correlated-source extractors can be seen as a dual of non-malleable extractors, and allow an adversary to choose several tampering functions which are applied to the randomness source. Correlated-source extractors guarantee that even given the output of the extractor on the tampered sources, the output on the original source is still uniformly random. Given (seeded) correlated-source extractors and resettably-secure computation protocols, we show how to directly get a positive result for both correlated-tape zero-knowledge and correlated-tape multi-party computation in the CRS model. This is tight considering the known impossibility results on cryptography with imperfect randomness.
Our main technical contribution is an explicit construction of a correlated-source extractor where the length of the seed is independent of the number of tamperings. Additionally, we provide a (non-explicit) existential result for correlated-source extractors with almost optimal parameters.
Adam Groce, Peter Rindal, Mike Rosulek
In this work we demonstrate that allowing differentially private leakage can significantly improve the concrete performance of secure 2-party computation (2PC) protocols. Specifically, we focus on the private set intersection (PSI) protocol of Rindal and Rosulek (CCS 2017), which is the fastest PSI protocol with security against malicious participants. We show that if differentially private leakage is allowed, the cost of the protocol can be reduced by up to 63%, depending on the desired level of differential privacy. On the technical side, we introduce a security model for differentially-private leakage in malicious-secure 2PC. We also introduce two new and improved mechanisms for "differentially private histogram overestimates," the main technical challenge for differentially-private PSI.
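The following Python sketch (ours, with illustrative parameters) shows one standard way to realize a differentially private histogram overestimate: add discrete-Laplace-style noise plus a shift so that, except with probability roughly delta, the released count is at least the true count. This is not the paper's mechanism, only an illustration of the concept.

```python
import math, random

def dp_overestimate(count, eps=1.0, delta=2**-20):
    # Difference of two geometric samples approximates the discrete Laplace
    # distribution, which is eps-DP for counting queries (sensitivity 1).
    noise = int(random.expovariate(eps)) - int(random.expovariate(eps))
    # Shift so the noise is non-negative except with probability ~delta,
    # making the released value an overestimate with high probability.
    shift = math.ceil(math.log(1 / delta) / eps)
    return count + noise + shift

hist = {"item_a": 41, "item_b": 7}
noisy = {k: dp_overestimate(v) for k, v in hist.items()}
print(noisy)
```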
Rémi Géraud, David Naccache, Răzvan Roşie
Robustness is a notion often tacitly assumed while working with encrypted data. Roughly speaking, it states that a ciphertext cannot be decrypted under different keys. Initially formalized in a public-key context, it has been further extended to key-encapsulation mechanisms, and more recently to pseudorandom functions, message authentication codes and authenticated encryption. In this work, we motivate the importance of establishing similar guarantees for functional encryption schemes, even under adversarially generated keys. Our main security notion is intended to capture the scenario where a ciphertext obtained under a master key (corresponding to Authority 1) is decrypted by functional keys issued under a different master key (Authority 2). Furthermore, we show there exist simple functional encryption schemes where robustness under adversarial key generation is not achieved. As a secondary and independent result, we formalize robustness for digital signatures (a signature should not verify under multiple keys) and point out that certain signature schemes are not robust when the keys are adversarially generated.
We present simple, generic transforms that turn a scheme into a robust one, while maintaining the original scheme's security. For the case of public-key functional encryption, we look into ciphertext anonymity and provide a transform achieving it.
Zahra Jafargholi, Kasper Green Larsen, Mark Simkin
In this work, we present the first asymptotically optimal oblivious priority queue, which matches the lower bound of Jacob, Larsen, and Nielsen (SODA'19). Our construction is conceptually simple, statistically secure, and has small hidden constants.
We illustrate the power of our optimal oblivious priority queue by presenting a conceptually equally simple construction of asymptotically optimal offline ORAM.
Our result provides the first matching upper bound to the 23-year-old lower bound of Goldreich and Ostrovsky (JACM'96).
Geoffroy Couteau, Dennis Hofheinz
We provide a generic construction of non-interactive zero-knowledge (NIZK) schemes. Our construction is a refinement of Dwork and Naor's (FOCS 2000) implementation of the hidden bits model using verifiable pseudorandom generators (VPRGs). Our refinement simplifies their construction and relaxes the necessary assumptions considerably.
As a result of this conceptual improvement, we obtain interesting new instantiations:
- A designated-verifier NIZK (with unbounded soundness) based on the computational Diffie-Hellman (CDH) problem. If a pairing is available, this NIZK becomes publicly verifiable. This constitutes the first fully secure CDH-based designated-verifier NIZK (and more generally, the first fully secure designated-verifier NIZK from a non-generic assumption which does not already imply publicly-verifiable NIZKs), and it answers an open problem recently raised by Kim and Wu (CRYPTO 2018).
- A NIZK based on the learning with errors (LWE) assumption, and assuming a non-interactive witness-indistinguishable (NIWI) proof system for bounded distance decoding (BDD). This simplifies and improves upon a recent NIZK from LWE that assumes a NIZK for BDD (Rothblum et al., PKC 2019).
Willy Quach, Ron D. Rothblum, Daniel Wichs
Non-interactive zero-knowledge proofs (NIZKs) are a fundamental cryptographic primitive. Despite a long history of research, we only know how to construct NIZKs under a few select assumptions, such as the hardness of factoring or using bilinear maps. Notably, there are no known constructions based on either the computational or decisional Diffie-Hellman (CDH/DDH) assumption without relying on a bilinear map.
In this paper, we study a relaxation of NIZKs in the designated verifier setting (DV-NIZK), in which the public common-reference string is generated together with a secret key that is given to the verifier in order to verify proofs. In this setting, we distinguish between one-time and reusable schemes, depending on whether they can be used to prove only a single statement or arbitrarily many statements. For reusable schemes, the main difficulty is to ensure that soundness continues to hold even when the malicious prover learns whether various proofs are accepted or rejected by the verifier. One-time DV-NIZKs are known to exist for general NP statements assuming only public-key encryption. However, prior to this work, we did not have any construction of reusable DV-NIZKs for general NP statements from any assumption not already known to imply standard NIZKs.
In this work, we construct reusable DV-NIZKs for general NP statements under the CDH assumption, without requiring a bilinear map. Our construction is based on the hidden-bits paradigm, which was previously used to construct standard NIZKs. We define a cryptographic primitive called a hidden-bits generator (HBG), along with a designated-verifier variant (DV-HBG), which modularly abstract out how to use this paradigm to get both standard NIZKs and reusable DV-NIZKs. We construct a DV-HBG scheme under the CDH assumption by relying on techniques from the Cramer-Shoup hash-proof system, and this yields our reusable DV-NIZK for general NP statements under CDH.
We also consider a strengthening of DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK) where the setup consists of an honestly generated common random string and the verifier then gets to choose his own (potentially malicious) public/secret key pair to generate/verify proofs. We construct MDV-NIZKs under the ``one-more CDH'' assumption without relying on bilinear maps.