CryptoDB
Prashant Nalini Vasudevan
Publications
Year
Venue
Title
2024
EUROCRYPT
Strong Batching for Non-Interactive Statistical Zero-Knowledge
Abstract
In a zero-knowledge proof, a prover needs to convince a verifier that an input x is contained in a language Pi without revealing any additional information. By repeating a zero-knowledge proof k times, it is possible to prove (still in zero-knowledge) that k separate inputs x1,...,xk all belong to Pi. But this increases the communication by a factor of k. Can one do better? In other words, is (non-trivial) zero-knowledge batch verification for Pi possible?
Recent works by Kaslasi et al. (TCC 2020, Eurocrypt 2021) show that any problem possessing a non-interactive statistical zero-knowledge proof (NISZK) has a non-trivial statistical zero-knowledge batch verification protocol. Two major limitations of their results are: (1) the communication in the batch protocol is roughly poly(n, log(k)) + O(k), which is better than the naive cost of k * poly(n) but still scales linearly with k, and (2) the batch protocol requires Omega(k) rounds of interaction.
In this work we remove both of these limitations by showing that any problem in NISZK has a non-interactive statistical zero-knowledge batch verification protocol with communication poly(n,log(k)).
2024
CRYPTO
$k$-SUM in the Sparse Regime: Complexity and Applications
Abstract
In the average-case k-SUM problem, given r integers chosen uniformly at random from {0, ..., M-1}, the objective is to find a "solution" set of k numbers that sum to 0 modulo M. In the dense regime of M <= r^k, where solutions exist with high probability, the complexity of these problems is well understood. Much less is known in the sparse regime of M >> r^k, where solutions are unlikely to exist.
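To make the problem concrete, here is a minimal sketch (not taken from the paper; all function names and parameters are illustrative) of sampling a planted average-case k-SUM instance and solving it by exhaustive search over k-subsets:

    import random
    from itertools import combinations

    def planted_ksum_instance(r, k, M, seed=None):
        """Sample r integers uniformly from {0, ..., M-1}, then overwrite a random
        size-k subset of them so that it sums to 0 mod M (the planted solution)."""
        rng = random.Random(seed)
        xs = [rng.randrange(M) for _ in range(r)]
        planted = rng.sample(range(r), k)
        for i in planted[:-1]:
            xs[i] = rng.randrange(M)
        xs[planted[-1]] = (-sum(xs[i] for i in planted[:-1])) % M
        return xs, sorted(planted)

    def brute_force_ksum(xs, k, M):
        """Return any k indices whose values sum to 0 mod M (time about r^k)."""
        for idx in combinations(range(len(xs)), k):
            if sum(xs[i] for i in idx) % M == 0:
                return list(idx)
        return None

    if __name__ == "__main__":
        r, k, M = 20, 3, 10**6  # M > r^k: the sparse regime, where unplanted solutions are unlikely
        xs, planted = planted_ksum_instance(r, k, M, seed=1)
        print("planted:", planted, "found:", brute_force_ksum(xs, k, M))

In the dense regime one would instead expect many unplanted solutions, which is what makes the two regimes behave so differently.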
Motivated by applications to cryptography, we initiate the study of the sparse regime for k-SUM and its variant k-XOR, especially their planted versions, where a random solution is planted in a randomly generated instance and has to be recovered. We provide evidence for the hardness of these problems and show applications to constructing public-key encryption schemes. Our contributions are summarized below.
Complexity: We study the complexity of these problems in the sparse regime and show:
- Conditional Lower Bounds: Assuming established conjectures about the hardness of average-case (non-planted) k-SUM/k-XOR when M = r^k, we provide non-trivial lower bounds on the running time of algorithms for planted k-SUM when r^k <= M <= r^{2k}.
- Hardness Amplification: We show that for any M \geq r^k, if an algorithm running in time T solves planted k-SUM/k-XOR with success probability Omega(1/polylog(r)), then there is an algorithm running in time Otilde(T) that solves it with probability (1-o(1)).
This enables us to use even relatively mild hardness of k-SUM/k-XOR in our cryptographic constructions. Our primary technical contribution is the proof of this result, which departs significantly from existing approaches to hardness amplification.
- New Reductions and Algorithms: We provide reductions for k-SUM/k-XOR from search to decision, as well as worst-case and average-case reductions to the Subset Sum problem from k-SUM. Additionally, we present a new algorithm for average-case k-XOR that is faster than known worst-case algorithms at low densities.
Applications: We show that by additionally assuming mild hardness of k-XOR, we can construct Public Key Encryption (PKE) from a weaker variant of the Learning Parity with Noise (LPN) problem than was known before. In particular, such LPN hardness does not appear to imply PKE on its own -- this suggests that k-XOR/k-SUM can be used to bridge "minicrypt" and "cryptomania" in some cases. We are optimistic that this technique will find other applications in cryptography.
2024
TCC
Instance-Hiding Interactive Proofs
Abstract
In an Instance-Hiding Interactive Proof (IHIP) [Beaver et al. CRYPTO 90], an efficient verifier with a _private_ input x interacts with an unbounded prover to determine whether x is contained in a language L. In addition to completeness and soundness, the instance-hiding property requires that the prover should not learn anything about x in the course of the interaction. Such proof systems capture natural privacy properties, and may be seen as a generalization of the influential concept of Randomized Encodings [Ishai et al. FOCS 00, Applebaum et al. FOCS 04, Agrawal et al. ICALP 15], and as a counterpart to Zero-Knowledge proofs [Goldwasser et al. STOC 89].
We investigate the properties and power of such instance-hiding proofs, and show the following:
1. Any language with an IHIP is contained in NP/poly and coNP/poly.
2. If an average-case hard language has an IHIP, then One-Way Functions exist.
3. There is an oracle with respect to which there is a language that has an IHIP but not an SZK proof.
4. IHIPs are closed under composition with any efficiently computable function.
We further study a stronger version of IHIP (that we call Simulatable IHIP) where the view of the honest prover can be efficiently simulated. For these, we obtain stronger versions of some of the above:
5. Any language with a Simulatable IHIP is contained in AM and coAM.
6. If a _worst-case_ hard language has a Simulatable IHIP, then One-Way Functions exist.
2024
TCC
Doubly-Efficient Batch Verification in Statistical Zero-Knowledge
Abstract
A sequence of recent works, concluding with Mu et al. (Eurocrypt, 2024), has shown that every problem $\Pi$ admitting a non-interactive statistical zero-knowledge proof (NISZK) has an efficient zero-knowledge batch verification protocol: namely, an NISZK protocol for proving that $x_1, \dots, x_k \in \Pi$ with communication that only scales poly-logarithmically with $k$. A caveat of this line of work is that the prover runs in exponential time, whereas for NP problems it is natural to hope to obtain a doubly-efficient proof -- that is, a prover that runs in polynomial time given the $k$ NP witnesses.
In this work we show that every problem in NISZK $\cap$ UP has a doubly-efficient interactive statistical zero-knowledge proof with communication poly(n, log(k)) and poly(log(k), log(n)) rounds. The prover runs in time poly(n, k) given access to the $k$ UP witnesses. Here $n$ denotes the length of each individual input, and UP is the subclass of NP relations in which YES instances have unique witnesses.
This result yields doubly-efficient statistical zero-knowledge batch verification protocols for a variety of concrete and central cryptographic problems from the literature.
2024
JOFC
Collision Resistance from Multi-collision Resistance
Abstract
Collision-resistant hash functions (CRH) are a fundamental and ubiquitous cryptographic primitive. Several recent works have studied a relaxation of CRH called t-way multi-collision-resistant hash functions (t-MCRH). These are families of functions for which it is computationally hard to find a t-way collision, even though such collisions are abundant (and even (t-1)-way collisions may be easy to find). The case of t=2 corresponds to standard CRH, but it is natural to study t-MCRH for larger values of t.
Multi-collision resistance seems to be a qualitatively weaker property than standard collision resistance. Nevertheless, in this work we show a non-blackbox transformation of any moderately shrinking t-MCRH, for t in {3,4}, into an (infinitely often secure) CRH. This transformation is non-constructive -- we can prove the existence of a CRH but cannot explicitly point out a construction.
Our result partially extends to larger values of t. In particular, we show that for suitable values of t > t', we can transform a t-MCRH into a t'-MCRH, at the cost of reducing the shrinkage of the resulting hash function family and settling for infinitely often security. This result utilizes the list-decodability properties of Reed-Solomon codes.
2022
CRYPTO
Collision-Resistance from Multi-Collision-Resistance
Abstract
Collision-resistant hash functions (CRH) are a fundamental and ubiquitous cryptographic primitive. Several recent works have studied a relaxation of CRH called t-way multi-collision-resistant hash functions (t-MCRH). These are families of functions for which it is computationally hard to find a t-way collision, even though such collisions are abundant (and even (t-1)-way collisions may be easy to find). The case of t=2 corresponds to standard CRH, but it is natural to study t-MCRH for larger values of t.
Multi-collision-resistance seems to be a qualitatively weaker property than standard collision-resistance. In particular, Komargodski et al. (Eurocrypt, 2018) showed that there does not exist a blackbox transformation of MCRH into CRH. Nevertheless, in this work we show a non-blackbox transformation of any moderately shrinking t-MCRH, for t in {3,4}, into an (infinitely often secure) CRH. This transformation is non-constructive - we can prove the existence of a CRH but cannot explicitly point out a construction.
Our result partially extends to larger values of t. In particular, we show that for suitable values of t>t', we can transform a t-MCRH into a t'-MCRH, at the cost of reducing the shrinkage of the resulting hash function family and settling for infinitely often security. This result utilizes the list-decodability properties of Reed-Solomon codes.
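As a toy illustration of the definition (unrelated to the constructions in these papers; the function and parameters below are made up), a t-way collision in a shrinking function can be found by bucketing inputs by their hash value. Whenever the domain is more than (t-1) times larger than the range, a t-way collision must exist by pigeonhole, which is why multi-collisions are "abundant" for shrinking functions:

    import hashlib
    from collections import defaultdict

    def find_t_collision(f, domain, t):
        """Group domain elements by f-value; return the first bucket that reaches size t."""
        buckets = defaultdict(list)
        for x in domain:
            buckets[f(x)].append(x)
            if len(buckets[f(x)]) == t:
                return buckets[f(x)]
        return None

    def toy_hash(x):
        # Toy shrinking function: 16-bit inputs compressed to 8-bit outputs.
        return hashlib.sha256(x.to_bytes(2, "big")).digest()[0]

    if __name__ == "__main__":
        # With 2^16 inputs and 2^8 outputs, 3-way collisions are plentiful and found quickly.
        print(find_t_collision(toy_hash, range(2**16), t=3))

Collision resistance (t = 2) and t-way multi-collision resistance only ask that such collisions be hard for an efficient adversary to find, not that they fail to exist.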
2021
EUROCRYPT
Public-Coin Statistical Zero-Knowledge Batch Verification against Malicious Verifiers
Abstract
Suppose that a problem $\Pi$ has a statistical zero-knowledge (SZK) proof with communication complexity $m$. The question of batch verification for SZK asks whether one can prove that $k$ instances $x_1,\dots,x_k$ all belong to $\Pi$ with a statistical zero-knowledge proof whose communication complexity is better than $k \cdot m$ (which is the complexity of the trivial solution of executing the original protocol independently on each input).
In a recent work, Kaslasi et al. (TCC, 2020) constructed such a batch verification protocol for any problem having a non-interactive SZK (NISZK) proof-system. Two drawbacks of their result are that their protocol is private-coin and is only zero-knowledge with respect to the honest verifier.
In this work, we eliminate these two drawbacks by constructing a public-coin malicious-verifier SZK protocol for batch verification of NISZK. Similarly to the aforementioned prior work, the communication complexity of our protocol is $(k+poly(m)) \cdot polylog(k,m)$.
2021
JOFC
Placing Conditional Disclosure of Secrets in the Communication Complexity Universe
Abstract
In the conditional disclosure of secrets (CDS) problem (Gertner et al., J. Comput. Syst. Sci., 2000), Alice and Bob, who hold n-bit inputs x and y respectively, wish to release a common secret z to Carol, who knows both x and y, if and only if the input (x, y) satisfies some predefined predicate f. Alice and Bob are allowed to send a single message to Carol, which may depend on their inputs and some shared randomness, and the goal is to minimize the communication complexity while providing information-theoretic security.
Despite the growing interest in this model, very few lower bounds are known. In this paper, we relate the CDS complexity of a predicate f to its communication complexity under various communication games. For several basic predicates our results yield tight, or almost tight, lower bounds of Omega(n) or Omega(n^{1-epsilon}), providing an exponential improvement over previous logarithmic lower bounds. We also define new communication complexity classes that correspond to different variants of the CDS model and study the relations between them and their complements. Notably, we show that allowing for imperfect correctness can significantly reduce communication -- a seemingly new phenomenon in the context of information-theoretic cryptography. Finally, our results show that proving explicit super-logarithmic lower bounds for imperfect CDS protocols is a necessary step towards proving explicit lower bounds against the communication complexity class AM^cc, or even AM^cc ∩ co-AM^cc -- a well-known open problem in the theory of communication complexity. Thus imperfect CDS forms a new minimal class which is placed just beyond the boundaries of the "civilized" part of the communication complexity world for which explicit lower bounds are known.
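For intuition about the model, here is a folklore single-element CDS protocol for the equality predicate (a standard textbook example, not a construction from this paper; names below are illustrative). Carol recovers the secret exactly when x = y, and otherwise learns nothing, because the difference of the two messages is blinded by a uniformly random multiple of (y - x):

    import random

    P = 2**61 - 1  # prime modulus; inputs, secret, and messages live in Z_P

    def cds_equality(x, y, secret, rng=random):
        """CDS for f(x, y) = [x == y]: Alice and Bob share randomness (a, b) and each
        sends a single element of Z_P to Carol."""
        a, b = rng.randrange(P), rng.randrange(P)
        msg_alice = (a * x + b) % P
        msg_bob = (a * y + b + secret) % P
        return msg_alice, msg_bob

    def carol(msg_alice, msg_bob):
        # Equals the secret when x == y; uniform and independent of the secret otherwise.
        return (msg_bob - msg_alice) % P

    if __name__ == "__main__":
        z = 123456789
        ma, mb = cds_equality(x=42, y=42, secret=z)
        assert carol(ma, mb) == z

For several basic predicates, the paper shows that communication on the order of n bits is unavoidable.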
2020
EUROCRYPT
Formalizing Data Deletion in the Context of the Right to be Forgotten
Abstract
The right of an individual to request the deletion of their personal data by an entity that might be storing it -- referred to as the _right to be forgotten_ -- has been explicitly recognized, legislated, and exercised in several jurisdictions across the world, including the European Union, Argentina, and California. However, much of the discussion surrounding this right offers only an intuitive notion of what it means for it to be fulfilled -- of what it means for such personal data to be deleted.
In this work, we provide a formal definitional framework for the right to be forgotten using tools and paradigms from cryptography. In particular, we provide a precise definition of what could be (or should be) expected from an entity that collects individuals' data when a request is made of it to delete some of this data. Our framework captures most, though not all, relevant aspects of typical systems involved in data processing. While it cannot be viewed as expressing the statements of current laws (especially since these are rather vague in this respect), our work offers technically precise definitions that represent possibilities for what the law could reasonably expect, and alternatives for what future versions of the law could explicitly require.
Finally, with the goal of demonstrating the applicability of our framework and definitions, we consider various natural and simple scenarios where the right to be forgotten comes up. For each of these scenarios, we highlight the pitfalls that arise even in genuine attempts at implementing systems offering deletion guarantees, and also describe technological solutions that provably satisfy our definitions. These solutions bring together techniques built by various communities.
2020
CRYPTO
Nearly Optimal Robust Secret Sharing against Rushing Adversaries
Abstract
Robust secret sharing is a strengthening of standard secret sharing that allows the shared secret to be recovered even if some of the shares being used in the reconstruction have been adversarially modified. In this work, we study the setting where, out of all the n shares, the adversary is allowed to adaptively corrupt and modify up to t shares, where n = 2t+1. (Note that if the adversary is allowed to modify any more shares, then correct reconstruction would be impossible.) Further, we deal with rushing adversaries, meaning that the adversary is allowed to see the honest parties' shares before modifying its own shares.
It is known that when n = 2t+1, to share a secret of length m bits and recover it with error less than 2^(-sec) (for a security parameter sec), shares of size at least m + sec bits are needed. Recently, Bishop, Pastro, Rajaraman, and Wichs (EUROCRYPT 2016) constructed a robust secret sharing scheme with shares of size m + O(sec * polylog(n, m, sec)) bits that is secure in this setting against non-rushing adversaries. Later, Fehr and Yuan (EUROCRYPT 2019) constructed a scheme that is secure against rushing adversaries, but has shares of size m + O(sec * n^eps * polylog(n, m, sec)) bits for an arbitrary constant eps > 0. They also showed a variant of their construction with share size m + O(sec * polylog(n, m, sec)) bits, but with super-polynomial reconstruction time.
We present a robust secret sharing scheme that is simultaneously close to optimal in all of these respects -- it is secure against rushing adversaries, has shares of size m + O(sec * log(n) * (log(n) + log(m))) bits, and has polynomial-time sharing and reconstruction. Central to our construction is a polynomial-time algorithm for a problem on semi-random graphs that arises naturally in the paradigm of local authentication of shares used by us and in the aforementioned work.
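For context only, here is a minimal sketch of plain (non-robust) Shamir t-out-of-n secret sharing over a prime field, the baseline that robust schemes strengthen; it omits the local-authentication machinery referred to above, and a single modified share can make reconstruction return garbage:

    import random

    P = 2**61 - 1  # prime field; the secret and all shares are elements of Z_P

    def share(secret, n, t, rng=random):
        """Pick a random degree-t polynomial with constant term `secret`;
        party i's share is its evaluation at point i."""
        coeffs = [secret] + [rng.randrange(P) for _ in range(t)]
        return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
                for i in range(1, n + 1)]

    def reconstruct(points):
        """Lagrange-interpolate the polynomial at 0 from any t+1 shares."""
        secret = 0
        for xi, yi in points:
            num, den = 1, 1
            for xj, _ in points:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    if __name__ == "__main__":
        t = 3
        shares = share(secret=99, n=2 * t + 1, t=t)
        assert reconstruct(shares[: t + 1]) == 99  # correct only if no share was tampered with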
2019
CRYPTO
Leakage Resilient Secret Sharing and Applications
Abstract
A secret sharing scheme allows a dealer to share a secret among a set of n parties such that any authorized subset of the parties can recover the secret, while any unauthorized subset learns no information about the secret. A leakage-resilient secret sharing scheme (introduced in independent works by Goyal and Kumar, STOC '18, and Benhamouda, Degwekar, Ishai and Rabin, CRYPTO '18) additionally requires the secrecy to hold against every unauthorized set of parties even if they obtain some bounded leakage from every other share. The leakage is said to be local if it is computed independently for each share. So far, the only known constructions of local leakage resilient secret sharing schemes are for threshold access structures with very low (O(1)) or very high (n - o(log n)) thresholds.
In this work, we give a compiler that takes a secret sharing scheme for any monotone access structure and produces a local leakage resilient secret sharing scheme for the same access structure, with only a constant-factor asymptotic blow-up in the sizes of the shares. Furthermore, the resultant secret sharing scheme has optimal leakage-resilience rate, i.e., the ratio between the leakage tolerated and the size of each share can be made arbitrarily close to 1. Using this secret sharing scheme as the main building block, we obtain the following results:
- Rate Preserving Non-Malleable Secret Sharing: We give a compiler that takes any secret sharing scheme for a 4-monotone access structure (one in which any authorized set has size at least 4) with rate R and converts it into a non-malleable secret sharing scheme for the same access structure with rate Omega(R). The previous such non-zero rate construction (Badrinarayanan and Srinivasan, EUROCRYPT '19) achieved a rate of Theta(R / (t_max * log^2(n))), where t_max is the maximum size of any minimal set in the access structure. As a special case, for any threshold t >= 4 and an arbitrary n >= t, we get the first constant-rate construction of t-out-of-n non-malleable secret sharing.
- Leakage-Tolerant Multiparty Computation for General Interaction Patterns: For any function f, we give a reduction from constructing a leakage-tolerant secure multi-party computation protocol for computing f that obeys any given interaction pattern to constructing a secure (but not necessarily leakage-tolerant) protocol for a related function that obeys the star interaction pattern. Together with the known results for the star interaction pattern, this gives leakage-tolerant MPC for any interaction pattern with statistical/computational security. This improves upon the result of Halevi et al. (ITCS 2016), who presented such a reduction in a leak-free environment.
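To see why local leakage resilience is a non-trivial requirement (a standard observation about additive sharing over characteristic-two fields, not specific to this paper's compiler), note that leaking just one fixed bit from every share can already reveal a bit of the secret:

    import secrets

    def xor_all(values):
        acc = 0
        for v in values:
            acc ^= v
        return acc

    def additive_share(secret_byte, n):
        """n-out-of-n additive (XOR) sharing of one byte: the shares XOR to the secret."""
        shares = [secrets.randbelow(256) for _ in range(n - 1)]
        shares.append(secret_byte ^ xor_all(shares))
        return shares

    if __name__ == "__main__":
        secret = 0b10110010
        shares = additive_share(secret, n=5)
        # Local leakage: each party independently reveals only the lowest bit of its share.
        leaked = [s & 1 for s in shares]
        # The XOR of the leaked bits is exactly the lowest bit of the secret.
        assert xor_all(leaked) == (secret & 1)

Local leakage resilience, as guaranteed by the compiler above, rules out exactly this kind of attack.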
2019
TCC
Statistical Difference Beyond the Polarizing Regime
Abstract
The polarization lemma for statistical distance (SD), due to Sahai and Vadhan (JACM, 2003), is an efficient transformation taking as input a pair of circuits (C_0, C_1) and an integer k and outputting a new pair of circuits (D_0, D_1) such that if SD(C_0, C_1) >= alpha then SD(D_0, D_1) >= 1 - 2^(-k), and if SD(C_0, C_1) <= beta then SD(D_0, D_1) <= 2^(-k). The polarization lemma is known to hold for any constant values beta < alpha^2, but extending the lemma to the regime in which alpha^2 <= beta < alpha has remained elusive. The focus of this work is in studying the latter regime of parameters. Our main results are:
1. Polarization lemmas for different notions of distance, such as Triangular Discrimination (TD) and Jensen-Shannon Divergence (JS), which enable polarization for some problems where the statistical distance satisfies alpha^2 < beta < alpha. We also derive a polarization lemma for statistical distance with any inverse-polynomially small gap between alpha^2 and beta (rather than a constant).
2. The average-case hardness of the statistical difference problem (i.e., determining whether the statistical distance between two given circuits is at least alpha or at most beta), for any values of beta < alpha, implies the existence of one-way functions. Such a result was previously only known for beta < alpha^2.
3. A (direct) constant-round interactive proof for estimating the statistical distance between any two distributions (up to any inverse polynomial error) given circuits that generate them. Proofs of closely related statements have appeared in the literature but we give a new proof which we find to be cleaner and more direct.
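For reference, the quantity being polarized is the standard statistical (total variation) distance between the output distributions of the two circuits on uniformly random inputs; in LaTeX notation (this definition is standard and only implicit above):

    \mathrm{SD}(C_0, C_1) = \frac{1}{2} \sum_{y} \left| \Pr_{x}[C_0(x) = y] - \Pr_{x}[C_1(x) = y] \right|

Polarization thus turns a promise gap (alpha, beta) on this quantity into the amplified gap (1 - 2^(-k), 2^(-k)).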
2018
CRYPTO
From Laconic Zero-Knowledge to Public-Key Cryptography
Abstract
Since its inception, public-key encryption (PKE) has been one of the main cornerstones of cryptography. A central goal in cryptographic research is to understand the foundations of public-key encryption and, in particular, base its existence on a natural and generic complexity-theoretic assumption. An intriguing candidate for such an assumption is the existence of a cryptographically hard language in NP that has a zero-knowledge protocol.
In this work we prove that public-key encryption can be based on the foregoing assumption, as long as the (honest) prover in the zero-knowledge protocol is efficient and laconic. That is, messages that the prover sends should be efficiently computable (given the witness) and short (i.e., of sufficiently sub-logarithmic length). Actually, our result is stronger and only requires the protocol to be zero-knowledge for an honest verifier and sound against computationally bounded cheating provers.
Languages in NP with such laconic zero-knowledge protocols are known from a variety of computational assumptions (e.g., Quadratic Residuosity, Decisional Diffie-Hellman, Learning with Errors, etc.). Thus, our main result can also be viewed as giving a unifying framework for constructing PKE which, in particular, captures many of the assumptions that were already known to yield PKE.
We also show several extensions of our result. First, that a certain weakening of our assumption on laconic zero-knowledge is actually equivalent to PKE, thereby giving a complexity-theoretic characterization of PKE. Second, a mild strengthening of our assumption also yields a (2-message) oblivious transfer protocol.
2018
CRYPTO
Proofs of Work From Worst-Case Assumptions
Abstract
We give Proofs of Work (PoWs) whose hardness is based on well-studied worst-case assumptions from fine-grained complexity theory. This extends the work of Ball et al. (STOC '17), which presents PoWs that are based on the Orthogonal Vectors, 3SUM, and All-Pairs Shortest Path problems. These, however, were presented as a 'proof of concept' of provably secure PoWs and did not fully meet the requirements of a conventional PoW: namely, it was not shown that multiple proofs could not be generated faster than generating each individually. We use the considerable algebraic structure of these PoWs to prove that this non-amortizability of multiple proofs does in fact hold, and further show that the PoWs' structure can be exploited in ways previous heuristic PoWs could not.
This creates full PoWs that are provably hard from worst-case assumptions (previously, PoWs were either only based on heuristic assumptions or on much stronger cryptographic assumptions (Bitansky et al., ITCS '16)) while still retaining significant structure to enable extra properties of our PoWs. Namely, we show that the PoWs of Ball et al. (STOC '17) can be modified to have much faster verification time, can be proved in zero knowledge, and more.
Finally, as our PoWs are based on evaluating low-degree polynomials originating from average-case fine-grained complexity, we prove an average-case direct sum theorem for the problem of evaluating these polynomials, which may be of independent interest. For our context, this implies the required non-amortizability of our PoWs.
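As a rough illustration of the interface a Proof of Work must provide -- a challenge generator, a prover, and a fast verifier -- here is a hash-based placeholder (purely illustrative; the PoWs in the paper are built from evaluating structured low-degree polynomials so that hardness follows from worst-case fine-grained assumptions, not from a hash function):

    import hashlib
    import os
    from itertools import count

    def gen_challenge():
        """Sample a fresh random challenge."""
        return os.urandom(16)

    def solve(challenge, difficulty_bits=16):
        """Find a nonce such that H(challenge || nonce) has `difficulty_bits` leading zero bits.
        Placeholder work function standing in for the paper's polynomial-evaluation proofs."""
        target = 1 << (256 - difficulty_bits)
        for nonce in count():
            digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def verify(challenge, nonce, difficulty_bits=16):
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

    if __name__ == "__main__":
        c = gen_challenge()
        n = solve(c)
        assert verify(c, n)

The non-amortizability property discussed above asks, in addition, that producing many such proofs cannot be meaningfully cheaper than producing each one separately.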
2017
CRYPTO
Program Committees
- Crypto 2024
- TCC 2024
- TCC 2022
- Eurocrypt 2021
- Eurocrypt 2020
- TCC 2019
Coauthors
- Shweta Agrawal (1)
- Benny Applebaum (2)
- Barak Arkis (1)
- Marshall Ball (1)
- Itay Berman (3)
- Akshay Degwekar (4)
- Sanjam Garg (1)
- Shafi Goldwasser (1)
- Inbar Kaslasi (1)
- Or Keret (1)
- Pasin Manurangsi (1)
- Changrui Mu (2)
- Shafik Nassar (1)
- Pavel Raykov (1)
- Alon Rosen (1)
- Ron D. Rothblum (8)
- Manuel Sabin (1)
- Sagnik Saha (1)
- Nikolaj I. Schwartzbach (1)
- Akshayaram Srinivasan (2)
- Vinod Vaikuntanathan (2)
- Akhil Vanukuri (1)
- Prashant Nalini Vasudevan (18)