## CryptoDB

### Spencer Peters

#### Publications

**2024 · CRYPTO**

**Adaptively Sound Zero Knowledge SNARKs for UP**

**Abstract**

We study succinct non-interactive arguments (SNARGs) and succinct non-interactive arguments of knowledge (SNARKs) for the class $\mathsf{UP}$ in the reusable designated verifier model. $\mathsf{UP}$ is an expressive subclass of $\mathsf{NP}$ consisting of all $\mathsf{NP}$ languages where each instance has at most one witness; a designated verifier SNARG (dvSNARG) is one where verification of the SNARG proof requires a private verification key; and such a dvSNARG is reusable if soundness holds even against a malicious prover with oracle access to the (private) verification algorithm.
Our main results are as follows.
(1) A reusably and adaptively sound zero-knowledge (zk) dvSNARG for $\mathsf{UP}$, from subexponential LWE and evasive LWE (a relatively new but popular variant of LWE). Our SNARGs achieve very short proofs of length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error.
(2) A generic transformation that lifts any "Sahai-Waters-like" (zk) SNARG to an adaptively sound (zk) SNARG, in the *designated-verifier* setting. In particular, this shows that the Sahai-Waters SNARG for $\mathsf{NP}$ is adaptively sound in the designated verifier setting, assuming subexponential hardness of the underlying assumptions. The resulting SNARG proofs have length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error. Our result sidesteps the Gentry-Wichs barrier for adaptive soundness by employing an exponential-time security reduction.
(3) A generic transformation that lifts any adaptively sound (zk) SNARG for $\mathsf{UP}$ to an adaptively sound (zk) SNARK for $\mathsf{UP}$, while preserving zero-knowledge. The resulting SNARK achieves the strong notion of black-box extraction. There are barriers to achieving such SNARKs for all of $\mathsf{NP}$ from falsifiable assumptions, so our restriction to $\mathsf{UP}$ is, in a sense, necessary.
Applying (3) to our SNARG for $\mathsf{UP}$ from evasive LWE (1), we obtain a reusably and adaptively sound designated-verifier zero-knowledge SNARK for $\mathsf{UP}$ from subexponential LWE and evasive LWE. Moreover, applying both (2) and (3) to the Sahai-Waters SNARG, we obtain the same result from LWE, subexponentially secure one-way functions, and subexponentially secure indistinguishability obfuscation. Both constructions have succinct proofs of size $\mathsf{poly}(\lambda)$. These are the first SNARK constructions (even in the designated-verifier setting) for a non-trivial subset of $\mathsf{NP}$ from (sub-exponentially) falsifiable assumptions.

**2023 · CRYPTO**

**Revisiting Time-Space Tradeoffs for Function Inversion**

**Abstract**

We study the black-box function inversion problem, which is the problem of finding $x \in [N]$ such that $f(x) = y$, given as input some challenge point $y$ in the image of a function $f : [N] \to [N]$, using $T$ oracle queries to $f$ *and* preprocessed advice $\sigma \in \{0,1\}^S$ depending on $f$. We prove a number of new results about this problem, as follows.
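As a toy illustration of this model (not any algorithm from the paper), the following sketch separates the two phases: an unbounded preprocessing phase that sees all of $f$ and emits bounded advice $\sigma$, and an online phase that sees only the advice, the challenge $y$, and an oracle for $f$. The advice format (a partial inverse table) is purely illustrative.

```python
# Toy model of preprocessed function inversion: preprocessing sees all
# of f and produces bounded advice; the online inverter sees only the
# advice, the challenge y, and oracle access to f.
import random

N = 64  # domain/range size, [N] = {0, ..., N-1}

def preprocess(f, stored_points):
    """Advice: a partial inverse table on a fixed subset of the domain."""
    return {f(x): x for x in stored_points}

def invert(f_oracle, advice, y, query_budget):
    """Online phase: consult the advice, then spend up to T oracle queries."""
    if y in advice:
        return advice[y]
    queries = 0
    for x in range(N):
        if queries >= query_budget:
            return None  # query budget exhausted
        queries += 1
        if f_oracle(x) == y:
            return x
    return None

random.seed(0)
table = [random.randrange(N) for _ in range(N)]
f = lambda x: table[x]

advice = preprocess(f, stored_points=range(N // 2))  # ~N/2 stored entries
y = f(random.randrange(N))  # challenge in the image of f
x_hat = invert(f, advice, y, query_budget=N)
assert x_hat is not None and f(x_hat) == y
```

With `query_budget=N` the online phase always succeeds; the interesting question, studied in the paper, is the tradeoff between the advice size $S$ and the query budget $T$.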
1. We show an algorithm that works for any $T$ and $S$ satisfying
$$T S^2 \cdot \max\{S,T\} = \widetilde{\Theta}(N^3).$$
In the important setting when $S < T$, this improves on the celebrated algorithm of Fiat and Naor [STOC, 1991], which requires $T S^3 \gtrsim N^3$. E.g., Fiat and Naor's algorithm is only non-trivial for $S \gg N^{2/3}$, while our algorithm gives a non-trivial tradeoff for any $S \gg N^{1/2}$. (Our algorithm and analysis are quite simple. As a consequence of this, we also give a self-contained and simple proof of Fiat and Naor's original result, with certain optimizations left out for simplicity.)
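As a quick numeric sanity check of the regimes quoted above (ignoring polylog factors), one can compare the online time $T$ implied by each tradeoff at a few advice sizes $S$. The closed forms below are simple rearrangements of the two stated bounds: Fiat-Naor gives $T = N^3/S^3$, and in the regime $S < T$ the new bound $T S^2 \cdot T = N^3$ gives $T = N^{3/2}/S$.

```python
# Numeric sanity check (ignoring polylog factors) of the two tradeoffs:
# Fiat-Naor needs T*S^3 ~ N^3; the new algorithm needs T*S^2*max(S,T) ~ N^3,
# i.e. T ~ N^{1.5}/S in the regime S < T.
N = 2**30

def fiat_naor_T(S):
    return N**3 / S**3       # rearranged from T*S^3 = N^3

def new_T(S):
    return N**1.5 / S        # rearranged from T^2*S^2 = N^3 (when S < T)

for exponent in (0.55, 0.65, 0.75):
    S = N**exponent
    print(f"S = N^{exponent}: Fiat-Naor T/N = {fiat_naor_T(S)/N:.3g}, "
          f"new T/N = {new_T(S)/N:.3g}")

# Fiat-Naor is non-trivial (T < N) only once S exceeds N^{2/3};
# the new tradeoff is already non-trivial for any S above N^{1/2}.
assert fiat_naor_T(N**0.6) > N   # still trivial for Fiat-Naor
assert new_T(N**0.6) < N         # already non-trivial for the new bound
```

At $S = N^{0.6}$, for example, Fiat-Naor would need $T = N^{1.2}$ queries (worse than brute force), while the new tradeoff needs only $T = N^{0.9}$.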
2. We observe that there is a very simple *non-adaptive* algorithm (i.e., an algorithm whose $i$-th query $x_i$ is chosen based entirely on $\sigma$ and $y$, and not on the values $f(x_1),\ldots, f(x_{i-1})$) that improves slightly on the trivial algorithm. It works for any $T$ and $S$ satisfying $S = \Theta(N \log(N/T))$; for example, $T = N/\mathrm{polylog}(N)$, $S = \Theta(N \log \log N)$.
This answers a question due to Corrigan-Gibbs and Kogan [TCC, 2019], who asked whether non-trivial non-adaptive algorithms exist, i.e., algorithms that work with parameters $T$ and $S$ satisfying $T + S/\log N = o(N)$.
We also observe that our non-adaptive algorithm is what we call a *guess-and-check* algorithm; that is, it is non-adaptive *and* its final output is always one of the oracle queries $x_1,\ldots, x_T$.
For guess-and-check algorithms, we prove a matching lower bound, thereby completely characterizing the achievable parameters $(S,T)$ for this natural class of algorithms. (Corrigan-Gibbs and Kogan showed that any such lower bound for *arbitrary* non-adaptive algorithms would imply new circuit lower bounds.)
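A minimal sketch of the guess-and-check structure (purely illustrative, not the paper's construction, and ignoring the space bound $S$ entirely) might look as follows: the query set is fixed in advance by the advice and $y$ alone, and the output is always one of the queried points.

```python
# Toy guess-and-check inverter: all queries x_1, ..., x_T are determined
# by (advice, y) alone (non-adaptive), and the output is always one of
# the queried points. Illustrative only; the advice here is far larger
# than the space bounds discussed in the paper.
import random

N = 256
T = 8  # query budget per challenge

def preprocess(f):
    """Advice: for each image point y, up to T candidate preimages."""
    candidates = {}
    for x in range(N):
        candidates.setdefault(f(x), []).append(x)
    return {y: xs[:T] for y, xs in candidates.items()}

def invert(f_oracle, advice, y):
    queries = advice.get(y, [])[:T]  # fixed up front from (advice, y)
    for x in queries:                # check each guess with one query
        if f_oracle(x) == y:
            return x                 # output is one of the queries
    return None

random.seed(1)
table = [random.randrange(N) for _ in range(N)]
f = lambda x: table[x]
advice = preprocess(f)

y = f(random.randrange(N))
x_hat = invert(f, advice, y)
assert x_hat is not None and f(x_hat) == y
```

The point of the lower bound quoted above is that for this restricted class, the tradeoff between advice size and query count is completely understood.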
3. We show equivalence between function inversion and a natural decision version of the problem in both the worst case and the average case, and similarly for functions $f : [N] \to [M]$ with different ranges. Some of these equivalence results are deferred to the full version [ECCC, 2022].
All of the above results are most naturally described in a model with *shared randomness* (i.e., random coins shared between the preprocessing algorithm and the online algorithm). However, as an additional contribution, we show (using a technique from communication complexity due to Newman [IPL, 1991]) how to generically convert any algorithm that uses shared randomness into one that does not.

#### Coauthors

- Alexander Golovnev (1)
- Siyao Guo (1)
- Surya Mathialagan (1)
- Spencer Peters (2)
- Noah Stephens-Davidowitz (1)
- Vinod Vaikuntanathan (1)