CryptoDB
Rafael Pass
Publications
Year
Venue
Title
2024
EUROCRYPT
Public-Coin, Complexity-Preserving, Succinct Arguments of Knowledge for NP from Collision-Resistance
Abstract
Succinct arguments allow a powerful (yet polynomial-time) prover to convince a weak verifier of the validity of some NP statement using very little communication. A major barrier to the deployment of such proofs is the unwieldy overhead of the prover relative to the complexity of the statement to be proved. In this work, we focus on complexity-preserving arguments where proving a non-deterministic time $t$ and space $s$ RAM computation takes time $\tilde O(t)$ and space $\tilde O(s)$.
Currently, all known complexity-preserving arguments either are private-coin, rely on non-standard assumptions, or provide only weak succinctness. In this work, we construct a complexity-preserving succinct argument based solely on collision-resistant hash functions, thereby matching the classic succinct argument of Kilian (STOC '92).
2024
EUROCRYPT
A Direct PRF Construction from Kolmogorov Complexity
Abstract
While classic results in the 1980s establish that one-way functions (OWFs) imply the existence of pseudorandom generators (PRGs), which in turn imply pseudorandom functions (PRFs), the constructions (most notably the one from OWFs to PRGs) are complicated and inefficient.
Consequently, researchers have developed alternative \emph{direct} constructions of PRFs from various concrete hardness assumptions. In this work, we continue this thread and demonstrate the first direct constructions of PRFs from average-case hardness of the time-bounded Kolmogorov complexity problem $\mktp[s]$, where, given a threshold $s(\cdot)$ and a polynomial time-bound $t(\cdot)$, $\mktp[s]$ denotes the language consisting of strings $x$ with $t$-bounded Kolmogorov complexity, $K^t(x)$, bounded by $s(|x|)$.
In more detail, we demonstrate a direct PRF construction with quasi-polynomial security from mild average-case hardness of $\mktp[2^{O(\sqrt{\log n})}]$ w.r.t. the uniform distribution. We note that by earlier results, this assumption is known to be
equivalent to the existence of quasi-polynomially secure OWFs; as such, our results yield the first direct (quasi-polynomially secure) PRF construction from a natural hardness assumption that is also known to be implied by (quasi-polynomially secure) PRFs.
Perhaps surprisingly, we show how to make use of the Nisan-Wigderson PRG construction to get a cryptographic, as opposed to a complexity-theoretic, PRG.
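For intuition (a standard worked example, not taken from the paper): for any time bound $t(n) \geq cn$ with a suitable constant $c$, a constant-size program that reads an encoding of $n$ and prints $n$ zeros witnesses $K^t(0^n) \leq \log n + O(1)$, so the all-zero string lies in $\mktp[s]$ for any threshold $s(n) \geq \log n + O(1)$; by contrast, a counting argument shows that, for every $k$, at most a $2^{-k}$ fraction of $n$-bit strings $x$ satisfy $K^t(x) < n - k$, regardless of the time bound $t$.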
2024
TCC
On One-way Functions and the Worst-case Hardness of Time-Bounded Kolmogorov Complexity, and Computational Depth
Abstract
Whether one-way functions (OWF) exist is arguably the most important
problem in Cryptography, and beyond. While lots of candidate
constructions of one-way functions are known, and recently also
problems whose average-case hardness characterizes the existence of
OWFs have been demonstrated, the question of
whether there exists some \emph{worst-case hard problem} that characterizes
the existence of one-way functions has remained open since their
introduction in 1976.
In this work, we present the first ``OWF-complete'' promise
problem---a promise problem whose worst-case hardness w.r.t. $\BPP$
(resp. $\Ppoly$) is \emph{equivalent} to the existence of OWFs secure
against $\PPT$ (resp. $\nuPPT$) algorithms. The problem is a
variant of the Minimum Time-bounded Kolmogorov Complexity
problem ($\mktp[s]$ with a threshold $s$), where we condition on
instances having small ``computational depth''.
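(As background: a standard formulation of computational depth, due to Antunes, Fortnow, van Melkebeek and Vinodchandran, is the gap $K^t(x) - K(x)$ between a string's time-bounded and unbounded Kolmogorov complexity; small depth means that insisting on time-$t$ decompression costs only a few extra bits of description length.)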
We furthermore show that depending on the choice of the
threshold $s$, this problem characterizes either ``standard''
(polynomially-hard) OWFs, or quasi-polynomially- or
subexponentially-hard OWFs. Additionally, when the threshold is
sufficiently small (e.g., $2^{O(\sqrt{n})}$ or $\poly\log n$) then
\emph{sublinear} hardness of this problem suffices to characterize
quasi-polynomial/sub-exponential OWFs.
While our constructions are black-box, our analysis is \emph{non-black-box}; we additionally demonstrate that fully black-box constructions
of OWF from the worst-case hardness of this problem are impossible.
We finally show that, under Rudich's conjecture, and standard derandomization
assumptions, our problem is not inside $\coAM$; as such, it
yields the first candidate problem believed to be outside of $\AM \cap \coAM$,
or even ${\bf SZK}$, whose worst case hardness implies the existence of OWFs.
2023
CRYPTO
One-way Functions and the Hardness of (Probabilistic) Time-Bounded Kolmogorov Complexity w.r.t. Samplable Distributions
Abstract
Consider the recently introduced notion of \emph{probabilistic
time-bounded Kolmogorov Complexity}, $pK^t$ (Goldberg et al,
CCC'22), and let $\mpktp$ denote the language of pairs $(x,t)$ such that $pK^t(x) \leq t$.
We show the equivalence of the following:
- $\mpkpolyp$ is (mildly) hard-on-average w.r.t. \emph{any} samplable distribution $\D$;
- $\mpkpolyp$ is (mildly) hard-on-average w.r.t. the \emph{uniform} distribution;
- existence of one-way functions.
As far as we know, this yields the first natural class of problems where
hardness with respect to any samplable distribution is equivalent
to hardness with respect to the uniform distribution.
Under standard derandomization assumptions, we can show the same result
also w.r.t. the standard notion of time-bounded Kolmogorov
complexity, $K^t$.
2023
TCC
On One-way Functions and Sparse Languages
Abstract
We show equivalence between the existence of one-way
functions and the existence of a \emph{sparse} language that is
hard-on-average w.r.t. some efficiently samplable ``high-entropy''
distribution.
In more detail, the following are equivalent:
- The existence of a $S(\cdot)$-sparse language $L$ that is
hard-on-average with respect to some samplable distribution with
Shannon entropy $h(\cdot)$ such that $h(n)-\log(S(n)) \geq 4\log n$;
- The existence of a $S(\cdot)$-sparse language $L \in
\NP$, that is
hard-on-average with respect to some samplable distribution with
Shannon entropy $h(\cdot)$ such that $h(n)-\log(S(n)) \geq n/3$;
- The existence of one-way functions.
Our results are inspired by, and generalize, results from the elegant
recent paper by Ilango, Ren and Santhanam (IRS, STOC'22), which presents
similar connections for \emph{specific} sparse languages.
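To make the entropy-gap conditions concrete with one purely illustrative choice of parameters: for an $S(n) = 2^{n/2}$-sparse language, the first condition asks for a samplable distribution of Shannon entropy at least $h(n) = n/2 + 4\log n$, while the second asks for entropy at least $n/2 + n/3 = 5n/6$; in both cases the sampler must have noticeably more entropy than the logarithm of the number of strings in the language.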
2023
TCC
Counting Unpredictable Bits: A Simple PRG from One-way Functions
Abstract
A central result in the theory of Cryptography, by Håstad, Impagliazzo, Luby and Levin [SICOMP’99], demonstrates that the existence of one-way functions (OWF) implies the existence of pseudo-random generators (PRGs). Despite the fundamental importance of this result, and several elegant improvements/simplifications, analyses of constructions of PRGs from OWFs remain complex (both conceptually and technically).
Our goal is to provide a construction of a PRG from OWFs with a simple proof of security; we thus focus on the setting of non-uniform security (i.e., we start off with an OWF secure against non-uniform PPT, and we aim to get a PRG secure against non-uniform PPT).
Our main result is a construction of a PRG from OWFs with a self-contained, simple, proof of security, relying only on the Goldreich-Levin Theorem (and the Chernoff bound). Although our main goal is simplicity, the construction, and a variant thereof, also improves the efficiency—in terms of invocations and seed lengths—of the state-of-the-art constructions due to [Haitner-Reingold-Vadhan, STOC’10] and [Vadhan-Zheng, STOC’12], by a factor $O(\log^2 n)$.
The key novelty in our analysis is a generalization of the Blum-Micali [FOCS’82] notion of unpredictability—rather than requiring that every bit in the output of a function is unpredictable, we count how many unpredictable bits a function has, and we show that any OWF on n input bits (after hashing the input and the output) has n + O(log n) unpredictable output bits. Such unpredictable bits can next be “extracted” into a pseudorandom string using standard techniques.
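As background on the classic notion being generalized here, the following minimal sketch (not the paper's construction) shows a Blum-Micali-style generator from a one-way permutation in which every output bit is an unpredictable Goldreich-Levin hardcore bit; the modular-exponentiation map f below is only an assumed stand-in for a one-way permutation, with toy, insecure parameters:

# Minimal illustrative sketch (not the paper's construction): the classic
# Blum-Micali-style expansion, outputting one Goldreich-Levin hardcore bit
# <x, r> mod 2 per iteration of an (assumed) one-way permutation.
P = 2**89 - 1          # a Mersenne prime (so Z_P^* has order P - 1)
G = 3                  # fixed base, assumed to behave like a generator here

def f(x: int) -> int:
    """Stand-in 'one-way permutation' on exponents: x -> G^x mod P."""
    return pow(G, x, P)

def gl_bit(x: int, r: int) -> int:
    """Goldreich-Levin predicate: inner product of the bits of x and r, mod 2."""
    return bin(x & r).count("1") % 2

def prg(x0: int, r: int, m: int) -> list:
    """Blum-Micali expansion: iterate f and output one hardcore bit per step."""
    out, x = [], x0
    for _ in range(m):
        out.append(gl_bit(x, r))
        x = f(x)
    return out

print(prg(x0=123456789, r=987654321, m=16))   # 16 output bits from the short seed (x0, r)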
2023
TCC
Simplex Consensus: A Fast and Simple Consensus Protocol
Abstract
We present a theoretical framework for analyzing the efficiency of consensus protocols, and apply it to analyze the optimistic and pessimistic confirmation times of state-of-the-art partially-synchronous protocols in the so-called ``rotating leader/random leader'' model of consensus (recently popularized in the blockchain setting).
We next present a new and simple consensus protocol in the partially synchronous setting, tolerating $f < n/3$ Byzantine faults; in our eyes, this protocol is essentially as simple to describe as the simplest known protocols, but it also enjoys an even simpler security proof, while matching, and even improving, the efficiency of the state-of-the-art (according to our theoretical framework).
As with the state-of-the-art protocols, our protocol assumes a (bare) PKI, a digital signature scheme, collision-resistant hash functions, and a random leader election oracle, which may be instantiated with a random oracle (or a CRS).
2022
TCC
Universal Reductions: Reductions Relative to Stateful Oracles
Abstract
We define a framework for analyzing the security of cryptographic protocols that makes minimal assumptions about what a ``realistic model of computation'' is. In particular, whereas classical models assume that the attacker is a (perhaps non-uniform) probabilistic polynomial-time algorithm, and more recent definitional approaches also consider quantum polynomial-time algorithms, we consider an approach that is more agnostic to what computational model is physically realizable.
Our notion of \emph{universal reductions} models attackers as PPT algorithms having access to some arbitrary unbounded \emph{stateful} Nature that cannot be rewound or restarted when queried multiple times. We also consider a more relaxed notion of \emph{universal reductions w.r.t. time-evolving, $k$-window, Natures} that makes restrictions on Nature---roughly speaking, Nature's behavior may depend on the number of messages it has received and the content of the last $k(\sec)$ messages (but not on ``older'' messages).
We present both impossibility results and general feasibility results for our notions, indicating to what extent the extended Church-Turing hypotheses are needed for a well-founded theory of Cryptography.
2022
TCC
Parallelizable Delegation from LWE
Abstract
We present the first non-interactive delegation scheme for P with time-tight parallel prover
efficiency based on standard hardness assumptions. More precisely, in a time-tight delegation
scheme—which we refer to as a SPARG (succinct parallelizable argument)—the prover’s parallel
running time is t + polylog(t), while using only polylog(t) processors and where t is the length
of the computation. (In other words, the proof is computed essentially in parallel with the
computation, with only some minimal additive overhead in terms of time).
Our main results show the existence of a publicly-verifiable, non-interactive, SPARG for P
assuming polynomial hardness of LWE. Our SPARG construction relies on the elegant recent
delegation construction of Choudhuri, Jain, and Jin (FOCS’21) and combines it with techniques
from Ephraim et al (EuroCrypt’20).
We next demonstrate how to make our SPARG time-independent—where the prover and
verifier do not need to know the running-time t in advance; as far as we know, this yields
the first construction of a time-tight delegation scheme with time-independence based on any
hardness assumption.
We finally present applications of SPARGs to the constructions of VDFs (Boneh et al,
Crypto’18), resulting in the first VDF construction from standard polynomial hardness assumptions (namely LWE and the minimal assumption of a sequentially hard
function).
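For a sense of scale (numbers purely illustrative): for a computation of length $t = 2^{30}$, a time-tight prover runs in parallel time $2^{30} + \mathrm{polylog}(2^{30}) = 2^{30} + \mathrm{poly}(30)$, i.e., only an additive lower-order term beyond the computation itself, using $\mathrm{poly}(30)$ processors; in particular, even a multiplicative factor-2 overhead in the prover's time would already exceed the $t + \mathrm{polylog}(t)$ bound for large $t$.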
2022
ASIACRYPT
Concurrently Composable Non-Interactive Secure Computation
Abstract
We consider the feasibility of non-interactive secure two-party computation (NISC) in the plain model satisfying the notion of superpolynomial-time simulation (SPS). While stand-alone secure SPS-NISC protocols are known from standard assumptions (Badrinarayanan et al., Asiacrypt 2017), it has remained an open problem to construct a concurrently composable SPS-NISC.
Prior to our work, the best protocols require 5 rounds (Garg et al., Eurocrypt 2017), or 3 simultaneous-message rounds (Badrinarayanan et al., TCC 2017).
In this work, we demonstrate the first concurrently composable SPS-NISC. Our construction assumes the existence of:
* a non-interactive (weakly) CCA-secure commitment,
* a stand-alone secure SPS-NISC with subexponential security,
and satisfies the notion of “angel-based” UC security (i.e., UC with a superpolynomial-time helper) with perfect correctness.
We additionally demonstrate that both of the primitives we use (albeit only with polynomial security) are necessary for such concurrently composable SPS-NISC with perfect correctness. As such, our work identifies essentially necessary and sufficient primitives for concurrently composable SPS-NISC with perfect correctness in the plain model.
2022
JOFC
On the Complexity of Compressing Obfuscation
Abstract
Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, thus motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct. In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simple to describe primitives that is known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is roughly an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (Functional signatures and pseudorandom functions, PKC, 2016) and Bitansky et al. (From Cryptomania to Obfustopia through secret-key functional encryption, TCC, 2016) by allowing for a broader regime of parameters.
We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation.
First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives.
Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions.
Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.
2022
JOFC
Locality-Preserving Oblivious RAM
Abstract
Oblivious RAMs, introduced by Goldreich and Ostrovsky [JACM’96], compile any RAM program into one that is “memory oblivious,” i.e., the access pattern to the memory is independent of the input. All previous ORAM schemes, however, completely break the locality of data accesses (for instance, by shuffling the data to pseudorandom positions in memory). In this work, we initiate the study of locality-preserving ORAMs—ORAMs that preserve locality of the accessed memory regions, while leaking only the lengths of contiguous memory regions accessed. Our main results demonstrate the existence of a locality-preserving ORAM with polylogarithmic overhead both in terms of bandwidth and locality. We also study the trade-off between locality, bandwidth and leakage, and show that any scheme that preserves locality and does not leak the lengths of the contiguous memory regions accessed, suffers from prohibitive bandwidth. To further improve the parameters, we also consider a weaker notion of a File ORAM, which supports accesses to predefined non-overlapping regions. Assuming one-way functions, we present a computationally secure File ORAM that has a work overhead and locality of roughly $O(\log^2 N)$, while ignoring $\log\log N$ factors. To the best of our knowledge, before our work, the only works combining locality and obliviousness were for symmetric searchable encryption [e.g., Cash and Tessaro (EUROCRYPT’14), Asharov et al. (STOC’16)]. Symmetric searchable encryption ensures obliviousness if each keyword is searched only once, whereas ORAM provides obliviousness to any input program. Thus, our work generalizes that line of work to the much more challenging task of preserving locality in ORAMs.
2021
CRYPTO
On the Possibility of Basing Cryptography on $\EXP \neq \BPP$
Abstract
Liu and Pass (FOCS'20) recently demonstrated an equivalence between the
existence of one-way
functions and mild average-case hardness of the time-bounded
Kolmogorov complexity problem. In this work, we establish a similar
equivalence but to a different form of time-bounded Kolmogorov
Complexity---namely, Levin's notion of Kolmogorov Complexity---whose
hardness is closely related to the problem of whether $\EXP \neq
\BPP$. In more detail, let $Kt(x)$ denote the Levin-Kolmogorov Complexity of the string $x$;
that is, $Kt(x) = \min_{\desc \in \bitset^*, t \in \N}\{|\desc| +
\lceil \log t \rceil: U(\desc, 1^t) = x\}$, where $U$ is a universal
Turing machine, and let $\mktp$ denote the language of pairs $(x,k)$ having
the property that $Kt(x) \leq k$.
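For intuition (a standard worked example, not specific to this paper): for $x = 0^n$, the description ``print $n$ zeros'' has length $\log n + O(1)$ and runs in time $O(n)$, so $Kt(0^n) \leq \log n + \lceil \log O(n) \rceil + O(1) = 2\log n + O(1)$; conversely, a counting argument shows that at most a $2^{-k}$ fraction of $n$-bit strings $x$ have $Kt(x) < n - k$, since already the $|\desc|$ term must typically be close to $n$.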
We demonstrate that:
- $\mktp$ is \emph{two-sided error} mildly average-case hard (i.e., $\mktp
\notin \HeurpBPP$) iff infinitely-often one-way
functions exist.
- $\mktp$ is \emph{errorless} mildly average-case hard (i.e., $\mktp
\notin \AvgpBPP$) iff $\EXP \neq \BPP$.
Thus, the only ``gap'' towards getting (infinitely-often) one-way
functions from the assumption that $\EXP \neq \BPP$ is the
seemingly ``minor'' technical gap
between two-sided error and errorless average-case hardness of the
$\mktp$ problem.
As a corollary of this result, we additionally demonstrate that
any reduction from errorless to two-sided error average-case
hardness for $\mktp$ implies (unconditionally) that $\NP \neq \P$.
We finally consider other alternative notions of Kolmogorov
complexity---including space-bounded Kolmogorov complexity and
conditional Kolmogorov complexity---and show how average-case
hardness of problems related to them characterize log-space
computable one-way functions, or one-way functions in $\NC^0$.
2021
CRYPTO
Non-Malleable Codes for Bounded Parallel-Time Tampering
Abstract
Non-malleable codes allow one to encode data in such a way that once a codeword is tampered with, the modified codeword is either an encoding of the original message, or a completely unrelated one. Since the introduction of this notion by Dziembowski, Pietrzak, and Wichs (ICS '10 and J. ACM '18), there has been a large body of works realizing such coding schemes secure against various classes of tampering functions. It is well known that there is no efficient non-malleable code secure against all polynomial size tampering functions. Nevertheless, no code which is non-malleable for \emph{bounded} polynomial size attackers is known, and obtaining such a code has been a major open problem.
We present the first construction of a non-malleable code secure against all polynomial size tampering functions that have \emph{bounded} parallel time. This is an even larger class than all bounded polynomial size functions. In particular, this class includes all functions in non-uniform $\mathbf{NC}$ (and much more). Our construction is in the plain model (i.e., no trusted setup) and relies on several cryptographic assumptions such as keyless hash functions and time-lock puzzles, as well as other standard assumptions. Additionally, our construction has several appealing properties: the complexity of encoding is independent of the class of tampering functions, and we can obtain (sub-)exponentially small error.
2021
TCC
Non-Malleable Time-Lock Puzzles and Applications
Abstract
Time-lock puzzles are a mechanism for sending messages "to the future", by allowing a sender to quickly generate a puzzle with an underlying message that remains hidden until a receiver spends a moderately large amount of time solving it. We introduce and construct a variant of a time-lock puzzle which is non-malleable, which roughly guarantees that it is impossible to "maul" a puzzle into one for a related message without solving it.
Using non-malleable time-lock puzzles, we achieve the following applications:
- The first fair non-interactive multi-party protocols for coin flipping and auctions in the plain model without setup.
- Practically efficient fair multi-party protocols for coin flipping and auctions proven secure in the (auxiliary-input) random oracle model.
As a key step towards proving the security of our protocols, we introduce the notion of functional non-malleability, which protects against tampering attacks that affect a specific function of the related messages. To support an unbounded number of participants in our protocols, our time-lock puzzles satisfy functional non-malleability in the fully concurrent setting. We additionally show that standard (non-functional) non-malleability is impossible to achieve in the concurrent setting (even in the random oracle model).
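As background on the underlying mechanism (and emphatically not the paper's non-malleable construction): a minimal sketch of the classic Rivest-Shamir-Wagner repeated-squaring time-lock puzzle, with toy, insecure parameters and assuming sympy's randprime for prime generation, could look as follows:

# Minimal illustrative sketch of the classic RSW repeated-squaring time-lock
# puzzle -- the baseline mechanism only, NOT the paper's non-malleable variant.
import secrets
from sympy import randprime   # assumption: sympy is available for prime generation

def gen_puzzle(msg: int, T: int, bits: int = 512):
    """Generator: using the factorization of N as a trapdoor, compute a^(2^T) mod N
    quickly (via Euler's theorem) and use it to mask msg (requires msg < N)."""
    p = randprime(2**(bits // 2 - 1), 2**(bits // 2))
    q = randprime(2**(bits // 2 - 1), 2**(bits // 2))
    N, phi = p * q, (p - 1) * (q - 1)
    a = secrets.randbelow(N - 2) + 2
    b = pow(a, pow(2, T, phi), N)     # shortcut; valid since gcd(a, N) = 1 w.h.p.
    return (N, a, T, (msg + b) % N)   # puzzle = (modulus, base, time parameter, masked msg)

def solve_puzzle(puzzle):
    """Solver: T sequential modular squarings; no known shortcut without the trapdoor."""
    N, a, T, c = puzzle
    b = a % N
    for _ in range(T):
        b = (b * b) % N
    return (c - b) % N

puzzle = gen_puzzle(msg=42, T=10_000)
print(solve_puzzle(puzzle))           # recovers 42 after 10,000 sequential squarings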
2020
EUROCRYPT
Which Languages Have 4-Round Fully Black-Box Zero-Knowledge Arguments from One-Way Functions?
Abstract
We prove that if a language $\cL$ has a 4-round fully black-box zero-knowledge argument with negligible soundness based on one-way functions, then $\overline{\cL} \in \MA$. Since $\coNP \subseteq \MA$ implies that the polynomial hierarchy collapses, our result implies that $\NP$-complete languages are unlikely to have 4-round fully black-box zero-knowledge arguments based on one-way functions. In TCC 2018, Hazay and Venkitasubramaniam, and Khurana, Ostrovsky, and Srinivasan demonstrated 4-round fully black-box zero-knowledge arguments for all languages in $\NP$ based on injective one-way functions.
Their results also imply a 5-round protocol based on one-way functions. In essence, our result resolves the round complexity of fully black-box zero-knowledge arguments based on one-way functions.
2020
EUROCRYPT
Continuous Verifiable Delay Functions
Abstract
We introduce the notion of a continuous verifiable delay function (cVDF): a function g which is (a) iteratively sequential---meaning that evaluating the iteration $g^{(t)}$ of g (on a random input) takes time roughly t times the time to evaluate g, even with many parallel processors, and (b) (iteratively) verifiable---the output of $g^{(t)}$ can be efficiently verified (in time that is essentially independent of t). In other words, the iterated function $g^{(t)}$ is a verifiable delay function (VDF) (Boneh et al., CRYPTO '18), having the property that intermediate steps of the computation (i.e., $g^{(t')}$ for t'<t) are publicly and continuously verifiable.
We demonstrate that cVDFs have intriguing applications: (a) they can be used to construct public randomness beacons that only require an initial random seed (and no further unpredictable sources of randomness), (b) enable outsourceable VDFs where any part of the VDF computation can be verifiably outsourced, and (c) have deep complexity-theoretic consequences: in particular, they imply the existence of depth-robust moderately-hard Nash equilibrium problem instances, i.e. instances that can be solved in polynomial time yet require a high sequential running time.
Our main result is the construction of a cVDF based on the repeated squaring assumption and the soundness of the Fiat-Shamir (FS) heuristic for constant-round proofs.
We highlight that when viewed as a (plain) VDF, our construction requires a weaker FS assumption than previous ones (earlier constructions require the FS heuristic for either super-logarithmic round proofs, or for arguments).
2020
EUROCRYPT
SPARKs: Succinct Parallelizable Arguments of Knowledge
Abstract
We introduce the notion of a Succinct Parallelizable Argument of Knowledge (SPARK). This is an argument system with the following three properties for computing and proving a time T (non-deterministic) computation:
- The prover's (parallel) running time is T + polylog T. (In other words, the prover's running time is essentially T for large computation times!)
- The prover uses at most polylog T processors.
- The communication complexity and verifier complexity are both polylog T.
While the third property is standard in succinct arguments, the combination of all three is desirable as it gives a way to leverage moderate parallelism in favor of near-optimal running time. We emphasize that even a factor two overhead in the prover's parallel running time is not allowed.
Our main results are the following, all for non-deterministic polynomial-time RAM computation. We construct (1) an (interactive) SPARK based solely on the existence of collision-resistant hash functions, and (2) a non-interactive SPARK based on any collision-resistant hash function and any SNARK with quasi-linear overhead (as satisfied by recent SNARK constructions).
2020
EUROCRYPT
Succinct Non-Interactive Secure Computation
Abstract
We present the first maliciously secure protocol for succinct non-interactive secure two-party computation (SNISC): Each player sends just a single message whose length is (essentially) independent of the running time of the function to be computed. The protocol does not require any trusted setup, satisfies superpolynomial-time simulation-based security (SPS), and is based on (subexponential) security of the Learning With Errors (LWE) assumption. We do not rely on SNARKs or "knowledge of exponent"-type assumptions.
Since the protocol is non-interactive, the relaxation to SPS security is needed, as standard polynomial-time simulation is impossible; however, a slight variant of our main protocol yields a SNISC with polynomial-time simulation in the CRS model.
2020
PKC
Sublinear-Round Byzantine Agreement Under Corrupt Majority
Abstract
Although Byzantine Agreement (BA) has been studied for three decades, perhaps somewhat surprisingly, there still exist significant gaps in our understanding regarding its round complexity. A long-standing open question is the following: can we achieve BA with sublinear round complexity under corrupt majority? Due to the beautiful works by Garay et al. (FOCS’07) and Fitzi and Nielsen (DISC’09), we have partial and affirmative answers to this question albeit for the narrow regime $f = n/2 + o(n)$ where f is the number of corrupt nodes and n is the total number of nodes. So far, no positive result is known about the setting $f > 0.51n$ even for static corruption! In this paper, we make progress along this somewhat stagnant front. We show that there exists a corrupt-majority BA protocol that terminates in $O(\frac{1}{\epsilon} \log \frac{1}{\delta})$ rounds in the worst case, satisfies consistency with probability at least $1 - \delta$, and tolerates a $(1-\epsilon)$ fraction of corrupt nodes. Our protocol secures against an adversary that can corrupt nodes adaptively during the protocol execution but cannot perform “after-the-fact” removal of honest messages that have already been sent prior to corruption. Our upper bound is optimal up to a logarithmic factor in light of the elegant $\varOmega(1/\epsilon)$ lower bound by Garay et al. (FOCS’07).
2020
ASIACRYPT
On the Adaptive Security of MACs and PRFs
Abstract
We consider the security of two of the most commonly used cryptographic primitives--message authentication codes (MACs) and pseudorandom functions (PRFs)--in a multi-user setting with adaptive corruption. Whereas it is well known that any secure MAC or PRF is also multi-user secure under adaptive corruption, the trivial reduction induces a security loss that is linear in the number of users.
Our main result shows that black-box reductions from "standard" assumptions cannot be used to provide a tight, or even a linear-preserving, security reduction for adaptive multi-user secure deterministic stateless MACs and thus also PRFs. In other words, a security loss that grows with the number of users is necessary for any such black-box reduction.
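(For concreteness, the trivial reduction here can be taken to be the standard guessing argument: the single-user reduction picks one of the $U$ users at random, embeds its challenge there, and simulates the remaining users with keys it samples itself, answering corruption queries accordingly; an adversary with advantage $\varepsilon$ against the multi-user notion then yields advantage roughly $\varepsilon/U$ against the single-user notion, i.e., a loss linear in the number of users.)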
2019
EUROCRYPT
Consensus Through Herding
Abstract
State Machine Replication (SMR) is an important abstraction for a set of nodes to agree on an ever-growing, linearly-ordered log of transactions. In decentralized cryptocurrency applications, we would like to design SMR protocols that (1) resist adaptive corruptions; and (2) achieve small bandwidth and small confirmation time. All past approaches towards constructing SMR fail to achieve either small confirmation time or small bandwidth under adaptive corruptions (without resorting to strong assumptions such as the erasure model or proof-of-work).
We propose a novel paradigm for reaching consensus that departs significantly from classical approaches. Our protocol is inspired by a social phenomenon called herding, where people tend to make choices considered as the social norm. In our consensus protocol, leader election and voting are coalesced into a single (randomized) process: in every round, every node tries to cast a vote for what it views as the most popular item so far: such a voting attempt is not always successful, but rather, successful with a certain probability. Importantly, the probability that the node is elected to vote for v is independent from the probability it is elected to vote for $v' \ne v$. We will show how to realize such a distributed, randomized election process using appropriate, adaptively secure cryptographic building blocks.
We show that amazingly, not only can this new paradigm achieve consensus (e.g., on a batch of unconfirmed transactions in a cryptocurrency system), but it also allows us to derive the first SMR protocol which, even under adaptive corruptions, requires only polylogarithmically many rounds and polylogarithmically many honest messages to be multicast to confirm each batch of transactions; and importantly, we attain these guarantees under standard cryptographic assumptions.
2019
EUROCRYPT
Locality-Preserving Oblivious RAM
Abstract
Oblivious RAMs, introduced by Goldreich and Ostrovsky [JACM’96], compile any RAM program into one that is “memory oblivious”, i.e., the access pattern to the memory is independent of the input. All previous ORAM schemes, however, completely break the locality of data accesses (for instance, by shuffling the data to pseudorandom positions in memory).
In this work, we initiate the study of locality-preserving ORAMs—ORAMs that preserve locality of the accessed memory regions, while leaking only the lengths of contiguous memory regions accessed. Our main results demonstrate the existence of a locality-preserving ORAM with poly-logarithmic overhead both in terms of bandwidth and locality. We also study the tradeoff between locality, bandwidth and leakage, and show that any scheme that preserves locality and does not leak the lengths of the contiguous memory regions accessed, suffers from prohibitive bandwidth.
To the best of our knowledge, before our work, the only works combining locality and obliviousness were for symmetric searchable encryption [e.g., Cash and Tessaro (EUROCRYPT’14), Asharov et al. (STOC’16)]. Symmetric searchable encryption ensures obliviousness if each keyword is searched only once, whereas ORAM provides obliviousness to any input program. Thus, our work generalizes that line of work to the much more challenging task of preserving locality in ORAMs.
2019
CRYPTO
Synchronous, with a Chance of Partition Tolerance
Abstract
Murphy, Murky, Mopey, Moody, and Morose decide to write a paper together over the Internet and submit it to the prestigious CRYPTO’19 conference that has the most amazing PC. They encounter a few problems. First, not everyone is online every day: some are lazy and go skiing on Mondays; others cannot use git correctly and they are completely unaware that they are losing messages. Second, a small subset of the co-authors may be secretly plotting to disrupt the project (e.g., because they are writing a competing paper in stealth).
Suppose that each day, sufficiently many honest co-authors are online (and use git correctly); moreover, suppose that messages checked into git on Monday can be correctly received by honest and online co-authors on Tuesday or any future day. Can the honest co-authors successfully finish the paper in a small number of days such that they make the CRYPTO deadline; and perhaps importantly, can all the honest co-authors, including even those who are lazy and those who sometimes use git incorrectly, agree on the final theorem?
2019
CRYPTO
Non-Uniformly Sound Certificates with Applications to Concurrent Zero-Knowledge
Abstract
We introduce the notion of non-uniformly sound certificates: succinct single-message (unidirectional) argument systems that satisfy a “best-possible security” against non-uniform polynomial-time attackers. In particular, no polynomial-time attacker with s bits of non-uniform advice can find significantly more than s accepting proofs for false statements. Our first result is a construction of non-uniformly sound certificates for all $\mathbf{NP}$ in the random oracle model, where the attacker’s advice can depend arbitrarily on the random oracle.
We next show that the existence of non-uniformly sound certificates for $\mathbf{P}$ (and collision resistant hash functions) yields a public-coin constant-round fully concurrent zero-knowledge argument for $\mathbf{NP}$.
2018
CRYPTO
On the Complexity of Compressing Obfuscation
Abstract
Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, thus motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct.
In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simple to describe primitives that is known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is roughly an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (PKC 2016) and Bitansky et al. (TCC 2016) by allowing for a broader regime of parameters.
We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation.
First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives.
Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions.
Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.
2018
TCC
On the Security Loss of Unique Signatures
Abstract
We consider the question of whether the security of unique digital signature schemes can be based on game-based cryptographic assumptions using linear-preserving black-box security reductions—that is, black-box reductions for which the security loss (i.e., the ratio between “work” of the adversary and the “work” of the reduction) is some a priori bounded polynomial. A seminal result by Coron (Eurocrypt’02) shows limitations of such reductions; however, his impossibility result and its subsequent extensions all suffer from two notable restrictions: (1) they only rule out so-called “simple” reductions, where the reduction is restricted to only sequentially invoke “straight-line” instances of the adversary; and (2) they only rule out reductions to non-interactive (two-round) assumptions. In this work, we present the first full impossibility result: our main result shows that the existence of any linear-preserving black-box reduction for basing the security of unique signatures on some bounded-round assumption implies that the assumption can be broken in polynomial time.
2018
TCC
Game Theoretic Notions of Fairness in Multi-party Coin Toss
Abstract
Coin toss has been extensively studied in the cryptography literature, and the well-accepted notion of fairness (henceforth called strong fairness) requires that a corrupt coalition cannot cause non-negligible bias. It is well-understood that two-party coin toss is impossible if one of the parties can prematurely abort; further, this impossibility generalizes to multiple parties with a corrupt majority (even if the adversary is computationally bounded and fail-stop only).
Interestingly, the original proposal of (two-party) coin toss protocols by Blum in fact considered a weaker notion of fairness: imagine that the (randomized) transcript of the coin toss protocol defines a winner among the two parties. Now Blum’s notion requires that a corrupt party cannot bias the outcome in its favor (but self-sacrificing bias is allowed). Blum showed that this weak notion is indeed attainable for two parties assuming the existence of one-way functions.
In this paper, we ask a very natural question which, surprisingly, has been overlooked by the cryptography literature: can we achieve Blum’s weak fairness notion in multi-party coin toss? What is particularly interesting is whether this relaxation allows us to circumvent the corrupt majority impossibility that pertains to strong fairness. Even more surprisingly, in answering this question, we realize that it is not even understood how to define weak fairness for multi-party coin toss. We propose several natural notions drawing inspirations from game theory, all of which equate to Blum’s notion for the special case of two parties. We show, however, that for multiple parties, these notions vary in strength and lead to different feasibility and infeasibility results.
2018
TCC
Achieving Fair Treatment in Algorithmic Classification
Abstract
Fairness in classification has become an increasingly relevant and controversial issue as computers replace humans in many of today’s classification tasks. In particular, a subject of much recent debate is that of finding, and subsequently achieving, suitable definitions of fairness in an algorithmic context. In this work, following the work of Hardt et al. (NIPS’16), we consider and formalize the task of sanitizing an unfair classifier $\mathcal{C}$ into a classifier $\mathcal{C}'$ satisfying an approximate notion of “equalized odds” or fair treatment. Our main result shows how to take any (possibly unfair) classifier $\mathcal{C}$ over a finite outcome space, and transform it—by just perturbing the output of $\mathcal{C}$—according to some distribution learned by just having black-box access to samples of labeled, and previously classified, data, to produce a classifier $\mathcal{C}'$ that satisfies fair treatment; we additionally show that our derived classifier is near-optimal in terms of accuracy. We also experimentally evaluate the performance of our method.
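(As a point of reference for the fairness notion involved: in the binary-outcome case, the exact equalized-odds condition of Hardt et al. requires the derived classifier's prediction to be conditionally independent of the protected attribute given the true label, i.e., $\Pr[\mathcal{C}'(x) = 1 \mid Y = y, A = a]$ is the same for every group $a$ and each label $y \in \{0,1\}$, so that true-positive and false-positive rates match across groups; fair treatment, as studied here, is an approximate relaxation of this requirement.)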
Program Committees
- Crypto 2022
- Eurocrypt 2021
- TCC 2020 (Program chair)
- Eurocrypt 2019
- TCC 2017
- Crypto 2017
- Eurocrypt 2014
- Crypto 2014
- Crypto 2011
- Crypto 2010
- TCC 2009
- Crypto 2009
- TCC 2007
Coauthors
- Gilad Asharov (4)
- Per Austrin (1)
- Boaz Barak (3)
- Eleanor Birrell (1)
- Elette Boyle (5)
- Ran Canetti (3)
- Benjamin Chan (1)
- T.-H. Hubert Chan (4)
- Benjamin Y Chan (1)
- Kai-Min Chung (12)
- Ronald Cramer (1)
- Dana Dachman-Soled (1)
- Yevgeniy Dodis (1)
- Naomi Ephraim (3)
- Cody Freitag (7)
- Johannes Gehrke (2)
- Vipul Goyal (1)
- Yue Guo (2)
- Goichiro Hanaoka (1)
- Johan Håstad (1)
- Michael Hay (1)
- Carmit Hazay (1)
- Dennis Hofheinz (1)
- Hideki Imai (1)
- Yuval Ishai (1)
- Eike Kiltz (1)
- Ilan Komargodski (7)
- Wei-Kai Lin (1)
- Huijia Lin (9)
- Yehuda Lindell (2)
- Zhenming Liu (1)
- Yanyi Liu (5)
- Edward Lui (4)
- Mohammad Mahmoody (3)
- Noam Mazor (1)
- Ameer Mohammed (1)
- Andrew Morgan (5)
- Kartik Nayak (2)
- Soheil Nematihaji (1)
- Rafail Ostrovsky (1)
- Omkant Pandey (3)
- Omer Paneth (1)
- Rafael Pass (84)
- Krzysztof Pietrzak (1)
- Antigoni Polychroniadou (1)
- Tal Rabin (2)
- Ling Ren (2)
- Amit Sahai (2)
- Lior Seeman (1)
- Karn Seth (4)
- Abhi Shelat (7)
- Elaine Shi (10)
- Naomi Sirkin (3)
- Sidharth Telang (4)
- Florian Tramèr (1)
- Wei-Lung Dustin Tseng (7)
- Vinod Vaikuntanathan (4)
- Muthuramakrishnan Venkitasubramaniam (11)
- Ivan Visconti (1)
- Shabsi Walfish (1)
- Hoeteck Wee (2)
- Douglas Wikström (2)
- Mary Wootters (1)