## CryptoDB

### Alon Rosen

#### Publications

- **2020, EUROCRYPT.** Fine-grained cryptography is concerned with adversaries that are only moderately more powerful than the honest parties. We will survey recent results in this relatively underdeveloped area of study and examine whether the time is ripe for further advances in it.
- **2020, PKC.** $\mathsf{LWE}$-based key-exchange protocols lie at the heart of post-quantum public-key cryptography. However, all existing protocols lack either the non-interactive nature of Diffie-Hellman key exchange or a polynomial $\mathsf{LWE}$-modulus, resulting in unwanted efficiency overhead. We study the possibility of designing non-interactive $\mathsf{LWE}$-based protocols with a polynomial $\mathsf{LWE}$-modulus. To this end, we identify and formalize simple non-interactive and polynomial-$\mathsf{LWE}$-modulus variants of existing protocols, where Alice and Bob simultaneously exchange one or more (ring) $\mathsf{LWE}$ samples with polynomial $\mathsf{LWE}$-modulus and then run individual key-reconciliation functions to obtain the shared key. We point out central barriers and show that such non-interactive key-exchange protocols are impossible if: (1) the reconciliation functions first compute the inner product of the received $\mathsf{LWE}$ sample with their private $\mathsf{LWE}$ secret; this impossibility is information-theoretic; (2) one of the reconciliation functions does not depend on the error of the transmitted $\mathsf{LWE}$ sample; this impossibility assumes hardness of $\mathsf{LWE}$. We give further evidence that progress in either direction, of giving an $\mathsf{LWE}$-based $\mathrm{NIKE}$ protocol or proving the impossibility of one, will lead to progress on other well-studied questions in cryptography. Overall, our results show possibilities and challenges in designing simple (ring) $\mathsf{LWE}$-based non-interactive key-exchange protocols.
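The protocol shape studied in the abstract can be sketched in a few lines. This is a toy with insecure, illustrative parameters (the names `A`, `s_a`, `b_a`, and the bounds are assumptions, not the paper's): each party publishes one LWE sample over a shared matrix and reconciles by rounding the inner product of the received sample with its own secret.

```python
import random

random.seed(1)

# Toy non-interactive LWE-style exchange (insecure parameters).
q, n, B = 12289, 16, 2          # toy modulus, dimension, error/secret bound

def small_vec():
    return [random.randint(-B, B) for _ in range(n)]

A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]  # shared

s_a, e_a = small_vec(), small_vec()
s_b, e_b = small_vec(), small_vec()

# Published samples: b_a = s_a^T A + e_a^T  and  b_b = A s_b + e_b  (mod q)
b_a = [(sum(s_a[i] * A[i][j] for i in range(n)) + e_a[j]) % q for j in range(n)]
b_b = [(sum(A[i][j] * s_b[j] for j in range(n)) + e_b[i]) % q for i in range(n)]

# Reconciliation by inner product with the private secret: both values
# approximate s_a^T A s_b but differ by the small cross error terms.
k_a = sum(b_b[i] * s_a[i] for i in range(n)) % q
k_b = sum(b_a[j] * s_b[j] for j in range(n)) % q

bit_a, bit_b = (2 * k_a) // q, (2 * k_b) // q   # round to the top bit
# The bits agree unless k_a and k_b straddle a rounding boundary; that
# residual disagreement is the gap the impossibility results formalize.
```

The two rounded bits coincide with high probability but not always, which is exactly why exact non-interactive agreement of this inner-product form is ruled out.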
- **2018, EUROCRYPT**
- **2018, CRYPTO.** We give Proofs of Work (PoWs) whose hardness is based on well-studied worst-case assumptions from fine-grained complexity theory. This extends the work of Ball et al. (STOC '17), which presents PoWs based on the Orthogonal Vectors, 3SUM, and All-Pairs Shortest Path problems. These, however, were presented as a 'proof of concept' of provably secure PoWs and did not fully meet the requirements of a conventional PoW: namely, it was not shown that multiple proofs could not be generated faster than generating each individually. We use the considerable algebraic structure of these PoWs to prove that this non-amortizability of multiple proofs does in fact hold, and further show that the PoWs' structure can be exploited in ways previous heuristic PoWs could not. This yields full PoWs that are provably hard from worst-case assumptions (previously, PoWs were based either on heuristic assumptions or on much stronger cryptographic assumptions (Bitansky et al., ITCS '16)) while still retaining significant structure that enables extra properties of our PoWs. Namely, we show that the PoWs of Ball et al. (STOC '17) can be modified to have much faster verification time, can be proved in zero knowledge, and more. Finally, as our PoWs are based on evaluating low-degree polynomials originating from average-case fine-grained complexity, we prove an average-case direct sum theorem for the problem of evaluating these polynomials, which may be of independent interest. In our context, this implies the required non-amortizability of our PoWs.
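The algebraic "work" underlying the Orthogonal Vectors line of PoWs can be made concrete. A minimal sketch, assuming the standard OV polynomial (the function name and toy prime are illustrative): $f_{\mathrm{OV}}(U, V) = \sum_{u \in U, v \in V} \prod_i (1 - u_i v_i)$ over a prime field, which on 0/1 inputs counts orthogonal pairs and whose evaluation on random field inputs is conjectured to require essentially quadratic time.

```python
# Toy evaluation of the Orthogonal Vectors polynomial over F_p.
p = 1_000_003            # toy prime modulus (illustrative only)

def f_ov(U, V, p):
    total = 0
    for u in U:
        for v in V:
            term = 1
            for ui, vi in zip(u, v):
                term = term * (1 - ui * vi) % p
            total = (total + term) % p
    return total

# On Boolean vectors, each product term is 1 iff the pair is orthogonal,
# so f_OV counts orthogonal pairs exactly.
U = [(1, 0, 1), (0, 0, 1)]
V = [(0, 1, 0), (1, 1, 0)]
count = sum(1 for u in U for v in V
            if all(ui * vi == 0 for ui, vi in zip(u, v)))
assert f_ov(U, V, p) == count
```

The real constructions additionally supply fast verification (via interactive proofs over this polynomial), which the sketch omits.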
- **2017, TCC**
- **2017, TCC**
- **2017, TCC**
- **2016, TCC**
- **2016, TCC**
- **2016, TCC**
- **2016, JOFC**
- **2015, EPRINT**
- **2015, EPRINT**
- **2015, EPRINT**
- **2015, TCC**
- **2015, TCC**
- **2015, TCC**
- **2014, CRYPTO**
- **2014, TCC**
- **2014, EPRINT**
- **2014, EPRINT**
- **2014, EPRINT**
- **2014, CHES**
- **2014, FSE**
- **2013, CRYPTO**
- **2012, TCC**
- **2012, TCC**
- **2012, EUROCRYPT**
- **2011, TCC**
- **2010, PKC**
- **2010, ASIACRYPT**
- **2010, EPRINT.** Much of the literature on rational cryptography focuses on analyzing the strategic properties of cryptographic protocols. However, due to the presence of computationally bounded players and the asymptotic nature of cryptographic security, a definition of sequential rationality for this setting has thus far eluded researchers. We propose a new framework for overcoming these obstacles, and provide the first definitions of computational solution concepts that guarantee sequential rationality. We argue that natural computational variants of subgame perfection are too strong for cryptographic protocols. As an alternative, we introduce a weakening called threat-free Nash equilibrium that is more permissive but still eliminates the undesirable "empty threats" of non-sequential solution concepts. To demonstrate the applicability of our framework, we revisit the problem of implementing a mediator for correlated equilibria (Dodis-Halevi-Rabin, Crypto '00), and propose a variant of their protocol that is sequentially rational for a non-trivial class of correlated equilibria. Our treatment provides a better understanding of the conditions under which mediators in a correlated equilibrium can be replaced by a stable protocol.
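The "empty threats" that threat-free solution concepts rule out can be seen in a textbook entry-deterrence game (a toy illustration, not a game from the paper): Player 1 chooses Out (payoffs (0, 2)) or In; after In, Player 2 chooses Fight ((-1, -1)) or Accommodate ((1, 1)).

```python
# Payoff tuples are (player 1, player 2).
OUT, FIGHT, ACC = (0, 2), (-1, -1), (1, 1)

def outcome(p1_in, p2_fights):
    if not p1_in:
        return OUT
    return FIGHT if p2_fights else ACC

# (Out, Fight) is a Nash equilibrium: facing the threat to Fight,
# Player 1 prefers Out (0 > -1), and the threat is never tested.
assert outcome(False, True)[0] >= outcome(True, True)[0]

# But the threat is empty: in the subgame after In, Player 2 strictly
# prefers Accommodate (1 > -1), so backward induction selects (In, Acc).
p2_accommodates = ACC[1] > FIGHT[1]
p1_enters = ACC[0] > OUT[0]        # anticipating Accommodate
assert p2_accommodates and p1_enters
```

Computational analogues of this backward-induction reasoning are what the paper argues become too strong for cryptographic protocols, motivating the weaker threat-free notion.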
- **2009, TCC**
- **2009, TCC**
- **2009, TCC**
- **2008, FSE**
- **2008, EPRINT.** We provide a simple protocol for secret reconstruction in any threshold secret sharing scheme, and prove that it is fair when executed with many rational parties together with a small minority of honest parties. That is, all parties will learn the secret with high probability when the honest parties follow the protocol and the rational parties act in their own self-interest (as captured by the notion of a Bayesian subgame perfect equilibrium). The protocol only requires a standard (synchronous) broadcast channel, and tolerates fail-stop deviations (i.e. early stopping, but not incorrectly computed messages). Previous protocols for this problem in the cryptographic or economic models have either required an honest majority, used strong communication channels that enable simultaneous exchange of information, or settled for approximate notions of security/equilibria.
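The underlying mechanics of threshold reconstruction can be sketched with a minimal Shamir $(t, n)$ scheme over a prime field (the game-theoretic layer of the paper, i.e. how rational parties are induced to broadcast their shares, is not modelled here; names and the field prime are illustrative).

```python
import random

P = 2_147_483_647               # prime field modulus (2^31 - 1)

def share(secret, t, n):
    # Random degree-(t-1) polynomial with constant term = secret;
    # share i is the point (i, poly(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at 0 from any t shares.
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 suffice
assert reconstruct(shares[1:4]) == 123456789
```

Fewer than `t` shares reveal nothing about the secret, which is what gives rational parties something to bargain over during reconstruction.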
- **2008, EPRINT.** We initiate the study of one-wayness under {\em correlated products}. We are interested in identifying necessary and sufficient conditions for a function $f$ and a distribution on inputs $(x_1, \ldots, x_k)$, so that the function $(f(x_1), \ldots, f(x_k))$ is one-way. The main motivation of this study is the construction of public-key encryption schemes that are secure against chosen-ciphertext attacks (CCA). We show that any collection of injective trapdoor functions that is secure under very natural correlated products can be used to construct a CCA-secure public-key encryption scheme. The construction is simple, black-box, and admits a direct proof of security. We provide evidence that security under correlated products is achievable by demonstrating that any collection of lossy trapdoor functions, a powerful primitive introduced by Peikert and Waters (STOC '08), yields a collection of injective trapdoor functions that is secure under the above mentioned natural correlated products. Although we eventually base security under correlated products on lossy trapdoor functions, we argue that the former notion is potentially weaker as a general assumption. Specifically, there is no fully-black-box construction of lossy trapdoor functions from trapdoor functions that are secure under correlated products.
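Why one-wayness under correlated products is a genuine assumption rather than a formality: a classical example (not from the paper; moduli below are toy values) is modular squaring, conjectured one-way for a single modulus, yet broken under the "same input repeated" correlation, since three squares of the same $x$ can be CRT-combined into $x^2$ over the integers.

```python
import math

# Three toy RSA-style moduli (pairwise coprime, illustrative sizes).
N1, N2, N3 = 101 * 103, 107 * 109, 113 * 127
x = 1_234_567                        # secret input; x^2 < N1 * N2 * N3
ys = [x * x % N for N in (N1, N2, N3)]   # the correlated product's output

def crt(residues, moduli):
    # Chinese Remainder Theorem: combine residues into one value mod prod.
    M = math.prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)
    return total % M

x_sq = crt(ys, (N1, N2, N3))   # equals x^2 exactly, since x^2 < M
assert math.isqrt(x_sq) == x   # the "hard" preimage falls out
```

Identifying which function/distribution pairs avoid this kind of collapse is precisely the question the abstract formalizes.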
- **2008, EPRINT.** Lossy trapdoor functions (Peikert and Waters, STOC '08) are an intriguing and powerful cryptographic primitive. Their main applications are simple and black-box constructions of chosen-ciphertext secure encryption, as well as collision-resistant hash functions and oblivious transfer. An appealing property of lossy trapdoor functions is the ability to realize them from a variety of number-theoretic assumptions, such as the hardness of the decisional Diffie-Hellman problem, and the worst-case hardness of lattice problems. In this short note we propose a new construction of lossy trapdoor functions based on the Damg{\aa}rd-Jurik encryption scheme (whose security relies on Paillier's decisional composite residuosity assumption). Our approach also yields a direct construction of all-but-one trapdoor functions, an important ingredient of the Peikert-Waters encryption scheme. The functions we propose enjoy short public descriptions, which in turn yield more efficient encryption schemes.
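The general pattern behind Paillier-style lossy trapdoor functions can be sketched at the lowest Damgård-Jurik level ($s = 1$, i.e. plain Paillier). This is a toy with tiny, insecure parameters and is not the paper's exact construction: the public key is an encryption $c$ of a mode bit $b$, and the function is $f(x) = c^x \bmod n^2$; with $b = 1$ the trapdoor $\lambda$ recovers $x$, while with $b = 0$ the image collapses.

```python
import math
import random

p, q = 347, 359                       # toy primes (never use sizes like this)
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)          # Carmichael function of n (trapdoor)
mu = pow(lam, -1, n)

def enc(m):
    # Paillier encryption of m with fresh randomness coprime to n.
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return pow(1 + n, m, n2) * pow(r, n, n2) % n2

def dec(c):
    # Standard Paillier decryption via the trapdoor lam.
    return (pow(c, lam, n2) - 1) // n * mu % n

def f(c, x):
    # The (possibly lossy) function: c^x is an encryption of b * x.
    return pow(c, x, n2)

c_inj, c_lossy = enc(1), enc(0)       # injective vs lossy mode keys

assert dec(f(c_inj, 12345)) == 12345          # trapdoor inverts f
assert f(c_lossy, 5) == f(c_lossy, 5 + lam)   # lossy: f depends on x mod lam
```

In lossy mode the image size is bounded by the order of an $n$-th power, far below the domain size at higher Damgård-Jurik levels, which is where the construction's lossiness comes from.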
- **2007, JOFC**
- **2006, TCC**
- **2006, JOFC**
- **2006, EPRINT.** We demonstrate an \emph{average-case} problem which is as hard as finding $\gamma(n)$-approximate shortest vectors in certain $n$-dimensional lattices in the \emph{worst case}, where $\gamma(n) = O(\sqrt{\log n})$. The previously best known factor for any class of lattices was $\gamma(n) = \tilde{O}(n)$. To obtain our results, we focus on families of lattices having special algebraic structure. Specifically, we consider lattices that correspond to \emph{ideals} in the ring of integers of an algebraic number field. The worst-case assumption we rely on is that in some $\ell_p$ length, it is hard to find approximate shortest vectors in these lattices, under an appropriate form of preprocessing of the number field. Our results build upon prior works by Micciancio (FOCS 2002), Peikert and Rosen (TCC 2006), and Lyubashevsky and Micciancio (ICALP 2006). For the connection factors $\gamma(n)$ we achieve, the corresponding \emph{decisional} promise problems on ideal lattices are \emph{not} known to be NP-hard; in fact, they are in P. However, the \emph{search} approximation problems still appear to be very hard. Indeed, ideal lattices are well-studied objects in computational number theory, and the best known algorithms for them seem to perform \emph{no better} than the best known algorithms for general lattices. To obtain the best possible connection factor, we instantiate our constructions with infinite families of number fields having constant \emph{root discriminant}. Such families are known to exist and are computable, though no efficient construction is yet known. Our work motivates the search for such constructions. Even constructions of number fields having root discriminant up to $O(n^{2/3-\epsilon})$ would yield connection factors better than the current best of~$\tilde{O}(n)$.
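The "ideal lattice" objects in the abstract have a simple concrete instance worth keeping in mind (a toy in the ring $\mathbb{Z}[x]/(x^n + 1)$, not the general number fields the paper works with): a principal ideal $(g)$ embeds as the lattice spanned by the coefficient vectors of $g, xg, \ldots, x^{n-1}g$.

```python
# Coefficient embedding of the principal ideal (g) in Z[x]/(x^n + 1).
n = 4
g = [3, 1, 0, 2]                # g(x) = 3 + x + 2x^3 (toy choice)

def mul_by_x(v):
    # Multiplication by x in Z[x]/(x^n + 1): rotate coefficients and
    # negate the one that wraps around, since x^n = -1.
    return [-v[-1]] + v[:-1]

basis = [g]
for _ in range(n - 1):
    basis.append(mul_by_x(basis[-1]))

# Integer combinations of these rows are exactly the coefficient
# vectors of elements of the ideal (g).
assert basis[1] == [-2, 3, 1, 0]    # x * g
assert basis[3] == [-1, 0, -2, 3]   # x^3 * g
```

The extra algebraic structure of such bases is what the worst-case to average-case reductions exploit.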
- **2005, EUROCRYPT**
- **2004, TCC**
- **2004, TCC**
- **2001, EPRINT.** We show that any concurrent zero-knowledge protocol for a non-trivial language (i.e., for a language outside $\mathrm{BPP}$), whose security is proven via black-box simulation, must use at least $\tilde\Omega(\log n)$ rounds of interaction. This result achieves a substantial improvement over previous lower bounds, and is the first bound to rule out the possibility of constant-round concurrent zero-knowledge when proven via black-box simulation. Furthermore, the bound is polynomially related to the number of rounds in the best known concurrent zero-knowledge protocol for languages in $\mathrm{NP}$.
- **2001, EPRINT.** Factoring integers is the most established problem on which cryptographic primitives are based. This work presents an efficient construction of {\em pseudorandom functions} whose security is based on the intractability of factoring. In particular, we are able to construct efficient length-preserving pseudorandom functions where each evaluation requires only a {\em constant} number of modular multiplications per output bit. This is substantially more efficient than any previous construction of pseudorandom functions based on factoring, and matches (up to a constant factor) the efficiency of the best known factoring-based {\em pseudorandom bit generators}.
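The efficiency benchmark the abstract refers to is easy to make concrete: a Blum-Blum-Shub-style factoring-based generator spends one modular squaring per output bit. The sketch below shows only that generator with toy parameters; the paper's contribution is a pseudorandom *function* family matching this cost, which is not reproduced here.

```python
N = 7 * 11                      # toy Blum integer (both primes are 3 mod 4)

def bbs_bits(seed, k, N):
    # One modular multiplication (a squaring) per output bit.
    x, out = seed, []
    for _ in range(k):
        x = x * x % N           # square...
        out.append(x & 1)       # ...and emit the least significant bit
    return out

assert bbs_bits(2, 8, N) == [0, 0, 1, 1, 0, 0, 1, 1]
```

With a cryptographically sized Blum integer (and a seed coprime to it), distinguishing these bits from random is as hard as factoring-related problems; the toy modulus above cycles quickly and is for illustration only.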
- **2000, CRYPTO**

#### Program Committees

- Crypto 2020
- TCC 2020
- TCC 2019
- TCC 2019
- Eurocrypt 2018
- TCC 2018
- Eurocrypt 2015
- Asiacrypt 2014
- TCC 2013
- PKC 2012
- TCC 2010
- PKC 2010
- Crypto 2008
- Eurocrypt 2007
- TCC 2005