TCC 2016-B: Program
Download Proceedings from Springer Link: Part I and Part II
MONDAY Building 4
14:00-18:00 Conference on-site registration
18:00-20:00 Reception
TUESDAY Building 7, Room 1
8:00-9:00 Conference on-site registration
Foundations I (Chair: John Steinberger)
9:00-9:20
Simulating Auxiliary Inputs, Revisited
Maciej Skorski
by allowing it to be only approximately correct. In folklore this problem is known as \emph{simulating auxiliary inputs}.
This idea of simulating auxiliary information turns out to be a very useful tool, finding applications in complexity theory, cryptography, pseudorandomness and zero-knowledge. In this paper we revisit this problem, achieving the following results:
\begin{enumerate}[(a)]
\item We present a novel boosting algorithm for constructing the simulator. This boosting proof is of independent interest, as it shows how to handle "negative mass" issues when constructing probability measures by shifting distinguishers in descent algorithms. Our technique essentially fixes the flaw in the TCC'14 paper "How to Fake Auxiliary Inputs".
\item The complexity of our simulator is better than in previous works, including results derived from the uniform min-max theorem due to Vadhan and Zheng. To achieve $(s,\epsilon)$-indistinguishability we need the complexity $O\left(s\cdot 2^{5\ell}\epsilon^{-2}\right)$ in time/circuit size, which improves previous bounds by a factor of $\epsilon^{-2}$.
In particular, we get meaningful provable security for the EUROCRYPT'09 leakage-resilient stream cipher instantiated with a standard 256-bit block cipher, like $\mathsf{AES256}$.
\end{enumerate}
Our boosting technique utilizes a two-step approach. In the first step we shift the current result (as in gradient or sub-gradient descent algorithms), and in a separate step we fix the biggest violation of the non-negative mass constraint (if applicable).
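For intuition, here is a minimal numerical sketch (in Python) of the shift-then-fix update described above; the domain, the distinguisher and the step size are illustrative toy choices, and this is not the simulator constructed in the paper.

# Toy sketch of the two-step update: shift the current measure along a
# distinguisher (gradient-style step), then repair the largest violation
# of the non-negativity constraint and renormalise the total mass to 1.
def shift_and_fix(measure, distinguisher, step):
    # Step 1: gradient-style shift along the distinguisher.
    shifted = [p + step * d for p, d in zip(measure, distinguisher)]
    # Step 2: fix the biggest non-negativity violation (if any) by
    # spreading the deficit over the points that still carry positive mass.
    worst = min(range(len(shifted)), key=lambda i: shifted[i])
    if shifted[worst] < 0:
        deficit = -shifted[worst]
        shifted[worst] = 0.0
        positive = [i for i, q in enumerate(shifted) if q > 0]
        for i in positive:
            shifted[i] -= deficit / len(positive)
    total = sum(shifted)
    return [q / total for q in shifted]

# Example: uniform starting measure, a +/-1 distinguisher, small step size.
measure = [0.25, 0.25, 0.25, 0.25]
distinguisher = [1.0, -1.0, 1.0, -1.0]
print(shift_and_fix(measure, distinguisher, step=0.1))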
9:20-9:40
Fast Pseudorandom Functions Based on Expander Graphs
Benny Applebaum,
Pavel Raykov
Our proofs are based on a new search-to-decision reduction for expander-based functions. This extends a previous reduction of the first author (STOC 2012) which was applicable for the special case of \emph{random} local functions. Additionally, we present a new family of highly efficient hash functions whose output on exponentially many inputs jointly forms (with high probability) a good expander graph. These hash functions are based on the techniques of Miles and Viola (Crypto 2012). Although some of our reductions provide only relatively weak security guarantees, we believe that they yield a novel approach for constructing PRFs, and therefore enrich the study of pseudorandomness.
9:40-10:00
3-Message Zero Knowledge Against Human Ignorance
Nir Bitansky,
Zvika Brakerski,
Yael Kalai,
Omer Paneth,
Vinod Vaikuntanathan
It is well established that two-message zero-knowledge protocols for NP do not exist, and that four-message
zero-knowledge arguments exist under the minimal assumption of one-way functions. Resolving the precise round
complexity of zero-knowledge has been an outstanding open problem for far too long.
In this work, we present a three-message zero-knowledge argument system with soundness against uniform
polynomial-time cheating provers. The main component in our construction is the recent delegation protocol for
RAM computations (Kalai and Paneth, TCC 2016B and Brakerski, Holmgren and Kalai, ePrint 2016). Concretely, we rely
on a three-message variant of their protocol based on {\em key-less} collision-resistant hash functions
secure against uniform adversaries as well as other standard primitives.
More generally, beyond uniform provers, our protocol provides a natural and meaningful security guarantee
against real-world adversaries, which we formalize following Rogaway's ``human-ignorance'' approach (VIETCRYPT
2006): in a nutshell, we give an explicit uniform reduction from any adversary breaking the soundness of our
protocol to finding collisions in the underlying hash function.
10:00-10:30 BREAK
Unconditional Security I (Chair: Andrej Bogdanov)
10:30-10:50
Pseudoentropy: Lower-Bounds for Chain Rules and Transformations
Krzysztof Pietrzak,
Maciej Skorski
leakage-resilient cryptography, deterministic encryption or memory delegation.
The two main types of results which make computational notions so useful are
(1) chain rules, which quantify by how much the computational entropy of a variable decreases if conditioned on some other variable, and (2) transformations, which quantify to what extent one type of entropy implies another.
Such chain rules and transformations typically lose a significant amount in the
quality of the entropy, and are the reason why applying these results yields rather weak
quantitative security bounds. In this paper we prove, for the first time, lower bounds in this context, showing that existing
results for transformations are, unfortunately, basically optimal for non-adaptive black-box reductions (and it is hard
to imagine how non-black-box reductions or adaptivity could be useful here).
A variable $X$ has $k$ bits of HILL entropy of quality $(\epsilon,s)$ if there exists a variable $Y$ with
$k$ bits min-entropy which cannot be distinguished from $X$ with advantage $\epsilon$ by
distinguishing circuits of size $s$. A weaker notion
is Metric entropy, where we switch quantifiers, and only require that
for every distinguisher of size $s$, such a $Y$ exists.
%For Metric entropy, we further distinguish between a notion that considers probabilistic or only weaker deterministic distinguishers.
We first describe our result concerning transformations. By definition, HILL implies Metric without any loss in quality.
Metric entropy often comes up in applications, but must be transformed to HILL for meaningful security guarantees.
The best known result states that if a variable $X$ has $k$ bits of Metric entropy of quality $(\epsilon,s)$, then
it has $k$ bits of HILL with quality $(2\epsilon,s\cdot\epsilon^2)$.
We show that this loss of a factor $\Omega(\epsilon^{-2})$ in circuit size is necessary. In fact, we show the stronger result that this loss is already necessary when transforming so-called deterministic real-valued Metric entropy to randomised boolean Metric entropy (both these variants of Metric entropy are implied by HILL without loss in quality).
The chain rule for HILL entropy states that if $X$ has $k$ bits of HILL entropy of quality $(\epsilon,s)$, then for any
variable $Z$ of length $m$, $X$ conditioned on $Z$ has $k-m$ bits of HILL entropy with quality $(\epsilon,s\cdot \epsilon^2/ 2^{m})$. We show that a loss of $\Omega(2^m/\epsilon)$ in circuit size is necessary here.
Note that this still leaves a gap of $\epsilon$ between the known bound and our lower bound.
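For reference, the definitions and bounds discussed above can be summarized in one display (this only restates the quantities from the abstract):
\[
\begin{aligned}
&\text{HILL: } \exists Y\;\forall D_s, \qquad \text{Metric: } \forall D_s\;\exists Y,\\
&\text{Transformation: } k \text{ bits Metric } (\epsilon,s) \Rightarrow k \text{ bits HILL } (2\epsilon,\, s\cdot\epsilon^{2}), \quad \text{loss } \Omega(\epsilon^{-2}) \text{ necessary},\\
&\text{Chain rule: } k \text{ bits HILL } (\epsilon,s) \Rightarrow k-m \text{ bits HILL } (\epsilon,\, s\cdot\epsilon^{2}/2^{m}) \text{ for } |Z|=m, \quad \text{loss } \Omega(2^{m}/\epsilon) \text{ necessary}.
\end{aligned}
\]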
10:50-11:10
Oblivious Transfer from Any Non-Trivial Elastic Noisy Channel via Secret Key Agreement
Ignacio Cascudo,
Ivan Damgård,
Felipe Lacerda,
Samuel Ranellucci
non-trivial elastic channel (i.e., $0<\gamma<\delta<\frac{1}{2}$) we can
implement oblivious transfer with information-theoretic security. This was
previously (Khurana et al., Eurocrypt 2016) only known for a subset of these
parameters. Our technique relies on a new way to exploit protocols for
information-theoretic key agreement from noisy channels. We also show that
information-theoretically secure commitments where the receiver commits follow
from any non-trivial elastic channel.
11:10-11:30
Simultaneous Secrecy and Reliability Amplification for a General Channel Model
Russell Impagliazzo,
Ragesh Jaiswal,
Valentine Kabanets,
Bruce M. Kapron,
Valerie King,
Stefano Tessaro
Even in the restricted model, we require that for the original channel, the failure chance for the attacker must be a factor $c$ more than that for the intended receiver. We show that for any $c > 4 $, there is a one-way protocol (where the sender sends information to the receiver only) which achieves simultaneous secrecy and reliability. From results of Holenstein and Renner (\emph{CRYPTO'05}), there are no such one-way protocols for $c < 2$. On the other hand, we also show that for $c > 1.5$, there are two-way protocols that achieve simultaneous secrecy and reliability.
We propose using similar models to address other questions in the theory of cryptography, such as using noisy channels for secret agreement, trade-offs between reliability and secrecy, and the equivalence of various notions of oblivious channels and secure computation.
Test of Time Award (Chair: Adam Smith)
11:30-12:45
From Indifferentiability to Constructive Cryptography (and Back)
Award paper: "Indifferentiability, Impossibility Results on Reductions, and Applications to the Random Oracle Methodology" by Ueli Maurer, Renato Renner, and Clemens Holenstein (TCC 2004).
Speaker: Ueli Maurer
indistinguishability, was proposed in 2004 to provide a simplified
and generalized explanation of impossibility results like the
non-instantiability of random oracles by hash functions due to
Canetti, Goldreich, and Halevi (STOC 1998). But indifferentiability
is actually a constructive notion, leading to possibility
results. For example, Coron {\em et al.} (Crypto 2005) argued that the
soundness of the construction $C(f)$ of a hash function from a
compression function $f$ can be demonstrated by proving that $C(R)$
is indifferentiable from a random oracle if $R$ is an ideal random
compression function.
The purpose of this short paper is to describe how the
indifferentiability notion was a precursor to the theory of
constructive cryptography and thereby to provide a simplified and
generalized treatment of indifferentiability as a special type of
constructive statement.
12:45-14:15 LUNCH Friendship Palace
Foundations II (Chair: Alessandro Chiesa)
14:15-14:35
On the (In)security of SNARKs in the Presence of Oracles
Dario Fiore,
Anca Nitulescu
First, we formalize the question of extraction in the presence of oracles by proposing a suitable proof of knowledge definition for this setting. We call SNARKs satisfying this definition O-SNARKs. Second, we show how to use O-SNARKs to obtain formal and intuitive security proofs for three applications (homomorphic signatures, succinct functional signatures, and SNARKs on authenticated data) where we recognize an issue while doing the proof under the standard proof of knowledge definition of SNARKs. Third, we study whether O-SNARKs exist, providing both negative and positive results. On the negative side, we show that, assuming one way functions, there do not exist O-SNARKs in the standard model for every signing oracle family (and thus for general oracle families as well). On the positive side, we show that when considering signature schemes with appropriate restrictions on the message length O-SNARKs for the corresponding signing oracles exist, based on classical SNARKs and assuming extraction with respect to specific distributions of auxiliary input.
14:35-14:55
Leakage Resilient One-Way Functions: The Auxiliary-Input Setting
Ilan Komargodski
Dodis et al. (FOCS '10) formalized and constructed leakage resilient one-way functions. These are one-way functions f such that given a random image f(x) and leakage g(x) it is still hard to invert f(x). Based on any one-way function, Dodis et al. constructed such a one-way function that is leakage resilient assuming that an attacker can leak any lossy function g of the input.
In this work we consider the problem of constructing leakage resilient one-way functions that are secure with respect to arbitrary computationally hiding leakage (a.k.a auxiliary-input). We consider both types of leakage --- selective and adaptive --- and prove various possibility and impossibility results.
On the negative side, we show that if the leakage is an adaptively-chosen arbitrary one-way function, then it is impossible to construct leakage resilient one-way functions. The latter is proved both in the random oracle model (without any further assumptions) and in the standard model based on a strong vector-variant of DDH. On the positive side, we observe that when the leakage is chosen ahead of time, there are leakage resilient one-way functions based on a variety of assumptions.
14:55-15:15
The GGM Function Family is Weakly One-Way
Aloni Cohen,
Saleet Klein
Namely, any efficient algorithm fails to invert GGM with probability at least $1/n^{2+\epsilon}$ -- \emph{even when given the secret key}.
Additionally, we state natural conditions under which the GGM family is strongly one-way.
15:15-15:45 BREAK
Foundations of Multi-Party Protocol (Chair: Alon Rosen)
15:45-16:05
Almost-Optimally Fair Multiparty Coin-Tossing with Nearly Three-Quarters Malicious
Bar Alon,
Eran Omri
In a surprising result,
Moran, Naor, and Segev [TCC 2009] constructed an $r$-round two-party $O(1/r)$-fair coin-tossing protocol, i.e., an optimally fair protocol.
Beimel, Omri, and Orlov [Crypto 2010] extended the results of Moran et al.~to the {\em multiparty setting} where strictly fewer than 2/3 of the parties are corrupted. They constructed a $2^{2^k}/r$-fair $r$-round $m$-party protocol, tolerating up to $t=\frac{m+k}{2}$ corrupted parties.
Recently, in a breakthrough result, Haitner and Tsfadia [STOC 2014] constructed an $O(\log^3(r)/r)$-fair (almost optimal) three-party coin-tossing protocol. Their work brings forth a combination of novel techniques for coping with the difficulties of constructing fair coin-tossing protocols. Still, the best coin-tossing protocols for the case where more than 2/3 of the parties may be corrupted (and even when $t=2m/3$, where $m>3$) were $\Theta(1/\sqrt{r})$-fair.
We construct an $O(\log^3(r)/r)$-fair $m$-party coin-tossing protocol, tolerating up to $t$ corrupted parties, whenever $m$ is constant and $t<3m/4$.
16:05-16:25
Binary AMD Circuits from Secure Multiparty Computation
Daniel Genkin,
Yuval Ishai,
Mor Weiss
Genkin et al. (STOC 2014, Crypto 2015) introduced AMD circuits as a means for protecting MPC protocols against active attacks, and showed that every arithmetic circuit C over F can be transformed into an equivalent AMD circuit of size O(|C|) with O(1/|F|) simulation error. However, for the case of the binary field F=F_2, their constructions relied on a tamper-proof output decoder and could only realize a weaker notion of security.
We obtain the first constructions of fully secure binary AMD circuits. Given a boolean circuit C and a statistical security parameter s, we construct an equivalent binary AMD circuit C' of size |C|*polylog(|C|,s) (ignoring lower order additive terms) with 2^-s simulation error. That is, the effect of toggling an arbitrary subset of wires can be simulated by toggling only input and output wires.
Our construction combines in a general way two types of ``simple'' honest-majority MPC protocols: protocols that only offer security against passive adversaries, and protocols that only offer correctness against active adversaries. As a corollary, we get a conceptually new technique for constructing active-secure two-party protocols in the OT-hybrid model, and reduce the open question of obtaining such protocols with constant computational overhead to a similar question in these simpler MPC models.
16:25-16:45
Composable Security in the Tamper-Proof Hardware Model under Minimal Complexity
Carmit Hazay,
Antigoni Polychroniadou,
Muthuramakrishnan Venkitasubramaniam
(1) We construct secure Two-Party Computation (2PC) protocols for general functionalities with optimal round complexity and computational assumptions using stateless tokens. More precisely, we show how to realize arbitrary functionalities in the two-party setting with GUC security in two rounds under the minimal assumption of One-Way Functions (OWFs). Moreover, our construction relies on the underlying function in a black-box way. As a corollary, we obtain feasibility of Multi-Party Computation (MPC) with GUC-security under the minimal assumption of OWFs.
As an independent contribution, we identify an issue with a claim in a previous work by Goyal, Ishai, Sahai, Venkatesan and Wadia in TCC 2010 regarding the feasibility of UC-secure computation with stateless tokens assuming collision-resistant hash-functions (and the extension based only on one-way functions).
(2) We then construct a 3-round MPC protocol to securely realize arbitrary functionalities with GUC-security starting from any semi-honest secure MPC protocol. For this construction, we require the so-called one-many commit-and-prove primitive introduced in the original work of Canetti, Lindell, Ostrovsky and Sahai in STOC 2002 that is round-efficient and black-box in the underlying commitment. Using specially designed ``input-delayed'' protocols we realize this primitive (with a 3-round protocol in our framework) using stateless tokens and one-way functions (where the underlying one-way function is used in a black-box way).
16:45-17:05
Composable Adaptive Secure Protocols without Setup under Polytime Assumptions
Carmit Hazay,
Muthuramakrishnan Venkitasubramaniam
Next, we revisit the zero-one law for two-party secure function evaluation initiated by the work of Maji, Prabhakaran and Rosulek (CRYPTO 2010). According to this law, every two-party functionality is either trivial (meaning, such functionalities can be reduced to any other functionality) or complete (meaning, any other functionality can be reduced to these functionalities) in the Universal Composability (UC) framework. As our second contribution, assuming the existence of a simulatable public-key encryption scheme, we establish a zero-one law in the adaptive setting. Our result implies that every two-party non-reactive functionality is either trivial or complete in the UC framework in the presence of adaptive, malicious adversaries.
17:05-17:25
Adaptive Security of Yao's Garbled Circuits
Zahra Jafargholi,
Daniel Wichs
A recent work of Hemenway et al. (CRYPTO '16) modifies Yao's construction and shows that the resulting scheme is adaptively secure. This is done by encrypting the garbled circuit from Yao's construction with a special type of ``somewhere equivocal encryption'' and giving the key together with the garbled input. The efficiency of the scheme and the security loss of the reduction is captured by a certain pebbling game over the circuit.
In this work we prove that Yao's construction itself is already adaptively secure, where the security loss can be captured by the same pebbling game. For example, we show that for circuits of depth $d$, the security loss of our reduction is $2^{O(d)}$, meaning that Yao's construction is adaptively secure for NC1 circuits without requiring complexity leveraging.
Our technique is inspired by the ``nested hybrids'' of Fuchsbauer et al. (Asiacrypt '14, CRYPTO '15) and relies on a careful sequence of hybrids where each hybrid involves some limited guessing about the adversary's adaptive choices. Although it doesn't match the parameters achieved by Hemenway et al. in their full generality, the main advantage of our work is to prove the security of Yao's construction as is, without any additional encryption layer.
18:00-20:30 Conference Banquet Friendship Palace
WEDNESDAY Building 7, Room 1
Delegation and IP (Chair: Muthuramakrishnan Venkitasubramaniam)
9:00-9:25
Delegating RAM Computations with Adaptive Soundness and Privacy
Prabhanjan Ananth,
Yu-Chi Chen,
Kai-Min Chung,
Huijia Lin,
Wei-Kai Lin
databases. A user wishes to delegate a sequence of computations over a
database to a server, where each computation may read and modify the database
and the modifications persist between computations. Delegating RAM
computations is important as it has the distinct feature that the run-time of
computations may be sub-linear in the size of the database.
We present the first RAM delegation scheme that provides both soundness and
privacy guarantees in the adaptive setting, where the sequence of delegated
RAM programs is chosen adaptively, depending potentially on the encodings of
the database and previously chosen programs. Prior works either achieved only
adaptive soundness without privacy [Kalai and Paneth, ePrint'15], or only
security in the selective setting where all RAM programs are chosen statically
[Chen et al., ITCS'16; Canetti and Holmgren, ITCS'16].
Our scheme assumes the existence of indistinguishability obfuscation (iO) for
circuits and the decisional Diffie-Hellman (DDH) assumption. However, our
techniques are quite general and in particular, might be applicable even in
settings where iO is not used. We provide a ``security lifting technique''
that ``lifts'' any proof of selective security satisfying certain special
properties into a proof of adaptive security, for arbitrary cryptographic
schemes. We then apply this technique to the delegation scheme of Chen et al.
and its selective security proof, obtaining that their scheme is essentially
already adaptively secure. Because of the general approach, we can also easily
extend to delegating parallel RAM (PRAM) computations. We believe that the
security lifting technique can potentially find other applications and is of
independent interest.
JOINT SLOT WITH
Adaptive Succinct Garbled RAM, or How To Delegate Your Database
Ran Canetti,
Yilei Chen,
Justin Holmgren,
Mariana Raykova
As an immediate application, we give the first scheme for efficiently outsourcing a large database and computations on the database to an untrusted server, then delegating computations on this database, where these computations may update the database.
We also define and use a new primitive of independent interest, called adaptive accumulators. The primitive extends the positional accumulators of Koppula et al [STOC 2015] and somewhere statistical binding hashing of Hubacek and Wichs [ITCS 2015] to an adaptive setting.
9:25-9:45
Interactive Oracle Proofs
Eli Ben-Sasson,
Alessandro Chiesa,
Nicholas Spooner
We offer two main technical contributions. First, we give a compiler that maps any public-coin IOP into a non-interactive proof in the random oracle model. We prove that the soundness of the resulting proof is tightly characterized by the soundness of the IOP against *state restoration attacks*, a class of rewinding attacks on the IOP verifier that is reminiscent of, but incomparable to, resetting attacks.
Second, we study the notion of state-restoration soundness of an IOP: we prove tight upper and lower bounds in terms of the IOP's (standard) soundness and round complexity; and describe a simple adversarial strategy that is optimal, in expectation, across all state restoration attacks.
Our compiler can be viewed as a generalization of the Fiat--Shamir paradigm for public-coin IPs [FS86], and of the ``CS proof'' constructions of [Mic94] and [Val08] for PCPs. Our analysis of the compiler gives, in particular, a unified understanding of these constructions, and also motivates the study of state restoration attacks, not only for IOPs, but also for IPs and PCPs.
When applied to known IOP constructions, our compiler implies, e.g., blackbox unconditional ZK proofs in the random oracle model with quasilinear prover and polylogarithmic verifier, improving on a result of [IMSX15].
9:45-10:05
Delegating RAM Computations
Yael Kalai,
Omer Paneth
We construct a two-message protocol for delegating RAM computations to an untrusted cloud. In our protocol, the user saves a short digest of the delegated data. For every delegated computation, the cloud returns, in addition to the computation's output, the digest of the modified data, and a proof that the output and digest were computed correctly. When delegating a T-time RAM computation M with security parameter k, the cloud runs in time Poly(T,k) and the user in time Poly(|M|, log(T), k).
Our protocol is secure assuming super-polynomial hardness of the Learning with Error (LWE) assumption. Security holds even when the delegated computations are chosen adaptively as a function of the data and output of previous computations.
We note that RAM delegation schemes are an improved variant of memory delegation schemes [Chung et al. CRYPTO 2011]. In memory delegation, computations are modeled as Turing machines, and therefore, the cloud's work always grows with the size of the delegated data.
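To make the two-message interface concrete, here is a schematic Python sketch of the data flow only; digest, cloud_run and user_verify are placeholder names, the "proof" is a plain recomputation check rather than a succinct argument, and none of this reflects the actual construction in the paper.

import hashlib

def digest(db: bytes) -> bytes:
    # The user keeps only this short digest of the delegated data.
    return hashlib.sha256(db).digest()

def cloud_run(db: bytes, program):
    # The cloud runs the delegated RAM program, which may modify the data,
    # and returns (output, new digest, proof).  In this toy the "proof" is
    # simply the updated database, so succinctness is deliberately ignored.
    output, new_db = program(db)
    return output, digest(new_db), new_db

def user_verify(old_digest, output, new_digest, proof) -> bool:
    # Placeholder check: the real protocol verifies a short proof in time
    # Poly(|M|, log(T), k); here we just recompute the digest from the "proof".
    return hashlib.sha256(proof).digest() == new_digest

# Example: delegate "append a record and report the new length".
db = b"record1;"
d0 = digest(db)
out, d1, proof = cloud_run(db, lambda data: (len(data) + 8, data + b"record2;"))
assert user_verify(d0, out, d1, proof)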
10:05-10:35 BREAK
Differential Privacy (Chair: Martin Hirt)
10:35-10:55
Separating Computational and Statistical Differential Privacy in the Client-Server Model
Mark Bun,
Yi-Hsiu Chen,
Salil P. Vadhan
Our main result shows that, assuming the existence of sub-exponentially secure one-way functions and 2-message witness indistinguishable proofs (zaps) for NP, there is in fact a computational task in the client-server model that can be efficiently performed with CDP, but is infeasible to perform with information-theoretic differential privacy.
10:55-11:15
Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds
Mark Bun,
Thomas Steinke
11:15-11:35
Strong Hardness of Privacy from Weak Traitor Tracing
Lucas Kowalczyk,
Tal Malkin,
Jonathan Ullman,
Mark Zhandry
of statistical queries over a data universe X. A statistical query
on a dataset D in X^n asks "what fraction of the elements of D satisfy a given
predicate p on X?" Ignoring computational constraints, it is possible to accurately
answer exponentially many queries on an exponential size universe while satisfying
differential privacy (Blum et al., STOC'08). Dwork et al. (STOC'09) and Boneh and
Zhandry (CRYPTO'14) showed that if both Q and X are of polynomial size,
then there is an efficient differentially private algorithm that
accurately answers all the queries. They also proved that if Q and X are both
exponentially large, then under a plausible assumption, no efficient
algorithm exists.
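As a concrete illustration of answering a single statistical query under differential privacy, here is a minimal Python sketch of the standard Laplace mechanism (a textbook mechanism, not one of the algorithms from this paper): the fraction query changes by at most 1/n when one record changes, so Laplace noise of scale 1/(eps*n) gives eps-differential privacy.

import random

def answer_statistical_query(dataset, predicate, eps):
    n = len(dataset)
    true_fraction = sum(1 for x in dataset if predicate(x)) / n
    scale = 1.0 / (eps * n)
    # Laplace(0, scale) noise, sampled as the difference of two exponentials.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_fraction + noise

# Example: the fraction of even records in a dataset of 1000 integers.
print(answer_statistical_query(list(range(1000)), lambda x: x % 2 == 0, eps=0.5))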
We show that, under the same assumption,
if either the number of queries or the data universe is of
exponential size, then there is no differentially private algorithm
that answers all the queries.
Specifically, we prove that if one-way functions and
indistinguishability obfuscation exist, then:
---For every n, there is a family Q of O~(n^7) queries on a data universe X of size 2^d such that no poly(n,d) time differentially private algorithm takes a dataset D in X^n and outputs accurate answers to every query in Q.
---For every n, there is a family Q of 2^d queries on a data universe X of size O~(n^7) such that no poly(n,d) time differentially private algorithm takes a dataset D in X^n and outputs accurate answers to every query in Q.
In both cases, the result is nearly quantitatively tight, since there
is an efficient differentially private algorithm that answers
Omega~(n^2) queries on an exponential size data universe,
and one that answers exponentially many queries on a data universe of
size Omega~(n^2).
Our proofs build on the connection between hardness of
differential privacy and traitor-tracing schemes (Dwork et al.,
STOC'09; Ullman, STOC'13). We prove our hardness result for a
polynomial size query set (resp., data universe) by showing that they
follow from the existence of a special type of traitor-tracing scheme
with very short ciphertexts (resp., secret keys), but very weak
security guarantees, and then constructing such a scheme.
Invited Talk (Chair: Martin Hirt)
11:35-12:35
Through the Looking Glass: What Cryptography Should Do for Alice
Allison Bishop
12:35-14:00 LUNCH Friendship Palace
Public-Key Encryption I (Chair: Andrej Bogdanov)
14:00-14:20
Towards Non-Black-Box Separations of Public Key Encryption and One Way Function
Dana Dachman-Soled
(presented by Tal Malkin)
Specifically, we introduce the notion of BBN- reductions (similar to the BBNp reductions of Baecher et al. (ASIACRYPT, 2013)), in which the construction E accesses the underlying primitive in a black-box way, but wherein the universal reduction R receives the efficient code/circuit of the underlying primitive as input and is allowed oracle access to the adversary Adv. We additionally require that the functions describing the number of oracle queries made to Adv, and the success probability of R are independent of the run-time/circuit size of the underlying primitive. We prove that there is no non-adaptive, BBN- reduction from PKE to one way function, under the assumption that certain types of strong one way functions exist. Specifically, we assume that there exists a regular one way function f such that there is no Arthur-Merlin protocol proving that ``z not in Range(f)'', where soundness holds with high probability over ``no instances,'' y ~ f(U_n), and Arthur may receive polynomial-sized, non-uniform advice. This assumption is related to the average-case analogue of the widely believed assumption coNP \not\subseteq NP/poly.
14:20-14:40
Post-Quantum Security of the Fujisaki-Okamoto and OAEP Transforms
Ehsan Ebrahimi Targhi,
Dominique Unruh
ciphertext secure in the quantum random oracle model. Our scheme is a
combination of an asymmetric and a symmetric encryption scheme that are
secure in a weak sense. It is a slight modification of the Fujisaki-Okamoto
transform that is secure against classical adversaries. In addition, we
modify the OAEP-cryptosystem and prove its security in the quantum random
oracle model based on the existence of a partial-domain one-way
injective function secure against quantum adversaries.
Secret Sharing (Chair: Andrej Bogdanov)
14:40-15:00
Threshold Secret Sharing Requires a Linear Size Alphabet
Andrej Bogdanov,
Siyao Guo,
Ilan Komargodski
More generally, we show that for all 1<s<r<n, any ramp secret sharing scheme with secrecy threshold s and reconstruction threshold r requires share size log((r+1)/(r-s)).
As part of our analysis we formulate a simple game-theoretic relaxation of secret sharing for arbitrary access structures. We prove the optimality of our analysis for threshold secret sharing with respect to this method and point out a general limitation.
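For comparison with the lower bound above, here is a short Python sketch of the standard Shamir t-out-of-n construction (a classical scheme, not part of this paper), whose shares live in a prime field of size larger than n, i.e. an alphabet linear in the number of parties.

import random

P = 257  # a prime larger than the number of parties below

def share(secret, t, n, p=P):
    # Random polynomial of degree t-1 with constant term equal to the secret;
    # party i (1 <= i <= n) receives the share f(i).
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    return {i: sum(c * pow(i, j, p) for j, c in enumerate(coeffs)) % p
            for i in range(1, n + 1)}

def reconstruct(shares, p=P):
    # Lagrange interpolation at x = 0 from any t shares.
    secret = 0
    for i, y in shares.items():
        num, den = 1, 1
        for j in shares:
            if j != i:
                num = (num * -j) % p
                den = (den * (i - j)) % p
        secret = (secret + y * num * pow(den, p - 2, p)) % p
    return secret

shares = share(secret=42, t=3, n=5)
assert reconstruct({i: shares[i] for i in (1, 3, 5)}) == 42  # any 3 shares suffice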
15:00-15:20
How to Share a Secret, Infinitely
Ilan Komargodski,
Moni Naor,
Eylon Yogev
In this work we consider the case where the set of parties is not known in advance and could potentially be infinite. Our goal is to give the t-th party to arrive as small a share as possible, as a function of t. Our main result is such a scheme for the k-threshold access structure where the share size of party t is (k-1)*log t + poly(k)*o(log t). For k=2 we observe an equivalence to prefix codes and present matching upper and lower bounds of the form log t + loglog t + logloglog t + O(1). Finally, we show that for any access structure there exists such a secret sharing scheme with shares of size 2^{t-1}.
15:20-15:50 BREAK
New Models (Chair: Alon Rosen)
15:50-16:10
Designing Proof of Human-work Puzzles for Cryptocurrency and Beyond
Jeremiah Blocki,
Hong-Sheng Zhou
16:10-16:30
Access Control Encryption: Enforcing Information Flow with Cryptography
Ivan Damgård,
Helene Haagh,
Claudio Orlandi
Classical examples of security policies for information flow are the well-known Bell-Lapadula [BL73] or Biba [Bib75] models: in a nutshell, the Bell-Lapadula model assigns roles to every user in the system (e.g., public, secret and top-secret). A user's role specifies which messages the user is allowed to receive (i.e., the no read-up rule, meaning that users with public clearance should not be able to read messages marked as secret or top-secret) but also which messages the user is allowed to send (i.e., the no write-down rule, meaning that a malicious user with top-secret clearance should not be able to write messages marked as secret or public).
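As a toy illustration of these two rules (the policy only, not the ACE construction from the paper), consider the following Python check over ordered clearance levels:

LEVELS = {"public": 0, "secret": 1, "top-secret": 2}

def may_receive(user_level, message_level):
    # No read-up: a user may only read messages at or below its clearance.
    return LEVELS[user_level] >= LEVELS[message_level]

def may_send(user_level, message_level):
    # No write-down: a user may only write messages at or above its clearance.
    return LEVELS[user_level] <= LEVELS[message_level]

assert not may_receive("public", "secret")    # read-up is blocked
assert not may_send("top-secret", "public")   # write-down is blocked
assert may_send("public", "top-secret") and may_receive("top-secret", "public")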
To the best of our knowledge, no existing cryptographic primitive allows for even this simple form of access control, since no existing cryptographic primitive enforces any restriction on what kind of messages one should be able to encrypt.
Our contributions are:
- Introducing and formally defining access control encryption (ACE);
- A construction of ACE with complexity linear in the number of the roles based on classic number theoretic assumptions (DDH, Paillier);
- A construction of ACE with complexity polylogarithmic in the number of roles based on recent results on cryptographic obfuscation.
Obfuscation and Multilinear Maps (Chair: Alon Rosen)
16:30-16:50
Secure Obfuscation in a Weak Multilinear Map Model
Sanjam Garg,
Eric Miles,
Pratyay Mukherjee,
Amit Sahai,
Akshayaram Srinivasan,
Mark Zhandry
However, this model is inadequate: there have been several attacks on multilinear maps that exploit extra information revealed by the zero-test procedure. In particular, Miles, Sahai and Zhandry [Crypto'16] recently gave a polynomial-time attack on several iO candidates when instantiated with the multilinear maps of Garg, Gentry, and Halevi [Eurocrypt'13], and also proposed a new "weak multilinear map model" that captures all known polynomial-time attacks on GGH13.
In this work, we give a new iO candidate which can be seen as a small modification or generalization of the original candidate of Garg, Gentry, Halevi, Raykova, Sahai, and Waters [FOCS'13]. We prove its security in the weak multilinear map model, thus giving the first iO candidate that is provably secure against all known polynomial-time attacks on GGH13. The proof of security relies on a new assumption about the hardness of computing annihilating polynomials, and we show that this assumption is implied by the existence of pseudorandom functions in NC^1.
16:50-17:10
Virtual Grey-Boxes Beyond Obfuscation: A Statistical Security Notion for Cryptographic Agents
Shashank Agrawal,
Manoj Prabhakaran,
Ching-Hua Yu
Our main result is a characterization of VGB security, in all these cases, in terms of an indistinguishability-preserving notion of security, called 'rtestfamily-sINDPRE security', formulated using an extension of the recently proposed Cryptographic Agents framework (Agrawal et al., 2015). We further show that this definition is equivalent to an indistinguishability based security definition that is restricted to 'concentrated' distributions (wherein the outcome of any computation on encrypted data is essentially known ahead of the computation).
A result of Bitansky et al. (2014), which showed that VGB obfuscation is equivalent to strong indistinguishability obfuscation (SIO), is obtained by specializing our result to obfuscation. Our proof, while sharing various elements from the proof of Bitansky et al., is simpler and significantly more general, as it uses rtestfamily-sINDPRE security as an intermediate notion. Our characterization also shows that the semantic security for graded encoding schemes (Pass et al. 2014) is in fact an instance of this same definition.
We also present a composition theorem for rtestfamily-sINDPRE security. We can then recover the result of Bitansky et al. (2014) regarding the existence of VGB obfuscation for all NC1 circuits, simply by instantiating this composition theorem with a reduction from obfuscation of NC1 circuits to graded encoding schemes (Barak et al., 2014) and the assumption that there exists an rtestfamily-sINDPRE secure scheme for the graded encoding scheme (Pass et al. 2014).
19:00-19:30 Business Meeting
19:30-21:30 Rump Session
THURSDAY Building 7, Room 1
Round Complexity & Efficiency of Multi-Party Computation (Chair: Alessandro Chiesa)
9:00-9:20
Efficient Secure Multiparty Computation with Identifiable Abort
Carsten Baum,
Emmanuela Orsini,
Peter Scholl
In this work, we present the first efficient MPC protocol with identifiable abort. Our protocol has an information-theoretic online phase with message complexity O(n^2) for each secure multiplication, similar to the BDOZ protocol (Bendlin et al., Eurocrypt 2011), and a factor of O(\kappa) lower than the identifiable abort protocol of Ishai et al. (Crypto 2014). A key component of our protocol is a linearly homomorphic information-theoretic signature scheme, for which we provide the first definitions and construction based on a previous non-homomorphic scheme. We then show how to implement the preprocessing for our protocol using somewhat homomorphic encryption, similarly to the SPDZ protocol (Damg{\aa}rd et al., Crypto 2012) and other recent works with applicable efficiency improvements.
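To illustrate what "linearly homomorphic" means for an information-theoretic authentication primitive, here is a toy one-time MAC over a prime field in the BDOZ style (a drastic simplification for illustration, not the signature scheme constructed in the paper): tags of two messages add up to a valid tag of their sum under the combined key.

import random

P = 2**61 - 1  # prime modulus of the toy field

def mac(m, key):
    alpha, beta = key
    return (alpha * m + beta) % P

def verify(m, tag, key):
    alpha, beta = key
    return tag == (alpha * m + beta) % P

alpha = random.randrange(1, P)          # key component shared across messages
k1 = (alpha, random.randrange(P))       # per-message keys differ only in beta
k2 = (alpha, random.randrange(P))
m1, m2 = 11, 29
t1, t2 = mac(m1, k1), mac(m2, k2)
combined_key = (alpha, (k1[1] + k2[1]) % P)
assert verify((m1 + m2) % P, (t1 + t2) % P, combined_key)  # linear homomorphism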
9:20-9:45
Secure Multiparty RAM Computation in Constant Rounds
Sanjam Garg,
Divya Gupta,
Peihan Miao,
Omkant Pandey
In this work, we consider the multi-party case and obtain the following results:
1. \emph{Semi-honest model}: We present a constant-round black-box secure computation protocol for RAM programs. This protocol is obtained by building on the new black-box garbled RAM construction by Garg, Lu, and Ostrovsky [FOCS 2015], and the constant-round secure computation protocol for circuits of Beaver, Micali, and Rogaway [STOC 1990]. This construction allows execution of multiple programs on the same persistent database.
2. \emph{Malicious model}: Next, we show how to extend our semi-honest results to the malicious setting, while ensuring that the new protocol is still constant-round and black-box in nature.
JOINT SLOT WITH
Constant-Round Maliciously Secure Two-Party Computation in the RAM Model
Carmit Hazay,
Avishay Yanai
9:45-10:05
More Efficient Constant-Round Multi-Party Computation from BMR and SHE
Yehuda Lindell,
Nigel P. Smart,
Eduardo Soria-Vazquez
10:05-10:25
Cross&Clean: Amortized Garbled Circuits With Constant Overhead
Jesper Buus Nielsen,
Claudio Orlandi
In this paper we ask a simple question, namely "how many garbled circuits are needed to achieve active security?" and we propose a novel protocol which achieves active security while using only a constant number of garbled circuits per evaluation in the amortized setting.
10:25-10:55 BREAK
Unconditional Security II (Chair: Adam Smith)
10:55-11:15
Proof of Space from Stacked Expanders
Ling Ren,
Srinivas Devadas
In PoS, a prover proves to a verifier that it has dedicated some specified amount of space.
A closely related notion is memory-hard functions (MHF), functions that require a lot of memory/space to compute.
While making promising progress, existing PoS and MHF have several problems.
First, there are large gaps between the desired space-hardness and what can be proven.
Second, it has been pointed out that PoS and MHF should require a lot of space not just at some point, but throughout the entire computation/protocol;
few proposals considered this issue.
Third, the two existing PoS constructions are both based on a class of graphs called superconcentrators, which are either hard to construct or add a logarithmic factor overhead to efficiency.
In this paper, we construct PoS from stacked expander graphs.
Our constructions are simpler, more efficient and have tighter provable space-hardness than prior works.
Our results also apply to a recent MHF called Balloon hash.
We show Balloon hash has tighter space-hardness than previously believed and consistent space-hardness throughout its computation.
11:15-11:35
Perfectly Secure Message Transmission in Two Rounds
Gabriele Spini,
Gilles Zémor
In this paper, we focus on protocols that work in two transmission rounds for n= 2t+1. We break from previous work by following a conceptually simpler blueprint for achieving a PSMT protocol. We reduce the previously best-known communication complexity, i.e. the number of transmitted bits necessary to communicate a 1-bit secret, from O(n^3 log n) to O(n^2 log n). Our protocol also answers a question raised by Kurosawa and Suzuki and hitherto left open: their protocol reaches optimal transmission rate for a secret of size O(n^2 log n) bits, and the authors raised the problem of lowering this threshold. The present solution does this for a secret of O(n log n) bits.
Invited Talk (Chair: Adam Smith)
11:35-12:35
Secure Hardware and Cryptography: Contrasts, Challenges and Opportunities
Srini Devadas
12:35-14:00 LUNCH Friendship Palace
Public-Key Encryption II (Chair: Muthuramakrishnan Venkitasubramaniam)
14:00-14:20
Standard Security Does Not Imply Indistinguishability Under Selective Opening
Dennis Hofheinz,
Vanishree Rao,
Daniel Wichs
(presented by Mor Weiss)
In this work, we resolve the above question in the negative and construct a highly contrived encryption scheme which is CPA (and even CCA) secure but is not IND-SOA secure. In fact, it is broken in a very obvious sense by a selective opening attack as follows. A random value is secret-shared via Shamir's scheme so that any $t$ out of $n$ shares reveal no information about the shared value. The $n$ shares are individually encrypted under a common public key and the $n$ resulting ciphertexts are given to the adversary who selectively chooses to see $t$ of the ciphertexts opened. Counter-intuitively, by the specific properties of our encryption scheme, this suffices for the adversary to completely recover the shared value. Our contrived scheme relies on strong assumptions: public-coin differing inputs obfuscation and a certain type of correlation intractable hash functions.
We also extend our negative result to the setting of SOA attacks with key opening (IND-SOA-K) where the adversary is given a collection of ciphertexts under different public keys and selectively chooses to see some subset of the secret keys.
14:20-14:40
Public-Key Encryption with Simulation-Based Selective-Opening Security and Compact Ciphertexts
Dennis Hofheinz,
Tibor Jager,
Andy Rupp
Two main flavors of SO security notions exist: indistinguishability-based (IND-SO) and simulation-based (SIM-SO) ones. Whereas IND-SO security allows for simple and efficient instantiations, its usefulness in larger constructions is somewhat limited, since it is restricted to special types of plaintext distributions. On the other hand, SIM-SO security does not suffer from this restriction, but turns out to be significantly harder to achieve. In fact, all known SIM-SO secure encryption schemes either require O(|m|) group elements in the ciphertext to encrypt |m|-bit plaintexts, or use specific algebraic properties available in the DCR setting. In this work, we present the first SIM-SO secure PKE schemes in the discrete-log setting with compact ciphertexts (whose size is O(1) group elements plus plaintext size). The SIM-SO security of our constructions can be based on, e.g., the k-linear assumption for any k.
Technically, our schemes extend previous IND-SO secure schemes by the property that simulated ciphertexts can be efficiently opened to arbitrary plaintexts. We do so by encrypting the plaintext in a bitwise fashion, but such that each encrypted bit leads only to a single ciphertext bit (plus O(1) group elements that can be shared across many bit encryptions). Our approach leads to rather large public keys (of O(|m|^2) group elements), but we also show how this public key size can be reduced (to O(|m|) group elements) in pairing-friendly groups.
14:40-15:00
Multi-Key FHE from LWE, Revisited
Chris Peikert,
Sina Shiehian
computation on data encrypted under a \emph{single} key.
L{\'o}pez-Alt, Tromer, and Vaikuntanathan (STOC 2012) proposed the
notion of \emph{multi-key} FHE, which allows homomorphic computation
on ciphertexts encrypted under different keys, and also gave a
construction based on a (somewhat nonstandard) assumption related to
NTRU. More recently, Clear and McGoldrick (CRYPTO 2015), followed by
Mukherjee and Wichs (EUROCRYPT 2016), proposed a multi-key FHE that
builds upon the LWE-based FHE of Gentry, Sahai, and Waters (CRYPTO
2013). However, unlike the original construction of L{\'o}pez-Alt
et al., these later LWE-based schemes have the somewhat undesirable
property of being ``single-hop for keys:'' all relevant keys must be
known at the start of the homomorphic computation, and the output
cannot be usefully combined with ciphertexts encrypted under other
keys (unless an expensive ``bootstrapping'' step is performed).
In this work we construct two multi-key FHE schemes, based on LWE
assumptions, which are \emph{multi-hop for keys}: the output of a
homomorphic computation on ciphertexts encrypted under a set of keys
can be used in further homomorphic computation involving
\emph{additional} keys, and so on. Moreover, incorporating
ciphertexts associated with new keys is a relatively efficient
``native'' operation akin to homomorphic multiplication, and does not
require bootstrapping (in contrast with all other LWE-based
solutions). Our systems also have smaller ciphertexts than the
previous LWE-based ones; in fact, ciphertexts in our second
construction are simply GSW ciphertexts with no auxiliary data.
15:00-15:30 BREAK
Attribute-Based Encryption (Chair: John Steinberger)
15:30-15:50
Deniable Attribute Based Encryption for Branching Programs from LWE
Daniel Apon,
Xiong Fan,
Feng-Hao Liu
In particular, from standard lattice assumptions, i.e. Learning with Errors (LWE), we have only flexibly deniable and non-negligible-advantage deniable public-key encryption schemes, whereas with the much stronger assumption of indistinguishability obfuscation, we can obtain at least fully sender-deniable PKE and computation. How to achieve deniability for other more advanced encryption schemes under standard assumptions remains an interesting open question.
In this work, we construct a flexibly bi-deniable Attribute-Based Encryption (ABE) scheme for all polynomial-size Branching Programs from LWE. Our techniques involve new ways of manipulating Gaussian noise that may be of independent interest, and lead to a significantly sharper analysis of noise growth in Dual Regev type encryption schemes. We hope these ideas give insight into achieving deniability and related properties for further, advanced cryptographic systems from lattice assumptions.
15:50-16:10
Targeted Homomorphic Attribute-Based Encryption
Zvika Brakerski,
David Cash,
Rotem Tsabary,
Hoeteck Wee
We present an ABE scheme where homomorphic operations can be performed compactly across attributes. Of course, decrypting the resulting ciphertext needs to be done with a key respective to a policy $f$ with $f(x_i)=0$ for all attributes involved in the computation. In our scheme, the target policy $f$ needs to be known to the evaluator, we call this \emph{targeted homomorphism}. Our scheme is secure under the polynomial hardness of learning with errors (LWE) with sub-exponential modulus-to-noise ratio.
We present a second scheme where there needs not be a single target policy. Instead, the decryptor only needs a set of keys representing policies $f_j$ s.t.\ for any attribute $x_i$ there exists $f_j$ with $f_j(x_i)=0$. In this scheme, the ciphertext size grows (quadratically) with the size of the \emph{set of policies} (and is still independent of the number of inputs or attributes). Again, the target set of policies needs to be known at evaluation time. This latter scheme is secure in the random oracle model under the polynomial hardness of LWE with sub-exponential noise ratio.
16:10-16:30
Semi-Adaptive Security and Bundling Functionalities Made Generic and Easy
Rishab Goyal,
Venkata Koppula,
Brent Waters
We show how to generically transform any selectively secure ABE or FE scheme into one that is semi-adaptively secure, with the only additional assumption being public key encryption, which is already naturally included in almost any scheme of interest. Our technique utilizes a fairly simple application of garbled circuits where instead of encrypting directly, the encryptor creates a garbled circuit that takes as input the public parameters and outputs a ciphertext in the underlying selective scheme. Essentially, the encryption algorithm encrypts without knowing the ``real'' public parameters. This allows one to delay giving out the underlying selective parameters until a private key is issued, which connects semi-adaptive to selective security. The methods used to achieve this result suggest that the moral gap between selective and semi-adaptive security is in general much smaller than that between semi-adaptive and full security.
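The following Python fragment sketches only the data flow of this idea: a plain closure stands in for the garbled circuit (so no hiding is modelled), and SelectiveScheme is a hypothetical placeholder for the underlying selectively secure scheme; it merely shows that encryption can be completed before the "real" public parameters are fixed.

class SelectiveScheme:
    @staticmethod
    def encrypt(public_params, attribute, message):
        # Placeholder for the underlying selectively secure encryption.
        return ("ct", public_params, attribute, message)

def semi_adaptive_encrypt(attribute, message):
    # The encryptor does not yet know the real public parameters: it outputs
    # a (conceptually garbled) circuit mapping parameters to a ciphertext.
    return lambda public_params: SelectiveScheme.encrypt(public_params,
                                                         attribute, message)

# Later, when a private key is issued, the selective parameters are fixed
# and the deferred ciphertext can be completed.
deferred_ct = semi_adaptive_encrypt(attribute="x", message="m")
ciphertext = deferred_ct("pp-chosen-at-key-generation-time")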
Finally, we show how to extend the above idea to generically bundle a family of functionalities under one set of public parameters. For example, suppose we had an inner product predicate encryption scheme where the length of the vectors was specified at setup and therefore fixed to the public parameters. Using our transformation one could create a system where for a single set of public parameters the vector length is not apriori bounded, but instead is specified by the encryption algorithm. The resulting ciphertext would be compatible with any private key generated to work on the same input length.
Functional Encryption (Chair: John Steinberger)
16:30-16:50
From Cryptomania to Obfustopia through Secret-Key Functional Encryption
Nir Bitansky,
Ryo Nishimaki,
Alain PasselĂšgue,
Daniel Wichs
Indeed, most variants have been classified as belonging to either the former or the latter category. However, one mystery that has remained is the case of \emph{secret-key functional encryption} with an unbounded number of keys and ciphertexts. On the one hand, this primitive is not known to imply anything outside of minicrypt, the land of secret-key crypto, but on the other hand, we do not know how to construct it without the heavy hammers in obfustopia.
In this work, we show that (sub-exponentially secure) secret-key functional encryption is powerful enough to construct indistinguishability obfuscation if we additionally assume the existence of (sub-exponentially secure) standard public-key encryption. In other words, secret-key functional encryption provides a bridge from cryptomania to obfustopia.
On the technical side, our result relies on two main components. As our first contribution, we show how to use secret-key functional encryption to get ``exponentially-efficient indistinguishability obfuscation'' (XIO), a notion recently introduced by Lin et al. (PKC '16) as a relaxation of IO. Lin et al. show how to use XIO and the LWE assumption to build IO. As our second contribution, we show how to replace, in this result, the LWE assumption with any (standard) public-key encryption scheme.
Lastly, we ask whether secret-key functional encryption can be used to construct public-key encryption itself and therefore take us all the way from minicrypt to obfustopia. A result of Asharov and Segev (FOCS '15) shows that this is not the case under black-box constructions, even for exponentially secure functional encryption. We show, through a non-black-box construction, that sub-exponentially secure secret-key functional encryption indeed leads to public-key encryption. However, the resulting public-key encryption scheme is at most quasi-polynomially secure, which is insufficient to take us to obfustopia.
16:50-17:15
Single-Key to Multi-Key Functional Encryption with Polynomial Loss
Sanjam Garg,
Akshayaram Srinivasan
An important parameter in the security model for FE is the number of secret keys that adversary has access to. In this work, we give a transformation from a FE scheme for which the adversary gets access to a single secret key (with ciphertext size sub-linear in the circuit for which this secret key is issued) to one that is secure even if adversary gets access to an {unbounded} number of secret keys. A novel feature of our transformation is that its security proof incurs only a {\em polynomial} loss.
JOINT SLOT WITH
Compactness vs Collusion Resistance in Functional Encryption
Baiyu Li,
Daniele Micciancio
two functional
encryption (FE) schemes (supporting a bounded number of key queries)
into a new functional encryption scheme supporting
a larger number of key queries.
By using these constructions iteratively,
we transform any primitive FE scheme that supports a single
functional key query (from a sufficiently general class of functions)
and has certain weak compactness properties into a collusion-resistant
FE scheme with the same or slightly weaker compactness properties.
Together with previously known reductions, this
shows that the compact, weakly compact, collusion-resistant, and
weakly collusion-resistant versions of FE are all equivalent
under polynomial time reductions.
These are all FE variants known to imply the existence of indistinguishability
obfuscation, and were previously thought to offer slightly different avenues toward
the realization of obfuscation from general assumptions.
Our results show that they are indeed all equivalent, improving our
understanding of the minimal assumptions on
functional encryption required to instantiate
indistinguishability obfuscation.