International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

27 September 2017

Sanjam Garg, Mohammad Mahmoody, Ameer Mohammed
ePrint Report
Realizing indistinguishability obfuscation (IO) based on well-understood computational assumptions is an important open problem. Recently, realizing functional encryption (FE) has emerged as a promising direction towards that goal. This is because: (1) compact single-key FE (where the functional secret-key is of length double the ciphertext length) is known to imply IO [Ananth and Jain, CRYPTO 2015; Bitansky and Vaikuntanathan, FOCS 2015] and (2) several strong variants of single-key FE are known based on various standard computational assumptions.

In this work, we study when FE can be used for obtaining IO. We show that any single-key FE for function families with ``short'' enough outputs (specifically, outputs shorter than the ciphertext length by at least $\omega(n + k)$, where $n$ is the message length and $k$ is the security parameter) is insufficient for IO, even when non-black-box use of the underlying FE is allowed to some degree. Namely, our impossibility result holds even if we are allowed to plant FE sub-routines as gates inside the circuits for which functional secret-keys are issued (which is exactly how the known FE-to-IO constructions work).

Complementing our negative result, we show that our condition of ``short'' enough is almost tight. More specifically, we show that any compact single-key FE with functional secret-key output length strictly larger than the ciphertext length is sufficient for IO. Furthermore, we show that non-black-box use of the underlying FE is necessary for such a construction, by ruling out any fully black-box construction of IO from FE even with arbitrarily long output.
Prabhanjan Ananth, Abhishek Jain
ePrint Report
We revisit the exact round complexity of secure two-party computation. While four rounds are known to be sufficient for securely computing general functions that provide output to one party [Katz-Ostrovsky, CRYPTO'04], Goldreich-Krawczyk [SIAM J. Computing'96] proved that three rounds are insufficient for this task w.r.t. black-box simulation.

In this work, we study the feasibility of secure computation in three rounds using non-black-box simulation. Our main result is a three-round two-party computation protocol for general functions against adversaries with auxiliary inputs of bounded size. This result relies on a new two-round input-extraction protocol based on succinct randomized encodings.

We also provide a partial answer to the question of achieving security against non-uniform adversaries. Assuming sub-exponentially secure iO and one-way functions, we rule out three-round protocols that achieve polynomial simulation-based security against the output party and exponential indistinguishability-based security against the other party.
Reyhaneh Rabaninejad, Mahmoud Ahmadian Attari, Maryam Rajabzadeh Asaar, Mohammad Reza Aref
ePrint Report
As data sharing has become one of the most popular features offered by cloud storage services, designing public auditing mechanisms for the integrity of shared data stored at the cloud becomes much more important. Two unique problems arising in shared-data auditing mechanisms are preserving signer identity privacy and collusion-resistant revocation of cloud users. When the data stored at the cloud is shared among a group of users, different users may modify different data blocks; data blocks are therefore signed by different users, which leaks signer identities to the public verifier. Also, when a user is revoked from the group, the signatures generated by this user become invalid to the group and should be re-signed by the cloud server using re-signature keys. In addition, collusion between the cloud server, which possesses the re-signature keys, and the revoked user should leak no information about the private keys of other users. In this paper, by employing a collusion-resistant proxy re-signature scheme, we propose a public auditing mechanism for the integrity of shared data that simultaneously provides identity privacy and collusion-resistant user revocation. We also formally prove these properties based on an exact security definition and well-known hard problems in the random oracle model. To the best of our knowledge, this paper presents the first public auditing mechanism based on collusion-resistant proxy re-signatures. Moreover, our protocol efficiently supports large dynamic groups of users, batch verification of multiple auditing tasks, and fully dynamic data operations. Overhead analysis and experimental results demonstrate the excellent efficiency of our scheme in comparison to the state of the art.
Amos Beimel, Oriol Farr\`as, Yuval Mintz, Naty Peter
ePrint Report
A secret-sharing scheme realizes the forbidden graph access structure determined by a graph $G=(V,E)$ if a pair of vertices can reconstruct the secret if and only if the pair is an edge in $G$. Secret-sharing schemes for forbidden graph access structures of bipartite graphs are equivalent to conditional disclosure of secrets protocols, a primitive that is used to construct attribute-based encryption schemes.

We study the complexity of realizing a forbidden graph access structure by linear secret-sharing schemes. A secret-sharing scheme is linear if the reconstruction of the secret from the shares is a linear mapping. In many applications of secret-sharing, it is required that the scheme will be linear. We provide efficient constructions and lower bounds on the share size of linear secret-sharing schemes for sparse and dense graphs, closing the gap between upper and lower bounds: Given a sparse graph with $n$ vertices and at most $n^{1+\beta}$ edges, for some $ 0 \leq \beta < 1$, we construct a linear secret-sharing scheme realizing its forbidden graph access structure in which the total size of the shares is $\tilde{O} (n^{1+\beta/2})$. We provide an additional construction showing that every dense graph with $n$ vertices and at least $\binom{n}{2} - n^{1+\beta}$ edges can be realized by a linear secret-sharing scheme with the same total share size.

We provide lower bounds on the share size of linear secret-sharing schemes realizing forbidden graph access structures. We prove that for most forbidden graph access structures, the total share size of every linear secret-sharing scheme realizing these access structures is $\Omega(n^{3/2})$, which shows that the construction of Gay, Kerenidis, and Wee [CRYPTO 2015] is optimal. Furthermore, we show that for every $ 0 \leq \beta < 1$ there exist a graph with at most $n^{1+\beta}$ edges and a graph with at least $\binom{n}{2}-n^{1+\beta}$ edges, such that the total share size of every linear secret-sharing scheme realizing these forbidden graph access structures is $\Omega (n^{1+\beta/2})$. This shows that our constructions are optimal (up to poly-logarithmic factors).
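For readers new to forbidden graph access structures, the reconstruction requirement can be illustrated with a deliberately naive linear scheme: hand out fresh additive shares per edge. This Python sketch is not the paper's construction -- its total share size grows with the number of edges, which is exactly the cost the constructions above improve on:

```python
import secrets

MODULUS = 2**31 - 1  # toy prime modulus

def share_forbidden_graph(secret, edges, vertices):
    """Naive linear scheme: for each edge (u, v), give u a fresh random r
    and v the value (secret - r) mod p. A pair of vertices holds matching
    halves -- and so can reconstruct -- iff it is an edge of the graph."""
    shares = {v: {} for v in vertices}
    for (u, v) in edges:
        r = secrets.randbelow(MODULUS)
        shares[u][(u, v)] = r
        shares[v][(u, v)] = (secret - r) % MODULUS
    return shares

def reconstruct(shares, u, v):
    for e in ((u, v), (v, u)):
        if e in shares[u] and e in shares[v]:
            return (shares[u][e] + shares[v][e]) % MODULUS
    return None  # (u, v) is a forbidden pair: its shares are independent of the secret
```

Reconstruction is a sum of two shares, i.e., a linear mapping, so this is a (very inefficient) linear secret-sharing scheme in the sense used above.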
Changhai Ou, Degang Sun, Zhu Wang, Xinping Zhou
ePrint Report
An attacker or evaluator can detect more information leakage if he improves the Signal-to-Noise Ratio (SNR) of the power traces used in his tests. For this purpose, pre-processing steps such as de-noising and distribution-based trace biasing are used. However, the existing trace-biasing schemes cannot accurately capture the characteristics of power traces with high SNR, making them less than ideal for leakage detection. Moreover, if the SNR of the power traces is very low, it is very difficult to use the existing de-noising and trace-biasing schemes to enhance leakage detection. In this paper, a known-key pre-processing tool named Traces Linear Optimal Biasing (TLOB) is proposed, which performs very well even on power traces with very low SNR. It can accurately evaluate the noise of time samples and give a reliable optimal biasing of the traces. Experimental results show that TLOB significantly reduces the number of traces needed for detection; correlation coefficients in $\rho$-tests using TLOB approach 1.00, so the confidence of the tests is significantly improved. As far as we know, no pre-processing tool is more efficient than TLOB. TLOB is very simple and incurs only very limited time and memory overhead. We strongly recommend using it to pre-process traces in side-channel evaluations.
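As background, the first-order SNR that such pre-processing tries to improve is commonly estimated per time sample as the variance of the class means (signal) over the average within-class variance (noise), where classes are defined by the known-key intermediate value. A minimal sketch of that estimator, not the TLOB tool itself:

```python
from statistics import mean, pvariance

def snr_per_sample(traces, labels):
    """First-order SNR at each time sample. traces: equal-length lists of
    power measurements; labels: the known-key intermediate value behind
    each trace. Signal = variance over classes of the class means;
    noise = mean over classes of the within-class variance."""
    classes = {}
    for trace, label in zip(traces, labels):
        classes.setdefault(label, []).append(trace)
    snr = []
    for j in range(len(traces[0])):
        class_means = [mean(t[j] for t in ts) for ts in classes.values()]
        noise = mean(pvariance([t[j] for t in ts]) for ts in classes.values())
        signal = pvariance(class_means)
        snr.append(signal / noise if noise > 0 else float("inf"))
    return snr
```

A sample whose value depends on the intermediate value yields a high SNR; a sample carrying only noise yields an SNR near zero.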
Philip Lafrance, Alfred Menezes
ePrint Report
We identify a flaw in the security proof and a flaw in the concrete security analysis of the WOTS-PRF variant of the Winternitz one-time signature scheme, and discuss the implications to its concrete security.
Sandro Coretti, Yevgeniy Dodis, Siyao Guo, John Steinberger
ePrint Report
We revisit security proofs for various cryptographic primitives in the auxiliary-input random-oracle model (AI-ROM), in which an attacker $A$ can compute arbitrary $S$ bits of leakage about the random oracle $\mathcal O$ before attacking the system and then use additional $T$ oracle queries to $\mathcal O$ during the attack. This model has natural applications in settings where traditional random-oracle proofs are not useful: (a) security against non-uniform attackers; (b) security against preprocessing.

We obtain a number of new results about the AI-ROM:

- Unruh (CRYPTO '07) introduced the pre-sampling technique, which generically reduces security proofs in the AI-ROM to those in a much simpler $P$-bit-fixing random-oracle model (BF-ROM), where the attacker can arbitrarily fix the values of $\mathcal O$ on some $P$ coordinates, while the remaining coordinates are chosen at random. Unruh's security loss for this transformation is $\sqrt{ST/P}$. We improve this loss to the optimal value $O(ST/P)$, which implies nearly tight bounds for a variety of indistinguishability applications in the AI-ROM.

- While the basic pre-sampling technique cannot give tight bounds for unpredictability applications, we introduce a novel "multiplicative version" of pre-sampling, which allows one to dramatically reduce the size $P$ of the pre-sampled set to $P=O(ST)$ and yields nearly tight security bounds for a variety of unpredictability applications in the AI-ROM. Qualitatively, it validates Unruh's "polynomial pre-sampling conjecture"---disproved in general by Dodis et al. (EUROCRYPT '17)---for the special case of unpredictability applications.

- Using our techniques, we reprove nearly all AI-ROM bounds obtained by Dodis et al. (who used a much more laborious compression technique), but we also apply them to many settings where the compression technique is either inapplicable (e.g., computational reductions) or appears intractable (e.g., Merkle-Damgard hashing).

- We show that for any salted Merkle-Damgard hash function with $m$-bit output there exists a collision-finding circuit of size $\Theta(2^{m/3})$ (taking the salt as input), which is significantly below the $2^{m/2}$ birthday security conjectured against uniform attackers.

- We build two general compilers showing how to generically extend the security of applications proven in the traditional ROM to the AI-ROM. One compiler simply prepends a public salt to the random oracle and shows that salting generically and provably defeats preprocessing.
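The salted Merkle-Damgard structure referred to above can be sketched in a few lines: the salt seeds the chaining value and the message is absorbed block by block through a compression function. The SHA-256-based compression function, the 128-bit chain, and the padding here are toy illustrative choices, not a real design:

```python
import hashlib

BLOCK_SIZE = 64  # toy block size in bytes

def salted_md(salt: bytes, message: bytes) -> bytes:
    """Toy salted Merkle-Damgard hash: chain = C(chain, block) over all
    message blocks, with the initial chain derived from the public salt."""
    def compress(chain: bytes, block: bytes) -> bytes:
        # toy compression function with a 128-bit (m = 128) chaining value
        return hashlib.sha256(chain + block).digest()[:16]

    # toy padding: append 0x80, then zeros up to a block boundary
    padded = message + b"\x80"
    padded += b"\x00" * (-len(padded) % BLOCK_SIZE)

    chain = hashlib.sha256(salt).digest()[:16]  # salt as initial chaining value
    for i in range(0, len(padded), BLOCK_SIZE):
        chain = compress(chain, padded[i:i + BLOCK_SIZE])
    return chain
```

The $\Theta(2^{m/3})$ result above says a preprocessing attacker can find collisions for this kind of construction well below the $2^{m/2}$ birthday bound, even though the salt is only supplied at attack time.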

Overall, our results make it much easier to get concrete security bounds in the AI-ROM. These bounds in turn give concrete conjectures about the security of these applications (in the standard model) against non-uniform attackers.
André Chailloux, Thomas Debris-Alazard
ePrint Report
Quantum-secure signature schemes have received a lot of attention recently, in particular because of the NIST call to standardize quantum-safe cryptography. However, only a few signature schemes can claim concrete quantum security because of technical difficulties associated with the Quantum Random Oracle Model (QROM). In this paper, we show that code-based signature schemes based on the full-domain-hash paradigm can behave very well in the QROM, i.e., that we can have tight security reductions. We also study quantum algorithms related to the underlying code-based assumption. Finally, we apply our reduction to a concrete example: the SURF signature scheme. We provide parameters for 128 bits of quantum security in the QROM and show that the obtained parameters are competitive compared to other similar quantum-secure signature schemes.

25 September 2017

Rishab Goyal, Vipul Goyal
ePrint Report
Blockchain technology has the potential to disrupt how cryptography is done. In this work, we propose to view blockchains as an "enabler", much like indistinguishability obfuscation (Barak et al., CRYPTO 2001, Garg et al., FOCS 2013, and Sahai and Waters, STOC 2014) or one-way functions, for building a variety of cryptographic systems. Our contributions in this work are as follows:

1. A Framework for Proof-of-Stake based Blockchains: We provide an abstract framework for formally analyzing and defining useful security properties for Proof-of-Stake (POS) based blockchain protocols. Interestingly, for some of our applications, POS based protocols are more suitable. We believe our framework and assumptions would be useful in building applications on top of POS based blockchain protocols even in the future.

2. Blockchains as an Alternative to Trusted Setup Assumptions in Cryptography: A trusted setup, such as a common reference string (CRS), has been used to realize numerous systems in cryptography. The paragon example of a primitive requiring trusted setup is a non-interactive zero-knowledge (NIZK) system. We show that already existing blockchain systems, including Bitcoin and Ethereum, can be used as a foundation (instead of a CRS) to realize NIZK systems.

The novel aspect of our work is that it allows for utilizing an already existing (and widely trusted) setup rather than proposing a new one. Our construction does not require any additional functionality from the miners over the already existing ones, nor do we need to modify the underlying blockchain protocol. If an adversary can violate the security of our NIZK, it could potentially also take over billions of dollars worth of coins in the Bitcoin, Ethereum or any such cryptocurrency!

We believe that such a "trusted setup" represents significant progress over using CRS published by a central trusted party. Indeed, NIZKs could further serve as a foundation for a variety of other cryptographic applications such as round efficient secure computation (Katz and Ostrovsky, CRYPTO 2004 and Horvitz and Katz, CRYPTO 2007).

3. One-time programs and pay-per-use programs: Goldwasser et al. (CRYPTO 2008) introduced the notion of one-time programs and presented a construction using tamper-proof hardware. As noted by Goldwasser et al., a one-time program clearly cannot be solely software based, as software can always be copied and run again. While there have been a number of follow-up works (Goyal et al., TCC 2010, Bellare et al., ASIACRYPT 2012, and Applebaum et al., SIAM Journal on Computing 2015), there are indeed no known constructions of one-time programs which do not rely on self-destructing tamper-proof hardware (even if one uses trusted setup or random oracles). Somewhat surprisingly, we show that it is possible to base one-time programs on POS-based blockchain systems without relying on trusted hardware. Our ideas do not seem to translate over to Proof-of-Work (POW) based blockchains.

We also introduce the notion of pay-per-use programs, which is simply a contract between two parties --- a service provider and a customer. The service provider supplies a program such that if the customer transfers a specific amount of coins to the provider, the customer can evaluate the program on any input of its choice once, even if the provider is offline. This is naturally useful in a subscription-based model where payment is based on usage.
Zahra Jafargholi, Alessandra Scafuro, Daniel Wichs
ePrint Report
A garbling scheme is used to garble a circuit C and an input x in a way that reveals the output C(x) but hides everything else. An adaptively secure scheme allows the adversary to specify the input x after seeing the garbled circuit. Applebaum et al. (CRYPTO '13) showed that in any garbling scheme with adaptive simulation-based security, the size of the garbled input must exceed the output size of the circuit. Here we show how to circumvent this lower bound and achieve significantly better efficiency under the minimal assumption that one-way functions exist by relaxing the security notion from simulation-based to indistinguishability-based.

We rely on the recent work of Hemenway et al. (CRYPTO '16) which constructed an adaptive simulation-based garbling scheme under one-way functions. The size of the garbled input in their scheme is as large as the output size of the circuit plus a certain pebble complexity of the circuit, where the latter is (e.g.,) bounded by the space complexity of the computation. By building on top of their construction and adapting their proof technique, we show how to remove the output size dependence in their result when considering indistinguishability-based security.

As an application of the above result, we get a symmetric-key functional encryption based on one-way functions, with indistinguishability-based security where the adversary can obtain an unbounded number of function secret keys and then adaptively a single challenge ciphertext. The size of the ciphertext only depends on the maximal pebble complexity of each of the functions but not on the number of functions or their circuit size.
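For concreteness, the basic (selectively secure) idea behind garbling a single gate -- a random label per wire value, plus a table encrypting each output label under the matching pair of input labels -- can be sketched as follows. This is plain Yao-style garbling with SHA-256 as the encryption and a zero-tag for row detection, not the adaptive scheme of the paper:

```python
import hashlib
import secrets

def garble_and_gate():
    """Garble a single AND gate with input wires a, b and output wire c.
    Each wire value 0/1 gets a random 16-byte label; each of the 4 rows
    encrypts the correct output label (plus a 16-byte zero tag, so the
    evaluator can recognize the decryptable row) under the two input labels."""
    labels = {w: (secrets.token_bytes(16), secrets.token_bytes(16)) for w in "abc"}
    rows = []
    for x in (0, 1):
        for y in (0, 1):
            pad = hashlib.sha256(labels["a"][x] + labels["b"][y]).digest()
            plain = labels["c"][x & y] + b"\x00" * 16
            rows.append(bytes(p ^ q for p, q in zip(pad, plain)))
    secrets.SystemRandom().shuffle(rows)  # hide row order (point-and-permute omitted)
    return labels, rows

def evaluate(rows, label_a, label_b):
    """Holding one label per input wire, recover exactly one output label."""
    pad = hashlib.sha256(label_a + label_b).digest()
    for row in rows:
        plain = bytes(p ^ q for p, q in zip(pad, row))
        if plain.endswith(b"\x00" * 16):  # the one row our labels decrypt
            return plain[:16]
    raise ValueError("no row decrypted")
```

The evaluator learns the output label (and, with a decoding table, the output bit) but nothing about the other wire values -- the "hides everything else" property referred to above.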
Jean-Philippe Aumasson, Guillaume Endignoux
ePrint Report
We present several optimizations to SPHINCS, a stateless hash-based signature scheme proposed by Bernstein et al. in 2015: PORS, a more secure variant of the HORS few-time signature scheme used in SPHINCS; secret key caching, to speed up signing and reduce signature size; batch signing, to amortize signature time and reduce signature size when signing multiple messages at once; mask-less constructions to reduce the key size and simplify the scheme; and Octopus, a technique to eliminate redundancies from authentication paths in Merkle trees. Based on a refined analysis of the subset resilience problem, we show that SPHINCS' parameters can be modified to reduce the signature size while retaining a similar security level and computation time. We then propose Gravity-SPHINCS, our variant of SPHINCS embodying the aforementioned tricks. Gravity-SPHINCS has shorter keys (32 and 64 bytes instead of $\approx1$ KB), shorter signatures ($\approx30$ KB instead of 41 KB), and faster signing and verification for the same security level as SPHINCS.
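The Merkle-tree authentication paths that Octopus-style optimizations compress can be illustrated with a toy implementation (power-of-two leaf count assumed, plain SHA-256 as the hash; real hash-based signature schemes add masks or tweakable hashing):

```python
import hashlib

def leaf_hash(leaf: bytes) -> bytes:
    return hashlib.sha256(leaf).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def merkle_root(leaves):
    level = [leaf_hash(l) for l in leaves]
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def auth_path(leaves, index):
    """Sibling hashes from leaf `index` up to the root. When several leaves
    are authenticated at once, paths share upper nodes -- the redundancy
    that Octopus removes."""
    level = [leaf_hash(l) for l in leaves]
    path = []
    while len(level) > 1:
        path.append(level[index ^ 1])  # sibling at this level
        level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, index, path, root):
    h = leaf_hash(leaf)
    for sibling in path:
        h = node_hash(h, sibling) if index % 2 == 0 else node_hash(sibling, h)
        index //= 2
    return h == root
```

For a tree of $2^h$ leaves each path has $h$ hashes, so shaving shared nodes across several paths directly shortens the signature.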
Nils Wisiol, Christoph Graebnitz, Marian Margraf, Manuel Oswald, Tudor A. A. Soroceanu, Benjamin Zengin
ePrint Report
In a novel analysis, we formally prove that arbitrarily many Arbiter PUFs can be combined into a stable XOR Arbiter PUF. To the best of our knowledge, this design cannot be modeled by any known oracle access attack in polynomial time.

Using majority vote of arbiter chain responses, our analysis shows that with a polynomial number of votes, the XOR Arbiter PUF stability of almost all challenges can be boosted exponentially close to 1; that is, the stability gain through majority voting can exceed the stability loss introduced by large XORs for a feasible number of votes. Considering state-of-the-art modeling attacks by Becker and Rührmair et al., our proposal enables the designer to increase the attacker's effort exponentially while still maintaining polynomial design effort. This is the first result that relates PUF design to this traditional cryptographic design principle.
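The stability boost from majority voting follows standard binomial arithmetic: if a single evaluation returns the correct response with probability $p > 1/2$, the majority of $v$ independent evaluations errs with probability that decays exponentially in $v$ (a Chernoff bound). A small calculator sketching this, not the paper's formal analysis:

```python
from math import comb

def majority_stability(p: float, votes: int) -> float:
    """Probability that the majority of `votes` (odd) independent noisy
    evaluations of a PUF challenge equals the true response, given a
    per-evaluation stability p."""
    assert votes % 2 == 1, "use an odd number of votes"
    return sum(comb(votes, i) * p**i * (1 - p)**(votes - i)
               for i in range(votes // 2 + 1, votes + 1))
```

For instance, a challenge that is correct 90% of the time on a single evaluation is correct more than 99% of the time under a 5-vote majority, illustrating how a polynomial number of votes pushes the stability exponentially close to 1.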
Michele Ciampi, Rafail Ostrovsky, Luisa Siniscalchi, Ivan Visconti
ePrint Report
In this work we start from the following two results in the state-of-the art:

1) 4-round non-malleable zero knowledge (NMZK): Goyal et al. in FOCS 2014 showed the first 4-round one-one NMZK argument from one-way functions (OWFs). Their construction requires the prover to know the instance and the witness already at the 2nd round.

2) 4-round multi-party coin tossing (MPCT): Garg et al. in Eurocrypt 2016 showed the first 4-round protocol for MPCT. Their result crucially relies on 3-round 3-robust parallel non-malleable commitments. So far there is no candidate construction for such a commitment scheme under standard polynomial-time hardness assumptions.

We improve the state-of-the art on NMZK and MPCT by presenting the following two results:

1) a delayed-input 4-round one-many NMZK argument $\Pi_{NMZK}$ from OWFs; moreover $\Pi_{NMZK}$ is also a delayed-input many-many synchronous NMZK argument.

2) a 4-round MPCT protocol $\Pi_{MPCT}$ from one-to-one OWFs; $\Pi_{MPCT}$ uses $\Pi_{NMZK}$ as a subprotocol and exploits the special properties (e.g., delayed input, many-many synchronous) of $\Pi_{NMZK}$.

Both $\Pi_{NMZK}$ and $\Pi_{MPCT}$ make use of a special proof of knowledge that offers additional security guarantees when played in parallel with other protocols. The new technique behind such a proof of knowledge is an additional contribution of this work and is of independent interest.
Bhavana Kanukurthi, Sai Lakshmi Bhavana Obbattu, Sruthi Sekar
ePrint Report
Non-malleable codes (NMCs), introduced by Dziembowski, Pietrzak and Wichs (ITCS 2010), generalize the classical notion of error correcting codes by providing a powerful guarantee even in scenarios where error correcting codes cannot provide any guarantee: a decoded message is either the same or completely independent of the underlying message, regardless of the number of errors introduced into the codeword. Informally, NMCs are defined with respect to a family of tampering functions $F$ and guarantee that any tampered codeword either decodes to the same message or to an independent message, so long as it is tampered using a function $f \in F$.

Nearly all known constructions of NMCs are for the $t$-split-state family, where the adversary tampers each of the $t$ blocks (also known as states) of a codeword arbitrarily but independently. Cheraghchi and Guruswami (TCC 2014) obtain a rate-1 non-malleable code for the case where $t = O(n)$, with $n$ being the codeword length, and, in (ITCS 2014), show an upper bound of $1-1/t$ on the best achievable rate for any $t$-split-state NMC. For $t=10$, Chattopadhyay and Zuckerman (FOCS 2014) achieve a constant-rate construction where the constant is unknown. In summary, there is no known construction of an NMC with an explicit constant rate for any $t= o(n)$, let alone one that comes close to matching Cheraghchi and Guruswami's lower bound!

In this work, we construct an efficient non-malleable code in the $t$-split-state model, for $t=4$, that achieves a constant rate of $\frac{1}{3+\zeta}$, for any constant $\zeta > 0$, and error $2^{-\Omega(\ell / \log^{c+1} \ell)}$, where $\ell$ is the length of the message and $c > 0$ is a constant.

24 September 2017

Dahmun Goudarzi, Antoine Joux, Matthieu Rivain
ePrint Report
Since their introduction in the late 90's, side-channel attacks have been considered as a major threat against cryptographic implementations. This threat has raised the need for formal leakage models in which the security of implementations can be proved. At Eurocrypt 2013, Prouff and Rivain introduced the noisy leakage model which has been argued to soundly capture the physical reality of power and electromagnetic leakages. In their work, they also provide the first formal security proof for a masking scheme in the noisy leakage model. However their work has two important limitations: (i) the security proof relies on the existence of a leak-free component, (ii) the tolerated amount of information in the leakage (aka leakage rate) is of $O(1/n)$ where $n$ is the number of shares in the underlying masking scheme. The first limitation was nicely tackled by Duc, Dziembowski and Faust one year later (Eurocrypt 2014). Their main contribution was to show a security reduction from the noisy leakage model to the conceptually simpler random-probing model. They were then able to prove the security of the well-known Ishai-Sahai-Wagner scheme (Crypto 2003) in the noisy leakage model. The second limitation was addressed last year in a paper by Andrychowicz, Dziembowski and Faust (Eurocrypt 2016). The proposed construction achieves security in the strong adaptive probing model with a leakage rate of $O(1/\log n)$ at the cost of a $O(n^2 \log n)$ complexity. The authors argue that their result can be translated into the noisy leakage model with a leakage rate of $O(1)$ by using secret sharing based on algebraic geometric codes. They further argue that the efficiency of their construction can be improved by a linear factor using packed secret sharing but no details are provided.

In this paper, we show how to compute in the presence of noisy leakage with a leakage rate up to $\tilde{O}(1)$ in complexity $\tilde{O}(n)$. We use a polynomial encoding allowing quasilinear multiplication based on the fast Number Theoretic Transform (NTT). We first show that our scheme is secure in the random-probing model with leakage rate $O(1/\log n)$. Using the reduction by Duc et al., this result can be translated into the noisy leakage model with a $O(1/|\mathbb{F}|^2 \log n)$ leakage rate. However, as in the work of Andrychowicz et al., our construction also requires $|\mathbb{F}| = O(n)$. In order to bypass this issue, we refine the granularity of our computation by considering the noisy leakage model on logical instructions that work on constant-size machine words. We provide a generic security reduction from the noisy leakage model at the logical-instruction level to the random-probing model at the arithmetic level. This reduction allows us to prove the security of our construction in the noisy leakage model with leakage rate $\tilde{O}(1)$.
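The starting point of all such masking schemes is additive secret sharing, under which linear operations act share-wise and only multiplications need interaction; the paper's contribution lies in making multiplication quasilinear via NTT-friendly polynomial encodings. A minimal additive-masking sketch (the modulus 257 is an arbitrary toy choice, not the paper's field):

```python
import secrets

MODULUS = 257  # toy prime field

def mask(value: int, n: int):
    """Split a sensitive value into n additive shares mod p. Any n-1 of
    the shares are uniformly random, so leakage on fewer than n shares
    carries no information about the value."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def unmask(shares):
    return sum(shares) % MODULUS

def masked_add(xs, ys):
    # Linear gadgets are share-wise and need no communication between shares.
    return [(x + y) % MODULUS for x, y in zip(xs, ys)]
```

Each additional share forces a noisy-leakage adversary to combine more observations, which is what the leakage-rate bounds above quantify.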
Jeremy Blackthorne, Benjamin Kaiser, Benjamin Fuller, Bulent Yener
ePrint Report
Malware needs to execute on a target machine while simultaneously keeping its payload confidential from a malware analyst. Standard encryption can be used to ensure the confidentiality, but it does not address the problem of hiding the key. Any analyst can find the decryption key if it is stored in the malware or derived in plain view.

One approach is to derive the key from a part of the environment which changes when the analyst is present. Such malware derives a key from the environment and encrypts its true functionality under this key.

In this paper, we present a formal framework for environmental authentication. We formalize the interaction between malware and analyst in three settings: 1) blind: in which the analyst does not have access to the target environment, 2) basic: where the analyst can load a single analysis toolkit on an infected target, and 3) resettable: where the analyst can create multiple copies of an infected environment. We show necessary and sufficient conditions for malware security in the blind and basic games and show that even under mild conditions, the analyst can always win in the resettable scenario.
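The environmental key derivation described above can be sketched as follows; the particular environment features and the SHA-256-based toy stream cipher are illustrative assumptions, not the paper's framework or any real malware design:

```python
import hashlib

def derive_key(environment: dict) -> bytes:
    """Key = hash of selected environment features. If an analysis toolkit
    changes any of these features, the derived key -- and hence the
    decrypted payload -- comes out wrong; the key itself is never stored."""
    fingerprint = "|".join(f"{k}={environment[k]}" for k in sorted(environment))
    return hashlib.sha256(fingerprint.encode()).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy SHA-256-based stream cipher (illustration only, not secure
    authenticated encryption). Applying it twice restores the input."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)
```

An analyst who can only observe the ciphertext and the derivation code still has to reproduce the exact environment to recover the payload, which is what the blind and basic games above formalize.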
Kuan Cheng, Yuval Ishai, Xin Li
ePrint Report
We study the question of minimizing the computational complexity of (robust) secret sharing schemes and error correcting codes. In standard instances of these objects, both encoding and decoding involve linear algebra, and thus cannot be implemented in the class AC0. The feasibility of non-trivial secret sharing schemes in AC0 was recently shown by Bogdanov et al. (Crypto 2016) and that of (locally) decoding errors in AC0 by Goldwasser et al. (STOC 2007).

In this paper, we show that by allowing some slight relaxation such as a small error probability, we can construct much better secret sharing schemes and error correcting codes in the class AC0. In some cases, our parameters are close to optimal and would be impossible to achieve without the relaxation. Our results significantly improve previous constructions in various parameters.

Our constructions combine several ingredients in pseudorandomness and combinatorics in an innovative way. Specifically, we develop a general technique to simultaneously amplify security threshold and reduce alphabet size, using a two-level concatenation of protocols together with a random permutation. We demonstrate the broader usefulness of this technique by applying it in the context of a variant of secure broadcast.
Daniel Genkin, Yuval Ishai, Mor Weiss
ePrint Report
Trusted parties and devices are commonly used in the real world to securely perform computations on secret inputs. However, their security can often be compromised by side-channel attacks in which the adversary obtains partial leakage on intermediate computation values. This gives rise to the following natural question: To what extent can one protect the trusted party against leakage?

Our goal is to design a hardware device $T$ that allows $m\ge 1$ parties to securely evaluate a function $f(x_1,\ldots,x_m)$ of their inputs by feeding $T$ with encoded inputs that are obtained using local secret randomness. Security should hold even in the presence of an active adversary that can corrupt a subset of parties and obtain restricted leakage on the internal computations in $T$.

We design hardware devices $T$ in this setting both for zero-knowledge proofs and for general multi-party computations. Our constructions can unconditionally resist either $AC^0$ leakage or a strong form of ``only computation leaks'' (OCL) leakage that captures realistic side-channel attacks, providing different tradeoffs between efficiency and security.
Wutichai Chongchitmate, Rafail Ostrovsky, Ivan Visconti
ePrint Report
In FOCS 2001 Barak et al. conjectured the existence of zero-knowledge arguments that remain secure against resetting provers and resetting verifiers. The conjecture was proven true by Deng et al. in FOCS 2009 under various complexity assumptions and requiring a polynomial number of rounds. Later on in FOCS 2013 Chung et al. improved the assumptions requiring one-way functions only but still with a polynomial number of rounds.

In this work we show a constant-round resettably-sound resettable zero-knowledge argument system, therefore improving the round complexity from polynomial to constant. We obtain this result through the following steps.

1. We show an explicit transform from any $\ell$-round concurrent zero-knowledge argument system into an $O(\ell)$-round resettable zero-knowledge argument system. The transform is based on techniques proposed by Barak et al. in FOCS 2001 and by Deng et al. in FOCS 2009. Then, we make use of a recent breakthrough presented by Chung et al. in CRYPTO 2015 that solved the longstanding open question of constructing a constant-round concurrent zero-knowledge argument system from plausible polynomial-time hardness assumptions. Starting with their construction $\Gamma$ we obtain a constant-round resettable zero-knowledge argument system $\Lambda$.

2. We then show that by carefully embedding $\Lambda$ inside $\Gamma$ (i.e., essentially by playing a modification of the construction of Chung et al. against the construction of Chung et al.) we obtain the first constant-round resettably-sound concurrent zero-knowledge argument system $\Delta$.

3. Finally, we apply a transformation due to Deng et al. to $\Delta$ obtaining a resettably-sound resettable zero-knowledge argument system $\Pi$, the main result of this work.

While our round-preserving transform for resettable zero knowledge requires one-way functions only, both $\Lambda, \Delta$ and $\Pi$ extend the work of Chung et al. and as such they rely on the same assumptions (i.e., families of collision-resistant hash functions, one-way permutations and indistinguishability obfuscation for P/poly, with slightly super-polynomial security).
T-H. Hubert Chan, Yue Guo, Wei-Kai Lin, Elaine Shi
ePrint Report
Oblivious RAM (ORAM) is a powerful cryptographic building block that allows a program to provably hide its access patterns to sensitive data. Since the original proposal of ORAM by Goldreich and Ostrovsky, numerous improvements have been made. To date, the best asymptotic overhead achievable for general block sizes is $O(\log^2 N/\log \log N)$, due to an elegant scheme by Kushilevitz et al., which in turn relies on the oblivious Cuckoo hashing scheme by Goodrich and Mitzenmacher.

In this paper, we make the following contributions: we first revisit the prior $O(\log^2 N/\log \log N)$-overhead ORAM result. We demonstrate that this prior result is somewhat incomplete, owing to the subtle incompleteness of a core building block, namely, Goodrich and Mitzenmacher's oblivious Cuckoo hashing scheme.

Even though we do show how to patch the prior result such that we can fully realize Goodrich and Mitzenmacher's elegant blueprint for oblivious Cuckoo hashing, it is clear that the extreme complexity of oblivious Cuckoo hashing has made understanding, implementation, and proofs difficult. We show that there is a conceptually simple $O(\log^2 N/\log \log N)$-overhead ORAM that dispenses with oblivious Cuckoo hashing entirely.

We show that such a conceptually simple scheme lends itself to further extensions. Specifically, we obtain the first $O(\log^2 N/\log \log N)$ Oblivious Parallel RAM (OPRAM) scheme, thus not only matching the performance of the best known sequential ORAM, but also achieving super-logarithmic improvements in comparison with known OPRAM schemes.
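As a point of reference, the simplest ORAM is a linear scan: touching every block on every logical access trivially makes the physical access pattern independent of the program's addresses, at $O(N)$ overhead -- the baseline that the $O(\log^2 N/\log \log N)$ schemes above improve. A minimal sketch (a real ORAM would also re-encrypt every block on each touch, which is omitted here):

```python
class LinearScanORAM:
    """Baseline ORAM: every access scans all N blocks, so an observer of
    the memory bus sees the same pattern regardless of which address the
    program actually reads or writes."""

    def __init__(self, n: int):
        self.blocks = [0] * n

    def access(self, op: str, addr: int, value=None):
        """Perform a logical read or write; returns the block's prior value."""
        result = None
        for i in range(len(self.blocks)):  # always touch all N blocks
            if i == addr:
                result = self.blocks[i]
                if op == "write":
                    self.blocks[i] = value
            else:
                _ = self.blocks[i]  # dummy touch to hide the real address
        return result
```

Hierarchical and tree-based ORAMs replace this full scan with polylogarithmic work per access while preserving the same obliviousness guarantee.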