IACR News
Here you can see all recent updates to the IACR webpage. These updates are also available:
17 October 2016
Joël Alwen, Binyi Chen, Krzysztof Pietrzak, Leonid Reyzin, Stefano Tessaro
This paper focuses on scrypt, a simple candidate MHF designed by Percival, and described in RFC 7914. It has been used within a number of cryptocurrencies (e.g., Litecoin and Dogecoin) and has been an inspiration for Argon2d, one of the winners of the recent password-hashing competition. Despite its popularity, no rigorous lower bounds on its memory complexity are known.
We prove that scrypt is optimally memory hard, i.e., its cumulative memory complexity (cmc) in the parallel random oracle model is $\Omega(n^2 w)$, where $w$ and $n$ are the output length and number of invocations of the underlying hash function, respectively. High cmc, a strong security target for MHFs introduced by Alwen and Serbinenko (STOC '15), implies high memory cost even for adversaries who can amortise the cost over many evaluations and evaluate the underlying hash functions many times in parallel. Our proof is the first to show optimal memory hardness for any MHF.
Our result improves both quantitatively and qualitatively upon the recent work by Alwen et al. (EUROCRYPT '16) who proved a weaker lower bound of $\Omega(n^2 w /\log^2 n)$ for a restricted class of adversaries.
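To make the memory hardness concrete, here is a toy Python sketch of scrypt's core ROMix structure: a hash chain whose intermediate values are later read back at data-dependent positions, which is what forces honest evaluators to keep roughly $n$ blocks in memory. This is a simplification for illustration, not RFC 7914's actual construction (the real scheme uses BlockMix over Salsa20/8 with fixed block sizes; `H` and the index derivation below are placeholders):

```python
import hashlib

def H(x: bytes) -> bytes:
    """Stand-in for scrypt's underlying hash (BlockMix in RFC 7914)."""
    return hashlib.sha256(x).digest()

def romix(password: bytes, n: int) -> bytes:
    """Toy ROMix: n invocations of H, then n data-dependent reads."""
    x = H(password)               # normalize input to one block
    # Phase 1: fill a table with the hash chain x, H(x), H(H(x)), ...
    v = []
    for _ in range(n):
        v.append(x)
        x = H(x)
    # Phase 2: n more steps, each reading a pseudorandom earlier entry;
    # the index depends on the current state, so it cannot be precomputed
    for _ in range(n):
        j = int.from_bytes(x[:8], "big") % n
        x = H(bytes(a ^ b for a, b in zip(x, v[j])))
    return x
```

An adversary who stores fewer than $n$ table entries must recompute chain segments on demand; the paper's cmc bound shows that no parallel recomputation strategy beats the honest memory usage by more than a constant factor.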
Eli Ben-Sasson, Alessandro Chiesa, Michael A. Forbes, Ariel Gabizon, Michael Riabzev, Nicholas Spooner
1. The complexity class #P has PZK proofs in the model of Interactive PCPs (IPCPs) [KR08], where the verifier first receives from the prover a PCP and then engages with the prover in an Interactive Proof (IP).
2. The complexity class NEXP has PZK proofs in the model of Interactive Oracle Proofs (IOPs) [BCS16,RRR16], where the verifier, in every round of interaction, receives a PCP from the prover.
Unlike PZK multi-prover proof systems [BGKW88], PZK single-prover proof systems are elusive: PZK IPs are limited to AM ∩ coAM [F87,AH91], while known PCPs and IPCPs achieve only *statistical* simulation [KPT97,GIMS10]. Recent work [BCGV16] achieved PZK for IOPs, but only for languages in NP; our results go beyond NP.
Our constructions rely on *succinct* simulators that enable us to "simulate beyond NP", achieving exponential savings in efficiency over [BCGV16]. These simulators crucially rely on solving a problem that lies at the intersection of coding theory, linear algebra, and computational complexity, which we call the *succinct constraint detection* problem, and consists of detecting dual constraints with polynomial support size for codes of exponential block length. Our two results rely on solutions to this problem for fundamental classes of linear codes:
* An algorithm to detect constraints for Reed–Muller codes of exponential length.
* An algorithm to detect constraints for PCPs of Proximity of Reed–Solomon codes [BS08] of exponential degree.
The first algorithm exploits the Raz–Shpilka [RS05] deterministic polynomial identity testing algorithm and establishes, to our knowledge, the first connection between algebraic complexity theory and zero knowledge. Along the way, we give a perfect zero knowledge analogue of the celebrated sumcheck protocol [LFKN92], by leveraging both succinct constraint detection and low-degree testing. The second algorithm exploits the recursive structure of the PCPs of Proximity to show that small-support constraints are "locally" spanned by a small number of small-support constraints.
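For readers unfamiliar with sumcheck, the following is a minimal (non-zero-knowledge) Python sketch of the classical [LFKN92] protocol on which the paper's PZK analogue builds. The field size, interfaces, and variable names are illustrative only; the prover claims a value for the sum of a low-degree polynomial over the boolean hypercube, and each round reduces the claim to one fewer variable:

```python
import random

P = 2**31 - 1  # a prime field modulus for the toy example

def lagrange_eval(points, x):
    """Evaluate the unique low-degree polynomial through `points` at x (mod P)."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def sumcheck(poly, m, deg):
    """Sumcheck for S = sum of poly over {0,1}^m.
    poly: function of m field elements; deg: max degree per variable.
    Returns (claimed sum S, verifier accepts?)."""
    def partial_sum(prefix):
        k = m - len(prefix)
        return sum(poly(list(prefix) + [(b >> i) & 1 for i in range(k)])
                   for b in range(2 ** k)) % P

    S = partial_sum([])      # honest prover's claimed sum
    claim, fixed = S, []
    for _ in range(m):
        # Prover sends the round polynomial g_i via deg+1 evaluations
        g = [(t, partial_sum(fixed + [t])) for t in range(deg + 1)]
        # Verifier checks g_i(0) + g_i(1) equals the running claim
        if (lagrange_eval(g, 0) + lagrange_eval(g, 1)) % P != claim:
            return S, False
        r = random.randrange(P)          # verifier's random challenge
        claim = lagrange_eval(g, r)
        fixed.append(r)
    # Final check: a single oracle query to the polynomial itself
    return S, poly(fixed) % P == claim
```

The honest prover's messages here reveal partial sums of the polynomial, which is precisely the leakage the paper's perfect zero knowledge analogue must avoid via succinct constraint detection.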
Sylvain Guilley, Annelie Heuser, Olivier Rioul
In this paper, we define the success exponent (SE) of an arbitrary side-channel distinguisher as the first-order exponent of the success rate as the number of measurements increases. Under fairly general assumptions such as soundness, we give a general simple formula for any arbitrary distinguisher and derive closed-form expressions of it for DoM, CPA, MIA and the optimal distinguisher when the model is known (template attack). For DoM and CPA our results are in line with the literature.
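In standard large-deviation terms (this restatement and its notation are ours, not necessarily the paper's), the first-order exponent of the success rate $\mathrm{SR}(m)$ after $m$ measurements can be read as

```latex
\mathrm{SE} \;=\; -\lim_{m\to\infty} \frac{1}{m}\,\ln\bigl(1-\mathrm{SR}(m)\bigr),
\qquad\text{equivalently}\qquad
\mathrm{SR}(m) \;\approx\; 1 - e^{-m\cdot\mathrm{SE}},
```

so a larger SE means the attack's failure probability vanishes exponentially faster in the number of measurements.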
Experiments confirm that the theoretical closed-form expression of the SE coincides with the empirically computed one, even for reasonably small numbers of measurements. Finally, we highlight that our study raises many new perspectives for comparing and evaluating side-channel attacks, countermeasures and implementations.
Joppe W. Bos, Simon Friedberger
Recent implementations already use this special prime shape to speed up the cryptographic implementations but it remains unclear if the choices made are optimal or if one can do better. Our overview shows that in the SIDH setting, where arithmetic over a quadratic extension field is required, the approaches based on Montgomery multiplication are to be preferred. Furthermore, the outcome of our search reveals that there exist moduli which result in even faster implementations.
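As background, Montgomery multiplication replaces costly divisions by the modulus with multiplications and shifts modulo a power of two, which is why it pairs well with special-form primes. The following is a generic Python sketch of the technique, not the paper's optimized SIDH-specific code; the constants and function names are ours:

```python
def montgomery_setup(p, word_bits=64):
    """Precompute constants for Montgomery arithmetic modulo odd p.
    Picks R = 2^k > p (k a multiple of the word size);
    returns (R, R^2 mod p, -p^{-1} mod R)."""
    k = ((p.bit_length() + word_bits - 1) // word_bits) * word_bits
    R = 1 << k
    p_inv = pow(-p, -1, R)               # -p^{-1} mod R
    return R, (R * R) % p, p_inv

def redc(t, p, R, p_inv):
    """Montgomery reduction: returns t * R^{-1} mod p for 0 <= t < p*R.
    Only cheap operations mod R (a power of two) are needed."""
    m = (t * p_inv) % R
    u = (t + m * p) // R                 # exact division: t + m*p ≡ 0 mod R
    return u - p if u >= p else u

def mont_mul(a, b, p, R, R2, p_inv):
    """Modular multiplication a*b mod p via Montgomery form."""
    aR = redc(a * R2, p, R, p_inv)       # to Montgomery form: a*R mod p
    bR = redc(b * R2, p, R, p_inv)
    abR = redc(aR * bR, p, R, p_inv)     # (aR)(bR)R^{-1} = a*b*R mod p
    return redc(abR, p, R, p_inv)        # leave Montgomery form
```

In a real implementation the operands stay in Montgomery form across an entire sequence of field operations, so the conversion costs at the boundaries are amortized away.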
15 October 2016
Dario Fiore, Cédric Fournet, Esha Ghosh, Markulf Kohlweiss, Olga Ohrimenko, Bryan Parno
In this paper, we investigate multi-relation hash & prove schemes for verifiable computations that operate on succinct data hashes. With such schemes, the verifier delegates both storage and computation to an untrusted worker: she uploads data and keeps hashes; exchanges hashes with other parties; verifies arguments that consume and produce hashes; and selectively downloads the actual data she needs to access.
Existing instantiations that fit our definition either target restricted classes of computations or employ relatively inefficient techniques. Instead, we propose efficient constructions that lift classes of existing argument schemes for fixed relations to multi-relation hash & prove schemes. Our schemes (1) rely on hash algorithms that run linearly in the size of the input; (2) enable constant-time verification of arguments on hashed inputs; (3) incur minimal overhead for the prover. Their main benefit is to amortize the linear cost for the verifier across all relations with shared I/O. Concretely, compared to solutions that can be obtained from prior work, our new hash & prove constructions yield a 1,400x speed-up for provers. We also explain how to further reduce the linear verification costs by partially outsourcing the hash computation itself, obtaining a 480x speed-up when applied to existing VC schemes, even on single-relation executions.
Daniel Dinu, Léo Perrin, Aleksei Udovenko, Vesselin Velichkov, Johann Großschädl, Alex Biryukov
As a second contribution we propose another strategy for designing ARX ciphers with provable properties, that is completely independent of the LTS. It is motivated by a challenge proposed earlier by Wallén and uses the differential properties of modular addition to minimize the maximum differential probability across multiple rounds of a cipher. A new primitive, called LAX, is designed following those principles. LAX partly solves the Wallén challenge.
Avijit Dutta, Ashwin Jha, Mridul Nandi
Christopher Huth, Daniela Becker, Jorge Guajardo, Paul Duplys, Tim Güneysu
Wakaha Ogata, Kaoru Kurosawa
Ning Zhang, Kun Sun, Deborah Shands, Wenjing Lou, Y. Thomas Hou
In this work, we present TruSpy, the first study of timing-based cache side-channel information leakage of TrustZone. Our proposed attack exploits the cache contention between the normal world and the secure world to recover secret information from the secure world. Two attacks are proposed in TruSpy, namely, the normal world OS attack and the normal world Android app attack. In the OS-based attack, the attacker is able to access virtual-to-physical address translation and high-precision timers. In the Android app-based attack, these tools are unavailable to the attacker, so we devise a novel method that uses the expected channel statistics to allocate memory for cache probing. We also show how an attacker might use the less accurate performance event interface as a timer.
Using the T-table-based AES implementation in OpenSSL 1.0.1f as an example, we demonstrate that it is possible for a normal-world attacker to steal a fine-grained secret from the secure world using a timing-based cache side-channel. We can recover the full AES encryption key via either the OS-based attack or the Android app-based attack. Since our zero-permission TruSpy attack is based on the cache design in TrustZone-enabled ARM processors, it poses a significant threat to a wide array of devices. To mitigate the newly discovered threat, we also propose both application-based and system-oriented countermeasures.
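To illustrate why T-table lookups leak through the cache, consider a toy model (this is not the actual TruSpy measurement code; the cache-line size and all names here are our assumptions): in the first AES round the table index is plaintext_byte XOR key_byte, and a cache observer only learns which line was touched, i.e. the top bits of that index. Intersecting the consistent key candidates over a few observations pins down the high nibble of each key byte:

```python
import random

CACHE_LINE_ENTRIES = 16   # 64-byte line / 4-byte T-table entry

def observed_line(pt_byte, key_byte):
    """First-round T-table access: index = pt ^ key.
    The side channel reveals only the cache line, i.e. index >> 4."""
    return (pt_byte ^ key_byte) // CACHE_LINE_ENTRIES

def recover_high_nibble(samples):
    """Keep only the key-byte candidates consistent with every
    (plaintext byte, observed line) pair."""
    candidates = set(range(256))
    for pt, line in samples:
        candidates &= {k for k in range(256)
                       if (pt ^ k) // CACHE_LINE_ENTRIES == line}
    return {k >> 4 for k in candidates}   # only the high nibble is determined

key_byte = 0xA7
samples = [(p, observed_line(p, key_byte))
           for p in random.sample(range(256), 32)]
print(recover_high_nibble(samples))       # -> {10}, i.e. 0xA
```

First-round observations alone determine only 4 bits per key byte (XOR acts bitwise, so the line index depends only on the high nibbles); full-key recovery, as in the paper, additionally exploits later rounds and real timing measurements.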
Zhengjun Cao, Lihua Liu
14 October 2016
University of Bristol
Closing date for applications: 1 May 2017
Contact: Nigel Smart
More information: http://www.bristol.ac.uk/jobs/find/details.html?nPostingID=573&nPostingTargetID=17237&option=28&sort=DESC
ENS de Lyon, France
Expertise in at least one of the following topics is expected:
* computing on encrypted data
* cryptographic protocols
* lattice-based cryptography
* lattice algorithms and/or hardness of lattice problems
* implementation of cryptographic primitives
Interested applicants should provide a detailed resume and two references.
Closing date for applications: 31 December 2016
Contact: Benoit Libert and Damien Stehle
benoit.libert (at) ens-lyon.fr, damien.stehle (at) ens-lyon.fr
12 October 2016
Muhammad Yasin, Ozgur Sinanoglu, Jeyavijayan Rajendran
In this paper, we present HackTest, an attack that extracts secret information generated in the test data, even if the test data does not explicitly contain the secret. HackTest can break the existing intellectual property (IP) protection techniques, such as camouflaging, within two minutes for our benchmarks using only the camouflaged layout and the test data. HackTest applies to all existing camouflaged gate-selection techniques and is successful even in the presence of state-of-the-art test infrastructure, i.e. test data compression circuits. Our attack necessitates that the IC test data generation algorithms be reinforced with security. We also discuss potential countermeasures to prevent HackTest.
Frederik Armknecht, Colin Boyd, Gareth T. Davies, Kristian Gjøsteen, Mohsen Toorani
Ran Cohen, Chris Peikert
In this work, we study adaptively secure protocols which rely only on a short CRS that is independent of the function to compute. First, we raise a subtle issue relating to the usage of \emph{non-interactive non-committing encryption} within security proofs in the UC framework, and explain how to overcome it. We demonstrate the problem in the security proof of the adaptively secure oblivious-transfer protocol from [CLOS02] and provide a complete proof of this protocol.
Next, we consider the two-party setting where one of the parties has a polynomial-size input domain, yet the other has no constraints on its input. We show that assuming the existence of adaptively secure oblivious transfer, every deterministic functionality can be computed with adaptive security in a constant number of rounds.
Finally, we present a new primitive called \emph{non-committing indistinguishability obfuscation}, and show that this primitive is \emph{complete} for constructing adaptively secure protocols with round complexity independent of the function.
Gina Gallegos-Garcia, Vincenzo Iovino, Alfredo Rial, Peter B. Roenne, Peter Y. A. Ryan
To provide verifiability, many e-voting protocols use Non-Interactive Zero-Knowledge Proofs (NIZKs). Thanks to their non-interactive nature, NIZKs allow anybody, including third parties that do not participate in the protocol, to verify the correctness of the tally. Therefore, NIZKs can be used to obtain universal verifiability. Additionally, NIZKs also improve usability because they allow voters to cast a vote using a non-interactive protocol.
The disadvantage of NIZKs is that their security is based on setup assumptions such as the common reference string (CRS) or the random oracle (RO) model. The former requires a trusted party for the generation of a common reference string. The latter, though a popular methodology for designing secure protocols, has been shown to be unsound.
In this paper, we address the design of an e-voting protocol that provides verifiability without any trust assumptions, where verifiability here excludes eligibility verification. We show that Non-Interactive Witness-Indistinguishable proofs (NIWIs) can be used for this purpose. The e-voting scheme is private under the Decision Linear assumption, while verifiability holds unconditionally. To our knowledge, this is the first private e-voting scheme with perfect universal verifiability, i.e. one in which the probability of a fake tally going undetected is 0, and with non-interactive protocols that do not rely on trust assumptions.
Khoa Nguyen, Huaxiong Wang, Juanyang Zhang
In this paper, inspired by Qin et al.'s work, we design the first SR-IBE scheme from lattice assumptions. Our scheme is more efficient than existing constructions of lattice-based revocable IBE. We prove that the scheme is selectively secure in the standard model, based on the hardness of the Learning with Errors problem. At the heart of our design is a double encryption mechanism that enables smooth interactions between the message sender and the server, as well as between the server and the recipient, while ensuring the confidentiality of messages.