International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

17 October 2016

Jo\"el Alwen, Binyi Chen, Krzysztof Pietrzak, Leonid Reyzin, Stefano Tessaro
ePrint Report ePrint Report
Memory-hard functions (MHFs) are hash algorithms whose evaluation cost is dominated by memory cost. As memory, unlike computation, costs about the same across different platforms, MHFs cannot be evaluated at significantly lower cost on dedicated hardware like ASICs. MHFs have found widespread applications including password hashing, key derivation, and proofs-of-work.

This paper focuses on scrypt, a simple candidate MHF designed by Percival, and described in RFC 7914. It has been used within a number of cryptocurrencies (e.g., Litecoin and Dogecoin) and has been an inspiration for Argon2d, one of the winners of the recent password-hashing competition. Despite its popularity, no rigorous lower bounds on its memory complexity are known.
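To make the structure concrete, here is a minimal sketch of scrypt's ROMix core in the spirit of RFC 7914 (an illustration only: SHA-256 stands in for scrypt's BlockMix, and the index-derivation step is simplified):

```python
import hashlib

def H(x: bytes) -> bytes:
    """Stand-in for scrypt's underlying hash function (BlockMix in RFC 7914)."""
    return hashlib.sha256(x).digest()

def romix(block: bytes, n: int) -> bytes:
    """Simplified ROMix: build n blocks sequentially, then read them
    back in a data-dependent order."""
    # Phase 1: V[i] = H^i(block) -- naively requires n blocks of memory.
    v = []
    for _ in range(n):
        v.append(block)
        block = H(block)
    # Phase 2: n lookups into V at indices derived from the data itself.
    for _ in range(n):
        j = int.from_bytes(block[:8], "little") % n
        block = H(bytes(a ^ b for a, b in zip(block, v[j])))
    return block
```

The data-dependent indices in the second phase are what make time-memory trade-offs costly: an evaluator who discards part of V must recompute hash chains on demand, which is precisely the behaviour a memory-hardness lower bound has to account for.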

We prove that scrypt is optimally memory hard, i.e., its cumulative memory complexity (cmc) in the parallel random oracle model is $\Omega(n^2 w)$, where $w$ and $n$ are the output length and the number of invocations of the underlying hash function, respectively. High cmc, a strong security target for MHFs introduced by Alwen and Serbinenko (STOC '15), implies high memory cost even for adversaries who can amortise the cost over many evaluations and evaluate the underlying hash functions many times in parallel. Our proof is the first to show optimal memory hardness for any MHF.
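As a rough sketch of the metric (our paraphrase of the Alwen-Serbinenko notion, not a verbatim definition): if a parallel algorithm runs for $t$ steps and holds state $\sigma_i$ after step $i$, its cumulative memory complexity is the total memory summed over time,

$$\mathrm{cmc} = \sum_{i=1}^{t} |\sigma_i|,$$

so an evaluator cannot win by only briefly peaking at high memory, or by amortising one large table across many parallel evaluations.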

Our result improves both quantitatively and qualitatively upon the recent work by Alwen et al. (EUROCRYPT '16) who proved a weaker lower bound of $\Omega(n^2 w /\log^2 n)$ for a restricted class of adversaries.
Eli Ben-Sasson, Alessandro Chiesa, Michael A. Forbes, Ariel Gabizon, Michael Riabzev, Nicholas Spooner
ePrint Report
We present the first constructions of *single*-prover proof systems that achieve *perfect* zero knowledge (PZK) for languages beyond NP, under no intractability assumptions:

1. The complexity class #P has PZK proofs in the model of Interactive PCPs (IPCPs) [KR08], where the verifier first receives from the prover a PCP and then engages with the prover in an Interactive Proof (IP).

2. The complexity class NEXP has PZK proofs in the model of Interactive Oracle Proofs (IOPs) [BCS16,RRR16], where the verifier, in every round of interaction, receives a PCP from the prover.

Unlike PZK multi-prover proof systems [BGKW88], PZK single-prover proof systems are elusive: PZK IPs are limited to AM ∩ coAM [F87,AH91], while known PCPs and IPCPs achieve only *statistical* simulation [KPT97,GIMS10]. Recent work [BCGV16] has achieved PZK for IOPs, but only for languages in NP, whereas our results go beyond NP.

Our constructions rely on *succinct* simulators that enable us to "simulate beyond NP", achieving exponential savings in efficiency over [BCGV16]. These simulators crucially rely on solving a problem that lies at the intersection of coding theory, linear algebra, and computational complexity, which we call the *succinct constraint detection* problem, and consists of detecting dual constraints with polynomial support size for codes of exponential block length. Our two results rely on solutions to this problem for fundamental classes of linear codes:

* An algorithm to detect constraints for Reed--Muller codes of exponential length.

* An algorithm to detect constraints for PCPs of Proximity of Reed--Solomon codes [BS08] of exponential degree.

The first algorithm exploits the Raz--Shpilka [RS05] deterministic polynomial identity testing algorithm and establishes, to our knowledge, the first connection between algebraic complexity theory and zero knowledge. Along the way, we give a perfect zero knowledge analogue of the celebrated sumcheck protocol [LFKN92], by leveraging both succinct constraint detection and low-degree testing. The second algorithm exploits the recursive structure of the PCPs of Proximity to show that small-support constraints are "locally" spanned by a small number of small-support constraints.
Sylvain Guilley, Annelie Heuser, Olivier Rioul
ePrint Report
The success rate is the classical metric for evaluating the performance of side-channel attacks. It is generally computed empirically from measurements for a particular device or using simulations. Closed-form expressions of the success rate are desirable because they provide an explicit functional dependence on relevant parameters, such as the number of measurements and the signal-to-noise ratio, which helps in understanding the effectiveness of a given attack and how its threat can be mitigated by countermeasures. However, such closed-form expressions involve high-dimensional complex statistical functions that are hard to estimate.

In this paper, we define the success exponent (SE) of an arbitrary side-channel distinguisher as the first-order exponent of the success rate as the number of measurements increases. Under fairly general assumptions, such as soundness, we give a simple general formula valid for any distinguisher and derive closed-form expressions of it for DoM, CPA, MIA, and the optimal distinguisher when the model is known (template attack). For DoM and CPA our results are in line with the literature.
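One way to make "first-order exponent" precise (a hedged paraphrase; the paper's formal definition should be consulted) is as the exponential decay rate of the failure probability in the number of measurements $m$:

$$\mathrm{SE} = -\lim_{m \to \infty} \frac{1}{m} \log\bigl(1 - \mathrm{SR}(m)\bigr), \qquad \text{so that} \quad \mathrm{SR}(m) \approx 1 - e^{-m \cdot \mathrm{SE}}.$$

Two distinguishers can then be compared by their SEs alone: the one with the larger exponent needs fewer measurements to reach any fixed success rate.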

Experiments confirm that the theoretical closed-form expression of the SE coincides with the empirically computed one, even for reasonably small numbers of measurements. Finally, we highlight that our study raises many new perspectives for comparing and evaluating side-channel attacks, countermeasures and implementations.
Joppe W. Bos, Simon Friedberger
ePrint Report
We give a systematic overview of techniques to compute efficient arithmetic modulo $2^xp^y\pm 1$. This is useful for computations in the supersingular isogeny Diffie-Hellman (SIDH) key-exchange protocol which is one of the more recent contenders in the post-quantum public-key arena. One of the main computational bottlenecks in this key-exchange protocol is computing modular arithmetic in a finite field defined by a prime of this special shape.

Recent implementations already use this special prime shape to speed up their cryptographic operations, but it remains unclear whether the choices made are optimal or whether one can do better. Our overview shows that in the SIDH setting, where arithmetic over a quadratic extension field is required, the approaches based on Montgomery multiplication are to be preferred. Furthermore, the outcome of our search reveals that there exist moduli which result in even faster implementations.
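For background, here is a textbook Montgomery reduction sketch in Python (the generic algorithm, not the specialized $2^xp^y \pm 1$ variants the paper compares; the SIDH-shaped modulus below is only an illustrative parameter choice):

```python
def montgomery_reduce(t: int, m: int, bits: int, m_inv: int) -> int:
    """Return t * 2^(-bits) mod m for 0 <= t < m * 2^bits, using only
    multiplications, additions and shifts (no division by m)."""
    r_mask = (1 << bits) - 1
    u = ((t & r_mask) * m_inv) & r_mask  # u = t * (-m^{-1}) mod 2^bits
    x = (t + u * m) >> bits              # exact division: low bits cancel
    return x - m if x >= m else x

m = 2**372 * 3**239 - 1                  # an SIDH-shaped prime
bits = 768                               # R = 2^768 > m, gcd(R, m) = 1
m_inv = pow(-m, -1, 1 << bits)           # -m^{-1} mod R (Python 3.8+)

a, b = 123456789, 987654321
a_mont = (a << bits) % m                 # Montgomery form: a * R mod m
b_mont = (b << bits) % m
ab_mont = montgomery_reduce(a_mont * b_mont, m, bits, m_inv)
assert montgomery_reduce(ab_mont, m, bits, m_inv) == (a * b) % m
```

Roughly speaking, special shapes such as $2^xp^y \pm 1$ let implementers replace parts of this generic reduction by shifts and multiplications with a sparse modulus, which is where the surveyed speed-ups come from.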

15 October 2016

Dario Fiore, Cédric Fournet, Esha Ghosh, Markulf Kohlweiss, Olga Ohrimenko, Bryan Parno
ePrint Report
Proof systems for verifiable computation (VC) have the potential to make cloud outsourcing more trustworthy. Recent schemes enable a verifier with limited resources to delegate large computations and verify their outcome based on succinct arguments: verification complexity is linear in the size of the inputs and outputs (not the size of the computation). However, cloud computing also often involves large amounts of data, which may exceed the local storage and I/O capabilities of the verifier, and thus limit the use of VC.

In this paper, we investigate multi-relation hash & prove schemes for verifiable computations that operate on succinct data hashes. Hence, the verifier delegates both storage and computation to an untrusted worker. She uploads data and keeps hashes; exchanges hashes with other parties; verifies arguments that consume and produce hashes; and selectively downloads the actual data she needs to access.

Existing instantiations that fit our definition either target restricted classes of computations or employ relatively inefficient techniques. Instead, we propose efficient constructions that lift classes of existing argument schemes for fixed relations to multi-relation hash & prove schemes. Our schemes (1) rely on hash algorithms that run linearly in the size of the input; (2) enable constant-time verification of arguments on hashed inputs; and (3) incur minimal overhead for the prover. Their main benefit is to amortize the linear cost for the verifier across all relations with shared I/O. Concretely, compared to solutions that can be obtained from prior work, our new hash & prove constructions yield a 1,400x speed-up for provers. We also explain how to further reduce the linear verification costs by partially outsourcing the hash computation itself, obtaining a 480x speed-up when applied to existing VC schemes, even on single-relation executions.
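To illustrate the workflow (only the interface, with hypothetical names; the toy "proof" below is the witness itself, so it is neither succinct nor secure), a hash & prove scheme lets the verifier keep hashes and check relations over them:

```python
import hashlib

class ToyHashAndProve:
    """Insecure toy illustrating the hash & prove workflow: one hash
    algorithm shared by all relations; real schemes replace the witness
    with a succinct argument verified in constant time."""

    def hash(self, data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def prove(self, relation, inputs):
        outputs = relation(inputs)
        return outputs, (inputs, outputs)   # toy "proof" = the witness

    def verify(self, relation, input_hashes, output_hashes, proof):
        inputs, outputs = proof
        return ([self.hash(x) for x in inputs] == input_hashes and
                [self.hash(y) for y in outputs] == output_hashes and
                relation(inputs) == outputs)

hp = ToyHashAndProve()
files = [b"large file A", b"large file B"]
kept_hashes = [hp.hash(f) for f in files]   # the verifier stores only these
concat = lambda xs: [xs[0] + xs[1]]         # a sample relation
outs, proof = hp.prove(concat, files)
assert hp.verify(concat, kept_hashes, [hp.hash(o) for o in outs], proof)
```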
Daniel Dinu, Léo Perrin, Aleksei Udovenko, Vesselin Velichkov, Johann Großschädl, Alex Biryukov
ePrint Report
We present, for the first time, a general strategy for designing ARX symmetric-key primitives with provable resistance against single-trail differential and linear cryptanalysis. The latter has been a long-standing open problem in the area of ARX design. The wide trail design strategy (WTS), which is the basis of many S-box-based ciphers, including the AES, is not suitable for ARX designs due to the lack of S-boxes in the latter. In this paper we address this limitation by proposing the long trail design strategy (LTS) -- a dual of the WTS that is applicable (but not limited) to ARX constructions. In contrast to the WTS, which prescribes the use of small and efficient S-boxes at the expense of heavy linear layers with strong mixing properties, the LTS advocates the use of large (ARX-based) S-boxes together with sparse linear layers. With the help of the so-called Long Trail argument, a designer can bound the maximum differential and linear probabilities for any number of rounds of a cipher built according to the LTS. To illustrate the effectiveness of the new strategy, we propose SPARX -- a family of ARX-based block ciphers designed according to the LTS. SPARX has 32-bit ARX-based S-boxes and provable bounds against differential and linear cryptanalysis. In addition, SPARX is very efficient on a number of embedded platforms. Its optimized software implementation ranks among the top 6 most software-efficient ciphers, along with SIMON, SPECK, Chaskey, LEA and RECTANGLE.
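For flavour, here is a toy 32-bit ARX round in Python, modeled on the Speck-32 round function from which SPARX's ARX-box is derived (an illustrative reconstruction, not the official SPARX specification):

```python
MASK16 = 0xFFFF

def rotr16(v: int, r: int) -> int:
    return ((v >> r) | (v << (16 - r))) & MASK16

def rotl16(v: int, r: int) -> int:
    return ((v << r) | (v >> (16 - r))) & MASK16

def arx_box(x: int, y: int) -> tuple[int, int]:
    """One Speck-32-style round on a 32-bit state (x, y): the 16-bit
    modular addition is the only source of nonlinearity, playing the
    role of a large S-box in the long trail strategy."""
    x = (rotr16(x, 7) + y) & MASK16
    y = rotl16(y, 2) ^ x
    return x, y
```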

As a second contribution we propose another strategy for designing ARX ciphers with provable properties, that is completely independent of the LTS. It is motivated by a challenge proposed earlier by Wallén and uses the differential properties of modular addition to minimize the maximum differential probability across multiple rounds of a cipher. A new primitive, called LAX, is designed following those principles. LAX partly solves the Wallén challenge.
Avijit Dutta, Ashwin Jha, Mridul Nandi
ePrint Report
A probabilistic MAC (message authentication code) is an alternative to a stateful MAC in settings where maintaining internal state may be difficult or unsafe. Usually, the tag of a probabilistic MAC consists of an $m$-bit random coin (also called {\em salt}) and an $n$-bit core-tag depending on the salt. In terms of security, probabilistic MACs are subject to birthday collisions among salts, an issue absent in stateful MACs. XMACR is an example of a probabilistic MAC that remains secure up to $o(2^{m/2})$ tag-generation queries. To achieve security beyond the birthday bound in $n$, one can naturally use a large salt. For example, $\mathrm{MACRX}_3$ sets $m = 3n$ and provides security up to $o(2^{n})$ tag-generation queries. A large salt may restrict applicability, as it increases the cost of random string generation as well as the size of the overall tag. RWMAC (a randomized version of WMAC) provides similar security with $m = n$, but it uses a PRF (pseudorandom function) over $2n$-bit inputs, which is naturally costlier than one over $n$-bit inputs. Achieving beyond-birthday security using an $n$-bit PRF and an $n$-bit salt is a practical and challenging problem. Minematsu in FSE 2010 proposed Enhanced Hash-then-Mask (EHtM), using an $n$-bit salt, and showed its security up to $o(2^{2n/3})$ tag-generation queries. In this paper we revisit this construction and provide an exact security analysis of EHtM. In particular, we show that it has higher security, namely up to $o(2^{3n/4})$ queries, than what was claimed by the designer. Moreover, we demonstrate a single-attempt forgery attack which makes about $2^{3n/4}$ tag-generation queries. XMACR and EHtM follow the hash-then-mask paradigm of Carter and Wegman. We revisit six possible constructions following the hash-then-mask paradigm and provide an exact security analysis for all of these constructions, some of which were known before.
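For concreteness, a minimal sketch of the EHtM structure (our rendering: truncated HMAC-SHA-256 stands in for the two independent $n$-bit PRFs and for the universal hash, which a real instantiation would replace with an AXU hash):

```python
import hashlib
import hmac
import secrets

N = 16  # n = 128 bits: salt and core-tag length in bytes

def prf(key: bytes, x: bytes) -> bytes:
    """Stand-in n-bit PRF: truncated HMAC-SHA-256."""
    return hmac.new(key, x, hashlib.sha256).digest()[:N]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

def ehtm_tag(k1: bytes, k2: bytes, kh: bytes, msg: bytes):
    """EHtM: T = F_{k1}(R) xor F_{k2}(R xor H_{kh}(M)) with random salt R."""
    r = secrets.token_bytes(N)
    h = prf(kh, msg)            # stand-in for the universal hash of M
    return r, xor(prf(k1, r), prf(k2, xor(r, h)))

def ehtm_verify(k1, k2, kh, msg, r, tag) -> bool:
    h = prf(kh, msg)
    return hmac.compare_digest(tag, xor(prf(k1, r), prf(k2, xor(r, h))))
```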
Christopher Huth, Daniela Becker, Jorge Guajardo, Paul Duplys, Tim Güneysu
ePrint Report
With the advent of the Internet of Things, lightweight devices necessitate secure and cost-efficient key storage. Since traditional secure storage is expensive, the required entropy can instead originate from noisy sources, from which fuzzy extractors allow strong keys to be derived. While providing information-theoretic security, fuzzy extractors require a large amount of input entropy to account for the entropy loss in the key extraction process. It has been shown by Fuller et al. [20] that the entropy loss can be reduced if the requirement is relaxed to computational security based on the hardness of the Learning with Errors problem. Using this computational fuzzy extractor, we show how to construct a device-server authentication system providing outsider chosen perturbation security and pre-application robustness. We present the first implementation of a lossless computational fuzzy extractor, where the entropy of the source equals the entropy of the key, on a constrained device. The implementation needs only 1.45KB of SRAM and 9.8KB of Flash memory on an 8-bit microcontroller. We compare our implementation to existing work in terms of security, achieving no entropy loss.
Wakaha Ogata, Kaoru Kurosawa
ePrint Report
In the model of a "no-dictionary" verifiable searchable symmetric encryption (SSE) scheme, a client does not need to keep the set of keywords ${\cal W}$ in the search phase, where ${\cal W}$ is called a dictionary. Still, a malicious server cannot cheat the client by saying that "your search word $w$ does not exist in the dictionary ${\cal W}$" when it does exist. In previous such schemes, it takes $O(\log m)$ time for the server to prove that $w \not\in {\cal W}$, where $m=|{\cal W}|$ is the number of keywords. In this paper, we show a generic method to transform any SSE scheme (that is only secure against passive adversaries) into a no-dictionary verifiable SSE scheme. In the transformed scheme, it takes only $O(1)$ time for the server to prove that $w \not\in {\cal W}$.
Ning Zhang, Kun Sun, Deborah Shands, Wenjing Lou, Y. Thomas Hou
ePrint Report
As smart, embedded devices are increasingly integrated into our daily life, the security of these devices has become a major concern. The ARM processor family, which powers more than 60% of embedded devices, introduced TrustZone technology to offer security protection via an isolated execution environment called the secure world. Caches in TrustZone-enabled processors are extended with a non-secure (NS) bit to indicate whether a cache line is used by the secure world or the normal world. This cache design improves system performance by eliminating the need to flush caches during world switches; however, it also enables cache contention between the two worlds.

In this work, we present TruSpy, the first study of timing-based cache side-channel information leakage in TrustZone. Our proposed attack exploits the cache contention between the normal world and the secure world to recover secret information from the secure world. Two attacks are proposed in TruSpy, namely, the normal world OS attack and the normal world Android app attack. In the OS-based attack, the attacker is able to access virtual-to-physical address translation and high-precision timers. In the Android app-based attack, these tools are unavailable to the attacker, so we devise a novel method that uses the expected channel statistics to allocate memory for cache probing. We also show how an attacker might use the less accurate performance event interface as a timer.

Using the T-table based AES implementation in OpenSSL 1.0.1f as an example, we demonstrate that it is possible for a normal world attacker to steal a fine-grained secret from the secure world using a timing-based cache side-channel. We can recover the full AES encryption key via either the OS-based attack or the Android app-based attack. Since our zero-permission TruSpy attack is based on the cache design in TrustZone-enabled ARM processors, it poses a significant threat to a wide array of devices. To mitigate the newly discovered threat, we also propose both application-based and system-oriented countermeasures.
Zhengjun Cao, Lihua Liu
ePrint Report
In the literature there are divide-and-conquer algorithms, such as Karatsuba's algorithm and Strassen's algorithm, which play a key role in analyzing the performance of some cryptographic protocols and attacks. But we find that these algorithms are rarely implemented in practice, although their theoretical complexities are attractive. In this paper, we point out that the reason for this phenomenon is that decomposing the original problem into subproblems does not take constant time: this type of problem decomposition causes the data to expand exponentially, in which case the time for organizing data (including assigning addresses, writing, and reading), which is conventionally ignored, accumulates significantly.
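For reference, the textbook Karatsuba recursion in Python (the splitting and recombination steps below, which allocate and shuffle data at every level, are exactly the conventionally ignored costs the paper highlights):

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply via 3 recursive half-size products instead of 4."""
    if x < (1 << 64) or y < (1 << 64):
        return x * y                         # base case: small operands
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)      # split x = xh * 2^m + xl
    yh, yl = y >> m, y & ((1 << m) - 1)
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b  # cross terms from one product
    return (a << (2 * m)) + (c << m) + b
```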

14 October 2016

University of Bristol
Job Posting
We are looking for Post-Docs to work on the theory, implementation and applications of MPC. We are looking for different individuals to contribute to each of these three areas. The positions are multi-year, and you will be working in a vibrant group trying to make MPC a reality, with strong links to both other research groups and industrial applications.

Closing date for applications: 1 May 2017

Contact: Nigel Smart

More information: http://www.bristol.ac.uk/jobs/find/details.html?nPostingID=573&nPostingTargetID=17237&option=28&sort=DESC

ENS de Lyon, France
Job Posting
Several post-doc positions, for up to 2 years.

Expertise in at least one of the following topics is expected:

* computing on encrypted data

* cryptographic protocols

* lattice-based cryptography

* lattice algorithms and/or hardness of lattice problems

* implementation of cryptographic primitives

Interested applicants should provide a detailed resume and two references.

Closing date for applications: 31 December 2016

Contact: Benoit Libert and Damien Stehle

benoit.libert (at) ens-lyon.fr, damien.stehle (at) ens-lyon.fr


12 October 2016

Muhammad Yasin, Ozgur Sinanoglu, Jeyavijayan Rajendran
ePrint Report
Testing of integrated circuits (ICs) is essential to ensure their quality; the test is meant to prevent defective and out-of-spec ICs from entering the supply chain. The test is conducted by comparing the observed IC output with the expected test responses for a set of test patterns; the test patterns are generated using automatic test pattern generation algorithms. Existing test-pattern generation algorithms aim to achieve higher fault coverage at lower test cost. In an attempt to reduce the size of the test data, these algorithms reveal maximal information about the internal circuit structure. This is realized by sensitizing the internal nets to the outputs as much as possible, unintentionally leaking the secrets embedded in the circuit as well.

In this paper, we present HackTest, an attack that extracts secret information generated in the test data, even if the test data does not explicitly contain the secret. HackTest can break the existing intellectual property (IP) protection techniques, such as camouflaging, within two minutes for our benchmarks using only the camouflaged layout and the test data. HackTest applies to all existing camouflaged gate-selection techniques and is successful even in the presence of state-of-the-art test infrastructure, i.e. test data compression circuits. Our attack necessitates that the IC test data generation algorithms be reinforced with security. We also discuss potential countermeasures to prevent HackTest.
Frederik Armknecht, Colin Boyd, Gareth T. Davies, Kristian Gjøsteen, Mohsen Toorani
ePrint Report
Deduplication removes redundant copies of files or data blocks stored on the cloud. Client-side deduplication, where the client only uploads the file upon the request of the server, provides major storage and bandwidth savings, but introduces a number of security concerns. Harnik et al. (2010) showed how cross-user client-side deduplication inherently gives the adversary access to a (noisy) side-channel that may divulge whether or not a particular file is stored on the server, leading to leakage of user information. We provide formal definitions for deduplication strategies and their security in terms of adversarial advantage. Using these definitions, we prove a bound characterizing the best trade-off between security and efficiency in natural scenarios.
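One concrete strategy that fits such definitions is the randomized-threshold countermeasure in the spirit of Harnik et al.; a minimal sketch (illustrative only, not the optimal strategy characterized by the paper's bound), in which the server demands the first few uploads of every file in full, so a single deduplicated upload no longer reveals the file's prior existence:

```python
import secrets

class RandomThresholdDedup:
    """Each file gets a secret random threshold t; the first t uploads
    are transmitted in full, masking whether the file already exists."""

    def __init__(self, max_threshold: int = 10):
        self.max_threshold = max_threshold
        self.count = {}       # file hash -> uploads seen so far
        self.threshold = {}   # file hash -> sampled threshold

    def must_upload(self, file_hash: bytes) -> bool:
        """True if the client must transmit the full file contents."""
        if file_hash not in self.threshold:
            self.threshold[file_hash] = 1 + secrets.randbelow(self.max_threshold)
            self.count[file_hash] = 0
        self.count[file_hash] += 1
        return self.count[file_hash] <= self.threshold[file_hash]
```

The trade-off is visible directly: raising max_threshold blurs the side-channel further but wastes proportionally more bandwidth, which is the tension such a security-versus-efficiency bound quantifies.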
Ran Cohen, Chris Peikert
ePrint Report
In the setting of multiparty computation, a set of mutually distrusting parties wish to securely compute a joint function of their private inputs. A protocol is adaptively secure if it remains secure even when honest parties get corrupted \emph{after} the protocol has started. Recently (TCC 2015), three constant-round adaptively secure protocols were presented [CGP15, DKR15, GP15]. All three constructions assume that the parties have access to a \emph{common reference string} (CRS) whose size depends on the function to compute, even when facing semi-honest adversaries. It is unknown whether constant-round adaptively secure protocols exist without assuming access to such a CRS.

In this work, we study adaptively secure protocols which only rely on a short CRS that is independent of the function to compute. First, we raise a subtle issue relating to the usage of \emph{non-interactive non-committing encryption} within security proofs in the UC framework, and explain how to overcome it. We demonstrate the problem in the security proof of the adaptively secure oblivious-transfer protocol from [CLOS02] and provide a complete proof of this protocol.

Next, we consider the two-party setting where one of the parties has a polynomial-size input domain, yet the other has no constraints on its input. We show that assuming the existence of adaptively secure oblivious transfer, every deterministic functionality can be computed with adaptive security in a constant number of rounds.

Finally, we present a new primitive called \emph{non-committing indistinguishability obfuscation}, and show that this primitive is \emph{complete} for constructing adaptively secure protocols with round complexity independent of the function.
Gina Gallegos-Garcia, Vincenzo Iovino, Alfredo Rial, Peter B. Roenne, Peter Y. A. Ryan
ePrint Report
In e-voting protocol design, cryptographers must balance usability and strong security guarantees, such as privacy and verifiability. In traditional e-voting protocols, privacy is often provided by a trusted authority that learns the votes and computes the tally. Some protocols replace the trusted authority by a set of authorities, and privacy is guaranteed if fewer than a threshold number of authorities are corrupt. For verifiability, stronger security guarantees are demanded: typically, corrupt authorities that try to fake the result of the tally must always be detected.

To provide verifiability, many e-voting protocols use Non-Interactive Zero-Knowledge Proofs (NIZKs). Thanks to their non-interactive nature, NIZKs allow anybody, including third parties that do not participate in the protocol, to verify the correctness of the tally. Therefore, NIZKs can be used to obtain universal verifiability. Additionally, NIZKs also improve usability because they allow voters to cast a vote using a non-interactive protocol.

The disadvantage of NIZKs is that their security is based on setup assumptions such as the common reference string (CRS) or the random oracle (RO) model. The former requires a trusted party for the generation of a common reference string. The latter, though a popular methodology for designing secure protocols, has been shown to be unsound.

In this paper, we address the design of an e-voting protocol that provides verifiability without any trust assumptions, where verifiability here is meant without eligibility verification. We show that Non-Interactive Witness-Indistinguishable proofs (NIWIs) can be used for this purpose. The e-voting scheme is private under the Decision Linear assumption, while verifiability holds unconditionally. To our knowledge, this is the first private e-voting scheme with perfect universal verifiability, i.e., one in which the probability of a fake tally going undetected is 0, and with non-interactive protocols that do not rely on trust assumptions.
Khoa Nguyen, Huaxiong Wang, Juanyang Zhang
ePrint Report
Server-aided revocable identity-based encryption (SR-IBE), recently proposed by Qin et al. at ESORICS 2015, offers significant advantages over previous user revocation mechanisms in the scope of IBE. In this new system model, almost all the workloads on users are delegated to an untrusted server, and users can compute decryption keys at any time period without having to communicate with either the key generation center or the server.

In this paper, inspired by Qin et al.’s work, we design the first SR-IBE scheme from lattice assumptions. Our scheme is more efficient than existing constructions of lattice-based revocable IBE. We prove that the scheme is selectively secure in the standard model, based on the hardness of the Learning with Errors problem. At the heart of our design is a “double encryption” mechanism that enables smooth interactions between the message sender and the server, as well as between the server and the recipient, while ensuring the confidentiality of messages.
Jian Guo, Jérémy Jean, Ivica Nikolić, Kexin Qiao, Yu Sasaki, Siang Meng Sim
ePrint Report
We present an invariant subspace attack on the block cipher Midori64, proposed at Asiacrypt 2015. Our analysis shows that Midori64 has a class of $2^{32}$ weak keys. Under any such key, the cipher can be distinguished with only a single chosen query, and the key can be recovered in $2^{16}$ time with two chosen queries. As both the distinguisher and the key recovery have very low complexities, we confirm our analysis by implementing the attacks. Some tweaks of the round constants make Midori64 more resistant to the attacks, but some lead to even larger weak-key classes. To eliminate the dependency on the round constants, we investigate alternative S-boxes for Midori64 that provide a certain level of security against the found invariant subspace attacks, regardless of the choice of the round constants. Our search for S-boxes is enhanced with a dedicated tool which evaluates the depth of any given 4-bit S-box that satisfies certain design criteria. The tool may be of independent interest for future S-box designs.
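To unpack the notion: an invariant subspace attack finds a linear space $A$ and offsets $a, b$ such that the cipher maps the coset $A + a$ onto $A + b$ under weak keys. Below is a generic GF(2) sampling check in Python, demonstrated on a deliberately trivial 8-bit toy map (Midori64's actual subspace lives on 64-bit states; this only illustrates the property being tested):

```python
import secrets

def reduce_vec(basis, v):
    """Reduce v against a GF(2) basis kept in descending order;
    the result is 0 iff v lies in the span."""
    for b in basis:
        v = min(v, v ^ b)
    return v

def add_vector(basis, v):
    v = reduce_vec(basis, v)
    if v:
        basis.append(v)
        basis.sort(reverse=True)

def is_invariant_coset(encrypt, basis, a, b, trials=1000):
    """Sample u in span(basis), test encrypt(a ^ u) in b + span(basis)."""
    for _ in range(trials):
        u = 0
        for v in basis:
            if secrets.randbelow(2):
                u ^= v
        if reduce_vec(basis, encrypt(a ^ u) ^ b) != 0:
            return False
    return True

basis = []
for v in (0b0001, 0b0010):
    add_vector(basis, v)
# Toy map x -> x ^ 0b0011 sends the coset 0b0100 + span{1, 2} to itself.
assert is_invariant_coset(lambda x: x ^ 0b0011, basis, 0b0100, 0b0100)
```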
Helene Haagh, Yue Ji, Chenxing Li, Claudio Orlandi, and Yifan Song
ePrint Report
We generalize the cryptographic notion of Order Revealing Encryption (ORE) to arbitrary functions, and we present a construction that allows one to determine the (partial) ordering of two vectors, i.e., given E(x) and E(y), it is possible to learn whether x is less than or equal to y, y is less than or equal to x, or x and y are incomparable. This is the first non-trivial example of a Revealing Encryption (RE) scheme with output larger than one bit that does not rely on cryptographic obfuscation or multilinear maps.
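In the clear, the functionality the ciphertexts are designed to reveal is just the componentwise partial order on vectors (a sketch of the leakage function only, not of the encryption scheme itself):

```python
def reveal_order(x: list[int], y: list[int]) -> str:
    """The componentwise (product) order: all that a pair of
    ciphertexts E(x), E(y) is meant to disclose."""
    x_le_y = all(a <= b for a, b in zip(x, y))
    y_le_x = all(b <= a for a, b in zip(x, y))
    if x_le_y and y_le_x:
        return "equal"
    if x_le_y:
        return "x <= y"
    if y_le_x:
        return "y <= x"
    return "incomparable"

assert reveal_order([1, 2], [2, 3]) == "x <= y"
assert reveal_order([1, 3], [2, 1]) == "incomparable"
```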