International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

17 March 2016

Yael Tauman Kalai, Guy N. Rothblum, Ron D. Rothblum
ePrint Report
The Fiat-Shamir paradigm [CRYPTO'86] is a heuristic for converting three-round identification schemes into signature schemes, and more generally, for collapsing rounds in constant-round public-coin interactive protocols. This heuristic is very popular both in theory and in practice, and its security has been the focus of extensive study.

In particular, this paradigm was shown to be secure in the so-called Random Oracle Model. In the plain model, however, mainly negative results are known: the heuristic was shown to be insecure when applied to computationally sound proofs (also known as arguments), and, moreover, it was recently shown that even in the restricted setting where the heuristic is applied to interactive proofs (as opposed to arguments), its soundness cannot be proven via a black-box reduction to any so-called falsifiable assumption.

In this work, we give a positive result for the security of this paradigm in the plain model. Specifically, we construct a hash function for which the Fiat-Shamir paradigm is secure when applied to proofs (as opposed to arguments), assuming the existence of a sub-exponentially secure indistinguishability obfuscator, an exponentially secure input-hiding obfuscator for the class of multi-bit point functions, and a sub-exponentially secure one-way function.
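
To make the paradigm concrete, here is a minimal sketch (not the paper's construction) of the Fiat-Shamir transform applied to Schnorr identification: the verifier's random challenge is replaced by a hash of the transcript, collapsing three rounds into one message. SHA-256 and the toy group parameters are illustrative stand-ins only.

import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def H(*parts):
    # Fiat-Shamir challenge: hash the transcript down to an element of Z_q.
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int.from_bytes(h.digest(), "big") % q

def keygen():
    x = secrets.randbelow(q)              # secret key
    return x, pow(g, x, p)                # (sk, pk = g^x mod p)

def sign(x, msg):
    r = secrets.randbelow(q)              # prover's ephemeral randomness
    a = pow(g, r, p)                      # first message a = g^r
    c = H(a, msg)                         # challenge derived without a verifier
    z = (r + c * x) % q                   # response
    return a, z

def verify(y, msg, sig):
    a, z = sig
    c = H(a, msg)
    return pow(g, z, p) == (a * pow(y, c, p)) % p   # g^z == a * y^c

sk, pk = keygen()
assert verify(pk, "msg", sign(sk, "msg"))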

Alain Couvreur, Ayoub Otmani, Jean-Pierre Tillich, Valérie Gauthier-Umana
ePrint Report
The BBCRS scheme is a variant of the McEliece public-key encryption scheme where the hiding phase is performed by taking the inverse of a matrix of the form $T+R$, where $T$ is a sparse matrix with average row/column weight equal to a very small quantity $m$, usually $m<2$, and $R$ is a matrix of small rank $z\geq 1$. The rationale of this new transformation is the reintroduction of families of codes, like generalized Reed-Solomon codes, that are well known to represent insecure choices. We present a key-recovery attack for $z=1$ and $m$ chosen between $1$ and $1+R+O(1/\sqrt{n})$, where $R$ denotes the code rate. This attack has complexity $O(n^6)$ and breaks all the parameters suggested in the literature.

Apoorvaa Deshpande, Venkata Koppula, Brent Waters
ePrint Report
A constrained pseudorandom function (PRF) behaves like a standard PRF, but with the added feature that the (master) secret key holder, having secret key K, can produce a constrained key, K{f}, that allows evaluation of the PRF on all inputs satisfied by the constraint f. Most existing constrained PRF constructions can handle only bounded-length inputs. In a recent work, Abusalah et al. [AFP14] constructed a constrained PRF scheme where constraints can be represented as Turing machines with unbounded inputs. Their proof of security, however, requires risky “knowledge type” assumptions such as differing-inputs obfuscation for circuits and SNARKs. In this work, we construct a constrained PRF scheme for Turing machines with unbounded inputs under weaker assumptions, namely, the existence of indistinguishability obfuscation for circuits (and injective pseudorandom generators).
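
As a point of reference for the constrained-key concept, here is a minimal sketch of the classic GGM tree, which yields a PRF constrained to input prefixes; the paper's scheme handles the far richer class of Turing-machine constraints on unbounded inputs, which this toy does not.

import hashlib

def G(seed, bit):
    # Length-doubling PRG, one half per child (SHA-256 as an assumed stand-in).
    return hashlib.sha256(seed + bytes([bit])).digest()

def prf(key, x_bits):
    # GGM evaluation: walk from the root down the tree along the input bits.
    node = key
    for b in x_bits:
        node = G(node, b)
    return node

def constrain(key, prefix_bits):
    # Constrained key for all inputs starting with prefix_bits:
    # the tree node sitting at that prefix.
    return prf(key, prefix_bits)

def constrained_eval(ck, prefix_len, x_bits):
    # Evaluate with the constrained key; only valid under the prefix.
    return prf(ck, x_bits[prefix_len:])

K = b"\x00" * 32
x = [1, 0, 1, 1]
ck = constrain(K, [1, 0])
assert constrained_eval(ck, 2, x) == prf(K, x)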

Leon Groot Bruinderink, Andreas Hülsing, Tanja Lange, Yuval Yarom
ePrint Report
We present the first side-channel attack on a lattice-based signature scheme, using the FLUSH+RELOAD cache attack. The attack targets the discrete Gaussian sampler, an important step in the Bimodal Lattice Signature Scheme (BLISS). After observing only 450 signatures with a perfect side channel, an attacker can extract the secret BLISS key in less than 2 minutes, with a success probability of 0.96. Similar results are achieved in a proof-of-concept implementation using the FLUSH+RELOAD technique with fewer than 3500 signatures.

We show how to attack discrete Gaussian sampling, implemented via either CDT tables or rejection sampling, by identifying the information that potentially leaks through cache memory. For both sampling methods, we give a strategy that uses this additional information to finalize the attack and extract the secret key.

We provide experimental evidence for the idealized perfect side-channel attacks and the FLUSH+RELOAD attack on two recent CPUs.
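
The root cause is easy to see in code. Below is a simplified simulation (illustrative table size and cache-line geometry, not the BLISS parameters) of how a CDT-based sampler's table lookup touches a cache line that depends on the sampled value, which is exactly what FLUSH+RELOAD observes.

import bisect
import math
import random

SIGMA = 10.0                    # assumed toy standard deviation
SUPPORT = int(12 * SIGMA)

# Cumulative distribution table for a (one-sided) discrete Gaussian.
weights = [math.exp(-x * x / (2 * SIGMA * SIGMA)) for x in range(SUPPORT)]
total = sum(weights)
TABLE = []
acc = 0.0
for w in weights:
    acc += w / total
    TABLE.append(acc)

ENTRIES_PER_LINE = 8            # e.g., 64-byte lines holding 8-byte entries

def sample_with_trace():
    # CDT sampling: find where a uniform value lands in the cumulative table.
    u = random.random()
    idx = bisect.bisect_left(TABLE, u)
    # The lookup touches this cache line; FLUSH+RELOAD sees (roughly) this.
    leaked_line = idx // ENTRIES_PER_LINE
    return idx, leaked_line

for _ in range(3):
    value, line = sample_with_trace()
    print(f"sample={value:3d}  cache line touched={line}")

Knowing the line already narrows each sample down to ENTRIES_PER_LINE candidates, which is the coarse per-signature information the paper turns into full key recovery.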

Jérémy Jean, Ivica Nikolic
ePrint Report
We show several constructions based on the AES round function that can be used as building blocks for MACs and authenticated encryption schemes. They result from a search over the space of all secure constructions following an efficient design strategy that has proven to be among the best of all those considered. We implement the constructions on the latest Intel processors. Our benchmarks show that on Intel Skylake the smallest construction runs at 0.188 c/B, while the fastest runs at only 0.125 c/B, i.e., five times faster than AES-128.

Max Fillinger, Marc Stevens
ePrint Report
In May 2012, a highly advanced piece of espionage malware dubbed Flame was found targeting the Middle East. As it turned out, it used a forged signature to infect Windows machines via a man-in-the-middle attack on Windows Update. Using counter-cryptanalysis, Stevens found that the forged signature was made possible by a chosen-prefix collision attack on MD5 [Stevens, CRYPTO 2013]. He uncovered some details proving that this attack differs from the collision attacks in the public literature, yet many questions about its techniques and complexity remained unanswered.

In this paper, we demonstrate that significantly more information can be deduced from the example collision: these details are in fact sufficient to reconstruct the collision attack to a great extent, using only a few weak logical assumptions. In particular, we contribute an analysis of the differential path family for each of the four near-collision blocks, of the procedure that eliminates the chaining-value differences, and of the complexity of the near-collision block attacks and the associated birthday search for various parameter choices. Furthermore, we were able to prove a lower bound on the attack's complexity.

This reverse engineering of a non-academic cryptanalytic attack exploited in the real world appears to be without precedent. As the attack was allegedly developed by some nation state(s), we discuss the potential insights into their cryptanalytic knowledge and capabilities.

Liwei Zhang, A. Adam Ding, Yunsi Fei, Pei Luo
ePrint Report
To design effective countermeasures protecting cryptosystems against side-channel power analysis attacks, the evaluation of system leakage has to be lightweight and often performed at an early stage, e.g., on the cryptographic algorithm or its source code. When real implementations and power leakage measurements are not available, security evaluation has to rely on metrics for the information leakage of algorithms. In this work, we propose such a general and unified metric, the information leakage amount (ILA). ILA has several distinct advantages over existing metrics. It unifies the measure of information leakage across various attacks: first-order and higher-order DPA and CPA. It works on algorithms with no masking, or with perfect or imperfect masking countermeasures. It is explicitly connected to the success rates of attacks, the ultimate security metric for physical implementations. We therefore believe ILA is an accurate indicator of the side-channel security level of a physical system, and can be used effectively and efficiently during the countermeasure design stage to choose the best countermeasure.

Sonia Bogos, Serge Vaudenay
ePrint Report
We assume a scenario where an attacker can mount several independent attacks on a single CPU. Each attack can be run several times in independent ways, and each can succeed after a given number of steps with some given, known probability. A natural question is what the optimal strategy is for sequencing the steps of these attacks. In this paper, we develop a formalism to tackle this problem. When the number of attacks is infinite, we show that there is a magic number of steps m such that the optimal strategy is to run an attack for m steps and, if it has not yet succeeded, to try again with another attack until one succeeds. We also study the case of a finite number of attacks.
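
As a numeric illustration of the magic number, the sketch below computes the expected cost of the restart strategy: abandoning each attack after m steps costs E[min(T, m)] / F(m) steps on average, where F is the per-attack success CDF. The distribution used is made up for the example, not taken from the paper.

from itertools import accumulate

def expected_cost(F, m):
    # F[i] = probability that a single attack succeeds within its first i steps.
    # Expected total steps when every attack is abandoned after m steps:
    # E[min(T, m)] / F(m), with E[min(T, m)] = sum_{i<m} (1 - F(i)).
    partial = sum(1 - F[i] for i in range(m))
    return partial / F[m]

# Example success probabilities, biased towards early steps (illustrative).
p = [0.0] + [0.05 * 0.9 ** (i - 1) for i in range(1, 51)]
F = list(accumulate(p))

best_m = min(range(1, 50), key=lambda m: expected_cost(F, m))
print("optimal number of steps m =", best_m)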

We describe this problem for exhaustive key search attacks, but the result is more general. We apply our result to the learning parity with noise (LPN) problem and to the password search problem. Although the optimal m decreases as the distribution becomes more biased, we observe a phase transition in all cases: the decrease is very abrupt, from an m corresponding to exhaustive search on a single target down to m=1, corresponding to running a single step of the attack on each target. For all practical biased examples, we show that the best strategy is to use m=1. For LPN, this means guessing that the noise vector is 0 and solving for the secret by Gaussian elimination. This is actually better than all variants of the Blum-Kalai-Wasserman (BKW) algorithm.
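
For LPN, the m=1 strategy admits a compact sketch (the toy dimensions and termination check below are illustrative): draw n fresh samples, bet that all their noise bits are zero, solve by Gaussian elimination over GF(2), and restart with a new batch on failure.

import random

def gauss_solve_gf2(A, b):
    # Solve A x = b over GF(2); return None if A is singular.
    n = len(A)
    rows = [(A[i][:], b[i]) for i in range(n)]
    for col in range(n):
        piv = next((r for r in range(col, n) if rows[r][0][col]), None)
        if piv is None:
            return None
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(n):
            if r != col and rows[r][0][col]:
                rows[r] = ([u ^ v for u, v in zip(rows[r][0], rows[col][0])],
                           rows[r][1] ^ rows[col][1])
    return [rhs for _, rhs in rows]

def lpn_sample(secret, tau):
    a = [random.getrandbits(1) for _ in secret]
    noise = 1 if random.random() < tau else 0
    return a, (sum(u & v for u, v in zip(a, secret)) & 1) ^ noise

def attack(secret, tau):
    while True:
        batch = [lpn_sample(secret, tau) for _ in range(len(secret))]
        guess = gauss_solve_gf2([a for a, _ in batch], [b for _, b in batch])
        if guess == secret:   # a real attack would test guesses on extra samples
            return guess

s = [random.getrandbits(1) for _ in range(16)]
assert attack(s, tau=0.05) == s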

Thomas Peyrin
ePrint Report
Hash functions have been among the most scrutinized cryptographic primitives of the previous decade, mainly due to the cryptanalysis breakthroughs on the MD-SHA family and the NIST SHA-3 competition that followed. GRINDAHL is a hash function proposed at FSE 2007 that inspired several SHA-3 candidates. One of its particularities is that it follows the RIJNDAEL design strategy, with an efficiency comparable to SHA-2. This paper provides the first cryptanalytic work on this scheme, and we show that the 256-bit version of GRINDAHL is not collision resistant. Our attack uses byte-level truncated differentials and leverages a counterintuitive method (reaching an internal state where all bytes are active) to ease the construction of good differential paths. Then, by careful use of the degrees of freedom inserted every round, and with a work effort of approximately $2^{112}$ hash computations, an attacker can generate a collision for the full 256-bit version of GRINDAHL.

Weijia Wang, Yu Yu, Junrong Liu, Zheng Guo, François-Xavier Standaert, Dawu Gu, Sen Xu, Rong Fu
ePrint Report
At CT-RSA 2014, Whitnall, Oswald and Standaert gave the impossibility result that no generic DPA strategy (i.e., one without any \emph{a priori} knowledge of the leakage characteristics) can recover secret information from a physical device whose target function is injective (e.g., the AES and PRESENT S-boxes); as a remedy, they proposed a slightly relaxed strategy, ``generic-emulating DPA'', free from the non-injectivity constraint. However, as we show in this paper, the only generic-emulating DPA proposed in their work, namely the SLR-based DPA, suffers from two drawbacks: unstable outcomes in the high-noise regime (i.e., for a small number of traces) and poor performance, especially on real smart cards (compared with traditional DPAs using a specific power model). To solve these problems, we introduce two new generic-emulating distinguishers, based on lasso and ridge regression respectively, with more stable and better performance than the SLR-based one. Further, we introduce a cross-validation technique that improves generic-emulating DPAs in general and might be of independent interest. Finally, we compare the performance of all the aforementioned generic-emulating distinguishers (both with and without cross-validation) on simulated leakage functions of different degrees, and on an AES ASIC implementation. Our experimental results show that our generic-emulating distinguishers are stable, and that some of them behave even better than (resp., almost the same as) the best Difference-of-Means distinguishers on simulated leakages (resp., on a real implementation), making them good alternatives to traditional DPAs.
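
As a self-contained illustration of the ridge-based approach (with an assumed toy 4-bit S-box, leakage model and noise level, not the paper's experimental setup), the numpy sketch below regresses simulated traces onto the S-box output bits for every key guess and ranks guesses by fit quality; the correct key explains the traces best.

import numpy as np

rng = np.random.default_rng(1)
SBOX = rng.permutation(16)                 # toy 4-bit S-box
bits = lambda v: np.array([(v >> i) & 1 for i in range(4)])

def simulate(n, key, noise=1.0):
    # Leakage = unknown linear function of S-box output bits, plus noise.
    pts = rng.integers(0, 16, n)
    z = SBOX[pts ^ key]
    secret_weights = np.array([0.5, 2.0, -1.0, 1.5])
    traces = np.array([bits(v) @ secret_weights for v in z])
    return pts, traces + rng.normal(0, noise, n)

def ridge_score(pts, traces, guess, lam=1.0):
    # Fit leakage ~ S-box output bits under this key guess (L2-penalized).
    X = np.array([bits(v) for v in SBOX[pts ^ guess]], dtype=float)
    X = np.hstack([X, np.ones((len(X), 1))])          # intercept column
    beta = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ traces)
    resid = traces - X @ beta
    return -np.sum(resid ** 2)                        # higher = better fit

pts, traces = simulate(2000, key=11)
scores = [ridge_score(pts, traces, k) for k in range(16)]
print("best key guess:", int(np.argmax(scores)))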

David Nuñez, Isaac Agudo, Javier Lopez
ePrint Report
Proxy Re-Encryption (PRE) is a type of Public-Key Encryption (PKE) that provides an additional re-encryption functionality. Although PRE is inherently more complex than PKE, attack models for PRE have not been developed further than those inherited from PKE. In this paper we address this gap and define a parametric family of attack models for PRE, based on the availability of both the decryption and re-encryption oracles during the security game. This family enables the definition of fine-grained security notions for PRE, ranging from “plain” IND-CPA to “full” IND-CCA. We analyze some relations among these notions of security, and in particular, the separations, which further support the importance of the re-encryption oracle. The identified separations stem from the study of a new property of PRE, called privacy of re-encryption keys, which captures the requirement that re-encryption keys should not be leaked through the re-encryption function. Finally, we show that the scheme by Kirshanova (PKC 2014), which does not satisfy this property, cannot achieve a meaningful security notion for PRE since it is vulnerable to chosen-ciphertext attacks using the re-encryption oracle. This attack emphasizes the fact that PRE schemes that leak re-encryption keys cannot achieve strong security notions.

Yusuke Naito, Kan Yasuda
ePrint Report
We provide new bounds for the pseudo-random function security of keyed sponge constructions. For the case $c\leq b/2$ ($c$ the capacity and $b$ the permutation size), our result improves over all previously-known bounds. A remarkable aspect of our bound is that dependence between capacity and message length is removed, partially solving the open problem posed by Ga\v{z}i~et~al. at CRYPTO~2015. Our bound is essentially tight, matching the two types of attacks pointed out by Ga\v{z}i~et~al. For the case $c>b/2$, Ga\v{z}i~et~al.'s bound remains the best for the case of single-block output, but for keyed sponges with extendable outputs, our result partly (when query complexity is relatively large) provides better security than Mennink~et~al.'s bound presented at ASIACRYPT~2015.
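
To fix the notation, a toy keyed sponge is sketched below: a b-bit state split into an r-bit rate and a c = b - r capacity, with the key absorbed first and output squeezed to any length. Everything here is an assumption for illustration; in particular the "permutation" is SHA-256 playing a role it does not actually have, and the bounds in the paper concern real permutations like Keccak-f.

import hashlib

B, R = 32, 16                    # b = 32-byte state, r = 16-byte rate
C = B - R                        # capacity c = b - r

def perm(state):
    # Toy stand-in for a b-bit permutation (a hash is not a permutation).
    return hashlib.sha256(state).digest()

def keyed_sponge(key, msg, outlen):
    state = bytes(B)
    # Absorb the key block, then the message, r bytes at a time.
    for block in [key] + [msg[i:i + R] for i in range(0, len(msg), R)]:
        block = block.ljust(R, b"\x00")        # toy padding
        rate = bytes(x ^ y for x, y in zip(state[:R], block))
        state = perm(rate + state[R:])
    out = b""
    while len(out) < outlen:                   # extendable output
        out += state[:R]
        state = perm(state)
    return out[:outlen]

tag = keyed_sponge(b"0123456789abcdef", b"message", 32)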

Cynthia Dwork, Moni Naor, Guy N. Rothblum
ePrint Report
We are interested in constructing short two-message arguments for various languages, where the complexity of the verifier is small (e.g., linear in the input size, or even sublinear if the input is coded appropriately). Suppose that we have a low-communication public-coin interactive protocol for proving (or arguing) membership in the language. We consider a ``compiler'' from the literature that takes a protocol consisting of several rounds and produces a two-message argument system. The compiler is based on any Fully Homomorphic Encryption (FHE) scheme, or on PIR (under additional conditions on the protocol). This compiler has been used successfully in several proposed protocols.

We investigate the question of whether this compiler can be proven to work under standard cryptographic assumptions. We prove:

(i) If FHE schemes or PIR systems exist, then there is a sound interactive proof protocol that, when run through the compiler, results in a protocol that is not sound.

(ii) If the verifier in the original protocol runs in logarithmic space and has no ``long-term'' secret memory (a generalization of public coins), then the compiled protocol is sound. This yields a succinct two-message argument system for any language in NC, where the verifier's work is linear (or even polylogarithmic if the input is coded appropriately). This is the first (non-trivial) two-message succinct argument system based on a standard polynomial-time hardness assumption.

Mihir Bellare, Björn Tackmann
ePrint Report
We take nonce-based cryptography beyond symmetric encryption, developing it as a broad and practical way to mitigate damage caused by failures in randomness, whether inadvertent (bugs) or malicious (subversion). We focus on definitions and constructions for nonce-based public-key encryption and briefly treat nonce-based signatures. We introduce and construct hedged extractors as a general tool in this domain. Our nonce-based PKE scheme guarantees that an adversary wanting to violate IND-CCA security must do both of the following: (1) fully compromise the RNG, and (2) penetrate the sender's system to exfiltrate the seed used by the sender.
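
A minimal sketch of the hedging idea (HMAC-SHA256 as an assumed stand-in for the paper's hedged extractor): the encryption coins are derived from the RNG output together with a sender-held seed and a nonce, so they remain good unless the RNG is compromised and the seed is exfiltrated.

import hashlib
import hmac
import os

def hedged_coins(seed, nonce, pk, msg, rng_output):
    # Coins are pseudorandom under the seed even if rng_output is adversarial,
    # and remain well distributed if rng_output is truly random.
    return hmac.new(seed, nonce + pk + msg + rng_output, hashlib.sha256).digest()

seed = os.urandom(32)       # stored on the sender, never transmitted
coins = hedged_coins(seed, b"nonce-0001", b"<pk bytes>", b"msg", os.urandom(32))
# `coins` would then drive an otherwise-deterministic PKE encryption.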

Marc Fischlin, Amir Herzberg, Hod Bin Noon, Haya Shulman
ePrint Report
Obfuscation is challenging; we currently have practical candidates with rather vague security guarantees on the one hand, and theoretical constructions whose underlying cryptographic assumptions have recently suffered jeopardizing attacks on the other. This motivates us to study and present robust combiners for obfuscators, which integrate several candidate obfuscators into a single obfuscator that is secure as long as a quorum of the candidates is indeed secure.

We give several results about building obfuscation combiners, with matching upper and lower bounds for the precise quorum of secure candidates. Namely, we show that one can build 3-out-of-4 obfuscation combiners, secure whenever at least three of the four candidates are secure, whereas 2-out-of-3 structural combiners (which combine the obfuscator candidates in a black-box sense) with only two secure candidates are impossible. Our results generalize to (2c+1)-out-of-(3c+1) combiners for the positive result, and to 2c-out-of-3c combiners for the negative result, for any integer c.

To reduce overhead, we define detecting combiners, where the combined obfuscator may sometimes produce an error indication instead of the desired output, signaling that one of the component obfuscators is faulty. We present a (c+1)-out-of-(2c+1) detecting combiner for any integer c, bypassing the previous lower bound. We further show that c-out-of-2c structural detecting combiners are again impossible.

Since our approach can be used for practical obfuscators, as well as for obfuscators proven secure (based on assumptions), we also briefly report on implementation results for some applied obfuscator programs.

Sonia Bogos, Serge Vaudenay
ePrint Report
In this article we focus on constructing an algorithm that automates the generation of LPN solving algorithms from the given parameters. When searching for an algorithm to solve an LPN instance, we make use of the existing techniques and optimize their use. We formalize an LPN algorithm as a path in a graph $G$, and our algorithm searches for the optimal paths in this graph. The results improve on the existing work by a factor of $2^8$ to $2^{10}$, i.e., we improve the covering-code results from ASIACRYPT'14. Furthermore, we propose concrete practical codes and a method for finding good codes.

Veronique Cortier, David Galindo, Ralf Kuesters, Johannes Mueller, Tomasz Truderung
ePrint Report
There have been intensive research efforts in the last two decades or so to design and deploy electronic voting (e-voting) protocols/systems that allow voters and/or external auditors to check that the votes were counted correctly. This security property, motivated not least by numerous problems even in national elections, is called \emph{verifiability}. It is meant to defend against voting devices and servers that have programming errors or are outright malicious. In order to properly evaluate and analyze e-voting protocols w.r.t.~verifiability, one fundamental challenge has been to formally capture the meaning of this security property. While the first formal definitions of verifiability were devised as early as the late 1980s, new verifiability definitions are still being proposed. The definitions differ in various aspects, including the classes of protocols they capture and even their formulations of the very core of the meaning of verifiability. This is an unsatisfying state of affairs, leaving the research on the verifiability of e-voting protocols in a fuzzy state.

In this paper, we review all formal definitions of verifiability proposed in the literature and cast them in a framework proposed by Küsters, Truderung, and Vogt (the KTV framework), yielding a uniform treatment of verifiability. This enables us to provide a detailed comparison of the various definitions of verifiability from the literature. We thoroughly discuss advantages and disadvantages, and point to limitations and problems. Finally, from these discussions and based on the KTV framework, we distill a general definition of verifiability, which can be instantiated in various ways, and provide precise guidelines for its instantiation. The concepts for verifiability we develop should be widely applicable also beyond the framework used here. Altogether, our work offers a well-founded reference point for future research on the verifiability of e-voting systems.

15 March 2016

Anastasiya Gorodilova
ePrint Report
In [13], for a given vectorial Boolean function $F$ from $\mathbb{F}_2^n$ to itself, an associated Boolean function $\gamma_F(a,b)$ in $2n$ variables was defined that takes the value~$1$ iff $a\neq{\bf 0}$ and the equation $F(x)+F(x+a)=b$ has solutions. In this paper we introduce the notion of differentially equivalent functions: vectorial functions that have equal associated Boolean functions. Describing the differential equivalence class of a given APN function is an interesting open problem. We consider the APN Gold function $F(x)=x^{2^k+1}$, where gcd$(k,n)=1$, and prove that there exist exactly $2^{2n+n/2}$ distinct affine functions $A$ such that $F$ and $F+A$ are differentially equivalent if $n=4t$ for some $t$ and $k = n/2 \pm 1$; otherwise the number of such affine functions is equal to $2^{2n}$. This theoretical result, together with our computer calculations, shows that the APN Gold functions with $k=n/2\pm1$ and $n=4t$ are the only functions (except one function in 6 variables) among all known quadratic APN functions in $2,\ldots,8$ variables that have more than $2^{2n}$ trivial affine functions $A^F_{c,d}(x)=F(x)+F(x+c)+d$, where $c,d\in\mathbb{F}_2^n$, preserving the associated Boolean function when added to $F$.
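
The associated function is easy to compute exhaustively in small dimension. The sketch below (illustrative parameters n = 5, k = 1, field modulus x^5 + x^2 + 1, none of which are the paper's choices) evaluates gamma_F for the Gold function F(x) = x^3 and checks the APN property that each nonzero a admits exactly 2^(n-1) reachable outputs b.

N = 5
MOD = 0b100101                      # x^5 + x^2 + 1, irreducible over GF(2)

def gf_mul(a, b):
    # Carry-less multiplication in GF(2^N) with reduction by MOD.
    r = 0
    for _ in range(N):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= MOD
    return r

def F(x):
    return gf_mul(gf_mul(x, x), x)  # Gold function x^(2^k + 1) with k = 1

def gamma(a, b):
    # gamma_F(a, b) = 1 iff a != 0 and F(x) + F(x + a) = b has a solution.
    if a == 0:
        return 0
    return int(any(F(x) ^ F(x ^ a) == b for x in range(1 << N)))

# APN check: every nonzero a reaches exactly 2^(N-1) values b.
for a in range(1, 1 << N):
    assert sum(gamma(a, b) for b in range(1 << N)) == 1 << (N - 1)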

Yosuke Todo, Masakatu Morii
ePrint Report
Ciphers that do not use S-boxes have been discussed in response to the demand for lightweight cryptosystems; their round functions consist of AND, rotation, and XOR. The Simon family is one of the most famous such ciphers, and there are many cryptanalyses against it. However, it is very difficult to guarantee its security because the useful techniques for S-box-based ciphers do not apply. Very recently, the division property, a new technique for finding integral characteristics, was presented at EUROCRYPT 2015. The technique is powerful for S-box-based ciphers, and it was used to break the full MISTY1 for the first time at CRYPTO 2015. However, it has not been applied effectively to non-S-box-based ciphers like the Simon family, and only the existence of a 10-round integral characteristic on Simon32 was proven. On the other hand, the experimental characteristic, which possibly does not work for all keys, covers 15 rounds, leaving a 5-round gap. To fill the gap, we introduce a bit-based division property, and we apply it to show that the experimental 15-round integral characteristic always works, for all keys. Although the bit-based division property finds more accurate integral characteristics, it requires substantial time and memory. As a result, we cannot apply it to symmetric-key ciphers whose block length exceeds 32 bits. We therefore propose an alternative method aimed at designers. The method works for ciphers with large block lengths, and it shows ``provable security'' against integral cryptanalyses using the division property. We apply this technique to the Simon family and show that Simon48, 64, 96, and 128 probably do not have 17-, 20-, 25-, and 29-round integral characteristics, respectively.
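
For readers unfamiliar with the target, the Simon round function below (Simon32 word size, with the key schedule omitted and dummy round keys in the demo call) shows the AND/rotation/XOR structure that makes S-box-oriented techniques inapplicable.

MASK = 0xFFFF                               # 16-bit words for Simon32

def rotl(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def simon_round(x, y, k):
    # One round: only AND, rotations and XOR act on the state.
    return y ^ (rotl(x, 1) & rotl(x, 8)) ^ rotl(x, 2) ^ k, x

def encrypt(x, y, round_keys):
    for k in round_keys:
        x, y = simon_round(x, y, k)
    return x, y

ct = encrypt(0x6565, 0x6877, list(range(32)))   # dummy round keys, 32 rounds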

Mehmet Sinan Inci, Berk Gulmezoglu, Thomas Eisenbarth, Berk Sunar
ePrint Report
In this work we focus on the problem of co-location as a first step in conducting cross-VM attacks such as Prime+Probe or Flush+Reload in commercial clouds. We demonstrate and compare three co-location detection methods, namely a cooperative Last-Level Cache (LLC) covert channel, software profiling on the LLC, and memory bus locking. We conduct our experiments on three commercial clouds: Amazon EC2, Google Compute Engine and Microsoft Azure. Finally, we show that both cooperative and non-cooperative co-location with specific targets is still possible on major cloud services.