International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

21 June 2015

Milivoj Simeonovski, Fabian Bendun, Muhammad Rizwan Asghar, Michael Backes, Ninja Marnau and
ePrint Report
Search engines are the most commonly used tools for collecting information about individuals on the Internet. Search results typically comprise a variety of sources that contain personal information --- either intentionally released by the person herself, or unintentionally leaked or published by third parties without being noticed, often with detrimental effects on the individual's privacy. To grant individuals the ability to regain control over their disseminated personal information, the European Court of Justice recently ruled that EU citizens have a right to be forgotten in the sense that indexing systems, such as Google, must offer them technical means to request removal of links from search results that point to sources violating their data protection rights. As of now, these technical means consist of a web form that requires a user to manually identify all relevant links upfront and to insert them into the web form, followed by a manual evaluation by employees of the indexing system to assess whether the request to remove those links is eligible and lawful.

In this work, we propose Oblivion, a universal framework to support the automation of the right to be forgotten in a scalable, provable and privacy-preserving manner. First, Oblivion enables a user to automatically find and tag her disseminated personal information using natural language processing (NLP) and image recognition techniques and to file a request in a privacy-preserving manner. Second, Oblivion provides indexing systems with an automated and provable eligibility mechanism, asserting that the author of a request is indeed affected by an online resource. The automated eligibility proof ensures censorship-resistance, so that only legitimately affected individuals can request the removal of corresponding links from search results. We have conducted comprehensive evaluations of Oblivion, showing that the framework is capable of handling 278 removal requests per second on a standard notebook (2.5 GHz dual core), and is hence suitable for large-scale deployment.
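
The first step of such a pipeline (automatically finding and tagging a user's personal information in web pages) can be illustrated with a minimal, hypothetical sketch. The attribute patterns and the tag_page helper below are illustrative stand-ins, not Oblivion's actual NLP or image-recognition components:

    import re

    # Toy profile of the requesting user; Oblivion derives such attributes
    # from verified identity documents (this dict is a hypothetical stand-in).
    profile = {
        "name": r"\bAlice\s+Example\b",
        "email": r"\balice\.example@example\.org\b",
    }

    def tag_page(text, profile):
        """Return (attribute, match) pairs found in a page; a crude
        stand-in for the framework's automatic tagging step."""
        hits = []
        for attr, pattern in profile.items():
            for m in re.finditer(pattern, text, flags=re.IGNORECASE):
                hits.append((attr, m.group(0)))
        return hits

    page = "Contact Alice Example at alice.example@example.org for details."
    for attr, value in tag_page(page, profile):
        print(attr, "->", value)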

Patrick HADDAD, Viktor FISCHER, Florent BERNARD, Jean NICOLAI
ePrint Report
Security in random number generation for cryptography is closely related to the entropy rate at the generator output. This rate has to be evaluated using an appropriate stochastic model. The stochastic model proposed in this paper is dedicated to the transition effect ring oscillator (TERO) based true random number generator (TRNG) proposed by Varchola and Drutarovsky in 2010. The advantage and originality of this model are that it is derived from a physical model based on a detailed study and on a precise electrical description of the noisy physical phenomena that contribute to the generation of random numbers. We compare the proposed electrical description with data generated by a 28 nm CMOS ASIC implementation. Our experimental results are in very good agreement with those obtained with both the physical model of the TERO's noisy behavior and the stochastic model of the TERO TRNG, which we also confirmed using the AIS 31 test suites.
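
The entropy-rate question can be illustrated with a toy Monte Carlo in the spirit of such models: treat the number of temporary oscillations before the TERO cell settles as roughly Gaussian, and take the oscillation counter's LSB as the raw random bit. The parameters below are made-up illustration values, not the paper's model or its 28 nm measurements:

    import math
    import random

    # Toy model: after each restart, a TERO oscillates for a roughly
    # Gaussian-distributed number of periods before settling; the LSB of
    # the oscillation counter is the raw bit. mu/sigma are illustrative.
    mu, sigma, trials = 500.0, 3.0, 100_000
    rng = random.Random(1)

    ones = 0
    for _ in range(trials):
        count = max(0, round(rng.gauss(mu, sigma)))
        ones += count & 1
    p1 = ones / trials

    # Shannon entropy of the raw bit; a real evaluation would rely on a
    # validated stochastic model (and AIS 31 testing) instead.
    h = -(p1 * math.log2(p1) + (1 - p1) * math.log2(1 - p1))
    print(f"P(bit=1) = {p1:.4f}, Shannon entropy = {h:.6f} bits")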

Debrup Chakraborty, Cuauhtemoc Mancillas-Lopez, Palash Sarkar
ePrint Report
In the last decade and a half there has been a lot of activity towards the development of cryptographic techniques for disk encryption. It has been almost canonised that an encryption scheme suitable for the application of disk encryption must be length preserving; this rules out the use of schemes like authenticated encryption, where an authentication tag is also produced as part of the ciphertext, resulting in ciphertexts being longer than the corresponding plaintexts. The notion of a tweakable enciphering scheme (TES) has been formalised as the appropriate primitive for disk encryption, and it has been argued that TESs provide the maximum security possible for a tag-less scheme. On the other hand, TESs are less efficient than some existing authenticated encryption schemes. Also, a TES cannot provide true authentication, as it has no authentication tag.

In this paper, we analyze the possibility of using encryption schemes that produce length expansion for the purpose of disk encryption. On the negative side, we argue that nonce-based authenticated encryption schemes are not appropriate for this application. On the positive side, we demonstrate that deterministic authenticated encryption (DAE) schemes may have more advantages than disadvantages compared to a TES when used for disk encryption. Finally, we propose a new deterministic authenticated encryption scheme called BCTR which is suitable for this purpose. We provide the full specification of BCTR, prove its security and also report an efficient implementation in reconfigurable hardware. Our experiments suggest that BCTR performs significantly better than existing TESs and existing DAE schemes.
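
The DAE idea discussed here can be sketched with a SIV-style construction: a deterministic MAC over the sector data (with the sector number as associated data) yields the tag, which then serves as the IV for CTR-mode encryption. This is a minimal illustration of DAE for disk sectors using the PyCA cryptography package, not the paper's BCTR scheme; the key sizes and sector layout are assumptions:

    import hmac, hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def dae_encrypt(mac_key, enc_key, sector_index, plaintext):
        # Deterministic tag over (sector index, plaintext); doubles as the IV.
        ad = sector_index.to_bytes(8, "big")
        tag = hmac.new(mac_key, ad + plaintext, hashlib.sha256).digest()[:16]
        ctr = Cipher(algorithms.AES(enc_key), modes.CTR(tag)).encryptor()
        return tag + ctr.update(plaintext) + ctr.finalize()

    def dae_decrypt(mac_key, enc_key, sector_index, blob):
        tag, ct = blob[:16], blob[16:]
        ctr = Cipher(algorithms.AES(enc_key), modes.CTR(tag)).decryptor()
        pt = ctr.update(ct) + ctr.finalize()
        ad = sector_index.to_bytes(8, "big")
        expected = hmac.new(mac_key, ad + pt, hashlib.sha256).digest()[:16]
        if not hmac.compare_digest(tag, expected):
            raise ValueError("authentication failure")
        return pt

    blob = dae_encrypt(b"k" * 32, b"K" * 32, 7, b"sector payload")
    assert dae_decrypt(b"k" * 32, b"K" * 32, 7, blob) == b"sector payload"

Note that the ciphertext is 16 bytes longer than the plaintext, which is exactly the length expansion that the canonical length-preserving requirement forbids.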

Nahid Farhady Ghalaty, Bilgiday Yuce, Mostafa Taha, Patrick Schaumont
ePrint Report
Recent research has demonstrated that there is no sharp distinction between passive attacks based on side-channel leakage and active attacks based on fault injection. Fault behavior can be processed as side-channel information, offering all the benefits of Differential Power Analysis, including noise averaging and hypothesis testing by correlation. This paper introduces Differential Fault Intensity Analysis, which combines the principles of Differential Power Analysis and fault injection. We observe that most faults are biased - such as single-bit, two-bit, or three-bit errors in a byte - and that this property can reveal the secret key through a hypothesis test. Unlike Differential Fault Analysis, we do not require precise analysis of the fault propagation. Unlike Fault Sensitivity Analysis, we do not require a fault sensitivity profile for the device under attack. We demonstrate our method on an FPGA implementation of AES with a fault injection model. We find that with an average of 7 fault injections, we can reconstruct a full 128-bit AES key.
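
The hypothesis test at the core of such an attack can be illustrated on a toy cipher: inject biased (single-bit) faults on the state entering the final S-box, then rank key guesses by the Hamming weight of the inferred fault. The S-box below is a random toy bijection, not AES, and the fault model is deliberately simplified:

    import random

    rng = random.Random(0)

    # Toy 8-bit S-box (a random bijection), a stand-in for a real cipher.
    SBOX = list(range(256))
    rng.shuffle(SBOX)
    INV_SBOX = [0] * 256
    for i, v in enumerate(SBOX):
        INV_SBOX[v] = i

    KEY = 0x3A  # secret last-round key byte (toy)

    def faulty_pairs(n):
        """Correct/faulty ciphertext byte pairs: the fault is a biased
        single-bit flip on the state entering the final S-box."""
        pairs = []
        for _ in range(n):
            state = rng.randrange(256)
            c_ok = SBOX[state] ^ KEY
            c_bad = SBOX[state ^ (1 << rng.randrange(8))] ^ KEY
            pairs.append((c_ok, c_bad))
        return pairs

    def dfia_rank(pairs):
        """Score each key guess by the total Hamming weight of the
        inferred fault; the correct key yields the most biased (lowest) score."""
        def hw(x):
            return bin(x).count("1")
        scores = []
        for k in range(256):
            total = sum(hw(INV_SBOX[c ^ k] ^ INV_SBOX[f ^ k]) for c, f in pairs)
            scores.append((total, k))
        return min(scores)

    score, guess = dfia_rank(faulty_pairs(32))
    print(f"best guess = {guess:#04x} (true key {KEY:#04x})")

For the correct guess, every inferred fault has Hamming weight 1; wrong guesses produce fault values that look uniform (average weight 4), so the bias singles out the key.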

Jean-Sebastien Coron, Craig Gentry, Shai Halevi, Tancrede Lepoint, Hemanta K. Maji, Eric Miles, Mariana
ePrint Report
We extend the recent zeroizing attacks of Cheon, Han, Lee, Ryu and Stehle (Eurocrypt'15) on multilinear maps to settings where no encodings of zero below the maximal level are available. Some of the new attacks apply to the CLT13 scheme (resulting in a total break) while others apply to (a variant of) the GGH13 scheme (resulting in a weak-DL attack). We also note the limits of these zeroizing attacks.

Amir Moradi, Alexander Wild
ePrint Report
Higher-order side-channel attacks have become a major interest of academia as well as of the industrial sector. This interest is motivated by the development of countermeasures which can prevent leakages up to a certain order. As a concrete example, threshold implementation (TI), an efficient way to realize Boolean masking in hardware, is able to avoid first-order leakages. Trivially, attacks conducted at second (and higher) orders can exploit the corresponding leakages, hence devastating the provided security. The extension of TI to higher orders was therefore anticipated, and was presented at ASIACRYPT 2014. In its underlying univariate setting it can provide security at higher orders, and its area and time overheads naturally increase with the desired security order.

In this work we look at the feasibility of higher-order attacks on first-order TI from another perspective. Instead of increasing the order of resistance by employing higher-order TIs, we realize first-order TI designs following the principles of a power-equalization technique dedicated to FPGA platforms, which naturally hardens them against higher-order attacks. We show that although first-order TI designs equipped with the power-equalization methodology have significant area overhead, they can maintain the same throughput and, more importantly, prevent the higher-order leakages from being practically exploitable with up to 1 billion traces.
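
For background, first-order TI shares each bit three ways such that every output share omits one input share (non-completeness). A minimal sketch of the classic three-share TI of an AND gate over GF(2), independent of the FPGA designs in this work:

    import random

    rng = random.Random(42)

    def share(x):
        """Split bit x into three Boolean shares."""
        a, b = rng.randrange(2), rng.randrange(2)
        return a, b, x ^ a ^ b

    def ti_and(x, y):
        """Three-share TI of z = x & y; output share i never touches
        input share i (non-completeness), giving first-order security."""
        x1, x2, x3 = x
        y1, y2, y3 = y
        z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)
        z2 = (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)
        z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)
        return z1, z2, z3

    for x in (0, 1):
        for y in (0, 1):
            z = ti_and(share(x), share(y))
            assert z[0] ^ z[1] ^ z[2] == x & y
    print("3-share TI AND is correct on all inputs")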

Martin Pettai, Peeter Laud
ePrint Report
We consider how to perform privacy-preserving analyses on private data from different data providers and containing personal information of many different individuals. We combine differential privacy and secret sharing in the same system to protect the privacy of both the data providers and the individuals. We have implemented a prototype of this combination and the overhead of adding differential privacy to secret sharing is small enough to be usable in practice.
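
The differential-privacy side of such a combination can be sketched with the standard Laplace mechanism (the secret-sharing layer is omitted; epsilon, the clipping bound, and the query below are illustrative choices, not the paper's prototype):

    import math
    import random

    rng = random.Random(0)

    def laplace(scale):
        """Sample Laplace(0, scale) by inverse transform."""
        u = rng.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    def dp_sum(values, clip, epsilon):
        """epsilon-DP sum: clip each record to [0, clip] so the
        sensitivity is clip, then add Laplace(clip / epsilon) noise."""
        total = sum(min(max(v, 0.0), clip) for v in values)
        return total + laplace(clip / epsilon)

    data = [rng.uniform(0.0, 100.0) for _ in range(1000)]
    print("true sum:", round(sum(data), 1))
    print("DP sum (epsilon=1):", round(dp_sum(data, 100.0, 1.0), 1))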

Krzysztof Pietrzak, Maciej Skorski
ePrint Report
Computational notions of entropy (a.k.a. pseudoentropy) have found many applications, including leakage-resilient cryptography, deterministic encryption or memory delegation. The most important tools to argue about pseudoentropy are chain rules, which quantify by how much (in terms of quantity and quality) the pseudoentropy of a given random variable X decreases when conditioned on some other variable Z (think for example of X as a secret key and Z as information leaked by a side-channel). In this paper we give a very simple and modular proof of the chain rule for HILL pseudoentropy, improving best known parameters. Our version allows for increasing the acceptable length of leakage in applications up to a constant factor compared to the best previous bounds. As a contribution of independent interest, we provide a comprehensive study of all known versions of the chain rule, comparing their worst-case strength and limitations.
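
Schematically, chain rules for HILL pseudoentropy have the following shape; this generic form with qualitative losses is background for orientation, and the paper's contribution lies in the precise parameters $(\varepsilon', s')$:

\[
H^{\mathrm{HILL}}_{\varepsilon',s'}(X \mid Z) \;\ge\; H^{\mathrm{HILL}}_{\varepsilon,s}(X) - \lambda
\qquad \text{for leakage } Z \in \{0,1\}^{\lambda},
\]

where the quality degradation from $(\varepsilon, s)$ to $(\varepsilon', s')$ is polynomial in $2^{\lambda}$ and $1/\varepsilon$.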

John Kelsey, Kerry A. McKay, Meltem Sonmez Turan
ePrint Report
Random numbers are essential for cryptography. In most real-world systems, these values come from a cryptographic pseudorandom number generator (PRNG), which in turn is seeded by an entropy source. The security of the entire cryptographic system then relies on the accuracy of the claimed amount of entropy provided by the source. If the entropy source provides less unpredictability than is expected, the security of the cryptographic mechanisms is undermined. For this reason, correctly estimating the amount of entropy available from a source is critical.

In this paper, we develop a set of tools for estimating entropy, based on mechanisms that attempt to predict the next sample in a sequence based on all previous samples. These mechanisms are called predictors. We develop a framework for using predictors to estimate entropy, and test them experimentally against both simulated and real noise sources. For comparison, we subject the entropy estimates defined in the August 2012 draft of NIST Special Publication 800-90B to the same tests, and compare their performance.
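
A minimal sketch of the predictor idea: a "most common value so far" predictor whose prediction accuracy is converted into a min-entropy estimate. The confidence-interval machinery and the full predictor suite of SP 800-90B are omitted:

    import math
    import random
    from collections import Counter

    def predictor_entropy(samples):
        """Run a most-common-value-so-far predictor over the sequence and
        turn its accuracy into a per-sample min-entropy estimate."""
        counts = Counter()
        correct = 0
        for s in samples:
            if counts and s == counts.most_common(1)[0][0]:
                correct += 1
            counts[s] += 1
        p = max(correct / len(samples), 1e-9)
        return -math.log2(p)

    # A biased 4-symbol source: '0' appears half the time, so the true
    # min-entropy is -log2(0.5) = 1 bit per sample.
    rng = random.Random(7)
    seq = rng.choices("0123", weights=[3, 1, 1, 1], k=100_000)
    print(round(predictor_entropy(seq), 3), "bits/sample (true: 1.0)")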

Jan Camenisch, Maria Dubovitskaya, Kristiyan Haralambiev, Markulf Kohlweiss
ePrint Report
It takes time for theoretical advances to be adopted in practical schemes. Anonymous credential schemes are no exception. For instance, existing schemes suited for real-world use lack formal, composable definitions, partly because they do not support straight-line extraction and rely on random oracles for their security arguments.

To address this gap, we propose unlinkable redactable signatures (URS), a new building block for privacy-enhancing protocols, which we use to construct the first efficient UC-secure anonymous credential system that supports multiple issuers, selective disclosure of attributes, and pseudonyms. Our scheme is one of the first such systems for which both the size of a credential and its presentation proof are independent of the number of attributes issued in a credential. Moreover, our new credential scheme does not rely on random oracles.

As an important intermediary step, we address the problem of building a functionality for a complex credential system that can cover many different features. Namely, we design a core building block for a single issuer that supports credential issuance and presentation with respect to pseudonyms and then show how to construct a full-fledged credential system with multiple issuers in a modular way. We expect this flexible definitional approach to be of independent interest.

Christina Brzuska, Arno Mittelbach
ePrint Report
Universal Computational Extractors (UCEs), introduced by Bellare, Hoang and Keelveedhi (CRYPTO 2013), are a framework of assumptions on hash functions that allow instantiating random oracles in a large variety of settings. Brzuska, Farshim and Mittelbach (CRYPTO 2014) showed that a large class of UCE assumptions with \emph{computationally} unpredictable sources cannot be achieved if indistinguishability obfuscation exists. In the process of circumventing obfuscation-based attacks, new UCE notions emerged, most notably UCEs with respect to \emph{statistically} unpredictable sources, which suffice for a large class of applications. However, the only standard-model constructions of UCEs are for a small subclass considering only $q$-query sources which are \emph{strongly statistically} unpredictable (Brzuska, Mittelbach; Asiacrypt 2014).

The contributions of this paper are threefold:

1) We show a surprising equivalence between the notions of strong unpredictability and (plain) unpredictability, thereby lifting the construction from Brzuska and Mittelbach to achieve $q$-query UCEs for statistically unpredictable sources. This yields standard-model instantiations for various ($q$-query) primitives including deterministic public-key encryption, message-locked encryption, multi-bit point obfuscation, CCA-secure encryption, and more. For some of these, our construction yields the first standard-model candidate.

2) We study the blow-up that occurs in indistinguishability obfuscation proof techniques due to puncturing and state the \emph{Superfluous Padding Assumption} for indistinguishability obfuscation, which allows us to lift the $q$-query restriction of our construction. We validate the assumption by showing that it holds for virtual black-box obfuscation.

3) Brzuska and Mittelbach require a strong form of point obfuscation secure in the presence of auxiliary input for their construction of UCEs. We show that this assumption is indeed necessary for the construction of injective UCEs.

Robert Lychev, Samuel Jero, Alexandra Boldyreva, Cristina Nita-Rotaru
ePrint Report
QUIC is a secure transport protocol developed by Google and implemented in Chrome in 2013, currently representing one of the most promising solutions to decreasing latency while intending to provide security properties similar to those of TLS.

In this work we shed some light on QUIC's strengths and weaknesses in terms of its provable security and performance guarantees in the presence of attackers. We first introduce a security model for analyzing performance-driven protocols like QUIC and prove that QUIC satisfies our definition under reasonable assumptions on the protocol's building blocks. However, we find that QUIC does not satisfy the traditional notion of forward secrecy that is provided by some modes of TLS, e.g., TLS-DHE.

Our analyses also reveal that with simple bit-flipping and replay attacks on some public parameters exchanged during the handshake, an adversary could easily prevent QUIC from achieving minimal latency advantages, either by having it fall back to TCP or by causing the client and server to have an inconsistent view of their handshake, leading to a failure to complete the connection. We have implemented these attacks and demonstrated that they are practical.

Our results suggest that QUIC's security weaknesses are introduced by the very mechanisms used to reduce latency, which highlights the seemingly inherent trade-off between minimizing latency and providing 'good' security guarantees.
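
The "inconsistent view" attack class can be illustrated abstractly: if handshake parameters are not authenticated, a single flipped bit makes the two sides derive different session keys. A toy sketch (the key derivation below is a hypothetical stand-in, not QUIC's actual scheme):

    import hashlib
    import os

    def derive_key(shared_secret, transcript):
        """Both sides derive the session key from their (supposedly equal)
        view of the handshake transcript."""
        return hashlib.sha256(shared_secret + transcript).hexdigest()

    secret = os.urandom(32)          # assume key agreement itself succeeded
    client_view = b"version=Q025|cid=42|params=..."

    # An on-path attacker flips one bit of the unauthenticated parameters.
    tampered = bytearray(client_view)
    tampered[0] ^= 0x01
    server_view = bytes(tampered)

    print(derive_key(secret, client_view) == derive_key(secret, server_view))
    # -> False: inconsistent handshake views, the connection cannot complete.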

Roel Maes, Vincent van der Leest, Erik van der Sluis, Frans Willems
ePrint Report
PUF-based key generators have been widely considered as a root-of-trust in digital systems. They typically require an error-correcting mechanism (e.g. based on the code-offset method) for dealing with bit errors between the enrollment and reconstruction of keys. When the used PUF does not have full entropy, entropy leakage between the helper data and the device-unique key material can occur. If the entropy level of the PUF becomes too low, the PUF-derived key can be attacked through the publicly available helper data. In this work we provide several solutions for preventing this entropy leakage for PUFs suffering from bias. The methods proposed in this work pose no limit on the amount of bias that can be tolerated, which solves an important open problem for PUF-based key generation. Additionally, the solutions are all evaluated with respect to reliability, efficiency, leakage and reusability, showing that, depending on the requirements of the key generator, different solutions are preferable.
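
The code-offset method mentioned above, in toy form with a repetition code (real generators use stronger codes, and this sketch ignores the bias-induced helper-data leakage that the paper addresses):

    import random

    rng = random.Random(3)
    N = 15  # repetition-code length (toy)

    def enroll(puf_response, key_bit):
        """Helper data = PUF response XOR repetition codeword of key_bit."""
        code = [key_bit] * N
        return [r ^ c for r, c in zip(puf_response, code)]

    def reconstruct(noisy_response, helper):
        """XOR off the re-measured response, then majority-decode."""
        word = [r ^ h for r, h in zip(noisy_response, helper)]
        return 1 if sum(word) > N // 2 else 0

    response = [rng.randrange(2) for _ in range(N)]
    helper = enroll(response, key_bit=1)

    # Re-measure the PUF with roughly 10% bit errors.
    noisy = [b ^ (1 if rng.random() < 0.1 else 0) for b in response]
    print("recovered key bit:", reconstruct(noisy, helper))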


19 June 2015

Announcement
Videos from FSE 2013 are now online.
Academic City, UAE, March 3 - March 5
Event Calendar
Submission: 3 February 2016
Notification: 3 February 2016
From March 3 to March 5
Location: Academic City, UAE
More Information: http://sdiwc.net/conferences/ctisrm2016/

18 June 2015

Mridul Nandi
ePrint Report
Let $P$ be chosen uniformly from the set $\mathcal{P} := \mathrm{Perm}(S)$ of all permutations over a set $S$ of size $N$. In Crypto 2015, Minaud and Seurin proved that for any unbounded-time adversary $A$ making at most $q$ queries, the distinguishing advantage between $P^r$ (sample $P$, then compose it with itself $r$ times) and $P$, denoted $\Delta(P^r; P)$, is at most $(2r+1)q/N$. In this paper we provide an alternative simple proof of an upper bound $2q(r+1)^2/N$ using the well-known coefficient-H technique.
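In display form, the two bounds from the abstract read as follows; the simpler proof yields the second, slightly weaker bound:

\[
\Delta(P^r;\, P) \;\le\; \frac{(2r+1)\,q}{N} \quad \text{(Minaud--Seurin)},
\qquad
\Delta(P^r;\, P) \;\le\; \frac{2q(r+1)^2}{N} \quad \text{(this paper)}.
\]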


17 June 2015

Bingke Ma, Bao Li, Rongl
ePrint Report
In this paper, we present improved preimage attacks on the reduced-round \texttt{GOST} hash function family, which serves as the new Russian hash standard, with the aid of techniques such as the rebound attack, the Meet-in-the-Middle preimage attack and multicollisions. Firstly, a preimage attack on 5-round \texttt{GOST-256} is proposed, which is the first preimage attack on \texttt{GOST-256} at the hash function level. Then we extend the (previous) attacks on 5-round \texttt{GOST-256} and 6-round \texttt{GOST-512} to 6.5 and 7.5 rounds respectively by exploiting the involution property of the \texttt{GOST} transposition operation.

Secondly, inspired by the preimage attack on \texttt{GOST-256}, we also study the impact of four representative truncation patterns on the resistance of \texttt{AES}-like compression functions against the Meet-in-the-Middle preimage attack, and propose two stronger truncation patterns which make it more difficult to launch this type of attack. Based on our investigations, we are able to slightly improve the previous pseudo-preimage attacks on reduced-round \texttt{Grøstl-256}.
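
The Meet-in-the-Middle idea behind such preimage attacks can be shown on a deliberately weak toy hash H(x || y) = F(x) XOR G(y): tabulate F forward, then search for a G(y) that meets the target. This is a generic illustration of the technique, not the paper's attack on GOST:

    import hashlib

    def F(x):  # toy forward chunk
        return hashlib.sha256(b"F" + x).digest()[:4]

    def G(y):  # toy backward chunk
        return hashlib.sha256(b"G" + y).digest()[:4]

    def H(x, y):  # deliberately weak compression: H = F(x) XOR G(y)
        return bytes(a ^ b for a, b in zip(F(x), G(y)))

    target = H(b"\x12\x34", b"\xab\xcd")

    # Meet in the middle: table of F(x), then look up F(x) = target XOR G(y).
    # Cost is roughly 2 * 2^16 evaluations instead of 2^32 brute force.
    table = {F(bytes([i, j])): bytes([i, j])
             for i in range(256) for j in range(256)}
    for i in range(256):
        for j in range(256):
            y = bytes([i, j])
            need = bytes(a ^ b for a, b in zip(target, G(y)))
            if need in table:
                print("preimage:", table[need].hex(), y.hex())
                break
        else:
            continue
        break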

Tarik Moataz, Travis Mayberry, Erik-Oliver Blass
ePrint Report
There have been several attempts recently at using homomorphic encryption to increase the efficiency of Oblivious RAM protocols. One of the most successful has been Onion ORAM, which achieves O(1) communication overhead with polylogarithmic server computation. However, it has a number of drawbacks. It requires a very large block size of B = Ω(log^5 N), with large constants. Although it needs only polylogarithmic computation complexity, that computation consists mostly of expensive homomorphic multiplications. Finally, it achieves O(1) communication complexity but only amortized over a number of accesses. In this work we aim to address these problems, reducing the required block size to Ω(log^3 N), removing almost all of the homomorphic multiplications and achieving O(1) worst-case communication complexity. We achieve this by replacing their homomorphic eviction routine with a much less expensive permute-and-merge one which eliminates homomorphic multiplications while maintaining the same level of security. In turn, this removes the need for the layered encryption that Onion ORAM relies on and reduces both the minimum block size and worst-case bandwidth.

Tobias Schneider, Amir Moradi, Tim Güneysu
ePrint Report
The protection of cryptographic implementations against higher-order attacks has become an important topic in the side-channel community after the advent of enhanced measurement equipment that enables the capture of millions of power traces in reasonably short time. However, the preprocessing of multi-million traces for such an attack is still challenging, in particular since, in the case of (multivariate) higher-order attacks, all traces need to be parsed at least twice. Even worse, partitioning the captured traces into smaller groups to parallelize computations is hardly possible with current techniques.

In this work we introduce procedures that allow iterative computation of correlation in a side-channel analysis attack at any arbitrary order in both univariate and multivariate settings. The advantages of our proposed solutions are manifold: i) they provide stable results, i.e., high accuracy of the estimations is maintained even as the number of used traces increases, ii) each trace needs to be processed only once, and the result of the attack can be obtained at any time (without having to reparse the whole trace pool when adding more traces), and iii) the computations can be efficiently parallelized, e.g., by splitting the trace pool into smaller subsets and processing each by a single thread on a multi-threading or cloud-computing platform. In short, our constructions allow efficiently performing higher-order side-channel analysis attacks (e.g., on hundreds of millions of traces), which is of crucial importance when practical evaluation of masking schemes needs to be performed.
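
The first-order flavor of such single-pass, mergeable statistics can be sketched with Welford-style updates; the paper generalizes this to arbitrary orders, and the merge formulas below are the standard pairwise ones, not taken from the paper:

    class OnlineCorrelation:
        """Single-pass Pearson correlation with O(1) state; instances
        computed on disjoint trace subsets can be merged (parallelizable)."""

        def __init__(self):
            self.n = 0
            self.mx = self.my = 0.0     # running means
            self.m2x = self.m2y = 0.0   # sums of squared deviations
            self.cxy = 0.0              # co-moment

        def update(self, x, y):
            self.n += 1
            dx = x - self.mx
            self.mx += dx / self.n
            dy = y - self.my
            self.my += dy / self.n
            self.m2x += dx * (x - self.mx)
            self.m2y += dy * (y - self.my)
            self.cxy += dx * (y - self.my)

        def merge(self, o):
            n = self.n + o.n
            dx, dy = o.mx - self.mx, o.my - self.my
            w = self.n * o.n / n
            self.m2x += o.m2x + dx * dx * w
            self.m2y += o.m2y + dy * dy * w
            self.cxy += o.cxy + dx * dy * w
            self.mx += dx * o.n / n
            self.my += dy * o.n / n
            self.n = n

        def corr(self):
            return self.cxy / (self.m2x * self.m2y) ** 0.5

    import random
    rng = random.Random(0)
    a, b = OnlineCorrelation(), OnlineCorrelation()
    for _ in range(10_000):
        x = rng.gauss(0, 1)
        (a if rng.random() < 0.5 else b).update(x, 0.8 * x + rng.gauss(0, 0.6))
    a.merge(b)
    print("correlation:", round(a.corr(), 3))  # close to 0.8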

Eli Ben-Sasson, Iddo Ben-Tov, Ivan Damgard, Yuval Ishai, Noga ron-Zewi
ePrint Report
Several well-known public key encryption schemes, including those of Alekhnovich (FOCS 2003), Regev (STOC 2005), and Gentry, Peikert and Vaikuntanathan (STOC 2008), rely on the conjectured intractability of inverting noisy linear encodings. These schemes are limited in that they either require the underlying field to grow with the security parameter, or alternatively they can work over the binary field but have a low noise entropy that gives rise to sub-exponential attacks.

Motivated by the goal of efficient public key cryptography, we study the possibility of obtaining improved security over the binary field by using different noise distributions.

Inspired by an abstract encryption scheme of Micciancio (PKC 2010), we consider an abstract encryption scheme that unifies all three schemes mentioned above and allows for arbitrary choices of the underlying field and noise distributions.

Our main result establishes an unexpected connection between the power of such encryption schemes and additive combinatorics. Concretely, we show that under the ``approximate duality conjecture'' from additive combinatorics (Ben-Sasson and Zewi, STOC 2011), every instance of the abstract encryption scheme over the binary field can be attacked in time $2^{O(\sqrt{n})}$, where $n$ is the maximum of the ciphertext size and the public key size (and where the latter excludes public randomness used for specifying the code).

On the flip side, counterexamples to the above conjecture (if false) may lead to candidate public-key encryption schemes with improved security guarantees.

We also show, using a simple argument that relies on agnostic learning of parities (Kalai, Mansour and Verbin, STOC 2008), that any such encryption scheme can be {\em unconditionally} attacked in time $2^{O(n/\log n)}$, where $n$ is the ciphertext size.

Combining this attack with the security proof of Regev's cryptosystem, we immediately obtain an algorithm that solves the {\em learning parity with noise (LPN)} problem in time $2^{O(n/\log \log n)}$ using only $n^{1+\epsilon}$ samples, reproducing the result of Lyubashevsky (Random 2005) in a conceptually different way.

Finally, we study the possibility of instantiating the abstract encryption scheme over constant-size rings to yield encryption schemes with no decryption error. We show that over the binary field decryption errors are inherent. On the positive side, building on the construction of matching vector families (Grolmusz, Combinatorica 2000; Efremenko, STOC 2009; Dvir, Gopalan and Yekhanin, FOCS 2010), we suggest plausible candidates for secure instances of the framework over constant-size rings that can offer perfectly correct decryption.
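
For concreteness, the noisy linear encodings underlying these schemes look as follows in the LPN case: samples $(a, \langle a, s\rangle + e \bmod 2)$ with $a$ uniform and $e$ Bernoulli. A toy sample generator (the parameters are illustrative only):

    import random

    rng = random.Random(0)
    n, tau = 128, 0.125  # dimension and Bernoulli noise rate (illustrative)

    secret = [rng.randrange(2) for _ in range(n)]

    def lpn_sample():
        """One LPN sample: (a, <a, s> + e mod 2) with e ~ Bernoulli(tau)."""
        a = [rng.randrange(2) for _ in range(n)]
        e = 1 if rng.random() < tau else 0
        b = (sum(ai & si for ai, si in zip(a, secret)) + e) % 2
        return a, b

    a, b = lpn_sample()
    print("label bit:", b)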
