IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
21 June 2015
Milivoj Simeonovski, Fabian Bendun, Muhammad Rizwan Asghar, Michael Backes, Ninja Marnau and
In this work, we propose Oblivion, a universal framework to support the automation of the right to be forgotten in a scalable, provable and privacy-preserving manner. First, Oblivion enables a user to automatically find and tag her disseminated personal information using natural language processing (NLP) and image recognition techniques, and to file a request in a privacy-preserving manner. Second, Oblivion provides indexing systems with an automated and provable eligibility mechanism, asserting that the author of a request is indeed affected by an online resource. The automated eligibility proof ensures censorship-resistance, so that only legitimately affected individuals can request the removal of corresponding links from search results. We have conducted comprehensive evaluations of Oblivion, showing that the framework is capable of handling 278 removal requests per second on a standard notebook (2.5 GHz dual core), and is hence suitable for large-scale deployment.
Patrick Haddad, Viktor Fischer, Florent Bernard, Jean Nicolai
Debrup Chakraborty, Cuauhtemoc Mancillas-Lopez, Palash Sarkar
It has been almost canonised that an encryption scheme suitable for the application of disk encryption must be length preserving, i.e., this rules out schemes like authenticated encryption, where an authentication tag produced as part of the ciphertext makes ciphertexts longer than the corresponding plaintexts. The notion of a tweakable enciphering scheme (TES) has been formalised as the appropriate primitive for disk encryption, and it has been argued that such schemes provide the maximum security possible for a tag-less scheme. On the other hand, TESs are less efficient than some existing authenticated encryption schemes, and a TES cannot provide true authentication, as it has no authentication tag. In this paper, we analyze the possibility of using length-expanding encryption schemes for disk encryption. On the negative side, we argue that nonce-based authenticated encryption schemes are not appropriate for this application. On the positive side, we demonstrate that deterministic authenticated encryption (DAE) schemes may have more advantages than disadvantages compared to a TES when used for disk encryption. Finally, we propose a new deterministic authenticated encryption scheme called BCTR which is suitable for this purpose. We provide the full specification of BCTR, prove its security and report an efficient implementation in reconfigurable hardware. Our experiments suggest that BCTR performs significantly better than existing TESs and existing DAE schemes.
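To picture why a DAE trades length preservation for authenticity, here is a minimal SIV-style sketch in Python (CMAC-then-CTR, in the spirit of Rogaway and Shrimpton's SIV) for a single disk sector. It is not the paper's BCTR scheme, and the 8-byte sector encoding and two-key split are assumptions made for illustration.

```python
# Minimal SIV-style DAE sketch for one disk sector. NOT the paper's BCTR
# scheme; a generic CMAC-then-CTR construction shown only to illustrate why
# a DAE ciphertext exceeds the plaintext: the synthetic IV doubles as the tag.
import os
from cryptography.hazmat.primitives.cmac import CMAC
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def _cmac(key: bytes, data: bytes) -> bytes:
    c = CMAC(algorithms.AES(key))
    c.update(data)
    return c.finalize()                                   # 16-byte PRF output

def dae_encrypt(mac_key: bytes, enc_key: bytes, sector: int, pt: bytes) -> bytes:
    # deterministic: the same (sector, plaintext) always maps to the same output
    siv = _cmac(mac_key, sector.to_bytes(8, "big") + pt)  # synthetic IV = tag
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(siv)).encryptor()
    return siv + enc.update(pt) + enc.finalize()          # |ct| = |pt| + 16

def dae_decrypt(mac_key: bytes, enc_key: bytes, sector: int, ct: bytes) -> bytes:
    siv, body = ct[:16], ct[16:]
    dec = Cipher(algorithms.AES(enc_key), modes.CTR(siv)).decryptor()
    pt = dec.update(body) + dec.finalize()
    if _cmac(mac_key, sector.to_bytes(8, "big") + pt) != siv:
        raise ValueError("sector authentication failed")  # true authenticity check
    return pt

k_mac, k_enc = os.urandom(16), os.urandom(16)
blob = dae_encrypt(k_mac, k_enc, 7, b"sector payload " * 32)
assert dae_decrypt(k_mac, k_enc, 7, blob) == b"sector payload " * 32
```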
Nahid Farhady Ghalaty, Bilgiday Yuce, Mostafa Taha, Patrick Schaumont
There is no sharp distinction between passive attacks based on side-channel leakage and active attacks based on fault injection. Fault behavior can be processed as side-channel information, offering all the benefits of Differential Power Analysis, including noise averaging and hypothesis testing by correlation. This paper introduces Differential Fault Intensity Analysis, which combines the principles of Differential Power Analysis and fault injection. We observe that most faults are biased, such as single-bit, two-bit, or three-bit errors in a byte, and that this property can reveal the secret key through a hypothesis test. Unlike Differential Fault Analysis, we do not require precise analysis of the fault propagation. Unlike Fault Sensitivity Analysis, we do not require a fault sensitivity profile for the device under attack. We demonstrate our method on an FPGA implementation of AES with a fault injection model. We find that with an average of 7 fault injections, we can reconstruct a full 128-bit AES key.
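A miniature of this hypothesis test can be simulated in software. The Python sketch below is an illustrative assumption rather than the paper's FPGA setup: it injects single-bit faults at the input of the last-round AES S-box and ranks key guesses by the average Hamming weight of the recovered fault value; only the correct guess yields consistently biased (low-weight) faults.

```python
# Miniature DFIA simulation: biased single-bit faults at the input of the
# last-round AES S-box reveal the round-key byte via a Hamming-weight test.
import random

def _rotl8(x, n):
    return ((x << n) | (x >> (8 - n))) & 0xFF

# Build the AES S-box with the standard GF(2^8) generator construction,
# then invert it; this avoids pasting a 256-entry table.
SBOX = [0] * 256
p = q = 1
while True:
    p = (p ^ (p << 1) ^ (0x1B if p & 0x80 else 0)) & 0xFF  # p := 3*p in GF(2^8)
    q ^= (q << 1) & 0xFF                                   # q := q/3 (i.e. q*0xF6)
    q ^= (q << 2) & 0xFF
    q ^= (q << 4) & 0xFF
    if q & 0x80:
        q ^= 0x09
    # affine transformation of the field inverse q = p^(-1)
    SBOX[p] = q ^ _rotl8(q, 1) ^ _rotl8(q, 2) ^ _rotl8(q, 3) ^ _rotl8(q, 4) ^ 0x63
    if p == 1:
        break
SBOX[0] = 0x63
INV_SBOX = [0] * 256
for i, v in enumerate(SBOX):
    INV_SBOX[v] = i

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

random.seed(1)
KEY_BYTE = 0x3A                         # hypothetical last-round key byte
pairs = []                              # (correct, faulty) ciphertext bytes
for _ in range(20):
    s = random.randrange(256)           # state byte entering the last SubBytes
    fault = 1 << random.randrange(8)    # biased fault model: single-bit flip
    pairs.append((SBOX[s] ^ KEY_BYTE, SBOX[s ^ fault] ^ KEY_BYTE))

# Hypothesis test: for the right key guess, INV_SBOX[c^k] ^ INV_SBOX[f^k]
# recovers the injected fault, whose Hamming weight is biased low; wrong
# guesses behave randomly (average weight around 4).
def score(k: int) -> float:
    return sum(hamming_weight(INV_SBOX[c ^ k] ^ INV_SBOX[f ^ k])
               for c, f in pairs) / len(pairs)

best = min(range(256), key=score)
print(f"recovered key byte {best:#04x}, true key byte {KEY_BYTE:#04x}")
```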
Jean-Sebastien Coron, Craig Gentry, Shai Halevi, Tancrede Lepoint, Hemanta K. Maji, Eric Miles, Mariana
Amir Moradi, Alexander Wild
In this work we look at the feasibility of higher-order attacks on first-order TI from another perspective. Instead of increasing the order of resistance by employing higher-order TI, we realize first-order TI designs following the principles of a power-equalization technique dedicated to FPGA platforms, which naturally hardens the design against higher-order attacks. We show that although these first-order TI designs, additionally equipped with the power-equalization methodology, carry significant area overhead, they maintain the same throughput and, more importantly, keep higher-order leakages from being practically exploitable with up to 1 billion traces.
Martin Pettai, Peeter Laud
Krzysztof Pietrzak, Maciej Skorski
John Kelsey, Kerry A. McKay, Meltem Sonmez Turan
In this paper, we develop a set of tools for estimating entropy, based on mechanisms that attempt to predict the next sample in a sequence based on all previous samples. These mechanisms are called predictors. We develop a framework for using predictors to estimate entropy, and test them experimentally against both simulated and real noise sources. For comparison, we subject the entropy estimates defined in the August 2012 draft of NIST Special Publication 800-90B to the same tests, and compare their performance.
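As a rough illustration of the predictor idea (not the paper's full tool set), the following Python sketch implements a single "most common value" predictor and converts its hit rate into a per-sample min-entropy estimate; the 99% confidence margin and the simulated demo source are assumptions chosen for illustration.

```python
# One predictor in miniature: guess that the next sample repeats the most
# frequent value seen so far. Its hit rate bounds per-sample predictability,
# which converts to a min-entropy estimate. A real framework runs a family
# of predictors and keeps the most pessimistic estimate.
import math
import random
from collections import Counter

def mcv_predictor_estimate(samples) -> float:
    counts = Counter()
    hits = tries = 0
    for s in samples:
        if counts:
            tries += 1
            if s == counts.most_common(1)[0][0]:   # predict the modal value
                hits += 1
        counts[s] += 1
    p = hits / tries
    # conservative upper bound on predictability (99% confidence margin)
    p_upper = min(1.0, p + 2.576 * math.sqrt(p * (1 - p) / tries))
    p_upper = max(p_upper, 1.0 / len(counts))      # floor: random guess of k values
    return -math.log2(p_upper)                     # min-entropy, bits per sample

random.seed(0)
biased = random.choices(range(4), weights=[70, 10, 10, 10], k=100_000)
print(f"estimated min-entropy: {mcv_predictor_estimate(biased):.3f} bits/sample")
# ideal value for this simulated source: -log2(0.7), about 0.515 bits/sample
```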
Jan Camenisch, Maria Dubovitskaya, Kristiyan Haralambiev, Markulf Kohlweiss
To address this gap, we propose unlinkable redactable signatures (URS), a new building block for privacy-enhancing protocols, which we use to construct the first efficient UC-secure anonymous credential system that supports multiple issuers, selective disclosure of attributes, and pseudonyms. Our scheme is one of the first such systems for which both the size of a credential and its presentation proof are independent of the number of attributes issued in a credential. Moreover, our new credential scheme does not rely on random oracles.
As an important intermediary step, we address the problem of building a functionality for a complex credential system that can cover many different features. Namely, we design a core building block for a single issuer that supports credential issuance and presentation with respect to pseudonyms and then show how to construct a full-fledged credential system with multiple issuers in a modular way. We expect this flexible definitional approach to be of independent interest.
Christina Brzuska, Arno Mittelbach
The contributions of this paper are threefold:
1) We show a surprising equivalence between the notions of strong unpredictability and (plain) unpredictability, thereby lifting the construction of Brzuska and Mittelbach to achieve $q$-query UCEs for statistically unpredictable sources. This yields standard-model instantiations for various ($q$-query) primitives including deterministic public-key encryption, message-locked encryption, multi-bit point obfuscation, CCA-secure encryption, and more. For some of these, our construction yields the first standard-model candidate.
2) We study the blow-up that occurs in indistinguishability obfuscation proof techniques due to puncturing, and state the Superfluous Padding Assumption for indistinguishability obfuscation, which allows us to lift the $q$-query restriction of our construction. We validate the assumption by showing that it holds for virtual black-box obfuscation.
3) Brzuska and Mittelbach require a strong form of point obfuscation secure in the presence of auxiliary input for their construction of UCEs. We show that this assumption is indeed necessary for the construction of injective UCEs.
Robert Lychev, Samuel Jero, Alexandra Boldyreva, Cristina Nita-Rotaru
QUIC is a secure transport protocol developed by Google and implemented in Chrome in 2013, currently representing one of the most promising solutions to decreasing latency while aiming to provide security properties similar to those of TLS. In this work we shed some light on QUIC's strengths and weaknesses in terms of its provable security and performance guarantees in the presence of attackers. We first introduce a security model for analyzing performance-driven protocols like QUIC and prove that QUIC satisfies our definition under reasonable assumptions on the protocol's building blocks. However, we find that QUIC does not satisfy the traditional notion of forward secrecy that is provided by some modes of TLS, e.g., TLS-DHE. Our analyses also reveal that with simple bit-flipping and replay attacks on some public parameters exchanged during the handshake, an adversary could easily prevent QUIC from achieving minimal latency advantages, either by having it fall back to TCP or by causing the client and server to have an inconsistent view of their handshake, leading to a failure to complete the connection. We have implemented these attacks and demonstrated that they are practical. Our results suggest that QUIC's security weaknesses are introduced by the very mechanisms used to reduce latency, which highlights the seemingly inherent trade-off between minimizing latency and providing 'good' security guarantees.
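The handshake-inconsistency failure mode can be caricatured in a few lines. The Python toy below is emphatically not QUIC's wire format or key schedule (the field names and hash-based derivation are invented for illustration); it only shows how one flipped bit in an unauthenticated parameter leaves the two sides deriving different keys, so the connection must be retried.

```python
# Toy illustration only: not QUIC's wire format or key schedule. Shows the
# mechanism behind the attack class above: one flipped bit in an
# unauthenticated handshake parameter gives client and server inconsistent
# transcripts and therefore mismatched keys.
import hashlib
import os

def derive_key(transcript: bytes) -> bytes:
    # stand-in for deriving connection keys from the handshake messages seen
    return hashlib.sha256(transcript).digest()

client_hello = b"ver=Q025;scid=" + os.urandom(8).hex().encode()

tampered = bytearray(client_hello)
tampered[4] ^= 0x01                     # on-path attacker flips one bit of 'ver'
server_view = bytes(tampered)

client_key = derive_key(client_hello)   # client keys off what it sent
server_key = derive_key(server_view)    # server keys off what it received
print("consistent handshake:", client_key == server_key)
# False: the handshake cannot complete and must be retried, forfeiting
# exactly the latency advantage described above
```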
Roel Maes, Vincent van der Leest, Erik van der Sluis, Frans Willems
19 June 2015
Academic City, UAE, March 3 - March 5
Notification: 3 February 2016
From March 3 to March 5
Location: Academic City, UAE
More Information: http://sdiwc.net/conferences/ctisrm2016/
18 June 2015
Mridul Nandi
17 June 2015
Bingke Ma, Bao Li, Rongl
Secondly, inspired by the preimage attack on GOST-256, we also study the impact of four representative truncation patterns on the resistance of AES-like compression functions against the Meet-in-the-Middle preimage attack, and propose two stronger truncation patterns which make it more difficult to launch this type of attack. Based on our investigations, we are able to slightly improve the previous pseudo-preimage attacks on reduced-round Grøstl-256.
Tarik Moataz, Travis Mayberry, Erik-Oliver Blass
Tobias Schneider, Amir Moradi, Tim Güneysu
In this work we introduce procedures that allow iterative computation of correlation in a side-channel analysis attack at any arbitrary order, in both univariate and multivariate settings. The advantages of our proposed solutions are manifold: i) they provide stable results, i.e., high accuracy of the estimations is maintained as the number of used traces increases, ii) each trace needs to be processed only once, and the result of the attack can be obtained at any time (without reparsing the whole trace pool when adding more traces), and iii) the computations can be efficiently parallelized, e.g., by splitting the trace pool into smaller subsets and processing each by a single thread on a multi-threading or cloud-computing platform. In short, our constructions allow efficiently performing higher-order side-channel analysis attacks (e.g., on hundreds of millions of traces), which is of crucial importance when practical evaluation of masking schemes needs to be performed.
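The one-pass, mergeable flavor of such procedures can be sketched for the simplest case. The Python snippet below keeps only running sums for first-order Pearson correlation, so each trace is touched once and per-thread accumulators merge; note that the paper's actual contribution goes further, using numerically stable incremental central moments that extend to arbitrary orders, which raw sums like these do not provide.

```python
# One-pass, mergeable correlation sketch (first order only). Raw running sums
# are NOT numerically stable for huge trace pools; the paper's procedures
# achieve stability via incremental central moments and generalize to
# arbitrary orders. Names and the toy leakage model are illustrative.
import math
import random

class CorrAccumulator:
    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sxx = self.syy = self.sxy = 0.0

    def update(self, x: float, y: float) -> None:
        # x: measured leakage sample, y: hypothetical power-model value;
        # each trace is touched exactly once
        self.n += 1
        self.sx += x; self.sy += y
        self.sxx += x * x; self.syy += y * y; self.sxy += x * y

    def merge(self, other: "CorrAccumulator") -> None:
        # combine a partial result from another thread or machine
        self.n += other.n
        self.sx += other.sx; self.sy += other.sy
        self.sxx += other.sxx; self.syy += other.syy; self.sxy += other.sxy

    def correlation(self) -> float:
        # Pearson's r from the running sums, available at any time
        cov = self.sxy - self.sx * self.sy / self.n
        var_x = self.sxx - self.sx ** 2 / self.n
        var_y = self.syy - self.sy ** 2 / self.n
        return cov / math.sqrt(var_x * var_y)

random.seed(0)
threads = [CorrAccumulator(), CorrAccumulator()]
for acc in threads:                     # two workers, each sees half the traces
    for _ in range(50_000):
        model = random.gauss(0.0, 1.0)
        leakage = 0.8 * model + random.gauss(0.0, 1.0)
        acc.update(leakage, model)
threads[0].merge(threads[1])
print(f"correlation: {threads[0].correlation():.3f}")   # about 0.62
```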
Eli Ben-Sasson, Iddo Ben-Tov, Ivan Damgard, Yuval Ishai, Noga Ron-Zewi
Motivated by the goal of efficient public key cryptography, we study the possibility of obtaining improved security over the binary field by using different noise distributions.
Inspired by an abstract encryption scheme of Micciancio (PKC 2010), we consider an abstract encryption scheme that unifies all three schemes mentioned above and allows for arbitrary choices of the underlying field and noise distributions.
Our main result establishes an unexpected connection between the power of such encryption schemes and additive combinatorics. Concretely, we show that under the "approximate duality conjecture" from additive combinatorics (Ben-Sasson and Zewi, STOC 2011), every instance of the abstract encryption scheme over the binary field can be attacked in time $2^{O(\sqrt{n})}$, where $n$ is the maximum of the ciphertext size and the public key size (and where the latter excludes public randomness used for specifying the code).
On the flip side, counterexamples to the above conjecture (if false) may lead to candidate public-key encryption schemes with improved security guarantees.
We also show, using a simple argument that relies on agnostic learning of parities (Kalai, Mansour and Verbin, STOC 2008), that any such encryption scheme can be unconditionally attacked in time $2^{O(n/\log n)}$, where $n$ is the ciphertext size.
Combining this attack with the security proof of Regev's cryptosystem, we immediately obtain an algorithm that solves the learning parity with noise (LPN) problem in time $2^{O(n/\log\log n)}$ using only $n^{1+\epsilon}$ samples, reproducing the result of Lyubashevsky (RANDOM 2005) in a conceptually different way.
Finally, we study the possibility of instantiating the abstract encryption scheme over constant-size rings to yield encryption schemes with no decryption error. We show that over the binary field decryption errors are inherent. On the positive side, building on the construction of matching vector families (Grolmusz, Combinatorica 2000; Efremenko, STOC 2009; Dvir, Gopalan and Yekhanin, FOCS 2010), we suggest plausible candidates for secure instances of the framework over constant-size rings that can offer perfectly correct decryption.