International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

08 June 2015

Bart Mennink, Reza Reyhanitabar, Damian Vizár
ePrint Report
We provide a security analysis for full-state keyed Sponge and full-state Duplex constructions. Our results can be used for making a large class of Sponge-based authenticated encryption schemes more efficient by concurrent absorption of associated data and message blocks. In particular, we introduce and analyze a new variant of SpongeWrap with almost free authentication of associated data. The idea of using full-state message absorption for higher efficiency was first made explicit in the Donkey Sponge MAC construction, but without any formal security proof. Recently, Gazi, Pietrzak and Tessaro (CRYPTO 2015) have provided a proof for the fixed-output-length variant of Donkey Sponge. Yasuda and Sasaki (CT-RSA 2015) have considered partially full-state Sponge-based authenticated encryption schemes for efficient incorporation of associated data. In this work, we unify, simplify, and generalize these results about the security and applicability of full-state keyed Sponge and Duplex constructions; in particular, for designing more efficient authenticated encryption schemes. Compared to the proof of Gazi et al., our analysis directly targets the original Donkey Sponge construction as an arbitrary-output-length function. Our treatment is also more general than that of Yasuda and Sasaki, while yielding a more efficient authenticated encryption mode for the case that associated data might be longer than messages.
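To make the full-state absorption concrete, here is a minimal duplex sketch in Python. The permutation is a stand-in built from SHA-256 (not a real permutation, and not the Keccak-f function), and the state sizes, padding and key loading are simplified assumptions; the only point illustrated is that input blocks may cover the whole state while output is squeezed from the outer part only.

```python
import hashlib

B = 32   # toy state size in bytes ("b" bits)
R = 16   # outer part used for output ("r" bits)

def perm(state: bytes) -> bytes:
    # Stand-in for the public permutation; SHA-256 is not a permutation,
    # it is only used here to make the data flow concrete.
    return hashlib.sha256(state).digest()[:B]

class FullStateDuplex:
    def __init__(self, key: bytes):
        # Toy keyed initialization of the b-bit state.
        self.state = perm(key.ljust(B, b"\x00"))

    def duplex(self, block: bytes, outlen: int) -> bytes:
        # Full-state absorption: the block may be as wide as the whole
        # state, so associated data and message can be absorbed together.
        assert len(block) <= B and outlen <= R
        padded = block.ljust(B, b"\x00")
        self.state = perm(bytes(s ^ p for s, p in zip(self.state, padded)))
        # Squeezing remains restricted to the outer r-bit part.
        return self.state[:outlen]

d = FullStateDuplex(b"key")
keystream = d.duplex(b"AD block || message block", 16)
```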

Sonia Belaïd, Jean-Sébastien Coron, Pierre-Alain Fouque, Benoît Gérard, Jean-Gabriel Kammerer, Emmanuel Prouff
ePrint Report
A side-channel analysis of multiplication in GF(2^{128}) has recently been published by Belaïd, Fouque and Gérard at Asiacrypt 2014, with an application to AES-GCM. Using the least significant bit of the Hamming weight of the multiplication result, the authors have shown how to recover the secret multiplier efficiently. However, this least significant bit is very sensitive to measurement noise, which implies that without averaging their attack can only work for high signal-to-noise ratios (SNR > 128). In this paper we describe a new side-channel attack against the multiplication in GF(2^{128}) that uses the most significant bits of the Hamming weight. We show that much higher noise levels can then be tolerated. For instance, with an SNR equal to 8, the key can be recovered using 2^{20} consumption traces with time and memory complexities respectively equal to 2^{51.68} and 2^{36}. We moreover show that the new method can be extended to attack the fresh re-keying countermeasure proposed by Medwed, Standaert, Großschädl and Regazzoni at Africacrypt 2010.
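For readers who want to see the leakage model, the sketch below multiplies two elements of GF(2^{128}) using the GCM reduction polynomial (in plain polynomial bit order, not GCM's reflected convention) and returns a noisy Hamming-weight observation; the noise level and variable names are illustrative assumptions, and the actual key-recovery procedure of the paper is not reproduced.

```python
import random

# GF(2^128) with reduction polynomial x^128 + x^7 + x^2 + x + 1 (the GCM field).
POLY = (1 << 128) | (1 << 7) | (1 << 2) | (1 << 1) | 1

def gf128_mul(a: int, b: int) -> int:
    # Carry-less multiplication followed by polynomial reduction.
    res = 0
    for i in range(128):
        if (b >> i) & 1:
            res ^= a << i
    for i in range(res.bit_length() - 1, 127, -1):
        if (res >> i) & 1:
            res ^= POLY << (i - 128)
    return res

def noisy_hamming_weight(x: int, sigma: float) -> float:
    # Leakage model: Hamming weight of the result plus Gaussian noise.
    # The attack exploits the most significant bits of this value, which
    # remain reliable at much lower SNR than the least significant bit.
    return bin(x).count("1") + random.gauss(0.0, sigma)

h = random.getrandbits(128)   # secret multiplier (e.g. the GHASH key)
m = random.getrandbits(128)   # known input block
leak = noisy_hamming_weight(gf128_mul(h, m), sigma=2.0)
```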

Moni Naor, Eylon Yogev
ePrint Report
Many efficient data structures use randomness, allowing them to improve upon deterministic ones. Usually, their efficiency and/or correctness are analyzed using probabilistic tools under the assumption that the inputs and queries are independent of the internal randomness of the data structure. In this work, we consider data structures in a more robust model, which we call the adversarial model. Roughly speaking, this model allows an adversary to choose inputs and queries adaptively according to previous responses. Specifically, we consider a data structure known as "Bloom filter" and prove a tight connection between Bloom filters in this model and cryptography.

A Bloom filter represents a set S of elements approximately, by using fewer bits than a precise representation. The price for succinctness is allowing some errors: for any x in S it should always answer 'Yes', and for any x not in S it should answer 'Yes' only with small probability.

In the adversarial model, we consider both efficient adversaries (that run in polynomial time) and computationally unbounded adversaries that are only bounded in the number of queries they can make. For computationally bounded adversaries, we show that non-trivial (memory-wise) Bloom filters exist if and only if one-way functions exist. For unbounded adversaries we show that there exists a Bloom filter for sets of size n and error eps, that is secure against t queries and uses only O(n*log(1/eps) + t) bits of memory. In comparison, n*log(1/eps) is the best possible under a non-adaptive adversary.
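As a reminder of the object under study, here is a minimal (non-adversarially-robust) Bloom filter; the SHA-256-based hashing and the sizing constants are illustrative assumptions and make no claim about the constructions analyzed in the paper.

```python
import hashlib
import math

class BloomFilter:
    def __init__(self, n: int, eps: float):
        # Standard sizing: ~1.44 * n * log2(1/eps) bits and log2(1/eps)
        # hash functions give a false-positive rate of roughly eps.
        self.m = max(1, math.ceil(1.44 * n * math.log2(1 / eps)))
        self.k = max(1, round(math.log2(1 / eps)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def query(self, item: bytes) -> bool:
        # Always 'Yes' for members; 'Yes' with probability ~eps otherwise.
        return all((self.bits[p // 8] >> (p % 8)) & 1 for p in self._positions(item))

bf = BloomFilter(n=1000, eps=0.01)
bf.add(b"alice")
assert bf.query(b"alice")
```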

Daniel R. L. Brown
ePrint Report
Two alternating vector operations on a cubic hypersurface are given simple expressions. Direct use of the first operation's expression seems less efficient than state-of-the-art elliptic curve cryptography. The second expression seems mainly interesting towards an elementary exposition about elliptic curve theory.

Qinglong Zhang, Zongbin Liu, Cunqing Ma, Changting Li, Jiwu Jing
ePrint Report
Ring oscillator (RO) based physically unclonable functions (PUFs) on FPGAs are crucial and popular for their nice properties and easy implementation. The compensated measurement based on the ratio of two ring oscillators' frequencies proves to be particularly effective for extracting the entropy of process variations. However, only one bit of entropy is extracted from two ring oscillators, so RO PUFs occupy considerable resources as the amount of private information grows. Motivated by this inefficient resource usage, we propose an elegant and efficient method to extract at least 31 bits of entropy from two ring oscillators on FPGAs by utilizing the fine control of programmable delay lines (PDL). We call this construction Further ROPUF (FROPUF). In this paper, we present in detail how to take advantage of the underlying random process variation which derives from the lookup tables (LUT) of the two ring oscillators, and show that the in-depth variation can be extracted by a similar second-order difference calculation. In addition, we reveal the consistency of the evaluation results from Xilinx FPGAs (e.g. Virtex-5, Virtex-6, Kintex-7) and those from the simulation of FROPUF. The responses of our new construction have a nominal bit-error rate (BER) of 1.85% at 27 °C, and FROPUF greatly increases the amount of extracted entropy while keeping reliability equivalent to that of the general RO PUF.
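For context, the classic compensated measurement that FROPUF improves upon can be sketched in a few lines: one response bit is derived from the ratio of two simulated ring-oscillator frequencies. The nominal frequency and variation figures are illustrative assumptions, and the PDL-based multi-bit extraction of FROPUF itself is not reproduced here.

```python
import random

def ro_frequency(nominal_mhz: float = 200.0, variation_pct: float = 2.0) -> float:
    # Simulated RO frequency: nominal value plus random process variation.
    return nominal_mhz * (1.0 + random.gauss(0.0, variation_pct / 100.0))

def compensated_bit(f1: float, f2: float) -> int:
    # Compensated measurement: compare the ratio of the two frequencies
    # against 1, yielding a single response bit per RO pair.
    return 1 if f1 / f2 > 1.0 else 0

bit = compensated_bit(ro_frequency(), ro_frequency())
```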

Marcel Keller, Emmanuela Orsini, Peter Scholl
ePrint Report
We describe an actively secure OT extension protocol in the random oracle model with efficiency very close to the passively secure IKNP protocol of Ishai et al. (Crypto 2003). For computational security parameter $\kappa$, our protocol requires $\kappa$ base OTs, and is the first practical, actively secure protocol to match the cost of the passive IKNP extension in this regard. The added communication cost is only additive in $O(\kappa)$, independent of the number of OTs being created, while the computation cost is essentially two finite field operations per extended OT. We present implementation results that show our protocol takes no more than 5% more time than the passively secure IKNP extension, in both LAN and WAN environments, and so is essentially optimal with respect to the passive protocol.

Xiao Shaun Wang, S. Dov Gordon, Allen McIntosh, Jonathan Katz
ePrint Report
Existing systems for secure computation require programmers to express the program to be securely computed as a circuit, or in some domain-specific language that can be compiled to a form suitable for applying known protocols. We propose a new system that can securely execute native MIPS code with no special annotations. Our system has the advantage of allowing programmers to use a language of their choice to express their programs, together with any off-the-shelf compiler to MIPS; it can be used for secure computation of existing "legacy" MIPS code as well.

Our system uses oblivious RAM for fetching instructions and performing load/store operations in memory, and garbled universal circuits for the execution of a MIPS ALU in each instruction step. We also explore various optimizations based on an offline analysis of the MIPS code to be executed, in order to minimize the overhead of executing each instruction while still maintaining security.

Yevgeniy Dodis, Ilya Mironov, Noah Stephens-Davidowitz
ePrint Report
A secure reverse firewall, as recently defined by Mironov and Stephens-Davidowitz, is a third party that "sits between a user and the outside world" and modifies the user's sent and received messages so that even if the user's machine has been corrupted, her security is still guaranteed! In other words, reverse firewalls allow us to provide meaningful (and, indeed, very strong) security guarantees against powerful adversaries that may have tampered with the user's hardware or software (or adversaries that are aware of bugs in the user's protocol implementation). A long list of recent revelations shows that such threats are extremely common in practice, and they present a serious, arguably existential, threat to cryptography. Importantly, reverse firewalls defend against such threats without sharing any secrets with the user, and in general we expect the user to place essentially no trust in the firewall.

While Mironov and Stephens-Davidowitz demonstrated that reverse firewalls can be constructed for very strong cryptographic primitives (which are of mostly theoretical interest), we study reverse firewalls for perhaps the most natural cryptographic task: secure message transmission. We find a rich structure of solutions that vary in efficiency, security, and setup assumptions, in close analogy with message transmission in the classical setting. Our strongest and most important result shows a protocol that achieves interactive, concurrent CCA-secure message transmission with a reverse firewall---i.e., CCA-secure message transmission on a possibly compromised machine! Surprisingly, this protocol is quite efficient and simple, requiring only a small constant number of public-key operations. It could easily be used in practice. Behind this result is a technical composition theorem that shows how key agreement with a sufficiently secure reverse firewall can be used to construct a message-transmission protocol with its own secure reverse firewall.
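The rerandomization idea at the heart of reverse firewalls can be illustrated with textbook ElGamal over a toy group: the firewall re-randomizes every outgoing ciphertext, so even a tampered sender cannot leak through its choice of randomness. This is a generic sketch under small, insecure toy parameters, not the CCA-secure protocol constructed in the paper.

```python
import random

# Toy group: subgroup of order q = 11 in Z_23^*, generated by g = 4.
# Real deployments would use a large standardized group.
p, q, g = 23, 11, 4

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)

def encrypt(pk: int, m: int):
    r = random.randrange(1, q)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def firewall_rerandomize(pk: int, ct):
    # The reverse firewall multiplies in a fresh encryption of 1, replacing
    # whatever randomness the (possibly corrupted) sender used.
    c1, c2 = ct
    s = random.randrange(1, q)
    return (c1 * pow(g, s, p)) % p, (c2 * pow(pk, s, p)) % p

def decrypt(sk: int, ct):
    c1, c2 = ct
    return (c2 * pow(pow(c1, sk, p), p - 2, p)) % p

sk, pk = keygen()
ct = encrypt(pk, 8)   # 8 = g^7 mod 23, an element of the order-11 subgroup
assert decrypt(sk, firewall_rerandomize(pk, ct)) == 8
```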

Amir Hassani Karbasi, Reza Ebrahimi Atani
ePrint Report
In this paper we present a new NTRU-like public key cryptosystem whose security is provably based on the worst-case hardness of approximating both the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP) in some structured lattices, called ideal lattices. We show how to modify the ETRU cryptosystem, an NTRU-like public key cryptosystem based on the Eisenstein integers $\mathbb{Z}[\omega]$, where $\omega$ is a primitive cube root of unity, to make it provably secure under the assumed quantum hardness of standard worst-case lattice problems, restricted to a family of lattices related to some cyclotomic fields. The security of our main system then follows from the already proven hardness of the R-LWE and R-SIS problems.

Charanjit S. Jutla
ePrint Report
The stock markets have two primary functions: providing liquidity and price discovery. While the market micro-structure was mostly ignored or assumed to function ideally for the purpose of asset pricing, M. O'Hara (Journal of Finance, 2003) has established that both liquidity and price discovery affect asset pricing, and in particular asset returns. While the cost of liquidity provision is borne by investors, and is clearly detrimental to asset returns, periodic price discovery has both positive and negative consequences for asset pricing. In this work we propose using cryptography, and in particular secure multi-party computation, to set up a novel stock market structure that, to a large extent, removes the negative consequences of liquidity costs and periodic price discovery. Interestingly, the proposed market structure takes us back to the early days of stock markets, i.e. periodic call markets, but with the not-so-"trusted" auctioneer replaced by secure distributed computing where no individual party (or small coalition) gets to know the order book.

Sergey Agievich, Anastasiya Gorodilova, Nikolay Kolomeec, Svetla Nikova, Bart Preneel
ePrint Report
A detailed overview of the problems, solutions and experience of the first international students' Olympiad in cryptography, NSUCRYPTO'2014, is given. We start with the rules of participation and a description of the rounds. All 15 problems of the Olympiad and their solutions are considered in detail. We discuss solutions of the mathematical problems related to cipher construction, such as studying differential characteristics of S-boxes, S-box masking, determining relations between cyclic rotation and additions modulo $2$ and $2^n$, and constructing special linear subspaces in $\mathbb{F}_2^n$; problems about the number of solutions of the equation $F(x)+F(x+a)=b$ over the finite field $\mathbb{F}_{2^n}$; and APN functions. Some unsolved problems in symmetric cryptography are also considered.
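One of the problems mentioned above, counting the solutions of $F(x)+F(x+a)=b$ over $\mathbb{F}_{2^n}$, is easy to explore by brute force. The sketch below does so for the Gold function F(x) = x^3 (which is APN), with the small field size and reduction polynomial chosen as illustrative assumptions rather than anything taken from the Olympiad problems themselves.

```python
# GF(2^5) with the primitive reduction polynomial x^5 + x^2 + 1.
N, POLY = 5, 0b100101

def gf_mul(a: int, b: int) -> int:
    res = 0
    while b:
        if b & 1:
            res ^= a
        a <<= 1
        if a >> N:
            a ^= POLY
        b >>= 1
    return res

def F(x: int) -> int:
    return gf_mul(gf_mul(x, x), x)   # the Gold cube map x^3

def count_solutions(a: int, b: int) -> int:
    return sum(1 for x in range(1 << N) if F(x) ^ F(x ^ a) == b)

# Solutions come in pairs {x, x + a}; for an APN function such as x^3 the
# derivative equation has 0 or 2 solutions for every a != 0.
counts = {count_solutions(a, b) for a in range(1, 1 << N) for b in range(1 << N)}
print(counts)   # expected: {0, 2}
```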

Vincent Grosso, François-Xavier Standaert
ePrint Report
We describe three contributions regarding the Soft Analytical Side-Channel Attacks (SASCA) introduced at Asiacrypt 2014. First, we compare them with Algebraic Side-Channel Attacks (ASCA) in a noise-free simulated setting. We observe that SASCA allow more efficient key recoveries than ASCA, even in this context (favorable to the latter). Second, we describe the first working experiments of SASCA against an actual AES implementation. Doing so, we analyse their profiling requirements, put forward the significant gains they provide over profiled Differential Power Analysis (DPA) in terms of number of traces needed for key recoveries, and discuss the specificities of such concrete attacks compared to simulated ones. Third, we evaluate the distance between SASCA and DPA enhanced with computational power to perform enumeration, and show that the gap between both attacks can be quite reduced in this case. Therefore, our results bring interesting feedback for evaluation laboratories. They suggest that in several relevant scenarios (e.g. attacks exploiting many known plaintexts), taking a small margin over the security level indicated by standard DPA with enumeration should be sufficient to prevent more elaborate attacks such as SASCA. By contrast, SASCA may remain the only option in more extreme scenarios (e.g. attacks with unknown plaintexts/ciphertexts or against leakage-resilient primitives). We conclude by recalling the algorithmic dependency of the latter attacks, and therefore that our conclusions are specific to the AES.

François Durvaux, François-Xavier Standaert
ePrint Report
Leakage detection usually refers to the task of identifying data-dependent information in side-channel measurements, independent of whether this information can be exploited. Detecting Points-Of-Interest (POIs) in leakage traces is a complementary task that is a necessary first step in most side-channel attacks, where the adversary wants to turn this information into (e.g.) a key recovery. In this paper, we discuss the differences between these tasks, by investigating a popular solution to leakage detection based on a t-test, and an alternative method exploiting Pearson's correlation coefficient. We first show that the simpler t-test has better sampling complexity, and that its gain over the correlation-based test can be predicted by looking at the Signal-to-Noise Ratio (SNR) of the leakage partitions used in these tests. This implies that the sampling complexity of both tests relates more to their implicit leakage assumptions than to the actual statistics exploited. We also put forward that this gain comes at the cost of some intuition loss regarding the localization of the exploitable leakage samples in the traces, and their informativeness. Next, and more importantly, we highlight that our reasoning based on the SNR allows defining an improved t-test with significantly faster detection speed (with approximately 5 times fewer measurements in our experiments), which is therefore highly relevant for evaluation laboratories. We finally conclude that whereas t-tests are the method of choice for leakage detection only, correlation-based tests exploiting larger partitions are preferable for detecting POIs, and confirm the latter intuition by integrating a correlation-based leakage detection test in recent automated tools for the detection of POIs in the leakage measurements of a masked implementation, in a black box manner and without key knowledge.
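Both detection tools discussed here are standard and easy to prototype: Welch's t-test between two classes of traces, and Pearson's correlation between the class labels and each trace sample. The simulated traces, noise level, and leaking-sample position below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples, noise = 2000, 50, 2.0

# Simulated measurements: one sample (index 20) leaks the class label.
labels = rng.integers(0, 2, n_traces)                    # fixed-vs-random partition
traces = rng.normal(0.0, noise, (n_traces, n_samples))
traces[:, 20] += labels

def welch_t(traces, labels):
    a, b = traces[labels == 0], traces[labels == 1]
    va, vb = a.var(axis=0, ddof=1) / len(a), b.var(axis=0, ddof=1) / len(b)
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(va + vb)

def pearson(traces, labels):
    x = labels - labels.mean()
    y = traces - traces.mean(axis=0)
    return (x @ y) / (np.sqrt((x ** 2).sum()) * np.sqrt((y ** 2).sum(axis=0)))

# Both statistics peak at the leaking sample; the paper compares how many
# traces each needs and what each reveals about the POIs.
print(np.abs(welch_t(traces, labels)).argmax(), np.abs(pearson(traces, labels)).argmax())
```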

François Durvaux, François-Xavier Standaert
ePrint Report
Side-channel attacks generally rely on the availability of good leakage models to extract sensitive information from cryptographic implementations. The recently introduced leakage certification tests aim to guarantee that this condition is fulfilled based on sound statistical arguments. They are important ingredients in the evaluation of leaking devices since they allow a good separation between engineering challenges (how to produce clean measurements) and cryptographic ones (how to exploit these measurements). In this paper, we propose an alternative leakage certification test that is significantly simpler to implement than the previous proposal from Eurocrypt 2014. This gain admittedly comes at the cost of a couple of heuristic (yet reasonable) assumptions on the leakage distribution. To confirm its relevance, we first show that it allows confirming previous results of leakage certification. We then put forward that it leads to additional and useful intuitions regarding the information losses caused by incorrect assumptions in leakage modeling.

Sarita Agrawal, Jay Patel, Manik Lal Das
ePrint Report
In Wireless Sensor Networks (WSNs), a group of users communicating on an unreliable wireless channel can use a group secret. For each session, the group manager broadcasts a message containing some keying material, from which only the group members authorized in that session can extract the session key. If a member misses a broadcast key message, it uses self-healing to recover the missing session key from a more recent broadcast message. However, self-healing alone does not help if a node needs the most recent session key and has missed the corresponding broadcast. Through mutual healing, a node can request the recent broadcast information from a neighboring node and then recover the required key using self-healing. In this paper, we propose a bilinear-pairing-based self-healing scheme that reduces communication, storage and computation overhead in comparison to existing bilinear-pairing-based self-healing schemes. We then discuss the mutual healing scheme, which provides mutual authentication and key confirmation without disclosing the node locations to the adversary. The analysis with respect to an active adversary shows a significant performance improvement for resource-constrained sensor nodes, along with security features such as forward and backward secrecy, resilience against node collusion, node revocation and resistance to impersonation.

Benoît Cogliati, Rodolphe Lampe, Yannick Seurin
ePrint Report
We study how to construct efficient tweakable block ciphers in the Random Permutation model, where all parties have access to public random permutation oracles. We propose a construction that combines, more efficiently than by mere black-box composition, the CLRW construction (which turns a traditional block cipher into a tweakable block cipher) of Landecker et al. (CRYPTO 2012) and the iterated Even-Mansour construction (which turns a tuple of public permutations into a traditional block cipher) that has received considerable attention since the work of Bogdanov et al. (EUROCRYPT 2012). More concretely, we introduce the (one-round) tweakable Even-Mansour (TEM) cipher, constructed from a single $n$-bit permutation $P$ and a uniform and almost XOR-universal family of hash functions $(H_k)$ from some tweak space to $\{0,1\}^n$, and defined as $(k,t,x)\mapsto H_k(t)\oplus P(H_k(t)\oplus x)$, where $k$ is the key, $t$ is the tweak, and $x$ is the $n$-bit message, as well as its generalization obtained by cascading $r$ independently keyed rounds of this construction. Our main result is a security bound up to approximately $2^{2n/3}$ adversarial queries against adaptive chosen-plaintext and ciphertext distinguishers for the two-round TEM construction, using Patarin's H-coefficients technique. We also provide an analysis based on the coupling technique showing that asymptotically, as the number of rounds $r$ grows, the security provided by the $r$-round TEM construction approaches the information-theoretic bound of $2^n$ adversarial queries.
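A toy instantiation of the one-round TEM map $(k,t,x)\mapsto H_k(t)\oplus P(H_k(t)\oplus x)$ may help parse the definition. Below, $H_k(t) = k\cdot t$ in GF(2^8) serves as an almost-XOR-universal hash and P is a fixed random permutation of the 8-bit block space; the tiny block size and these particular choices are illustrative assumptions, not parameters from the paper.

```python
import random

N = 8                    # toy block size in bits
POLY = 0x11B             # AES field: x^8 + x^4 + x^3 + x + 1

def gf_mul(a: int, b: int) -> int:
    res = 0
    while b:
        if b & 1:
            res ^= a
        a <<= 1
        if a & 0x100:
            a ^= POLY
        b >>= 1
    return res

# Public random permutation P (and its inverse), fixed once for all parties.
_rng = random.Random(2015)
P = list(range(1 << N)); _rng.shuffle(P)
P_inv = [0] * (1 << N)
for i, v in enumerate(P):
    P_inv[v] = i

def H(k: int, t: int) -> int:
    # H_k(t) = k * t in GF(2^8): an almost-XOR-universal family of hashes.
    return gf_mul(k, t)

def tem_encrypt(k: int, t: int, x: int) -> int:
    h = H(k, t)
    return h ^ P[h ^ x]

def tem_decrypt(k: int, t: int, y: int) -> int:
    h = H(k, t)
    return h ^ P_inv[h ^ y]

k, t, x = 0x3A, 0x5C, 0x7E
assert tem_decrypt(k, t, tem_encrypt(k, t, x)) == x
```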


06 June 2015

Bochum, Germany, September 10 - September 11
Event Calendar
Submission: 29 June 2015
Notification: 29 July 2015
From September 10 to September 11
Location: Bochum, Germany
More Information: https://wiki.crypto.rub.de/lightsec15/index.html

05 June 2015

Rhodes, Greece, October 26 - October 28
Event Calendar
Submission: 30 June 2015
Notification: 15 August 2015
From October 26 to October 28
Location: Rhodes, Greece
More Information: http://www.onthemove-conferences.org/index.php/cloud-trust-15

04 June 2015

PhD Database
Name: Dai Yamamoto
Topic: Security Evaluation and Improvement of Physically Unclonable Functions
Category: implementation

Sunoo Park, Krzysztof Pietrzak, Joël Alwen, Georg Fuchsbauer, Peter Gazi
ePrint Report
We propose a decentralized cryptocurrency based on a block-chain ledger similar to that of Bitcoin, but where the extremely wasteful proofs of work are replaced by proofs of space, recently introduced by Dziembowski et al. (CRYPTO 2015). Instead of requiring that a majority of the computing power is controlled by honest miners (as in Bitcoin), our currency requires that honest miners dedicate more disk space than a potential adversary.

Once a miner has dedicated and initialized some space, participating in the mining process is very cheap. A new block is added to the chain every fixed period of time (say, every minute), and in every period a miner just has to make a small number of lookups to the stored space to check if she "wins", and thus can add the next block to the chain and get the mining reward. Because this check is cheap, proof-of-space-based currencies share some (but not all) issues with currencies based on "proofs of stake", like Peercoin. Concretely, a naïve solution that simply replaces proofs of work with proofs of space raises two main issues which we address:

Grinding: A miner who can add the next block has some degree of freedom in shaping how the chain looks, e.g. by trying out different sets of transactions to include in her block. The miner can try many possible choices until she finds one which results in a chain that allows her to also mine the next block, thus hijacking the chain forever while dedicating only a small amount of the space. We solve this problem fully by "decoupling" the hash chain from the transactions, so that there is nothing to grind. To bind the transactions back to the hash chain, we add an extra signature chain, which guarantees that past transactions cannot be altered once an honest miner adds a block. Our solution also gives a simple and novel way to solve the grinding problem in currencies based on proofs of stake.

Mining multiple chains: Since checking whether one can add a block is cheap, rational miners will not only try to extend the so-far-best chain, but also try other chains, in the hope that they can extend one of them which will ultimately catch up and overtake the currently-best chain. (In the context of proof-of-stake-based currencies this is known as the "nothing-at-stake" problem.) This not only gives rational miners a larger-than-expected reward (compared to what honest miners get), but also makes consensus very slow, if not impossible. Our solution to this problem is based on penalizing miners who try to work on more than one branch of the chain.
