International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

25 July 2017

Hugues Thiebeauld, Georges Gagnerot, Antoine Wurcker, Christophe Clavier
ePrint Report
Side-channel techniques have progressed considerably over the last few years, leading to a variety of statistical tools for extracting the secrets used in cryptographic algorithms. Such techniques take advantage of the side-channel traces collected during the execution of secret computations in the product. Noticeably, the vast majority of side-channel attacks require that the traces be aligned beforehand. This prerequisite is increasingly challenging to meet in practical attacks, as many devices include hardware or software countermeasures to increase this difficulty, typically by applying random jittering or random executions with fake operations. In this paper, we introduce \textsl{scatter}, a new attack which has the potential to tackle most alignment issues. scatter brings a new dimension to improve the efficiency of existing attacks and opens the door to a large set of potential new attack techniques. The effectiveness of scatter has been proven on both simulated traces and real-world secure products. As a result, scatter is a new side-channel technique that is particularly powerful when trace alignment is an issue, or when considering low-cost attacks, as the requirements in terms of equipment and expertise are significantly reduced.
Aikaterini Mitrokotsa, Cristina Onete, Elena Pagnin, Mahesh Perera
ePrint Report
Several access control systems are based on the users' physical location or proximity to the access point. Distance-Bounding (DB) protocols constitute a classical solution to calculate the distance between a trusted verifier (e.g., an access point) and an untrusted prover (e.g., a pervasive device). The main limitation of DB is that the prover and the verifier need to lie in each other's communication range. In this paper, we introduce the concept of Multi-Hop Distance-Estimation (MHDE) protocols, which enable a verifier to authenticate a possibly far-away prover and estimate its distance to this prover when they are not in each other's communication range, using an ad-hoc network of pervasive devices. More precisely, our contributions are three-fold, since we provide: (1) a formal definition for MHDE; (2) a threat model for MHDE that considers a powerful and distributed adversary; and (3) implementations of MHDE protocols with different settings. Additionally, we demonstrate our protocol to be secure in the considered threat model, and we provide a performance analysis regarding the accuracy of the distance estimation and the tolerance of limited node mobility. The results are promising for adopting MHDE in a distributed setting.
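For background, the classical distance-bounding principle that MHDE generalizes can be sketched with a simple calculation (illustrative only, not part of the MHDE protocol): the verifier upper-bounds the prover's distance from the challenge/response round-trip time, since signals travel at most at the speed of light.

```python
# Toy distance-bounding estimate (illustrative only, not the MHDE protocol).
# The verifier upper-bounds the prover's distance from the measured
# challenge/response round-trip time.

C = 299_792_458.0  # speed of light, m/s

def distance_upper_bound(t_rtt_s: float, t_proc_s: float = 0.0) -> float:
    """Upper bound on prover distance in metres, given the round-trip
    time and the prover's (claimed) processing delay."""
    return C * max(t_rtt_s - t_proc_s, 0.0) / 2.0

# A 1-microsecond round trip with negligible processing delay bounds
# the prover within roughly 150 m.
print(distance_upper_bound(1e-6))
```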
Nieuwpoort, Curacao, 26 February - 2 March 2018
Event Calendar
Event date: 26 February to 2 March 2018
Submission deadline: 15 September 2017
Notification: 17 November 2017
University of South Florida, Tampa, FL
Job Posting
This is an urgent call for interested applicants. A funded Ph.D. student position is available starting January 2018 (all documents submitted by Sep. 15th, 2017) to work on different aspects of Cryptographic Engineering in the CSE department at USF.

USF is an R1 university and among the leading institutions in Florida. We are looking for motivated, talented, and hardworking applicants who have the relevant background and are interested in working on different aspects of Cryptographic Engineering with emphasis on:

- Cryptographic hardware systems

- Side-channel attacks, particularly fault and power analysis attacks

The required expertise includes:

- Masters (or Bachelors with outstanding background) in Computer Engineering or Electrical Engineering

- Solid background in digital design, VLSI, computer arithmetic, and ASIC/FPGA implementations

- Solid HDL expertise

- Outstanding English (if English tests are taken) to be eligible for department funding

- Motivation to work beyond the expectations of an average Ph.D. student and to publish in top-tier venues

Please closely observe the admission requirement details here before emailing:

http://www.usf.edu/engineering/cse/graduate/phd-program.aspx

Please send your updated CV (including list of publications, language test marks, and references), transcripts for B.Sc. (and/or M.Sc.), and a statement of interest to mmozaff (at) gmail.com as soon as possible.

NOTE: At this time, I consider only the applicants who have already taken TOEFL/IELTS and GRE exams with excellent marks (application deadline: Sep. 15th, 2017). The successful candidate will be asked to apply formally very soon to the USF CSE department, so all the material has to be ready.

Closing date for applications: 15 September 2017

Contact: Prof. Mehran Mozaffari-Kermani

Assistant Professor, CSE @ USF


23 July 2017

Zhongxiang Zheng, Chunhuan Zhao, Haining Fan, Xiaoyun Wang
ePrint Report
Since it was first introduced by Satoshi Nakamoto in 2008, Bitcoin has become the biggest and most well-known decentralized digital currency. Its anonymity allows users all over the world to make transactions with each other while keeping their identities hidden. However, protecting the private key becomes a very important issue, because it is the only access to a unique account and can hardly be recovered if lost. Storing an encrypted backup of the private key and its corresponding advanced key is a traditional but effective approach, and many other techniques help to make the backup harder for attackers to obtain. In this paper, we introduce a new backup scheme that can provide protection even when an attacker manages to obtain the backup. It is based on the Bitcoin system and the ECDSA signature scheme. The key difference is that both the generation and the recovery of the backup are tied to specific transactions on the blockchain, which creates a gap between legitimate users and attackers who manage to obtain the backup when recovering the key. The gap is determined by the number of accounts and transactions on the blockchain, which grows rapidly with Bitcoin's popularity and therefore strengthens the security of our scheme over time. Moreover, our technique can also be combined with previous ones to achieve better security.
Helger Lipmaa
ePrint Report
Given a well-chosen additively homomorphic cryptosystem and a $\Sigma$ protocol with a linear answer, Damg{\aa}rd, Fazio, and Nicolosi proposed a non-interactive designated-verifier zero knowledge argument in the registered public key model that is sound under non-standard complexity-leveraging assumptions. In 2015, Chaidos and Groth showed how to achieve the weaker yet reasonable culpable soundness notion under standard assumptions, but only if the plaintext space order is prime. Their construction makes use of $\Sigma$ protocols that satisfy what we call \emph{optimal culpable soundness}. Unfortunately, most of the known additively homomorphic cryptosystems (like the Paillier Elgamal cryptosystem that is secure under the standard Decisional Composite Residuosity Assumption) have composite-order plaintext space. We construct optimally culpable sound $\Sigma$ protocols and thus culpably sound non-interactive designated-verifier zero knowledge protocols for NP under standard assumptions, given that the least prime divisor of the plaintext space order is large.
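A "$\Sigma$ protocol with a linear answer" can be illustrated with textbook Schnorr identification (a standard example, not taken from the paper): the prover's third-move answer $z = r + e\cdot x \bmod q$ is linear in the witness $x$. The sketch below uses toy parameters with no security, purely to show the three-move structure.

```python
import secrets

# Textbook Schnorr identification: a Sigma-protocol whose answer
# z = r + e*x mod q is linear in the witness x.
# Toy parameters: p = 2q+1 with q = 83; structure only, no security.

P, Q, G = 167, 83, 4  # G = 2^2 generates the order-Q subgroup of Z_P^*

def schnorr_round(x: int) -> bool:
    h = pow(G, x, P)              # public key for witness x
    r = secrets.randbelow(Q)      # prover's commitment randomness
    a = pow(G, r, P)              # first move: commitment
    e = secrets.randbelow(Q)      # second move: verifier's challenge
    z = (r + e * x) % Q           # third move: linear answer
    # verifier accepts iff g^z == a * h^e
    return pow(G, z, P) == (a * pow(h, e, P)) % P

print(schnorr_round(17))  # an honest prover is always accepted
```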
Shay Gueron, Yehuda Lindell
ePrint Report
Block cipher modes of operation provide a way to securely encrypt using a block cipher. The main factors in analyzing modes of operation are the level of security achieved (chosen-plaintext security, authenticated encryption, nonce-misuse resistance, and so on) and performance. When measuring the security level of a mode of operation, it does not suffice to consider asymptotics, and a concrete analysis is necessary. This is especially the case today, when encryption rates can be very high, and so birthday bounds may be approached or even reached.

In this paper, we show that key-derivation at every encryption significantly improves the security bounds in many cases. We present a new key-derivation method that utilizes a truncated block cipher, and show that this is far better than standard block-cipher based key derivation. We prove that by using our key derivation method, we obtain greatly improved bounds for many modes of operation, with the result that the lifetime of a key can be significantly extended. We demonstrate this for AES-CTR (CPA-security), AES-GCM (authenticated encryption) and AES-GCM-SIV (nonce-misuse resistance). Finally, we demonstrate that when using modern hardware with AES instructions (AES-NI), the performance penalty of deriving keys at each encryption is insignificant for most uses.
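The general shape of per-message key derivation can be sketched as follows. This is a hedged illustration, not the paper's construction: the paper derives keys with a truncated block cipher, whereas SHA-256 stands in as the PRF here only to keep the sketch self-contained.

```python
import hashlib

# Sketch of per-message key derivation (NOT the paper's exact method:
# SHA-256 stands in for the truncated-block-cipher PRF). Each message is
# encrypted under a fresh key derived from the master key and the nonce,
# so the master key never encrypts data directly.

def derive_key(master_key: bytes, nonce: bytes, key_len: int = 16) -> bytes:
    """Fresh per-message key for the given nonce."""
    return hashlib.sha256(master_key + nonce).digest()[:key_len]

k1 = derive_key(b"master-key-bytes", b"nonce-0001")
k2 = derive_key(b"master-key-bytes", b"nonce-0002")
assert k1 != k2 and len(k1) == 16  # distinct nonces give distinct keys
```

Because every message sees an independent-looking key, birthday-bound terms accrue per message rather than across the whole lifetime of the master key, which is the effect the paper quantifies.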
Marie-Sarah Lacharité, Brice Minaud, Kenneth G. Paterson
ePrint Report
We analyse the security of database encryption schemes supporting range queries against persistent adversaries. Security against such an adversary captures, among other things, the privacy of the client's data with respect to the server hosting the encrypted database. The bulk of our work applies to a generic setting, where the view of the adversary is limited to the set of records or documents matched by each query (known as access pattern leakage). We also consider a more specific setting where certain rank information is also leaked. The latter is inherent to multiple encryption schemes supporting range queries, such as Kerschbaum's FH-OPE scheme (CCS 2015) and the recently proposed Arx scheme of Poddar et al. (IACR eprint 2016/568, 2016/591). We provide three attacks.

- We first consider full reconstruction, which asks to recover the value of every record, fully negating encryption. We show that full reconstruction is possible within an expected number of queries $N \log N + O(N)$, where $N$ is the number of distinct plaintext values. This attack assumes that the dataset is dense, in the sense that every plaintext value occurs in some record; but it does not assume any a priori knowledge of the distribution of the values among records. This bound improves on an $O(N^2 \log N)$ bound in the same setting by Kellaris et al. (CCS 2016). We also provide efficient algorithms that succeed with the minimum possible number of queries (in a strong, information-theoretic sense), prove a matching lower bound on the number of queries required, and study in more detail the setting where rank information leakage is available in addition to the access pattern.

- We show another efficient attack able to recover all plaintext values within a constant ratio of error (such as a 1% error), requiring only the access pattern leakage of $O(N)$ queries. More precisely, recovering all plaintext values within an additive margin of error $\epsilon N$ for any arbitrary $\epsilon$ requires an expected number of $\frac{5}{4} N\log(1/\epsilon) + O(N)$ queries. As before, this result comes with a matching lower bound.

- Finally, we consider the common situation where the adversary has access to an auxiliary distribution for the targeted values. This enables us to convert rank leakage into approximate range information, leading to an accelerated attack. This attack does not require a dense dataset. Since it is not amenable to a rigorous analysis, we report the results of experiments using this third attack against age data from real-world medical data sets. We show that the attack is highly effective at reconstructing the association between values and records, even with imperfect auxiliary information. In our experiments, observing only 50 queries was sufficient to reconstruct 55% of records to within 5 years, and 35% of records to within 3 years.
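The core idea behind exploiting an auxiliary distribution can be sketched as follows. This is a toy illustration under simplifying assumptions, not the paper's algorithm: if the scheme leaks each record's rank, an auxiliary sample over the same domain converts ranks into value estimates by quantile matching.

```python
# Toy rank-to-value estimation (illustrative, NOT the paper's attack):
# map each leaked rank to the corresponding quantile of an auxiliary
# sample drawn from a similar population.

def rank_to_value(leaked_ranks, n_records, auxiliary_sample):
    """Estimate each record's value from its leaked rank."""
    aux = sorted(auxiliary_sample)
    est = {}
    for rec_id, r in leaked_ranks.items():
        idx = min(r * len(aux) // n_records, len(aux) - 1)
        est[rec_id] = aux[idx]
    return est

# Hypothetical ages: a census-like auxiliary sample, and three records
# whose ranks leaked; lower rank means smaller estimated age.
aux = [5, 12, 21, 33, 35, 40, 47, 52, 60, 71]
estimates = rank_to_value({"a": 0, "b": 5, "c": 9}, 10, aux)
print(estimates)  # {'a': 5, 'b': 40, 'c': 71}
```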

In combination, our attacks suggest that the practical impact of the leakage suffered by all schemes supporting range queries is more severe than previously thought, particularly so for schemes like Arx and FH-OPE which also leak rank. Our attacks cast doubt on the practical viability of current approaches to enabling range queries when the threat model goes beyond snapshot attacks to include a persistent server-side adversary.
Johannes Buchmann, Denise Demirel, Lucas Schabhüser, Patrick Struck
ePrint Report
In this work the first linearly homomorphic authenticated encryption scheme with public verifiability and provable correctness, called LEPCoV, is presented. It improves the initial proposal by avoiding false negatives during the verification algorithm. This work provides a detailed description of LEPCoV, a comparison with the original scheme, a security and correctness proof, and a performance analysis showing that all algorithms run in reasonable time for parameters that are currently considered secure. The scheme presented here allows a user to outsource computations on encrypted data to the cloud, such that any third party can verify the correctness of the computations without having access to the original data. This makes this work an important contribution to cloud computing and applications where operations on sensitive data have to be performed, such as statistics on medical records and tallying of electronically cast votes.
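For background on the linear homomorphism such schemes build on, here is plain Paillier as an illustration (this is not the LEPCoV construction, and the tiny primes offer no security): multiplying ciphertexts adds the underlying plaintexts, which is what lets a cloud compute linear functions on encrypted data.

```python
from math import gcd
import secrets

# Toy Paillier instance showing the additive homomorphism
# (background illustration only; NOT LEPCoV, and no real security).

p_, q_ = 11, 13
n = p_ * q_                       # public modulus
n2 = n * n
lam = 60                          # lcm(p_-1, q_-1)
g = n + 1

def L(u): return (u - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m):
    while True:
        r = secrets.randbelow(n - 1) + 1
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

c = (enc(20) * enc(22)) % n2      # homomorphic addition under encryption
print(dec(c))                     # 42
```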
Damien Couroussé, Thierno Barry, Bruno Robisson, Philippe Jaillon, Olivier Potin, Jean-Louis Lanet
ePrint Report
We present a generic framework for runtime code polymorphism, applicable to a broad range of computing platforms including embedded systems with low computing resources (e.g. microcontrollers with a few kilobytes of memory). Code polymorphism is defined as the ability to change the observable behaviour of a software component without changing its functional properties. In this paper we present the implementation of code polymorphism with runtime code generation, which offers many code transformation possibilities: we describe the use of random register allocation, random instruction selection, instruction shuffling and insertion of noise instructions. We evaluate the effectiveness of our framework against correlation power analysis: compared to an unprotected implementation of AES, where the secret key could be recovered in fewer than 50 traces on average, our protected implementation increased the number of traces necessary to achieve the same attack by a factor of more than 20,000. With regard to the state of the art, our implementation shows a moderate performance overhead.
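Two of the listed transformations, instruction shuffling and noise insertion, can be sketched abstractly. This toy models "instructions" as Python functions rather than runtime-generated machine code; all names are illustrative, and the point is only that variants differ in form yet compute the same result.

```python
import random

# Toy sketch of two polymorphism ingredients: shuffling independent
# instructions and inserting noise instructions, without changing the
# functional result. Real frameworks do this on machine code at runtime.

def make_variant(independent_ops, rng):
    ops = list(independent_ops)
    rng.shuffle(ops)                          # random instruction order
    for _ in range(rng.randint(1, 3)):        # insert noise instructions
        ops.insert(rng.randrange(len(ops) + 1), lambda s: None)
    return ops

def run(ops, state):
    for op in ops:
        op(state)
    return state

# Three independent "instructions" writing disjoint registers.
prog = [lambda s: s.__setitem__("r0", 1),
        lambda s: s.__setitem__("r1", 2),
        lambda s: s.__setitem__("r2", 3)]

rng = random.Random(0)
a = run(make_variant(prog, rng), {})
b = run(make_variant(prog, rng), {})
assert a == b == {"r0": 1, "r1": 2, "r2": 3}  # variants stay equivalent
```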
Sean Murphy, Rachel Player
ePrint Report
We develop a statistical framework to analyse the Ring-LWE processes of A Toolkit for Ring-LWE Cryptography (Eurocrypt 2013) and similar processes. We consider the $\delta$-subgaussian random variables used in the Toolkit and elsewhere in the literature, and we give a simple and complete characterisation of such random variables. We then apply our results to the homomorphic cryptosystem provided as an example application in the Toolkit. We show that the $\delta$-subgaussian approach as used in the Toolkit to argue correctness of this cryptosystem is flawed, and we also rectify this analysis using our developed statistical framework.
Liliya R. Ahmetzyanova, Evgeny K. Alekseev, Igor B. Oshkin, Stanislav V. Smyshlyaev
ePrint Report
In this paper we introduce a classification of existing approaches to increase the security of block cipher operation modes based on re-keying, putting the focus on so-called internal re-keying without master key --- re-keying during each separate message processing with no additional keys required. For extending the GCM base mode we provide an internal re-keying technique called ACPKM. This technique does not require additional secret parameters or complicated transformations --- for key updating only the base encryption function is used. We quantify the security of the re-keyed GCM mode, respecting standard security notions with nonce-respecting adversaries, as a function of the security of the underlying primitive. We claim that the obtained proof framework can be reused to provide security bounds for other re-keyed modes without a master key. We also show that the ACPKM internal re-keying technique increases security, essentially extending the lifetime of a key with a minor loss in performance. We also consider the composition of internal and external re-keying and compare key lifetime limitations for the base and re-keyed GCM modes in TLS 1.3.
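The internal re-keying idea can be sketched as follows. This is a hedged toy, not the ACPKM specification: the message is split into sections, and the section key is refreshed from itself after each section, so no master key beyond the first section key is needed. SHA-256 stands in for "the base encryption function applied to fixed constants", and the keystream is likewise a stand-in for a real CTR/GCM layer.

```python
import hashlib

# Toy internal re-keying sketch (NOT the ACPKM specification; SHA-256
# stands in for the base cipher). Each message section is processed under
# its own key, derived in-line from the previous section key.

SECTION = 16  # bytes per section (illustrative value)

def next_key(key: bytes) -> bytes:
    return hashlib.sha256(key + b"rekey-const").digest()[:16]

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # toy keystream from the section key; a real mode runs CTR/GCM here
    stream = hashlib.sha256(key + b"stream").digest()
    return bytes(d ^ s for d, s in zip(data, stream))

def encrypt(key: bytes, msg: bytes) -> bytes:
    out = b""
    for i in range(0, len(msg), SECTION):
        out += keystream_xor(key, msg[i:i + SECTION])
        key = next_key(key)        # internal re-keying step
    return out

k = b"0123456789abcdef"
ct = encrypt(k, b"a message longer than one section")
assert encrypt(k, ct) == b"a message longer than one section"  # XOR symmetry
```

Because each key encrypts only one section, the amount of data processed under any single key is bounded, which is exactly what extends the key lifetime in the re-keyed mode.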
Hai Zhou
ePrint Report
Logic encryption is an important hardware security technique that introduces keys to modify a given combinational circuit in order to lock the functionality from unauthorized uses. Traditional methods are all ad hoc approaches based on inserting lock gates with keys on randomly selected signals in the original circuit. Thus, it was not a surprise that a SAT-based attack developed by Subramanyan et al. successfully defeated almost all of them. New approaches such as SARLock and Anti-SAT were then developed to defend against the SAT-based attack. However, they are still vulnerable due to their extremely low error rates.

In this paper, we present a theory on logic encryption that provides a complete understanding of the design space and the trade-off between error rate and attack complexity. An efficient general design scheme is derived from the theory and some specific designs are also suggested. We also suggest a method to insert one-way functions to burden the SAT engine, and a method to obfuscate the whole design. In addition, as an application of the theory, we also develop a scientific encryption benchmark for approximate attacks. We test our encryption designs and obfuscation techniques with the SAT-based attack, and the results have verified the robustness of our approach.
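The oracle-guided attack model discussed above can be sketched on a toy circuit. Here brute-force enumeration stands in for the SAT solver, and the 2-bit locked circuit and key are hypothetical: the attacker repeatedly finds a distinguishing input on which two surviving keys disagree, queries an activated chip, and discards keys that answer wrongly.

```python
from itertools import product

# Toy model of the oracle-guided ("SAT-based") attack on logic encryption.
# Brute force over keys/inputs stands in for the SAT solver; the locked
# circuit and secret key below are hypothetical.

def locked(key, x):                  # hypothetical 2-bit locked circuit
    k0, k1 = key
    return (x[0] ^ k0) & (x[1] ^ k1)

SECRET = (1, 0)
oracle = lambda x: locked(SECRET, x)  # activated chip with correct key

keys = list(product([0, 1], repeat=2))
inputs = list(product([0, 1], repeat=2))

while True:
    # distinguishing input: two surviving keys give different outputs
    dip = next((x for x in inputs
                if len({locked(k, x) for k in keys}) > 1), None)
    if dip is None:
        break                        # all surviving keys are equivalent
    good = oracle(dip)
    keys = [k for k in keys if locked(k, dip) == good]

print(keys)  # [(1, 0)] -- only the correct key survives
```

Raising the number of distinguishing-input iterations an attacker needs, without collapsing the error rate, is precisely the trade-off the paper's theory characterizes.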
Christian Cachin, Jan Camenisch, Eduarda Freire-Stoegbuchner, Anja Lehmann
ePrint Report
Tokenization is the process of consistently replacing sensitive elements, such as credit card numbers, with non-sensitive surrogate values. As tokenization is mandated for any organization storing credit card data, many practical solutions have been introduced and are in commercial operation today. However, all existing solutions are still static, i.e., they do not allow for efficient updates of the cryptographic keys while maintaining the consistency of the tokens. This lack of updatability is a burden for most practical deployments, as cryptographic keys must be rotated periodically to ensure continued security. This paper introduces a model for updatable tokenization with key evolution, in which a key exposure does not disclose relations among tokenized data in the past, and where the updates to the tokenized data set can be made by an untrusted entity and preserve the consistency of the data. We formally define the desired security properties guaranteeing unlinkability of tokens among different time epochs and one-wayness of the tokenization process. Moreover, we construct two highly efficient updatable tokenization schemes and prove them to achieve our security notions.
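The kind of updatability being formalized can be sketched in a toy discrete-log group (an assumed structure for illustration, not the paper's exact schemes): tokens have the form H(value)^k, and an update token delta = k_new/k_old lets an untrusted party rotate every stored token consistently without seeing the keys or the plaintext values.

```python
import hashlib, secrets

# Toy updatable tokenization sketch (illustrative structure only, NOT the
# paper's constructions; the tiny group offers no security).

P, Q = 167, 83                       # P = 2Q + 1; tokens live in the
                                     # order-Q subgroup of Z_P^*

def H(value: bytes) -> int:
    # hash into the order-Q subgroup (squares mod P)
    h = int.from_bytes(hashlib.sha256(value).digest(), "big")
    return pow(h % P, 2, P)

def tokenize(key: int, value: bytes) -> int:
    return pow(H(value), key, P)

k_old = secrets.randbelow(Q - 1) + 1
k_new = secrets.randbelow(Q - 1) + 1
delta = (k_new * pow(k_old, -1, Q)) % Q   # epoch update token

t_old = tokenize(k_old, b"4111-1111-1111-1111")
t_new = pow(t_old, delta, P)  # updater needs only delta and the old token
assert t_new == tokenize(k_new, b"4111-1111-1111-1111")
```

Consistency is preserved because equal values map to equal tokens in every epoch, while tokens from different epochs are unlinkable without delta.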
Patrick McCorry, Ethan Heilman, Andrew Miller
ePrint Report
We present atomic trade protocols for Bitcoin and Ethereum that can bind two parties to swap coins in the event that two blockchains emerge from a single “pre-fork” blockchain. This work is motivated by a bet between two members of the Bitcoin community, Loaded and Roger Ver, to trade 60,000 bitcoins in the event that Bitcoin Unlimited’s planned hardfork occurs and the blockchain splits into two distinct forks. Additionally, we study several ways to provide replay protection in the event of a hardfork, alongside a novel mechanism called migration inputs. We provide a detailed survey and history of previous softforks and hardforks in Ethereum and Bitcoin.
Carlos Cid, Tao Huang, Thomas Peyrin, Yu Sasaki, Ling Song
ePrint Report
In this article, we provide the first independent security analysis of Deoxys, a third-round authenticated encryption candidate of the CAESAR competition, and its internal tweakable block ciphers Deoxys-BC-256 and Deoxys-BC-384. We show that the related-tweakey differential bounds provided by the designers can be greatly improved thanks to a MILP-based search tool. In particular, we develop a new method to incorporate linear incompatibility in the MILP model. We use this tool to generate valid differential paths for reduced-round versions of Deoxys-BC-256 and Deoxys-BC-384, later combining them into broader boomerang or rectangle attacks. Here, we also develop a new MILP model which optimises the two paths by taking into account the effect of the ladder switch technique. Interestingly, with the tweak in Deoxys-BC providing extra input as opposed to a classical block cipher, we can even consider beyond full-codebook attacks, and we analyse how our results can be improved in this setting. As these primitives are based on the TWEAKEY framework, we further study how the security of the cipher is impacted when playing with the tweak/key sizes. All in all, we are able to attack 10 rounds of Deoxys-BC-256 (out of 14) and 14 rounds of Deoxys-BC-384 (out of 16). The extra rounds specified in Deoxys-BC to balance the tweak input (when compared to AES) seem to provide about the same security margin as AES-128. Finally, we analyse why the authenticated encryption modes of Deoxys mostly prevent our attacks on Deoxys-BC from applying to the authenticated encryption primitive.
Deepesh Data, Manoj Prabhakaran
ePrint Report
A basic question of cryptographic complexity is to combinatorially characterize all randomized functions which have information-theoretic semi-honest secure 2-party computation protocols. The corresponding question for deterministic functions was answered almost three decades back, by Kushilevitz (FOCS 1989). In this work, we make progress towards understanding securely computable `randomized' functions. We bring tools developed in the study of completeness to bear on this problem. In particular, our characterizations are obtained by considering only symmetric functions with a combinatorial property called `simplicity' (Maji et al. Indocrypt 2012).

Our main result is a complete combinatorial characterization of randomized functions with `ternary output' kernels, that have information-theoretic semi-honest secure 2-party computation protocols. In particular, we show that there exist simple randomized functions with ternary output that do not have secure computation protocols. (For deterministic functions, the smallest output alphabet size of such a function is 5, due to an example given by Beaver, DIMACS Workshop on Distributed Computing and Cryptography 1989.)

Also, we give a complete combinatorial characterization of randomized functions that have `2-round' information-theoretic semi-honest secure 2-party computation protocols.

We also give a counter-example to a natural conjecture for the full characterization, namely, that all securely computable simple functions have secure protocols with a unique transcript for each output value. This conjecture is in fact true for deterministic functions, and -- as our results above show -- for ternary functions and for functions with 2-round secure protocols.
Fanbao Liu, Fengmei Liu
ePrint Report
In this paper, we provide a security analysis of the Full-State Keyed Sponge (FKS), Full-State Keyed Duplex (FKD) and Keyak, one of the third-round CAESAR candidates, in the classic setting and the quantum model, respectively. In the classic setting, we present a universal forgery attack that can be implemented in $O(2^{c/2})$ queries, where $c$ is the capacity.

In the quantum model, by utilizing Simon's algorithm, we propose an efficient universal forgery attack on FKS, FKD and Keyak with complexity $O(c)$. Moreover, we also propose an efficient key recovery attack that can be implemented in $O(c)$. Such attacks show that FKS, FKD and Keyak are completely broken in the quantum model.
Po-Chun Kuo, Wen-Ding Li, Yu-Wei Chen, Yuan-Che Hsu, Bo-Yuan Peng, Chen-Mou Cheng, Bo-Yin Yang
ePrint Report
The National Institute of Standards and Technology (NIST) has announced its post-quantum cryptography project, aiming to select cryptographic standards for the post-quantum era. The key establishment algorithm is one of the most important primitives. At Usenix Security 2016, Alkim, Ducas, Pöppelmann, and Schwabe proposed a post-quantum key exchange scheme called NewHope, based on the ring-learning-with-errors (RLWE) problem. In this work, we propose the first hardware implementation of NewHope. Our implementation requires 12,707 FFs, 19,781 LUTs, 13,025 slice registers, 32 DSPs and 13 BRAMs on a Xilinx Zynq-7000 equipped with a 28nm Artix-7 7020 FPGA. For the NewHope key exchange, the three phases of the key exchange cost 75.4, 99.1, and 24.6 microseconds, respectively.
Yasufumi Hashimoto
ePrint Report
In this short report, we study the security of the new multivariate signature scheme HMFEv proposed at PQCrypto 2017.