Here you can see all recent updates to the IACR webpage.

26 July 2017

Event Calendar
The 6th Technion Summer School on Cyber and Computer Security
Haifa, Israel, 10 September - 14 September 2017

Event date: 10 September to 14 September 2017

25 July 2017

Within the framework of an H2020 program of the European Commission, Intelligent Voice Ltd is receiving financial support to hire a postdoctoral researcher for one year on the following topic:

The cloud offers an ideal opportunity for storing large volumes of data. However, the storage of sensitive data such as speech in plain text format on the cloud is not permitted in many industry sectors such as finance, health care, etc. Hence speech data should be encrypted before storage on the cloud, and because it contains biometric identifiers it must remain encrypted. The challenge then is to search over large amounts of encrypted speech and return encrypted search results that can be decrypted by the user only. Intelligent Voice are providers of the world's fastest speech to text engine, and we are looking for a talented researcher in semantic security and searchable encryption to join our research team. This post builds on existing research within Intelligent Voice on searchable and homomorphic cryptographic protocols for speech processing.

Applicants should have already completed, or be close to completing, a PhD in computer science, mathematics, or a related discipline. Applicants should have an excellent research track record demonstrated by publications at major cryptography/security venues, and should have significant experience in the design and deployment of cryptographic protocols.

To apply, please send your CV (with publication list), a 1-page cover letter, and the names (and e-mail addresses) of at least two people who can provide reference letters.

**Closing date for applications:** 31 August 2017

**Contact:** Gérard Chollet, Head of Research, Intelligent Voice Ltd

St Clare House, 30-33 Minories, London EC3N 1BP

gerard.chollet(at)intelligentvoice.com

Phone: +44 20 3627 2670

**More information:** http://www.intelligentvoice.com/

Event Calendar
INSCRYPT 2017: The 13th International Conference on Information Security and Cryptology
Xi'an, China, 3 November - 5 November 2017

Event date: 3 November to 5 November 2017

Notification: 15 September 2017

ePrint Report
Composable Masking Schemes in the Presence of Physical Defaults and the Robust Probing Model
Sebastian Faust, Vincent Grosso, Santos Merino Del Pozo, Clara Paglialonga, François-Xavier Standaert

Composability and robustness against physical defaults (e.g., glitches) are two highly desirable properties for secure implementations of masking schemes. While tools exist to guarantee them separately, no current formalism enables their joint investigation. In this paper, we solve this issue by introducing a new model, the robust probing model, that is naturally suited to capture the combination of these properties. We first motivate this formalism by analyzing the excellent robustness and low randomness requirements of first-order threshold implementations, and by highlighting the difficulty of extending them to higher orders. Next, and most importantly, we use our theory to design higher-order secure, robust and composable multiplication gadgets. While admittedly inspired by existing approaches to masking (e.g., Ishai-Sahai-Wagner-like, threshold, domain-oriented), these gadgets exhibit subtle implementation differences from these state-of-the-art solutions (none of which is provably composable and robust). Hence, our results illustrate how sound theoretical models can guide practically-relevant implementations.
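As a minimal illustration of the encoding these gadgets operate on, the sketch below splits a secret byte into Boolean (XOR) shares, the basic masking used by threshold implementations and the gadgets above. The share count and bit width are arbitrary demo choices, and nothing here models glitches or the robust probing model itself.

```python
import secrets

def share(x: int, n_shares: int, bits: int = 8) -> list:
    """Encode secret x as n_shares values whose XOR equals x; any proper
    subset of the shares is uniformly random and reveals nothing about x."""
    shares = [secrets.randbits(bits) for _ in range(n_shares - 1)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]

def recombine(shares) -> int:
    out = 0
    for s in shares:
        out ^= s
    return out

# A linear (XOR) gadget is trivially share-wise; secure multiplication (AND)
# gadgets are where the composability and glitch-robustness issues arise.
def masked_xor(xs, ys):
    return [a ^ b for a, b in zip(xs, ys)]
```

For example, `recombine(masked_xor(share(a, 3), share(b, 3)))` returns `a ^ b` without any intermediate value depending on both secrets in the clear.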

ePrint Report
Distributed Computing with Channel Noise
Abhinav Aggarwal, Varsha Dani, Thomas P. Hayes, Jared Saia

A group of $n$ users want to run a distributed protocol $\pi$ over a network where communication occurs via private point-to-point channels. Unfortunately, an adversary, who knows $\pi$, is able to maliciously flip bits on the channels. Can we efficiently simulate $\pi$ in the presence of such an adversary?

We show that this is possible, even when $L$, the number of bits sent in $\pi$, and $T$, the number of bits flipped by the adversary are not known in advance. In particular, we show how to create a robust version of $\pi$ that 1) fails with probability at most $\delta$, for any $\delta > 0$; and 2) sends $\tilde{O}(L+T)$ bits, where the $\tilde{O}$ notation hides a $\log(nL/\delta)$ term multiplying $L$.

Additionally, we show how to improve this result when the average message size $\alpha$ is not constant. In particular, we give an algorithm that sends $O(L(1 + (1/\alpha) \log(nL/\delta)) + T)$ bits. This algorithm is adaptive in that it does not require a priori knowledge of $\alpha$. We note that if $\alpha$ is $\Omega(\log(nL/\delta))$, then this improved algorithm sends only $O(L + T)$ bits, and is therefore within a constant factor of optimal.
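A toy way to see what "simulating $\pi$ despite adversarial bit flips" means is repetition coding: every bit is sent $R$ times and decoded by majority. This is not the paper's scheme (repetition pays a fixed multiplicative blowup, whereas the result above achieves $\tilde{O}(L+T)$ total bits), just a self-contained illustration of tolerating flips.

```python
import random

R = 5  # repetitions per bit; majority decoding tolerates R // 2 = 2 flips per block

def encode(bits):
    """Repeat each bit R times."""
    return [b for b in bits for _ in range(R)]

def corrupt(bits, t, rng):
    """Flip t distinct positions (here chosen at random; the paper's
    adversary chooses them maliciously)."""
    out = list(bits)
    for i in rng.sample(range(len(out)), t):
        out[i] ^= 1
    return out

def decode(bits):
    """Majority vote within each block of R transmitted bits."""
    return [int(sum(bits[i:i + R]) > R // 2) for i in range(0, len(bits), R)]
```

With at most 2 flips in total, no block's majority can change, so decoding always recovers the message.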

ePrint Report
spKEX: An optimized lattice-based key exchange
Sauvik Bhattacharya, Oscar Garcia-Morchon, Ronald Rietman, Ludo Tolhuizen

The advent of large-scale quantum computers has resulted in significant interest in quantum-safe cryptographic primitives. Lattice-based cryptography is one of the most attractive post-quantum cryptographic families due to its well-understood security, efficient operation and versatility. However, LWE-based schemes are still relatively bulky and slow.
In this work, we present spKEX, a forward-secret, post-quantum, unauthenticated lattice-based key-exchange scheme that combines four techniques to optimize performance. spKEX relies on Learning with Rounding (LWR) to reduce bandwidth; it uses sparse and ternary secrets to speed up computations and reduce failure probability; it applies an improved key reconciliation scheme to reduce bandwidth and failure probability; and computes the public matrix A by means of a permutation to improve performance while allowing for a fresh A in each key exchange.
For a quantum security level of 128 bits, our scheme requires 32% less bandwidth than the LWE-based key-exchange proposal Frodo and allows for a fast implementation of the key exchange.
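The bandwidth saving comes largely from the LWR ingredient: instead of adding explicit noise as in LWE, each sample is deterministically rounded from modulus $q$ down to modulus $p$, so it costs $\log_2 p$ rather than $\log_2 q$ bits. The sketch below shows just that rounding step with toy parameters (dimensions and moduli are far too small to be secure, and this is not the spKEX protocol itself).

```python
import random

q, p = 2 ** 12, 2 ** 8   # large modulus q, rounding modulus p (toy sizes)
n = 16                   # dimension (toy)

def lwr_sample(a, s):
    """One LWR sample: b = round((p/q) * <a, s> mod q) mod p.
    The rounding error plays the role of LWE's added noise, and each
    sample is transmitted with log2(p) = 8 bits instead of log2(q) = 12."""
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    return ((inner * p + q // 2) // q) % p

rng = random.Random(0)
s = [rng.choice([-1, 0, 1]) for _ in range(n)]   # sparse ternary secret, as in spKEX
a = [rng.randrange(q) for _ in range(n)]
b = lwr_sample(a, s)
```

Note the sample is deterministic given `a` and `s`: rounding replaces randomized noise, which is also why failure-probability analysis changes relative to LWE.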

We make a number of remarks about the AES-GCM-SIV nonce-misuse resistant authenticated encryption scheme currently considered for standardization by the Crypto Forum Research Group (CFRG). First, we point out that the security analysis proposed in the ePrint report 2017/168 is incorrect, leading to overly optimistic security claims. We correct the bound and re-assess the security guarantees offered by the scheme for various parameters. Second, we suggest a simple modification to the key derivation function which would improve the security of the scheme with virtually no efficiency penalty.

ePrint Report
Privacy-Preserving Ridge Regression on Distributed Data
Irene Giacomelli, Somesh Jha, C. David Page, Kyonghwan Yoon

Linear regression is an important statistical tool that models the relationship between some explanatory values and an outcome value using a linear function.
In many current applications (e.g. predictive modelling in personalized healthcare), these values represent sensitive data owned by several different parties that are unwilling to share them. In this setting, training a linear regression model becomes challenging and needs specific cryptographic solutions. In this work, we propose a new system that can train a linear regression model with 2-norm regularization (i.e. ridge regression) on a dataset obtained by merging a finite number of private datasets. Our system is composed of two phases: The first one is based on a simple homomorphic encryption scheme and takes care of securely merging the private datasets. The second phase is a new ad-hoc two-party protocol that computes a ridge regression model solving a linear system where all coefficients are encrypted. The efficiency of our system is evaluated both on synthetically generated and real-world datasets.

ePrint Report
SCATTER : A New Dimension in Side-Channel
Hugues Thiebeauld, Georges Gagnerot, Antoine Wurcker, Christophe Clavier

Side-channel techniques have progressed well over the last few years, leading to the creation of a variety of statistical tools for extracting the secrets used in cryptographic algorithms. Such techniques take advantage of the side-channel traces collected during the executions of secret computations in the product. Noticeably, the vast majority of side-channel attacks requires that the traces be aligned beforehand. This prerequisite turns out to be more and more challenging in the practical realisation of attacks, as many devices include hardware or software countermeasures to increase this difficulty. This is typically achieved by applying random jittering or random executions with fake operations. In this paper, we introduce \textsl{scatter}, a new attack which has the potential to tackle most of the alignment issues.
scatter brings a new dimension to improve the efficiency of existing attacks and opens the door to a large set of potential new attack techniques. The effectiveness of scatter has been proven on both simulated traces and real-world secure products. As a result, scatter is a new side-channel technique that is particularly powerful when trace alignment is an issue, or even when considering low-cost attacks, as the requirements in terms of equipment and expertise are significantly reduced.
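For context, the sketch below shows the kind of aligned, statistical attack (classical correlation power analysis) whose alignment prerequisite scatter aims to remove. Leakage is idealised as the Hamming weight of `p XOR key` with no noise; real traces are noisy and, crucially, must be aligned for this to work.

```python
def hw(x):
    """Hamming weight of a byte."""
    return bin(x).count("1")

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

KEY = 0x3C                      # secret to recover (demo value)
plaintexts = list(range(256))
# Idealised aligned traces: one leakage point per execution.
traces = [hw(p ^ KEY) for p in plaintexts]

# CPA: the key guess whose predicted leakage correlates best with the traces.
best_guess = max(range(256),
                 key=lambda g: pearson([hw(p ^ g) for p in plaintexts], traces))
```

With jitter or shuffling, the single leakage point per trace above would no longer sit at a fixed sample index, which is exactly the situation scatter is designed to handle.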

ePrint Report
Multi-Hop Distance Estimation: How Far are You?
Aikaterini Mitrokotsa, Cristina Onete, Elena Pagnin, Mahesh Perera

Several access control systems are based on the users’ physical location/proximity to the access point. Distance-Bounding (DB) protocols constitute a classical solution to calculate the distance between a trusted verifier (e.g., an access point) and an untrusted prover (e.g., a pervasive device). The main limitation of DB is that the prover and the verifier need to lie in each other’s communication range. In this paper, we introduce the concept of Multi-Hop Distance-Estimation (MHDE) protocols, which enable a verifier to authenticate a possibly far-away prover and estimate its distance to this prover, when they are not in the communication range of each other, using an ad-hoc network of pervasive devices. More precisely, our contributions are three-fold, since we provide: (1) a formal definition for MHDE; (2) a threat model for MHDE that considers a powerful and distributed adversary; and (3) implementations of MHDE protocols with different settings. Additionally, we demonstrate our protocol to be secure in the considered threat model, and we provide a performance analysis regarding the accuracy of the distance estimation and the tolerance of limited mobility of the nodes. The results are promising for adopting MHDE in distributed settings.

Event Calendar
FC18: Financial Cryptography and Data Security 2018
Nieuwpoort, Curacao, 26 February - 2 March 2018

Event date: 26 February to 2 March 2018

Submission deadline: 15 September 2017

Notification: 17 November 2017

This is an urgent call for interested applicants. A funded Ph.D. student position is available starting January 2018 (all documents submitted by Sep. 15th, 2017) to work on different aspects of Cryptographic Engineering in the CSE department at USF.

USF is an R1 university and among the leading institutions in Florida. We are looking for motivated, talented, and hardworking applicants who have the necessary background and are interested in working on different aspects of Cryptographic Engineering, with emphasis on:

- Cryptographic hardware systems

- Side-channel attacks, particularly fault and power analysis attacks

The required expertise includes:

- Masters (or Bachelors with outstanding background) in Computer Engineering or Electrical Engineering

- Solid background in digital design, VLSI, computer arithmetic, and ASIC/FPGA implementations

- Solid HDL expertise

- Outstanding English (if English tests are taken) to be eligible for department funding

- Motivation to work beyond the expectations from an average Ph.D. student and publish in top tier venues

Please closely observe the admission requirement details here before emailing:

http://www.usf.edu/engineering/cse/graduate/phd-program.aspx

Please send me your updated CV (including list of publications, language test marks, and references), transcripts for B.Sc. (and/or M.Sc.), and a statement of interest at *mmozaff (at) gmail.com* as soon as possible.

NOTE: At this time, I consider only the applicants who have already taken TOEFL/IELTS and GRE exams with excellent marks (application deadline: Sep. 15th, 2017). The successful candidate will be asked to apply formally very soon to the USF CSE department, so all the material has to be ready.

**Closing date for applications:** 15 September 2017

**Contact:** Prof. Mehran Mozaffari-Kermani

Assistant Professor, CSE @ USF

23
July
2017

ePrint Report
A Key Backup Scheme Based on Bitcoin
Zhongxiang Zheng, Chunhuan Zhao, Haining Fan, Xiaoyun Wang

Since it was first introduced by Satoshi Nakamoto in 2008, Bitcoin has become the biggest and most well-known decentralized digital currency. Its anonymity allows users all over the world to make transactions with each other while keeping their identities hidden. However, protecting the private key is a very important issue, because it is the only means of access to a unique account and can hardly be recovered if lost. Storing an encrypted backup of the private key and its corresponding advanced key is a traditional but effective approach, and many other techniques help to make the backup harder for attackers to obtain. In this paper, we introduce a new backup scheme that can provide protection even when an attacker manages to obtain the backup. It is based on the Bitcoin system and the ECDSA signature scheme. The biggest difference is that the generation and recovery of the backup are both tied to specific transactions on the blockchain, which creates a gap between legal users and attackers who manage to obtain the backup when recovering the key. The gap is determined by the number of accounts and transactions on the blockchain, which increases rapidly with the growth of Bitcoin's popularity and therefore strengthens the security of our scheme at the same time. Moreover, our technique can also be combined with existing ones to achieve better security.

Given a well-chosen additively homomorphic cryptosystem and a $\Sigma$ protocol with a linear answer, Damgård, Fazio, and Nicolosi proposed a non-interactive designated-verifier zero knowledge argument in the registered public key model that is sound under non-standard complexity-leveraging assumptions.
In 2015, Chaidos and Groth showed how to achieve the weaker yet reasonable culpable soundness notion under standard assumptions, but only if the plaintext space order is prime. Their construction makes use of $\Sigma$ protocols that satisfy what we call \emph{optimal culpable soundness}. Unfortunately, most of the known additively homomorphic cryptosystems (like the Paillier Elgamal cryptosystem that is secure under the standard Decisional Composite Residuosity Assumption) have composite-order plaintext space. We construct optimally culpable sound $\Sigma$ protocols, and thus culpably sound non-interactive designated-verifier zero knowledge protocols for NP under standard assumptions, given that the least prime divisor of the plaintext space order is large.

ePrint Report
Better Bounds for Block Cipher Modes of Operation via Nonce-Based Key Derivation
Shay Gueron, Yehuda Lindell

Block cipher modes of operation provide a way to securely encrypt using a block cipher. The main factors in analyzing modes of operation are the level of security achieved (chosen-plaintext security, authenticated encryption, nonce-misuse resistance, and so on) and performance. When measuring the security level of a mode of operation, it does not suffice to consider asymptotics, and a concrete analysis is necessary. This is especially the case today, when encryption rates can be very high, and so birthday bounds may be approached or even reached.

In this paper, we show that key-derivation at every encryption significantly improves the security bounds in many cases. We present a new key-derivation method that utilizes a truncated block cipher, and show that this is far better than standard block-cipher based key derivation. We prove that by using our key derivation method, we obtain greatly improved bounds for many modes of operation, with a result that the lifetime of a key can be significantly extended. We demonstrate this for AES-CTR (CPA-security), AES-GCM (authenticated encryption) and AES-GCM-SIV (nonce-misuse resistance). Finally, we demonstrate that when using modern hardware with AES instructions (AES-NI), the performance penalty of deriving keys at each encryption is insignificant for most uses.
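The idea of per-nonce key derivation can be sketched as follows. The paper derives the subkey from the nonce with a truncated block cipher; since the Python standard library has no AES, HMAC-SHA256 stands in as the PRF here, and the function name and label are illustrative choices, not the paper's construction.

```python
import hmac
import hashlib

def derive_key(master_key: bytes, nonce: bytes, out_len: int = 16) -> bytes:
    """Derive a fresh per-nonce subkey from the master key.
    Stand-in PRF: HMAC-SHA256 (the paper uses a truncated block cipher)."""
    return hmac.new(master_key, b"derive" + nonce, hashlib.sha256).digest()[:out_len]

master = bytes(16)  # demo master key only
k1 = derive_key(master, b"\x00" * 12)
k2 = derive_key(master, b"\x00" * 11 + b"\x01")
```

Each message is then encrypted under `derive_key(master, nonce)` rather than under `master` directly, so birthday-bound terms accumulate per subkey instead of over the master key's entire lifetime.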

ePrint Report
Improved Reconstruction Attacks on Encrypted Data Using Range Query Leakage
Marie-Sarah Lacharité, Brice Minaud, Kenneth G. Paterson

We analyse the security of database encryption schemes supporting range queries against persistent adversaries. Security against such an adversary captures, among other things, the privacy of the client's data with respect to the server hosting the encrypted database. The bulk of our work applies to a generic setting, where the view of the adversary is limited to the set of records or documents matched by each query (known as access pattern leakage). We also consider a more specific setting where certain rank information is also leaked. The latter is inherent to multiple encryption schemes supporting range queries, such as Kerschbaum's FH-OPE scheme (CCS 2015) and the recently proposed Arx scheme of Poddar et al. (IACR eprint 2016/568, 2016/591). We provide three attacks.

- We first consider full reconstruction, which asks to recover the value of every record, fully negating encryption. We show that full reconstruction is possible within an expected number of queries $N \log N + O(N)$, where $N$ is the number of distinct plaintext values. This attack assumes that the dataset is dense, in the sense that every plaintext value occurs in some record; but it does not assume any a priori knowledge of the distribution of the values among records. This bound improves on an $O(N^2 \log N)$ bound in the same setting by Kellaris et al. (CCS 2016). We also provide efficient algorithms that succeed with the minimum possible number of queries (in a strong, information-theoretic sense), prove a matching data lower bound for the number of queries required, and study in more detail the setting where rank information leakage is available in addition to the access pattern.

- We show another efficient attack able to recover all plaintext values within a constant ratio of error (such as a 1% error), requiring only the access pattern leakage of $O(N)$ queries. More precisely, recovering all plaintext values within an additive margin of error $\epsilon N$ for any arbitrary $\epsilon$ requires an expected number of $\frac{5}{4} N\log(1/\epsilon) + O(N)$ queries. As before, this result comes with a matching lower bound.

- Finally, we consider the common situation where the adversary has access to an auxiliary distribution for the targeted values. This enables us to convert rank leakage into approximate range information, leading to an accelerated attack. This attack does not require a dense dataset. Since it is not amenable to a rigorous analysis, we report the results of experiments using this third attack against age data from real-world medical data sets. We show that the attack is highly effective at reconstructing the association between values and records, even with imperfect auxiliary information. In our experiments, observing only 50 queries was sufficient to reconstruct 55% of records to within 5 years, and 35% of records to within 3 years.

In combination, our attacks suggest that the practical impact of the leakage suffered by all schemes supporting range queries is more severe than previously thought, particularly so for schemes like Arx and FH-OPE which also leak rank. Our attacks cast doubt on the practical viability of current approaches to enabling range queries when the threat model goes beyond snapshot attacks to include a persistent server-side adversary.
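The $N \log N$ flavour of the dense full-reconstruction bound can be seen in a toy coupon-collector model (this is a deliberate simplification, not the paper's algorithm, and the uniform query distribution is an assumption): each range query leaks its two endpoints, and reconstruction completes once every "cut point" between adjacent plaintext values has been observed.

```python
import random

def queries_to_full_reconstruction(N: int, rng: random.Random) -> int:
    """Toy model: a dense dataset over N values. A query for range [a, b]
    reveals the cut points a - 1 and b; once all N + 1 cut points have
    appeared, every record's value is pinned down. Collecting them is a
    coupon-collector process, hence roughly N log N queries in expectation."""
    unseen = set(range(N + 1))
    count = 0
    while unseen:
        a = rng.randint(1, N)
        b = rng.randint(a, N)
        unseen.discard(a - 1)   # left edge of the matched set
        unseen.discard(b)       # right edge of the matched set
        count += 1
    return count
```

Averaging `queries_to_full_reconstruction(N, rng)` over many runs shows the superlinear growth in $N$; each query yields at most two cut points, so at least $\lceil (N+1)/2 \rceil$ queries are always needed.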

ePrint Report
Linearly Homomorphic Authenticated Encryption with Provable Correctness and Public Verifiability
Johannes Buchmann, Denise Demirel, Lucas Schabhüser, Patrick Struck

In this work the first linearly homomorphic authenticated encryption scheme with public verifiability and provable correctness, called LEPCoV, is presented. It improves the initial proposal by avoiding false negatives during the verification algorithm. This work provides a detailed description of LEPCoV, a comparison with the original scheme, a security and correctness proof, and a performance analysis showing that all algorithms run in reasonable time for parameters that are currently considered secure. The scheme presented here allows a user to outsource computations on encrypted data to the cloud, such that any third party can verify the correctness of the computations without having access to the original data. This makes this work an important contribution to cloud computing and applications where operations on sensitive data have to be performed, such as statistics on medical records and tallying of electronically cast votes.
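To make the "linearly homomorphic" ingredient concrete, here is textbook Paillier encryption, whose ciphertext product decrypts to the plaintext sum. This illustrates only the linear homomorphism; it is not LEPCoV, and it provides neither authentication nor public verifiability. Parameters are toy-sized and insecure.

```python
import math
import random

# Textbook Paillier with toy parameters (demo only; real keys are ~2048 bits).
p, q = 1789, 1867
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1

def enc(m: int, rng: random.Random) -> int:
    """Enc(m) = g^m * r^n mod n^2 for a random unit r."""
    while True:
        r = rng.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    """Dec(c) = L(c^lam mod n^2) * lam^{-1} mod n, where L(u) = (u - 1) / n."""
    u = pow(c, lam, n2)
    return ((u - 1) // n) * pow(lam, -1, n) % n
```

Multiplying two ciphertexts adds the plaintexts: `dec((enc(m1, rng) * enc(m2, rng)) % n2)` equals `(m1 + m2) % n`, which is exactly the kind of linear operation a cloud can perform on encrypted data.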

ePrint Report
Runtime Code Polymorphism as a Protection Against Side Channel Attacks
Damien Couroussé, Thierno Barry, Bruno Robisson, Philippe Jaillon, Olivier Potin, Jean-Louis Lanet

We present a generic framework for runtime code polymorphism, applicable to a broad range of computing platforms including embedded systems with low computing resources (e.g. microcontrollers with a few kilobytes of memory). Code polymorphism is defined as the ability to change the observable behaviour of a software component without changing its functional properties. In this paper we present the implementation of code polymorphism with runtime code generation, which offers many code transformation possibilities: we describe the use of random register allocation, random instruction selection, instruction shuffling and insertion of noise instructions. We evaluate the effectiveness of our framework against correlation power analysis: compared to an unprotected implementation of AES, where the secret key could be recovered in fewer than 50 traces on average, our protected implementation increased the number of traces necessary to achieve the same attack by a factor of more than 20,000. With regard to the state of the art, our implementation shows a moderate impact in terms of performance overhead.

We develop a statistical framework to analyse the Ring-LWE processes of A Toolkit for Ring-LWE Cryptography (Eurocrypt 2013) and similar processes. We consider the $\delta$-subgaussian random variables used in the Toolkit and elsewhere in the literature, and we give a simple and complete characterisation of such random variables. We then apply our results to the homomorphic cryptosystem provided as an example application in the Toolkit. We show that the $\delta$-subgaussian approach as used in the Toolkit to argue correctness of this cryptosystem is flawed, and we also rectify this analysis using our developed statistical framework.

ePrint Report
Increasing the Lifetime of Symmetric Keys for the GCM Mode by Internal Re-keying
Liliya R. Ahmetzyanova, Evgeny K. Alekseev, Igor B. Oshkin, Stanislav V. Smyshlyaev

In this paper we introduce a classification of existing approaches to increasing the security of block cipher operation modes based on re-keying, putting the focus on so-called internal re-keying without a master key, i.e. re-keying during each separate message processing with no additional keys required.
To extend the base GCM mode, we provide an internal re-keying technique called ACPKM. This technique does not require additional secret parameters or complicated transformations: for key updating, only the base encryption function is used. We quantify the security of the re-keyed GCM-ACPKM mode, respecting standard security notions with nonce-respecting adversaries, as a function of the security of the underlying primitive. We claim that the obtained proof framework can be reused to provide security bounds for other re-keyed modes without a master key. We also show that the ACPKM internal re-keying technique increases security, essentially extending the lifetime of a key with a minor loss in performance. Finally, we consider the composition of internal and external re-keying and compare key lifetime limitations for the base and re-keyed GCM modes in TLS 1.3.
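The shape of internal re-keying can be sketched as follows: a long message is processed in fixed-size sections, each under its own key, and the next section key is derived one-way from the current one. ACPKM performs the derivation with the base block cipher itself; SHA-256 stands in here because the stdlib has no AES, and the XOR "encryption" is purely illustrative, not GCM.

```python
import hashlib

SECTION = 4096  # bytes processed under one section key (toy value)

def keystream(key: bytes, length: int) -> bytes:
    """Counter-mode-style keystream from a hash (stand-in primitive)."""
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def process(master_key: bytes, message: bytes) -> bytes:
    """Toy internal re-keying: each SECTION-byte chunk uses its own key;
    the key is updated one-way after every section, so no single key is
    ever used on more than SECTION bytes of data."""
    out, key = [], master_key
    for i in range(0, len(message), SECTION):
        chunk = message[i:i + SECTION]
        out.append(bytes(a ^ b for a, b in zip(chunk, keystream(key, len(chunk)))))
        key = hashlib.sha256(b"next" + key).digest()  # one-way key update
    return b"".join(out)
```

Because the transform is a per-section XOR, `process` is its own inverse; the point of the structure is that birthday-style bounds now apply per section key rather than to the master key over the whole message stream.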

Logic encryption is an important hardware security technique that introduces keys to modify a given combinational circuit in order to lock the functionality from unauthorized uses. Traditional methods are all ad hoc approaches based on inserting lock gates with keys on randomly selected signals in the original circuit. Thus, it was not a surprise that a SAT-based attack developed by Subramanyan et al. successfully defeated almost all of them. New approaches such as SARLock and Anti-SAT were then developed to defend against the SAT-based attack. However, they achieve this at the cost of extremely low error rates, which leaves them vulnerable.

In this paper, we present a theory on logic encryption that provides a complete understanding of the design space and the trade-off between error rate and attack complexity. An efficient general design scheme is derived from the theory, and some specific designs are also suggested. We also suggest a method to insert one-way functions to burden the SAT engine, and a method to obfuscate the whole design. In addition, as an application of the theory, we also develop a scientific encryption benchmark for approximate attacks. We test our encryption designs and obfuscation techniques with the SAT-based attack, and the results have verified the robustness of our approach.
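A minimal example of the traditional lock-gate insertion the paper's theory generalises: an XOR key-gate placed on an internal wire of a toy circuit. With the correct key the gate is transparent; a wrong key inverts the wire. The circuit and key here are arbitrary demo choices.

```python
KEY = 1  # the single correct key bit (toy)

def f(a, b, c):
    """Original combinational circuit: (a AND b) OR c."""
    return (a & b) | c

def f_locked(a, b, c, k):
    """XOR key-gate inserted on the internal wire (a & b): with k == KEY the
    gate is transparent; with the wrong key it inverts the wire, corrupting
    the output on every input where c == 0."""
    w = (a & b) ^ k ^ KEY
    return w | c
```

Here a wrong key corrupts half of all input patterns; such a high error rate is precisely what lets the SAT-based attack prune wrong keys quickly, while SARLock/Anti-SAT push the error rate near zero, which is the trade-off the theory above formalises.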

ePrint Report
Updatable Tokenization: Formal Definitions and Provably Secure Constructions
Christian Cachin, Jan Camenisch, Eduarda Freire-Stoegbuchner, Anja Lehmann

Tokenization is the process of consistently replacing sensitive elements, such as credit card numbers, with non-sensitive surrogate values. As tokenization is mandated for any organization storing credit card data, many practical solutions have been introduced and are in commercial operation today. However, all existing solutions are static, i.e., they do not allow for efficient updates of the cryptographic keys while maintaining the consistency of the tokens. This lack of updatability is a burden for most practical deployments, as cryptographic keys must be rotated periodically to ensure continued security. This paper introduces a model for updatable tokenization with key evolution, in which a key exposure does not disclose relations among tokenized data in the past, and where the updates to the tokenized data set can be made by an untrusted entity and preserve the consistency of the data. We formally define the desired security properties guaranteeing unlinkability of tokens among different time epochs and one-wayness of the tokenization process. Moreover, we construct two highly efficient updatable tokenization schemes and prove them to achieve our security notions.
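The flavour of an updatable tokenization scheme can be sketched with exponentiation: epoch $e$ tokenizes $x$ as $H(x)^{k_e} \bmod P$, and an update token $k_{e+1} \cdot k_e^{-1}$ lets an untrusted party re-key every stored token without learning either epoch key. This mirrors the general idea only; it is not one of the paper's proven constructions, and the parameters are demo-sized and insecure.

```python
import hashlib
import math
import random

P = (1 << 127) - 1   # a Mersenne prime; far too small for real use

def keygen(rng: random.Random) -> int:
    while True:
        k = rng.randrange(2, P - 1)
        if math.gcd(k, P - 1) == 1:   # invertible mod P - 1, so updates exist
            return k

def tokenize(k: int, value: bytes) -> int:
    """Deterministic per-epoch token: H(value)^k mod P."""
    hx = int.from_bytes(hashlib.sha256(value).digest(), "big") % P
    return pow(hx, k, P)

def update_token(k_old: int, k_new: int) -> int:
    """Given to the untrusted updater; a single exponent ratio mod P - 1."""
    return k_new * pow(k_old, -1, P - 1) % (P - 1)

def update(token: int, delta: int) -> int:
    """Run by the untrusted updater on each stored token."""
    return pow(token, delta, P)
```

Tokens stay consistent within an epoch (same value, same token), and `update(tokenize(k1, v), update_token(k1, k2))` equals `tokenize(k2, v)`, so the whole data set can be migrated to the new epoch by a party holding only the update token.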

ePrint Report
Atomically Trading with Roger: Gambling on the success of a hardfork
Patrick McCorry, Ethan Heilman, Andrew Miller

We present atomic trade protocols for Bitcoin and Ethereum
that can bind two parties to swap coins in the event that two blockchains
emerge from a single “pre-fork” blockchain. This work is motivated by
a bet between two members of the Bitcoin community, Loaded and
Roger Ver, to trade 60,000 bitcoins in the event that Bitcoin Unlimited’s
planned hardfork occurs and the blockchain splits into two distinct
forks. Additionally we study several ways to provide replay protection
in the event of hardfork alongside a novel mechanism called migration
inputs. We provide a detailed survey and history of previous softforks
and hardforks in Ethereum and Bitcoin.
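
The atomicity underlying such cross-chain trades typically rests on hash locks: outputs on both forks pay out only to whoever reveals the preimage of the same hash, so redeeming one side necessarily publishes the secret needed to redeem the other. A minimal sketch of that generic mechanism (not the paper's exact protocol):

```python
import hashlib
import secrets

def hashlock_commit(secret: bytes) -> bytes:
    """The lock both forks' outputs are encumbered with: H = SHA-256(secret)."""
    return hashlib.sha256(secret).digest()

def try_redeem(lock: bytes, claimed_preimage: bytes) -> bool:
    """A chain's script releases funds only for the correct preimage."""
    return hashlib.sha256(claimed_preimage).digest() == lock

# One party picks the secret and the same lock encumbers outputs on both forks.
secret = secrets.token_bytes(32)
lock = hashlock_commit(secret)

# Redeeming on fork A reveals `secret` on-chain ...
assert try_redeem(lock, secret)
# ... which lets the counterparty redeem the matching output on fork B.
assert try_redeem(lock, secret)
# A wrong preimage releases nothing.
assert not try_redeem(lock, b"wrong")
```

In practice each output also carries a timeout path so funds are refundable if the trade aborts.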

ePrint Report
Cryptanalysis of Deoxys and its Internal Tweakable Block Ciphers
Carlos Cid, Tao Huang, Thomas Peyrin, Yu Sasaki, Ling Song

In this article, we provide the first independent security analysis of Deoxys, a third-round authenticated encryption candidate of the CAESAR competition, and its internal tweakable block ciphers Deoxys-BC-256 and Deoxys-BC-384. We show that the related-tweakey differential bounds provided by the designers can be greatly improved thanks to a MILP-based search tool. In particular, we develop a new method to incorporate linear incompatibility in the MILP model. We use this tool to generate valid differential paths for reduced-round versions of Deoxys-BC-256 and Deoxys-BC-384, later combining them into broader boomerang or rectangle attacks. Here, we also develop a new MILP model which optimises the two paths by taking into account the effect of the ladder switch technique. Interestingly, with the tweak in Deoxys-BC providing extra input as opposed to a classical block cipher, we can even consider beyond full-codebook attacks, and we analyse how our results can be improved in this setting. As these primitives are based on the TWEAKEY framework, we further study how the security of the cipher is impacted when playing with the tweak/key sizes. All in all, we are able to attack 10 rounds of Deoxys-BC-256 (out of 14) and 14 rounds of Deoxys-BC-384 (out of 16). The extra rounds specified in Deoxys-BC to balance the tweak input (when compared to AES) seem to provide about the same security margin as AES-128. Finally, we analyse why the authenticated encryption modes of Deoxys mostly prevent our attacks on Deoxys-BC from applying to the authenticated encryption primitive.

ePrint Report
Towards Characterizing Securely Computable Two-Party Randomized Functions
Deepesh Data, Manoj Prabhakaran

A basic question of cryptographic complexity is to combinatorially
characterize all randomized functions which have information-theoretic
semi-honest secure 2-party computation protocols. The corresponding question
for deterministic functions was answered almost three decades ago by
Kushilevitz (FOCS 1989). In this work, we make progress towards
understanding securely computable `randomized' functions. We bring
tools developed in the study of completeness to bear on this problem. In
particular, our characterizations are obtained by considering only symmetric
functions with a combinatorial property called `simplicity'
(Maji et al. Indocrypt 2012).

Our main result is a complete combinatorial characterization of randomized functions with `ternary output' kernels, that have information-theoretic semi-honest secure 2-party computation protocols. In particular, we show that there exist simple randomized functions with ternary output that do not have secure computation protocols. (For deterministic functions, the smallest output alphabet size of such a function is 5, due to an example given by Beaver, DIMACS Workshop on Distributed Computing and Cryptography 1989.)

Also, we give a complete combinatorial characterization of randomized functions that have `2-round' information-theoretic semi-honest secure 2-party computation protocols.

We also give a counter-example to a natural conjecture for the full characterization, namely, that all securely computable simple functions have secure protocols with a unique transcript for each output value. This conjecture is in fact true for deterministic functions, and -- as our results above show -- for ternary functions and for functions with 2-round secure protocols.

ePrint Report
Universal Forgery and Key Recovery Attacks: Application to FKS, FKD and Keyak
Fanbao Liu, Fengmei Liu

In this paper, we provide a security analysis of the Full-State Keyed Sponge (FKS), Full-State Keyed Duplex (FKD) and Keyak, one of the third-round CAESAR candidates, in the classic setting and the quantum model, respectively. In the classic setting, we present a universal forgery attack that can be implemented in $O(2^{c/2})$ queries, where $c$ is the capacity.

In the quantum model, by utilizing Simon's algorithm, we propose an efficient universal forgery attack on FKS, FKD and Keyak with complexity $O(c)$. Moreover, we also propose an efficient key recovery attack that can be implemented in $O(c)$. Such attacks show that FKS, FKD and Keyak are completely broken in the quantum model.
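
Attacks of this kind exploit the fact that a keyed construction can be turned into a function with a hidden XOR period s, i.e. f(x) = f(x ⊕ s), which Simon's algorithm recovers with a linear number of quantum queries, while a classical attacker must find a collision at birthday cost. A toy classical illustration of that periodic structure (illustrative only; the real f would be derived from the keyed sponge):

```python
import secrets

N_BITS = 16
MASK = (1 << N_BITS) - 1

def make_periodic_function(s: int):
    """Build a random-looking f with the promise f(x) = f(x XOR s)."""
    table = {}
    def f(x: int) -> int:
        rep = min(x, x ^ s)              # canonical representative of {x, x^s}
        if rep not in table:
            table[rep] = secrets.randbits(64)   # wide output: no spurious collisions
        return table[rep]
    return f

def recover_period_classically(f) -> int:
    """Birthday-style search: a collision f(x) = f(y), x != y, yields s = x ^ y.
    This takes ~2^(n/2) evaluations, versus O(n) queries for Simon's algorithm."""
    seen = {}
    while True:
        x = secrets.randbits(N_BITS)
        y = f(x)
        if y in seen and seen[y] != x:
            return seen[y] ^ x
        seen[y] = x

s = secrets.randbits(N_BITS) | 1         # ensure a non-zero period
f = make_periodic_function(s)
assert recover_period_classically(f) == s
```

The exponential-versus-linear query gap between the two recovery strategies is exactly what makes these constructions break down in the quantum model.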

ePrint Report
Post-Quantum Key Exchange on FPGAs
Po-Chun Kuo, Wen-Ding Li, Yu-Wei Chen, Yuan-Che Hsu, Bo-Yuan Peng, Chen-Mou Cheng, Bo-Yin Yang

The National Institute of Standards and Technology (NIST) announced the post-quantum cryptography project, aiming to select cryptographic standards for the post-quantum era. The key-establishment algorithm is one of the most important primitives. At USENIX Security 2016, Alkim, Ducas, Pöppelmann, and Schwabe proposed a post-quantum key-exchange scheme called NewHope, based on the ring-learning-with-errors (RLWE) problem.
In this work, we propose the first hardware implementation of NewHope. Our implementation requires 12,707 FFs, 19,781 LUTs, 13,025 slice registers, 32 DSPs, and 13 BRAMs on a Xilinx Zynq-7000 equipped with a 28nm Artix-7 7020 FPGA. The three phases of the NewHope key exchange take 75.4, 99.1, and 24.6 microseconds, respectively.
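
The RLWE key-exchange core that such an implementation accelerates can be sketched at toy scale. The sketch below uses tiny hypothetical parameters (N = 8 instead of NewHope's 1024) and omits reconciliation and compression; it only illustrates why both parties end up with approximately equal raw keys:

```python
import random

N, Q = 8, 12289                  # toy ring Z_Q[x]/(x^N + 1); NewHope uses N = 1024

def polymul(a, b):
    """Negacyclic convolution: multiplication modulo x^N + 1."""
    res = [0] * N
    for i in range(N):
        for j in range(N):
            k = (i + j) % N
            sign = 1 if i + j < N else -1
            res[k] = (res[k] + sign * a[i] * b[j]) % Q
    return res

def small_poly(rng):
    """Secret/error polynomial with coefficients in {-1, 0, 1}."""
    return [rng.choice([-1, 0, 1]) for _ in range(N)]

rng = random.Random(0)
a = [rng.randrange(Q) for _ in range(N)]        # public uniform polynomial

s, e = small_poly(rng), small_poly(rng)         # Alice's secret and error
b = [(x + y) % Q for x, y in zip(polymul(a, s), e)]

t, e1 = small_poly(rng), small_poly(rng)        # Bob's secret and error
u = [(x + y) % Q for x, y in zip(polymul(a, t), e1)]

key_alice = polymul(u, s)        # = a*s*t + e1*s
key_bob = polymul(b, t)          # = a*s*t + e*t

# The two raw keys differ only by the small term e1*s - e*t, so rounding each
# coefficient to one bit (as reconciliation does) lets both sides agree.
for ka, kb in zip(key_alice, key_bob):
    d = (ka - kb) % Q
    if d > Q // 2:
        d -= Q
    assert abs(d) <= 2 * N       # each coefficient of e1*s - e*t is at most 2N
```

The FPGA work is largely about doing the polynomial multiplication above via the NTT and sampling errors from a proper distribution at high throughput.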

ePrint Report
Quantum Collision-Finding in Non-Uniform Random Functions
Marko Balogh, Edward Eaton, Fang Song

We give a complete characterization of quantum attacks for finding a collision in a non-uniform random function whose outputs are drawn according to a distribution of min-entropy k. This can be viewed as showing generic security of hash functions under relaxed assumptions, in contrast to the standard heuristic of assuming uniformly random outputs. It also has applications in analyzing quantum security of the Fujisaki-Okamoto transformation [TU TCC16B]. In particular, our results close a gap in the lower bound left open in [TTU PQCrypto16].

Specifically, let $D$ be a min-entropy $k$ distribution on a set $Y$ of size $N$. Let $f: X\to Y$ be a function whose output $f(x)$ is drawn according to $D$ for each $x \in X$ independently. We show that $\Omega(2^{k/3})$ quantum queries are necessary to find a collision in $f$, improving the previous bound $\Omega(2^{k/9})$. In fact we show a stronger lower bound $2^{k/2}$ in some special case. For all cases, we also describe explicit quantum algorithms that find a collision with a number of queries matching the corresponding lower bounds.
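
For intuition on the classical baseline these quantum bounds improve on: a uniform distribution on a set of size $2^k$ has min-entropy exactly $k$, and the classical birthday bound of roughly $\sqrt{\pi/2 \cdot 2^k}$ queries can be checked empirically (illustrative sketch; parameters are arbitrary):

```python
import math
import random

def collision_queries(k: int, rng: random.Random) -> int:
    """Sample outputs uniformly from a set of size 2^k (min-entropy exactly k)
    until a collision appears; return the number of queries used."""
    seen = set()
    while True:
        y = rng.randrange(1 << k)
        if y in seen:
            return len(seen) + 1
        seen.add(y)

k = 20
rng = random.Random(1)
trials = [collision_queries(k, rng) for _ in range(200)]
avg = sum(trials) / len(trials)
expected = math.sqrt(math.pi / 2 * 2 ** k)   # classical birthday estimate

# The empirical average tracks the ~2^(k/2) birthday estimate; the paper shows
# a quantum attacker needs only Theta(2^(k/3)) queries in the general case.
assert 0.5 * expected < avg < 2.0 * expected
```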

21
July
2017

Event Calendar
WAIFI 2018: International Workshop on the Arithmetic of Finite Fields
Bergen, Norway, 14 June - 16 June 2018

Event date: 14 June to 16 June 2018

Submission deadline: 1 April 2018

Notification: 11 May 2018