International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

05 August 2019

Vasyl Ustimenko
ePrint Report
Non-commutative cryptography studies cryptographic primitives and systems based on algebraic structures such as groups, semigroups and non-commutative rings. We continue to investigate inverse protocols of non-commutative cryptography defined in terms of subsemigroups of affine Cremona semigroups over finite fields or arithmetic rings $Z_m$, and homomorphic images of these semigroups, as possible instruments of post-quantum cryptography. This approach allows us to construct cryptosystems that are not public-key: as outputs of the protocol, the correspondents receive mutually inverse transformations on the affine space $K^n$ or the variety $(K^*)^n$, where $K$ is a field or an arithmetic ring. The security of such an inverse protocol rests on the complexity of the word problem of decomposing an element of an affine Cremona semigroup, given in its standard form, into a composition of given generators. We discuss the idea of combining two cryptosystems with cipherspaces $(K^*)^n$ and $K^n$ to form a new cryptosystem with plainspace $(K^*)^n$, ciphertext space $K^n$ and a non-bijective, highly nonlinear encryption map.
Runchao Han, Haoyu Lin, Jiangshan Yu
ePrint Report
Atomic Swap enables two parties to atomically exchange their cryptocurrencies without trusted third parties. However, it has been pointed out that an Atomic Swap is equivalent to an American Call Option without the premium, and is thus unfair to the swap participant. In this paper, we investigate the (un)fairness of the Atomic Swap protocol. First, we quantify the unfairness of Atomic Swap and compare it with that of conventional financial assets (stocks and fiat currencies). The quantification results show that Atomic Swaps are much more unfair on cryptocurrencies than on stocks and fiat currencies in the same setting. Second, we use the conventional Cox-Ross-Rubinstein option pricing model from finance to estimate the premium, and show that the estimated premium for cryptocurrencies is 2%~3% of the asset value, while the premium for stocks and fiat currencies is approximately 0.3%. Third, we propose two fair Atomic Swap protocols, one for currency exchange and the other for American Call Options. Our protocols are based on the original Atomic Swap protocol, but implement the premium mechanism. Blockchains supporting smart contracts, such as Ethereum, support our protocols directly. Blockchains supporting only scripts, such as Bitcoin, can support our protocols by adding a simple opcode. Last, we give a reference implementation of our protocols in Solidity and give detailed instructions on implementing them with Bitcoin script.
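As a rough illustration of the Cox-Ross-Rubinstein estimate mentioned above, the following Python sketch prices an at-the-money American call with a binomial tree. The timeout, volatility and rate parameters are made up for illustration and are not taken from the paper.

import math

def crr_american_call(S0, K, T, r, sigma, steps=200):
    """Price an American call with the Cox-Ross-Rubinstein binomial tree."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Option values at expiry.
    values = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    # Backward induction with an early-exercise check at every node.
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            values[j] = max(cont, S0 * u**j * d**(i - j) - K)
    return values[0]

# Hypothetical parameters: 24-hour swap timeout, at-the-money strike, zero rate,
# and a high annualized volatility as a stand-in for a volatile cryptocurrency.
premium = crr_american_call(S0=100.0, K=100.0, T=1 / 365, r=0.0, sigma=0.9)
print(f"premium ~ {premium:.2f}, i.e. {premium / 100.0:.2%} of the asset value")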
Jintai Ding, Zheng Zhang, Joshua Deaton, Vishakha
ePrint Report
In 2017, Kyung-Ah Shim et al. proposed a multivariate signature scheme called Himq-3, a submission to the National Institute of Standards and Technology (NIST) standardization process for post-quantum cryptosystems. The Himq-3 signature scheme belongs to the oil-vinegar signature scheme family. It has a multilayer structure, but it uses a cycle system to invert the central map. The signing process of Himq-3 is very fast, and it produces small signatures.

In this paper we present a cryptanalysis of Himq-3. We show that the signing process inherently leaks information about the private key. Using this information, one can forge a signature.
Fatih Balli, F. Betül Durak, Serge Vaudenay
ePrint Report
We design a suite of protocols so that a small tamper-resistant device can be used as a biometric identity document which can be scanned by authorized terminals. We target both strongly secure identification and strong privacy. Unlike biometric passports, our protocols leak no digital evidence and are essentially deniable. Besides, getting the identity information from the device requires going through access control. Access control can follow either a strong PKI-based path or a weak password-based path which offer different functionalities. We implemented our protocols on JavaCard using finger-vein recognition as a proof of concept.
Thomas Pornin
ePrint Report
A new implementation of Falcon is presented. It solves longstanding issues in the existing reference code: the new implementation is constant-time, it does not require floating-point hardware (though it can use such hardware for better performance), it uses less RAM, and it achieves much better performance on both large systems (x86 with Skylake cores, POWER8, ...) and small microcontrollers (ARM Cortex-M4). In particular, signature generation with Falcon-512 takes less than 390k cycles on a Skylake (only 82k cycles for verification), and about 19.4 million cycles on an ARM Cortex-M4.
Patrick Kresmer, Alexander Zeh
ePrint Report
We propose a new nonce-misuse-resistant authenticated encryption scheme, which instantiates the SIV paradigm of Rogaway and Shrimpton. In contrast to the GCM-SIV approach proposed by Gueron and Lindell, we use only a single type of cryptographic primitive, which can be advantageous on restricted embedded devices. Furthermore, we use three independent, fixed subkeys derived from a single master key. Similar to the CCM mode, our scheme uses a combination of the CTR mode for symmetric encryption and a MAC based on the CBC construction, and is therefore called CCM-SIV. We provide a detailed security proof for our scheme. Furthermore, we outline its extension to nonce-based key derivation, as in the AES-GCM-SIV approach.
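To make the composition concrete, here is a minimal SIV-style sketch in Python (using pycryptodome) that follows the high-level description above: three subkeys derived from one master key, a CBC-based MAC producing a synthetic IV, and CTR encryption under that IV. The key derivation, padding and length encoding are illustrative choices, not the CCM-SIV specification.

from Crypto.Cipher import AES  # pip install pycryptodome

BLOCK = 16

def derive_subkeys(master_key):
    # Illustrative derivation: encrypt three distinct constants under the master key.
    ecb = AES.new(master_key, AES.MODE_ECB)
    return tuple(ecb.encrypt(bytes([i]) * BLOCK) for i in (1, 2, 3))

def cbc_mac(key, data):
    # Plain CBC-MAC with zero IV and zero padding; the caller length-prefixes
    # its input, so the padding is unambiguous in this toy setting.
    if len(data) % BLOCK:
        data += b"\x00" * (BLOCK - len(data) % BLOCK)
    return AES.new(key, AES.MODE_CBC, iv=b"\x00" * BLOCK).encrypt(data)[-BLOCK:]

def siv_encrypt(master_key, nonce, ad, plaintext):
    k_mac, k_enc, _k_spare = derive_subkeys(master_key)  # three fixed subkeys
    header = len(nonce).to_bytes(8, "big") + len(ad).to_bytes(8, "big")
    tag = cbc_mac(k_mac, header + nonce + ad + plaintext)  # synthetic IV
    ctr = AES.new(k_enc, AES.MODE_CTR, nonce=b"",
                  initial_value=int.from_bytes(tag, "big"))
    return tag, ctr.encrypt(plaintext)

tag, ct = siv_encrypt(b"\x00" * 16, b"nonce-01", b"header", b"attack at dawn")
print(tag.hex(), ct.hex())

Decryption would recompute the tag over the recovered plaintext and compare it with the transmitted one; that check is omitted in this sketch.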
Vincenzo Botta, Daniele Friolo, Daniele Venturi, Ivan Visconti
ePrint Report
We investigate the security of smart contracts within a blockchain that can fork (such as Bitcoin and Ethereum). In particular, we focus on multi-party computation (MPC) protocols run on-chain with the aid of smart contracts, and observe that honest players face the following dilemma: should I rush, sending the protocol's messages based on the current view of the blockchain, or rather wait until a message is confirmed on the chain before sending the next one?

To the best of our knowledge, the (implicit) default option used in previous work is the second one, and thus known on-chain MPC protocols take a long time to execute on blockchains with a long confirmation time (e.g., 1 hour per transaction in Bitcoin). While the first option would clearly be preferable for efficiency, we show that this is not necessarily the case for security, as there are natural examples of on-chain MPC protocols that simply become insecure in the presence of rushing players.

Our contributions are twofold:

-- For the concrete case of fairly tossing multiple coins with penalties, we show that the lottery protocol of Andrychowicz et al. (S&P '14) becomes insecure in the presence of rushing players. In addition, we present a new protocol that instead retains security even if the players are rushing.

-- We design a compiler that takes any on-chain MPC protocol and transforms it into another one (for the same task) that remains secure even in the presence of rushing players. The only (unavoidable) requirement is that honest players only start rushing after the first round of the protocol (by all players) has been confirmed on the blockchain.

Our techniques are inspired by ideas on resettably secure computation (Goyal and Sahai, EUROCRYPT '09). We also provide a prototype implementation of our coin tossing protocol using Ethereum smart contracts, and instantiate our generic compiler in a concrete setting, showing that both our constructions yield considerable improvements in terms of efficiency.
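For readers unfamiliar with the underlying building block, the Python sketch below shows plain commit-then-reveal coin tossing, which on-chain lotteries such as the one discussed above elaborate on. It deliberately omits the blockchain, deposits/penalties and timeouts, and it is not the paper's rushing-secure protocol.

import os, hashlib

def commit(value):
    # Hash commitment to a value, randomized with a fresh salt.
    salt = os.urandom(32)
    return hashlib.sha256(salt + value).digest(), salt

def open_ok(commitment, salt, value):
    return hashlib.sha256(salt + value).digest() == commitment

# Phase 1: every party publishes only a commitment to its random coin.
coins = [os.urandom(1) for _ in range(3)]
openings = [commit(c) for c in coins]

# Phase 2: parties reveal; everyone checks the openings and combines the coins.
assert all(open_ok(c, s, v) for (c, s), v in zip(openings, coins))
shared_bit = 0
for v in coins:
    shared_bit ^= v[0] & 1
print("shared coin:", shared_bit)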
Samuel Dobson, Steven D. Galbraith, Jason LeGrow, Yan Bo Ti, Lukas Zobernig
ePrint Report
In this note, we present an adaptive attack on the 2-SIDH protocol that runs in polynomial time and memory. The 2-SIDH protocol is a special instance of the countermeasure proposed by Azarderakhsh, Jao and Leonardi to perform isogeny-based key exchange with static keys in the presence of an adaptive attack. This countermeasure has also recently been explicitly proposed by Kayacan.

Our attack extends the adaptive attack by Galbraith, Petit, Shani and Ti (GPST), recovering a static secret using malformed points. The extension of GPST is non-trivial and requires learning more information. In particular, the attack needs to recover intermediate elliptic curves in the isogeny path, and points on them. We use this extra information to show how the attacker can recover the secret isogeny path from a partial path.
Anders Dalskov, Marcel Keller, Claudio Orlandi, Kris Shrishak, Haya Shulman
ePrint Report
A surge in DNS cache poisoning attacks in recent years has generated an incentive to push the deployment of DNSSEC forward. ICANN-accredited registrars are required to support DNSSEC signing for their customers, and the number of signed domains is slowly increasing. Yet as the number of signed domains grows, so does the number of vulnerable DNSSEC deployments. Due to the lack of support for other, more efficient algorithms, the most popular cryptographic algorithm remains RSA. Furthermore, to avoid overhead, network operators typically use short keys ($\leq 1024$ bits), which are no longer considered secure.

In this work, we propose automated DNSSEC key generation and zone-file signing with threshold ECDSA. We show that a generic transformation suffices to turn essentially any MPC protocol into an equally secure and efficient protocol that computes ECDSA signatures in a threshold setting. The generality of this approach means that DNS operators can pick from a variety of existing efficient MPC solutions satisfying different security/availability trade-offs. We stress that several of these options were not supported by any previous solution (as a new protocol would have had to be designed for each scenario). We benchmark all the protocols achievable from our transformation. Moreover, as many of the underlying MPC protocols naturally support preprocessing, so does our threshold ECDSA solution (in a way that is independent of both the DNS zone being signed and the key being used to sign it). We argue that this sort of preprocessing is crucial for pushing the deployment of DNSSEC, as it allows DNS operators to sign requests with almost no overhead, compared to the common approach where one operator is completely in charge of their customers' keys.

Depending on the security level and the network configuration, our protocols can preprocess tens, hundreds, or even thousands of signatures per second. The online time for signing then essentially matches the RTT in all but the LAN configuration (where signing is still extremely fast, at less than 0.3 ms). When compared with prior work at the same security level, our protocol is never slower and is significantly faster in many configurations; for instance, we can generate 4 times as many signatures per second in the WAN setting. Finally, we perform the first study measuring the extent to which multiple DNS operators are used on the Internet, and we integrate our novel threshold ECDSA protocols into a DNS application.
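For context, the single-party operation being distributed here is ordinary ECDSA signing over the zone data, as in the Python sketch below (using the cryptography package and P-256, the curve of DNSSEC algorithm 13). The record data is a toy stand-in rather than canonical DNSSEC wire format, and the threshold protocols in the paper replace the single private key with shares held by several operators, which this sketch does not attempt to show.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Toy stand-in for the RRset data that a zone-signing key would sign.
rrset_data = b"example.com. 3600 IN A 192.0.2.1"

private_key = ec.generate_private_key(ec.SECP256R1())            # zone signing key
signature = private_key.sign(rrset_data, ec.ECDSA(hashes.SHA256()))

# A validator checks the signature with the public key from the DNSKEY record;
# verify() raises InvalidSignature on failure.
private_key.public_key().verify(signature, rrset_data, ec.ECDSA(hashes.SHA256()))
print("signature verified, length:", len(signature))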
Mustafa Khairallah
ePrint Report
In this article, we analyze two of the Round 1 candidates of the NIST Lightweight Cryptography Standardization process: COMET and mixFeed. We show how AEAD modes based on rekeying can be modelled as modes without rekeying in the multi-key setting, where every nonce is treated as a different user. We then show that the security degradation due to weak keys in the multi-key setting affects these modes in the single-key setting, and we show how the weak-key analysis of both modes may be applied.
Paul Bottinelli, Robert Lambert
ePrint Report
The increasing communication capabilities of vehicles are paving the way for promising road safety and traffic management applications. But the rise of connected vehicles also potentially introduces many security and privacy concerns. Thus, the vision of a successful cooperative vehicular network relies on strong security properties. Proposals such as the Security Credential Management System (SCMS) fulfil these security requirements with the concept of pseudonym certificates, relying on a large-scale PKI. But since the on-board units performing these cryptographic operations are usually resource-constrained devices, it is important to consider ways to optimize the proposed algorithms and devise efficient implementations.

In this work, we study optimizations on the mathematical and algorithmic aspects of the validation of implicit certificates and the verification of ECDSA signatures used in the SCMS. We propose efficient algorithms to validate batches of implicit certificates, providing significant savings compared to the sequential validation of the individual certificates. We also propose optimizations to the verification of ECDSA signatures when the verification is performed with an implicit certificate. Although we focus our work on the SCMS and V2X communications, our contributions are more general and apply to every system combining ECQV and ECDSA.
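As background for these optimizations, the standard relations being combined are sketched below in common ECQV/SEC 4-style notation (not necessarily the authors' notation): the public key is reconstructed from the implicit certificate, and that reconstruction folds directly into the ECDSA verification equation.

% Q_CA: CA public key, P_U: public reconstruction value in the implicit
% certificate Cert_U, H: hash, (r, s): ECDSA signature on message m,
% G: base point of order n.
\begin{align*}
  Q_U &= e\,P_U + Q_{CA}, \qquad e = H(\mathrm{Cert}_U),\\
  r &\stackrel{?}{=} x\big(u_1 G + u_2 Q_U\big) \bmod n,
      \qquad u_1 = H(m)\,s^{-1} \bmod n, \quad u_2 = r\,s^{-1} \bmod n,\\
  u_1 G + u_2 Q_U &= u_1 G + (u_2 e)\,P_U + u_2\,Q_{CA}.
\end{align*}

The fixed points $G$ and $Q_{CA}$ appear in every such check, which is what generally makes batching and merged scalar multiplications attractive in this setting.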
T-H. Hubert Chan, Rafael Pass, Elaine Shi
ePrint Report
Although Byzantine Agreement (BA) has been studied for three decades, perhaps somewhat surprisingly, significant gaps remain in our understanding of its round complexity. First, although expected constant-round protocols are known in the honest-majority setting, it is unclear whether one has to settle for expected constant rounds or whether there exist better protocols that are worst-case constant-round. Second, for the corrupt-majority setting, the existence of sublinear-round BA protocols continues to elude us, except in the narrow regime where only sublinearly more than half of the nodes are corrupt.

In this paper, we make a couple of important steps forward in bridging this gap.

We show two main results:

- No protocol (even a randomized one) that completes in worst-case $o\left(\log(1/\delta)/\log \log(1/\delta)\right)$ rounds can achieve BA with probability $1-\delta$, even when only 1% of the nodes are corrupt. In comparison, known expected constant-round, honest-majority protocols complete in $O(\log(1/\delta))$ rounds in the worst case. Therefore, our lower bound is tight up to a $\log\log$ factor for the honest-majority setting.

- There exists a corrupt-majority BA protocol that terminates in $O(\log(1/\delta)/\epsilon)$ rounds in the worst case and tolerates a $(1-\epsilon)$ fraction of corrupt nodes. Our upper bound is optimal up to a logarithmic factor, in light of the elegant $\Omega(1/\epsilon)$ lower bound by Garay et al. (FOCS '07).

01 August 2019

Aurore Guillevic, Shashank Singh
ePrint Report
In this paper, we take a notable step towards filling the gap between theory (running-time estimates) and practice (a discrete logarithm record computation) for the Tower Number Field Sieve (TNFS) algorithm. We propose a generalisation of the ranking formula for selecting the polynomials used in the very first step of the TNFS algorithm. For this we provide a definition and an exact implementation (Magma and SageMath) of the alpha function. This function measures the bias in the smoothness probability of norms in number fields compared to random integers of the same size. We use it to estimate the yield of polynomials, that is, the expected number of relations, as a generalisation of Murphy's E function, and finally the total number of operations needed to compute a discrete logarithm with the TNFS algorithm in the targeted fields. This improves on earlier work of Barbulescu and Duquesne on estimating the running time of the algorithm. We apply our estimates to a wide range of sizes of finite fields GF(p^n), for small composite n = 12, 16, 18, 24, which are the target fields of pairing-friendly curves.
Mahesh Sreekumar Rajasree
ePrint Report
In this paper, we present new preimage attacks on KECCAK-384 and KECCAK-512 reduced to 2, 3 and 4 rounds. The attacks are based on non-linear structures (structures that contain quadratic terms). Such structures were studied by Guo et al. and Li et al. to give preimage attacks on round-reduced KECCAK. We carefully construct non-linear structures such that the quadratic terms are not spread across the whole state, which allows us to create more linear equations between the variables and hash values, leading to better preimage attacks. As a result, we present the best theoretical preimage attacks on KECCAK-384 and KECCAK-512 for 2 and 3 rounds, and on KECCAK-384 for 4 rounds.
Donghang Lu, Thomas Yurek, Samarth Kulshreshtha, Rahul Govind, Rahul Mahadev, Aniket Kate, Andrew Miller
ePrint Report
Multiparty computation as a service (MPSaaS) is a promising approach for building privacy-preserving communication systems. However, in this paper we argue that existing MPC implementations are inadequate for this application as they do not address fairness, let alone robustness. Even a single malicious server can cause the protocol to abort while seeing the output for itself, which in the context of an anonymous communication service would create a vulnerability to censorship and deanonymization attacks. To remedy this, we propose a new MPC implementation, HoneyBadgerMPC, that combines a robust online phase with an optimistic offline phase that is efficient enough to run continuously alongside the online phase. We use HoneyBadgerMPC to develop an application case study, called AsynchroMix, that provides an anonymous broadcast functionality. AsynchroMix features a novel MPC program that trades off between computation and communication, allowing for low-latency message mixing in varying settings. In a cloud-based distributed benchmark with 100 nodes, we demonstrate mixing a batch of 512 messages in around 20 seconds and up to 4096 messages in around two minutes.
Any Muanalifah, Sergeĭ Sergeev
ePrint Report
A tropical version of Stickel's key exchange protocol was suggested by Grigoriev and Shpilrain [2] and successfully attacked by Kotov and Ushakov [5]. We suggest some modifications of this scheme that use commuting matrices in tropical algebra and discuss possible attacks on these new modifications. We present some simple heuristic attacks on one of our new protocols, and then generalize the Kotov-Ushakov attack on Stickel's protocol and discuss the application of this generalised attack to all our new protocols.
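To fix ideas, here is a toy max-plus (tropical) Stickel-style exchange in Python. It uses plain powers of public matrices (which always commute), in the spirit of the original Grigoriev-Shpilrain setting, rather than the new commuting-matrix families proposed in this paper; the sizes and entries are arbitrary.

import numpy as np

def trop_mul(A, B):
    # Tropical (max-plus) matrix product: C[i, j] = max_k (A[i, k] + B[k, j]).
    n, p = A.shape[0], B.shape[1]
    C = np.empty((n, p))
    for i in range(n):
        for j in range(p):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C

def trop_pow(A, k):
    R = A
    for _ in range(k - 1):
        R = trop_mul(R, A)
    return R

rng = np.random.default_rng(0)
A, B, W = (rng.integers(-10, 10, (3, 3)).astype(float) for _ in range(3))  # public

a, b = 3, 5   # Alice's secret exponents
c, d = 4, 2   # Bob's secret exponents

U = trop_mul(trop_mul(trop_pow(A, a), W), trop_pow(B, b))   # Alice sends U
V = trop_mul(trop_mul(trop_pow(A, c), W), trop_pow(B, d))   # Bob sends V

K_alice = trop_mul(trop_mul(trop_pow(A, a), V), trop_pow(B, b))
K_bob   = trop_mul(trop_mul(trop_pow(A, c), U), trop_pow(B, d))
assert np.array_equal(K_alice, K_bob)   # both equal A^(a+c) (x) W (x) B^(b+d)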
Marco Calderini, Irene Villa
ePrint Report
The boomerang attack, introduced by Wagner in 1999, is a cryptanalysis technique against block ciphers based on differential cryptanalysis. In particular, it considers two differentials, one for the upper part of the cipher and one for the lower part, and exploits the dependency between these two differentials.

At Eurocrypt '18, Cid et al. introduced a new tool, called the Boomerang Connectivity Table (BCT), that simplifies this analysis. Boura and Canteaut then introduced an important parameter for cryptographic S-boxes called the boomerang uniformity, which is the maximum value in the BCT. Very recently, the boomerang uniformity of some classes of permutations (in particular quadratic functions) has been studied by Li, Qu, Sun and Li, and by Mesnager, Chunming and Maosheng.

In this paper we further study the boomerang uniformity of some non-quadratic differentially 4-uniform functions. In particular, we consider the case of the Bracken-Leander cubic function and three classes of 4-uniform functions constructed by Li, Wang and Yu, obtained by modifying the inverse function.
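Since the boomerang uniformity is the maximum BCT entry over nonzero input/output differences, it can be computed directly from the standard definition of Cid et al. The Python sketch below does this for an arbitrary small S-box; the 4-bit PRESENT S-box is used only as a public example to exercise the code.

def bct(sbox):
    # Boomerang Connectivity Table:
    # BCT[a][b] = #{ x : S^-1(S(x) ^ b) ^ S^-1(S(x ^ a) ^ b) == a }.
    n = len(sbox)
    inv = [0] * n
    for x, y in enumerate(sbox):
        inv[y] = x
    return [[sum(1 for x in range(n)
                 if inv[sbox[x] ^ b] ^ inv[sbox[x ^ a] ^ b] == a)
             for b in range(n)]
            for a in range(n)]

def boomerang_uniformity(sbox):
    # Maximum BCT entry over nonzero differences a, b.
    t = bct(sbox)
    n = len(sbox)
    return max(t[a][b] for a in range(1, n) for b in range(1, n))

PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
print(boomerang_uniformity(PRESENT_SBOX))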
Yuyang Zhou, Yuanfeng Guan, Zhiwei Zhang, Fagen Li
ePrint Report
At present, access control schemes in the power grid are centralized. In a centralized system, the data of the network sensor nodes is transmitted via centralized nodes, and the data itself may be illegally tampered with or lost, which reduces system reliability. To address this, we apply blockchain technology to the design of access control schemes. In this paper, we propose a blockchain-based access control scheme that is suitable for multiple scenarios in the smart grid. Our access control scheme is based on an identity-based combined encryption, signature and signcryption scheme. In addition, we design a consensus algorithm for the power system under the consortium blockchain architecture, which solves the key escrow problem of untrusted third parties. Our scheme also ensures the confidentiality, integrity, authentication and non-repudiation of the data. Compared with existing work, our scheme can use the same key pair to encrypt, sign and signcrypt messages, and has lower computation and communication costs in multiple smart grid scenarios.
Fukang Liu, Takanori Isobe, Willi Meier
ePrint Report
Subterranean 2.0, designed by Daemen, Massolino and Rotella, is a Round 1 candidate of the NIST Lightweight Cryptography Standardization process. In the official document of Subterranean 2.0, the designers give a cryptanalysis of state collisions in unkeyed absorbing when the number of rounds used to absorb a message is reduced from 2 to 1. However, no cryptanalysis of the authenticated encryption scheme Subterranean-SAE is given. For Subterranean-SAE, the designers introduce 8 blank rounds to separate the controllable input and output, and expect that 8 blank rounds achieve sufficient diffusion. It is therefore meaningful to investigate the security when the number of blank rounds is reduced. Moreover, the designers make no security claim in the nonce-misuse scenario, but expect a non-trivial effort to be needed to achieve full-state recovery there. In this paper, we present the first full-state recovery attack in the nonce-misuse scenario, with practical time complexity $2^{16}$. Moreover, in the nonce-respecting scenario, if the number of blank rounds is reduced to 4, we can mount a key-recovery attack with time complexity $2^{122}$ and data complexity $2^{69.5}$; a distinguishing attack can also be achieved with time and data complexity $2^{33}$. Our cryptanalysis does not threaten the security claims of Subterranean-SAE, and we hope it can help to further understand the security of Subterranean-SAE.
Chris Peikert, Zachary Pepin
ePrint Report
In recent years, there has been a proliferation of algebraically structured Learning With Errors (LWE) variants, including Ring-LWE, Module-LWE, Polynomial-LWE, Order-LWE, and Middle-Product LWE, and a web of reductions to support their hardness, both among these problems themselves and from related worst-case problems on structured lattices. However, these reductions are often difficult to interpret and use, due to the complexity of their parameters and analysis, and most especially their (frequently large) blowup and distortion of the error distributions.

In this paper we unify and simplify this line of work. First, we give a general framework that encompasses all proposed LWE variants (over commutative base rings), and in particular unifies all prior "algebraic" LWE variants defined over number fields. We then use this framework to give much simpler, more general, and tighter reductions from Ring-LWE to other algebraic LWE variants, including Module-LWE, Order-LWE, and Middle-Product LWE. In particular, all our reductions have easy-to-analyze effects on the error, and in some cases they even leave the error unchanged. A main message of our work is that it is straightforward to use the hardness of the original Ring-LWE problem as a foundation for the hardness of all other algebraic LWE problems defined over number fields, via simple reductions.
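For readers less familiar with the unstructured base problem that these algebraic variants generalize, the following Python sketch generates plain LWE samples $(A, b = As + e \bmod q)$; the toy dimensions and the crude centered-binomial error distribution are purely illustrative.

import numpy as np

def lwe_samples(n=8, m=16, q=97, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.integers(0, q, n)                # secret vector
    A = rng.integers(0, q, (m, n))           # uniform public matrix
    e = rng.binomial(8, 0.5, m) - 4          # small centered error
    b = (A @ s + e) % q                      # noisy inner products
    return A, b, s

A, b, s = lwe_samples()
print(A.shape, b[:4], s)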