## IACR News

Here you can see all recent updates to the IACR webpage.

#### 08 August 2019

###### Gwangbae Choi, Serge Vaudenay

ePrint Report
Timed-release encryption allows a sender to send a message that the receiver cannot decrypt until a server releases a time-bound key at the release time. The release time is usually assumed to be known to the receiver, so the ciphertext cannot be decrypted if the release time is lost. In this paper, we solve this problem by introducing a master time-bound key which can replace the time-bound key of any release time. We first present security models for timed-release encryption with a master time-bound key, and then present a provably secure construction based on the Weil pairing.

###### Igor Semaev, Andrea Tenti

ePrint Report
Gröbner basis methods are used to solve systems of polynomial equations over finite fields, but their complexity is poorly understood. In this work, an upper bound on the time complexity of constructing a Gröbner basis and finding the solutions of a system is proved. A key parameter in this estimate is the degree of regularity of the leading forms of the polynomials. We therefore provide an upper bound on the degree of regularity for a sufficiently overdetermined system of forms over any finite field. The bound holds with probability tending to 1 and depends only on the number of variables, the number of polynomials, and their degrees. Our results imply that sufficiently overdetermined systems of polynomial equations are solvable in polynomial time with high probability.

###### Gérald Gavin, Stéphane Bonnevay

ePrint Report
Many cryptographic constructions are based on the famous LWE problem \cite{LWERegev05}. In particular, this cryptographic problem is currently the most relevant for building FHE. In some LWE-based schemes, encrypting $x$ consists of randomly choosing a vector $c$ satisfying $\langle s,c\rangle=x+\textsf{noise}\pmod q$, where $s$ is a secret size-$n$ vector. While the vector sum is a homomorphic operator, such a scheme is intrinsically vulnerable to lattice-based attacks. To overcome this, we propose to define $c$ as a pair of vectors $(u,v)$ satisfying $\langle s,u\rangle/\langle s,v\rangle=x+\textsf{noise}\pmod q$.
This simple scheme is based on a new cryptographic problem, intuitively no easier than LWE, called Fractional LWE (FLWE).
While some homomorphic properties are lost, the secret vector $s$ can hopefully be chosen shorter, leading to more efficient constructions. We extensively study the hardness of FLWE. We first prove that the decision and search versions are equivalent provided $q$ is a \textit{small} prime. We then propose lattice-based cryptanalysis showing that $n$ could be chosen logarithmic in $\log q$.
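As a concrete illustration of the fractional encryption rule above, here is a toy Python sketch; the parameters, noise distribution, and sampling procedure are our own illustrative assumptions, not the paper's construction:

```python
import random

q = 2**13 - 1  # small prime modulus (illustrative parameters only)
n = 8          # secret dimension
B = 4          # noise bound

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % q

def keygen():
    # secret vector s; all coordinates nonzero so s[-1] is invertible mod q
    return [random.randrange(1, q) for _ in range(n)]

def sample_with_inner_product(s, t):
    """Sample a vector u with <s, u> = t (mod q), solving for the last coordinate."""
    u = [random.randrange(q) for _ in range(n - 1)]
    last = (t - sum(si * ui for si, ui in zip(s, u))) * pow(s[-1], -1, q) % q
    return u + [last]

def encrypt(s, x):
    e = random.randrange(-B, B + 1)                            # small noise
    v = sample_with_inner_product(s, random.randrange(1, q))   # <s,v> nonzero, hence invertible
    t = (x + e) * inner(s, v) % q                              # want <s,u>/<s,v> = x + e
    u = sample_with_inner_product(s, t)
    return (u, v)

def decrypt(s, ct):
    u, v = ct
    return inner(s, u) * pow(inner(s, v), -1, q) % q  # recovers x + e (mod q)

s = keygen()
x = 1000
noisy = decrypt(s, encrypt(s, x))
assert abs(noisy - x) <= B  # decryption is correct up to the small noise
```

Decryption returns $x$ plus the small noise; a real scheme would encode $x$ so that the noise can be rounded away.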

###### Thomas Haines, Clementine Gritti

ePrint Report
Verifiable electronic voting promises to ensure the correctness of elections even in the presence of a corrupt authority, while providing strong privacy guarantees. However, few practical systems with end-to-end verifiability are expected to offer long-term privacy, let alone guarantee it. Since good guarantees of privacy are essential to the democratic process, good guarantees of everlasting privacy must be a major goal of secure online voting systems.
Various currently proposed solutions rely on unusual constructions whose security has not been established. Further, the cost of verifying the zero knowledge proofs of other solutions has only been partially analysed. Our work builds upon Moran and Naor's solution---and its extensions, applications and generalisations---to present a scheme which is additively homomorphic, efficient to verify, and rests upon well studied assumptions.

###### Kai Chen, Zhongrui Lin, Jian Wan, Lei Xu, Chungen Xu

ePrint Report
With the rapid development of cloud computing, searchable encryption for the multiple-data-owner model (multi-owner model) draws much attention, as it enables data users to perform searches on encrypted cloud data outsourced by multiple data owners. However, several issues remain unsolved, such as precise query, fast query, the curse of dimensionality, and flexible system dynamic maintenance. To address these issues, this paper proposes a secure and efficient multi-keyword ranked search over encrypted cloud data for the multi-owner model based on searching adversarial networks (MRSM\_SAN). Specifically, we exploit searching adversarial networks to achieve optimal pseudo-keyword filling and obtain the optimal game equilibrium between query precision and privacy protection strength. To achieve fast query, a maximum likelihood search balanced tree is proposed, which brings the query complexity close to $O(\log N)$. We reduce data dimension with fast index clustering, and enable low-overhead system maintenance based on a balanced index forest. In addition, attribute-based encryption is used to achieve more secure and convenient key management as well as authorized access control. Compared with previous work, our solution maintains query precision above 95\% while ensuring adequate privacy protection, significantly improving search efficiency, enabling more flexible system dynamic maintenance, and reducing the overhead of computation and storage.

###### Michael Yonli

ePrint Report
Side channel attacks have demonstrated in the past that it is possible to break cryptographic algorithms by attacking the implementation rather than the algorithm.
This paper compares an adaptation of Paul Kocher's Differential Power Analysis (DPA) for AES with a multi-bit variant by attacking an AES128 implementation for an ATmega328P microcontroller board.
The results show that the use of multi-bit DPA can significantly reduce ghost peaks and allow for the recovery of a key with far fewer traces.

#### 06 August 2019

###### Mehdi Tibouchi, Alexandre Wallet

ePrint Report
As one of the most efficient lattice-based signature schemes, and one of the only ones to have seen deployment beyond an academic setting (e.g., as part of the VPN software suite strongSwan), BLISS has attracted a significant amount of attention in terms of its implementation security, and side-channel vulnerabilities of several parts of its signing algorithm have been identified in previous works. In this paper, we present an even simpler timing attack against it. The bimodal Gaussian distribution that BLISS is named after is achieved using a random sign flip during signature generation, and neither the original implementation of BLISS nor strongSwan ensures that this sign flip is carried out in constant time. It is therefore possible to recover the corresponding sign through side-channel leakage (using, e.g., cache attacks or branch tracing). We show that obtaining this single bit of leakage (for a moderate number of signatures) is in fact sufficient for a full key recovery attack. The recovery is carried out using a maximum likelihood estimation on the space of parameters, which can be seen as a statistical manifold. The analysis of the attack thus reduces to the computation of the Fisher information metric.

#### 05 August 2019

###### Vasyl Ustimenko

ePrint Report
Non-commutative cryptography studies cryptographic primitives and systems which are based on algebraic structures like groups, semigroups and noncommutative rings. We continue to investigate inverse protocols of non-commutative cryptography defined in terms of subsemigroups of Affine Cremona Semigroups over finite fields or arithmetic rings $Z_m$, and homomorphic images of these semigroups, as possible instruments of post-quantum cryptography. This approach allows the construction of cryptosystems which are not public-key: as outputs of the protocol, correspondents receive mutually inverse transformations on the affine space $K^n$ or the variety $(K^*)^n$, where $K$ is a field or an arithmetic ring.
The security of such an inverse protocol rests on the complexity of the word problem of decomposing an element of an Affine Cremona Semigroup, given in its standard form, into a composition of given generators. We discuss the idea of combining two cryptosystems with cipherspaces $(K^*)^n$ and $K^n$ to form a new cryptosystem with plainspace $(K^*)^n$, ciphertext space $K^n$, and a nonbijective, highly nonlinear encryption map.

###### Runchao Han, Haoyu Lin, Jiangshan Yu

ePrint Report
Atomic Swap enables two parties to atomically exchange their cryptocurrencies without trusted third parties. However, it has been pointed out that an Atomic Swap is equivalent to an American Call Option without the premium, and is thus unfair to the swap participant. In this paper, we investigate the (un)fairness of the Atomic Swap protocol. First, we quantify the unfairness of Atomic Swap and compare it with that of conventional financial assets (stocks and fiat currencies). The quantification results show that Atomic Swaps are much more unfair on cryptocurrencies than on stocks and fiat currencies in the same setting. Second, we use the conventional Cox-Ross-Rubinstein option pricing model from finance to estimate the premium, and show that the estimated premium for cryptocurrencies is 2% to 3% of the asset value, while the premium for stocks and fiat currencies is approximately 0.3%. Third, we propose two fair Atomic Swap protocols, one for currency exchange and the other for American Call Options. Our protocols are based on the original Atomic Swap protocol but implement the premium mechanism. Blockchains supporting smart contracts, such as Ethereum, support our protocols directly. Blockchains supporting only scripts, such as Bitcoin, can support our protocols by adding a simple opcode. Last, we give a reference implementation of our protocols in Solidity, and give detailed instructions on implementing our protocols with Bitcoin script.
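For readers unfamiliar with the Cox-Ross-Rubinstein model mentioned above, a minimal binomial-tree pricer for an American call might look as follows (the parameters in the usage note are purely illustrative; the paper's premium estimates rely on its own market data):

```python
import math

def crr_american_call(S0, K, T, r, sigma, steps):
    """Price an American call option with the Cox-Ross-Rubinstein binomial tree.

    S0: spot price, K: strike, T: maturity in years,
    r: risk-free rate, sigma: volatility, steps: tree depth.
    """
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))       # up factor
    d = 1 / u                                 # down factor
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = math.exp(-r * dt)
    # option values at maturity
    values = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    # backward induction: max of discounted continuation value and early exercise
    for i in range(steps - 1, -1, -1):
        values = [
            max(disc * (p * values[j + 1] + (1 - p) * values[j]),
                S0 * u**j * d**(i - j) - K)
            for j in range(i + 1)
        ]
    return values[0]
```

With a one-week at-the-money option, a high-volatility asset (as cryptocurrencies tend to be) yields a noticeably larger premium relative to asset value than a low-volatility one, which is consistent with the gap the authors report between cryptocurrencies and stocks.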

###### Jintai Ding, Zheng Zhang, Joshua Deaton, Vishakha

ePrint Report
In 2017, Kyung-Ah Shim et al. proposed a multivariate signature scheme called Himq-3, a submission to the National Institute of Standards and Technology (NIST) standardization process for post-quantum cryptosystems. The Himq-3 signature scheme belongs to the oil-vinegar signature scheme family. It has a multilayer structure, but it uses a cycle system to invert the central map. The signing process of Himq-3 is very fast, and it produces small signatures.

In this paper, we present a cryptanalysis of Himq-3. We show that inherent to the signing process is a leakage of information about the private key. Using this information, one can forge a signature.

###### Fatih Balli, F. Betül Durak, Serge Vaudenay

ePrint Report
We design a suite of protocols so that a small tamper-resistant device can be used as a biometric identity document which can be scanned by authorized terminals. We target both strongly secure identification and strong privacy. Unlike biometric passports, our protocols leak no digital evidence and are essentially deniable. Besides, getting the identity information from the device requires going through access control. Access control can follow either a strong PKI-based path or a weak password-based path which offer different functionalities. We implemented our protocols on JavaCard using finger-vein recognition as a proof of concept.

###### Thomas Pornin

ePrint Report
A new implementation of Falcon is presented. It solves longstanding issues in the existing reference code: the new implementation is constant-time, does not require floating-point hardware (though it can use such hardware for better performance), uses less RAM, and achieves much better performance on both large systems (x86 with Skylake cores, POWER8, ...) and small microcontrollers (ARM Cortex-M4). In particular, signature generation with Falcon-512 takes less than 390k cycles on a Skylake (only 82k cycles for verification), and about 19.4 million cycles on an ARM Cortex-M4.

###### Patrick Kresmer, Alexander Zeh

ePrint Report
We propose a new nonce-misuse-resistant authenticated encryption scheme, which instantiates the SIV paradigm of Rogaway and Shrimpton. In contrast to the GCM-SIV approach proposed by Gueron and Lindell, we use only a single type of cryptographic primitive, which can be advantageous on restricted embedded devices. Furthermore, we use three independent and fixed subkeys derived from a single master key. Similar to the CCM mode, our scheme uses a combination of the CTR mode for symmetric encryption and a MAC based on the CBC construction, and is therefore called CCM-SIV. We provide a detailed security proof for our scheme. Furthermore, we outline its extension to nonce-based key derivation, as in the AES-GCM-SIV approach.
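The SIV composition described above (a MAC over the inputs serving as the synthetic IV for CTR encryption) can be sketched as follows. Note that CCM-SIV itself uses AES-based CTR and CBC-MAC; this dependency-free sketch substitutes HMAC-SHA256 as a stand-in PRF purely to show the structure:

```python
import hashlib
import hmac

def prf(key, data):
    # stand-in PRF; CCM-SIV uses AES-based primitives instead
    return hmac.new(key, data, hashlib.sha256).digest()

def ctr_keystream(key, iv, length):
    # counter-mode keystream derived from the synthetic IV
    out = b""
    counter = 0
    while len(out) < length:
        out += prf(key, iv + counter.to_bytes(8, "big"))
        counter += 1
    return out[:length]

def siv_encrypt(k_mac, k_enc, nonce, ad, msg):
    tag = prf(k_mac, nonce + ad + msg)[:16]        # synthetic IV = MAC of all inputs
    ks = ctr_keystream(k_enc, tag, len(msg))
    ct = bytes(a ^ b for a, b in zip(msg, ks))
    return tag, ct

def siv_decrypt(k_mac, k_enc, nonce, ad, tag, ct):
    ks = ctr_keystream(k_enc, tag, len(ct))
    msg = bytes(a ^ b for a, b in zip(ct, ks))
    if not hmac.compare_digest(prf(k_mac, nonce + ad + msg)[:16], tag):
        raise ValueError("authentication failed")
    return msg
```

Because the IV is a deterministic function of nonce, associated data, and message, repeating a nonce leaks only whether two messages are equal, which is the misuse-resistance property the paper targets.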

###### Vincenzo Botta, Daniele Friolo, Daniele Venturi, Ivan Visconti

ePrint Report
We investigate the security of smart contracts within a blockchain that can fork (as Bitcoin and Ethereum can). In particular, we focus on multi-party computation (MPC) protocols run on-chain with the aid of smart contracts, and observe that honest players face the following dilemma: should I rush, sending the protocol's messages based on the current view of the blockchain, or rather wait until a message is confirmed on the chain before sending the next one?

To the best of our knowledge, the (implicit) default option used in previous work is the second one, and thus known on-chain MPC protocols take a long time to execute on blockchains with a long confirmation time (e.g., 1 hour per transaction in Bitcoin). While the first option would clearly be preferable for efficiency, we show that this is not necessarily the case for security, as there are natural examples of on-chain MPC protocols that simply become insecure in the presence of rushing players.

Our contributions are twofold:

-- For the concrete case of fairly tossing multiple coins with penalties, we show that the lottery protocol of Andrychowicz et al. (S&P '14) becomes insecure in the presence of rushing players. In addition, we present a new protocol that instead retains security even if the players are rushing.

-- We design a compiler that takes any on-chain MPC protocol and transforms it into another one (for the same task) that remains secure even in the presence of rushing players. The only (unavoidable) requirement is that honest players start to be rushing after the first round of the protocol (by all players) has been confirmed on the blockchain.

Our techniques are inspired by ideas on resettably secure computation (Goyal and Sahai, EUROCRYPT '09). We also provide a prototype implementation of our coin tossing protocol using Ethereum smart contracts, and instantiate our generic compiler in a concrete setting, showing that both our constructions yield considerable improvements in terms of efficiency.

###### Samuel Dobson, Steven D. Galbraith, Jason LeGrow, Yan Bo Ti, Lukas Zobernig

ePrint Report
In this note, we present an adaptive attack on the 2-SIDH protocol that requires polynomial time and memory.
The 2-SIDH protocol is a special instance of the countermeasure proposed by Azarderakhsh, Jao and Leonardi to perform isogeny-based key exchange with static keys in the presence of an adaptive attack.
This countermeasure has also been recently explicitly proposed by Kayacan.

Our attack extends the adaptive attack by Galbraith, Petit, Shani and Ti (GPST) by recovering a static secret using malformed points. The extension of GPST is non-trivial and requires learning more information. In particular, the attack needs to recover intermediate elliptic curves in the isogeny path, and points on them. We use this extra information to show how the attacker recovers the secret isogeny path from a partial path.

###### Anders Dalskov, Marcel Keller, Claudio Orlandi, Kris Shrishak, Haya Shulman

ePrint Report
A surge in DNS cache poisoning attacks in recent years has generated an incentive to push the deployment of DNSSEC forward. ICANN-accredited registrars are required to support DNSSEC signing for their customers, and the number of signed domains is slowly increasing. Yet with the increase in the number of signed domains, the number of vulnerable DNSSEC deployments is also increasing. Due to the lack of support for other, more efficient algorithms, the most popular cryptographic algorithm remains RSA. Furthermore, to avoid overhead, network operators typically use short keys ($\leq 1024$ bits) which are no longer considered secure.

In this work, we propose automated DNSSEC key generation and zone-file signing with threshold ECDSA. We show that a generic transformation suffices to turn essentially any MPC protocol into an equally secure and efficient protocol that computes ECDSA signatures in a threshold setting. The generality of this approach means that DNS operators can pick from a variety of existing efficient MPC solutions which satisfy different security/availability trade-offs. We stress that several of these options were not supported by any previous solution (as new protocols would have had to be designed for each scenario). We benchmark all the protocols achievable from our transformation. Moreover, as many of the underlying MPC protocols naturally support preprocessing, so does our threshold ECDSA solution (in a way that is independent of both the DNS zone being signed and the key being used to sign it). We argue that this sort of preprocessing is crucial for pushing deployment of DNSSEC, as it allows DNS operators to sign requests with almost no overhead, compared to the common approach where one operator is completely in charge of their customers' keys.

Depending on the security level and the network configuration, our protocols can preprocess tens, hundreds, or even thousands of signatures per second. The online time for signing then essentially matches the RTT for all but the LAN configuration (where signing is still incredibly fast at less than 0.3ms). When comparing with prior work at the same security level, our protocol is never slower and is significantly faster in many configurations. For instance, we can generate 4 times as many signatures per second in the WAN setting. Finally, we perform the first study to measure the extent to which multiple DNS operators are used on the Internet, and we integrate our novel threshold ECDSA protocols into a DNS application.

###### Mustafa Khairallah

ePrint Report
In this article, we analyze two of the NIST Round 1 candidates for the Lightweight Cryptography Standardization Process: COMET and mixFeed. We show how AEAD modes based on rekeying can be modelled as modes without rekeying in the multi-key setting, where every nonce is treated as a different user. We then show that the security degradation due to weak keys in the multi-key setting also affects these modes in the single-key setting. Finally, we show how the weak-key analysis of both modes may be applied.

###### Paul Bottinelli, Robert Lambert

ePrint Report
The increasing communication capabilities of vehicles are paving the way for promising road safety and traffic management applications. But the rise of connected vehicles also potentially introduces many security and privacy concerns. Thus, a vision of a successful cooperative vehicular network relies on strong security properties. Proposals such as the Security Credential Management System (SCMS) fulfil these security requirements with the concept of pseudonym certificates, relying on a large-scale PKI. But since the on-board units performing these cryptographic operations are usually resource-constrained devices, it is important to consider ways to optimize and devise efficient implementations of the proposed algorithms.

In this work, we study optimizations on the mathematical and algorithmic aspects of the validation of implicit certificates and the verification of ECDSA signatures used in the SCMS. We propose efficient algorithms to validate batches of implicit certificates, providing significant savings compared to the sequential validation of the individual certificates. We also propose optimizations to the verification of ECDSA signatures when the verification is performed with an implicit certificate. Although we focus our work on the SCMS and V2X communications, our contributions are more general and apply to every system combining ECQV and ECDSA.

###### T-H. Hubert Chan, Rafael Pass, Elaine Shi

ePrint Report
Although Byzantine Agreement (BA) has been studied for three decades, perhaps somewhat surprisingly, there still exist significant gaps in our understanding regarding its round complexity. First, although expected constant-round protocols are known in the honest-majority setting, it is unclear whether one has to settle for expected constant-round, or whether there exist better protocols that are worst-case constant-round. Second, for the corrupt-majority setting, the existence of sublinear-round BA protocols continues to elude us, except for the narrow regime where only sublinearly more than half of the nodes are corrupt.

In this paper, we make a couple of important steps forward in bridging this gap.

We show two main results:

- No protocol (even a randomized one) that completes in worst-case $o\left(\log(1/\delta)/\log \log(1/\delta)\right)$ rounds can achieve BA with probability $1-\delta$, even when only 1% of the nodes are corrupt. In comparison, known expected constant-round, honest-majority protocols complete in $O(\log(1/\delta))$ rounds in the worst case. Therefore, our lower bound is tight up to a $\log\log$ factor for the honest-majority setting.

- There exists a corrupt-majority BA protocol that terminates in $O(\log(1/\delta)/\epsilon)$ rounds in the worst case and tolerates a $(1-\epsilon)$ fraction of corrupt nodes. Our upper bound is optimal up to a logarithmic factor in light of the elegant $\Omega(1/\epsilon)$ lower bound by Garay et al. (FOCS '07).

#### 01 August 2019

###### Aurore Guillevic, Shashank Singh

ePrint Report
In this paper, we provide a notable step towards filling the gap between theory (estimates of running time) and practice (a discrete logarithm record computation) for the Tower Number Field Sieve (TNFS) algorithm. We propose a generalisation of the ranking formula for selecting the polynomials used in the very first step of the TNFS algorithm. For this, we provide a definition and an exact implementation (Magma and SageMath) of the alpha function. This function measures the bias in the smoothness probability of norms in number fields compared to random integers of the same size. We use it to estimate the yield of polynomials, that is, the expected number of relations, as a generalisation of Murphy's E function, and finally the total number of operations needed to compute a discrete logarithm with the TNFS algorithm in the targeted fields.
This improves on the earlier work of Barbulescu and Duquesne on estimating the running time of the algorithm. We apply our estimates to a wide size range of finite fields GF(p^n), for small composite n = 12, 16, 18, 24, that are target fields of pairing-friendly curves.