International Association for Cryptologic Research


IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

01 September 2020

Daniel Shumow
ePrint Report
When generating primes $p$ and $q$ for an RSA key, the algorithm specifies that they should be checked to ensure that $p-1$ and $q-1$ are relatively prime to the public exponent $e$, and regenerated if this is not the case. If this check is not done, then the calculation of the decrypt exponent will fail. However, what if a software bug allows the generation of the public parameters $N$ and $e$ of an RSA key with this property, and the key is subsequently used for encryption? Though this may seem like a purely academic question, a software bug in the RSA key generation implementation in the CNG API of a preview release of the Windows 10 operating system makes it of more than hypothetical interest. Without a well-defined decrypt exponent, plaintexts encrypted to such keys are undecryptable, thus potentially losing user data, a serious software defect. Though the decrypt exponent is no longer well defined, it is in fact possible to recover the plaintext, or a small number of potential plaintexts, if the prime factors $p$ and $q$ of the public modulus $N$ are known. This paper presents an analysis of which steps of the RSA algorithm fail and uses this analysis to give a plaintext recovery algorithm. The runtime of the algorithm scales linearly with the magnitude of the public exponent; in practice this is manageable, as only a few small public exponents are commonly used. The algorithm has been implemented in a publicly available Python script. We further discuss the software bug that led to this defect and derive lessons for testing randomized functions in cryptographic software. Specifically, we derive an explicit formula that describes the trade-off between the number of test iterations of a randomized cryptographic function and the potential number of users affected by a bug dependent on the random values.
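The failing step can be illustrated in a few lines of Python (a minimal sketch with toy primes, not the CNG bug's actual parameters):

```python
from math import gcd

# The check the buggy key generator skipped: p-1 and q-1 must be
# coprime to e, otherwise the decrypt exponent d = e^(-1) mod
# lcm(p-1, q-1) does not exist.
def exponent_ok(p, q, e):
    return gcd(e, p - 1) == 1 and gcd(e, q - 1) == 1

p, q, e = 7, 11, 3          # toy primes: gcd(3, p-1) = gcd(3, 6) = 3
assert not exponent_ok(p, q, e)

try:
    d = pow(e, -1, (p - 1) * (q - 1))   # computing d fails outright
except ValueError:
    d = None                             # no modular inverse exists
assert d is None
```

With a well-chosen pair such as `p, q = 5, 11` the check passes and `d` is well defined; the paper's recovery algorithm addresses the case where the check was skipped and such a `d` never existed.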
João Diogo Duarte
ePrint Report
The Joux-Vitse Crossbred algorithm aims to efficiently solve systems of semi-regular multivariate polynomials. For random polynomials, this problem is believed to be hard for a classical computer. In this work, the algorithm is described in detail, a bivariate generating series used to test the admissibility of parameters over F_2 is investigated, and from this a new series for F_q, q > 2, is presented. In addition, a complexity estimate is given for both F_2 and F_q, q > 2, and compared to an estimate presented by Samardjiska et al., whose estimate is shown to be an upper bound for the Crossbred algorithm. Using optimal parameters obtained from the previous results, the cost of theoretically applying Crossbred, FES and Hybrid-F5 to polynomial systems of various sizes, numbers of variables and field sizes q was plotted. Overall, it was determined that for larger fields, the Crossbred algorithm provides an improved asymptotic complexity over FES. However, it does not provide any improvement over Hybrid-F5.
Jonas Nick, Tim Ruffing, Yannick Seurin, Pieter Wuille
ePrint Report
MuSig is a multi-signature scheme for Schnorr signatures, which supports key aggregation and is secure in the plain public key model. Standard derandomization techniques for discrete logarithm-based signatures such as RFC 6979, which make the signing procedure immune to catastrophic failures in the randomness generation, are not applicable to multi-signatures as an attacker could trick an honest user into producing two different partial signatures with the same randomness, which would reveal the user's secret key.

In this paper, we propose a variant of MuSig in which signers generate their nonce deterministically as a pseudorandom function of the message and all signers' public keys and prove that they did so by providing a non-interactive zero-knowledge proof to their cosigners. The resulting scheme, which we call MuSig-DN, is the first Schnorr multi-signature scheme with deterministic signing. Therefore its signing protocol is robust against failures in the randomness generation as well as attacks trying to exploit the statefulness of the signing procedure, e.g., virtual machine rewinding attacks. As an additional benefit, a signing session in MuSig-DN requires only two rounds instead of three as required by all previous Schnorr multi-signatures including MuSig. To instantiate our construction, we identify a suitable algebraic pseudorandom function and provide an efficient implementation of this function as an arithmetic circuit. This makes it possible to realize MuSig-DN efficiently using zero-knowledge proof frameworks for arithmetic circuits which support inputs given in Pedersen commitments, e.g., Bulletproofs. We demonstrate the practicality of our technique by implementing it for the secp256k1 elliptic curve used in Bitcoin.
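The randomness-reuse failure motivating this design can be made concrete with a small worked example (a sketch using the secp256k1 group order; the secret key, nonce, and challenges are made-up toy values): a Schnorr partial signature has the form $s = k + c \cdot x$, so two signatures with the same nonce $k$ but different challenges leak the secret key $x$ immediately.

```python
# If the same nonce k is used with two different challenges c1 != c2:
#   s1 - s2 = (c1 - c2) * x   =>   x = (s1 - s2) / (c1 - c2) mod n
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

x  = 0x7E57ED_5EC12E7   # toy secret key
k  = 0xDEADBEEF         # the reused nonce
c1 = 0x1111
c2 = 0x2222

s1 = (k + c1 * x) % n   # two partial signatures with the same k
s2 = (k + c2 * x) % n

recovered = (s1 - s2) * pow(c1 - c2, -1, n) % n
assert recovered == x   # the attacker has the secret key
```

This is exactly why naive derandomization is dangerous for multi-signatures: a malicious cosigner can induce two different challenges for the same deterministic nonce, which MuSig-DN prevents by proving correct nonce derivation in zero knowledge.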
Wei-Zhu Yeoh, Je Sen Teh, Jiageng Chen
ePrint Report
Differential cryptanalysis of block ciphers requires the identification of differential characteristics (trails) that occur with high probability. The effort required to search for these characteristics increases exponentially with the block size and the number of rounds. Differentials, which are clusters of differential characteristics sharing the same input and output differences, can be constructed to improve the overall distinguishing probability, and thus the efficiency of a key recovery attack. Matsui's branch-and-bound algorithm, which automates the search for differential characteristics, is commonly used to construct these differentials. However, the algorithm is inefficient when enumerating a large number of characteristics, especially for block ciphers with large block sizes or many rounds. In this paper, we improve the differential search by proposing a GPU-accelerated branch-and-bound framework that is more efficient and cost-effective than its CPU counterpart. Efficiency is optimized by parallelizing all operations involved in a typical branch-and-bound algorithm and completing the entire search without transferring intermediate results back to the CPU. The meet-in-the-middle (MITM) approach is also adopted for further performance gains. We analyze the proposed framework in terms of both financial and computational costs, based on simulations in the Google Cloud VM environment. When optimizing for performance, the proposed framework achieves up to a 90x speedup while saving up to 47% of the running cost compared to a single CPU core. When optimizing for cost, it saves up to 83% of the running cost while retaining a speedup of up to 40x. The proposed framework is then applied to 128-bit TRIFLE-BC, 64-bit PRESENT and 64-bit GIFT, leading to the discovery of improved differentials.
Notably, we identified the best differentials for PRESENT and 64-bit GIFT to date, with probabilities of $2^{-61.7964}$ and $2^{-60.66}$ for 16 and 13 rounds respectively. Although the differential probability for 43 rounds of TRIFLE-BC was not significantly improved, we were able to construct larger differentials with approximately 5.8x more characteristics than existing ones. Thus, the proposed GPU framework is currently the most efficient approach for enumerating 128-bit block cipher differentials, especially for a large number of rounds.
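The branch-and-bound idea being accelerated can be sketched on a toy scale (a heavily simplified, CPU-only illustration using the PRESENT S-box with no linear layer, not the paper's GPU framework):

```python
from itertools import product

# Toy cipher: apply one 4-bit S-box per round. Find the best r-round
# single-characteristic probability by depth-first search with pruning.
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT S-box

# Difference distribution table: DDT[a][b] = #{x : S[x ^ a] ^ S[x] = b}
DDT = [[0] * 16 for _ in range(16)]
for x, a in product(range(16), range(16)):
    DDT[a][SBOX[x ^ a] ^ SBOX[x]] += 1

def best_prob(diff, rounds, acc, best):
    """Matsui-style pruning: abandon any branch that cannot beat `best`,
    since all future transition probabilities are at most 1."""
    if rounds == 0:
        return max(best, acc)
    for out in range(1, 16):
        p = DDT[diff][out] / 16
        if p == 0.0 or acc * p <= best:
            continue
        best = best_prob(out, rounds - 1, acc * p, best)
    return best

best = max(best_prob(a, 3, 1.0, 0.0) for a in range(1, 16))
```

In the GPU setting described above, the branches of this tree are explored by many threads in parallel, with the bound shared so that pruning still applies.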
Santi J. Vives
ePrint Report
A new post-quantum, hash-based signature (HBS) scheme is introduced. In known HBS schemes, the size and cost of each signature increase with the total number of messages one wishes to authenticate. In real-world applications requiring large volumes of signatures, such schemes can become impractical. This paper studies HBS in a blockchain: a public, decentralized database. The proposed HBS scheme shows that, when all signatures are known, quite the opposite is possible: the signatures can become more efficient as their number grows. Authenticating large volumes of messages turns out to be less costly on average than authenticating only a few.
Ben Smyth
ePrint Report
We show that verifiable voting systems require a security notion beyond individual- and universal-verifiability plus cast-as-intended.
Anders Dalskov, Eysa Lee, Eduardo Soria-Vazquez
ePrint Report
At CRYPTO 2018, Cascudo et al. introduced Reverse Multiplication Friendly Embeddings (RMFEs). These are a mechanism to compute $\delta$ parallel evaluations of the same arithmetic circuit over a field $\mathbb{F}_q$ at the cost of a single evaluation of that circuit in $\mathbb{F}_{q^d}$, where $\delta < d$. Due to this inequality, RMFEs are a useful tool when protocols need to work over $\mathbb{F}_{q^d}$ but one is only interested in computing over $\mathbb{F}_q$. In this work we introduce Circuit Amortization Friendly Encodings (CAFEs), which generalize RMFEs while having concrete efficiency in mind. For a Galois Ring $R = GR(2^{k}, d)$, CAFEs allow computing certain circuits over $\mathbb{Z}_{2^k}$ at the cost of a single secure multiplication in $R$. We present three CAFE instantiations, which we apply to the protocol for MPC over $\mathbb{Z}_{2^k}$ via Galois Rings by Abspoel et al. (TCC 2019). Our protocols allow for efficient switching between the different CAFEs, as well as between computation over $GR(2^{k}, d)$ and $\mathbb{F}_{2^{d}}$ in a way that preserves the CAFE in both rings. This adaptability leads to efficiency gains for, e.g., machine learning applications, which can be represented as highly parallel circuits over $\mathbb{Z}_{2^k}$ followed by bit-wise operations. From an implementation of our techniques, we estimate that an SVM can be evaluated on 250 images in parallel up to $7\times$ more efficiently, compared to the protocol from Abspoel et al. (TCC 2019).
Jean-Philippe Aumasson, Omer Shlomovits
ePrint Report
Threshold wallets leverage threshold signature schemes (TSS) to distribute signing rights across multiple parties when issuing blockchain transactions. These provide greater assurance against insider fraud, and are sometimes seen as an alternative to methods using a trusted execution environment to issue the signature. This new class of applications motivated researchers to discover better protocols, entrepreneurs to create start-up companies, and large organizations to deploy TSS-based solutions. For example, the leading cryptocurrency exchange (in transaction volume) adopted TSS to protect some of its wallets.

Although the TSS concept is not new, this is the first time that so many TSS implementations are written and deployed in such a critical context, where all liquidity reserves could be lost in a minute if the crypto fails. Furthermore, TSS schemes are sometimes extended or tweaked to best adapt to their target use case---what could go wrong?

This paper, based on the authors' experience with building and analyzing TSS technology, describes three different attacks on TSS implementations used by leading organizations. Unlike security analyses of on-paper protocols, this work targets TSS as deployed in real applications, and exploits logical vulnerabilities enabled by the extra layers of complexity added by TSS software. The attacks have concrete applications, and could for example have been exploited to empty an organization's cold wallet (typically worth at least an 8-digit dollar figure). Indeed, one of our targets is the cold wallet system of the biggest cryptocurrency exchange (which has been fixed after our disclosure).
Phil Hebborn, Baptiste Lambin, Gregor Leander, Yosuke Todo
ePrint Report
So far, only methods for estimating upper bounds on the algebraic degree of block ciphers are known, and these are of little use to a designer wishing to guarantee security. In this paper we provide meaningful lower bounds on the algebraic degree of modern block ciphers.
Arpita Patra, Divya Ravi, Swati Singla
ePrint Report
The two traditional streams of multiparty computation (MPC) protocols consist of (a) protocols achieving guaranteed output delivery (god) or fairness (fn) in the honest-majority setting and (b) protocols achieving unanimous or selective abort (ua, sa) in the dishonest-majority setting. The favorable presence of an honest majority amongst the participants is necessary to achieve the stronger notions of god or fn. While constructions of each type abound in the literature, one class of protocols does not seem to withstand the threat model of the other. For instance, the honest-majority protocols do not guarantee privacy of the inputs of the honest parties in the face of a dishonest majority, and likewise the dishonest-majority protocols cannot achieve god or fn while tolerating even a single corruption, let alone a dishonest minority. The promise of the unconventional yet much sought-after species of MPC termed `Best-of-Both-Worlds' (BoBW) is to offer the best possible security depending on the actual corruption scenario.

This work nearly settles the exact round complexity of two classes of BoBW protocols differing on the security achieved in the honest-majority setting, namely god and fn respectively, under the assumption of no setup (plain model), public setup (CRS) and private setup (CRS + PKI or simply PKI). The former class necessarily requires the number of parties to be strictly more than the sum of the bounds of corruptions in the honest-majority and dishonest-majority setting, for a feasible solution to exist. Demoting the goal to the second-best attainable security in the honest-majority setting, the latter class needs no such restriction.

Assuming a network with pair-wise private channels and a broadcast channel, we show that 5 and 3 rounds are necessary and sufficient for the class of BoBW MPC with fn under the assumption of `no setup' and `public and private setup' respectively. For the class of BoBW MPC with god, we show necessity and sufficiency of 3 rounds for the public setup case and 2 rounds for the private setup case. In the no setup setting, we show the sufficiency of 5 rounds, while the known lower bound is 4. All our upper bounds are based on polynomial-time assumptions and assume black-box simulation. With distinct feasibility conditions, the classes differ in terms of the round requirement. The bounds in some cases differ from, and on a positive note exceed by at most one, the maximum of the round requirements of the honest-majority and dishonest-majority settings. Our results remain unaffected when security with abort and fairness are upgraded to their identifiable counterparts.
Stefano Barbero, Emanuele Bellini, Rusydi Makarim
ePrint Report
We show that the underlying permutation of the ChaCha20 stream cipher does not behave as a random permutation for up to 17 rounds with respect to rotational cryptanalysis. In particular, we derive a lower and an upper bound for the rotational probability through the ChaCha quarter round, and we show how to extend the bound to a full round and then to the full permutation. The obtained bounds show that the probability of finding what we call a parallel rotational collision is, for example, less than $2^{-488}$ for 17 rounds of the ChaCha permutation, while for a random permutation of the same input size this probability is $2^{-511}$. We remark that our distinguisher is not an attack on the ChaCha20 stream cipher, but rather a theoretical analysis of its internal permutation from the point of view of rotational cryptanalysis.
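A small experiment (an illustrative sketch, not part of the paper) shows where the probabilistic behavior that rotational cryptanalysis exploits comes from: in ARX designs such as ChaCha, rotation commutes exactly with XOR, but only probabilistically with modular addition.

```python
import random

# For rotation by 1 on n-bit words, the rotational probability of one
# modular addition is 1/4 * (1 + 2^-1 + 2^(1-n) + 2^-n) ~ 3/8 = 2^-1.415.
MASK = (1 << 32) - 1

def rotl1(x):
    return ((x << 1) | (x >> 31)) & MASK

random.seed(1)
trials, hits = 100_000, 0
for _ in range(trials):
    x, y = random.getrandbits(32), random.getrandbits(32)
    if rotl1((x + y) & MASK) == (rotl1(x) + rotl1(y)) & MASK:
        hits += 1

x, y = random.getrandbits(32), random.getrandbits(32)
assert rotl1(x ^ y) == rotl1(x) ^ rotl1(y)   # XOR preserves rotations exactly

prob = hits / trials   # empirically close to 3/8
```

Chaining such per-addition probabilities through the quarter round and the full permutation is what yields bounds like the $2^{-488}$ figure quoted above.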
Kai Hu, Siwei Sun, Meiqin Wang, Qingju Wang
ePrint Report
Since it was proposed in 2015 as a generalization of integral properties, the division property has evolved into a powerful tool for probing the structures of Boolean functions whose algebraic normal forms are not available. We capture the most essential elements for the detection of division properties from a pure algebraic perspective, proposing a technique named monomial prediction, which can be employed to determine the presence or absence of a monomial in any product of the coordinate functions of a vectorial Boolean function $\boldsymbol f$ by counting the number of so-called monomial trails across a sequence of simpler functions whose composition is $\boldsymbol f$. Under the monomial prediction framework, we formally prove that most algorithms for detecting division properties in the literature raise no false alarms, but may miss some properties. We also establish the equivalence between monomial prediction and the three-subset bit-based division property without unknown subset presented at EUROCRYPT 2020, and show that these two techniques are perfectly accurate.

The monomial prediction technique can be regarded as a purification of the definitions of the division properties without resorting to external multisets. This algebraic formulation gives more insights into division properties and inspires new search strategies. With the monomial prediction, we obtain the exact algebraic degrees of TRIVIUM up to 834 rounds for the first time. In the context of cube attacks, we are able to explore a larger search space in limited time and recover the exact algebraic normal forms of complex superpolies with the help of a divide-and-conquer strategy. As a result, we identify more cubes with smaller dimensions, leading to improvements of some near-optimal attacks against 840-, 841- and 842-round TRIVIUM.
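On a toy scale, the presence or absence of a monomial in a Boolean function, and hence its algebraic degree, can be decided exactly via the binary Möbius transform (a small illustrative sketch; the paper's technique is precisely what makes this feasible when the ANF is too large to compute directly):

```python
# Recover the algebraic normal form of a 3-bit Boolean function from its
# truth table, then read off which monomials are present.
def f(x0, x1, x2):
    return x0 & x1 ^ x2        # ANF: x0*x1 + x2

n = 3
tt = [f((u >> 0) & 1, (u >> 1) & 1, (u >> 2) & 1) for u in range(2 ** n)]

anf = tt[:]                    # in-place binary Moebius transform
for i in range(n):
    for u in range(2 ** n):
        if u & (1 << i):
            anf[u] ^= anf[u ^ (1 << i)]

# Each index u with anf[u] == 1 encodes a present monomial (bitmask of
# variables); here: 0b011 -> x0*x1 and 0b100 -> x2.
monomials = {u for u in range(2 ** n) if anf[u]}
degree = max(bin(u).count("1") for u in monomials)
```

For TRIVIUM-scale functions this table is astronomically large, which is why monomial trails are counted through a composition of simpler round functions instead.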
Yiwen Gao, Yongbin Zhou
ePrint Report
Side-channel attacks are one of the greatest practical threats to security-related applications, because they are capable of breaking ciphers that are assumed to be mathematically secure. Many studies have been devoted to power or electro-magnetic (EM) analysis against desktop CPUs, mobile CPUs (including ARM, MSP, AVR, etc.) and FPGAs, but modern GPUs have rarely been targeted. Modern GPUs feature a specific single-instruction-multiple-threads (SIMT) execution model, which makes their power/EM leakage more sophisticated in practical scenarios. In this paper, we study side-channel attacks with leakage from SIMT systems, and propose leakage models suited to any SIMT system and specifically to CUDA-enabled GPUs. We then instantiate the models with a GPU AES implementation, which is also used for performance evaluations. In addition to the models, we provide optimizations of the attacks based on them. To evaluate the models and optimizations, we run the GPU AES implementation on a CUDA-enabled GPU and, at the same time, collect its EM leakage. The experimental results show that the proposed models are efficient and the optimizations are effective. Our study suggests that GPU-based cryptographic implementations may be highly vulnerable to microarchitecture-based side-channel attacks. Therefore, GPU-specific countermeasures should be considered for GPU-based cryptographic implementations in practical applications.
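As a hedged illustration of the kind of leakage model such attacks build on, the following toy correlation power analysis (CPA) uses a Hamming-weight model on the 4-bit PRESENT S-box with simulated noisy traces; it is a single-threaded sketch, not the paper's SIMT/GPU models:

```python
import random

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

def hw(x):
    return bin(x).count("1")

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

random.seed(0)
secret = 0xB
plaintexts = [random.randrange(16) for _ in range(2000)]
# Simulated measurement: traces leak HW(S[p ^ k]) plus Gaussian noise
traces = [hw(SBOX[p ^ secret]) + random.gauss(0, 1) for p in plaintexts]

# Rank every key guess by |correlation|; the correct key typically stands out
scores = {k: abs(pearson([hw(SBOX[p ^ k]) for p in plaintexts], traces))
          for k in range(16)}
best_key = max(scores, key=scores.get)
```

Real EM traces from a SIMT device aggregate many threads' leakage at once, which is exactly the complication the paper's models address.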
ZUC Design Team
ePrint Report
At FSE 2020, a linear distinguishing attack was presented against the ZUC-256 stream cipher based on the $32$-bit word, with a data/time complexity of about $2^{236.38}$. In this paper, we re-evaluate the complexity of this attack and discuss the applicability of such a distinguishing attack in 5G application scenarios, where each keystream frame is limited to $20000$ bits, and at most $2^{32}$ bits. To assure a success probability close to $1$, it is shown that the precise time complexity of the distinguishing attack is $2^{253.93}$ basic operations, with a data complexity of $2^{241.38}$ keystream bits, which is far beyond the keystream length limit of 5G applications in the single-frame setting. Besides, we also consider the multiple-frame scenario, where a long keystream could be formed by concatenating many short keystream frames generated from different (Key, IV) pairs. We show that even in such a strong model of distinguishing attacks, the reported bias will not exist in 5G application scenarios and the linear distinguishing attack will not work, since the long linear combination relation derived from the polynomial multiple of the LFSR in ZUC-256 over $\mbox{GF}(2^{31}-1)$ cannot be formed across frames; this has been verified in experiments. It is concluded that the ZUC-256 stream cipher offers the full $256$-bit security in 5G application scenarios.

31 August 2020

Asiacrypt
ASIACRYPT 2020 is the 26th Annual International Conference on the Theory and Application of Cryptology and Information Security and one of the three general conferences of the International Association for Cryptologic Research (IACR). It was originally scheduled to take place in Daejeon, Korea, December 6-10.

Due to the COVID-19 pandemic, ASIACRYPT 2020 has been converted into an all-digital event with slightly changed dates. It is now scheduled to take place online Monday-Friday, December 7-11. The conference proceedings will be published according to the original schedule.

Details about the new all-digital event, including its scientific program and registration process, will be communicated at a later time via the usual IACR channels and the conference website.

The board wishes safety and health to all our members during these challenging times.

Gaithersburg, USA, 19 October - 21 October 2020
Event Calendar
Event date: 19 October to 21 October 2020
Submission deadline: 14 September 2020
Notification: 28 September 2020
-
Event Calendar
Event date: to
Submission deadline: 31 March 2021
Dhiman Saha, Yu Sasaki, Danping Shi, Ferdinand Sibleyras, Siwei Sun, Yingjie Zhang
ePrint Report
This paper presents the first third-party security analysis of TinyJAMBU, which is one of the 32 second-round candidates in NIST's lightweight cryptography standardization process. TinyJAMBU adopts an NLFSR-based keyed permutation that computes only a single NAND gate as a non-linear component per round. The designers evaluated the minimum number of active AND gates; however, such a counting method neglects the dependency between multiple AND gates. Previous works have considered such dependencies with stricter models, but those are known to be too slow. In this paper, we present a new model that provides a good balance of efficiency and accuracy by taking into account only the first-order correlation of AND gates that frequently occurs in TinyJAMBU. With the refined model, we show a 338-round differential with probability $2^{-62.68}$ that leads to a forgery attack breaking 64-bit security. This implies that the security margin of TinyJAMBU with respect to the number of unattacked rounds is approximately 12%. We also show a differential on the full 384 rounds with probability $2^{-70.64}$; thus the security margin of the full rounds with respect to the data complexity, namely the gap between the claimed security bits and the attack complexity, is less than 8 bits. Our attacks also point out structural weaknesses of the mode that essentially come from the state size being minimized to be lightweight.
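The permutation's single-NAND round can be sketched at bit level (a sketch based on the designers' published specification; the key schedule and mode of operation are omitted, and the key bits here are toy values):

```python
# TinyJAMBU-style NLFSR update on a 128-bit state: each round shifts the
# register by one bit and injects a feedback bit computed from taps
# 0, 47, 70, 85, 91 and one key bit; the NAND of bits 70 and 85 is the
# only non-linear operation per round.
def tinyjambu_rounds(state, key_bits, rounds):
    """state: 128-bit int; key_bits: list of 0/1 bits, cycled over."""
    for i in range(rounds):
        s = lambda j: (state >> j) & 1
        feedback = (s(0) ^ s(47)
                    ^ (1 ^ (s(70) & s(85)))   # NAND gate
                    ^ s(91)
                    ^ key_bits[i % len(key_bits)])
        state = (state >> 1) | (feedback << 127)
    return state

out = tinyjambu_rounds(1, [1, 0, 1, 1] * 32, 384)   # toy state and key
```

Because only one NAND fires per round, counting active AND gates round by round, and the correlations between them, is exactly what the differential models discussed above automate.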
CHES
CHES 2020 will be held from September 14 to 17 as a virtual event.

To register for CHES 2020, please visit the CHES 2020 registration site. Registration for CHES 2020 is free for IACR members; non-IACR members will be asked to pay the IACR membership fee (USD 50 regular, USD 25 for students) during registration.

You can follow any updates on twitter @2020CHES.

28 August 2020

Benjamin Dowling, Marc Fischlin, Felix Günther, Douglas Stebila
ePrint Report
We analyze the handshake protocol of the Transport Layer Security (TLS) protocol, version 1.3. We address both the full TLS 1.3 handshake (the one round-trip time mode, with signatures for authentication and (elliptic curve) Diffie–Hellman ephemeral ((EC)DHE) key exchange), and the abbreviated resumption/"PSK" mode which uses a pre-shared key for authentication (with optional (EC)DHE key exchange and zero round-trip time key establishment). Our analysis in the reductionist security framework uses a multi-stage key exchange security model, where each of the many session keys derived in a single TLS 1.3 handshake is tagged with various properties (such as unauthenticated versus unilaterally authenticated versus mutually authenticated, whether it is intended to provide forward security, how it is used in the protocol, and whether the key is protected against replay attacks). We show that these TLS 1.3 handshake protocol modes establish session keys with their desired security properties under standard cryptographic assumptions.