International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

08 April 2021

Chao Sun, Thomas Espitau, Mehdi Tibouchi, Masayuki Abe
ePrint Report
Over the past 30 years, lattice reduction has proved to be a powerful tool in public-key cryptanalysis. Since the advent of the Hidden Number Problem, attacks on (EC)DSA with nonce leakage have been studied extensively. While lattice attacks require only a few signatures, they cannot handle small nonce biases, in contrast to Bleichenbacher's attack. Prior to this work, it was unknown how to exploit additional signatures to improve lattice attacks on (EC)DSA. In this paper, we propose several approaches to improve lattice attacks. The key idea is that we can guess some bits of the secret key (or of the nonces) and modify the standard lattice to increase its volume, making the lattice attack much easier. In addition, we observe that by filtering for signatures with specific properties we can modify the lattice, so by collecting a large number of signatures we can construct a lattice that is much easier to attack. Combining these techniques, we improve lattice attacks on (EC)DSA. On the one hand, we can attack 160-bit-modulus (EC)DSA (and other modulus sizes as well) with 2-bit leakage within $2^{15}$ BKZ-30 operations using 90 signatures. On the other hand, with $2^{27}$ signatures available, we can attack 160-bit (EC)DSA with 2-bit leakage in just one BKZ-30 operation.

As a second contribution, we explain several questions left open in previous works. It had been observed that SVP approaches (Kannan embedding) consistently outperform CVP approaches (nearest plane) and that the lattice attack is very sensitive to the Kannan embedding factor, but these observations were not discussed in previous works. We provide an explanation for completeness.

Finally, we carry out experiments on the TPM-Fail dataset. While the original attack uses around 40,000 signatures, a combination of our methods allows us to recover the secret with only 800 signatures.
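A minimal sketch of the textbook construction the attack above starts from: turning biased-nonce (EC)DSA signatures into a Hidden Number Problem instance and building the Kannan-embedded lattice whose reduction reveals the key. The guessing and filtering improvements described in the abstract modify this basis; the modulus, the simulated samples and all parameters below are illustrative only.

```python
import random

q   = 0xE95E4A5F737059DC60DFC7AD95B3D8139515620F  # 160-bit modulus standing in for the group order
ell = 2                       # number of known-zero top bits in every nonce
B   = q >> ell                # bound on the unknown part of each nonce
m   = 90                      # number of signatures used

# Simulate m HNP samples k_i = a_i*x + b_i (mod q) with 0 <= k_i < B.
# With real signatures one would set a_i = r_i/s_i and b_i = h_i/s_i (mod q).
x = random.randrange(q)                       # the secret key to be recovered
samples = []
for _ in range(m):
    a = random.randrange(q)
    k = random.randrange(B)                   # biased nonce (top ell bits are zero)
    samples.append((a, (k - a * x) % q))

# Integral Kannan-embedding basis (all rows scaled by q) whose short target
# vector is (q*k_1, ..., q*k_m, B*x, q*B).
dim   = m + 2
basis = [[0] * dim for _ in range(dim)]
for i in range(m):
    basis[i][i] = q * q                       # the "mod q" rows
for i, (a, b) in enumerate(samples):
    basis[m][i]     = q * a                   # row carrying the secret key x
    basis[m + 1][i] = q * b                   # Kannan embedding row
basis[m][m]         = B
basis[m + 1][m + 1] = q * B

# Reduce `basis` with LLL/BKZ (e.g. fpylll or SageMath); the secret key can then
# be read off, up to sign, as row[m] // B from the short row whose last entry is q*B.
```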
Veronika Kuchta, Amin Sakzad, Damien Stehle, Ron Steinfeld, Shi-Feng Sun
ePrint Report
We introduce a new technique called `Measure-Rewind-Measure' (MRM) to achieve tighter security proofs in the quantum random oracle model (QROM). We first apply our MRM technique to derive a new security proof for a variant of the `double-sided' quantum One-Way to Hiding Lemma (O2H) of Bindel et al. [TCC 2019] which, for the first time, avoids the square-root advantage loss in the security proof. In particular, it bypasses a previous `impossibility result' of Jiang, Zhang and Ma [IACR ePrint 2019]. We then apply our new O2H Lemma to give a new tighter security proof for the Fujisaki-Okamoto transform for constructing a strong (IND-CCA) Key Encapsulation Mechanism (KEM) from a weak (IND-CPA) public-key encryption scheme satisfying a mild injectivity assumption.
Yuncong Hu, Kian Hooshmand, Harika Kalidhindi, Seung Jin Yang, Raluca Ada Popa
ePrint Report
Transparency logs are designed to help users audit untrusted servers. For example, Certificate Transparency (CT) enables users to detect when a compromised Certificate Authority (CA) has issued a fake certificate. Practical state-of-the-art transparency log systems, however, suffer from high monitoring costs when used for low-latency applications. To reduce monitoring costs, such systems often require users to wait an hour or more for their updates to take effect, inhibiting low-latency applications. We propose $\text{Merkle}^2$, a transparency log system that supports both efficient monitoring and low-latency updates. To achieve this goal, we construct a new multi-dimensional, authenticated data structure that nests two types of Merkle trees, hence the name of our system, $\text{Merkle}^2$. Using this data structure, we then design a transparency log system with efficient monitoring and lookup protocols that enables low-latency updates. In particular, all the operations in $\text{Merkle}^2$ are independent of update intervals and are (poly)logarithmic in the number of entries in the log. $\text{Merkle}^2$ not only has excellent asymptotics when compared to prior work, but is also efficient in practice. Our evaluation shows that $\text{Merkle}^2$ propagates updates in as little as 1 second and can support 100× more users than state-of-the-art transparency logs.
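For readers unfamiliar with the underlying primitive, the toy sketch below shows a plain append-only Merkle tree with inclusion proofs, the basic building block that $\text{Merkle}^2$ nests (a chronological tree whose nodes embed prefix trees); the nested structure and the monitoring and lookup protocols themselves are not reproduced here.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def leaf_hash(data: bytes) -> bytes:
    return H(b"\x00" + data)           # domain-separate leaves from inner nodes

def node_hash(left: bytes, right: bytes) -> bytes:
    return H(b"\x01" + left + right)

def root(leaves):
    """Root of the Merkle tree over `leaves` (a list of leaf hashes)."""
    if len(leaves) == 1:
        return leaves[0]
    if len(leaves) % 2:                # duplicate the last hash on odd levels
        leaves = leaves + [leaves[-1]]
    return root([node_hash(leaves[i], leaves[i + 1])
                 for i in range(0, len(leaves), 2)])

def prove(leaves, idx):
    """Inclusion proof (sibling hashes, bottom-up) for leaves[idx]."""
    proof = []
    while len(leaves) > 1:
        if len(leaves) % 2:
            leaves = leaves + [leaves[-1]]
        proof.append((leaves[idx ^ 1], idx % 2))   # (sibling, am-I-the-right-child)
        leaves = [node_hash(leaves[i], leaves[i + 1])
                  for i in range(0, len(leaves), 2)]
        idx //= 2
    return proof

def verify(root_hash, leaf, proof):
    h = leaf
    for sib, is_right in proof:
        h = node_hash(sib, h) if is_right else node_hash(h, sib)
    return h == root_hash

entries = [leaf_hash(f"entry-{i}".encode()) for i in range(5)]
assert verify(root(entries), entries[3], prove(entries, 3))
```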
Zezhou Hou, Jiongjiong Ren, Shaozhen Chen
ePrint Report
Cryptanalysis based on deep learning has become a hotspot in the international cryptography community since it was first proposed. The key point of deep-learning-based differential cryptanalysis is to find a neural differential distinguisher covering more rounds or having higher probability. It is therefore important to study how to improve the accuracy and the number of rounds of neural differential distinguishers. In this paper, we design SAT-based algorithms to find a good input difference so that the accuracy of the neural distinguisher is as high as possible. As applications, we obtain neural differential distinguishers for 9-round SIMON32/64, 10-round SIMON48/96 and 11-round SIMON64/128. For SIMON48/96, we choose $(0x0,0x100000)$ as the input difference and train 9-round and 10-round neural distinguishers. In addition, using an automatic search based on SAT, we extend the 9-round and 10-round neural distinguishers to 11-round and 12-round distinguishers by prepending the optimal 2-round differential transition $(0x400000,0x100001) \xrightarrow{2^{-4}} (0x0,0x100000)$. Based on the 11-round and 12-round neural distinguishers, we mount a 14-round key recovery attack on SIMON48/96. Our attack takes about 1550 s to recover the final subkey; its average time complexity is no more than $2^{22.21}$ 14-round encryptions of SIMON48/96, and the data complexity is about $2^{12.8}$. Similarly, we perform a 13-round key recovery attack on SIMON32/64 with input difference $(0x0,0x80)$, with a success rate of more than 90$\%$. It takes about 23 s to complete an attack, with data complexity no more than $2^{12.5}$ and time complexity no more than $2^{16.4}$. It is worth mentioning that the attacks are practical for 13-round SIMON32/64 and 14-round SIMON48/96.
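A hedged sketch of the data-generation step behind such a neural differential distinguisher: encrypt plaintext pairs with a fixed input difference for a given number of SIMON32/64 rounds and label them against random pairs. Independent random round keys stand in for the real key schedule, the SAT-based difference search and the network training itself are omitted, and all parameter choices are illustrative.

```python
import random

WORD = 16                              # SIMON32/64 word size
MASK = (1 << WORD) - 1

def rol(x, r):
    return ((x << r) | (x >> (WORD - r))) & MASK

def simon_round(x, y, k):
    # SIMON round function: f(x) = (x<<<1 & x<<<8) ^ x<<<2
    return y ^ (rol(x, 1) & rol(x, 8)) ^ rol(x, 2) ^ k, x

def encrypt(x, y, keys):
    for k in keys:
        x, y = simon_round(x, y, k)
    return x, y

def make_samples(n, rounds, in_diff=(0x0, 0x80)):
    """Return ((c0, c1), label) samples; (0x0, 0x80) is the SIMON32/64 difference above."""
    dx, dy = in_diff
    data = []
    for _ in range(n):
        keys = [random.getrandbits(WORD) for _ in range(rounds)]
        x0, y0 = random.getrandbits(WORD), random.getrandbits(WORD)
        label = random.getrandbits(1)
        if label:                                   # real pair: fixed input difference
            x1, y1 = x0 ^ dx, y0 ^ dy
        else:                                       # random pair
            x1, y1 = random.getrandbits(WORD), random.getrandbits(WORD)
        data.append(((encrypt(x0, y0, keys), encrypt(x1, y1, keys)), label))
    return data

samples = make_samples(10_000, rounds=9)            # 9-round SIMON32/64 samples
```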
Gang Wang
ePrint Report
Sharding is a promising candidate for addressing the scalability issues of blockchains. The key concept behind sharding is to partition the network into multiple distinct, smaller committees, each of which handles a disjoint set of transactions, thereby enabling parallel processing. However, when introducing sharding to blockchains, several key challenges need to be resolved, such as security and the heterogeneity of the participating nodes. This paper introduces RepShard, a reputation-based blockchain sharding scheme that aims to achieve both linear scaling efficiency and system security simultaneously. RepShard adopts a two-layer hierarchical chain structure consisting of a reputation chain and independent transaction chains. Each transaction chain is maintained within its shard to record transactions, while the reputation chain is maintained by all shards to update the reputation score of each participating node. We leverage a novel reputation scheme that records each participating node's integrated and valid contribution to the system while taking the heterogeneity of participating nodes (e.g., their computational resources) into account. The reputation scores used in the sharding and leader election processes maintain the balance and security of each shard. RepShard relies on verifiable relay transactions for cross-shard transactions to ensure consistency between distinct shards. By integrating reputation into the sharding protocol, our scheme offers both scalability and security at the same time.
Gang Wang, Mark Nixon
ePrint Report
Reliable and verifiable public randomness is not only an essential building block in various cryptographic primitives, but also a critical component in many distributed and decentralized protocols, e.g., blockchain sharding. A 'good' randomness generator should preserve several distinctive properties, such as public verifiability, bias resistance, unpredictability, and availability. However, generating such good randomness is a challenging task. For instance, a dishonest party may behave deceptively to bias the final randomness toward his preferences, and this challenge is more serious in a distributed and decentralized system. Blockchain technology provides several promising features, such as decentralization, immutability, and trustworthiness. However, due to extremely high communication and computation overheads, most existing solutions face an additional scalability issue. We propose a sharding-based scheme, RandChain, to obtain practical, scalable, distributed and decentralized randomness attested by a blockchain in large-scale applications. In RandChain, we eliminate the computation-heavy cryptographic operations, e.g., Publicly Verifiable Secret Sharing (PVSS), used in prevalent approaches. We build a sub-routine, RandGene, which uses a commit-then-reveal strategy to establish local randomness, enforced by an efficient Verifiable Random Function (VRF). RandGene generates the randomness using statistical approaches rather than heavy cryptographic operations. RandChain maintains a two-layer hierarchical chain structure via a sharding scheme. The first-level chain is maintained by RandGene within each shard to provide a blockchain-attested verifiable randomness source. The second-level chain uses the randomness from each shard to build a randomness chain.
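The sketch below shows only the bare commit-then-reveal mechanics in the spirit of RandGene; the VRF enforcement, the statistical combination and the sharded chain structure described above are not modelled, and all names are illustrative.

```python
import hashlib, secrets

def commit(value: bytes, blind: bytes) -> bytes:
    return hashlib.sha256(blind + value).digest()

# Round 1: every node publishes a commitment to a fresh random contribution.
nodes = []
for _ in range(5):
    value, blind = secrets.token_bytes(32), secrets.token_bytes(32)
    nodes.append((value, blind, commit(value, blind)))
commitments = [c for _, _, c in nodes]            # recorded first, e.g. on-chain

# Round 2: nodes reveal (value, blind); everyone checks the reveals against the
# recorded commitments and combines the values into the beacon output.
for value, blind, c in nodes:
    assert commit(value, blind) == c              # reject any mismatching reveal
beacon = hashlib.sha256(b"".join(value for value, _, _ in nodes)).digest()

# Note: plain commit-then-reveal still lets the last node bias the result by
# withholding its reveal; RandGene's VRF enforcement and statistical combination
# target exactly such issues and are not modelled in this toy.
```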
Gang Wang, Mark Nixon, Mike Boudreaux
ePrint Report
Process industries cover a wide set of industries, in which the processes are controlled by a combination of Distributed Control Systems (DCSs) and Programmable Logic Controllers (PLCs). These control systems utilize various measurements such as pressure, flow, and temperature to determine the state of the process and then use field devices such as valves and other actuating devices to manipulate the process. Since there are many different types of field devices and since each device is calibrated to its specific installation, when monitoring devices it is important to be able to transfer not only the device measurement and diagnostics, but also characteristics about the device and the process in which it is installed. The current monitoring architecture, however, creates challenges for continuous monitoring and analysis of diagnostic data. In this paper, we present the design of an Industrial IoT system for supporting large-scale and continuous device condition monitoring and analysis in process control systems. The system design seamlessly integrates existing infrastructure (e.g., HART and WirelessHART networks, and the DeltaV DCS) with newly developed hardware/software components (e.g., a one-way data diode and an IoT cellular architecture) for control network data collection and for streaming the collected device diagnostic parameters to a private cloud that performs streaming data analytics designed for fault identification and prediction. A prototype system has been developed, supported by Emerson Automation Solutions, and deployed in the field for design validation and long-term performance evaluation. To the best of our knowledge, this is the first publicly reported effort on IoT system design for process automation applications. The design can be readily extended for condition monitoring and analysis of many other industrial facilities and processes.
Ashrujit Ghoshal, Stefano Tessaro
ePrint Report
We study the memory-tightness of security reductions in public-key cryptography, focusing in particular on Hashed ElGamal. We prove that any straightline (i.e., without rewinding) black-box reduction needs memory which grows linearly with the number of queries of the adversary it has access to, as long as this reduction treats the underlying group generically. This makes progress towards proving a conjecture by Auerbach et al. (CRYPTO 2017), and is also the first lower bound on memory-tightness for a concrete cryptographic scheme (as opposed to generalized reductions across security notions). Our proof relies on compression arguments in the generic group model.
Daniel Noble
ePrint Report
Cuckoo Hashing is a dictionary data structure in which a data item is stored in a small constant number of possible locations. It has the appealing property that the data structure size is a small constant times larger than the combined size of all inserted data elements. However, many applications, especially cryptographic applications and Oblivious RAM, require insertions, builds and accesses to have a negligible failure probability, which standard Cuckoo Hashing cannot simultaneously achieve. An alternative proposal introduced by Kirsch et al. is to store elements which cannot be placed in the main table in a ``stash'', reducing the failure probability to $O(n^{-s})$ where $n$ is the table size and $s$ is any constant stash size. This failure probability is still not negligible. Goodrich and Mitzenmacher showed that the failure probability can be made negligible in some parameter $N$ when $n = \Omega(\log^7 N)$ and $s = \Theta(\log N)$. In this paper, I will explore these analyses, as well as the insightful alternative analysis of Aumüller et al. Following this, I present a tighter analysis which shows failure probability negligible in $N$ for all $n = \omega(\log N)$ (which is asymptotically optimal), and I present explicit constants for the failure probability upper bound.
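A minimal sketch of the data structure under analysis: a two-table cuckoo hash with a constant-size stash in the style of Kirsch et al., where an over-long eviction chain spills into the stash instead of failing the build. Parameters are illustrative.

```python
import random

class CuckooWithStash:
    def __init__(self, n, stash_size=4, max_kicks=64):
        self.n = n
        self.tables = [[None] * n, [None] * n]     # two tables, one hash function each
        self.stash = []
        self.stash_size = stash_size
        self.max_kicks = max_kicks
        self.seeds = (random.getrandbits(64), random.getrandbits(64))

    def _pos(self, key, t):
        return hash((self.seeds[t], key)) % self.n

    def insert(self, key):
        cur, t = key, 0
        for _ in range(self.max_kicks):
            i = self._pos(cur, t)
            if self.tables[t][i] is None:
                self.tables[t][i] = cur
                return True
            cur, self.tables[t][i] = self.tables[t][i], cur   # evict the occupant
            t ^= 1                                            # reinsert it in the other table
        if len(self.stash) < self.stash_size:                 # eviction chain too long
            self.stash.append(cur)
            return True
        return False                                          # build failure

    def lookup(self, key):
        return any(self.tables[t][self._pos(key, t)] == key for t in (0, 1)) \
               or key in self.stash

table = CuckooWithStash(n=1024)
assert all(table.insert(key) for key in range(500)) and table.lookup(123)
```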
Chitchanok Chuengsatiansup, Damien Stehle
ePrint Report
We investigate the efficiency of a (module-)LWR-based PRF built using the GGM design. Our construction enjoys the security proof of the GGM construction and the (module-)LWR hardness assumption, which is believed to be post-quantum secure. We propose GGM-based PRFs from PRGs with a larger output-to-input ratio. This reduces the number of PRG invocations, which improves PRF performance and reduces the security loss in the GGM security reduction. Our construction bridges the gap between practical and provably secure PRFs. We demonstrate the efficiency of our construction by providing parameters achieving at least 128-bit post-quantum security and optimized implementations utilizing AVX2 vector instructions. Our PRF requires, on average, only 39.4 cycles per output byte.
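A sketch of the GGM tree evaluation with a fan-out-$R$ PRG, the structural idea behind the efficiency gain described above: reading the input in base-$R$ digits reduces the number of PRG invocations. A SHAKE-based PRG stands in here for the paper's (module-)LWR-based PRG, and all parameters are illustrative.

```python
import hashlib

SEED_BYTES = 32

def prg(seed: bytes, fanout: int) -> list:
    """Expand `seed` into `fanout` child seeds (stand-in PRG, not the paper's)."""
    out = hashlib.shake_256(seed).digest(SEED_BYTES * fanout)
    return [out[i * SEED_BYTES:(i + 1) * SEED_BYTES] for i in range(fanout)]

def ggm_prf(key: bytes, x: int, in_bits: int = 128, fanout: int = 16) -> bytes:
    digit_bits = fanout.bit_length() - 1          # log2(fanout); fanout must be a power of 2
    seed = key
    for level in range(in_bits // digit_bits):
        digit = (x >> (in_bits - digit_bits * (level + 1))) & (fanout - 1)
        seed = prg(seed, fanout)[digit]           # walk to the chosen child
    return seed

# A 128-bit input with fan-out 16 costs 32 PRG calls instead of 128 (fan-out 2).
y = ggm_prf(b"\x00" * SEED_BYTES, x=0x0123456789ABCDEF0123456789ABCDEF)
```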
Anirudh C, Ashish Choudhury, Arpita Patra
ePrint Report
Verifiable Secret-Sharing (VSS) is a fundamental primitive in secure distributed computing. It is used as an important building block in several distributed computing tasks, such as Byzantine agreement and secure multi-party computation. VSS has been widely studied in various dimensions over the last three decades, and several important results have been achieved related to the fault tolerance, round complexity and communication efficiency of VSS schemes. In this article, we consider VSS schemes with perfect security, tolerating computationally unbounded adversaries. We comprehensively survey the existing perfectly-secure VSS schemes in three different settings, namely the synchronous, asynchronous and hybrid communication settings, and provide the full details of each of the existing schemes in these settings. The aim of this survey is to provide a clear foundation for researchers who are interested in understanding and extending the state-of-the-art perfectly-secure VSS schemes.
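As background for the surveyed schemes, the sketch below shows plain (non-verifiable) Shamir secret sharing over a prime field, the core that every VSS scheme wraps with a verification and complaint phase (typically via bivariate polynomials in the perfectly-secure setting); the verification machinery itself is not shown, and the field modulus is illustrative.

```python
import random

P = 2**127 - 1                       # prime field modulus (illustrative)

def share(secret, n, t):
    """Degree-t polynomial with f(0)=secret; the shares are (i, f(i)) for i=1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=42, n=7, t=2)          # tolerates t = 2 corrupted parties
assert reconstruct(shares[:3]) == 42         # any t+1 = 3 shares suffice
```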
Daniel Nager, "Danny" Niu Jianfang
ePrint Report
This paper proposes a new public-key cryptosystem based on a quasigroup with the special property of "restricted-commutativity". We argue its security empirically and present constructions for key exchange and digital signatures. To the best of our knowledge, there is as yet no known polynomial-time quantum attack on our primitive or constructions.

06 April 2021

Cholun Kim
ePrint Report
Proxy signature is a kind of digital signature in which a user, called the original signer, can delegate his signing rights to another user, called the proxy signer, who can then sign messages on behalf of the original signer. Certificateless proxy signature (CLPS) means proxy signature in the certificateless setting, which suffers neither from the certificate management issue of traditional PKI nor from the private key escrow problem of the identity-based setting. Up to now, a number of CLPS schemes have been proposed, but some of them either lack formal security analysis or turn out to be insecure, while others are less efficient because they use costly operations such as bilinear pairings and map-to-point hashing on elliptic curve groups. In this paper, we formalize the definition and security model of CLPS schemes. We then construct a pairing-free CLPS scheme from the standard ECDSA and prove its security in the random oracle model under the hardness assumption of the discrete semi-logarithm problem, as in the provable security results for ECDSA.
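For context, the sketch below is textbook (non-constant-time) ECDSA over secp256k1, the standard signature that the pairing-free CLPS construction starts from; the delegation and proxy-signing layers of the scheme are not shown, and the curve choice is illustrative.

```python
import hashlib, secrets

p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(P1, P2):
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                    # P + (-P) = point at infinity
    lam = ((3 * x1 * x1) * pow(2 * y1, -1, p) if P1 == P2
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return x3, (lam * (x1 - x3) - y1) % p

def ec_mul(k, P):
    R = None                                           # double-and-add scalar multiplication
    while k:
        if k & 1:
            R = ec_add(R, P)
        P, k = ec_add(P, P), k >> 1
    return R

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(d, msg):
    while True:
        k = secrets.randbelow(n - 1) + 1               # fresh random nonce
        r = ec_mul(k, G)[0] % n
        s = pow(k, -1, n) * (h(msg) + r * d) % n
        if r and s:
            return r, s

def verify(Q, msg, sig):
    r, s = sig
    u1, u2 = h(msg) * pow(s, -1, n) % n, r * pow(s, -1, n) % n
    X = ec_add(ec_mul(u1, G), ec_mul(u2, Q))
    return X is not None and X[0] % n == r

d = secrets.randbelow(n - 1) + 1                       # secret key
Q = ec_mul(d, G)                                       # public key
assert verify(Q, b"delegation warrant", sign(d, b"delegation warrant"))
```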
Raluca Posteuca, Tomer Ashur
ePrint Report
Newly designed block ciphers are required to show resistance against known attacks, e.g., linear and differential cryptanalysis. Two widely used methods to do this are to employ an automated search tool (e.g., MILP, SAT/SMT, etc.) and/or to provide a wide-trail argument. In both cases, the core of the argument consists of bounding the transition probability of the statistical property over an isolated non-linear operation and then multiplying it by the number of such operations (e.g., the number of active S-boxes). In this paper we show that, in the case of linear cryptanalysis, such strategies can sometimes lead to a gap between the claimed security and the actual one, and that this gap can be exploited by a malicious designer. We introduce RooD, a block cipher with a carefully crafted backdoor. Using the wide-trail strategy, we argue the cipher's resistance against linear and differential cryptanalysis. However, the cipher has a key-dependent iterative linear approximation over 12 rounds that holds with probability 1. This property is based on the linear hull effect, although every linear trail underlying the linear hull has probability smaller than 1.
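The sketch below computes the linear approximation table of a single 4-bit S-box, i.e. exactly the per-operation bound that wide-trail and automated-tool arguments multiply over active S-boxes, and whose naive aggregation the linear hull effect can defeat. The S-box used (PRESENT's) is only an example and is not RooD's.

```python
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD, 0x3, 0xE, 0xF, 0x8,
        0x4, 0x7, 0x1, 0x2]                       # PRESENT S-box, used as an example

def parity(x):
    return bin(x).count("1") & 1

def lat(sbox):
    """lat[a][b] = #{x : <a,x> = <b,S(x)>} - 8, i.e. 16 times the bias of masks (a, b)."""
    size = len(sbox)
    return [[sum(1 for x in range(size)
                 if parity(a & x) == parity(b & sbox[x])) - size // 2
             for b in range(size)] for a in range(size)]

table = lat(SBOX)

# Best non-trivial single-S-box correlation is max |entry| / 8; a trail's claimed
# correlation is the product of such per-S-box entries, but many trails inside one
# linear hull can add up, which is what a 12-round probability-1 approximation abuses.
best = max(abs(table[a][b]) for a in range(1, 16) for b in range(1, 16))
print(best / 8)
```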
Yukun Wang, Mingqiang Wang
ePrint Report
A software watermarking scheme enables one to embed a ``mark'' (i.e., a message) into a program without significantly changing the functionality. Moreover, any removal of the watermark from a marked program is futile without significantly changing the functionality of the program. At present, constructions of software watermarking mainly focus on watermarking pseudorandom functions (PRFs), watermarking public-key encryption, watermarking signatures, etc.

In this work, we construct new watermarking PRFs from lattices which provide collusion resistance and public extraction. Our schemes are the first to simultaneously achieve all of these properties. The key to the success of our new constructions lies in two parts. First, we relax the notion of functionality-preserving. In general, a marked program is required to (approximately) preserve the input/output behavior of the original program. In our scheme, the output circuit is divided into two parts, one for the PRF output and the other for auxiliary functions. As a result, we only require the PRF output circuit to be functionality-preserving. Second, the marking method we use is essentially different from previous schemes. In general, marking a program changes its output at some special points, and the extraction algorithm determines whether a circuit is marked by checking whether the output at these special points has been changed. In our schemes, we instead use a constrained signature to mark a PRF circuit.
Wenshuo Guo, Fangwei Fu
ePrint Report
This paper presents two modifications of Loidreau's code-based cryptosystem. Loidreau's cryptosystem is a rank-metric code-based cryptosystem constructed by using Gabidulin codes in the McEliece setting. Recently, a polynomial-time key recovery attack was proposed that breaks Loidreau's cryptosystem in some cases. To prevent this attack, we propose the use of subcodes to disguise the secret codes in Modification I. In Modification II, we choose a random matrix of low column rank over $\mathbb{F}_q$ to mix with the secret matrix. According to our analysis, both modifications can resist the existing structural attacks. Additionally, we adopt the systematic generator matrix of the public code to reduce the public-key size. In addition to stronger resistance against structural attacks and a more compact representation of public keys, our modifications also have higher information transmission rates.
Donghoon Chang, Meltem Sonmez Turan
ePrint Report
Grain-128AEAD is one of the second-round candidates of the NIST lightweight cryptography standardization process. There is an existing body of third-party analysis on the earlier versions of the Grain family that provides insights into the security of Grain-128AEAD. Unlike the earlier versions, Grain-128AEAD reintroduces the key into the internal state during initialization. The designers claim that, due to this change, internal state recovery no longer results in key recovery. In this paper, we analyze this claim under different scenarios.
Toomas Krips, Helger Lipmaa
ePrint Report
Efficient shuffle arguments are essential in mixnet-based e-voting solutions. Terelius and Wikström (TW) proposed a 5-round shuffle argument based on unique factorization in polynomial rings. Their argument is available as the Verificatum software solution for real-world developers, and has been used in real-world elections. It is also the fastest non-patented shuffle argument. We will use the same basic idea as TW but significantly optimize their approach. We generalize the TW characterization of permutation matrices; this enables us to reduce the communication without adding too much to the computation. We make the TW shuffle argument computationally more efficient by using Groth's coefficient-product argument (JOC, 2010). Additionally, we use batching techniques. The resulting shuffle argument is the fastest known $\leq 5$-message shuffle argument, and, depending on the implementation, can be faster than Groth's argument (the fastest 7-message shuffle argument).
Nikolaj Sidorenco, Sabine Oechsner, Bas Spitters
ePrint Report
Zero-knowledge proofs allow a prover to convince a verifier of the veracity of a statement without revealing any other information. An interesting class of zero-knowledge protocols are those following the MPC-in-the-head paradigm (Ishai et al., STOC ’07), which use secure multiparty computation (MPC) protocols as their basis. Efficient instances of this paradigm have emerged as an active research topic in recent years, starting with ZKBoo (Giacomelli et al., USENIX ’16). Zero-knowledge protocols are a vital building block in the design of privacy-preserving technologies as well as of cryptographic primitives, such as digital signature schemes, that provide post-quantum security. This work investigates the security of zero-knowledge protocols following the MPC-in-the-head paradigm. We provide the first machine-checked security proof of such a protocol, using ZKBoo as our example. Our proofs are checked in the EasyCrypt proof assistant. To enable a modular security proof, we develop a new security notion for the MPC protocols used in MPC-in-the-head zero-knowledge protocols. This allows us to recast existing security proofs in a black-box fashion, which we believe to be of independent interest.
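A toy MPC-in-the-head proof in the ZKBoo style for a single AND gate, showing the (2,3)-decomposition mechanics that such protocols build on: XOR-share the witness, commit to the three views, and open two of them. None of ZKBoo's optimizations, repetitions or the Fiat-Shamir transform are included, and all helper names are illustrative.

```python
import hashlib, secrets

def commit(view: tuple):
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(nonce + repr(view).encode()).digest(), nonce

def xor_share(bit):
    s0, s1 = secrets.randbelow(2), secrets.randbelow(2)
    return [s0, s1, bit ^ s0 ^ s1]

def prove(a, b):
    A, B = xor_share(a), xor_share(b)
    R = [secrets.randbelow(2) for _ in range(3)]          # per-party randomness
    # Decomposed AND gate: party i's output share mixes its own and party i+1's shares.
    C = [(A[i] & B[i]) ^ (A[(i+1) % 3] & B[i]) ^ (A[i] & B[(i+1) % 3])
         ^ R[i] ^ R[(i+1) % 3] for i in range(3)]
    views = [(A[i], B[i], R[i]) for i in range(3)]
    comms = [commit(v) for v in views]
    return C, [c for c, _ in comms], views, [nc for _, nc in comms]

def verify(y, C, comm, e, opened_views, opened_nonces):
    # The published output shares must reconstruct the claimed output y ...
    if C[0] ^ C[1] ^ C[2] != y:
        return False
    # ... the two opened views must match their commitments ...
    for idx, view, nonce in zip((e, (e+1) % 3), opened_views, opened_nonces):
        if hashlib.sha256(nonce + repr(view).encode()).digest() != comm[idx]:
            return False
    # ... and party e's output share must be recomputable from the opened views.
    (ae, be, re), (an, bn, rn) = opened_views
    return C[e] == (ae & be) ^ (an & be) ^ (ae & bn) ^ re ^ rn

a, b = 1, 1                                               # the prover's witness bits
C, comm, views, nonces = prove(a, b)
e = secrets.randbelow(3)                                  # the verifier's challenge
assert verify(a & b, C, comm, e,
              [views[e], views[(e+1) % 3]], [nonces[e], nonces[(e+1) % 3]])
```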
Duc-Phong Le, Sze Ling Yeo, Khoongming Khoo
ePrint Report
An algebraic differential fault attack (ADFA) is an attack in which an attacker combines a differential fault attack with algebraic techniques to break a targeted cipher. In this paper, we present three attacks that use three different algebraic techniques combined with a differential fault attack in the bit-flip fault model to break the SIMON block cipher. First, we introduce a new analytic method based on a differential trail between the correct and faulty ciphertexts. This method is able to recover the entire master key of any member of the SIMON family by injecting faults into a single round of the cipher. In our second attack, we present a simplified Gröbner basis algorithm to solve the faulty system. We show that this method can completely break SIMON ciphers with only 3 to 5 injected faults. Our third attack combines a fault attack with a modern SAT solver. By guessing some key bits, and with only a single fault injected at round $T-6$, where $T$ is the number of rounds of a SIMON cipher, this combined attack recovers the master key of the cipher. For the last two attacks, we perform experiments to demonstrate their effectiveness. These experiments were implemented on personal computers and run in very reasonable time.