IACR News
Here you can see all recent updates to the IACR webpage.
08 April 2021
Gideon Samid
ePrint Report
In the current crypto paradigm a single secret key transforms a plaintext into a ciphertext and vice versa, or at most a different key performs the reverse action. Attackers with access to the ciphertext hammer it to extract that single key and the plaintext. This paradigm may be challenged with an alternative setup: using a particular crypto algorithm, there is an infinite number of keys that are perfectly interchangeable -- each has the same effect. Nonetheless, they are hard to find. And unlike regular cryptography, the best an attacker can hope for with this new "Family Key Cryptography" is to identify the entire infinitely large family of keys, not the actual key that executed the cryptographic action. This very fact is a cornerstone for a host of applications, most of which are still to be unfolded. E.g.: (1) Community Cryptography, where each member has a different key, but the community encrypts and decrypts as if sharing the same key; (2) "Forever Key Cryptography": challenging Shannon's limit, the Forever Key strategy allows a single finite key to last indefinitely. The shared secret key is used to derive a succession of operating keys, which are replaced before they are compromised. Since any cryptanalysis of usage ends up with an infinite list of key candidates, there will be an equal number of candidates for the shared "Forever Key", and thus no erosion in the secrecy of the Forever Key regardless of its level of use. The very idea of an infinite number of interchangeable keys is so fundamentally different that most of its applications are still unknown.
Masayuki Abe, Miguel Ambrona, Andrej Bogdanov, Miyako Ohkubo, Alon Rosen
ePrint Report
Proofs of partial knowledge demonstrate the possession of certain subsets of witnesses for a given collection of statements $x_1,\dots,x_n$.
Cramer, Damgård, and Schoenmakers (CDS) built proofs of partial knowledge, given "atomic" protocols for individual statements $x_i$, by having the prover randomly secret-share the verifier's challenge and use the shares as challenges for the atomic protocols. This simple and highly influential transformation has been used in numerous applications, ranging from anonymous credentials to ring signatures.
We consider what happens if, instead of using the shares directly as challenges, the prover first hashes them. We show that this elementary enhancement can result in significant benefits:
- the proof contains a single atomic transcript per statement $x_i$,
- it suffices that the atomic protocols are $\kappa$-special sound for $\kappa \geq 2$,
- when compiled to a signature scheme using the Fiat-Shamir heuristic, its unforgeability can be proved in the non-programmable random oracle model.
None of the above features is satisfied by the CDS transformation.
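As a rough illustration of the share-then-hash step, here is a minimal Python sketch under simplifying assumptions: a plain $n$-out-of-$n$ XOR sharing stands in for the CDS secret sharing, SHA-256 for the hash, and all names are illustrative rather than taken from the paper.

```python
import hashlib
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share_challenge(c: bytes, n: int) -> list[bytes]:
    # XOR secret sharing of the verifier's challenge: c = s_1 ^ ... ^ s_n.
    shares = [secrets.token_bytes(len(c)) for _ in range(n - 1)]
    last = c
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def atomic_challenge(i: int, share: bytes) -> bytes:
    # The enhancement considered here: hash each share before using it
    # as the challenge of the i-th atomic protocol.
    return hashlib.sha256(b"atomic" + i.to_bytes(4, "big") + share).digest()

c = secrets.token_bytes(32)                    # verifier's challenge
shares = share_challenge(c, 3)
challenges = [atomic_challenge(i, s) for i, s in enumerate(shares)]
assert xor_bytes(xor_bytes(shares[0], shares[1]), shares[2]) == c
```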
Animesh Chhotaray, Thomas Shrimpton
ePrint Report
Design-hiding techniques are a central piece of academic and industrial efforts to protect electronic circuits from being reverse-engineered. However, these techniques have lacked a principled foundation to guide their design and security evaluation, leading to a long line of broken schemes. In this paper, we begin to lay this missing foundation.
We establish formal syntax for design-hiding (DH) schemes, a cryptographic primitive that encompasses all known design-stage methods to hide the circuit that is handed to a (potentially adversarial) foundry for fabrication. We give two security notions for this primitive: function recovery (FR) and key recovery (KR). The former captures the ostensible goal of design-hiding methods, namely to prevent reverse-engineering of the functionality of the circuit, but most prior work has focused on the latter. We then present the first provably (FR,KR)-secure DH scheme, $OneChaff_{hd}$. A side-benefit of our security proof is a framework for analyzing a broad class of new DH schemes. We finish by unpacking our main security result, to provide parameter-setting guidance.
Chao Sun, Thomas Espitau, Mehdi Tibouchi, Masayuki Abe
ePrint Report
In the past 30 years, lattice reduction has proved to be a powerful tool of public-key cryptanalysis.
Since the advent of the Hidden Number Problem, attacks on (EC)DSA with nonce leakage have been studied extensively.
While lattice attacks require only a few signatures, they cannot deal with nonce biases as small as Bleichenbacher's attack can. Prior to this work, it was unknown how to utilize more signatures to improve lattice attacks on (EC)DSA.
In this paper, we propose several approaches to improve lattice attacks. The key idea is that we can guess some bits of the secret key (or the nonces) and modify the standard lattice to increase its volume, thus making the lattice attack much easier. Besides, we observe that by filtering out some specific signatures we are able to modify the lattice, so we can collect a large number of signatures and construct a lattice that is much easier to attack. With a combination of these techniques, we are able to improve lattice attacks on (EC)DSA. On the one hand, we are able to attack (EC)DSA with a 160-bit modulus (and other moduli as well) and 2-bit nonce leakage within $2^{15}$ BKZ-30 operations using 90 signatures. On the other hand, with $2^{27}$ signatures available, we are able to attack 160-bit (EC)DSA with 2-bit leakage in just one BKZ-30 operation.
As a second contribution, we give an explanation for several questions left unexplained in previous works. It was observed that SVP approaches (Kannan embedding) always outperform CVP approaches (nearest plane) and that the lattice attack is very sensitive to the Kannan embedding factor, but these questions were not discussed in previous works. We give an explanation for completeness.
Last, we carry out some experiments on the TPM-Fail dataset. While the original attack utilizes around 40000 signatures, with a combination of our methods we are able to recover the secret with only 800 signatures.
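For context, here is a minimal Python sketch of the textbook hidden-number-problem lattice (with Kannan embedding) that such attacks start from; the bit-guessing and signature-filtering improvements described above are not reproduced, and the scaling below is one common choice. From each signature $(r_i, s_i)$ on hash $h_i$ with a nonce whose top $\ell$ bits are zero, one gets $k_i = t_i x + u_i \bmod q$ with $t_i = r_i s_i^{-1}$ and $u_i = h_i s_i^{-1}$.

```python
def hnp_basis(sigs, hashes, q, ell):
    """Integer basis rows for the textbook biased-nonce lattice.

    The lattice contains the vector
    (2^ell * k_1, ..., 2^ell * k_m, x, q), all of whose coordinates
    have size about q, where x is the secret key; lattice reduction
    (e.g., BKZ via fpylll) is then expected to find it for suitable
    parameters.
    """
    m = len(sigs)
    t = [(r * pow(s, -1, q)) % q for (r, s) in sigs]
    u = [(h * pow(s, -1, q)) % q for (r, s), h in zip(sigs, hashes)]
    w = 1 << ell                                   # rescales the nonce columns
    rows = [[w * q * (j == i) for j in range(m)] + [0, 0] for i in range(m)]
    rows.append([w * ti for ti in t] + [1, 0])     # coefficients of x
    rows.append([w * ui for ui in u] + [0, q])     # the constant offsets
    return rows
```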
Veronika Kuchta, Amin Sakzad, Damien Stehle, Ron Steinfeld, Shi-Feng Sun
ePrint Report
We introduce a new technique called `Measure-Rewind-Measure' (MRM) to achieve tighter security proofs in the quantum random oracle model (QROM). We first apply our MRM technique to derive a new security proof for a variant of the `double-sided' quantum One-Way to Hiding (O2H) Lemma of Bindel et al. [TCC 2019] which, for the first time, avoids the square-root advantage loss in the security proof. In particular, it bypasses a previous `impossibility result' of Jiang, Zhang and Ma [IACR ePrint 2019]. We then apply our new O2H Lemma to give a new tighter security proof for the Fujisaki-Okamoto transform for constructing a strong (IND-CCA) Key Encapsulation Mechanism (KEM) from a weak (IND-CPA) public-key encryption scheme satisfying a mild injectivity assumption.
Yuncong Hu, Kian Hooshmand, Harika Kalidhindi, Seung Jin Yang, Raluca Ada Popa
ePrint Report
Transparency logs are designed to help users audit untrusted servers. For example, Certificate Transparency (CT) enables users to detect when a compromised Certificate Authority (CA) has issued a fake certificate. Practical state-of-the-art transparency log systems, however, suffer from high monitoring costs when used for low-latency applications. To reduce monitoring costs, such systems often require users to wait an hour or more for their updates to take effect, inhibiting low-latency applications. We propose $\text{Merkle}^2$, a transparency log system that supports both efficient monitoring and low-latency updates. To achieve this goal, we construct a new multi-dimensional, authenticated data structure that nests two types of Merkle trees, hence the name of our system, $\text{Merkle}^2$. Using this data structure, we then design a transparency log system with efficient monitoring and lookup protocols that enables low-latency updates. In particular, all the operations in $\text{Merkle}^2$ are independent of update intervals and are (poly)logarithmic to the number of entries in the log. $\text{Merkle}^2$ not only has excellent asymptotics when compared to prior work, but is also efficient in practice. Our evaluation shows that $\text{Merkle}^2$ propagates updates in as little as 1 second and can support 100× more users than state-of-the-art transparency logs.
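As a rough illustration of the building block involved, here is a minimal Python sketch of a plain Merkle tree root over log entries; the nesting of two tree types that gives $\text{Merkle}^2$ its efficiency, and its monitoring and lookup protocols, are not reproduced here.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Assumes a non-empty list; leaves and inner nodes are domain-separated.
    level = [h(b"\x00" + leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node if odd
            level.append(level[-1])
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"log entry %d" % i for i in range(5)])
```

A log server publishes such a root; auditors then check inclusion and consistency proofs against it, which is the behavior transparency logs must make efficient to monitor.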
Zezhou Hou, Jiongjiong Ren, Shaozhen Chen
ePrint Report
Cryptanalysis based on deep learning has become a hotspot in the international cryptography community since it was proposed. The key point of differential cryptanalysis based on deep learning is to find a neural differential distinguisher covering more rounds or having higher accuracy. Therefore, it is important to study how to improve the accuracy and extend the number of rounds of neural differential distinguishers. In this paper, we design SAT-based algorithms to find a good input difference so that the accuracy of the neural distinguisher is as high as possible. As applications, we obtain neural differential distinguishers for 9-round SIMON32/64, 10-round SIMON48/96 and 11-round SIMON64/128. For SIMON48/96, we choose $(0x0,0x100000)$ as the input difference and train 9-round and 10-round neural distinguishers. In addition, with an automatic search based on SAT, we extend the 9-round and 10-round neural distinguishers to 11-round and 12-round distinguishers by prepending the optimal 2-round differential transition $(0x400000,0x100001) \xrightarrow{2^{-4}} \left(0x0,0x100000\right)$. Based on the 11-round and 12-round neural distinguishers, we mount a 14-round key recovery attack on SIMON48/96. Our attack takes about 1550 seconds to recover the final subkey. Its average time complexity is no more than $2^{22.21}$ 14-round encryptions of SIMON48/96, and the data complexity is about $2^{12.8}$. Similarly, we mount a 13-round key recovery attack on SIMON32/64 with input difference $(0x0,0x80)$, with a success rate of more than 90%. It takes about 23 seconds to complete an attack, with data complexity no more than $2^{12.5}$ and time complexity no more than $2^{16.4}$. It is worth mentioning that the attacks on 13-round SIMON32/64 and 14-round SIMON48/96 are practical.
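As a rough illustration of the data such neural distinguishers are trained on, here is a minimal Python sketch generating labeled ciphertext pairs for reduced-round SIMON32/64 with the input difference $(0x0,0x80)$ mentioned above. The round function is SIMON's, but round keys are sampled independently for simplicity rather than derived with SIMON's key schedule, and the SAT-based difference search of the paper is not reproduced.

```python
import secrets

W, MASK = 16, 0xFFFF                       # SIMON32 operates on 16-bit words

def rol(x, r):
    return ((x << r) | (x >> (W - r))) & MASK

def simon_round(x, y, k):
    # SIMON Feistel round: f(x) = (x<<<1 & x<<<8) ^ (x<<<2).
    return y ^ (rol(x, 1) & rol(x, 8)) ^ rol(x, 2) ^ k, x

def encrypt(x, y, keys):
    for k in keys:
        x, y = simon_round(x, y, k)
    return x, y

def labeled_sample(rounds=13, diff=(0x0, 0x80)):
    keys = [secrets.randbelow(1 << W) for _ in range(rounds)]
    x0, y0 = secrets.randbelow(1 << W), secrets.randbelow(1 << W)
    label = secrets.randbelow(2)           # 1: real pair, 0: random pair
    if label:
        x1, y1 = x0 ^ diff[0], y0 ^ diff[1]
    else:
        x1, y1 = secrets.randbelow(1 << W), secrets.randbelow(1 << W)
    return encrypt(x0, y0, keys) + encrypt(x1, y1, keys), label
```

A network is then trained to predict the label from the two ciphertexts; any accuracy noticeably above 0.5 is what makes it a distinguisher.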
Gang Wang
ePrint Report
Sharding technology is becoming a promising candidate to address the scalability issues in blockchain. The key concept behind sharding is to partition the network state into multiple distinct, smaller committees, each of which handles a disjoint set of transactions, so as to leverage parallel processing. However, when introducing sharding technology to blockchain, several key challenges need to be resolved, such as security and heterogeneity among the participating nodes. This paper introduces RepShard, a reputation-based blockchain sharding scheme that aims to achieve both linearly scaling efficiency and system security simultaneously. RepShard adopts a two-layer hierarchical chain structure, consisting of a reputation chain and independent transaction chains. Each transaction chain is maintained within its shard to record transactions, while the reputation chain is maintained by all shards to update the reputation score of each participating node. We leverage a novel reputation scheme to record each participating node's integrated and valid contribution to the system, taking into account the heterogeneity of participating nodes (e.g., in computational resources). The reputation scores used in the sharding and leader election processes maintain the balance and security of each shard. RepShard relies on verifiable relay transactions for cross-shard transactions to ensure consistency between distinct shards. By integrating reputation into the sharding protocol, our scheme offers both scalability and security at the same time.
Gang Wang, Mark Nixon
ePrint Report
Reliable and verifiable public randomness is not only an essential building block in various cryptographic primitives, but also a critical component in many distributed and decentralized protocols, e.g., blockchain sharding. A 'good' randomness generator should preserve several distinctive properties, such as public verifiability, bias-resistance, unpredictability, and availability. However, it is a challenging task to generate such good randomness. For instance, a dishonest party may behave deceptively to bias the final randomness toward his preferences, and this challenge is more serious in a distributed and decentralized system. Blockchain technology provides several promising features, such as decentralization, immutability, and trustworthiness; however, most existing solutions face an additional scalability issue due to extremely high communication and computation overheads. We propose a sharding-based scheme, RandChain, to obtain practical, scalable, distributed and decentralized randomness attested by blockchain in large-scale applications. In RandChain, we eliminate the computation-heavy cryptographic operations, e.g., Publicly Verifiable Secret Sharing (PVSS), used in prevalent approaches. We build a sub-routine, RandGene, which utilizes a commit-then-reveal strategy to establish local randomness, enforced by an efficient Verifiable Random Function (VRF). RandGene generates the randomness based on statistical approaches instead of heavy cryptographic operations. RandChain maintains a two-layer hierarchical chain structure via a sharding scheme. The first-level chain is maintained by RandGene within each shard to provide a blockchain-attested source of verifiable randomness; the second-level chain uses the randomness from each shard to build a global randomness chain.
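As a rough illustration of the commit-then-reveal pattern that RandGene builds on, here is a minimal Python sketch. Plain commit-then-reveal by itself does not handle parties who refuse to reveal or who grind their contribution, which is where the VRF enforcement mentioned above comes in; none of that machinery is reproduced here, and all names are illustrative.

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + value).digest(), nonce

def check_open(com: bytes, nonce: bytes, value: bytes) -> bool:
    return hashlib.sha256(nonce + value).digest() == com

# Phase 1: every party publishes a commitment to its contribution.
values = [secrets.token_bytes(32) for _ in range(4)]
commitments = [commit(v) for v in values]

# Phase 2: once all commitments are posted, parties reveal, openings
# are checked, and the valid contributions are combined (XOR here).
assert all(check_open(c, n, v) for (c, n), v in zip(commitments, values))
local_rand = b"\x00" * 32
for v in values:
    local_rand = bytes(a ^ b for a, b in zip(local_rand, v))
```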
Gang Wang, Mark Nixon, Mike Boudreaux
ePrint Report
Process industries cover a wide set of industries, in which the processes are controlled by a combination of Distributed Control Systems (DCSs) and Programmable Logic Controllers (PLCs). These control systems utilize various measurements such as pressure, flow, and temperature to determine the state of the process, and then use field devices such as valves and other actuating devices to manipulate the process. Since there are many different types of field devices, and since each device is calibrated to its specific installation, when monitoring devices it is important to be able to transfer not only the device measurement and diagnostics, but also characteristics about the device and the process in which it is installed. The current monitoring architecture, however, creates challenges for continuous monitoring and analysis of diagnostic data. In this paper, we present the design of an Industrial IoT system for supporting large-scale and continuous device condition monitoring and analysis in process control systems. The system design seamlessly integrates existing infrastructure (e.g., HART and WirelessHART networks, and the DeltaV DCS) with newly developed hardware/software components (e.g., a one-way data diode, an IoT cellular architecture) for control network data collection and streaming of the collected device diagnostic parameters to a private cloud, where streaming data analytics designed for fault identification and prediction are performed. A prototype system has been developed, supported by Emerson Automation Solutions, and deployed in the field for design validation and long-term performance evaluation. To the best of our knowledge, this is the first publicly reported effort on IoT system design for process automation applications. The design can be readily extended for condition monitoring and analysis of many other industrial facilities and processes.
Ashrujit Ghoshal, Stefano Tessaro
ePrint Report
We study the memory-tightness of security reductions in public-key cryptography, focusing in particular on Hashed ElGamal. We prove that any straightline (i.e., without rewinding) black-box reduction needs memory which grows linearly with the number of queries of the adversary it has access to, as long as this reduction treats the underlying group generically. This makes progress towards proving a conjecture by Auerbach et al. (CRYPTO 2017), and is also the first lower bound on memory-tightness for a concrete cryptographic scheme (as opposed to generalized reductions across security notions). Our proof relies on compression arguments in the generic group model.
Daniel Noble
ePrint Report
Cuckoo Hashing is a dictionary data structure in which a data item is stored in a small constant number of possible locations. It has the appealing property that the data structure size is a small constant times larger than the combined size of all inserted data elements. However, many applications, especially cryptographic applications and Oblivious RAM, require insertions, builds and accesses to have a negligible failure probability, which standard Cuckoo Hashing cannot simultaneously achieve. An alternative proposal introduced by Kirsch et al. is to store elements which cannot be placed in the main table in a "stash", reducing the failure probability to $O(n^{-s})$, where $n$ is the table size and $s$ is any constant stash size. This failure probability is still not negligible. Goodrich and Mitzenmacher showed that the failure probability can be made negligible in some parameter $N$ when $n = \Omega(\log^7 N)$ and $s = \Theta(\log N)$.
In this paper, I explore these analyses, as well as the insightful alternative analysis of Aumüller et al. Following this, I present a tighter analysis which shows a failure probability negligible in $N$ for all $n = \omega(\log N)$ (which is asymptotically optimal), and I give explicit constants for the failure-probability upper bound.
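As a rough illustration, here is a minimal Python sketch of cuckoo hashing with a stash in the spirit of the Kirsch et al. variant discussed above; the table size, stash size, and eviction bound are illustrative and are not the parameters analyzed here.

```python
import hashlib

class CuckooWithStash:
    def __init__(self, n: int, stash_size: int = 4, max_evictions: int = 64):
        self.n = n
        self.table = [None] * n
        self.stash, self.stash_size = [], stash_size
        self.max_evictions = max_evictions

    def _slots(self, key: bytes) -> tuple[int, int]:
        # Derive the item's two candidate locations from one hash.
        d = hashlib.sha256(key).digest()
        return (int.from_bytes(d[:8], "big") % self.n,
                int.from_bytes(d[8:16], "big") % self.n)

    def insert(self, key: bytes) -> bool:
        item, pos = key, self._slots(key)[0]
        for _ in range(self.max_evictions):
            if self.table[pos] is None:
                self.table[pos] = item
                return True
            item, self.table[pos] = self.table[pos], item  # evict occupant
            h1, h2 = self._slots(item)
            pos = h2 if pos == h1 else h1                  # try its other slot
        if len(self.stash) < self.stash_size:              # overflow -> stash
            self.stash.append(item)
            return True
        return False                                        # build failure

    def lookup(self, key: bytes) -> bool:
        h1, h2 = self._slots(key)
        return key in (self.table[h1], self.table[h2]) or key in self.stash
```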
Chitchanok Chuengsatiansup, Damien Stehle
ePrint Report
We investigate the efficiency of a (module-)LWR-based PRF built using the GGM design. Our construction enjoys the security proof of the GGM construction and the (module-)LWR hardness assumption, which is believed to be post-quantum secure. We propose GGM-based PRFs from PRGs with a larger output-to-input ratio. This reduces the number of PRG invocations, which improves the PRF performance and reduces the security loss in the GGM security reduction. Our construction bridges the gap between practical and provably secure PRFs. We demonstrate the efficiency of our construction by providing parameters achieving at least 128-bit post-quantum security and optimized implementations utilizing AVX2 vector instructions. Our PRF requires, on average, only 39.4 cycles per output byte.
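As a rough illustration of why a larger output-to-input ratio helps, here is a minimal Python sketch of the GGM tree with a length-quadrupling PRG; SHAKE-256 stands in for the paper's (module-)LWR-based PRG, and the seed length and ratio are illustrative.

```python
import hashlib

LAMBDA = 32                                    # seed length in bytes

def prg(seed: bytes, ratio: int = 4) -> list[bytes]:
    # One PRG call expands a seed into `ratio` child seeds.
    out = hashlib.shake_256(seed).digest(ratio * LAMBDA)
    return [out[i * LAMBDA:(i + 1) * LAMBDA] for i in range(ratio)]

def ggm_prf(key: bytes, x: int, depth: int, ratio: int = 4) -> bytes:
    # Walk the GGM tree: each base-`ratio` digit of x selects a child.
    node = key
    for level in reversed(range(depth)):
        digit = (x // ratio ** level) % ratio
        node = prg(node, ratio)[digit]
    return node

# A 16-bit input domain costs 8 PRG calls at ratio 4, versus 16 calls
# for the binary (ratio-2) GGM tree over the same domain.
out = ggm_prf(b"\x00" * LAMBDA, 12345, depth=8, ratio=4)
```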
Anirudh C, Ashish Choudhury, Arpita Patra
ePrint Report
Verifiable Secret-Sharing (VSS) is a fundamental primitive in secure distributed computing. It is used as an important building block in several distributed computing tasks, such as Byzantine agreement and secure multi-party computation. VSS has been widely studied in various dimensions over the last three decades and several important results have been achieved related to the fault-tolerance, round-complexity and communication efficiency of VSS schemes. In this article, we consider VSS schemes with perfect security, tolerating computationally unbounded adversaries. We comprehensively survey the existing perfectly-secure VSS schemes in three different settings, namely synchronous, asynchronous and hybrid communication settings and provide the full details of each of the existing schemes in these settings. The aim of this survey is to provide a clear knowledge and foundation to researchers who are interested in knowing and extending the state-of-the-art perfectly-secure VSS schemes.
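As a rough illustration of the base object involved, here is a minimal Python sketch of plain (non-verifiable) Shamir secret sharing over a prime field; the verification machinery that turns this into VSS, which is what the survey covers, is not reproduced here.

```python
import secrets

P = 2**127 - 1                       # a Mersenne prime, illustrative only

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    # Random degree-(t-1) polynomial with constant term = secret; any t
    # of the n shares determine the polynomial, fewer reveal nothing.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0.
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

assert reconstruct(share(123456789, t=3, n=5)[:3]) == 123456789
```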
Daniel Nager, "Danny" Niu Jianfang
ePrint Report
This paper proposes a new public-key cryptosystem based on a quasigroup with the special property of "restricted-commutativity". We argue its security empirically and present constructions for key exchange and digital signatures. To the best of our knowledge, our primitive and constructions have no known polynomial-time quantum attack.
06 April 2021
Cholun Kim
ePrint Report
A proxy signature is a kind of digital signature in which a user called the original signer can delegate his signing rights to another user called the proxy signer, and the proxy signer can sign messages on behalf of the original signer. Certificateless proxy signature (CLPS) refers to proxy signature in the certificateless setting, in which there is neither the certificate management issue of traditional PKI nor the private key escrow problem of the identity-based setting. Up to now, a number of CLPS schemes have been proposed, but some of those schemes either lack formal security analysis or turn out to be insecure, while others are less efficient because of using costly operations, including bilinear pairings and map-to-point hashing on elliptic curve groups.
In this paper, we formalize the definition and security model of CLPS schemes. We then construct a pairing-free CLPS scheme from the standard ECDSA and prove its security in the random oracle model under the hardness assumption of the discrete semi-logarithm problem, as in the provable security results for ECDSA.
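As a rough illustration of the proxy-signature workflow (delegation by warrant), here is a minimal Python sketch using Ed25519 from the `cryptography` package as a stand-in signature scheme; this shows the generic pattern only, not the paper's certificateless ECDSA-based construction or its security model.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

original = Ed25519PrivateKey.generate()
proxy = Ed25519PrivateKey.generate()

# Delegation: the original signer signs a warrant naming the proxy's key
# (real warrants also restrict the message space, validity period, etc.).
proxy_pub = proxy.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
warrant = b"delegate-to:" + proxy_pub
delegation = original.sign(warrant)

# Proxy signing: the proxy signs the message bound to the warrant.
message = b"signed on behalf of the original signer"
proxy_sig = proxy.sign(warrant + message)

# Verification: check the delegation under the original signer's key,
# then the proxy signature under the key named in the warrant.
original.public_key().verify(delegation, warrant)        # raises if invalid
proxy.public_key().verify(proxy_sig, warrant + message)  # raises if invalid
```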
Raluca Posteuca, Tomer Ashur
ePrint Report
Newly designed block ciphers are required to show resistance against known attacks, e.g., linear and differential cryptanalysis. Two widely used methods for doing so are to employ an automated search tool (e.g., MILP, SAT/SMT, etc.) and/or to provide a wide-trail argument. In both cases, the core of the argument consists of bounding the transition probability of the statistical property over an isolated non-linear operation, then multiplying it by the number of such operations (e.g., the number of active S-boxes).
In this paper we show that in the case of linear cryptanalysis such strategies can sometimes lead to a gap between the claimed security and the actual one, and that this gap can be exploited by a malicious designer. We introduce RooD, a block cipher with a carefully crafted backdoor. Using the wide-trail strategy, we argue the resistance of the cipher against linear and differential cryptanalysis. However, the cipher has a key-dependent iterative linear approximation over 12 rounds, holding with probability 1. This property is based on the linear hull effect, although every linear trail underlying the linear hull has probability smaller than 1.
Yukun Wang, Mingqiang Wang
ePrint Report
A software watermarking scheme enables one to embed a "mark" (i.e., a message) into a program without significantly changing its functionality. Moreover, any removal of the watermark from a marked program is futile without significantly changing the functionality of the program. At present, constructions of software watermarking mainly focus on watermarking pseudorandom functions (PRFs), watermarking public-key encryption, watermarking signatures, etc.
In this work, we construct new watermarking PRFs from lattices which provide collusion resistance and public extraction. Our schemes are the first to simultaneously achieve both of these properties. The key to the success of our new constructions lies in two parts. First, we relax the notion of functionality-preserving. In general, a marked program is required to (approximately) preserve the input/output behavior of the original program. In our scheme, the output circuit is divided into two parts, one for the PRF output and the other for auxiliary functions; as a result, we only require the PRF output circuit to be functionality-preserving. Second, our marking method is essentially different from previous schemes. In general, marking a program changes its output on some special points, and the extraction algorithm decides whether a circuit is marked by checking whether the output on those special points has changed. In our schemes, we instead use a constrained signature to mark a PRF circuit.
Wenshuo Guo, Fangwei Fu
ePrint Report
This paper presents two modifications of Loidreau's code-based cryptosystem. Loidreau's cryptosystem is a rank-metric code-based cryptosystem constructed by using Gabidulin codes in the McEliece setting. Recently, a polynomial-time key recovery attack was proposed to break Loidreau's cryptosystem in some cases. To prevent this attack, we propose the use of subcodes to disguise the secret codes in Modification I. In Modification II, we choose a random matrix of low column rank over $\mathbb{F}_q$ to mix with the secret matrix. According to our analysis, both modifications can resist the existing structural attacks. Additionally, we adopt the systematic generator matrix of the public code to reduce the public-key size. In addition to stronger resistance against structural attacks and a more compact representation of public keys, our modifications also have larger information transmission rates.
Donghoon Chang, Meltem Sonmez Turan
ePrint Report
Grain-128AEAD is one of the second-round candidates of the NIST lightweight cryptography standardization process. There is an existing body of third-party analysis on the earlier versions of the Grain family that provides insights into the security of Grain-128AEAD. Unlike the earlier versions, Grain-128AEAD reintroduces the key into the internal state during initialization. Due to this change, the designers claim that internal state recovery no longer results in key recovery. In this paper, we analyze this claim under different scenarios.