International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

08 April 2021

Gang Wang, Mark Nixon
ePrint Report
Reliable and verifiable public randomness is not only an essential building block in various cryptographic primitives, but also a critical component in many distributed and decentralized protocols, e.g., blockchain sharding. A 'good' randomness generator should satisfy several distinctive properties, such as public verifiability, bias-resistance, unpredictability, and availability. However, generating such good randomness is a challenging task: a dishonest party may behave deceptively to bias the final randomness toward its preferences, and this challenge is more serious in a distributed and decentralized system. Blockchain technology provides several promising features, such as decentralization, immutability, and trustworthiness, but due to extremely high communication and computation overheads, most existing solutions face an additional scalability issue. We propose a sharding-based scheme, RandChain, to obtain practical, scalable, distributed and decentralized randomness attested by blockchain in large-scale applications. In RandChain, we eliminate the computation-heavy cryptographic operations, e.g., Publicly Verifiable Secret Sharing (PVSS), used in prevalent approaches. We build a sub-routine, RandGene, which utilizes a commit-then-reveal strategy to establish local randomness, enforced by an efficient Verifiable Random Function (VRF). RandGene generates the randomness based on statistical approaches instead of cryptographic operations, eliminating heavy computation. RandChain maintains a two-layer hierarchical chain structure via a sharding scheme. The first-level chain is maintained by RandGene within each shard to provide a verifiable randomness source attested by blockchain; the second-level chain uses the randomness from each shard to build a randomness chain.
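For readers unfamiliar with the commit-then-reveal pattern that RandGene builds on, the following Python sketch shows the generic idea under simplifying assumptions: hash commitments stand in for the actual commitment scheme, and the VRF enforcement, statistical aggregation, and sharding layers of RandChain are omitted entirely.

```python
# Minimal commit-then-reveal randomness sketch (illustrative only; this is the
# generic pattern RandGene builds on, not the RandChain protocol itself --
# VRF enforcement, statistical aggregation and sharding are omitted).
import hashlib
import os

def commit(value, nonce):
    """Hash commitment to a locally generated random value."""
    return hashlib.sha256(nonce + value).digest()

def verify_opening(commitment, value, nonce):
    return commit(value, nonce) == commitment

def combine(reveals):
    """Derive the round randomness from all revealed contributions."""
    h = hashlib.sha256()
    for r in sorted(reveals):          # canonical order so every node agrees
        h.update(r)
    return h.digest()

# Commit phase: each party publishes only its commitment.
parties = [(os.urandom(32), os.urandom(16)) for _ in range(4)]
commitments = [commit(v, n) for v, n in parties]

# Reveal phase: openings are checked against the earlier commitments.
assert all(verify_opening(c, v, n) for c, (v, n) in zip(commitments, parties))
beacon = combine([v for v, _ in parties])
print(beacon.hex())
```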
Gang Wang, Mark Nixon, Mike Boudreaux
ePrint Report
Process industries cover a wide set of industries, in which the processes are controlled by a combination of Distributed Control Systems (DCSs) and Programmable Logic Controllers (PLCs). These control systems utilize various measurements such as pressure, flow, and temperature to determine the state of the process and then use field devices such as valves and other actuating devices to manipulate the process. Since there are many different types of field devices and since each device is calibrated to its specific installation, when monitoring devices it is important to be able to transfer not only the device measurement and diagnostics, but also characteristics of the device and the process in which it is installed. The current monitoring architecture, however, creates challenges for continuous monitoring and analysis of diagnostic data. In this paper, we present the design of an Industrial IoT system for supporting large-scale and continuous device condition monitoring and analysis in process control systems. The system design seamlessly integrates existing infrastructure (e.g., HART and WirelessHART networks, and the DeltaV DCS) and newly developed hardware/software components (e.g., a one-way data diode, an IoT cellular architecture) for control network data collection and streaming of the collected device diagnostic parameters to a private cloud, which performs streaming data analytics designed for fault identification and prediction. A prototype system has been developed with the support of Emerson Automation Solutions and deployed in the field for design validation and long-term performance evaluation. To the best of our knowledge, this is the first publicly reported effort on IoT system design for process automation applications. The design can be readily extended for condition monitoring and analysis of many other industrial facilities and processes.
Ashrujit Ghoshal, Stefano Tessaro
ePrint Report
We study the memory-tightness of security reductions in public-key cryptography, focusing in particular on Hashed ElGamal. We prove that any straightline (i.e., without rewinding) black-box reduction needs memory which grows linearly with the number of queries of the adversary it has access to, as long as this reduction treats the underlying group generically. This makes progress towards proving a conjecture by Auerbach et al. (CRYPTO 2017), and is also the first lower bound on memory-tightness for a concrete cryptographic scheme (as opposed to generalized reductions across security notions). Our proof relies on compression arguments in the generic group model.
Daniel Noble
ePrint Report
Cuckoo Hashing is a dictionary data structure in which a data item is stored in a small constant number of possible locations. It has the appealing property that the data structure size is a small constant times larger than the combined size of all inserted data elements. However, many applications, especially cryptographic applications and Oblivious RAM, require insertions, builds and accesses to have a negligible failure probability, which standard Cuckoo Hashing cannot simultaneously achieve. An alternative proposal introduced by Kirsch et al. is to store elements which cannot be placed in the main table in a ``stash'', reducing the failure probability to $O(n^{-s})$ where $n$ is the table size and $s$ any constant stash size. This failure probability is still not negligible. Goodrich and Mitzenmacher showed that the failure probability can be made negligible in some parameter $N$ when $n = \Omega(\log^7 N)$ and $s = \Theta(\log N)$. In this paper, I will explore these analyses, as well as the insightful alternative analysis of Aumüller et al. Following this, I present a tighter analysis which shows failure probability negligible in $N$ for all $n = \omega(\log N)$ (which is asymptotically optimal) and I present explicit constants for the failure probability upper bound.
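As a point of reference for the analysis above, here is a toy Python sketch of cuckoo hashing with a stash; the table size, stash size, and eviction limit are arbitrary illustrative choices, not the parameters analyzed in the paper.

```python
# Toy cuckoo hash table with a stash (illustrative sketch of the data
# structure under analysis; hash functions and parameters are arbitrary).
import random

random.seed(1)

class CuckooWithStash:
    def __init__(self, size, stash_size=4, max_kicks=32):
        self.size = size
        self.tables = [[None] * size, [None] * size]
        self.stash = []
        self.stash_size = stash_size
        self.max_kicks = max_kicks
        self.seeds = (random.getrandbits(32), random.getrandbits(32))

    def _pos(self, key, i):
        return hash((self.seeds[i], key)) % self.size

    def insert(self, key):
        cur = key
        for _ in range(self.max_kicks):
            for i in (0, 1):
                p = self._pos(cur, i)
                if self.tables[i][p] is None:
                    self.tables[i][p] = cur
                    return True
            # Both candidate slots occupied: evict a resident and retry with it.
            i = random.choice((0, 1))
            p = self._pos(cur, i)
            self.tables[i][p], cur = cur, self.tables[i][p]
        # Insertion failed after max_kicks evictions: fall back to the stash.
        if len(self.stash) < self.stash_size:
            self.stash.append(cur)
            return True
        return False  # build failure

    def contains(self, key):
        return any(self.tables[i][self._pos(key, i)] == key for i in (0, 1)) \
            or key in self.stash

t = CuckooWithStash(size=64)
for k in range(40):
    assert t.insert(k)
assert t.contains(17) and not t.contains(999)
```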
Chitchanok Chuengsatiansup, Damien Stehle
ePrint Report
We investigate the efficiency of a (module-)LWR-based PRF built using the GGM design. Our construction enjoys the security proof of the GGM construction and the (module-)LWR hardness assumption, which is believed to be post-quantum secure. We propose GGM-based PRFs from PRGs with a larger output-to-input ratio. This reduces the number of PRG invocations, which improves the PRF performance and reduces the security loss in the GGM security reduction. Our construction bridges the gap between practical and provably secure PRFs. We demonstrate the efficiency of our construction by providing parameters achieving at least 128-bit post-quantum security and optimized implementations utilizing AVX2 vector instructions. Our PRF requires, on average, only 39.4 cycles per output byte.
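The following Python sketch illustrates GGM tree evaluation with a PRG of expansion factor t, showing why a larger output-to-input ratio reduces the number of PRG calls; SHAKE-256 is only a placeholder for the paper's (module-)LWR-based PRG, and all parameters are illustrative.

```python
# Minimal GGM-style PRF sketch built from a PRG with expansion factor t
# (illustrative only; the paper instantiates the PRG from module-LWR with
# concrete optimized parameters -- SHAKE-256 is just a stand-in here).
import hashlib

SEED_LEN = 32  # bytes

def prg(seed, t):
    """Expand one seed into t child seeds (placeholder PRG)."""
    out = hashlib.shake_256(seed).digest(t * SEED_LEN)
    return [out[i * SEED_LEN:(i + 1) * SEED_LEN] for i in range(t)]

def ggm_prf(key, x, depth, t):
    """Walk the t-ary GGM tree: each level consumes one base-t digit of x.
    A larger t means fewer PRG calls (fewer levels) for the same input space."""
    node = key
    for level in reversed(range(depth)):
        digit = (x // t**level) % t
        node = prg(node, t)[digit]
    return node

key = b"\x01" * SEED_LEN
# 2^16 inputs: depth 16 with a doubling PRG, but only depth 4 with t = 16.
assert ggm_prf(key, 12345, depth=16, t=2) is not None
assert ggm_prf(key, 12345, depth=4, t=16) is not None
```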
Anirudh C, Ashish Choudhury, Arpita Patra
ePrint Report
Verifiable Secret-Sharing (VSS) is a fundamental primitive in secure distributed computing. It is used as an important building block in several distributed computing tasks, such as Byzantine agreement and secure multi-party computation. VSS has been widely studied in various dimensions over the last three decades, and several important results have been achieved related to the fault-tolerance, round complexity and communication efficiency of VSS schemes. In this article, we consider VSS schemes with perfect security, tolerating computationally unbounded adversaries. We comprehensively survey the existing perfectly-secure VSS schemes in three different settings, namely the synchronous, asynchronous and hybrid communication settings, and provide the full details of each of the existing schemes in these settings. The aim of this survey is to provide clear knowledge and a foundation for researchers interested in understanding and extending the state-of-the-art perfectly-secure VSS schemes.
Daniel Nager, "Danny" Niu Jianfang
ePrint Report
This paper proposes a new public-key cryptosystem based on a quasigroup with the special property of "restricted-commutativity". We argue its security empirically and present constructions for key exchange and digital signatures. To the best of our knowledge, no polynomial-time quantum attack on our primitive and constructions is currently known.

06 April 2021

Cholun Kim
ePrint Report
A proxy signature is a kind of digital signature in which a user, called the original signer, can delegate his signing rights to another user, called the proxy signer, who can then sign messages on behalf of the original signer. Certificateless proxy signature (CLPS) means proxy signature in the certificateless setting, which suffers neither from the certificate management issue of traditional PKI nor from the private key escrow problem of the identity-based setting. Up to now, a number of CLPS schemes have been proposed, but some of those schemes either lack formal security analysis or turn out to be insecure, while others are less efficient because they use costly operations, including bilinear pairings and map-to-point hashing on elliptic curve groups. In this paper, we formalize the definition and security model of CLPS schemes. We then construct a pairing-free CLPS scheme from standard ECDSA and prove its security in the random oracle model under the hardness assumption of the discrete semi-logarithm problem, as in the provable security result of ECDSA.
Raluca Posteuca, Tomer Ashur
ePrint Report
Newly designed block ciphers are required to show resistance against known attacks, e.g., linear and differential cryptanalysis. Two widely used methods to do this are to employ an automated search tool (e.g., MILP, SAT/SMT, etc.) and/or to provide a wide-trail argument. In both cases, the core of the argument consists of bounding the transition probability of the statistical property over an isolated non-linear operation, then multiplying it by the number of such operations (e.g., the number of active S-boxes). In this paper we show that in the case of linear cryptanalysis such strategies can sometimes lead to a gap between the claimed security and the actual one, and that this gap can be exploited by a malicious designer. We introduce RooD, a block cipher with a carefully crafted backdoor. Using the wide-trail strategy, we argue the cipher's resistance against linear and differential cryptanalysis. However, the cipher has a key-dependent iterative linear approximation over 12 rounds, holding with probability 1. This property is based on the linear hull effect, although any linear trail underlying the linear hull has probability smaller than 1.
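As a reminder of how such wide-trail bounds are typically computed, the snippet below multiplies a per-S-box bound by an assumed number of active S-boxes; the numbers are illustrative and are not RooD's actual parameters.

```python
# Back-of-the-envelope wide-trail bound of the kind the paper shows can be
# misleading (all numbers are illustrative assumptions, not RooD's parameters).
import math

max_corr_sq = 2.0 ** -4   # assumed squared correlation of the best S-box approximation
active_sboxes = 25        # assumed minimum number of active S-boxes over the trail

trail_bound = max_corr_sq ** active_sboxes
print(f"single-trail correlation bound: 2^{math.log2(trail_bound):.0f}")
# The paper's point: summing many such trails inside one linear hull can push
# the aggregate correlation to 1 even though every individual trail looks negligible.
```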
Yukun Wang, Mingqiang Wang
ePrint Report
A software watermarking scheme enables one to embed a ``mark'' (i.e., a message) into a program without significantly changing its functionality. Moreover, any removal of the watermark from a marked program is futile without significantly changing the functionality of the program. At present, constructions of software watermarking mainly focus on watermarking pseudorandom functions (PRFs), watermarking public-key encryption, watermarking signatures, etc.

In this work, we construct new watermarking PRFs from lattices which provide collusion resistance and public extraction. Our schemes are the first to simultaneously achieve all of these properties. The key to the success of our new constructions lies in two parts. First, we relax the notion of functionality-preserving. In general, a marked program is required to (approximately) preserve the input/output behavior of the original program. In our scheme, the output circuit is divided into two parts, one for the PRF output and the other for auxiliary functions; as a result, we only require the PRF output circuit to be functionality-preserving. Second, the marking method we use is essentially different from previous schemes. In general, the marking algorithm changes the output of the program at some special points, and the extraction algorithm determines whether a circuit is marked by checking whether the outputs at those special points have been changed. In our schemes, we use a constrained signature to mark a PRF circuit.
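The toy Python sketch below illustrates only the generic mark-at-special-points idea described above; the paper's actual construction embeds a constrained signature into a lattice-based PRF, which is not modeled here, and the key, special points, and bit-flip marking are hypothetical placeholders.

```python
# Toy illustration of marking by changing outputs at secret "special" points
# and extracting by checking those points (none of the paper's lattice-based
# or constrained-signature machinery is modeled; all values are placeholders).
import hmac
import hashlib

KEY = b"prf-key"
SPECIAL_POINTS = [b"x1", b"x2", b"x3"]   # hypothetical extraction points

def prf(x):
    return hmac.new(KEY, x, hashlib.sha256).digest()

def marked_prf(x):
    y = prf(x)
    if x in SPECIAL_POINTS:
        # Flip one output bit at the special points to embed the mark.
        return bytes([y[0] ^ 1]) + y[1:]
    return y

def extract(circuit):
    """Declare the circuit marked if its outputs differ at the special points."""
    return all(circuit(x) != prf(x) for x in SPECIAL_POINTS)

assert extract(marked_prf) and not extract(prf)
```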
Wenshuo Guo, Fangwei Fu
ePrint Report
This paper presents two modifications of Loidreau’s code-based cryptosystem. Loidreau’s cryptosystem is a rank-metric code-based cryptosystem constructed by using Gabidulin codes in the McEliece setting. Recently, a polynomial-time key recovery attack was proposed to break Loidreau’s cryptosystem in some cases. To prevent this attack, we propose the use of subcodes to disguise the secret codes in Modification I. In Modification II, we choose a random matrix of low column rank over $\mathbb{F}_q$ to mix with the secret matrix. According to our analysis, these two modifications can both resist the existing structural attacks. Additionally, we adopt the systematic generator matrix of the public code to reduce the public-key size. In addition to stronger resistance against structural attacks and a more compact representation of public keys, our modifications also have larger information transmission rates.
Donghoon Chang, Meltem Sonmez Turan
ePrint Report
Grain-128AEAD is one of the second-round candidates of the NIST lightweight cryptography standardization process. There is an existing body of third-party analysis of the earlier versions of the Grain family that provides insights into the security of Grain-128AEAD. Different from the earlier versions, Grain-128AEAD reintroduces the key into the internal state during initialization. Due to this change, the designers claim that internal state recovery no longer results in key recovery. In this paper, we analyze this claim under different scenarios.
Toomas Krips, Helger Lipmaa
ePrint Report
Efficient shuffle arguments are essential in mixnet-based e-voting solutions. Terelius and Wikström (TW) proposed a 5-round shuffle argument based on unique factorization in polynomial rings. Their argument is available as the Verificatum software solution for real-world developers, and has been used in real-world elections. It is also the fastest non-patented shuffle argument. We will use the same basic idea as TW but significantly optimize their approach. We generalize the TW characterization of permutation matrices; this enables us to reduce the communication without adding too much to the computation. We make the TW shuffle argument computationally more efficient by using Groth's coefficient-product argument (JOC, 2010). Additionally, we use batching techniques. The resulting shuffle argument is the fastest known $\leq 5$-message shuffle argument, and, depending on the implementation, can be faster than Groth's argument (the fastest 7-message shuffle argument).
Nikolaj Sidorenco, Sabine Oechsner, Bas Spitters
ePrint Report
Zero-knowledge proofs allow a prover to convince a verifier of the veracity of a statement without revealing any other information. An interesting class of zero-knowledge protocols is that following the MPC-in-the-head paradigm (Ishai et al., STOC ’07), which uses secure multiparty computation (MPC) protocols as the basis. Efficient instances of this paradigm have emerged as an active research topic in recent years, starting with ZKBoo (Giacomelli et al., USENIX ’16). Zero-knowledge protocols are a vital building block in the design of privacy-preserving technologies as well as cryptographic primitives, like digital signature schemes, that provide post-quantum security. This work investigates the security of zero-knowledge protocols following the MPC-in-the-head paradigm. We provide the first machine-checked security proof of such a protocol, using ZKBoo as the example. Our proofs are checked in the EasyCrypt proof assistant. To enable a modular security proof, we develop a new security notion for the MPC protocols used in MPC-in-the-head zero-knowledge protocols. This allows us to recast existing security proofs in a black-box fashion, which we believe to be of independent interest.
Duc-Phong Le, Sze Ling Yeo, Khoongming Khoo
ePrint Report
An algebraic differential fault attack (ADFA) is an attack in which an attacker combines a differential fault attack and an algebraic technique to break a targeted cipher. In this paper, we present three attacks using three different algebraic techniques combined with a differential fault attack in the bit-flip fault model to break the SIMON block cipher. First, we introduce a new analytic method that is based on a differential trail between the correct and faulty ciphertexts. This method is able to recover the entire master key of any member of the SIMON family by injecting faults into a single round of the cipher. In our second attack, we present a simplified Gröbner basis algorithm to solve the faulty system. We show that this method can totally break SIMON ciphers with only 3 to 5 injected faults. Our third attack combines a fault attack with a modern SAT solver. By guessing some key bits and with only a single fault injected at round T - 6, where T is the number of rounds of a SIMON cipher, this combined attack manages to recover the master key of the cipher. For the last two attacks, we perform experiments to demonstrate their effectiveness. These experiments are implemented on personal computers and run in very reasonable time.
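To make the fault locations concrete, here is the standard SIMON round function in Python (word size chosen for SIMON32/64); the attack code itself is not reproduced, and the round key and inputs are arbitrary example values.

```python
# SIMON round function (standard specification), shown only to make the fault
# injection points discussed above concrete.
WORD = 16                       # word size for SIMON32/64
MASK = (1 << WORD) - 1

def rotl(x, r):
    return ((x << r) | (x >> (WORD - r))) & MASK

def simon_round(x, y, k):
    """One Feistel round: a bit-flip fault in x here propagates through the
    AND/rotation terms, which is what the differential trail of the first
    attack exploits."""
    f = (rotl(x, 1) & rotl(x, 8)) ^ rotl(x, 2)
    return y ^ f ^ k, x

x, y = 0x6565, 0x6877           # arbitrary example state
x, y = simon_round(x, y, 0x0100)
print(hex(x), hex(y))
```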
Elaine Shi, Ke Wu
ePrint Report
Anonymous routing is one of the most fundamental online privacy problems and has been studied extensively for decades. Almost all known approaches for anonymous routing (e.g., mix-nets, DC-nets, and others) rely on multiple servers or routers to engage in some {\it interactive} protocol; and anonymity is guaranteed in the {\it threshold} model, i.e., if one or more of the servers/routers behave honestly.

Departing from all prior approaches, we propose a novel {\it non-interactive} abstraction called a Non-Interactive Anonymous Router (NIAR), which works even with a {\it single untrusted router}. In a NIAR scheme, suppose that $n$ senders each want to talk to a distinct receiver. A one-time trusted setup is performed such that each sender obtains a sending key, each receiver obtains a receiving key, and the router receives a {\it token} that ``encrypts'' the permutation mapping the senders to receivers. In every time step, each sender can encrypt its message using its sender key, and the router can use its token to convert the $n$ ciphertexts received from the senders to $n$ {\it transformed ciphertexts}. Each transformed ciphertext is delivered to the corresponding receiver, and the receiver can decrypt the message using its receiver key. Imprecisely speaking, security requires that the untrusted router, even when colluding with a subset of corrupt senders and/or receivers, should not be able to compromise the privacy of honest parties, including who is talking to whom, and the message contents.

We show how to construct a communication-efficient NIAR scheme with provable security guarantees based on the standard Decisional Linear assumption in suitable bilinear groups. We show that a compelling application of NIAR is to realize a Non-Interactive Anonymous Shuffler (NIAS), where an untrusted server or data analyst can decrypt only a permuted version of the messages coming from the $n$ senders, with the permutation itself hidden. NIAS can be adopted to construct privacy-preserving surveys, differentially private protocols in the shuffle model, and pseudonymous bulletin boards.

Besides this main result, we also describe a variant that achieves fault tolerance when a subset of the senders may crash. Finally, we further explore a paranoid notion of security called full insider protection, and show that if we additionally assume sub-exponentially secure Indistinguishability Obfuscation and sub-exponentially secure one-way functions, one can construct a NIAR scheme with paranoid security.
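The following Python sketch captures only the syntax and correctness of the NIAR abstraction (setup, sender encryption, routing, receiver decryption); it is deliberately a toy with no anonymity whatsoever, since the paper's actual construction relies on bilinear groups under the Decisional Linear assumption.

```python
# Syntax-only toy of the NIAR interface: setup / enc / route / dec.
# Correctness only -- the token below does NOT hide the permutation, unlike
# the real scheme, and no anonymity is provided.
import hashlib
import os

def _pad(key, t, length):
    return hashlib.shake_256(key + t.to_bytes(8, "big")).digest(length)

def setup(n, perm):
    """perm[i] = index of the receiver that sender i talks to."""
    sender_keys = [os.urandom(32) for _ in range(n)]
    receiver_keys = [None] * n
    for i, j in enumerate(perm):
        receiver_keys[j] = sender_keys[i]
    token = list(perm)            # the real token "encrypts" this permutation
    return sender_keys, receiver_keys, token

def enc(sk, t, msg):
    """Sender-side encryption for time step t (XOR with a key-derived pad)."""
    return bytes(a ^ b for a, b in zip(msg, _pad(sk, t, len(msg))))

def route(token, cts):
    """Untrusted router: transforms the n ciphertexts for delivery."""
    out = [b""] * len(cts)
    for i, j in enumerate(token):
        out[j] = cts[i]
    return out

dec = enc  # XOR-pad decryption is identical to encryption

sks, rks, tok = setup(3, [2, 0, 1])
cts = [enc(sks[i], 0, b"hello") for i in range(3)]
outs = route(tok, cts)
assert all(dec(rks[j], 0, outs[j]) == b"hello" for j in range(3))
```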
Sonia Belaïd, Matthieu Rivain, Abdul Rahman Taleb
ePrint Report
The random probing model is a leakage model in which each wire of a circuit leaks with a given probability $p$. This model enjoys practical relevance thanks to a reduction to the noisy leakage model, which is admitted as the right formalization for power and electromagnetic side-channel attacks. In addition, the random probing model is much more convenient than the noisy leakage model to prove the security of masking schemes. In a recent work, Ananth, Ishai and Sahai (CRYPTO 2018) introduce a nice expansion strategy to construct random probing secure circuits. Their construction tolerates a leakage probability of $2^{-26}$, which is the first quantified achievable leakage probability in the random probing model. In a follow-up work, Bela\"id, Coron, Prouff, Rivain and Taleb (CRYPTO 2020) generalize their idea and put forward a complete and practical framework to generate random probing secure circuits. The so-called expanding compiler can bootstrap simple base gadgets as long as they satisfy a new security notion called random probing expandability (RPE). They further provide an instantiation of the framework which tolerates a $2^{-8}$ leakage probability in complexity $\mathcal{O}(\kappa^{7.5})$ where $\kappa$ denotes the security parameter.

In this paper, we provide an in-depth analysis of the RPE security notion. We exhibit the first upper bounds for the main parameter of an RPE gadget, which is known as the amplification order. We further show that the RPE notion can be made tighter and we exhibit strong connections between RPE and the strong non-interference (SNI) composition notion. We then introduce the first generic constructions of gadgets achieving RPE for any number of shares and with nearly optimal amplification orders and provide an asymptotic analysis of such constructions. Last but not least, we introduce new concrete constructions of small gadgets achieving maximal amplification orders. This allows us to obtain much more efficient instantiations of the expanding compiler: we obtain a complexity of $\mathcal{O}(\kappa^{3.9})$ for a slightly better leakage probability, as well as $\mathcal{O}(\kappa^{3.2})$ for a slightly lower leakage probability.
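For intuition, the snippet below simulates the random probing leakage model on a toy share-wise XOR gadget: each wire leaks independently with probability p. It does not implement the RPE gadgets or the expanding compiler from the paper, and the share count and leakage probability are illustrative.

```python
# Tiny simulation of the random probing model: every wire of a (toy) masked
# computation leaks independently with probability p. Illustrative only.
import random

def masked_xor_gadget(x_shares, y_shares, p):
    """Share-wise XOR of two sharings; returns (output shares, leaked wires)."""
    leaked, out = [], []
    for i, (a, b) in enumerate(zip(x_shares, y_shares)):
        c = a ^ b
        for name, value in ((f"x{i}", a), (f"y{i}", b), (f"z{i}", c)):
            if random.random() < p:          # this wire leaks to the adversary
                leaked.append((name, value))
        out.append(c)
    return out, leaked

random.seed(0)
x = [random.getrandbits(1) for _ in range(3)]
y = [random.getrandbits(1) for _ in range(3)]
out, leaked = masked_xor_gadget(x, y, p=2 ** -8)
print(len(leaked), "wires leaked")
```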
Aaram Yun
ePrint Report
In the quantum random oracle model, the adversary may make quantum superposition queries to the random oracle. Since even a single query can potentially probe exponentially many points, classical proof techniques are hard to apply. For example, recording the oracle queries seemed difficult.

In 2018, Mark Zhandry showed that, despite the apparent difficulties, it is in fact possible to ‘record’ the quantum queries. He defined the compressed oracle, which is indistinguishable from the quantum random oracle and records the information the adversary has gained through its oracle queries. It is a technically subtle work, which we believe is challenging to grasp fully.

Our aim is to obtain a mathematically clean, simple reinterpretation of the compressed oracle technique. For each partial function, we define what we call the formation and the completion of that partial function. The completions describe what happens to the real quantum random oracle, and the formations describe what happens to the compressed oracle. We will show that the formations are 'isomorphic' to the completions, giving an alternative proof that the compressed oracle is indistinguishable from the quantum random oracle.
Kevin Deforth, Marc Desgroseilliers, Nicolas Gama, Mariya Georgieva, Dimitar Jetchev, Marius Vuille
ePrint Report
We present a novel protocol, XORBoost, for both training gradient boosted tree models and using these models for inference in the multiparty computation (MPC) setting. Similarly to [AEV20], our protocol is the first one supporting training on generically split datasets (vertical and horizontal splitting, or a combination of those) while keeping all the information about the features and thresholds associated with the nodes private, thus having only the depths and the number of the binary trees as public parameters of the model. By using optimization techniques that reduce the number of oblivious permutation evaluations, as well as the quicksort and real-number arithmetic algorithms from the recent Manticore MPC framework [CDG+21], we obtain a scalable implementation operating under an information-theoretic security model in the honest-but-curious setting with a trusted dealer. On a training dataset of 25,000 samples and 300 features in the 2-player setting, we are able to train 10 regression trees of depth 4 in less than 5 minutes per tree (using histograms of 128 bins).
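The cleartext Python sketch below shows histogram-based split search for a regression-tree node, the kind of building block XORBoost evaluates inside MPC; the MPC machinery, oblivious permutations, and the Manticore framework are not reproduced, and the bin count and gain criterion are illustrative assumptions.

```python
# Cleartext sketch of histogram-based split search for one regression-tree
# node (illustrative; none of the MPC machinery is modeled here).
import numpy as np

def best_split(feature, residuals, bins=128):
    """Return (threshold, score) maximizing the usual sum^2/count gain."""
    edges = np.quantile(feature, np.linspace(0, 1, bins + 1)[1:-1])
    idx = np.searchsorted(edges, feature)            # bin index per sample
    sums = np.bincount(idx, weights=residuals, minlength=bins)
    cnts = np.bincount(idx, minlength=bins).astype(float)

    best = (None, -np.inf)
    left_sum = left_cnt = 0.0
    total_sum, total_cnt = residuals.sum(), float(len(residuals))
    for b in range(bins - 1):
        left_sum += sums[b]
        left_cnt += cnts[b]
        right_sum, right_cnt = total_sum - left_sum, total_cnt - left_cnt
        if left_cnt == 0 or right_cnt == 0:
            continue
        gain = left_sum**2 / left_cnt + right_sum**2 / right_cnt
        if gain > best[1]:
            best = (edges[b], gain)
    return best

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
r = (x > 0.3).astype(float) + rng.normal(scale=0.1, size=1000)
print(best_split(x, r))   # threshold should land near 0.3
```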
Prabhanjan Ananth, Abhishek Jain, Zhengzhong Jin, Giulio Malavolta
ePrint Report
We consider the problem of round-optimal unbounded MPC: in the first round, parties publish a message that depends only on their input. In the second round, any subset of parties can jointly and securely compute any function $f$ over their inputs in a single round of broadcast. We do not impose any a-priori bound on the number of parties nor on the size of the functions that can be computed. Our main result is a semi-malicious two-round protocol for unbounded MPC in the plain model from the hardness of the standard learning with errors (LWE) problem. Prior work in the same setting assumes the hardness of problems over bilinear maps. Thus, our protocol is the first example of unbounded MPC that is post-quantum secure. The central ingredient of our protocol is a new attribute-based secure function evaluation (AB-SFE) scheme with public decryption. Our construction combines techniques from the realm of homomorphic commitments with lattice basis delegation. We believe that such a scheme may find further applications in the future.