International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

11 October 2023

Yuncong Zhang, Shi-Feng Sun, Ren Zhang, Dawu Gu
ePrint Report
Zero-Knowledge Virtual Machines (ZKVMs) have gained traction in recent years due to their potential applications in a variety of areas, particularly blockchain ecosystems. Despite tremendous progress on ZKVMs in the industry, no formal definitions or security proofs have been established in the literature. Due to this lack of formalization, existing protocols exhibit significant discrepancies in terms of problem definitions and performance metrics, making it difficult to analyze and compare these advancements, or to trust the security of the increasingly complex ZKVM implementations.

In this work, we focus on random-access memory, an influential and expensive component of ZKVMs. Specifically, we investigate the state-of-the-art protocols for validating the correct functioning of memory, which we refer to as the \emph{memory consistency checks}. Isolating these checks from the rest of the system allows us to formalize their definition and security notion. Furthermore, we summarize the state-of-the-art constructions using the Polynomial IOP model and formally prove their security. Observing that the bottleneck of existing designs lies in sorting the entire memory trace, we break away from this paradigm and propose a novel memory consistency check, dubbed $\mathsf{Permem}$. $\mathsf{Permem}$ bypasses this bottleneck by introducing a technique called the address cycle method, which requires fewer building blocks and---after instantiating the building blocks with state-of-the-art constructions---fewer online polynomial oracles and evaluation queries. In addition, we propose $\mathsf{gcq}$, a new construction for the lookup argument---a key building block of the memory consistency check, which costs fewer online polynomial oracles than the state-of-the-art construction $\mathsf{cq}$.
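
To make the sorting bottleneck concrete, here is a minimal Python sketch of the classical sort-based memory consistency check that the paper breaks away from (the prior paradigm, not $\mathsf{Permem}$ itself); the trace format (time, address, op, value) and the zero-initialization convention are illustrative assumptions.

```python
# A minimal sketch of the sort-based memory consistency check that the paper
# identifies as the bottleneck (Permem avoids this sorting step).
# The trace format (time, address, op, value) is an illustrative assumption.

def check_memory_consistency(trace):
    """Sort by (address, time); each read must return the value of the most
    recent write to the same address (memory assumed zero-initialized)."""
    last_addr, last_value = None, 0
    for time, addr, op, value in sorted(trace, key=lambda e: (e[1], e[0])):
        if addr != last_addr:                # entering a new address group
            last_addr, last_value = addr, 0
        if op == 'write':
            last_value = value
        elif value != last_value:            # read inconsistent with history
            return False
    return True

trace = [(0, 7, 'write', 42), (1, 7, 'read', 42), (2, 3, 'read', 0)]
assert check_memory_consistency(trace)
assert not check_memory_consistency([(0, 7, 'write', 1), (1, 7, 'read', 2)])
```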
Miranda Christ, Kevin Choi, Joseph Bonneau
ePrint Report
We propose Cornucopia, a distributed randomness beacon protocol combining accumulators and verifiable delay functions. Cornucopia extends the Unicorn protocol of Lenstra and Wesolowski, utilizing an accumulator to enable efficient verification by each participant that their randomness contribution has been included in the beacon output. The security of this construction reduces to a novel property of accumulators, insertion security. We first show that not all accumulators are insertion-secure. We then prove that common constructions (Merkle trees and RSA accumulators) are naturally insertion-secure. Finally, we give a generic transformation from any universal accumulator (supporting non-membership proofs) to an insertion-secure accumulator, albeit with an efficiency loss proportional to the security parameter.
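
A simplified sketch of the flow described above, assuming a Merkle-tree accumulator: contributions are accumulated, each participant checks a short inclusion proof against the root, and the root seeds the delay function. The iterated hash standing in for the VDF and all sizes are placeholders, not the actual protocol.

```python
# Cornucopia-style flow with a Merkle accumulator (illustrative sketch only).
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def merkle_root_and_proofs(leaves):
    nodes = [H(x) for x in leaves]
    proofs = [[] for _ in leaves]
    pos = list(range(len(leaves)))           # each leaf's index per level
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])          # duplicate last node if odd
        for i in range(len(leaves)):
            proofs[i].append(nodes[pos[i] ^ 1])   # record sibling hash
            pos[i] //= 2
        nodes = [H(nodes[j] + nodes[j + 1]) for j in range(0, len(nodes), 2)]
    return nodes[0], proofs

def verify_inclusion(leaf, proof, root, index):
    node = H(leaf)
    for sib in proof:
        node = H(node + sib) if index % 2 == 0 else H(sib + node)
        index //= 2
    return node == root

contributions = [b'alice-entropy', b'bob-entropy', b'carol-entropy']
root, proofs = merkle_root_and_proofs(contributions)
assert all(verify_inclusion(c, proofs[i], root, i)
           for i, c in enumerate(contributions))
beacon = root
for _ in range(10000):                       # placeholder for the VDF delay
    beacon = H(beacon)                       # beacon output derived from root
```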
Sourav Das, Ling Ren
ePrint Report
Threshold signature is one of the most important cryptographic primitives in distributed systems. A popular choice of threshold signature scheme is the BLS threshold signature introduced by Boldyreva (PKC'03). Some attractive properties of Boldyreva's threshold signature are that the signatures are unique and short, the signing process is non-interactive, and the verification process is identical to that of non-threshold BLS. These properties have resulted in its practical adoption in several decentralized systems. However, despite its popularity and wide adoption, until recently the Boldyreva scheme had been proven secure only against a static adversary. Very recently, Bacho and Loss (CCS'22) presented the first proof of adaptive security for Boldyreva's threshold signature, but their proof relies on strong and non-standard assumptions such as the hardness of one-more discrete log (OMDL) and the Algebraic Group Model (AGM).

In this paper, we present the first adaptively secure threshold BLS signature scheme, relying on the hardness of DDH and co-CDH in asymmetric pairing groups in the Random Oracle Model (ROM). Our signature scheme also has non-interactive signing, compatibility with non-threshold BLS verification, and practical efficiency, like Boldyreva's scheme. Moreover, to achieve static security, our scheme only needs the hardness of CDH in the ROM, the same as the standard non-threshold BLS signature. These properties make our protocol a suitable candidate for practical adoption, with the added benefit of provable adaptive security. We also present an efficient distributed key generation (DKG) protocol to set up the signing keys for our signature scheme. We implement our scheme in Go and evaluate its signing and aggregation costs.
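
To illustrate the non-interactive signing and aggregation property, here is a toy sketch of BLS-style threshold aggregation via Lagrange interpolation in the exponent, over a small prime-order subgroup of $\mathbb{Z}_p^*$ instead of a pairing group; all parameters are illustrative, the hash-to-group is insecure, and the toy omits the pairing-based verification a real scheme needs.

```python
# Toy t-of-n threshold aggregation in the Boldyreva/BLS style (not secure).
import hashlib, random

p, q, g = 23, 11, 2                      # toy group: order-11 subgroup of Z_23^*
t, n = 3, 5                              # threshold t-of-n

def hash_to_group(msg):                  # toy hash-to-group (exposes the dlog)
    e = int.from_bytes(hashlib.sha256(msg).digest(), 'big') % q
    return pow(g, e if e else 1, p)

coeffs = [random.randrange(q) for _ in range(t)]        # secret poly, sk = coeffs[0]
shares = {i: sum(c * i**k for k, c in enumerate(coeffs)) % q
          for i in range(1, n + 1)}                     # Shamir shares sk_i = f(i)

def lagrange_at_zero(i, signers):
    num = den = 1
    for j in signers:
        if j != i:
            num, den = num * (-j) % q, den * (i - j) % q
    return num * pow(den, q - 2, q) % q  # q prime: invert via Fermat

msg = b'hello'
h = hash_to_group(msg)
signers = [1, 3, 5]
partials = {i: pow(h, shares[i], p) for i in signers}   # sigma_i = H(m)^{sk_i}
sig = 1
for i in signers:
    sig = sig * pow(partials[i], lagrange_at_zero(i, signers), p) % p
assert sig == pow(h, coeffs[0], p)       # equals H(m)^{sk}: aggregation works
```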
Xiuquan Ding, Giulio Malavolta, Tianwei Zhang
ePrint Report
Private information retrieval (PIR) allows a client to read data from a server without revealing which information they are interested in. A PIR is doubly efficient if the server runtime is, after a one-time pre-processing, sublinear in the database size. A recent breakthrough result from Lin, Mook, and Wichs [STOC’23] proposed the first doubly efficient PIR with (online) server computation poly-logarithmic in the size of the database, assuming the hardness of the standard Ring-LWE problem.

In this work, we consider the problem of doubly efficient batched PIR (DEBPIR), where the client wishes to download multiple entries. This problem arises naturally in many practical applications of PIR, or when the database contains large entries. Our main result is a construction of DEBPIR where the amortized communication and server computation overhead is $\tilde{O}(1)$, from the Ring-LWE problem. This represents an exponential improvement compared with known constructions, and it is optimal up to poly-logarithmic factors in the security parameter. Interestingly, the server’s online operations are entirely combinatorial and all algebraic computations are done in the pre-processing or delegated to the client.
Vasily Mikhalev, Nils Kopal, Bernhard Esslinger
ePrint Report
In the rapidly advancing domain of artificial intelligence, ChatGPT, powered by the GPT-4 model, has emerged as a state-of-the-art interactive agent, exhibiting substantial capabilities across various domains. This paper aims to assess the efficacy of GPT-4 in addressing and solving problems found within cryptographic examinations. We devised a multi-faceted methodology, presenting the model with a series of cryptographic questions of varying complexities derived from real academic examinations. Our evaluation encompasses both classical and modern cryptographic challenges, focusing on the model's ability to understand, interpret, and generate correct solutions while discerning its limitations. The model was challenged with a spectrum of cryptographic tasks, earning 201 out of 208 points by solving fundamental queries inspired by an oral exam, 80.5 out of 90 points on a written Crypto 1 exam, and 287 out of 385 points on advanced exercises from the Crypto 2 course. The results demonstrate that while GPT-4 shows significant promise in grasping fundamental cryptographic concepts and techniques, certain intricate problems necessitate domain-specific knowledge that may sometimes lie beyond the model's general training. Insights from this study can provide educators, researchers, and examiners with a deeper understanding of how cutting-edge AI models can be both an asset and a potential concern in academic settings related to cryptology. To enhance the clarity and coherence of our work, we utilized ChatGPT-4 to help us in formulating sentences in this paper.
Daniel Lammers, Amir Moradi, Nicolai Müller, Aein Rezaei Shahmirzadi
ePrint Report
The application of masking, widely regarded as the most robust and reliable countermeasure against Side-Channel Analysis (SCA) attacks, has been the subject of extensive research across a range of cryptographic algorithms, especially AES. However, the implementation cost of such a countermeasure can be significant, and in some scenarios even infeasible, due to considerations such as area and latency overheads, as well as the need for fresh randomness to ensure the security properties of the resulting design. Most of these overheads originate from the ability to maintain security in the presence of physical defaults such as glitches and transitions. Among several schemes that trade off such overheads, RAMBAM, presented at CHES 2022, offers ultra-low latency in terms of the number of clock cycles. It is dedicated to AES and utilizes redundant representations of the finite-field elements to enhance protection against both passive and active physical attacks.

In this paper, we take a deeper look at this technique and provide a comprehensive analysis. The original authors reported that the number of traces required to mount a successful attack increases exponentially with the size of the redundant representation. We, however, examine their scheme from a theoretical point of view. More specifically, we investigate the relationship between RAMBAM and the well-established Boolean masking and, based on this, prove the insecurity of RAMBAM. Through examples and use cases, we assess the leakage of the scheme in practice and use verification tools to demonstrate that RAMBAM does not necessarily offer adequate protection against SCA attacks, either in theory or in practice. Confirmed by real-world experiments, we additionally highlight that -- if no dedicated facility is incorporated -- RAMBAM designs are susceptible to fault-injection attacks, despite providing some degree of protection against a sophisticated attack vector, i.e., SIFA.
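
For readers unfamiliar with the baseline, a minimal sketch of the first-order Boolean masking that the analysis relates RAMBAM to: a secret byte is split into two shares whose XOR is the secret, so neither share alone carries information, and linear (XOR) layers act share-wise. The byte-level operations here are illustrative.

```python
# First-order Boolean masking of a byte (illustrative toy).
import secrets

def mask(x):
    r = secrets.randbits(8)              # fresh uniform mask
    return (x ^ r, r)                    # shares: (x XOR r, r)

def masked_xor_const(shares, c):
    s0, s1 = shares
    return (s0 ^ c, s1)                  # linear ops apply to a single share

x = 0xA7
shares = mask(x)
shares = masked_xor_const(shares, 0x3C)
assert shares[0] ^ shares[1] == x ^ 0x3C # unmask: recombine the two shares
```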
Xiao Sui, Sisi Duan
ePrint Report
Byzantine atomic broadcast (ABC) is at the heart of permissioned blockchains and various multi-party computation protocols. We resolve a long-standing open problem in ABC, presenting the first information-theoretic (IT) and signature-free asynchronous ABC protocol that achieves optimal $O(n^2)$ messages and $O(1)$ expected time. Our ABC protocol adopts a new design, relying on a reduction from---perhaps surprisingly---a somewhat neglected primitive called multivalued Byzantine agreement (MBA).
Carsten Baum, Nikolas Melissaris, Rahul Rachuri, Peter Scholl
ePrint Report
Cheater identification in secure multi-party computation (MPC) allows the honest parties to agree upon the identity of a cheating party, in case the protocol aborts. In the context of a dishonest majority, this becomes especially critical, as it serves to thwart denial-of-service attacks and mitigate known impossibility results on ensuring fairness and guaranteed output delivery.

In this work, we present a new, lightweight approach to achieving identifiable abort in dishonest majority MPC. We avoid all of the heavy machinery used in previous works, instead relying on a careful combination of lightweight detection mechanisms and techniques from state-of-the-art protocols secure with (non-identifiable) abort.

At the core of our construction is a homomorphic, multi-receiver commitment scheme secure with identifiable abort. This commitment scheme can be constructed from cheap vector oblivious linear evaluation protocols based on learning parity with noise. To support cheater identification, we design a general compilation technique, similar to a compiler of Ishai et al. (Crypto 2014), but avoid its requirement for adaptive security of the underlying protocol.

Instead, we rely on a different (and seemingly easier to achieve) property we call online extractability, which may be of independent interest. Our MPC protocol can be viewed as a version of the BDOZ MPC scheme (Bendlin et al., Eurocrypt 2011) based on pairwise information-theoretic MACs, enhanced to support cheater identification and a highly efficient preprocessing phase, essentially as efficient as the non-identifiable protocol of Le Mans (Rachuri & Scholl, Crypto 2022).
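
A small sketch of the pairwise information-theoretic MACs (in the BDOZ style) mentioned above, assuming a prime field and a global MAC key held by the receiver; the field size and API are illustrative, not the paper's.

```python
# BDOZ-style pairwise information-theoretic MAC (illustrative parameters).
import random

P = (1 << 61) - 1                        # Mersenne prime field, illustrative size
alpha = random.randrange(1, P)           # receiver B's global MAC key

def authenticate(x):
    beta = random.randrange(P)           # fresh per-value key, held by B
    tag = (alpha * x + beta) % P         # tag, held by A alongside x
    return tag, beta

x, y = 1234567, 42
tag_x, beta_x = authenticate(x)
tag_y, beta_y = authenticate(y)

# The MACs are linearly homomorphic, so additions need no interaction:
tag_sum, beta_sum = (tag_x + tag_y) % P, (beta_x + beta_y) % P
assert (alpha * ((x + y) % P) + beta_sum) % P == tag_sum

# A corrupted A opening x' != x passes B's check only by guessing alpha,
# which succeeds with probability 1/P (information-theoretic security).
```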
Alessandro Budroni, Erik Mårtensson
ePrint Report
In post-quantum cryptography, Learning With Errors (LWE) is one of the dominant underlying mathematical problems. The dual attack is one of the main strategies for solving the LWE problem, and it has recently garnered significant attention within the research community. A series of studies initially suggested that it might be more efficient than the other main strategy, the primal attack. A subsequent work by Ducas and Pulles (Crypto’23) then raised doubts about the estimated complexity of such an approach. The dual attack consists of a reduction part and a distinguishing part. When estimating the cost of the distinguishing part, one has to estimate the expected cost of enumerating over a certain number of positions of the secret key.

Our contribution is a polynomial-time approach for calculating the expected complexity of such an enumeration procedure. This allows us to decrease the estimated cost of this procedure and, hence, of the whole attack, both classically and quantumly. In addition, we explore different enumeration strategies to achieve further improvements. Our work is independent of the questions raised by Ducas and Pulles, which do not concern the estimation of the enumeration procedure in the dual attack. As our method of calculating the expected cost of enumeration is fairly general, it might be of independent interest in other areas of cryptanalysis, or even in other research areas.
Sudhanshu Sekhar Tripathy, Bichitrananda Behera
ePrint Report
The escalation of safety hazards and the hijacking of digital networks are among the most perilous challenges that must be addressed today. Numerous security procedures have been set up to track and recognize illicit activity on the network infrastructure, and intrusion detection systems (IDSs) are the best way to resist and recognize intrusions on internet connections and digital technologies. To classify network traffic as normal or anomalous, Machine Learning (ML) classifiers are increasingly utilized; an IDS with machine learning increases the accuracy with which security attacks are detected. This paper focuses on the analysis of IDSs using ML techniques. IDSs utilizing ML techniques are efficient and precise at identifying network assaults; in data with large dimensional spaces, however, their efficacy degrades. Correspondingly, it is essential to apply a feasible feature-removal technique capable of discarding characteristics that have little effect on the classification process. In this paper, we analyze the KDD CUP-'99' intrusion detection dataset used for training and validating ML models. We then implement ML classifiers such as Logistic Regression, Decision Tree, K-Nearest Neighbour, Naïve Bayes, Bernoulli Naïve Bayes, Multinomial Naïve Bayes, XG-Boost, Ada-Boost, Random Forest, SVM, Rocchio, Ridge, Passive-Aggressive, and ANN together with Perceptron (PPN); the optimal classifiers are determined by comparing the results of Stochastic Gradient Descent and back-propagation neural networks for IDS. Conventional classification metrics, such as accuracy, precision, recall, and the F1-measure, are used to evaluate the performance of the ML classification algorithms.
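
A condensed, hypothetical sketch of such an evaluation pipeline with scikit-learn, covering a subset of the listed classifiers; the file path, column layout, and label convention are assumptions, not the paper's exact setup.

```python
# KDD CUP-99 classification sketch: load, encode, train, score (illustrative).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

df = pd.read_csv('kddcup99.csv')                 # hypothetical local copy
for col in df.columns[:-1]:                      # encode categorical features
    if df[col].dtype == object:
        df[col] = LabelEncoder().fit_transform(df[col])
X = df.iloc[:, :-1]
y = (df.iloc[:, -1] != 'normal.').astype(int)    # 0 = normal, 1 = anomalous
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    'LogisticRegression': LogisticRegression(max_iter=1000),
    'DecisionTree': DecisionTreeClassifier(),
    'KNN': KNeighborsClassifier(),
    'RandomForest': RandomForestClassifier(n_estimators=100),
}
for name, clf in models.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name, accuracy_score(y_te, pred), precision_score(y_te, pred),
          recall_score(y_te, pred), f1_score(y_te, pred))
```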

10 October 2023

Journal of Cryptology
The Journal of Cryptology will have a Topical Collection “Modern Zero-Knowledge Protocols”.

The call for papers is available at https://iacr.org/jofc/TopicalCollection-mzkp.html

09 October 2023

Olivier Bronchain, Melissa Azouaoui, Mohamed ElGhamrawy, Joost Renes, Tobias Schneider
ePrint Report
We present a set of physical attacks against CRYSTALS-Dilithium that accumulate noisy knowledge on secret keys over multiple signatures, finally leading to a full key-recovery attack. The methodology is composed of two steps. The first step consists of observing or inserting a bias in the posterior distribution of sensitive variables. The second step consists of an information-processing phase based on belief propagation, which allows that bias to be exploited effectively. The proposed concrete attacks rely on side-channel information, fault injection, or a combination of the two. Interestingly, the adversary benefits from knowledge of the released signature but does not depend on it. We show that combining a physical attack with the binary knowledge of acceptance or rejection of a signature also leads to exploitable information on the secret key. Finally, we demonstrate that this approach is also effective against shuffled implementations of CRYSTALS-Dilithium.
Jung Hee Cheon, Hyeongmin Choe, Saebyul Jung, Duhyeong Kim, Dah Hoon Lee, Jai Hyun Park
ePrint Report
Reducing the size of large dimensional data is a critical task in machine learning (ML) that often involves using principal component analysis (PCA). In privacy-preserving ML, data confidentiality is of utmost importance, and reducing data size is a crucial way to cut overall costs.

This work focuses on minimizing the number of normalization processes in the PCA algorithm, normalization being a costly procedure in encrypted PCA. By modifying Krasulina's algorithm, we eliminate all non-polynomial operations except for a single delayed normalization at the end.

Our PCA algorithm demonstrated similar performance to conventional PCA algorithms in face recognition applications. We also implemented it using the CKKS (Cheon-Kim-Kim-Song) homomorphic encryption scheme and obtained the first 6 principal components of a 128$\times$128 real matrix in 7.85 minutes using 8 threads.
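
A plain-text toy of Krasulina-style iteration for the top principal component follows; the division by $\|w\|^2$ marks the kind of non-polynomial step the paper's encrypted variant eliminates, deferring everything to a single final normalization (a modification this toy does not reproduce exactly).

```python
# Krasulina-style stochastic iteration for the top principal component (toy).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8)) * np.array([5, 3, 1, 1, 1, 1, 1, 1])  # toy data
X -= X.mean(axis=0)

w = rng.normal(size=8)
for t, x in enumerate(X):
    eta = 0.01 / (1 + t / 100)            # decaying step size
    xw = x @ w
    w += eta * (xw * x - (xw * xw / (w @ w)) * w)   # Krasulina update
w /= np.linalg.norm(w)                    # single, delayed normalization

_, vecs = np.linalg.eigh(np.cov(X.T))     # reference: exact top eigenvector
print('alignment with top eigenvector:', abs(w @ vecs[:, -1]))
```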
Amit Jana, Mostafizar Rahman, Dhiman Saha, Goutam Paul
ePrint Report
The Boomerang attack was one of the first attempts to visualize a cipher ($E$) as a composition of two sub-ciphers ($E_0\circ E_1$) in order to devise and exploit two high-probability (say $p,q$) shorter trails, instead of relying on a single low-probability (say $s$) longer trail, for differential cryptanalysis. The attack generally works whenever $p^2 \cdot q^2 > s$. However, it was later succeeded by the so-called ``sandwich attack'', which splits the cipher into three parts $E'_0\circ E_m \circ E'_1$, adding an additional middle layer ($E_m$), with a distinguishing probability of $p^2\cdot r\cdot q^2$. The sandwich attack is primarily the generalization of a body of research in this direction that investigates what is referred to as the switching activity and captures the dependencies and potential incompatibilities of the layers that the middle layer separates.

This work revisits the philosophy of the sandwich attack over multiple rounds for NLFSR-based block ciphers and introduces a new method to find high-probability boomerang distinguishers. The approach formalizes boomerang attacks using only ladder and AND switches. The cipher is treated as $E = E_m \circ E_1$, a specialized form of the sandwich attack which we call the ``open-sandwich attack''. The distinguishing probability for this attack configuration is $r \cdot q^2$.

Using this innovative approach, the study successfully identifies a deterministic boomerang distinguisher for the keyed permutation of the TinyJambu cipher over 320 rounds. Additionally, a 640-round boomerang with a probability of $2^{-22}$ is presented with a 95% success rate. In the related-key setting, we unveil full-round boomerangs with probabilities of $2^{-19}$, $2^{-18}$, and $2^{-12}$ for all three variants, demonstrating a 99% success rate.

Similarly, for Katan-32, a more effective related-key boomerang spanning 140 rounds with a probability of $2^{-15}$ is uncovered with a 70% success rate. Further, in the single-key setting, an 84-round boomerang with probability $2^{-30}$ is found with a success rate of 60%. This research deepens the understanding of boomerang attacks, enhancing the toolkit for cryptanalysts to develop efficient and impactful attacks on NLFSR-based block ciphers.
Yu Dai, Fangguo Zhang, Chang-an Zhao
ePrint Report
Pairing-friendly curves with odd prime embedding degrees at the 128-bit security level, such as BW13-310 and BW19-286, have sparked interest in the field of public-key cryptography owing to the small sizes of their prime fields. However, compared to mainstream pairing-friendly curves at the same security level, i.e., BN446 and BLS12-446, the performance of pairing computations on BW13-310 and BW19-286 is usually considered inefficient. In this paper we investigate high-performance software implementations of pairing computation on BW13-310 and the corresponding building blocks used in pairing-based protocols, including hashing, group exponentiations and membership testings. Firstly, we propose efficient explicit formulas for pairing computation on this curve. Moreover, we also exploit state-of-the-art techniques to implement hashing in G1 and G2, group exponentiations and membership testings. In particular, for exponentiations in G2 and GT, we present new optimizations to speed up computational efficiency. Our implementation results on a 64-bit processor show that the gap in the performance of pairing computation between BW13-310 and BN446 (resp. BLS12-446) is only up to 4.9% (resp. 26%). More importantly, compared to BN446 and BLS12-446, BW13-310 is about 109.1%−227.3%, 100%−192.6%, 24.5%−108.5% and 68.2%−145.5% faster in terms of hashing to G1, exponentiations in G1 and GT, and membership testing for GT, respectively. These results reveal that BW13-310 would be an interesting candidate in pairing-based cryptographic protocols.
Muhammad Asfand Hafeez, Wai-Kong Lee, Angshuman Karmakar, Seong Oun Hwang
ePrint Report
Recently proposed lattice-based cryptography algorithms can be used to protect IoT communication against the threat from quantum computers, but they are computationally heavy. In particular, polynomial multiplication is one of the most time-consuming operations in lattice-based cryptography. To achieve efficient implementation, the Number Theoretic Transform (NTT) algorithm is an ideal choice, but it places certain restrictions on the parameters, so not all lattice-based schemes can employ it directly. Hence, alternative techniques have been proposed to accelerate polynomial multiplication in lattice-based schemes that cannot utilize the NTT directly. In this paper, we propose a parallel Toeplitz matrix-vector product (TMVP) approach to accelerate polynomial multiplication in PQC algorithms and implement it on a graphics processing unit (GPU). This is the first time a parallel TMVP version has been proposed and evaluated on different GPU cores (i.e., CUDA cores and Tensor cores). The effectiveness of the proposed solution is validated on the Saber (a NIST post-quantum standardization finalist) and Sable (an improved version of Saber) schemes. Experimental results show that TMVP-based polynomial convolution using CUDA cores fails to deliver a significant enhancement over the schoolbook CUDA-core method already proposed by Hafeez et al. 2023. However, when the TMVP technique is applied to Tensor cores, it outperforms state-of-the-art implementations: the proposed Tensor-core approach is up to 1.21× faster than the schoolbook Tensor-core method and up to 3.63× faster than the dot-product-instructions method (Lee et al. 2022). The proposed TMVP Tensor-core implementation is also 13.76× faster than the TMVP CUDA-core method.
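
To show what a TMVP formulation of polynomial multiplication looks like, here is a toy Python/NumPy sketch for $\mathbb{Z}_q[x]/(x^n+1)$, checked against schoolbook multiplication; the parameters are illustrative (not Saber's), and the GPU mapping is of course not reproduced.

```python
# Negacyclic polynomial multiplication as a Toeplitz matrix-vector product.
import numpy as np

n, q = 8, 8192                                        # toy ring parameters

def negacyclic_matrix(a):
    A = np.zeros((n, n), dtype=np.int64)
    for i in range(n):
        for j in range(n):
            k = i - j
            A[i, j] = a[k] if k >= 0 else -a[k + n]   # wrap-around with sign flip
    return A % q

def poly_mul_tmvp(a, b):
    return (negacyclic_matrix(a) @ b) % q             # one matrix-vector product

def poly_mul_schoolbook(a, b):
    c = np.zeros(2 * n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            c[i + j] += a[i] * b[j]
    return (c[:n] - c[n:]) % q                        # reduce mod x^n + 1

rng = np.random.default_rng(1)
a, b = rng.integers(0, q, n), rng.integers(0, q, n)
assert np.array_equal(poly_mul_tmvp(a, b), poly_mul_schoolbook(a, b))
```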
Zhengjun Cao, Lihua Liu
ePrint Report
We show that the scheme [Neurocomputing, 2022 (500), 741-749] fails to preserve anonymity, contrary to what is claimed. The scheme neglects a basic requirement of the bit-wise XOR operator yet attempts to encrypt data with it. This negligence results in some trivial equalities, and an adversary can retrieve the user's identity from a single string captured on the open channel.
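
As a generic illustration of the pitfall such XOR-based designs run into (a toy, not the attacked scheme itself): reusing a mask under bit-wise XOR lets an eavesdropper cancel it, leaving exactly the kind of trivial equality described above.

```python
# Mask reuse under XOR cancels out, exposing the masked identity (toy).
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

identity, nonce = b'alice@host', b'0123456789'
mask = b'secret-key'                     # reused pad
c1 = xor(identity, mask)                 # captured string 1
c2 = xor(nonce, mask)                    # captured string 2
# the mask cancels: c1 XOR c2 == identity XOR nonce
assert xor(c1, c2) == xor(identity, nonce)
# if the nonce travels in the clear, the identity falls out directly:
assert xor(xor(c1, c2), nonce) == identity
```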
Dimitrios Sikeridis, David Ott, Sean Huntley, Shivali Sharma, Vasantha Kumar Dhanasekar, Megha Bansal, Akhilesh Kumar, Anwitha U N, Daniel Beveridge, Sairam Veeraswamy
ePrint Report
Given the importance of cryptography to modern security and privacy solutions, it is surprising how little attention has been given to the problem of \textit{cryptographic agility}, or frameworks enabling the transition from one cryptographic algorithm or implementation to another. In this paper, we argue that traditional notions of cryptographic agility fail to capture the challenges facing modern enterprises that will soon be forced to implement a disruptive migration from today’s public key algorithms (e.g., RSA, ECDH) to quantum-safe alternatives (e.g., CRYSTALS-KYBER). After discussing the challenge of real-world cryptographic transition at scale, we describe our work on enterprise-level cryptographic agility for secure communications based on orchestrated \textit{cryptographic providers}. Our policy-driven approach, prototyped in service mesh, provides a much-needed re-envisioning for cryptographic agility and highlights what’s missing today to enable disruptive cryptographic change at scale.
Vipul Goyal, Giulio Malavolta, Justin Raizes
ePrint Report
Non-malleable cryptography, proposed by Dolev, Dwork, and Naor (SICOMP '00), has numerous applications in protocol composition. In the context of proofs, it guarantees that an adversary who receives a proof cannot maul it into another valid proof. However, non-malleable cryptography (particularly in the non-interactive setting) suffers from an important limitation: An attacker can always copy the proof and resubmit it to another verifier (or even multiple verifiers). In this work, we prevent even the possibility of copying the proof as it is, by relying on quantum information. We call the resulting primitive unclonable proofs, making progress on a question posed by Aaronson. We also consider the related notion of unclonable commitments. We introduce formal definitions of these primitives that model security in various settings of interest. We also provide a near tight characterization of the conditions under which these primitives are possible, including a rough equivalence between unclonable proofs and public-key quantum money.
Knud Ahrens, Jens Zumbrägel
ePrint Report
We present a verifiable delay function based on isogenies of supersingular elliptic curves, using the Deuring correspondence and the computation of endomorphism rings for the delay. For each input x, a verifiable delay function has a unique output y and takes a predefined time to evaluate, even with parallel computing. Additionally, it generates a proof by which the output can efficiently be verified. In our approach, the input is a path in the 2-isogeny graph and the output is the maximal order isomorphic to the endomorphism ring of the curve at the end of that path. This approach is presumably quantum-secure, does not require a trusted setup or special primes, and the verification is independent of the delay. It works completely within the isogeny setting, and the computation of the proof causes no overhead.
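
The VDF interface described here (unique output, sequential evaluation, efficient verification) can be illustrated with a toy Wesolowski-style repeated-squaring VDF in an RSA group; this is a stand-in for exposition only, not the paper's isogeny-based construction, and the tiny modulus is insecure.

```python
# Toy Wesolowski-style VDF: evaluate is T sequential squarings, verify is fast.
import hashlib

N = 1000003 * 1000033                    # toy modulus; real VDFs use ~2048-bit N
T = 1 << 12                              # number of sequential squarings

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def challenge_prime(x, y):               # Fiat-Shamir: derive prime from (x, y)
    seed = int.from_bytes(hashlib.sha256(f'{x}|{y}'.encode()).digest(), 'big')
    l = seed % (1 << 20) | 1
    while not is_prime(l):
        l += 2
    return l

def evaluate(x):                         # inherently sequential: T squarings
    y = x % N
    for _ in range(T):
        y = y * y % N                    # y = x^(2^T) mod N
    l = challenge_prime(x, y)
    pi = pow(x, (1 << T) // l, N)        # Wesolowski proof
    return y, pi

def verify(x, y, pi):                    # fast: two small exponentiations
    l = challenge_prime(x, y)
    r = pow(2, T, l)                     # 2^T = q*l + r, so x^(2^T) = pi^l * x^r
    return pow(pi, l, N) * pow(x, r, N) % N == y

y, pi = evaluate(5)
assert verify(5, y, pi)
```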