IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
12 October 2023
a16z crypto research, New York, NY, USA
a16z crypto research is a new kind of multidisciplinary lab that bridges the worlds of academic theory and industry practice to advance the science and technology of the next generation of the internet. In addition to fundamental research, we collaborate with portfolio companies to solve hard technical and conceptual problems. Research interns will have the opportunity to learn from the firm’s investment and engineering teams, although this is a research role with no responsibility for investment decisions. We are seeking students with a strong research background and an interest in blockchains and web3 to join the group for the summer. Specific research areas of interest include cryptography, security, distributed computing, economics (both micro and macro), incentives, quantitative finance, political science and governance, and market and mechanism design. This list is not exhaustive and we encourage applicants with different backgrounds who may have unique perspectives on the space to apply.
Closing date for applications:
Contact: Joseph Bonneau
More information: https://a16z.com/about/jobs/?gh_jid=5766443003
11 October 2023
Maria Corte-Real Santos, Jonathan Komada Eriksen, Michael Meyer, Krijn Reijnders
We optimise the verification of the SQIsign signature scheme. By using field extensions in the signing procedure, we are able to significantly increase the amount of available rational $2$-power torsion in verification, which yields a significant speed-up. This, moreover, enables several further speed-ups at the level of curve arithmetic. We show that the synergy between these high-level and low-level improvements makes verification $2.65$ times faster, or up to $4.40$ times faster when using size-speed trade-offs, without degrading the performance of signing.
Siemen Dhooghe, Artemii Ovchinnikov, Dilara Toprakhisar
Fault attacks pose a serious threat to practical implementations of cryptographic algorithms. Statistical Ineffective Fault Attacks (SIFA), which exploit the dependency between the secret data and the fault propagation, overcame many of the known countermeasures. Later, several countermeasures were proposed to tackle this attack using error detection methods. However, the efficiency of these countermeasures, governed in part by the number of error checks, remains a challenge.
In this work, we propose a fault countermeasure, StaTI, based on threshold implementations and linear encoding techniques. The proposed countermeasure protects implementations of cryptographic algorithms against both side-channel and fault adversaries in a non-combined attack setting. We present a new composable notion, stability, to protect a threshold implementation against a formal gate/register-faulting adversary. Stability ensures fault propagation, making a single error check of the output suffice. To illustrate the stability notion, we first provide stable encodings of the XOR and AND gates. Then, we present techniques to encode threshold implementations of S-boxes, and provide stable encodings of some quadratic S-boxes together with their security and performance evaluation. Additionally, we propose general encoding techniques to transform a threshold implementation of any function (e.g., non-injective functions) into a stable one. We then provide an encoding technique for use in symmetric primitives that encodes state elements together, significantly reducing the encoded state size. Finally, we use StaTI to implement a secure Keccak on FPGA and report on its efficiency.
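As a rough intuition for why guaranteed fault propagation lets a single output check suffice, the following toy sketch (a plain duplication code with XOR gates only, not the StaTI encoding from the paper) shows a single injected bit-flip surviving to the final consistency check:

```python
# Toy illustration (NOT the paper's StaTI encoding): each bit is duplicated
# into two redundant copies and a chain of XOR gates is computed copy-wise.
# Because XOR is linear, a single bit-flip fault in either copy propagates
# unchanged to the output, so one final consistency check suffices.

def encode(bit):
    return [bit, bit]                        # duplication code: two copies

def xor_gate(x, y):
    return [x[0] ^ y[0], x[1] ^ y[1]]        # copy-wise XOR, copies never interact

def check(z):
    return z[0] == z[1]                      # single error check at the output

bits = [1, 0, 1, 1, 0]
state = encode(0)
for b in bits:
    state = xor_gate(state, encode(b))
assert check(state)                          # fault-free run passes the check

state = encode(0)
for i, b in enumerate(bits):
    state = xor_gate(state, encode(b))
    if i == 2:
        state[0] ^= 1                        # inject a single fault in copy 0
assert not check(state)                      # the fault reaches the output check
```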
Yanbin Xu, Yonglin Hao, Mingxing Wang
At ASIACRYPT 2019, Zhang proposed a near collision attack on A5/1, claiming to recover the 64-bit A5/1 state with a time complexity of around $2^{32}$ cipher ticks and negligible memory requirements. Soon after its proposal, Zhang's near collision attack was severely challenged by Derbez et al., who claimed that Zhang's attack cannot have a time complexity lower than Golic's memoryless guess-and-determine attack dating back to EUROCRYPT 1997. In this paper, we study both the guess-and-determine and the near collision attacks for recovering A5/1 states with negligible memory complexities. Firstly, we propose a new guessing technique called the \emph{move guessing technique} that can construct linear equation filters in a more efficient manner. Such a technique can be applied to both guess-and-determine and collision attacks for efficiency improvements. Secondly, we take the filtering strength of the linear equation systems into account for complexity analysis. Such filtering strengths are evaluated with practical experiments, making the complexities more convincing. Based on these new techniques, we are able to give two new guess-and-determine attacks on A5/1: the first attack recovers the internal state $\vec{s}^0$ with time complexity $2^{43.92}$; the second one recovers a different state $\vec{s}^1$ with complexity $2^{43.25}$. We also revisit Golic's guess-and-determine attack and Zhang's near collision attack. According to our detailed analysis, the complexity of Golic's $\vec{s}^1$ recovery attack is no lower than $2^{46.04}$, higher than the previously believed $2^{43}$. On the other hand, Zhang's near collision attack recovers $\vec{s}^0$ with time complexity $2^{53.19}$; this complexity can be further lowered to $2^{50.78}$ with our move guessing technique.
Srivatsan Sridhar, Dionysis Zindros, David Tse
The security of blockchain protocols is a combination of two properties: safety and liveness. It is well known that no blockchain protocol can provide both to sleepy (intermittently online) clients under adversarial majority. However, safety is more critical in that a single safety violation can cause users to lose money. At the same time, liveness must not be lost forever. We show that, in a synchronous network, it is possible to maintain safety for all clients even during adversarial majority, and recover liveness after honest majority is restored. Our solution takes the form of a recovery gadget that can be applied to any protocol with certificates (such as HotStuff, Streamlet, Tendermint, and their variants).
Yuncong Zhang, Shi-Feng Sun, Ren Zhang, Dawu Gu
Zero-Knowledge Virtual Machines (ZKVMs) have gained traction in recent years due to their potential applications in a variety of areas, particularly blockchain ecosystems. Despite tremendous progress on ZKVMs in the industry, no formal definitions or security proofs have been established in the literature. Due to this lack of formalization, existing protocols exhibit significant discrepancies in terms of problem definitions and performance metrics, making it difficult to analyze and compare these advancements, or to trust the security of the increasingly complex ZKVM implementations.
In this work, we focus on random-access memory, an influential and expensive component of ZKVMs. Specifically, we investigate the state-of-the-art protocols for validating the correct functioning of memory, which we refer to as the \emph{memory consistency checks}. Isolating these checks from the rest of the system allows us to formalize their definition and security notion. Furthermore, we summarize the state-of-the-art constructions using the Polynomial IOP model and formally prove their security. Observing that the bottleneck of existing designs lies in sorting the entire memory trace, we break away from this paradigm and propose a novel memory consistency check, dubbed $\mathsf{Permem}$. $\mathsf{Permem}$ bypasses this bottleneck by introducing a technique called the address cycle method, which requires fewer building blocks and---after instantiating the building blocks with state-of-the-art constructions---fewer online polynomial oracles and evaluation queries. In addition, we propose $\mathsf{gcq}$, a new construction for the lookup argument---a key building block of the memory consistency check, which costs fewer online polynomial oracles than the state-of-the-art construction $\mathsf{cq}$.
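For context on the sorting bottleneck mentioned above, here is a minimal plaintext sketch of the classical paradigm that Permem moves away from: sort the memory trace by address and time, then check that every read returns the most recent write. This is an illustration only, not a Polynomial IOP and not the paper's construction.

```python
# Plaintext sketch of the classical sorting-based memory consistency check
# (the paradigm whose trace-sorting step Permem avoids); not a Polynomial IOP.
# A trace entry is (time, op, addr, value) with op in {"read", "write"}.

def consistent(trace, init=0):
    # Sorting by (address, time) makes accesses to the same address contiguous
    # and chronologically ordered, so a single linear scan can check all reads.
    last_addr, last_val = None, init
    for time, op, addr, value in sorted(trace, key=lambda e: (e[2], e[0])):
        if addr != last_addr:
            last_addr, last_val = addr, init     # fresh address starts at init
        if op == "write":
            last_val = value
        elif value != last_val:                  # a read must return the last write
            return False
    return True

trace = [(0, "write", 5, 7), (1, "read", 5, 7), (2, "write", 5, 9), (3, "read", 5, 9)]
assert consistent(trace)
assert not consistent(trace + [(4, "read", 5, 7)])   # stale read is rejected
```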
Miranda Christ, Kevin Choi, Joseph Bonneau
We propose Cornucopia, a distributed randomness beacon protocol combining accumulators and verifiable delay functions. Cornucopia extends the Unicorn protocol of Lenstra and Wesolowski, utilizing an accumulator to enable efficient verification by each participant that their randomness contribution has been included in the beacon output.
The security of this construction reduces to a novel property of accumulators, insertion security. We first show that not all accumulators are insertion-secure. We then prove that common constructions (Merkle trees and RSA accumulators) are naturally insertion-secure. Finally, we give a generic transformation from any universal accumulator (supporting non-membership proofs) to an insertion-secure accumulator, albeit with an efficiency loss proportional to the security parameter.
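To illustrate the accumulator-based inclusion check at the heart of such a beacon, here is a minimal Merkle-tree sketch in which each participant can verify that their contribution is included in the committed input; the VDF evaluation and the insertion-security argument are omitted, and the code is illustrative rather than the paper's protocol.

```python
# Minimal Merkle-tree sketch (illustrative only): the coordinator accumulates
# all contributions into a root, the VDF would then be evaluated on that root,
# and each participant checks a logarithmic-size inclusion proof for their own
# contribution. The VDF itself is omitted here.
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    level, proof = [H(leaf) for leaf in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # (sibling hash, am I the right child?)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(root, leaf, proof):
    node = H(leaf)
    for sibling, is_right in proof:
        node = H(sibling + node) if is_right else H(node + sibling)
    return node == root

contributions = [b"alice-randomness", b"bob-randomness", b"carol-randomness"]
root = merkle_root(contributions)
proof = merkle_proof(contributions, 1)
assert verify_inclusion(root, contributions[1], proof)
# beacon output would be VDF(root); omitted in this sketch
```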
Sourav Das, Ling Ren
Threshold signature is one of the most important cryptographic primitives in distributed systems. A popular choice of threshold signature scheme is the BLS threshold signature introduced by Boldyreva (PKC'03). Some attractive properties of Boldyreva's threshold signature are that the signatures are unique and short, the signing process is non-interactive, and the verification process is identical to that of non-threshold BLS. These properties have resulted in its practical adoption in several decentralized systems. However, despite its popularity and wide adoption, up until recently, the Boldyreva scheme has been proven secure only against a static adversary. Very recently, Bacho and Loss (CCS'22) presented the first proof of adaptive security for Boldyreva's threshold signature, but they have to rely on strong and non-standard assumptions such as the hardness of one-more discrete log (OMDL) and the Algebraic Group Model~(AGM).
In this paper, we present the first adaptively secure threshold BLS signature scheme that relies on the hardness of DDH and co-CDH in asymmetric pairing groups in the Random Oracle Model (ROM). Our signature scheme also has non-interactive signing, compatibility with non-threshold BLS verification, and practical efficiency like Boldyreva's scheme. Moreover, to achieve static security, our scheme only needs the hardness of CDH in the ROM, which is the same as the standard non-threshold BLS signature. These properties make our protocol a suitable candidate for practical adoption, with the added benefit of provable adaptive security. We also present an efficient distributed key generation (DKG) protocol to set up the signing keys for our signature scheme. We implement our scheme in Go and evaluate its signing and aggregation costs.
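As background on the non-interactive signing and aggregation shared by Boldyreva's scheme and this work, the sketch below shows only the Shamir-sharing and Lagrange-interpolation step, with the pairing group replaced by plain scalar arithmetic modulo a toy prime; a real BLS deployment works in a pairing-friendly elliptic-curve group and verifies with a pairing, which are not modeled here.

```python
# Toy sketch of BLS-style threshold aggregation (illustration only): the group
# is replaced by scalar arithmetic mod q, so a "signature" on message hash h
# is simply s*h mod q. Only Shamir sharing and Lagrange aggregation of partial
# signatures are modeled; hashing to the group and pairing verification are not.
import secrets

q = (1 << 255) - 19                      # toy prime standing in for the group order

def share_key(secret, t, n):
    # Shamir sharing: random degree-(t-1) polynomial f with f(0) = secret.
    coeffs = [secret] + [secrets.randbelow(q) for _ in range(t - 1)]
    return {i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
            for i in range(1, n + 1)}

def partial_sign(share, h):
    return share * h % q                 # non-interactive: each signer acts alone

def lagrange_at_zero(i, signers):
    num, den = 1, 1
    for j in signers:
        if j != i:
            num = num * (-j) % q
            den = den * (i - j) % q
    return num * pow(den, -1, q) % q

def aggregate(partials):
    signers = list(partials)
    return sum(lagrange_at_zero(i, signers) * partials[i] for i in signers) % q

sk = secrets.randbelow(q)
h = secrets.randbelow(q)                 # stands in for hashing the message to the group
shares = share_key(sk, t=3, n=5)
partials = {i: partial_sign(shares[i], h) for i in (1, 3, 5)}   # any 3-of-5 signers
assert aggregate(partials) == sk * h % q  # matches the non-threshold "signature"
```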
Xiuquan Ding, Giulio Malavolta, Tianwei Zhang
Private information retrieval (PIR) allows a client to read data from a server without revealing which information they are interested in. A PIR scheme is doubly efficient if the server runtime is, after a one-time pre-processing, sublinear in the database size. A recent breakthrough result by Lin, Mook, and Wichs [STOC'23] proposed the first doubly efficient PIR with (online) server computation poly-logarithmic in the size of the database, assuming the hardness of the standard Ring-LWE problem.
In this work, we consider the problem of doubly efficient batched PIR (DEBPIR), where the client wishes to download multiple entries. This problem arises naturally in many practical applications of PIR, or when the database contains large entries. Our main result is a construction of DEBPIR where the amortized communication and server computation overhead is $\tilde{O}(1)$, from the Ring-LWE problem. This represents an exponential improvement compared with known constructions, and it is optimal up to poly-logarithmic factors in the security parameter. Interestingly, the server’s online operations are entirely combinatorial and all algebraic computations are done in the pre-processing or delegated to the client.
Vasily Mikhalev, Nils Kopal, Bernhard Esslinger
In the rapidly advancing domain of artificial intelligence, ChatGPT, powered by the GPT-4 model, has emerged as a state-of-the-art interactive agent, exhibiting substantial capabilities across various domains. This paper aims to assess the efficacy of GPT-4 in addressing and solving problems found within cryptographic examinations. We devised a multi-faceted methodology, presenting the model with a series of cryptographic questions of varying complexities derived from real academic examinations. Our evaluation encompasses both classical and modern cryptographic challenges, focusing on the model's ability to understand, interpret, and generate correct solutions while discerning its limitations.
The model was challenged with a spectrum of cryptographic tasks, earning 201 out of 208 points by solving fundamental queries inspired by an oral exam, 80.5 out of 90 points on a written Crypto 1 exam, and 287 out of 385 points on advanced exercises from the Crypto 2 course.
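For readability, the quoted raw scores correspond to roughly the following percentages (simple arithmetic on the figures above):

```python
# Percentages implied by the raw scores quoted above.
scores = {"oral-exam queries": (201, 208),
          "Crypto 1 written exam": (80.5, 90),
          "Crypto 2 exercises": (287, 385)}
for exam, (earned, total) in scores.items():
    print(f"{exam}: {100 * earned / total:.1f}%")
# oral-exam queries: 96.6%, Crypto 1 written exam: 89.4%, Crypto 2 exercises: 74.5%
```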
The results demonstrate that while GPT-4 shows significant promise in grasping fundamental cryptographic concepts and techniques, certain intricate problems necessitate domain-specific knowledge that may sometimes lie beyond the model's general training. Insights from this study can provide educators, researchers, and examiners with a deeper understanding of how cutting-edge AI models can be both an asset and a potential concern in academic settings related to cryptology.
To enhance the clarity and coherence of our work, we used ChatGPT-4 to help us formulate sentences in this paper.
Daniel Lammers, Amir Moradi, Nicolai Müller, Aein Rezaei Shahmirzadi
The application of masking, widely regarded as the most robust and reliable countermeasure against Side-Channel Analysis (SCA) attacks, has been the subject of extensive research across a range of cryptographic algorithms, especially AES. However, the implementation cost associated with applying such a countermeasure can be significant and even in some scenarios infeasible due to considerations such as area and latency overheads, as well as the need for fresh randomness to ensure the security properties of the resulting design. Most of these overheads originate from the ability to maintain security in the presence of physical defaults such as glitches and transitions.
Among several schemes with a trade-off between such overheads, RAMBAM, presented at CHES 2022, offers an ultra-low latency in terms of the number of clock cycles. It is dedicated to the AES and utilizes redundant representations of the finite field elements to enhance protection against both passive and active physical attacks.
In this paper, we take a deeper look at this technique and provide a comprehensive analysis. The original authors reported that the number of traces required to mount a successful attack increases exponentially with the size of the redundant representation. We, however, examine their scheme from a theoretical point of view. More specifically, we investigate the relationship between RAMBAM and the well-established Boolean masking and, based on this, prove the insecurity of RAMBAM. Through examples and use cases, we assess the leakage of the scheme in practice and use verification tools to demonstrate that RAMBAM does not necessarily offer adequate protection against SCA attacks, either in theory or in practice. Confirmed by real-world experiments, we additionally highlight that -- if no dedicated facility is incorporated -- RAMBAM designs are susceptible to fault-injection attacks, despite providing some degree of protection against a sophisticated attack vector, i.e., SIFA.
Xiao Sui, Sisi Duan
Byzantine atomic broadcast (ABC) is at the heart of permissioned blockchains and various multi-party computation protocols. We resolve a long-standing open problem in ABC, presenting the first information-theoretic (IT) and signature-free asynchronous ABC protocol that achieves optimal $O(n^2)$ messages and $O(1)$ expected time. Our ABC protocol adopts a new design, relying on a reduction from---perhaps surprisingly---a somewhat neglected primitive called multivalued Byzantine agreement (MBA).
Carsten Baum, Nikolas Melissaris, Rahul Rachuri, Peter Scholl
Cheater identification in secure multi-party computation (MPC) allows the honest parties to agree upon the identity of a cheating party, in case the protocol aborts.
In the context of a dishonest majority, this becomes especially critical, as it serves to thwart denial-of-service attacks and mitigate known impossibility results on ensuring fairness and guaranteed output delivery.
In this work, we present a new, lightweight approach to achieving identifiable abort in dishonest majority MPC. We avoid all of the heavy machinery used in previous works, instead relying on a careful combination of lightweight detection mechanisms and techniques from state-of-the-art protocols secure with (non-identifiable) abort.
At the core of our construction is a homomorphic, multi-receiver commitment scheme secure with identifiable abort. This commitment scheme can be constructed from cheap vector oblivious linear evaluation protocols based on learning parity with noise. To support cheater identification, we design a general compilation technique, similar to a compiler of Ishai et al. (Crypto 2014), but avoid its requirement for adaptive security of the underlying protocol.
Instead, we rely on a different (and seemingly easier to achieve) property we call online extractability, which may be of independent interest. Our MPC protocol can be viewed as a version of the BDOZ MPC scheme (Bendlin et al., Eurocrypt 2011) based on pairwise information-theoretic MACs, enhanced to support cheater identification and a highly efficient preprocessing phase, essentially as efficient as the non-identifiable protocol of Le Mans (Rachuri & Scholl, Crypto 2022).
Alessandro Budroni, Erik Mårtensson
In post-quantum cryptography, Learning With Errors (LWE) is one of the dominant underlying mathematical problems. The dual attack is one of the main strategies for solving the LWE problem, and it has recently gathered significant attention within the research community. A series of studies initially suggested that it might be more efficient than the other main strategy, the primal attack. Then, a subsequent work by Ducas and Pulles (Crypto’23) raised doubts on the estimated complexity of such an approach. The dual attack consists of a reduction part and a distinguishing part. When estimating the cost of the distinguishing part, one has to estimate the expected cost of enumerating over a certain number of positions of the secret key.
Our contribution consists of giving a polynomial-time approach for calculating the expected complexity of such an enumeration procedure. This allows us to decrease the estimated cost of this procedure and, hence, of the whole attack both classically and quantumly. In addition, we explore different enumeration strategies to achieve some further improvements. Our work is independent from the questions raised by Ducas and Pulles, which do not concern the estimation of the enumeration procedure in the dual attack. As our method of calculating the expected cost of enumeration is fairly general, it might be of independent interest in other areas of cryptanalysis or even in other research areas.
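As a toy illustration of the quantity being estimated (not the paper's polynomial-time method), the expected number of candidates tried when enumerating a single secret-key position in decreasing-probability order is just the expected rank of the true value under an assumed candidate distribution:

```python
# Toy example of the expectation being estimated (not the paper's method):
# when candidates for one secret-key position are enumerated in decreasing
# order of probability, the expected number of trials is the expected rank
# of the true value under the (assumed) candidate distribution.
probs = [0.5, 0.25, 0.15, 0.10]                       # assumed distribution, best guess first
expected_trials = sum((rank + 1) * p for rank, p in enumerate(probs))
print(expected_trials)                                # 1.85 guesses on average here
```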
Sudhanshu Sekhar Tripathy, Bichitrananda Behera
The escalation of threats to safety and the hijacking of digital networks are among the most perilous challenges that must be addressed today. Numerous security procedures have been set up to track and recognize illicit activity on network infrastructure. Intrusion Detection Systems (IDSs) are the best way to resist and recognize intrusions on internet connections and digital technologies. To classify network traffic as normal or anomalous, Machine Learning (ML) classifiers are increasingly utilized. An IDS with machine learning increases the accuracy with which security attacks are detected. This paper focuses on the analysis of IDSs using ML techniques. IDSs utilizing ML techniques are efficient and precise at identifying network attacks. In data with large dimensional spaces, however, the efficacy of these systems degrades. Correspondingly, it is essential to employ a feasible feature-removal technique capable of eliminating characteristics that have little effect on the classification process. In this paper, we analyze the KDD CUP-'99' intrusion detection dataset used for training and validating ML models. Then, we implement ML classifiers such as Logistic Regression, Decision Tree, K-Nearest Neighbour, Naïve Bayes, Bernoulli Naïve Bayes, Multinomial Naïve Bayes, XG-Boost, Ada-Boost, Random Forest, SVM, the Rocchio classifier, Ridge, the Passive-Aggressive classifier, ANN, and Perceptron (PPN); the optimal classifiers are determined by comparing the results of Stochastic Gradient Descent and back-propagation neural networks for IDS. Conventional classification indicators, such as accuracy, precision, recall, and the F1-measure, have been used to evaluate the performance of the ML classification algorithms.
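A minimal sketch of this kind of pipeline, using scikit-learn's built-in loader for the KDD Cup '99 data and a few of the classifiers named above, is given below; the binary normal-vs-attack labeling, default hyperparameters, and simple preprocessing are simplifying assumptions, not the paper's exact setup.

```python
# Minimal sketch of a binary (normal vs. attack) IDS pipeline on KDD Cup '99
# with a few of the classifiers named above; default hyperparameters, simple
# one-hot/scaling preprocessing, and a plain train/test split are assumptions.
from sklearn.datasets import fetch_kddcup99
from sklearn.model_selection import train_test_split
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

data = fetch_kddcup99(percent10=True, as_frame=True)      # downloads on first use
X = data.data
y = (~data.target.astype(str).str.contains("normal")).astype(int)   # 1 = attack

categorical = ["protocol_type", "service", "flag"]
numeric = [c for c in X.columns if c not in categorical]
prep = ColumnTransformer([("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
                          ("num", StandardScaler(), numeric)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("Logistic Regression", LogisticRegression(max_iter=1000)),
                  ("Decision Tree", DecisionTreeClassifier(random_state=0)),
                  ("Random Forest", RandomForestClassifier(random_state=0))]:
    pred = make_pipeline(prep, clf).fit(X_tr, y_tr).predict(X_te)
    print(name, accuracy_score(y_te, pred), precision_score(y_te, pred),
          recall_score(y_te, pred), f1_score(y_te, pred))
```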
10 October 2023
The Journal of Cryptology will have a Topical Collection “Modern Zero-Knowledge Protocols”.
The CFPs are available at the URL: https://iacr.org/jofc/TopicalCollection-mzkp.html
09 October 2023
Olivier Bronchain, Melissa Azouaoui, Mohamed ElGhamrawy, Joost Renes, Tobias Schneider
We present a set of physical attacks against CRYSTALS-Dilithium that accumulate noisy knowledge of secret keys over multiple signatures, finally leading to a full key-recovery attack. The methodology is composed of two steps. The first step consists of observing or inserting a bias in the posterior distribution of sensitive variables. The second step consists of an information-processing phase based on belief propagation, which allows that bias to be exploited effectively. The proposed concrete attacks rely on side-channel information, fault injection, or a combination of the two. Interestingly, the adversary benefits from knowledge of the released signature, but is not dependent on it. We show that the combination of a physical attack with the binary knowledge of acceptance or rejection of a signature also leads to exploitable information on the secret key. Finally, we demonstrate that this approach is also effective against shuffled implementations of CRYSTALS-Dilithium.
Jung Hee Cheon, Hyeongmin Choe, Saebyul Jung, Duhyeong Kim, Dah Hoon Lee, Jai Hyun Park
Reducing the size of high-dimensional data is a critical task in machine learning (ML) that often involves using principal component analysis (PCA). In privacy-preserving ML, data confidentiality is of utmost importance, and reducing data size is a crucial way to cut overall costs.
This work focuses on minimizing the number of normalization processes in the PCA algorithm, which is a costly procedure in encrypted PCA. By modifying Krasulina's algorithm, non-polynomial operations were eliminated, except for a single delayed normalization at the end.
Our PCA algorithm demonstrated similar performance to conventional PCA algorithms in face recognition applications. We also implemented it using the CKKS (Cheon-Kim-Kim-Song) homomorphic encryption scheme and obtained the first 6 principal components of a 128$\times$128 real matrix in 7.85 minutes using 8 threads.
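For reference, the textbook Krasulina update for the top principal component looks as follows in the clear; the division by $\|w\|^2$ is the non-polynomial normalization that the modified, CKKS-friendly variant described above avoids (except for one delayed normalization). This is a plaintext sketch of the unmodified algorithm, not the paper's encrypted implementation.

```python
# Plaintext sketch of the classical Krasulina update for the top principal
# component. The division by ||w||^2 is the non-polynomial step that the
# CKKS-friendly variant avoids (up to one delayed normalization); that
# modified variant and the encrypted evaluation are not shown here.
import numpy as np

rng = np.random.default_rng(0)
scales = np.r_[4.0, np.ones(15)]                 # dominant variance in coordinate 0
X = rng.standard_normal((2000, 16)) * scales     # toy data
X -= X.mean(axis=0)

w = rng.standard_normal(16)
for t, x in enumerate(X, start=1):
    step = 1.0 / (100 + t)
    proj = x @ w
    w = w + step * (proj * x - (proj ** 2 / (w @ w)) * w)   # Krasulina's rule

w /= np.linalg.norm(w)
top = np.linalg.svd(X, full_matrices=False)[2][0]           # reference top direction
print(abs(w @ top))                                          # close to 1 once converged
```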
Amit Jana, Mostafizar Rahman, Dhiman Saha, Goutam Paul
The Boomerang attack was one of the first attempts to visualize a cipher ($E$) as a composition of two sub-ciphers ($E_0\circ E_1$) in order to devise and exploit two high-probability (say $p,q$) shorter trails instead of relying on a single low-probability (say $s$) longer trail for differential cryptanalysis. The attack generally works whenever $p^2 \cdot q^2 > s$. However, it was later succeeded by the so-called ``sandwich attack'', which essentially splits the cipher into three parts $E'_0\circ E_m \circ E'_1$, adding an additional middle layer ($E_m$) with a distinguishing probability of $p^2\cdot r\cdot q^2$. It is primarily a generalization of a body of research in this direction that investigates what is referred to as the switching activity and captures the dependencies and potential incompatibilities of the layers that the middle layer separates.
This work revisits the philosophy of the sandwich attack over multiple rounds for NLFSR-based block ciphers and introduces a new method to find high-probability boomerang distinguishers. The approach formalizes boomerang attacks using only ladder and AND switches. The cipher is treated as $E = E_m \circ E_1$, a specialized form of the sandwich attack which we call the ``open-sandwich attack''. The distinguishing probability for this attack configuration is $r \cdot q^2$.
Using this approach, the study identifies a deterministic boomerang distinguisher for the keyed permutation of the TinyJambu cipher over 320 rounds. Additionally, a 640-round boomerang with a probability of $2^{-22}$ is presented with a 95% success rate. In the related-key setting, we unveil full-round boomerangs with probabilities of $2^{-19}$, $2^{-18}$, and $2^{-12}$ for all three variants, demonstrating a 99% success rate.
Similarly, for Katan-32, a more effective related-key boomerang spanning 140 rounds with a probability of $2^{-15}$ is uncovered with a 70% success rate. Further, in the single-key setting, an 84-round boomerang with probability $2^{-30}$ is found with a success rate of 60%. This research deepens the understanding of boomerang attacks, enhancing the toolkit for cryptanalysts to develop efficient and impactful attacks on NLFSR-based block ciphers.
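As a rough sanity check on how such success rates relate to data requirements, under a standard independent-trials heuristic the number of quartets $N$ needed for success probability $s$ with distinguisher probability $p$ satisfies $1-(1-p)^N \approx 1-e^{-Np} = s$, i.e., $N \approx \ln(1/(1-s))/p$. The figures below are back-of-the-envelope estimates under that model, not the paper's reported experiment sizes:

```python
# Back-of-the-envelope data complexity implied by a distinguisher probability p
# and a target success rate s, under an independent-trials model:
# N ~= ln(1/(1-s)) / p quartets. Illustrative only, not the paper's experiments.
from math import log, log2

for p_log2, s in [(-22, 0.95), (-15, 0.70), (-30, 0.60)]:
    N = log(1 / (1 - s)) / (2 ** p_log2)
    print(f"p = 2^{p_log2}, success {s:.0%}: roughly 2^{log2(N):.1f} quartets")
```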
Yu Dai, Fangguo Zhang, Chang-an Zhao
Pairing-friendly curves with odd prime embedding degrees at the 128-bit security level, such as BW13-310 and BW19-286, have sparked interest in public-key cryptography because of the small sizes of their prime fields. However, compared to mainstream pairing-friendly curves at the same security level, i.e., BN446 and BLS12-446, the performance of pairing computations on BW13-310 and BW19-286 is usually considered inefficient. In this paper we investigate high-performance software implementations of pairing computation on BW13-310 and the corresponding building blocks used in pairing-based protocols, including hashing, group exponentiations, and membership testings. Firstly, we propose efficient explicit formulas for pairing computation on this curve. Moreover, we also exploit state-of-the-art techniques to implement hashing in $\mathbb{G}_1$ and $\mathbb{G}_2$, group exponentiations, and membership testings. In particular, for exponentiations in $\mathbb{G}_2$ and $\mathbb{G}_T$, we present new optimizations to speed up computational efficiency. Our implementation results on a 64-bit processor show that the gap in the performance of pairing computation between BW13-310 and BN446 (resp. BLS12-446) is only up to 4.9% (resp. 26%). More importantly, compared to BN446 and BLS12-446, BW13-310 is about 109.1%-227.3%, 100%-192.6%, 24.5%-108.5% and 68.2%-145.5% faster in terms of hashing to $\mathbb{G}_1$, exponentiations in $\mathbb{G}_1$ and $\mathbb{G}_T$, and membership testing for $\mathbb{G}_T$, respectively. These results reveal that BW13-310 would be an interesting candidate for pairing-based cryptographic protocols.