International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

04 December 2023

Léo Ducas, Ludo N. Pulles
ePrint Report
The Dual-Sieve Attack on Learning with Errors (LWE), or more generally Bounded Distance Decoding (BDD), has seen many improvements in recent years, ultimately leading to claims that it outperforms the primal attack against certain lattice-based schemes in the PQC standardization process organised by NIST. However, the work of Ducas and Pulles (Crypto '23) revealed that the so-called "Independence Heuristic", on which all recent dual attacks relied, leads to wrong predictions in a contradictory regime, one that is relevant for the security of cryptoschemes. More specifically, the stated distributions of scores for the actual solution and for incorrect candidates were both incorrect.

In this work, we propose to use the weaker heuristic that the output vectors of a lattice sieve are uniformly distributed in a ball. Under this heuristic, we give an analysis of the score distribution in the case of an error of fixed length. Integrating over this length, we extend the analysis to any radially distributed error, in particular the Gaussian, as a fix for the score distribution of the actual solution. This approach also yields a prediction for the scores of incorrect candidates, using a ball as an approximation of the Voronoi cell of a lattice.

We compare the predicted score distributions to extensive experiments and observe them to be qualitatively and quantitatively quite accurate. This constitutes a first step towards fixing the analysis of the dual-sieve attack: we can now accurately estimate false positives and false negatives. With the analysis fixed, one may consider how to fix the attack itself, namely by exploring opportunities to mitigate a large number of false positives.
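The two score distributions in question can also be explored numerically. Below is a Monte Carlo sketch (ours, not the authors' code; the dimension, radii and number of dual vectors are arbitrary placeholders) using the standard sum-of-cosines dual-attack score, with sieve vectors drawn uniformly from a ball per the paper's heuristic:

    # Score distributions under the "uniform in a ball" heuristic.
    # F(t) = sum_j cos(2*pi*<w_j, t>) is the usual dual-attack score.
    import numpy as np

    rng = np.random.default_rng(1)
    n, N = 40, 5000                           # lattice dimension, number of dual vectors

    def sphere(num, dim, radius):             # uniform on a sphere of given radius
        g = rng.normal(size=(num, dim))
        return radius * g / np.linalg.norm(g, axis=1, keepdims=True)

    def ball(num, dim, radius):               # uniform in a ball: scale radius by U^(1/dim)
        return sphere(num, dim, radius) * rng.uniform(size=(num, 1)) ** (1.0 / dim)

    W = ball(N, n, 3.0)                       # heuristic model of the sieve's output vectors

    def scores(targets):
        return np.cos(2 * np.pi * targets @ W.T).sum(axis=1)

    good = sphere(200, n, 0.05)               # BDD errors of fixed length
    bad = ball(200, n, 0.5)                   # wrong candidates: ball as a Voronoi-cell proxy

    print("actual solutions: mean score %.1f" % scores(good).mean())
    print("wrong candidates: mean score %.1f" % scores(bad).mean())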

01 December 2023

Jiang Zhang, Yu Chen, Zhenfeng Zhang
ePrint Report
Driven by the open problem raised by Hofheinz and Kiltz (Journal of Cryptology, 2012), we study the formalization of lattice-based programmable hash functions (PHFs), and give three types of concrete constructions by using several techniques, such as a novel combination of cover-free sets and lattice trapdoors. Under the Inhomogeneous Small Integer Solution (ISIS) assumption, we show that any (non-trivial) lattice-based PHF is a collision-resistant hash function, which gives a direct application of this new primitive.
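For intuition, the trapdoor-mode condition behind such partitioning-friendly hash functions can be paraphrased as follows (our schematic rendering, not the paper's exact formalization): the trapdoor key makes the hash of an input $X$ take the form
$$H_K(X) = A\,R_X + S_X\,B \bmod q,$$
where $R_X$ is a low-norm matrix and $S_X$ is either zero or invertible; with noticeable probability over the key, $S_{X_i}=0$ holds on the adversary's queries while $S_{Y_j}$ is invertible on the challenge inputs, which is exactly the partitioning that such security proofs exploit.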

We further demonstrate the power of lattice-based PHFs by giving generic constructions of signature and identity-based encryption (IBE) schemes in the standard model, which not only provide a way to unify several previous lattice-based schemes using partitioning proof techniques, but also allow us to obtain new short signature schemes and IBE schemes from (ideal) lattices. Specifically, by instantiating the generic constructions with our Type-II and Type-III PHF constructions, we immediately obtain two short signature schemes and two IBE schemes with asymptotically much shorter keys. A major downside inherited from our Type-II and Type-III PHF constructions is that we can only prove the security of the new signatures and IBEs in the bounded security model, where the number Q of the adversary's queries must be known in advance. Another downside is that the computational time of our new signatures and IBEs is a linear function of Q, which is large for typical parameters.

To overcome the above limitations, we also give a refined way of using Type-II and Type-III PHFs to construct lattice-based short signatures with short verification keys in the full security model. In particular, our methods depart from the confined guessing technique of Böhl et al. (Eurocrypt '13) that was used to construct previous standard model short signature schemes with short verification keys by Ducas and Micciancio (Crypto '14) and by Alperin-Sheriff (PKC '15), and allow us to achieve much tighter security from weaker hardness assumptions.
Dario Pasquini, Danilo Francati, Giuseppe Ateniese, Evgenios M. Kornaropoulos
ePrint Report
Credential tweaking attacks use breached passwords to generate semantically similar passwords and gain access to victims' services. These attacks sidestep the first generation of compromised credential checking (C3) services. The second generation of compromised credential checking services, called "Might I Get Pwned" (MIGP), is a privacy-preserving protocol that defends against credential tweaking attacks by allowing clients to query whether a password or a semantically similar variation is present in the server's compromised credentials dataset. The desired privacy requirements include not revealing the user's entered password to the server and ensuring that no compromised credentials are disclosed to the client.
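As a rough sketch of this query flow (ours, heavily simplified: real MIGP blinds both sides under an oblivious PRF so the server never learns the queried credential, and it precomputes the variants server-side; the bucket size and variant rule here are placeholders):

    # Schematic bucketized credential-checking flow, C3/MIGP style.
    import hashlib

    BUCKET_BITS = 16

    def bucket_id(username: str) -> int:      # clients reveal only a short hash prefix
        h = hashlib.sha256(username.encode()).digest()
        return int.from_bytes(h[:4], "big") >> (32 - BUCKET_BITS)

    def variants(password: str):              # toy stand-in for semantic password variants
        yield password
        yield password.lower()
        yield password + "1"

    # Server side: breached pairs grouped into buckets.
    breach = {("alice", "hunter2"), ("bob", "letmein")}
    buckets = {}
    for u, p in breach:
        digest = hashlib.sha256((u + p).encode()).hexdigest()
        buckets.setdefault(bucket_id(u), set()).add(digest)

    # Client side: fetch the username's bucket, test the password and its variants.
    def might_i_get_pwned(username: str, password: str) -> bool:
        entries = buckets.get(bucket_id(username), set())
        return any(hashlib.sha256((username + v).encode()).hexdigest() in entries
                   for v in variants(password))

    print(might_i_get_pwned("alice", "Hunter2"))   # True: a variant is in the breach

Leakage of the kind studied in the paper stems from what such responses reveal about the contents of the queried bucket.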

In this work, we formalize the cryptographic leakage of the MIGP protocol and perform a security analysis to assess its impact on the credentials held by the server. We focus on how this leakage aids breach extraction attacks, where an honest-but-curious client interacts with the server to extract information about the stored credentials. Furthermore, we discover additional leakage arising from Cloudflare's deployment of MIGP. We evaluate how the discovered leakage affects an attacker's guessing capability in breach extraction attacks. Finally, we propose MIGP 2.0, a new iteration of the MIGP protocol designed to minimize data leakage and prevent the introduced attacks.
Xianghan Wang, Jianghua Zhong, Dongdai Lin
ePrint Report
Nonlinear feedback shift registers (NFSRs) are used in many stream ciphers as their main building blocks. One security criterion in the design of a stream cipher is to ensure that its keystream has a long period. To meet this criterion, the NFSR used in a stream cipher must have a long state cycle. Further, to simultaneously avoid equivalent keys, the keystream's period must not be compressed compared to the NFSR's state cycle length; this can be guaranteed if the NFSR is observable, in the sense that any two distinct initial states are distinguishable from their resulting output sequences. The cycle structure of a general NFSR remains a hard open problem. Constructing Fibonacci NFSRs with maximum state cycles has therefore attracted much attention, but so far such Fibonacci NFSRs with known feedback functions have been found only for stage numbers no greater than 33.

Considering that Galois NFSRs may decrease the area and increase the throughput compared to Fibonacci NFSRs, this paper studies two types of $n$-stage Galois NFSRs whose state transition matrices are circulant matrices with only one nonzero element of 1 in each column. The cycle structure and observability of both types are disclosed using the semi-tensor product based Boolean network approach. In the first type, each Galois NFSR has a state transition matrix in which the position of the element 1 in the first column is even. It has a maximum state cycle for an arbitrary stage number and an explicit feedback function. It is observable if and only if its output function depends on the first state bit. In the second type, each Galois NFSR has a state transition matrix in which the position of the element 1 in the first column is $2^m+1$ for some positive integer $m\leq n-1$, where $n$ is the NFSR's stage number. It has $2^m$ cycles of length $2^{n-m}$, and it is observable if its output function depends on all the state bits whose indices are no smaller than $n-m+1$.
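As a toy illustration of the notion of state cycles (ours; the 4-stage Fibonacci feedback below is arbitrary and unrelated to the registers studied in the paper), one can enumerate an NFSR's cycle structure by walking its state graph:

    # Brute-force the state-cycle structure of a small Fibonacci NFSR.
    n = 4
    f = lambda s: s[0] ^ s[1] ^ (s[2] & s[3])       # toy feedback function

    def step(state):                                # shift left, feed back f(state)
        return state[1:] + (f(state),)

    seen, cycles = set(), []
    for x in range(2 ** n):
        s = tuple((x >> i) & 1 for i in range(n))
        if s in seen:
            continue
        trail = []
        while s not in seen:
            seen.add(s)
            trail.append(s)
            s = step(s)
        if s in trail:                              # the walk closed a new cycle
            cycles.append(len(trail) - trail.index(s))

    print("cycle lengths:", sorted(cycles))         # a maximum-cycle NFSR would print [16]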
Orhun Kara
ePrint Report
Common block ciphers like AES, specified by NIST, or KASUMI (A5/3) of GSM are extensively utilized by billions of individuals globally to protect their privacy and maintain confidentiality in daily communications. However, these ciphers lack comprehensive security proofs against the vast majority of known attacks. Currently, security proofs are limited to differential and linear attacks for both AES and KASUMI. For instance, the consensus on the security of AES is based not on formal mathematical proofs but on intensive cryptanalysis of its reduced rounds spanning several decades. In this work, we introduce new security proofs for AES against another attack method: impossible differential (ID) attacks. We classify ID attacks as reciprocal and nonreciprocal ID attacks, and show that sharp and generic lower bounds can be imposed on the data complexities of reciprocal ID attacks on substitution-permutation networks. We prove that the minimum data required for a reciprocal ID attack on AES using a conventional ID characteristic is $2^{66}$ chosen plaintexts, whereas a nonreciprocal ID attack involves at least $2^{88}$ computational steps. We mount a nonreciprocal ID attack on 6-round AES for 192-bit and 256-bit keys, which requires only $2^{18}$ chosen plaintexts and outperforms the data complexity of any previous attack. Given its marginal time complexity, this attack does not pose a substantial threat to the security of AES. However, we have made enhancements to the integral attack on 6-round AES, thereby surpassing the longstanding record for the most efficient attack after a period of 23 years.
Olivier Sanders, Jacques Traoré
ePrint Report
Anonymous credentials are cryptographic mechanisms enabling users to authenticate themselves with a fine-grained control on the information they leak in the process. They have been the topic of countless papers which have improved the performance of such mechanisms or proposed new schemes able to prove ever-more complex statements about the attributes certified by those credentials. However, whereas these papers have studied in depth the problem of the information leaked by the credential and/or the attributes, almost all of them have surprisingly overlooked the information one may infer from the knowledge of the credential issuer.

In this paper we address this problem by showing how one can efficiently hide the actual issuer of a credential within a set of potential issuers. The novelty of our work is that we do not resort to zero-knowledge proofs but instead show how one can tweak Pointcheval-Sanders signatures to achieve this issuer-hiding property at a very low cost. This results in an efficient anonymous credential system that indeed provides complete control over the information leaked in the authentication process. Our construction is moreover modular and can thus fit a wide spectrum of applications, notably Self-Sovereign Identity (SSI) systems.
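For background, here is our recap of the original Pointcheval-Sanders scheme (CT-RSA 2016), not of the tweak introduced in this paper. Under public key $(\tilde g, \tilde X = \tilde g^x, \tilde Y = \tilde g^y)$, a signature on a message $m$ is $\sigma = (\sigma_1, \sigma_2) = (h, h^{x+ym})$ for a random group element $h$, verified by checking
$$e(\sigma_1, \tilde X \tilde Y^{m}) = e(\sigma_2, \tilde g).$$
Since $(\sigma_1^t, \sigma_2^t)$ verifies for any $t$, signatures can be rerandomized at each showing, which is the malleability that makes the scheme a natural basis for anonymous credentials.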
Tomoyuki Morimae, Barak Nehoran, Takashi Yamakawa
ePrint Report
We show the following unconditional results on quantum commitments in two related yet different models:

1. We revisit the notion of quantum auxiliary-input commitments introduced by Chailloux, Kerenidis, and Rosgen (Comput. Complex. 2016), where both the committer and receiver take the same quantum state, determined by the security parameter, as quantum auxiliary input. We show that computationally-hiding and statistically-binding quantum auxiliary-input commitments exist unconditionally, i.e., without relying on any unproven assumption, whereas Chailloux et al. relied on the complexity-theoretic assumption ${\bf QIP}\not\subseteq{\bf QMA}$. On the other hand, we observe that achieving both statistical hiding and statistical binding at the same time is impossible even in the quantum auxiliary-input setting. To the best of our knowledge, this is the first example of unconditionally proving computational security of any form of (classical or quantum) commitments for which statistical security is impossible. As intermediate steps toward our construction, we introduce and unconditionally construct post-quantum sparse pseudorandom distributions and quantum auxiliary-input EFI pairs, which may be of independent interest.

2. We introduce a new model which we call the common reference quantum state (CRQS) model where both the committer and receiver take the same quantum state that is randomly sampled by an efficient setup algorithm. We unconditionally prove that there exist statistically hiding and statistically binding commitments in the CRQS model, circumventing the impossibility in the plain model.

We also discuss their applications to zero-knowledge proofs, oblivious transfers, and multi-party computations.
Mike Nkongolo Wa Nkongolo
ePrint Report
This study focuses on spotting and stopping new types of online threats by improving the UGRansome dataset to detect unusual activity in real time. By blending different machine learning methods, like naïve tree-based ensemble learning and recursive feature elimination (RFE), the research achieves a high accuracy rate of 97%. Naïve Bayes (NB) stands out as the most effective classifier. The suggested setup, combining gradient boosting (GB) and random forest (RF) with NB, effectively identifies and prevents unknown vulnerabilities in computer systems. UGRansome successfully blocks over 100 kilobits per second (kbps) of harmful online traffic by using details pinpointed by the RFE method, specifically uniform resource locators (URLs). This outperforms existing Intrusion Detection System (IDS) datasets. It is particularly good at stopping secure shell attacks, proving the dataset's usefulness in making networks safer. This research marks significant progress in detecting intrusions. The NB model excels in accuracy, precision, and recall, especially in identifying new threats. Moreover, the suggested naïve tree-based ensemble model shows outstanding accuracy, standing out as the best-performing technique among all models studied. Applying the UGRansome properties-based rule noticeably changes how traffic is sorted, decreasing unknown traffic while increasing unclassified traffic, which requires further investigation.
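A minimal sketch of this kind of pipeline (ours; the synthetic data and all hyperparameters are placeholders, not the UGRansome configuration), with RFE feature selection feeding a soft-voting NB/GB/RF ensemble:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                                  VotingClassifier)
    from sklearn.feature_selection import RFE
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = make_pipeline(
        RFE(LogisticRegression(max_iter=1000), n_features_to_select=10),  # RFE step
        VotingClassifier([("nb", GaussianNB()),
                          ("gb", GradientBoostingClassifier()),
                          ("rf", RandomForestClassifier())], voting="soft"),
    )
    model.fit(X_tr, y_tr)
    print("held-out accuracy: %.3f" % model.score(X_te, y_te))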
Xihan Xiong, Zhipeng Wang, Xi Chen, William Knottenbelt, Michael Huth
ePrint Report
Lido, the leading Liquidity Staking Derivative (LSD) provider on Ethereum, allows users to stake an arbitrary amount of ETH to receive stETH, which can be integrated with Decentralized Finance (DeFi) protocols such as Aave. The composability between Lido and Aave enables a novel strategy called “leverage staking”, where users stake ETH on Lido to acquire stETH, utilize stETH as collateral on Aave to borrow ETH, and then restake the borrowed ETH on Lido. Users can iteratively execute this process to optimize potential returns based on their risk profile.
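The arithmetic of this loop is simple (our toy illustration; the 0.7 loan-to-value ratio below is a placeholder, not Aave's actual stETH parameter): with loan-to-value ratio $r$, $k$ iterations stake $1 + r + \dots + r^{k-1}$ ETH per initial ETH, approaching the $1/(1-r)$ leverage cap.

    # Iterated stake/borrow loop: staked amount per 1 ETH of capital.
    def leverage(ltv: float, rounds: int) -> float:
        staked, borrow = 0.0, 1.0
        for _ in range(rounds):
            staked += borrow          # stake ETH on Lido, receive stETH
            borrow *= ltv             # post stETH on Aave, borrow ltv * value
        return staked

    for k in (1, 3, 10):
        print(k, "rounds:", round(leverage(0.7, k), 3), "ETH staked")
    print("cap 1/(1-r):", round(1 / (1 - 0.7), 3))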

This paper systematically studies the opportunities and risks associated with leverage staking. We are the first to formalize the stETH-ETH leverage staking strategy within the Lido-Aave ecosystem. Our empirical study identifies 262 leverage staking positions on Ethereum, with an aggregated staking amount of 295,243 ETH (482M USD). We discover that 90.13% of leverage staking positions have achieved higher returns than conventional staking. Furthermore, we perform stress tests to evaluate the risk introduced by leverage staking under extreme conditions. We find that leverage staking significantly amplifies the risk of cascading liquidations. We hope this paper can inform and encourage the development of robust risk management approaches to protect the Lido-Aave LSD ecosystem.
Alper Çakan, Vipul Goyal
ePrint Report
The quantum no-cloning theorem gives rise to the intriguing possibility of quantum copy protection, where we encode a program in a quantum state such that a user in possession of $k$ such states cannot create $k+1$ working copies. Introduced by Aaronson (CCC'09) over a decade ago, copy protection has proven notoriously hard to achieve.

In this work, we construct public-key encryption and functional encryption schemes whose secret keys are copy-protected against unbounded collusions in the plain model (i.e., without any idealized oracles), assuming (post-quantum) subexponentially secure $i\mathcal{O}$, one-way functions and LWE. This resolves a long-standing open question of constructing fully collusion-resistant copy-protected functionalities raised by multiple previous works.

Prior to our work, copy-protected functionalities were known only in restricted collusion models: either an a priori bound on the collusion size was needed, in the plain model under the same assumptions as ours (Liu, Liu, Qian, Zhandry [TCC'22]), or the adversary was only prevented from doubling their number of working programs, in a structured quantum oracle model (Aaronson [CCC'09]).

We obtain our results through a novel technique which uses identity-based encryption to construct unbounded collusion resistant copy-protection schemes from $1\to2$ secure schemes. This is analogous to the technique of using digital signatures to construct full-fledged quantum money from single banknote schemes (Lutomirski et al. [ICS'09], Farhi et al. [ITCS'12], Aaronson and Christiano [STOC'12]). We believe our technique is of independent interest.

Along the way, we also construct a puncturable functional encryption scheme whose master secret key can be punctured at all functions $f$ such that $f(m_0) \neq f(m_1)$. This might also be of independent interest.
Luowen Qian
ePrint Report
We demonstrate how to build computationally secure commitment schemes with the aid of quantum auxiliary inputs without unproven complexity assumptions. Furthermore, the quantum auxiliary input can be prepared either (1) efficiently through a trusted setup similar to the classical common random string model, or (2) strictly between the two involved parties in uniform exponential time. Classically this remains impossible without first proving $\mathsf{P} \neq \mathsf{NP}$.
Charanjit S Jutla, Chengyu Lin
ePrint Report
We extend the known pseudorandomness of Ring-LWE to be based on lattices that do not correspond to any ideal of any order in the underlying number field. In the earlier works of Lyubashevsky et al. (EUROCRYPT 2010) and Peikert et al. (STOC 2017), the hardness of RLWE was based on ideal lattices of the ring of integers of number fields, which are known to be Dedekind domains. While these works extended Regev's (STOC 2005) quantum polynomial-time reduction for LWE, thus allowing more efficient and more structured cryptosystems, the additional algebraic structure of ideals of Dedekind domains leaves open the possibility that such ideal lattices are not as hard as general lattices.

In this work we show that hardness of $q$-Ring-LWE can be based on worst-case hardness of ideal lattices in arbitrary orders $O$, as long as the order $O$ satisfies the property that $\frac{1}{m}\cdot O$ contains the ring of integers, for some $m$ co-prime to $q$. Further, the hard lattice problems need not be given the order $O$ itself as input. The reduction requires that the noise be a factor $m$ more than the original Ring-LWE reduction. We also show that for the power-of-two cyclotomic number fields, there exist orders with $m=4$ such that non-trivial ideals of the order, which are not contained in the conductor, are non-invertible.
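A simple family of orders with the required property (our illustration, not one of the specific orders constructed in the paper): for any number field with ring of integers $O_K$, the subring $O = \mathbb{Z} + m \cdot O_K$ is an order, and
$$\frac{1}{m}\cdot O = \frac{1}{m}\mathbb{Z} + O_K \supseteq O_K,$$
so any such $O$ qualifies whenever $m$ is co-prime to $q$.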

Another reduction shows that the hardness of $q$-Ring-LWE can be based on the worst-case hardness of lattices that correspond to sums of ideal lattices in arbitrary and different orders of the number field, as long as the (set of) orders $\{O_i\}$ satisfy the property that $\frac{1}{m}\cdot O_i$ contains the ring of integers, for some $m$ co-prime to $q$. We also show that for the power-of-two cyclotomic number fields, there exist orders $O_1, O_2$ with $m=8$ such that there are ideals $I_1, I_2$ of $O_1$ and $O_2$, respectively, with $I_1+ I_2$ not an ideal of any order in the number field.
Daniel J. Bernstein
ePrint Report
There appears to be a widespread belief that some processes of selecting cryptosystems are less risky than other processes. As a case study of quantifying the difference in risks, this paper compares the currently-known-failure rates of three large groups of cryptosystems: (1) the round-1 submissions to the NIST Post-Quantum Cryptography Standardization Project, (2) the round-1 submissions not broken by the end of round 1, and (3) the round-1 submissions selected by NIST for round 2 of the same project. These groups of cryptosystems turn out to have currently-known-failure rates that are strikingly high, and that include statistically significant differences across the groups, not matching the pattern of differences that one might expect. Readers are cautioned that the actual failure rates could be much higher than the currently-known-failure rates.
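The kind of significance claim made here can be checked with a standard two-group test on broken-vs-unbroken counts; the sketch below is ours, and the counts in it are made-up placeholders rather than the paper's data.

    # Fisher exact test on hypothetical (broken, unbroken) counts per group.
    from scipy.stats import fisher_exact

    round1_all = (30, 39)        # hypothetical placeholder counts
    round2_selected = (8, 18)    # hypothetical placeholder counts

    odds, p = fisher_exact([list(round1_all), list(round2_selected)])
    print("odds ratio %.2f, p-value %.3f" % (odds, p))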
Daniel R. L. Brown, Chris Monico
ePrint Report
Panny [3] described how to forge the “tropical signatures” proposed by Chen, Grigoriev and Shpilrain [1]. (These signatures are loosely related to the NP-complete problem of factoring tropical polynomials). We describe more methods to forge these tropical signatures. We also describe some patches that thwart all but one of these forgery methods (which we summarize as re-hashing an honest signature).
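For readers new to the setting (our toy illustration, unrelated to the signature scheme or the forgeries themselves): in the tropical, i.e. min-plus, semiring, addition is min and multiplication is ordinary addition, so a tropical polynomial evaluates to a minimum of affine functions.

    # Minimal min-plus ("tropical") arithmetic.
    from functools import reduce

    def t_add(a, b):                    # tropical addition
        return min(a, b)

    def t_mul(a, b):                    # tropical multiplication
        return a + b

    def t_poly(coeffs, x):              # min_i (c_i + i*x): tropical sum of c_i x^i
        return reduce(t_add, (t_mul(c, i * x) for i, c in enumerate(coeffs)))

    # The tropical product (x (+) 2) (*) (x (+) 5) has coefficients [7, 2, 0]:
    print([t_poly([7, 2, 0], x) for x in range(6)])   # [0, 2, 4, 5, 6, 7]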
Hamza Abusalah, Valerio Cini
ePrint Report
A proof of sequential work (PoSW) scheme allows a prover to convince a verifier that it computed a certain number of computational steps sequentially. Very recently, graph-labeling PoSW schemes found applications in light-client blockchain protocols, most notably bootstrapping. A bootstrapping protocol allows a light client, with minimal information about the blockchain, to hold a commitment to its stable prefix. An incremental PoSW (iPoSW) scheme allows the prover to non-trivially increment proofs: given $\chi,\pi_1$ and integers $N_1,N_2$ such that $\pi_1$ is a valid proof for $N_1$, it generates a valid proof $\pi$ for $N_1+N_2$.

In this work, we construct an iPoSW scheme based on the skiplist-based PoSW scheme of Abusalah et al. and prove its security in the random oracle model by employing the powerful on-the-fly sampling technique of Döttling et al. Moreover, unlike the iPoSW scheme of Döttling et al., ours is the first iPoSW scheme suitable for constructing incremental non-interactive arguments of chain knowledge (SNACK) schemes, which are at the heart of space- and time-efficient blockchain light-client protocols. In particular, our scheme works for general weight distributions, which we characterize as incrementally sampleable distributions. Our general treatment recovers the distribution underlying the scheme of Döttling et al., as well as the distribution underlying the SNACK-enabled bootstrapping application, as special cases. In realizing our general construction, we develop a new on-the-fly sampling technique.
Apurva K Vangujar, Alia Umrani, Paolo Palmieri
ePrint Report
Vehicle Ad Hoc Networks (VANETs) play a pivotal role in intelligent transportation systems, offering dynamic communication between vehicles, Road Side Units (RSUs), and the internet. Given the open-access nature of VANETs and the associated threats, such as impersonation and privacy violations, ensuring the security of these communications is of utmost importance. This paper presents the Identity-based Cluster Authentication and Key Exchange (ID-CAKE) scheme, a new approach to addressing security challenges in VANETs. The ID-CAKE scheme integrates Cluster Consensus Identity-based Identification (CCIBI) with Zero-Knowledge (ZK) proofs and the Identity-based Multireceiver Key Exchange Mechanism (ID-mKEM) signature scheme. This integration provides robust authorization via CCIBI, while ID-mKEM signatures ensure message integrity and guarantee both non-repudiation and unforgeability through mKEM for message broadcasting. The scheme employs a novel three-party ZK proof for batch verification using mKEM, which significantly reduces computational burdens. Our scheme also ensures anonymity and unlinkability by introducing pseudo-identities for all users in the cluster. The rigorous security proofs provided confirm the resilience of the ID-CAKE scheme against potential attacks under different scenarios, based on the hardness of the elliptic curve computational Diffie-Hellman problem in the random oracle model. The ID-CAKE scheme establishes a robust security framework for VANETs, and its introduction highlights potential pathways for future exploration in the realm of VANET security.
Ritam Bhaumik, Mohammad Amin Raeisi
ePrint Report
In spite of being a popular technique for designing block ciphers, Lai-Massey networks have received considerably less attention from a security analysis point-of-view than Feistel networks and Substitution-Permutation networks. In this paper we study the beyond-birthday-bound (BBB) security of Lai-Massey networks with independent random round functions against chosen-plaintext adversaries. Concretely, we show that five rounds are necessary and sufficient to achieve BBB security.
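As a reminder of the structure under analysis, here is one Lai-Massey round in code (our toy sketch with a concrete 8-bit round function; the paper's results concern independent random round functions): F is applied to the difference of the two halves, its output is added to both, and an orthomorphism sigma is applied to one half.

    MASK = 0xFF

    def F(x, k):                       # toy round function standing in for a random one
        return ((x ^ k) * 167 + 13) & MASK

    def sigma(v):                      # orthomorphism on 8 bits: (hi, lo) -> (lo, hi ^ lo)
        hi, lo = v >> 4, v & 0xF
        return (lo << 4) | (hi ^ lo)

    def lai_massey_round(x, y, k):
        t = F(x ^ y, k)                # here the halves' "difference" is XOR
        return sigma(x ^ t), y ^ t     # both halves absorb t; sigma breaks the symmetry

    x, y = 0xAB, 0xCD
    for k in (0x1E, 0x2D, 0x3C, 0x4B, 0x5A):   # five rounds, cf. the BBB bound above
        x, y = lai_massey_round(x, y, k)
    print(hex((x << 8) | y))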
Hosein Hadipour, Yosuke Todo
ePrint Report
QARMAv2 is a general-purpose and hardware-oriented family of lightweight tweakable block ciphers (TBCs) introduced in ToSC 2023. QARMAv2, as a redesign of QARMA with a longer tweak and tighter security margins, is also designed to be suitable for cryptographic memory protection and control flow integrity. The designers of QARMAv2 provided a relatively comprehensive security analysis in the design specification, e.g., some bounds for the number of attacked rounds in differential and boomerang analysis, together with some concrete impossible differential, zero-correlation, and integral distinguishers. In one of the first third-party cryptanalyses of QARMAv2, Hadipour et al. significantly improved its integral distinguishers and provided the longest concrete distinguishers of QARMAv2 to date. However, they provided no key recovery attack based on their distinguishers.

This paper delves into the cryptanalysis of QARMAv2 to enhance our understanding of its security. Given that the integral distinguishers of QARMAv2 are the longest concrete distinguishers for this cipher so far, we focus on integral attacks. To this end, we first further improve the automatic tool introduced by Hadipour et al. for finding integral distinguishers of TBCs following the TWEAKEY framework. This new tool exploits the MixColumns property of QARMAv2 to find integral distinguishers more suitable for key recovery attacks. Then, we combine several techniques for integral key recovery attacks, e.g., the meet-in-the-middle and partial-sum techniques, to build a fine-grained integral key recovery attack on QARMAv2. Notably, we demonstrate how to leverage the low data complexity of the integral distinguishers of QARMAv2 to reduce the memory complexity of the meet-in-the-middle technique. As a result, we propose the first concrete key recovery attacks on reduced-round versions of QARMAv2, attacking 13 rounds of QARMAv2-64-128 with a single tweak block, 14 rounds of QARMAv2-64-128 with two independent tweak blocks, and 16 rounds of QARMAv2-128-256 with two independent tweak blocks. Our attacks do not compromise the claimed security of QARMAv2, but they shed more light on the cryptanalysis of this cipher.
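For readers unfamiliar with the starting point of such attacks, the sketch below (ours; the two-round toy SPN is unrelated to QARMAv2's structure) verifies the basic integral property: over a set of plaintexts in which one nibble takes all 16 values, the ciphertext nibbles XOR to zero.

    S = [6, 4, 12, 5, 0, 7, 2, 14, 1, 15, 3, 13, 8, 10, 9, 11]   # a 4-bit S-box

    def encrypt(nibbles, keys):
        for k in keys:                                   # one toy round per key
            nibbles = [S[x ^ k] for x in nibbles]        # key addition + S-box
            nibbles = nibbles[1:] + nibbles[:1]          # nibble rotation as the linear layer
        return nibbles

    acc = [0, 0, 0, 0]
    for v in range(16):                                  # active nibble takes all values
        c = encrypt([v, 0x3, 0x7, 0xA], keys=[0x5, 0x9])
        acc = [a ^ x for a, x in zip(acc, c)]
    print("XOR over the set:", acc)                      # balanced: all zero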
Noam Mazor, Rafael Pass
ePrint Report
The relationships between various meta-complexity problems are not well understood in the worst-case regime, including whether the search version is harder than the decision version, whether the hardness scales with the "threshold", and how the hardness of different meta-complexity problems relates to one another and to the task of function inversion.

In this note, we present resolutions to some of these questions with respect to the \emph{black-box} analog of these problems. In more detail, let $MK^t_M P[s]$ denote the language consisting of strings $x$ with $K_{M}^t(x) < s(|x|)$, where $K_M^t(x)$ denotes the $t$-bounded Kolmogorov complexity of $x$ with $M$ as the underlying (universal) Turing machine, and let $search$-$MK^t_M P[s]$ denote the search version of the same problem.

We show that if for every universal Turing machine $U$ there exists a $2^{\alpha n}\mathrm{poly}(n)$-size $U$-oracle-aided circuit deciding $MK^t_U P[n-O(1)]$, then for every function $s$ and every (not necessarily universal) Turing machine $M$, there exists a $2^{\alpha s(n)}\mathrm{poly}(n)$-size $M$-oracle-aided circuit solving $search$-$MK^t_M P[s(n)]$; this in turn yields circuits of roughly the same size for both the Minimum Circuit Size Problem (MCSP) and the function inversion problem, as they can be thought of as instantiating $MK^t_M P$ with particular choices of (non-universal) TMs $M$ (the circuit emulator in the case of MCSP, and the function evaluator in the case of function inversion).

As a corollary of independent interest, we get that the complexity of black-box function inversion is (roughly) the same as the complexity of black-box deciding $MK^t_U P[n-O(1)]$ for any universal TM $U$; that is, also in the worst-case regime, black-box function inversion is "equivalent" to black-box deciding $MK^t_U P$.
François Delobel, Patrick Derbez, Arthur Gontier, Loïc Rouquette, Christine Solnon
ePrint Report
An important criterion for asserting the security of a cryptographic primitive is its resistance against differential cryptanalysis. For word-oriented primitives, a common technique to determine the number of rounds required to ensure immunity against differential distinguishers is to consider truncated differential characteristics and to count the number of active S-boxes. Doing so allows one to provide an upper bound on the probability of the best differential characteristic at a reduced computational cost. However, in order to design very efficient primitives, it might be necessary to evaluate this probability more accurately. This is usually done in a second step, during which one tries to instantiate the truncated differential characteristics with actual values and to compute their corresponding probability. This step is usually done either with ad-hoc algorithms or with CP, SAT or MILP models solved by generic solvers. In this paper, we present a generic tool for automatically generating these models that handles all word-oriented ciphers. Furthermore, the running times to solve these models are very competitive with those of all previous dedicated approaches.
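The "second step" can be made concrete with a difference distribution table (DDT). The sketch below (ours, using the PRESENT S-box purely as an example) computes the best single-transition probability; a characteristic with $a$ active S-boxes then has probability at most $(\mathrm{best}/16)^a$.

    # Difference distribution table of a 4-bit S-box.
    S = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]   # PRESENT S-box

    DDT = [[0] * 16 for _ in range(16)]
    for x in range(16):
        for dx in range(16):
            DDT[dx][S[x] ^ S[x ^ dx]] += 1

    best = max(DDT[dx][dy] for dx in range(1, 16) for dy in range(16))
    print("max DDT entry:", best, "-> best transition probability", best, "/ 16")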