International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

10 April 2023

Reyhaneh Rabaninejad, Behzad Abdolmaleki, Giulio Malavolta, Antonis Michalas, Amir Nabizadeh
ePrint Report
Proof of Storage-time (PoSt) is a cryptographic primitive that enables a server to demonstrate non-interactive continuous availability of outsourced data in a publicly verifiable way. This notion was first introduced by Filecoin to secure their blockchain-based decentralized storage marketplace, using expensive SNARKs to compact proofs. Recent work [2] employs the notion of trapdoor delay functions to address the problem of compact PoSt without SNARKs. This approach, however, entails statefulness and non-transparency, and requires an expensive pre-processing phase by the client. All of the above renders their solution impractical for decentralized storage marketplaces, leaving stateless, trapdoor-free PoSt with reduced setup costs as an open problem. In this work, we present stateless and transparent PoSt constructions using probabilistic sampling and a new Merkle variant commitment. In the process of enabling adjustable prover difficulty, we then propose a multi-prover construction to diminish the CPU work each prover is required to do. Both schemes feature a fast setup phase and logarithmic verification time and bandwidth, with end-to-end setup, prove, and verification costs lower than those of existing solutions.
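The random-audit pattern behind such proofs of storage can be sketched with a plain Merkle commitment: commit to the file blocks, then answer a randomly sampled challenge with an authentication path. This is a simplified illustration only; the paper's Merkle variant and its timing mechanism are not reproduced here, and all names below are ours.

```python
# Toy Merkle commitment with a random-block audit (illustrative sketch).
import hashlib, random

def H(b): return hashlib.sha256(b).digest()

def merkle_root(leaves):
    layer = [H(x) for x in leaves]
    while len(layer) > 1:
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def prove(leaves, idx):
    # Authentication path from leaf idx up to the root.
    layer, path = [H(x) for x in leaves], []
    while len(layer) > 1:
        path.append(layer[idx ^ 1])          # sibling at the current level
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        idx //= 2
    return path

def verify(root, leaf, idx, path):
    node = H(leaf)
    for sib in path:
        node = H(node + sib) if idx % 2 == 0 else H(sib + node)
        idx //= 2
    return node == root

blocks = [f"block-{i}".encode() for i in range(8)]   # 8 stored file blocks
root = merkle_root(blocks)                           # public commitment
i = random.randrange(8)                              # verifier's random challenge
assert verify(root, blocks[i], i, prove(blocks, i))  # server proves possession
```

A proof of storage-time repeats such audits continuously over a time interval; the sketch shows only a single audit.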
Yuval Ishai, Dakshita Khurana, Amit Sahai, Akshayaram Srinivasan
ePrint Report
We revisit the problem of {\em reusable} non-interactive secure computation (NISC). A standard NISC protocol for a sender-receiver functionality $f$ enables the receiver to encrypt its input $x$ such that any sender, on input $y$, can send back a message revealing only $f(x,y)$. Security should hold even when either party can be malicious. A {\em reusable} NISC protocol has the additional feature that the receiver's message can be safely reused for computing multiple outputs $f(x,y_i)$. Here security should hold even when a malicious sender can learn partial information about the honest receiver's outputs in each session.

We present the first reusable NISC protocol for general functions $f$ that only makes a {\em black-box} use of any two-message oblivious transfer protocol, along with a random oracle. All previous reusable NISC protocols either made a non-black-box use of cryptographic primitives (Cachin et al., ICALP 2002) or alternatively required a stronger arithmetic variant of oblivious transfer and were restricted to $f$ in $\mathsf{NC}^1$ or similar classes (Chase et al., Crypto 2019). Our result is obtained via a general compiler from standard NISC to reusable NISC that makes use of a special type of honest-majority protocols for secure multiparty computation.

Finally, we extend the above main result to reusable {\em two-sided} NISC, in which two parties can encrypt their inputs in the first round and then reveal different functions of their inputs in multiple sessions. This extension either requires an additional (black-box) use of additively homomorphic commitment or alternatively requires the parties to maintain a state between sessions.
Elette Boyle, Geoffroy Couteau, Pierre Meyer
ePrint Report
Secure computation enables mutually distrusting parties to jointly compute a function on their secret inputs, while revealing nothing beyond the function output. A long-running challenge is understanding the required communication complexity of such protocols---in particular, when communication can be sublinear in the circuit representation size of the desired function. For certain functions, such as Private Information Retrieval (PIR), this question extends to even sublinearity in the input size.

We develop new techniques expanding the set of computational assumptions for sublinear communication in both settings:

1) [Circuit size] We present sublinear-communication protocols for secure evaluation of general layered circuits, given any 2-round rate-1 batch oblivious transfer (OT) protocol with a particular ``decomposability'' property. In particular, this condition can be shown to hold for the recent batch OT protocols of Brakerski et al. (Eurocrypt 2022), in turn yielding a new sublinear secure computation feasibility result: from Quadratic Residuosity (QR) together with polynomial-noise-rate Learning Parity with Noise (LPN). Our approach constitutes a departure from existing paths toward sublinear secure computation, all based on fully homomorphic encryption or homomorphic secret sharing.

2) [Input size] We construct single-server PIR based on the Computational Diffie-Hellman (CDH) assumption, with polylogarithmic communication in the database input size $n$. Previous constructions from CDH required communication $\Omega(n)$. In hindsight, our construction comprises a relatively simple combination of existing tools from the literature.
Shankara Pailoor, Yanju Chen, Franklyn Wang, Clara Rodríguez, Jacob Van Gaffen, Jason Morton, Michael Chu, Brian Gu, Yu Feng, Isil Dillig
ePrint Report
As zero-knowledge proofs gain increasing adoption, the cryptography community has designed domain-specific languages (DSLs) that facilitate the construction of zero-knowledge proofs (ZKPs). Many of these DSLs, such as Circom, facilitate the construction of arithmetic circuits, which are essentially polynomial equations over a finite field. In particular, given a program in a zero-knowledge proof DSL, the compiler automatically produces the corresponding arithmetic circuit. However, a common and serious problem is that the generated circuit may be underconstrained, either due to a bug in the program or a bug in the compiler itself. Underconstrained circuits admit multiple witnesses for a given input, so a malicious party can generate bogus witnesses, thereby causing the verifier to accept a proof that it should not. Because of the increasing prevalence of such arithmetic circuits in blockchain applications, several million dollars worth of cryptocurrency have been stolen due to underconstrained arithmetic circuits.

Motivated by this problem, we propose a new technique for finding ZKP bugs caused by underconstrained polynomial equations over finite fields. Our method performs semantic reasoning over the finite field equations generated by the compiler to prove whether or not each signal is uniquely determined by the input. Our proposed approach combines SMT solving with lightweight uniqueness inference to effectively reason about underconstrained circuits. We have implemented our proposed approach in a tool called $\mathbf{\mathsf{QED}^2}$ and evaluate it on 163 Circom circuits. Our evaluation shows that $\mathbf{\mathsf{QED}^2}$ can successfully solve 70\% of these benchmarks, meaning that it either verifies the uniqueness of the output signals or finds a pair of witnesses that demonstrate non-uniqueness of the circuit. Furthermore, $\mathbf{\mathsf{QED}^2}$ has found 8 previously unknown vulnerabilities in widely-used circuits.
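What "multiple witnesses for a given input" means can be shown on a toy constraint system of our own invention (not from the paper, and brute force rather than the SMT-based reasoning of $\mathbf{\mathsf{QED}^2}$): a binary decomposition gadget that forgets one booleanity constraint admits two valid witnesses for the same input.

```python
# Toy underconstrained circuit over a small prime field (illustration only).
P = 7  # tiny field so we can brute-force; real circuits use ~254-bit fields

def satisfied(out, b0, b1):
    # Intended constraints: out = b0 + 2*b1 with b0, b1 both binary.
    # Bug: the booleanity constraint b1*(b1 - 1) = 0 is missing.
    return (b0 * (b0 - 1)) % P == 0 and (b0 + 2 * b1 - out) % P == 0

def witnesses(out):
    # Enumerate all assignments to the internal signals for a fixed input.
    return [(b0, b1) for b0 in range(P) for b1 in range(P) if satisfied(out, b0, b1)]

ws = witnesses(3)
print(ws)  # more than one witness for out = 3 -> the circuit is underconstrained
```

For `out = 3` both `(b0, b1) = (1, 1)` and the bogus `(0, 5)` satisfy the system, since `2·5 ≡ 3 (mod 7)`; a malicious prover could use the second witness.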
Dimitris Mouris, Charles Gouert, Nektarios Georgios Tsoutsos
ePrint Report
The global supply chain involves multiple independent entities, and potential adversaries can exploit different attack vectors to steal proprietary designs and information. As a result, intellectual property (IP) owners and consumers have reasons to keep their designs private. Without a trusted third party, this mutual mistrust can lead to a deadlock where IP owners are unwilling to disclose their IP core before a financial agreement is reached, while consumers need assurance that the proprietary design will meet their integration needs without compromising the confidentiality of their test vectors. To address this challenge, we introduce an efficient framework called MPloC that resolves this deadlock by allowing owners and consumers to jointly evaluate the target design with consumer-supplied test vectors while preserving the privacy of both the IP core and the inputs. MPloC is the first work that combines secure multiparty computation (MPC) and logic-locking techniques to accomplish these goals. Our approach supports both semi-honest and malicious security models to allow users to balance stronger security guarantees with performance. We compare our approach to existing state-of-the-art works that utilize homomorphic encryption across several benchmarks and report runtime improvements of more than two orders of magnitude.
Anit Kumar Ghosal, Dipanwita Roychowdhury
ePrint Report
A tampering attack is the act of deliberately modifying a codeword to produce another codeword of a related message, with the goal of recovering the original message from the codeword. Non-malleable codes were introduced to protect messages from such attacks: any tampering attack performed on a message encoded with a non-malleable code is guaranteed to yield either the original message or a completely unrelated one. They are useful mainly when privacy and integrity of the message matter more than correctness. Unfortunately, the standard version of non-malleable codes protects only against one-time tampering attacks. The literature shows that it is possible to construct non-malleable codes from authenticated encryption, but such constructions do not provide security when an adversary tampers with the codeword more than once. Later, continuously non-malleable codes were constructed, in which an attacker can tamper with the message a polynomial number of times. In this work, we propose a construction of a continuously non-malleable code from authenticated encryption in the 2-split-state model. Our construction provides security against a polynomial number of tampering attacks while preserving the non-malleability property. The security of the proposed continuously non-malleable code reduces to the security of the underlying leakage-resilient storage when the tampering experiment triggers self-destruct.
Anit Kumar Ghosal, Dipanwita Roychowdhury
ePrint Report
The secret key of an encryption scheme stored in the secure memory of a hardware device can be tampered with using fault attacks, with the aim of recovering the key by altering some regions of the memory. Such attacks may also arise when the device is stolen or infected by a virus. Non-malleable codes are used to protect secret information from tampering attacks: the secret key can be encoded using a non-malleable code rather than stored in plain form. An adversary can apply an arbitrary tampering function to the encoded message, but the code guarantees that the output is either the original message or completely unrelated to it. In this work, we propose a computationally secure non-malleable code from leakage-resilient authenticated encryption together with a 1-more extractable hash function in the split-state model, with no common reference string (CRS)-based trusted setup. Earlier constructions of non-malleable codes cannot handle an adversary with access to an arbitrary decryption leakage function (i.e., leakage during decoding of the codeword) that yields partial information about the codeword. The proposed construction handles such decryption leakage alongside tampering attacks.
Jesús-Javier Chi-Domínguez, Amalia Pizarro-Madariaga, Edgardo Riquelme
ePrint Report
There is increasing interest in efficiently computing isogenies with a kernel of large-smooth size, for instance, as a building block for secure Proofs of Knowledge (PoK) with isogenies whose degree equals a power of a small prime. Another example corresponds to the attacks initiated by Castryck and Decru and followed up by Maino-Martindale and Robert, which require calculating isogenies over superspecial principally polarized abelian surfaces (superspecial PPAS). On the other side of cryptanalysis, some of the current state-of-the-art secure isogeny-based PoK constructions extend to the case of superspecial PPAS, with the property that one could use smaller fields (e.g., 128, 192, and 256 bits).

This work presents a general framework that generalizes the situation of computing isogenies of the large-smooth degree to the context of quotient groups. More precisely, we abstract and propose a generalization of the strategy technique by Jao, De Feo, and Plût. Such a framework provides an efficient generic algorithm that easily applies to computing isogenies over superspecial PPAS when given the isogeny kernel. Additionally, our algorithm induces an efficient algorithm to perform the KernelToIsogeny procedure required in SQISignHD.

To illustrate the impact of optimal strategies, we report experiments on the isogenies over superspecial PPAS required in the Castryck-Decru attack (powers of two and three). Our experiments show a speedup of 1.25x over the state-of-the-art (about 20% savings). Our results should be viewed as a proof-of-concept implementation, to be considered for optimized C-language implementations.
Jesús-Javier Chi-Domínguez, Andre Esser, Sabrina Kunzweiler, Alexander May
ePrint Report
Despite recent breakthrough results in attacking SIDH, the CSIDH protocol remains a secure post-quantum key exchange protocol with appealing properties. However, for obtaining efficient CSIDH instantiations one has to resort to small secret keys. In this work, we provide novel methods to analyze small key CSIDH, thereby introducing the representation method ---that has been successfully applied for attacking small secret keys in code- and lattice-based schemes--- also to the isogeny-based world.

We use the recently introduced Restricted Effective Group Actions ($\mathsf{REGA}$) to illustrate the analogy between CSIDH and Diffie-Hellman key exchange. This framework allows us to introduce a $\mathsf{REGA}\text{-}\mathsf{DLOG}$ problem as a level of abstraction to computing isogenies between elliptic curves, analogous to the classic discrete logarithm problem. This in turn allows us to study $\mathsf{REGA}\text{-}\mathsf{DLOG}$ with ternary key spaces such as $\{-1, 0, 1\}^n, \{0,1,2\}^n$ and $\{-2,0,2\}^n$, which lead to especially efficient, recently proposed CSIDH instantiations. The best classic attack on these key spaces is a Meet-in-the-Middle algorithm that runs in time $3^{0.5 n}$, using also $3^{0.5 n}$ memory.

We first show that $\mathsf{REGA}\text{-}\mathsf{DLOG}$ with ternary key spaces $\{0,1,2\}^n$ or $\{-2,0,2\}^n$ can be reduced to the ternary key space $\{-1,0,1\}^n$.

We further provide a heuristic time-memory tradeoff for $\mathsf{REGA}\text{-}\mathsf{DLOG}$ with keyspace $\{-1,0,1\}^n$ based on Parallel Collision Search with memory requirement $M$ that under standard heuristics runs in time $3^{0.75 n}/M^{0.5}$ for all $M \leq 3^{n/2}$. We then use the representation technique to heuristically improve to $3^{0.675n}/M^{0.5}$ for all $M \leq 3^{0.22 n}$, and further provide more efficient time-memory tradeoffs for all $M \leq 3^{n/2}$.

Although we focus in this work on $\mathsf{REGA}\text{-}\mathsf{DLOG}$ with ternary key spaces for showing its efficacy in providing attractive time-memory tradeoffs, we also show how to use our framework to analyze larger key spaces $\{-m, \ldots, m\}^n$ with $m = 2,3$.
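The $3^{0.5n}$ meet-in-the-middle baseline the abstract compares against can be sketched on a toy commutative "action": here plain modular exponentiation stands in for the (much harder) isogeny group action, and all parameters are illustrative choices of ours.

```python
# Toy meet-in-the-middle key recovery for a ternary key in {-1,0,1}^n,
# enumerating 3^{n/2} half-keys on each side (illustration only).
from itertools import product

p = 10007                                   # small prime; real groups are huge
g = [pow(5, 3 ** i, p) for i in range(6)]   # public "generators", n = 6
secret = (1, -1, 0, 1, 1, -1)               # ternary key in {-1, 0, 1}^n

def act(key, gens):
    # Apply the toy group action: product of gens[i]^key[i] mod p.
    r = 1
    for s, gi in zip(key, gens):
        r = r * pow(gi, s % (p - 1), p) % p
    return r

t = act(secret, g)                          # public key / DLOG target

def mitm(t, gens):
    n, h = len(gens), len(gens) // 2
    # Store all 3^{n/2} left halves in a table ...
    table = {act(left, gens[:h]): left for left in product((-1, 0, 1), repeat=h)}
    # ... then peel off each right-half guess and look for a collision.
    for right in product((-1, 0, 1), repeat=n - h):
        partial = t * pow(act(right, gens[h:]), p - 2, p) % p
        if partial in table:
            return table[partial] + right
    return None

print(mitm(t, g))  # recovers the ternary key
```

Both time and memory here are $3^{n/2}$, matching the classic baseline; the paper's contribution is trading memory against time below this point via parallel collision search and representations.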
George Tasopoulos, Charis Dimopoulos, Apostolos P. Fournaris, Raymond K. Zhao, Amin Sakzad, Ron Steinfeld
ePrint Report
Post-quantum cryptography (PQC) has, in the past few years, been the main driving force of the quantum-resistance transition for security primitives, protocols, and tools. TLS is one of the widely used security protocols that needs to be made quantum safe. However, integrating PQC algorithms into TLS introduces various implementation overheads compared to traditional TLS that cannot be overlooked on battery-powered embedded devices with constrained resources. While several works evaluate the execution-time overhead of PQ TLS on embedded systems, only a few explore its energy consumption cost. In this paper, we provide a thorough power/energy-consumption evaluation and analysis of a PQ TLS 1.3 embedded-system implementation. We use a custom WolfSSL PQ TLS 1.3 implementation that integrates all the NIST PQC algorithms selected for standardization as well as those evaluated in NIST Round 4; the BSI recommendations are also included. The various PQ TLS versions are deployed on an STM Nucleo evaluation board under mutual and unilateral client-server authentication scenarios, and the collected power- and energy-consumption results are analyzed in detail. The comparisons and overall analysis yield very interesting results, indicating that the choice of PQC algorithm combination for TLS 1.3 deployment on an embedded system may differ considerably depending on whether the device is used as an authenticated or unauthenticated client or server. The results also indicate that, in some cases, PQ TLS 1.3 implementations can be as energy efficient as, or more energy efficient than, traditional TLS 1.3.
Matthias Probst, Manuel Brosch, Georg Sigl
ePrint Report
Spiking neural networks are gaining attention due to their low-power properties and event-based operation, which make them suitable for use in resource-constrained embedded devices. Such edge devices allow physical access, opening the door to side-channel analysis. In this work, we reverse engineer the parameters of a feed-forward spiking neural network implementation with correlation power analysis. Localized measurements of electromagnetic emanations enable our attack, despite the inherent parallelism and resulting algorithmic noise of the network. We provide a methodology to extract valuable parameters of integrate-and-fire neurons in all layers, as well as the layer sizes.
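The statistical core of correlation power analysis can be sketched as follows. The Hamming-weight leakage model, the simulated traces, and every number below are our own illustrative choices, not the paper's measured EM setup.

```python
# Minimal CPA sketch: rank parameter guesses by correlation with noisy traces.
import random

random.seed(1)
HW = [bin(v).count("1") for v in range(256)]     # Hamming-weight lookup table

secret_weight = 5                                # parameter the attacker recovers
inputs = [random.randrange(256) for _ in range(2000)]
# Simulated traces: leakage ~ Hamming weight of an internal product, plus
# Gaussian noise standing in for the network's algorithmic noise.
traces = [HW[(secret_weight * x) % 256] + random.gauss(0, 1.0) for x in inputs]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def cpa_guess(inputs, traces, candidates):
    # The candidate whose hypothetical leakage best correlates with the
    # measured traces is taken as the recovered parameter.
    return max(candidates,
               key=lambda w: abs(pearson([HW[(w * x) % 256] for x in inputs], traces)))

print(cpa_guess(inputs, traces, range(1, 16)))
```

Real attacks replace the simulated traces with localized EM measurements and a leakage model fitted to the implementation.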
Shuailiang Hu
ePrint Report
Homomorphic encryption requires the ciphertexts to be homomorphic, so that operations between ciphertexts are reflected in the plaintexts. Fully homomorphic encryption requires that the encryption algorithm satisfy additive and multiplicative homomorphism at the same time. At present, there are many fully homomorphic encryption schemes, based on, for example, ideal lattices, the AGCD problem, the LWE problem, and the RLWE problem. But the efficiency, ciphertext length, and computational limits of fully homomorphic encryption schemes are still problems that need further study.

Based on Lagrange interpolation polynomials, we propose a fully homomorphic encryption scheme whose security rests on the difficulty of finding roots of a polynomial of degree at least two modulo $n = pq$, where $p$ and $q$ are both private large primes. We construct polynomials $trap_1$ and $trap_0$ to generate the ciphertext of a message $m$, so that computation between ciphertexts acts directly on the plaintexts. Our scheme is secure as long as the Rabin encryption algorithm cannot be broken.
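The interpolation building block the scheme relies on can be sketched directly: the unique low-degree polynomial through a set of points, evaluated modulo a composite $n = pq$. The polynomials $trap_1$ and $trap_0$ from the abstract are not reconstructed here; this only shows Lagrange interpolation mod $n$ with toy primes.

```python
# Lagrange interpolation modulo n = p*q (illustrative sketch, tiny primes).
p, q = 11, 13            # real schemes use large private primes
n = p * q

def lagrange_poly_eval(points, x):
    # Evaluate the unique degree-(k-1) polynomial through `points` at x, mod n.
    # Requires the x-coordinate differences to be invertible mod n.
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * (x - xj)) % n
                den = (den * (xi - xj)) % n
        total = (total + yi * num * pow(den, -1, n)) % n   # pow(-1) = mod inverse
    return total

pts = [(1, 5), (2, 9), (3, 2)]
for xi, yi in pts:
    assert lagrange_poly_eval(pts, xi) == yi % n   # interpolant passes through pts
```

Note that `pow(den, -1, n)` (Python 3.8+) computes a modular inverse; finding roots of such polynomials mod $n$ without knowing $p$ and $q$ is the hard problem the abstract invokes.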

07 April 2023

Wouter Legiest, Jan-Pieter D'Anvers, Michiel Van Beirendonck, Furkan Turan, Ingrid Verbauwhede
ePrint Report
Homomorphic encryption (HE) enables calculating on encrypted data, which makes it possible to perform privacy-preserving neural network inference. One disadvantage of this technique is that it is several orders of magnitude slower than calculation on unencrypted data. Neural networks are commonly trained using floating-point arithmetic, while most homomorphic encryption libraries calculate on integers, thus requiring a quantisation of the neural network. A straightforward approach would be to quantise to large integer sizes (e.g., 32 bit) to avoid large quantisation errors. In this work, we reduce the integer sizes of the networks, using quantisation-aware training, to allow more efficient computations. For the targeted MNIST architecture proposed by Badawi et al., we reduce the integer sizes by 33% without significant loss of accuracy, while for the CIFAR architecture, we can reduce the integer sizes by 43%. Implementing the resulting networks under the BFV homomorphic encryption scheme using SEAL, we reduce the execution time of an MNIST neural network by 80% and that of a CIFAR neural network by 40%.
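The basic quantisation step can be sketched as follows. This is a plain post-training quantiser of our own, illustrating only the bit-width vs. error trade-off, not the quantisation-aware training used in the paper.

```python
# Symmetric uniform quantisation of float weights to signed `bits`-bit integers
# (illustrative sketch: smaller integers -> cheaper HE circuit, larger error).
def quantise(weights, bits):
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8 bits, 7 for 4 bits
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

w = [0.73, -0.21, 0.05, -0.98]
q8, s8 = quantise(w, 8)
q4, s4 = quantise(w, 4)
# Worst-case reconstruction error grows as the integer size shrinks.
err8 = max(abs(q * s8 - x) for q, x in zip(q8, w))
err4 = max(abs(q * s4 - x) for q, x in zip(q4, w))
print(q4, err8 < err4)
```

Quantisation-aware training, as used in the paper, compensates for this error during training so that smaller integer sizes stay accurate.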
Nico Döttling, Phillip Gajland, Giulio Malavolta
ePrint Report
Laconic function evaluation (LFE) allows Alice to compress a large circuit $\mathbf{C}$ into a small digest $\mathsf{d}$. Given Alice's digest, Bob can encrypt some input $x$ under $\mathsf{d}$ in a way that enables Alice to recover $\mathbf{C}(x)$, without learning anything beyond that. The scheme is said to be $laconic$ if the size of $\mathsf{d}$, the runtime of the encryption algorithm, and the size of the ciphertext are all sublinear in the size of $\mathbf{C}$. Until now, all known LFE constructions have ciphertexts whose size depends on the $depth$ of the circuit $\mathbf{C}$, akin to the limitation of $levelled$ homomorphic encryption. In this work we close this gap and present the first LFE scheme (for Turing machines) with asymptotically optimal parameters. Our scheme assumes the existence of indistinguishability obfuscation and somewhere statistically binding hash functions.

As further contributions, we show how our scheme enables a wide range of new applications, including two previously unknown constructions:

• Non-interactive zero-knowledge (NIZK) proofs with optimal prover complexity.

• Witness encryption and attribute-based encryption (ABE) for Turing machines from falsifiable assumptions.
Marshall Ball, Hanjun Li, Huijia Lin, Tianren Liu
ePrint Report
The beautiful work of Applebaum, Ishai, and Kushilevitz [FOCS'11] initiated the study of arithmetic variants of Yao's garbled circuits. An arithmetic garbling scheme is an efficient transformation that converts an arithmetic circuit $C: \mathcal{R}^n \rightarrow \mathcal{R}^m$ over a ring $\mathcal{R}$ into a garbled circuit $\widehat C$ and $n$ affine functions $L_i$ for $i \in [n]$, such that $\widehat C$ and $L_i(x_i)$ reveal only the output $C(x)$ and no other information about $x$. AIK presented the first arithmetic garbling scheme supporting computation over integers from a bounded (possibly exponentially large) range, based on Learning With Errors (LWE). In contrast, converting $C$ into a Boolean circuit and applying Yao's garbled circuit treats the inputs as bit strings instead of ring elements, and hence is not "arithmetic".

In this work, we present new ways to garble arithmetic circuits, which improve the state of the art in efficiency, modularity, and functionality. To measure efficiency, we define the rate of a garbling scheme as the maximal ratio between the bit-length of the garbled circuit $|\widehat C|$ and that of the computation tableau $|C|\ell$ in the clear, where $\ell$ is the bit length of wire values (e.g., Yao's garbled circuit has rate $O(\lambda)$).

• We present the first constant-rate arithmetic garbled circuit for computation over large integers based on the Decisional Composite Residuosity (DCR) assumption, significantly improving the efficiency of the schemes of Applebaum, Ishai, and Kushilevitz.

• We construct an arithmetic garbling scheme for modular computation over $\mathcal{R} = \mathbb{Z}_p$ for any integer modulus $p$, based on either DCR or LWE. The DCR-based instantiation achieves rate $O(\lambda)$ for large $p$. Furthermore, our construction is modular and makes black-box use of the underlying ring and a simple key-extension gadget.

• We describe a variant of the first scheme supporting arithmetic circuits over bounded integers that are augmented with Boolean computation (e.g., truncation of an integer value, and comparison between two values), while keeping the constant rate when garbling the arithmetic part.

To the best of our knowledge, constant-rate (Boolean or arithmetic) garbling was only achieved before using the powerful primitive of indistinguishability obfuscation, or for restricted circuits with small depth.
Giulio Malavolta, Michael Walter
ePrint Report
Quantum key distribution (QKD) allows Alice and Bob to agree on a shared secret key, while communicating over a public (untrusted) quantum channel. Compared to classical key exchange, it has two main advantages:

(i) The key is unconditionally hidden from any attacker, and

(ii) its security assumes only the existence of authenticated classical channels which, in practice, can be realized using Minicrypt assumptions, such as the existence of digital signatures.

On the flip side, QKD protocols typically require multiple rounds of interactions, whereas classical key exchange can be realized with the minimal amount of two messages. A long-standing open question is whether QKD requires more rounds of interaction than classical key exchange.

In this work, we propose a two-message QKD protocol that satisfies everlasting security, assuming only the existence of quantum-secure one-way functions. That is, the shared key is unconditionally hidden, provided computational assumptions hold during the protocol execution. Our result follows from a new quantum cryptographic primitive that we introduce in this work: the quantum-public-key one-time pad, a public-key analogue of the well-known one-time pad.
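The classical primitive that the new quantum-public-key one-time pad is an analogue of can be stated in a few lines; this is the textbook one-time pad, included only as background, not the paper's construction.

```python
# Classical one-time pad: XOR with a uniformly random, single-use key of the
# same length as the message gives unconditional (information-theoretic) secrecy.
import secrets

def otp_encrypt(key, msg):
    assert len(key) == len(msg)          # key must be as long as the message
    return bytes(k ^ m for k, m in zip(key, msg))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))      # fresh random key, never reused
ct = otp_encrypt(key, msg)
assert otp_encrypt(key, ct) == msg       # decryption = encryption (XOR involution)
```

The pad's limitation, a pre-shared key as long as the message, is exactly what key-agreement protocols such as QKD are meant to overcome.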
Andreas Brüggemann, Robin Hundt, Thomas Schneider, Ajith Suresh, Hossein Yalame
ePrint Report
The concept of using Lookup Tables (LUTs) instead of Boolean circuits is well known and has been widely applied in a variety of applications, including FPGAs, image processing, and database management systems. In cryptography, using such LUTs instead of conventional gates like AND and XOR results in more compact circuits and has been shown to substantially improve online performance when evaluated with secure multi-party computation. Several recent works on secure floating-point computations and privacy-preserving machine learning inference rely heavily on existing LUT techniques. However, they suffer from either large overhead in the setup phase or subpar online performance.

We propose FLUTE, a novel protocol for secure LUT evaluation with good setup and online performance. In a two-party setting, we show that FLUTE matches or even outperforms the online performance of all prior approaches, while being competitive in terms of overall performance with the best prior LUT protocols. In addition, we provide an open-source implementation of FLUTE written in the Rust programming language, and implementations of the Boolean secure two-party computation protocols of ABY2.0 and silent OT. We find that FLUTE outperforms the state of the art by two orders of magnitude in the online phase while retaining similar overall communication.
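Why a LUT can replace a network of AND/XOR gates is easy to see in the clear (this plain, non-MPC sketch is ours; FLUTE's contribution is evaluating such tables securely between two parties):

```python
# A full adder as a gate network vs. as two 8-entry lookup tables.
def full_adder_gates(a, b, cin):
    s = a ^ b ^ cin                      # XOR/AND gate network
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# Precompute the same function as lookup tables indexed by the input bits.
SUM_LUT   = [full_adder_gates(a, b, c)[0] for a in (0, 1) for b in (0, 1) for c in (0, 1)]
CARRY_LUT = [full_adder_gates(a, b, c)[1] for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def full_adder_lut(a, b, cin):
    idx = (a << 2) | (b << 1) | cin      # pack the 3 input bits into an index
    return SUM_LUT[idx], CARRY_LUT[idx]

# Both representations compute the identical function.
assert all(full_adder_gates(a, b, c) == full_adder_lut(a, b, c)
           for a in (0, 1) for b in (0, 1) for c in (0, 1))
```

In secure computation, one LUT evaluation replaces several interactive gate evaluations, which is the source of the online-performance gains the abstract reports.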
Foteini Baldimtsi, Konstantinos Kryptos Chalkias, Francois Garillot, Jonas Lindstrom, Ben Riva, Arnab Roy, Alberto Sonnino, Pun Waiwitlikhit, Joy Wang
ePrint Report
We propose a variant of the original Boneh, Drijvers, and Neven (Asiacrypt '18) BLS multi-signature aggregation scheme best suited to applications where the full set of potential signers is fixed and known and any subset $I$ of this group can create a multi-signature over a message $m$. This setup is very common in proof-of-stake blockchains, where a $2f+1$ majority of $3f$ validators signs transactions and/or blocks. Our scheme is secure against $\textit{rogue-key}$ attacks without requiring a proof-of-key-possession mechanism.

In our scheme, instead of randomizing the aggregated signatures, we have a one-time randomization phase of the public keys: each public key is replaced by a sticky randomized version (for which each participant can still compute the derived private key). The main benefit compared to the original Boneh et al. approach is that, since our randomization process happens only once and not per signature, we can achieve significant savings during aggregation and verification. Specifically, for a subset $I$ of $t$ signers, we save $t$ exponentiations in $\mathbb{G}_2$ at aggregation and $t$ exponentiations in $\mathbb{G}_1$ at verification or vice versa, depending on which BLS mode we prefer: $\textit{minPK}$ (public keys in $\mathbb{G}_1$) or $\textit{minSig}$ (signatures in $\mathbb{G}_1$).

Interestingly, our security proof requires a significant departure from the co-CDH based proof of Boneh et al. When $n$ (size of the universal set of signers) is small, we prove our protocol secure in the Algebraic Group and Random Oracle models based on the Discrete Log problem. For larger $n$, our proof also requires the Random Modular Subset Sum (RMSS) problem.
Sergey Agievich
ePrint Report
Using the representation of bent functions by bent rectangles, that is, special matrices with restrictions on columns and rows, we obtain an upper bound on the number of bent functions that improves previously known bounds in a practical range of dimensions. The core of our method is the following fact based on the recent observation by Potapov (arXiv:2107.14583): a 2-row bent rectangle is completely defined by one of its rows and the remaining values in slightly more than half of the columns.
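The defining property of a bent function can be checked directly from its Walsh spectrum: every coefficient has absolute value $2^{n/2}$. A quick sanity check (our illustration, unrelated to the paper's counting method) for the classic bent function $f(x_1,\ldots,x_4) = x_1 x_2 \oplus x_3 x_4$:

```python
# Verify bentness of f(x1..x4) = x1*x2 XOR x3*x4 via its Walsh spectrum.
n = 4

def f(x):
    x1, x2, x3, x4 = (x >> 3) & 1, (x >> 2) & 1, (x >> 1) & 1, x & 1
    return (x1 & x2) ^ (x3 & x4)

def walsh(f, a):
    # W_f(a) = sum over x of (-1)^(f(x) XOR <a, x>)
    return sum((-1) ** (f(x) ^ (bin(a & x).count("1") % 2)) for x in range(2 ** n))

spectrum = [walsh(f, a) for a in range(2 ** n)]
print(all(abs(w) == 2 ** (n // 2) for w in spectrum))  # flat spectrum <=> bent
```

Counting all functions with this flat-spectrum property is the hard enumeration problem the paper's rectangle-based bound addresses.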
Xichao Hu, Yongqiang Li, Lin Jiao, Zhengbin Liu, Mingsheng Wang
ePrint Report
The zero-correlation linear attack is a powerful attack on block ciphers; the lowest number of rounds (LNR) for which no distinguisher (a zero-correlation linear approximation, ZCLA) exists reflects a block cipher's resistance to this attack. However, due to the large search space, showing that no ZCLAs exist for a given block cipher under a certain number of rounds is a very hard task. Thus, existing works can only prove the absence of ZCLAs in a small search space, such as ZCLAs with 1-bit/nibble/word input and output activity, which leaves a very large gap to showing that no ZCLAs exist in the whole search space.

In this paper, we propose the meet-in-the-middle method and the double-collision method to show that no ZCLAs exist in the whole search space. The basic ideas of these two methods are very simple, but they work very effectively. We apply both methods to AES, Midori64, and ARIA, and show that no ZCLAs exist for $5$-round AES without the last Mix-Column layer, $7$-round Midori64 without the last Mix-Column layer, and $5$-round ARIA without the last linear layer.

As far as we know, ours is the first automatic method that can show that no ZCLAs exist in the whole search space. This provides strong evidence for the security of a block cipher against the zero-correlation linear attack at the distinguisher level, a feature that is very useful when designing block ciphers.