International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

29 November 2021

Jipeng Zhang, Junhao Huang, Zhe Liu, Sujoy Sinha Roy
ePrint Report
Saber is a module-lattice-based key encapsulation scheme that has been selected as a finalist in the NIST Post-Quantum Cryptography Standardization Project. As Saber computes on considerably large matrices and vectors of polynomials, its efficient implementation on memory-constrained IoT devices is very challenging. In this paper, we present an implementation of Saber with a minor tweak to the original Saber protocol for achieving reduced memory consumption and better performance. We call this tweaked implementation `Saber+'; the difference compared to Saber is that we use different generation methods for the public matrix \(\boldsymbol{A}\) and the secret vector \(\boldsymbol{s}\) for memory optimization. Our highly optimized software implementation of Saber+ on a memory-constrained RISC-V platform achieves a 48\% performance improvement compared with the best state-of-the-art memory-optimized implementation of original Saber. Specifically, we present various memory and performance optimizations for Saber+ on a memory-constrained RISC-V microcontroller, with merely 16KB of memory available. We utilize the Number Theoretic Transform (NTT) to speed up the polynomial multiplication in Saber+. For optimizing cycle counts and memory consumption during NTT, we carefully compare the efficiency of the complete and incomplete NTTs, with platform-specific optimizations. We implement 4-layer merging in the complete NTT and 3-layer merging in the 6-layer incomplete NTT. An improved on-the-fly generation strategy for the public matrix and secret vector in Saber+ results in a low memory footprint. Furthermore, by combining different optimization strategies, various time-memory trade-offs are explored. Our software implementation of Saber+ on the selected RISC-V core takes just 3,809K, 3,594K, and 3,193K clock cycles for key generation, encapsulation, and decapsulation, respectively, while consuming only 4.8KB of stack at most.
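To make the NTT machinery concrete, here is a small Python sketch (not the authors' code) of negacyclic polynomial multiplication via an NTT. Saber's own moduli are powers of two, so NTT-based implementations like the one described first map coefficients into a larger NTT-friendly prime field; the prime, the tiny degree, and the direct O(N^2) evaluation below are demonstration choices, whereas the paper's implementation uses O(N log N) butterflies with merged layers.

import random

Q = 7681   # NTT-friendly prime: 2*N divides Q-1
N = 8      # tiny degree for the demo (Saber uses N = 256)

def find_root(order, q):
    # Return an element of multiplicative order `order` mod q
    # (for power-of-two `order`, the check below suffices).
    for g in range(2, q):
        if pow(g, order, q) == 1 and pow(g, order // 2, q) != 1:
            return g
    raise ValueError("no root found")

PSI = find_root(2 * N, Q)  # primitive 2N-th root of unity (negacyclic twist)

def ntt(a):
    # Direct O(N^2) evaluation at odd powers of PSI; optimized code
    # uses O(N log N) butterflies with merged layers instead.
    return [sum(a[j] * pow(PSI, (2 * i + 1) * j, Q) for j in range(N)) % Q
            for i in range(N)]

def poly_mul_schoolbook(a, b):
    # Reference multiplication mod X^N + 1 (negacyclic convolution).
    c = [0] * N
    for i in range(N):
        for j in range(N):
            k, s = (i + j) % N, 1 if i + j < N else -1
            c[k] = (c[k] + s * a[i] * b[j]) % Q
    return c

a = [random.randrange(Q) for _ in range(N)]
b = [random.randrange(Q) for _ in range(N)]
pointwise = [x * y % Q for x, y in zip(ntt(a), ntt(b))]
assert pointwise == ntt(poly_mul_schoolbook(a, b))
print("pointwise product in the NTT domain matches the negacyclic product")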
Ziaur Rahman, Xun Yi, Ibrahim Khalil, Andrei Kelarev
ePrint Report
The world has been experiencing a mind-blowing expansion of blockchain technology since it was first introduced as the emerging means of cryptocurrency called Bitcoin. Currently, it is regarded as a pervasive frame of reference across almost all research domains, ranging from virtual cash to agriculture, or even supply chains to the Internet of Things. The ability to have a self-administering register with legitimate immutability makes blockchain appealing for the Internet of Things (IoT). As billions of IoT devices are now online in a distributed fashion, huge challenges and questions need to be addressed in pursuit of urgently needed solutions. The present paper is motivated by the aim of facilitating such efforts. Its contribution is to identify the trade-offs the IoT ecosystem usually encounters because of the wrong choice of blockchain technology. Unlike a survey or review, the critical findings of this paper target specific security challenges of blockchain-IoT infrastructure. The contribution includes directing developers and researchers in this domain towards unblemished combinations of blockchain-enabled IoT applications. In addition, the paper provides a deep insight into Ethereum, Hyperledger, and IOTA technology to show their limitations and prospects in terms of performance and scalability.
Ziaur Rahman, Ibrahim Khalil, Mousumi Sumi
ePrint Report
Several efforts have claimed that lightweight block ciphers are a suitable substitute for securing the Internet of Things, and they are now envisaged as a pervasive frame of reference for privacy preservation across smart and sensor-oriented appliances. Many approaches, however, trade the desired degree of security for simplicity and ease of processing. Strengthening a well-known symmetric, block-dependent algorithm using either a chaos-motivated logistic map or an elliptic curve has shown far-reaching potential for secure real-time communication. The characteristic features of logistic maps, such as unforeseeability and randomness, lend themselves to dynamic key propagation in sync with chaos-based scheduling techniques towards data integrity. Since a single-bit alteration in the key can produce an outsized deviation in the output, this also helps to leverage data confidentiality. However, the associated time consumption can challenge instant data exchange between participating node entities. Considering the delay latency of both secure encryption and decryption, the proposed approach modifies the key-origination matrix along with the S-box. Our results indicate that the time required is proportional to the length of the plaintext sent and disproportionate to the probability of a letter occurring in the message. In line with that, the effort shows how the apparent chaos escalates the desired key initiation before message transmission.
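As a purely illustrative companion to the abstract, the following Python sketch shows how a logistic map can drive dynamic key propagation. The parameter r, the seeds, and the byte-extraction rule are our own assumptions, and a chaos-based keystream of this kind would need dedicated cryptanalysis before any real use.

def logistic_key_bytes(seed, r=3.99, n_bytes=16, burn_in=100):
    # Iterate x -> r*x*(1-x) and harvest one byte per iteration.
    x = seed
    for _ in range(burn_in):            # discard transient iterations
        x = r * x * (1.0 - x)
    out = bytearray()
    for _ in range(n_bytes):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)  # crude byte extraction
    return bytes(out)

# Sensitivity to initial conditions: a tiny change in the seed yields
# a completely different key, the property the abstract leans on.
print(logistic_key_bytes(0.41350000).hex())
print(logistic_key_bytes(0.41350001).hex())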
Mariana Botelho da Gama, John Cartlidge, Antigoni Polychroniadou, Nigel P. Smart, Younes Talibi Alaoui
ePrint Report
We examine bucket-based and volume-based algorithms for privacy-preserving asset trading in a financial dark pool. Our bucket-based algorithm places orders in quantised buckets, whereas the volume-based algorithm allows any volume size but requires more complex validation mechanisms. In all cases, we conclude that these algorithms are highly efficient and offer a practical solution to the commercial problem of preserving privacy of order information in a dark pool trading venue.
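The bucket-based idea admits a minimal plaintext sketch in Python. In the paper this logic runs under MPC so that order information stays hidden; the bucket size below is an assumed parameter for illustration only.

BUCKET = 100  # assumed volume quantum

def to_buckets(volume):
    return volume // BUCKET            # orders are rounded down to buckets

def match(buy_orders, sell_orders):
    # Match quantised buy/sell volume; return the number of crossed buckets.
    buy = sum(to_buckets(v) for v in buy_orders)
    sell = sum(to_buckets(v) for v in sell_orders)
    return min(buy, sell)

print(match([250, 430], [300, 120]))   # 4 buckets (400 units) cross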
Huck Bennett, Atul Ganju, Pura Peetathawatchai, Noah Stephens-Davidowitz
ePrint Report
We study the computational problem of finding a shortest non-zero vector in a rotation of $\mathbb{Z}^n$, which we call $\mathbb{Z}$SVP. It has been a long-standing open problem to determine if a polynomial-time algorithm for $\mathbb{Z}$SVP exists, and there is by now a beautiful line of work showing how to solve it efficiently in certain special cases. However, despite all of this work, the fastest known algorithm that is proven to solve $\mathbb{Z}$SVP is still simply the fastest known algorithm for solving SVP (i.e., the problem of finding shortest non-zero vectors in arbitrary lattices), which runs in $2^{n + o(n)}$ time. We therefore set aside the (perhaps impossible) goal of finding an efficient algorithm for $\mathbb{Z}$SVP and instead ask what else we can say about the problem. E.g., can we find any non-trivial speedup over the best known SVP algorithm? And, what consequences would follow if $\mathbb{Z}$SVP actually is hard? Our results are as follows.

1) We show that $\mathbb{Z}$SVP is in a certain sense strictly easier than SVP on arbitrary lattices. In particular, we show how to reduce $\mathbb{Z}$SVP to an approximate version of SVP in the same dimension (in fact, even to approximate unique SVP, for any constant approximation factor). Such a reduction seems very unlikely to work for SVP itself, so we view this as a qualitative separation of $\mathbb{Z}$SVP from SVP. As a consequence of this reduction, we obtain a $2^{0.802n}$-time algorithm for $\mathbb{Z}$SVP, i.e., a non-trivial speedup over the best known algorithm for SVP on general lattices.

2) We show a simple public-key encryption scheme that is secure if (an appropriate variant of) $\mathbb{Z}$SVP is actually hard. Specifically, our scheme is secure if it is difficult to distinguish (in the worst case) a rotation of $\mathbb{Z}^n$ from either a lattice with all non-zero vectors longer than $\sqrt{n/\log n}$ or a lattice with smoothing parameter significantly smaller than the smoothing parameter of $\mathbb{Z}^n$. The latter result has an interesting qualitative connection with reverse Minkowski theorems, which in some sense say that ``$\mathbb{Z}^n$ has the largest smoothing parameter.''

3) We show a distribution of bases $B$ for rotations of $\mathbb{Z}^n$ such that, if $\mathbb{Z}$SVP is hard for any input basis, then $\mathbb{Z}$SVP is hard on input $B$. This gives a satisfying theoretical resolution to the problem of sampling hard bases for $\mathbb{Z}^n$, which was studied by Blanks and Miller (PQCrypto, 2021). This worst-case to average-case reduction is also crucially used in the analysis of our encryption scheme. (In recent independent work that appeared as a preprint before this work, Ducas and van Woerden showed essentially the same thing for general lattices (ia.cr/2021/1332), and they also used this to analyze the security of a public-key encryption scheme.)

4) We perform experiments to determine how practical basis reduction performs on different bases of $\mathbb{Z}^n$. These experiments complement and add to those performed by Blanks and Miller, as we work with a larger class of reduction algorithms (i.e., larger block sizes) and study the ``provably hard'' distribution of bases described above. We also observe a threshold phenomenon in which ``basis reduction algorithms on $\mathbb{Z}^n$ nearly always find a shortest non-zero vector once they have found a vector with length less than $\sqrt{n}/2$,'' and we explore this further.
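In the spirit of these experiments, the Python sketch below samples a random-looking basis of $\mathbb{Z}^n$ by scrambling the identity with elementary unimodular operations (a basis of $\mathbb{Z}^n$ itself rather than of a proper rotation, an assumption made to keep the demo integral, and not the provably hard distribution above), reduces it with LLL via fpylll if that library is installed, and compares the shortest vector found against the $\sqrt{n}/2$ threshold.

import math
import random

def random_basis_of_Zn(n, steps=60, bound=2):
    # Scramble the identity with random elementary row operations;
    # each operation is unimodular, so the rows always span Z^n.
    B = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(steps):
        i, j = random.sample(range(n), 2)
        c = random.randint(-bound, bound)
        for k in range(n):
            B[i][k] += c * B[j][k]
    return B

n = 20
B = random_basis_of_Zn(n)
try:
    from fpylll import IntegerMatrix, LLL
    A = IntegerMatrix.from_matrix(B)
    LLL.reduction(A)
    shortest = min(math.sqrt(sum(A[i, j] ** 2 for j in range(n)))
                   for i in range(n))
    # The threshold phenomenon: once a vector shorter than sqrt(n)/2 is
    # found, reduction nearly always goes on to find a unit vector.
    print(f"shortest reduced vector: {shortest:.2f}, "
          f"sqrt(n)/2 = {math.sqrt(n) / 2:.2f}")
except ImportError:
    print("install fpylll to run the reduction step")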
Chen Chen, Xiao Liang, Bogdan Carbunar, Radu Sion
ePrint Report
Data privacy is critical in instilling trust and empowering the societal pacts of modern technology-driven democracies. Unfortunately, it is under continuous attack by overreaching or outright oppressive governments, including some of the world's oldest democracies. Increasingly-intrusive anti-encryption laws severely limit the ability of standard encryption to protect privacy. New defense mechanisms are needed. Plausible deniability (PD) is a powerful property, enabling users to hide the existence of sensitive information in a system under direct inspection by adversaries. Popular encrypted storage systems such as TrueCrypt and other research efforts have attempted to also provide plausible deniability. Unfortunately, these efforts have often operated under less well-defined assumptions and adversarial models. Careful analyses often uncover not only high overheads but also outright security compromise. Further, our understanding of adversaries, the underlying storage technologies, and the available plausibly-deniable solutions has evolved dramatically in the past two decades. The main goal of this work is to systematize this knowledge. It aims to:
- identify key PD properties, requirements, and approaches;
- present a direly-needed unified framework for evaluating security and performance;
- explore the challenges arising from the critical interplay between PD and modern system layered stacks;
- propose a new "trace-oriented" PD paradigm, able to decouple security guarantees from the underlying systems and thus ensure a higher level of flexibility and security independent of the technology stack.
This work is meant also as a trusted guide for system and security practitioners around the major challenges in understanding, designing, and implementing plausible deniability into new or existing systems.
Damien Robissout, Lilian Bossuet, Amaury Habrard, Vincent Grosso
ePrint Report
The use of deep learning techniques to perform side-channel analysis has attracted the attention of many researchers, as good performance has been obtained with them. Unfortunately, the understanding of the neural networks used to perform side-channel attacks is not very advanced yet. In this paper, we contribute to this direction by studying the impact of some particular deep learning techniques on side-channel attack problems. More precisely, we focus on three existing techniques: batch normalization, dropout and weight decay, not yet used in the side-channel context. By adequately combining these techniques for our problem, we show that it is possible to improve the attack performance, i.e. the number of traces needed to recover the secret, by more than 55%. Additionally, they allow us to gain more than 34% in terms of training time. We also show that an architecture trained with such techniques is able to perform attacks efficiently even in the context of desynchronized traces.
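For readers who want to see the three techniques side by side, here is a hedged PyTorch sketch of a small profiling model for side-channel traces. The layer sizes and hyperparameters are placeholders, not the architecture evaluated in the paper.

import torch
import torch.nn as nn

class SCAModel(nn.Module):
    def __init__(self, trace_len=700, n_classes=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=11, padding=5),
            nn.BatchNorm1d(16),        # batch normalization
            nn.ReLU(),
            nn.AvgPool1d(2),
            nn.Flatten(),
            nn.Linear(16 * (trace_len // 2), 128),
            nn.ReLU(),
            nn.Dropout(0.3),           # dropout
            nn.Linear(128, n_classes), # one class per key-byte hypothesis
        )

    def forward(self, x):
        return self.net(x)

model = SCAModel()
# Weight decay (L2 regularization) is applied through the optimizer.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
traces = torch.randn(32, 1, 700)       # batch of 32 dummy traces
print(model(traces).shape)             # torch.Size([32, 256])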
Joachim Neu, Srivatsan Sridhar, Lei Yang, David Tse, Mohammad Alizadeh
ePrint Report
Satoshi Nakamoto's Proof-of-Work (PoW) longest chain (LC) protocol was a breakthrough for Internet-scale open-participation consensus. Many Proof-of-Stake (PoS) variants of Nakamoto's protocol such as Ouroboros or Snow White aim to preserve the advantages of LC by mimicking PoW LC closely, while mitigating downsides of PoW by using PoS for Sybil resistance. Previous works have proven these PoS LC protocols secure assuming all network messages are delivered within a bounded delay. However, this assumption is not compatible with PoS when considering bandwidth constraints in the underlying communication network. This is because PoS enables the adversary to reuse block production opportunities and spam the network with equivocating blocks, which is impossible in PoW. The bandwidth constraint necessitates that nodes choose carefully which blocks to spend their limited download budget on. We show that 'download along the longest header chain', a natural download rule for PoW LC, emulated by PoS variants, is insecure for PoS LC. Instead, we propose 'download towards the freshest block' and prove that PoS LC with this download rule is secure in bandwidth constrained networks. Our result can be viewed as a first step towards the co-design of consensus and network layer protocols.
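The scheduling decision at the heart of the proposed rule fits in a few lines of Python; the Block fields and the demo chain below are assumptions for illustration only.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Block:
    slot: int                       # production opportunity (freshness)
    parent: Optional["Block"] = None

def next_download_freshest(headers, downloaded):
    # The paper's rule: spend bandwidth towards the freshest block,
    # i.e. the pending block from the most recent slot; an adversary
    # can hoard and equivocate on old production opportunities, but
    # not on fresh ones.
    pending = [h for h in headers if h not in downloaded]
    target = max(pending, key=lambda h: h.slot)
    while target.parent is not None and target.parent not in downloaded:
        target = target.parent      # fetch missing ancestors first
    return target

# Demo: an old adversarial chain is long, but the honest block is fresher.
g = Block(slot=0)
a1 = Block(slot=1, parent=g)        # adversarial, old slot
a2 = Block(slot=2, parent=a1)
h1 = Block(slot=9, parent=g)        # honest, freshest
print(next_download_freshest([a1, a2, h1], {g}).slot)  # 9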
Kamilla Nazirkhanova, Joachim Neu, David Tse
ePrint Report
The ability to verifiably retrieve transaction or state data stored off-chain is crucial to blockchain scaling techniques such as rollups or sharding. We formalize the problem and design a storage- and communication-efficient protocol using linear erasure-correcting codes and homomorphic vector commitments. Motivated by application requirements for rollups, our solution departs from earlier Verifiable Information Dispersal schemes in that we do not require comprehensive termination properties, and we require retrievability not from any set of storage nodes but only from some known, sufficiently large set. Compared to Data Availability Oracles, under no circumstance do we fall back to returning empty blocks. Distributing a file of 28.8 MB among 900 storage nodes (up to 300 of which may be adversarial) requires in total approx. 95 MB of communication and storage and approx. 30 seconds of cryptographic computation on a single-threaded consumer-grade laptop computer. Our solution requires no modification to on-chain contracts of Validium rollups such as StarkWare's StarkEx. Additionally, it provides privacy of the dispersed data against honest-but-curious storage nodes.
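A back-of-the-envelope Python sketch reproduces the dispersal arithmetic behind the quoted numbers. The coding-rate choice k = n - 2f is a common setting assumed here for illustration, not necessarily the paper's exact parameterization, and commitments and proofs are not itemized.

FILE_MB = 28.8
N_NODES = 900          # n: storage nodes
N_ADV = 300            # f: at most this many may be adversarial

# Encode the file so that any k of n coded chunks reconstruct it; with
# up to f adversarial nodes, k <= n - 2f is a common conservative choice.
k = N_NODES - 2 * N_ADV            # 300
chunk_mb = FILE_MB / k             # data carried per node
total_mb = chunk_mb * N_NODES      # raw coded storage, before commitments
print(f"chunk: {chunk_mb * 1024:.0f} KB, coded total: {total_mb:.1f} MB")
# -> chunk: 98 KB, coded total: 86.4 MB; vector commitments and proofs
#    plausibly account for most of the remainder of the ~95 MB above.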

23 November 2021

Alex Lombardi, Fermi Ma, Nicholas Spooner
ePrint Report
A major difficulty in quantum rewinding is the fact that measurement is destructive: extracting information from a quantum state irreversibly changes it. This is especially problematic in the context of zero-knowledge simulation, where preserving the adversary's state is essential.

In this work, we develop new techniques for quantum rewinding in the context of extraction and zero-knowledge simulation:

(1) We show how to extract information from a quantum adversary by rewinding it without disturbing its internal state. We use this technique to prove that important interactive protocols, such as the Goldreich-Micali-Wigderson protocol for graph non-isomorphism and the Feige-Shamir protocol for NP, are zero-knowledge against quantum adversaries.

(2) We prove that the Goldreich-Kahan protocol for NP is post-quantum zero knowledge using a simulator that can be seen as a natural quantum extension of the classical simulator.

Our results achieve (constant-round) black-box zero-knowledge with negligible simulation error, appearing to contradict a recent impossibility result due to Chia-Chung-Liu-Yamakawa (FOCS 2021). This brings us to our final contribution:

(3) We introduce coherent-runtime expected quantum polynomial time, a computational model that (a) captures all of our zero-knowledge simulators, (b) cannot break any polynomial hardness assumptions, and (c) is not subject to the CCLY impossibility. In light of our positive results and the CCLY negative results, we propose coherent-runtime simulation to be the right quantum analogue of classical expected polynomial-time simulation.
Fahim Rahman, Farimah Farahmandi, Mark Tehranipoor
ePrint Report
FPGA bitstream encryption and authentication can be defeated by various techniques and it is critical to understand how these vulnerabilities enable extraction and tampering of commercial FPGA bitstreams. We exploit the physical vulnerability of bitstream encryption keys to readout using failure analysis equipment and conduct an end-to-end bitstream tamper attack. Our work underscores the feasibility of supply chain bitstream tampering and the necessity of guarding against such attacks in critical systems.
Shay Gueron, Ashwin Jha, Mridul Nandi
ePrint Report
COMETv1, by Gueron, Jha and Nandi, is a mode of operation for nonce-based authenticated encryption with associated data functionality. It was one of the second-round candidates in the ongoing NIST Lightweight Cryptography Standardization Process. In this paper, we study a generalized version of COMETv1, which we call gCOMET, from a provable-security perspective. First, we present a comprehensive and complete security proof for gCOMET in the ideal cipher model. Second, we view COMET, the underlying mode of operation in COMETv1, as an instantiation of gCOMET, and derive its concrete security bounds. Finally, we propose another instantiation of gCOMET, dubbed COMETv2, and show that this version achieves better security guarantees as well as more memory-efficient implementations than COMETv1.
George Teseleanu
ePrint Report
In this paper, we describe a provably secure authentication protocol for resource-limited devices. The proposed algorithm performs whole-network authentication using very few rounds and in time logarithmic in the number of nodes. Compared to one-to-one node authentication and previous proposals, our protocol is more efficient: it requires less communication and computation and, in turn, lower energy consumption.
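The Python sketch below is not the paper's protocol; it only illustrates why tree-shaped aggregation yields whole-network authentication in a number of rounds logarithmic in the node count. The demo keys and the hash-based aggregation rule are assumptions chosen purely for illustration.

import hashlib
import hmac

def tag(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def aggregate_round(tags):
    # One round: pair up sibling responses and hash each pair together.
    it = iter(tags)
    return [hashlib.sha256(a + b).digest() for a, b in zip(it, it)]

n = 16
node_keys = [hashlib.sha256(bytes([i])).digest() for i in range(n)]  # demo keys
challenge = b"epoch-42"
tags = [tag(k, challenge) for k in node_keys]   # every node answers once

rounds = 0
while len(tags) > 1:                # each round halves the responses
    tags = aggregate_round(tags)
    rounds += 1
print(f"{n} nodes aggregated in {rounds} rounds")  # log2(16) = 4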

22 November 2021

Zeta Avarikioti, Mahsa Bastankhah, Mohammad Ali Maddah-Ali, Krzysztof Pietrzak, Jakub Svoboda, Michelle Yeo
ePrint Report
In this work, we are the first to explore route discovery in private channel networks. We first determine what ``ideal'' privacy for a routing protocol means in this setting. We observe that protocols achieving this strong privacy definition exist by leveraging (topology hiding) Multi-Party Computation, but they are (inherently) inefficient, as route discovery must involve the entire network.

We then present protocols with weaker privacy guarantees but much better efficiency. In particular, route discovery typically involves only a small fraction of the nodes, but some information on the topology and balances -- beyond what is necessary for performing the transaction -- is leaked.

The core idea is that both sender and receiver gossip a message which then slowly propagates through the network, and the moment any node in the network receives both messages, a path is found. In our first protocol the message is always sent to all neighbouring nodes with a delay proportional to the fees of that edge. In our second protocol the message is only sent to one neighbour chosen randomly with a probability proportional to its degree. While the first instantiation always finds the cheapest path, the second might not, but it involves a smaller fraction of the network.

We also discuss some extensions to further improve privacy, such as employing bilinear maps so that the gossiped messages can be re-randomized, making them unlinkable.

Simulations of our protocols on the Lightning network topology (for random transactions and uniform fees) show that our first protocol (which finds the cheapest path) typically involves around 12\% of the 6376 nodes, while the second only touches around 18 nodes $(<0.3\%)$, and the cost of the path that is found is around twice the cost of the optimal one.
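A compact Python simulation of the two instantiations (on a toy networkx graph with made-up fees, not Lightning data) may help; reading the first protocol's delay-proportional flooding as bidirectional Dijkstra on fees is our own simplification.

import heapq
import random
import networkx as nx

def cheapest_path_gossip(G, s, r):
    # Protocol 1: flood every neighbour with delay equal to the edge fee.
    # A gossip started at src reaches node v at time = cheapest fee
    # distance, so the two floods meet at the node minimizing the sum.
    dist = {}
    for side, src in ((0, s), (1, r)):
        d = {src: 0}
        pq = [(0, src)]
        while pq:
            t, u = heapq.heappop(pq)
            if t > d.get(u, float("inf")):
                continue
            for v in G.neighbors(u):
                nt = t + G[u][v]["fee"]
                if nt < d.get(v, float("inf")):
                    d[v] = nt
                    heapq.heappush(pq, (nt, v))
        dist[side] = d
    return min(G.nodes, key=lambda v: dist[0][v] + dist[1][v])

def random_walk_gossip(G, s, r, max_steps=10_000):
    # Protocol 2: each message is forwarded to ONE neighbour, chosen with
    # probability proportional to its degree; fewer nodes are touched,
    # but the path found need not be the cheapest.
    heard = {s: {0}, r: {1}}
    walkers = [(s, 0), (r, 1)]
    for _ in range(max_steps):
        nxt = []
        for u, side in walkers:
            nbrs = list(G.neighbors(u))
            v = random.choices(nbrs, weights=[G.degree(w) for w in nbrs])[0]
            heard.setdefault(v, set()).add(side)
            if heard[v] == {0, 1}:
                return v            # both gossips met here: path found
            nxt.append((v, side))
        walkers = nxt
    return None

G = nx.cycle_graph(8)
for u, v in G.edges:
    G[u][v]["fee"] = random.randint(1, 10)
print("meeting node (protocol 1):", cheapest_path_gossip(G, 0, 4))
print("meeting node (protocol 2):", random_walk_gossip(G, 0, 4))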
Nishanth Chandran, Divya Gupta, Sai Lakshmi Bhavana Obbattu, Akash Shah
ePrint Report
Secure inference allows a model owner (or, the server) and the input owner (or, the client) to perform inference on a machine learning model without revealing their private information to each other. A large body of work has shown efficient cryptographic solutions to this problem through secure 2-party computation. However, they assume that both parties are semi-honest, i.e., follow the protocol specification. Recently, Lehmkuhl et al. showed that malicious clients can extract the whole model of the server using novel model-extraction attacks. To remedy the situation, they introduced the client-malicious threat model and built a secure inference system, MUSE, that provides security guarantees even when the client is malicious.

In this work, we design and build SIMC, a new cryptographic system for secure inference in the client-malicious threat model. On secure inference benchmarks considered by MUSE, SIMC requires 23-29× less communication and is up to 11.4× faster than MUSE. SIMC obtains these improvements using a novel protocol for non-linear activation functions (such as ReLU) that requires over 28× less communication and is up to 43× more performant than MUSE. In fact, SIMC's performance beats the state-of-the-art semi-honest secure inference system!

Finally, similar to MUSE, we show how to push the majority of the cryptographic cost of SIMC to an input-independent preprocessing phase. While the cost of the online phase of this protocol, SIMC++, is the same as that of MUSE, the overall improvements of SIMC translate to similar improvements in the preprocessing phase of MUSE.
Shotaro Miyashita, Ryoma Ito, Atsuko Miyaji
ePrint Report
In this study, we focus on the differential cryptanalysis of the ChaCha stream cipher. In the conventional approach, an adversary first searches for the input/output differential pair with the best differential bias and then analyzes the probabilistic neutral bits (PNB) in detail based on the obtained input/output differential pair. However, although the time and data complexities of the attack can be estimated from the differential bias and PNB obtained in this approach, their combination does not always represent the best one. In addition, existing studies did not provide a comprehensive analysis of the PNB; they did not clarify the upper bound on the number of rounds for which a PNB-based differential attack can succeed. To solve these problems, we propose a PNB-based differential attack on reduced-round ChaCha by first comprehensively analyzing the PNB at all output differential bit positions and then searching for the input/output differential pair with the best differential bias based on the obtained PNB. This comprehensive PNB analysis shows that the upper bound on the number of rounds for which the PNB-based differential attack can succeed is 7.25 rounds. As a result, the proposed attack works on 7.25-round ChaCha with time and data complexities of \(2^{255.62}\) and \(2^{37.49}\), respectively. Using the existing differential bias presented by Coutinho and Neto at EUROCRYPT 2021, we further improve the attack on 7.25-round ChaCha to time and data complexities of \(2^{244.22}\) and \(2^{69.14}\), respectively. The best existing attack on ChaCha, proposed by Coutinho and Neto at EUROCRYPT 2021, works on up to 7 rounds with time and data complexities of \(2^{228.51}\) and \(2^{80.51}\), respectively. Therefore, we improve on the best existing attack on reduced-round ChaCha. We believe that this study will be the first step towards an attack on more rounds of ChaCha, e.g., 8-round ChaCha.
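To illustrate the first ingredient of such attacks, the Python sketch below estimates a single-bit differential bias empirically over one ChaCha quarter-round. The bit positions and trial count are arbitrary demo choices; the actual attack works over several rounds and combines the bias with the PNB analysis.

import random

MASK = 0xFFFFFFFF

def rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    # The ChaCha quarter-round (addition mod 2^32, XOR, rotation).
    a = (a + b) & MASK; d = rotl(d ^ a, 16)
    c = (c + d) & MASK; b = rotl(b ^ c, 12)
    a = (a + b) & MASK; d = rotl(d ^ a, 8)
    c = (c + d) & MASK; b = rotl(b ^ c, 7)
    return a, b, c, d

IN_BIT, OUT_BIT, TRIALS = 7, 3, 1 << 16
ones = 0
for _ in range(TRIALS):
    x = [random.getrandbits(32) for _ in range(4)]
    y = x.copy()
    y[3] ^= 1 << IN_BIT                # one-bit input difference in word d
    ox, oy = quarter_round(*x), quarter_round(*y)
    diff = ox[0] ^ oy[0]               # output difference in word a
    ones += (diff >> OUT_BIT) & 1
print(f"empirical bias: {ones / TRIALS - 0.5:+.4f}")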
Gang Wang, Mark Nixon
ePrint Report
Blockchain, a potentially disruptive technology, advances many different applications, e.g., crypto-currencies, supply chains, and the Internet of Things. Under the hood, a blockchain is required to handle different kinds of digital assets and data. The next-generation blockchain ecosystem is expected to consist of numerous applications, and each application may have a distinct representation of digital assets. However, digital assets cannot be directly recorded on the blockchain; a tokenization process is required to format these assets. Tokenization on blockchain will inevitably require proper standards to enrich advanced functionalities and enhance interoperable capabilities for future applications. However, due to the specific features of digital assets, it is hard to obtain a standard token form to represent all kinds of assets. For example, when considering fungibility, some assets are divisible and identical, commonly referred to as fungible assets, whereas others that are not are widely referred to as non-fungible assets. When tokenizing these assets, we are required to follow different tokenization processes. Effective asset tokenization is thus essential and can be expected to confront various unprecedented challenges. This paper provides a systematic and comprehensive study of the current progress of tokenization on blockchain. First, we explore general principles and practical schemes for tokenizing digital assets on blockchain and classify digitized tokens into three categories: fungible, non-fungible, and semi-fungible. We then focus on discussing the well-known Ethereum standards on non-fungible tokens. Finally, we discuss several critical challenges and some potential research directions to advance research on the tokenization process on the blockchain. To the best of our knowledge, this is the first systematic study of tokenization on blockchain.
Avik Chakraborti, Nilanjan Datta, Ashwin Jha, Cuauhtemoc Mancillas Lopez, Mridul Nandi
ePrint Report
This paper proposes a lightweight authenticated encryption (AE) scheme, called Light-OCB, which can be viewed as a lighter variant of the CAESAR winner OCB as well as a faster variant of the high-profile NIST LWC competition submission LOCUS-AEAD. Light-OCB is structurally similar to LOCUS-AEAD: it uses a nonce-based derived key that provides optimal security, and a short-tweak tweakable blockcipher (tBC) for efficient domain separation. Light-OCB improves over LOCUS-AEAD by reducing the number of primitive calls, thereby significantly optimizing the throughput. To establish our claim, we provide FPGA hardware implementation details and a benchmark for Light-OCB against LOCUS-AEAD and several other well-known AEs. The implementation results show that, when instantiated with the tBC TweGIFT64, Light-OCB achieves an extremely low hardware footprint, consuming only around 1128 LUTs and 307 slices (significantly lower than LOCUS-AEAD) while maintaining a throughput of 880 Mbps, almost twice that of LOCUS-AEAD. To the best of our knowledge, this figure is significantly better than all known implementation results of other lightweight ciphers with parallel structures.
Liang Zhao, Ze Chen, Liqun Chen, Xinyi Huang
ePrint Report
In this paper, we present an optimized variant of Gentry, Halevi and Vaikuntanathan (GHV)'s Homomorphic Encryption (HE) scheme (EUROCRYPT'10). Our scheme is appreciably more efficient than the original GHV scheme without losing its merits of the (multi-key) homomorphic property and matrix encryption property. In this research, we first measure the density of the trapdoor pairs that are created by using Alwen and Peikert's trapdoor generation algorithm and Micciancio and Peikert's trapdoor generation algorithm, respectively, and use the measurement results to precisely discuss the time and space complexity of the corresponding GHV instantiations. We then propose a generic GHV-type construction with several optimizations that improve the time and space efficiency over the original GHV scheme. In particular, our new scheme can achieve asymptotically optimal time complexity and avoids generating and storing the inverse of the used trapdoor. Finally, we present an instantiation that, by using a new set of (lower) bound parameters, has smaller key and ciphertext sizes than the original GHV scheme.
Lorenzo Grassi, Dmitry Khovratovich, Sondre Rønjom, Markus Schofnegger
ePrint Report
Motivated by modern cryptographic use cases such as multi-party computation (MPC), homomorphic encryption (HE), and zero-knowledge (ZK) protocols, several symmetric schemes that are efficient in these scenarios have recently been proposed in the literature. Some of these schemes are instantiated with low-degree nonlinear functions, for example low-degree power maps (e.g., MiMC, HadesMiMC, Poseidon) or the Toffoli gate (e.g., Ciminion). Others (e.g., Rescue, Vision, Grendel) are instead instantiated via high-degree functions which are easy to evaluate in the target application. A recent example for the latter case is the hash function Grendel, whose nonlinear layer is constructed using the Legendre symbol.

In this paper, we analyze high-degree functions such as the Legendre symbol or the modulo-2 operation as building blocks for the nonlinear layer of a cryptographic scheme over (F_p)^n. Our focus is on the security analysis rather than on efficiency in the mentioned use cases. For this purpose, we present several new invertible functions that make use of the Legendre symbol or of the modulo-2 operation.

Even though these functions often provide strong statistical properties and ensure a high degree after a few rounds, the main problem is their small number of possible outputs: only three for the Legendre symbol and only two for the modulo-2 operation. By guessing these outputs, it is possible to reduce the overall degree of the function significantly. We exploit this behavior by describing the first preimage attack on full Grendel, and we verify it in practice.
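A few lines of Python make the small-output-space observation concrete. The toy prime is an arbitrary demonstration choice, far smaller than the fields such schemes actually use.

p = 101  # toy prime; schemes like Grendel work over much larger fields

def legendre(x, p):
    # Legendre symbol via Euler's criterion: x^((p-1)/2) mod p.
    s = pow(x, (p - 1) // 2, p)
    return -1 if s == p - 1 else s     # values in {-1, 0, 1}

def mod2(x):
    return x & 1                       # values in {0, 1}

print(sorted({legendre(x, p) for x in range(p)}))  # [-1, 0, 1]
print(sorted({mod2(x) for x in range(p)}))         # [0, 1]
# Despite having degree (p-1)/2 as a polynomial, the Legendre symbol has
# only three possible outputs -- the property the preimage attack exploits.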