International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

15 February 2023

Hongbo Wen, Jon Stephens, Yanju Chen, Kostas Ferles, Shankara Pailoor, Kyle Charbonnet, Isil Dillig, Yu Feng
ePrint Report
As privacy-sensitive applications based on zero-knowledge proofs (ZKPs) gain increasing traction, there is a pressing need to detect vulnerabilities in ZKP circuits. This paper studies common vulnerabilities in Circom (the most popular domain-specific language for ZKP circuits) and describes a static analysis framework for detecting these vulnerabilities. Our technique operates over an abstraction called the circuit dependence graph (CDG) that captures key properties of the circuit and allows semantic vulnerability patterns to be expressed as queries over the CDG abstraction. We have implemented 9 different detectors using this framework and performed an experimental evaluation on over 258 circuits from popular Circom projects on GitHub. According to our evaluation, these detectors can identify vulnerabilities, including previously unknown ones, with high precision and recall.
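To give a concrete feel for how a vulnerability pattern can be phrased as a query over a circuit dependence graph, here is a small, hypothetical Python sketch; the graph encoding and the "unconstrained output" detector below are illustrative simplifications, not the framework or detectors from the paper.

```python
# Toy circuit dependence graph (CDG): each signal records the signals it
# depends on through assignments and the signals it is tied to by constraints.
# This encoding and the detector below are hypothetical simplifications.
cdg = {
    "in":  {"assign": [],      "constrain": []},
    "tmp": {"assign": ["in"],  "constrain": ["in"]},
    "out": {"assign": ["tmp"], "constrain": []},   # assigned but never constrained
}

def unconstrained_outputs(graph, outputs):
    """A vulnerability pattern as a graph query: output signals that receive a
    value through an assignment edge but have no constraint edge at all."""
    return [s for s in outputs if graph[s]["assign"] and not graph[s]["constrain"]]

print(unconstrained_outputs(cdg, ["out"]))  # -> ['out']
```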
Nicolas Gailly, Kelsey Melissaris, Yolan Romailler
ePrint Report
We present a practical construction and implementation of timelock encryption, in which a ciphertext is guaranteed to be decryptable only after some specified time has passed. We employ an existing threshold network, the League of Entropy, implementing threshold BLS [BLS01, B03] in the context of Boneh and Franklin's identity-based encryption (IBE) [BF01]. At present this threshold network broadcasts BLS signatures over each round number, equivalent to the current time interval, and as such can be considered a decentralised key holder that periodically publishes private keys for the IBE, where identities are the round numbers. A notable advantage of this scheme is that only the encryptors and decryptors are required to perform any additional cryptographic operations; the threshold network can remain unaware of the TLE and does not have to change to support the scheme. We also release an open-source implementation of our scheme and a live web page that can be used in production today, relying on the existing League of Entropy network, which acts as a distributed public randomness beacon service using threshold BLS signatures.
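As a rough structural illustration of the scheme just described, the sketch below treats the beacon's round number as an IBE identity and the published BLS signature as the corresponding IBE private key; the `ibe` object and its method names are hypothetical stand-ins, not the authors' implementation or the League of Entropy API.

```python
# Minimal structural sketch of timelock encryption on top of an IBE scheme
# and a threshold BLS randomness beacon. The IBE interface is hypothetical;
# it only illustrates the roles of round numbers and beacon signatures.

class TimelockEncryption:
    def __init__(self, ibe, beacon_master_public_key):
        self.ibe = ibe                        # any Boneh-Franklin-style IBE
        self.mpk = beacon_master_public_key   # beacon's threshold public key

    def encrypt(self, message: bytes, round_number: int) -> bytes:
        # The "identity" is simply the round at which decryption should become
        # possible; anyone can encrypt with only the beacon's public key.
        identity = str(round_number).encode()
        return self.ibe.encrypt(self.mpk, identity, message)

    def decrypt(self, ciphertext: bytes, beacon_signature: bytes) -> bytes:
        # Once the beacon publishes its BLS signature on the round number,
        # that signature acts as the IBE private key for this identity.
        return self.ibe.decrypt(beacon_signature, ciphertext)
```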
Daniel R. L. Brown
ePrint Report
Hecht and Scolnik proposed key agreement using rectangular matrices and determinants. This report describes an attack.
Lúcás Críostóir Meier
ePrint Report
Universally composable (UC) security is the most widely used framework for analyzing the security of cryptographic protocols. Many variants and simplifications of the framework have been proposed and developed; nonetheless, many practitioners find UC proofs difficult both to construct and to understand.

We remedy this situation by proposing a new framework for protocol security. We believe that our framework provides proofs that are easier to write, more rigorous, and easier to understand. Our work is based on state-separable proofs, which enable modular proofs by decomposing complicated protocols into simple components.
Julien Duman, Dominik Hartmann, Eike Kiltz, Sabrina Kunzweiler, Jonas Lehmann, Doreen Riepel
ePrint Report
We define the Generic Group Action Model (GGAM), an adaptation of the Generic Group Model to the setting of group actions (such as CSIDH). Compared to a previously proposed definition by Montgomery and Zhandry (ASIACRYPT'22), our GGAM more accurately abstracts the security properties of group actions.

We are able to prove information-theoretic lower bounds in the GGAM for the discrete logarithm assumption, as well as for non-standard assumptions recently introduced in the setting of threshold and identification schemes on group actions. Unfortunately, in a natural quantum version of the GGAM, the discrete logarithm assumption does not hold.

To address this, we also introduce the weaker Quantum Algebraic Group Action Model (QAGAM), where every set element (in superposition) output by an adversary is required to have an explicit representation relative to known elements. In contrast to the Quantum Generic Group Action Model, in the QAGAM we are able to analyze the hardness of group action assumptions: We prove (among other things) the equivalence between the discrete logarithm assumption and non-standard assumptions recently introduced in the setting of QROM security for Password-Authenticated Key Exchange, Non-Interactive Key Exchange, and Public-Key Encryption.
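For readers less familiar with the group-action setting, the discrete logarithm assumption referred to above can be stated as follows; this is the standard formulation (sometimes called vectorization), not text from the paper. For a group action $\star\colon G \times X \to X$ with a fixed origin $x_0 \in X$, any efficient adversary $\mathcal{A}$ should satisfy

```latex
% Group-action discrete logarithm (GA-DLOG), standard formulation:
% given x_0 and x = g * x_0 for uniformly random g in G, recover g.
\Pr\Bigl[\, g' = g \;:\; g \xleftarrow{\$} G,\; x \leftarrow g \star x_0,\;
            g' \leftarrow \mathcal{A}(x_0, x) \,\Bigr] \le \mathrm{negl}(\lambda).
```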
Philipp G. Haselwarter, Benjamin Salling Hvass, Lasse Letager Hansen, Théo Winterhalter, Catalin Hritcu, Bas Spitters
ePrint Report
The field of high-assurance cryptography is quickly maturing, yet a unified foundational framework for end-to-end formal verification of efficient cryptographic implementations is still missing. To address this gap, we use the Coq proof assistant to formally connect three existing tools: (1) the Hacspec emergent cryptographic specification language; (2) the Jasmin language for efficient, high-assurance cryptographic implementations; and (3) the SSProve foundational verification framework for modular cryptographic proofs. We first connect Hacspec with SSProve by devising a new translation from Hacspec specifications to imperative SSProve code. We validate this translation by considering a second, more standard translation from Hacspec to purely functional Coq code and automatically proving the equivalence of the code produced by the two translations. We further define a translation from Jasmin to SSProve, which allows us to formally reason in SSProve about efficient cryptographic implementations in Jasmin. We prove this translation correct in Coq with respect to Jasmin's operational semantics. Finally, we demonstrate the usefulness of our approach by giving a foundational end-to-end Coq proof of an efficient AES implementation. For this case study we start from an existing Jasmin implementation of AES that makes use of hardware acceleration, prove its security using SSProve, and also that it conforms to a specification of the AES standard written in Hacspec.
André Schrottenloher
ePrint Report
The Quantum Fourier Transform is a fundamental tool in quantum cryptanalysis. In symmetric cryptanalysis, hidden shift algorithms such as Simon's (FOCS 1994), which rely on the QFT, have been used to obtain structural attacks on some very specific block ciphers. The Fourier Transform is also used in classical cryptanalysis, for example in FFT-based linear key-recovery attacks introduced by Collard et al. (ICISC 2007). Whether such techniques can be adapted to the quantum setting has so far remained an open question.

In this paper, we introduce a new framework for quantum linear key-recovery attacks using the QFT. These attacks loosely follow the classical method of Collard et al., in that they rely on the fast computation of a ``correlation state'' in which experimental correlations, rather than being directly accessible, are encoded in the amplitudes of a quantum state. The experimental correlation is a statistic that is expected to be higher for the right key, and under some conditions, the increased amplitude yields a speedup over an exhaustive search of the key. The same method also yields a new family of structural attacks, as well as new examples of quantum speedups beyond quadratic using classical known-plaintext queries.
Mario Larangeira, Maxim Jourenko
ePrint Report
The efficiency of blockchain systems is often compared to that of popular credit card networks in terms of transactions per second. This is an unfair comparison, since these networks do not complete a transaction from beginning to end; rather, they buy the risk and settle the transaction much later. Typically a transaction has only two players, the payer and the payee, and its settlement takes time, since it depends on basic properties of the consensus protocol. In practice, the payee very often needs to wait for confirmation before shipping the traded goods. Alternatively, the payee, or merchant, can ship them in the faith that the transaction will be confirmed. Our contribution, the Maravedí Protocol, introduces a third player to minimize the risk that the payee is left without payment, even without waiting for confirmation at the consensus layer. The main idea is that the third player can act much like a credit card company: it buys the risk from the merchant, for a small discount, and pays the merchant instantaneously via a payment-channel-like protocol. In parallel, the third player receives the regular payment transaction from the payer, which can be settled on chain after the time required by the consensus/blockchain layer. Moreover, the on-chain transaction pays the full amount, allowing the third player to cash in the discount. Hence, on the merchant's side, our protocol achieves instantaneous finality in what is, to the best of our knowledge, a novel way.
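The money flow sketched above can be made concrete with a toy, runnable Python example; the parties, balances, and confirmation model are purely illustrative and this is not the Maravedí protocol specification.

```python
# Toy illustration of the three-party flow: an intermediary buys the
# merchant's settlement risk for a small discount. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Party:
    name: str
    onchain: float = 0.0
    offchain: float = 0.0   # payment-channel balance

def purchase(payer, merchant, intermediary, price, discount_rate=0.01):
    fee = price * discount_rate

    # 1. The intermediary pays the merchant instantly over a payment channel,
    #    keeping the discount as its fee; the merchant can ship right away.
    intermediary.offchain -= price - fee
    merchant.offchain += price - fee

    # 2. In parallel, the payer's ordinary on-chain transaction for the full
    #    amount is addressed to the intermediary; it settles only after the
    #    usual consensus delay, at which point the intermediary nets the fee.
    payer.onchain -= price
    intermediary.onchain += price

alice = Party("payer", onchain=100)
shop = Party("merchant")
bank = Party("intermediary", offchain=100)
purchase(alice, shop, bank, price=50)
print(shop.offchain, bank.onchain + bank.offchain - 100)  # 49.5 and 0.5 (the fee)
```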
Yi-Fu Lai
ePrint Report
In this work, we propose two post-quantum verifiable random function (VRF) constructions based on group actions and isogenies, one of which is based on the standard DDH assumption. A VRF is a cryptographic tool that enables a user to generate a pseudorandom output along with a publicly verifiable proof. The residual pseudorandomness of a VRF ensures the pseudorandomness of unrevealed inputs, even if an arbitrary number of outputs and proofs are revealed. Furthermore, it is infeasible to generate proofs that validate distinct values as outputs for the same input.

In practical applications, VRFs have a wide range of uses, including in DNSSEC protocols, blockchain and cryptocurrency. Currently, most VRF constructions rely on elliptic curve cryptography (ECC), pairing, or Decisional Diffie-Hellman (DDH) type assumptions. These assumptions, however, cannot thwart the threats from quantum adversaries. In light of this, there is a growing need for post-quantum VRFs, which are currently less widely developed in the literature.

We contribute to this line of study by presenting two VRF proposals from group actions and isogenies. Our constructions are fairly simple and are derived from number-theoretic pseudorandom functions. We present a proof system that allows us to prove the factorization of group actions and set elements, providing a proof for our VRFs. The first construction is based on the standard DDH problem. For the proof we introduce a new problem, the master decisional Diffie-Hellman problem over group actions, which we prove to be equivalent to the standard DDH problem. Furthermore, we present a new use of quadratic twists to reduce costs by expanding the input size and relaxing the assumption to the square DDH problem. Additionally, we employ advanced techniques from the isogeny literature to optimize the proof size to 39 KB and 34 KB using CSIDH-512 without compromising VRF notions. To the best of our knowledge, these are the first two provably secure VRF constructions based on isogenies.
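The VRF functionality recalled above can be summarised by a minimal key-generation/evaluation/verification interface; the following Python sketch is a generic statement of that standard API, not the isogeny-based constructions proposed in the paper.

```python
# Generic VRF interface (sketch only). Any concrete instantiation must provide:
#  - residual pseudorandomness: outputs on unqueried inputs look random even
#    given arbitrarily many other outputs and proofs;
#  - uniqueness: no proof can validate two distinct outputs for the same input.
from abc import ABC, abstractmethod

class VRF(ABC):
    @abstractmethod
    def keygen(self):
        """Return a key pair (secret_key, public_key)."""

    @abstractmethod
    def evaluate(self, secret_key, x: bytes):
        """Return (output, proof) for input x."""

    @abstractmethod
    def verify(self, public_key, x: bytes, output, proof) -> bool:
        """Check that output is the unique value proven for x."""
```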
Emanuele Bellini, David Gerault, Juan Grados, Rusydi Makarim, Thomas Peyrin
ePrint Report
In this paper, we present a fully automated tool for differential-linear attacks using Mixed-Integer Linear Programming (MILP) and Mixed-Integer Quadratic Constraint Programming (MIQCP) techniques, which is, to the best of our knowledge, the very first attempt to fully automate such attacks. We use this tool to improve the correlations of the best 9- and 10-round differential-linear distinguishers on Speck32/64, and to reach 11 rounds for the first time. Furthermore, we improve the latest 14-round key-recovery attack against Speck32/64, using differential-linear distinguishers obtained with our MILP/MIQCP tool. The techniques we present are generic and can be applied to other ARX ciphers as well.
Jinpeng Hou, Yansong Gao, Mang Su, Willy Susilo, Jie Chen, Anmin Fu
ePrint Report
We introduce a new primitive called the asymmetric trapdoor pseudorandom generator (ATPRG), which can be viewed either as a pseudorandom generator with two additional trapdoors (a public trapdoor and a secret trapdoor) or as a backdoor pseudorandom generator with one additional trapdoor (a secret trapdoor). Specifically, users who know neither trapdoor can only generate the public pseudorandom numbers $pr_1,\dots,pr_n$; in this respect, an ATPRG behaves like an ordinary pseudorandom generator. Users holding the public trapdoor, however, can use any single public pseudorandom number $pr_i$ to recover the whole $pr$ sequence; in this respect, it behaves like a backdoor pseudorandom generator. Finally, users holding the secret trapdoor can use the $pr$ sequence to generate a sequence $sr_1,\dots,sr_n$ of secret pseudorandom numbers. As an application of ATPRGs, we construct the first homomorphic signature scheme (in the standard model) whose public key size is only $O(T)$, independent of the dataset size. For comparison, the shortest previously known public key has size $O(\sqrt{N}+\sqrt{T})$, proposed by Catalano et al. (CRYPTO'15), where $N$ is the dataset size and $T$ is the dimension of the message. In other words, we provide the first homomorphic signature scheme with $O(1)$-sized public keys for one-dimensional messages.
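The three capabilities described above can be summarised by a small interface sketch; the Python method names below are hypothetical and no concrete instantiation is implied.

```python
# Interface-only sketch of an asymmetric trapdoor PRG (ATPRG), paraphrasing
# the functionality described above. Method names are hypothetical.
from abc import ABC, abstractmethod

class ATPRG(ABC):
    @abstractmethod
    def generate(self, seed, n):
        """Without any trapdoor: output public pseudorandom numbers pr_1..pr_n,
        exactly as an ordinary PRG would."""

    @abstractmethod
    def recover_sequence(self, public_trapdoor, pr_i):
        """With the public trapdoor: recover the whole pr sequence from any
        single public pseudorandom number pr_i."""

    @abstractmethod
    def derive_secret_sequence(self, secret_trapdoor, pr_sequence):
        """With the secret trapdoor: derive the secret pseudorandom numbers
        sr_1..sr_n from the pr sequence."""
```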
Itay Bookstein, Boaz Tsaban
ePrint Report
We study a novel family of cryptographic hash functions based on Galois linear feedback shift registers (LFSRs), and identify initial guidelines for choosing secure parameters for this family. These hash functions are extremely simple, efficient, and suitable for implementation in constrained environments.
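As a reminder of the primitive underlying this hash family, the snippet below performs one step of a textbook 16-bit Galois LFSR; the width and tap mask are common example values and are neither the parameters nor the hash construction proposed in the paper.

```python
# One step of a standard Galois LFSR (16-bit, textbook tap mask 0xB400).
# This only illustrates the primitive; it is not the hash function itself.
def galois_lfsr_step(state: int, taps: int = 0xB400, width: int = 16) -> int:
    out_bit = state & 1
    state >>= 1
    if out_bit:
        state ^= taps          # feed the tap polynomial back in
    return state & ((1 << width) - 1)

s = 0xACE1
for _ in range(5):
    s = galois_lfsr_step(s)
    print(f"{s:04x}")
```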
Siwei Chen, Zejun Xiang, Mingming Zhu, Runqing Xu, Xiangyong Zeng, Shasha Zhang
ePrint Report
In this paper, we propose a rectangle-like method called the \textit{rotational-XOR differential rectangle} attack to search for better distinguishers. It combines rotational-XOR cryptanalysis and differential cryptanalysis in a rectangle-based way. In particular, we place a rotational-XOR characteristic before a differential characteristic to construct a rectangle structure. By choosing appropriate rotational-XOR and differential characteristics, and by considering multiple differentials, longer distinguishers with probability greater than $2^{-2n}$ can be constructed effectively, where $n$ is the block size of the cipher. We apply this new method to several versions of the \textsc{Simon} and \textsc{Simeck} block ciphers. As a result, we obtain rotational-XOR differential rectangle distinguishers of up to 16, 16, 17, 16 and 21 rounds for \textsc{Simon}32/64, \textsc{Simon}48/72, \textsc{Simon}48/96, \textsc{Simeck}32 and \textsc{Simeck}48, respectively. Our distinguishers for \textsc{Simon}32/64 and \textsc{Simon}48/96 are both longer than the best differential and rotational-XOR distinguishers. Moreover, our distinguisher for \textsc{Simeck}32 is longer than the best differential distinguisher (14 rounds) and has a full weak-key space (i.e., $2^{64}$ keys), whereas the 16-round rotational-XOR distinguisher has a weak-key class of size $2^{36}$. In addition, our distinguisher for \textsc{Simeck}48 has a larger weak-key class ($2^{72}$ weak keys) than the 21-round rotational-XOR distinguisher ($2^{60}$ weak keys). To the best of our knowledge, this is the first time that rotational-XOR and differential cryptanalysis have been combined using the rectangle structure.
Damien Robert
ePrint Report
While the Weil pairing is geometric, the Tate pairing is arithmetic: its value depends on the base field considered. Nevertheless, the étale topology makes it possible to interpret the Galois action in a geometric manner. In this paper, we discuss this point of view for the Tate pairing: its natural geometric interpretation is that it gives étale $\mu_n$-torsors. While well known to experts, this interpretation is perhaps less familiar to the cryptographic community.

As an application, we explain how to use the Tate pairing to study the fibers of an isogeny, and we prove a conjecture by Castryck and Decru on multiradical isogenies.
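For reference, and independently of the paper, recall the standard definition of the Tate pairing and its reduced variant over a finite field $\mathbb{F}_q$ with $n \mid q-1$:

```latex
% Tate pairing over F_q (standard definitions, n | q-1):
t_n \colon E(\mathbb{F}_q)[n] \times E(\mathbb{F}_q)/nE(\mathbb{F}_q)
      \longrightarrow \mathbb{F}_q^{\times}/(\mathbb{F}_q^{\times})^{n},
\qquad
\hat{t}_n(P,Q) = t_n(P,Q)^{(q-1)/n} \in \mu_n.
```

The dependence of the target group on $q$ is the "arithmetic" behaviour referred to above, which the paper explains geometrically in terms of étale $\mu_n$-torsors.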
Pierre Briaud, Morten Øygarden
ePrint Report
The Regular Syndrome Decoding (RSD) problem, a variant of the Syndrome Decoding problem with a particular error distribution, was introduced almost 20 years ago by Augot et al. In this problem, the error vector is divided into equally sized blocks, each containing a single noisy coordinate. More recently, the last five years have seen increased interest in this assumption due to its use in MPC and ZK applications. Generally referred to as "LPN with regular noise" in this context, the assumption allows for better efficiency compared to plain LPN. However, no previous cryptanalytic work has shown how to exploit this special structure of the problem in an attack.

We present the first algebraic attack on RSD. Based on a careful theoretical analysis of the underlying polynomial system, we propose concrete attacks that are able to take advantage of the regular noise distribution. In particular, we can identify several examples of concrete parameters where our techniques outperform other algorithms.
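A minimal, runnable Python sketch of the regular error distribution described above (the length and number of blocks are illustrative parameters, not those of any scheme analysed in the paper):

```python
# Sample a "regular" error vector: the length-n vector is split into w blocks
# of equal size, and each block contains exactly one noisy coordinate.
import random

def sample_regular_error(n: int, w: int) -> list[int]:
    assert n % w == 0, "the number of blocks must divide the length"
    block = n // w
    e = [0] * n
    for b in range(w):
        e[b * block + random.randrange(block)] = 1
    return e

print(sample_regular_error(n=24, w=4))  # weight-4 vector, one 1 per block of 6
```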
Vasyl Ustimenko
ePrint Report
Studies of linear codes in terms of finite projective geometries form a traditional direction in Coding Theory, and some applications of projective geometries are known. Noncommutative groups and semigroups defined in terms of projective geometries can serve as platforms for protocols of Post Quantum Cryptography. We introduce the idea of public keys of Multivariate Cryptography given by quadratic public rules generated via walks on incidence substructures of a projective geometry with vertices from the two largest Schubert cells. This approach differs from the known algorithms of Code Based Cryptography and can be considered a first attempt to combine ideas from that area with the approach of Multivariate Cryptography.
Qun Liu, Zheng Zhao, Meiqin Wang
ePrint Report
In many applications, low area and low latency are required for the chip-level implementation of cryptographic primitives. Low-cost implementations of linear layers usually play a crucial role in symmetric ciphers. Heuristic methods, such as the forward search and the backward search, minimize the number of XOR gates of the linear layer under a minimum-latency constraint.

To further optimize such implementations of the linear layer, we put forward in this paper a new general search framework that incorporates division-optimization and extending-base techniques. In terms of the number of XOR gates and the search time, our new search algorithm outperforms previous heuristics, including the forward search and the backward search, when tested on the matrices they provide. We obtain an improved implementation of AES MixColumns requiring only 102 XORs under minimum latency, beating the previous best record obtained by the forward search.
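The cost metric being minimised here can be illustrated with a short Python sketch: implemented naively, each output bit of a binary matrix costs (row weight − 1) XOR gates, and the search heuristics win by reusing intermediate sums; the matrix below is illustrative, not the AES MixColumns matrix.

```python
# Naive XOR-gate count of a binary matrix: each output bit y_i is the XOR of
# the inputs selected by row i, so it costs (row weight - 1) XOR gates.
# Search heuristics beat this bound by reusing intermediate sums.
M = [
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
]

naive_cost = sum(sum(row) - 1 for row in M)
print(naive_cost)  # 8 XORs without sharing

# With sharing: t = x0 ^ x1 computed once feeds both y0 and y1, saving a gate.
```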
Daniel Escudero, Hongqing Liu, Chaoping Xing, Chen Yuan
ePrint Report
In the recent work of (Cheon & Lee, Eurocrypt'22), the concept of a degree-$D$ packing method was formally introduced, which captures the idea of embedding multiple elements of a smaller ring into a larger ring, so that element-wise multiplication in the former is somewhat "compatible" with the product in the latter. Then, several optimal bounds and results are presented, and furthermore, the concept is generalized from one multiplication to degrees larger than two. These packing methods encompass several constructions seen in the literature in contexts like secure multiparty computation and fully homomorphic encryption.

One such construction is the concept of reverse multiplication-friendly embeddings (RMFEs), which are essentially degree-2 packing methods. In this work we generalize the notion of RMFEs to \emph{degree-$D$ RMFEs} which, in spite of being "more algebraic" than packing methods, turn out to be essentially equivalent. Then, we present a general construction of degree-$D$ RMFEs by generalizing the ideas from algebraic geometry used to construct traditional degree-$2$ RMFEs which, by the aforementioned equivalence, leads to explicit constructions of packing methods. Furthermore, our theory is given in a unified manner for general Galois rings, which include both rings of the form $\mathbb{Z}_{p^k}$ and fields like $\mathbb{F}_{p^k}$, which have been treated separately in prior works. We present multiple concrete sets of parameters for degree-$D$ RMFEs (including $D=2$), which can be useful for future works.
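For concreteness, the standard degree-2 RMFE definition over a field $\mathbb{F}_q$ (stated here for fields rather than the general Galois rings treated in the paper) is a pair of $\mathbb{F}_q$-linear maps

```latex
% Degree-2 RMFE (standard definition over a field F_q):
\phi \colon \mathbb{F}_q^{k} \to \mathbb{F}_{q^{m}},
\qquad
\psi \colon \mathbb{F}_{q^{m}} \to \mathbb{F}_q^{k},
\qquad
\text{such that}\quad
x * y = \psi\bigl(\phi(x)\cdot\phi(y)\bigr)
\quad\text{for all } x, y \in \mathbb{F}_q^{k},
```

where $*$ denotes the coordinate-wise product; the degree-$D$ notion above replaces the single product of two embedded vectors by a product of $D$ of them.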

Finally, we apply our RMFEs to the task of non-interactively generating high degree correlations for secure multiparty computation protocols. This requires the use of Shamir secret sharing for a large number of parties, which is known to require large-degree Galois ring extensions. Our RMFE enables the generation of such preprocessing data over small rings, without paying for the multiplicative overhead incurred by using Galois ring extensions of large degree. For our application we also construct along the way, as a side contribution of potential independent interest, a pseudo-random secret-sharing solution for non-interactive generation of packed Shamir-sharings over Galois rings with structured secrets, inspired by the PRSS solutions from (Benhamouda et al, TCC 2021).
Luke Demarest, Benjamin Fuller, Alexander Russell
ePrint Report
Fuzzy extractors convert noisy signals from the physical world into reliable cryptographic keys. Fuzzy min-entropy is an important measure of the ability of a fuzzy extractor to distill keys from a distribution: in particular, it bounds the length of the key that can be derived (Fuller, Reyzin, and Smith, IEEE Transactions on Information Theory 2020). In general, fuzzy min-entropy that is superlogarithmic in the security parameter is required for a noisy distribution to be suitable for key derivation. There is a wide gap between what is possible with respect to computational and information-theoretic adversaries. Under the assumption of general-purpose obfuscation, keys can be securely derived from all distributions with superlogarithmic entropy. Against information-theoretic adversaries, however, it is impossible to build a single fuzzy extractor that works for all distributions (Fuller, Reyzin, and Smith, IEEE Transactions on Information Theory 2020).

A weaker information-theoretic goal is to build a fuzzy extractor for each particular probability distribution. This is the approach taken by Woodage et al. (Crypto 2017). Prior approaches use the full description of the probability mass function and are inefficient. We show this is inherent: for a quarter of distributions with fuzzy min-entropy and $2^k$ points there is no secure fuzzy extractor that uses fewer than $2^{\Theta(k)}$ bits of information about the distribution. This result rules out the possibility of efficient, information-theoretic fuzzy extractors for many distributions with fuzzy min-entropy.

We show an analogous result with stronger parameters for information-theoretic secure sketches. Secure sketches are frequently used to construct fuzzy extractors.
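As background for the last remark, the classical code-offset construction of a secure sketch can be illustrated with a toy, runnable Python example based on a 3-repetition code; this is textbook material included only for intuition and is unrelated to the paper's negative results.

```python
# Toy code-offset secure sketch with a 3-repetition code: SS(w) = w XOR c for
# a random codeword c; Rec(w', s) decodes (w' XOR s) back to c and returns
# w = s XOR c. Corrects up to one flipped bit per 3-bit block. Toy example only.
import random

def repeat_encode(bits):            # each data bit -> three copies
    return [b for bit in bits for b in (bit, bit, bit)]

def repeat_decode(bits):            # majority vote per 3-bit block
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def sketch(w):
    c = repeat_encode([random.randint(0, 1) for _ in range(len(w) // 3)])
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, s):
    c = repeat_encode(repeat_decode([wi ^ si for wi, si in zip(w_noisy, s)]))
    return [si ^ ci for si, ci in zip(s, c)]

w = [random.randint(0, 1) for _ in range(12)]
s = sketch(w)
w_noisy = w.copy()
w_noisy[5] ^= 1                     # one error in the second block
print(recover(w_noisy, s) == w)     # True
```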
Itai Dinur, Uri Stemmer, David P. Woodruff, Samson Zhou
ePrint Report
We study the space complexity of the two related fields of differential privacy and adaptive data analysis. Specifically, (1) Under standard cryptographic assumptions, we show that there exists a problem $P$ that requires exponentially more space to be solved efficiently with differential privacy, compared to the space needed without privacy. To the best of our knowledge, this is the first separation between the space complexity of private and non-private algorithms. (2) The line of work on adaptive data analysis focuses on understanding the number of samples needed for answering a sequence of adaptive queries. We revisit previous lower bounds at a foundational level, and show that they are a consequence of a space bottleneck rather than a sampling bottleneck. To obtain our results, we define and construct an encryption scheme with multiple keys that is built to withstand a limited amount of key leakage in a very particular way.