International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

14 June 2022

Gennaro Avitabile, Vincenzo Botta, Daniele Friolo, Ivan Visconti
ePrint Report
Recently, there has been great interest in constructing efficient zero-knowledge proofs for practical languages. In this work, we focus on proofs for threshold relations, in which the prover is required to prove knowledge of witnesses for $k$ out of $\ell$ statements.

The main contribution of our work is an efficient and modular transformation that, starting from a large class of $\Sigma$-protocols and a corresponding threshold relation $\mathcal{R}_{k,\ell}$, provides an efficient $\Sigma$-protocol for $\mathcal{R}_{k,\ell}$ with improved communication complexity w.r.t. prior results. Moreover, our transformation preserves statistical/perfect honest-verifier zero knowledge.
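The transformation itself is beyond the scope of an abstract, but the classic 1-out-of-2 OR-composition of $\Sigma$-protocols (Cramer, Damgård and Schoenmakers), which threshold proofs of this kind generalize, is easy to sketch. Below is a toy Python sketch over Schnorr's protocol; the parameters are illustrative and insecure, the function names are ours, and in the real interactive protocol the commitments are sent before the challenge, which is passed as an argument here only for brevity.

```python
import random

# Toy group: the order-q subgroup of Z_p^*, with p = 2q + 1 (q = 11, p = 23, g = 4).
q, p, g = 11, 23, 4

def or_prove(h, b, w, e):
    """Prove knowledge of w = log_g(h[b]) for ONE of two statements (h[0], h[1]),
    given verifier challenge e, without revealing which one (CDS OR-composition)."""
    a, ch, z = [0, 0], [0, 0], [0, 0]
    # Simulated branch: pick its challenge and response first, derive the commitment.
    ch[1 - b] = random.randrange(q)
    z[1 - b] = random.randrange(q)
    a[1 - b] = (pow(g, z[1 - b], p) * pow(h[1 - b], -ch[1 - b], p)) % p
    # Real branch: honest commitment; its challenge is forced by ch0 + ch1 = e (mod q).
    r = random.randrange(q)
    a[b] = pow(g, r, p)
    ch[b] = (e - ch[1 - b]) % q
    z[b] = (r + ch[b] * w) % q
    return a, ch, z

def or_verify(h, e, a, ch, z):
    return (ch[0] + ch[1]) % q == e and all(
        pow(g, z[i], p) == (a[i] * pow(h[i], ch[i], p)) % p for i in range(2))

w = 7                                # witness for branch 0 only
h = [pow(g, w, p), pow(g, 3, p)]     # prover does not know log_g(h[1])
e = random.randrange(q)              # verifier's challenge
assert or_verify(h, e, *or_prove(h, 0, w, e))
```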
Hosein Hadipour, Marcel Nageler, Maria Eichlseder
ePrint Report
Automatic tools to search for boomerang distinguishers have seen significant advances over the past few years. However, most previous works in this context focus on ciphers based on a Substitution-Permutation Network (SPN), while analyzing Feistel structures is also of great significance. Although Boukerrou et al. very recently provided a theoretical framework to formulate the boomerang switch over multiple Feistel rounds, they did not provide an automatic tool to search for boomerang distinguishers of Feistel structures taking the switching effect into account. In this paper, by enhancing the method to search for boomerang distinguishers recently proposed by Hadipour et al., we provide an automatic tool to search for boomerang distinguishers and apply it to block ciphers following the Generalized Feistel Structure (GFS). Applying our tool to a wide range of GFS ciphers, we show that it yields significant improvements over the best previous results on boomerang analysis. In particular, we improve the best previous boomerang distinguishers for 20 and 21 rounds of WARP by factors of $2^{38.28}$ and $2^{36.56}$, respectively. Thanks to the effectiveness of our method, we even extend the boomerang distinguishers of WARP by two rounds and distinguish 23 rounds of this cipher from a random permutation. Applying our method to the internationally standardized cipher CLEFIA, we achieve a 9-round boomerang distinguisher, which improves the best previous boomerang distinguisher by one round. Furthermore, based on this distinguisher, we build a key-recovery attack on 11 rounds of CLEFIA, which improves the best previous sandwich attack on this cipher by one round. We also apply our method to LBlock, LBlock-s, and TWINE and improve the best previous boomerang distinguishers of these ciphers.
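For context, the quantity being improved here is the boomerang distinguishing probability: for a cipher decomposed as $E = E_1 \circ E_0$ with a differential of probability $p$ over $E_0$ and one of probability $q$ over $E_1$, the basic boomerang returns a correct quartet with probability approximately $p^2 q^2$, an estimate that the switching effect at the $E_0$/$E_1$ boundary can significantly refine. An improvement "by a factor of $2^{38.28}$" therefore means the distinguisher's success probability grows by that factor, with a corresponding drop in data complexity.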
Zhimei Sui, Joseph K. Liu, Jiangshan Yu, Xianrui Qin
ePrint Report
We propose MoNet, the first bi-directional payment channel network with unlimited lifetime for Monero. It is fully compatible with Monero without requiring any modification of the current Monero blockchain. MoNet preserves transaction fungibility, i.e., transactions over MoNet and Monero are indistinguishable, and guarantees anonymity of Monero and MoNet users by avoiding any potential privacy leakage introduced by the new payment channel network. We also propose a new cryptographic primitive, named Verifiable Consecutive One-way Function (VCOF). It allows one to generate a sequence of statement-witness pairs in a consecutive and verifiable way; these statement-witness pairs are one-way in the sense that it is easy to compute a later statement-witness pair from any of the previously generated pairs, but hard to go in the reverse direction. Using a VCOF, a signer can produce a series of consecutive adaptor signatures (CAS). We further propose a generic construction of consecutive adaptor signatures as an important building block of MoNet. We develop a proof-of-concept implementation of MoNet, and our evaluation shows that it can reach the same transaction throughput as Lightning Network, the payment channel network for Bitcoin. Moreover, we provide a security analysis of MoNet under the Universal Composability (UC) framework.
David Mestel, Johannes Mueller, Pascal Reisert
ePrint Report
Replay attacks are among the most well-known attacks against vote privacy. Many e-voting systems have been proven vulnerable to replay attacks, including systems like Helios that are used in real elections.

Despite their popularity, it is commonly believed that replay attacks are inefficient, but the actual threat they pose to vote privacy has never been studied formally. Therefore, in this paper, we precisely analyze for the first time how efficient replay attacks really are.

We study this question from commonly used and complementary perspectives on vote privacy, showing as an independent contribution that a simple extension of a popular game-based privacy definition corresponds to a strong entropy-based notion.

Our results demonstrate that replay attacks can be devastating for a voter's privacy even when an adversary's resources are very limited. We illustrate our formal findings by applying them to a number of real-world elections, showing that a modest number of replays can result in significant privacy loss. Overall, our work reveals that, contrary to a common belief, replay attacks can be very efficient and must therefore be considered a serious threat.
Samed Düzlü, Juliane Krämer
ePrint Report
In this paper, we propose a new approach to the study of lattice problems used in cryptography. We specifically focus on module lattices of a fixed rank over some number field. An essential question is the hardness of certain computational problems on such module lattices, as the additional structure may allow exploitation. The fundamental insight is that the collection of those lattices is a quotient of an algebraic manifold by an arithmetic subgroup. Functions on these spaces are studied in mathematics as part of number theory. In particular, these function spaces form a module over the Hecke algebra associated with the general linear group. We use results on these function spaces to define a class of distributions on the space of lattices. Using the Hecke algebra, we define Hecke operators associated with collections of prime ideals of the number field, and give a criterion under which a distribution converges to the uniform distribution when the Hecke operators are applied to it. Our approach is motivated by the work of de Boer, Ducas, Pellet-Mary, and Wesolowski (CRYPTO'20) on self-reduction of ideal lattices via Arakelov divisors.
Vincent Cheval, Charlie Jacomme, Steve Kremer, Robert Künnemann
ePrint Report
Symbolic security protocol verifiers have reached a high degree of automation and maturity. Today, experts can model real-world protocols, but this often requires model-specific encodings and deep insight into the strengths and weaknesses of each of those tools. With Sapic+, we introduce a protocol verification platform that lifts this burden and permits choosing the right tool for the job, at any development stage. We build on the existing compiler from Sapic to Tamarin, and extend it with automated translations from Sapic+ to ProVerif and DeepSec, as well as powerful, protocol-independent optimizations of the existing translation. We prove each part of these translations sound. A user can thus, with a single Sapic+ file, verify reachability and equivalence properties on the specified protocol, using either ProVerif, Tamarin or DeepSec. Moreover, the soundness of the translation allows one to directly use results proven by another tool, thereby exploiting the respective strengths of each tool. We demonstrate our approach by analyzing various existing models. This includes a large case study of the 5G authentication protocols, previously analyzed in Tamarin. Encoding this model in Sapic+, we demonstrate the effectiveness of our approach. Moreover, we study four new case studies: the LAKE and Privacy-Pass [20] protocols, both under standardization, the SSH protocol with the agent-forwarding feature, and the recent KEMTLS [45] protocol, a post-quantum version of the main TLS key exchange.

09 June 2022

Lawrence Roy, Stanislav Lyakhov, Yeongjin Jang, Mike Rosulek
ePrint Report
Public-key authentication in SSH reveals more information about the participants' keys than is necessary. (1) The server can learn a client's entire set of public keys, even keys generated for other servers. (2) The server learns exactly which key the client uses to authenticate, and can further prove this fact to a third party. (3) A client can learn whether the server recognizes public keys belonging to other users. Each of these problems leads to tangible privacy violations for SSH users.

In this work we introduce a new public-key authentication method for SSH that reveals essentially the minimum possible amount of information. With our new method, the server learns only whether the client knows the private key for some authorized public key. If multiple keys are authorized, the server does not learn which one the client used. The client cannot learn whether the server recognizes public keys belonging to other users. Unlike traditional SSH authentication, our method is fully deniable. Our new method also makes it harder for a malicious server to intercept first-use SSH connections on a large scale.

Our method supports existing SSH keypairs of all standard flavors — RSA, ECDSA, EdDSA. It does not require users to generate new key material. As in traditional SSH authentication, clients and servers can use a mixture of different key flavors in a single authentication session.

We integrated our new authentication method into OpenSSH, and found it to be practical and scalable. For a typical client and server with at most 10 ECDSA/EdDSA keys each, our protocol requires 9 kB of communication and 12.4 ms of latency. Even for a client with 20 keys and server with 100 keys, our protocol requires only 12 kB of communication and 26.7 ms of latency.
Antonin Leroux, Maxime Roméas
ePrint Report
Updatable Encryption (UE) allows one to rotate the encryption key in the outsourced storage setting while minimizing the bandwidth used. The server can update ciphertexts to the new key using a token provided by the client. UE schemes should provide strong confidentiality guarantees against an adversary that can corrupt keys and tokens.

This paper solves three open problems in ciphertext-independent post-quantum UE. First, we propose the first two post-quantum CCA secure UE schemes, solving an open problem left by Jiang at Asiacrypt 2020. Second, our three UE schemes are the first post-quantum schemes that support an unbounded number of updates. Third, the security of our three schemes is based on three different problems which are not lattice problems, whereas the two prior post-quantum UE schemes are both based on LWE.

We do so by studying the problem of building UE in the group action framework. We introduce a new notion of Mappable Effective Group Action (MEGA) and show that we can build UE from a MEGA by generalizing the SHINE construction of Boyd et al. at Crypto 2020. We propose two post-quantum instantiations of our UE scheme using some recent group action constructions. Isogeny-based group actions are the most studied post-quantum group actions. Unfortunately, the resulting group actions are not mappable. We show that we can still build UE from isogenies by introducing a new algebraic structure called Effective Triple Orbital Group Action (ETOGA). We prove that UE can be built from an ETOGA and show how to instantiate this abstract structure from isogeny-based group actions.
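At its core, the SHINE construction referenced above exponentiates the (encoded) message with the epoch key, so that an update token is just the ratio of two keys. Below is a toy, insecure discrete-log sketch of that update mechanic, with function names of our choosing; the real SHINE additionally embeds a random nonce, and a discrete-log instantiation like this one is precisely what is not post-quantum, which motivates the paper's group-action generalization.

```python
import random

# Toy cyclic group of prime order q: the squares mod the safe prime p = 2q + 1.
q, p, g = 1019, 2039, 4

def keygen():
    return random.randrange(1, q)

def encrypt(m, k):                 # m is encoded as a group element
    return pow(m, k, p)            # C = m^k

def decrypt(c, k):
    return pow(c, pow(k, -1, q), p)   # C^(k^{-1} mod q) = m

def token(k_old, k_new):           # update token: ratio of the two epoch keys
    return (k_new * pow(k_old, -1, q)) % q

def update(c, delta):              # the server rotates the ciphertext blindly
    return pow(c, delta, p)        # C^delta = m^(k_new)

m = pow(g, 123, p)                 # toy "message" embedded in the group
k1, k2 = keygen(), keygen()
c1 = encrypt(m, k1)
c2 = update(c1, token(k1, k2))     # server never sees m, k1 or k2
assert decrypt(c2, k2) == m
```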
Buvana Ganesh, Paolo Palmieri
ePrint Report
Homomorphic Encryption (HE) is a very attractive solution for ensuring privacy when outsourcing confidential data to the cloud, as it enables computation on the data without decryption. As a next step, it becomes necessary to search this homomorphically encrypted data in order to navigate it on the server. In this paper, we propose a novel algorithm to search homomorphically encrypted data outsourced to an untrusted server and shared with multiple users. We optimize the steps involved in the process to reduce the number of rounds of communication. We use an order-preserving encoding to batch the data with multi-key HE cryptosystems, reducing the multiplicative depth of the equality circuits and enabling direct comparison. Further, we use LEAF to retrieve indices securely, and SealPIR to let the user retrieve the values obliviously. Overall, we provide an efficient end-to-end framework for searching shared data on a semi-honest server.
Prasanna Ravi, Anupam Chattopadhyay, Anubhab Baksi
ePrint Report
In this work, we present a systematic study of Side-Channel Attacks (SCA) and Fault Injection Attacks (FIA) on structured lattice-based schemes, with a main focus on the Kyber Key Encapsulation Mechanism (KEM) and the Dilithium signature scheme, which are leading candidates in the NIST standardization process for Post-Quantum Cryptography (PQC). Through our study, we attempt to understand the underlying similarities and differences between the existing attacks, while classifying them into different categories. Given the wide variety of reported attacks, simultaneous protection against all of them requires implementing customized protections/countermeasures for both Kyber and Dilithium. We therefore present a range of customized countermeasures capable of providing defenses/mitigations against existing SCA/FIA. Amongst the presented countermeasures, we propose two novel countermeasures to protect Kyber KEM against SCA- and FIA-assisted chosen-ciphertext attacks. We implement the presented countermeasures within two well-known public software libraries for PQC: (1) the pqm4 library for the ARM Cortex-M4 based microcontroller and (2) the liboqs library for the Raspberry Pi 3 Model B Plus based on the ARM Cortex-A53 processor. Our performance evaluation reveals that the presented custom countermeasures incur reasonable performance overheads on both evaluated embedded platforms. We therefore believe our work argues for the usage of custom countermeasures within real-world implementations of lattice-based schemes, either in a standalone manner or as reinforcements to generic countermeasures such as masking.
Phil Hebborn, Gregor Leander, Aleksei Udovenko
ePrint Report
This work surveys mathematical aspects of the division property, a state-of-the-art technique in the cryptanalysis of symmetric-key algorithms, such as authenticated encryption schemes, block ciphers and stream ciphers. The technique aims to find integral distinguishers and cube attacks, which exploit weaknesses in the algebraic normal forms of the output coordinates of the involved vectorial Boolean functions. Division property can also be used to provide arguments for the security of primitives against these attacks.

The focus of this work is a formal presentation of the theory behind the division property, including rigorous proofs, which were often omitted in the existing literature. This survey covers the two major variants of division property, namely conventional and perfect division property. In addition, we explore relationships of the technique with classic degree bounds.
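The algebraic fact underlying integral distinguishers is worth recalling: if a Boolean function has algebraic degree less than $k$, its XOR-sum over any $k$-dimensional cube is zero, because that sum equals a degree-$k$ coefficient of the algebraic normal form. A toy check in Python, with a hypothetical degree-2 example function:

```python
from itertools import product

# A 4-bit Boolean function of algebraic degree 2 (ANF: x0*x1 + x2*x3 + x1 + 1).
def f(x0, x1, x2, x3):
    return x0 & x1 ^ x2 & x3 ^ x1 ^ 1

# XOR-summing f over a 3-dimensional cube (x0, x1, x2; x3 fixed) yields the
# ANF coefficient of the monomial x0*x1*x2, which vanishes since deg(f) = 2 < 3.
acc = 0
for x0, x1, x2 in product((0, 1), repeat=3):
    acc ^= f(x0, x1, x2, 0)
print(acc)  # 0 -- a zero-sum (integral) property
```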
Ni Trieu, Avishay Yanai, Jiahui Gao
ePrint Report
We describe a new paradigm for multi-party private set intersection cardinality (PSI-CA) that allows n parties to compute the intersection size of their datasets without revealing any additional information. We explore a variety of instantiations of this paradigm. Our protocols avoid computationally expensive public-key operations and are secure in the presence of a malicious adversary.

We demonstrate the practicality of our PSI-CA protocol with an implementation. For n = 16 parties with datasets of 2^20 items each, our server-aided variant takes 71 seconds. Interestingly, in the server-less setting, the same task takes only 7 seconds. To the best of our knowledge, this is the first ‘special purpose’ implementation of a multi-party PSI-CA (i.e., an implementation that does not rely on a generic underlying MPC protocol).

Our PSI-CA protocols can be used to securely compute the dot-product function. The dot-product function takes n binary vectors v1, ..., vn, each of m elements, and outputs the sum of m terms, where the i-th term is the product of the i-th entries of all n input vectors. Importantly, the complexity of our protocol for secure dot-product (where party Pi has a secret vector vi) is linear only in the Hamming weight of the vectors, which is potentially sub-linear in the input size.
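In the clear, this dot-product functionality is just an intersection count: each binary vector is the indicator vector of a set, so the entrywise products pick out exactly the positions common to all parties. A small plain-Python illustration of the functionality itself (not of the secure protocol):

```python
n, m = 3, 10  # n parties, vectors of m binary entries each
vectors = [
    [1, 0, 1, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 1, 0, 1, 1, 0, 0, 1],
]
# i-th term is the product of the i-th bits, so it is 1 iff every vector has a 1.
dot = sum(all(v[i] for v in vectors) for i in range(m))

# Equivalently: the intersection cardinality of the indicated sets.
sets = [{i for i, bit in enumerate(v) if bit} for v in vectors]
assert dot == len(set.intersection(*sets))  # == 3 here (positions 2, 6, 9)
```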

We demonstrate that two interesting applications, namely a ‘COVID-19 heatmap’ and ‘association rule learning (ARL)’, can be computed securely using a dot-product as a building block. We analyse the performance of securely computing the COVID-19 heatmap and ARL using our protocol and compare it to the state-of-the-art.
Charlotte Lefevre, Bart Mennink
ePrint Report
The cryptographic sponge is a popular method for hash function design. In the ideal permutation model, the construction is proven to be indifferentiable from a random oracle up to the birthday bound in the capacity of the sponge. This result in particular implies that, as long as the attack complexity does not exceed this bound, the sponge construction achieves a comparable level of collision, preimage, and second preimage resistance as a random oracle. We investigate these state-of-the-art bounds in detail, and observe that while the collision and second preimage security bounds are tight, the preimage bound is not. We derive an improved and tight preimage security bound for the cryptographic sponge construction. The result has direct implications for various lightweight cryptographic hash functions. For example, the NIST Lightweight Cryptography finalist Ascon-Hash does not merely achieve the claimed $2^{128}$ generic preimage security, but in fact $2^{192}$ preimage security. Comparable improvements are obtained for the modes of Spongent, PHOTON, ACE, and Subterranean 2.0, among others.
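For readers unfamiliar with the construction, the sketch below shows the sponge's two phases and the split of the state into a rate part and a capacity part, the quantity the above bounds are expressed in. The permutation is a toy invertible stand-in, padding is omitted, and the parameters are purely illustrative:

```python
R, C = 4, 12                      # rate and capacity in bits; state = R + C bits
MASK = (1 << (R + C)) - 1

def perm(s):                      # toy invertible mixing; stands in for the permutation
    for k in (0x5A3C, 0x1F0D, 0x9E37):
        s ^= k
        s ^= (s << 5) & MASK
        s ^= s >> 7
    return s

def sponge_hash(blocks, out_blocks=2):
    """blocks: R-bit integers (real sponges pad the message with 10*1 first)."""
    s = 0
    for b in blocks:              # absorb: XOR each block into the R outer bits
        s = perm(s ^ b)           # here, outer part = low R bits by convention
    out = []
    for _ in range(out_blocks):   # squeeze: read the outer part, then permute
        out.append(s & ((1 << R) - 1))
        s = perm(s)
    return out

print(sponge_hash([0b1010, 0b0111]))
```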
Vincent Ulitzsch, Jean-Pierre Seifert
ePrint Report
The potential advent of large-scale quantum computers in the near future poses a threat to contemporary cryptography. Without doubt, one of the most active and ubiquitous uses of cryptography is in the very vibrant field of cellular networks, i.e., 3G, 4G, 5G and 6G, which is already in the planning phase. The entire cryptography of cellular networks is centered around seven secret-key algorithms $f_1, \ldots, f_5, f_1^{*}, f_5^{*}$, aggregated into an "authentication and key agreement" algorithm set. Still, these secret-key algorithms have not yet been subjected to quantum cryptanalysis. Instead, many quantum security considerations for telecommunication networks argue that the threat posed by quantum computers is restricted to public-key cryptography. On this view, the only threat to secret-key algorithms would stem from the famous Grover quantum search algorithm, which admits a general square-root speedup of all oracle-based search problems, effectively halving the key length of the above algorithms. However, various recent works have presented quantum attacks on secret-key cryptography that achieve more than a quadratic speedup. These attacks call for a re-evaluation of quantum security considerations for cellular networks, encompassing a quantum cryptanalysis of the secret-key primitives used in cellular security. In this paper, we conduct such a quantum cryptanalysis for the Milenage algorithm set, the prevalent instantiation of the seven secret-key algorithms that underpin cellular security. Building upon recent quantum cryptanalytic results, we show attacks that go beyond a quadratic speedup. Concretely, we provide, for all Milenage algorithms, various quantum attack scenarios, including exponential speedups, under different quantum attack models. The presented attacks include a polynomial-time quantum existential forgery attack, assuming an attacker has access to a superposition oracle of Milenage, and key-recovery attacks that reduce the security margin beyond the quadratic speedup of Grover. Our results do not constitute an immediate quantum break of the Milenage algorithms, but they do provide strong evidence against choosing Milenage as the cryptographic primitive underpinning the security of quantum-resistant telecommunication networks.
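For scale, the generic Grover bound mentioned above works out as follows: searching a $k$-bit key space takes about $\frac{\pi}{4} \cdot 2^{k/2}$ quantum evaluations of the primitive, so a 128-bit secret key generically retains roughly 64 bits of quantum security. The attacks in this paper are significant precisely because they beat this generic square-root bound for Milenage.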

08 June 2022

Matteo Campanelli, Danilo Francati, Claudio Orlandi
ePrint Report
The dream of software obfuscation is to take programs as they are and compile them into obfuscated versions that hide their secret inner workings. In this work we investigate notions of obfuscation weaker than virtual black-box (VBB) that still allow obfuscating cryptographic primitives while preserving their original functionalities as much as possible. In particular we propose two new notions of obfuscation, which we call oracle-differing-input obfuscation (odiO) and oracle-indistinguishability obfuscation (oiO). In a nutshell, odiO is a natural strengthening of differing-input obfuscation (diO) and allows obfuscating programs for which it is hard to find a differing input when given only oracle access to the programs. An oiO obfuscator allows one to obfuscate programs that are hard to distinguish when treated as oracles. We then show applications of these notions, as well as positive and negative results around them. A few highlights include:
– Our new notions are weaker than VBB and stronger than diO.
– As is the case for VBB, we show that there exist programs that cannot be obfuscated with odiO or oiO.
– Our new notions allow one to compile several flavours of secret-key primitives (e.g., SKE, MAC, designated-verifier NIZK) into their public-key equivalents (e.g., PKE, signatures, publicly verifiable NIZK) while preserving one of the algorithms of the original scheme (function-preserving), or the structure of their outputs (format-preserving).
Xiaoyang Dong, Jian Guo, Shun Li, Phuong Pham
ePrint Report
The rebound attack was introduced by Mendel et al. at FSE 2009 to fulfill a heavy middle round of a differential path for free, utilizing the degrees of freedom from states. The inbound phase was extended to 2 rounds by the Super-Sbox technique invented by Lamberger et al. at ASIACRYPT 2009 and Gilbert and Peyrin at FSE 2010. At ASIACRYPT 2010, Sasaki et al. further reduced the memory requirement by introducing the non-full-active Super-Sbox. In this paper, we further develop this line of research by introducing Super-Inbound, which is able to connect multiple 1-round or 2-round (non-full-active) Super-Sbox inbound phases by fully utilizing the degrees of freedom from both states and key, yet without the use of large memory. This essentially extends the inbound phase by up to 3 rounds. We applied this technique to find classic or quantum collisions on several AES-like hash functions, and improved the attacked round number by 1 to 5 in targets including the AES-128 and SKINNY hashing modes, Saturnin-Hash, and Grøstl-512. To demonstrate the correctness of our attacks, the semi-free-start collision on 6-round AES-128-MMO/MP with estimated time complexity $2^{24}$ in the classical setting was implemented, and an example pair was found instantly on a standard PC.
Gilad Stern, Ittai Abraham
ePrint Report
In 1985, Dolev and Reischuk proved a fundamental communication lower bound for protocols achieving fault-tolerant synchronous broadcast and consensus: any deterministic protocol solving those tasks requires at least a quadratic number of messages to be sent by nonfaulty parties. Follow-up work by Abraham, Chun, Dolev, Nayak, Pass, Ren and Shi shows a similar lower bound for randomized protocols. With the rise of blockchain systems, there have been many real-world systems that achieve consensus with seemingly linear communication per instance. We bridge this discrepancy in two ways. First, we generalize the lower bound to Crusader Broadcast protocols, and to all-but-$m$ Crusader Broadcast. Second, we discuss the ways these lower bounds relate to the security of blockchain systems. Specifically, we show how eclipse-style attacks in such systems can be viewed as specific instances of Dolev-Reischuk-style attacks. Our observation suggests a more systematic way of analyzing and thinking about eclipse-style attacks through the lens of the Dolev-Reischuk family of attacks. Finally, we present an example of a simple subquadratic Crusader Broadcast protocol whose security is highly dependent on insights from the presented lower bounds.
Hosein Hadipour, Maria Eichlseder
ePrint Report
WARP is a 128-bit block cipher published by Banik et al. at SAC 2020 as a lightweight alternative to AES. It is based on a generalized Feistel network and achieves the smallest area footprint among 128-bit block ciphers in many settings. Previous analysis results include integral key-recovery attacks on 21 out of 41 rounds.

In this paper, we propose integral key-recovery attacks on up to 32 rounds by improving both the integral distinguisher and the key-recovery approach substantially. For the distinguisher, we show how to model the monomial prediction technique proposed by Hu et al. at ASIACRYPT 2020 as a SAT problem and thus create a bit-oriented model of WARP taking the key schedule into account. Together with two additional observations on the properties of WARP's construction, we extend the best previous distinguisher by 2 rounds (as a classical integral distinguisher) or 4 rounds (for a generalized integral distinguisher). For the key recovery, we create a graph-based model of the round function and demonstrate how to manipulate the graph to obtain a cipher representation amenable to FFT-based key recovery.
Jiangshan Long, Changhai Ou, Zhu Wang, Shihui Zheng, Fei Yan, Fan Zhang, Siew-Kei Lam
ePrint Report
The performance of Side-Channel Attacks (SCAs) decays rapidly when more sub-keys are considered, making full-key recovery a very challenging problem. Since they exploit collision information only independently, collision attacks establish relationships among sub-keys but do not significantly slow down this trend. To address this, we first exploit the samples from previously attacked S-boxes to assist attacks on the targeted S-box, under the assumption that similar leakage occurs in program-loop or code-reuse scenarios. S-boxes considered later are easier to recover since more samples participate in this assisted attack, resulting in a "snowball" effect. We name this scheme Snowball; it significantly slows down the attenuation rate of attack performance. We further introduce the confusion coefficient into the collision attack to construct a collision confusion coefficient, and deduce its relationship with the correlation coefficient. Based on this relationship, we give two optimizations of Snowball that exploit the "values" and "rankings" of collision correlation coefficients, named Least Deviation from the Pearson correlation coefficient (PLD) and Least Deviation from the confusion coefficient (CLD). Experiments show that these optimizations significantly improve the performance of Snowball.
Parker Newton, Silas Richelson
ePrint Report
Regev's Learning with Errors (LWE) problem (STOC 2005) is a fundamental hardness assumption for modern cryptography. The Learning with Rounding (LWR) problem was put forth by Banerjee, Peikert and Rosen (Eurocrypt 2012) as an alternative to LWE, for use in cryptographic situations which require determinism. The only method we currently have for proving hardness of LWR is the so-called "rounding reduction," which is a specific reduction from an analogous LWE problem. This reduction works whenever the LWE error is small relative to the noise introduced by rounding, but it fails otherwise. For this reason, all prior work on establishing hardness of LWR forces the LWE error to be small, either by setting other parameters extremely large (which hurts performance), or by limiting the number of LWR samples seen by the adversary (which rules out certain applications). Hardness of LWR is poorly understood when the LWE modulus ($q$) is polynomial and when the number of LWE samples ($m$) seen by the adversary is an unbounded polynomial. This range of parameters is the most relevant for practical implementations, so the lack of a hardness proof in this situation is not ideal.
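Concretely, the two problems and the rounding reduction look as follows in a toy Python sketch; the parameters are illustrative and completely insecure:

```python
import random

n, q, p = 8, 257, 16            # dimension, LWE modulus, LWR rounding modulus
s = [random.randrange(q) for _ in range(n)]   # the secret

def lwe_sample(bound=2):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-bound, bound)         # explicit random error
    return a, (sum(x * y for x, y in zip(a, s)) + e) % q

def lwr_sample():
    a = [random.randrange(q) for _ in range(n)]
    ip = sum(x * y for x, y in zip(a, s)) % q
    return a, round(p * ip / q) % p           # deterministic: rounding is the noise

# The rounding reduction maps an LWE sample (a, b) to (a, round(p*b/q) mod p).
# This matches a true LWR sample whenever the LWE error is small relative to
# the rounding interval (roughly |e| < q/(2p)) -- exactly the limitation that
# this paper's impossibility result for point-wise reductions speaks to.
a, b = lwe_sample()
print(a, round(p * b / q) % p)
```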

In this work, we identify an obstacle for proving the hardness of LWR via a reduction from LWE in the above parameter regime. Specifically, we show that any "point-wise" reduction from LWE to LWR can be used to directly break the corresponding LWE problem. A reduction is "point-wise" if it maps LWE samples to LWR samples one at a time. Our argument goes roughly as follows: first we show that any point-wise reduction from LWE to LWR must have good agreement with some affine map; then we use a Goldreich-Levin-type theorem to extract the LWE secret given oracle access to a point-wise reduction with good affine agreement. Both components may be of independent interest.