IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
04 September 2024
Ehsan Ebrahimi
In this paper, we study the security definitions of various threshold symmetric primitives. Namely, we analyze the security definitions for threshold pseudorandom functions, threshold message authentication codes and threshold symmetric encryption. In each case, we strengthen the existing security definition, and we present a scheme that satisfies our stronger notion of security. In particular, we propose an indifferentiability definition and an IND-CCA2 definition for a threshold pseudorandom function and a threshold symmetric encryption scheme, respectively. Moreover, we show that these definitions are achievable. Notably, we propose the first IND-CCA2 secure threshold symmetric encryption scheme.
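For readers unfamiliar with the threshold setting, the toy sketch below shows the simplest flavor of a distributed PRF: an n-out-of-n construction whose output is the XOR of independent per-party PRF evaluations, so no single party can evaluate the function alone. This is purely illustrative background, not the paper's construction, and all names in it are ours.

```python
import hmac, hashlib, secrets
from functools import reduce

# Toy n-out-of-n threshold PRF: F(x) = PRF_k1(x) XOR ... XOR PRF_kn(x).
# Each share alone reveals nothing about the combined output.
# Illustrative only; the paper's schemes and security notions are far richer.

def partial_eval(key: bytes, x: bytes) -> bytes:
    """One party's PRF share, here instantiated with HMAC-SHA256."""
    return hmac.new(key, x, hashlib.sha256).digest()

def combine(shares: list[bytes]) -> bytes:
    """XOR-combine the partial evaluations into the PRF output."""
    return reduce(lambda a, b: bytes(u ^ v for u, v in zip(a, b)), shares)

n = 3
keys = [secrets.token_bytes(32) for _ in range(n)]  # one key share per party
x = b"input block"
output = combine([partial_eval(k, x) for k in keys])
print(output.hex())
```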
Kamil Kluczniak, Leonard Schild
Fully homomorphic encryption schemes are methods to perform computations over encrypted data. Since its introduction by Gentry, there has been a plethora of research optimizing the originally inefficient cryptosystems. Over time, different families have emerged. On the one hand, schemes such as BGV, BFV, or CKKS excel at performing coefficient-wise addition or multiplication over vectors of encrypted data. In contrast, accumulator-based schemes such as FHEW and TFHE provide efficient methods to evaluate boolean circuits and means to efficiently compute functions over small plaintext spaces of up to 4-5 bits in size.
In this paper, we focus on the second family. At a high level, accumulator-based schemes encode the range of a function f in the coefficients of a polynomial, which is then encrypted in a homomorphic accumulator. Given an input ciphertext, the coefficients of the encrypted polynomial are homomorphically rotated, such that there is a correspondence between the constant term of the result and the message contained in the ciphertext. In the end, it is possible to derive a ciphertext of the constant term encrypted with regard to the same encryption scheme as the input ciphertext. To summarize, by appropriately encoding the function f on the accumulated polynomial, we can compute f on the plaintext of the input ciphertext, where the output ciphertext has its noise magnitude independent of the input ciphertext. However, by default, it is necessary to impose restrictions on the type of functions we evaluate or drastically limit the plaintext space that can be correctly processed. Otherwise, the procedure's output will be incorrect and hard to predict.
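The rotation step described above can be seen in the clear with a few lines of code. The sketch below (our plaintext-only illustration; real schemes perform this under encryption with an LWE-based accumulator) encodes the table of a function f in a polynomial over $\mathbb{Z}[X]/(X^N+1)$ and checks that a negacyclic rotation by $-m$ moves $f(m)$ into the constant term.

```python
import numpy as np

# Plaintext sketch of the accumulator idea behind FHEW/TFHE-style
# functional bootstrapping (no encryption here, just the encoding):
# the table of f is stored in the coefficients of a polynomial, and a
# negacyclic rotation by -m moves f(m) into the constant term.

N = 16                                  # polynomial degree, ring Z[X]/(X^N + 1)
f = lambda m: (3 * m + 1) % 7           # example function on inputs 0..N-1

def negacyclic_rotate(coeffs, shift):
    """Multiply the polynomial by X^shift modulo X^N + 1 (sign flips on wrap)."""
    out = np.zeros_like(coeffs)
    for i, c in enumerate(coeffs):
        j = (i + shift) % (2 * N)
        if j < N:
            out[j] += c
        else:
            out[j - N] -= c
    return out

test_poly = np.array([f(i) for i in range(N)])  # "accumulator" holding f's table
m = 5                                           # message hidden in the input ciphertext
rotated = negacyclic_rotate(test_poly, -m % (2 * N))
assert rotated[0] == f(m)                       # constant term now equals f(m)
```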
In this work, we describe two novel algorithms that have no such restrictions. Furthermore, we derive an algorithm that enables a user to evaluate an arbitrary number of functions at a computational cost that differs only by a constant amount compared to a single function. Our methods lead to an evaluation that is between 15% and 31% faster than previous works while also being conceptually simpler.
Aravind Vishnu S S, M Sethumadhavan, Lakshmy K V
In this article, we propose a generic hybrid encryption scheme providing entity authentication. The scheme is based on lossy trapdoor functions relying on the hardness of the Learning With Errors problem. The construction can be adapted to a number of different security requirements with minimal reconfiguration. It ensures entity authentication and ciphertext integrity while providing security against adaptive chosen ciphertext attacks in the standard model. As a desired characteristic of schemes providing entity authentication, we prove the strong unforgeability under chosen message attack for the construction. In addition, the scheme is post-quantum secure based on the hardness of the underlying problem.
Shuhei Nakamura, Yusuke Tani, Hiroki Furue
In 2022, Wang et al. proposed the multivariate signature scheme SNOVA as a UOV variant over the non-commutative ring of $\ell \times \ell $ matrices over $\mathbb{F}_q$.
This scheme has small public key and signature sizes and is a first-round candidate of the NIST PQC additional digital signature project.
Recently, Ikematsu and Akiyama, and Li and Ding showed that the core matrices of SNOVA with $v$ vinegar variables and $o$ oil variables can be regarded as the representation matrices of UOV with $\ell v$ vinegar variables and $\ell o$ oil variables over $\mathbb{F}_q$, and thus existing key recovery attacks can be applied as for plain UOV.
In this paper, we propose a method that reduces SNOVA to a smaller UOV with $v$ vinegar variables and $o$ oil variables over $\mathbb{F}_{q^\ell}$. As a result, we show that the previous first-round parameter sets with $\ell = 2$ do not meet the NIST PQC security levels. We also confirm that the present parameter sets are secure against existing key recovery attacks using our approach.
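For intuition, the algebraic fact underlying both views is the standard regular representation, which realizes $\mathbb{F}_{q^\ell}$ inside the matrix ring (our note, sketched below; the paper's reduction of course involves more than this embedding):

```latex
% Regular representation: an element a of F_{q^l} acts on the field,
% viewed as an l-dimensional F_q-vector space, by multiplication.
% This F_q-linear map has a matrix, and the assignment is an injective
% ring homomorphism, which is what lets l x l matrix structures over
% F_q be re-read as single elements of the extension field:
\varphi:\ \mathbb{F}_{q^\ell}\hookrightarrow M_\ell(\mathbb{F}_q),
\qquad
\varphi(a+b)=\varphi(a)+\varphi(b),
\qquad
\varphi(ab)=\varphi(a)\,\varphi(b).
```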
Paco Azevedo Oliveira, Andersson Calle Viera, Benoît Cogliati, Louis Goubin
To be competitive with other signature schemes, the MLWE instance $\bf (A,t)$ on which Dilithium is based is compressed: the least significant bits of $\bf t$, which are denoted $\textbf{t}_0$, are considered part of the secret key. Knowing $\bf t_0$ does not provide any information about the other data in the secret key, but it does allow the construction of much more efficient side-channel attacks. Yet to the best of our knowledge, there is no known way to recover $\bf t_0$ from Dilithium signatures. In this work, we show that each Dilithium signature leaks information on $\bf t_0$, and then construct an attack that retrieves the vector $\bf t_0$ from Dilithium signatures. Experimentally, for Dilithium-2, $4\,000\,000$ signatures and $2$ hours are sufficient to recover $\textbf{t}_0$ on a desktop computer.
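For context, the compression in question is Dilithium's Power2Round decomposition, sketched per coefficient below following the FIPS 204 convention (our illustrative code, not the attack itself): only the high part $t_1$ is published, while $t_0$ stays in the secret key, and it is this $t_0$ that the attack reconstructs from signatures.

```python
# Sketch of the public-key compression at issue (Dilithium's Power2Round,
# shown for a single coefficient): t is split as t = t1 * 2^d + t0 with
# the low part t0 centered around zero. Only t1 is published.

Q = 8380417   # Dilithium modulus
D = 13        # number of dropped bits

def power2round(t: int) -> tuple[int, int]:
    t = t % Q
    t0 = t % (1 << D)
    if t0 > (1 << (D - 1)):      # center the remainder in (-2^{d-1}, 2^{d-1}]
        t0 -= (1 << D)
    return (t - t0) >> D, t0

t = 123456
t1, t0 = power2round(t)
assert t1 * (1 << D) + t0 == t and -(1 << (D - 1)) < t0 <= (1 << (D - 1))
```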
Zhicong Huang, Wen-jie Lu, Yuchen Wang, Cheng Hong, Tao Wei, WenGuang Chen
Achieving malicious security with high efficiency in dishonest-majority secure multiparty computation is a formidable challenge. The milestone works SPDZ and TinyOT have spawned a large family of protocols in this direction. For boolean circuits, state-of-the-art works (Cascudo et al., TCC 2020 and Escudero et al., CRYPTO 2022) have proposed schemes based on reverse multiplication-friendly embedding (RMFE) to reduce the amortized cost. However, these protocols are theoretically described and analyzed, resulting in a significant gap between theory and concrete efficiency.
Our work addresses existing gaps by refining and correcting several issues identified in prior research, leading to the first practically efficient realization of RMFE. We introduce an array of protocol enhancements, including RMFE-based quintuples and (extended) double-authenticated bits, aimed at improving the efficiency of maliciously secure boolean and mixed circuits. The culmination of these efforts is embodied in Coral, a comprehensive framework developed atop the MP-SPDZ library. Through rigorous evaluation across multiple benchmarks, Coral demonstrates a remarkable efficiency gain, outperforming the foremost theoretical approach by Escudero et al. (which incorporates our RMFE foundation albeit lacking our protocol enhancements) by a factor of 16-30×, and surpassing the leading practical implementation by Frederiksen et al. (ASIACRYPT 2015) by 4-7×.
02 September 2024
Christopher Harth-Kitzerow, Yongqin Wang, Rachit Rajat, Georg Carle, Murali Annavaram
Privacy-Preserving Machine Learning (PPML) is one of the most relevant use cases for Secure Multiparty Computation (MPC). While private training of large neural networks such as VGG-16 or ResNet-50 on state-of-the-art datasets such as ImageNet is still out of reach, given the performance overhead of MPC, private inference is starting to achieve practical runtimes. However, we show that, in contrast to plaintext machine learning, the usage of GPU acceleration for both linear and nonlinear neural network layers is actually counterproductive in PPML and leads to performance and scaling penalties. This can be observed in slow ReLU performance, high GPU memory requirements, and inefficient batch processing in state-of-the-art PPML frameworks, which prevents them from scaling to an inference throughput of multiple images per second and to more than eight images per batch on ImageNet.
To overcome these limitations, we propose PIGEON, an open-source framework for Private Inference of Neural Networks. PIGEON utilizes a novel ABG programming model that switches between \underline{A}rithmetic vectorization, \underline{B}itslicing, and \underline{G}PU offloading depending on the MPC-specific computation required by each layer.
Compared to the state-of-the-art PPML framework Piranha, PIGEON achieves two orders of magnitude improvements in ReLU throughput, reduces peak GPU memory utilization by one order of magnitude, and scales better with large batch sizes. This translates to one to two orders of magnitude improvements in throughput for large ImageNet batch sizes (e.g. 192) and more than 70% saturation of a 25 Gbit/s network.
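As a small aside on the bitslicing leg of the ABG model, the sketch below illustrates plain bitslicing (our generic example, not PIGEON code): transposing a batch of values into bit-plane words lets one machine-word boolean instruction act on 64 values at once, which is also how boolean-share MPC layers such as the comparison inside ReLU gain throughput.

```python
import numpy as np

# Minimal illustration of bitslicing: 64 independent 8-bit values are
# transposed into 8 machine words, one per bit plane, so a single 64-bit
# boolean instruction acts on all 64 values at once.

ONE = np.uint64(1)
WEIGHTS = ONE << np.arange(64, dtype=np.uint64)

def bitslice(vals):
    """Transpose 64 values into bit planes: planes[b] packs bit b of every value."""
    return [np.bitwise_or.reduce(((vals >> np.uint64(b)) & ONE) * WEIGHTS)
            for b in range(8)]

def unbitslice(planes):
    idx = np.arange(64, dtype=np.uint64)
    vals = np.zeros(64, dtype=np.uint64)
    for b, w in enumerate(planes):
        vals |= ((w >> idx) & ONE) << np.uint64(b)
    return vals

a = np.random.randint(0, 256, 64).astype(np.uint64)
b = np.random.randint(0, 256, 64).astype(np.uint64)
xored = [wa ^ wb for wa, wb in zip(bitslice(a), bitslice(b))]  # 8 word ops, not 64
assert np.all(unbitslice(xored) == (a ^ b))
```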
Tarun Yadav, Manoj Kumar
In recent years, ML-based differential distinguishers have been explored and compared with the classical methods. The complexity of a key recovery attack on block ciphers is calculated using the probability of a differential distinguisher provided by classical methods. Since theoretical computations suffice to calculate the data complexity in these cases, there are no restrictions from the practical availability of computational resources when attacking a block cipher using classical methods. However, ML-based differential cryptanalysis relies on a machine learning model that learns features from encrypted data using the available compute power. This restricts the accuracy of an ML distinguisher for larger numbers of rounds and for ciphers with large block sizes; the distinguisher can still be constructed in such cases, but its accuracy becomes very low. In this paper, we present a new approach to construct a differential distinguisher with high accuracy from an existing ML-based distinguisher of low accuracy. This approach outperforms all existing approaches with a similar objective. We demonstrate our method by constructing high-accuracy ML-based distinguishers for GIFT-128 and the ASCON permutation. For GIFT-128, the accuracy of the 7-round distinguisher is increased to 98.8% with $2^{9}$ data complexity. For ASCON, the accuracy of the 4-round distinguisher is increased to 99.4% with $2^{18}$ data complexity. We present the first ML-based distinguisher for 8 rounds of GIFT-128 using the differential-ML distinguisher presented at Latincrypt 2021. This distinguisher is constructed with 99.8% accuracy and $2^{18}$ data complexity.
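The statistical effect that makes such accuracy boosting possible can be sanity-checked with a short calculation. The sketch below (our simplification; the differential-ML combiner of Latincrypt 2021 uses a more refined statistic than a plain majority vote) shows how combining many samples from a weak distinguisher with accuracy barely above 1/2 yields a combined distinguisher whose accuracy approaches 1, at the cost of data complexity.

```python
from math import comb

# Back-of-the-envelope for why many low-accuracy samples yield one
# high-accuracy distinguisher: if one sample is classified correctly
# with probability p > 1/2, a majority vote over N independent samples
# is correct with probability
#   sum_{k > N/2} C(N, k) p^k (1-p)^(N-k),
# which approaches 1 exponentially fast in N.

def majority_accuracy(p: float, n: int) -> float:
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.55                      # assumed single-sample distinguisher accuracy
for n in [1, 25, 101, 501]:   # odd n avoids ties
    print(n, round(majority_accuracy(p, n), 4))
# Accuracy climbs from 0.55 toward ~1.0 as more data is used, which is
# the trade-off behind the quoted data complexities.
```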
Lorenzo Martinico, Markulf Kohlweiss
A Trusted Execution Environment (TEE) is a new type of security technology, implemented by CPU manufacturers, which guarantees integrity and confidentiality on a restricted execution environment to any remote verifier. TEEs are deployed on various consumer and commercial hardware platforms, and have been widely adopted as a component in the design of both theoretical and practical cryptographic protocols.
Within the provable security community, the use of TEEs as a setup assumption has converged to a standard ideal definition in the Universal Composability setting ($G_\mathsf{att}$, defined by Pass et al., Eurocrypt '17). However, it is unclear whether any real TEE design can actually implement this, or whether the diverse capabilities of today's TEE implementations will in fact converge to a single standard. It is therefore necessary for cryptographers and protocol designers to specify what assumptions the TEE they use must satisfy to support the correctness and security of their protocol.
To this end, this paper provides a more careful treatment of trusted execution than the existing literature, focusing on the capabilities of enclaves and adversaries. Our goal is to provide meaningful patterns for comparing different classes of TEEs, particularly how a weaker TEE functionality can UC-emulate a stronger one given an appropriate mechanism to bridge the two. We introduce a new, ``modular'' definition of TEEs that captures a broad range of pre-existing functionalities defined in the literature while maintaining their high level of abstraction. While our goal is not directly to model implementations of specific commercial TEE providers, our modular definition provides a way to capture more meaningful and realistic hardware capabilities. We provide a language to characterise TEE capabilities along the following terms:
- a set of trusted features available to the enclave;
- the set of allowed attacks for malicious interactions with the enclaves;
- the contents of attestation signatures.
We then define various possible ideal modular $G_\mathsf{att}$ functionality instantiations that capture existing variants in the literature, and provide generic constructions to implement stronger enclave functionalities from an existing setup. Finally, we conclude the paper with a simple example of how to protect against rollback attacks given access to a trusted storage feature.
Renas Bacho, Benedikt Wagner
Due to their simplicity, compactness, and algebraic structure, BLS signatures are among the most widely used signatures in practice. For example, used as multi-signatures, they are integral to Ethereum's proof-of-stake consensus. From the perspective of concrete security, however, BLS (multi-)signatures suffer from a security loss linear in the number of signing queries. It is well known that this loss cannot be avoided using current proof techniques.
In this paper, we introduce a new variant of BLS multi-signatures that achieves tight security while remaining fully compatible with regular BLS. In particular, our signatures can be seamlessly combined with regular BLS signatures, resulting in regular BLS signatures. Moreover, our scheme can easily be implemented using existing BLS implementations in a black-box way. It is also one of the most efficient non-interactive multi-signatures, and in particular more efficient than previous tightly secure schemes. We demonstrate the practical applicability of our scheme by showing how proof-of-stake protocols that currently use BLS can adopt our variant for fully compatible opt-in tight security.
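To see why such compatibility is plausible, recall the algebra of plain BLS multi-signatures, mocked up below in an intentionally insecure toy group (integers mod a prime standing in for a pairing-friendly curve; all parameters are ours and purely illustrative): partial signatures aggregate by simple group addition and verify against the sum of the public keys, exactly the property a compatible variant must preserve.

```python
import hashlib, secrets

# Toy illustration of the BLS multi-signature algebra (NOT secure: the
# "group" here is Z_q with trivial discrete log; real BLS uses pairing-
# friendly elliptic curves). Aggregation is plain addition, and the
# aggregate verifies like an ordinary BLS signature.

Q = (1 << 61) - 1          # toy group order (a Mersenne prime)
G = 1                      # toy generator of the "group" (Z_q, +)

def H(msg: bytes) -> int:  # hash-to-group stand-in
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % Q

def pairing(a: int, b: int) -> int:  # toy bilinear map: e(aG, bG) = ab
    return (a * b) % Q

def keygen():
    sk = secrets.randbelow(Q)
    return sk, (sk * G) % Q

def sign(sk: int, msg: bytes) -> int:
    return (sk * H(msg)) % Q

msg = b"block 42"
keys = [keygen() for _ in range(4)]
agg_sig = sum(sign(sk, msg) for sk, _ in keys) % Q    # aggregation = addition
agg_pk = sum(pk for _, pk in keys) % Q
assert pairing(agg_sig, G) == pairing(H(msg), agg_pk)  # BLS verification shape
```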
Jonas Bertels, Quinten Norga, Ingrid Verbauwhede
Kyber was selected by NIST as a Post-Quantum Cryptography Key Encapsulation Mechanism standard. This means that the industry now needs to transition and adopt these new standards. One of the most demanding operations in Kyber is the modular arithmetic, making it a suitable target for optimization. This work offers a novel modular reduction design with the lowest area on Xilinx FPGA platforms. Through K-reduction and LUT-based reduction, this novel design utilizes 49 LUTs and 1 DSP for one modular multiplication, as opposed to Xing and Li's 2021 CHES design, which requires 90 LUTs and 1 DSP. Our design is the smallest modular multiplier reported to date.
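For a software-level sense of the operation being optimized, the sketch below implements a modular multiplication mod Kyber's $q = 3329$ via generic Barrett reduction (our reference code; the paper's K-reduction and LUT-based design targets FPGA LUT/DSP primitives and is structured quite differently).

```python
# Software analogue of the operation being optimized: a modular
# multiplication mod Kyber's q = 3329, here via Barrett reduction,
# which trades the division by q for a multiplication and a shift.

Q = 3329
M = (1 << 24) // Q          # precomputed Barrett constant, floor(2^24 / q)

def barrett_mulmod(a: int, b: int) -> int:
    """Return a*b mod 3329 for 0 <= a, b < 3329 without a division."""
    t = a * b                       # < 2^24, so the estimate below is tight
    approx = (t * M) >> 24          # estimate of floor(t / q), off by at most 1
    r = t - approx * Q              # r in [0, 2q)
    return r - Q if r >= Q else r

for a, b in [(1, 1), (3328, 3328), (1234, 2345)]:
    assert barrett_mulmod(a, b) == (a * b) % Q
```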
30 August 2024
Jungho Moon, Zhanibek Omarov, Donghoon Yoo, Yongdae An, Heewon Chung
Homomorphic encryption is a cryptographic technique that enables arithmetic operations to be performed on encrypted data. However, word-wise fully homomorphic encryption schemes, such as the BGV, BFV, and CKKS schemes, only support addition and multiplication operations on ciphertexts. This limitation makes it challenging to perform non-linear operations directly on the encrypted data. To address this issue, prior research has proposed efficient approximation techniques that utilize iterative methods, such as functional composition, to identify optimal polynomials. These approximations are designed to have a low multiplicative depth and a reduced number of multiplications, as these criteria directly impact the performance of the approximated operations.
In this paper, we propose a novel method, named adaptive successive over-relaxation (aSOR), to further optimize the approximations used in homomorphic encryption schemes. Our experimental results show that the aSOR method can significantly reduce the computational effort required for these approximations, achieving a reduction of 2–9 times compared to state-of-the-art methodologies. We demonstrate the effectiveness of the aSOR method by applying it to a range of operations, including sign, comparison, ReLU, square root, reciprocal of the m-th root, and division. Our findings suggest that the aSOR method can greatly improve the efficiency of homomorphic encryption for performing non-linear operations.
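To make the optimization target concrete, the sketch below shows the kind of iterative functional composition that such methods speed up: the classical polynomial iteration $g(x) = (3x - x^3)/2$ converging to $\mathrm{sign}(x)$ on $[-1,1]$ using only FHE-friendly additions and multiplications. This is the generic baseline, not the aSOR method itself.

```python
import numpy as np

# Baseline that methods like aSOR accelerate: iterate the low-degree
# polynomial g(x) = (3x - x^3)/2, which pushes every x in [-1, 1]
# toward sign(x) using only additions and multiplications, the
# operations a word-wise FHE scheme supports natively.

def sign_approx(x, iters):
    for _ in range(iters):
        x = 0.5 * (3.0 * x - x ** 3)
    return x

xs = np.linspace(-1, 1, 9)
print(np.round(sign_approx(xs, 10), 3))
# Convergence is slow near 0, and each extra iteration costs
# multiplicative depth; that is why reducing the iteration count
# (as aSOR reportedly does, by a factor of 2-9x) matters so much.
```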
Shiyu Shen, Hao Yang, Wangchen Dai, Hong Zhang, Zhe Liu, Yunlei Zhao
Digital signatures are fundamental building blocks in various protocols to provide integrity and authenticity. The development of quantum computing has raised concerns about the security guarantees afforded by classical signature schemes. CRYSTALS-Dilithium is an efficient post-quantum digital signature scheme based on lattice cryptography and has been selected as the primary algorithm for standardization by the National Institute of Standards and Technology. In this work, we present a high-throughput GPU implementation of Dilithium. For individual operations, we employ a range of computational and memory optimizations to overcome sequential constraints, reduce memory usage and IO latency, address bank conflicts, and mitigate pipeline stalls. This results in high and balanced compute throughput and memory throughput for each operation. In terms of concurrent task processing, we leverage task-level batching to fully utilize parallelism and implement a memory pool mechanism for rapid memory access. We propose a dynamic task scheduling mechanism to improve multiprocessor occupancy and significantly reduce execution time. Furthermore, we apply asynchronous computing and launch multiple streams to hide data transfer latencies and maximize the computing capabilities of both CPU and GPU. Across all three security levels, our GPU implementation achieves over 160× speedups for signing and over 80× speedups for verification on both commercial and server-grade GPUs. This achieves microsecond-level amortized execution times for each task, offering a high-throughput and quantum-resistant solution suitable for a wide array of applications in real systems.
Anca Nitulescu, Nikitas Paslis, Carla Ràfols
In this work, we consider the setting where one or more users with low computational resources would like to outsource the task of proof generation for SNARKs to a single external entity, the Prover. We study the scenario in which the Prover has access to all statements and witnesses to be proven beforehand. We take a different approach to proof aggregation and design a new protocol that simultaneously reduces proving time and communication complexity, without going through recursive proof composition.
Our two main contributions are as follows. First, we design FLIP, a communication-efficient folding scheme in which we apply the Inner Pairing Product Argument to fold R1CS instances of the same language into a single relaxed R1CS instance; any proof system for the relaxed R1CS language can then be applied to prove the final instance. Second, we build a novel variation of Groth16 for relaxed R1CS with the same communication complexity, two extra pairings for verification, and an adapted trusted setup.
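The folding algebra that makes such a scheme possible can be checked mechanically. The sketch below verifies generic Nova-style relaxed R1CS folding over a toy prime field (our simplification: FLIP folds committed instances via the Inner Pairing Product Argument, and commitments are omitted here entirely).

```python
import numpy as np

# Sanity-check of generic relaxed-R1CS folding over a small prime field.
# Relaxed R1CS relation: (A z) * (B z) = u * (C z) + E.
# Folding two satisfied instances with challenge r yields another
# satisfied instance, thanks to the cross term T absorbing the mixed
# products.

P = 101  # toy field modulus

def is_satisfied(A, B, C, z, u, E):
    return np.array_equal((A @ z % P) * (B @ z % P) % P,
                          (u * (C @ z) + E) % P)

rng = np.random.default_rng(1)
A, B, C = (rng.integers(0, P, (3, 4)) for _ in range(3))

# Two trivially satisfied relaxed instances (solve for the slack E).
z1, z2 = rng.integers(0, P, 4), rng.integers(0, P, 4)
u1, u2 = 1, 1
E1 = ((A @ z1) * (B @ z1) - u1 * (C @ z1)) % P
E2 = ((A @ z2) * (B @ z2) - u2 * (C @ z2)) % P

r = 7  # verifier challenge
T = ((A @ z1) * (B @ z2) + (A @ z2) * (B @ z1)
     - u1 * (C @ z2) - u2 * (C @ z1)) % P       # cross term
z, u = (z1 + r * z2) % P, (u1 + r * u2) % P
E = (E1 + r * T + r * r * E2) % P
assert is_satisfied(A, B, C, z, u, E)           # folded instance still satisfies
```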
Compared to SnarkPack - a prior solution addressing scaling for multiple Groth16 proofs - our scheme improves prover complexity by orders of magnitude, if we consider the total cost of generating the SNARK proofs one by one plus the aggregation effort.
An immediate application of our solution is Filecoin, a decentralized storage network based on incentives that generates, per day, more than 6 million SNARKs for large circuits of 100 million constraints.
Sabyasachi Dey, Gregor Leander, Nitin Kumar Sharma
In this paper, we present an improved attack on the stream cipher Salsa20. Our improvements are based on two technical contributions.
First, we make use of the distribution of a linear combination of several random variables derived from different differentials, and explain how to exploit it to improve the attack complexity. Second, we study how to optimally choose the actual values of the so-called probabilistic neutral bits. Because these key bits have limited influence on the computation, the usual attack approach fixes them to a constant value, often zero for simplicity. As we show, despite their limited influence, the constant can be chosen in significantly better ways, and intriguingly, zero is the worst choice. Using this, we propose the first-ever attack on 7.5 rounds of the $128$-bit key version of Salsa20. We also improve the attacks against the 8-round version of $256$-bit key Salsa20 and the 7-round version of $128$-bit key Salsa20.
Benedikt Wagner, Arantxa Zapico
Data availability sampling allows clients to verify availability of data on a peer-to-peer network provided by an untrusted source. This is achieved without downloading the full data by sampling random positions of the encoded data.
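The core probabilistic argument behind this sampling is short enough to simulate directly. The sketch below (generic parameters of our choosing, not PeerDAS's) encodes data so that any half of the chunks suffice for reconstruction; an adversary who withholds enough chunks to prevent reconstruction is then caught by each uniform sample with probability above 1/2, so $s$ successful samples bound the cheating probability by roughly $2^{-s}$.

```python
import random

# Core probabilistic argument of data availability sampling: after 2x
# erasure coding, data is unrecoverable only if more than half the
# chunks are withheld, and then each uniformly random sample hits a
# missing chunk with probability > 1/2. s samples that all succeed
# bound the cheating probability by ~2^-s.

n = 512                       # encoded chunks; any n/2 of them reconstruct the data
withheld = set(random.sample(range(n), n // 2 + 1))  # barely unrecoverable

def sample_ok(s: int) -> bool:
    """True if s random samples all land on available chunks."""
    return all(random.randrange(n) not in withheld for _ in range(s))

s = 30
trials = 100_000
fooled = sum(sample_ok(s) for _ in range(trials))
print(f"client fooled in {fooled}/{trials} trials (expected ~2^-{s} per trial)")
```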
The long-term vision of the Ethereum community includes a comprehensive data availability protocol using polynomial commitments and tensor codes. As the next step towards this vision, an intermediate solution called PeerDAS is about to be integrated, to bridge the way to the full protocol. With PeerDAS soon becoming an integral part of Ethereum's consensus layer, understanding its security guarantees is essential.
This document aims to describe the cryptography used in PeerDAS in a manner accessible to the cryptographic community, encouraging innovation and improvements, and to explicitly state the security guarantees of PeerDAS.
Christian Badertscher, Fabio Banfi, Jesus Diaz
Self-Sovereign Identity (SSI) empowers individuals and organizations with full control over their data. Decentralized identifiers (DIDs) are at its center, where a DID contains a collection of public keys associated with an entity, and further information to enable entities to engage via secure and private messaging across different platforms. A crucial stepping stone is DIDComm, a cryptographic communication layer that is in production with version 2. Due to its widespread and active deployment, a formal study of DIDComm is highly overdue.
We present the first formal analysis of DIDComm’s cryptography, and formalize its goal of (sender-) anonymity and authenticity. We follow a composable approach to capture its security over a generic network, formulating the goal of DIDComm as a strong ideal communication resource. We prove that the proposed encryption modes reach the expected level of privacy and authenticity, but leak beyond the leakage induced by an underlying network (captured by a parameterizable resource).
We further use our formalism to propose enhancements and prove their security: first, we present an optimized algorithm that simultaneously achieves anonymity and authenticity, conforms to the DIDComm message format, and outperforms the current DIDComm proposal in both ciphertext size and computation time by almost a factor of 2. Second, we present a novel DIDComm mode that fulfills the notion of anonymity preservation, in that it never leaks more than the leakage induced by the network it is executed over. We finally show how to merge this new mode into our improved algorithm, obtaining an efficient all-in-one mode for full anonymity and authenticity.
Biming Zhou, Haodong Jiang, Yunlei Zhao
In the post-quantum migration of TLS 1.3, the ephemeral Diffie-Hellman key exchange must be replaced with a post-quantum key encapsulation mechanism (KEM). At EUROCRYPT 2022, Huguenin-Dumittan and Vaudenay [EC:HugVau22] demonstrated that KEMs with standard CPA security are sufficient for the security of the TLS 1.3 handshake. However, their result is only proven in the random oracle model (ROM), and, as the authors comment, their reduction is very much non-tight and not sufficient to guarantee security in practice due to the $O(q^6)$-loss, where $q$ is the number of the adversary's queries to random oracles. Moreover, in order to analyze the post-quantum security of the TLS 1.3 handshake with a KEM, it is necessary to consider security in the quantum ROM (QROM). Therefore, they leave the tightness improvement of their ROM proof and the QROM proof of such a result as an interesting open question.
In this paper, we resolve this problem. We improve the ROM proof in [EC:HugVau22] from an $O(q^6)$-loss to an $O(q)$-loss with standard CPA-secure KEMs, which can be directly obtained from the underlying public-key encryption (PKE) scheme in CRYSTALS-Kyber. Moreover, we show that if the KEMs are constructed from rigid deterministic PKE schemes such as the ones in Classic McEliece and NTRU, this $O(q)$-loss can be further improved to an $O(1)$-loss. Hence, our reductions are sufficient to guarantee security in practice. According to our results, a CPA-secure KEM (which is more concise and efficient than the currently used CCA/1CCA-secure KEM) can be directly employed to construct a post-quantum TLS 1.3. Furthermore, we lift our ROM result into the QROM and prove for the first time that CPA-secure KEMs are also sufficient for the post-quantum TLS 1.3 handshake. In particular, the techniques introduced to improve reduction tightness in this paper may be of independent interest.
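The simplicity gain the authors point to is visible in the very shape of such a KEM. The sketch below wraps a deterministic PKE directly into a CPA-style KEM with no re-encryption or implicit-rejection machinery as in the usual CCA-oriented FO transform (textbook RSA with toy parameters stands in for a rigid deterministic PKE like Classic McEliece or NTRU; everything here is our illustrative stand-in, not a scheme from the paper).

```python
import hashlib, secrets

# Shape of a plain CPA-style KEM built directly from a deterministic PKE:
# encapsulate a random message, derive the session key by hashing it.
# No re-encryption check, no implicit rejection. Textbook RSA with toy
# primes is only a runnable stand-in for a rigid deterministic PKE.

P, Q_ = 32771, 65537                # toy primes; real schemes use proper parameters
N, E = P * Q_, 17
D = pow(E, -1, (P - 1) * (Q_ - 1))  # private exponent

def encaps(n: int, e: int) -> tuple[int, bytes]:
    m = secrets.randbelow(n - 2) + 2                   # random message
    c = pow(m, e, n)                                   # deterministic encryption
    k = hashlib.sha256(m.to_bytes(8, "big")).digest()  # session key K = H(m)
    return c, k

def decaps(c: int) -> bytes:
    m = pow(c, D, N)                                   # decrypt, then re-derive K
    return hashlib.sha256(m.to_bytes(8, "big")).digest()

c, k_client = encaps(N, E)
assert decaps(c) == k_client   # both sides derive the same handshake secret
```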
Debasmita Chakraborty, Hosein Hadipour, Phuong Hoa Nguyen, Maria Eichlseder
The impossible differential (ID) attack is one of the most important cryptanalytic techniques for block ciphers. There are two phases to finding an ID attack: searching for the distinguisher and building a key recovery upon it. Previous works only focused on automated distinguisher discovery, leaving key recovery as a manual post-processing task, which may lead to a suboptimal final complexity. At EUROCRYPT 2023, Hadipour et al. introduced a unified constraint programming (CP) approach based on satisfiability for finding optimal complete ID attacks in strongly aligned ciphers. While this approach was extended to weakly aligned designs like PRESENT at ToSC 2024, its application to ARX and AndRX ciphers remained future work. Moreover, this method only exploited ID distinguishers with direct contradictions at the junction of two deterministic transitions. In contrast, some ID distinguishers, particularly for ARX and AndRX designs, may not be detectable by checking only the existence of direct contradictions.
This paper fills these gaps by extending Hadipour et al.'s method to handle indirect contradictions and adapting it for ARX and AndRX designs. We also present a similar method for identifying zero-correlation (ZC) distinguishers. Moreover, we extend our new model for finding ID distinguishers to a unified optimization problem that includes both the distinguisher and the key recovery for AndRX designs. Our method improves ID attacks and introduces new distinguishers for several ciphers, such as SIMON, SPECK, Simeck, ChaCha, Chaskey, LEA, and SipHash. For example, we achieve a one-round improvement in the ID attacks against SIMON-64-96, SIMON-64-128, SIMON-128-128, SIMON-128-256 and a two-round improvement in the ID attacks against SIMON-128-192. These results significantly contribute to our understanding of the effectiveness of automated tools in the cryptanalysis of different design paradigms.
Lynn Engelberts, Simona Etinski, Johanna Loyer
Sieving using near-neighbor search techniques is a well-known method in lattice-based cryptanalysis, yielding the current best runtime for the shortest vector problem in both the classical [BDGL16] and quantum [BCSS23] setting. Recently, sieving has also become an important tool in code-based cryptanalysis. Specifically, using a sieving subroutine, [GJN23, DEEK24] presented a variant of the information-set decoding (ISD) framework, which is commonly used for attacking cryptographically relevant instances of the decoding problem. The resulting sieving-based ISD framework yields complexities close to the best-performing classical algorithms for the decoding problem such as [BJMM12, BM18]. It is therefore natural to ask how well quantum versions perform.
In this work, we introduce the first quantum algorithms for code sieving by designing quantum variants of the aforementioned sieving subroutine. In particular, using quantum-walk techniques, we provide a speed-up over the best known classical algorithm from [DEEK24] and over a variant using Grover's algorithm [Gro96]. Our quantum-walk algorithm exploits the structure of the underlying search problem by adding a layer of locality-sensitive filtering, inspired by the quantum-walk algorithm for lattice sieving from [CL21]. We complement our asymptotic analysis of the quantum algorithms with numerical results, and observe that our quantum speed-ups for code sieving behave similarly as those observed in lattice sieving.
In addition, we show that a natural quantum analog of the sieving-based ISD framework does not provide any speed-up over the first presented quantum ISD algorithm [Ber10]. Our analysis highlights that the framework should be adapted in order to outperform the state-of-the-art of quantum ISD algorithms [KT17, Kir18].