IACR News
Here you can see all recent updates to the IACR webpage.
25 October 2024
Alexander Poremba, Yihui Quek, Peter Shor
ePrint Report
Random classical codes have good error correcting properties, and yet they are notoriously hard to decode in practice. Despite many decades of extensive study, the fastest known algorithms still run in exponential time. The Learning Parity with Noise (LPN) problem, which can be seen as the task of decoding a random linear code in the presence of noise, has thus emerged as a prominent hardness assumption with numerous applications in both cryptography and learning theory.
Is there a natural quantum analog of the LPN problem? In this work, we introduce the Learning Stabilizers with Noise (LSN) problem, the task of decoding a random stabilizer code in the presence of local depolarizing noise. We give both polynomial-time and exponential-time quantum algorithms for solving LSN in various depolarizing noise regimes, ranging from extremely low noise, to low constant noise rates, and even higher noise rates up to a threshold. Next, we provide concrete evidence that LSN is hard. First, we show that LSN includes LPN as a special case, which suggests that it is at least as hard as its classical counterpart. Second, we prove a worst-case to average-case reduction for variants of LSN. We then ask: what is the computational complexity of solving LSN? Because the task features quantum inputs, its complexity cannot be characterized by traditional complexity classes. Instead, we show that the LSN problem lies in a recently introduced (distributional and oracle) unitary synthesis class. Finally, we identify several applications of our LSN assumption, ranging from the construction of quantum bit commitment schemes to the computational limitations of learning from quantum data.
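To make the classical baseline concrete, here is a minimal Python sketch of LPN sample generation (illustrative parameters, not taken from the paper); LSN replaces the random linear code below with a random stabilizer code and the Bernoulli noise with local depolarizing noise on quantum registers.

```python
# Hedged sketch (not from the paper): generating LPN samples over GF(2).
# n = dimension, m = number of samples, tau = noise rate (illustrative values).
import secrets

def lpn_samples(n=16, m=64, tau=0.125):
    """Return (A, b, s) with b = A*s + e over GF(2), where each e_i = 1 w.p. tau."""
    s = [secrets.randbelow(2) for _ in range(n)]            # secret vector
    A, b = [], []
    for _ in range(m):
        a = [secrets.randbelow(2) for _ in range(n)]        # random row of the code
        noiseless = sum(ai * si for ai, si in zip(a, s)) % 2
        e = 1 if secrets.randbelow(1000) < int(tau * 1000) else 0  # Bernoulli(tau) noise
        A.append(a)
        b.append((noiseless + e) % 2)
    return A, b, s

A, b, s = lpn_samples()
# Recovering s from (A, b) is the LPN problem; without noise it is just Gaussian elimination.
```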
Florian Krieger, Florian Hirner, Ahmet Can Mert, Sujoy Sinha Roy
ePrint Report
Modern cryptographic techniques such as fully homomorphic encryption (FHE) have recently gained broad attention. Most of these cryptosystems rely on lattice problems wherein polynomial multiplication forms the computational bottleneck. A popular method to accelerate these polynomial multiplications is the Number-Theoretic Transform (NTT). Recent works aim to improve the practical deployability of the NTT and propose toolchains supporting the NTT hardware accelerator design process. However, existing design tools do not provide on-the-fly twiddle factor generation (TFG), which leads to high memory demands. Motivated by this, we present OpenNTT, a fully automated, open-source framework to compile NTT hardware accelerators with TFG for various NTT types and parameter sets. We address the challenge of combining conflict-free memory accesses and efficient, linear twiddle factor generation through a dedicated NTT processing order. Following this order, we develop a flexible twiddle factor generation method with minimal memory usage. These core concepts, together with a frequency-optimized hardware architecture, form our OpenNTT framework. We use OpenNTT to compile and test NTT hardware designs with various parameter sets on FPGAs. The obtained results show a clear memory reduction due to TFG and a speedup of 2.7× in latency and 2.2× in area-time product, compared to prior art.
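As a minimal software illustration of generating twiddle factors on the fly rather than reading them from a precomputed table (OpenNTT itself targets hardware with conflict-free memory accesses, so this is only a conceptual sketch with toy parameters q = 17, n = 8):

```python
def ntt(a, omega, q):
    """Recursive radix-2 NTT of a (length a power of two) modulo q.
    Twiddle factors are generated on the fly as successive powers of omega,
    so no twiddle table is stored."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % q, q)
    odd = ntt(a[1::2], omega * omega % q, q)
    out, w = [0] * n, 1                      # w runs through omega^0, omega^1, ...
    for k in range(n // 2):
        t = w * odd[k] % q
        out[k] = (even[k] + t) % q
        out[k + n // 2] = (even[k] - t) % q
        w = w * omega % q                    # on-the-fly twiddle update
    return out

q, n = 17, 8
omega = 9                                    # a primitive 8-th root of unity mod 17 (9^4 = -1)
coeffs = [1, 2, 3, 4, 0, 0, 0, 0]
evals = ntt(coeffs, omega, q)
```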
Miranda Christ, Sam Gunn, Tal Malkin, Mariana Raykova
ePrint Report
The recent explosion of high-quality language models has necessitated new methods for identifying AI-generated text. Watermarking is a leading solution and could prove to be an essential tool in the age of generative AI. Existing approaches embed watermarks at inference and crucially rely on the large language model (LLM) specification and parameters being secret, which makes them inapplicable to the open-source setting. In this work, we introduce the first watermarking scheme for open-source LLMs. Our scheme works by modifying the parameters of the model, but the watermark can be detected from just the outputs of the model. Perhaps surprisingly, we prove that our watermarks are unremovable under certain assumptions about the adversary's knowledge. To demonstrate the behavior of our construction under concrete parameter instantiations, we present experimental results with OPT-6.7B and OPT-1.3B. We demonstrate robustness to both token substitution and perturbation of the model parameters. We find that the stronger of these attacks, the model-perturbation attack, requires deteriorating the quality score to 0 out of 100 in order to bring the detection rate down to 50%.
Thomas den Hollander, Sören Kleine, Marzio Mula, Daniel Slamanig, Sebastian A. Spindler
ePrint Report
Proving knowledge of a secret isogeny has recently been proposed as a means to generate supersingular elliptic curves of unknown endomorphism ring, but is equally important for cryptographic protocol design as well as for real world deployments. Recently, Cong, Lai and Levin (ACNS'23) have investigated the use of general-purpose (non-interactive) zero-knowledge proof systems for proving the knowledge of an isogeny of degree $2^k$ between supersingular elliptic curves. In particular, their approach is to model this relation via a sequence of $k$ successive steps of a walk in the supersingular isogeny graph and to show that the respective $j$-invariants are roots of the second modular polynomial. They then arithmetize this relation and show that this approach, when compared to state-of-the-art tailor-made proofs of knowledge by Basso et al. (EUROCRYPT'23), gives a 3-10$\times$ improvement in proof and verification times, with comparable proof sizes.
In this paper we ask whether we can further improve the modular polynomial-based approach and generalize its application to primes ${\ell>2}$, as used in some recent isogeny-based constructions. We will answer these questions affirmatively, by designing efficient arithmetizations for each ${\ell \in \{2, 3, 5, 7, 13\}}$ that achieve an improvement over Cong, Lai and Levin of up to 48%.
Our main technical tool and source of efficiency gains is to switch from classical modular polynomials to canonical modular polynomials. Adapting the well-known results on the former to the latter polynomials, however, is not straightforward and requires some technical effort. We prove various interesting connections via novel use of resultant theory, and advance the understanding of canonical modular polynomials, which might be of independent interest.
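For readers unfamiliar with the arithmetization, the standard fact it builds on (stated here for the classical modular polynomials; the paper's contribution is the switch to canonical ones) is that a length-$k$ walk $E_0 \to E_1 \to \dots \to E_k$ of $\ell$-isogenies is witnessed by its $j$-invariants satisfying
\[
  \Phi_\ell\bigl(j(E_{i-1}),\, j(E_i)\bigr) = 0 \qquad \text{for } 1 \le i \le k,
\]
where $\Phi_\ell$ denotes the classical modular polynomial of level $\ell$ (for $\ell = 2$, the second modular polynomial mentioned above).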
Aurore Guillevic, Simon Masson
ePrint Report
Based on the CM method for primality testing (ECPP) by Atkin and Morain published in 1993, we present two algorithms: one to generate embedded elliptic curves of SNARK-friendly curves, with a variable discriminant D; and another to generate families (parameterized by polynomials) with a fixed discriminant D. When D = 3 mod 4, it is possible to obtain a prime-order curve, and form a cycle. We apply our technique first to generate more embedded curves like Bandersnatch with BLS12-381, and we propose a plain twist-secure cycle above BLS12-381 with D = 6673027. We also discuss the scarcity of Bandersnatch-like CM curves, and show that with our algorithm, it is only a question of core-hours to find them. Second, we obtain families of prime-order embedded curves of discriminant D = 3 for BLS and KSS18 curves. Our method obtains families of embedded curves above KSS16 and can work for any KSS family. Our work generalizes the work on Bandersnatch (Masson, Sanso, and Zhang, and Sanso and El Housni).
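As background on the CM method being applied (a standard statement, not the paper's specific algorithms): for a CM discriminant $-D$, one searches for a prime $q$ and integers $t, v$ with
\[
  4q = t^{2} + D v^{2},
\]
in which case the CM construction yields a curve $E/\mathbb{F}_q$ of order $q + 1 - t$ (or $q + 1 + t$ for its quadratic twist). Roughly speaking, in the embedded-curve setting $q$ is fixed to the scalar field of the outer SNARK-friendly curve, and the search runs over $t$, $v$ (and, in the variable-discriminant algorithm, over $D$).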
Sarah Arpin, Tyler Raven Billingsley, Daniel Rayor Hast, Jun Bo Lau, Ray Perlner, Angela Robinson
ePrint Report
We present experimental findings on the decoding failure rate (DFR) of BIKE, a fourth-round candidate in the NIST Post-Quantum Standardization process, at the 20-bit security level using graph-theoretic approaches. We select parameters according to BIKE design principles and conduct a series of experiments using Rust to generate significantly more decoding failure instances than in prior work using SageMath. For each decoding failure, we study the internal state of the decoder at each iteration and find that for 97% of decoding failures at block size $r=587$, the decoder reaches a fixed point within 7 iterations. We then consider the corresponding Tanner graphs of each decoding failure instance to determine whether the decoding failures are due to absorbing sets. We find that 81% of decoding failures at $r=587$ were caused by absorbing sets, and of these the majority were $(d,d)$-near codewords.
Jiangshan Long, Changhai Ou, Zhu Wang, Fan Zhang
ePrint Report
Evaluation of cryptographic implementations with respect to side-channel leakage is now mandated at high security levels. Typically, the evaluation involves four stages: detection, modeling, certification and secret recovery. In pursuit of the specific goal at each stage, inherently different techniques used to be considered necessary. However, since the recent works at EUROCRYPT 2022 and EUROCRYPT 2024, linear regression analysis (LRA) has uniquely become a technique that applies well throughout all the stages. In this paper, we concentrate on this silver-bullet technique within the field of side-channel analysis. First, we address the fundamental questions of why and how to use LRA. The discussion of its nominal and binary nature explains its strong applicability. To sustain effective outcomes, we provide in-depth analyses of the design matrix, regarding the sample distribution of the plaintext and the chosen polynomial degree. We summarize ideal conditions that completely avoid the multicollinearity problem, and explore the novel evaluator-advantageous property of LRA by means of model diagnosis. Then, we trace its roots and theoretically elaborate its connections with traditional side-channel techniques, including Correlation Power Analysis (CPA), Distance-of-Means analysis (DoM) and Partition Power Analysis (PPA), in terms of regression coefficients, regression model and coefficient of determination. Finally, we probe into the state-of-the-art combined LRA with the so-called collapse function, demonstrating its relationship with another refined technique, G-DoM. We argue that properly relaxing the definition of bit groups equally satisfies our conclusions. Experimental results are in line with the theory, confirming its correctness.
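To illustrate the design-matrix and coefficient-of-determination viewpoint described above, here is a hedged Python sketch of single-sample LRA key ranking; the bit-level (degree-1) basis is one possible choice of polynomial degree, and the S-box, traces and plaintexts in the demo are synthetic stand-ins rather than anything from the paper.

```python
# Hedged sketch of basic linear regression analysis (LRA) for key ranking.
import numpy as np

def bit_basis(values):
    """Design matrix: a constant column plus one column per bit of the value (degree-1 basis)."""
    bits = np.array([[(v >> i) & 1 for i in range(8)] for v in values], float)
    return np.hstack([np.ones((len(values), 1)), bits])

def lra_rank_keys(traces, pts, sbox):
    """Return the coefficient of determination R^2 per key guess (higher = better fit)."""
    y = traces - traces.mean()
    ss_tot = float(y @ y)
    scores = []
    for k in range(256):
        X = bit_basis([sbox[p ^ k] for p in pts])          # hypothesised intermediates
        beta, *_ = np.linalg.lstsq(X, traces, rcond=None)  # least-squares fit
        resid = traces - X @ beta
        scores.append(1.0 - float(resid @ resid) / ss_tot)
    return np.array(scores)

# Tiny synthetic demo: Hamming-weight leakage of sbox[p ^ key] plus Gaussian noise.
rng = np.random.default_rng(1)
sbox = rng.permutation(256)                                # stand-in S-box for the demo
key = 42
pts = rng.integers(0, 256, size=2000)
hw = np.array([bin(sbox[p ^ key]).count("1") for p in pts], float)
traces = hw + rng.normal(0, 1, size=len(pts))
best_guess = int(np.argmax(lra_rank_keys(traces, pts, sbox)))  # expected to equal key
```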
Kung-Wei Hu, Huan-Chih Wang, Ja-Ling Wu
ePrint Report
This paper explores advancements in the Gentry-Sahai-Waters (GSW) fully homomorphic encryption scheme, addressing challenges related to message data range limitations and ciphertext size constraints. We introduce a novel approach utilizing the Chinese Remainder Theorem (CRT) for message decomposition, significantly expanding the allowable message range to the entire plaintext space. This method enables unrestricted message selection and supports parallel homomorphic operations without intermediate decryption. Additionally, we adapt existing ciphertext compression techniques, such as the PVW-like scheme, to reduce memory overhead associated with ciphertexts. Our experimental results demonstrate the effectiveness of the CRT-based decomposition in increasing the upper bound of message values and improving the scheme's capacity for consecutive homomorphic operations. However, compression introduces a trade-off, necessitating a reduced message range due to error accumulation. This research contributes to enhancing the practicality and efficiency of the GSW encryption scheme for complex computational scenarios while managing the balance between expanded message range, computational complexity, and storage requirements.
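The CRT message decomposition itself is elementary arithmetic; a minimal Python sketch (illustrative moduli, and only the plaintext-side arithmetic, not the GSW ciphertext operations) might look as follows.

```python
# Minimal sketch of CRT decomposition/recombination of a message into residues.
from math import prod

moduli = [5, 7, 11, 13]                 # pairwise coprime; the plaintext space is their product
M = prod(moduli)                        # 5005

def crt_split(m):
    return [m % p for p in moduli]      # each residue lives in a small component

def crt_combine(residues):
    x = 0
    for r, p in zip(residues, moduli):
        Mi = M // p
        x += r * Mi * pow(Mi, -1, p)    # Mi * (Mi^{-1} mod p) is 1 mod p and 0 mod the others
    return x % M

m = 1234
parts = crt_split(m)                    # homomorphic operations can act on the parts in parallel
assert crt_combine([(2 * r) % p for r, p in zip(parts, moduli)]) == (2 * m) % M
```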
Umberto Cerruti
ePrint Report
This is a survey on the One Time Pad (OTP) and its derivatives, from its origins to modern times. OTP, if used correctly, is (the only) cryptographic code that no computing power, present or future, can break. Naturally, the discussion shifts to the creation of long random sequences, starting from short ones, which can be easily shared. We could call it the Short Key Dream. Many problems inevitably arise, which affect many fields of computer science, mathematics and knowledge in general. This work presents a vast bibliography that includes fundamental classical works and current papers on randomness, pseudorandom number generators, compressibility, unpredictability and more.
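For completeness, the OTP construction the survey revolves around is a one-liner; a minimal Python sketch (illustrative only):

```python
# XOR with a truly random key as long as the message, used only once.
import secrets

def otp_encrypt(message: bytes):
    key = secrets.token_bytes(len(message))                  # key as long as the message
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, key = otp_encrypt(b"attack at dawn")
assert otp_decrypt(ct, key) == b"attack at dawn"
```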
Sabrina Kunzweiler, Luciano Maino, Tomoki Moriya, Christophe Petit, Giacomo Pope, Damien Robert, Miha Stopar, Yan Bo Ti
ePrint Report
We provide explicit descriptions for radical 2-isogenies in dimensions one, two and three using theta coordinates. These formulas allow us to efficiently navigate in the corresponding isogeny graphs.
As an application of this, we implement different versions of the CGL hash function. Notably, the three-dimensional version is fastest, which demonstrates yet another potential of using higher-dimensional isogeny graphs in cryptography.
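As a toy illustration of the CGL-style hash (not the theta-coordinate formulas of this work), the sketch below performs a non-backtracking walk on a small 3-regular stand-in graph, letting each message bit choose one of the two forward edges; in the actual hash function the vertices are supersingular curves and the edges are 2-isogenies.

```python
# Toy, runnable illustration of a CGL-style walk. The Petersen graph stands in
# for the supersingular 2-isogeny graph; the final vertex plays the role of j(E_k).
PETERSEN = {
    0: [1, 4, 5], 1: [0, 2, 6], 2: [1, 3, 7], 3: [2, 4, 8], 4: [0, 3, 9],
    5: [0, 7, 8], 6: [1, 8, 9], 7: [2, 5, 9], 8: [3, 5, 6], 9: [4, 6, 7],
}

def cgl_toy_hash(bits, start=0, first=1):
    prev, cur = start, first                     # fixed starting edge of the walk
    for b in bits:
        options = sorted(v for v in PETERSEN[cur] if v != prev)  # exclude backtracking
        prev, cur = cur, options[b]              # the message bit selects the next edge
    return cur

digest = cgl_toy_hash([1, 0, 1, 1, 0, 1])
```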
Benedikt Bünz, Pratyush Mishra, Wilson Nguyen, William Wang
ePrint Report
Proof-Carrying Data (PCD) is a foundational tool for ensuring the correctness of incremental distributed computations that has found numerous applications in theory and practice. The state-of-the-art PCD constructions are obtained via accumulation or folding schemes. Unfortunately, almost all known constructions of accumulation schemes rely on homomorphic vector commitments (VCs), which results in relatively high computational costs and insecurity in the face of quantum adversaries. A recent work of Bünz, Mishra, Nguyen, and Wang removes the dependence on homomorphic VCs by relying only on the random oracle model, but introduces a bound on the number of consecutive accumulation steps, which in turn bounds the depth of the PCD computation graph and greatly affects prover and verifier efficiency.
In this work, we propose Arc, a novel hash-based accumulation scheme that overcomes this restriction and supports an unbounded number of accumulation steps. The core building block underlying Arc is a new accumulation scheme for claims about proximity of claimed codewords to the Reed--Solomon code. Our approach achieves near-optimal efficiency, requiring a small number of Merkle tree openings relative to the code rate, and avoids the efficiency loss associated with bounded accumulation depth. Unlike prior work, our scheme is also able to accumulate claims up to list-decoding radius, resulting in concrete efficiency improvements.
We use this accumulation scheme to construct two distinct accumulation schemes, again relying solely on random oracles. The first approach accumulates RS proximity claims and can be used as an almost-drop-in replacement in existing PCD deployments based on IOP-based SNARKs. The second approach directly constructs an accumulation scheme for rank-1 constraint systems (and more generally polynomial constraint systems) that is simpler and more efficient than the former and prior approaches.
We introduce the notion of Interactive Oracle Reductions (IORs) to enable a modular and simple security analysis. These extend prior notions of Reductions of Knowledge to the setting of IOPs.
Aikata Aikata, Sujoy Sinha Roy
ePrint Report
Fully Homomorphic Encryption (FHE) is a promising privacy-enhancing technique that enables secure and private data processing on untrusted servers, such as privacy-preserving neural network (NN) evaluations. However, its practical application presents significant challenges. Limitations in how data is stored within homomorphic ciphertexts and restrictions on the types of operations that can be performed create computational bottlenecks. As a result, a growing body of research focuses on optimizing existing evaluation techniques for efficient execution in the homomorphic domain.
One key operation in this space is matrix multiplication, which forms the foundation of most neural networks. Several studies have even proposed new FHE schemes specifically to accelerate this operation. The optimization of matrix multiplication is also the primary goal of our work. We leverage the Single Instruction Multiple Data (SIMD) capabilities of FHE to increase data packing and significantly reduce the KeySwitch operation count, an expensive low-level routine in homomorphic encryption. By minimizing KeySwitching, we surpass current state-of-the-art solutions, requiring only a minimal multiplicative depth of two.
The best-known complexity for matrix multiplication at this depth is $\mathcal{O}(d)$ for matrices of size $d\times d$. Remarkably, even the leading techniques that require a multiplicative depth of three still incur a KeySwitch complexity of $\mathcal{O}(d)$. In contrast, our method reduces this complexity to $\mathcal{O}(\log{d})$ while maintaining the same level of data packing. Our solution broadly applies to all FHE schemes supporting Single Instruction Multiple Data (SIMD) operations. We further generalize the technique in two directions: allowing arbitrary packing availability and extending it to rectangular matrices. This versatile approach offers significant improvements in matrix multiplication performance and enables faster evaluation of privacy-preserving neural network applications.
Supriya Adhikary, Wai Kong Lee, Angshuman Karmakar, Yongwoo Lee, Seong Oun Hwang, Ramachandra Achar
ePrint Report
Large polynomial multiplication is one of the computational bottlenecks in fully homomorphic encryption implementations. Usually, these multiplications are implemented using the number-theoretic transform to speed up the computation. State-of-the-art GPU-based implementations of fully homomorphic encryption compute the number-theoretic transform in two different kernels, due to the synchronization between GPU blocks that is necessary to ensure correctness. This can be a serious limitation in embedded systems that only have constrained computational resources to support time-consuming homomorphic encryption. In this paper, we propose a series of techniques to improve the performance of the number-theoretic transform targeting homomorphic encryption on a GPU device. First, we propose arranging the polynomials in a transposed manner and skipping the last two levels of the radix-4 number-theoretic transform, allowing us to completely avoid block synchronization in the NTT implementation. This technique improves the performance of homomorphic encryption by 1.37× and 1.34× on an RTX 4060 and a Jetson Orin Nano respectively, compared to the conventional approach that uses the full NTT without skipping any levels. However, such an approach also introduces extra overhead in the subsequent point-wise multiplication, which slows down homomorphic multiplication. To reduce this negative impact, we propose a fast 16 × 16 point-wise multiplication implementation that relies on the heavily optimized Toom-Cook 4-way algorithm. Experimental results show that our proposed homomorphic multiplication achieves latency similar to Jung et al. and Yang et al., which are the best results to date. This shows that the proposed cuTraNTT is able to reduce the latency of homomorphic encryption without sacrificing performance in homomorphic multiplication.
Yuuki Fujita, Yusuke Sakai, Kyosuke Yamashita, Goichiro Hanaoka
ePrint Report
When we use signature schemes in practice, we sometimes need to consider security beyond unforgeability. This paper considers security against key substitution attacks for multi-signer signatures (i.e., aggregate signatures and multi-signatures). Intuitively, this security property ensures that a malicious party cannot claim ownership of a signature that was created by an honest signer. We investigate security against key substitution attacks for a wide range of aggregate signature schemes and multi-signature schemes: the Boneh-Gentry-Lynn-Shacham aggregate signature scheme, the sequential aggregate signature scheme by Lysyanskaya et al., the multi-signature scheme by Bellare and Neven, MuSig2, and the ordered multi-signature scheme by Boldyreva et al. Furthermore, for schemes that do not provide security against key substitution attacks, we show how to modify them so that they do.
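For context, the verification equation of the Boneh-Gentry-Lynn-Shacham aggregate scheme studied above, with individual signatures $\sigma_i = H(m_i)^{sk_i}$, public keys $pk_i = g^{sk_i}$ and aggregate $\sigma = \prod_i \sigma_i$, reads
\[
  e(\sigma, g) = \prod_{i} e\bigl(H(m_i),\, pk_i\bigr).
\]
Key substitution security asks, roughly, whether an adversary can produce a different (possibly maliciously generated) public key under which a signature created by an honest signer still verifies.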
Joseph Carolan, Alexander Poremba, Mark Zhandry
ePrint Report
Indifferentiability is a popular cryptographic paradigm for analyzing the security of ideal objects, both in a classical and in a quantum world. It is typically stated in the form of a composable and simulation-based definition, and captures what it means for a construction (e.g., a cryptographic hash function) to be "as good as" an ideal object (e.g., a random oracle). Despite its strength, indifferentiability is not known to offer security against pre-processing attacks in which the adversary gains access to (classical or quantum) advice that is relevant to the particular construction. In this work, we show that indifferentiability is (generically) insufficient for capturing pre-computation. To accommodate this shortcoming, we propose a strengthening of indifferentiability which is not only composable but also takes arbitrary pre-computation into account. As an application, we show that the one-round sponge is indifferentiable (with pre-computation) from a random oracle. This yields the first (and tight) classical/quantum space-time trade-off for one-round sponge inversion.
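Recalling the standard simulation-based formulation referenced above: a construction $C^P$ built from an ideal primitive $P$ is indifferentiable from an ideal object $F$ if there exists an efficient simulator $S$ such that, for every distinguisher $\mathcal{D}$, the advantage
\[
  \Bigl|\Pr\bigl[\mathcal{D}^{\,C^P,\,P} = 1\bigr] - \Pr\bigl[\mathcal{D}^{\,F,\,S^F} = 1\bigr]\Bigr|
\]
is small. The strengthening proposed in this work, roughly, additionally equips $\mathcal{D}$ with (classical or quantum) advice computed from $P$ ahead of time.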
Omar Amer, Kaushik Chakraborty, David Cui, Fatih Kaleoglu, Charles Lim, Minzhao Liu, Marco Pistoia
ePrint Report
Liu et al. (ITCS22) initiated the study of designing a secure position verification protocol based on a specific proof of quantumness protocol and classical communication. In this paper, we study this interesting topic further and answer some of the open questions that are left in that paper. We provide a new generic compiler that can convert any single round proof of quantumness-based certified randomness protocol to a secure classical communication-based position verification scheme. Later, we extend our compiler to different kinds of multi-round proof of quantumness-based certified randomness protocols. Moreover, we instantiate our compiler with a random circuit sampling (RCS)-based certified randomness protocol proposed by Aaronson and Hung (STOC 23). RCS-based techniques are within reach of today's NISQ devices; therefore, our design overcomes the limitation of the Liu et al. protocol that would require a fault-tolerant quantum computer to realize. Moreover, this is one of the first cryptographic applications of RCS-based techniques other than certified randomness.
Sayon Duttagupta, Dave Singelée
ePrint Report
In recent years, urban areas have experienced a rapid increase in vehicle numbers, while the availability of parking spaces has remained largely static, leading to a significant shortage of parking spots. This shortage creates considerable inconvenience for drivers and contributes to traffic congestion. A viable solution is the temporary use of private parking spaces by homeowners during their absence, providing a means to alleviate the parking problem and generate additional income for the owners. However, current systems for sharing parking spaces often neglect security and privacy concerns, exposing users to potential risks.
This paper presents PISA, a novel Privacy-Preserving Smart Parking scheme designed to address these issues through a cryptographically secure protocol. PISA enables the anonymous sharing of parking spots and allows vehicle owners to park without revealing any personal identifiers. Our primary contributions include the development of a comprehensive bi-directional anonymity framework that ensures neither party can identify the other, and the use of formal verification methods to substantiate the soundness and reliability of our security measures. Unlike existing solutions, which often lack a security focus, fail to provide formal validation, or are computationally intensive, PISA is designed to be both secure and efficient.
Lior Rotem, Stefano Tessaro
ePrint Report
The Fiat-Shamir (FS) transform is the standard approach to compiling interactive proofs into non-interactive ones. However, the fact that knowledge extraction typically requires rewinding limits its applicability without having to rely on further heuristic conjectures. A better alternative is a transform that guarantees straight-line knowledge extraction. Two such transforms were given by Pass (CRYPTO '03) and Fischlin (CRYPTO '05), respectively, with the latter giving the most practical parameters. Pass's approach, which is based on cut-and-choose, was also adapted by Unruh (EUROCRYPT '12, '14, '15) to the quantum setting, where rewinding poses a different set of challenges. All of these transforms are tailored to the case of three-round Sigma protocols, and do not apply to a number of popular paradigms for building succinct proofs (e.g., those based on folding or sumcheck) which rely on multi-round protocols.
This work initiates the study of transforms with straight-line knowledge extraction for multi-round protocols. We give two transforms, which can be thought of as multi-round analogues of those by Fischlin and Pass. Our first transform leads to more efficient proofs, but its usage applies to a smaller class of protocols than the latter one. Our second transform also admits a proof of security in the Quantum Random Oracle Model (QROM), making it the first transform for multi-round protocols which does not incur the super-polynomial security loss affecting the existing QROM analysis of the FS transform (Don et al., CRYPTO '20).
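For reference, the baseline being improved upon is the Fiat-Shamir transform, which for a public-coin protocol with statement $x$ and prover messages $a_1, \dots, a_\mu$ derives each challenge non-interactively as
\[
  c_i = H\bigl(x,\, a_1,\, \dots,\, a_i\bigr), \qquad 1 \le i \le \mu,
\]
and whose standard knowledge extractor rewinds the prover; the transforms constructed here instead follow Fischlin- and Pass-style approaches to obtain straight-line extraction.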
Avijit Dutta, Eik List
ePrint Report
Since designing a dedicated secure symmetric PRF is difficult, various works studied optimally secure PRFs from the sum of independent permutations (SoP).
At CRYPTO'20, Gunsing and Mennink proposed the Summation-Truncation Hybrid (STH).
While based on SoP, STH releases additional $a \leq n$ bits of the permutation calls and sums $n-a$ bits of them.
Thus, it produces $n+a$ bits at $O(n-a/2)$-bit PRF security.
Both SoP and STH can be used directly in encryption schemes or MACs in place of permutation calls for higher security.
However, simply replacing every call as in GCM-SIV$r$ would demand more calls.
For encryption schemes, Iwata's XORP scheme is long known to provide a better trade-off between efficiency and security. It extends SoP to variable-length outputs by using $r+1$ calls to a block cipher where the output of one call is added to each of the other $r$ outputs. A similar extension can be conducted for STH that we call XTH, the XORP-Truncation Hybrid. Such an extension was already suggested in the final discussion by Gunsing and Mennink, but left as an open problem. This work fills the gap by formalizing and proving the security of XTH. For a rate of $r/(r+1)$ as in XORP, we show $O(n-a/2-1.5\log(r))$-bit security for XTH.
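A hedged Python sketch of the XORP structure as described above ($r+1$ calls, with the output of one call XORed into each of the remaining $r$); the keyed function below is a toy stand-in rather than a real block cipher, used only to keep the snippet self-contained.

```python
# Illustration of the XORP output structure (not a secure implementation).
import hashlib

def block(key: bytes, x: bytes) -> bytes:
    return hashlib.sha256(key + x).digest()[:16]     # toy stand-in for E_K on 16-byte blocks

def xorp(key: bytes, nonce: bytes, r: int):
    """Produce r output blocks from r+1 block calls, per the XORP description above."""
    whitening = block(key, nonce + bytes([0]))        # the one call shared by all outputs
    out = []
    for i in range(1, r + 1):
        yi = block(key, nonce + bytes([i]))
        out.append(bytes(a ^ b for a, b in zip(whitening, yi)))
    return out

blocks = xorp(b"k" * 16, b"n" * 15, r=4)              # 5 calls, 4 output blocks
```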
Gajraj Kuldeep, Rune Hylsberg Jacobsen
ePrint Report
This paper addresses the problem of factoring composite numbers by introducing a novel approach to represent their prime divisors. We develop a method to efficiently identify smaller divisors based on the difference between the primes involved in forming the composite number. Building on these insights, we propose an algorithm that significantly reduces the computational complexity of factoring, requiring half as many iterations as traditional quadratic residue-based methods. The presented algorithm offers a more efficient solution for factoring composite numbers, with potential applications in fields such as cryptography and computational number theory.
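As a classical point of reference only (not the algorithm proposed in the paper), Fermat's method is the textbook example of exploiting the difference between the primes: when $|p - q|$ is small, $N = pq$ is found quickly from a representation $N = a^2 - b^2 = (a-b)(a+b)$.

```python
# Classical illustration only: Fermat's difference-of-squares factoring.
from math import isqrt

def fermat_factor(N: int):
    a = isqrt(N)
    if a * a < N:
        a += 1                            # start at ceil(sqrt(N))
    while True:
        b2 = a * a - N
        b = isqrt(b2)
        if b * b == b2:                   # found N = a^2 - b^2
            return a - b, a + b
        a += 1

assert fermat_factor(101 * 113) == (101, 113)
```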