International Association for Cryptologic Research
IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

18 October 2024

Tomáš Novotný
ePrint Report
Cryptographic protocols such as zkSNARKs use 2-cycles of elliptic curves for efficiency, often relying on pairing computations. However, 2-cycles of pairing-friendly curves are hard to find, and the only known cases consist of an MNT4 and an MNT6 curve. In this work, we prove that a 2-cycle containing an MNT3 curve cannot be pairing-friendly. For other curve families, we prove a similar result for cryptographically attractive field sizes. Thus we cannot hope to find new pairing-friendly 2-cycles using the current methods.

Furthermore, we show that there are no SNARK-friendly 2-chains of elliptic curves from combinations of MNT, Freeman and BN curves of reasonable size, except for the (MNT4, MNT6) chains.
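As a quick illustration of the cycle condition at play: in a 2-cycle, a curve over $\mathbb F_p$ has exactly $q$ points and a curve over $\mathbb F_q$ has exactly $p$ points, and pairing-friendliness asks both embedding degrees to be small. The following Python sketch computes embedding degrees for a toy prime pair chosen only to satisfy the Hasse bound; these are illustrative stand-ins, not actual MNT parameters.

    def embedding_degree(p, r, k_max=50):
        """Smallest k with r | p^k - 1, for a curve over F_p with a
        subgroup of prime order r."""
        acc = 1
        for k in range(1, k_max + 1):
            acc = (acc * p) % r
            if acc == 1:
                return k
        return None

    # Toy pair: |p + 1 - q| <= 2*sqrt(p), so curves with these orders can exist.
    p, q = 5, 7
    print(embedding_degree(p, q))   # 6 -> the "MNT6 side" of such a cycle
    print(embedding_degree(q, p))   # 4 -> the "MNT4 side"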
Haoxiang Jin, Feng-Hao Liu, Zhedong Wang, Yang Yu, Dawu Gu
ePrint Report
This work conducts a comprehensive investigation of the entropic hardness of (R/M)LWR under polynomial modulus. In particular, we establish the hardness of (M)LWR for general entropic secret distributions from (Module) LWE assumptions, based on a new, conceptually simple framework that we call rounding lossiness. By combining this hardness result with a trapdoor inversion algorithm with asymptotically the most compact parameters, we obtain a compact lossy trapdoor function (LTF) with improved efficiency. Extending our LTF with other techniques, we derive a compact all-but-many LTF and a PKE scheme secure against selective opening and chosen-ciphertext attacks, solely based on (Module) LWE assumptions with a polynomial modulus. Additionally, we show a search-to-decision reduction for RLWR with Gaussian secrets via a new Rényi-divergence-based analysis.
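For readers new to LWR, the rounding map at the heart of "rounding lossiness" replaces LWE's added noise with deterministic rounding from $\mathbb Z_q$ down to $\mathbb Z_p$. A minimal sketch with toy parameters (not the paper's):

    import random

    q, p, n = 3329, 16, 8          # toy modulus, rounding modulus, dimension
    # secret drawn uniformly here; the paper studies general entropic
    # secret distributions
    s = [random.randrange(q) for _ in range(n)]

    def lwr_sample():
        a = [random.randrange(q) for _ in range(n)]
        inner = sum(x * y for x, y in zip(a, s)) % q
        b = round(p * inner / q) % p   # deterministic rounding: the sample
        return a, b                    # reveals only ~log2(p) bits of <a, s>

    print(lwr_sample())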
Haoxiang Jin, Feng-Hao Liu, Zhedong Wang, Dawu Gu
ePrint Report
The Leftover Hash Lemma (LHL) is a powerful tool for extracting randomness from an entropic distribution, with numerous applications in cryptography. LHLs for discrete Gaussians have been explored both in the integer setting by Gentry et al. (GPV, STOC'08) and in the algebraic ring setting by Lyubashevsky et al. (LPR, Eurocrypt'13). However, the existing LHLs for discrete Gaussians have two main limitations: they require the Gaussian parameter to be larger than certain smoothing parameters, and they cannot handle cases where fixed and arbitrary information is leaked.

In this work, we present new LHLs for discrete Gaussians in both integer and ring settings. Our results show that the Gaussian parameter can be improved by a factor of $\omega(\sqrt{\log\lambda})$ and $O(\sqrt{N})$ compared to the regularity lemmas of GPV and LPR, respectively, under similar parameter choices such as the dimension and ring. Furthermore, our new LHLs can be applied to leaked discrete Gaussians, and the result can be used to establish asymptotic hardness of the extended MLWE assumptions, addressing an open question in recent works by Lyubashevsky et al. (LNP, Crypto'22). Our central techniques involve new fine-grained analyses of the min-entropy in discrete Gaussians modulo sublattices and should be of independent interest.
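To make the object of the lemma concrete, here is a toy sampler for a discrete Gaussian over $\mathbb Z$ with parameter $s$ (mass proportional to $e^{-\pi x^2/s^2}$), together with an empirical min-entropy estimate; the parameter and sample count are illustrative only:

    import math, random
    from collections import Counter

    def sample_dgauss(s, tail=10):
        # truncated support; the tail mass beyond 10*s is negligible
        xs = list(range(-tail * int(s), tail * int(s) + 1))
        weights = [math.exp(-math.pi * x * x / (s * s)) for x in xs]
        return random.choices(xs, weights=weights)[0]

    samples = [sample_dgauss(4.0) for _ in range(100000)]
    p_max = max(Counter(samples).values()) / len(samples)
    print(-math.log2(p_max))   # empirical min-entropy of the source;
                               # LHL-type statements bound how uniform
                               # hashes of such samples look, possibly
                               # conditioned on leakage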
Vlad-Florin Drăgoi, Brice Colombier, Nicolas Vallet, Pierre-Louis Cayrel, Vincent Grosso
ePrint Report
Classic McEliece is one of the three code-based candidates in the fourth round of the NIST post-quantum cryptography standardization process, in the Key Encapsulation Mechanism category. As such, its decapsulation algorithm is used to recover the session key associated with a ciphertext using the private key. In this article, we propose a new side-channel attack on the syndrome computation in the decapsulation algorithm that recovers the private key, which consists of the private Goppa polynomial $g$ and the permuted support $\mathcal{L}$. The attack rests on two contributions, one practical and one theoretical: a side-channel distinguisher that accurately discriminates elements of the permuted support $\mathcal{L}$ under only a standard noisy Hamming weight leakage assumption, and a cubic-time algorithm that uses this information to recover the private Goppa polynomial $g$. Compared with previous work targeting the Classic McEliece private key, this drastically improves both the assumptions made in the attacker model and the overall efficiency of the key-recovery algorithm. We have carried out the attack in practice on a microcontroller target running the reference implementation of Classic McEliece, and we make the full attack source code available.
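The leakage assumption underlying the distinguisher can be modeled very simply: each processed value leaks its Hamming weight plus Gaussian noise. A minimal sketch of that model (toy noise level and value, not the paper's measurement setup):

    import random

    def hamming_weight(x):
        return bin(x).count("1")

    def leak(x, sigma=0.5):
        # noisy Hamming weight leakage of one processed field element
        return hamming_weight(x) + random.gauss(0.0, sigma)

    # the attack averages such traces over many syndrome computations to
    # discriminate elements of the permuted support L
    traces = [leak(0x53) for _ in range(100)]
    print(sum(traces) / len(traces))   # ~ HW(0x53) = 4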
Claude Carlet
ePrint Report
In two recent papers, we introduced and studied the notion of $k$th-order sum-freedom of a vectorial function $F:\mathbb F_2^n\to \mathbb F_2^m$. This notion generalizes that of almost perfect nonlinearity (which corresponds to $k=2$) and has some relation with the resistance to integral attacks of those block ciphers using $F$ as a substitution box (S-box), by preventing the propagation of the division property of $k$-dimensional affine spaces. In the present paper, we show that this notion, which is rarely satisfied by vectorial functions, can be weakened while retaining the property that the S-boxes do not propagate the division property of $k$-dimensional affine spaces. This leads us to the property that we name $k$th-order $t$-degree-sum-freedom, whose strength decreases when $t$ increases, and which coincides with $k$th-order sum-freedom when $t=1$. The condition for $k$th-order $t$-degree-sum-freedom is that, for every $k$-dimensional affine space $A$, there exists a non-negative integer $j$ of 2-weight at most $t$ such that $\sum_{x\in A}(F(x))^j\neq 0$. We show, for a general $k$th-order $t$-degree-sum-free function $F$, that $t$ can always be taken smaller than or equal to $\min(k,m)$ under some reasonable condition on $F$, and that it is larger than or equal to $\frac k{\deg(F)}$, where $\deg(F)$ is the algebraic degree of $F$. We study examples for $k=2$ (the case in which $t=1$ corresponds to APNness), showing that finding $j$ of 2-weight 2 can be challenging, and we begin the study of power functions, and in particular, of the multiplicative inverse function (used as an S-box in the AES), for which we extend to $k$th-order $t$-degree-sum-freedom the result that it is $k$th-order sum-free if and only if it is $(n-k)$th-order sum-free. We also begin the study of the cases $k\in \{2,3,n-3,n-2,n-1,n\}$.
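To make the definition concrete, the following brute-force sketch computes, for the multiplicative inverse map over the toy field $\mathbb F_{2^4}$ and $k=2$, the smallest $t$ such that the map is $k$th-order $t$-degree-sum-free; the field size is illustrative only, far below cryptographic sizes:

    N, POLY = 4, 0b10011               # F_16 = F_2[X]/(X^4 + X + 1)

    def mul(a, b):                     # carry-less multiply mod POLY
        r = 0
        while b:
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a >> N:
                a ^= POLY
        return r

    def power(a, e):                   # square-and-multiply in F_16
        r = 1
        while e:
            if e & 1:
                r = mul(r, a)
            e >>= 1
            a = mul(a, a)
        return r

    F = [power(x, 2**N - 2) for x in range(2**N)]   # inverse map, F(0) = 0

    def min_weight_j(A):
        # smallest 2-weight of a j in [1, 2^N - 2] with sum_{x in A} F(x)^j != 0
        best = None
        for j in range(1, 2**N - 1):
            acc = 0
            for x in A:
                acc ^= power(F[x], j)
            if acc != 0:
                w = bin(j).count("1")
                best = w if best is None else min(best, w)
        return best

    t = 0
    for u in range(1, 2**N):           # enumerate all 2-dimensional
        for v in range(u + 1, 2**N):   # affine spaces {a, a+u, a+v, a+u+v}
            for a in range(2**N):
                mw = min_weight_j((a, a ^ u, a ^ v, a ^ u ^ v))
                assert mw is not None
                t = max(t, mw)
    print(t)   # t = 1 would mean the map is APN on this field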
Joao F. Doriguello, George Giapitzakis, Alessandro Luongo, Aditya Morolia
ePrint Report
One of the main candidates of post-quantum cryptography is lattice-based cryptography. Its cryptographic security against quantum attackers is based on the worst-case hardness of lattice problems like the shortest vector problem (SVP), which asks to find the shortest non-zero vector in an integer lattice. Asymptotic quantum speedups for solving SVP are known and rely on Grover's search. However, to assess the security of lattice-based cryptography against these Grover-like quantum speedups, it is necessary to carry out a precise resource estimation beyond asymptotic scalings. In this work, we perform a careful analysis of the resources required to implement several sieving algorithms aided by Grover's search for dimensions of cryptographic interest. To this end, we take into account fixed-point quantum arithmetic operations, non-asymptotic Grover's search, the cost of using quantum random access memory (QRAM), different physical architectures, and quantum error correction. We find that even under very optimistic assumptions like circuit-level noise of $10^{-5}$, code cycles of 100 ns, reaction time of 1 $\mu$s, and using state-of-the-art arithmetic circuits and quantum error-correction protocols, the best sieving algorithms require $\approx 10^{13}$ physical qubits and $\approx 10^{31}$ years to solve SVP on a lattice of dimension 400, which is roughly the dimension for minimally secure post-quantum cryptographic standards currently being proposed by NIST. We estimate that a 6-GHz-clock-rate single-core classical computer would take roughly the same amount of time to solve the same problem. We conclude that there is currently little to no quantum speedup in the dimensions of cryptographic interest, and that realising a considerable quantum speedup using quantum sieving algorithms would require significant breakthroughs in theoretical protocols and hardware development.
Giuseppe D'Alconzo, Andrea Flamini, Alessio Meneghetti, Edoardo Signorini
ePrint Report
A multi-signature scheme allows a list of signers to sign a common message. They are widely used in scenarios where the same message must be signed and transmitted by $N$ users, and, instead of concatenating $N$ individual signatures, employing a multi-signature can reduce the data to be sent. In recent years there have been numerous practical proposals in the discrete logarithm setting, such as MuSig2 (CRYPTO'21) for the Schnorr signature. Recently, these attempts have been extended to post-quantum assumptions, with lattice-based proposals such as MuSig-L (CRYPTO'22). Given the growth of group-action-based signatures, a natural question is whether a multi-signature can be built on the same models. In this work, we present the first construction of such a primitive relying on group action assumptions. We obtain a 3-round scheme achieving concurrent security in the ROM. Moreover, we instantiate it using three candidates from NIST's additional call for post-quantum signatures, namely LESS, MEDS, and ALTEQ, obtaining a good compression rate for different parameter sets.
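For context, the sketch below shows one commit-challenge-response round of the basic group-action identification protocol that LESS-, MEDS-, and ALTEQ-style signatures build on, instantiated with a toy permutation action on tuples; this is an illustrative simplification, not the paper's multi-signature:

    import random

    def comp(a, b):                    # composition: comp(a, b)[i] = a[b[i]]
        return tuple(a[b[i]] for i in range(len(a)))

    def inv(a):                        # inverse permutation
        r = [0] * len(a)
        for i, ai in enumerate(a):
            r[ai] = i
        return tuple(r)

    def act(p, v):                     # action of permutation p on tuple v
        return tuple(v[p[i]] for i in range(len(p)))

    n = 8
    x0 = tuple(random.sample(range(100), n))   # public origin element
    sk = tuple(random.sample(range(n), n))     # secret group element
    x1 = act(sk, x0)                           # public key

    r = tuple(random.sample(range(n), n))      # ephemeral randomness
    com = act(r, x0)                           # commitment
    b = random.randrange(2)                    # verifier's challenge bit
    resp = r if b == 0 else comp(inv(sk), r)   # response hides sk
    assert com == act(resp, x0 if b == 0 else x1)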
Dominique Bazin, Ryo Nishimaki
ePrint Report
Bartusek and Raizes (CRYPTO 2024) proposed two security definitions for secret sharing, no-signaling certified deletion and adaptive certified deletion. We prove that adaptive certified deletion does not imply no-signaling certified deletion.
Joohee Lee, Joon-Woo Lee
ePrint Report
Fully homomorphic encryption enables computations over encrypted data, which allows privacy-preserving services to be held between a server and a client. However, real-world applications demand practical considerations, especially concerning public safety and legal investigations. Existing FHE schemes focus solely on privacy, neglecting the societal risks posed by criminal activities utilizing privacy-preserving services. This paper introduces Homomorphic Encryption with Authority (HEwA), a novel framework that balances data privacy with public safety by incorporating an "authority" party. The proposed HEwA system operates in two phases: a normal phase, where client data privacy is protected, and an investigative phase, where the authority, a legally authorized entity such as a government agency, exerts the right to recover a suspicious client's data. We formalize the security model for HEwA, ensuring that client privacy is protected during the normal phase while enabling authorities to recover encrypted data in the investigative phase. As a concrete example, we design an efficient HEwA system based solely on the CKKS homomorphic encryption scheme, which supports approximate computations over real-number data, making it highly suitable for applications in AI such as secure genomic analysis. We further provide rigorous security proofs. This new approach addresses the tension between privacy and public safety in cloud services, paving the way for responsible use of homomorphic encryption in practice.
Christof Beierle
ePrint Report
For a $q$-polynomial $L$ over a finite field $\mathbb{F}_{q^n}$, we characterize the differential spectrum of the function $f_L\colon \mathbb{F}_{q^n} \rightarrow \mathbb{F}_{q^n}, x \mapsto x \cdot L(x)$ and show that, for $n \leq 5$, it is completely determined by the image of the rational function $r_L \colon \mathbb{F}_{q^n}^* \rightarrow \mathbb{F}_{q^n}, x \mapsto L(x)/x$. This result follows from the classification of the pairs $(L,M)$ of $q$-polynomials in $\mathbb{F}_{q^n}[X]$, $n \leq 5$, for which $r_L$ and $r_M$ have the same image, obtained in [B. Csajbok, G. Marino, and O. Polverino. A Carlitz type result for linearized polynomials. Ars Math. Contemp., 16(2):585–608, 2019]. For the case of $n>5$, we pose an open question on the dimensions of the kernels of $x \mapsto L(x) - ax$ for $a \in \mathbb{F}_{q^n}$.

We further present a link between functions $f_L$ of differential uniformity bounded above by $q$ and scattered $q$-polynomials and show that, for odd values of $q$, we can construct CCZ-inequivalent functions $f_M$ with bounded differential uniformity from a given function $f_L$ fulfilling certain properties.
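As a toy instance of the objects studied here, the following brute-force sketch computes the differential uniformity of $f_L(x) = x \cdot L(x)$ for the $2$-polynomial $L(x) = x^2$ over $\mathbb F_{2^3}$ (parameters illustrative only):

    from collections import Counter

    POLY, N = 0b1011, 3                # F_8 = F_2[X]/(X^3 + X + 1)

    def mul(a, b):
        r = 0
        while b:
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a >> N:
                a ^= POLY
        return r

    # L(x) = x^2 is a 2-polynomial, so f_L(x) = x * L(x) = x^3
    f = [mul(x, mul(x, x)) for x in range(8)]

    # differential uniformity: max over a != 0 of the largest number of
    # solutions x to f(x + a) + f(x) = b
    uniformity = max(
        Counter(f[x ^ a] ^ f[x] for x in range(8)).most_common(1)[0][1]
        for a in range(1, 8))
    print(uniformity)   # 2: x^3 is APN over F_8, i.e. uniformity <= q = 2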
Prabhanjan Ananth, Saachi Mutreja, Alexander Poremba
ePrint Report
Fundamental principles of quantum mechanics have inspired many new research directions, particularly in quantum cryptography. One such principle is quantum no-cloning, which has led to the emerging field of revocable cryptography. Roughly speaking, in a revocable cryptographic primitive, a cryptographic object (such as a ciphertext or program) is represented as a quantum state in such a way that surrendering it effectively translates into losing the capability to use this cryptographic object. All of the revocable cryptographic systems studied so far have a major drawback: the recipient only receives one copy of the quantum state. Worse yet, the schemes become completely insecure if the recipient receives many identical copies of the same quantum state---even though supporting many copies is clearly much more desirable in practice.

While multi-copy security has been extensively studied for a number of other quantum cryptographic primitives, it has so far received little attention in the context of unclonable primitives. Our work shows, for the first time, the feasibility of revocable primitives, such as revocable encryption and revocable programs, that satisfy multi-copy security in oracle models. This suggests that the stronger notion of multi-copy security is within reach in unclonable cryptography more generally, and could therefore lead to a new research direction in the field.
Zehou Wu
ePrint Report
We prove that the existence of a CPA-secure encryption scheme that is insecure in the presence of key cycles of length $n$ implies the existence of such a scheme for key cycles of any length less than $n$. Equivalently, if every encryption scheme in a class is $n$-circular secure and this class is closed under our construction, then every encryption scheme in this class is $n'$-circular secure for $n' > n$.
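For readers unfamiliar with key cycles, the object in question is easy to state: each key encrypts the next, and the last wraps around. A minimal sketch using Fernet from the `cryptography` package as a stand-in scheme; the paper's statement is about arbitrary CPA-secure schemes, not this particular one:

    from cryptography.fernet import Fernet

    n = 4
    keys = [Fernet.generate_key() for _ in range(n)]
    # the key cycle of length n: c_i = Enc_{k_i}(k_{i+1 mod n})
    cycle = [Fernet(keys[i]).encrypt(keys[(i + 1) % n]) for i in range(n)]
    # n-circular security asks that the scheme stay secure even when the
    # adversary is handed `cycle`; the paper transfers a break at length n
    # to every shorter cycle length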
Chaya Ganesh, Sikhar Patranabis, Shubh Prakash, Nitin Singh
ePrint Report
We propose a generic framework called GAPP for aggregation of polynomial protocols. This allows proving $n$ instances of a polynomial protocol using a single aggregate proof that has $O(\log n)$ size, and can be verified using $O(\log^2 n)$ operations. The satisfiability of several univariate polynomial identities over a domain is reduced to the satisfiability of a single bivariate polynomial identity over a related domain, where the bivariate polynomials interpolate a batch of univariate polynomials over the domain. We construct an information-theoretic protocol for proving the satisfiability of the bivariate polynomial identity, which is then compiled using any bivariate polynomial commitment scheme (PCS) to yield an argument of knowledge for the aggregation relation. GAPP can be applied to several popular SNARKs over bilinear groups that are modeled as polynomial protocols in a black-box way.

We present a new bivariate polynomial commitment scheme, bPCLB, with succinct verification that yields an efficient instantiation of GAPP. In addition, the prover performs only a sublinear number of cryptographic operations in the evaluation proof. Towards constructing bPCLB, we show a new folding technique that we call Lagrangian folding. The bivariate PCS bPCLB and the Lagrangian folding scheme are of independent interest. We implement bPCLB and experimentally validate the practical efficiency of our GAPP instantiation. For the popular PLONK proof system, we achieve $25$-$30\%$ faster proof generation than the naive baseline of generating $n$ separate PLONK proofs. Compared to all existing aggregation schemes that incur additional prover overheads on top of the baseline, we achieve significantly more efficient proving, while retaining succinct verification.

We demonstrate the versatility of our GAPP framework by outlining applications of practical interest: tuple lookups that significantly outperform existing lookup arguments in terms of prover overheads; and proofs for non-uniform computation with à la carte prover cost.
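The batching step can be illustrated in a few lines: $n$ univariate polynomials $p_i$ are packed into a single bivariate $P$ with $P(X, y_i) = p_i(X)$ by Lagrange interpolation in the second variable. A toy sketch over a small prime field (field and sizes illustrative, not the paper's):

    MOD = 97

    def lagrange_basis(ys, i, y):
        # L_i(y) = prod_{j != i} (y - y_j) / (y_i - y_j) over F_MOD
        num, den = 1, 1
        for j, yj in enumerate(ys):
            if j != i:
                num = num * (y - yj) % MOD
                den = den * (ys[i] - yj) % MOD
        return num * pow(den, -1, MOD) % MOD

    polys = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]   # coefficients of p_0, p_1, p_2
    ys = [1, 2, 3]                               # domain in the Y-direction

    def eval_poly(c, x):
        return sum(ci * pow(x, k, MOD) for k, ci in enumerate(c)) % MOD

    def eval_bivariate(x, y):       # P(X, Y) = sum_i p_i(X) * L_i(Y)
        return sum(eval_poly(polys[i], x) * lagrange_basis(ys, i, y)
                   for i in range(len(polys))) % MOD

    # P agrees with each p_i on the slice Y = y_i
    assert all(eval_bivariate(x, ys[i]) == eval_poly(polys[i], x)
               for i in range(3) for x in range(MOD))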
Mariana Gama, Emad Heydari Beni, Jiayi Kang, Jannik Spiessens, Frederik Vercauteren
ePrint Report
In this paper, we show for the first time that it is practical to privately delegate proof generation of zkSNARKs proving up to $2^{20}$ R1CS constraints to a single server. We achieve this by homomorphically computing zkSNARK proof generation, an approach we call blind zkSNARKs. We formalize the concept of blind proofs, analyze their cryptographic properties and show that the resulting blind zkSNARKs remain sound when compiled using BCS compilation. Garg et al. gave a similar framework at CRYPTO 2024, but no practical instantiation for proving non-trivial computations was known. By delegating proof generation, we are able to reduce client computation time from 10 minutes to mere seconds, while server computation time remains limited to 20 minutes. We also propose a practical construction for vCOED supporting constraint sizes four orders of magnitude larger than the current state-of-the-art verifiable FHE-based approaches. These results are achieved by optimizing Fractal for the GBFV homomorphic encryption scheme, e.g., by designing specialized homomorphic circuits with two-dimensional NTTs. Furthermore, we make the proofs publicly verifiable by appending a zero-knowledge Proof of Decryption (PoD). We propose a new construction for PoDs, optimized for low proof generation time, exploiting modulus and ring switching in GBFV; these techniques might be of independent interest. Finally, we implement the latter protocol in C and report on execution time and proof sizes.
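As a point of reference for what the homomorphic circuits must compute, here is a plain (non-homomorphic) NTT and its inverse over a toy prime field; the parameters are illustrative and unrelated to GBFV's:

    q = 17            # prime with 8 | q - 1
    n = 8
    g = 3             # generator of F_q^*
    w = pow(g, (q - 1) // n, q)   # primitive n-th root of unity

    def ntt(a):       # naive O(n^2) forward transform
        return [sum(aj * pow(w, i * j, q) for j, aj in enumerate(a)) % q
                for i in range(n)]

    def intt(A):      # inverse transform: scale by n^-1, use w^-1
        n_inv, w_inv = pow(n, -1, q), pow(w, -1, q)
        return [n_inv * sum(Aj * pow(w_inv, i * j, q) for j, Aj in enumerate(A)) % q
                for i in range(n)]

    a = [1, 2, 3, 4, 0, 0, 0, 0]
    assert intt(ntt(a)) == a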
Arthur Mehta, Anne Müller
ePrint Report
In a functional encryption (FE) scheme, a user that holds a ciphertext and a function-key can learn the result of applying the function to the plaintext message. Security requires that the user does not learn anything beyond the function evaluation. On the other hand, unclonable encryption (UE) is a uniquely quantum primitive, which ensures that an adversary cannot duplicate a ciphertext to decrypt the same message multiple times. In this work we introduce unclonable quantum functional encryption (UFE), which both extends the notion of FE to the quantum setting and also possesses the unclonable security of UE. We give a construction for UFE that supports arbitrary quantum messages and polynomially sized circuits, and achieves unclonable-indistinguishable security for dependently sampled function keys. In particular, our UFE guarantees that two parties cannot simultaneously recover the correct function outputs using two independently sampled function keys. Our construction combines quantum garbled circuits [BY22] and quantum-key unclonable encryption [AKY24], and leverages techniques from the plaintext expansion arguments in [Hir+23]. As an application, we give the first construction for public-key UE with variable decryption keys. Lastly, we establish a connection between quantum indistinguishability obfuscation (qiO) and quantum functional encryption (QFE), showing that any multi-input indistinguishability-secure quantum functional encryption scheme unconditionally implies the existence of qiO.
Jovan Komatovic, Joachim Neu, Tim Roughgarden
ePrint Report
Multi-valued validated Byzantine agreement (MVBA), a fundamental primitive of distributed computing, enables $n$ processes to agree on a valid $\ell$-bit value, despite $t$ faulty processes behaving arbitrarily. Among hash-based protocols for the asynchronous setting with adaptive faults, the state-of-the-art HMVBA protocol has optimal $O(1)$ time complexity and near-optimal $O(n \ell + n^2 \kappa \log n)$ bit complexity, but tolerates only $t < n/5$ faults. We present REDUCER, an MVBA protocol that matches HMVBA's time and bit complexity and improves resilience to $t < n/4$. Like HMVBA, REDUCER relies solely on collision-resistant hash functions. Toward optimal one-third resilience, we also propose REDUCER++, an MVBA protocol with further improved $t < (1/3 - \epsilon)n$ resilience, for any fixed $\epsilon > 0$, assuming hash functions modeled as random oracles. Time and bit complexity of REDUCER++ remain constant and quasi-quadratic, respectively, with constants depending on $\epsilon$.
Sebastien Balny, Claire Delaplace, Gilles Dequen
ePrint Report
We present a new variant of the LLL lattice reduction algorithm, inspired by Lagrange's notion of pairwise reduction, called L4. Like LLL, our algorithm runs in time polynomial in the dimension of the input lattice and in $\log M$, where $M$ is an upper bound on the norm of the longest vector of the input basis. We experimentally compared the norm of the first basis vector obtained with LLL and L4 up to dimension 200. On average, we obtain vectors that are up to $16\%$ shorter. We also used our algorithm as a pre-processing step for the BKZ lattice reduction algorithm with blocksize 24. In practice, up to dimension 140, this allows us to reduce the norm of the shortest basis vector on average by $3\%$, while the runtime does not increase significantly. In $10\%$ of our tests, the whole process was even faster.
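The classical Lagrange (often called Gauss) pairwise reduction that inspires L4 fits in a few lines: repeatedly subtract the rounded projection of one basis vector onto the other and swap, until the first vector is shortest. A sketch for rank-2 integer lattices (toy input, not from the paper):

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def lagrange_reduce(u, v):
        if dot(u, u) > dot(v, v):
            u, v = v, u
        while True:
            m = round(dot(u, v) / dot(u, u))          # nearest-integer projection
            v = [vi - m * ui for vi, ui in zip(v, u)]  # size-reduce v against u
            if dot(v, v) >= dot(u, u):                 # u is now shortest: done
                return u, v
            u, v = v, u

    print(lagrange_reduce([1, 0], [137, 1]))   # -> a short basis of the same lattice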
Giulia Scaffino, Karl Wüst, Deepak Maram, Alberto Sonnino, Lefteris Kokoris-Kogias
ePrint Report
The increased throughput offered by modern blockchains, such as Sui, Aptos, and Solana, enables processing thousands of transactions per second, but it also introduces higher costs for decentralized application (dApp) developers who need to track and verify changes in the state of their application. This is because dApp developers run full nodes, which download and re-execute every transaction to track the global state of the chain. However, this becomes prohibitively expensive for high-throughput chains due to high bandwidth, computational, and storage requirements. A common alternative is to use light nodes. However, light nodes only verify the inclusion of a set of transactions and have no guarantee that the set is complete, i.e., that it includes all relevant transactions. Under a dishonest majority, light nodes can also be tricked into accepting invalid transactions.

To bridge the gap between full and light nodes, we propose and formalize a new type of blockchain node: the sparse node. A sparse node tracks only a subset of the blockchain’s state: it verifies that the received set of transactions touching the substate is complete, and re-executes those transactions to assess their validity. A sparse node retains important security properties even under adversarial majorities, and requires an amount of resources proportional to the number of transactions in the substate and to the size of the substate itself.

We further present Sunfish, an instantiation of a sparse node protocol. Our analysis and evaluation show that Sunfish reduces the bandwidth consumption of real blockchain applications by several orders of magnitude when compared to a full node.
Giulia Cavicchioni, Alessio Meneghetti, Giovanni Tognolini
ePrint Report
Information set decoding (ISD) algorithms currently offer the most powerful tool to solve the two archetypal problems of coding theory, namely the Codeword Finding Problem and the Syndrome Decoding Problem. Traditionally, ISD algorithms have primarily been studied for linear codes over finite fields equipped with the Hamming metric. Recently, however, other possibilities have been explored, and these algorithms have been adapted to different ambient spaces and metrics, such as the rank metric or the Lee metric over $\mathbb Z_m$. In this paper, we show that it is possible to leverage the ring structure to construct more efficient decoding algorithms than those obtained by simply adapting ISD. In particular, we describe a framework that can be applied to any additive metric, including Hamming and Lee, and that can be adapted to the case of the rank metric, providing algorithms to solve the two aforementioned problems along with their average computational costs.
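As a baseline for what ISD does, here is a toy Prange-style solver for the Syndrome Decoding Problem in the Hamming metric over $\mathbb F_2$, the classical setting the paper generalizes: repeatedly guess a set of columns containing the error support and solve the resulting linear system. Instance sizes are illustrative only:

    import random

    def solve_gf2(A, b):
        """Solve A x = b over GF(2) by Gauss-Jordan; returns None if singular."""
        r = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(r):
            piv = next((k for k in range(col, r) if M[k][col]), None)
            if piv is None:
                return None
            M[col], M[piv] = M[piv], M[col]
            for k in range(r):
                if k != col and M[k][col]:
                    M[k] = [a ^ c for a, c in zip(M[k], M[col])]
        return [M[i][r] for i in range(r)]

    def prange(H, s, w, iters=10000):
        r, n = len(H), len(H[0])
        for _ in range(iters):
            cols = random.sample(range(n), r)   # guess: the error lives here
            A = [[H[i][j] for j in cols] for i in range(r)]
            x = solve_gf2(A, s)
            if x is not None and sum(x) <= w:   # low-weight solution found
                e = [0] * n
                for bit, j in zip(x, cols):
                    e[j] = bit
                return e
        return None

    # toy instance: random H, planted error of weight 2
    r, n, w = 4, 8, 2
    H = [[random.randrange(2) for _ in range(n)] for _ in range(r)]
    e_true = [0] * n
    e_true[1] = e_true[5] = 1
    s = [sum(H[i][j] * e_true[j] for j in range(n)) % 2 for i in range(r)]
    print(prange(H, s, w))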
Jules Baudrin, Christof Beierle, Patrick Felke, Gregor Leander, Patrick Neumann, Léo Perrin, Lukas Stennes
ePrint Report
Recently, Baudrin et al. analyzed a special case of Wagner's commutative diagram cryptanalysis, referred to as commutative cryptanalysis. For a family $(E_k)_k$ of permutations on a finite vector space $G$, commutative cryptanalysis exploits the existence of affine permutations $A,B \colon G \rightarrow G$, $I \notin \{A,B\}$, such that $E_k \circ A (x) = B \circ E_k(x)$ holds with high probability, taken over inputs $x$, for a significantly large set of weak keys $k$. Several attacks against symmetric cryptographic primitives can be formulated within the framework of commutative cryptanalysis, most importantly differential attacks, as well as rotational and rotational-differential attacks. In addition, the notion of $c$-differentials on S-boxes can be analyzed as a special case within this framework. We discuss the relations between a general notion of commutative cryptanalysis, with $A$ and $B$ being arbitrary functions over a finite Abelian group, and differential cryptanalysis, both from the perspective of mounting an attack on a symmetric cryptographic primitive and from that of a theoretical study of cryptographic S-boxes.
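The defining equation is easy to test empirically. The sketch below measures the commutation probability $\Pr_x[E_k(A(x)) = B(E_k(x))]$ for a toy one-round map and constant-XOR choices of $A$ and $B$, the special case in which commutative cryptanalysis collapses to ordinary differential cryptanalysis; the S-box and parameters are illustrative only:

    SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
            0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]   # a 4-bit S-box (toy)

    def E(k, x):                      # toy one-round "cipher": key add, then S-box
        return SBOX[x ^ k]

    def commutation_prob(k, a, b):    # A(x) = x ^ a, B(y) = y ^ b
        return sum(E(k, x ^ a) == E(k, x) ^ b for x in range(16)) / 16

    best = max(((a, b, commutation_prob(0, a, b))
                for a in range(1, 16) for b in range(1, 16)),
               key=lambda t: t[2])
    print(best)   # for constant XORs, the best (A, B) pair is exactly the
                  # best differential of the S-box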