International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

24 September 2023

Ali Şah Özcan, Erkay Savaş
ePrint Report
The number theoretic transform (NTT) enables a very efficient method for multiplying very large degree polynomials, which is the most time-consuming operation in fully homomorphic encryption (FHE) schemes and in a class of non-interactive succinct zero-knowledge proof systems such as zk-SNARKs. Efficient modular arithmetic plays an important role in the performance of NTT and is therefore studied extensively. The memory access pattern, on the other hand, may play a much greater role, as NTT execution time is mostly memory-bound due to the large degree of the polynomials. In this paper, we propose two algorithms for fast computation of NTT on a class of graphics processing units (GPUs) by optimizing the memory access patterns. We present an approach i) to optimize the number of accesses to slow global memory for thread synchronization, and ii) to make better use of spatial locality in global memory accesses. It turns out that by controlling certain parameters in the CUDA platform for general-purpose GPU computing (GPGPU), such as kernel count, block size and block shape, we can affect the performance of NTT. To the best of our knowledge, this work is unique in that it suggests a recipe for selecting optimum CUDA parameters to obtain the best NTT performance for a given polynomial degree. Our implementation results on various GPU devices for all power-of-two polynomial degrees from $2^{12}$ to $2^{28}$ show that, with the optimum selection of these three CUDA parameters, our algorithms compare favorably with other state-of-the-art GPU implementations in the literature.
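To make the object being accelerated concrete, the sketch below is a minimal single-threaded Python reference of an iterative radix-2 Cooley-Tukey NTT over $\mathbb{Z}_q$. It shows only the butterfly arithmetic; the paper's contribution concerns how such butterflies are mapped to CUDA kernels, blocks and global-memory accesses, which this sketch does not attempt to model. The modulus 998244353 and primitive root 3 are common illustrative choices, not parameters taken from the paper.

```python
# Minimal CPU reference of an iterative radix-2 NTT over Z_q, assuming a prime
# modulus q with q = 1 (mod n) for power-of-two n. Illustration only.

def ntt(a, q, root):
    """In-place iterative Cooley-Tukey NTT of a list of power-of-two length."""
    n = len(a)
    # bit-reversal permutation
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        # w is a primitive `length`-th root of unity mod q
        w = pow(root, (q - 1) // length, q)
        for start in range(0, n, length):
            wn = 1
            for k in range(start, start + length // 2):
                u = a[k]
                v = a[k + length // 2] * wn % q
                a[k] = (u + v) % q
                a[k + length // 2] = (u - v) % q
                wn = wn * w % q
        length <<= 1
    return a

if __name__ == "__main__":
    q, g = 998244353, 3      # a common NTT-friendly prime and its primitive root
    print(ntt([1, 2, 3, 4, 0, 0, 0, 0], q, g))
```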
Jonas Meers, Julian Nowakowski
ePrint Report
We define and analyze the Commutative Isogeny Hidden Number Problem which is the natural analogue of the Hidden Number Problem in the CSIDH and CSURF setting. In short, the task is as follows: Given two supersingular elliptic curves \(E_A\), \(E_B\) and access to an oracle that outputs some of the most significant bits of the \({\mathsf{CDH}}\) of two curves, an adversary must compute the shared curve \(E_{AB}={\mathsf{CDH}}(E_A,E_B)\). We show that we can recover \(E_{AB}\) in polynomial time by using Coppersmith's method as long as the oracle outputs \({\frac{13}{24}} + \varepsilon \approx 54\%\) (CSIDH) and \({\frac{31}{41}} + \varepsilon \approx 76\%\) (CSURF) of the most significant bits of the \({\mathsf{CDH}}\), where $\varepsilon > 0$ is an arbitrarily small constant. To this end, we give a purely combinatorial restatement of Coppersmith's method, effectively concealing the intricate aspects of lattice theory and allowing for near-complete automation. By leveraging this approach, we attain recovery attacks with $\varepsilon$ close to zero within a few minutes of computation.
Jianhua Wang, Lu Qin, Baofeng Wu
ePrint Report
In this paper, we improve the cube attack by exploiting low-degree factors of the superpoly with respect to certain "special" index sets of cubes (ISoCs). This can be viewed as a special case of the correlation cube attack proposed at Eurocrypt 2018, but under our framework more beneficial equations on the key variables can be obtained in the key-recovery phase. To mount our attack, one faces two challenging problems: (1) effectively recovering the algebraic normal form of the superpoly and extracting its low-degree factors; and (2) efficiently searching for a large quantity of good ISoCs. We bring in new techniques to solve both of them.

First, we propose the variable substitution technique for the middle rounds of a cipher, in which polynomials on the key variables in the algebraic expressions of internal states are substituted by new variables. This improves the computational complexity of superpoly recovery and promises more compact superpolys that can be easily decomposed with respect to the new variables. Second, we propose the vector numeric mapping technique, which seeks a tradeoff between the efficiency of the numeric mapping technique (Crypto 2019) and the accuracy of the monomial prediction technique (Asiacrypt 2020) in the degree evaluation of superpolys. Combined with this technique, we give a fast pruning method, modeled by MILP, to filter good ISoCs whose algebraic degree satisfies some fixed threshold. Thanks to automated MILP solvers, it becomes practical to comprehensively search for good cubes across the entire search space.

To illustrate the power of our techniques, we apply all of them to the Trivium stream cipher. As a result, we have recovered the superpolys of three cubes given by Kesarwani et al. in 2020, only to find that they do not have the zero-sum property up to 842 rounds as claimed in their paper. To our knowledge, the previous best practical key-recovery attack was on 820-round Trivium with complexity $2^{53.17}$. We put forward 820-, 825- and 830-round practical key-recovery attacks, in which there are $\mathbf{2^{80}\times 87.8\%}$, $\mathbf{2^{80}\times 83\%}$ and $\mathbf{2^{80}\times 65.7\%}$ keys that can be practically recovered, respectively, if we consider $\mathbf{2^{60}}$ as the upper bound for practical computational complexity. Besides, even for computers with computational power not exceeding $\mathbf{2^{52}}$ (resp. $\mathbf{2^{55}}$), we can still recover $\mathbf{58\%}$ (resp. $\mathbf{46.6\%}$) of the keys in the key space for 820 rounds (resp. 830 rounds). Our attacks extend the previous best practical attack by 10 rounds.
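For readers new to cube attacks, the following toy sketch illustrates only the basic cube-summation step that underlies superpoly recovery, not the paper's variable substitution or vector numeric mapping techniques. The function toy_cipher, its superpoly and the chosen cube are hypothetical stand-ins for a real stream cipher such as Trivium.

```python
# Toy cube-summation: summing the output over all assignments of the cube
# variables yields the superpoly of that cube, evaluated at the key.
from itertools import product

def toy_cipher(key, iv):
    # Contrived Boolean output whose term iv0*iv1*(k0 + k1*k2) makes the
    # superpoly of the cube {0, 1} equal to k0 + k1*k2.
    return (iv[0] & iv[1] & (key[0] ^ (key[1] & key[2]))) ^ (iv[2] & key[0]) ^ iv[3]

def cube_sum(cipher, key, cube, iv_len):
    """XOR the cipher output over all assignments of the cube bits (other IV bits 0)."""
    acc = 0
    for bits in product([0, 1], repeat=len(cube)):
        iv = [0] * iv_len
        for idx, b in zip(cube, bits):
            iv[idx] = b
        acc ^= cipher(key, iv)
    return acc

if __name__ == "__main__":
    cube = [0, 1]                      # the index set of the cube (ISoC)
    for key in product([0, 1], repeat=3):
        # The cube sum equals the superpoly k0 + k1*k2 at every key.
        assert cube_sum(toy_cipher, list(key), cube, iv_len=4) == key[0] ^ (key[1] & key[2])
    print("cube sums match the superpoly k0 + k1*k2 for all keys")
```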
Jingwei Hu
ePrint Report
In this paper, we consider how to use fully homomorphic encryption (FHE) to solve the problem of secure computation over set intersection, where one party holds a relatively large set of size $N_s$, the other party holds a relatively small set of size $N_r$, and the two collaboratively compute some functionality over their set intersection without revealing any other information. This problem has many applications in online collaboration, for example fingerprint matching, online dating, and ridesharing.
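As a rough illustration of how FHE is typically applied in this unbalanced setting, the sketch below evaluates, for each element of the small set, the product of its differences with all elements of the large set; the product is zero exactly when the element lies in the intersection (up to collisions modulo the plaintext modulus). Everything here runs in the clear purely to show the arithmetic; in an actual protocol the small-set holder would send encryptions of its elements and the large-set holder would evaluate these products homomorphically. The modulus and the sets are illustrative assumptions, not values from the paper.

```python
# Plaintext-domain sketch of the polynomial-evaluation idea behind FHE-based
# intersection computations. The modulus t stands in for the FHE plaintext
# modulus (an assumption, chosen prime so the product vanishes only on a match
# modulo t).

def intersection_indicators(large_set, small_set, t):
    """For each y in small_set, return 0 iff y collides with some element of large_set mod t."""
    results = []
    for y in small_set:
        acc = 1
        for x in large_set:
            acc = acc * ((x - y) % t) % t   # a homomorphic multiplication in the real protocol
        results.append(acc)                 # 0 exactly when y matches some x (mod t)
    return results

if __name__ == "__main__":
    t = 65537                               # a typical prime plaintext modulus
    sender = [3, 17, 29, 101, 9001]         # large set of size N_s
    receiver = [17, 50]                     # small set of size N_r
    print(intersection_indicators(sender, receiver, t))   # [0, nonzero]
```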
George Kadianakis, Mary Maller, Andrija Novakovic
ePrint Report
This paper introduces Sigmabus, a technique designed to enhance the efficiency of zero-knowledge circuits by relocating computationally expensive operations outside the circuit. Specifically, Sigmabus focuses on moving elliptic curve group operations, typically proven with expensive non-native field arithmetic, to external computations. By leveraging Sigma protocols, elliptic curve group operations are proven outside the circuit, while additional constraints are applied to the circuit to ensure correct execution of the Sigma protocol. This approach can achieve significant performance improvements in zero-knowledge circuits. This paper presents the Sigmabus protocol along with its security proofs, and demonstrates its practical implications through various use cases.
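As background, the sketch below is a minimal interactive Schnorr-style Sigma protocol for knowledge of a discrete logarithm, the kind of group statement that Sigmabus moves outside the circuit. The tiny group (p = 23, q = 11, g = 4) is a toy, insecure choice used only so the protocol runs; Sigmabus itself targets elliptic curve groups and additionally constrains the circuit to check the Sigma protocol transcript, which is not shown here.

```python
# Toy interactive Schnorr Sigma protocol for knowledge of x with Y = g^x.
import secrets

p, q, g = 23, 11, 4          # order-11 subgroup of Z_23^* (toy parameters)

def prove_commit():
    r = secrets.randbelow(q)
    return r, pow(g, r, p)    # prover keeps r, sends t = g^r

def prove_respond(r, x, c):
    return (r + c * x) % q    # response s = r + c*x mod q

def verify(Y, t, c, s):
    return pow(g, s, p) == (t * pow(Y, c, p)) % p

if __name__ == "__main__":
    x = secrets.randbelow(q)          # witness
    Y = pow(g, x, p)                  # public statement Y = g^x
    r, t = prove_commit()
    c = secrets.randbelow(q)          # verifier's challenge
    s = prove_respond(r, x, c)
    print("accepts:", verify(Y, t, c, s))
```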
Valerio Cini, Russell W. F. Lai, Giulio Malavolta
ePrint Report
Succinct arguments allow a prover to convince a verifier of the validity of any statement in a language, with minimal communication and verifier's work. Among other approaches, lattice-based protocols offer solid theoretical foundations, post-quantum security, and a rich algebraic structure. In this work, we present some new approaches to constructing efficient lattice-based succinct arguments. Our main technical ingredient is a new commitment scheme based on vanishing polynomials, a notion borrowed from algebraic geometry. We analyse the security of such a commitment scheme, and show how to take advantage of the additional algebraic structure to build new lattice-based succinct arguments. A few highlights amongst our results are:

- The first recursive folding (i.e. Bulletproofs-like) protocol for linear relations with polylogarithmic verifier runtime. Traditionally, the verifier runtime has been the efficiency bottleneck for such protocols (regardless of the underlying assumptions).
- The first verifiable delay function (VDF) based on lattices, building on a recently introduced sequential relation.
- The first lattice-based \emph{linear-time prover} succinct argument for NP, in the preprocessing model. The soundness of the scheme is based on (knowledge)-k-R-ISIS assumption [Albrecht et al., CRYPTO'22].
Charlotte Hoffmann, Pavel Hubáček, Chethan Kamath, Tomáš Krňák
ePrint Report
Lucas sequences are constant-recursive integer sequences with a long history of applications in cryptography, both in the design of cryptographic schemes and cryptanalysis. In this work, we study the sequential hardness of computing Lucas sequences over an RSA modulus.

First, we show that modular Lucas sequences are at least as sequentially hard as the classical delay function given by iterated modular squaring proposed by Rivest, Shamir, and Wagner (MIT Tech. Rep. 1996) in the context of time-lock puzzles. Moreover, there is no obvious reduction in the other direction, which suggests that the assumption of sequential hardness of modular Lucas sequences is strictly weaker than that of iterated modular squaring. In other words, the sequential hardness of modular Lucas sequences might hold even in the case of an algorithmic improvement violating the sequential hardness of iterated modular squaring. Moreover, we note that modular Lucas sequences also yield a time-lock puzzle, similar to the classical construction of Rivest, Shamir and Wagner.

Second, we demonstrate the feasibility of constructing practically-efficient verifiable delay functions based on the sequential hardness of modular Lucas sequences. Our construction builds on the work of Pietrzak (ITCS 2019) by leveraging the intrinsic connection between the problem of computing modular Lucas sequences and exponentiation in an appropriate extension field.
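A minimal sketch of the two sequential functions discussed above: with parameter $Q = 1$, the Lucas value $V_{2^T}$ can be computed by $T$ sequential applications of the doubling rule $V_{2k} = V_k^2 - 2$, in direct analogy with the $T$ squarings of the Rivest-Shamir-Wagner function. The final check uses the identity $V_{2^T}(x + x^{-1}, 1) = x^{2^T} + x^{-2^T} \bmod N$, which is what links the two problems. The modulus below is a toy value, not a real RSA modulus.

```python
# Modular Lucas sequence via the doubling rule, next to classical iterated squaring.

def lucas_doubling(P, T, N):
    """Compute V_{2^T}(P, 1) mod N via T applications of V_{2k} = V_k^2 - 2."""
    v = P % N                         # V_1 = P
    for _ in range(T):
        v = (v * v - 2) % N           # one inherently sequential step
    return v

def iterated_squaring(x, T, N):
    """The classical Rivest-Shamir-Wagner delay function: x^(2^T) mod N."""
    for _ in range(T):
        x = x * x % N
    return x

if __name__ == "__main__":
    N, T, x = 3233, 64, 7             # toy RSA modulus 61 * 53 (insecure, illustration only)
    P = (x + pow(x, -1, N)) % N       # P = x + x^{-1} mod N
    y = iterated_squaring(x, T, N)
    # Sanity check of the identity V_{2^T}(x + x^{-1}, 1) = x^{2^T} + x^{-2^T} (mod N)
    print(lucas_doubling(P, T, N) == (y + pow(y, -1, N)) % N)
```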
Marc Fischlin, Felix Rohrbach
ePrint Report
Extremely Lossy Functions (ELFs) are families of functions that, depending on the choice during key generation, either operate in injective mode or instead have only a polynomial image size. The choice of the mode is indistinguishable to an outsider. ELFs were introduced by Zhandry (Crypto 2016) and have been shown to be very useful in replacing random oracles in a number of applications.

One open question is to determine the minimal assumption needed to instantiate ELFs. While all constructions of ELFs depend on some form of exponentially-secure public-key primitive, it was conjectured that exponentially-secure secret-key primitives, such as one-way functions, hash functions or one-way product functions, might be sufficient to build ELFs. In this work we answer this conjecture mostly in the negative: we show that no primitive that can be derived from a random oracle (which includes all the secret-key primitives mentioned above) is enough to construct even moderately lossy functions in a black-box manner. However, we also show that (extremely) lossy functions themselves do not imply public-key cryptography, leaving open the option to build ELFs from some intermediate primitive between the classical categories of secret-key and public-key cryptography.
Sara Logsdon
ePrint Report
This paper offers a mathematical introduction to fully homomorphic encryption, a concept that enables computation on encrypted data. We trace the historical development of FHE, describe Fully Homomorphic Encryption over the Torus (TFHE) and how it performs certain mathematical operations, and explore bootstrapping and the possibility for adjusting computational depth. This paper equips readers with a brief understanding of FHE's evolution and the essential mechanisms facilitating practical implementation.
Roman Langrehr
ePrint Report
Non-interactive key exchange (NIKE) schemes like the Diffie-Hellman key exchange are a widespread building block in several cryptographic protocols. Since the Diffie-Hellman key exchange is not post-quantum secure, it is important to investigate post-quantum alternatives.

We analyze the security of the LWE-based NIKE by Ding et al. (ePrint 2012) and Peikert (PQCrypto 2014) in a multi-user setting where the same public key is used to generate shared keys with multiple other users. The Diffie-Hellman key exchange achieves this security notion. The mentioned LWE-based NIKE scheme comes with an inherent correctness error (Guo et al., PKC 2020), and this has significant implications for the multi-user security, necessitating a closer examination.

For NIKE schemes with negligible correctness error, single-user security generically implies multi-user security when all users generate their keys honestly. However, the LWE-based NIKE requires a super-polynomial modulus to achieve a negligible correctness error, which makes the scheme less efficient. We show that
- generically, single-user security does not imply multi-user security when the correctness error is non-negligible, but despite this
- the LWE-based NIKE with polynomial modulus is multi-user secure for honest users when the number of users is fixed in advance.
This result takes advantage of the leakage-resilience properties of LWE.

We then turn to a stronger model of multi-user security that allows adversarially generated public keys. For this model, we consider a variant of the LWE-based NIKE where each public key is equipped with a NIZKPoK of the secret key. Adding NIZKPoKs is a standard technique for this stronger model, and Hesse et al. (Crypto 2018) showed that this is sufficient to achieve security in the stronger multi-user security model for perfectly correct NIKEs (which the LWE-based NIKE is not). We show that
- for certain parameters that include all parameters with polynomial modulus, the LWE-based NIKE can be efficiently attacked with adversarially generated public keys, despite the use of NIZKPoKs, but
- for suitable parameters (that require a super-polynomial modulus), this security notion is achieved by the LWE-based NIKE with NIZKPoKs.
This stronger security notion has been previously achieved for LWE-based NIKE only in the QROM, while all our results are in the standard model.
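The following toy numpy sketch shows the Ding/Peikert-style key agreement analysed in the paper in its simplest form: both parties publish an LWE sample with respect to a shared public matrix and derive one key bit by rounding a noisy inner product. The parameters are illustrative assumptions only; with such a small modulus and noise, the inherent correctness error discussed above (raw keys landing near a rounding boundary) is easy to observe, which is exactly the issue driving the multi-user analysis.

```python
# Toy LWE-based non-interactive key exchange with rounding (illustration only).
import numpy as np

q, n, noise = 3329, 64, 2
rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(n, n))          # shared public matrix (e.g., from a CRS)

def keygen(transpose):
    s = rng.integers(-noise, noise + 1, size=n)
    e = rng.integers(-noise, noise + 1, size=n)
    M = A.T if transpose else A
    return s, (M @ s + e) % q                 # (secret key, public key)

def shared_bit(my_secret, their_public):
    raw = int(their_public @ my_secret) % q   # ~ s_B^T A s_A up to small noise
    return int(round(2 * raw / q)) % 2        # round to the nearest multiple of q/2

if __name__ == "__main__":
    s_A, p_A = keygen(transpose=False)        # Alice publishes A s_A + e_A
    s_B, p_B = keygen(transpose=True)         # Bob publishes A^T s_B + e_B
    # The two bits agree unless the noisy value falls near a rounding boundary,
    # which is the correctness error of the scheme.
    print(shared_bit(s_A, p_B), shared_bit(s_B, p_A))
```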
Calvin Abou Haidar, Alain Passelègue, Damien Stehlé
ePrint Report
Updatable public key encryption has recently been introduced as a solution to achieve forward-security in the context of secure group messaging without hurting efficiency, but so far, no efficient lattice-based instantiation of this primitive is known.

In this work, we construct the first LWE-based UPKE scheme with polynomial modulus-to-noise rate, which is CPA-secure in the standard model. At the core of our security analysis is a generalized reduction from the standard LWE problem to (a stronger version of) the Extended LWE problem. We further extend our construction to achieve stronger security notions by proposing two generic transforms. Our first transform allows us to obtain CCA security in the random oracle model and adapts the Fujisaki-Okamoto transform to the UPKE setting. Our second transform allows us to achieve security against malicious updates by adding a NIZK argument to the update mechanism. In the process, we also introduce the notion of an Updatable Key Encapsulation Mechanism (UKEM), as the updatable variant of KEMs. Overall, we obtain a CCA-secure UKEM in the random oracle model whose ciphertext sizes are of the same order of magnitude as those of CRYSTALS-Kyber.
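To recall the transform being adapted, the sketch below is a generic Fujisaki-Okamoto-style KEM in the random oracle model: encryption coins are derived from the message, and decapsulation re-encrypts to reject malformed ciphertexts. The underlying hashed-ElGamal-like toy PKE and its parameters are placeholders chosen only so the transform is runnable; they are not the paper's LWE-based UPKE, and the updatable aspects of the transform are not modelled here.

```python
# Fujisaki-Okamoto-style KEM over a placeholder (insecure, toy) PKE.
import hashlib, secrets

P, G = 2**61 - 1, 3                               # toy group parameters (illustration only)

def H(tag, *parts):
    h = hashlib.sha256(tag)
    for part in parts:
        h.update(part)
    return h.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def keygen():
    x = secrets.randbelow(P - 2) + 1
    return pow(G, x, P), x                        # (public key, secret key)

def encrypt(pk, m, r):                            # deterministic given coins r
    c1 = pow(G, r, P)
    pad = H(b"pad", pow(pk, r, P).to_bytes(8, "big"))
    return c1.to_bytes(8, "big") + xor(m, pad[:len(m)])

def decrypt(sk, c):
    c1 = int.from_bytes(c[:8], "big")
    pad = H(b"pad", pow(c1, sk, P).to_bytes(8, "big"))
    return xor(c[8:], pad[:len(c) - 8])

def encaps(pk):
    m = secrets.token_bytes(16)
    r = int.from_bytes(H(b"coins", m), "big") % (P - 1)
    c = encrypt(pk, m, r)                         # coins derived from m (derandomization)
    return H(b"key", m, c), c                     # session key K = H(m, c)

def decaps(pk, sk, c):
    m = decrypt(sk, c)
    r = int.from_bytes(H(b"coins", m), "big") % (P - 1)
    if encrypt(pk, m, r) != c:                    # re-encryption check
        return None
    return H(b"key", m, c)

if __name__ == "__main__":
    pk, sk = keygen()
    K, c = encaps(pk)
    assert decaps(pk, sk, c) == K
    print("shared key:", K.hex())
```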

21 September 2023

Aurel Page, Benjamin Wesolowski
ePrint Report
The supersingular Endomorphism Ring problem is the following: given a supersingular elliptic curve, compute all of its endomorphisms. The presumed hardness of this problem is foundational for isogeny-based cryptography. The One Endomorphism problem only asks to find a single non-scalar endomorphism. We prove that these two problems are equivalent, under probabilistic polynomial time reductions.

We prove a number of consequences. First, assuming the hardness of the endomorphism ring problem, the Charles–Goren–Lauter hash function is collision resistant, and the SQIsign identification protocol is sound. Second, the endomorphism ring problem is equivalent to the problem of computing arbitrary isogenies between supersingular elliptic curves, a result previously known only for isogenies of smooth degree. Third, there exists an unconditional probabilistic algorithm to solve the endomorphism ring problem in time $\tilde O(p^{1/2})$, a result that previously required assuming the generalized Riemann hypothesis.

To prove our main result, we introduce a flexible framework for the study of isogeny graphs with additional information. We prove a general and easy-to-use rapid mixing theorem.
Nina Bindel, Nicolas Gama, Sandra Guasch, Eyal Ronen
ePrint Report
FIDO2 is currently the main initiative for passwordless authentication in web servers. It mandates the use of secure hardware authenticators to protect the authentication protocol’s secrets from compromise. However, to ensure that only secure authenticators are being used, web servers need a method to attest their properties. The FIDO2 specifications allow authenticators and web servers to choose between different attestation modes to prove the characteristics of an authenticator; however, the properties of most of these modes have not been analysed in the context of FIDO2. In this work, we analyse the security and privacy properties of FIDO2 when the different attestation modes included in the standard are used, and show that they lack a good balance between security, privacy and revocation of corrupted devices. For example, the basic attestation mode prevents remote servers from tracing users’ actions across different services while requiring reduced trust assumptions. However, in case one device is compromised, all the devices from the same batch (e.g., of the same brand or model) need to be recalled, which can be quite complex (and arguably impractical) in consumer scenarios. As a consequence, we suggest a new attestation mode based on the recently proposed TokenWeaver, which provides more convenient mechanisms for revoking a single token while maintaining user privacy.
Kaiyi Zhang, Qingju Wang, Yu Yu, Chun Guo, Hongrui Cui
ePrint Report
Picnic is a NIST PQC Round 3 Alternate signature candidate that builds upon symmetric primitives following the MPC-in-the-head paradigm. Recently, researchers have been exploring more secure/efficient signature schemes from conservative one-way functions based on AES, or new low complexity one-way functions like Rain (CCS 2022) and AIM (CCS 2023). The signature schemes based on Rain and AIM are currently the most efficient among MPC-in-the-head-based schemes, making them promising post-quantum digital signature candidates.

However, the exact hardness of these new one-way functions deserves further study and scrutiny. This work presents algebraic attacks on Rain and AIM for certain instances, where one-round Rain can be compromised in $2^{n/2}$ encryptions for security parameter $n\in \{128,192,256\}$, and two-round Rain can be broken in $2^{120.3}$, $2^{180.4}$, and $2^{243.1}$ encryptions, respectively. Additionally, we demonstrate an attack on AIM-III (which aims at 192-bit security) with a complexity of $2^{186.5}$ encryptions. These attacks exploit the algebraic structure of the power function over fields with characteristic 2, which provides potential insights into the algebraic structures of some symmetric primitives and thus might be of independent interest.
David Jacquemin, Anisha Mukherjee, Ahmet Can Mert, Sujoy Sinha Roy
ePrint Report
The long running time of isogeny-based cryptographic constructions has proved to be a boon in disguise for one particular type of primitive called Verifiable Delay Functions (VDFs). VDFs are characterised by sequential function evaluation but immediate output verification. In order to ensure secure use of VDFs in real-world applications, it is important to determine the fastest implementation. Considering the point of view of an attacker (say, with unbounded resources), this paper aims to achieve the fastest possible hardware implementation of isogeny-based VDFs. It is the first work that implements the $2^T$-isogeny walk involved in the evaluation step of an isogeny VDF. To meet our goal, we use redundant representations of integers and introduce a new lookup table-based algorithm for modular reduction. We also provide a survey of elliptic curve arithmetic to arrive at the most cost-effective curve computations and propose an improvement of the point doubling algorithm for better parallelism. The evaluation step of a VDF is defined to be sequential, which means that there is limited scope for parallelism. Nevertheless, taking this constraint into account, our proposed design targets the highest level of parallelism possible at the architectural level of an isogeny VDF implementation. We provide a detailed analysis of all our arithmetic modules as well as estimates for their critical path delays and area consumption. Our 28nm ASIC design computes a $4^{100} = 2^{200}$-isogeny in 7.1 $\mu s$. It is the first high-performance ASIC implementation for the evaluation of isogeny VDFs.
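As a point of reference for the arithmetic being accelerated, the sketch below is a plain-Python projective x-only Montgomery point doubling, one of the core field-arithmetic kernels inside a $2^T$-isogeny evaluation. The paper's contributions, namely redundant integer representations, lookup table-based modular reduction and a more parallel doubling schedule in hardware, are not reflected here; the prime and curve constant are toy assumptions rather than the VDF's parameters.

```python
# Projective x-only Montgomery point doubling over a toy prime field.

def xDBL(X, Z, A24, p):
    """Double the point with projective x-coordinate (X : Z) on B*y^2 = x^3 + A*x^2 + x,
    where A24 = (A + 2) / 4 mod p."""
    t0 = (X - Z) % p
    t1 = (X + Z) % p
    t0 = t0 * t0 % p              # (X - Z)^2
    t1 = t1 * t1 % p              # (X + Z)^2
    t2 = (t1 - t0) % p            # 4*X*Z
    X2 = t0 * t1 % p              # (X - Z)^2 * (X + Z)^2
    Z2 = t2 * ((t0 + A24 * t2) % p) % p
    return X2, Z2

if __name__ == "__main__":
    p = 2**61 - 1                       # toy prime field
    A24 = (6 + 2) * pow(4, -1, p) % p   # e.g. curve constant A = 6
    X, Z = 9, 1
    for _ in range(200):                # 200 sequential doublings
        X, Z = xDBL(X, Z, A24, p)
    print(X * pow(Z, -1, p) % p if Z else "point at infinity")
```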
Ronan Lashermes, Hélène Le Bouder
ePrint Report
We introduce a novel side-channel-based reverse engineering technique capable of reconstructing a procedure solely from inputs, outputs, and traces of execution. Beyond generic restrictions, we do not assume any prior knowledge of the procedure or the chip it operates on. These restrictions confine our analysis to 8-bit RISC constant-time software implementations.

Specifically, we demonstrate the feasibility of reconstructing a symmetric cryptographic cipher, even in scenarios where traces are sampled with information loss and noise, such as when measuring the power consumption of the chip.

18 September 2023

Omer Paneth, Rafael Pass
ePrint Report
Non-interactive delegation schemes enable producing succinct proofs (that can be efficiently verified) that a machine $M$ transitions from $c_1$ to $c_2$ in a certain number of deterministic steps. We here consider the problem of efficiently \emph{merging} such proofs: given a proof $\Pi_1$ that $M$ transitions from $c_1$ to $c_2$, and a proof $\Pi_2$ that $M$ transitions from $c_2$ to $c_3$, can these proofs be efficiently merged into a single short proof (of roughly the same size as the original proofs) that $M$ transitions from $c_1$ to $c_3$? To date, the only known constructions of such a mergeable delegation scheme rely on strong non-falsifiable ``knowledge extraction" assumptions. In this work, we present a provably secure construction based on the standard LWE assumption.

As an application of mergeable delegation, we obtain a construction of incrementally verifiable computation (IVC) (with polylogarithmic length proofs) for any (unbounded) polynomial number of steps based on LWE; as far as we know, this is the first such construction based on any falsifiable (as opposed to knowledge-extraction) assumption. The central building block that we rely on, and construct based on LWE, is a rate-1 batch argument (BARG): this is a non-interactive argument for NP that enables proving $k$ NP statements $x_1,..., x_k$ with communication/verifier complexity $m+o(m)$, where $m$ is the length of one witness. Rate-1 BARGs are particularly useful as they can be recursively composed a super-constant number of times.
Prashant Agrawal, Kabir Tomer, Abhinav Nakarmi, Mahabir Prasad Jhanwar, Subodh Sharma, Subhashis Banerjee
ePrint Report
In this paper we address the problem of recovering from failures, without re-running the entire election, when an election fails to verify. We consider the setting of $\textit{dual voting}$ protocols, where the cryptographic guarantees of end-to-end verifiable voting (E2E-V) are combined with the simplicity of audits using voter-verified paper records (VVPRs). We first consider the design requirements of such a system and then suggest a protocol called $\textit{OpenVoting}$, which identifies a verifiable subset of error-free votes consistent with the VVPRs, as well as the polling booths corresponding to the votes that fail to verify, with possible reasons for the failures.
Yi Liu, Junzuo Lai, Qi Wang, Xianrui Qin, Anjia Yang, Jian Weng
ePrint Report
Protocols with \emph{publicly verifiable covert (PVC) security} offer high efficiency and an appealing feature: a covert party may deviate from the protocol, but with a probability (e.g., $90\%$, referred to as the \emph{deterrence factor}), the honest party can identify this deviation and expose it using a publicly verifiable certificate. These protocols are particularly suitable for practical applications involving reputation-conscious parties.

However, in the cases where misbehavior goes undetected (e.g., with a probability of $10\%$), \emph{no security guarantee is provided for the honest party}, potentially resulting in a complete loss of input privacy and output correctness.

In this paper, we tackle this critical problem by presenting a highly effective solution. We introduce and formally define an enhanced notion called \emph{robust PVC security}, such that even if the misbehavior remains undetected, the malicious party can gain only an additional $1$ bit of information about the honest party's input while the correctness of the output is maintained. We propose a novel approach leveraging \emph{dual execution} and \emph{time-lock puzzles} to design a robust PVC-secure two-party protocol with \emph{low overhead} (depending on the deterrence factor). For instance, with a deterrence factor of $90\%$, our robust PVC-secure protocol incurs \emph{only ${\sim}10\%$ additional overhead} compared to the state-of-the-art PVC-secure protocol.

Given the stronger security guarantees with low overhead, our protocol is highly suitable for practical applications of secure two-party computation.
Zhenzhen Bao, Jinyu Lu, Yiran Yao, Liu Zhang
ePrint Report
In CRYPTO 2019, Gohr showed that well-trained neural networks could perform cryptanalytic distinguishing tasks superior to differential distribution table (DDT)-based distinguishers. This suggests that the differential-neural distinguisher (ND) may use additional information besides pure ciphertext differences. However, what explicit knowledge it exploits beyond the differential distribution is still unclear. In this work, we provide explicit rules that can be used alongside DDTs to enhance the effectiveness of distinguishers compared to pure DDT-based distinguishers. These rules are based on strong correlations between bit values in right pairs of XOR-differential propagation through addition modulo $2^n$. Interestingly, they can be closely linked to the earlier study of multi-bit constraints and the recent study of fixed-key differential probability. In contrast, combining these rules with the ND does not improve its performance, which suggests that these rules, or an equivalent form of them, have already been exploited by NDs, highlighting the power of neural networks in cryptanalysis. In addition, we find that to increase the differential-neural distinguisher's accuracy and the number of rounds it covers, regulating the differential propagation is imperative. Introducing differences into the keys is typically believed to help eliminate differences in encryption states, resulting in stronger differential propagations. However, differential-neural attacks differ from traditional ones, as they do not specify output differences or follow a single differential trail. This calls into question the usefulness of introducing differences into the key in differential-neural attacks, and the resistance of Speck against such attacks in the related-key setting. This work shows that the power of differential-neural cryptanalysis in the related-key setting can exceed that in the single-key setting by successfully conducting a 14-round key-recovery attack on Speck32/64.
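For context, the sketch below generates labelled ciphertext-pair samples of the kind used to train differential-neural distinguishers on Speck32/64, using the standard round function and the input difference (0x0040, 0x0000) commonly used in this line of work. Independent random round keys replace the real key schedule, which is a simplifying assumption; the related-key data generation and the 14-round key-recovery attack of the paper are not reproduced here.

```python
# Toy data generation for a differential-neural distinguisher on Speck32/64.
import secrets

MASK = 0xFFFF                                   # 16-bit words

def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK

def speck_round(x, y, k):
    x = (ror(x, 7) + y) & MASK
    x ^= k
    y = rol(y, 2) ^ x
    return x, y

def encrypt(pt, round_keys):
    x, y = pt
    for k in round_keys:
        x, y = speck_round(x, y, k)
    return x, y

def sample(rounds, diff=(0x0040, 0x0000)):
    """One training sample: two ciphertexts and a label (1 = real pair, 0 = random pair)."""
    keys = [secrets.randbelow(1 << 16) for _ in range(rounds)]   # simplification: no key schedule
    p0 = (secrets.randbelow(1 << 16), secrets.randbelow(1 << 16))
    label = secrets.randbelow(2)
    if label:
        p1 = (p0[0] ^ diff[0], p0[1] ^ diff[1])
    else:
        p1 = (secrets.randbelow(1 << 16), secrets.randbelow(1 << 16))
    return encrypt(p0, keys), encrypt(p1, keys), label

if __name__ == "__main__":
    print(sample(rounds=7))
```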