International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

11 October 2025

Alain Couvreur, Thomas Debris-Alazard, Philippe Gaborit, Adrien Vinçotte
ePrint Report
We present Miranda, the first family of full-domain-hash signatures based on matrix codes. This signature scheme follows the paradigm of Gentry, Peikert and Vaikuntanathan (GPV), which gives strong security guarantees. Our trapdoor is very simple and generic: although we instantiate it with matrix codes, it can be instantiated in many other ways, since it only involves a subcode of a decodable code (or lattice) in a unique-decoding regime of parameters. Though Miranda's signing algorithm relies on a decoding task with exactly one solution, there are many possible signatures for a given message, and we ensure that signatures leak no information about their underlying trapdoor by means of a very simple procedure that draws a small number of uniform bits. In particular, Miranda does not use rejection sampling, which makes its implementation very simple compared to other GPV-like signature schemes such as Falcon or even Wave.

We instantiate Miranda with the famous family of Gabidulin codes represented as spaces of matrices, and we thoroughly study its security (in the EUF-CMA security model). For 128 bits of classical security, the signature sizes are as low as 90 bytes and the public key sizes are on the order of 2.6 megabytes.
George Lu, Jad Silbak, Daniel Wichs
ePrint Report
We study error-detection and error-correction codes for computationally bounded adversarial channels. We consider seeded codes where the polynomial-time encoding and decoding procedures share a public random seed, but are otherwise deterministic. An adversarial channel gets this seed and can perform arbitrary polynomial-time computation to adaptively select both the message to be encoded and a bounded number of errors to be added to the resulting codeword. The goal is to detect or correct such errors with overwhelming probability, while achieving better trade-offs between rate and error tolerance than those possible for computationally unbounded channels. For large alphabets, prior work (ITCS '25) achieves essentially optimal parameters under minimal cryptographic assumptions. However, for the binary alphabet, prior works (TCC '20, EUROCRYPT '25) either only achieved a weaker notion of selective security under the learning with errors (LWE) assumption, or relied on non-standard cryptographic assumptions to get the full notion of adaptive security. In this work, we construct binary codes that achieve the full notion of adaptive security assuming trapdoor hashing, which can in turn be instantiated under a variety of standard cryptographic assumptions such as LWE, or Decisional Diffie-Hellman (DDH), or Quadratic Residuosity (QR), or Decisional Composite Residuosity (DCR). For error detection, our codes get essentially optimal rate $R \approx 1$ and relative error tolerance $p \approx \frac{1}{2}$. For error correction, they can uniquely correct $p < 1/4$ fraction of errors with a rate $R$ matching that of the best known list-decodable codes for this error tolerance.

As a central technical tool of potentially independent interest, we construct multi-input correlation intractable hashing for ``shifted output relations'' under the standard cryptographic assumptions above.
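The rate $R \approx 1$ regime for error detection can be pictured with the classic "append a seeded tag" baseline: a short seed-keyed tag is appended to the message, so the rate tends to 1 as messages grow. The sketch below shows only this naive baseline (a seed-aware polynomial-time adversary can defeat a plain HMAC tag, which is precisely why stronger tools such as correlation-intractable hashing are needed); all names are illustrative, not the paper's construction.

```python
import hmac
import hashlib

TAG_LEN = 32  # SHA-256 output size

def encode(seed: bytes, msg: bytes) -> bytes:
    # Append a seed-keyed tag; rate len(msg)/len(codeword) -> 1 for long messages.
    tag = hmac.new(seed, msg, hashlib.sha256).digest()
    return msg + tag

def decode(seed: bytes, word: bytes):
    # Recompute the tag to detect errors; None signals "error detected".
    msg, tag = word[:-TAG_LEN], word[-TAG_LEN:]
    expected = hmac.new(seed, msg, hashlib.sha256).digest()
    return msg if hmac.compare_digest(expected, tag) else None
```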
Hossein Hafezi, Gaspard Anthoine, Matteo Campanelli, Dario Fiore
ePrint Report
Lookup arguments have become a central tool in proof systems, powering a range of practical applications. They enable the efficient enforcement of non-native operations, such as bit decomposition, range checks, comparisons, and floating-point arithmetic. They underpin zk-VMs by modelling instruction tables, provide set membership proofs in stateful computations, and strengthen extractors by ensuring witnesses belong to small domains. Despite these broad uses, existing lookup constructions vary widely in assumptions, efficiency, and composability. In this work, we systematize the design of lookup arguments and the cryptographic primitives they rely on. We introduce a unified and modular framework that covers standard, projective, indexed, vector, and decomposable lookups. We classify existing protocols by proof technique (multiset equality, Logup-based, accumulators, and subvector extraction (matrix–vector)) as well as by composition style. We survey and evaluate existing protocols along dimensions such as prover cost, dependence on table size, and compatibility with recursive proofs. From this analysis, we distill lessons and guidelines for choosing lookup constructions in practice and highlight the benefits and limitations of emerging directions in the literature, such as preprocessing and decomposability.
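The Logup family classified above rests on a rational identity: the lookups $a_i$ all lie in the table $(t_j)$ iff $\sum_i 1/(x-a_i) = \sum_j m_j/(x-t_j)$ for suitable multiplicities $m_j$, checked at a random point $x$ via Schwartz-Zippel. A minimal numeric sketch of that check (toy field and names are ours, not from the paper):

```python
from collections import Counter

P = 2**61 - 1  # toy prime field for the sketch

def inv(a: int) -> int:
    # modular inverse via Fermat's little theorem
    return pow(a, P - 2, P)

def logup_check(lookups, table, x):
    # LHS: sum over lookups of 1/(x - a)
    lhs = sum(inv((x - a) % P) for a in lookups) % P
    # RHS: sum over distinct table entries of m_t/(x - t),
    # with m_t the multiplicity of t among the lookups
    m = Counter(lookups)
    rhs = sum(m.get(t, 0) * inv((x - t) % P) for t in set(table)) % P
    return lhs == rhs
```

If every lookup value is in the table the two sums agree identically; a value outside the table leaves an uncancelled pole, so the check fails for all but a negligible fraction of random points $x$.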
Sana Boussam
ePrint Report
Non-profiled attacks are attacks in which an adversary aims to retrieve secret information from a device without prior knowledge of its leakage model characteristics. In practice, the most common non-profiled attacks, namely Differential Power Analysis (DPA), Correlation Power Analysis (CPA) and Linear Regression based Attacks (LRA), still require an a priori assumption about the leakage model. The development of a generic attack making no assumption about the leakage model therefore remains an open issue, investigated by the side-channel community for over 10 years. Among state-of-the-art non-profiled attacks, Whitnall et al. showed that LRA is a generic attack when all predictors are considered, i.e., LRA captures the dependencies between the physical traces and the bits of the secret information together with their interactions. In practice, however, LRA cannot be carried out with all predictors: it is subject to the multicollinearity problem inherent to linear regression, and its usual distinguishers lose their discriminating ability when targeting injective functions. In this paper, we address this issue and provide a significant improvement in the study of generic attacks. First, we show that using the Walsh-Hadamard basis prevents the multicollinearity problem from which LRA suffers, and thus allows a generic LRA, i.e., an LRA in which all predictors can be considered, without incurring a loss of precision in the estimated coefficients. From this observation, we demonstrate the limitations of the classical distinguishers in LRA (i.e., Euclidean distance based distinguishers) and propose novel alternatives that are more suitable for LRA.
To motivate the choice of these new distinguishers, we formally demonstrate their soundness against linear and non-linear operations. These choices result in the first generic non-profiled attack that considers all predictors. Finally, we experimentally assess and validate all our theoretical results using simulations and publicly available datasets, covering a wide range of use cases.
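The Walsh-Hadamard fix is easy to verify numerically: the basis functions $\chi_u(x) = (-1)^{\mathrm{popcount}(x \wedge u)}$ are pairwise orthogonal, so a design matrix containing all $2^n$ predictors has orthogonal columns and multicollinearity disappears. A minimal check of this orthogonality (our own illustration, not the paper's attack code):

```python
def walsh_column(u: int, n: int):
    # chi_u(x) = (-1)^popcount(x & u), evaluated over all n-bit inputs x
    return [(-1) ** bin(x & u).count("1") for x in range(2 ** n)]

n = 3
H = [walsh_column(u, n) for u in range(2 ** n)]

# Distinct columns have zero inner product, so a regression over the
# full predictor set is perfectly conditioned (no multicollinearity).
for i in range(2 ** n):
    for j in range(2 ** n):
        dot = sum(a * b for a, b in zip(H[i], H[j]))
        assert dot == (2 ** n if i == j else 0)
```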
Yevhen Hrubiian, Illia Melnyk, Volodymyr Dubinin, Oleksandr Kurbatov, Serhii Volynets, Roman Perebynos, Yevhenii Serdiukov
ePrint Report
The majority of modern end-to-end (E2E) private applications face scalability issues that limit their functionality, broader adoption, and overall user experience. Some organize private groups as a set of peer-to-peer chats, which leads to communication complexity quadratic in the group size and encryption time linear in the number of members. Others apply more scalable group key establishment constructions (such as ART), but do not support verifiable and concurrent updates. In this paper, we introduce the 0-ART protocol, which addresses these issues through (1) verifiable group operations; (2) a causal tree construction allowing multiple concurrent updates and efficient member removal; and (3) anonymous credentials, providing privacy within the group while keeping operations authentic. We implemented the 0-ART framework and applied it to a decentralized collaborative work application. According to our benchmark, executing the most complex operation (the provable $\mathsf{AddMember}$ operation in a group of size $2^{20}$) takes 1.57 seconds on the user's device with a proof size of 24 KB.
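The scalability gain of tree-based constructions such as ART comes from the fact that a member update touches only the path from that leaf to the root, i.e., $\log_2 n$ nodes. The toy hash tree below illustrates the shape of that computation; real ART derives parent secrets with Diffie-Hellman key exchange rather than a hash, so this is only a structural sketch with names of our own choosing.

```python
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    # toy parent derivation; ART uses a DH-based derivation instead
    return hashlib.sha256(a + b).digest()

def root(leaves):
    # fold a power-of-two list of leaf secrets up to the root
    level = list(leaves)
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def update_path_length(n_leaves: int) -> int:
    # nodes recomputed when one member updates: one per tree level
    return (n_leaves - 1).bit_length()
```

For a group of size $2^{20}$, `update_path_length` is 20: an update re-derives 20 internal nodes rather than re-keying a million pairwise channels.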
Olivier Blazy, Lola-Baie Mallordy
ePrint Report
As anonymous social networks continue to grow in popularity, they raise serious challenges around abuse, misinformation, and harmful behavior. While traditional de-anonymization techniques can address misuse, they often come at the cost of user privacy. Protecting honest users' privacy while keeping malicious ones accountable remains a complex challenge. A promising alternative is conditional de-anonymization, where anonymity is lifted only when abuse is collectively reported and verified by an authority.

In this work, we present a new cryptographic framework for conditional de-anonymization that preserves user anonymity unless a threshold number of validated abuse reports is reached. Our design leverages threshold cryptography to ensure that de-anonymization can only be triggered under well-defined, collectively enforced conditions—resisting false reports, misuse, and collusion. We formalize the model and provide a generic construction from standard cryptographic primitives, proving strong security guarantees even in the presence of adversarial authorities or malicious users. To illustrate its viability, we also propose a concrete instantiation based on lattice-based assumptions, making it a compelling candidate for post-quantum settings.
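The "threshold number of validated reports" gate can be realized by secret-sharing the de-anonymization capability: each validated report releases one share, and only $t$ shares reconstruct. A textbook Shamir sketch over a toy prime field (the paper's actual lattice-based instantiation is different; this only illustrates the threshold mechanism):

```python
import random

P = 2**127 - 1  # toy prime field

def share(secret: int, t: int, n: int):
    # random polynomial of degree t-1 with constant term = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Fewer than $t$ shares reveal nothing about the secret, which is exactly the property the framework needs against coalitions below the reporting threshold.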
Carolina Ortega Pérez, Thomas Ristenpart, Julia Len
ePrint Report
The recent Digital Markets Act (DMA), a regulation passed by the European Union in 2022, requires messaging applications with large user bases to support interoperable end-to-end encrypted (E2EE) communication. This raises numerous questions about how to adapt cryptographic protocols to this setting in a way that preserves security and privacy. This question is not only limited to the main messaging protocols, but also extends to protocols for abuse mitigation such as the symmetric message franking protocol first proposed by Facebook. The latter uses symmetric cryptography to enable reporting abusive E2EE messages in a way that allows the platform to cryptographically verify the report's veracity.

In this paper, we initiate a formal treatment of interoperable symmetric message franking (IMF). We focus on a server-to-server messaging flow, where messages are routed sequentially through the sender's and recipient's service providers, but allow the recipient to dynamically choose whom to send a report to. We formalize the security definitions for IMF, adapting the sender- and recipient-binding definitions into various reportability and unforgeability definitions that take into account one of the service providers misbehaving. We also prove relations among these new definitions. Finally, we detail an IMF construction that satisfies the security definitions, and include a discussion of users' identity-privacy goals and deployment considerations.
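The symmetric franking flow described above reduces, at its core, to a committing tag: the sender derives a per-message franking key, commits to the plaintext with it, and ships the key inside the E2EE payload; a reporter later reveals both so the commitment can be re-checked. A minimal sketch of just this commitment step (the provider signatures and interoperable routing that are the paper's actual subject are omitted; names are illustrative):

```python
import hmac
import hashlib
import os

def frank(message: bytes):
    # per-message franking key, sent to the recipient inside the E2EE payload
    kf = os.urandom(32)
    # commitment travels in the clear alongside the ciphertext,
    # where the platform can bind it to sender metadata
    commitment = hmac.new(kf, message, hashlib.sha256).digest()
    return kf, commitment

def verify_report(message: bytes, kf: bytes, commitment: bytes) -> bool:
    # a report reveals (message, kf); anyone holding the commitment can check it
    expected = hmac.new(kf, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, commitment)
```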

09 October 2025

Fuyuki Kitagawa, Jiahui Liu, Shota Yamada, Takashi Yamakawa
ePrint Report
Secure key leasing allows a cryptographic key to be leased as a quantum state in such a way that the key can later be revoked in a verifiable manner. In this work, we propose a modular framework for constructing secure key leasing with a classical lessor, where the lessor is entirely classical and, in particular, the quantum secret key can be both leased and revoked using only classical communication. Based on this framework, we obtain classical-lessor secure key leasing schemes for public-key encryption (PKE), pseudorandom functions (PRFs), and digital signatures. We adopt the strong security notion known as security against verification key revealing attacks (VRA security), proposed by Kitagawa et al. (Eurocrypt 2025), in the classical-lessor setting, and we prove that all three of our schemes satisfy this notion under the learning with errors assumption. Our PKE scheme improves upon the previous construction by Goyal et al. (Eurocrypt 2025), and our PRF and digital signature schemes are respectively the first PRF and the first digital signature with the classical-lessor secure key leasing property.
Sora Suegami, Enrico Bottazzi
ePrint Report
Lattice-based key-homomorphic encodings introduced by Boneh et al.~(Eurocrypt'14)---known as BGG+ encodings---underpin many primitives, including key-policy attribute-based encryption (KP-ABE). Many applications beyond KP-ABE require simulating the homomorphic evaluation of FHE ciphertexts over BGG+ encodings, which involves nonlinear operations on integers of magnitude up to the ciphertext modulus $q$. However, due to noise growth incurred by multiplication, the encodable integers must be kept small, typically bits, thereby forcing nonlinear operations to be simulated by Boolean circuits and incurring a circuit-size blow-up polynomial in $\log_2 q$. Apart from resorting to costly bootstrapping for BGG+ encodings, no method is known to beat this baseline. We propose a method to evaluate lookup tables~(LUTs) over BGG+ encodings that operates directly on base-$B$ digit representations for $2
Jiayu Xu, Zhiyuan Zhao
ePrint Report
An asymmetric Password-Authenticated Key Exchange (aPAKE) protocol allows a client and a server to agree upon a cryptographic key, with the only information shared in advance being a low-entropy password (memorized by the client) and a corresponding password file (stored in the server). The standard security definition for aPAKE is in the Universally Composable (UC) framework. Since aPAKE does not rely on a PKI, such protocols have received much attention in recent years due to their potential of replacing the traditional “password-over-TLS” approach for client-server authentication on the Internet.

One of the only two aPAKE protocols currently deployed in practice is Secure Remote Password (SRP) (Wu, NDSS 1998), which is used by millions of users on the Internet. A formal security analysis of SRP has long been elusive; the only security proof to date is the one by Dayanikli and Lehmann (CSF 2024) which is in a (somewhat non-standard) variant of UC called UC with angels. This framework allows the simulator access to an additional oracle that solves certain hard cryptographic problems.

In this work, we present a comprehensive new analysis of SRP, and clarify a number of ambiguities about its security:

1. We formally prove that SRP is UC-secure if and only if one computational problem in the field is hard and one decisional problem is easy. As the decisional problem is likely hard in the field, this strongly suggests that SRP is not UC-secure, and hence justifies the usage of UC with angels in the security analysis;

2. On the other hand, we show that the “angel” given to the simulator in the Dayanikli–Lehmann analysis is stronger than necessary, and can be replaced by a weaker oracle;

3. Finally, we prove that UC-with-angels-security is still stronger than the game-based security for aPAKE, i.e., SRP is still game-based secure under some reasonable assumptions.

Overall, we pinpoint the exact conditions under which SRP can be proven secure, reducing its security to a number of underlying hardness problems. Along the way we also identify and bridge multiple gaps in the Dayanikli–Lehmann analysis — most notably, the concrete security bound — and apply novel proof techniques that might find application in other contexts.
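For readers unfamiliar with SRP, the algebra that lets both sides agree on a secret can be run with insecure toy numbers. In the real protocol $N$ is a large safe prime and $k$, $u$, $x$ are hash outputs ($k = H(N, g)$, $u = H(A, B)$, $x = H(\mathrm{salt}, \mathrm{password})$); all constants below are arbitrary stand-ins for illustration only.

```python
# Toy SRP-6a key agreement over an insecure small prime (illustration only).
N = 2267          # toy prime modulus; real SRP uses a large safe prime
g = 2
k, u = 3, 5       # stand-ins for the hash values H(N, g) and H(A, B)
x = 911           # stand-in for x = H(salt, password)
v = pow(g, x, N)  # password verifier stored by the server

a, b = 123, 456                      # ephemeral secrets
A = pow(g, a, N)                     # client -> server
B = (k * v + pow(g, b, N)) % N       # server -> client

# Client: B - k*g^x = g^b (mod N), so S = g^(b*(a + u*x))
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
# Server: A * v^u = g^(a + u*x), so the same S
S_server = pow((A * pow(v, u, N)) % N, b, N)
assert S_client == S_server
```

Both sides compute $g^{b(a+ux)} \bmod N$, and only a client knowing $x$ can cancel the $k v$ masking term in $B$.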
Akira Ito, Takayuki Miura, Yosuke Todo
ePrint Report
Deep Neural Networks (DNNs) have attracted significant attention, and their internal models are now considered valuable intellectual assets. Extracting these internal models through access to a DNN is conceptually similar to extracting a secret key via oracle access to a block cipher. Consequently, cryptanalytic techniques, particularly differential-like attacks, have been actively explored recently. ReLU-based DNNs are the most commonly and widely deployed architectures. While early works (e.g., Crypto 2020, Eurocrypt 2024) assume access to exact output logits, which are usually invisible, more recent works (e.g., Asiacrypt 2024, Eurocrypt 2025) focus on the hard-label setting, where only the final classification result (e.g., "dog" or "car") is available to the attacker. Notably, Carlini et al. (Eurocrypt 2025) demonstrated that model extraction is feasible in polynomial time even under this restricted setting.

In this paper, we first show that the assumptions underlying their attack become increasingly unrealistic as the attack-target depth grows. In practice, satisfying these assumptions requires an exponential number of queries with respect to the attack depth, implying that the attack does not always run in polynomial time. To address this critical limitation, we propose a novel attack method called CrossLayer Extraction. Instead of directly extracting the secret parameters (e.g., weights and biases) of a specific neuron, which incurs exponential cost, we exploit neuron interactions across layers to extract this information from deeper layers. This technique significantly reduces query complexity and mitigates the limitations of existing model extraction approaches.

08 October 2025

Jipeng Zhang, Jiaheng Zhang
ePrint Report
Falcon, an NTRU-based digital signature algorithm, has been selected by NIST as one of the post-quantum cryptography (PQC) standards. Compared to verification, Falcon's signature generation is relatively slow. One of its core operations is discrete Gaussian sampling, which involves a component known as the BaseSampler. The BaseSampler accounts for up to 30% of the time required for signature generation, making it a significant performance bottleneck. This work aims to address this bottleneck.

We design a vectorized version of the BaseSampler and provide optimized implementations across six different instruction sets: SSE2, AVX2, AVX-512F, NEON, RISC-V Vector (RVV), and RV64IM. The AVX2 implementation, for instance, achieves an 8.4× speedup over prior work. Additionally, we optimize the FFT/iFFT operations using RVV and RV64D. For the RVV implementation, we introduce a new method using strided load/store instructions, with 4+4 and 4+5 layer-merging strategies for Falcon-512 and Falcon-1024, respectively, resulting in a speedup of more than 4×.

Finally, we present the results of our optimized implementations across eight different instruction sets for signature generation of Falcon. For instance, our AVX2, AVX-512F, and RV64GCVB implementations achieve performance improvements of 23%, 36%, and 59%, respectively, for signature generation of Falcon-512.
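The BaseSampler at the heart of this bottleneck is a cumulative-distribution-table (CDT) walk: it returns how many table entries exceed a uniformly random input. Real implementations do the comparisons branch-free for constant time, and the independent per-input comparisons are what make the routine vectorize naturally. A sketch with an illustrative table (not Falcon's actual 72-bit constants):

```python
# Illustrative decreasing CDT on a 16-bit scale (NOT Falcon's real table,
# which holds 72-bit thresholds for a half-Gaussian distribution).
CDT = [32768, 12000, 3200, 600, 80, 8, 1]

def base_sampler(r: int) -> int:
    # z = number of table entries strictly greater than the uniform input r.
    # Each comparison contributes 0 or 1; constant-time code avoids branches,
    # and a vectorized version evaluates many r's per instruction.
    z = 0
    for t in CDT:
        z += (r < t)
    return z
```

Larger random inputs yield smaller outputs, so the output distribution follows the (decreasing) table: small values of $z$ are most likely, as for a half-Gaussian.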
Zhe Li, Chaoping Xing, Yizhou Yao, Chen Yuan, Mengmeng Zhou
ePrint Report
Dittmer, Ishai and Ostrovsky (ITC'21) proposed {\em line-point zero-knowledge proof} (LPZK), a simple ``commit-and-prove'' system, motivated by practical protocols for compressing correlated pseudorandomness used in secure multiparty computation (MPC). Typically, LPZK admits concretely efficient ZK protocols with a streaming, linear-time prover, {\em but a linear-size proof}. A natural question raised in this context is how far one can go in minimizing the proof size while maintaining the prover efficiency. Though a recent work by Lin, Xing and Yao (ASIACRYPT'24) gives an interactive LPZK with a sublinear proof size $O(n+d^2\log{|\mathcal{C}|})$, it is still far from being {\em succinct}, where $n,d,|\mathcal{C}|$ denote the input size, circuit depth, and circuit size, respectively.

In this work, we beat the proof-size barrier and propose {\em succinct LPZK arguments}, by distilling techniques from orthogonal studies on homomorphic secret sharing and succinct garbling. Specifically, under variants of group/lattice-based assumptions, we show the following:

i) There exist succinct LPZK arguments with common reference string (CRS) size $O(n^{2/3})$, proof size $O(n^{2/3})$, prover time $O(n^{4/3}+|\mathcal{C}|)$, verification time $O(n+|\mathcal{C}|)$, and negligible soundness error, where both the prover and verifier executions can be run in a streaming fashion.

ii) The above proof size can be further optimized to $O(1)$, at the cost of a larger CRS size $O(n)$, and prover time increased to $O(n^{2}+|\mathcal{C}|)$.

In general, our succinct LPZK arguments pave a new way for building designated-verifier zero-knowledge succinct non-interactive arguments of knowledge (dv-zkSNARKs), and new interesting features (e.g., streaming, constant sized proof with CRS size not proportional to the circuit size) are obtained for the first time along the way.
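The VOLE correlation underlying LPZK, $w_i = v_i + u_i \cdot \Delta$, acts as a linearly homomorphic MAC: any public linear statement about the prover's values $u_i$ can be checked from a single field element. A toy check of one linear gate (the succinct arguments above involve far more machinery; the setup here is an idealized trusted dealer):

```python
import random

P = 2**61 - 1  # toy prime field

def vole(u):
    # Idealized VOLE dealer: prover learns w, verifier learns (delta, v),
    # with w_i = v_i + u_i * delta. Real protocols generate this correlation
    # from compressed pseudorandomness.
    delta = random.randrange(1, P)
    v = [random.randrange(P) for _ in u]
    w = [(vi + ui * delta) % P for ui, vi in zip(u, v)]
    return w, delta, v

# Prover's witness and a public linear statement: sum(c_i * u_i) = t
u = [5, 7, 11]
c = [2, 3, 1]
t = sum(ci * ui for ci, ui in zip(c, u)) % P

w, delta, v = vole(u)
# Prover sends m = sum(c_i * w_i); verifier checks it against its own v and delta:
# sum(c_i * w_i) = sum(c_i * v_i) + delta * sum(c_i * u_i)
m = sum(ci * wi for ci, wi in zip(c, w)) % P
assert m == (sum(ci * vi for ci, vi in zip(c, v)) + t * delta) % P
```

A prover claiming a wrong $t'$ would need to guess $\Delta$ to forge a consistent $m$, which succeeds with probability at most $1/(P-1)$.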
Youngjin Bae, Jung Hee Cheon, Minsik Kang, Taeseong Kim
ePrint Report
Fully homomorphic encryption (FHE) allows computation without decryption, but often suffers from a large ciphertext expansion ratio and computational overhead. On the other hand, AES is a widely adopted symmetric block cipher known for its efficiency and compact ciphertext size. However, its symmetric nature prevents direct computation on encrypted data. Homomorphic transciphering bridges these two approaches by enabling computation on AES-encrypted data using FHE-encrypted AES keys, thereby combining the compactness of AES with the flexibility of FHE.

In this work, we present a high-throughput homomorphic AES transciphering based on the CKKS scheme. Our design takes advantage of the ring-conversion technique to the conjugate-invariant ring \cite{real_heaan} during the transciphering circuit, including bootstrapping, along with a suite of optimizations that reduce computational overhead. As a result, we achieve a throughput (per-block evaluation time) of 0.994 ms, i.e., below 1 ms, outperforming the state of the art \cite{xboot} by $1.58\times$ when processing $2^{15}$ AES-128 blocks under the same implementation environment, and we support processing $2^{9}$ blocks within 3 s on a single GPU.
Aditya Gulati, Yao-Ting Lin, Tomoyuki Morimae, Shogo Yamada
ePrint Report
Pseudorandom functions (PRFs) are one of the most fundamental primitives in classical cryptography. In quantum cryptography, on the other hand, it is possible that PRFs do not exist while their quantum analogues do, still enabling many applications including SKE, MACs, commitments, multiparty computation, and more. Pseudorandom unitaries (PRUs) [Ji, Liu, Song, Crypto 2018], pseudorandom isometries (PRIs) [Ananth, Gulati, Kaleoglu, Lin, Eurocrypt 2024], and pseudorandom function-like state generators (PRFSGs) [Ananth, Qian, Yuen, Crypto 2022] are major quantum analogues of PRFs. PRUs imply PRIs, and PRIs imply PRFSGs, but the converse implications remain unknown. An important open question is whether these natural quantum analogues of PRFs are equivalent. In this paper, we partially resolve this question by ruling out black-box constructions among them: \begin{enumerate} \item There are no black-box constructions of $O(\log\lambda)$-ancilla PRUs from PRFSGs. \item There are no black-box constructions of $O(\log\lambda)$-ancilla PRIs with $O(\log\lambda)$ stretch from PRFSGs. \item There are no black-box constructions of $O(\log\lambda)$-ancilla PRIs with $O(\log\lambda)$ stretch from PRIs with $\Omega(\lambda)$ stretch. \end{enumerate} Here, $O(\log\lambda)$-ancilla means that the generation algorithm uses at most $O(\log\lambda)$ ancilla qubits. A PRI with $s(\lambda)$ stretch maps $\lambda$ qubits to $\lambda+s(\lambda)$ qubits. To rule out the above black-box constructions, we construct a unitary oracle that separates them. For the separations, we construct an adversary based on the quantum singular value transformation, which may be of independent interest and should be useful for other oracle separations in quantum cryptography.
Samuel Bouaziz--Ermann, Minki Hhan, Garazi Muguruza, Quoc-Huy Vu
ePrint Report
There are various notions of quantum pseudorandomness, such as pseudorandom unitaries (PRUs), pseudorandom state generators (PRSGs), and pseudorandom function-like state generators (PRFSGs). Unlike the different notions of classical pseudorandomness, which are known to be existentially equivalent to each other, the relationship between quantum pseudorandomness has yet to be fully established.

We present some evidence suggesting that some quantum pseudorandomness is unlikely to be constructed from others, or at least is hard to construct unless some conjectures are false. This indicates that quantum pseudorandomness could behave quite differently from classical pseudorandomness. We study new oracle worlds where one form of quantum pseudorandomness exists but another does not, under certain assumptions or constraints, and we provide potential directions toward achieving full black-box separation. More precisely: - We give a unitary oracle relative to which PRFSGs exist, but PRUs without using ancilla do not. This can be extended to general PRUs if a structural property of the PRU algorithm can be proven. - Assuming a conjecture similar to an isoperimetric inequality, we show a unitary oracle world where log-length output PRFSGs exist, but proving the existence of quantum-computable pseudorandom generators (QPRGs) with negligible correctness error is as hard as proving that BQP ≠ QCMA. This result suggests that the inverse-polynomial error in the state-of-the-art construction of QPRGs from log-length PRSGs is inherent. - Assuming the same conjecture, we prove that some natural methods of constructing super-log-length output PRSGs from log-length output PRFSGs are impossible. This partly complements the known hardness of shrinking the PRSG output lengths. Along the way, we also discuss other potential approaches to extend the PRSG output lengths.

All our worlds are based on (variants of) oracles that output Haar-random quantum states for each bit string, which can be viewed as a quantum version of the random oracle model, where output strings are replaced by quantum states.

Our results highlight technical difficulties when dealing with ancillary registers, measurements, and adaptivity in the quantum setting. As one of our key technical tools, we show an intriguing gentle behavior of intermediate measurements in algorithms producing outcome states with high purity, which may be of independent interest.
Yiting Liu, Biming Zhou, Haodong Jiang
ePrint Report
In the post-quantum migration of traditional key establishment protocols, hybrid key encapsulation mechanisms (KEMs) are recommended by standards bodies, including NIST and ETSI, and by national security agencies such as NCSC-UK and BSI-Germany. Recently, several hybrid KEMs with CCA security, such as XOR-then-MAC, Dual-PRF and X-Wing (being standardized by the IETF), have been proposed based on CCA KEMs obtained by applying the complicated Fujisaki-Okamoto transform to public-key encryption (PKE) schemes. Some cryptographic protocols, such as PQ-Noise and Signal, require 1CCA security (similar to CCA security except that the adversary is restricted to a single decapsulation query). However, no scheme has been designed specifically to achieve 1CCA security (excluding schemes that aim for CCA security, which inherently encompasses 1CCA security).

In this paper, we propose CUKEM, a concise and unified hybrid KEM framework built directly on PKEs, and its variant CUKEM+, which achieves CCA security by replacing one PKE component with a nominal group. We prove that our schemes, equipped with different modules, achieve standard security notions in both the random oracle model and the quantum random oracle model, including IND-CPA, IND-1CCA, and IND-CCA. Compared to existing KEM-based constructions, CUKEM and CUKEM+ are more concise, as they simplify or even eliminate certain hash operations without compromising security. Our evaluation shows that the CCA-secure CUKEM+ achieves encapsulation and decapsulation speedups of up to 22.28% and 16.22%, respectively, over X-Wing, while the 1CCA-secure CUKEM attains gains of up to 13.97% and 104.31%.
Lewis Glabush, Patrick Longa, Michael Naehrig, Chris Peikert, Douglas Stebila, Fernando Virdia
ePrint Report
Large-scale quantum computers capable of implementing Shor's algorithm pose a significant threat to the security of the most widely used public-key cryptographic schemes. This risk has motivated substantial efforts by standards bodies and government agencies to identify and standardize quantum-safe cryptographic systems. Among the proposed solutions, lattice-based cryptography has emerged as the foundation for some of the most promising protocols. This paper describes FrodoKEM, a family of conservative key-encapsulation mechanisms (KEMs) whose security is based on generic, "unstructured" lattices. FrodoKEM is proposed as an alternative to the more efficient lattice schemes that utilize algebraically structured lattices, such as the recently standardized ML-KEM scheme. By relying on generic lattices, FrodoKEM minimizes the potential for future attacks that exploit algebraic structures while enabling simple and compact implementations. Our plain C implementations demonstrate that, despite its conservative design and parameterization, FrodoKEM remains practical. For instance, the full protocol at NIST security level 1 runs in approximately 0.97 ms on a server-class processor, and 4.98 ms on a smartphone-class processor. FrodoKEM obtains (single-target) IND-CCA security using a variant of the Fujisaki-Okamoto transform, applied to an underlying public-key encryption scheme called FrodoPKE. In addition, using a new tool called the Salted Fujisaki-Okamoto (SFO) transform, FrodoKEM is also shown to tightly achieve multi-target security, without increasing the FrodoPKE message length and with a negligible performance impact, based on the multi-target IND-CPA security of FrodoPKE.
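The Fujisaki-Okamoto transform mentioned here turns a CPA-secure PKE into a CCA-secure KEM by derandomizing encryption and re-encrypting during decapsulation. The sketch below shows only this control flow, including implicit rejection, over a deliberately insecure stand-in "PKE" (the public key equals the secret key; everything is illustrative, not FrodoPKE or the salted SFO variant):

```python
import hashlib
import os

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

# Trivially insecure stand-in "PKE": a hash-stream one-time pad.
# What matters for FO is only that encryption is deterministic once r is fixed.
def pke_enc(pk: bytes, m: bytes, r: bytes) -> bytes:
    pad = H(pk, r)
    return r + bytes(a ^ b for a, b in zip(m, pad))

def pke_dec(sk: bytes, c: bytes) -> bytes:
    r, body = c[:32], c[32:]
    pad = H(sk, r)                      # works because pk == sk in this toy
    return bytes(a ^ b for a, b in zip(body, pad))

def keygen():
    sk = os.urandom(32)
    return sk, sk                       # toy only: pk == sk

def encaps(pk: bytes):
    m = os.urandom(32)
    r = H(m, pk)                        # derandomization: r derived from m
    c = pke_enc(pk, m, r)
    return c, H(m, c)                   # ciphertext and session key

def decaps(sk: bytes, pk: bytes, z: bytes, c: bytes) -> bytes:
    m = pke_dec(sk, c)
    if pke_enc(pk, m, H(m, pk)) == c:   # FO re-encryption check
        return H(m, c)
    return H(z, c)                      # implicit rejection with secret seed z
```

Any tampering with the ciphertext makes the re-encryption check fail, so the decapsulator returns a pseudorandom key instead of leaking decryption behavior.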
Theophilus Agama
ePrint Report
We prove an extension of the lower bound due to Sch\"onhage on addition chains.
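For context, an addition chain for $n$ is a sequence $1 = a_0 < a_1 < \dots < a_r = n$ in which every term is the sum of two earlier terms, and Schönhage's classical bound states $\ell(n) \ge \log_2 n + \log_2 \nu(n) - 2.13$, where $\nu(n)$ is the binary weight of $n$. A brute-force toy for comparing the bound against exact small values (our own illustration, unrelated to the paper's proof):

```python
from math import log2

def shortest_chain_length(n: int) -> int:
    # iterative-deepening search over strictly increasing addition chains
    def dfs(chain, depth):
        last = chain[-1]
        if last == n:
            return True
        if depth == 0 or last << depth < n:   # even repeated doubling can't reach n
            return False
        sums = sorted({a + b for a in chain for b in chain
                       if last < a + b <= n}, reverse=True)
        return any(dfs(chain + [s], depth - 1) for s in sums)
    r = 0
    while not dfs([1], r):
        r += 1
    return r

def schonhage_bound(n: int) -> float:
    # l(n) >= log2(n) + log2(nu(n)) - 2.13, with nu(n) the binary weight
    return log2(n) + log2(bin(n).count("1")) - 2.13

assert shortest_chain_length(15) == 5      # e.g. 1, 2, 4, 5, 10, 15
assert shortest_chain_length(15) >= schonhage_bound(15)
```

For $n = 15$ the bound gives about $3.78$ while the true length is $5$, showing the slack such lower bounds can leave even for tiny inputs.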
Pierrick Dartois, Jonathan Komada Eriksen, Riccardo Invernizzi, Frederik Vercauteren
ePrint Report
In this paper, we revisit the recent PEGASIS algorithm that computes an effective group action of the class group of any imaginary quadratic order $R$ on a set of supersingular elliptic curves primitively oriented by $R$. Although PEGASIS was the first algorithm showing the practicality of computing unrestricted class group actions at higher security levels, it is complicated and prone to failures, which leads to many rerandomizations.

In this work, we present a new algorithm, qt-Pegasis, which is much simpler, but at the same time faster and removes the need for rerandomization of the ideal we want to act with, since it never fails. It leverages the main technique of the recent qlapoti approach. However, qlapoti solves a norm equation in a quaternion algebra, which corresponds to the full endomorphism ring of a supersingular elliptic curve. We show that the algorithm still applies in the quadratic setting, by embedding the quadratic ideal into a quaternion ideal using a technique similar to the one applied in KLaPoTi. This way, we can reinterpret the output of qlapoti as four equivalent quadratic ideals, instead of two equivalent quaternion ideals. We then show how to construct a Clapoti-like diagram in dimension $2$, which embeds the action of the ideal in a $4$-dimensional isogeny.

We implemented our qt-Pegasis algorithm in SageMath for the CSURF group action, and we achieve a speedup over PEGASIS of $1.8\times$ for the 500-bit parameters and $2.6\times$ for the 4000-bit parameters.