International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

08 October 2025

Seunghyun Cho, Eunyoung Seo, Young-Sik Kim
ePrint Report
We propose Locally Recoverable Data Availability Sampling (LR-DAS), which upgrades binary, threshold-based availability to graded verification by leveraging optimal locally recoverable codes (e.g., Tamo-Barg). Local groups of size $r+\alpha$ serve as atomic certification units: once $r$ verified openings fix a degree-…
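In an LRC with locality $r$, the symbols of a local group lie on a polynomial of degree less than $r$, so any $r$ verified openings determine the remaining $\alpha$ symbols by interpolation. A minimal sketch of this local-recovery step, with a toy prime field and toy group size (hypothetical parameters; the paper's Tamo-Barg instantiation carries more structure):

```python
# Local recovery inside one LR-DAS group: with locality r, the r + alpha
# symbols of a local group lie on a polynomial of degree < r, so any r
# verified openings determine the rest by Lagrange interpolation.
P = 2**31 - 1   # toy prime field; real deployments use a much larger field

def lagrange_eval(points, x):
    """Evaluate the unique degree-<len(points) polynomial through `points` at x."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# A local group of size r + alpha = 5 with r = 3: symbols at positions 0..4.
poly = lambda x: (7 + 5 * x + 11 * x * x) % P    # hidden degree-(r-1) polynomial
opened = [(x, poly(x)) for x in (0, 2, 4)]       # r verified openings
for x in (1, 3):                                 # recover the unopened symbols
    assert lagrange_eval(opened, x) == poly(x)
```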
Benedikt Bünz, Jessica Chen, Zachary DeStefano
ePrint Report
Permutation and lookup arguments are at the core of most deployed SNARK protocols today. Most modern techniques for performing them require a grand product check, which in turn requires either committing to large field elements (e.g., in Plonk) or using GKR (e.g., in Spartan), which has worse verifier cost and proof size. Sadly, both approaches have a soundness error that grows linearly with the input size.
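For context, a grand product check reduces the claim "$g$ is a permutation of $f$" to a single product identity over random challenges. A minimal sketch over a toy prime field (hypothetical parameters; this illustrates the classical check the paper improves upon, not the paper's new arguments):

```python
import random

# Grand product check: g is f permuted by sigma iff, w.h.p. over random
# beta and gamma,
#   prod_i (f_i + beta*i + gamma) == prod_i (g_i + beta*sigma(i) + gamma),
# with soundness error on the order of n / |F| -- the linear loss noted above.
P = 2**61 - 1   # toy prime field

def grand_product(values, positions, beta, gamma):
    out = 1
    for v, pos in zip(values, positions):
        out = out * ((v + beta * pos + gamma) % P) % P
    return out

n = 8
f = [random.randrange(P) for _ in range(n)]
sigma = random.sample(range(n), n)        # the claimed permutation
g = [f[sigma[i]] for i in range(n)]       # g really is f permuted by sigma

beta, gamma = random.randrange(P), random.randrange(P)
lhs = grand_product(f, range(n), beta, gamma)
rhs = grand_product(g, sigma, beta, gamma)
assert lhs == rhs   # a mismatched pair would fail with overwhelming probability
```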

We present two permutation arguments that have $\text{polylog}(n)/|\mathbb{F}|$ soundness error: for a reasonable input size $n=2^{32}$ and field size $|\mathbb{F}|=2^{128}$, the soundness error improves significantly, from $2^{-96}$ to $2^{-120}$. Moreover, the arguments achieve $\log(n)$ verification cost and proof size without ever needing to commit to anything beyond the witness. $\mathsf{BiPerm}$ only requires the prover to perform $O(n)$ field operations on top of committing to the witness, but at the cost of limiting the choice of PCS. We show a stronger construction, $\mathsf{MulPerm}$, which places no restriction on the PCS choice and whose prover performs an essentially linear number of field operations, $n\cdot \tilde O(\sqrt{\log(n)})$.

Our permutation arguments generalize to lookups. We demonstrate how our arguments can be used to improve SNARK systems such as HyperPlonk and Spartan, and build a GKR-based protocol for proving non-uniform circuits.
Kunming Jiang, Fraser Brown, Riad S. Wahby
ePrint Report
General-purpose probabilistic proof systems operate on programs expressed as systems of arithmetic constraints—an unfriendly representation. There are two broad approaches in the literature to turning friendlier, high-level programs into constraints suitable for proof systems: direct translation and CPU emulation. Direct translators compile a program into highly optimized constraints; unfortunately, this process requires expressing all possible paths through the program, which results in compile times that scale with the program’s runtime rather than its size. In addition, the prover must pay the cost of every possible program path, even those untaken by a given input. In contrast, CPU emulators don’t compile programs to constraints; instead, they “execute” those programs, expressed as CPU instructions, on a CPU emulator that is itself expressed as constraints. As a result, this approach can’t perform powerful, program-specific optimizations, and may require thousands of constraints where direct translation could use a clever handful. Worse, CPU emulators inherit an impractically expensive program state representation from the CPUs they emulate. This paper presents a compiler and proof system, CoBBl, that combines the benefits of CPU emulation and direct translation: it takes advantage of program-specific optimizations, but doesn’t pay for an unnecessary state representation or unexecuted computation. CoBBl outperforms CirC, a state-of-the-art direct translator, by $1–30\times$ on compile time and $26–350\times$ on prover time, and outperforms Jolt, a state-of-the-art CPU emulator, on prover time by $1.1–1.8\times$ on Jolt-friendly benchmarks, and up to $100\times$ on other benchmarks.
Anindya Ganguly, Angshuman Karmakar, Suparna Kundu, Debranjan Pal, Sumanta Sarkar
ePrint Report
Blind signatures (BS) allow a signer to produce a valid signature on a message without learning the message itself. They have niche applications in privacy-preserving protocols such as digital cash and electronic voting. Non-interactive blind signatures (NIBS) remove the need for interaction between the signer and the user. In the post-quantum era, lattice-based NIBS schemes are studied as candidates for long-term security.

In Asiacrypt 2024, Baldimtsi et al. proposed the first lattice-based NIBS construction, whose security relies on the random one-more inhomogeneous short integer solution (rOM-ISIS) assumption, which is considered non-standard. Later, Zhang et al. introduced another lattice-based construction in ProvSec 2024 and proved its security under the standard module short integer solution (MSIS) assumption. We analyse the security of the latter scheme. In the random oracle model, we show that it fails to achieve both nonce blindness and receiver blindness. We present explicit attacks where an adversary breaks both properties with probability 1. Our attack is based on a crucial observation that uncovers a flaw in the design: the flaw allows an attacker to link a message-signature pair with its presignature-nonce pair. In addition, we also identify a flaw in the unforgeability proof. Finally, we suggest a modification to address the issue, which is similar to the construction of Baldimtsi et al., and whose security again relies on the non-standard rOM-ISIS assumption. This work again raises the question of the feasibility of achieving NIBS from standard assumptions.
Konrad Hanff, Anja Lehmann, Cavit Özbay
ePrint Report
Privacy Pass is an anonymous authentication protocol initially designed by Davidson et al. (PETS’18) to reduce the number of CAPTCHAs that Tor users must solve. It issues single-use authentication tokens with anonymous and unlinkable redemption guarantees. The issuer and verifier of the protocol share a symmetric key, and tokens are privately verifiable. The protocol has sparked interest from both academia and industry, leading to an Internet Engineering Task Force (IETF) standard. While Davidson et al. formally analyzed the original protocol, the IETF standard introduces several changes to it; thus, the standardized version’s formal security has remained unexamined. We fill this gap by analyzing the IETF standard’s privately verifiable Privacy Pass protocol.

In particular, there are two main discrepancies between the analyzed and standardized versions. First, the IETF version introduces a redemption context, which can be used to blindly embed a validity period into Privacy Pass tokens. We show that this variant differs significantly from the public metadata extension proposed for the same purpose in the literature: the redemption context offers better privacy and security than public metadata. We capture both stronger guarantees through game-based security definitions and show that the currently considered one-more unforgeability notion for Privacy Pass is insufficient when a redemption context is used. Thus, we propose a new property, targeted context unforgeability, and prove its incomparability to one-more unforgeability. Second, Davidson et al. focused on a concrete Diffie-Hellman-based construction, whereas the IETF version is built generically from a verifiable oblivious pseudorandom function (VOPRF). Further, the analyzed protocol omitted the full redemption phase needed to prevent double-spending. We prove that the generic IETF construction satisfies the desired security and privacy guarantees covering the full life-cycle of tokens. Our analysis relies on natural security properties of VOPRFs, providing compatibility with any secure VOPRF instantiation. This enables crypto agility, e.g., allowing a switch to efficient quantum-safe VOPRFs when they become available.
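At the core of privately verifiable Privacy Pass is an OPRF evaluation on a blinded token: the client blinds a hashed input, the issuer exponentiates with its secret key, and the client unblinds. A minimal sketch in a toy safe-prime group (hypothetical parameters; the IETF standard instantiates this over elliptic-curve groups and adds a DLEQ proof for verifiability):

```python
import hashlib
import secrets

# Toy OPRF core of privately verifiable Privacy Pass. Hypothetical toy group
# p = 2q + 1, both prime; the standard uses elliptic curves and a DLEQ proof
# so the client can verify the issuer used the right key.
p, q = 1019, 509

def hash_to_group(msg: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % p
    return pow(h, 2, p)          # squaring lands in the order-q subgroup

k = secrets.randbelow(q - 1) + 1                     # issuer's OPRF key

# Issuance: the client blinds, the issuer evaluates, the client unblinds.
token_input = b"client nonce"
r = secrets.randbelow(q - 1) + 1                     # client's blinding scalar
blinded = pow(hash_to_group(token_input), r, p)      # client -> issuer
evaluated = pow(blinded, k, p)                       # issuer -> client
token = pow(evaluated, pow(r, -1, q), p)             # client unblinds

# Redemption: the issuer/verifier, holding k, recomputes and compares.
assert token == pow(hash_to_group(token_input), k, p)
```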
Barbara Jiabao Benedikt, Marc Fischlin
ePrint Report
Fiat-Shamir signatures replace the challenge in interactive identification schemes by a hash value over the commitment, the message, and possibly the signer's public key. This construction paradigm is well known and widely used in cryptography, for example, for Schnorr signatures and CRYSTALS-Dilithium. There is no "general recipe" for constructing Fiat-Shamir signatures regarding the inputs and their order for the hash computation, though, since the hash function is usually modeled as a monolithic random oracle. In practice, however, the hash function is based on the Merkle-Damgård or the sponge design.

Our work investigates whether there are advisable or imprudent input orders for hashing in Fiat-Shamir signatures. We examine Fiat-Shamir signatures with plain and nested hashing using Merkle-Damgård or sponge-based hash functions. We analyze these constructions in both classical and quantum settings. As part of our investigations, we introduce new security properties following the idea of quantum-annoyance of Eaton and Stebila (PQCrypto 2021), called annoyance for user exposure and signature forgeries. These properties ensure that an adversary against the hash function cannot gain a significant advantage when attempting to extend a successful attack on a single signature forgery to multiple users or to multiple forgeries of a single user. Instead, the adversary must create extra forgeries from scratch. Based on our analysis, we derive a simple rule: When using Fiat-Shamir signatures, one should hash the commitment before the message; all other inputs may be ordered arbitrarily.
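To make the rule concrete, here is a toy Schnorr-style Fiat-Shamir signature that hashes in the recommended order, commitment before message (hypothetical toy parameters; not the paper's construction):

```python
import hashlib
import secrets

# Toy Schnorr signature via Fiat-Shamir, hashing the commitment before the
# message as recommended above (hypothetical toy group: p = 2q + 1, and
# g = 4 generates the order-q subgroup; real schemes use much larger groups).
p, q, g = 1019, 509, 4

def H(*parts: bytes) -> int:
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return int.from_bytes(h.digest(), "big") % q

sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)

def sign(msg: bytes):
    k = secrets.randbelow(q - 1) + 1
    commitment = pow(g, k, p)                      # hashed FIRST, per the rule
    c = H(commitment.to_bytes(2, "big"), msg, pk.to_bytes(2, "big"))
    return commitment, (k + c * sk) % q

def verify(msg: bytes, sig) -> bool:
    commitment, s = sig
    c = H(commitment.to_bytes(2, "big"), msg, pk.to_bytes(2, "big"))
    return pow(g, s, p) == commitment * pow(pk, c, p) % p

assert verify(b"hello", sign(b"hello"))
```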
Ganyuan Cao, Sylvain Chatel, Christian Knabenhans
ePrint Report
On-the-fly multi-party computation (MPC), introduced by López-Alt, Tromer, and Vaikuntanathan (STOC 2012), enables clients to dynamically join a computation without remaining continuously online. Yet, the original proposal suffers from substantial efficiency and expressivity limitations that hinder practical deployment. Even though various techniques have been proposed to mitigate these shortcomings, treating on-the-fly MPC as a combination of independent building blocks jeopardizes the security of the original model.

Thus, we revisit on-the-fly MPC in light of recent advances and extend its formal framework to incorporate efficiency and expressivity improvements. Our approach is built around multi-group homomorphic encryption (MGHE), which generalizes threshold and multi-key HE and serves as the core primitive for on-the-fly MPC. Our contributions are fourfold: i) We propose new security notions for MGHE (e.g., IND-CPA with partial decryption, circuit privacy) and justify their suitability for on-the-fly MPC. ii) We present the first ideal functionality for MGHE in the Universal Composability (UC) framework and characterize the conditions under which it can be realized, via reductions to our proposed security notions. iii) We present a generic protocol that securely realizes our on-the-fly MPC functionality against a semi-malicious adversary from our MGHE functionality. iv) Finally, we provide two generic compilers that lift these protocols to withstand a fully malicious adversary by leveraging zero-knowledge arguments.

Our analysis in the UC framework enables modular protocol design: more efficient schemes can be seamlessly substituted, as long as they meet the security requirements defined by the functionalities, while retaining the guarantees offered by the original construction.
Jonas Janneck
ePrint Report
Following the announcement of the first winners of the NIST post-quantum cryptography standardization process in 2022, cryptographic protocols are now undergoing migration to the newly standardized schemes. In most cases, this transition is realized through a hybrid approach, in which algorithms based on classical hardness assumptions, such as the discrete logarithm problem, are combined with post-quantum algorithms that rely on quantum-resistant assumptions, such as the Short Integer Solution (SIS) problem.

A combiner for signature schemes can be obtained by simply concatenating the signatures of both schemes. This construction preserves unforgeability of the underlying schemes; however, it does not extend to stronger notions, such as strong unforgeability. Several applications, including authenticated key exchange and secure messaging, inherently require strong unforgeability, yet no existing combiner is known to achieve this property.

This work introduces three practical combiners that preserve strong unforgeability and all BUFF (beyond unforgeability features) properties. Each combiner is tailored to a specific class of classical signature schemes, capturing all broadly used schemes that are strongly unforgeable. Remarkably, all combiners can be instantiated with any post-quantum signature scheme in a black-box way, making deployment practical and significantly less error-prone. The proposed solutions are furthermore highly efficient, and their signatures are at most the size of the (insecure) concatenation combiner's. For instance, our most efficient combiner enables the combination of EdDSA with ML-DSA, yielding a signature size that is smaller than the sum of an individual EdDSA signature and an individual ML-DSA signature. Additionally, we identify a novel signature property that we call random-message validity and show that it can be used to replace the BUFF transform with the more efficient Pornin-Stern transform. This notion may be of independent interest.
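For reference, the baseline concatenation combiner the paper improves upon can be stated in a few lines; a minimal sketch using two Ed25519 instances from the `cryptography` package as stand-ins (an assumption, since no ML-DSA binding ships with the Python standard library):

```python
# Baseline concatenation combiner: sign with both schemes, verify both.
# Forging the pair requires forging both signatures, so plain unforgeability
# is preserved as long as one scheme holds up; *strong* unforgeability is
# not, since mauling either component signature mauls the combined one.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def keygen():
    sk1 = ed25519.Ed25519PrivateKey.generate()   # classical slot
    sk2 = ed25519.Ed25519PrivateKey.generate()   # stand-in for the PQ slot
    return (sk1.public_key(), sk2.public_key()), (sk1, sk2)

def sign(sk, msg: bytes):
    sk1, sk2 = sk
    return sk1.sign(msg), sk2.sign(msg)          # concatenated signature

def verify(pk, msg: bytes, sig) -> bool:
    (pk1, pk2), (sig1, sig2) = pk, sig
    try:
        pk1.verify(sig1, msg)                    # both components must verify
        pk2.verify(sig2, msg)
        return True
    except InvalidSignature:
        return False

pk, sk = keygen()
assert verify(pk, b"msg", sign(sk, b"msg"))
```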
Barbara Jiabao Benedikt, Sebastian Clermon, Marc Fischlin, Tobias Schmalz
ePrint Report
Signal's handshake protocol non-interactively generates a shared key between two parties for secure communication. The underlying protocol X3DH, on which the post-quantum hybrid successor, PQXDH, builds, computes three to four individual Diffie-Hellman (DH) keys by combining the long-term identity keys and the ephemeral secrets of the two parties. Each of these DH operations serves a different purpose, either to authenticate the derived key or to provide forward secrecy.

We present here an improved protocol for X3DH, which we call MuDH, and an improved protocol for PQXDH, pq-MuDH. Instead of computing the individual DH keys, we run a single multi-valued DH operation for integrating all contributions simultaneously into a single DH key. Our approach is based on techniques from batch verification (Bellare et al., Eurocrypt 1998), where one randomizes each contribution of the individual keys to get a secure scheme.
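The batch-verification idea can be illustrated in a toy group: rather than hashing each $B_i^{a_i}$ separately, both parties derive one key from the randomized product $\prod_i B_i^{r_i a_i}$, computable as a single multi-exponentiation. A minimal sketch (hypothetical toy parameters; the real MuDH fixes how the randomizers are derived and runs over Signal's actual DH groups):

```python
import hashlib
import secrets

# Toy multi-valued DH: fold n pairwise DH keys into one combined key
# K = prod_i B_i^(r_i * a_i) instead of hashing each B_i^(a_i) separately.
# Hypothetical toy group p = 2q + 1 with g = 4 generating the order-q subgroup.
p, q, g = 1019, 509, 4
n = 3                                                   # e.g. identity + ephemeral contributions

a = [secrets.randbelow(q - 1) + 1 for _ in range(n)]    # Alice's secrets
b = [secrets.randbelow(q - 1) + 1 for _ in range(n)]    # Bob's secrets
A = [pow(g, x, p) for x in a]                           # Alice's public keys
B = [pow(g, x, p) for x in b]                           # Bob's public keys
r = [secrets.randbelow(q - 1) + 1 for _ in range(n)]    # public randomizers

def combined_key(peer_pubs, own_secrets, rand):
    k = 1
    for pub, sec, ri in zip(peer_pubs, own_secrets, rand):
        k = k * pow(pub, ri * sec % q, p) % p           # one multi-exponentiation in practice
    return hashlib.sha256(k.to_bytes(2, "big")).digest()

# Both sides compute prod_i g^(r_i * a_i * b_i) and so agree on the key.
assert combined_key(B, a, r) == combined_key(A, b, r)
```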

The solution results in execution times of roughly 60% of the original protocol, both in theory and in our benchmarks on mobile and desktop platforms. Our modifications are confined to the key derivation step; both Signal’s server infrastructure for public key retrieval and the message flow remain unchanged.
Fuyuki Kitagawa, Ryo Nishimaki, Nikhil Pappu
ePrint Report
Secure key leasing (SKL) enables the holder of a secret key for a cryptographic function to temporarily lease the key using quantum information. Later, the recipient can produce a deletion certificate—a proof that they no longer have access to the secret key. The security guarantee ensures that even a malicious recipient cannot continue to evaluate the function after producing a valid deletion certificate.

Most prior work considers an adversarial recipient that obtains a single leased key, which is insufficient for many applications. In the more realistic collusion-resistant setting, security must hold even when polynomially many keys are leased (and subsequently deleted). However, achieving collusion-resistant SKL from standard assumptions remains poorly understood, especially for functionalities beyond decryption.

We improve upon this situation by introducing new pathways for constructing collusion-resistant SKL. Our main contributions are as follows:

- A generalization of quantum-secure collusion-resistant traitor tracing called multi-level traitor tracing (MLTT), and a compiler that transforms an MLTT scheme for a primitive $X$ into a collusion-resistant SKL scheme for primitive $X$.

- The first bounded collusion-resistant SKL scheme for PRFs, assuming LWE.

- A compiler that upgrades any single-key secure SKL scheme for digital signatures into one with unbounded collusion-resistance, assuming OWFs.

- A compiler that upgrades collusion-resistant SKL schemes with classical certificates to ones having verification-query resilience, assuming OWFs.
Xinyu Zhang, Ziyi Li, Ron Steinfeld, Raymond K. Zhao, Joseph K. Liu, Tsz Hon Yuen
ePrint Report
In this work, we present a novel commit-and-open $\Sigma$-protocol based on the Legendre and power residue PRFs. Our construction leverages the oblivious linear evaluation (OLE) correlations inherent in PRF evaluations and requires only black-box access to a tree-PRG-based vector commitment. By applying the standard Fiat-Shamir transform, we obtain a post-quantum signature scheme, Pegasus, which achieves short signature sizes (6025 to 7878 bytes) with efficient signing (3.910 to 19.438 ms) and verification times (3.942 to 18.999 ms). Furthermore, by pre-computing the commitment phase, the online response time can be reduced to as little as 0.047 to 0.721 ms. We prove the security of Pegasus in both the classical random oracle model (ROM) and the quantum random oracle model (QROM), filling a gap left by prior PRF-based signature schemes.
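The Legendre PRF at the heart of the construction is easy to state: $L_K(x)$ is the Legendre symbol of $K+x$ modulo a prime $p$, mapped to a bit via Euler's criterion. A minimal sketch with a toy prime (hypothetical parameters):

```python
import secrets

# Legendre PRF: L_K(x) = (Legendre symbol of (K + x) mod p), mapped to a bit.
# Toy prime for illustration; deployed parameters use a much larger p.
p = 2**31 - 1   # a Mersenne prime

def legendre_prf(key: int, x: int) -> int:
    s = pow((key + x) % p, (p - 1) // 2, p)   # Euler's criterion: 1, p-1, or 0
    return 0 if s == 1 else 1                 # quadratic residue -> 0, else 1

K = secrets.randbelow(p)
stream = [legendre_prf(K, x) for x in range(16)]   # 16 keystream bits
```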

We further develop a ring signature scheme, PegaRing, that preserves the three-move commit-and-open structure of Pegasus. Compared to the previous PRF-based ring signature DualRing-PRF (ACISP 2024), PegaRing reduces the constant communication overhead by more than half and achieves significantly faster signing and verification. For a ring size of 1024, PegaRing yields signatures of 29 to 32 KB, with signing times of 8 to 44 ms and verification times of 6 to 31 ms, depending on the parameters. Finally, we prove the security of PegaRing in both the ROM and the QROM; to the best of our knowledge, it is the first ring signature based on symmetric-key primitives with practical performance and provable QROM security.
Tomoyuki Morimae, Yuki Shirakawa, Takashi Yamakawa
ePrint Report
One-way puzzles (OWPuzzs), introduced by Khurana and Tomer [STOC 2024], are a natural quantum analogue of one-way functions (OWFs) and one of the most fundamental primitives in "Microcrypt", where OWFs do not exist but quantum cryptography is possible. OWPuzzs are implied by almost all quantum cryptographic primitives, and imply several important applications such as non-interactive commitments and multi-party computation. A significant goal in the field of quantum cryptography is to base OWPuzzs on plausible assumptions that will not imply OWFs.

In this paper, we base OWPuzzs on the hardness of non-collapsing measurements. To that end, we introduce a new complexity class, $\mathbf{SampPDQP}$, a sampling version of the decision class $\mathbf{PDQP}$ introduced in [Aaronson, Bouland, Fitzsimons, and Lee, ITCS 2016]. We show that if $\mathbf{SampPDQP}$ is hard on average for quantum polynomial time, then OWPuzzs exist. We also show that if $\mathbf{SampPDQP}\not\subseteq\mathbf{SampBQP}$, then auxiliary-input OWPuzzs exist. $\mathbf{SampPDQP}$ is the class of sampling problems that can be solved by a classical polynomial-time deterministic algorithm that can make a single query to a non-collapsing measurement oracle, a "magical" oracle that can sample measurement results on quantum states without collapsing the states. Such non-collapsing measurements are highly unphysical operations that should be hard to realize in quantum polynomial time, so the assumptions on which we base OWPuzzs seem extremely plausible. Moreover, our assumptions do not seem to imply OWFs, because the ability to invert classical functions would not help realize quantum non-collapsing measurements.

We also study upper bounds on the hardness of $\mathbf{SampPDQP}$. We introduce a new primitive, distributional collision-resistant puzzles (dCRPuzzs), a natural quantum analogue of distributional collision-resistant hashing [Dubrov and Ishai, STOC 2006]. We show that dCRPuzzs imply average-case hardness of $\mathbf{SampPDQP}$ (and therefore OWPuzzs as well). We also show that two-message honest-statistically-hiding commitments with classical communication, as well as one-shot message authentication codes (MACs), a privately-verifiable version of one-shot signatures [Amos, Georgiou, Kiayias, Zhandry, STOC 2020], imply dCRPuzzs.
Supriya Adhikary, Puja Mondal, Angshuman Karmakar
ePrint Report
Zero-knowledge succinct arguments of knowledge (zkSNARKs) provide short privacy-preserving proofs for general NP problems. Public verifiability of zkSNARK protocols is a desirable property: once generated, the proof can be verified by anyone. Designated-verifier zero-knowledge is useful when only one or a few individuals should have access to the verification result. Existing zkSNARK schemes are either fully publicly verifiable or verifiable only by a designated verifier holding a secret verification key.

In this work, we propose a new notion of a hybrid verification mechanism. Here, the prover generates a proof that can be verified by a designated verifier. For this proof, the designated verifier can generate auxiliary information with its secret key. The combination of the proof and the auxiliary information allows any public verifier to verify the proof without any other information. We also introduce the necessary security notions and mechanisms to identify a cheating designated verifier or a cheating prover. Our hybrid-verification zkSNARK construction is based on module lattices and adapts the zkSNARK construction of Ishai et al. (CCS 2021). In this construction, the designated verifier is required only once after proof generation, to create the publicly verifiable proof. Our construction achieves a small constant-size proof and fast verification time, linear in the statement size.
Puja Mondal, Suparna Kundu, Hikaru Nishiyama, Supriya Adhikary, Daisuke Fujimoto, Yuichi Hayashi, Angshuman Karmakar
ePrint Report
LESS is a digital signature scheme currently in the second round of the National Institute of Standards and Technology's (NIST's) ongoing additional call for post-quantum standardization. LESS is designed from a zero-knowledge identification scheme via the Fiat-Shamir transformation, and its security is based on the hardness of the linear equivalence problem. The designers updated the scheme in the second round to improve efficiency and signature size. As seen before in the NIST standardization process, the analysis of physical attacks such as side-channel and fault-injection attacks is considered nearly as important as mathematical security. In recent years, several such works have targeted LESS version 1 (LESS-v1). Among these, the work by Mondal et al. that appeared in Asiacrypt 2024 is the most notable: it exposed several attack surfaces in LESS-v1 that can be exploited using different fault attacks. However, the implementation of LESS version 2 (LESS-v2) has not yet been explored in this direction.

In this work, we analyze the new structure of the LESS-v2 signature scheme and propose a signature-forgery process. These techniques do not require the full signing key, but only some other secret-related information. Our attacks uncover multiple attack surfaces in the design of LESS-v2 from which this secret-related information can be recovered. We assume an adversary can use interrupted-execution techniques, such as fault attacks, to recover this extra information. We analyze the average number of faults required under two particular fault models to recover the secret equivalent component and observe that only $1$ faulted signature is needed for most parameter sets. Our attacks rely on very simple and standard fault models, and we demonstrate them using both simulation and a simple experimental setup.
Minki Hhan, Tomoyuki Morimae, Yasuaki Okinaka, Takashi Yamakawa
ePrint Report
With the rapid advances in quantum computer architectures and the emerging prospect of large-scale quantum memory, it is becoming essential to classically verify that remote devices genuinely allocate the promised quantum memory with a specified number of qubits and coherence time. In this paper, we introduce a new concept, proofs of quantum memory (PoQM). A PoQM is an interactive protocol, over a classical channel, between a classical probabilistic polynomial-time (PPT) verifier and a quantum polynomial-time (QPT) prover, in which the verifier can verify that the prover has possessed a quantum memory with a certain number of qubits during a specified period of time. PoQM generalize the notion of proofs of quantumness (PoQ) [Brakerski, Christiano, Mahadev, Vazirani, and Vidick, JACM 2021]. Our main contributions are a formal definition of PoQM and constructions of it based on the hardness of LWE. Specifically, we give two constructions of PoQM. The first is a four-round protocol with negligible soundness error under the subexponential hardness of LWE. The second is a polynomial-round protocol with inverse-polynomial soundness error under the polynomial hardness of LWE. As a lower bound, we also show that PoQM imply one-way puzzles. Moreover, a certain restricted version of PoQM implies quantum computation classical communication (QCCC) key exchange.
Yang Liu, Zhen Shi, Chenhui Jin, Jiyan Zhang, Ting Cui, Dengguo Feng
ePrint Report
LOL (League of Legends) is a general framework for designing blockwise stream ciphers to achieve higher software performance in 5G/6G. Following this framework, the stream ciphers LOL-MINI and LOL-DOUBLE were proposed, each with a 256-bit key. This paper focuses on analyzing their security against correlation attacks. We make an observation on constructing linear approximation trails: repeated linear approximations in trails may not only increase the number of active S-boxes but also make the correlations of the corresponding approximations zero. We propose a new rule to address this situation and develop multiple correlation attacks. Furthermore, we use theoretical cryptanalysis to lower the time complexity of aggregation and to construct multiple linear approximations efficiently. For LOL-MINI, we construct approximately $2^{32.17}$ linear approximations with absolute correlations ranging from $2^{-118.38}$ to $2^{-127}$. We launch the developed multiple correlation attack on LOL-MINI with a time complexity of $2^{250.56}$, a data complexity of $2^{250}$, a memory complexity of $2^{250}$, and a success probability of 99.4\%. For LOL-DOUBLE, there similarly exist approximately $2^{22.86}$ linear approximations with absolute correlations ranging from $2^{-147.23}$ to $2^{-156}$. It is worth noting that our results do not violate the security claims of LOL-MINI, since the maximum keystream length for a single key-IV pair is less than $2^{64}$.
Sabine Oechsner, Vitor Pereira, Peter Scholl
ePrint Report
Computer-aided cryptography, with particular emphasis on formal verification, promises an interesting avenue to establish strong guarantees about cryptographic primitives. The appeal of formal verification is to replace error-prone pen-and-paper proofs with a proof that was checked by a computer and, therefore, does not need to be checked by a human. In this paper, we ask how reliable these machine-checked proofs are by analyzing a formally verified implementation of the Line-Point Zero-Knowledge (LPZK) protocol (Dittmer, Eldefrawy, Graham-Lengrand, Lu, Ostrovsky and Pereira, CCS 2023). The implementation was developed in EasyCrypt and compiled into OCaml code that was claimed to be high-assurance, i.e., to offer formal guarantees of completeness, soundness, and zero knowledge.

We show that despite these formal claims, the EasyCrypt model was flawed, and the supposedly high-assurance implementation had critical security vulnerabilities. Concretely, we demonstrate that: 1) the EasyCrypt soundness proof was done incorrectly, allowing an attack on the scheme that leads honest verifiers into accepting false statements; 2) the EasyCrypt formalization inherited a deficient model of zero knowledge for a class of non-interactive zero-knowledge protocols that also allows the verifier to recover the witness; and 3) there is a gap in the proof of the perfect zero-knowledge property of the LPZK variant of Dittmer, Ishai, Lu and Ostrovsky (CCS 2022), on which the EasyCrypt proof is based, which, depending on the interpretation of the protocol and security claim, could allow a malicious verifier to learn the witness.

Our findings highlight the importance of scrutinizing machine-checked proofs, including their models and assumptions. We offer lessons learned for both users and reviewers of tools like EasyCrypt, aimed at improving the transparency, rigor, and accessibility of machine-checked proofs. By sharing our methodology and challenges, we hope to foster a culture of deeper engagement with formal verification in the cryptographic community.
Zhenkai Hu, Haofei Liang, Xiao Wang, Xiang Xie, Kang Yang, Yu Yu, Wenhao Zhang
ePrint Report
Threshold fully homomorphic encryption (ThFHE) enables multiple parties to perform arbitrary computation over encrypted data while the secret key is distributed across the parties. The main task in designing ThFHE is to construct threshold key-generation and decryption protocols for FHE schemes. Among existing FHE schemes, FHEW-like cryptosystems enjoy the advantages of fast bootstrapping and small parameters. However, known ThFHE solutions use the "noise-flooding" technique to realize threshold decryption, which requires either large parameters or switching to a scheme with large parameters via bootstrapping, leading to a slow decryption process. Besides, for key generation, existing ThFHE schemes either assume a generic MPC or a trusted setup, or incur noise growth that is linear in the number $n$ of parties.

In this paper, we propose a fast ThFHE scheme, Ajax, by designing threshold key-generation and decryption protocols for FHEW-like cryptosystems. In particular, for threshold decryption, we eliminate the need for noise flooding and instead present a new technique called "mask-then-open", based on random double sharings over different rings, while keeping the advantage of small parameters. For threshold key generation, we show a simple approach to reduce the noise growth from $n$ times to $\max(0.038n, 2)$ times in the honest-majority setting, where at most $t=\lfloor(n-1)/2\rfloor$ parties are corrupted.

Our end-to-end implementation reports running times of 17.6 s for generating a set of keys and 0.9 ms for decrypting a single ciphertext (resp., 91.9 s and 4.4 ms) for $n=3$ (resp., $n=21$) parties, under a network with 1 Gbps bandwidth and 1 ms ping time. Compared to the state-of-the-art implementation, our protocol improves the end-to-end performance of the threshold decryption protocol by factors of $5.7\times$ to $283.6\times$ across different network latencies, from $t=1$ to $t=13$. Our approaches can also be applied to other types of FHE schemes such as BGV, BFV, and CKKS. In addition, we show how to strengthen our protocols to guarantee security against malicious adversaries.
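For context, the noise-flooding baseline that mask-then-open replaces works by having each party drown its secret-dependent partial decryption in large fresh noise. A minimal $n$-out-of-$n$ sketch in a toy LWE-style setting (hypothetical toy parameters; this illustrates the baseline technique, not the Ajax protocol):

```python
import secrets

# Toy n-out-of-n threshold decryption with noise flooding (the baseline).
# Flooding forces q to be large enough that decryption still succeeds,
# which is exactly the parameter blow-up that mask-then-open avoids.
n, dim, q = 3, 16, 2**40
FLOOD = 2**20                     # flooding noise >> decryption noise

def inner(u, v):
    return sum(x * y for x, y in zip(u, v)) % q

s = [secrets.randbelow(q) for _ in range(dim)]              # LWE secret
shares = [[secrets.randbelow(q) for _ in range(dim)] for _ in range(n - 1)]
shares.append([(s[i] - sum(sh[i] for sh in shares)) % q for i in range(dim)])

# Encrypt one bit: c = (a, b) with b = <a, s> + e + bit * q/2.
bit = 1
a = [secrets.randbelow(q) for _ in range(dim)]
e = secrets.randbelow(7) - 3
b = (inner(a, s) + e + bit * (q // 2)) % q

# Each party publishes a flooded partial decryption of <a, s_i>.
partials = [(inner(a, sh) + secrets.randbelow(2 * FLOOD) - FLOOD) % q
            for sh in shares]
phase = (b - sum(partials)) % q   # = bit*q/2 + small noise + flooding noise
recovered = 1 if q // 4 < phase < 3 * q // 4 else 0
assert recovered == bit
```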
Rohit Chatterjee, Changrui Mu, Prashant Nalini Vasudevan
ePrint Report
We construct a public-key encryption scheme from the hardness of the (planted) MinRank problem over uniformly random instances. This corresponds to the hardness of decoding random linear rank-metric codes. Existing constructions of public-key encryption from such problems require hardness for structured instances arising from the masking of efficiently decodable codes. Central to our construction is the development of a new notion of duality for rank-metric codes.
Anik Basu Bhaumik, Suman Dutta, Siyi Wang, Anubhab Baksi, Kyungbae Jang, Amit Saha, Hwajeong Seo, Anupam Chattopadhyay
ePrint Report
The ZUC stream cipher is integral to modern mobile communication standards, including 4G and 5G, providing secure data transmission across global networks. Recently, Dutta et al. (Indocrypt 2024) presented the first quantum resource estimation of ZUC under Grover's search. Although preliminary, that work marks the beginning of quantum security analysis for ZUC. In this paper, we present an improved quantum resource estimation for ZUC, offering tighter bounds for Grover-based exhaustive key search. Beyond traditional quantum resource estimation, we also provide a concrete timescale required to execute such attacks using the specified quantum resources. Our findings show that a full-round, low-depth implementation of ZUC-128 can be realized with a maximum of $375$ ancilla qubits, a T-count of $106536$, and a T-depth of $15816$. Furthermore, the Grover-based key search can be performed most efficiently using $1201$ logical qubits, $170681$ T gates, and a T-depth of $78189$, resulting in a runtime of $1.78\times10^{11}$ years, an improvement of 93.43% over the $2.71 \times 10^{12}$ years estimated for the implementation of Dutta et al. We also provide a similar analysis for ZUC-256, with a 99.23% decrease in time. These estimations assume state-of-the-art superconducting-qubit error-correction technology.
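As a sanity check on the scale of such runtimes, one can estimate Grover's cost from the iteration count, the per-iteration T-depth, and an assumed error-corrected cycle time. A back-of-the-envelope sketch (all parameters below are illustrative assumptions, not the paper's cost model):

```python
import math

# Back-of-the-envelope Grover runtime: ~ (pi/4) * 2^(k/2) iterations, each
# costing one oracle's worth of T-depth, at an assumed time per T-layer.
def grover_runtime_years(key_bits: int, t_depth_per_iter: int,
                         seconds_per_t_layer: float) -> float:
    iterations = (math.pi / 4) * 2 ** (key_bits / 2)
    seconds = iterations * t_depth_per_iter * seconds_per_t_layer
    return seconds / (3600 * 24 * 365)

# E.g. a 128-bit key, ~1.6e4 T-depth per iteration (the order of the ZUC-128
# circuit figure quoted above), and 1 microsecond per T-layer (assumed):
print(f"{grover_runtime_years(128, 15816, 1e-6):.2e} years")  # ~7e9 years
```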