International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

08 October 2025

Barbara Jiabao Benedikt, Sebastian Clermon, Marc Fischlin, Tobias Schmalz
ePrint Report
Signal's handshake protocol non-interactively generates a shared key between two parties for secure communication. The underlying protocol X3DH, on which the post-quantum hybrid successor, PQXDH, builds, computes three to four individual Diffie-Hellman (DH) keys by combining the long-term identity keys and the ephemeral secrets of the two parties. Each of these DH operations serves a different purpose, either to authenticate the derived key or to provide forward secrecy.

We present here an improved protocol for X3DH, which we call MuDH, and an improved protocol for PQXDH, pq-MuDH. Instead of computing the individual DH keys, we run a single multi-valued DH operation for integrating all contributions simultaneously into a single DH key. Our approach is based on techniques from batch verification (Bellare et al., Eurocrypt 1998), where one randomizes each contribution of the individual keys to get a secure scheme.

The solution results in execution times of roughly 60% of the original protocol, both in theory and in our benchmarks on mobile and desktop platforms. Our modifications are confined to the key derivation step; both Signal's server infrastructure for public key retrieval and the message flow remain unchanged.
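The randomized combination idea borrowed from batch verification can be illustrated with a toy multiplicative group. This is only a schematic sketch of combining the three X3DH-style DH values into one key, not the MuDH protocol itself; the prime, generator, and coefficient sizes are illustrative placeholders, and in practice the coefficients would be derived from the protocol transcript.

```python
import secrets

# Toy sketch (not MuDH itself): merge the three X3DH-style DH keys
# g^{ay}, g^{xb}, g^{xy} into one key via public random coefficients,
# in the spirit of batch verification. Parameters are placeholders.
p = 2**127 - 1          # a Mersenne prime; Z_p^* serves as a stand-in group
g = 3

a, x = secrets.randbelow(p - 1), secrets.randbelow(p - 1)  # Alice: identity, ephemeral
b, y = secrets.randbelow(p - 1), secrets.randbelow(p - 1)  # Bob: identity, ephemeral
A, X, B, Y = (pow(g, e, p) for e in (a, x, b, y))          # public keys

# randomizing coefficients (in practice derived from the transcript hash)
c1, c2, c3 = (secrets.randbelow(2**64) for _ in range(3))

# Alice: a single multi-exponentiation over the bases Y and B she received
k_alice = (pow(Y, c1 * a + c3 * x, p) * pow(B, c2 * x, p)) % p
# Bob: the same key from his side, using the bases A and X
k_bob = (pow(A, c1 * y, p) * pow(X, c2 * b + c3 * y, p)) % p

assert k_alice == k_bob  # both equal g^{c1*ay + c2*xb + c3*xy} mod p
```

Because each side folds its secret exponents into one multi-exponentiation instead of three separate DH computations, the per-handshake cost drops, which is the intuition behind the reported ~60% execution times.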
Fuyuki Kitagawa, Ryo Nishimaki, Nikhil Pappu
ePrint Report
Secure key leasing (SKL) enables the holder of a secret key for a cryptographic function to temporarily lease the key using quantum information. Later, the recipient can produce a deletion certificate—a proof that they no longer have access to the secret key. The security guarantee ensures that even a malicious recipient cannot continue to evaluate the function after producing a valid deletion certificate.

Most prior work considers an adversarial recipient that obtains a single leased key, which is insufficient for many applications. In the more realistic collusion-resistant setting, security must hold even when polynomially many keys are leased (and subsequently deleted). However, achieving collusion-resistant SKL from standard assumptions remains poorly understood, especially for functionalities beyond decryption.

We improve upon this situation by introducing new pathways for constructing collusion-resistant SKL. Our main contributions are as follows:

- A generalization of quantum-secure collusion-resistant traitor tracing called multi-level traitor tracing (MLTT), and a compiler that transforms an MLTT scheme for a primitive $X$ into a collusion-resistant SKL scheme for primitive $X$.

- The first bounded collusion-resistant SKL scheme for PRFs, assuming LWE.

- A compiler that upgrades any single-key secure SKL scheme for digital signatures into one with unbounded collusion-resistance, assuming OWFs.

- A compiler that upgrades collusion-resistant SKL schemes with classical certificates to ones having verification-query resilience, assuming OWFs.
Xinyu Zhang, Ziyi Li, Ron Steinfeld, Raymond K. Zhao, Joseph K. Liu, Tsz Hon Yuen
ePrint Report
In this work, we present a novel commit-and-open $\Sigma$-protocol based on the Legendre and power residue PRFs. Our construction leverages the oblivious linear evaluation (OLE) correlations inherent in PRF evaluations and requires only black-box access to a tree-PRG-based vector commitment. By applying the standard Fiat-Shamir transform, we obtain a post-quantum signature scheme, Pegasus, which achieves short signature sizes (6025 to 7878 bytes) with efficient signing (3.910 to 19.438 ms) and verification times (3.942 to 18.999 ms). Furthermore, by pre-computing the commitment phase, the online response time can be reduced to as little as 0.047 to 0.721 ms. We prove the security of Pegasus in both the classical random oracle model (ROM) and the quantum random oracle model (QROM), filling a gap left by prior PRF-based signature schemes.
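The Legendre PRF underlying such protocols is simple to state: the key is added to the input and the quadratic residuosity of the sum modulo a prime is output as one bit. A minimal sketch via Euler's criterion (the prime and key below are toy values, not Pegasus parameters):

```python
def legendre_prf_bit(key: int, x: int, p: int) -> int:
    """One output bit of the Legendre PRF: quadratic residuosity of
    key + x modulo an odd prime p, computed via Euler's criterion."""
    s = pow(key + x, (p - 1) // 2, p)  # 1 if QR, p-1 if non-residue, 0 if key+x ≡ 0
    return 0 if s <= 1 else 1

# Toy sanity check: for p = 101, exactly half of the nonzero inputs
# are quadratic non-residues, so the output bit is balanced.
p = 101
bits = [legendre_prf_bit(0, x, p) for x in range(1, p)]
assert sum(bits) == (p - 1) // 2
```

The power residue PRF mentioned in the abstract generalizes this from the quadratic case to higher-order residue symbols, yielding more than one output bit per evaluation.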

We further develop a ring signature scheme, PegaRing, that preserves the three-move commit-and-open structure of Pegasus. Compared to the previous PRF-based ring signature DualRing-PRF (ACISP 2024), PegaRing reduces the constant communication overhead by more than half and achieves significantly faster signing and verification. For a ring size of 1024, PegaRing yields signatures of 29 to 32 KB, with signing times of 8 to 44 ms and verification times of 6 to 31 ms, depending on the parameters. Finally, we prove the security of PegaRing in both the ROM and the QROM; to the best of our knowledge, it is the first ring signature based on symmetric-key primitives with practical performance and provable QROM security.
Tomoyuki Morimae, Yuki Shirakawa, Takashi Yamakawa
ePrint Report
One-way puzzles (OWPuzzs), introduced by Khurana and Tomer [STOC 2024], are a natural quantum analogue of one-way functions (OWFs), and one of the most fundamental primitives in ''Microcrypt'', where OWFs do not exist but quantum cryptography is possible. OWPuzzs are implied by almost all quantum cryptographic primitives, and imply several important applications such as non-interactive commitments and multi-party computation. A significant goal in the field of quantum cryptography is to base OWPuzzs on plausible assumptions that do not imply OWFs.

In this paper, we base OWPuzzs on the hardness of non-collapsing measurements. To that end, we introduce a new complexity class, $\mathbf{SampPDQP}$, which is a sampling version of the decision class $\mathbf{PDQP}$ introduced in [Aaronson, Bouland, Fitzsimons, and Lee, ITCS 2016]. We show that if $\mathbf{SampPDQP}$ is hard on average for quantum polynomial time, then OWPuzzs exist. We also show that if $\mathbf{SampPDQP}\not\subseteq\mathbf{SampBQP}$, then auxiliary-input OWPuzzs exist.

$\mathbf{SampPDQP}$ is the class of sampling problems that can be solved by a classical polynomial-time deterministic algorithm that can make a single query to a non-collapsing measurement oracle, a ''magical'' oracle that can sample measurement results on quantum states without collapsing the states. Such non-collapsing measurements are highly unphysical operations that should be hard to realize in quantum polynomial time, and therefore the assumptions on which we base OWPuzzs seem extremely plausible. Moreover, our assumptions do not seem to imply OWFs, because the ability to invert classical functions would not help realize quantum non-collapsing measurements. We also study upper bounds on the hardness of $\mathbf{SampPDQP}$.
We introduce a new primitive, distributional collision-resistant puzzles (dCRPuzzs), which are a natural quantum analogue of distributional collision-resistant hashing [Dubrov and Ishai, STOC 2006]. We show that dCRPuzzs imply average-case hardness of $\mathbf{SampPDQP}$ (and therefore OWPuzzs as well). We also show that two-message honest-statistically-hiding commitments with classical communication and one-shot message authentication codes (MACs), which are a privately-verifiable version of one-shot signatures [Amos, Georgiou, Kiayias, Zhandry, STOC 2020], imply dCRPuzzs.
Supriya Adhikary, Puja Mondal, Angshuman Karmakar
ePrint Report
Zero-knowledge succinct arguments of knowledge (zkSNARKs) provide short privacy-preserving proofs for general NP problems. Public verifiability of zkSNARK protocols is a desirable property, where the proof, once generated, can be verified by anyone. Designated-verifier zero-knowledge is useful when only one or a few individuals should have access to the verification result. Existing zkSNARK schemes are either fully publicly verifiable or verifiable only by a designated verifier holding a secret verification key.

In this work, we propose a new notion of a hybrid verification mechanism. Here, the prover generates a proof that can be verified by a designated verifier. For this proof, the designated verifier can generate auxiliary information with its secret key. The combination of this proof and the auxiliary information allows any public verifier to verify the proof without any other information. We also introduce necessary security notions and mechanisms to identify a cheating designated verifier or the prover. Our hybrid verification zkSNARK construction is based on module lattices and adapts the zkSNARK construction by Ishai et al. (CCS 2021). In this construction, the designated verifier is required only once after proof generation to create the publicly verifiable proof. Our construction achieves a small constant-size proof and fast verification time, which is linear in the statement size.
Puja Mondal, Suparna Kundu, Hikaru Nishiyama, Supriya Adhikary, Daisuke Fujimoto, Yuichi Hayashi, Angshuman Karmakar
ePrint Report
LESS is a digital signature scheme that is currently in the second round of the National Institute of Standards and Technology's (NIST's) ongoing additional call for post-quantum standardization. LESS is built from a zero-knowledge identification scheme via the Fiat-Shamir transformation, and its design is based on the hardness of the linear equivalence problem. The designers updated the scheme in the second round to improve efficiency and signature size. As seen before in the NIST standardization process, analysis of physical attacks such as side-channel and fault-injection attacks is considered nearly as important as mathematical security. In recent years, several attacks have been demonstrated on LESS version 1 (LESS-v1); among these, the work by Mondal et al. at Asiacrypt 2024 is the most notable, showing several attack surfaces on LESS-v1 that can be exploited using different fault attacks. However, the implementation of LESS version 2 (LESS-v2) has not yet been explored in this direction.

In this work, we analyze the new structure of the LESS-v2 signature scheme and propose a signature-forgery procedure. These techniques do not require the full signing key, but only some other secret-related information. Our analysis uncovered multiple attack surfaces in the design of LESS-v2 from which this secret-related information can be recovered. We assume an adversary can use interrupted-execution techniques, such as fault attacks, to recover this extra information. We have analyzed the average number of faults required under two particular fault models to recover the secret equivalent component, and observed that only $1$ faulted signature is needed for most of the parameter sets. Our attacks rely on very simple and standard fault models, and we demonstrate them using both simulation and a simple experimental setup.
Minki Hhan, Tomoyuki Morimae, Yasuaki Okinaka, Takashi Yamakawa
ePrint Report
With the rapid advances in quantum computer architectures and the emerging prospect of large-scale quantum memory, it is becoming essential to classically verify that remote devices genuinely allocate the promised quantum memory with a specified number of qubits and coherence time. In this paper, we introduce a new concept, proofs of quantum memory (PoQM). A PoQM is an interactive protocol, over a classical channel, between a classical probabilistic polynomial-time (PPT) verifier and a quantum polynomial-time (QPT) prover, in which the verifier can check that the prover possessed a quantum memory with a certain number of qubits during a specified period of time. PoQM generalize the notion of proofs of quantumness (PoQ) [Brakerski, Christiano, Mahadev, Vazirani, and Vidick, JACM 2021]. Our main contributions are a formal definition of PoQM and constructions based on the hardness of LWE. Specifically, we give two constructions of PoQM. The first is a four-round protocol with negligible soundness error under the subexponential hardness of LWE. The second is a polynomial-round protocol with inverse-polynomial soundness error under the polynomial hardness of LWE. As a lower bound, we also show that PoQM imply one-way puzzles. Moreover, a certain restricted version of PoQM implies quantum computation classical communication (QCCC) key exchange.
Yang Liu, Zhen Shi, Chenhui Jin, Jiyan Zhang, Ting Cui, Dengguo Feng
ePrint Report
LOL (League of Legends) is a general framework for designing blockwise stream ciphers to achieve higher software performance in 5G/6G. Following this framework, stream ciphers LOL-MINI and LOL-DOUBLE are proposed, each with a 256-bit key. This paper focuses on analyzing their security against correlation attacks. Additionally, we propose an observation on constructing linear approximation trails. The repeated linear approximations in trails may not only increase the number of active S-boxes but also make the correlations of corresponding approximations zero. We propose a new rule to address this situation and develop multiple correlation attacks. Furthermore, we use theoretical cryptanalysis to lower the time complexity of aggregation and construct multiple linear approximations efficiently. For LOL-MINI, we construct approximately $2^{32.17}$ linear approximations with absolute correlations ranging from $2^{-118.38}$ to $2^{-127}$. We launch the developed multiple correlation attack on LOL-MINI with a time complexity of $2^{250.56}$, a data complexity of $2^{250}$, a memory complexity of $2^{250}$ and a success probability of 99.4$\%$. For LOL-DOUBLE, there similarly exist approximately $2^{22.86}$ linear approximations with absolute correlations ranging from $2^{-147.23}$ to $2^{-156}$. It is worth noting that our results do not violate the declaration of LOL-MINI since the maximum length of keystreams for a single pair of key and IV is less than $2^{64}$.
Sabine Oechsner, Vitor Pereira, Peter Scholl
ePrint Report
Computer-aided cryptography, with particular emphasis on formal verification, promises an interesting avenue to establish strong guarantees about cryptographic primitives. The appeal of formal verification is to replace error-prone pen-and-paper proofs with a proof that was checked by a computer and, therefore, does not need to be checked by a human. In this paper, we ask how reliable these machine-checked proofs are, by analyzing a formally verified implementation of the Line-Point Zero-Knowledge (LPZK) protocol (Dittmer, Eldefrawy, Graham-Lengrand, Lu, Ostrovsky and Pereira, CCS 2023). The implementation was developed in EasyCrypt and compiled into OCaml code that was claimed to be high-assurance, i.e., to offer formal guarantees of completeness, soundness, and zero knowledge. We show that despite these formal claims, the EasyCrypt model was flawed, and the supposedly high-assurance implementation had critical security vulnerabilities. Concretely, we demonstrate that: 1) the EasyCrypt soundness proof was done incorrectly, allowing an attack on the scheme that leads honest verifiers to accept false statements; and 2) the EasyCrypt formalization inherited a deficient model of zero knowledge for a class of non-interactive zero-knowledge protocols, which also allows the verifier to recover the witness. In addition, we demonstrate 3) a gap in the proof of the perfect zero-knowledge property of the LPZK variant of Dittmer, Ishai, Lu and Ostrovsky (CCS 2022), on which the EasyCrypt proof is based, which, depending on the interpretation of the protocol and the security claim, could allow a malicious verifier to learn the witness. Our findings highlight the importance of scrutinizing machine-checked proofs, including their models and assumptions. We offer lessons learned for both users and reviewers of tools like EasyCrypt, aimed at improving the transparency, rigor, and accessibility of machine-checked proofs.
By sharing our methodology and challenges, we hope to foster a culture of deeper engagement with formal verification in the cryptographic community.
Zhenkai Hu, Haofei Liang, Xiao Wang, Xiang Xie, Kang Yang, Yu Yu, Wenhao Zhang
ePrint Report
Threshold fully homomorphic encryption (ThFHE) enables multiple parties to perform arbitrary computation over encrypted data, while the secret key is distributed across the parties. The main task in designing ThFHE is to construct threshold key-generation and decryption protocols for FHE schemes. Among existing FHE schemes, FHEW-like cryptosystems enjoy the advantage of fast bootstrapping and small parameters. However, known ThFHE solutions use the ``noise-flooding'' technique to realize threshold decryption, which requires either large parameters or switching to a scheme with large parameters via bootstrapping, leading to a slow decryption process. Besides, for key generation, existing ThFHE schemes either assume a generic MPC or a trusted setup, or incur noise growth that is linear in the number $n$ of parties.

In this paper, we propose a fast ThFHE scheme, Ajax, by designing threshold key-generation and decryption protocols for FHEW-like cryptosystems. In particular, for threshold decryption, we eliminate the need for noise flooding and instead present a new technique called ``mask-then-open'', based on random double sharings over different rings, while keeping the advantage of small parameters. For threshold key generation, we show a simple approach to reduce the noise growth from $n$ times to $\max(0.038n, 2)$ times in the honest-majority setting, where at most $t=\lfloor(n-1)/2\rfloor$ parties are corrupted. Our end-to-end implementation reports running times of 17.6 s and 0.9 ms (resp., 91.9 s and 4.4 ms) for generating a set of keys and decrypting a single ciphertext, respectively, for $n=3$ (resp., $n=21$) parties under a network with 1 Gbps bandwidth and 1 ms ping time. Compared to the state-of-the-art implementation, our protocol improves the end-to-end performance of the threshold decryption protocol by a factor of $5.7\times$ to $283.6\times$ across different network latencies from $t=1$ to $t=13$.
Our approaches can also be applied in other types of FHE schemes like BGV, BFV, and CKKS. In addition, we show how to strengthen our protocols in order to guarantee the security against malicious adversaries.
Rohit Chatterjee, Changrui Mu, Prashant Nalini Vasudevan
ePrint Report
We construct a public-key encryption scheme from the hardness of the (planted) MinRank problem over uniformly random instances. This corresponds to the hardness of decoding random linear rank-metric codes. Existing constructions of public-key encryption from such problems require hardness for structured instances arising from the masking of efficiently decodable codes. Central to our construction is the development of a new notion of duality for rank-metric codes.
Anik Basu Bhaumik, Suman Dutta, Siyi Wang, Anubhab Baksi, Kyungbae Jang, Amit Saha, Hwajeong Seo, Anupam Chattopadhyay
ePrint Report
The ZUC stream cipher is integral to modern mobile communication standards, including 4G and 5G, providing secure data transmission across global networks. Recently, Dutta et al. (Indocrypt 2024) presented the first quantum resource estimation of ZUC under Grover's search. Although preliminary, this work marks the beginning of quantum security analysis for ZUC. In this paper, we present an improved quantum resource estimation for ZUC, offering tighter bounds for Grover-based exhaustive key search. Beyond traditional quantum resource estimations, we also provide a concrete timescale required to execute such attacks using the specified quantum resources. Our findings show that a full-round, low-depth implementation of ZUC-128 can be realized with a maximum of $375$ ancilla qubits, a T-count of $106536$, and a T-depth of $15816$. Furthermore, the Grover-based key search can be performed most efficiently using $1201$ logical qubits, $170681$ T gates, and a T-depth of $78189$, resulting in a runtime of $1.78\times10^{11}$ years, a 93.43% improvement over the $2.71 \times 10^{12}$ years estimated for the implementation of Dutta et al. We also provide an analogous analysis for ZUC-256, with a 99.23% decrease in time. These estimations assume state-of-the-art superconducting-qubit error-correction technology.
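The shape of such runtime estimates can be reproduced with back-of-envelope arithmetic: Grover needs roughly $(\pi/4)\cdot 2^{k/2}$ sequential iterations for a $k$-bit key, and each iteration costs some number of fault-tolerant T-layers. The per-iteration depth and code-cycle time below are illustrative assumptions, not the paper's figures.

```python
import math

# Back-of-envelope Grover runtime arithmetic (illustrative only; the
# per-iteration T-depth and layer time below are assumed placeholders,
# not the values used in the paper).
KEY_BITS = 128
grover_iterations = math.floor((math.pi / 4) * 2 ** (KEY_BITS / 2))

T_DEPTH_PER_ITERATION = 16_000   # assumed sequential T-layers per iteration
LAYER_SECONDS = 1e-6             # assumed time per fault-tolerant T-layer

seconds = grover_iterations * T_DEPTH_PER_ITERATION * LAYER_SECONDS
years = seconds / (365.25 * 24 * 3600)
print(f"{grover_iterations:.3e} iterations -> {years:.2e} years")
```

Even with these optimistic placeholders the runtime lands in the billions of years, which is why such estimates are reported on astronomical timescales; the paper's tighter circuit-level accounting refines exactly these two per-iteration factors.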
David Heath, Nakul Khambhati, Rafail Ostrovsky, Turan Vural
ePrint Report
Authenticated garbling, introduced by Wang et al., CCS'17, is a leading paradigm for achieving constant-round maliciously-secure 2PC. In this work, we upgrade authenticated garbling to efficiently support tensor gates (introduced in the context of semi-honest garbling by Heath et al., CCS'21) using the one-hot garbling technique. Our maliciously-secure garbled tensor gate computes $\boldsymbol {x}\otimes \boldsymbol{y}$ for $\boldsymbol{x}\in \{0,1\}^n,\boldsymbol{y} \in \{0,1\}^m$ with $O(n+m)\kappa + O(nm)$ bits of communication, where $\kappa$ is the computational security parameter and $n,m$ are logarithmic in $\kappa$. This improves the best prior constant-round maliciously secure approach, which incurs $O(nm)\kappa$ communication. Our protocol is concretely efficient and allows improvement over state-of-the-art for applications including integer multiplication, matrix multiplication, and more. We benchmark the online phase of our protocol and observe a $5.41\times$ improvement in communication and $3.35\times$ improvement in wall clock time when computing a $128\times 128$ bit tensor product.
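The plaintext functionality of a tensor gate, and the communication comparison stated above, are easy to make concrete. The sketch below shows only the cleartext operation and the cost arithmetic, not the authenticated-garbling protocol itself.

```python
def tensor(x, y):
    """Plaintext functionality of a tensor gate: the outer product
    x ⊗ y of two bit vectors, flattened row-major."""
    return [xi & yj for xi in x for yj in y]

assert tensor([1, 0], [1, 1, 0]) == [1, 1, 0, 0, 0, 0]

# Communication comparison in bits, for kappa = 128 and n = m = 128:
kappa, n, m = 128, 128, 128
prior = n * m * kappa                # O(nm)*kappa: best prior approach
ours = (n + m) * kappa + n * m      # O(n+m)*kappa + O(nm): this work
assert ours < prior                  # here roughly a 40x reduction
```

The saving comes from paying the $\kappa$-bit overhead only per input bit rather than per output bit of the $n \times m$ outer product.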
Goutam Paul, Anup Kumar Kundu, Sucheta Chakrabarti
ePrint Report
ChaCha and Salsa are two ARX-based stream ciphers widely used for data encryption, including in the TLS v1.3 standard and VPN software. Exploiting Probabilistic Neutral Bits (PNB) is one of the most significant cryptanalysis strategies against reduced-round versions of these ciphers. The seminal PNB work by Aumasson et al. (FSE 2008) observes that the PNB set mostly depends on the output bit difference occurring in the intermediate round. Subsequent works mainly relied on differential or differential-linear cryptanalysis, or on multiple distinct input-output differentials whose bias in the intermediate round exceeds a threshold. In this paper, we propose a new PNB set construction based on multiple output bit differences with respect to a single input bit difference. We exploit these differentials to mount key-recovery attacks using a multi-step procedure based on our new PNB set. Our attack achieves a time complexity of $2^{167.9008}$ for ChaCha20/7 and $2^{183.5361}$ for Salsa20/8, beating all existing PNB-based attacks on ChaCha20/7 and Salsa20/8 by a significant margin. Further, both our time and data complexities for ChaCha20/7.5 are better than those of the latest published work by Flórez-Gutiérrez and Todo (Eurocrypt 2025). We have also verified our attack experimentally on a published toy version of ChaCha (FSE 2023).
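The ARX structure under attack here is built from add-rotate-xor quarter rounds. A minimal sketch of ChaCha's quarter round, checked against the published test vector from RFC 8439, Section 2.1.1:

```python
def rotl32(v, n):
    """Rotate a 32-bit word left by n bits."""
    return ((v << n) | (v >> (32 - n))) & 0xFFFFFFFF

def quarter_round(a, b, c, d):
    """One ChaCha quarter round: add, rotate, xor (ARX) on four 32-bit words."""
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 16)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 12)
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 8)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 8439, Section 2.1.1
assert quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567) == \
    (0xEA2A92F4, 0xCB1CF8CE, 0x4581472E, 0x5881C4BB)
```

Differences injected into the input state diffuse through repeated applications of this function, and PNB attacks exploit key bits whose flips barely affect the output difference after a few rounds.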
Joachim Neu, Javier Nieto, Ling Ren
ePrint Report
Proof-of-stake blockchains require consensus protocols that support Dynamic Availability and Reconfiguration (so-called DAR setting), where the former means that the consensus protocol should remain live even if a large number of nodes temporarily crash, and the latter means it should be possible to change the set of operating nodes over time. State-of-the-art protocols for the DAR setting, such as Ethereum, Cardano’s Ouroboros, or Snow White, require unrealistic additional assumptions, such as social consensus, or that key evolution is performed even while nodes are not participating. In this paper, we identify the necessary and sufficient adversarial condition under which consensus can be achieved in the DAR setting without additional assumptions. We then introduce a new and realistic additional assumption: honest nodes dispose of their cryptographic keys the moment they express intent to exit from the set of operating nodes. To add reconfiguration to any dynamically available consensus protocol, we provide a bootstrapping gadget that is particularly simple and efficient in the common optimistic case of few reconfigurations and no double-spending attempts.
Vladimir Kolesnikov, Stanislav Peceny, Rahul Rachuri, Srinivasan Raghuraman, Peter Rindal, Harshal Shah
ePrint Report
Linear error-correcting codes with fast encoding and high minimum distance are a central primitive across modern cryptography. They appear most prominently in two domains: (1) pseudorandom correlation generators (PCGs), which enable sublinear-communication generation of correlations such as oblivious transfer (OT) and vector oblivious linear evaluation (VOLE), and (2) zero-knowledge (ZK) proof systems, where linear-time encoders underpin proof soundness and scalability. In both settings, the prover or sender must multiply by a large generator matrix $\mathbf{G}$, often with dimensions in the millions, making computational efficiency the dominant bottleneck.

We propose a generalized paradigm for building LPN-friendly codes with provable minimum distance. Roughly speaking, these codes are based on the idea of randomized turbo codes, such as repeat-accumulate codes. To prove their minimum distance, we present a generalized enumeration technique, which allows us to precisely compute the minimum distance for a broad class of codes. Although we do not prove their asymptotic behavior, the concrete parameters essentially give a linear-time encoder.

Armed with these new techniques, we construct several novel codes, the most promising of which we call Block-Accumulate codes. Our original design goal was to construct codes that run efficiently on GPUs. Surprisingly, we find that our newly constructed codes are the fastest on both GPUs and CPUs, while at the same time achieving a better minimum distance. If we restrict our attention to codes with proofs, our code is $8\times$ faster than the state of the art on a CPU and $50\times$ faster on a GPU. Even if we use aggressive parameters, our code is $3\times$ and $20\times$ faster, respectively. Under these parameters, this yields overall PCG speedups of $2.5\times$ on the CPU and $15\times$ on the GPU, achieving about 200 million OTs or binary Beaver triples per second on the GPU (excluding the one-time 10 ms GGM seed expansion). We expect similar improvements when applied to the ZK space.
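The repeat-accumulate idea this family builds on is compact enough to sketch: repeat each message bit, pass the result through a fixed interleaver, then take a running XOR. This is a toy classical repeat-accumulate encoder with placeholder parameters, not the paper's Block-Accumulate construction.

```python
import random

def repeat_accumulate_encode(msg, rep=3, seed=42):
    """Toy repeat-accumulate encoder over GF(2):
    repeat each bit, permute (interleave), then running XOR (accumulate)."""
    repeated = [b for b in msg for _ in range(rep)]   # 1) repeat
    rng = random.Random(seed)                         # 2) fixed interleaver
    idx = list(range(len(repeated)))
    rng.shuffle(idx)
    interleaved = [repeated[i] for i in idx]
    out, acc = [], 0                                  # 3) accumulate
    for b in interleaved:
        acc ^= b
        out.append(acc)
    return out

# Every stage is GF(2)-linear, so the encoder is a linear code:
m1, m2 = [1, 0, 1, 1], [0, 1, 1, 0]
xor = [a ^ b for a, b in zip(m1, m2)]
lhs = repeat_accumulate_encode(xor)
rhs = [a ^ b for a, b in zip(repeat_accumulate_encode(m1),
                             repeat_accumulate_encode(m2))]
assert lhs == rhs
```

All three stages run in time linear in the codeword length, which is the intuition behind the linear-time encoding claimed for this class of codes.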
Jules Maire, Alan Pulval-Dady
ePrint Report
Blind signatures have become a cornerstone for privacy-sensitive applications such as digital cash, anonymous credentials, and electronic voting. The elliptic curve variant of the Digital Signature Algorithm (ECDSA) is widely adopted due to its efficiency in resource-constrained environments, such as mobile devices and blockchain systems. Building blind ECDSA is hence a natural goal. We present the first such construction relying solely on the ECDSA assumption. Despite the inherent complexities of integrating blindness with ECDSA, we design a protocol that ensures both unforgeability and blindness, without introducing new computational assumptions, while achieving concurrent security. It involves zero-knowledge proofs based on the MPC-in-the-head paradigm for complex statements combining relations on encrypted elliptic curve points, their coordinates, and discrete logarithms.
Vipul Goyal, Justin Raizes
ePrint Report
A central challenge in data security is not just preventing theft, but detecting whether it has occurred. Classically, this is impossible because a perfect copy leaves no evidence. Quantum mechanics, on the other hand, forbids general duplication, opening up new possibilities.

We introduce Proofs of No Intrusion, which enable a classical client to remotely test whether a quantum server has been hacked and the client's data stolen. Crucially, the test does not destroy the data being tested, avoiding the need to store a backup elsewhere. We define and construct proofs of no intrusion for ciphertexts assuming fully homomorphic encryption. Additionally, we show how to equip several constructions of unclonable primitives with proofs of no intrusion, such as unclonable decryption keys and signature tokens. Conceptually, proofs of no intrusion can be defined for essentially any unclonable primitive.

At the heart of our techniques is a new method for non-destructively testing coset states with classical communication. It can be viewed as a non-destructive proof of knowledge of a measurement result of the coset state.
Koen de Boer, Joël Felderhoff
ePrint Report
We present a novel analysis of a quantum algorithm computing the S-unit group of a number field, due to Eisenträger et al. [EHKS14a] and Biasse and Song [BS16]. We prove that this quantum algorithm runs in polynomial time, explicitly quantifying the polynomials of the quantum gate and memory complexity (under GRH). We do so by carefully analyzing an implementation of a Continuous Hidden Subgroup Problem (CHSP) oracle function whose period is the (logarithm of the) S-unit group, and providing it to a CHSP-solving algorithm as in [BDF19]. Our analysis is novel in that it minimizes the use of the quantum memory-inefficient LLL-reduction by resorting to strategically chosen precomputed approximations of high powers of prime ideals. Additionally, we provide a new quantum algorithm for computing discrete Gaussian superpositions, an analogue of the GPV algorithm by Gentry et al. [GPV08]. Lastly, we include a full and rigorous numerical analysis of all parts of the oracle-function-computing algorithm, allowing the use of fixed-point arithmetic and thus a precise quantification of the run-time and memory.
Nikita Snetkov, Mihkel Jaas Karu, Jelizaveta Vakarjuk, Alisa Pankova
ePrint Report
Privacy-preserving technologies are a well-established area of research for modern digital systems, driven by regulatory mandates such as the GDPR and a growing demand for user privacy. One example among the available technologies is blind signatures: a primitive that supports the creation of privacy-sensitive applications, from secure e-voting and anonymous digital cash to privacy-preserving credentials and identity management systems. In this work, we introduce Coppercloud, a blind server-supported RSA signature scheme designed to enhance privacy in digital identity systems. Coppercloud enables a user to obtain a signature on a message without revealing its content to the supporting server, while distributing the signing key between the user's device and the supporting server. We formalize the security requirements for blind server-supported signing by defining an ideal functionality, and prove that Coppercloud securely realizes this functionality in the Universal Composability (UC) model.