International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

08 October 2025

Anna Guinet, Carina Graw, Lukas Koletzko, Jan Richter-Brockmann, Holger Dette, Tim Güneysu
ePrint Report
The random probing model is a theoretical model that abstracts the physical leakage of an embedded device running a cryptographic scheme under more realistic assumptions than the threshold probing model. It assumes that the wires of the target device leak their assigned values with probability $p$, and these values may reveal information about secret data, which could lead to a security violation. From that, we can compute the probability $\epsilon$ that a side-channel adversary learns secret data from a random combination of wires, as a function of the number of wire combinations that breach security and the leakage rate $p$. This model is used to evaluate the security of masked cryptographic implementations, simply called circuits; so far, the research community has focused on approximating or estimating the probability $\epsilon$ for a single circuit. Yet no method has been proposed to quickly compare the probabilities $\epsilon$ of different circuits, e.g., a circuit and its optimized version. In this context, we present two statistical tests for making decisions about the level of security in the random probing model: the equivalence test compares the security of two circuits in terms of their $\epsilon$'s, and the superiority test decides whether the undetermined $\epsilon$ of one circuit falls below a security threshold $\epsilon_0$, both with quantified uncertainty about the computation of the probabilities $\epsilon$. The validity of these tests is proven mathematically and verified via empirical studies on small masked S-boxes.
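As a toy illustration of the model just described, the probability $\epsilon$ can be computed by brute force for a hypothetical gadget: each wire leaks independently with probability $p$, and a leakage pattern is a breach when it covers one of the secret-revealing wire combinations. The wire count and revealing sets below are illustrative assumptions, not taken from the paper:

```python
from itertools import combinations

def epsilon(n_wires, revealing_sets, p):
    """Probability that the set of leaked wires covers at least one
    secret-revealing combination, when each wire leaks independently
    with probability p. Brute force over all 2^n leakage patterns."""
    total = 0.0
    wires = range(n_wires)
    for k in range(n_wires + 1):
        for leaked in combinations(wires, k):
            leaked_set = set(leaked)
            if any(s <= leaked_set for s in revealing_sets):
                total += p ** k * (1 - p) ** (n_wires - k)
    return total

# Hypothetical 4-wire gadget where wires {0,1} together, or {2,3}
# together, jointly reveal secret shares.
revealing = [frozenset({0, 1}), frozenset({2, 3})]
print(epsilon(4, revealing, 0.01))
```

For the two disjoint revealing pairs above, independence gives the closed form $1-(1-p^2)^2$, which the brute-force sum reproduces.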
André Chailloux, Paul Hermouet
ePrint Report
Chen, Liu, and Zhandry [CLZ22] introduced the problems S|LWE⟩ and C|LWE⟩ as quantum analogues of the Learning with Errors problem, designed to construct quantum algorithms for the Inhomogeneous Short Integer Solution (ISIS) problem. Several later works have used this framework for constructing new quantum algorithms in specific cases. However, the general relation between all these problems is still unknown. In this paper, we investigate the equivalence between S|LWE⟩ and ISIS. We present the first fully generic reduction from ISIS to S|LWE⟩, valid even in the presence of errors in the underlying algorithms. We then explore the reverse direction, introducing an inhomogeneous variant of C|LWE⟩, denoted IC|LWE⟩, and show that IC|LWE⟩ reduces to S|LWE⟩. Finally, we prove that, under certain recoverability conditions, an algorithm for ISIS can be transformed into one for S|LWE⟩. We instantiate this reverse reduction by tweaking a known algorithm for (I)SIS∞ in order to construct a quantum algorithm for S|LWE⟩ when the alphabet size q is a small power of 2, recovering some results of Bai et al. [BJK+25]. Our results thus clarify the landscape of reductions between S|LWE⟩ and ISIS, and we show both their strong connection and the remaining barriers to a full equivalence.
Yuval Efron, Joachim Neu, Ling Ren, Ertem Nusret Tas
ePrint Report
In the context of Byzantine consensus problems such as Byzantine broadcast (BB) and Byzantine agreement (BA), the good-case setting aims to study the minimal possible latency of a BB or BA protocol under certain favorable conditions, namely the designated leader being correct (for BB), or all parties having the same input value (for BA). We provide a full characterization of the feasibility and impossibility of good-case latency, for both BA and BB, in the synchronous sleepy model. Surprisingly to us, we find irrational resilience thresholds emerging: 2-round good-case BB is possible if and only if at all times, at least $\frac{1}{\varphi} \approx 0.618$ fraction of the active parties are correct, where $\varphi = \frac{1+\sqrt{5}}{2} \approx 1.618$ is the golden ratio; 1-round good-case BA is possible if and only if at least $\frac{1}{\sqrt{2}} \approx 0.707$ fraction of the active parties are correct.
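The two resilience thresholds stated above can be checked numerically; this is a trivial sanity check of the quoted constants, not part of the paper's proofs:

```python
import math

phi = (1 + math.sqrt(5)) / 2      # golden ratio, about 1.618
bb_threshold = 1 / phi            # 2-round good-case BB threshold
ba_threshold = 1 / math.sqrt(2)   # 1-round good-case BA threshold

print(round(bb_threshold, 3), round(ba_threshold, 3))  # 0.618 0.707
# 1/phi equals phi - 1, a defining identity of the golden ratio:
print(abs(bb_threshold - (phi - 1)) < 1e-12)           # True
```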
Prabhanjan Ananth, Eli Goldin
ePrint Report
Quantum cryptographic definitions are often sensitive to the number of copies of the cryptographic states revealed to an adversary. Making definitional changes to the number of copies accessible to an adversary can drastically affect various aspects including the computational hardness, feasibility, and applicability of the resulting cryptographic scheme. This phenomenon appears in many places in quantum cryptography, including quantum pseudorandomness and unclonable cryptography.

To address this, we present a generic approach to boost single-copy security to multi-copy security and apply this approach to many settings. As a consequence, we obtain the following new results:

- One-copy stretch pseudorandom state generators (under mild assumptions) imply the existence of t-copy stretch pseudorandom state generators, for any fixed polynomial t.

- One-query pseudorandom unitaries with short keys (under mild assumptions) imply the existence of t-query pseudorandom unitaries with short keys, for any fixed polynomial t.

- Assuming indistinguishability obfuscation and other standard cryptographic assumptions, there exist identical-copy secure unclonable primitives such as public-key quantum money and quantum copy-protection.
Alisa Pankova, Jelizaveta Vakarjuk
ePrint Report
The European Digital Identity (EUDI) Wallet aims to provide end users with a way to obtain attested credentials from issuers and present them to different relying parties. An important property mentioned in the regulatory frameworks is the possibility of revoking a previously issued credential. While it is possible to issue a short-lived credential, in some cases this may be inconvenient, and a separate revocation service that allows revoking a credential at any time may be necessary.

In this work, we propose a full end-to-end description of a generic credential revocation system, which technically relies on a single server and secure transmission channels between parties. We prove security of the proposed revocation functionality in the universal composability model, and estimate its efficiency based on a proof-of-concept implementation.
Antonin Leroux, Maxime Roméas
ePrint Report
Updatable Encryption (UE) allows ciphertexts to be updated under new keys without decryption, enabling efficient key rotation. Constructing post-quantum UE with strong security guarantees is challenging: the only known CCA-secure scheme, COM-UE, uses bitwise encryption, resulting in large ciphertexts and high computational costs.

We introduce DINE, a CCA-secure, isogeny-based post-quantum UE scheme that is both compact and efficient. Each encryption, decryption, or update requires only a few power-of-2 isogeny computations in dimension 2 to encrypt 28B messages, yielding 320B ciphertexts and 224B update tokens at NIST security level 1---significantly smaller than prior constructions. Our full C implementation demonstrates practical performance: updates in 7ms, encryptions in 48ms, and decryptions in 86ms.

Our design builds on recent advances in isogeny-based cryptography, combining high-dimensional isogeny representations with the Deuring correspondence. We also introduce new algorithms for the Deuring correspondence which may be of independent interest. Moreover, the security of our scheme relies on new problems that might open interesting perspectives in isogeny-based cryptography.
Martin R. Albrecht, Joël Felderhoff, Russell W. F. Lai, Oleksandra Lapiha, Ivy K. Y. Woo
ePrint Report
The Leftover Hash Lemma (LHL) states that \(\mathbf{X} \cdot \mathbf{v}\) for a Gaussian \(\mathbf{v}\) is an essentially independent Gaussian sample. It has seen numerous applications in cryptography for hiding sensitive distributions of \(\mathbf{v}\). We generalise the Gaussian LHL, initially stated over \(\mathbb{Z}\) by Agrawal, Gentry, Halevi, and Sahai (2013), to modules over number fields. Our results have a sub-linear dependency on the degree of the number field and require only polynomial norm growth \(\lVert\mathbf{v}\rVert/\lVert\mathbf{X}\rVert\). To this end, we also determine when \(\mathbf{X}\) is surjective (assuming the Generalised Riemann Hypothesis) and give bounds on the smoothing parameter of the kernel of \(\mathbf{X}\). We also establish when the resulting distribution is independent of the geometry of \(\mathbf{X}\), and establish the hardness of the \(k\)-SIS and \(k\)-LWE problems over modules (\(k\)-MSIS/\(k\)-MLWE) based on the hardness of SIS and LWE over modules (MSIS/MLWE) respectively, which was assumed without proof in prior works.
Seunghyun Cho, Eunyoung Seo, Young-Sik Kim
ePrint Report
We propose Locally Recoverable Data Availability Sampling (LR-DAS), which upgrades binary, threshold-based availability to graded verification by leveraging optimal locally recoverable codes (e.g., Tamo-Barg). Local groups of size $r+\alpha$ serve as atomic certification units: once $r$ verified openings fix a degree-$
Benedikt Bünz, Jessica Chen, Zachary DeStefano
ePrint Report
Permutation and lookup arguments are at the core of most deployed SNARK protocols today. Most modern techniques for performing them require a grand product check. This requires either committing to large field elements (e.g., in Plonk) or using GKR (e.g., in Spartan), which has worse verifier cost and proof size. Sadly, both have a soundness error that grows linearly with the input size.

We present two permutation arguments that have $\text{polylog}(n)/|\mathbb{F}|$ soundness error -- for reasonable input size $n=2^{32}$ and field size $|\mathbb{F}|=2^{128}$, the soundness error improves significantly from $2^{-96}$ to $2^{-120}$. Moreover, the arguments achieve $\log(n)$ verification cost and proof size without ever needing to commit to anything beyond the witness. $\mathsf{BiPerm}$ only requires the prover to perform $O(n)$ field operations on top of committing to the witness, but at the cost of limiting the choices of PCS. We show a stronger construction, $\mathsf{MulPerm}$, which has no restriction on the PCS choice and its prover performs essentially linear field operations, $n\cdot \tilde O(\sqrt{\log(n)})$.

Our permutation arguments generalize to lookups. We demonstrate how our arguments can be used to improve SNARK systems such as HyperPlonk and Spartan, and build a GKR-based protocol for proving non-uniform circuits.
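The soundness figures quoted above can be reproduced directly. Note that the polylog factor of $2^8 = 256$ below is our inference from the stated $2^{-120}$ and $|\mathbb{F}| = 2^{128}$, not a value given in the abstract:

```python
import math

n = 2 ** 32    # input size from the abstract
F = 2 ** 128   # field size from the abstract

# Linear soundness error (grand-product-based arguments): n / |F|.
linear_error_bits = math.log2(n / F)
print(linear_error_bits)    # -96.0

# The stated 2^-120 corresponds to a polylog(n) factor of
# 2^(128 - 120) = 256 (inferred, not stated in the abstract).
polylog_factor = 2 ** (128 - 120)
polylog_error_bits = math.log2(polylog_factor / F)
print(polylog_error_bits)   # -120.0
```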
Kunming Jiang, Fraser Brown, Riad S. Wahby
ePrint Report
General-purpose probabilistic proof systems operate on programs expressed as systems of arithmetic constraints—an unfriendly representation. There are two broad approaches in the literature to turning friendlier, high-level programs into constraints suitable for proof systems: direct translation and CPU emulation. Direct translators compile a program into highly optimized constraints; unfortunately, this process requires expressing all possible paths through the program, which results in compile times that scale with the program’s runtime rather than its size. In addition, the prover must pay the cost of every possible program path, even those untaken by a given input. In contrast, CPU emulators don’t compile programs to constraints; instead, they “execute” those programs, expressed as CPU instructions, on a CPU emulator that is itself expressed as constraints. As a result, this approach can’t perform powerful, program-specific optimizations, and may require thousands of constraints where direct translation could use a clever handful. Worse, CPU emulators inherit an impractically expensive program state representation from the CPUs they emulate. This paper presents a compiler and proof system, CoBBl, that combines the benefits of CPU emulation and direct translation: it takes advantage of program-specific optimizations, but doesn’t pay for an unnecessary state representation or unexecuted computation. CoBBl outperforms CirC, a state-of-the-art direct translator, by $1–30\times$ on compile time and $26–350\times$ on prover time, and outperforms Jolt, a state-of-the-art CPU emulator, on prover time by $1.1–1.8\times$ on Jolt-friendly benchmarks, and up to $100\times$ on other benchmarks.
Anindya Ganguly, Angshuman Karmakar, Suparna Kundu, Debranjan Pal, Sumanta Sarkar
ePrint Report
Blind signatures (BS) allow a signer to produce a valid signature on a message without learning the message itself. They have niche applications in privacy-preserving protocols such as digital cash and electronic voting. Non-interactive blind signatures (NIBS) remove the need for interaction between the signer and the user. In the post-quantum era, lattice-based NIBS schemes are studied as candidates for long-term security.

In Asiacrypt 2024, Baldimtsi et al. proposed the first lattice-based NIBS construction, whose security relies on the random one-more inhomogeneous short integer solution (rOM-ISIS) assumption, which is considered non-standard. Later, Zhang et al. introduced another lattice-based construction in ProvSec 2024 and proved its security under the standard module short integer solution (MSIS) assumption. We analyse the security of the latter scheme. In the random oracle model, we show that it fails to achieve both nonce blindness and receiver blindness. We present explicit attacks where an adversary breaks both properties with probability 1. Our attack is based on a crucial observation that uncovers a flaw in the design. Specifically, this flaw allows an attacker to link a message-signature pair with its presignature-nonce pair. In addition, we also identify a flaw in the unforgeability proof. Finally, we suggest a modification to address the issue, which is similar to the Baldimtsi et al. construction, and whose security again relies on the non-standard rOM-ISIS assumption. This work again raises the question of the feasibility of achieving NIBS from standard assumptions.
Konrad Hanff, Anja Lehmann, Cavit Özbay
ePrint Report
Privacy Pass is an anonymous authentication protocol which was initially designed by Davidson et al. (PETS’18) to reduce the number of CAPTCHAs that TOR users must solve. It issues single-use authentication tokens with anonymous and unlinkable redemption guarantees. The issuer and verifier of the protocol share a symmetric key, and tokens are privately verifiable. The protocol has sparked interest from both academia and industry, which led to an Internet Engineering Task Force (IETF) standard. While Davidson et al. formally analyzed the original protocol, the IETF standard introduces several changes to their protocol. Thus, the standardized version’s formal security remains unexamined. We fill this gap by analyzing the IETF standard’s privately verifiable Privacy Pass protocol.

In particular, there are two main discrepancies between the analyzed and standardized versions. First, the IETF version introduces a redemption context that can be used for blindly embedding a validity period into the Privacy Pass tokens. We show that this variant differs significantly from the public metadata extension that has been proposed for the same purpose in the literature: the redemption context offers better privacy and security than public metadata. We capture both stronger guarantees through game-based security definitions and show that the currently considered one-more unforgeability notion for Privacy Pass is insufficient when a redemption context is used. Thus, we propose a new property, targeted context unforgeability, and prove its incomparability to one-more unforgeability. Second, Davidson et al. focused on a concrete Diffie-Hellman based construction, whereas the IETF version is built generically from a verifiable oblivious pseudorandom function (VOPRF). Further, the analyzed protocol omitted the full redemption phase needed to prevent double-spending. We prove that the generic IETF construction satisfies the desired security and privacy guarantees covering the full life-cycle of tokens. Our analysis relies on natural security properties of VOPRFs, providing compatibility with any secure VOPRF instantiation. This enables crypto agility, e.g., allowing a switch to efficient quantum-safe VOPRFs when they become available.
Barbara Jiabao Benedikt, Marc Fischlin
ePrint Report
Fiat-Shamir signatures replace the challenge in interactive identification schemes by a hash value over the commitment, the message, and possibly the signer's public key. This construction paradigm is well known and widely used in cryptography, for example, for Schnorr signatures and CRYSTALS-Dilithium. There is no ``general recipe'' for constructing Fiat-Shamir signatures regarding the inputs and their order for the hash computation, though, since the hash function is usually modeled as a monolithic random oracle. In practice, however, the hash function is based on the Merkle-Damgård or the sponge design.

Our work investigates whether there are advisable or imprudent input orders for hashing in Fiat-Shamir signatures. We examine Fiat-Shamir signatures with plain and nested hashing using Merkle-Damgård or sponge-based hash functions. We analyze these constructions in both classical and quantum settings. As part of our investigations, we introduce new security properties following the idea of quantum-annoyance of Eaton and Stebila (PQCrypto 2021), called annoyance for user exposure and signature forgeries. These properties ensure that an adversary against the hash function cannot gain a significant advantage when attempting to extend a successful attack on a single signature forgery to multiple users or to multiple forgeries of a single user. Instead, the adversary must create extra forgeries from scratch. Based on our analysis, we derive a simple rule: When using Fiat-Shamir signatures, one should hash the commitment before the message; all other inputs may be ordered arbitrarily.
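The derived ordering rule can be illustrated with a plain hash-based challenge derivation. The function below is a hypothetical sketch, not a scheme from the paper: it simply fixes the input order so the commitment is absorbed before the message, leaving a real scheme to fix domain separation and the mapping into its challenge space:

```python
import hashlib

def fs_challenge(commitment: bytes, message: bytes, pk: bytes) -> bytes:
    """Derive a Fiat-Shamir challenge, absorbing the commitment before
    the message per the ordering rule above. Illustrative only."""
    h = hashlib.sha256()
    h.update(commitment)   # commitment first
    h.update(message)      # then the message
    h.update(pk)           # remaining inputs in any fixed order
    return h.digest()

c = fs_challenge(b"commitment", b"message", b"public-key")
print(c.hex())
```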
Ganyuan Cao, Sylvain Chatel, Christian Knabenhans
ePrint Report
On-the-fly multi-party computation (MPC), introduced by López-Alt, Tromer, and Vaikuntanathan (STOC 2012), enables clients to dynamically join a computation without remaining continuously online. Yet, the original proposal suffers from substantial efficiency and expressivity limitations hindering practical deployments. Even though various techniques have been proposed to mitigate these shortcomings, seeing on-the-fly MPC as a combination of independent building blocks jeopardizes the security of the original model.

Thus, we revisit on-the-fly MPC in light of recent advances and extend its formal framework to incorporate efficiency and expressivity improvements. Our approach is built around \emph{multi-group homomorphic encryption} (MGHE), which generalizes threshold and multi-key HE and serves as the core primitive for on-the-fly MPC. Our contributions are fourfold: i) We propose new security notions for MGHE (e.g., IND-CPA with partial decryption, circuit privacy) and justify their suitability for on-the-fly MPC. ii) We present the first ideal functionality for MGHE in the Universal Composability (UC) framework and characterize the conditions under which it can be realized, via reductions to our proposed security notions. iii) We present a generic protocol that securely realizes our on-the-fly MPC functionality against a semi-malicious adversary from our MGHE functionality. iv) Finally, we provide two generic compilers that lift these protocols to withstand a fully malicious adversary by leveraging zero-knowledge arguments.

Our analysis in the UC framework enables modular protocol analysis, where more efficient schemes can be seamlessly substituted as long as they meet the required security defined by the functionalities, retaining the security guarantees offered by the original construction.
Jonas Janneck
ePrint Report
Following the announcement of the first winners of the NIST post-quantum cryptography standardization process in 2022, cryptographic protocols are now undergoing migration to the newly standardized schemes. In most cases, this transition is realized through a hybrid approach, in which algorithms based on classical hardness assumptions, such as the discrete logarithm problem, are combined with post-quantum algorithms that rely on quantum-resistant assumptions, such as the Short Integer Solution (SIS) problem. A combiner for signature schemes can be obtained by simply concatenating the signatures of both schemes. This construction preserves unforgeability of the underlying schemes; however, it does not extend to stronger notions, such as strong unforgeability. Several applications, including authenticated key exchange and secure messaging, inherently require strong unforgeability, yet no existing combiner is known to achieve this property. This work introduces three practical combiners that preserve strong unforgeability and all BUFF (beyond unforgeability features) properties. Each combiner is tailored to a specific class of classical signature schemes, capturing all broadly used schemes that are strongly unforgeable. Remarkably, all combiners can be instantiated with any post-quantum signature scheme in a black-box way, making deployment practical and significantly less error-prone. The proposed solutions are, moreover, highly efficient, with signatures at most the size of those of the (insecure) concatenation combiner. For instance, our most efficient combiner enables the combination of EdDSA with ML-DSA, yielding a signature size that is smaller than the sum of an individual EdDSA signature and an individual ML-DSA signature. Additionally, we identify a novel signature property that we call random-message validity and show that it can be used to replace the BUFF transform with the more efficient Pornin-Stern transform. The notion may be of independent interest.
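The concatenation combiner mentioned above is easy to sketch. The stand-in schemes below are HMAC-based toys (symmetric, so emphatically not real signature schemes); they serve only to show the combiner's structure: sign with both schemes and concatenate, verify both components:

```python
import hmac

class ToySig:
    """Stand-in for a signature scheme (HMAC-based and symmetric, so
    NOT a real signature scheme; purely to illustrate the combiner)."""
    def __init__(self, key: bytes, algo: str):
        self.key, self.algo = key, algo
    def sign(self, msg: bytes) -> bytes:
        return hmac.new(self.key, msg, self.algo).digest()
    def verify(self, msg: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(sig, self.sign(msg))

class ConcatCombiner:
    """Concatenation combiner: sigma = sigma1 || sigma2. Preserves
    plain unforgeability (a forgery needs both components), but not
    strong unforgeability: swapping one component for another valid
    signature on the same message gives a fresh valid pair."""
    def __init__(self, s1, s2, len1):
        self.s1, self.s2, self.len1 = s1, s2, len1
    def sign(self, msg: bytes) -> bytes:
        return self.s1.sign(msg) + self.s2.sign(msg)
    def verify(self, msg: bytes, sig: bytes) -> bool:
        return (self.s1.verify(msg, sig[:self.len1])
                and self.s2.verify(msg, sig[self.len1:]))

classical = ToySig(b"k1", "sha256")    # stand-in for a classical scheme
pq = ToySig(b"k2", "sha3_256")         # stand-in for a post-quantum scheme
comb = ConcatCombiner(classical, pq, 32)
sig = comb.sign(b"hello")
print(comb.verify(b"hello", sig))      # True
```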
Barbara Jiabao Benedikt, Sebastian Clermon, Marc Fischlin, Tobias Schmalz
ePrint Report
Signal's handshake protocol non-interactively generates a shared key between two parties for secure communication. The underlying protocol X3DH, on which the post-quantum hybrid successor, PQXDH, builds, computes three to four individual Diffie-Hellman (DH) keys by combining the long-term identity keys and the ephemeral secrets of the two parties. Each of these DH operations serves a different purpose, either to authenticate the derived key or to provide forward secrecy.

We present here an improved protocol for X3DH, which we call MuDH, and an improved protocol for PQXDH, pq-MuDH. Instead of computing the individual DH keys, we run a single multi-valued DH operation for integrating all contributions simultaneously into a single DH key. Our approach is based on techniques from batch verification (Bellare et al., Eurocrypt 1998), where one randomizes each contribution of the individual keys to get a secure scheme.

The solution results in execution times of roughly 60% of the original protocol, both in theory and in our benchmarks on mobile and desktop platforms. Our modifications are confined to the key derivation step; both Signal’s server infrastructure for public key retrieval and the message flow remain unchanged.
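The batch-verification idea referenced above (Bellare et al., Eurocrypt 1998) replaces many exponentiation checks by one randomized combined check. The sketch below applies the small-exponent test to discrete-log relations in a toy group; the parameters are illustrative, and this is not the MuDH protocol itself:

```python
import secrets

# Toy parameters: the Mersenne prime 2^127 - 1 as modulus and a fixed
# base. Illustrative only; real deployments use standardized groups.
p = 2**127 - 1
g = 3

def batch_verify(base, pairs, modulus, rand_bits=64):
    """Small-exponent batch test: instead of checking base^x_i == y_i
    for each pair (x_i, y_i), draw random r_i and check one combined
    equation  base^(sum r_i * x_i) == prod y_i^r_i  (mod modulus).
    A cheating pair passes only with negligible probability over r_i."""
    exps = [secrets.randbits(rand_bits) for _ in pairs]
    lhs = pow(base, sum(r * x for r, (x, _) in zip(exps, pairs)), modulus)
    rhs = 1
    for r, (_, y) in zip(exps, pairs):
        rhs = rhs * pow(y, r, modulus) % modulus
    return lhs == rhs

pairs = [(x, pow(g, x, p)) for x in (5, 11, 23)]
print(batch_verify(g, pairs, p))   # True for honestly formed pairs
```

The randomizers $r_i$ prevent an adversary from trading an error in one relation against another, which is exactly the role the randomization plays in the randomized multi-valued DH combination described above.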
Fuyuki Kitagawa, Ryo Nishimaki, Nikhil Pappu
ePrint Report
Secure key leasing (SKL) enables the holder of a secret key for a cryptographic function to temporarily lease the key using quantum information. Later, the recipient can produce a deletion certificate—a proof that they no longer have access to the secret key. The security guarantee ensures that even a malicious recipient cannot continue to evaluate the function after producing a valid deletion certificate.

Most prior work considers an adversarial recipient that obtains a single leased key, which is insufficient for many applications. In the more realistic collusion-resistant setting, security must hold even when polynomially many keys are leased (and subsequently deleted). However, achieving collusion-resistant SKL from standard assumptions remains poorly understood, especially for functionalities beyond decryption.

We improve upon this situation by introducing new pathways for constructing collusion-resistant SKL. Our main contributions are as follows:

- A generalization of quantum-secure collusion-resistant traitor tracing called multi-level traitor tracing (MLTT), and a compiler that transforms an MLTT scheme for a primitive $X$ into a collusion-resistant SKL scheme for primitive $X$.

- The first bounded collusion-resistant SKL scheme for PRFs, assuming LWE.

- A compiler that upgrades any single-key secure SKL scheme for digital signatures into one with unbounded collusion-resistance, assuming OWFs.

- A compiler that upgrades collusion-resistant SKL schemes with classical certificates to ones having verification-query resilience, assuming OWFs.
Xinyu Zhang, Ziyi Li, Ron Steinfeld, Raymond K. Zhao, Joseph K. Liu, Tsz Hon Yuen
ePrint Report
In this work, we present a novel commit-and-open $\Sigma$-protocol based on the Legendre and power residue PRFs. Our construction leverages the oblivious linear evaluation (OLE) correlations inherent in PRF evaluations and requires only black-box access to a tree-PRG-based vector commitment. By applying the standard Fiat-Shamir transform, we obtain a post-quantum signature scheme, Pegasus, which achieves short signature sizes (6025 to 7878 bytes) with efficient signing (3.910 to 19.438 ms) and verification times (3.942 to 18.999 ms). Furthermore, by pre-computing the commitment phase, the online response time can be reduced to as little as 0.047 to 0.721 ms. We prove the security of Pegasus in both the classical random oracle model (ROM) and the quantum random oracle model (QROM), filling a gap left by prior PRF-based signature schemes.

We further develop a ring signature scheme, PegaRing, that preserves the three-move commit-and-open structure of Pegasus. Compared to the previous PRF-based ring signature DualRing-PRF (ACISP 2024), PegaRing reduces the constant communication overhead by more than half and achieves significantly faster signing and verification. For a ring size of 1024, PegaRing yields signatures of 29 to 32 KB, with signing times of 8 to 44 ms and verification times of 6 to 31 ms, depending on the parameters. Finally, we prove the security of PegaRing in both the ROM and the QROM; to the best of our knowledge, this makes it the first ring signature based on symmetric-key primitives with practical performance and provable QROM security.
Tomoyuki Morimae, Yuki Shirakawa, Takashi Yamakawa
ePrint Report
One-way puzzles (OWPuzzs) introduced by Khurana and Tomer [STOC 2024] are a natural quantum analogue of one-way functions (OWFs), and one of the most fundamental primitives in ''Microcrypt'' where OWFs do not exist but quantum cryptography is possible. OWPuzzs are implied by almost all quantum cryptographic primitives, and imply several important applications such as non-interactive commitments and multi-party computations. A significant goal in the field of quantum cryptography is to base OWPuzzs on plausible assumptions that will not imply OWFs. In this paper, we base OWPuzzs on hardness of non-collapsing measurements. To that end, we introduce a new complexity class, $\mathbf{SampPDQP}$, which is a sampling version of the decision class $\mathbf{PDQP}$ introduced in [Aaronson, Bouland, Fitzsimons, and Lee, ITCS 2016]. We show that if $\mathbf{SampPDQP}$ is hard on average for quantum polynomial time, then OWPuzzs exist. We also show that if $\mathbf{SampPDQP}\not\subseteq\mathbf{SampBQP}$, then auxiliary-input OWPuzzs exist. $\mathbf{SampPDQP}$ is the class of sampling problems that can be solved by a classical polynomial-time deterministic algorithm that can make a single query to a non-collapsing measurement oracle, which is a ''magical'' oracle that can sample measurement results on quantum states without collapsing the states. Such non-collapsing measurements are highly unphysical operations that should be hard to realize in quantum polynomial time, and therefore our assumptions on which OWPuzzs are based seem extremely plausible. Moreover, our assumptions do not seem to imply OWFs, because the possibility of inverting classical functions would not be helpful to realize quantum non-collapsing measurements. We also study upper bounds on the hardness of $\mathbf{SampPDQP}$.
We introduce a new primitive, distributional collision-resistant puzzles (dCRPuzzs), which are a natural quantum analogue of distributional collision-resistant hashing [Dubrov and Ishai, STOC 2006]. We show that dCRPuzzs imply average-case hardness of $\mathbf{SampPDQP}$ (and therefore OWPuzzs as well). We also show that two-message honest-statistically-hiding commitments with classical communication and one-shot message authentication codes (MACs), which are a privately-verifiable version of one-shot signatures [Amos, Georgiou, Kiayias, Zhandry, STOC 2020], imply dCRPuzzs.
Supriya Adhikary, Puja Mondal, Angshuman Karmakar
ePrint Report
Zero-knowledge succinct arguments of knowledge (zkSNARKs) provide short privacy-preserving proofs for general NP problems. Public verifiability of zkSNARK protocols is a desirable property, where the proof can be verified by anyone once generated. Designated-verifier zero-knowledge is useful when only one or a few individuals should have access to the verification result. Existing zkSNARK schemes are either fully publicly verifiable or verifiable only by a designated verifier holding a secret verification key.

In this work, we propose a new notion of a hybrid verification mechanism. Here, the prover generates a proof that can be verified by a designated verifier. For this proof, the designated verifier can generate auxiliary information with its secret key. The combination of this proof and the auxiliary information allows any public verifier to verify the proof without any other information. We also introduce necessary security notions and mechanisms to identify a cheating designated verifier or the prover. Our hybrid verification zkSNARK construction is based on module lattices and adapts the zkSNARK construction by Ishai et al. (CCS 2021). In this construction, the designated verifier is required only once after proof generation to create the publicly verifiable proof. Our construction achieves a small constant-size proof and fast verification time, which is linear in the statement size.