IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
19 July 2025
Yuntian Chen, Zhanyong Tang, Tianpei Lu, Bingsheng Zhang, Zhiying Shi, Zhiyuan Ning
Privacy-preserving machine learning (PPML) is critical for protecting sensitive data in domains like healthcare, finance, and recommendation systems. Fully Homomorphic Encryption (FHE) and Secure Multi-Party Computation (MPC) are key enablers of secure computation, yet existing hybrid approaches often suffer from fixed protocol assignments, resulting in inefficiencies across diverse network environments, such as LANs and WANs. To address this, we introduce CostSphere, a cost-model-driven framework that dynamically assigns FHE and MPC protocols to optimize computational efficiency under varying network conditions. Utilizing a predictive cost model based on MLIR’s TOSA-level dialect and an ILP-based solver, CostSphere ensures robust performance for Transformer-based models. Experimental results demonstrate that CostSphere delivers $6.68\times$ to $12.92\times$ improvements in inference runtime compared to state-of-the-art solutions like BumbleBee (NDSS ’25), enabling scalable and network-agnostic PPML across diverse computational scenarios.
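To make the idea of a cost-model-driven protocol assignment concrete, here is a minimal, hypothetical sketch: a toy per-layer cost model (invented numbers, invented layer names) and a dynamic-programming search over FHE/MPC choices with a conversion penalty. The real CostSphere framework uses an MLIR TOSA-level cost model and an ILP solver; this sketch only illustrates why the optimal assignment changes between LAN and WAN settings.

```python
# Toy sketch of cost-model-driven FHE/MPC assignment (illustrative only;
# cost figures, layer names, and the conversion penalty are assumptions).
from functools import lru_cache

def layer_cost(layer, protocol, rtt_ms):
    base = {"matmul": 40.0, "softmax": 15.0, "layernorm": 8.0}[layer]
    if protocol == "FHE":
        return 2.5 * base              # heavy local computation, no interaction
    return base + 2 * rtt_ms           # cheaper compute, but pays network rounds

CONVERT_MS = 12.0                       # assumed cost of switching FHE <-> MPC

def best_assignment(layers, rtt_ms):
    @lru_cache(maxsize=None)
    def solve(i, prev):
        if i == len(layers):
            return 0.0, ()
        best = None
        for proto in ("FHE", "MPC"):
            c = layer_cost(layers[i], proto, rtt_ms)
            if prev is not None and proto != prev:
                c += CONVERT_MS
            total, rest = solve(i + 1, proto)
            cand = (c + total, (proto,) + rest)
            best = cand if best is None or cand[0] < best[0] else best
        return best
    return solve(0, None)

layers = ["matmul", "softmax", "matmul", "layernorm"]
print("LAN (1 ms RTT): ", best_assignment(layers, rtt_ms=1.0))
print("WAN (40 ms RTT):", best_assignment(layers, rtt_ms=40.0))
```

Under the assumed costs, low-latency networks favor the interactive MPC protocols, while high-latency networks push the assignment toward FHE, which is the trade-off the cost model is meant to capture.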
Jianhua Wang, Tao Huang, Guang Zeng, Tianyou Ding, Shuang Wu, Siwei Sun
We introduce a concrete instance of the LRW+ paradigm: the Three-Hash Framework (THF), a mode for constructing tweakable block ciphers that employs three hash functions to process tweak inputs. Through a general practical cryptanalysis of THF in both single-tweak and multiple-tweak settings, and by comparing it with two representative modes, 4-LRW1 and 2-LRW2, we demonstrate that THF has the potential to achieve lower latency due to its more balanced resistance to both single- and multiple-tweak attacks.
Based on this framework, we design Blink, a family of tweakable block ciphers specifically optimized for minimal latency. The hash functions in Blink were carefully chosen to align with the low-latency requirements, ensuring both efficiency and strong security. A thorough cryptanalysis of Blink is provided, with an emphasis on its resistance to multiple-tweak attacks.
Finally, hardware evaluations highlight the exceptional low-latency performance of Blink, distinguishing it as one of the most efficient tweakable block ciphers.
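For background, the sketch below shows the classic LRW2 composition, the starting point of the LRW+ paradigm mentioned above: a tweakable block cipher built as $\tilde{E}_{K,h}(T, M) = E_K(M \oplus h(T)) \oplus h(T)$. It does not reproduce the Three-Hash Framework or Blink; the toy Feistel cipher and the SHAKE-based tweak hash are placeholders chosen only to make the composition runnable.

```python
# Background sketch of the LRW2 tweakable block-cipher composition.
# The block cipher and hash below are toy placeholders, not the paper's design.
import hashlib

BLOCK_BYTES = 8  # 64-bit toy block

def _round_f(half: bytes, key: bytes, rnd: int) -> int:
    return int.from_bytes(hashlib.shake_128(key + bytes([rnd]) + half).digest(4), "big")

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    """A 4-round Feistel permutation on 64-bit blocks (toy placeholder, not secure)."""
    l = int.from_bytes(block[:4], "big")
    r = int.from_bytes(block[4:], "big")
    for rnd in range(4):
        l, r = r, l ^ _round_f(r.to_bytes(4, "big"), key, rnd)
    return l.to_bytes(4, "big") + r.to_bytes(4, "big")

def tweak_hash(hash_key: bytes, tweak: bytes) -> bytes:
    """Keyed hash of the tweak (SHAKE128 here; an AXU hash in real LRW2 instances)."""
    return hashlib.shake_128(hash_key + tweak).digest(BLOCK_BYTES)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def lrw2_encrypt(key: bytes, hash_key: bytes, tweak: bytes, msg: bytes) -> bytes:
    mask = tweak_hash(hash_key, tweak)
    return xor(toy_block_encrypt(key, xor(msg, mask)), mask)

ct1 = lrw2_encrypt(b"K" * 16, b"H" * 16, b"tweak-1", b"8bytes!!")
ct2 = lrw2_encrypt(b"K" * 16, b"H" * 16, b"tweak-2", b"8bytes!!")
print(ct1.hex(), ct2.hex())  # same message, different tweaks -> different ciphertexts
```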
Shuaishuai Li, Anyu Wang, Cong Zhang, Xiaoyun Wang
The field of private information retrieval (PIR) has made significant strides with a recent focus on protocols that offer sublinear online time, ensuring efficient access to public databases without compromising the privacy of the queries. The pioneering two-server PIR protocols developed by Corrigan-Gibbs and Kogan (EUROCRYPT 2020) enjoy the dual benefits of sublinear online time and statistical security. This allows their protocols to provide high efficiency and resist computationally unbounded adversaries. In this work, we extend this seminal work to the symmetric PIR (SPIR) context, where the protocol must ensure that the client learns only the requested database entries, with no knowledge of the remaining data. This enhancement aligns with scenarios where the confidentiality of non-requested information is as critical as the privacy of the query itself. Our main result is the introduction of the first two-server SPIR protocols that achieve both sublinear online time and statistical security, together with an enhancement for achieving sublinear amortized time. Our protocols require a pragmatic level of shared randomness between the servers, which, however, is necessary for achieving statistical security in two-server SPIR, as shown by Gertner et al. (STOC 1998).
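As background for the two-server setting, the sketch below shows the classic information-theoretic two-server PIR based on XOR queries. It has linear server work and no symmetry, so it is not the paper's sublinear-online-time SPIR protocol; it only illustrates how two non-colluding servers hide the queried index.

```python
# Classic two-server XOR-based PIR (background sketch, not the paper's protocol).
import secrets

def query(n: int, i: int):
    s1 = {j for j in range(n) if secrets.randbits(1)}   # uniformly random subset
    s2 = s1 ^ {i}                                       # differs only at index i
    return s1, s2                                       # each subset alone hides i

def answer(db: list[bytes], subset: set[int]) -> bytes:
    acc = bytes(len(db[0]))
    for j in subset:
        acc = bytes(a ^ b for a, b in zip(acc, db[j]))  # XOR of selected records
    return acc

def reconstruct(a1: bytes, a2: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a1, a2))         # everything cancels but db[i]

db = [f"record-{j}".ljust(16).encode() for j in range(8)]
s1, s2 = query(len(db), i=5)
print(reconstruct(answer(db, s1), answer(db, s2)))      # b'record-5        '
```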
Gökçe Düzyol, Muhammed Said Gündoğan, Atakan Arslan
FrodoKEM is a post-quantum key encapsulation mechanism based on plain Learning With Errors (LWE). In contrast to module-lattice-based schemes, it relies on an unstructured variant of the LWE problem, providing more conservative and better-understood security guarantees. As a result, FrodoKEM has been recommended by European cybersecurity agencies such as BSI and ANSSI, and has also been proposed in international standardization efforts, including ISO and the IETF Internet-Draft process.
In this paper, we explore hardware-level parallelization techniques for FrodoKEM. To date, the only notable attempt to parallelize FrodoKEM in hardware was made by Howe et al. in 2021. In their work, the SHAKE function was identified as a performance bottleneck and replaced by the Trivium stream cipher. However, this replacement renders the implementation incompatible with standardized recommendations. In contrast, our work adheres strictly to the original FrodoKEM specification, including its use of SHAKE as the PRNG, and introduces a scalable architecture enabling high-throughput parallel execution.
For FrodoKEM-640, we present parallel architectures for key generation, encapsulation, and decapsulation. Our implementation achieves between 976 and 1077 operations per second, making it the fastest FrodoKEM hardware implementation reported to date. Furthermore, we propose a general architecture that offers a scalable area-throughput trade-off: by increasing the number of DSPs and proportionally scaling BRAM usage, our design can be scaled to achieve significantly higher performance beyond the reported implementation. This demonstrates that SHAKE is not inherently a barrier to parallel matrix multiplication, and that efficient, standard-compliant FrodoKEM implementations are achievable for high-speed cryptographic applications.
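The row-level parallelism exploited above comes from the fact that each row of FrodoKEM's public matrix $A$ is derived independently from the seed via SHAKE128. The sketch below follows the general shape of that expansion; the exact domain separation, byte order, and reduction mod $q$ should be checked against the FrodoKEM specification and are treated here as assumptions. The threads only illustrate the independence; in hardware each row maps to a separate SHAKE core.

```python
# Sketch of row-wise generation of FrodoKEM's matrix A from SHAKE128
# (structure follows the spec in spirit; byte-level details are assumptions).
import hashlib
from concurrent.futures import ThreadPoolExecutor

N_DIM = 640          # FrodoKEM-640 matrix dimension
Q = 1 << 15          # modulus 2^15

def gen_row(seed_a: bytes, i: int) -> list[int]:
    # row index (2 bytes, little-endian) || seed_A fed into SHAKE128
    stream = hashlib.shake_128(i.to_bytes(2, "little") + seed_a).digest(2 * N_DIM)
    return [int.from_bytes(stream[2*j:2*j+2], "little") % Q for j in range(N_DIM)]

def gen_matrix_parallel(seed_a: bytes, workers: int = 8) -> list[list[int]]:
    # every row depends only on (i, seed_A), so rows can be produced in parallel
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda i: gen_row(seed_a, i), range(N_DIM)))

A = gen_matrix_parallel(b"\x00" * 16)
print(len(A), len(A[0]), A[0][:4])
```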
Dimitri Koshelev, Youssef El Housni, Georgios Fotiadis
A major challenge in elliptic curve cryptosystems is mitigating the small-subgroup attack efficiently. This paper explores batch subgroup membership testing (SMT) on pairing-friendly curves, particularly for the Barreto--Lynn--Scott family of embedding degree 12 (BLS12) due to its critical role in modern pairing-based cryptography. Our research introduces a novel two-step procedure for batch SMT to rapidly verify multiple points at once, combining existing tests based on the Tate pairing and a non-trivial curve endomorphism. We explain why the proposed technique is significantly faster (despite a negligible error probability) than testing each point individually. Moreover, it is applicable to prominent curves such as BLS12-381 and BLS12-377, which are frequently employed in zero-knowledge applications. Furthermore, to further enhance the speed (or reduce the error probability) of the proposed batch point validation, we have generated two new BLS12 curves that are specifically optimized for this purpose. We also provide an open-source high-speed software implementation in Go, showcasing the significant performance improvements achieved by our work.
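The generic pattern behind batch membership testing is to replace many individual checks by one check of a random linear combination, accepting a small error probability. The toy sketch below shows that pattern on an additive group $\mathbb{Z}_n$ with a prime-order subgroup; it does not reproduce the paper's actual procedure (Tate-pairing test combined with a curve endomorphism on BLS12 curves), and all parameters are illustrative.

```python
# Toy illustration of batched subgroup membership via a random linear combination.
import secrets

R = 2**31 - 1            # prime subgroup order (toy)
COFACTOR = 1009          # toy prime cofactor
N = R * COFACTOR         # toy group order; the subgroup is the multiples of COFACTOR

def in_subgroup(p: int) -> bool:
    """Individual test: p has order dividing R iff R*p == 0 in Z_N."""
    return (R * p) % N == 0

def batch_in_subgroup(points: list[int]) -> bool:
    """One combined test; a bad point slips through only with small probability
    (about 1/COFACTOR in this toy group; analyzed precisely in the paper's setting)."""
    combo = sum(secrets.randbits(64) * p for p in points) % N
    return in_subgroup(combo)

good = [COFACTOR * secrets.randbelow(R) for _ in range(100)]
bad = good + [7]                                             # 7 lies outside the subgroup
print(batch_in_subgroup(good), batch_in_subgroup(bad))       # True False (w.h.p.)
```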
El Hadji Mamadou DIA, Walid ARABI, Anis BKAKRIA, Reda YAICH
Decision trees are extensively employed in artificial intelligence and machine learning due to their interpretability, efficiency, and robustness, qualities that are particularly valued in sensitive domains such as healthcare, finance, and cybersecurity. In response to evolving data privacy regulations, there is an increasing demand for models that ensure data confidentiality during both training and inference. Homomorphic encryption emerges as a promising solution by enabling computations directly on encrypted data without exposing plaintext inputs. This survey provides a comprehensive review of privacy-preserving decision tree protocols leveraging homomorphic encryption. After introducing fundamental concepts and the adopted methodology, a dual-layer taxonomy is presented, encompassing system and data characteristics as well as employed processing techniques. This taxonomy facilitates the classification and comparison of existing protocols, evaluating their effectiveness in addressing key challenges related to privacy, efficiency, usability, and deployment. Finally, current limitations, emerging trends, and future research directions are discussed to enhance the security and practicality of homomorphic encryption frameworks for decision trees in privacy-sensitive applications.
Sengim Karayalcin, Marina Krcek, Stjepan Picek
Deep learning-based side-channel analysis (DLSCA) represents a powerful paradigm for running side-channel attacks. State-of-the-art DLSCA can break multiple targets with only a single attack trace, requiring minimal feature engineering. As such, DLSCA also represents an extremely active research domain for both industry and academia. At the same time, this high level of activity makes it increasingly difficult to discern the current trends and challenges.
In this systematization of knowledge, we provide a critical outlook on a number of developments in DLSCA in the last year, allowing us to offer concrete suggestions. Moreover, we examine the reproducibility perspective, finding that many works still struggle to provide results that can be used by the community.
Elie Eid, Aurélien Greuet, Nathan Reboud, Rina Zeitoun
FrodoKEM is a conservative lattice-based KEM based on the Learning With Errors problem. While it was not selected for NIST standardization, it remains a strong candidate for high-security applications and is recommended by several national agencies, including BSI, ANSSI, and the EUCC. Its reliance on CDT-based Gaussian sampling presents a significant challenge for side-channel secure implementations. While recent work by Gérard and Guerreau [GG25] has shown that masking FrodoKEM is feasible, the Gaussian sampler remains a major bottleneck, accounting for between 34% and 65% of the execution time. In this work, we introduce a new high-order masking gadget for CDT sampling, provably secure in the ISW probing model and significantly more efficient than previous approaches. We instantiate and evaluate our design in the context of FrodoKEM, with a complete first-order implementation on Cortex-M3, reflecting the most relevant threat model in practice.
Compared with [GG25] at first order, the cost of the sampler is reduced by at least 82% and the number of random generations by at least 69%.
Higher-order security is also fully supported through a generic C implementation, with some selected gadgets hand-optimized in assembly to improve efficiency.
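To make the role of the CDT sampler concrete, the sketch below shows (unmasked) CDT-based sampling in the comparison-count style used by FrodoKEM-like schemes: a random value is compared against every entry of a cumulative distribution table, the number of entries it exceeds gives the magnitude, and a separate random bit gives the sign. The table is illustrative rather than FrodoKEM's actual CDT, Python offers no genuine constant-time guarantees, and the paper's high-order masking gadget is not shown.

```python
# Unmasked, comparison-count CDT sampler (illustrative table and parameters).
import secrets

# Illustrative 15-bit CDT for a narrow, centered discrete distribution.
CDT = [9000, 20000, 28000, 31500, 32600, 32750, 32766]

def sample_cdt() -> int:
    r = secrets.randbits(16)
    sign = r & 1            # lowest bit selects the sign
    t = r >> 1              # remaining 15 bits are compared against the table
    mag = 0
    for entry in CDT:
        # branch-free comparison: adds 1 exactly when t > entry
        mag += (entry - t) >> 15 & 1
    return -mag if sign else mag

hist = {}
for _ in range(20000):
    e = sample_cdt()
    hist[e] = hist.get(e, 0) + 1
print(dict(sorted(hist.items())))
```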
Tim Ruffing
As of November 2021, Bitcoin supports “Taproot” spending policies whose on-chain format is a single elliptic curve point. A transaction spending the funds associated with a Taproot policy can be authorized by interpreting the curve point either (a) as a public key of the Schnorr signature scheme and providing a suitable signature, or (b) as a commitment to alternative spending conditions and satisfying those.
Since a sufficiently powerful quantum adversary would be able to forge Schnorr signatures, an upgrade to Bitcoin may, at some point in the future, disable the ability to spend existing funds via Schnorr signatures in order to prevent the havoc created by leaving a large fraction of the currency supply prone to theft. However, to avoid irrevocably losing all funds not migrated in time to (yet to be added) post-quantum signature schemes, it will be desirable for an upgrade disabling Schnorr signatures to retain the ability to spend funds by interpreting the curve point in a Taproot policy as a commitment to alternative spending conditions.
This paper justifies such an upgrade strategy by demonstrating the post-quantum security of Taproot as a commitment scheme. Specifically, it provides concrete upper bounds on the probability that a quantum adversary making some number of queries to a quantum random oracle can break the binding or hiding property. Since the bounds follow from powerful existing results, which enable reasoning as if dealing with a classical adversary, the proofs are accessible without a background in quantum computing.
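The commitment structure at stake is the BIP341-style output key $Q = P + H_{\mathrm{TapTweak}}(\mathrm{xonly}(P)\,\|\,\mathrm{root})\cdot G$, where the root commits to the alternative spending conditions. The sketch below illustrates that structure with pure-Python secp256k1 arithmetic; the x-only/even-y normalization and other BIP341 encoding details are simplified and should be treated as assumptions.

```python
# Sketch of the Taproot output-key commitment (simplified BIP340/341 structure).
import hashlib

# secp256k1 parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    if a is None: return b
    if b is None: return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if a == b:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def point_mul(k, pt):
    out = None
    while k:
        if k & 1:
            out = point_add(out, pt)
        pt = point_add(pt, pt)
        k >>= 1
    return out

def tagged_hash(tag: str, msg: bytes) -> bytes:
    th = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(th + th + msg).digest()

def taproot_output_key(internal_key, merkle_root: bytes):
    """Q = P + H_TapTweak(xonly(P) || merkle_root) * G."""
    xonly = internal_key[0].to_bytes(32, "big")
    t = int.from_bytes(tagged_hash("TapTweak", xonly + merkle_root), "big") % N
    return point_add(internal_key, point_mul(t, G))

internal = point_mul(0x1D, G)                       # toy internal key
root = hashlib.sha256(b"alternative spending conditions").digest()
print(hex(taproot_output_key(internal, root)[0]))   # x-coordinate placed on-chain
```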
Yufei Yuan, Haiyi Xu, Lei Zhang, Wenling Wu
In this paper, we revisit the standard approach to constructing neural distinguishers in symmetric cryptanalysis and introduce a game-like model, the Coin-Tossing model, to generalize this methodology. From the perspective of Boolean functions, we show that many classical cryptanalytic techniques can be generalized as a specific family of Boolean functions, termed the CPF class. We further explore the connection between learning CPF Boolean functions in the Coin-Tossing model and the well-known Learning Parity with Noise (LPN) problem. Leveraging this theoretical analysis, we identify key attributes of CPF functions that significantly affect how effectively machine learning algorithms can learn them. To validate our conclusions, we also conduct extensive experiments based on machine learning algorithms. Incorporating our theoretical insights, we propose advanced 8-round and 9-round neural distinguishers for SPECK32/64 by reducing the problem complexity. Additionally, we propose a method based on the Goldreich-Levin algorithm to analyze and interpret what black-box distinguishers learn. Using this approach, we reinterpret several established neural distinguishers in terms of their Fourier expansion, resolving them into a small number of Fourier terms. Notably, we identify a new type of distinguisher learned by neural networks that had not previously been discovered by cryptanalysts, which can be considered a variant of the Differential-Linear distinguisher. We also demonstrate that the neural network not only learned the optimal Differential-Linear (DL) distinguishers found using existing MILP/MIQCP models, but also discovered even superior ones.
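The basic quantity behind such a Fourier-based interpretation is a single Fourier coefficient $\hat{f}(S) = \mathbb{E}_x[f(x)\,\chi_S(x)]$, which can be estimated from black-box samples. The minimal worked example below estimates one coefficient for a toy $\pm 1$-valued function (a noisy parity), standing in for a trained distinguisher; it is not the paper's Goldreich-Levin procedure, only the estimation step it builds on.

```python
# Estimating a Fourier coefficient of a Boolean function by sampling.
import random

n = 16
S = frozenset({1, 4, 7})      # chi_S(x) = (-1)^(sum of x_i for i in S)

def f(x):
    """Toy +-1-valued 'distinguisher': parity over S, flipped with probability 0.1."""
    val = (-1) ** sum(x[i] for i in S)
    return -val if random.random() < 0.1 else val

def chi(T, x):
    return (-1) ** sum(x[i] for i in T)

def estimate_fourier_coeff(T, samples=50000):
    """hat{f}(T) = E_x[f(x) * chi_T(x)], estimated empirically."""
    acc = 0
    for _ in range(samples):
        x = [random.getrandbits(1) for _ in range(n)]
        acc += f(x) * chi(T, x)
    return acc / samples

print(estimate_fourier_coeff(S))                   # close to 1 - 2*0.1 = 0.8
print(estimate_fourier_coeff(frozenset({0, 2})))   # close to 0
```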
Keewoo Lee
A Private Information Retrieval (PIR) scheme allows a client to retrieve data from a database hosted on a remote server without revealing which location is being accessed. In Doubly-Efficient PIR (DEPIR), the server preprocesses the database offline into a data structure that enables it to answer any client query in sublinear time with respect to the database size $N$. The breakthrough work of Lin-Mook-Wichs (STOC’23) presented the first DEPIR construction from the Ring-LWE assumption. This remains essentially the only known construction to date, and realizing DEPIR from alternative foundations is an open problem. In particular, constructing DEPIR from plain LWE is still open.
In this work, we present a simple LWE-based construction of DEPIR in the CRS model. Our construction is inspired by SimplePIR (USENIX’23) and leverages the matrix–vector multiplication preprocessing of Williams (SODA’07). By applying the compiler of Lin–Mook–Wichs (Eurocrypt’25), we also obtain an sk-DEPIR scheme, a keyed variant of DEPIR, based on LWE in the standard model. All existing sk-DEPIR constructions, except the one implied by LMW23 based on Ring-LWE, rely on non-standard code-based assumptions. While our constructions are only barely doubly-efficient, with server computation of $O(N / \log{N})$, it was previously unknown whether even such modest sublinear efficiency could be achieved from unstructured, plain LWE.
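The flavor of the lookup-table preprocessing for matrix-vector products is sketched below: columns are grouped into blocks of roughly $\log N$ entries, every 0/1 pattern inside a block is pre-summed offline, and an online query then costs one table lookup per block, i.e. about $N/\log N$ additions per row. Whether this mirrors the paper's exact use of Williams' preprocessing is an assumption; the sketch is only meant to convey the source of the mild sublinearity.

```python
# "Four Russians"-style lookup preprocessing for matrix-vector products over Z_q.
import math, random

Q = 97

def preprocess(matrix, b):
    """For each row and each block of b columns, tabulate all 2^b partial sums."""
    tables = []
    for row in matrix:
        row_tables = []
        for start in range(0, len(row), b):
            block = row[start:start + b]
            tab = [0] * (1 << len(block))
            for mask in range(1, 1 << len(block)):
                low = mask & -mask                      # lowest set bit
                tab[mask] = (tab[mask ^ low] + block[low.bit_length() - 1]) % Q
            row_tables.append(tab)
        tables.append(row_tables)
    return tables

def query(tables, x, b):
    """Multiply the preprocessed matrix by a 0/1 vector x using one lookup per block."""
    block_masks = []
    for start in range(0, len(x), b):
        bits = x[start:start + b]
        block_masks.append(sum(bit << i for i, bit in enumerate(bits)))
    return [sum(tab[m] for tab, m in zip(row_tables, block_masks)) % Q
            for row_tables in tables]

n = 64
b = max(1, int(math.log2(n)))
A = [[random.randrange(Q) for _ in range(n)] for _ in range(8)]
x = [random.randrange(2) for _ in range(n)]
assert query(preprocess(A, b), x, b) == [sum(a * v for a, v in zip(row, x)) % Q for row in A]
print("lookup-based result matches the naive matrix-vector product")
```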
Anders Lindman
Cascader, a novel key-exchange protocol based on an iterative multiplicative recurrence over a finite field, is introduced. In contrast to standard methods, e.g., traditional Diffie–Hellman and ECC, it replaces exponentiation and scalar multiplication with layered products, achieving commutativity and deterministic pseudorandom behavior.
Hugo Beeloo-Sauerbier Couvée, Antonia Wachter-Zeh, Violetta Weger
The seminal work of [Bardet et al., 2020] has shown the potential of algebraic modelings to solve the Rank Syndrome Decoding Problem (R-SDP). For most parameter ranges, their algorithm first needs to perform a guessing step, making it a hybrid approach between combinatorial and algebraic solvers. In this paper, we present a novel guessing technique, which, when applied to [Bardet et al., 2020], is able to attack the proposed parameters of RYDE, a second-round candidate for the additional NIST standardization process. In particular, we show that all of the proposed RYDE parameters have to be updated: We can reduce the security of RYDE-1 to 104 bits, RYDE-3 to 153 bits, and RYDE-5 to 210 bits.
Janis Adamek, Aikata Aikata, Ahmad Al Badawi, Andreea Alexandru, Armen Arakelov, Philipp Binfet, Victor Correa, Jules Dumezy, Sergey Gomenyuk, Valentina Kononova, Dmitrii Lekomtsev, Vivian Malone ...
Fully Homomorphic Encryption (FHE) enables computation over encrypted data and is considered a fundamental tool for privacy-preserving systems. Despite significant theoretical progress, its practical adoption remains limited. One contributing factor is the absence of reusable, application-level components suitable for integration into real-world systems.
This work introduces a library of FHE components developed through a competition-based framework. The components are outcomes of a series of formalized challenges published on the FHERMA platform, each targeting a specific operation, such as comparison, sorting, or matrix operations, under concrete cryptographic and performance constraints.
This initial release includes contributions from independent researchers and reflects a variety of approaches across different FHE schemes. The library is intended to expand over time as new challenges are introduced and solved, forming a foundation for building and evaluating privacy-preserving applications.
16 July 2025
Jules Dumezy, Andreea Alexandru, Yuriy Polyakov, Pierre-Emmanuel Clet, Olive Chakraborty, Aymen Boudguiga
The Cheon--Kim--Kim--Song (CKKS) scheme is a fully homomorphic encryption scheme that traditionally supports only the evaluation of smooth functions. Recent works have enabled the evaluation of arbitrary (discontinuous) integer functions represented as lookup tables (LUT) on small inputs using the method of functional bootstrapping (FBT). Although well-suited for small integers (up to around 10 bits), the efficiency of FBT quickly declines for large LUTs, and a considerable increase in both runtime and memory requirements is observed. Building on CKKS functional bootstrapping, we propose in this paper two functional bootstrapping algorithms, specifically designed to target larger LUTs (up to 20 bits). For a 16-bit LUT, our implementation in OpenFHE achieves a speed-up of $47.5\times$ in amortized time and $95.1\times$ in latency for single-threaded execution, compared to the state-of-the-art CKKS-based functional bootstrapping method of Alexandru et al. (CRYPTO'25).
Pierre Daix-Moreux, Chengru Zhang
Despite the growing popularity of blockchains, their scalability remains a significant challenge. Layer-2s (L2s) aim to address this by introducing an operator to process transactions off-chain and post compact summaries to the Layer-1 (L1). However, existing L2 designs struggle with unsatisfactory throughput improvements, complex exit games, limited data availability, or high computational overhead for users.
This paper introduces PlasmaFold, a novel L2 designed to overcome these limitations. PlasmaFold utilizes a hybrid architecture: an operator (aggregator) generates proofs on the server side for the honest construction of blocks, while users maintain balance proofs on their own devices. This separation of concerns enables instant, non-interactive exits via balance proofs, while block proofs handle most of the validations, minimizing users’ costs. By leveraging Incrementally Verifiable Computation (IVC), PlasmaFold achieves concrete efficiency. Users can update their balance proofs within a browser in under 1 second per transaction using less than 1 GB of RAM. Furthermore, only the identities of users who have acknowledged data receipt are posted to L1, ensuring data availability with a minimal on-chain footprint. This design keeps L1 costs extremely low, enabling a theoretical throughput of over 14000 transactions per second.
Décio Luiz Gazzoni Filho, Gora Adj, Slim Bettaieb, Alessandro Budroni, Jorge Chávez-Saab, Francisco Rodríguez-Henríquez
Many post-quantum cryptosystems require generating an $n$-bit binary vector with a prescribed Hamming weight $\omega$, a process known as \emph{fixed-weight sampling}. When $\omega = O(n)$, we call this \emph{dense} fixed-weight sampling, which commonly appears in lattice-based cryptosystems, like those in the NTRU family. In contrast, code-based cryptosystems typically use \emph{sparse} fixed-weight sampling with $\omega = O(\sqrt{n})$. Sparse fixed-weight sampling generally involves three constant-time steps to keep the sampled vector secret: 1. sample $\omega$ nearly uniform random integers from a series of decreasing intervals; 2. map these integers into a set of $\omega$ distinct indices in $[0, n)$, called the \emph{support}; 3. generate a binary $n$-bit vector with bits set only at the support indices. Remarkably, some of the core algorithms employed in fixed-weight sampling date back nearly a century, yet developing efficient and secure techniques remains essential for modern post-quantum cryptographic applications. In this paper, we present novel algorithms for steps two and three of the fixed-weight sampling process. We demonstrate their practical applicability by replacing the current fixed-weight sampling routine in the HQC post-quantum key exchange mechanism, recently selected for NIST standardization. We rigorously prove that our procedures are sound, secure, and introduce little to no bias. Our implementation of the proposed algorithms accelerates step 2 by up to $2.63\times$ and step 3 by up to $5.20\times$ compared to an optimized version of the fixed-weight sampler currently used in HQC. Since fixed-weight sampling constitutes a significant portion of HQC’s execution time, these speedups translate into protocol-level improvements of up to $1.36\times$, $1.23\times$ and $1.17\times$ for key generation, encapsulation and decapsulation, respectively.
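The three steps can be illustrated with Floyd's classic algorithm for drawing a uniform $\omega$-subset, sketched below. This is not the paper's algorithm: the set-membership test branches on secret data and is therefore not constant time, which is precisely the kind of issue the paper's procedures address; the parameters are illustrative rather than HQC's.

```python
# Sparse fixed-weight sampling via Floyd's algorithm (illustrative, not constant time).
import secrets

def fixed_weight_vector(n: int, w: int) -> list[int]:
    # Steps 1+2: draw w integers from growing ranges and resolve collisions so
    # that the result is a uniform support of w distinct indices in [0, n).
    support = set()
    for i in range(n - w, n):
        j = secrets.randbelow(i + 1)           # uniform in [0, i]
        support.add(j if j not in support else i)
    # Step 3: expand the support into an n-bit 0/1 vector.
    vec = [0] * n
    for idx in support:
        vec[idx] = 1
    return vec

v = fixed_weight_vector(n=1000, w=66)          # sparse weight, illustrative sizes
print(sum(v), len(v))                          # 66 1000
```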
Jung Hee Cheon, Jihwan Kim, Yongdong Yeo
The Cheon-Kim-Kim-Song (CKKS) homomorphic encryption scheme is widely adopted for securely evaluating circuits over real numbers, such as those arising in privacy-preserving machine learning (PPML), because it efficiently supports approximate floating-point arithmetic of messages. A CKKS ciphertext has a finite level, which corresponds to the budget for how many multiplicative operations can be applied. Once these levels are consumed, the ciphertext must be refreshed through a bootstrapping procedure to restore its capacity for further computation. However, bootstrapping itself also consumes a significant number of levels, leaving fewer levels after each bootstrapping.
In this work, we propose three techniques—OverModRaise1, OverModRaise2, and Tuple-C2S/S2C—that target reductions in the modulus consumption of C2S/S2C among the CKKS bootstrapping procedures, without introducing substantial overhead or compromising security. By combining these techniques, our implementation demonstrates a 27–61% throughput improvement compared to the state-of-the-art bootstrapping.
Takeshi Yoshida, Keita Emura
Ateniese et al. (CRYPTO 2019/JoC 2021) introduced a cryptographic primitive which they call matchmaking encryption (ME), and Identity-based ME (IB-ME) is its identity-based variant. IB-ME supports an equality matching where a sender (encryptor) indicates a receiver's (decryptor's) identity (rcv) in addition to their own ID ($\sigma$), and a receiver indicates a sender's identity (snd) in addition to their own identity ($\rho$). A ciphertext is decrypted if $(\sigma,\rho)=$(snd,rcv). In this paper, we observe that the search condition of public key authenticated encryption with keyword search (PAEKS) (Huang-Li, Information Sciences 2017) is reminiscent of this equality matching. We introduce a public key variant of ME, which we call PK-ME, and propose a generic construction of PK-ME from PAEKS. As a conceptual contribution, our work reveals a connection between ME and public key searchable encryption, which have so far been researched independently. Due to the generic construction of PAEKS (Li-Boyen, IACR CiC 2024), we can instantiate the proposed generic construction from pairings or lattices. We also introduce a weaker version of authenticity and show that it can be instantiated from several complexity assumptions. Finally, we discuss the advantages and disadvantages of PK-ME compared to IB-ME.
Rahul Ilango
A zero-knowledge proof demonstrates that a fact (like that a Sudoku puzzle has a solution) is true while, counterintuitively, revealing nothing else (like what the solution actually is). This remarkable guarantee is extremely useful in cryptographic applications, but it comes at a cost. A classical impossibility result by Goldreich and Oren [J. Cryptol. '94] shows that zero-knowledge proofs must necessarily sacrifice basic properties of traditional mathematical proofs --- namely perfect soundness (that no proof of a false statement exists) and non-interactivity (that a proof can be transmitted in a single message).
Contrary to this impossibility, we show that zero-knowledge with perfect soundness and no interaction is effectively possible. We do so by defining and constructing a powerful new relaxation of zero-knowledge. Intuitively, while the classical zero-knowledge definition requires that an object called a simulator actually exists, our new definition only requires that one cannot rule out that a simulator exists (in a particular logical sense). Using this, we show that **every falsifiable security property of (classical) zero-knowledge can be achieved with no interaction, no setup, and perfect soundness.** This enables us to remove interaction and setup from (classical) zero-knowledge in essentially all of its applications in the literature, at the relatively mild cost that such applications now have security that is "game-based" instead of "simulation-based."
Our construction builds on the work of Kuykendall and Zhandry [TCC '20] and relies on two central, longstanding, and well-studied assumptions that we show are also necessary. The first is the existence of non-interactive witness indistinguishable proofs, which follows from standard assumptions in cryptography. The second is Krajícek and Pudlák's 1989 conjecture that no optimal proof system exists. This is one of the main conjectures in the field of proof complexity and is the natural finitistic analogue of the impossibility of Hilbert's second problem (and, hence, also Gödel's incompleteness theorem). Our high-level idea is to use these assumptions to construct a prover and verifier where no simulator exists, but the non-existence of a simulator is independent (in the logical sense of unprovability) of an arbitrarily strong logical system. One such logical system is the standard axioms of mathematics: ZFC.