International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

20 February 2022

Charles Bouillaguet
ePrint Report
This paper discusses the implications of choosing a computational model to study the cost of cryptographic attacks and thereby quantify how dangerous they are. This choice is often made unconsciously, and the chosen model itself is usually left implicit; but it has repercussions on security evaluations.

We compare three reasonable computational models: $i)$ the usual Random Access Machine (RAM) model; $ii)$ the ``Expensive Memory Model'' explicitly introduced by several third-round submissions to the NIST Post-Quantum competition (it states that a single access to a large memory costs as much as many local operations); $iii)$ the venerable VLSI model using the Area-Time cost measure.

It is well-known that costs in the RAM model are lower than costs in the last two models. These have been claimed to be more realistic, and therefore to lead to more precise security evaluations. The main technical contribution of this paper is to show that the last two models are incomparable. We identify a situation where the expensive memory model overestimates costs compared to the (presumably even more realistic) VLSI model.

In addition, optimizing the cost in each model is a distinct objective that leads to different attack parameters, and this raises the question of what is the ``best'' way to proceed for an eventual attacker. We illustrate these discrepancies by studying several generic attacks against hash functions and Feistel networks in the three models.
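To make the distinction concrete, the toy Python sketch below charges a single attack ($T$ local operations, $A$ accesses to a memory of $M$ words) under the three charging rules described above; the $\sqrt{M}$ per-access surcharge and all parameter values are our own illustrative assumptions, not figures from the paper.

```python
import math

# Illustrative cost functions for a generic attack performing T local
# operations and A accesses to a memory of M words. The charging rules
# follow the abstract's descriptions; the sqrt(M) access surcharge is a
# common convention, not a value taken from the paper.

def ram_cost(T, A, M):
    # RAM model: every operation, memory accesses included, costs 1.
    return T + A

def expensive_memory_cost(T, A, M):
    # "Expensive Memory Model": a single access to a large memory costs
    # as much as many local operations (here: sqrt(M) of them).
    return T + A * math.sqrt(M)

def vlsi_cost(T, A, M):
    # VLSI model, Area-Time measure: a machine holding M words has area
    # ~M, and we count T + A time steps.
    return M * (T + A)

# A memory-heavy attack is cheap in the RAM model but penalized in the
# other two models, and penalized by different amounts:
T, A, M = 2**50, 2**45, 2**40
for cost in (ram_cost, expensive_memory_cost, vlsi_cost):
    print(f"{cost.__name__}: 2^{math.log2(cost(T, A, M)):.1f}")
```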
Ariana Goh, Lim Chu Wee, Yan Bo Ti
ePrint Report
In this paper we generalise Ti's fault attack and the loop-abort fault attacks on supersingular isogeny cryptosystems (genus one) to genus two. Genus two isogeny-based cryptosystems are generalisations of their genus one counterparts; as such, attacks on the latter are believed to generalise to the former.

Fault attacks on supersingular elliptic curve isogeny cryptography have been shown to be practical. We show in this paper that fault attacks continue to be practical in genus two, albeit with a few additional traces required.
Richard Allen, Ratip Emin Berker, Sílvia Casacuberta, Michael Gul
ePrint Report
In this paper, we provide a comprehensive overview of a recent debate over the quantum versus classical solvability of bounded distance decoding (BDD). Specifically, we review the work of Eldar and Hallgren [EH22], [Hal21] demonstrating a quantum algorithm solving $\lambda_1 2^{-\Omega(\sqrt{k \log q})}$-BDD in polynomial time for lattices of periodicity $q$, finite group rank $k$, and shortest lattice vector length $\lambda_1$. Subsequently, we prove the results of [DvW21a], [DvW21b] with far greater detail and elaboration than in the original work. Namely, we show that there exists a deterministic, classical algorithm achieving the same result.
Senyang Huang, Orr Dunkelman, Orna Agmon Ben-Yehuda, Alexander Maximov
ePrint Report
The Keccak sponge function family, designed by Bertoni et al. in 2007, was selected by the U.S. National Institute of Standards and Technology (NIST) in 2012 as the next generation of Secure Hash Algorithm (SHA-3). Due to its theoretical and practical importance, cryptanalysis of SHA-3 has attracted increasing attention. To the best of our knowledge, the most powerful collision attack on SHA-3 to date is the linearisation technique proposed by Jian Guo et al. However, that technique is infeasible for variants with a smaller input space, such as SHA3-384.

In this work we improve previous results with three ideas which were not used in previous works against SHA-3. First, in order to reduce constraints and increase flexibility in our solutions, we use 2-block messages instead of 1-block messages. Second, we reduce the connectivity problem to a satisfiability (SAT) problem, instead of applying the linearisation technique. Finally, we consider two new non-random properties of the Keccak non-linear layer and propose an efficient deduce-and-sieve algorithm based on these properties.

The resulting collision-finding algorithm on 4-round SHA3-384 has a practical time complexity of $2^{59.64}$ (and memory complexity of $2^{45.93}$). This greatly improves the previous best known collision attack by Dinur et al., whose $2^{147}$ time complexity was far from practical. Although our attack does not threaten the security margin of the SHA-3 hash function, the tools developed in this paper could be useful in future analysis of other cryptographic primitives and also in the development of new and faster SAT solvers.
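As a rough illustration of the encode-and-solve workflow the abstract describes (and emphatically not the paper's actual encoding of the Keccak connectivity problem), the sketch below feeds a toy CNF constraint to an off-the-shelf solver through the python-sat package.

```python
from pysat.solvers import Glucose3

# Toy CNF illustrating the encode-and-solve workflow. Variables 1, 2, 3
# are bits a, b, c; we assert a XOR b = c together with a = b = 1.
xor_clauses = [[1, 2, -3], [1, -2, 3], [-1, 2, 3], [-1, -2, -3]]
and_clauses = [[1], [2]]  # force a = b = 1

with Glucose3(bootstrap_with=xor_clauses + and_clauses) as solver:
    if solver.solve():
        print(solver.get_model())  # [1, 2, -3], i.e. a=1, b=1, c=0
```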
Adithya Bhat, Aniket Kate, Kartik Nayak, Nibesh Shrestha
ePrint Report
Distributed random beacons publish random numbers at regular intervals, which anyone can obtain and verify. The design of public distributed random beacons has been an exciting research direction with significant implications for blockchains, voting, and beyond. Random beacons, in addition to being bias-resistant and unpredictable, also need to have low communication cost, low latency, and ease of reconfigurability. Existing works on synchronous random beacons sacrifice one or more of these properties.

In this work, we design an efficient unpredictable synchronous random beacon protocol, OptRand, with quadratic (in the number $n$ of system nodes) communication complexity per beacon output. First, we innovate by employing a novel combination of bilinear-pairing-based publicly verifiable secret sharing and non-interactive zero-knowledge proofs to build a linear-sized (in $n$) publicly verifiable random sharing. Second, we develop a state machine replication protocol with linear-sized inputs that is also optimistically responsive, i.e., it can progress responsively at actual network speed during optimistic conditions, despite the synchrony assumption, and thus incurs low latency. In addition, we present an efficient reconfiguration mechanism for OptRand that allows nodes to leave and join the system.
Lawrence Roy
ePrint Report
Given a small number of base oblivious transfers (OTs), how does one generate a large number of extended OTs as efficiently as possible? The answer has long been the seminal work of IKNP (Ishai et al., Crypto 2003) and the family of protocols it inspired, which only use Minicrypt assumptions. Recently, Boyle et al. (Crypto 2019) proposed the Silent-OT technique that improves on IKNP, but at the cost of a much stronger, non-Minicrypt assumption: the learning parity with noise (LPN) assumption. We present SoftSpokenOT, the first OT extension to improve on IKNP's communication cost in the Minicrypt model. While IKNP requires security parameter $\lambda$ bits of communication for each OT, SoftSpokenOT only needs $\lambda / k$ bits, for any $k$, at the expense of requiring $2^{k-1} / k$ times the computation. For small values of $k$, this tradeoff is favorable since IKNP-style protocols are network-bound. We implemented SoftSpokenOT and found that our protocol gives almost a $5 \times$ speedup over IKNP in the LAN setting.
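The communication/computation tradeoff stated above is easy to tabulate; the following sketch simply evaluates the two formulas from the abstract for a few values of $k$ (with $\lambda = 128$, our choice).

```python
# Tradeoff of SoftSpokenOT as stated in the abstract: lambda/k bits of
# communication per extended OT at 2^(k-1)/k times the computation
# (IKNP is the k = 1 row).
LAMBDA = 128  # security parameter, in bits

for k in range(1, 9):
    comm = LAMBDA / k          # bits sent per OT
    comp = 2**(k - 1) / k      # computation relative to IKNP
    print(f"k={k}: {comm:6.1f} bits/OT, {comp:5.1f}x computation")
```

For small $k$ the communication drops sharply while the computation overhead stays modest, which is why the tradeoff favors network-bound settings.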

Our technique is based on a novel silent protocol for vector oblivious linear evaluation (VOLE) over polynomial-sized fields. We created a framework to build maliciously secure 1-of-N OT extension from this VOLE, revisiting the existing work for each step. Along the way, we found several flaws in the existing work, including a practical attack against the consistency check of Patra et al. (NDSS 2017), while also making some improvements.
Andrew Park, Wei-Kai Lin, Elaine Shi
ePrint Report
We propose a new garbled RAM construction called NanoGRAM, which achieves an amortized cost of $\widetilde{O}(\lambda \cdot (W \log N + \log^3 N))$ bits per memory access, where $\lambda$ is the security parameter, $W$ is the block size, $N$ is the total number of blocks, and $\widetilde{O}(\cdot)$ hides $poly\log\log$ factors. For sufficiently large blocks where $W = \Omega(\log^2 N)$, our scheme achieves $\widetilde{O}(\lambda \cdot W \log N)$ cost per memory access, where the dependence on $N$ is optimal (barring $poly\log\log$ factors) in terms of the evaluator's runtime. Our asymptotic performance matches even the {\it interactive} state of the art (modulo $poly\log\log$ factors), that is, running Circuit ORAM atop garbled circuits, and yet we remove the logarithmic number of interactions necessary in this baseline. Furthermore, we achieve an asymptotic improvement over the recent work of Heath et al. Our scheme adopts the same assumptions as the mainstream literature on practical garbled circuits, i.e., circular correlation-robust hashes or a random oracle. We evaluate the concrete performance of NanoGRAM and compare it with a couple of baselines that are asymptotically less efficient. We show that NanoGRAM starts to outperform the naive linear-scan garbled RAM at a memory size of $N = 2^9$ and starts to outperform the recent construction of Heath et al. at $N = 2^{13}$.

Finally, as a by-product, we also show the existence of a garbled RAM scheme assuming only one-way functions, with an amortized cost of $\widetilde{O}(\lambda^2 \cdot (W \log N + \log^3 N))$ per memory access. Again, the dependence on $N$ is nearly optimal for blocks of size $W = \Omega(\log^2 N)$ bits.
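For intuition about the cost expression, the snippet below (our own back-of-the-envelope arithmetic, ignoring constants and $poly\log\log$ factors) evaluates $\lambda \cdot (W \log N + \log^3 N)$ at the block-size threshold $W = \log^2 N$.

```python
# Sample evaluations of the amortized cost from the abstract,
# ~ lambda * (W * log N + log^3 N) bits per access. At W = log^2 N the
# two terms coincide; for larger W the W*log N term takes over.
lam = 128
for logN in (10, 20, 30):
    W = logN**2  # the "sufficiently large block" regime W = Omega(log^2 N)
    total = lam * (W * logN + logN**3)
    print(f"log N = {logN:2d}, W = {W:4d}: ~2^{total.bit_length() - 1} bits/access")
```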
Arasu Arun, Joseph Bonneau, Jeremy Clark
ePrint Report
We introduce the short-lived proof, a non-interactive proof of knowledge with a novel feature: after a specified period of time, the proof is no longer convincing. This time-delayed loss of soundness happens "naturally" without further involvement from the prover or any third party. We propose formal definitions for short-lived proofs as well as the special case of short-lived signatures. We show several practical constructions built using verifiable delay functions (VDFs). The key idea in our approach is to allow any party to forge any proof by executing a large sequential computation. Some constructions achieve a stronger property called reusable forgeability in which one sequential computation allows forging an arbitrary number of proofs of different statements. Our work also introduces two novel types of VDFs, re-randomizable VDFs and zero-knowledge VDFs, which may be of independent interest.
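A minimal toy model of the key idea, under our own simplifying assumptions: we stand in for the VDF with iterated SHA-256 (which, unlike a real VDF, forces the verifier to redo the sequential work), and a proof is accepted if it is either a genuine witness or the output of the sequential computation on a fresh beacon value.

```python
import hashlib

# Toy model (ours, not the paper's construction): a "proof" for a
# statement, relative to beacon b, is EITHER a valid witness OR the
# output of T sequential hash steps on b. After roughly T sequential
# time anyone can forge, so the proof stops being convincing.

T = 100_000  # sequential-work parameter, illustrative only

def seq_work(seed: bytes) -> bytes:
    h = seed
    for _ in range(T):
        h = hashlib.sha256(h).digest()
    return h

def verify(beacon: bytes, proof: bytes, witness_ok) -> bool:
    # Accept a genuine witness, or evidence of T sequential steps.
    return witness_ok(proof) or proof == seq_work(beacon)

beacon = b"todays-beacon-value"
# A forger holding no witness still convinces the verifier after T steps:
print(verify(beacon, seq_work(beacon), witness_ok=lambda p: False))  # True
```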
André Schrottenloher, Marc Stevens
ePrint Report
Meet-in-the-middle (MITM) is a general paradigm where internal states are computed along two independent paths ('forwards' and 'backwards') that are then matched. Over time, MITM attacks have improved, using more refined techniques and exploiting additional freedoms and structure, which makes it more involved to find and optimize such attacks. This has led to the use of detailed attack models for generic solvers to automatically search for improved attacks, notably a MILP model developed by Bao et al. at EUROCRYPT 2021. In this paper, we study a simpler MILP modeling combining a greatly reduced attack representation as input to the generic solver, together with a theoretical analysis that, for any solution, proves the existence and complexity of a detailed attack. This modeling allows us to find both classical and quantum attacks on a broad class of cryptographic permutations. First, Present-like constructions, with the permutations of the Spongent hash functions: we improve the MITM step in distinguishers by up to 3 rounds. Second, AES-like designs: despite being much simpler than Bao et al.'s, our model allows us to recover the best previous results. The only limitation is that we do not use degrees of freedom from the key schedule. Third, we show that the model can be extended to target more permutations, like Feistel networks. In this context we give new Guess-and-determine attacks on reduced Simpira v2 and Sparkle. Finally, using our model, we find several new quantum preimage and pseudo-preimage attacks (e.g. Haraka v2, Simpira v2, ...) targeting the same number of rounds as the classical attacks.
Thibauld Feneuil, Antoine Joux, Matthieu Rivain
ePrint Report
Zero-knowledge proofs of knowledge are useful tools for designing signature schemes. The ongoing effort to build a quantum computer urges the cryptography community to develop new secure cryptographic protocols based on quantum-hard cryptographic problems. One of the few directions is code-based cryptography, for which the strongest problem is syndrome decoding (SD) for random linear codes. This problem is known to be NP-hard, and the state of the art of its cryptanalysis has been stable for many years. A zero-knowledge protocol for this problem was pioneered by Stern in 1993. Since its publication, many articles have proposed optimizations, implementations, or variants.

In this paper, we introduce a new zero-knowledge proof for the syndrome decoding problem on random linear codes. Instead of using permutations like most of the existing protocols, we rely on the MPC-in-the-head paradigm, in which we reduce the task of proving the low Hamming weight of the SD solution to proving some relations between specific polynomials. Specifically, we propose a 5-round zero-knowledge protocol that proves the knowledge of a vector $x$ such that $y = Hx$ and $\mathrm{wt}(x) \leq w$, and which achieves a soundness error close to $1/N$ for an arbitrary $N$.
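For a sense of the parameters involved (our own illustration; the paper's actual instantiations may differ), a signature built from a protocol with soundness error about $1/N$ needs roughly $\tau$ parallel repetitions so that $(1/N)^\tau \le 2^{-128}$:

```python
import math

# Repetition counts for 128-bit soundness given per-execution soundness
# error ~1/N. Larger N means fewer repetitions (shorter signatures) but
# more computation per repetition. Illustrative values only.
for N in (8, 32, 256, 1024):
    tau = math.ceil(128 / math.log2(N))
    print(f"N = {N:4d}: tau = {tau:2d} repetitions")
```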

When turning this protocol into a signature scheme, we achieve a signature size of 11-12 KB for 128-bit security when relying on the hardness of the SD problem on binary fields. Using larger fields (like $\mathbb{F}_{2^8}$), we can produce fast signatures of around 8 KB. This allows us to outperform Picnic3 and to be competitive with SPHINCS+, both post-quantum signature candidates in the ongoing NIST standardization effort. Moreover, our scheme outperforms all the existing code-based signature schemes for the common ``signature size + public key size'' metric.
Sebastian Kolby, Divya Ravi, Sophia Yakoubov
ePrint Report
YOSO MPC (Gentry et al., Crypto 2021) is a new MPC framework where each participant can speak at most once. This models an adaptive adversary’s ability to watch the network and corrupt or destroy parties it deems significant based on their communication. By using private channels to anonymous receivers (e.g. by encrypting to a public key whose owner is unknown), the communication complexity of YOSO MPC can scale sublinearly with the total number $N$ of available parties, even when the adversary’s corruption threshold is linear in $N$ (e.g. just under $N/2$). It was previously an open problem whether YOSO MPC can achieve guaranteed output delivery in a constant number of rounds without relying on trusted setup. In this work, we show that this can indeed be accomplished. Using linearly homomorphic encryption and secret sharing, we construct YOSO-LHE, which is the first realistically efficient YOSO MPC protocol that achieves guaranteed output delivery without trusted setup. YOSO-LHE is not itself constant-round; it takes $O(d)$ rounds of communication, where $d$ is the depth of the circuit being computed. However, YOSO-LHE can be used to bootstrap any constant-round YOSO protocol that requires setup, by generating that setup within YOSO-LHE. As long as the complexity of the setup is independent of the circuit to be evaluated, the bootstrapped protocol will be constant-round.
Seunghwan Lee, Dong-Joon Shin
ePrint Report
We propose a floating-point fully homomorphic encryption (FPFHE) scheme based on torus fully homomorphic encryption equipped with programmable bootstrapping. Specifically, FPFHE is implemented for 32-bit and 64-bit floating-point messages, which is state-of-the-art in terms of precision. Also, a ciphertext is constructed to check whether an overflow occurred while evaluating arithmetic circuits with FPFHE, which is useful when the message space or arithmetic circuit is too complex to estimate a bound on the outputs, as in deep learning applications.
Nir Bitansky, Sapir Freizeit
ePrint Report
We construct a two-message oblivious transfer protocol with statistical sender privacy (SSP OT) based on the Learning Parity with Noise (LPN) assumption and a standard Nisan-Wigderson style derandomization assumption. Beyond being of interest on their own, SSP OT protocols have proven to be a powerful tool toward minimizing the round complexity in a wide array of cryptographic applications, from proof systems, through secure computation protocols, to hard problems in statistical zero knowledge (SZK). The protocol is plausibly post-quantum secure. The only other construction with plausible post-quantum security is that of Brakerski and Döttling (TCC '18) based on the Learning with Errors (LWE) assumption. Lacking the geometric structure of LWE, our construction and analysis rely on a different set of techniques. Technically, we first construct an SSP OT protocol in the common random string model from LPN alone, and then derandomize the common random string. Most of the technical difficulty lies in the first step. Here we prove a robustness property of the inner-product randomness extractor against a certain type of linear splitting attacks. A caveat of our construction is that it relies on the so-called low-noise regime of LPN. This aligns with our current complexity-theoretic understanding of LPN, which only in the low-noise regime is known to imply hardness in SZK.
Jian Guo, Guozhen Liu, Ling Song, Yi Tu
ePrint Report
In this work, we focus on collision attacks against instances of the SHA-3 hash family in both classical and quantum settings. Since the 5-round collision attacks on SHA3-256 and other variants proposed by Guo et al. at JoC 2020, no other essential progress has been published. With a thorough investigation, we identify that the challenges of extending such collision attacks on SHA-3 to more rounds lie in the inefficiency of differential trail search. To overcome this obstacle, we develop a SAT-based automatic search toolkit. The tool is used in multiple intermediate steps of the collision attacks and exhibits surprisingly high efficiency in differential trail search and other optimization problems encountered in the process. As a result, we present the first 6-round classical collision attack on SHAKE128 with time complexity \cpshake, which also forms a quantum collision attack with quantum time \cpshakeq, and the first 6-round quantum collision attacks on SHA3-224 and SHA3-256 with quantum time \cpshattf and \cpshatfs, all with negligible requirements of classical and quantum memory. The fact that classical collision attacks do not apply to 6-round SHA3-224 and SHA3-256 shows the higher coverage of quantum collision attacks, which is consistent with the observation on SHA-2 made by Hosoyamada and Sasaki at CRYPTO 2021.
Liu Zhang, Zilong Wang, Boyang Wang
ePrint Report
In CRYPTO '19, Gohr proposed a new cryptanalysis strategy using machine learning algorithms. Combining a neural distinguisher with a differential path and an advanced key recovery procedure, Gohr achieved a 12-round key recovery attack on Speck32/64. Chen and Yu improved the accuracy of the neural distinguisher by deriving features from multiple-ciphertext pairs instead of single-ciphertext pairs. Bao et al. presented the concept of generalized neutral bits, improved the 12-round attack, and devised a practical 13-round key recovery attack on Speck32/64 by enhancing the differential path prepended on top of the neural distinguisher. To capture more dimensional information, inspired by the Inception blocks in GoogLeNet, we use multiple parallel convolutional layers as the core of the network structure to train neural distinguishers. For Speck32/64, we improve the accuracy of the (5-8)-round neural distinguishers and train a new 9-round neural distinguisher. For Simon32/64, we improve the accuracy of the (7-11)-round neural distinguishers and train a new 12-round neural distinguisher. In addition, we extend the idea of neutral bits in Gohr's work to satisfy the requirement that the multiple-plaintext pairs in the key recovery attack follow the same distribution. Under the combined effect of multiple improvements, the time complexity of our (11-13)-round key recovery attacks on Speck32/64 is reduced to a certain extent. Surprisingly, the success rate of our 12-round key recovery attack reaches $100\%$. For Simon32/64, we increase the success rate and decrease the time complexity of the 16-round key recovery attack. Also, we successfully implement a 17-round key recovery attack. Unfortunately, because the accuracy of the 9-round neural distinguisher for Speck32/64 and the 12-round neural distinguisher for Simon32/64 is lower, they cannot be used for key recovery attacks on more rounds. The source codes are available at https://drive.google.com/drive/folders/17AWYupor_bX2rEe000tPuppdNH4dlmHm?usp=sharing.
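A minimal sketch of the Inception-style core described above, written by us in PyTorch rather than taken from the authors' repository; the kernel sizes and channel counts are placeholders.

```python
import torch
import torch.nn as nn

# Several parallel 1-D convolutions with different kernel sizes, whose
# outputs are concatenated Inception-style, so later layers can combine
# multi-scale features of ciphertext-pair bit sequences.

class InceptionBlock(nn.Module):
    def __init__(self, in_ch: int, branch_ch: int = 32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, branch_ch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5, 7)  # hypothetical kernel sizes
        ])
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(torch.cat([b(x) for b in self.branches], dim=1))

# e.g. a batch of 64 samples, 1 input channel, 64 bits each:
x = torch.randn(64, 1, 64)
print(InceptionBlock(1)(x).shape)  # torch.Size([64, 128, 64])
```

Each branch sees the same input at a different receptive-field width; the channel-wise concatenation is what lets the network capture "more dimensional information" than a single convolution.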
Si Gao, Elisabeth Oswald
ePrint Report
Non-specific leakage detection (initially introduced as “Test Vector Leakage Assessment”, TVLA for short) plays a vital role in practice because it detects (potential) leaks independently of assumptions about the leakage model and any specific attack vector. However, the non-specific nature means detected leaks might not be exploitable, and thus the current state of the art is to employ a battery of specific attacks to confirm the detection outcomes.

We propose a novel leakage assessment framework which makes it possible to link non-specific leakage detection outcomes with the size of the key guess that is necessary to exploit them. We thereby solve the problem of deciding whether or not a leak is exploitable without the need for specific attacks. Our methodology furthermore reveals, for a detected leak, the specific key bytes involved, and with that, it allows the construction of confirmatory attacks. This novel approach is enabled by casting the leakage detection problem as the statistical task of building key-dependent regression models: if such a model exists, then we know that the point leaks. Depending on the size and nature of the model, we can further judge the exploitability and provide a concrete attack vector.
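The toy below (our own illustration, far simpler than the proposed framework) shows the flavour of "leakage detection as key-dependent regression": simulated traces leak the Hamming weight of a hypothetical S-box output, and the key guess whose regression model fits best both flags the leak and names the exploiting key byte.

```python
import numpy as np

# For each key guess k, fit a linear model predicting the measured
# leakage from a key-dependent intermediate hw(SBOX[p ^ k]); a good fit
# means the point leaks, and the best-fitting k is the key byte.

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)  # stand-in S-box, not from any real cipher

def hw(v):  # Hamming weight
    return bin(int(v)).count("1")

pts = rng.integers(0, 256, size=2000)
true_key = 0x3A
leak = np.array([hw(SBOX[p ^ true_key]) for p in pts]) + rng.normal(0, 1, 2000)

def fit_quality(k):
    x = np.array([hw(SBOX[p ^ k]) for p in pts], dtype=float)
    X = np.column_stack([x, np.ones_like(x)])
    resid = leak - X @ np.linalg.lstsq(X, leak, rcond=None)[0]
    return 1 - resid.var() / leak.var()  # R^2 of the key-dependent model

best = max(range(256), key=fit_quality)
print(hex(best), round(fit_quality(best), 3))  # recovers 0x3a with high R^2
```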
Thomas Attema, Ignacio Cascudo, Ronald Cramer, Ivan Bjerre Damgård, Daniel Escudero
ePrint Report
Compressed $\Sigma$-Protocol Theory (CRYPTO 2020) presents an ``alternative'' to Bulletproofs that achieves the same communication complexity while adhering more elegantly to existing $\Sigma$-protocol theory, which enables their techniques to be directly applicable to other widely used settings in the context of ``plug \& play'' algorithmics. Unfortunately, their techniques are restricted to arithmetic circuits over \emph{prime} fields, which rules out the possibility of using more machine-friendly moduli such as powers of $2$, which have proven to improve efficiency in applications. In this work we show that such techniques can be generalized to the case of arithmetic circuits modulo \emph{any} number. This enables the use of powers of $2$, which can prove to be beneficial for efficiency, but it also facilitates the use of other moduli that might prove useful in different applications.

In order to achieve this, we first present an instantiation of the main building block of the theory of compressed $\Sigma$-protocols, namely compact vector commitments. Our construction, which may be of independent interest, is homomorphic modulo any positive integer $m$, a result that was not known in the literature before. Second, we generalize the Compressed $\Sigma$-Protocol Theory from finite fields to $\mathbb{Z}_m$. The main challenge here is ensuring that there are large enough challenge sets to fulfill the necessary soundness requirements, which is achieved by considering certain ring extensions. Our techniques have applications to publicly verifiable zero-knowledge proofs of correct computation on homomorphically encrypted data, where for a more flexible parameter instantiation it is useful that the ciphertext space is allowed to be a modular or Galois ring rather than a field: concretely, our protocols can be plugged in as a commit-and-prove argument into a recent result on efficient verifiable computation schemes on encrypted data with context-hiding (PKC '21) which exploited this advantage.
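To see why ring extensions are needed, the brute-force sketch below (ours, for intuition only) searches $\mathbb{Z}_m$ for challenge sets whose pairwise differences are all invertible; for $m$ a power of $2$ the maximum size is just $2$, whereas a degree-$d$ ring extension enlarges it to $p^d$ for $p$ the smallest prime dividing $m$.

```python
from math import gcd

# Soundness needs challenges whose pairwise differences are invertible
# mod m. This exhaustive search (small m only) shows how tiny such
# "exceptional sets" are in Z_m itself; in general the maximum size is
# the smallest prime dividing m.

def max_exceptional_set(m):
    best = []
    def extend(cur, start):
        nonlocal best
        if len(cur) > len(best):
            best = cur[:]
        for c in range(start, m):
            if all(gcd(c - d, m) == 1 for d in cur):
                cur.append(c)
                extend(cur, c + 1)
                cur.pop()
    extend([], 0)
    return best

for m in (7, 8, 12, 16):
    s = max_exceptional_set(m)
    print(f"m = {m:2d}: size {len(s)}: {s}")
# m = 7 (a prime field) admits all 7 elements; m = 8, 12, 16 admit only 2.
```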
Orel Cosseron, Clément Hoffmann, Pierrick Méaux, François-Xavier Standaert
ePrint Report
Hybrid Homomorphic Encryption (HHE) reduces the amount of client-side computation and bandwidth usage in a Fully Homomorphic Encryption (FHE) framework. HHE requires specific symmetric schemes that can be evaluated homomorphically and efficiently. In this paper, we introduce the paradigm of Group Filter Permutator (GFP) as a generalization of the Improved Filter Permutator paradigm introduced by Méaux et al. From this paradigm, we specify Elisabeth, a family of stream ciphers, and give an instance: Elisabeth-4. After proving the security of this scheme, we provide a Rust implementation of it and show that its performance is comparable to state-of-the-art HHE. The true strength of Elisabeth lies in the operations available server-side: while the best HHE applications were limited to a few multiplications server-side, we used data sent through Elisabeth-4 to homomorphically evaluate a neural network inference. Finally, we discuss the improvements and losses between the HHE and the FHE frameworks and give ideas to build more efficient schemes from the Elisabeth family.
Rishab Goyal, Vinod Vaikuntanathan
ePrint Report
Aggregate signatures (Boneh, Gentry, Lynn, Shacham, Eurocrypt 2003) enable compressing a set of $N$ signatures on $N$ different messages into a short aggregate signature. This reduces the space complexity of storing the signatures from linear in $N$ to a fixed constant (that depends only on the security parameter). However, verifying the aggregate signature requires access to all $N$ messages, resulting in the complexity of verification being at least $\Omega(N)$.

In this work, we introduce the notion of locally verifiable aggregate signatures that enable efficient verification: given a short aggregate signature $\sigma$ (corresponding to a set $\mathcal{M}$ of $N$ messages), the verifier can check whether a particular message $m$ is in the set, in time independent of $N$. Verification does not require knowledge of the entire set $\mathcal{M}$. We demonstrate many natural applications of locally verifiable aggregate signature schemes: in the context of certificate transparency logs; in blockchains; and for redacting signatures, even when all the original signatures are produced by a single user.
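A hypothetical interface sketch (ours, not the paper's API) that contrasts the two verification modes; names and signatures are illustrative.

```python
from dataclasses import dataclass
from typing import Protocol

# Classic aggregate verification must read all N messages (Omega(N));
# local verification checks a single message m with a short auxiliary
# hint, in time independent of N.

@dataclass
class AggSig:
    sigma: bytes  # short aggregate of signatures on N messages

class LocallyVerifiableAggregate(Protocol):
    def aggregate(self, sigs: list[bytes]) -> AggSig:
        ...
    def verify_all(self, pk: bytes, msgs: list[bytes], agg: AggSig) -> bool:
        ...  # needs the entire message set M
    def local_open(self, agg: AggSig, m: bytes) -> bytes:
        ...  # produce a short hint for m
    def local_verify(self, pk: bytes, m: bytes, agg: AggSig, hint: bytes) -> bool:
        ...  # runs in time independent of N, without the other messages
```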

We provide two constructions of single-signer locally verifiable aggregate signatures, the first based on the RSA assumption and the second on the bilinear Diffie-Hellman inversion assumption, both in the random oracle model.

As an additional contribution, we introduce the notion of compressing cryptographic keys in identity-based encryption (IBE) schemes, show applications of this notion, and construct an IBE scheme where the secret keys for $N$ identities can be compressed into a single aggregate key, which can then be used to decrypt ciphertexts sent to any of the $N$ identities.
Iftach Haitner, Daniel Nukrai, Eylon Yogev
ePrint Report
Succinct non-interactive arguments (SNARGs) have become a fundamental primitive in the cryptographic community. The focus of this work is constructions of SNARGs in the Random Oracle Model (ROM). Such SNARGs enjoy post-quantum security and can be deployed using lightweight cryptography to heuristically instantiate the random oracle. A ROM-SNARG is \emph{$(t,\varepsilon)$-sound} if no $t$-query malicious prover can convince the verifier to accept a false statement with probability larger than $\varepsilon$. Recently, Chiesa-Yogev (CRYPTO '21) presented a ROM-SNARG of length ${\Theta}(\log (t/\varepsilon) \cdot \log t)$ (ignoring $\log n$ factors, for $n$ being the instance size). This improvement, however, is still far from the (folklore) lower bound of $\Omega(\log (t/\varepsilon))$.

Assuming the \textit{randomized exponential-time hypothesis}, we prove a tight lower bound of ${\Omega}(\log (t/\varepsilon) \cdot \log t)$ for the length of $(t,\varepsilon)$-sound ROM-SNARGs. Our lower bound holds for constructions with non-adaptive verifiers and a strong soundness notion called \textit{salted soundness}, restrictions that hold for \emph{all} known constructions (ignoring contrived counterexamples). We prove our lower bound by transforming any short ROM-SNARG (of the considered family) into a ROM-SNARG of the same length in which the verifier asks only a \emph{few} oracle queries, and then applying the recent lower bound of Chiesa-Yogev (TCC '20) for such SNARGs.
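Plugging in illustrative numbers makes the gap the lower bound closes concrete; the snippet below evaluates both expressions for a sample $(t, \varepsilon)$, ignoring $\log n$ factors and hidden constants.

```python
import math

# Chiesa-Yogev length Theta(log(t/eps) * log t) versus the folklore
# bound Omega(log(t/eps)), for one illustrative parameter choice.
t, eps = 2**60, 2**-68        # query budget and soundness error

log_te = math.log2(t / eps)   # 128.0
print(log_te * math.log2(t))  # ~7680 bits: achieved length, up to constants
print(log_te)                 # ~128 bits: the folklore lower bound
```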