International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

12 April 2023

Raine Nieminen, Thomas Schneider
ePrint Report
Garbled circuits are a fundamental cryptographic primitive that allows two or more parties to securely evaluate an arbitrary Boolean circuit, without revealing any information beyond the output, using a constant number of communication rounds. Garbled circuits were introduced by Yao (FOCS’86) and generalized to the multi-party setting by Beaver, Micali and Rogaway (STOC’90). Since then, several works have improved their efficiency by providing different garbling schemes, and several implementations exist. Starting with the seminal Fairplay compiler (USENIX Security’04), several implementation frameworks decoupled the task of compiling the function to be evaluated into a Boolean circuit from the engine that securely evaluates that circuit, e.g., using a secure two-party computation protocol based on garbled circuits. In this paper, we show that this decoupling of circuit generation and evaluation allows a subtle attack on several prominent garbling schemes. The attack occurs when a circuit violates the implicit assumption that each gate has two distinct input wires, an assumption that is most often not explicitly stated in the respective papers. The affected garbling schemes use separate calls to a deterministic encryption function for the left and right input wire of a gate to derive pseudo-random encryption pads that are XORed together. When a circuit contains a gate whose left and right input wires are the same, these two per-wire encryption pads cancel out, and we demonstrate that this can result in a complete break of privacy. We show how the vulnerable garbling schemes can be fixed easily.
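To make the pad-cancellation issue concrete, here is a minimal Python sketch under our own simplifying assumptions (it does not reproduce any specific garbling scheme from the paper): each per-wire pad is derived by a separate call to the same deterministic function of the wire label and gate identifier, so when both inputs of a gate carry the same label the XOR of the two pads vanishes and the encrypted payload is exposed.

```python
import hashlib, os

# Minimal sketch (simplifying assumptions, not a specific scheme from the paper):
# a garbled row is payload XOR pad(left_label) XOR pad(right_label).
def pad(wire_label: bytes, gate_id: bytes) -> bytes:
    # deterministic per-wire "encryption pad"
    return hashlib.sha256(wire_label + gate_id).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

gate_id = b"gate-7"
payload = os.urandom(32)           # stands in for the encrypted output-wire label

left = os.urandom(32)              # left input-wire label
right = os.urandom(32)             # distinct right input-wire label
row_ok = xor(payload, xor(pad(left, gate_id), pad(right, gate_id)))

# Pathological gate: both inputs are wired to the SAME wire, so the labels match.
row_bad = xor(payload, xor(pad(left, gate_id), pad(left, gate_id)))
print(row_bad == payload)          # True: the two pads cancel and leak the payload
```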
Ivan Damgård, Divya Ravi, Daniel Tschudi, Sophia Yakoubov
ePrint Report
In this paper, we explore the feasibility of reliable and private communication in dynamic networks, where in each round the adversary can choose which direct peer-to-peer links are available in the network graph, under the sole condition that the graph is k-connected at each round (for some k).

We show that reliable communication is possible in such a dynamic network if and only if k > 2t, where t is the number of corrupted parties. We also show that if k = cn > 2t for a constant c, we can achieve reliable communication with polynomial round and communication complexity.

For unconditionally private communication, we show that for a passive adversary, k > t is sufficient (and clearly necessary). For an active adversary, we show that k > 2t is sufficient for statistical security (and clearly necessary), while k > 3t is sufficient for perfect security. We conjecture that, in contrast to the static case, k > 2t is not enough for perfect security, and we give evidence that the conjecture is true.

Once we have reliable and private communication between each pair of parties, we can emulate a complete network with secure channels, and we can use known protocols to do secure computation.
Yizhi Huang, Rahul Ilango, Hanlin Ren
ePrint Report
It is a long-standing open problem whether the Minimum Circuit Size Problem ($\mathrm{MCSP}$) and related meta-complexity problems are NP-complete. Even in the rare cases where the NP-hardness of meta-complexity problems is known, we only know very weak hardness of approximation.

In this work, we prove NP-hardness of approximating meta-complexity with nearly-optimal approximation gaps. Our key idea is to use *cryptographic constructions* in our reductions, where the security of the cryptographic construction implies the correctness of the reduction. We present both conditional and unconditional hardness of approximation results as follows.

$\bullet$ Assuming subexponentially-secure witness encryption exists, we prove essentially optimal NP-hardness of approximating conditional time-bounded Kolmogorov complexity ($\mathrm{K}^t(x \mid y)$) in the regime where $t \gg |y|$. Previously, the best hardness of approximation known was a $|x|^{1/ \mathrm{poly}(\log \log |x|)}$ factor and only in the sublinear regime ($t \ll |y|$).

$\bullet$ Unconditionally, we show near-optimal NP-hardness of approximation for the Minimum Oracle Circuit Size Problem (MOCSP), where Yes instances have circuit complexity at most $2^{\varepsilon n}$, and No instances are essentially as hard as random truth tables. Our reduction builds on a witness encryption construction proposed by Garg, Gentry, Sahai, and Waters (STOC'13). Previously, it was unknown whether it is NP-hard to distinguish between oracle circuit complexity $s$ versus $10s\log N$.

$\bullet$ Finally, we define a "multi-valued" version of $\mathrm{MCSP}$, called $\mathrm{mvMCSP}$, and show that with probability $1$ over a random oracle $O$, $\mathrm{mvMCSP}^O$ is NP-hard to approximate under quasi-polynomial-time reductions with $O$ oracle access. Intriguingly, this result follows almost directly from the security of Micali's CS proofs (Micali, SICOMP'00).

In conclusion, we give three results convincingly demonstrating the power of cryptographic techniques in proving NP-hardness of approximating meta-complexity.
Wen-jie Lu, Zhicong Huang, Qizhi Zhang, Yuchen Wang, Cheng Hong
ePrint Report
Gradient Boosting Decision Tree (GBDT) and its variants are widely used in industry due to their strong interpretability. Secure multi-party computation allows multiple data owners to compute a function jointly while keeping their inputs private. In this work, we present Squirrel, a two-party GBDT training framework on a vertically split dataset, where two data owners each hold different features of the same data samples. Squirrel is private against semi-honest adversaries, and no sensitive intermediate information is revealed during the training process. Squirrel is also scalable to datasets with millions of samples, even under a Wide Area Network (WAN). Squirrel achieves its high performance via several novel co-designs of the GBDT algorithms and advanced cryptography. In particular: 1) we propose a new and efficient mechanism to hide the sample distribution on each node using oblivious transfer; 2) we propose a highly optimized method for gradient aggregation using lattice-based homomorphic encryption (HE), and our empirical results show that this method can be three orders of magnitude faster than existing HE approaches; 3) we propose a novel protocol to evaluate the sigmoid function on secretly shared values, showing 19×-200× improvements over two existing methods. Combining all these improvements, Squirrel costs less than 6 seconds per tree on a dataset with 50 thousand samples, which outperforms Pivot (VLDB 2020) by more than 28×. We also show that Squirrel can scale up to datasets with more than one million samples, e.g., about 170 seconds per tree over a WAN.
Sanketh Menda, Julia Len, Paul Grubbs, Thomas Ristenpart
ePrint Report
A line of recent work has highlighted the importance of context commitment security, which asks that authenticated encryption with associated data (AEAD) schemes not decrypt the same adversarially-chosen ciphertext under two different, adversarially-chosen contexts (secret key, nonce, and associated data). Despite a spate of recent attacks, many open questions remain around context commitment; most obviously, nothing is known about the commitment security of important schemes such as CCM, EAX, and SIV.

We resolve these open questions, and more. Our approach is to, first, introduce a new framework that helps us more granularly define context commitment security in terms of what portions of a context are adversarially controlled. We go on to formulate a new notion, called context discoverability security, which can be viewed as analogous to preimage resistance from the hashing literature. We show that unrestricted context commitment security (the adversary controls all of the two contexts) implies context discoverability security for a class of schemes encompassing most schemes used in practice. Then, we show new context discovery attacks against a wide set of AEAD schemes, including CCM, EAX, SIV, GCM, and OCB3, and, by our general result, this gives new unrestricted context commitment attacks against them.

Finally, we consider restricted context commitment security for the original SIV mode, for which no prior attack techniques work (including our context discovery based ones). We are nevertheless able to give a novel $O(2^{n/3})$ attack using Wagner's k-tree algorithm for the generalized birthday problem.
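For readers unfamiliar with where the $O(2^{n/3})$ cost comes from, the following toy Python sketch runs Wagner's 4-list k-tree algorithm for the XOR generalized birthday problem. It is the generic algorithm only, with illustrative parameters of our own choosing, and is not the attack on SIV: four lists of size about $2^{n/3}$ are pairwise merged on the low $n/3$ bits, and the surviving partial XORs are matched on the remaining bits.

```python
import os
from collections import defaultdict

# Toy Wagner 4-list k-tree for the XOR generalized birthday problem
# (generic algorithm only; parameters are illustrative, not the SIV attack).
n = 24                          # element width in bits
l = n // 3                      # filter width per merge level
size = 1 << l                   # each list holds about 2^(n/3) elements
mask = (1 << l) - 1

def rand_list():
    return [int.from_bytes(os.urandom(8), "big") & ((1 << n) - 1) for _ in range(size)]

L1, L2, L3, L4 = (rand_list() for _ in range(4))

def merge(A, B):
    """Keep pairs whose XOR is zero on the low l bits."""
    by_low = defaultdict(list)
    for b in B:
        by_low[b & mask].append(b)
    return [(a ^ b, (a, b)) for a in A for b in by_low[a & mask]]

L12, L34 = merge(L1, L2), merge(L3, L4)   # expected size ~ 2^(n/3) each

by_val = defaultdict(list)
for v, pair in L34:
    by_val[v].append(pair)

solution = None
for v, (a, b) in L12:
    if by_val[v]:
        c, d = by_val[v][0]
        solution = (a, b, c, d)
        break
print("solution:", solution)               # ~1 expected per run; None means retry
if solution:
    a, b, c, d = solution
    assert a ^ b ^ c ^ d == 0
```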
Daniele Micciancio, Mark Schultz
ePrint Report
Recent work on the design of rate $1 - o(1)$ lattice-based cryptosystems has used two distinct design paradigms: replacing the noise-tolerant encoding $m \mapsto (q/2)m$ present in many lattice-based cryptosystems with a more efficient encoding, and post-processing traditional lattice-based ciphertexts with a lossy compression algorithm, using a technique very similar to ``vector quantization'' from coding theory. We introduce a framework for the design of lattice-based encryption that captures both of these paradigms, and prove information-theoretic rate bounds within this framework. These bounds separate the settings of trivial and non-trivial quantization, and show the impossibility of rate $1 - o(1)$ encryption using both trivial quantization and a polynomial modulus. They furthermore put strong limits on the rate of constructions that utilize lattices built by tensoring a lattice of small dimension with $\mathbb{Z}^k$, which is ubiquitous in the literature. We additionally introduce a new cryptosystem that matches the rate of the highest-rate currently known scheme, while encoding messages with a ``gadget'', which may be useful for constructions of Fully Homomorphic Encryption.
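As a point of reference for the encoding being replaced, here is a small Python sketch of the classical noise-tolerant encoding $m \mapsto (q/2)m$ and its rounding-based decoder, with illustrative parameters of our own choosing; the paper's framework and gadget-based encoding are not reproduced here.

```python
# Classical noise-tolerant bit encoding m -> (q/2)*m with rounding decoder.
# Parameters are illustrative only (a sketch, not the paper's construction).
q = 3329
half_q = q // 2

def encode(bit: int) -> int:
    return (half_q * bit) % q

def decode(noisy: int) -> int:
    # round to the nearest multiple of q/2: values near q/2 decode to 1
    return 1 if q // 4 <= noisy % q < 3 * q // 4 else 0

for bit in (0, 1):
    for noise in (-300, 0, 300):          # any |noise| < q/4 decodes correctly
        assert decode(encode(bit) + noise) == bit
```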
Gideon Samid
ePrint Report
This paper highlights a looming cyber threat emanating from fast-developing artificial intelligence. This strategic threat is further magnified by the advent of quantum computers. AI and quantum-AI (QAI) represent a totally new and effective vector of cryptanalytic attack. Much as modern AI successfully completes browser search phrases, it is increasingly capable of guessing a rather narrow a-priori list of plausible plaintexts. This guessing is most effective against device cryptography where the message space is limited. Matching these guesses with the captured ciphertext will greatly accelerate the code-breaking process. We have never faced such a plaintext-originated attack on a strategic level, and never had to prepare for it. Now we do. We propose to apply a well-known martial-arts tactic, using the opponent's strength against them: constructing ciphertexts that provide false answers to the AI attacker and lead it astray. We achieve this defensive measure by pivoting away from the norm of small, known-size keys and pattern-loaded ciphers, using instead large keys of secret size, augmented with ad-hoc unilateral randomness of unbounded limits, and deploying a pattern-devoid algorithm with a remarkably low computational burden, so it can easily handle very large keys. Thereby we achieve unicity distances as large as desired. This strategy has become feasible just as the AI threat looms. It exploits three new technologies coming together: (i) non-algorithmic randomness, (ii) very large and inexpensive memory chips, and (iii) high-throughput communication networks. These pattern-devoid, randomness-rich ciphers also turn out to be an important option in the toolbox NIST is preparing to meet the quantum challenge. Avoiding the computational load of mainstay ciphers, AIR-cryptography presents itself as the cipher of choice for medical, military, and other battery-limited devices for which data security is paramount. In summary: we point out a fast-emerging cyber challenge and lay out a matching cryptographic answer.
Frank Denis
ePrint Report
While the round function of the AEGIS authenticated encryption algorithms is highly parallelizable, their mode of operation is not.

We introduce two new modes to overcome that limitation: AEGIS-128X and AEGIS-256X, which require minimal changes to existing implementations and retain the security properties of AEGIS-128L and AEGIS-256.
JP Aumasson, Dmitry Khovratovich, Bart Mennink, Porçu Quine
ePrint Report
From hashing and commitment schemes to Fiat-Shamir and encryption, hash functions are everywhere in zero-knowledge proofsystems (ZKPs), and minor performance changes in ``vanilla'' implementations can translate into major discrepancies when the hash is processed as a circuit within the proofsystem.

Protocol designers have resorted to a number of techniques and custom modes to optimize hash functions for ZKP settings, but so far without a single established, well-studied construction. To address this need, we define the Sponge API for Field Elements (SAFE), a unified framework for permutation-based schemes (including AEAD, Sigma, PRNGs, and so on). SAFE eliminates the performance overhead, is pluggable into any field-oriented protocol, and is suitable for any permutation algorithm.

SAFE is implemented in Filecoin's Neptune hash framework, which is our reference implementation (in Rust). SAFE is also being integrated in other prominent ZKP projects. This report specifies SAFE and describes some use cases.

Among other improvements, our construction is among the first to store the protocol metadata in the sponge inner part in a provably secure way, which may be of independent interest for sponge use cases outside of ZKPs.
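To illustrate the kind of field-oriented absorb/squeeze interface SAFE standardizes, here is a minimal sponge over a prime field in Python. The permutation is a toy placeholder and the parameters are our own assumptions for the demo, so this is a sketch of the general shape of such an API, not the SAFE specification.

```python
# Minimal sponge over a prime field (toy permutation and parameters; a sketch
# of the general absorb/squeeze shape, not the SAFE specification).
p = 2**64 - 2**32 + 1           # the "Goldilocks" prime, chosen for the demo
RATE, CAPACITY = 2, 1
WIDTH = RATE + CAPACITY

def toy_permutation(state):
    # Placeholder mixing layer -- NOT a secure permutation.
    for rnd in range(7):
        state = [(pow(x, 7, p) + i + rnd) % p for i, x in enumerate(state)]
        acc = sum(state) % p
        state = [(x + acc) % p for x in state]
    return state

def field_sponge(inputs, out_len=1):
    state = [0] * WIDTH
    for start in range(0, len(inputs), RATE):
        for i, x in enumerate(inputs[start:start + RATE]):
            state[i] = (state[i] + x) % p        # absorb into the rate portion
        state = toy_permutation(state)
    return state[:out_len]                        # squeeze from the rate portion

print(field_sponge([1, 2, 3]))
```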
David Bruce Cousins, Yuriy Polyakov, Ahmad Al Badawi, Matthew French, Andrew Schmidt, Ajey Jacob, Benedict Reynwar, Kellie Canida, Akhilesh Jaiswal, Clynn Mathew, Homer Gamil, Negar Neda, Deepraj ...
ePrint Report
Secure computation is of critical importance not only to the DoD, but also to financial institutions, healthcare, and anywhere personally identifiable information (PII) is accessed. Traditional security techniques require data to be decrypted before performing any computation. When processed on untrusted systems, the decrypted data is vulnerable to attacks that extract the sensitive information. To address these vulnerabilities, Fully Homomorphic Encryption (FHE) keeps the data encrypted during computation and secures the results, even in these untrusted environments. However, FHE requires a significant amount of computation to perform the equivalent of unencrypted operations. To be useful, FHE must significantly close this computation gap (to within 10x) to make encrypted processing practical. To accomplish this ambitious goal, the TREBUCHET project is leading research and development in FHE processing hardware to accelerate deep computations on encrypted data, as part of the DARPA MTO Data Privacy for Virtual Environments (DPRIVE) program. We accelerate the major secure standardized FHE schemes (BGV, BFV, CKKS, FHEW, etc.) at >=128-bit security while integrating with the open-source PALISADE and OpenFHE libraries currently used in the DoD and in industry. We utilize a novel tile-based chip design with highly parallel ALUs optimized for vectorized 128-bit modular arithmetic. The TREBUCHET coprocessor design provides a highly modular, flexible, and extensible FHE accelerator for easy reconfiguration, deployment, integration, and application on other hardware form factors, such as System-on-Chip or alternate chip areas.
Dmitry Khovratovich, Mario Marhuenda Beltrán, Bart Mennink
ePrint Report
We provide security foundations for SAFE, a recently introduced API framework for sponge-based hash functions tailored to prime-field-based protocols. SAFE aims to provide a robust and foolproof interface and has been implemented in the Neptune hash framework and some zero-knowledge proof projects, but it currently lacks any security proof. In this work we identify SAFECore, a versatile variant of the sponge construction underlying SAFE; we prove indifferentiability of SAFECore for all (binary and prime) fields up to around $|\mathbb{F}_p|^{c/2}$ queries, where $\mathbb{F}_p$ is the underlying field and $c$ the capacity; and we apply this security result to various use cases. We show that the SAFE-based protocols of plain hashing, authenticated encryption, verifiable computation, non-interactive proofs, and commitment schemes are secure against a wide class of adversaries, including those dealing with multiple invocations of a sponge in a single application. Our results pave the way for using SAFE with the full taxonomy of hash functions, including SNARK-, lattice-, and x86-friendly hashes.
Farshid Haidary Makoui, Thomas Aaron Guliver
ePrint Report
Digital signatures ensure legitimate access through identity authentication. They are also used to build blocks in blockchains and to authenticate transactions. The Courtois-Finiasz-Sendrier (CFS) digital signature is a well-known code-based digital signature scheme based on the Niederreiter cryptosystem. The CFS signature, however, is not widely used due to the long processing time required by its signing algorithm, and most code-based digital signature schemes are based on Niederreiter. This paper proposes a new code-based digital signature based on the McEliece cryptosystem. The proposed McEliece-based scheme has lower complexity and a higher success rate, and it provides an efficient code-based algorithm to sign a document in a shorter processing time. The scheme is also secure against public key structural attacks. The proposed scheme is an efficient McEliece-based digital signature with a lower processing time required to construct a valid signature, and its signing algorithm also creates smaller signatures. In addition, the verification algorithm checks the integrity value to avoid any forgery before final verification.
Lingyue Qin, Boxin Zhao, Jialiang Hua, Xiaoyang Dong, Xiaoyun Wang
ePrint Report
Besides the U.S. NIST standard SHA-3 (Keccak), another sponge-based primitive, Ascon, was recently selected as the NIST standard for lightweight applications. Exploring the security of sponge-based hash functions against attacks is therefore very important. At EUROCRYPT 2023, Qin et al. introduced the MitM preimage attack framework and automatic tools for Keccak, Ascon, and Xoodyak.

In this paper, we extend Qin et al.'s MitM attack framework to collision attacks and also develop various techniques to improve the automatic tools for both preimage and collision attacks. We introduce a novel initial structure, called the weak-diffusion structure, that enjoys many more degrees of freedom for building the blue/red neutral sets than Qin et al.'s. In addition, a more flexible condition scheme is introduced to reduce the diffusion of variables. To further accelerate the solving of the automatic model, we propose a heuristic two-stage searching strategy, which first finds many blue neutral sets with naturally weak diffusion properties, and then solves different automatic models with different blue neutral sets fixed in advance. The symmetry property of Keccak is also applied to speed up the search.

Finally, we introduce the first collision attack on 4-round Keccak-512. We also find the first MitM-based preimage attack on 4-round Keccak-384, which outperforms all previous attacks, while Qin et al. only found an attack on Keccak-512. Moreover, we find collision attacks on reduced Xoodyak and Ascon that improve on previous results by 1-2 rounds. The complexities of preimage attacks on reduced Xoodyak and Ascon are also improved.

11 April 2023

B. Pinkas, T. Schneider, N. P. Smart, S. Williams
ePrint Report
Secure multi-party computation has been considered by the cryptographic community for a number of years. Until recently it has been a purely theoretical area, with few implementations with which to test various ideas. This has led to a number of optimisations being proposed which are quite restricted in their application. In this paper we describe an implementation of the two-party case, using Yao’s garbled circuits, and present various algorithmic protocol improvements. These optimisations are analysed both theoretically and empirically, using experiments covering various adversarial situations. Our experimental data is provided for reasonably large circuits, including one which performs an AES encryption, a problem which we discuss in the context of various possible applications.

10 April 2023

Aikata Aikata, Andrea Basso, Gaetan Cassiers, Ahmet Can Mert, Sujoy Sinha Roy
ePrint Report
Lattice-based cryptography has laid the foundation of various modern-day cryptosystems that cater to several applications, including post-quantum cryptography. For structured lattice-based schemes, polynomial arithmetic is a fundamental building block. In several instances, performance optimizations come from implementing compact multipliers due to the small range of the secret polynomial coefficients. However, this optimization does not easily translate to side-channel protected implementations, since masking requires secret polynomial coefficients to be distributed over a large range. In this work, we address this problem and propose two novel generalized techniques, one for number theoretic transform (NTT)-based and one for non-NTT-based polynomial arithmetic. Both proposals enable masked polynomial multiplication while utilizing and retaining the small-secret property.

For demonstration, we use the proposed techniques and instantiate masked multipliers for schoolbook as well as NTT-based polynomial multiplication. Both can utilize the compact multipliers used in the unmasked implementations. For a first-order protected implementation, the schoolbook multiplication requires an extra polynomial accumulation along with the two polynomial multiplications. However, this cost is small compared to the area saved by utilizing the existing cheap multiplication units. We also extensively test the side-channel resistance of the proposed design through test vector leakage assessment (TVLA) to guarantee its first-order security.
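The cost structure of schoolbook multiplication under first-order masking can be seen in a few lines: with the secret split into two additive shares modulo $q$, the product with a public polynomial is computed share-wise, costing two polynomial multiplications plus one accumulation. The Python sketch below uses toy parameters of our own choosing and a plain (uncompacted) multiplier, so it illustrates the cost structure rather than the paper's compact-multiplier technique.

```python
import secrets

# First-order masked schoolbook multiplication in R_q = Z_q[x]/(x^n + 1):
# the secret s is split as s = share0 + mask (mod q), and a*s is computed as
# a*share0 + a*mask, i.e. two multiplications plus one accumulation.
# Toy parameters; not the paper's optimized construction.
q, n = 3329, 8

def poly_mul(a, b):
    """Schoolbook multiplication modulo x^n + 1 and q (negacyclic)."""
    res = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                res[k] = (res[k] + a[i] * b[j]) % q
            else:
                res[k - n] = (res[k - n] - a[i] * b[j]) % q
    return res

def poly_add(a, b):
    return [(x + y) % q for x, y in zip(a, b)]

secret = [secrets.randbelow(5) for _ in range(n)]           # small coefficients
mask = [secrets.randbelow(q) for _ in range(n)]             # full-range share
share0 = [(s - m) % q for s, m in zip(secret, mask)]        # secret = share0 + mask
public_a = [secrets.randbelow(q) for _ in range(n)]

masked_product = poly_add(poly_mul(public_a, share0), poly_mul(public_a, mask))
assert masked_product == poly_mul(public_a, secret)          # correctness check
```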
Atsunori Ichikawa, Ilan Komargodski, Koki Hamada, Ryo Kikuchi, Dai Ikarashi
ePrint Report
A distributed oblivious RAM (DORAM) is a method for accessing a secret-shared memory while hiding the accessed locations. DORAMs are the key tool for secure multiparty computation (MPC) for RAM programs that avoids expensive RAM-to-circuit transformations.

We present new and improved 3-party DORAM protocols. For a logical memory of size $N$ and for each logical operation, our DORAM requires $O(\log N)$ local CPU computation steps. This is known to be asymptotically optimal. Our DORAM satisfies passive security in the honest majority setting. Our technique results in concretely efficient protocols and does not use expensive cryptography (such as re-randomizable or homomorphic encryption). Specifically, our DORAM is 25X faster than the most efficient previously known DORAM in the same setting.

Lastly, we extend our technique to handle malicious attackers at the expense of using slightly larger blocks (i.e., $\omega(\log^2 N)$ vs. $\Omega(\log N)$). To the best of our knowledge, this is the first concretely-efficient maliciously secure DORAM.

Technically, our construction relies on a novel concretely-efficient 3-party oblivious permutation protocol. We combine it with efficient non-oblivious hashing techniques (i.e., Cuckoo hashing) to get a distributed oblivious hash table. From this, we build a full-fledged DORAM using a distributed variant of the hierarchical approach of Goldreich and Ostrovsky (J. ACM '96). These ideas, and especially the permutation protocol, are of independent interest.
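As background for the hash-table layer, the following Python sketch shows plain (non-oblivious) two-table Cuckoo hashing, the classical building block mentioned above. The hash functions, table sizes, and eviction bound are illustrative assumptions; the distributed oblivious permutation and hierarchical DORAM construction are of course not reproduced here.

```python
import random

# Plain (non-oblivious) two-table Cuckoo hashing; parameters and hash
# functions are illustrative assumptions, not the paper's protocol.
SIZE, MAX_KICKS = 64, 32
SEEDS = (0x9E3779B1, 0x85EBCA77)
tables = [[None] * SIZE, [None] * SIZE]

def h(seed, key):
    return (key * seed) % (1 << 32) % SIZE

def insert(key):
    t = 0
    for _ in range(MAX_KICKS):
        pos = h(SEEDS[t], key)
        if tables[t][pos] is None:
            tables[t][pos] = key
            return True
        tables[t][pos], key = key, tables[t][pos]   # evict the current occupant
        t ^= 1                                      # re-insert it into the other table
    return False   # a real implementation rebuilds or keeps a small stash

keys = random.sample(range(10_000), 40)
placed = sum(insert(k) for k in keys)
print(f"placed {placed}/{len(keys)} keys")
```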
Reyhaneh Rabaninejad, Behzad Abdolmaleki, Giulio Malavolta, Antonis Michalas, Amir Nabizadeh
ePrint Report
Proof of Storage-time (PoSt) is a cryptographic primitive that enables a server to demonstrate non-interactive continuous availability of outsourced data in a publicly verifiable way. This notion was first introduced by Filecoin to secure their blockchain-based decentralized storage marketplace, using expensive SNARKs to compress proofs. Recent work [2] employs the notion of a trapdoor delay function to address the problem of compact PoSt without SNARKs. This approach, however, entails statefulness and non-transparency, and it requires an expensive pre-processing phase by the client. All of the above renders their solution impractical for decentralized storage marketplaces, leaving stateless, trapdoor-free PoSt with reduced setup costs as an open problem. In this work, we present stateless and transparent PoSt constructions using probabilistic sampling and a new Merkle commitment variant. To enable adjustable prover difficulty, we then propose a multi-prover construction to diminish the CPU work each prover is required to do. Both schemes feature a fast setup phase and logarithmic verification time and bandwidth, with end-to-end setup, prove, and verification costs lower than the existing solutions.
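To convey the flavour of Merkle-commitment-based probabilistic audits, here is a short Python sketch in which a client keeps only a Merkle root, samples a random block index, and checks the returned block against an authentication path. This is a generic storage-audit toy under our own assumptions, not the paper's stateless PoSt construction, and it ignores the time dimension entirely.

```python
import hashlib, os, random

# Generic Merkle-commitment audit with random sampling (illustrative only).
def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

blocks = [os.urandom(32) for _ in range(8)]     # power-of-two block count keeps the tree complete

def merkle_layers(leaves):
    layers = [[H(b) for b in leaves]]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return layers

layers = merkle_layers(blocks)
root = layers[-1][0]                             # the client stores only this value

def open_path(idx):
    path = []
    for layer in layers[:-1]:
        path.append(layer[idx ^ 1])              # sibling at each level
        idx //= 2
    return path

def verify(idx, block, path, root):
    node = H(block)
    for sib in path:
        node = H(node + sib) if idx % 2 == 0 else H(sib + node)
        idx //= 2
    return node == root

challenge = random.randrange(len(blocks))        # verifier samples a block index
proof = open_path(challenge)
assert verify(challenge, blocks[challenge], proof, root)
```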
Yuval Ishai, Dakshita Khurana, Amit Sahai, Akshayaram Srinivasan
ePrint Report
We revisit the problem of {\em reusable} non-interactive secure computation (NISC). A standard NISC protocol for a sender-receiver functionality $f$ enables the receiver to encrypt its input $x$ such that any sender, on input $y$, can send back a message revealing only $f(x,y)$. Security should hold even when either party can be malicious. A {\em reusable} NISC protocol has the additional feature that the receiver's message can be safely reused for computing multiple outputs $f(x,y_i)$. Here security should hold even when a malicious sender can learn partial information about the honest receiver's outputs in each session.

We present the first reusable NISC protocol for general functions $f$ that only makes a {\em black-box} use of any two-message oblivious transfer protocol, along with a random oracle. All previous reusable NISC protocols either made a non-black-box use of cryptographic primitives (Cachin et al., ICALP 2002) or alternatively required a stronger arithmetic variant of oblivious transfer and were restricted to $f$ in $\mathsf{NC}^1$ or similar classes (Chase et al., Crypto 2019). Our result is obtained via a general compiler from standard NISC to reusable NISC that makes use of a special type of honest-majority protocols for secure multiparty computation.

Finally, we extend the above main result to reusable {\em two-sided} NISC, in which two parties can encrypt their inputs in the first round and then reveal different functions of their inputs in multiple sessions. This extension either requires an additional (black-box) use of additively homomorphic commitment or alternatively requires the parties to maintain a state between sessions.
Elette Boyle, Geoffroy Couteau, Pierre Meyer
ePrint Report
Secure computation enables mutually distrusting parties to jointly compute a function on their secret inputs, while revealing nothing beyond the function output. A long-running challenge is understanding the required communication complexity of such protocols---in particular, when communication can be sublinear in the circuit representation size of the desired function. For certain functions, such as Private Information Retrieval (PIR), this question extends to even sublinearity in the input size.

We develop new techniques expanding the set of computational assumptions for sublinear communication in both settings:

1) [Circuit size] We present sublinear-communication protocols for secure evaluation of general layered circuits, given any 2-round rate-1 batch oblivious transfer (OT) protocol with a particular ``decomposability'' property. In particular, this condition can be shown to hold for the recent batch OT protocols of Brakerski et al. (Eurocrypt 2022), in turn yielding a new sublinear secure computation feasibility result: from Quadratic Residuosity (QR) together with polynomial-noise-rate Learning Parity with Noise (LPN). Our approach constitutes a departure from existing paths toward sublinear secure computation, all based on fully homomorphic encryption or homomorphic secret sharing.

2) [Input size] We construct single-server PIR based on the Computational Diffie-Hellman (CDH) assumption, with polylogarithmic communication in the database input size $n$. Previous constructions from CDH required communication $\Omega(n)$. In hindsight, our construction consists of a relatively simple combination of existing tools from the literature.
Shankara Pailoor, Yanju Chen, Franklyn Wang, Clara Rodríguez, Jacob Van Gaffen, Jason Morton, Michael Chu, Brian Gu, Yu Feng, Isil Dillig
ePrint Report
As zero-knowledge proofs gain increasing adoption, the cryptography community has designed domain-specific languages (DSLs) that facilitate the construction of zero-knowledge proofs (ZKPs). Many of these DSLs, such as Circom, facilitate the construction of arithmetic circuits, which are essentially polynomial equations over a finite field. In particular, given a program in a zero-knowledge proof DSL, the compiler automatically produces the corresponding arithmetic circuit. However, a common and serious problem is that the generated circuit may be underconstrained, either due to a bug in the program or a bug in the compiler itself. Underconstrained circuits admit multiple witnesses for a given input, so a malicious party can generate bogus witnesses, thereby causing the verifier to accept a proof that it should not. Because of the increasing prevalence of such arithmetic circuits in blockchain applications, several million dollars worth of cryptocurrency have been stolen due to underconstrained arithmetic circuits.

Motivated by this problem, we propose a new technique for finding ZKP bugs caused by underconstrained polynomial equations over finite fields. Our method performs semantic reasoning over the finite field equations generated by the compiler to prove whether or not each signal is uniquely determined by the input. Our proposed approach combines SMT solving with lightweight uniqueness inference to effectively reason about underconstrained circuits. We have implemented our proposed approach in a tool called $\mathbf{\mathsf{QED}^2}$ and evaluate it on 163 Circom circuits. Our evaluation shows that $\mathbf{\mathsf{QED}^2}$ can successfully solve 70\% of these benchmarks, meaning that it either verifies the uniqueness of the output signals or finds a pair of witnesses that demonstrate non-uniqueness of the circuit. Furthermore, $\mathbf{\mathsf{QED}^2}$ has found 8 previously unknown vulnerabilities in widely-used circuits.
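To illustrate the kind of non-uniqueness query involved, the Python/Z3 (z3-solver) sketch below asks whether a toy under-constrained "is-zero" gadget admits two distinct values of its output signal for the same input over a small prime field. The gadget, the modulus, and the encoding are our own illustrative assumptions and are not taken from the paper or from $\mathbf{\mathsf{QED}^2}$ itself.

```python
from z3 import Solver, Int, sat

# Toy non-uniqueness query over a small prime field (illustrative assumptions;
# not QED^2's implementation). The "is-zero" gadget is meant to force
# out == 1 iff inp == 0, but the single constraint out * inp == 0 (mod p)
# leaves out unconstrained whenever inp == 0.
p = 97

inp, out_a, out_b = Int("inp"), Int("out_a"), Int("out_b")
s = Solver()
for v in (inp, out_a, out_b):
    s.add(v >= 0, v < p)          # field-element range

# Same input, two candidate assignments to the output signal, both of which
# satisfy the circuit's only constraint.
s.add((out_a * inp) % p == 0)
s.add((out_b * inp) % p == 0)
s.add(out_a != out_b)             # ask for a witness pair proving non-uniqueness

if s.check() == sat:
    print("under-constrained:", s.model())   # e.g. inp = 0, out_a = 0, out_b = 1
```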