IACR News item: 06 December 2024
Scribe: Low-memory SNARKs via Read-Write Streaming
Anubhav Baweja, Pratyush Mishra, Tushar Mopuri, Karan Newatia, Steve Wang
ePrint Report
Succinct non-interactive arguments of knowledge (SNARKs) enable a prover to produce a short and efficiently verifiable proof of the validity of an arbitrary NP statement. Recent constructions of efficient SNARKs have led to interest in using them for a wide range of applications, but unfortunately, deployment of SNARKs in these applications faces a key bottleneck: SNARK provers require a prohibitive amount of time and memory to generate proofs for even moderately large statements. While there has been progress in reducing prover time, prover memory remains an issue.
In this work, we describe Scribe, a new low-memory SNARK that can efficiently prove large statements even on cheap consumer devices such as smartphones by leveraging a plentiful but heretofore underutilized resource: disk storage. In more detail, instead of storing its (large) intermediate state in RAM, Scribe's prover stores it on disk. To ensure that accesses to this state are efficient, we design Scribe's prover in a *read-write streaming* model of computation that allows the prover to read and modify its state only in a streaming manner.
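As a rough illustration of this access pattern (a minimal sketch, not code from the paper; the `stream_map` helper, the file names, and the use of u64 limbs in place of field elements are all hypothetical), a single read-write streaming pass over a disk-resident vector might look as follows in Rust. The key property is that peak memory is bounded by the buffer size, independent of the vector's length:

```rust
use std::fs::File;
use std::io::{self, BufReader, BufWriter, Read, Write};

/// One streaming pass: read a disk-resident vector of 8-byte elements
/// sequentially, apply `f` to each element, and write the results to a
/// new file. Peak memory is O(buffer size), not O(vector length).
fn stream_map(src: &str, dst: &str, f: impl Fn(u64) -> u64) -> io::Result<()> {
    let mut reader = BufReader::new(File::open(src)?);
    let mut writer = BufWriter::new(File::create(dst)?);
    let mut buf = [0u8; 8];
    loop {
        // Strictly sequential reads: the OS can prefetch aggressively.
        match reader.read_exact(&mut buf) {
            Ok(()) => {
                let x = u64::from_le_bytes(buf);
                writer.write_all(&f(x).to_le_bytes())?;
            }
            Err(e) if e.kind() == io::ErrorKind::UnexpectedEof => break,
            Err(e) => return Err(e),
        }
    }
    writer.flush()
}

fn main() -> io::Result<()> {
    // Hypothetical usage: write a small vector to disk, then double each
    // element in a single streaming pass (file names are illustrative).
    let mut w = BufWriter::new(File::create("state.bin")?);
    for x in 0u64..8 {
        w.write_all(&x.to_le_bytes())?;
    }
    w.flush()?;
    stream_map("state.bin", "state_next.bin", |x| x.wrapping_mul(2))
}
```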
We implement and evaluate Scribe's prover, and show that, on commodity hardware, it easily scales to circuits of size $2^{28}$ gates while using only 2 GB of memory, incurring only minimal proving latency overhead (10-35%) compared to HyperPlonk [EUROCRYPT 2023], a state-of-the-art baseline that requires far more memory. Our implementation minimizes this overhead by leveraging the streaming access pattern to enable several systems optimizations that together mask I/O costs.
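The abstract does not spell out these optimizations, but one standard technique for masking I/O costs under a sequential access pattern is read-ahead with double buffering: a background thread prefetches upcoming chunks from disk while the main thread computes on the current one. The sketch below is a hypothetical Rust illustration of that general idea (the `process_overlapped` helper, chunk size, and file name are invented for this example, not taken from Scribe):

```rust
use std::fs::File;
use std::io::{self, BufReader, Read};
use std::sync::mpsc::sync_channel;
use std::thread;

/// Process a file in fixed-size chunks while a background thread reads
/// ahead, overlapping disk I/O with computation.
fn process_overlapped(path: &str, mut process: impl FnMut(&[u8])) -> io::Result<()> {
    const CHUNK: usize = 1 << 20; // 1 MiB chunks
    // Bounded channel of depth 2: at most two chunks in flight,
    // i.e., classic double buffering.
    let (tx, rx) = sync_channel::<io::Result<Vec<u8>>>(2);
    let path = path.to_string();
    let reader_thread = thread::spawn(move || {
        let mut reader = BufReader::new(match File::open(&path) {
            Ok(f) => f,
            Err(e) => { let _ = tx.send(Err(e)); return; }
        });
        loop {
            let mut chunk = vec![0u8; CHUNK];
            match reader.read(&mut chunk) {
                Ok(0) => break, // EOF
                Ok(n) => {
                    chunk.truncate(n);
                    if tx.send(Ok(chunk)).is_err() { break; }
                }
                Err(e) => { let _ = tx.send(Err(e)); break; }
            }
        }
    });
    // Consumer: compute on each chunk while the producer reads ahead.
    for chunk in rx {
        process(&chunk?);
    }
    reader_thread.join().expect("reader thread panicked");
    Ok(())
}

fn main() -> io::Result<()> {
    // Hypothetical usage: create a 5 MiB sample file, then scan it with
    // I/O overlapped against the (trivial) computation.
    std::fs::write("data.bin", vec![1u8; 5 << 20])?;
    let mut total = 0usize;
    process_overlapped("data.bin", |chunk| total += chunk.len())?;
    println!("processed {total} bytes");
    Ok(())
}
```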