IACR News
Here you can see all recent updates to the IACR webpage.
11 October 2024
Stephan Krenn, Omid Mir, Daniel Slamanig
ePrint Report
Compressing primitives such as accumulators and vector commitments allow one to represent large data sets with a compact, ideally constant-sized, value. Moreover, they support operations like proving membership or non-membership with minimal, ideally also constant-sized, storage and communication overhead. In recent years, these primitives have found numerous practical applications, with many constructions based on various hardness assumptions. So far, however, it has been elusive to construct these primitives in a strictly structure-preserving setting, i.e., in a bilinear group in a way that messages, commitments and openings are all elements of the two source groups. Interestingly, backed by existing impossibility results, not even conventional commitments with such constraints are known in this setting.
In this paper we investigate whether strictly structure-preserving compressing primitives can be realized. We close this gap by presenting the first strictly structure-preserving commitment that is shrinking (and in particular constant-size). We circumvent existing impossibility results by employing a more structured message space, i.e., a variant of the Diffie-Hellman message space. Our main results are constructions of structure-preserving vector commitments (SPVC) as well as accumulators. We first discuss generic constructions and then present concrete constructions under the Diffie-Hellman Exponent assumption. To demonstrate the usefulness of our constructions, we present various applications. Most notably, we present the first entirely practical constant-size ring signature scheme in bilinear groups (i.e., the discrete logarithm setting). Concretely, using the popular BLS12-381 pairing-friendly curve, our ring signatures achieve a size of roughly 6500 bits.
Joan Daemen, Seth Hoffert, Silvia Mella, Gilles Van Assche, Ronny Van Keer
ePrint Report
Authenticated encryption (AE) is a cryptographic mechanism that allows communicating parties to protect the confidentiality and integrity of messages exchanged over a public channel, provided they share a secret key. In this work, we present new AE schemes leveraging the SHA-3 standard functions SHAKE128 and SHAKE256, offering 128 and 256 bits of security strength, respectively, and their “Turbo” counterparts. They support session-based communication, where a ciphertext authenticates the sequence of messages since the start of the session. The chaining in the session allows decryption in segments, avoiding the need to buffer the entire deciphered cryptogram between decryption and validation. And, thanks to the collision resistance of (Turbo)SHAKE, they provide so-called CMT-4 committing security, meaning that they provide strong guarantees that a ciphertext uniquely binds to the key, plaintext and associated data. The AE schemes we propose have the unique combination of advantages that 1) their security is based on the security claim of SHAKE, that has received a large amount of public scrutiny, that 2) they make use of the standard KECCAK-p permutation that not only receives more and more dedicated hardware support, but also allows competitive software-only implementations thanks to the TurboSHAKE instances, and that 3) they do not suffer from a 64-bit birthday bound like most AES-based schemes.
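To make the session idea concrete, here is a minimal Python toy (not the schemes proposed in the paper) that chains a running SHAKE-derived history value through successive messages, so that each tag authenticates everything sent since the start of the session. All framing choices, labels and lengths are illustrative assumptions.

```python
# Conceptual toy only: session AE in which a running "history" value derived
# with SHAKE chains every message to the session so far.  Framing, lengths and
# domain-separation labels are illustrative assumptions, not the paper's design.
import hashlib
import secrets

TAG_LEN = 32  # hypothetical 256-bit tag

def _shake(*parts, n):
    h = hashlib.shake_256()
    for p in parts:
        h.update(len(p).to_bytes(8, "big") + p)  # length-prefixed framing
    return h.digest(n)

class ToySession:
    def __init__(self, key: bytes, nonce: bytes):
        self.key = key
        self.history = nonce  # evolves with every message in the session

    def wrap(self, ad: bytes, pt: bytes):
        ks = _shake(b"stream", self.key, self.history, ad, n=len(pt))
        ct = bytes(a ^ b for a, b in zip(pt, ks))
        tag = _shake(b"tag", self.key, self.history, ad, ct, n=TAG_LEN)
        self.history = _shake(b"chain", self.history, ad, ct, tag, n=32)
        return ct, tag

    def unwrap(self, ad: bytes, ct: bytes, tag: bytes):
        exp = _shake(b"tag", self.key, self.history, ad, ct, n=TAG_LEN)
        if not secrets.compare_digest(exp, tag):
            raise ValueError("authentication failed")
        ks = _shake(b"stream", self.key, self.history, ad, n=len(ct))
        pt = bytes(a ^ b for a, b in zip(ct, ks))
        self.history = _shake(b"chain", self.history, ad, ct, tag, n=32)
        return pt

key = secrets.token_bytes(32)
alice, bob = ToySession(key, b"session-nonce"), ToySession(key, b"session-nonce")
ct1, tag1 = alice.wrap(b"header-1", b"first segment")
ct2, tag2 = alice.wrap(b"header-2", b"second segment")
assert bob.unwrap(b"header-1", ct1, tag1) == b"first segment"
assert bob.unwrap(b"header-2", ct2, tag2) == b"second segment"  # order matters
```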
Cong Ling, Andrew Mendelsohn
ePrint Report
In 'Noncommutative Ring Learning With Errors From Cyclic Algebras', a variant of Learning with Errors from cyclic division algebras, dubbed 'Cyclic LWE', was developed, and security reductions similar to those known for the ring and module case were given, as well as a Regev-style encryption scheme. In this work, we make a number of improvements to that work: namely, we describe methods to increase the number of cryptographically useful division algebras, demonstrate the hardness of CLWE from ideal lattices obtained from non-maximal orders, and study Learning with Rounding in cyclic division algebras.
Jonas Hofmann, Kien Tuong Truong
ePrint Report
End-to-end encrypted cloud storage offers a way for individuals and organisations to delegate their storage needs to a third party, while keeping control of their data using cryptographic techniques. We conduct a cryptographic analysis of various products in the ecosystem, showing that many providers fail to provide an adequate level of security. In particular, we provide an in-depth analysis of five end-to-end encrypted cloud storage systems, namely Sync, pCloud, Icedrive, Seafile, and Tresorit, in the setting of a malicious server. These companies cumulatively have over 22 million users and are major providers in the field. We unveil severe cryptographic vulnerabilities in four of them. Our attacks invalidate the marketing claims made by the providers of these systems, showing that a malicious server can, in some cases, inject files in the encrypted storage of users, tamper with file data, and even gain direct access to the content of the files. Many of our attacks affect multiple providers in the same way, revealing common failure patterns in independent cryptographic designs. We conclude by discussing the significance of these patterns beyond the security of the specific providers.
Muhammed F. Esgin, Ron Steinfeld, Erkan Tairi, Jie Xu
ePrint Report
In this work, we introduce a more efficient post-quantum oblivious PRF (OPRF) design, called LeOPaRd. Our proposal is round-optimal and supports verifiability and partial obliviousness, all of which are important for practical applications. The main technical novelty of our work is a new method for computing samples of MLWE (module learning with errors) in a two-party setting. To do this, we introduce a new family of interactive lattice problems, called interactive MLWE and rounding with re-use (iMLWER-RU). We rigorously study the hardness of iMLWER-RU and reduce it (under some natural idealized setting) to a more standard MLWE-like problem where the adversary is additionally given access to a randomized MLWE PRF oracle. We believe iMLWER-RU can be of independent interest for other interactive protocols.
LeOPaRd exploits this new iMLWER-RU assumption to realize a lattice-based OPRF design without relying on heavy machinery such as noise flooding and fully homomorphic encryption used in earlier works. LeOPaRd can feature around 136 KB total communication, compared to 300+ KB in earlier works. We also identify gaps in some existing constructions and models, and propose appropriate fixes.
Amit Jana, Smita Das, Ayantika Chatterjee, Debdeep Mukhopadhyay
ePrint Report
In Africacrypt 2022, Gupta et al. introduced FUTURE, a 64-bit lightweight MDS-matrix-based SPN-like block cipher designed to encrypt data in a single clock cycle with minimal implementation cost, particularly when unrolled. While various attack models were discussed, the security of the cipher in the related-key setting was not addressed. In this work, we bridge this gap by conducting a security analysis of the cipher under related-key attacks using MILP (Mixed Integer Linear Programming)-based techniques. Our model enables a related-key distinguishing attack on 8 rounds of FUTURE, requiring $2^{64}$ plaintexts, $2^{63}$ XOR operations, and negligible memory. Additionally, we present a 10-round boomerang distinguisher with a probability of $2^{-45}$, leading to a distinguishing attack with $2^{46}$ plaintexts, $2^{46}$ XOR operations, and negligible memory. This result demonstrates a full break of the cipher’s 64-bit security in the related-key setting.
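As an illustration of the general flavour of such modelling (not the authors' actual model for FUTURE), the following Python/PuLP sketch encodes the standard linear constraints that restrict a bit-level XOR to difference patterns of even parity. The solver choice, variable names and the toy objective are assumptions made for the example.

```python
# Minimal illustration of MILP-based differential modelling: constraints that
# force bit-level XOR differential propagation (only patterns with a ^ b == c
# are feasible).  Not the paper's full model; all names are hypothetical.
import pulp

def xor_constraints(model, a, b, c):
    """Allow only difference patterns (a, b, c) of even parity."""
    model += a + b + c <= 2          # forbid (1, 1, 1)
    model += a + b - c >= 0          # forbid (0, 0, 1)
    model += a - b + c >= 0          # forbid (0, 1, 0)
    model += -a + b + c >= 0         # forbid (1, 0, 0)

model = pulp.LpProblem("toy_trail", pulp.LpMinimize)
a, b, c = (pulp.LpVariable(name, cat="Binary") for name in "abc")
xor_constraints(model, a, b, c)
model += a + b                       # toy objective: minimise active input bits
model += a + b >= 1                  # require a non-zero input difference
model.solve(pulp.PULP_CBC_CMD(msg=False))
print({v.name: int(v.value()) for v in (a, b, c)})
```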
Carsten Baum, Jens Berlips, Walther Chen, Ivan Damgård, Kevin M. Esvelt, Leonard Foner, Dana Gretton, Martin Kysel, Ronald L. Rivest, Lawrence Roy, Francesca Sage-Ling, Adi Shamir, Vinod Vaikunta ...
ePrint Report
Oblivious Pseudorandom Functions (OPRFs) allow a client to evaluate a pseudorandom function (PRF) on her secret input based on a key that is held by a server. In the process, the client only learns the PRF output but not the key, while the server neither learns the input nor the output of the client. Arguably the most popular OPRF is due to Naor, Pinkas and Reingold (Eurocrypt 2009). It is based on an Oblivious Exponentiation by the server, with passive security under the Decisional Diffie-Hellman assumption. In this work, we strengthen the security guarantees of the NPR OPRF by protecting it against active attacks by the server. We have implemented our solution and report on the performance.
Our main result is a new batch OPRF protocol which is secure against maliciously corrupted servers, but is essentially as efficient as the semi-honest solution. More precisely, the computation (and communication) overhead is a multiplicative factor $o(1)$ as the batch size increases. The obvious solution using zero-knowledge proofs would have a constant factor overhead at best, which can be too expensive for certain deployments.
Our protocol relies on a novel version of the DDH problem, which we call the Oblivious Exponentiation Problem (OEP), and we give evidence for its hardness in the Generic Group model. We also present a variant of our maliciously secure protocol that does not rely on the OEP but nevertheless only has overhead $o(1)$ over the known semi-honest protocol. Moreover, we show that our techniques can also be used to efficiently protect threshold blind BLS signing and threshold ElGamal decryption against malicious attackers.
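For readers unfamiliar with the underlying primitive, the following Python toy sketches the passively secure blinded-exponentiation flow that such OPRFs build on: the client blinds a hashed input, the server exponentiates with its key, and the client unblinds. The tiny safe-prime group and all parameters are illustrative and unrelated to the paper's maliciously secure batch protocol.

```python
# Toy sketch of the blinded-exponentiation ("oblivious exponentiation") flow
# underlying such OPRFs, in its passively secure form only.  The tiny safe-prime
# group and all parameters are illustrative, not the paper's batch protocol.
import hashlib
import secrets

P = 1019                 # toy safe prime: P = 2*Q + 1 (far too small for real use)
Q = (P - 1) // 2         # prime order of the subgroup of squares mod P

def hash_to_group(x: bytes) -> int:
    """Map an input into the order-Q subgroup by hashing and squaring."""
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % P
    g = pow(h, 2, P)
    return g if g > 1 else 4   # avoid the degenerate 0/1 outputs in this toy

def client_blind(x: bytes):
    r = secrets.randbelow(Q - 1) + 1
    return r, pow(hash_to_group(x), r, P)      # send H(x)^r, hiding x

def server_evaluate(k: int, blinded: int) -> int:
    return pow(blinded, k, P)                  # oblivious exponentiation with key k

def client_unblind(r: int, b: int) -> int:
    return pow(b, pow(r, -1, Q), P)            # recover H(x)^k

k = secrets.randbelow(Q - 1) + 1               # server's PRF key
r, blinded = client_blind(b"client input")
output = client_unblind(r, server_evaluate(k, blinded))
assert output == pow(hash_to_group(b"client input"), k, P)   # F_k(x) = H(x)^k
```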
Haoxing Lin, Prashant Nalini Vasudevan
ePrint Report
The $k$-Tree algorithm [Wagner 02] is a non-trivial algorithm for the average-case $k$-SUM problem that has found widespread use in cryptanalysis. Its input consists of $k$ lists, each containing $n$ integers from a range of size $m$. Wagner's original heuristic analysis suggested that this algorithm succeeds with constant probability if $n \approx m^{1/(\log{k}+1)}$, and that in this case it runs in time $O(kn)$. Subsequent rigorous analysis of the algorithm [Lyubashevsky 05, Shallue 08, Joux-Kippen-Loss 24] has shown that it succeeds with high probability if the input list sizes are significantly larger than this.
We present a broader rigorous analysis of the $k$-Tree algorithm, showing upper and lower bounds on its success probability and complexity for any size of the input lists. Our results confirm Wagner's heuristic conclusions, and also give meaningful bounds for a wide range of list sizes that are not covered by existing analyses. We present analytical bounds that are asymptotically tight, as well as an efficient algorithm that computes (provably correct) bounds for a wide range of concrete parameter settings. We also do the same for the $k$-Tree algorithm over $\mathbb{Z}_m$. Finally, we present experimental evaluation of the tightness of our results.
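For reference, a minimal Python sketch of Wagner's k-Tree idea for $k = 4$ over $\mathbb{Z}_{2^b}$ is given below; the toy parameters roughly follow the heuristic list size $n \approx m^{1/(\log{k}+1)}$ and are chosen only so the example runs quickly.

```python
# Minimal sketch of Wagner's k-tree idea for k = 4 over Z_{2^b}: merge lists
# pairwise, keeping only partial sums that vanish on successively more
# low-order bits, until a full collision (sum == 0 mod 2^b) survives at the root.
import random
from collections import defaultdict

b = 24                        # toy modulus m = 2^b
l = b // 3                    # per-level filter width ~ log(m) / (log(k) + 1)
n = 2 * (1 << l)              # a bit above 2^l so the toy succeeds with good probability
FULL_MASK = (1 << b) - 1

def merge(L1, L2, mask):
    """Keep sums from L1 x L2 that are zero on the masked low-order bits."""
    by_residue = defaultdict(list)
    for s, parts in L2:
        by_residue[s & mask].append((s, parts))
    out = []
    for s1, p1 in L1:
        for s2, p2 in by_residue[(-s1) & mask]:
            out.append(((s1 + s2) & FULL_MASK, p1 + p2))
    return out

random.seed(0)
lists = [[(v, (v,)) for v in (random.getrandbits(b) for _ in range(n))]
         for _ in range(4)]

left = merge(lists[0], lists[1], (1 << l) - 1)     # zero the low l bits
right = merge(lists[2], lists[3], (1 << l) - 1)
root = merge(left, right, FULL_MASK)               # zero all remaining bits
solutions = [parts for s, parts in root if s == 0]
print(len(solutions), "solution(s) found")
```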
Jiaxing He, Kang Yang, Guofeng Tang, Zhangjie Huang, Li Lin, Changzheng Wei, Ying Yan, Wei Wang
ePrint Report
We present $\textit{Rhombus}$, a new secure matrix-vector multiplication (MVM) protocol in the semi-honest two-party setting, which is able to be seamlessly integrated into existing privacy-preserving machine learning (PPML) frameworks and serve as the basis of secure computation in linear layers.
$\textit{Rhombus}$ adopts RLWE-based homomorphic encryption (HE) with coefficient encoding, which allows messages to be chosen from not only a field $\mathbb{F}_p$ but also a ring $\mathbb{Z}_{2^\ell}$, where the latter supports faster computation in non-linear layers. To achieve better efficiency, we develop an input-output packing technique that reduces the communication cost incurred by HE with coefficient encoding by about $21\times$, and propose a split-point picking technique that reduces the number of rotations to one sublinear in the matrix dimension. Compared to the recent protocol $\textit{HELiKs}$ by Balla and Koushanfar (CCS'23), our implementation demonstrates that $\textit{Rhombus}$ improves the overall performance of the MVM protocol by a factor of $7.4\times \sim 8\times$, and improves the end-to-end performance of secure two-party inference of ResNet50 by a factor of $4.6\times \sim 18\times$.
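The core trick behind coefficient encoding can be illustrated independently of Rhombus: if one vector is placed in ascending coefficient order and the other in reversed order, coefficient $n-1$ of their negacyclic product equals their inner product. The plaintext-only Python sketch below (no HE involved, all parameters illustrative) checks this identity.

```python
# Toy illustration of coefficient encoding (not Rhombus itself): encode one
# vector in ascending coefficients and the other reversed, so that coefficient
# n-1 of their product in Z_q[X]/(X^n + 1) is exactly the inner product.
import random

def negacyclic_mul(a, b, q):
    """Multiply two degree-<n polynomials in Z_q[X]/(X^n + 1)."""
    n = len(a)
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % q
            else:
                res[k - n] = (res[k - n] - ai * bj) % q   # wrap with sign flip
    return res

n, q = 8, 2**16
random.seed(1)
x = [random.randrange(q) for _ in range(n)]          # e.g. one row of the matrix
y = [random.randrange(q) for _ in range(n)]          # the input vector
prod = negacyclic_mul(x, list(reversed(y)), q)
assert prod[n - 1] == sum(xi * yi for xi, yi in zip(x, y)) % q
print("inner product recovered from coefficient n-1:", prod[n - 1])
```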
Stefan Dziembowski, Sebastian Faust, Tomasz Lizurej, Marcin Mielniczuk
ePrint Report
We address the problem of detecting and punishing shareholder collusion in secret-sharing schemes. We do so in the recently proposed cryptographic model called individual cryptography (Dziembowski, Faust, and Lizurej, Crypto 2023), which assumes that there exist tasks that can be efficiently computed by a single machine but for which distributing the computation across multiple (mutually distrustful) devices is infeasible.
Within this model, we introduce a novel primitive called secret sharing with snitching (SSS), in which each attempt to illegally reconstruct the shared secret $S$ results in a proof that can be used to prove such misbehavior (and, e.g., financially penalize the cheater on a blockchain). This holds in a very strong sense, even if the shareholders do not attempt to reconstruct the entire secret $S$ but only to learn some partial information about it. Our notion also captures attacks performed using multiparty computation protocols (MPCs), i.e., those where the malicious shareholders use MPCs to compute partial information on $S$. The main idea of SSS is that any illegal reconstruction can be proven and punished, which suffices to discourage illegal secret reconstruction. Hence, our SSS scheme effectively prevents shareholders' collusion. We provide a basic definition of threshold ($t$-out-of-$n$) SSS. We then show how to construct it for $t = n$, and later, we use this construction to build an SSS scheme for an arbitrary $t$.
In order to prove the security of our construction, we introduce a generalization of the random oracle model (Bellare, Rogaway, CCS 1993), which allows modelling hash evaluations made inside MPC.
Martijn Brehm, Binyi Chen, Ben Fisch, Nicolas Resch, Ron D. Rothblum, Hadas Zeilberger
ePrint Report
In this work we construct a new and highly efficient multilinear polynomial commitment scheme (MLPCS) over binary fields, which we call \emph{Blaze}. Polynomial commitment schemes allow a server to commit to a large polynomial and later decommit to its evaluations. Such schemes have emerged as a key component in recent efficient SNARK constructions.
Blaze has an extremely efficient prover, both asymptotically and concretely. The commitment is dominated by $8n$ field additions (i.e., XORs) and one Merkle tree computation. The evaluation proof generation is dominated by $6n$ additions and $5n$ multiplications over the field. The verifier runs in time $O_\lambda(\log^2(n))$. Concretely, for sufficiently large message sizes, the prover is faster than all prior schemes except for Brakedown (Golovnev et al., Crypto 2023), but offers significantly smaller proofs than the latter.
The scheme is obtained by combining two ingredients:
1. Building on the code-switching technique (Ron-Zewi and Rothblum, JACM 2024), we show how to compose any error-correcting code together with an interactive oracle proof of proximity (IOPP) underlying existing MLPCS constructions, into a new MLPCS. The new MLPCS inherits its proving time from the code's encoding time, and its verification complexity from the underlying MLPCS. The composition is distinctive in that it is done purely on the information-theoretic side.
2. We apply the above methodology using an extremely efficient error-correcting code known as the Repeat-Accumulate-Accumulate (RAA) code. We give new asymptotic and concrete bounds, which demonstrate that (for sufficiently large message sizes) this code has a better encoding time vs. distance tradeoff than previous linear-time encodable codes that were considered in the literature.
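For intuition, a minimal Python sketch of generic Repeat-Accumulate-Accumulate encoding follows: repeat each message bit, permute, accumulate (prefix XOR), permute, and accumulate again. The repetition rate and the random permutations are illustrative; Blaze's concrete code parameters may differ.

```python
# Minimal sketch of Repeat-Accumulate-Accumulate (RAA) encoding over GF(2):
# repeat each message bit, permute, take a running XOR (accumulate), permute
# again, and accumulate once more.  Rate and permutations here are illustrative.
import random

def accumulate(bits):
    """Prefix XOR: out[i] = bits[0] ^ ... ^ bits[i] -- linear-time pass."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def raa_encode(msg, rate, perm1, perm2):
    repeated = [b for b in msg for _ in range(rate)]
    stage1 = accumulate([repeated[i] for i in perm1])
    stage2 = accumulate([stage1[i] for i in perm2])
    return stage2

random.seed(0)
k, rate = 8, 3
msg = [random.getrandbits(1) for _ in range(k)]
n = k * rate
perm1, perm2 = random.sample(range(n), n), random.sample(range(n), n)
codeword = raa_encode(msg, rate, perm1, perm2)
print(msg, "->", codeword)
```

Note that encoding is two linear passes plus two permutations, which is the linear-time property the abstract refers to.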
Zhengan Huang, Junzuo Lai, Gongxian Zeng, Jian Weng
ePrint Report
Many messaging platforms have integrated end-to-end (E2E) encryption into their services. This widespread adoption of E2E encryption has triggered a technical tension between user privacy and the moderation of illegal content. Existing solutions either support only one of unframeability and deniability, are prone to abuse (the moderator can perform content moderation on all messages, whether illegal or not), or lack mechanisms for retrospective content moderation.
To address the above issues, we introduce a new primitive called \emph{mild asymmetric message franking} (MAMF) to establish illegal-messages-only and retrospective content moderation for messaging systems, supporting unframeability and deniability simultaneously. We provide a framework to construct MAMF, leveraging two new building blocks, which might be of independent interest.
10 October 2024
Ph.D. Position on Post-Quantum Crypto-Biometrics with Hardware Security for Digital Identity Wallets
University of Seville (Microelectronics Institute of Seville), Spain
Job Posting
Related projects:
HighLoAwallet: Crypto-biometric techniques and hardware-enabled solutions for self-sovereign identity wallets with high level of assurance.
Hard-ID-wallet: Proof of concept of hardware security solutions required by the cryptography and biometrics of digital identity wallet.
VIPS-ID: Verifiable, Private, Distributed, and Secure Identification of People and Things.
LICORICE: reLIable and sCalable tOols foR self-sovereIgn identity and data proteCtion framework.
Period: 4 years. Applications are accepted until 20 October 2024; the position starts in December 2024.
If interested, please submit your application (CV and motivation) to, or contact: lumi@us.es (Prof. Iluminada Baturone) and arjona@imse-cnm.csic.es (Prof. Rosario Arjona)
Gross salary: 23,000 € (first year), 24,100 € (second and third years); the amount will be increased for the fourth year. Economic support is also provided for equipment, training programs, publications, conferences and research stays.
Key words: Cybersecurity, electronic transactions, digital identity, digital identity wallets, self-sovereign identities, privacy, crypto-biometrics, PUFs, hardware security, post-quantum cryptography, trusted execution, and mobile devices.
Objectives: Design of post-quantum crypto-biometric and hardware-enabled solutions to manage people's identities with a high level of assurance in the context of digital identity wallets. Technology transfer and research publications.
Profile: University degree with at least 300 ECTS credits, or a Master's, in the area of Physics, Mathematics, Computer Science, Electronics, Telecommunications, Cybersecurity or similar. Strong interest in cybersecurity from a hardware and crypto-biometric point of view. Programming skills. Good knowledge of mathematics (for cryptography) and of signal and image processing (for biometrics). High level of English. Self-motivation and commitment to do a Ph.D.
Closing date for applications:
Contact: If interested, please submit your application (CV and motivations) or contact to: lumi@us.es (Prof. Iluminada Baturone) and arjona@imse-cnm.csic.es (Prof. Rosario Arjona)
NUS-Singapore and the University of Sheffield, UK
Job Posting
We are offering fully funded Ph.D. opportunities at NUS-Singapore and the University of Sheffield, UK.
The candidates will have opportunities to work in both Singapore and Sheffield (UK).
Requirements for Ph.D. Position :
• Completed Master’s degree (or equivalent) at a top university in information security, computer science, applied mathematics, electrical engineering, or a similar area.
• Research experience (such as publishing papers as a first author in reputable venues)
• Self-motivated, reliable, creative, can work in a team and want to do excellent research on challenging scientific problems with practical relevance.
How to apply? Please send your CV with detailed information, along with three of your best papers, to the contact below.
Contact: Dr Prosanta Gope (p.gope@sheffield.ac.uk)
Closing date for applications:
Contact: Dr Prosanta Gope (p.gope@sheffield.ac.uk)
09 October 2024
Jinrong Chen, Yi Wang, Rongmao Chen, Xinyi Huang, Wei Peng
ePrint Report
In this work, we provide new, tighter proofs for the $T_{RH}$-transformation by Jiang et al. (ASIACRYPT 2023), which converts OW-CPA secure PKEs into KEMs with IND-1CCA security, a variant of typical IND-CCA security where only a single decapsulation query is allowed. Such KEMs are efficient and have been shown sufficient for real-world applications by Huguenin-Dumittan and Vaudenay at EUROCRYPT 2022. We reprove Jiang et al.'s $T_{RH}$-transformation in both the random oracle model (ROM) and the quantum random oracle model (QROM), for the case where the underlying PKE is rigid deterministic. In both the ROM and the QROM, our reductions achieve security loss factors of $\mathcal{O}(1)$, significantly improving Jiang et al.'s results, which have security loss factors of $\mathcal{O}(q)$ in the ROM and $\mathcal{O}(q^2)$ in the QROM, respectively. Notably, central to our tight QROM reduction is a new tool called "reprogram-after-measure", which overcomes the reduction loss posed by oracle reprogramming in QROM proofs. This technique may be of independent interest and useful for achieving tight QROM proofs for other post-quantum cryptographic schemes. We remark that our results also improve the reduction tightness of the $T_{H}$-transformation (which also converts PKEs to KEMs) by Huguenin-Dumittan and Vaudenay (EUROCRYPT 2022), as Jiang et al. provided a tight reduction from the $T_H$-transformation to the $T_{RH}$-transformation (ASIACRYPT 2023).
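As background, such PKE-to-KEM transformations for rigid deterministic PKEs generally hash the message together with the ciphertext and re-encrypt during decapsulation. The Python skeleton below sketches only this general shape; the precise hashing and rejection behaviour of Jiang et al.'s $T_{RH}$ may differ, and the enc/dec callables are placeholders rather than a concrete scheme.

```python
# Skeleton of the general hash-then-re-encrypt shape of PKE-to-KEM transforms
# for rigid deterministic PKEs.  Background only: the exact hashing and
# rejection behaviour of Jiang et al.'s T_RH may differ, and enc/dec below are
# placeholder callables for a deterministic PKE, not a concrete scheme.
import hashlib
import secrets

def H(*parts: bytes) -> bytes:
    h = hashlib.sha3_256()
    for p in parts:
        h.update(len(p).to_bytes(4, "big") + p)
    return h.digest()

def encaps(pk, enc):
    m = secrets.token_bytes(32)       # random message
    c = enc(pk, m)                    # deterministic encryption
    return c, H(m, c)                 # shared key binds message and ciphertext

def decaps(pk, sk, c, enc, dec):
    m = dec(sk, c)
    if m is None or enc(pk, m) != c:  # rigidity / re-encryption check
        return None                   # this sketch uses explicit rejection
    return H(m, c)
```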
Abhiram Kothapalli, Srinath Setty
ePrint Report
We introduce NeutronNova, a new folding scheme for the zero-check relation: an instance-witness pair is in the zero-check relation if a corresponding multivariate polynomial evaluates to zero for all inputs over a suitable Boolean hypercube. The folding scheme is a two-round protocol, and it internally invokes a \emph{single} round of the sum-check protocol. The folding scheme is more efficient than prior state-of-the-art schemes and directly benefits from recent improvements to the sum-check prover. The prover’s work is the cost to commit to a witness and field operations in a single round of the sum-check protocol. So, if the witness contains "small" field elements, the prover only commits to "small" field elements. The verifier’s work is a constant number of group scalar multiplications, field operations, and hash computations. Moreover, the folding scheme generalizes to fold multiple instances at once and requires only $\log{n}$ rounds of the sum-check protocol, where $n$ is the number of instances folded.
As a corollary, we provide a folding scheme for any relation $\mathcal{R}$ for which there is a reduction of knowledge (RoK) from $\mathcal{R}$ to one or more instance-witness pairs in the zero-check relation. Such RoKs appear implicitly in prior lookup arguments (e.g., Lasso) and high-performance zkVMs for RISC-V (e.g., Jolt). We formalize these RoKs for several relations including indexed lookups, grand products, and CCS (which generalizes Plonkish, AIR, and R1CS). These are simple and constant round RoKs that leverage interaction to perform randomized checks on committed witness elements. Instead of proving these resulting zero-check instances as is done in prior proof systems such as Jolt, NeutronNova provides the more efficient option of continual folding of zero-check instances into a single running instance.
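For context, the sum-check building block that NeutronNova folds can be illustrated in isolation: in one round the prover sends a degree-1 restriction of the claimed sum, the verifier checks consistency, samples a challenge, and both sides fold the evaluation table. The Python toy below (illustrative field and parameters, not the paper's protocol) shows a single such round for a multilinear polynomial given by its hypercube evaluations.

```python
# A toy single round of the sum-check protocol for a multilinear polynomial
# given by its evaluation table over the Boolean hypercube (illustrative only;
# NeutronNova's zero-check folding wraps such a round inside a folding scheme).
import random

P = 2**61 - 1                      # a toy prime field

def sumcheck_round(evals, claim):
    """One round: prover sends the degree-1 restriction, verifier folds."""
    half = len(evals) // 2
    lo, hi = evals[:half], evals[half:]          # first variable = 0 / = 1
    g_at_0, g_at_1 = sum(lo) % P, sum(hi) % P    # the univariate g1(0), g1(1)
    assert (g_at_0 + g_at_1) % P == claim        # verifier's consistency check
    r = random.randrange(P)                      # verifier's random challenge
    folded = [(l + r * (h - l)) % P for l, h in zip(lo, hi)]
    new_claim = (g_at_0 + r * (g_at_1 - g_at_0)) % P
    return folded, new_claim, r

random.seed(0)
v = 3
evals = [random.randrange(P) for _ in range(1 << v)]
claim = sum(evals) % P
folded, claim, _ = sumcheck_round(evals, claim)
assert sum(folded) % P == claim                  # invariant for the next round
```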
Arasu Arun, Srinath Setty
ePrint Report
Folding schemes enable prover-efficient incrementally verifiable computation (IVC), where a proof is generated step-by-step, resulting in a space-efficient prover that naturally supports continuations. These attributes make them a promising choice for proving long-running machine executions (popularly, "zkVMs"). A major problem is designing an efficient read-write memory. Another challenge is the overhead incurred by unused machine instructions when incrementally proving a program execution step.
Nebula addresses these with new techniques that can be paired with modern folding schemes. First, we introduce commitment-carrying IVC, where a proof carries an incremental commitment to the prover’s non-deterministic advice provided at different steps. Second, we show how this unlocks efficient read-write memory (which implies indexed lookups) with a cost profile identical to that of non-recursive arguments. Third, we provide a new universal "switchboard" circuit construction that combines circuits of different instructions such that one can "turn off" uninvoked circuit elements and constraints, offering a new way to achieve pay-per-use prover costs.
We implement a prototype of a Nebula-based zkVM for the Ethereum virtual machine (EVM). We find that Nebula’s techniques qualitatively provide a $30\times$ smaller constraint system to represent the EVM over standard memory-checking techniques, and lead to over $260\times$ faster proof generation for the standard ERC20 token transfer transaction.
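The switchboard idea can be caricatured as selector-gated constraints: each instruction's constraints are multiplied by a selector bit, and exactly one selector is active per step, so the constraints of uninvoked instructions are trivially satisfied. The Python toy below shows only this gating pattern; Nebula's actual construction and its pay-per-use commitment costs are more involved, and the toy VM and names are hypothetical.

```python
# Conceptual sketch of the "switchboard" idea (not Nebula's actual circuit):
# per-instruction constraints are multiplied by a selector bit, so constraints
# of uninvoked instructions are trivially satisfied.
def check_step(selectors, instruction_constraints, witness):
    """selectors: one bit per instruction; exactly one must be set."""
    assert all(s in (0, 1) for s in selectors) and sum(selectors) == 1
    for sel, constraints in zip(selectors, instruction_constraints):
        for constraint in constraints:
            # gated constraint: sel * constraint(witness) must equal 0
            assert sel * constraint(witness) == 0

# Hypothetical toy VM with two instructions: ADD (c = a + b) and MUL (c = a * b).
add_constraints = [lambda w: w["a"] + w["b"] - w["c"]]
mul_constraints = [lambda w: w["a"] * w["b"] - w["c"]]
witness = {"a": 3, "b": 4, "c": 7}
check_step([1, 0], [add_constraints, mul_constraints], witness)   # ADD step passes
```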
Changcun Wang, Zhaopeng Dai
ePrint Report
Multiple matrix congruential generators are an important class of pseudorandom number generators. This paper studies the predictability of a class of truncated multiple matrix congruential generators with unknown parameters. Given a few truncated digits of the high-order or low-order bits output by a multiple matrix congruential generator, we give a method based on lattice reduction to recover the parameters and the initial state of the generator.
Jiaqi Cheng, Rishab Goyal
ePrint Report
We design a generic compiler to boost any non-trivial succinct non-interactive argument of knowledge (SNARK) to full succinctness. Our results come in two flavors:
For any constant $\epsilon > 0$, any SNARK with proof size $|\pi| < \frac{|\omega|}{\lambda^\epsilon} + \mathsf{poly}(\lambda, |x|)$ can be upgraded to a fully succinct SNARK, where all system parameters (such as proof/CRS sizes and setup/verifier run-times) grow as fixed polynomials in $\lambda$, independent of witness size.
Under an additional assumption that the underlying SNARK has an \emph{efficient} knowledge extractor, we further improve our result to upgrade any non-trivial SNARK. For example, we show how to design fully succinct SNARKs from SNARKs with proofs of length $|\omega| - \Omega(\lambda)$, or $\frac{|\omega|}{1 + \epsilon} + \mathsf{poly}(\lambda, |x|)$ for any constant $\epsilon > 0$.
Our result reduces the long-standing challenge of designing fully succinct SNARKs to designing \emph{arguments of knowledge that beat the trivial construction}. It also establishes optimality of rate-1 arguments of knowledge (such as NIZKs [Gentry-Groth-Ishai-Peikert-Sahai-Smith; JoC'15] and BARGs [Devadas-Goyal-Kalai-Vaikuntanathan, Paneth-Pass; FOCS'22]), and suggests any further improvement is tantamount to designing fully succinct SNARKs, thus requires bypassing established black-box barriers [Gentry-Wichs; STOC'11].
Leah Namisa Rosenbloom
ePrint Report
This paper extends the dialogue of "The Moral Character of Cryptographic Work" (Rogaway, 2015) and "Crypto for the People" (Kamara, 2020) by examining the relationship between cryptography and collective power. In particular, it considers cryptography in the context of grassroots organizing—a process by which marginalized people build collective power toward effecting systemic change—and illustrates the ways in which cryptography has both helped and hindered organizing efforts. Based on the synthesis of dozens of qualitative studies, scholarly critiques, and historical artifacts, this work introduces a paradigm shift for cryptographic protocol design—general principles and recommendations for building cryptography to address the lived needs and experiences of marginalized people. Finally, it calls for abolition cryptography: cryptographic theories and practices which dismantle harmful systems and replace them with systems that sustain human lives and livelihoods.