IACR News
Here you can see all recent updates to the IACR webpage.
09 October 2024
Abhiram Kothapalli, Srinath Setty
ePrint Report
We introduce NeutronNova, a new folding scheme for the zero-check relation: an instance-witness pair is in the zero-check relation if a corresponding multivariate polynomial evaluates to zero for all inputs over a suitable Boolean hypercube. The folding scheme is a two-round protocol, and it internally invokes a \emph{single} round of the sum-check protocol. The folding scheme is more efficient than prior state-of-the-art schemes and directly benefits from recent improvements to the sum-check prover. The prover’s work is the cost to commit to a witness and field operations in a single round of the sum-check protocol. So, if the witness contains "small" field elements, the prover only commits to "small" field elements. The verifier’s work is a constant number of group scalar multiplications, field operations, and hash computations. Moreover, the folding scheme generalizes to fold multiple instances at once and requires only $\log{n}$ rounds of the sum-check protocol, where $n$ is the number of instances folded.
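For intuition, the following minimal Python sketch (ours, not part of the paper) spells out membership in the zero-check relation as defined above: a multivariate polynomial, here a Python callable over a toy prime field, must vanish at every point of the Boolean hypercube.

from itertools import product

P = 2**61 - 1  # toy prime field modulus, chosen only for illustration

def in_zero_check_relation(g, n):
    """Naive O(2^n) membership test: g must vanish on all of {0,1}^n."""
    return all(g(*x) % P == 0 for x in product((0, 1), repeat=n))

# Example: g(x0, x1) = x0*(x0-1) + x1*(x1-1) is not the zero polynomial,
# yet it vanishes at every Boolean point, so the instance is in the relation.
g = lambda x0, x1: (x0 * (x0 - 1) + x1 * (x1 - 1)) % P
assert in_zero_check_relation(g, 2)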
As a corollary, we provide a folding scheme for any relation $\mathcal{R}$ for which there is a reduction of knowledge (RoK) from $\mathcal{R}$ to one or more instance-witness pairs in the zero-check relation. Such RoKs appear implicitly in prior lookup arguments (e.g., Lasso) and high-performance zkVMs for RISC-V (e.g., Jolt). We formalize these RoKs for several relations including indexed lookups, grand products, and CCS (which generalizes Plonkish, AIR, and R1CS). These are simple, constant-round RoKs that leverage interaction to perform randomized checks on committed witness elements. Instead of proving these resulting zero-check instances as is done in prior proof systems such as Jolt, NeutronNova provides the more efficient option of continually folding zero-check instances into a single running instance.
Arasu Arun, Srinath Setty
ePrint Report
Folding schemes enable prover-efficient incrementally verifiable computation (IVC), where a proof is generated step-by-step, resulting in a space-efficient prover that naturally supports continuations. These attributes make them a promising choice for proving long-running machine executions (popularly, "zkVMs"). A major problem is designing an efficient read-write memory. Another challenge is the overhead incurred by unused machine instructions when incrementally proving a program execution step.
Nebula addresses these with new techniques that can be paired with modern folding schemes. First, we introduce commitment-carrying IVC, where a proof carries an incremental commitment to the prover’s non-deterministic advice provided at different steps. Second, we show how this unlocks efficient read-write memory (which implies indexed lookups) with a cost-profile identical to that of non-recursive arguments. Third, we provide a new universal "switchboard" circuit construction that combines circuits of different instructions such that one can "turn off" uninvoked circuit elements and constraints, offering a new way to achieve pay-per-use prover costs.
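A hypothetical toy, assuming a one-hot selector per step (our own simplification, not Nebula's actual circuit), illustrates the "switchboard" idea: every instruction contributes a constraint, but selector bits gate the constraints so only the invoked instruction is enforced.

P = 101  # tiny prime field for illustration

def switchboard_constraints(selectors, state, instruction_constraints):
    """Return the gated constraint values s_i * c_i(state) mod P; all must be zero.
    The selectors must be Boolean and sum to one (exactly one invoked instruction)."""
    assert all(s * (s - 1) % P == 0 for s in selectors)   # each s_i is a bit
    assert sum(selectors) % P == 1                        # one-hot selection
    return [(s * c(state)) % P for s, c in zip(selectors, instruction_constraints)]

# Two toy "instructions": ADD checks z = x + y, MUL checks z = x * y.
add_c = lambda st: (st["x"] + st["y"] - st["z"]) % P
mul_c = lambda st: (st["x"] * st["y"] - st["z"]) % P

state = {"x": 3, "y": 4, "z": 7}
print(switchboard_constraints([1, 0], state, [add_c, mul_c]))  # [0, 0]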
We implement a prototype of a Nebula-based zkVM for the Ethereum virtual machine (EVM). We find that Nebula’s techniques qualitatively provide a $30\times$ smaller constraint system to represent the EVM over standard memory-checking techniques, and lead to over $260\times$ faster proof generation for the standard ERC20 token transfer transaction.
Changcun Wang, Zhaopeng Dai
ePrint Report
Multiple matrix congruential generators are an important class of pseudorandom number generators. This paper studies the predictability of a class of truncated multiple matrix congruential generators with unknown parameters. Given a few truncated digits of the high-order bits or low-order bits output by a multiple matrix congruential generator, we give a method based on lattice reduction to recover the parameters and the initial state of the generator.
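As a point of reference, here is a toy truncated matrix congruential generator in Python; the single-matrix recurrence and the number of revealed bits are our own illustrative assumptions, not the exact family analyzed in the paper.

import random

# Toy recurrence: x_{i+1} = A x_i mod m; only the top few bits are output.
m = 2**32
A = [[random.randrange(m) for _ in range(2)] for _ in range(2)]
x = [random.randrange(m) for _ in range(2)]

def step(A, x, m):
    return [sum(A[r][c] * x[c] for c in range(2)) % m for r in range(2)]

def truncate_high(v, keep_bits=8, word_bits=32):
    # Reveal only the top `keep_bits` bits of each coordinate.
    return [vi >> (word_bits - keep_bits) for vi in v]

outputs = []
for _ in range(5):
    x = step(A, x, m)
    outputs.append(truncate_high(x))
print(outputs)  # an attacker observes only these truncated digits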
Jiaqi Cheng, Rishab Goyal
ePrint Report
We design a generic compiler to boost any non-trivial succinct non-interactive argument of knowledge (SNARK) to full succinctness. Our results come in two flavors:
For any constant $\epsilon > 0$, any SNARK with proof size $|\pi| < \frac{|\omega|}{\lambda^\epsilon} + \mathsf{poly}(\lambda, |x|)$ can be upgraded to a fully succinct SNARK, where all system parameters (such as proof/CRS sizes and setup/verifier run-times) grow as fixed polynomials in $\lambda$, independent of witness size.
Under an additional assumption that the underlying SNARK has an \emph{efficient} knowledge extractor, we further improve our result to upgrade any non-trivial SNARK. For example, we show how to design fully succinct SNARKs from SNARKs with proofs of length $|\omega| - \Omega(\lambda)$, or $\frac{|\omega|}{1 + \epsilon} + \mathsf{poly}(\lambda, |x|)$ for any constant $\epsilon > 0$.
Our result reduces the long-standing challenge of designing fully succinct SNARKs to designing \emph{arguments of knowledge that beat the trivial construction}. It also establishes optimality of rate-1 arguments of knowledge (such as NIZKs [Gentry-Groth-Ishai-Peikert-Sahai-Smith; JoC'15] and BARGs [Devadas-Goyal-Kalai-Vaikuntanathan, Paneth-Pass; FOCS'22]), and suggests that any further improvement is tantamount to designing fully succinct SNARKs, and thus requires bypassing established black-box barriers [Gentry-Wichs; STOC'11].
Leah Namisa Rosenbloom
ePrint Report
This paper extends the dialogue of "The Moral Character of Cryptographic Work" (Rogaway, 2015) and "Crypto for the People" (Kamara, 2020) by examining the relationship between cryptography and collective power. In particular, it considers cryptography in the context of grassroots organizing—a process by which marginalized people build collective power toward effecting systemic change—and illustrates the ways in which cryptography has both helped and hindered organizing efforts. Based on the synthesis of dozens of qualitative studies, scholarly critiques, and historical artifacts, this work introduces a paradigm shift for cryptographic protocol design—general principles and recommendations for building cryptography to address the lived needs and experiences of marginalized people. Finally, it calls for abolition cryptography: cryptographic theories and practices which dismantle harmful systems and replace them with systems that sustain human lives and livelihoods.
Daniel Collins, Yuval Efron, Jovan Komatovic
ePrint Report
It is well known that a trusted setup allows one to solve the Byzantine agreement problem in the presence of $t < n/2$ corruptions, bypassing the setup-free $t < n/3$ barrier. However, most such protocols lose all security guarantees once the underlying cryptography or setup is compromised, trading higher corruption resilience for stronger assumptions. Is this trade-off necessary?
We further the study of crypto-agnostic Byzantine agreement among $n$ parties, answering this question in the negative. Specifically, let $t_s$ and $t_i$ denote two parameters such that (1) $2t_i + t_s < n$, and (2) $t_i \leq t_s < n/2$. Crypto-agnostic Byzantine agreement ensures agreement among honest parties if (1) the adversary is computationally bounded and corrupts up to $t_s$ parties, or (2) the adversary is computationally unbounded and corrupts up to $t_i$ parties, and is moreover given all secrets of all parties established during the setup. We propose a compiler that transforms any pair of resilience-optimal Byzantine agreement protocols in the authenticated and information-theoretic settings into one that is crypto-agnostic. Our compiler has several attractive qualities, including using only $O(\lambda n^2)$ bits over the two underlying Byzantine agreement protocols, and preserving round and communication complexity in the authenticated setting. In particular, our results improve the state-of-the-art in bit complexity by at least two factors of $n$ and provide either early stopping (deterministic) or expected constant round complexity (randomized). We therefore provide fallback security for authenticated Byzantine agreement for free for $t_i \leq n/4$.
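A small Python check of the threshold conditions stated above, for concreteness:

def valid_thresholds(n, t_s, t_i):
    # Conditions from the abstract: 2*t_i + t_s < n and t_i <= t_s < n/2.
    return 2 * t_i + t_s < n and t_i <= t_s and 2 * t_s < n

print(valid_thresholds(n=10, t_s=4, t_i=2))  # True
print(valid_thresholds(n=10, t_s=4, t_i=3))  # False: 2*3 + 4 = 10 is not < 10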
Mingxun Zhou, Elaine Shi, Giulia Fanti
ePrint Report
We propose a new private Approximate Nearest Neighbor (ANN) search scheme named Pacmann that allows a client to perform ANN search in a vector database without revealing the query vector to the server. Unlike prior constructions that run encrypted search on the server side, Pacmann carefully offloads limited computation and storage to the client, no longer requiring computationally-intensive cryptographic techniques. Specifically, clients run a graph-based ANN search, where in each hop on the graph, the client privately retrieves local graph information from the server. To make this efficient, we combine two ideas: (1) we adapt a leading graph-based ANN search algorithm to be compatible with private information retrieval (PIR) for subgraph retrieval; (2) we use a recent class of PIR schemes that trade offline preprocessing for online computational efficiency. Pacmann achieves significantly better search quality than the state-of-the-art private ANN search schemes, showing up to 2.5$\times$ better search accuracy on real-world datasets than prior work and reaching 90\% of the quality of a state-of-the-art non-private ANN algorithm. Moreover, on large datasets with up to 100 million vectors, Pacmann shows better scalability than prior private ANN schemes, with up to 2.6$\times$ reduction in computation time and 1.3$\times$ reduction in overall latency.
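A hedged sketch of the client-side loop described above (not Pacmann's actual code): greedy graph search in which each hop privately retrieves one node's local graph information; pir_fetch is a hypothetical placeholder for the PIR query.

import math

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def private_greedy_ann(query, entry_id, pir_fetch, max_hops=50):
    # pir_fetch(node_id) privately returns (node_vector, [(nbr_id, nbr_vector), ...]).
    current_vec, neighbors = pir_fetch(entry_id)
    best_id, best_dist = entry_id, l2(query, current_vec)
    for _ in range(max_hops):
        if not neighbors:
            break
        cand_id, cand_vec = min(neighbors, key=lambda nv: l2(query, nv[1]))
        cand_dist = l2(query, cand_vec)
        if cand_dist >= best_dist:
            break  # no closer neighbor: a local optimum of the graph search
        best_id, best_dist = cand_id, cand_dist
        _, neighbors = pir_fetch(cand_id)  # one private subgraph retrieval per hop
    return best_id, best_dist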
Bar Alon, Amos Beimel, Or Lasri
ePrint Report
We consider three related cryptographic primitives: private information retrieval (PIR) protocols, conditional disclosure of secrets (CDS) protocols, and secret-sharing schemes; these primitives have many applications in cryptography. We study these primitives requiring information-theoretic security. The complexity of these primitives has been dramatically improved in the last few years and they are closely related, e.g., the 2-server PIR protocol of Dvir and Gopi (J. ACM 2016) was transformed to construct the CDS protocols of Liu, Vaikuntanathan, and Wee (CRYPTO 2017, Eurocrypt 2018), and these CDS protocols are the main ingredient in the construction of the best known secret-sharing schemes.
To date, the message size required in PIR and CDS protocols and the share size required in secret-sharing schemes are not well understood, and there are large gaps between their upper bounds and lower bounds. The goal of this paper is to better understand the upper bounds by simplifying current constructions and improving their complexity.
We obtain the following two independent results:
- We simplify, abstract, and generalize the 2-server PIR protocol of Dvir and Gopi (J. ACM 2016) and the 2-server and multi-server CDS protocols of Liu et al. (CRYPTO 2017, Eurocrypt 2018) and Beimel, Farràs, and Lasri (TCC 2023). This is done by considering a new variant of matching vectors and by using a general share conversion. In addition to simplifying previous protocols, our protocols can use matching vectors over any $m$ that is a product of two distinct primes. Our construction does not improve the communication complexity of PIR and CDS protocols; however, constructions of better matching vectors over any $m$ that is a product of two distinct primes will improve their communication complexity.
- In many applications of secret-sharing schemes it is important that the scheme is linear, e.g., by using the fact that parties can locally add shares of two secrets and obtain shares of the sum of the secrets. We provide a construction of linear secret-sharing schemes for $n$-party access structures with improved share size of $2^{0.7563n}$. Previously, the best share size for linear secret-sharing schemes was $2^{0.7576n}$, and it is known that for most $n$-party access structures the share size is at least $2^{0.5n}$. This result is achieved by a reduction to unbalanced CDS protocols (compared to balanced CDS protocols in previous constructions).
Sulaiman Alhussaini, Sergeĭ Sergeev
ePrint Report
Recently, a more efficient attack on the initial tropical Stickel protocol has been proposed, different from the previously known Kotov-Ushakov attack, yet equally guaranteed to succeed. Given that the Stickel protocol can be implemented in various ways, such as utilizing platforms beyond the tropical semiring or employing alternative commutative matrix ``classes'' instead of polynomials, we first explore the generalizability of this new attack across different implementations of the Stickel protocol. We then conduct a comprehensive security analysis of a tropical variant that successfully resists this new attack, namely the Stickel protocol based on Linde-de la Puente (LdlP) matrices. Additionally, we extend the concept of LdlP matrices beyond the tropical semiring, generalizing it to a broader class of semirings.
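For background, the tropical (min-plus) semiring underlying these protocols replaces addition by min and multiplication by ordinary addition; the sketch below shows tropical matrix multiplication only, not the attack itself.

def trop_matmul(A, B):
    # Tropical product: C[i][j] = min_t (A[i][t] + B[t][j]).
    n, k, m = len(A), len(B), len(B[0])
    return [[min(A[i][t] + B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

A = [[0, 3], [2, 1]]
B = [[1, 5], [0, 2]]
print(trop_matmul(A, B))  # [[1, 5], [1, 3]]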
Sam Gunn, Xuandong Zhao, Dawn Song
ePrint Report
We present the first undetectable watermarking scheme for generative image models. Undetectability ensures that no efficient adversary can distinguish between watermarked and un-watermarked images, even after making many adaptive queries. In particular, an undetectable watermark does not degrade image quality under any efficiently computable metric. Our scheme works by selecting the initial latents of a diffusion model using a pseudorandom error-correcting code (Christ and Gunn, 2024), a strategy which guarantees undetectability and robustness.
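A hedged sketch of the high-level idea, under our own simplifying assumption that the codeword bits are embedded in the signs of the initial Gaussian latent; the PRC encoder/decoder and the diffusion model are placeholders and not shown.

import numpy as np

def watermarked_initial_latent(prc_codeword_bits, shape, rng):
    # Magnitudes are Gaussian; the sign pattern carries the (pseudorandom) codeword.
    noise = np.abs(rng.standard_normal(shape))
    signs = 2 * np.asarray(prc_codeword_bits, dtype=float).reshape(shape) - 1
    return signs * noise

def extract_sign_pattern(recovered_latent):
    # After inverting the diffusion process (not shown), read back the signs and
    # hand the noisy bits to the PRC decoder (not shown).
    return (recovered_latent > 0).astype(int).ravel()

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=4 * 4)       # stand-in for a PRC codeword
z0 = watermarked_initial_latent(bits, (4, 4), rng)
assert (extract_sign_pattern(z0) == bits).all()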
We experimentally demonstrate that our watermarks are quality-preserving and robust using Stable Diffusion 2.1. Our experiments verify that, in contrast to every prior scheme we tested, our watermark does not degrade image quality. Our experiments also demonstrate robustness: existing watermark removal attacks fail to remove our watermark from images without significantly degrading the quality of the images. Finally, we find that we can robustly encode 512 bits in our watermark, and up to 2500 bits when the images are not subjected to watermark removal attacks. Our code is available at https://github.com/XuandongZhao/PRC-Watermark.
Jonathan Katz, Ben Sela
ePrint Report
Certified deletion, an inherently quantum capability, allows a party holding a quantum state to prove that they have deleted the information contained in that state. Bartusek and Raizes recently studied certified deletion in the context of secret sharing schemes, and showed constructions with privately verifiable proofs of deletion that can be verified only by the dealer who generated the shares. We give two constructions of secret sharing schemes with publicly verifiable certified deletion. Our first construction is based on the post-quantum security of the LWE problem, and each share requires a number of qubits that is linear in the size of an underlying classical secret sharing scheme for the same set of authorized parties. Our second construction is based on a more general assumption—the existence of post-quantum one-way functions—but requires an asymptotically larger number of qubits relative to the share size of the underlying classical scheme.
Yanpei Guo, Xuanming Liu, Kexi Huang, Wenjie Qu, Tianyang Tao, Jiaheng Zhang
ePrint Report
This work presents Deepfold, a novel multilinear polynomial commitment scheme (PCS) based on Reed-Solomon codes that offers optimal prover time and a more concise proof size. For the first time, Deepfold adapts the FRI-based multilinear PCS to the list decoding radius setting, requiring significantly fewer query repetitions and thereby achieving a 3× reduction in proof size compared to Basefold (Crypto'24), while preserving its advantages in prover time. Compared with PolyFRIM (USENIX Security'24), Deepfold achieves a 2× improvement in prover time, verifier time, and proof size. Another contribution of this work is a batch evaluation scheme, which enables the FRI-based multilinear PCS to handle polynomials encoded from inputs of arbitrary length without additional padding overhead.
Our scheme has broad applications in zk-SNARKs, since a PCS is a key component in modern zk-SNARK constructions. For example, when replacing the PCS component of Virgo (S&P'20) with Deepfold, our scheme achieves a 2.5× faster prover time when proving the knowledge of a Merkle tree with 256 leaves, while maintaining a similar proof size. When replacing the PCS component of HyperPlonk (Eurocrypt'23) with Deepfold, our scheme has about 3.6× faster prover time. Additionally, when applying our arbitrary-length-input commitment to verifiable matrix multiplications for matrices of size 1200×768 and 768×2304, which are actual use cases in the GPT-2 model, the performance showcases a 2.4× reduction in prover time compared to previous approaches.
Ximing Fu, Mo Li, Shihan Lyu, Chuanyi Liu
ePrint Report
We introduce a powerful attack, termed the bit-fixing correlation attack, on Goldreich's pseudorandom generators (PRGs), specifically focusing on those based on the $\mathsf{XOR}\text{-}\mathsf{THR}$ predicate. By exploiting the bit-fixing correlation property, we derive correlation equations with high bias by fixing certain bits. Utilizing two solvers to handle these high-bias correlation equations, we present inverse attacks on $\mathsf{XOR}\text{-}\mathsf{THR}$ based PRGs within the complexity class $\mathsf{NC}^{0}$.
We efficiently attack the $\mathsf{XOR}\text{-}\mathsf{MAJ}$ challenges (STOC 2016), demonstrating that the $\mathsf{XOR}\text{-}\mathsf{MAJ}$ predicate fails to be $s$-pseudorandom with $n$-bit security even for a stretch factor $s = 1$, where $n$ is the size of the secret seed. For instance, a challenge of $n=42$ and $s = 1$ can be broken using approximately $2^{28}$ calls to Gaussian elimination. We extend our attack to an instance used in constructing silent Oblivious Transfer (OT) protocols (Eurocrypt 2024), with $n=256$. This attack can be realized with approximately $2^{29}$ calls to Gaussian elimination, and we have implemented this attack on a cluster of 32 CPU cores, successfully recovering the secret seed in 5.5 hours. Furthermore, we extend our results to general Piecewise Symmetric Predicates of the form $\mathsf{XOR}\text{-}\mathsf{X}$, showing that $\mathsf{XOR}\text{-}\mathsf{MAJ}$ is far from a well-designed predicate against the bit-fixing correlation attack.
With marginal modification, our attack can also be adapted to the FiLIP cipher instantiated with $\mathsf{THR}$-related predicates, making it effective against most FiLIP instances. For example, the FiLIP cipher instantiated on $\mathsf{XOR} \text{-}\mathsf{THR}$ with key size $982$ can be broken using approximately $2^{51}$ calls to Gaussian elimination.
Based on these findings, we show that the traditional security assumptions for Goldreich's PRGs---namely, (a) $\Omega(s)$-resilience and (b) algebraic immunity---are insufficient to guarantee pseudorandomness or one-wayness.
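For concreteness, the predicates referenced above can be written out as follows; the specific split (a, b, t) of XOR and threshold inputs used in any given instance is an assumption here.

def xor_thr(bits, a, t):
    # XOR of the first a bits, XORed with a threshold of the remaining bits;
    # XOR-MAJ is the special case where t is the majority threshold.
    xor_part = 0
    for x in bits[:a]:
        xor_part ^= x
    thr_part = 1 if sum(bits[a:]) >= t else 0
    return xor_part ^ thr_part

def goldreich_prg_bit(seed, indices, a, t):
    # One output bit of a Goldreich-style local PRG: apply the predicate to
    # the seed bits selected by a fixed index tuple.
    return xor_thr([seed[i] for i in indices], a, t)

seed = [1, 0, 1, 1, 0, 0, 1, 0]
print(goldreich_prg_bit(seed, indices=[0, 2, 4, 5, 6], a=2, t=2))  # 0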
Chen-Da Liu-Zhang, Christopher Portmann, Guilherme Rito
ePrint Report
Cryptography's most common use is secure communication---e.g. Alice can use encryption to hide the contents of the messages she sends to Bob (confidentiality) and can use signatures to assure Bob she sent these messages (authenticity). While one typically considers stateless security guarantees---for example a channel that Alice can use to send messages securely to Bob---one can also consider stateful ones---e.g. an interactive conversation between Alice, Bob and their friends where participation is dynamic: new parties can join the conversation and existing ones can leave. A natural application of such stateful guarantees is messengers.
We introduce a modular abstraction for stateful group communication, called Chat Sessions, which captures security guarantees that are achievable in fully asynchronous settings when one makes no party-honesty assumptions: anyone (including group members themselves) can be fully dishonest. Our abstraction is parameterized by (and enforces) a permissions policy that defines what operations parties have the right to perform in a given chat state. We show how to construct, use and extend Chat Sessions.
Our construction is fully decentralized (in particular, it does not require a delivery service), does not incur additional interaction between chat participants (other than what is inherent from chat operations like sending a message), and its liveness depends solely on messages being delivered.
A key feature of Chat Sessions is modularity: we extend Chat Sessions to capture authenticity, confidentiality, anonymity and off-the-record, and show our construction provides these guarantees if the underlying communication channels do too. We complement this by proving Maurer et al.'s Multi-Designated Receiver Public Key Encryption scheme (Eurocrypt '22) constructs matching communication channels (i.e. with all these guarantees).
We use Chat Sessions to construct UatChat: a simple and equally modular messaging application. Since UatChat preserves each of the guarantees mentioned above, this means we give the first fully Off-The-Record messaging application: parties can plausibly deny not only having sent any messages, but even being aware of a chat's existence.
Steve Thakur
ePrint Report
We describe a fully distributed KZG-based Snark instantiable with any pairing-friendly curve with a sufficiently large scalar field. In particular, the proof system is compatible with Cocks-Pinch or Brezing-Weng outer curves to the widely used curves such as secp256k1, Ed25519, BLS12-381 and BN254.
This allows us to retain the fully parallelizable nature and the O(1) communication complexity of Pianist ([LXZ+23]) in conjunction with circumventing the huge overhead of non-native arithmetic for prominent use cases such as scalar multiplications and/or pairings for Bitcoin (secp256k1), Cosmos (Ed25519) and Ethereum PoS (BLS12-381) signatures.
As in [LXZ+23], we use a bivariate KZG polynomial commitment scheme, which entails a universal updatable CRS linear in the circuit size. The proof size is constant, as are the verification time - dominated by three pairings - and the communication complexity between the Prover machines. With a 9-limb pairing-friendly outer curve to Ed25519, the proof size is 5 KB. With the same curve, the communication complexity for each worker node is 5 KB and that of the master node is 5 KB per machine.
The effective Prover time for a circuit of size T·M on M machines is O(T·log(T) + M·log(M)). The work of each Prover machine is dominated by the MSMs of length T in the group G1 and a single sum of univariate polynomial products computed via multimodular FFTs of size 2T. Likewise, the work of the master node is dominated by the MSMs of length M in the group G1 and a single sum of univariate polynomial products via multimodular FFTs of size 2M.
Weihao Bai, Long Chen, Qianwen Gao, Zhenfeng Zhang
ePrint Report
The MPC-in-the-Head framework has been proposed as a solution for Non-Interactive Zero-Knowledge Arguments of Knowledge (NIZKAoK) due to its efficient proof generation. However, most existing NIZKAoK constructions using this approach require multiple MPC evaluations to achieve negligible soundness error, resulting in proof size and time that are asymptotically at least λ times the size of the circuit of the NP relation. In this paper, we propose a novel method to eliminate the need for repeated MPC evaluations, resulting in a NIZKAoK protocol for any NP relation that we call Diet. The proof size and time of Diet are asymptotically only polylogarithmic with respect to the size of the circuit C of the NP relation but are independent of the security parameter λ. Hence, both the proof size and time can be significantly reduced.
Moreover, Diet offers promising concrete efficiency for proving Learning With Errors (LWE) problems and its variants. Our solution provides significant advantages over other schemes in terms of both proof size and proof time, when considering both factors together. Specifically, Diet is a promising method for proving knowledge of secret keys for lattice-based key encapsulation mechanisms (KEMs) such as Frodo and Kyber, offering a practical solution to future post-quantum certificate management. For Kyber 512, our implementation achieves an online proof size of 83.65 kilobytes (KB) with a preprocessing overhead of 152.02KB. The implementation is highly efficient, with an online proof time of only 0.68 seconds and a preprocessing time of 0.81 seconds. Notably, our approach provides the first reported implementation of proving knowledge of secret keys for Kyber 512 using post-quantum primitives-based zero-knowledge proofs.
08 October 2024
Benjamin Hansen Mortensen, Mathias Karsrud Nordal, Martin Strand
ePrint Report
Vessels can be recognised by their navigation radar due to the characteristics of the emitted radar signal. This is particularly useful if one wants to build situational awareness without revealing one's own presence. Most countries maintain databases of radar fingerprints but will not readily share these due to national security regulations. Sharing of such information will generally require some form of information exchange agreement.
However, all parties in a coalition benefit from correct identification. We use secure multiparty computation to match a radar signal measurement against secret databases and output plausible matches with their likelihoods. We also provide a demonstrator using MP-SPDZ.
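A plaintext reference of the matching logic described above (the paper runs this computation under MPC with MP-SPDZ; the Gaussian likelihood model here is our own illustrative assumption).

import math

def likelihoods(measurement, database, sigma=1.0, threshold=0.05):
    """Score each fingerprint against the measured feature vector and
    return the plausible matches with their (unnormalised) likelihoods."""
    scores = {}
    for vessel_id, fingerprint in database.items():
        d2 = sum((m - f) ** 2 for m, f in zip(measurement, fingerprint))
        scores[vessel_id] = math.exp(-d2 / (2 * sigma ** 2))
    return {v: s for v, s in scores.items() if s >= threshold}

db = {"vessel_A": [3.10, 9.70, 0.52], "vessel_B": [2.40, 8.90, 0.61]}
print(likelihoods([3.08, 9.72, 0.50], db))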
Aayush Jain, Huijia Lin, Sagnik Saha
ePrint Report
In this work, we introduce the sparse LWE assumption, an assumption that draws inspiration from both Learning with Errors (Regev JACM 10) and Sparse Learning Parity with Noise (Alekhnovich FOCS 02). Exactly like LWE, this assumption posits indistinguishability of $(\mathbf{A}, \mathbf{s}\mathbf{A}+\mathbf{e} \mod p)$ from $(\mathbf{A}, \mathbf{u})$ for a random $\mathbf{u}$, where the secret $\mathbf{s}$ and the error vector $\mathbf{e}$ are generated exactly as in LWE. However, as in sparse LPN, the coefficient matrix $\mathbf{A}$ is chosen randomly from $\mathbb{Z}^{n\times m}_{p}$ so that each column has Hamming weight exactly $k$ for some small $k$ (see the sample-generation sketch after the summary below). We study the problem in the regime where $k$ is a constant or polylogarithmic. The primary motivation for proposing this assumption is efficiency. Compared to LWE, the samples can be computed and stored with roughly an $O(n/k)$ factor improvement in efficiency. Our results can be summarized as follows:
Foundations: We show several properties of sparse LWE samples, including: 1) The hardness of LWE/LPN with dimension $k$ implies the hardness of sparse LWE/LPN with sparsity $k$ and arbitrary dimension $n \ge k$. 2) When the number of samples $m=\Omega(n \log p)$, the length of the shortest vector of a lattice spanned by the rows of a random sparse matrix is large, close to that of a random dense matrix of the same dimension (up to a small constant factor). 3) Trapdoors with small polynomial norm exist for random sparse matrices with dimension $n \times m = O(n \log p)$. 4) Efficient algorithms for sampling such matrices together with trapdoors exist when the dimension is $n \times m = \widetilde{\mathcal{O}}(n^2)$.
Cryptanalysis: We examine the suite of algorithms that have been used to break LWE and sparse LPN. While naively many of the attacks that apply to LWE do not exploit sparsity, we consider natural extensions that make use of sparsity. We propose a model to capture all these attacks. Using this model, we suggest heuristics on how to identify concrete parameters. Our initial cryptanalysis suggests that, concretely, sparse LWE with a modest $k$ and a slightly bigger dimension than LWE will satisfy a similar level of security as LWE with similar parameters.
Applications: We show that the hardness of sparse LWE implies very efficient homomorphic encryption schemes for low degree computations. We obtain the first secret key Linearly Homomorphic Encryption (LHE) schemes with slightly super-constant, or even constant, overhead, which further has applications to private information retrieval, private set intersection, etc. We also obtain secret key homomorphic encryption for arbitrary constant-degree polynomials with slightly super-constant, or constant, overhead.
We stress that our results are preliminary. However, our results make a strong case for further investigation of sparse LWE.
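The sample-generation sketch referenced above: a toy Python routine following the definition of sparse LWE samples, with illustrative parameter sizes and a small uniform error distribution as our own assumptions.

import random

def sparse_lwe_samples(n, m, k, p, noise_bound=2):
    # b = s*A + e mod p, where each column of A has Hamming weight exactly k.
    s = [random.randrange(p) for _ in range(n)]
    A = [[0] * m for _ in range(n)]
    for col in range(m):
        for row in random.sample(range(n), k):   # k random nonzero positions
            A[row][col] = random.randrange(1, p)
    e = [random.randrange(-noise_bound, noise_bound + 1) for _ in range(m)]
    b = [(sum(s[i] * A[i][j] for i in range(n)) + e[j]) % p for j in range(m)]
    return A, b, s

A, b, s = sparse_lwe_samples(n=64, m=128, k=3, p=2**16 + 1)
# Only k = 3 of the 64 entries in each column of A are nonzero, so computing
# s*A costs roughly k multiplications per sample instead of n.
assert all(sum(1 for i in range(64) if A[i][j] != 0) == 3 for j in range(128))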
Zhengjun Cao, Lihua Liu
ePrint Report
We show that the outsourcing algorithm for the case of linear constraints [IEEE Trans. Cloud Comput., 2023, 11(1), 484-498] cannot preserve output privacy, due to its use of a simple translation transformation. We also suggest a remedy by adopting a hybrid transformation that combines the usual translation transformation with a resizing transformation so as to protect output privacy.
Robin Geelen, Frederik Vercauteren
ePrint Report
This paper presents a Generalized BFV (GBFV) fully homomorphic encryption scheme that encrypts plaintext spaces of the form $\mathbb{Z}[x]/(\Phi_m(x), t(x))$ with $\Phi_m(x)$ the $m$-th cyclotomic polynomial and $t(x)$ an arbitrary polynomial. GBFV encompasses both BFV where $t(x) = p$ is a constant, and the CLPX scheme (CT-RSA 2018) where $m = 2^k$ and $t(x) = x-b$ is a linear polynomial. The latter can encrypt a single huge integer modulo $\Phi_m(b)$, has much lower noise growth than BFV (linear in $m$ instead of exponential), but cannot be bootstrapped.
We show that by a clever choice of $m$ and higher degree polynomial $t(x)$, our scheme combines the SIMD capabilities of BFV with the low noise growth of CLPX, whilst still being efficiently bootstrappable. Moreover, we present parameter families that natively accommodate packed plaintext spaces defined by a large cyclotomic prime, such as the Fermat prime $\Phi_2(2^{16}) = 2^{16} + 1$ and the Goldilocks prime $\Phi_6(2^{32}) = 2^{64} - 2^{32} + 1$. These primes are often used in homomorphic encryption applications and zero-knowledge proof systems.
Due to the lower noise growth, e.g. for the Goldilocks prime, GBFV can evaluate circuits whose multiplicative depth is more than $5$ times larger than native BFV. As a result, we can evaluate either larger circuits or work with much smaller ring dimensions. In particular, we can natively bootstrap GBFV at 128-bit security for a large prime, already at ring dimension $2^{14}$, which was impossible before. We implemented the GBFV scheme on top of the SEAL library and achieve a latency of only $5$ seconds to bootstrap a ciphertext encrypting $4096$ elements modulo $2^{16}+1$.
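For concreteness, a short worked check of the two packed plaintext primes quoted above, obtained from small cyclotomic polynomials evaluated at powers of two:

$$\Phi_2(x) = x + 1, \qquad \Phi_2(2^{16}) = 2^{16} + 1 \quad \text{(the Fermat prime)},$$
$$\Phi_6(x) = x^2 - x + 1, \qquad \Phi_6(2^{32}) = 2^{64} - 2^{32} + 1 \quad \text{(the Goldilocks prime)}.$$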