IACR News
Here you can see all recent updates to the IACR webpage.
14 September 2024
Brennon Brimhall, Orion Weller, Matthew Green, Ian Miers
ePrint Report
We propose waterlogs, a new direction for detecting and tracing synthetic text outputs from large language models, based on transparency logs. Waterlogs offer major categorical advantages over watermarking: they (1) allow for the inclusion of arbitrary metadata to facilitate tracing, (2) are publicly verifiable by third parties, and (3) operate in a distributed manner while remaining robust and efficient.
Waterlogs rely on a verifiable Hamming distance index, a novel data structure that we describe, to efficiently search multi-dimensional semantic hashes of natural language embeddings in a verifiable manner. This data structure may be of independent interest.
We implement a waterlog, which we call DREDGE, and benchmark it using synthetic text generated by GPT-2 1.5B and OPT-13B; embeddings are generated via OpenAI's text-embedding-ada-002 model. We provide empirical benchmarks on the efficiency of appending text to the log and querying it for matches. We compare our results to watermarking and outline areas for further research.
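As a rough illustration of the querying task such an index answers, the following minimal sketch matches a query embedding against logged semantic hashes by Hamming distance. It assumes numpy, a sign-bit hash, and a plain linear scan, none of which are claimed to match the paper's construction; the paper's contribution is answering this query verifiably and efficiently.

```python
# Illustrative sketch (not the paper's verifiable construction): a plain
# Hamming-distance lookup over binary semantic hashes. A waterlog's index
# answers the same query -- "which logged hashes lie within distance t?" --
# but with a proof of correctness that this toy omits.
import numpy as np

def binarize(embedding: np.ndarray) -> np.ndarray:
    """Hash a real-valued embedding to a binary semantic hash (sign bits)."""
    return (embedding > 0).astype(np.uint8)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

class ToyLog:
    def __init__(self):
        self.entries = []  # (hash, metadata) pairs, append-only

    def append(self, embedding, metadata):
        self.entries.append((binarize(embedding), metadata))

    def query(self, embedding, t):
        """Return metadata of all logged hashes within Hamming distance t."""
        h = binarize(embedding)
        return [m for (e, m) in self.entries if hamming(e, h) <= t]

rng = np.random.default_rng(0)
log = ToyLog()
v = rng.standard_normal(64)
log.append(v, {"model": "example-llm", "ts": "2024-09-14"})
print(log.query(v + 0.01 * rng.standard_normal(64), t=8))
```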
11 September 2024
Julien Toulemont, Geoffrey Chancel, Fréderick Mailly, Philippe Maurine, Pascal Nouet
ePrint Report
Among the various threats to secure ICs, many are semi-invasive in the sense that their application requires removing the package to gain access to either the front or the back of the target IC. Despite these stringent application requirements, little attention has been paid to embedded techniques for checking the package's integrity. This paper explores the feasibility of verifying the package integrity of microcontrollers by examining their thermal dissipation capability.
Puja Mondal, Supriya Adhikary, Suparna Kundu, Angshuman Karmakar
ePrint Report
Computationally hard problems based on coding theory, such as the syndrome decoding problem, have long been used for constructing secure cryptographic schemes. Schemes based on these problems are also assumed to be secure against quantum computers. However, these schemes are often considered impractical for real-world deployment due to large key sizes and inefficient computation times. In the recent call for standardization of additional post-quantum digital signatures by the National Institute of Standards and Technology, several code-based candidates have been proposed, including LESS, CROSS, and MEDS. These schemes are designed within the relatively new zero-knowledge framework. Although several works analyze the hardness of these schemes, there is hardly any work that examines their security in the presence of physical attacks.
In this work, we analyze these signature schemes from the perspective of fault attacks. All of these schemes use a similar tree-based construction to compress the signature size. We attack this component, so our attack is applicable to all of these schemes. We first analyze the LESS signature scheme and devise our attack, and we then show how it can be extended to the CROSS signature scheme. Our attacks are built on very simple fault assumptions. Our results show that we can recover the entire secret key of LESS and CROSS using just a single fault. Finally, we propose various countermeasures to prevent these kinds of attacks and discuss their efficiency and shortcomings.
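For intuition, the tree-based component in question is a GGM-style seed tree: every per-instance seed derives from a single root, which is what makes the signatures compact and also what makes a single well-placed fault in the derivation so damaging. The minimal sketch below shows only the derivation structure, assuming SHA-256 as a length-doubling PRG (our choice for illustration, not the schemes' actual primitives).

```python
# Minimal sketch of a GGM-style binary seed tree, the kind of construction
# LESS and CROSS use to compress signatures: every leaf seed derives from a
# single root, and revealing one internal node discloses exactly the leaves
# below it. SHA-256 as the length-doubling PRG is our assumption here, not
# the schemes' actual choice of primitives.
import hashlib

def expand(seed: bytes) -> tuple[bytes, bytes]:
    """Derive the left and right child seeds of a node."""
    return (hashlib.sha256(seed + b"L").digest(),
            hashlib.sha256(seed + b"R").digest())

def leaves(seed: bytes, depth: int) -> list[bytes]:
    """All 2**depth leaf seeds under `seed`, recomputable from it alone."""
    nodes = [seed]
    for _ in range(depth):
        nodes = [child for n in nodes for child in expand(n)]
    return nodes

root = bytes(32)             # in a real signer, a secret random root seed
print(len(leaves(root, 4)))  # 16 leaf seeds from one 32-byte root
```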
Woohyuk Chung, Hwigyeom Kim, Jooyoung Lee, Yeongmin Lee
ePrint Report
This paper studies the provable security of the deterministic random bit generator (DRBG) utilized in Linux 6.4.8, marking the first analysis of Linux-DRBG from a provable security perspective since its substantial structural changes in Linux 4 and Linux 5.17. Specifically, we prove its security up to $O(\min\{2^{\frac{n}{2}},2^{\frac{\lambda}{2}}\})$ queries in the seedless robustness model, where $n$ is the output size of the internal primitives and $\lambda$ is the min-entropy of the entropy source. Our result implies $128$-bit security given $n=256$ and $\lambda=256$ for Linux-DRBG. We also present two distinguishing attacks using $O(2^{\frac{n}{2}})$ and $O(2^{\frac{\lambda}{2}})$ queries, respectively, proving the tightness of our security bound.
Vincent Ehrmanntraut, Ulrike Meyer
ePrint Report
We present novel Secure Multi-Party Computation (SMPC) protocols to perform Breadth-First-Searches (BFSs) and determine maximal flows on dense secret-shared graphs. In particular, we introduce a novel BFS protocol that requires only $\mathcal{O}(\log n)$ communication rounds on graphs with $n$ nodes, which is a big step from prior work that requires $\mathcal{O}(n \log n)$ rounds. This BFS protocol is then used in a maximal flow protocol based on the Edmonds-Karp algorithm, which requires $\mathcal{O}(n^3 \log n)$ rounds. We further optimize the protocol for cases where an upper bound $U$ on the capacities is publicly known by using a capacity scaling approach. This yields a new protocol which requires $\mathcal{O}(n^2 \log n \log U)$ rounds. Finally, we introduce a novel max flow protocol based on algorithms by Dinic and Tarjan with round complexity $\mathcal{O}(n^3)$.
All protocols presented in this paper use SMPC primitives as a black box, allowing our protocols to be used as building blocks in a wide range of settings and applications. We evaluate our protocols with semi-honest and malicious security in different network settings. Our logarithmic BFS protocol is up to 69 times faster than prior protocols on small graphs with fewer than 100 nodes, but is outperformed by protocols with lower computational complexity on graphs with thousands of nodes. Further, we find our Dinic-Tarjan protocol to be faster than the Edmonds-Karp and capacity scaling protocols in our evaluation, although trends indicate that the capacity scaling protocols would be faster on graph sizes beyond those reached in our evaluation.
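For intuition on why logarithmically many rounds can suffice for BFS on a dense graph, note that in the clear, squaring a reachability matrix $k$ times accounts for all paths of length up to $2^k$. The toy below shows this plaintext analogue with plain numpy and no secret sharing; it is our reading of the complexity claim, not the authors' protocol.

```python
# Plaintext analogue of the O(log n)-round intuition: squaring the
# reachability matrix k times covers all paths of length up to 2^k, so
# ceil(log2 n) squarings reach everything reachable. The paper's SMPC
# protocol operates on secret-shared graphs; this sketch runs in the clear.
import numpy as np

def reachable(adj: np.ndarray, src: int) -> np.ndarray:
    n = adj.shape[0]
    r = adj | np.eye(n, dtype=bool)          # allow staying in place
    for _ in range(int(np.ceil(np.log2(n)))):
        r = (r.astype(int) @ r.astype(int)) > 0   # one "round" of squaring
    return r[src]

adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=bool)
print(reachable(adj, 0))  # node 0 reaches nodes 1, 2, 3
```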
Shuang Hu, Bingsheng Zhang, Cong Zhang, Kui Ren
ePrint Report
Recently, Masny and Rindal [MR19] formalized a notion called Endemic Oblivious Transfer (EOT), and they proposed a generic transformation from Non-Interactive Key Exchange (NIKE) to EOT with standalone security in the random oracle (RO) model. However, at the model level, the relationship between idealized NIKE and idealized EOT, and the relationships between idealized elementary public-key primitives more generally, have rarely been studied.
In this work, we investigate the relationship between ideal NIKE and ideal one-round EOT, as well as the relationship between ideal public key encryption (PKE) and ideal two-round Oblivious Transfer (OT), in the indifferentiability framework proposed by Maurer et al. (MRH04). Our results are threefold. First, we model ideal PKE without a public-key validity test, ideal one-round EOT, and ideal two-round OT in the indifferentiability framework. Second, we show that ideal NIKE and ideal one-round EOT are equivalent, and that ideal PKE without a public-key validity test is equivalent to ideal two-round OT. Third, we show a separation between ideal two-round OT and ideal one-round EOT, which implies a separation between ideal PKE and ideal NIKE.
Robert Hines
ePrint Report
We obfuscate words of a given length in a free monoid on two generators admitting a simple factorization algorithm (namely $SL_2(\mathbb{N})$) to create a public-key encryption scheme. We provide a reference implementation in Python and suggested parameters. The security analysis is between weak and non-existent, and is left to future work.
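The underlying correspondence is classical: words over $\{L, R\}$ biject with matrices in $SL_2(\mathbb{N})$ via $L = [[1,0],[1,1]]$ and $R = [[1,1],[0,1]]$, and the word is recovered by greedily comparing columns. The sketch below shows only this plain combinatorics, not the paper's obfuscation or encryption scheme.

```python
# The word-to-matrix correspondence behind the scheme: encode a word over
# {L, R} as a product of the generator matrices, then recover it by greedy
# factorization (for a det-1 nonnegative matrix, the first column dominates
# the second iff the last letter was L). This is not the encryption scheme.
import numpy as np

L = np.array([[1, 0], [1, 1]], dtype=object)
R = np.array([[1, 1], [0, 1]], dtype=object)
L_INV = np.array([[1, 0], [-1, 1]], dtype=object)
R_INV = np.array([[1, -1], [0, 1]], dtype=object)

def encode(word: str) -> np.ndarray:
    m = np.eye(2, dtype=object)
    for ch in word:
        m = m @ (L if ch == "L" else R)
    return m

def decode(m: np.ndarray) -> str:
    word = []
    while not np.array_equal(m, np.eye(2, dtype=object)):
        if m[0, 0] >= m[0, 1] and m[1, 0] >= m[1, 1]:
            word.append("L")      # first column dominates: last letter was L
            m = m @ L_INV
        else:
            word.append("R")      # otherwise the last letter was R
            m = m @ R_INV
    return "".join(reversed(word))

w = "LRRLRL"
assert decode(encode(w)) == w
```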
Jeffrey Champion, David J. Wu
ePrint Report
A broadcast encryption scheme allows a user to encrypt a message to $N$ recipients with a ciphertext whose size scales sublinearly with $N$. While broadcast encryption enables succinct encrypted broadcasts, it also introduces a strong trust assumption and a single point of failure; namely, there is a central authority who generates the decryption keys for all users in the system. Distributed broadcast encryption offers an appealing alternative where there is a one-time (trusted) setup process that generates a set of public parameters. Thereafter, users can independently generate their own public keys and post them to a public-key directory. Moreover, anyone can broadcast an encrypted message to any subset of user public keys with a ciphertext whose size scales sublinearly with the size of the broadcast set. Unlike traditional broadcast encryption, there are no long-term secrets in distributed broadcast encryption and users can join the system at any time (by posting their public key to the public-key directory).
Previously, distributed broadcast encryption schemes were known from standard pairing-based assumptions or from powerful tools like indistinguishability obfuscation or witness encryption. In this work, we provide the first distributed broadcast encryption scheme from a falsifiable lattice assumption. Specifically, we rely on the $\ell$-succinct learning with errors (LWE) assumption introduced by Wee (CRYPTO 2024). Previously, the only lattice-based candidate for distributed broadcast encryption goes through general-purpose witness encryption, which in turn is only known from the *private-coin* evasive LWE assumption, a strong and non-falsifiable lattice assumption. Along the way, we also describe a more direct construction of broadcast encryption from lattices.
Hoeteck Wee
ePrint Report
We present new lattice-based attribute-based encryption (ABE) and laconic function evaluation (LFE) schemes for circuits with *sublinear* ciphertext overhead. For depth $d$ circuits over $\ell$-bit inputs, we obtain
* an ABE with ciphertext and secret key size $O(1)$;
* an LFE with ciphertext size $\ell + O(1)$ and digest size $O(1)$;
* an ABE with public key and ciphertext size $O(\ell^{2/3})$ and secret key size $O(1)$,
where $O(\cdot)$ hides $\mbox{poly}(d,\lambda)$ factors. The first two results achieve almost optimal ciphertext and secret key / digest sizes, up to the $\mbox{poly}(d)$ dependencies. The security of our schemes relies on $\ell$-succinct LWE, a falsifiable assumption which is implied by evasive LWE. At the core of our results is a new technique for compressing LWE samples $\mathbf{s}(\mathbf{A}-\mathbf{x} \otimes \mathbf{G})$ as well as the matrix $\mathbf{A}$.
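To unpack the notation in the last sentence: $\mathbf{G}$ is the standard gadget matrix and $\mathbf{x} \otimes \mathbf{G}$ is a Kronecker product, so the sample binds $\mathbf{s}$ to both $\mathbf{A}$ and the input bits. The toy below only makes the shapes concrete; the dimensions are small choices of ours, the error term is omitted, and none of the paper's compression technique is shown.

```python
# Toy illustration of the notation s(A - x (x) G): G is the standard gadget
# matrix (I_n Kronecker the powers-of-two row g), x is an ell-bit input, and
# the sample ties s to both A and x. Toy dimensions, no error term, and no
# compression -- shapes only.
import numpy as np

q, n, k, ell = 97, 2, 7, 3          # modulus, LWE dim, gadget length, input bits
g = 2 ** np.arange(k)               # gadget row (1, 2, 4, ..., 2^{k-1})
G = np.kron(np.eye(n, dtype=int), g)        # n x nk gadget matrix

rng = np.random.default_rng(1)
A = rng.integers(0, q, size=(n, ell * n * k))
x = rng.integers(0, 2, size=ell)            # input bits
s = rng.integers(0, q, size=n)

sample = s @ ((A - np.kron(x, G)) % q) % q  # s(A - x (x) G), error omitted
print(sample.shape)                          # (ell * n * k,)
```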
Arad Kotzer, Ori Rottenstreich
ePrint Report
Light clients implement a simple solution for Bitcoin's scalability problem, as they do not store the entire blockchain but only the state of particular addresses of interest. To be able to keep track of the updated state of their addresses, light clients rely on full nodes to provide them with the required information. To do so, they must reveal information about the addresses they are interested in. This paper studies the two most common light client implementations, SPV and Neutrino, with regard to their privacy. We define privacy metrics for comparing the privacy of the different implementations. We evaluate and compare the privacy of the implementations over time on real Bitcoin data and discuss the inherent privacy-communication tradeoff. In addition, we propose general techniques to enhance light client privacy in the existing implementations. Finally, we propose a new SPV-based light client model, the aggregation model, evaluate it, and show that it achieves better privacy than the existing light client implementations.
Code-Based Zero-Knowledge from VOLE-in-the-Head and Their Applications: Simpler, Faster, and Smaller
Ying Ouyang, Deng Tang, Yanghong Xu
ePrint Report
Zero-Knowledge (ZK) protocols allow a prover to demonstrate the truth of a statement without disclosing additional information about the underlying witness. Code-based cryptography has a long history but has suffered periods of slow development. Recently, a prominent line of research has contributed to designing efficient code-based ZK from the MPC-in-the-Head (Ishai et al., STOC 2007) and VOLE-in-the-Head (VOLEitH) (Baum et al., Crypto 2023) paradigms, resulting in quite efficient standard signatures. However, none of them can be directly used to construct privacy-preserving cryptographic primitives. Therefore, Stern's protocols remain the major technical stepping stones for developing advanced code-based privacy-preserving systems.
This work proposes new code-based ZK protocols from the VOLEitH paradigm for various relations and designs several code-based privacy-preserving systems that considerably advance the state of the art in code-based cryptography. Our first contribution is a new ZK protocol for proving the correctness of a regular (non-linear) encoding process, which is utilized in many advanced privacy-preserving systems. Our second contribution is a set of new ZK protocols for concrete code-based relations. In particular, we provide a ZK proof of accumulated values with optimal witness size for the accumulator of Nguyen et al. (Asiacrypt 2019). Our protocols thus open the door to constructing more efficient privacy-preserving systems. Moreover, our ZK protocols have the advantage of being simpler, faster, and smaller than Stern-like protocols. To illustrate the effectiveness of our new ZK protocols, we develop a ring signature (RS) scheme, a group signature (GS) scheme, and a fully dynamic attribute-based signature scheme from them. The signature sizes of the resulting schemes are two to three orders of magnitude smaller than those based on Stern-like protocols in various parameter settings. Finally, our first ZK protocol yields a standard signature scheme, achieving "signature size + public key size" as small as $3.05$ KB, which is slightly smaller than the state-of-the-art signature scheme (Cui et al., PKC 2024) based on the regular syndrome decoding problem.
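For readers unfamiliar with the term, a regular word in this setting has exactly one nonzero entry in each of $w$ equal blocks; the first ZK protocol proves that a committed word has this shape. The sketch below, with illustrative parameters of our choosing, samples such a word and computes its syndrome entirely in the clear, with no zero-knowledge involved.

```python
# The "regular" structure behind the regular syndrome decoding problem: the
# word e has exactly one 1 in each of w blocks, and the public syndrome is
# H e over F_2. The paper's ZK protocol proves a committed e has this
# per-block shape; this toy only samples and checks it in the clear.
import numpy as np

rng = np.random.default_rng(2)
w, block = 4, 8                      # w blocks of size `block`
n_len, r = w * block, 16             # code length, number of parity rows

def sample_regular() -> np.ndarray:
    e = np.zeros(n_len, dtype=np.uint8)
    for i in range(w):
        e[i * block + rng.integers(block)] = 1   # one 1 per block
    return e

def is_regular(e: np.ndarray) -> bool:
    return all(e[i * block:(i + 1) * block].sum() == 1 for i in range(w))

H = rng.integers(0, 2, size=(r, n_len), dtype=np.uint8)
e = sample_regular()
syndrome = (H @ e) % 2
print(is_regular(e), syndrome)
```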
Nai-Hui Chia, Kai-Min Chung, Xiao Liang, Jiahui Liu
ePrint Report
Zero-Knowledge (ZK) protocols have been a subject of intensive study due to their fundamental importance and versatility in modern cryptography. However, the inherently different nature of quantum information significantly alters the landscape, necessitating a re-examination of ZK designs.
A crucial aspect of ZK protocols is their round complexity, intricately linked to $\textit{simulation}$, which forms the foundation of their formal definition and security proofs. In the $\textit{post-quantum}$ setting, where honest parties and their communication channels are all classical but the adversaries could be quantum, Chia, Chung, Liu, and Yamakawa [FOCS'21] demonstrated the non-existence of constant-round $\textit{black-box-simulatable}$ ZK arguments (BBZK) for $\mathbf{NP}$ unless $\mathbf{NP} \subseteq \mathbf{BQP}$. However, this problem remains widely open in the full-fledged quantum future that will eventually arrive, where all parties (including the honest ones) and their communication are naturally quantum.
Indeed, this problem is of interest to the broader theory of quantum computing. It has been an important theme to investigate how quantum power fundamentally alters traditional computational tasks, such as the $\textit{unconditional}$ security of Quantum Key Distribution and the incorporation of Oblivious Transfers in MiniQCrypt. Moreover, quantum communication has led to round compression for commitments and interactive arguments. Along this line, the above problem is of great significance in understanding whether quantum computing could also change the nature of ZK protocols in some fundamental manner.
We resolve this problem by proving that only languages in $\mathbf{BQP}$ admit constant-round $\textit{fully-quantum}$ BBZK. This result has significant implications. Firstly, it illuminates the nature of quantum zero-knowledge and provides valuable insights for designing future protocols in the quantum realm. Secondly, it relates ZK round complexity to the intriguing problem of $\mathbf{BQP}$ vs $\mathbf{QMA}$, which is out of the reach of previous analogous impossibility results in the classical or post-quantum setting. Lastly, it justifies the need for the $\textit{non-black-box}$ simulation techniques or the relaxed security notions employed in existing constant-round fully-quantum BBZK protocols.
Zhengjun Cao, Lihua Liu
ePrint Report
Let $\zeta(z)=\sum_{n=1}^{\infty} \frac{1}{n^z}$, $\psi(z)=\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^z}, z\in \mathbb{C}$. We show that $\psi(z)\not=(1-2^{1-z})\zeta(z)$, if $0<\Re(z)<1$.
Zhengjun Cao, Lihua Liu
ePrint Report
The scheme of Li et al. [Computer Communications, 186 (2022), 110-120] uses the XOR operation to realize the private transmission of sensitive information, under the assumption that if only one parameter in the expression $a = b \oplus c$ is known, an adversary cannot retrieve the other two. The assumption neglects that the operands $b$ and $c$ must be of the same bit-length, which leads to the exposure of a substring of the longer operand. The scheme also wrongly treats timestamps as random strings when encrypting a confidential parameter. These misuses result in the loss of sensor node anonymity, the loss of user anonymity and untraceability, insecurity against offline password guessing attacks, and insecurity against impersonation attacks. The analysis techniques developed in this note are helpful for future work on designing such schemes.
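The bit-length observation is easy to reproduce. In the snippet below, with arbitrary illustrative values, XORing a 64-bit secret with a 16-bit quantity leaves the secret's top 48 bits exposed verbatim in the result.

```python
# The note's core observation, demonstrated: in a = b XOR c, if c is shorter
# than b, the XOR only masks c's bit-length worth of b, so b's leading bits
# appear verbatim in a. Values here are arbitrary illustrations.
b = 0xDEADBEEFCAFEBABE   # 64-bit "secret"
c = 0xFFFF               # 16-bit quantity, e.g. a short parameter
a = b ^ c                # published value
print(hex(a))                          # 0xdeadbeefcafe4541
print(hex(a >> 16) == hex(b >> 16))    # True: the top 48 bits of b leak
```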
Andrija Novakovic, Alireza Kavousi, Kobi Gurkan, Philipp Jovanovic
ePrint Report
This work introduces Cryptobazaar, a novel scalable, private, and decentralized sealed-bid auction protocol. In particular, our protocol protects the privacy of losing bidders by preserving the confidentiality of their bids while ensuring public verifiability of the outcome and relying only on a single untrusted auctioneer for coordination. At its core, Cryptobazaar combines an efficient distributed protocol to compute the logical-OR for a list of unary-encoded bids with various novel zero-knowledge succinct arguments of knowledge that may be of independent interest. We further present variants of our protocol that can be used for efficient first-, second-, and more generally $(p+1)$st-price as well as sequential first-price auctions. Finally, the performance evaluation of our Cryptobazaar implementation shows that it is highly practical. For example, a single run of an auction with $128$ bidders and a price range of $1024$ values terminates in less than $0.5$ sec and requires each bidder to send and receive only about $32$ KB of data.
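The cleartext logic behind the logical-OR step is simple to state: each bid is unary-encoded over the public price range, and the entrywise OR of all encodings reveals the maximum bid. The sketch below shows only this plaintext aggregation, with an illustrative price range; the protocol itself computes the OR on masked values, with the untrusted auctioneer coordinating.

```python
# Cleartext analogue of Cryptobazaar's core step: unary-encode each bid over
# the public price range, then take the entrywise logical OR. The aggregate
# reveals exactly the highest bid (first price). The real protocol performs
# this OR on masked/encrypted bids, not in the clear.
PRICE_RANGE = 16

def unary(bid: int) -> list[int]:
    """1s up to and including the bid, 0s above it."""
    return [1 if p <= bid else 0 for p in range(PRICE_RANGE)]

def aggregate_or(encodings: list[list[int]]) -> list[int]:
    return [max(col) for col in zip(*encodings)]

bids = [3, 9, 7]
agg = aggregate_or([unary(b) for b in bids])
first_price = max(p for p, bit in enumerate(agg) if bit == 1)
print(first_price)   # 9 -- the winning price, without linking bids to bidders
```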
Jelle Vos, Mauro Conti, Zekeriya Erkin
ePrint Report
In the past decade, dozens of homomorphic encryption compilers have been released, and there are good reasons for these compilers to exist. Firstly, homomorphic encryption is a powerful secure computation technique in that it is relatively easy for parties to switch from plaintext computation to secure computation, compared to techniques like secret sharing. However, the technique is mathematically involved and requires expert knowledge to express computations as homomorphic encryption operations, so these compilers support users who might otherwise not have the time or expertise to optimize the computation manually. Another reason is that homomorphic encryption is still computationally expensive, so compilers allow users to optimize their secure computation tasks.
One major shortcoming of these compilers is that they often do not allow users to use high-level primitives, such as equality checks, comparisons, and AND and OR operations between many operands. The compilers that do are either based on TFHE, requiring large bootstrapping keys that must be sent to the evaluator, or they only work in the Boolean domain, excluding many potentially more performant circuits.
Moreover, compilers must reduce the multiplicative depth of the circuits they generate to minimize the noise growth inherent to these homomorphic encryption schemes. However, many compilers only consider reducing the depth as an afterthought.
We propose the Oraqle compiler, which solves both problems at once by implementing depth-aware arithmetization, a technique for expressing high-level primitives as arithmetic operations that are executable by homomorphic encryption libraries. Instead of generating one possible circuit, the compiler generates multiple circuits that trade off the number of multiplications with the multiplicative depth. If the depth of the resulting circuits is low enough, they may be evaluated using a BFV or BGV library that does not require bootstrapping keys. We demonstrate that our compiler allows for significant performance gains.
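As a concrete example of arithmetizing a high-level primitive, an equality check over a prime field $\mathbb{F}_p$ can be written as $EQ(x,y) = 1-(x-y)^{p-1}$ by Fermat's little theorem, and square-and-multiply evaluates the power in $\lceil \log_2(p-1) \rceil$ multiplicative depth. The sketch below uses a small illustrative modulus; Oraqle's actual generated circuits and trade-offs are not reproduced here.

```python
# Arithmetizing an equality check over F_p: EQ(x, y) = 1 - (x - y)^(p-1)
# equals 1 iff x == y, by Fermat's little theorem. Square-and-multiply gives
# the exponentiation ceil(log2(p-1)) multiplicative depth -- the quantity a
# depth-aware compiler trades off against multiplication count.
p = 257  # small prime plaintext modulus, chosen for illustration only

def eq(x: int, y: int) -> int:
    return (1 - pow(x - y, p - 1, p)) % p

print(eq(42, 42), eq(42, 43))   # 1 0
# Exponent p - 1 = 256 = 2^8, so the power needs eight squarings of depth.
```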
Christina Boura, Shahram Rasoolzadeh, Dhiman Saha, Yosuke Todo
ePrint Report
In this paper, we present the first third-party cryptanalysis of SCARF, a tweakable low-latency block cipher designed to thwart contention-based cache attacks through cache randomization. We focus on multiple-tweak differential attacks, exploiting biases across multiple tweaks. We establish a theoretical framework explaining biases for any number of rounds and verify this framework experimentally. Then, we use these properties to develop a key recovery attack on 7-round SCARF with a time complexity of \(2^{76}\), achieving a 98.9% success rate in recovering the 240-bit secret key. Additionally, we introduce a distinguishing attack on the full 8-round SCARF in a multi-key setting, with a complexity of \(c \times 2^{67.55}\), demonstrating that SCARF does not provide 80-bit security under these conditions. We also explore whether our approach could be extended to the single-key model and discuss the implications of different S-box choices on the attack success.
Gowri R. Chandran, Kilian Demuth, Kasra Edalatnejad, Sebastian Linsner, Christian Reuter, Thomas Schneider
ePrint Report
Nowadays, the problem of point-to-point encryption is solved by the wide adoption of protocols like TLS. However, challenges persist for End-to-End Encryption (E2EE). Current E2EE solutions, such as PGP and secure messengers like Signal, suffer from issues like 1) low usability, 2) a small user base, 3) dependence on central service providers, and 4) susceptibility to backdoors. Concerns over legally mandated backdoors are rising as the US and EU are proposing new surveillance regulations requiring chat monitoring. We present a new E2EE solution called Encrypted MultiChannel Communication (EMC2), based on n-out-of-n secret sharing. EMC2 splits messages into multiple secret shares and sends them through independent channels. We show that multiple independent channels exist between users and that EMC2 provides E2EE with no single point of trust and no setup, and is understandable by the general public. Our solution complements existing tools and aims to strengthen the argument against legally enforced backdoors by demonstrating their ineffectiveness.
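The splitting step itself is textbook n-out-of-n XOR secret sharing, sketched below: any $n-1$ shares are uniformly random and reveal nothing, while all $n$ recombine to the message. Channel selection and transport, which carry the scheme's actual security argument, are outside this sketch.

```python
# n-out-of-n XOR secret sharing, the splitting step EMC2 describes: n-1
# shares are fresh random strings and the last share is the message XORed
# with all of them. Any strict subset of shares is uniformly random.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(message: bytes, n: int) -> list[bytes]:
    shares = [secrets.token_bytes(len(message)) for _ in range(n - 1)]
    last = message
    for s in shares:
        last = xor(last, s)
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = xor(out, s)
    return out

msg = b"meet at noon"
assert combine(split(msg, 3)) == msg  # all 3 shares recover the message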
Ioanna Karantaidou, Omar Renawi, Foteini Baldimtsi, Nikolaos Kamarinakis, Jonathan Katz, Julian Loss
ePrint Report
We propose the first constructions of anonymous tokens with decentralized issuance. Namely, we consider a dynamic set of signers/issuers; a user can obtain a token from any subset of the signers, which is publicly verifiable and unlinkable to the issuance process. To realize this new primitive we formalize the notion of Blind Multi-Signatures (BMS), which allow a user to interact with multiple signers to obtain a (compact) signature; even if all the signers collude they are unable to link a signature to an interaction with any of them.
We then present two BMS constructions, one based on BLS signatures and a second based on discrete logarithms without pairings. We prove security of both our constructions in the Algebraic Group Model.
We also provide a proof-of-concept implementation and show that it has low-cost verification, which is the most critical operation in blockchain applications.
Byeongjun Jang, Gweonho Jeong, Hyuktae Kwon, Hyunok Oh, Jihye Kim
ePrint Report
The synergy of commitments and zk-SNARKs is widely used in various applications, particularly in fields like blockchain, to ensure data privacy and integrity without revealing secret information. However, proving multiple commitments in a batch imposes a large overhead on a zk-SNARK system. One solution to alleviate this burden is the commit-and-prove SNARK (CP-SNARK) approach. LegoSNARK defines a new notion called commit-carrying SNARK (cc-SNARK), a specialized form of CP-SNARK, and introduces a compiler to turn commit-carrying SNARKs into commit-and-prove SNARKs. Using this compiler, the paper shows a commit-and-prove version of Groth16 that improves the proving time (by about 5,000×). However, proving $l$ commitments simultaneously with this compiler faces a performance issue, as the linking system in LegoSNARK requires $O(l)$ pairings on the verifier side.
To enhance efficiency, we propose a new batching module called Lego-DLC, designed for handling multiple commitments. This module combines a $\Sigma$-protocol with commit-carrying SNARKs under Pedersen engines and can support all commit-carrying SNARKs under Pedersen engines. In this paper, we provide concrete instantiations for Groth16 and Plonk. In the performance comparison, for $2^{16}$ commitments, our approach achieves a verification time of just 0.064s, over 30× faster than LegoSNARK's 1.972s. The slightly longer prover time of 1.413s (compared to LegoSNARK's 0.177s, around 8×) is a small trade-off for this performance gain.
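The batching principle at work can be illustrated with plain Pedersen commitments: instead of checking $l$ openings separately, a verifier checks one random linear combination of them. The toy below uses a tiny Schnorr-style group of our choosing and omits the $\Sigma$-protocol and pairing-based linking that Lego-DLC actually builds on.

```python
# Batch verification of Pedersen openings via a random linear combination:
# one combined check replaces l separate checks, and fails with probability
# about l/q if any opening is inconsistent. Toy safe-prime group (p = 2q+1,
# generators of the order-q subgroup); real systems use elliptic curves.
import secrets

p, q = 23, 11        # toy group: 23 = 2*11 + 1, both prime
g, h = 4, 9          # squares mod 23, hence generators of the order-11 subgroup

def commit(m: int, r: int) -> int:
    return pow(g, m, p) * pow(h, r, p) % p

l = 8
msgs  = [secrets.randbelow(q) for _ in range(l)]
rands = [secrets.randbelow(q) for _ in range(l)]
comms = [commit(m, r) for m, r in zip(msgs, rands)]

# One random linear combination instead of l separate opening checks.
rhos = [secrets.randbelow(q) for _ in range(l)]
lhs = 1
for c, rho in zip(comms, rhos):
    lhs = lhs * pow(c, rho, p) % p
m_comb = sum(rho * m for rho, m in zip(rhos, msgs)) % q
r_comb = sum(rho * r for rho, r in zip(rhos, rands)) % q
print(lhs == commit(m_comb, r_comb))   # True when every opening is consistent
```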