International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

16 February 2024

David Du Pont, Jonas Bertels, Furkan Turan, Michiel Van Beirendonck, Ingrid Verbauwhede
ePrint Report
Fully Homomorphic Encryption (FHE) enables computation on encrypted data, holding immense potential for enhancing data privacy and security in various applications. Presently, FHE adoption is hindered by slow computation times, caused by data being encrypted into large polynomials. Optimized FHE libraries and hardware acceleration are emerging to tackle this performance bottleneck. Often, these libraries implement the Number Theoretic Transform (NTT) algorithm for efficient polynomial multiplication. Existing implementations mostly focus on the case where the polynomials are defined over a power-of-two cyclotomic ring, allowing the use of the simpler Cooley-Tukey NTT. However, generalized cyclotomics have several benefits in the BGV FHE scheme, including more SIMD plaintext slots and a simpler bootstrapping algorithm.

We present a hardware architecture for the NTT targeting generalized cyclotomics within the context of the BGV FHE scheme. We explore different non-power-of-two NTT algorithms, including the Prime-Factor, Rader, and Bluestein NTTs. Our most efficient architecture targets the 21845-th cyclotomic polynomial --- a practical parameter for BGV --- with ideal properties for use with a combination of the Prime-Factor and Rader algorithms. The design achieves high throughput with optimized resource utilization, by leveraging parallel processing, pipelining, and reusing processing elements. Compared to Wu et al.'s VLSI architecture of the Bluestein NTT, our approach showcases 2$\times$ to 5$\times$ improved throughput and area efficiency. Simulation and implementation results on an AMD Alveo U250 FPGA demonstrate the feasibility of the proposed hardware design for FHE.
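
The central primitive here is the NTT, which turns polynomial multiplication into pointwise multiplication modulo a prime. Below is a minimal, illustrative software sketch of the power-of-two Cooley-Tukey NTT that the abstract contrasts with the generalized-cyclotomic case; the modulus q, transform size n, and root-finding helper are toy choices, not parameters or designs from the paper.

```python
# Minimal radix-2 Cooley-Tukey NTT sketch over Z_q (illustrative only, not the
# paper's generalized-cyclotomic hardware design). Assumes q is prime and
# q = 1 (mod n) so that a primitive n-th root of unity exists.

def find_root_of_unity(n, q):
    # Derive an element of order exactly n (n a power of two) from a candidate base.
    for g in range(2, q):
        w = pow(g, (q - 1) // n, q)
        if pow(w, n // 2, q) != 1:   # order divides n but not n/2, so it is n
            return w
    raise ValueError("no primitive n-th root of unity found")

def ntt(a, w, q):
    # Recursive Cooley-Tukey decimation-in-time NTT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], pow(w, 2, q), q)
    odd = ntt(a[1::2], pow(w, 2, q), q)
    out = [0] * n
    for k in range(n // 2):
        t = (pow(w, k, q) * odd[k]) % q
        out[k] = (even[k] + t) % q
        out[k + n // 2] = (even[k] - t) % q   # uses w^(n/2) = -1
    return out

def intt(a, w, q):
    # Inverse NTT: transform with w^{-1}, then scale by n^{-1}.
    n = len(a)
    n_inv = pow(n, q - 2, q)
    res = ntt(a, pow(w, q - 2, q), q)
    return [(x * n_inv) % q for x in res]

if __name__ == "__main__":
    q, n = 257, 8                     # toy parameters: 257 = 1 + 256 is prime
    w = find_root_of_unity(n, q)
    a = [1, 2, 3, 4, 0, 0, 0, 0]
    b = [5, 6, 7, 0, 0, 0, 0, 0]
    # Pointwise multiplication in the NTT domain = cyclic convolution mod q.
    c = intt([(x * y) % q for x, y in zip(ntt(a, w, q), ntt(b, w, q))], w, q)
    print(c)
```
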
Pedro Branco, Nico Döttling, Akshayaram Srinivasan, Riccardo Zanotto
ePrint Report
Somewhere statistically binding (SSB) hashing allows us to sample a special hashing key such that the digest statistically binds the input at $m$ secret locations. This hash function is said to be somewhere extractable (SE) if there is an additional trapdoor that allows the extraction of the input bits at the $m$ locations from the digest.

Devadas, Goyal, Kalai, and Vaikuntanathan (FOCS 2022) introduced a variant of somewhere extractable hashing called rate-1 fully local SE hash functions. The rate-1 requirement states that the size of the digest is $m + \mathsf{poly}(\lambda)$ (where $\lambda$ is the security parameter). The fully local property requires that for any index $i$, there is a "very short" opening showing that the $i$-th bit of the hashed input is equal to $b$ for some $b \in \{0,1\}$. The size of this opening is required to be independent of $m$; in particular, this means that its size is independent of the size of the digest. Devadas et al. gave such a construction from Learning with Errors (LWE).

In this work, we give a construction of a rate-1 fully local somewhere extractable hash function from Decisional Diffie-Hellman (DDH) and BARGs. Under the same assumptions, we give constructions of rate-1 BARG and RAM SNARG with partial input soundness whose proof sizes are only matched by prior constructions based on LWE.
Alexander Bienstock, Sarvar Patel, Joon Young Seo, Kevin Yeo
ePrint Report
In this paper, we study two problems: oblivious compression and decompression of ciphertexts. In oblivious compression, a server holds a set of ciphertexts with a subset of encryptions of zeroes whose positions are known only to the client. The goal is for the server to effectively compress the ciphertexts obliviously, while preserving the non-zero plaintexts and without learning the plaintext values. For oblivious decompression, the client instead succinctly encodes a sequence of plaintexts such that the server may decode encryptions of all plaintext values, though the zeroes may be replaced with arbitrary values. We present solutions to both problems that construct lossless compressions only 5% larger than the optimal minimum, using only additive homomorphism. The crux of both algorithms involves embedding ciphertexts as random linear systems that are efficiently solvable.

Using our compression schemes, we obtain state-of-the-art schemes for batch private information retrieval (PIR) where a client wishes to privately retrieve multiple entries from a server-held database in one query. We show that our compression schemes may be used to reduce communication by up to 30% for batch PIR in both the single- and two-server settings.

Additionally, we study labeled private set intersection (PSI) in the unbalanced setting where one party's set is significantly smaller than the other party's set and each entry has associated data. By utilizing our novel compression algorithm, we present a protocol with 65-88% reduction in communication with comparable computation compared to prior works.
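
For intuition on the "random linear system" idea described above, here is a plaintext sketch that ignores encryption entirely (which the actual schemes of course do not): the server publishes a few more than $m$ random linear combinations of all $n$ slots, and a client who knows the $m$ nonzero positions recovers the values by solving the induced linear system. All names and parameters below are illustrative.

```python
import random

# Plaintext sketch of the "random linear system" intuition behind oblivious
# compression. The real schemes operate on additively homomorphic ciphertexts;
# this toy code only illustrates the linear-algebra step.

P = 2**31 - 1  # a prime field for the toy example

def compress(x, k, seed):
    # Server side: k random linear combinations of all n slots.
    # In a real scheme A is pseudorandom, so only the seed and y are sent.
    rng = random.Random(seed)
    A = [[rng.randrange(P) for _ in range(len(x))] for _ in range(k)]
    y = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) % P for row in A]
    return A, y

def decompress(A, y, support):
    # Client side: knows the nonzero positions, solves A[:, support] z = y.
    k, m = len(A), len(support)
    M = [[A[i][j] for j in support] + [y[i]] for i in range(k)]
    row = 0
    for col in range(m):                      # Gaussian elimination mod P
        piv = next((r for r in range(row, k) if M[r][col]), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        inv = pow(M[row][col], P - 2, P)
        M[row] = [(v * inv) % P for v in M[row]]
        for r in range(k):
            if r != row and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][c] - f * M[row][c]) % P for c in range(m + 1)]
        row += 1
    out = [0] * len(A[0])
    for i, pos in enumerate(support):
        out[pos] = M[i][m]
    return out

if __name__ == "__main__":
    n, m = 32, 4
    support = [3, 10, 17, 29]                 # known only to the client
    x = [0] * n
    for j in support:
        x[j] = random.randrange(1, P)
    A, y = compress(x, k=m + 2, seed=42)      # a few extra rows for solvability
    assert decompress(A, y, support) == x
```
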
Michele Battagliola, Andrea Flamini
ePrint Report
The recent surge of distributed technologies has caused increasing interest in threshold signature protocols, which peaked with the recent NIST First Call for Multi-Party Threshold Schemes.

Since its introduction, the Fiat-Shamir Transform has been the most popular way to design standard digital signature schemes. In this work, we translate the Fiat-Shamir Transform into a multi-party setting, building a framework that seeks to be an alternative, easier way to design threshold digital signatures. We do so by introducing the concepts of threshold identification scheme and threshold sigma protocol, and by showing necessary and sufficient conditions to prove the security of the threshold signature schemes derived from them.

Lastly, we show a practical application of our framework providing an alternative security proof for Sparkle, a recent threshold Schnorr signature. In particular, we consider the threshold identification scheme underlying Sparkle and prove the security of the signature derived from it.

We show that using our framework the effort required to prove the security of threshold signatures might be drastically lowered. In fact, instead of explicitly reducing its security to the security of a hard problem, it is enough to prove some properties of the underlying threshold sigma protocol and threshold identification scheme. Then, by applying the results proved in this paper, the derived threshold signature is guaranteed to be secure.
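
As background for the construction being generalized, the following is a minimal single-party sketch of the Fiat-Shamir transform applied to Schnorr identification. The group parameters are toy values chosen only for illustration; the paper's threshold identification schemes and threshold sigma protocols are not reproduced here.

```python
import hashlib
import secrets

# Minimal single-party Schnorr signature obtained via the Fiat-Shamir
# transform (sketch of the classical construction that the paper lifts to
# the threshold setting; toy parameters, not secure for real use).

P, Q, G = 2879, 1439, 4   # toy Schnorr group: Q prime, P = 2Q + 1, G of order Q

def hash_to_challenge(*parts):
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    sk = secrets.randbelow(Q - 1) + 1
    return sk, pow(G, sk, P)

def sign(sk, msg):
    r = secrets.randbelow(Q - 1) + 1          # sigma-protocol commitment
    R = pow(G, r, P)
    c = hash_to_challenge(R, msg)             # Fiat-Shamir: challenge = hash
    z = (r + c * sk) % Q                      # sigma-protocol response
    return R, z

def verify(pk, msg, sig):
    R, z = sig
    c = hash_to_challenge(R, msg)
    return pow(G, z, P) == (R * pow(pk, c, P)) % P   # g^z = R * pk^c

if __name__ == "__main__":
    sk, pk = keygen()
    sig = sign(sk, "hello")
    assert verify(pk, "hello", sig)
    assert not verify(pk, "tampered", sig)
```
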
Charlotte Lefevre
ePrint Report
This note examines a nuance in the methods employed for counting the adversarial online complexity in the security proofs of duplex-based modes, with a focus on authenticated encryption. A recent study by Gilbert et al. reveals an attack on a broad class of duplex-based authenticated encryption modes. In particular, their approach to quantifying the adversarial online complexity, which captures realistic attack scenarios, includes certain queries in the count that are not counted in the security proofs. This note analyzes these differences and concludes that the attack of Gilbert et al., for certain parameter choices, matches the security bound.
Elijah Pelofske
ePrint Report
Quantum devices offer a highly useful function: generating random numbers in a non-deterministic way, since the measurement of a quantum state is not deterministic. This means that quantum devices can be constructed that prepare qubits in a uniform superposition and then measure the state of those qubits. If the preparation of the qubits in a uniform superposition is unbiased, then quantum computers can be used to create high-entropy, secure random numbers. Typically, preparing and measuring such quantum systems requires more time compared to classical pseudo-random number generators (PRNGs), which are inherently deterministic algorithms. Therefore, the typical use of quantum random number generators (QRNGs) is to provide high-entropy secure seeds for PRNGs. Quantum annealing (QA) is a type of analog quantum computation that is a relaxed form of adiabatic quantum computation and uses quantum fluctuations in order to search for ground-state solutions of a programmable Ising model. Here we present extensive experimental random number results from a D-Wave 2000Q quantum annealer, totaling over 20 billion bits of QA measurements, which is significantly larger than previous D-Wave QA random number generator studies. Current quantum annealers are susceptible to noise from environmental sources and calibration errors, and are not in general unbiased samplers. Therefore, it is of interest to quantify whether noisy quantum annealers can effectively function as an unbiased QRNG. The amount of data collected from the quantum annealer allows a comprehensive analysis of the random bits to be performed using the NIST SP 800-22 Rev 1a test suite, as well as min-entropy estimates from NIST SP 800-90B. The randomness tests show that the generated random bits from the D-Wave 2000Q are biased and not unpredictable random bit sequences. With no server-side sampling post-processing, the $1$ microsecond annealing time measurements had a min-entropy of $0.824$.
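
For concreteness, one of the NIST SP 800-90B estimators mentioned above, the most-common-value (MCV) estimate, can be sketched in a few lines. The snippet below is a simplified illustration, not the full SP 800-90B or SP 800-22 methodology used in the paper.

```python
import math
import random

# Simplified most-common-value (MCV) min-entropy estimate in the spirit of
# NIST SP 800-90B (a sketch only; the standard and the paper's analysis use a
# full battery of estimators and statistical tests).

def mcv_min_entropy(bits):
    n = len(bits)
    p_hat = max(bits.count(0), bits.count(1)) / n
    # 99% upper confidence bound on the most-common-value probability.
    p_u = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (n - 1)))
    return -math.log2(p_u)   # min-entropy per bit

if __name__ == "__main__":
    # An ideal unbiased source approaches 1 bit of min-entropy per bit;
    # a biased source (like the reported 0.824) falls below that.
    biased = [1 if random.random() < 0.55 else 0 for _ in range(100_000)]
    print(round(mcv_min_entropy(biased), 3))
```
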
Tao Zhang, Shang Shi, Md Habibur Rahman, Nitin Varshney, Akshay Kulkarni, Farimah Farahmandi, Mark Tehranipoor
ePrint Report
Battery-operated applications have become ubiquitous all over the world, ranging from power-intensive electric cars down to low-power smart terminals and embedded devices. Meanwhile, serious incidents around batteries, such as swelling, fire, and explosion, have been witnessed, resulting in enormous financial losses and even loss of life. Such aftermaths used to be attributed to unintentional design mistakes or insufficient quality inspection by the original battery manufacturers. However, this attribution is no longer fair today, given the convoluted battery supply chain and the extended cyber-physical attack surface of battery management systems (BMS). In this paper, we focus on the authenticity and assurance of prevalent (Li-ion) battery instances. We look into battery authenticity by modeling the contemporary battery supply chain and discussing practical concerns, such as rewrapping and recycling, in depth at each stage. As for battery assurance, we consider emerging attack vectors that can compromise the confidentiality, integrity, and availability of the microelectronic BMS. In addition, real-world attack examples are highlighted to reflect the capabilities of advanced adversaries. Moreover, promising countermeasures for detecting and avoiding threats to both battery authenticity and assurance are presented, so that researchers gain insight into how the problem could be addressed or alleviated. We also provide our perspectives on the vulnerabilities of battery systems and their consequent impacts, as well as our view on potential countermeasure techniques.

12 February 2024

Dionysis Zindros, Apostolos Tzinas, David Tse
ePrint Report
We observe that most fixed-party distributed protocols can be rewritten by replacing a party with a ledger (such as a blockchain system) and the authenticated channel communication between parties with cross-chain relayers. This transform is useful because blockchain systems are always online and have battle-tested security assumptions. We provide a definitional framework that captures this analogy. We model the transform formally, and posit and prove a generic metatheorem that allows translating all theorems from the party setting into theorems in the emulated setting, while preserving analogies between party honesty and ledger security. At the heart of our proof lies a reduction-based simulation argument. As an example, our metatheorem can be used to construct a consensus protocol on top of other blockchains, creating a reliable rollup that assumes only that a majority of the underlying layer-1s are secure.
Konstantinos Brazitikos, Vassilis Zikas
ePrint Report
Typical results in multi-party computation (in short, MPC) capture faulty parties by assuming a threshold adversary that corrupts parties actively and/or fail-corrupts them. These corruption types are, however, inadequate for capturing correct parties that might suffer temporary network failures and/or localized faults; these are particularly relevant for MPC over large, global-scale networks. Omission faults and general adversary structures have been proposed as more suitable alternatives. However, to date, there is no characterization of the feasibility landscape combining the above ramifications of fault types and patterns. In this work we provide a tight characterization of the feasibility of MPC in the presence of general adversaries, characterized by an adversary structure, that combine omission and active corruption. To this end, we first provide a tight characterization of feasibility for Byzantine agreement (BA), a key tool in MPC protocols; this BA result may be of separate significance in its own right. Subsequently, we demonstrate that the common techniques employed in the threshold MPC literature to deal with omission corruptions do not work in the general adversary setting, not even for proving bounds that would appear straightforward, e.g., the sufficiency of the well-known $Q^3$ condition for omission-only general adversaries. Nevertheless, we provide a new protocol that implements general-adversary MPC under a surprisingly complex, yet tight, as we prove, bound. As a contribution of independent interest, our work puts forth, for the first time, a formal treatment of general-adversary MPC with (active and) omission corruptions in Canetti's universal composition framework.
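
For readers less familiar with general adversary structures, the $Q^3$ condition mentioned above says that no three sets of the adversary structure may jointly cover the entire party set. A small illustrative check follows; note that the paper's point is precisely that this classical condition is not sufficient once omission and active corruptions are combined.

```python
from itertools import combinations, combinations_with_replacement

# Sketch: checking the Q^3 condition on a general adversary structure.
# Q^3 holds iff no three (not necessarily distinct) sets of the structure
# cover the whole party set.

def satisfies_q3(parties, adversary_structure):
    for A, B, C in combinations_with_replacement(adversary_structure, 3):
        if set(parties) <= set(A) | set(B) | set(C):
            return False
    return True

if __name__ == "__main__":
    parties = {1, 2, 3, 4}
    # Threshold view: t = 1 of n = 4 (3t < n) satisfies Q^3 ...
    print(satisfies_q3(parties, [{1}, {2}, {3}, {4}]))                         # True
    # ... while t = 2 of n = 4 (3t >= n) violates it.
    print(satisfies_q3(parties, [set(c) for c in combinations(parties, 2)]))   # False
```
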
Samuel Lavery
ePrint Report
In this work, we introduce a family of asymmetric cryptographic functions based on dynamic number theoretic transformations with multiple rounds of modular arithmetic to enhance diffusion and difficulty of inversion. This function acts as a basic cryptographic building block for a novel communication-efficient zero-knowledge cryptosystem. The system as defined exhibits partial homomorphism and behaves as an additive positive accumulator. By using a novel technique to constructively embed lattice problems in a nested fashion, the dimensionality and overall complexity of the lattice structure are increased. This linked lattice framework obscures internal structure and mitigates cryptanalysis by applying a novel 'noisy roots' technique. By relaxing the need for exactly correct $n$-th roots of unity $\omega$ in a given field, we apply offset values to create a framework consisting of a set of uniquely transforming but arithmetically compatible NTTs. We provide specific parameters for conjectured NIST level V security. Communication costs are extremely low, at 288 bytes per public key and 144 bytes per ciphertext or digital signature. Example protocols for key agreement, secure data exchange, additive accumulation, and digital signatures are provided. Peer review is in preliminary stages at the time of dissemination. The claims within have not undergone rigorous validation and likely contain inaccuracies, errors, flaws, or incomplete analysis. Contents may see significant modification through later iterations.
Prabhanjan Ananth, Gilad Asharov, Vipul Goyal, Hadar Kaner, Pratik Soni, Brent Waters
ePrint Report
Trusted setup is commonly used for non-interactive proof and argument systems. However, there is no guarantee that the setup parameters in these systems are generated in a trustworthy manner. Building upon previous works, we conduct a systematic study of non-interactive zero-knowledge arguments in the common reference string model where the authority running the trusted setup might be corrupted. We explore both zero-knowledge and soundness properties in this setting. 

- We consider a new notion of NIZK called subversion advice-ZK NIZK that strengthens the notion of zero-knowledge with malicious authority security considered by Ananth, Asharov, Dahari and Goyal (EUROCRYPT'21), and present a construction of a subversion advice-ZK NIZK from the sub-exponential hardness of learning with errors.

- We introduce a new notion that strengthens the traditional definition of soundness, called accountable soundness, and present generic compilers that lift any NIZK for interesting languages in NP to additionally achieve accountable soundness.

- Finally, we combine our results for both subversion advice-ZK and accountable soundness to achieve a subversion advice-ZK NIZK that also satisfies accountable soundness. This results in the first NIZK construction that satisfies meaningful notions of both soundness and zero-knowledge even for maliciously chosen CRS.
Andi Liu, Yizhong Liu, Zhuocheng Pan, Yinuo Li, Jianwei Liu, Yuan Lu
ePrint Report
Sharding enhances blockchain scalability by dividing the network into shards, each managing specific unspent transaction outputs or accounts. As a newly introduced transaction type, cross-shard transactions pose a critical challenge to the security and efficiency of sharding blockchains. Current solutions, however, either prioritize security, relying on extra assumptions and substantial investments, or focus on reducing overhead while overlooking security considerations.

In this paper, we present Kronos, a generic and efficient sharding blockchain consensus ensuring robust security. At the core of Kronos, we introduce a "buffer" mechanism for atomic cross-shard transaction processing. Shard members collectively maintain a buffer to manage cross-shard inputs, ensuring that a transaction is committed only if all inputs are available, and that no fund is transferred for invalid requests. While ensuring security including atomicity, Kronos processes transactions with optimal intra-shard communication overhead. A valid cross-shard transaction, involving $x$ input shards and $y$ output shards, is processed with a minimal intra-shard communication overhead factor of $x+y$. Additionally, we propose a reduction of transaction invalidity proof generation to simple and fast multicasting, leading to atomic rejection without executing a full-fledged Byzantine fault tolerance (BFT) protocol in optimistic scenarios. Moreover, Kronos adopts a newly designed "batch" mechanism, reducing the inter-shard message complexity for cross-shard transactions from $\mathcal{O}(\lambda)$ to $\mathcal{O}((m \log m/b)\lambda)$ without sacrificing responsiveness (where $m$ denotes the number of shards, $b$ the batch size of intra-shard consensus, and $\lambda$ the security parameter).

Kronos operates without dependence on any time or client honesty assumption, serving as a plug-in sharding blockchain consensus supporting applications in diverse network environments, including asynchronous ones. We implement Kronos using two prominent BFT protocols: the asynchronous Speeding Dumbo (NDSS'22) and the partially synchronous HotStuff (PODC'19). Extensive experiments (on up to $1000$ AWS EC2 nodes across 4 AWS regions) demonstrate that Kronos achieves a substantial throughput of $68.6$ktx/sec with $1.7$sec latency. Compared with state-of-the-art solutions, Kronos outperforms them in all cases, achieving up to a $42 \times$ improvement in throughput and a $50\%$ reduction in latency when cross-shard transactions dominate the workload.
ChihYun Chuang, IHung Hsu, TingFang Lee
ePrint Report
In this paper, we propose a novel bi-primality test that determines whether $N=pq$ is the product of two primes for any RSA modulus, relaxing the restriction $p\equiv q \equiv 3 \pmod{4}$ that is assumed in most current bi-primality tests. Our bi-primality test generalizes the Lucas primality test to the bi-prime case. The test always accepts when $p$ and $q$ are both prime, and otherwise accepts with probability at most $1/2$. In addition, we prove that the Boneh-Franklin bi-primality test accepts a composite with probability at most $1/4$ instead of $1/2$ if we add the additional condition $\gcd(N, p+q-1)=1$. Moreover, we design a multiparty protocol secure against static semi-honest adversaries in the hybrid model and provide a security proof. We then implement the proposed protocol; running it in a single thread on a laptop gives an average execution time of 224 seconds for $N$ around $2048$ bits.
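
For background, below is a non-distributed sketch of the Boneh-Franklin-style biprimality check discussed above, including the extra $\gcd(N, p+q-1)=1$ condition. It assumes $p \equiv q \equiv 3 \pmod 4$ (precisely the restriction this paper removes) and uses toy primes, so it illustrates the classical check rather than the paper's generalized test or multiparty protocol.

```python
import math
import random

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd positive n, via the standard binary algorithm.
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def boneh_franklin_round(N, p_plus_q):
    # Extra condition highlighted in the abstract: gcd(N, p + q - 1) = 1.
    if math.gcd(N, p_plus_q - 1) != 1:
        return False
    g = random.randrange(2, N)
    while jacobi(g, N) != 1:
        g = random.randrange(2, N)
    # For a true biprime with p = q = 3 (mod 4), the exponent equals phi(N)/4
    # and the result is +1 or -1 mod N for every g of Jacobi symbol 1.
    return pow(g, (N - p_plus_q + 1) // 4, N) in (1, N - 1)

if __name__ == "__main__":
    p, q = 1019, 1031            # toy primes, both congruent to 3 mod 4
    N = p * q
    print(all(boneh_franklin_round(N, p + q) for _ in range(20)))   # True
```
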
Zeyu Liu, Eran Tromer, Yunhao Wang
ePrint Report
Anonymous message delivery, as in privacy-preserving blockchain and private messaging applications, needs to protect recipient metadata: eavesdroppers should not be able to link messages to their recipients. This raises the question: how can untrusted servers assist in delivering the pertinent messages to each recipient, without learning which messages are addressed to whom?

Recent work constructed Oblivious Message Retrieval (OMR) protocols that outsource the message detection and retrieval in a privacy-preserving way, using homomorphic encryption. This exhibits significant costs in computation per message scanned (${\sim}109$ms), as well as in the size of the associated messages (${\sim}1$kB overhead) and public keys (${\sim}132$kB).

This work constructs more efficient OMR schemes, by replacing the LWE-based clue encryption of prior works with a Ring-LWE variant, and utilizing the resulting flexibility to improve several components of the scheme. We thus devise, analyze, and benchmark two protocols:

The first protocol focuses on improving the detector runtime, using a new retrieval circuit that can be homomorphically evaluated more efficiently. Concretely, this construction takes only ${\sim}7.3$ms per message scanned, about $15$x faster than the prior work.

The second protocol focuses on reducing the communication costs, by designing a different homomorphic decryption circuit. While the circuit is less homomorphic-encryption-friendly (than our first construction), it allows the parameter of the Ring-LWE encryption to be set such that both the public key and the message size are greatly reduced. Concretely, the public key size is about $235$x smaller than the prior work, and the message size is roughly $1.6$x smaller. The runtime of this second construction is ${\sim}40.0$ms per message, still more than $2.5$x faster than prior works.
Andreea Alexandru, Ahmad Al Badawi, Daniele Micciancio, Yuriy Polyakov
ePrint Report
Fully Homomorphic Encryption (FHE) is a powerful tool for performing privacy-preserving analytics over encrypted data. A promising method for FHE over real and complex numbers is approximate homomorphic encryption, instantiated with the Cheon-Kim-Kim-Song (CKKS) scheme. The CKKS scheme enables efficient evaluation for many privacy-preserving machine learning applications. Despite its high efficiency, there is currently a lot of confusion about how to securely instantiate CKKS for a given application, especially after secret-key recovery attacks were proposed by Li and Micciancio (EUROCRYPT'21) for the $IND-CPA^{D}$ setting, i.e., where decryption results are shared with other parties. On the one hand, the generic definition of $IND-CPA^{D}$ is application-agnostic and often requires impractically large parameters. On the other hand, practical CKKS implementations target specific applications and use tighter parameters. A good illustration is given by the recent secret-key recovery attacks against a CKKS implementation in the OpenFHE library by Guo et al. (USENIX Security'24). We show that these attacks misuse the library by employing different (incompatible) circuits during parameter estimation and run-time computation, yet they do not violate the generic (application-agnostic) $IND-CPA^{D}$ definition.

To address this confusion, we introduce the notion of application-aware homomorphic encryption and devise related security definitions, which correspond more closely to how homomorphic encryption schemes are implemented and used in practice. We then formulate the guidelines for implementing the application-aware homomorphic encryption model to achieve $IND-CPA^{D}$ security for practical applications of CKKS. We also show that our application-aware model can be used for secure, efficient instantiation of exact homomorphic encryption schemes.
Mark Manulis, Jérôme Nguyen
ePrint Report
We focus on the problem of constructing fully homomorphic encryption (FHE) schemes that achieve some meaningful notion of adaptive chosen-ciphertext security beyond CCA1. Towards this, we propose a new notion, called security against verified chosen-ciphertext attack (vCCA). The idea behind it is to ascertain integrity of the ciphertext by imposing a strong control on the evaluation algorithm. Essentially, we require that a ciphertext obtained by the use of homomorphic evaluation must be "linked" to the original input ciphertexts. We formalize the vCCA notion in two equivalent formulations; the first is in the indistinguishability paradigm, the second follows the non-malleability simulation-based approach, and is a generalization of the targeted malleability introduced by Boneh et al. in 2012.

We strengthen the credibility of our definitions by exploring relations to existing security notions for homomorphic encryption schemes, namely CCA1, RCCA, FuncCPA, CCVA, and HCCA. We prove that vCCA security is the strongest notion known so far that can be achieved by an FHE scheme; in particular, vCCA is strictly stronger than CCA1.

Finally, we provide a general transformation that takes any CPA-secure FHE scheme and makes it vCCA-secure. Our transformation first turns an FHE scheme into a CCA2-secure scheme in which part of the ciphertext retains the homomorphic properties, and then extends it with a succinct non-interactive argument of knowledge (SNARK) to verifiably control the evaluation algorithm. In fact, we obtain four general variations of this transformation. We handle both asymmetric and symmetric-key FHE schemes, and for each we give two variations differing in whether the ciphertext integrity can be verified publicly or requires the secret key. We use well-known techniques to achieve CCA security in the first step of our transformation: in the asymmetric case, we use the double encryption paradigm, and in the symmetric case, we use Encrypt-then-MAC techniques. Furthermore, our transformation also yields the first CCA-secure FHE scheme based on bootstrapping techniques.
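
As background for the symmetric-key branch of the transformation, the following is a generic Encrypt-then-MAC sketch. The inner encryption is a toy SHA-256 counter-mode keystream chosen only for self-containment, and none of the homomorphic or SNARK machinery of the actual transformation is modeled here.

```python
import hashlib
import hmac
import os

# Generic Encrypt-then-MAC composition (the symmetric-key technique named in
# the abstract for the CCA step). Illustrative only: the inner cipher is a toy
# SHA-256 counter-mode keystream, and the homomorphic parts are omitted.

def _keystream(key, nonce, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_then_mac(enc_key, mac_key, msg):
    nonce = os.urandom(16)
    ct = bytes(m ^ k for m, k in zip(msg, _keystream(enc_key, nonce, len(msg))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def decrypt(enc_key, mac_key, nonce, ct, tag):
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("ciphertext integrity check failed")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))

if __name__ == "__main__":
    ek, mk = os.urandom(32), os.urandom(32)
    nonce, ct, tag = encrypt_then_mac(ek, mk, b"payload")
    assert decrypt(ek, mk, nonce, ct, tag) == b"payload"
```
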
Antonio Sanso
ePrint Report
This paper introduces an algorithm to efficiently break the Decisional Diffie-Hellman (DDH) assumption in totally non-maximal imaginary quadratic orders, specifically when $\Delta_1 = 3$, and $f$ is non-prime with knowledge of a single factor. Inspired by Shanks and Dedekind's work on 3-Sylow groups, we generalize their observations to undermine DDH security.
Karl Kreder, Shreekara Shastry, Apostolos Tzinas, Sriram Vishwanath, Dionysis Zindros
ePrint Report
We propose a modification to the fork choice rule of proof-of-work blockchains. Instead of choosing the heaviest chain, we choose the chain with the most intrinsic work. The intrinsic work of a block is roughly the number of zeroes at the front of its hash. This modification allows us to safely decrease the confirmations required, yielding a $28.5\%$ improvement in confirmation delay, or, dually, to safely increase the block production rate, yielding a $16.3\%$ improvement in throughput, as compared to the vanilla Bitcoin proof-of-work fork choice rule. Our modification is at the level of the proof-of-work inequality, and thus can be composed with any other methods to improve latency or throughput that have been proposed in the literature. We report experimental findings measured on a production-grade implementation of our system, whose testnet is already deployed in the wild. Lastly, we formally prove the security of our new protocol in the Bitcoin Backbone model.
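
A minimal sketch of the fork-choice idea follows, reading a block's intrinsic work as the number of leading zero bits of its hash. This is an illustrative interpretation of the abstract, not the authors' exact definition or proof-of-work inequality.

```python
import hashlib

# Toy fork-choice comparison: pick the chain with the most cumulative
# *intrinsic* work, read here as the number of leading zero bits in each
# block's hash (illustrative reading of the abstract).

def block_hash(header: bytes) -> int:
    return int.from_bytes(hashlib.sha256(header).digest(), "big")

def leading_zero_bits(h: int, bits: int = 256) -> int:
    return bits - h.bit_length()

def intrinsic_work(chain) -> int:
    return sum(leading_zero_bits(block_hash(b)) for b in chain)

def fork_choice(chains):
    # Choose the chain with the most cumulative intrinsic work.
    return max(chains, key=intrinsic_work)

if __name__ == "__main__":
    chain_a = [b"genesis", b"a1", b"a2"]
    chain_b = [b"genesis", b"b1", b"b2"]
    best = fork_choice([chain_a, chain_b])
    print(intrinsic_work(chain_a), intrinsic_work(chain_b), best[-1])
```
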
Julien Béguinot, Wei Cheng, Sylvain Guilley, Olivier Rioul
ePrint Report
Masking is one of the most popular countermeasures to side-channel attacks, because it can offer provable security. However, depending on the adversary's model, useful security guarantees can be hard to provide. At first, masking has been shown secure against t-threshold probing adversaries by Ishai et al. at Crypto'03. It has then been shown secure in the more generic random probing model by Duc et al. at Eurocrypt'14. Prouff and Rivain have introduced the noisy leakage model to capture more realistic leakage at Eurocrypt'13. A reduction from noisy leakage to random probing has been introduced by Duc et al. at Eurocrypt'14, and security guarantees were improved for both models by Prest et al. at Crypto'19, Duc et al. in Eurocrypt'15/J. Cryptol'19, and Masure and Standaert at Crypto'23.

Unfortunately, as it turns out, we found that previous proofs in either the random probing or the noisy leakage model are flawed, and such flaws do not appear easy to fix. In this work, we show that the Doeblin coefficient allows one to overcome these flaws. In fact, it yields optimal reductions from noisy leakage to random probing, thereby providing a correct and usable metric to properly ground security proofs. This shows the inherent, inevitable cost of a reduction from the noisy leakage to the random probing model. We show that it can also be used to derive direct formal security proofs using the subsequence decomposition of Prouff and Rivain.
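
For intuition, the Doeblin coefficient of a discrete channel $W(y|x)$ can be computed as $\sum_y \min_x W(y|x)$. The toy snippet below evaluates it for a binary symmetric leakage channel; this is an illustrative example, not the paper's leakage model.

```python
# Sketch: the Doeblin coefficient of a discrete channel W(y|x) equals
# sum_y min_x W(y|x). Toy example: a secret bit x leaks through a binary
# symmetric channel with crossover probability eps.

def doeblin_coefficient(channel):
    # channel[x][y] = W(y | x); each row must sum to 1.
    n_outputs = len(channel[0])
    return sum(min(row[y] for row in channel) for y in range(n_outputs))

if __name__ == "__main__":
    eps = 0.1
    bsc = [[1 - eps, eps],
           [eps, 1 - eps]]
    # For a BSC the coefficient is 2*eps: the channel is a mixture of
    # "output uniform noise" (prob 2*eps) and "reveal x exactly" (prob 1-2*eps).
    print(doeblin_coefficient(bsc))   # 0.2
```
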
Sourav Das, Benny Pinkas, Alin Tomescu, Zhuolun Xiang
ePrint Report
Generating and integrating shared randomness into a blockchain can expand applications and strengthen security. We aim to have validators generate blockchain randomness autonomously, with fresh shared randomness generated for each block. We focus on proof-of-stake blockchains, where each validator has a different amount of stake (aka weight). Such chains introduce a weighted threshold setting, where subset authorization relies on the cumulative weight of validators rather than the subset size.

We introduce three cryptographic protocols to enable generating shared randomness in a weighted setting: A publicly verifiable secret sharing scheme (PVSS) which is weighted and aggregatable, a weighted distributed key generation protocol (DKG), and a weighted verifiable unpredictable function (VUF). Importantly, in the VUF protocol, which is the protocol that is run most frequently, the computation and communication costs of participants are independent of their weight. This feature is crucial for scalability.

We implemented our schemes on top of the Aptos blockchain, which is a proof-of-stake blockchain deployed in production. Our micro-benchmarks demonstrate that the signing and verification time, as well as the signature size, are independent of the total weight of the parties, whereas the signing time and signature size of the baseline (BLS with virtualization) increase significantly. For instance, our VUF reduces the signature size by factors of 7X and 34X for total weights of 821 and 4053, respectively. We also demonstrate the practicability of our design via an end-to-end evaluation.
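
To illustrate the weighted-threshold setting and the virtualization baseline referred to above, a toy sketch follows. The validator names, weights, and threshold are made up for illustration, except that the total weight 821 echoes one of the benchmark settings mentioned in the abstract.

```python
# Sketch of weighted-threshold authorization versus the "virtualization"
# baseline (one key share per unit of stake). Illustrative only; the paper's
# PVSS/DKG/VUF constructions avoid the per-weight cost that virtualization incurs.

def authorized(weights, subset, threshold):
    # A subset is authorized iff its cumulative stake reaches the threshold.
    return sum(weights[v] for v in subset) >= threshold

def virtualized_share_count(weights):
    # Virtualization baseline: one virtual party per unit of weight, so costs
    # grow with total weight rather than with the number of validators.
    return sum(weights.values())

if __name__ == "__main__":
    weights = {"v1": 400, "v2": 250, "v3": 150, "v4": 21}     # total 821
    print(authorized(weights, {"v1", "v2"}, threshold=548))   # True
    print(virtualized_share_count(weights))                   # 821
```
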