International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

15 August 2023

Shuai Han, Shengli Liu, Zhedong Wang, Dawu Gu
ePrint Report
In this work, we construct the first digital signature (SIG) and public-key encryption (PKE) schemes with almost tight multi-user security under adaptive corruptions based on the learning-with-errors (LWE) assumption in the standard model. Our PKE scheme achieves almost tight IND-CCA security and our SIG scheme achieves almost tight strong EUF-CMA security, both in the multi-user setting with adaptive corruptions. The security loss is quadratic in the security parameter and independent of the number of users, signatures, or ciphertexts. Previously, such schemes were only known to exist under number-theoretic assumptions or in the classical random oracle model, and were thus vulnerable to quantum adversaries.

To obtain our schemes from LWE, we propose new frameworks for constructing SIG and PKE with a core technical tool named probabilistic quasi-adaptive hash proof system (pr-QA-HPS). As a new variant of HPS, our pr-QA-HPS provides probabilistic public and private evaluation modes that may toss coins. This is in stark contrast to the traditional HPS [Cramer and Shoup, Eurocrypt 2002] and existing variants like approximate HPS [Katz and Vaikuntanathan, Asiacrypt 2009], whose public and private evaluations are deterministic in their inputs. Moreover, we formalize a new property called evaluation indistinguishability by requiring statistical indistinguishability of the two probabilistic evaluation modes, even in the presence of the secret key. The evaluation indistinguishability, as well as other nice properties resulting from the probabilistic features of pr-QA-HPS, are crucial for the multi-user security proof of our frameworks under adaptive corruptions.

As for instantiations, we construct pr-QA-HPS from the LWE assumption and prove its properties with almost tight reductions, which yield almost tightly secure LWE-based SIG and PKE schemes under our frameworks. Along the way, we also provide new almost-tight reductions from LWE to multi-secret LWE, which may be of independent interest.
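Concretely, the claimed tightness can be read as a bound of roughly the following shape (our notation, for illustration only; the paper's exact statements differ), where $\lambda$ is the security parameter and the loss factor is independent of the numbers of users, signatures, and ciphertexts:

$\mathbf{Adv}(\mathcal{A}) \leq O(\lambda^2) \cdot \mathbf{Adv}^{\mathrm{LWE}}(\mathcal{B}) + \mathrm{negl}(\lambda),$

where $\mathbf{Adv}(\mathcal{A})$ denotes the adversary's advantage against the multi-user security of SIG or PKE under adaptive corruptions.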
Francesco Sica
ePrint Report
We fix an omission in [Petit17] on torsion point attacks of isogeny-based cryptosystems akin to SIDH, also reprised in [dQuehen-etal21]. In these works, the authors represent certain integers using a norm equation to derive a secret isogeny. However, this derivation uses as a crucial ingredient a lemma ([Petit17], Section 4.3), which we show to be incorrect. We then state sufficient conditions allowing us to prove a modified version of this lemma.

A further idea of parametrizing solutions of the norm equation shows that these conditions can be fulfilled under the same heuristics as in these previous works. Our contribution is theoretical: it does not invalidate the attack, which does work in practice, but gives a correct mathematical justification for it.

We also simplify the argument of Theorem 3 in [dQuehen-etal21] to show that the requirement that $m$ be small is unnecessary.
Elizabeth Crites, Chelsea Komlo, Mary Maller, Stefano Tessaro, Chenzhi Zhu
ePrint Report
Both threshold and blind signatures have, individually, received a considerable amount of attention. However, little is known about their combination, i.e., a threshold signature that is also blind, in that no coalition of signers learns anything about the message being signed or the signature being produced. Several applications of blind signatures (e.g., anonymous tokens) would benefit from distributed signing as a means to increase trust in the service and hence reduce the risk of key compromise. This paper builds the first blind threshold signatures in pairing-free groups. Our main contribution is a construction that transforms an underlying blind non-threshold signature scheme with a suitable structure into a threshold scheme, preserving its blindness. The resulting signing protocol proceeds in three rounds and produces signatures consisting of one group element and two scalars. The underlying non-threshold blind signature schemes are of independent interest, and improve upon the current state of the art (Tessaro and Zhu, EUROCRYPT '22) with shorter signatures (three elements instead of four) and simpler proofs of security. All of our schemes are proved secure in the Random Oracle and Algebraic Group Models, assuming the hardness of the discrete logarithm problem.
Kosei Sakamoto, Ryoma Ito, Takanori Isobe
ePrint Report
The most crucial but time-consuming task in differential cryptanalysis is to find a differential with a high probability. To tackle this task, we propose a new SAT-based automatic search framework to efficiently find a differential with the highest probability under a specified condition. Since previous SAT methods (e.g., Sun et al.'s method proposed at ToSC 2021(1)) focused on accelerating the search for an optimal single differential characteristic, they are not optimized for evaluating the clustering effect needed to obtain a tighter differential probability. In contrast, our framework takes advantage of a method for solving incremental SAT problems in parallel using multi-threading, and consequently offers the following advantages over previous methods: (1) speedy identification of a differential with the highest probability under the specified conditions; (2) efficient construction of the truncated differential with the highest probability from the obtained multiple differentials; and (3) applicability to a wide class of symmetric-key primitives. To demonstrate the effectiveness of our framework, we apply it to the block cipher PRINCE and the tweakable block cipher QARMA. We successfully obtain tight differential bounds for all variants of PRINCE and QARMA within practical time, thereby identifying the longest distinguishers for all variants, which improve existing ones by one to four rounds. Besides, we uncover notable differences between PRINCE and QARMA in the behavior of differentials, especially in the clustering effect. We believe that our findings shed light on new structural properties of these important primitives. In the context of key recovery attacks, our framework allows us to derive key-recovery-friendly truncated differentials for all variants of QARMA. Consequently, we report key recovery attacks based on (truncated) differential cryptanalysis on QARMA for the first time and show that these attacks are competitive with other existing attacks.
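To make the enumerate-and-block core of such a clustering evaluation concrete, here is a minimal single-threaded sketch in Python using the python-sat package; the CNF encoding of characteristics and the names (cnf, char_vars, weight_of) are illustrative placeholders, not the paper's actual framework, which additionally parallelizes the incremental solving across threads.

from pysat.formula import CNF
from pysat.solvers import Solver

def cluster_probability(cnf: CNF, char_vars, weight_of):
    """Sum 2^-w over all characteristics satisfying `cnf`, i.e., estimate
    the differential probability (clustering effect) for a fixed
    input/output difference encoded in the CNF."""
    total = 0.0
    with Solver(name="g3", bootstrap_with=cnf.clauses) as solver:  # incremental SAT
        while solver.solve():
            model = solver.get_model()
            total += 2.0 ** (-weight_of(model))
            # Block the found characteristic so the next solve() call
            # returns a different one (an incremental SAT query).
            solver.add_clause([-v if model[v - 1] > 0 else v
                               for v in char_vars])
    return total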
Huayi Qi, Minghui Xu, Dongxiao Yu, Xiuzhen Cheng
ePrint Report
The privacy concern in smart contract applications continues to grow, leading to the proposal of various schemes aimed at developing comprehensive and universally applicable privacy-preserving smart contract (PPSC) schemes. However, the existing research in this area is fragmented and lacks a comprehensive overview. This paper aims to bridge this research gap by systematizing previous studies in the field. The primary focus is on two categories: PPSC schemes based on cryptographic tools like zero-knowledge proofs, and schemes based on trusted execution environments. In doing so, we provide a condensed summary of the different approaches taken in constructing PPSC schemes. Additionally, we offer a comparative analysis of these approaches, highlighting their similarities and differences. Furthermore, we shed light on the challenges that developers face when designing and implementing PPSC schemes. Finally, we delve into potential future directions for improving and advancing these schemes, discussing possible avenues for further research and development.
Yuval Ishai, Mahimna Kelkar, Varun Narayanan, Liav Zafar
ePrint Report
Correlated secret randomness is a useful resource for secure computation protocols, often enabling dramatic speedups compared to protocols in the plain model. This has motivated a line of work on identifying and securely generating useful correlations.

Different kinds of correlations can vary greatly in terms of usefulness and ease of generation. While there has been major progress on efficiently generating oblivious transfer (OT) correlations, other useful kinds of correlations are much more costly to generate. Thus, it is highly desirable to develop efficient techniques for securely converting copies of a given source correlation into copies of a given target correlation, especially when the former are cheaper to generate than the latter.

In this work, we initiate a systematic study of such conversions that involve only a single uni-directional message. We refer to such a conversion as a one-message secure reduction (OMSR). Recent works (Agarwal et al., Eurocrypt 2022; Khorasgani et al., Eurocrypt 2022) studied a similar problem when no communication is allowed; this setting is quite restrictive, however, with few non-trivial conversions being feasible. The OMSR setting substantially expands the scope of feasible results, allowing for direct applications to existing MPC protocols.
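As a toy illustration of what a correlation conversion looks like, the folklore transformation below turns one random-OT correlation into one multiplication (Beaver) triple over GF(2) without any communication; the OMSR setting broadens this picture by additionally allowing a single message. This sketch is ours and is not a construction from the paper.

import secrets

# Source correlation: random OT. Alice holds (s0, s1); Bob holds (c, s_c).
s0, s1 = secrets.randbits(1), secrets.randbits(1)
c = secrets.randbits(1)
sc = s1 if c else s0

# Local conversion into additive shares of a product over GF(2):
a, x = s0 ^ s1, s0   # Alice's triple values
b, y = c, sc         # Bob's triple values

# Correctness: x XOR y equals a AND b, so (a, b) with shares (x, y)
# form a valid multiplication triple.
assert (x ^ y) == (a & b)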

We obtain the following positive and negative results.

- OMSR constructions. We present a general rejection-sampling-based technique for OMSR with OT source correlations. We apply it to substantially improve the communication complexity of optimized protocols for distributed symmetric cryptography (Dinur et al., Crypto 2021).

- OMSR lower bounds. We develop general techniques for proving lower bounds on the communication complexity of OMSR, matching our positive results up to small constant factors.
Kirill Vedenev, Yury Kosolapov
ePrint Report
In this paper, we study the decoding failure rate (DFR) of non-binary QC-MDPC codes using theoretical tools, extending the results of previous studies of binary QC-MDPC codes. Theoretical estimates of the DFR are particularly significant for cryptographic applications of QC-MDPC codes. Specifically, in the binary case, it is established that exploiting decoding failures makes it possible to recover the secret key of a QC-MDPC cryptosystem. This implies that to attain the desired security level against adversaries in the CCA2 model, the decoding failure rate must be upper-bounded by a negligibly small value. We observe that this attack extends to the non-binary case as well, which underscores the importance of DFR estimation. Consequently, we study the guaranteed error-correction capability of non-binary QC-MDPC codes under the one-step majority logic (OSML) decoder and provide a theoretical analysis of the 1-iteration parallel symbol flipping decoder and its combination with the OSML decoder. Utilizing these results, we estimate potential public-key sizes for QC-MDPC cryptosystems over $\mathbb{F}_4$ at various security levels. We find that there is no advantage in key size reduction compared to the binary case.
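For intuition, the sketch below shows one parallel flipping iteration for the binary case; the non-binary decoders analyzed in the paper work per $\mathbb{F}_4$-symbol, but the overall shape (syndrome, unsatisfied-check counts, thresholded flips) is the same. This is our simplified illustration, not the paper's exact decoder.

import numpy as np

def one_iteration_parallel_flip(H, y, threshold):
    """One parallel bit-flipping iteration for a binary (QC-)MDPC code.
    H: r x n parity-check matrix over {0,1}; y: received word of length n.
    Every position whose number of unsatisfied checks reaches the
    threshold is flipped simultaneously."""
    syndrome = (H @ y) % 2           # which parity checks are unsatisfied
    upc = H.T @ syndrome             # unsatisfied-check count per position
    flips = (upc >= threshold).astype(y.dtype)
    return (y + flips) % 2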
Johannes Mono, Kamil Kluczniak, Tim Güneysu
ePrint Report
In recent years, the research community has made great progress in improving techniques for privacy-preserving computation such as fully homomorphic encryption (FHE). Despite this progress, open challenges remain, mostly in the areas of performance and usability, to further advance the adoption of these technologies. This work provides multiple contributions that improve the current state of the art in both areas.

More specifically, we significantly simplify the bootstrapping idea by Carpov, Izabachène, and Mollimard [1] for Boolean-based FHE schemes such as FHEW or TFHE, making the concept usable in practice. Based on our simplifications, we provide an easy-to-use interface for amortized bootstrapping, implement our improvements in the open-source library FHE-Deck, and provide new parameter sets for multi-bit encryptions with state-of-the-art security.

We build a toolset that compiles high-level code such as C++ to code that executes operations on encrypted data. For this toolset, we propose the first non-trivial FHE-specific optimizations in synthesizing privacy-preserving circuits from high-level code, namely look-up table (LUT) grouping and adder substitution. Using LUT grouping, we reduce the number of required bootstrappings by almost 35% on average, while with adder substitution, we reduce the number of required bootstrappings by up to 80% for certain use cases. Overall, the execution time is up to 3.8× faster using our optimizations compared to previous state-of-the-art circuit synthesis.
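The following cleartext stand-in (ours, with illustrative names; the real optimization operates on encrypted data and must respect plaintext-space limits) shows the idea behind LUT grouping: two look-up tables over the same input are packed into one wider table, so one bootstrap replaces two.

def lut(table, x):          # stand-in: each call models one bootstrap
    return table[x]

f = [x * x % 16 for x in range(16)]    # two functions of one 4-bit input
g = [(x + 3) % 16 for x in range(16)]

def naive(x):               # two bootstraps per input
    return lut(f, x), lut(g, x)

fg = [(f[x] << 4) | g[x] for x in range(16)]  # grouped into one table

def grouped(x):             # one bootstrap per input
    packed = lut(fg, x)
    return packed >> 4, packed & 0xF

assert all(naive(x) == grouped(x) for x in range(16))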
Zhipeng Wang, Marko Cirkovic, Duc V. Le, William Knottenbelt, Christian Cachin
ePrint Report
On-chain mixers, such as Tornado Cash (TC), have become a popular privacy solution for many non-privacy-preserving blockchain users. These mixers enable users to deposit a fixed amount of coins and withdraw them to another address, effectively reducing the linkability between these addresses and obscuring their transaction history. However, the high cost of interacting with existing on-chain mixer smart contracts prevents standard users from using the mixer, mainly due to the use of computationally expensive cryptographic primitives. For instance, a deposit to TC on Ethereum costs approximately $1.1$ million gas (i.e., $66$ USD in June 2023), which is $53\times$ more than a base transfer transaction.

In this work, we introduce the Merkle Pyramid Builder approach to incrementally build the Merkle tree in an on-chain mixer and update the tree per batch of deposits, thereby decreasing the overall cost of using the mixer. Our evaluation results highlight the effectiveness of this approach, showcasing a reduction of up to $7\times$ in the amortized cost of depositing compared to state-of-the-art on-chain mixers. Importantly, these improvements are achieved without compromising users' privacy. Furthermore, we propose the use of verifiable computation to shift the responsibility of Merkle tree updates from on-chain smart contracts to off-chain clients, which can further reduce deposit costs. Additionally, our analysis demonstrates that our designs ensure fairness by distributing Merkle tree update costs among clients over time.
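The toy below (ours; the paper's Merkle Pyramid Builder and its gas accounting are more involved) illustrates the amortization principle: deposits are queued cheaply and the tree root is recomputed once per batch rather than once per deposit.

import hashlib

def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

class BatchedMerkleTree:
    """Append-only Merkle tree whose root is updated once per batch."""

    def __init__(self, depth: int):
        self.depth = depth
        self.leaves = []     # commitments already in the tree
        self.pending = []    # queued deposits, no hashing yet

    def deposit(self, commitment: bytes):
        self.pending.append(commitment)          # cheap per-deposit step

    def update(self) -> bytes:
        self.leaves += self.pending
        self.pending = []
        level = self.leaves + [b"\x00" * 32] * (2**self.depth - len(self.leaves))
        for _ in range(self.depth):              # one rebuild per batch
            level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]                          # the new root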
Mario Mastriani
ePrint Report
Quantum key distribution (QKD) constitutes the most widespread family of information-preservation techniques in quantum cryptography. However, these techniques must deal with a series of technological challenges that prevent their efficient implementation in space as well as in exclusively terrestrial configurations. Moreover, the current smallsat constellations in Low Earth Orbit (LEO, at an altitude of approx. 500 km), together with other satellites in Medium Earth Orbit (MEO) and geostationary orbit (GEO), plus the large amount of space debris around the planet, have become a serious obstacle for astronomy. In this work, a classical alternative to QKD is presented, also based on a symmetric key but with low cost and high efficiency, which dispenses with the implementation problems present in QKD protocols and does not require the use of satellites; we call it non-distributable key sharing (NDKS). Due to its low cost, simplicity of implementation, and high efficiency, NDKS is presented as an ideal solution to the problem of cybersecurity in the Internet of Things (IoT) in general and IoT-fog-clouds in particular.

11 August 2023

Pierre-Augustin BERTHET, Cédric Tavernier, Jean-Luc Danger, Laurent Sauvage
ePrint Report
Recent technological advances in Post-Quantum Cryptography (PQC) raise the question of robust implementations of new asymmetric cryptographic primitives in today's technology. This is the case for the lattice-based CRYSTALS-Kyber algorithm, which has been selected as the first NIST standard for Public Key Encryption (PKE) and Key Encapsulation Mechanisms (KEM). We notably have to make sure the Kyber implementation is resilient against physical attacks like Side-Channel Analysis (SCA) and Fault Injection Attacks (FIA). To reach this goal, we propose to use the masking countermeasure, more precisely the generic Direct Sum Masking method (DSM). Taking inspiration from a previous paper on AES, we extend the method to finite fields of prime characteristic other than 2 and to even-length codes. In particular, we investigate its application to Keccak, the hash function used in Kyber. We also provide the first masked implementation of Kyber offering both SCA and FIA resilience while not requiring any conversion between different masking methods.
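For readers unfamiliar with masking over odd-characteristic fields, the sketch below shows plain first-order additive sharing modulo Kyber's prime $q = 3329$; DSM itself encodes secrets with linear codes and is strictly more general, so this is only a minimal intuition aid, not the paper's scheme.

import secrets

Q = 3329  # Kyber's prime modulus

def share(x, n=2):
    """Split x into n additive shares mod Q (first-order for n = 2)."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def unshare(shares):
    return sum(shares) % Q

x = 1234
xs = share(x)
assert unshare(xs) == x
# Linear operations can be computed share-wise, without recombining:
ys = [(3 * s) % Q for s in xs]
assert unshare(ys) == (3 * x) % Q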
Marcel Keller, Ke Sun
ePrint Report
Keller and Sun (ICML'22) have found a gap in the accuracy between floating-point deep learning in cleartext and secure quantized deep learning using multi-party computation. We have discovered that this gap is caused by a bug in the implementation of max-pooling. In this note, we present updated figures to support this conclusion. We also add figures for another network on CIFAR-10.
Nicolas Mohnblatt, Alberto Sonnino, Kobi Gurkan, Philipp Jovanovic
ePrint Report
Contact discovery is a crucial component of social applications, facilitating interactions between registered contacts. This work introduces Arke, a novel approach to contact discovery that addresses the limitations of existing solutions in terms of privacy, scalability, and reliance on trusted third parties. Arke ensures the unlinkability of user interactions, mitigates enumeration attacks, and operates without single points of failure or trust. Notably, Arke is the first contact discovery system whose performance is independent of the total number of users and the first that can operate in a Byzantine setting. It achieves its privacy goals through an unlinkable handshake mechanism built on top of an identity-based non-interactive key exchange. By leveraging a custom distributed architecture, Arke forgoes the expense of consensus to achieve scalability while maintaining consistency in a Byzantine fault tolerant environment. Performance evaluations demonstrate that Arke can support enough throughput to operate at a planetary scale while maintaining sub-second latencies in a large geo-distributed setting.
Arasu Arun, Srinath Setty, Justin Thaler
ePrint Report
Succinct Non-interactive Arguments of Knowledge (SNARKs) allow an untrusted prover to establish that it correctly ran some "witness-checking procedure" on a witness. A zkVM (short for zero-knowledge Virtual Machine) is a SNARK that allows the witness-checking procedure to be specified as a computer program written in the assembly language of a specific instruction set architecture (ISA).

A $\textit{front-end}$ converts computer programs into a lower-level representation such as an arithmetic circuit or generalization thereof. A SNARK for circuit-satisfiability can then be applied to the resulting circuit.

We describe a new front-end technique called Jolt that applies to a variety of ISAs. Jolt arguably realizes a vision called the $\textit{lookup singularity}$, which seeks to produce circuits that only perform lookups into pre-determined lookup tables. The circuits output by Jolt primarily perform lookups into a gigantic lookup table, of size more than $2^{128}$, that depends only on the ISA. The validity of the lookups is proved via a new $\textit{lookup argument}$ called Lasso, described in a companion work (Setty, Thaler, and Wahby, ePrint 2023). Although size-$2^{128}$ tables are vastly too large to materialize in full, the tables arising in Jolt are structured, avoiding costs that grow linearly with the table size.

We describe performance and auditability benefits of Jolt compared to prior zkVMs, focusing on the popular RISC-V ISA as a concrete example. The dominant cost for the Jolt prover applied to this ISA (on $64$-bit data types) is cryptographically committing to about six $256$-bit field elements per step of the RISC-V CPU. This compares favorably to prior zkVM provers, even those focused on far simpler VMs.
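The trick that makes a size-$2^{128}$ structured table usable can be illustrated in cleartext (our toy, not Jolt's circuitry): a 64-bit bitwise AND, whose full table would have $2^{128}$ entries, decomposes into eight lookups into a single $2^{16}$-entry byte-level subtable.

SUB = {(a, b): a & b for a in range(256) for b in range(256)}  # 2^16 entries

def and64_via_lookups(x: int, y: int) -> int:
    out = 0
    for i in range(8):                    # operate chunk-by-chunk
        a = (x >> (8 * i)) & 0xFF
        b = (y >> (8 * i)) & 0xFF
        out |= SUB[(a, b)] << (8 * i)     # one small lookup per byte chunk
    return out

x, y = 0xDEADBEEF12345678, 0xFFFF0000FFFF0000
assert and64_via_lookups(x, y) == (x & y)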
Srinath Setty, Justin Thaler, Riad Wahby
ePrint Report
This paper introduces Lasso, a new family of lookup arguments, which allow an untrusted prover to commit to a vector $a \in \mathbb{F}^m$ and prove that all entries of $a$ reside in some predetermined table $t \in \mathbb{F}^n$. Lasso's performance characteristics unlock the so-called "lookup singularity". Lasso works with any multilinear polynomial commitment scheme and provides the following efficiency properties.

For $m$ lookups into a table of size $n$, Lasso's prover commits to just $m + n$ field elements. Moreover, the committed field elements are small, meaning that, no matter how big the field $\mathbb{F}$ is, they all lie in the set $\{0, \dots, m\}$. When using a multiexponentiation-based commitment scheme, this results in the prover's costs being dominated by only $O(m + n)$ group operations (e.g., elliptic curve point additions), plus the cost of proving an evaluation of a multilinear polynomial whose evaluations over the Boolean hypercube are the table entries. This represents a significant improvement in prover costs over prior lookup arguments (e.g., plookup, Halo2's lookups, lookup arguments based on logarithmic derivatives).

Unlike all prior lookup arguments, if the table $t$ is structured (in a precise sense that we define), then no party needs to commit to $t$, enabling the use of much larger tables than prior works (e.g., of size $2^{128}$ or larger). Moreover, Lasso's prover only "pays" in runtime for table entries that are accessed by the lookup operations. This applies to tables commonly used to implement range checks, bitwise operations, big-number arithmetic, and even transitions of a full-fledged CPU such as RISC-V. Specifically, for any integer parameter $c > 1$, Lasso's prover's dominant cost is committing to $3 \cdot c \cdot m + c \cdot n^{1/c}$ field elements. Furthermore, all these field elements are "small", meaning they lie in the set $\{0, \dots, \max(m, n^{1/c}, q) - 1\}$, where $q$ is the maximum value in $a$.
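As a worked example of this cost formula (our numbers, for illustration): $m = 2^{20}$ lookups into a structured table of size $n = 2^{128}$ with $c = 16$ give a dominant commitment cost of $3 \cdot 16 \cdot 2^{20} + 16 \cdot 2^{128/16} = 48 \cdot 2^{20} + 16 \cdot 2^{8} \approx 2^{25.6}$ small field elements, even though the table itself is never materialized.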

Lasso’s starting point is Spark, a time-optimal polynomial commitment scheme for sparse polynomials in Spartan (CRYPTO 2020). We first provide a stronger security analysis for Spark. Spartan’s security analysis assumed that certain metadata associated with a sparse polynomial is committed by an honest party (this is acceptable for its purpose in Spartan, but not for Lasso). We prove that Spark remains secure even when that metadata is committed by a malicious party. This provides the first "standard" commitment scheme for sparse multilinear polynomials with optimal prover costs. We then generalize Spark to directly support a lookup argument for both structured and unstructured tables, with the efficiency characteristics noted above.
Ripon Patgiri, Laiphrakpam Dolendro Singh
ePrint Report
Password-based authentication is an extensively used method to authenticate users, and it relies on cryptography for the authentication process. In contrast, the physically unclonable function (PUF)-based authentication mechanism is also gaining popularity rapidly due to its usability in IoT devices. It is a lightweight authentication mechanism that does not use cryptographic protocols. However, PUF-based mechanisms authenticate devices, not users. To overcome this drawback of PUFs, we introduce a software-defined unclonable function (SUF, for short). Contrary to the PUF, the SUF is used to authenticate users, not devices. We use SUF to implement a lightweight password-based authentication mechanism termed Authentica. Authentica bridges the gap between password-based and PUF-based authentication mechanisms. Authentica does not use cryptography for authentication. However, we establish the challenge-response using cryptography during the registration phase, which is a one-time cost. Authentica addresses a) impersonation attacks, b) collision attacks, c) dictionary and rainbow table attacks, d) replay attacks, e) DDoS attacks, f) domino-effect issues, and g) challenge-response database leakage issues.
Marc Fischlin, Felix Günther
ePrint Report
Common verification steps in cryptographic protocols, such as signature or message authentication code checks or the validation of elliptic curve points, are crucial for the overall security of the protocol. Yet implementation errors omitting these steps easily remain unnoticed, as the protocol will often function perfectly anyway. One of the most prominent examples is Apple's goto fail bug, where the erroneous certificate verification skipped over several of the required steps, marking invalid certificates as correctly verified. This vulnerability went undetected for at least 17 months.

We propose here a mechanism which supports the detection of such errors on a cryptographic level. Instead of merely returning the binary acceptance decision, we let the verification return more fine-grained information in the form of what we call a confirmation code. The reader may think of the confirmation code as disposable information produced as part of the relevant verification steps. In case of an implementation error like the goto fail bug, the confirmation code would then miss essential elements.

The question arises now how to verify the confirmation code itself. We show how to use confirmation codes to tie security to basic functionality at the overall protocol level, making erroneous implementations be detected through the protocol not functioning properly. More concretely, we discuss the usage of confirmation codes in secure connections, established via a key exchange protocol and secured through the derived keys. If some verification steps in a key exchange protocol execution are faulty, then so will be the confirmation codes, and because we can let the confirmation codes enter key derivation, the connection of the two parties will eventually fail. In consequence, an implementation error like goto fail would now be detectable through a simple connection test.
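A minimal sketch (ours, not the paper's construction) of how a confirmation code can tie verification to functionality: each executed check feeds evidence into a running hash, and the resulting code enters key derivation, so a skipped check surfaces as a key mismatch rather than a silent accept.

import hashlib, hmac

def verify_with_confirmation(checks, transcript: bytes):
    """Run all verification steps; return a confirmation code that
    depends on every step that was actually executed."""
    code = hashlib.sha256(b"confirmation")
    for name, check in checks:       # e.g. signature check, point validation
        if not check(transcript):
            return None
        code.update(name)            # evidence this step ran
    return code.digest()

def derive_key(shared_secret: bytes, confirmation: bytes) -> bytes:
    return hmac.new(shared_secret, confirmation, hashlib.sha256).digest()

ok = verify_with_confirmation([(b"sig", lambda t: True)], b"hello")
buggy = verify_with_confirmation([], b"hello")   # a "goto fail": check skipped
# The buggy party derives a different key, so a simple connection test fails:
assert derive_key(b"k", ok) != derive_key(b"k", buggy)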

10 August 2023

Rei Ueno, Naofumi Homma, Akiko Inoue, Kazuhiko Minematsu
ePrint Report
This paper presents a provably secure, higher-order, and leakage-resilient (LR) rekeying scheme named LR Rekeying with Random oracle Repetition (LR4), along with a quantitative security evaluation methodology. Many existing LR cryptographic schemes are based on the concept of leveled implementation, which still essentially requires a leak-free sanctuary (i.e., differential power analysis (DPA)-resistant component(s)) for some parts. In addition, although several LR pseudorandom functions (PRFs) based only on bounded DPA-resistant components have been developed, their validity and effectiveness for rekeying usage still need to be determined. In contrast, LR4 is formally proven under a leakage model that captures the practical goal of side-channel attack (SCA) protection (e.g., masking with a practical order) and assumes no unbounded DPA-resistant sanctuary. This proof shows that LR4 resists an exponential number of invocations (up to the birthday bound of the key size) without using any unbounded leak-free component, which is a first of its kind. Moreover, we present a quantitative SCA success rate evaluation methodology for LR4 that combines the bounded leakage models for LR cryptography with a state-of-the-art information-theoretic SCA evaluation method. We validate its soundness and effectiveness as a DPA countermeasure through a numerical evaluation; that is, the number of secure calls of a symmetric primitive increases exponentially with the security parameter under practical conditions.
Mustafa Khairallah
ePrint Report
In this paper, we present a new distinguisher for the Tweak-aNd-Tweak (TNT) tweakable block cipher with $O(2^{n/2})$ complexity. The distinguisher is an adaptive chosen ciphertext distinguisher, unlike previous attacks, which were only non-adaptive chosen plaintext attacks. Notably, the attack contradicts the security claims made by the designers. Given that TNT can be seen as the three-round CLRW1 tweakable block cipher, our attack matches the latter's more conservative bound. We provide the distinguisher description, a probabilistic analysis of its behaviour, experimental verification, and an analysis of why the proof fails to capture the security of TNT. In summary, the distinguisher is based on collision counting and exploits non-uniformity in the statistical behaviour of random permutations. It reduces the goal of finding a collision to solving a difference equation defined over a random permutation. Due to this relation, the number of collisions observed by the distinguisher is twice what is expected from an ideal tweakable block cipher.
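The pairing phenomenon behind the factor of two can be checked in a few lines (our toy experiment, unrelated to TNT's exact query structure): over a random permutation $P$, $x$ solves $P(x \oplus \delta) \oplus P(x) = \Delta$ exactly when $x \oplus \delta$ does, so solution counts are always even; the count averages about one, yet whenever a solution exists there are at least two.

import random

N = 1 << 12
delta, Delta = 0x137, 0xBEE
counts = []
for _ in range(200):
    P = list(range(N))
    random.shuffle(P)                       # a fresh random permutation
    counts.append(sum(1 for x in range(N)
                      if P[x ^ delta] ^ P[x] == Delta))
assert all(c % 2 == 0 for c in counts)      # solutions pair up: x and x^delta
print(sum(counts) / len(counts))            # about 1 on average, never exactly 1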
Joachim Neu, Srivatsan Sridhar, Lei Yang, David Tse
ePrint Report
Classic BFT consensus protocols guarantee safety and liveness for all clients if fewer than one-third of the replicas are faulty. However, in applications such as high-value payments, some clients may want to prioritize safety over liveness. Flexible consensus allows each client to opt for a higher safety resilience, albeit at the expense of reduced liveness resilience. We present the first construction that allows an optimal safety-liveness tradeoff for every client simultaneously. The construction is modular and is realized as an add-on applied on top of an existing consensus protocol. The add-on consists of an additional round of voting and permanent locking done by the replicas, to sidestep a sub-optimal quorum-intersection-based constraint present in previous solutions. We adapt our construction to the existing Ethereum protocol to derive optimal flexible confirmation rules that clients can adopt unilaterally without requiring system-wide changes. This is possible because existing Ethereum protocol features can double as the extra voting and locking. We demonstrate an implementation using Ethereum's consensus API.
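For intuition (our illustration of the quorum-intersection constraint that the add-on sidesteps): with $n$ replicas, a client that requires quorums of size $q$ enjoys safety whenever fewer than $2q - n$ replicas are faulty, since any two of its quorums intersect in at least $2q - n$ replicas, and enjoys liveness whenever at most $n - q$ replicas are faulty. For $n = 100$, $q = 67$ gives the classic $33/33$ split, while $q = 80$ trades liveness resilience ($20$) for safety resilience (faults strictly below $60$). The extra voting-and-locking round lets clients do better than this intersection bound.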