IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
30 May 2023
Dominique Unruh
Compressed oracles (Zhandry, Crypto 2019) are a powerful technique to reason about quantum random oracles, enabling a sort of lazy sampling in the presence of superposition queries. A long-standing open question is whether a similar technique can also be used to reason about random (efficiently invertible) permutations.
In this work, we make a step towards answering this question. We first define the compressed permutation oracle and illustrate its use. While the soundness of this technique (i.e., the indistinguishability from a random permutation) remains a conjecture, we show a curious 2-for-1 theorem: If we use the compressed permutation oracle methodology to show that some construction (e.g., Luby-Rackoff) implements a random permutation (or strong qPRP), then we get the fact that this methodology is actually sound for free.
Ulrich Haböck
This write-up summarizes the sampling analysis of the expander code from Brakedown [GLSTW21]. We elaborate their convexity argument for general linear expansion bounds, and we combine their approach with the one from Spielman [Sp96] to achieve asymptotically linear encoding time over a constant-size field. Choosing tighter expansion bounds, we obtain more efficient parameters than [GLSTW21] for their 128-bit field, reducing the encoding cost by 25% and more, and we provide a similar parameter set for the Mersenne prime field with modulus $p = 2^{31} - 1$, optimized by the combined Spielman-Brakedown approach.
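For context on why the Mersenne prime field is an attractive target: reduction modulo $p = 2^{31} - 1$ needs only shifts, masks, and additions, since $2^{31} \equiv 1 \pmod{p}$. The following minimal Python sketch of this standard trick is illustrative only and is not taken from the write-up.

def reduce_mersenne31(x: int) -> int:
    # Reduce x modulo p = 2^31 - 1 by repeatedly folding the high bits onto
    # the low bits, using the fact that 2^31 is congruent to 1 modulo p.
    p = (1 << 31) - 1
    while x > p:
        x = (x >> 31) + (x & p)
    return 0 if x == p else x

# Example: a field multiplication reduced without any division.
a, b = 123_456_789, 987_654_321
assert reduce_mersenne31(a * b) == (a * b) % ((1 << 31) - 1)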
Feng Hao, Samiran Bag, Liqun Chen, Paul C. van Oorschot
We present Owl, an augmented password-authenticated key exchange (PAKE) protocol that is both efficient and supported by security proofs. Owl is motivated by recognized limitations in SRP-6a and OPAQUE. SRP-6a is the only augmented PAKE that has enjoyed wide use in practice to date, but it lacks formal security proofs and does not support elliptic curve settings. OPAQUE was proposed in 2018 as a provably secure and efficient alternative to SRP-6a, and was chosen by the IETF in 2020 for standardization, but open issues leave it unclear whether OPAQUE will replace SRP-6a in practice. Owl is obtained by efficiently adapting J-PAKE to an asymmetric setting, providing additional security against server compromise yet with lower computation than J-PAKE. Our scheme is provably secure, efficient and agile in supporting implementations in diverse multiplicative groups and elliptic curve settings. Owl is the first solution that provides systematic advantages over SRP-6a in terms of security, computation, message sizes, and agility. Owl’s agility across settings also contrasts with ongoing issues related to how OPAQUE will instantiate a hash-to-curve operation in the elliptic curve setting (and what impact this will have on efficiency, security and forward compatibility with new elliptic curves in the future).
Andrey Kim, Yongwoo Lee, Maxim Deryabin, Jieun Eom, Rakyong Choi
Fully Homomorphic Encryption (FHE) enables computations to be performed on encrypted data, so one can outsource computations of confidential information to an untrusted party. Ironically, FHE requires the client to generate massive evaluation keys and transfer them to the server side, where all computations are supposed to be performed. In this paper, we propose LFHE, a Light-key FHE variant of the FHEW scheme introduced by Ducas and Micciancio at Eurocrypt 2015 and of its improvement, the TFHE scheme proposed by Chillotti et al. at Asiacrypt 2016. In the proposed scheme, the client generates small packed evaluation keys, which can be transferred to the server side with much smaller communication overhead compared to the original non-packed variant. The server employs a key reconstruction technique to obtain the evaluation keys needed for computations.
This approach allows us to achieve an FHE scheme with a packed evaluation key transfer size of less than a megabyte, an order-of-magnitude improvement over the best-known methods.
Qiqi Lai, Feng-Hao Liu, Anna Lysyanskaya, Zhedong Wang
Anonymous Credentials are an important tool for protecting users' privacy when proving possession of certain credentials.
Although various efficient constructions have been proposed based on pre-quantum assumptions, there have been limited accomplishments in the post-quantum and especially practical settings. This research aims to derive new methods that enhance the current state of the art.
To achieve this, we make the following contributions. By distilling prior design insights, we propose a new primitive to instantiate \emph{signature with protocols}, called commit-transferrable signature (\CTS). When combined with a multi-theorem straight-line extractable non-interactive zero-knowledge proof of knowledge (\NIZKPoK), $\CTS$ gives a modular approach to construct anonymous credentials. We then show efficient instantiations of $\CTS$ and the required \NIZKPoK from lattices, which are believed to be post-quantum hard. Finally, we propose concrete parameters for the $\CTS$, \NIZKPoK, and the overall Anonymous Credentials, based on Module-\SIS~and Ring-\LWE. This would serve as important guidance for future deployment in practice.
Jack Doerner, Yashvanth Kondi, Eysa Lee, abhi shelat
We present a three-round protocol for threshold ECDSA signing with malicious security against a dishonest majority, which information-theoretically UC-realizes a standard threshold signing functionality, assuming ideal commitment and two-party multiplication primitives. Our work improves upon and fully subsumes the DKLs $t$-of-$n$ and 2-of-$n$ protocols. This document focuses on providing a succinct but complete description of the protocol and its security proof, and contains little expository text.
Pascal Bemmann, Sebastian Berndt, Denis Diemert, Thomas Eisenbarth, Tibor Jager
In 2013, the Snowden revelations showed subversion of cryptographic implementations to be a relevant threat. Since then, the academic community has been pushing the development of models and constructions to defend against adversaries able to arbitrarily subvert cryptographic implementations.
To capture these strong adversarial capabilities, Russell, Tang, Yung, and Zhou (CCS'17) proposed CPA-secure encryption in a model that utilizes a trusted party, called a watchdog, which tests an implementation before use to detect potential subversion.
This model was used to construct subversion-resilient implementations of primitives such as random oracles by Russell, Tang, Yung, and Zhou (CRYPTO'18) or signature schemes by Chow et al. (PKC'19), but primitives aiming for CCA-like security remained elusive in any watchdog model.
In this work, we present the first subversion-resilient authenticated encryption scheme with associated data (AEAD) without making use of random oracles.
At the core of our construction are subversion-resilient PRFs, which we obtain from weak PRFs in combination with the classical Naor-Reingold transformation.
We revisit classical constructions based on PRFs to obtain subversion-resilient MACs, where both tagging and verification are subject to subversion, as well as subversion-resilient symmetric encryption in the form of stream ciphers.
Finally, we observe that leveraging the classical Encrypt-then-MAC approach yields subversion-resilient AEAD.
Our results are based on the trusted amalgamation model by Russell, Tang, Yung, and Zhou (ASIACRYPT'16) and the assumption of honest key generation.
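For readers unfamiliar with the generic composition mentioned above, the following is a minimal Python sketch of textbook Encrypt-then-MAC built over a PRF-based stream cipher, with HMAC-SHA256 as a stand-in PRF. It illustrates the classical composition pattern only; it is not the subversion-resilient construction of the paper, which obtains its PRFs from weak PRFs via the Naor-Reingold transformation and works in the trusted amalgamation model.

import hmac, hashlib

def prf(key: bytes, msg: bytes) -> bytes:
    # Stand-in PRF (HMAC-SHA256), used only to keep this sketch self-contained.
    return hmac.new(key, msg, hashlib.sha256).digest()

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # A stream cipher from the PRF: keystream blocks are PRF(key, nonce || counter).
    out, ctr = b"", 0
    while len(out) < length:
        out += prf(key, nonce + ctr.to_bytes(8, "big"))
        ctr += 1
    return out[:length]

def encrypt_then_mac(k_enc: bytes, k_mac: bytes, nonce: bytes, msg: bytes, ad: bytes) -> bytes:
    # Encrypt first, then MAC the ciphertext together with the associated data.
    ct = bytes(m ^ k for m, k in zip(msg, keystream(k_enc, nonce, len(msg))))
    tag = prf(k_mac, ad + nonce + ct)
    return ct + tag

def decrypt_then_verify(k_enc: bytes, k_mac: bytes, nonce: bytes, blob: bytes, ad: bytes) -> bytes:
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, prf(k_mac, ad + nonce + ct)):
        raise ValueError("invalid tag")  # reject before touching the plaintext
    return bytes(c ^ k for c, k in zip(ct, keystream(k_enc, nonce, len(ct))))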
Miranda Christ, Sam Gunn, Or Zamir
Recent advances in the capabilities of large language models such as GPT-4 have spurred increasing concern about our ability to detect AI-generated text. Prior works have suggested methods of embedding watermarks in model outputs, by $\textit{noticeably}$ altering the output distribution. We ask: Is it possible to introduce a watermark without incurring $\textit{any detectable}$ change to the output distribution?
To this end we introduce a cryptographically-inspired notion of undetectable watermarks for language models. That is, watermarks can be detected only with the knowledge of a secret key; without the secret key, it is computationally intractable to distinguish watermarked outputs from those of the original model. In particular, it is impossible for a user to observe any degradation in the quality of the text. Crucially, watermarks should remain undetectable even when the user is allowed to adaptively query the model with arbitrarily chosen prompts. We construct undetectable watermarks based on the existence of one-way functions, a standard assumption in cryptography.
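To convey the general flavor of key-driven, distribution-preserving sampling, consider replacing the sampler's randomness with a value derived pseudorandomly from a secret key, as in the toy Python sketch below. This is an illustration only, not the construction from the paper, which additionally deals with repeated contexts, detection statistics, and robustness.

import hmac, hashlib

def keyed_uniform(key: bytes, context: bytes) -> float:
    # Pseudorandom value in [0, 1) derived from the secret key and the
    # generation context (e.g., prompt plus previously emitted tokens).
    d = hmac.new(key, context, hashlib.sha256).digest()
    return int.from_bytes(d[:8], "big") / 2**64

def sample_token(probs, key: bytes, context: bytes):
    # Inverse-transform sampling driven by the keyed value. If that value is
    # computationally indistinguishable from uniform, the induced output
    # distribution is indistinguishable from honest sampling, while a detector
    # holding the key can recompute the values and test for their presence.
    r = keyed_uniform(key, context)
    acc = 0.0
    for token, p in probs:   # probs: list of (token, probability) pairs from the model
        acc += p
        if r < acc:
            return token
    return probs[-1][0]      # guard against floating-point round-off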
Sivanarayana Gaddam, Ranjit Kumaresan, Srinivasan Raghuraman, Rohit Sinha
Recently, there have been several proposals for secure computation with fair output delivery that require the use of a bulletin board abstraction (in addition to a trusted execution environment (TEE)). These proposals require all protocol participants to have read/write access to the bulletin board. These works envision the use of (public or permissioned) blockchains to implement the bulletin board abstractions. With the advent of consortium blockchains which place restrictions on who can read/write contents on the blockchain, it is not clear how to extend prior proposals to a setting where (1) not all parties have read/write access on a single consortium blockchain, and (2) not all parties prefer to post on a public blockchain.
In this paper, we address the above by showing the first protocols for fair secure computation in the multi-blockchain setting. More concretely, in an $n$-party setting where at most $t < n$ parties are corrupt, our protocol for fair secure computation works as long as (1) $t$ parties have access to a TEE (e.g., Intel SGX), and (2) each of the above $t$ parties is on some blockchain with each of the other parties. Furthermore, only these $t$ parties need write access on the blockchains.
In an optimistic setting where parties behave honestly, our protocol runs completely off-chain.
Sebastian Angel, Aditya Basu, Weidong Cui, Trent Jaeger, Stella Lau, Srinath Setty, Sudheesh Singanamalla
This paper introduces Nimble, a cloud service that helps applications running in trusted execution environments (TEEs) to detect rollback attacks (i.e., detect whether a data item retrieved from persistent storage is the latest version). To achieve this, Nimble realizes an append-only ledger service by employing a simple state machine running in a TEE in conjunction with a crash fault-tolerant storage service. Nimble then replicates this trusted state machine to ensure the system is available even if a minority of state machines crash. A salient aspect of Nimble is a new reconfiguration protocol that allows a cloud provider to replace the set of nodes running the trusted state machine whenever it wishes, without affecting safety. We have formally verified Nimble’s core protocol in Dafny, and have implemented Nimble such that its trusted state machine runs in multiple TEE platforms (Intel SGX and AMD SEV-SNP). Our results show that a deployment of Nimble on machines running in different availability zones can achieve from tens of thousands of requests/sec with an end-to-end latency of under 3.2 ms (based on an in-memory key-value store) to several thousand requests/sec with a latency of 30 ms (based on Azure Table).
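As a toy illustration of rollback detection via an append-only ledger (this sketch is hypothetical and is not Nimble's API or protocol), the application records a fresh version on the ledger with every write and, on every read, compares the version returned by untrusted storage against the latest version on the ledger:

class Ledger:
    """Toy append-only ledger standing in for the trusted ledger service."""
    def __init__(self):
        self.entries = []
    def append(self, key, version):
        self.entries.append((key, version))
    def latest_version(self, key):
        # Most recent version recorded for this key, if any.
        for k, v in reversed(self.entries):
            if k == key:
                return v
        return None

def write(store, ledger, key, value, version):
    store[key] = (value, version)   # untrusted persistent storage
    ledger.append(key, version)     # record freshness on the ledger

def read(store, ledger, key):
    value, version = store[key]
    if version != ledger.latest_version(key):
        raise RuntimeError("rollback detected: stale version served")
    return value

# Usage: a stale copy of key "k" (version 1) is rejected once version 2 exists.
store, ledger = {}, Ledger()
write(store, ledger, "k", "v1", 1)
write(store, ledger, "k", "v2", 2)
store["k"] = ("v1", 1)              # simulate a rollback by the storage
try:
    read(store, ledger, "k")
except RuntimeError as e:
    print(e)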
Anton Wahrstätter, Liyi Zhou, Kaihua Qin, Davor Svetinovic, Arthur Gervais
With the emergence of Miner Extractable Value (MEV), block construction markets on blockchains have evolved into a competitive arena. Following Ethereum's transition from Proof of Work (PoW) to Proof of Stake (PoS), the Proposer Builder Separation (PBS) mechanism has emerged as the dominant force in the Ethereum block construction market.
This paper presents an in-depth longitudinal study of the Ethereum block construction market, spanning from the introduction of PoS and PBS in September 2022 to May 2023. We analyze the market shares of builders and relays, their temporal changes, and the financial dynamics within the PBS system, including payments among builders and block proposers, commonly referred to as bribes. We introduce an MEV-time law quantifying the expected MEV revenue with respect to the time elapsed since the last proposed block. We provide empirical evidence that moments of crisis (e.g., the FTX collapse, the USDC stablecoin de-peg) coincide with significant spikes in MEV payments compared to the baseline.
Despite the intention of the PBS architecture to enhance decentralization by separating actor roles, it remains unclear whether its design is optimal. Implicit trust assumptions and conflicts of interest may benefit particular parties and foster the need for vertical integration. MEV-Boost was explicitly designed to foster decentralization, but it has the side effect of enabling risk-free sandwich extraction from unsuspecting users, potentially raising concerns for regulators.
Jeongeun Park, Sergi Rovira
In this paper, we introduce a new approach to efficiently compute TFHE bootstrapping keys for (predefined) multiple users. Hence, a fixed number of users can enjoy the same level of efficiency as in the single-key setting, while keeping their individual inputs private. Our construction relies on a novel algorithm called homomorphic indicator, which can be of independent interest. We provide a detailed analysis of the noise growth and a set of secure parameters suitable to be used in practice. Moreover, we compare the complexity of our technique with other state-of-the-art constructions and show, based on our noise analysis, which method performs better for which parameter sets. We also provide a prototype implementation of our technique. To the best of our knowledge, this is the first implementation of TFHE in the multiparty setting.
Laura Hetz, Thomas Schneider, Christian Weinert
Mobile contact discovery is a convenience feature of messengers such as WhatsApp or Telegram that helps users to identify which of their existing contacts are registered with the service. Unfortunately, the contact discovery implementation of many popular messengers massively violates the users' privacy as demonstrated by Hagen et al. (NDSS '21, ACM TOPS '23). Unbalanced private set intersection (PSI) protocols are a promising cryptographic solution to realize mobile private contact discovery, however, state-of-the-art protocols do not scale to real-world database sizes with billions of registered users in terms of communication and/or computation overhead.
In our work, we make significant steps towards truly practical large-scale mobile private contact discovery. For this, we combine and substantially optimize the unbalanced PSI protocol of Kales et al. (USENIX Security '19) and the private information retrieval (PIR) protocol of Kogan and Corrigan-Gibbs (USENIX Security '21). Our resulting protocol has a total communication overhead that is sublinear in the size of the server's user database and also has sublinear online runtimes. We optimize our protocol by introducing database partitioning and efficient scheduling of user queries. To handle realistic change rates of databases and contact lists, we propose and evaluate different possibilities for efficient updates. We implement our protocol on smartphones and measure online runtimes of less than 2 seconds to query up to 1024 contacts from a database with more than two billion entries. Furthermore, we reduce setup communication by up to a factor of 32x compared to state-of-the-art mobile private contact discovery protocols.
Zhengjun Cao, Lihua Liu
We remark that the key agreement scheme [IEEE Trans. Veh. Technol. 2021, 70(2): 1736--1751] fails to provide anonymity and untraceability, because the user $U_k$ needs to invoke the public key $PK_{U_j}$ to verify the signature generated by the user $U_j$. Since the public key is necessarily linked to the true identity $ID_{U_j}$ for authentication, any adversary can reveal the true identity by checking the signature.
25 May 2023
Carlos Aguilar-Melchor, Andreas Hülsing, David Joseph, Christian Majenz, Eyal Ronen, Dongze Yue
The MPC in the Head (MPCitH) paradigm has recently led to significant improvements for signatures in the code-based setting. In this paper we consider some modifications to a recent twist of MPCitH, called Hypercube-MPCitH, that in the code-based setting provides the currently best known signature sizes. By compressing the five-round Hypercube-MPCitH code-based identification scheme into three rounds we obtain two main benefits. On the one hand, it allows us to further develop recent techniques to provide a tight security proof in the quantum-accessible random oracle model (QROM), avoiding the catastrophic reduction losses incurred by generic QROM results for Fiat-Shamir. On the other hand, we can reduce the already low-cost online part of the signature to just a hash and some serialization. In addition, we propose the introduction of proof-of-work techniques to allow for a reduction in signature size. On the technical side, we develop generalizations of several QROM proof techniques and introduce a variant of the recently proposed extractable QROM.
Manuel Barbosa, Andreas Hülsing
In this short note we give another direct proof for the variant of the FO transform used by Kyber in the QROM. At PKC'23, Maram & Xagawa gave the first direct proof which does not require the indirection via FO with explicit rejection, thereby avoiding either a non-tight bound or the necessity to analyze the failure probability in a new setting. However, on the downside, their proof produces a bound that incurs an additive collision bound term. We explore a different approach for a direct proof, which results in a simpler argument closer to prior proofs, but a slightly worse bound.
Nir Bitansky, Chethan Kamath, Omer Paneth, Ron Rothblum, Prashant Nalina Vasudevan
Batch proofs are proof systems that convince a verifier that $x_1,\dots, x_t \in L$, for some $NP$ language $L$, with communication that is much shorter than sending the $t$ witnesses. In the case of statistical soundness (where the cheating prover is unbounded but the honest prover is efficient), interactive batch proofs are known for $UP$, the class of unique-witness $NP$ languages. In the case of computational soundness (a.k.a. arguments, where both honest and dishonest provers are efficient), non-interactive solutions are now known for all of $NP$ under standard cryptographic assumptions. We study the necessary conditions for the existence of batch proofs in these two settings. Our main results are as follows.
1. Statistical Soundness: the existence of a statistically-sound batch proof for $L$ implies that $L$ has a statistically witness indistinguishable ($SWI$) proof, with inverse polynomial $SWI$ error, and a non-uniform honest prover. The implication is unconditional for public-coin protocols and relies on one-way functions in the private-coin case.
This poses a barrier for achieving batch proofs beyond $UP$ (where witness indistinguishability holds trivially, since witnesses are unique). In particular, assuming that $NP$ does not have $SWI$ proofs, batch proofs for all of $NP$ do not exist. This motivates further study of the complexity class $SWI$, which, in contrast to the related class $SZK$, has been largely left unexplored.
2. Computational Soundness: the existence of batch arguments ($BARG$s) for $NP$, together with one-way functions, implies the existence of statistical zero-knowledge ($SZK$) arguments for $NP$ with roughly the same number of rounds, an inverse polynomial zero-knowledge error, and non-uniform honest prover.
Thus, constant-round interactive $BARG$s from one-way functions would yield constant-round $SZK$ arguments from one-way functions. This would be surprising as $SZK$ arguments are currently only known assuming constant-round statistically-hiding commitments (which in turn are unlikely to follow from one-way functions).
3. Non-interactive: the existence of non-interactive $BARG$s for $NP$, together with one-way functions, implies non-interactive statistical zero-knowledge arguments ($NISZKA$) for $NP$, with negligible soundness error, inverse polynomial zero-knowledge error, and a non-uniform honest prover. Assuming also lossy public-key encryption, the statistical zero-knowledge error can be made negligible. We further show that $BARG$s satisfying a notion of honest somewhere extractability imply lossy public-key encryption.
All of our results stem from a common framework showing how to transform a batch protocol for a language $L$ into an $SWI$ protocol for $L$.
Kaizhan Lin, Weize Wang, Zheng Xu, Chang-An Zhao
Isogeny-based cryptography is famous for its short key size. As one of the most compact digital signatures, SQISign (Short Quaternion and Isogeny Signature) is attractive among post-quantum schemes, but it is inefficient compared to other post-quantum competitors because of the complicated procedures in ideal-to-isogeny translation, which is the efficiency bottleneck of the signing phase.
In this paper, we recall the current implementation of SQISign and mainly discuss how to improve the execution of ideal-to-isogeny translation in SQISign. To be precise, we modify the SigningKLPT algorithm to accelerate the generation of the ideal $I_\sigma$. In addition, we explore how to save one of the two elliptic curve discrete logarithms and compute the remaining one correctly and efficiently with the help of the reduced Tate pairing. We speed up other procedures in ideal-to-isogeny translation with various techniques as well. It should be noted that our improvements also benefit the performance of key generation and verification in SQISign. In particular, in the instantiation with p3923, the improvements lead to speedups of 8.82%, 8.50% and 18.94% for key generation, signing and verification, respectively.
Denis Firsov, Tiago Oliveira, Dominique Unruh
We implement the Schnorr proof system in assembler via the Jasmin toolchain, and prove the security (proof-of-knowledge property) and the absence of leakage through timing side-channels of that implementation in EasyCrypt.
In order to do so, we show how leakage-freeness of Jasmin programs can be proven for probabilistic programs (that are not constant-time). We implement and verify algorithms for fast constant-time modular multiplication and exponentiation (using Barrett reduction and Montgomery ladder). We implement and verify the rejection sampling algorithm. And finally, we put it all together and show the security of the overall implementation (end-to-end verification) of the Schnorr protocol, by connecting our implementation to prior security analyses in EasyCrypt (Firsov, Unruh, CSF 2023).
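For context, the Schnorr proof of knowledge being implemented is the standard three-move protocol over a group of prime order $q$ with generator $g$: to prove knowledge of a witness $w$ with $h = g^w$, the prover samples $r$ uniformly, sends the commitment $a = g^r$, receives a random challenge $c$, and replies with $z = r + c \cdot w \bmod q$; the verifier accepts iff $g^z = a \cdot h^c$. This recap is standard background and makes no claim about the exact formalization used in the Jasmin/EasyCrypt development.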
Yuval Gelles, Ilan Komargodski
Designing efficient distributed protocols for various agreement tasks such as Byzantine Agreement, Broadcast, and Committee Election is a fundamental problem. We are interested in $scalable$ protocols for these tasks, where each (honest) party communicates a number of bits which is sublinear in $n$, the number of parties. The first major step towards this goal is due to King et al. (SODA 2006), who showed a protocol where each party sends only $\tilde O(1)$ bits throughout $\tilde O(1)$ rounds, but guarantees only that a $1-o(1)$ fraction of honest parties end up agreeing on a consistent output, assuming constant $<1/3$ fraction of static corruptions. A few years later, King et al. (ICDCN 2011) managed to get a full agreement protocol in the same model, but where each party sends $\tilde O(\sqrt{n})$ bits throughout $\tilde O(1)$ rounds. Getting a full agreement protocol with $o(\sqrt{n})$ communication per party has been a major challenge ever since.
In light of this barrier, we propose a new framework for designing efficient agreement protocols. Specifically, we design $\tilde O(1)$-round protocols for all of the above tasks (assuming constant $<1/3$ fraction of static corruptions) with optimistic and pessimistic guarantees:
$\bullet$ $Optimistic$ $complexity$: In an honest execution, (honest) parties send only $\tilde O(1)$ bits.
$\bullet$ $Pessimistic$ $complexity$: In any other case, (honest) parties send $\tilde O(\sqrt{n})$ bits.
Thus, all an adversary can gain from deviating from the honest execution is that honest parties will need to work harder (i.e., transmit more bits) to reach agreement and terminate. Besides the above agreement tasks, we also use our new framework to get a scalable secure multiparty computation (MPC) protocol with optimistic and pessimistic complexities.
Technically, we identify a relaxation of Byzantine Agreement (of independent interest) that allows us to fall back to a pessimistic execution in a coordinated way by all parties. We implement this relaxation with $\tilde O(1)$ communication bits per party and within $\tilde O(1)$ rounds.