International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

01 March 2024

Sebastian Faust, Loïc Masure, Elena Micheli, Maximilian Orlt, François-Xavier Standaert
ePrint Report
Symmetric ciphers operating in (small or mid-size) prime fields have been shown to be promising candidates to maintain security against low-noise (or even noise-free) side-channel leakage. In order to design prime ciphers that best trade off physical security and implementation efficiency, it is essential to understand how side-channel security evolves with the field size (i.e., scaling trends). Unfortunately, it has also been shown that such a scaling trend depends on the leakage functions and cannot be explained by the standard metrics used to analyze Boolean masking with noise. In this work, we therefore initiate a formal study of prime field masking for two canonical leakage functions: bit leakages and Hamming weight leakages. By leveraging theoretical results from the leakage-resilient secret sharing literature, we explain formally why (1) bit leakages correspond to a worst case and do not encourage operating in larger fields, and (2) an opposite conclusion holds for Hamming weight leakages, where increasing the prime field modulus p can contribute to a security amplification that is exponential in the number of shares, with log(p) playing the role of a security parameter, analogous to the noise variance in Boolean masking. We combine these theoretical results with experimental ones and show that the interest of masking in larger prime fields can degrade gracefully when leakage functions slightly deviate from the Hamming weight abstraction, motivating further research towards characterizing (ideally wide) classes of leakage functions offering such guarantees.
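As a rough, self-contained illustration of how such leakage functions can be compared (not taken from the paper), the following toy Python computation measures the mutual information between a uniform secret and the leakage of a 2-share additive masking over GF(p), for least-significant-bit and Hamming-weight leakage; the choice of primes and the 2-share setting are illustrative assumptions only.

```python
from math import log2
from collections import defaultdict

def hw(v):   # Hamming weight of the standard binary representation
    return bin(v).count("1")

def lsb(v):  # single-bit (LSB) leakage
    return v & 1

def mutual_information(p, leak):
    """I(X; L) in bits for 2-share additive masking over GF(p):
    X and A uniform, shares (A, X - A mod p), leakage L = (leak(A), leak(X - A))."""
    joint, marg_l = defaultdict(float), defaultdict(float)
    pr = 1.0 / (p * p)                       # Pr[X = x, A = a]
    for x in range(p):
        for a in range(p):
            l = (leak(a), leak((x - a) % p))
            joint[(x, l)] += pr
            marg_l[l] += pr
    # I(X;L) = sum_{x,l} Pr[x,l] * log( Pr[x,l] / (Pr[x] * Pr[l]) ), with Pr[x] = 1/p
    return sum(q * log2(q / ((1.0 / p) * marg_l[l])) for (x, l), q in joint.items())

# Compare how informative bit leakage and Hamming-weight leakage are as p grows.
for p in (3, 7, 31, 127):
    print(p, round(mutual_information(p, lsb), 4), round(mutual_information(p, hw), 4))
```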
Gilad Asharov, Anirudh Chandramouli
ePrint Report
We study broadcast protocols in the information-theoretic model under optimal conditions, where the number of corruptions $t$ is at most one-third of the parties, $n$. While broadcast cannot be achieved in fewer than $\Omega(n)$ rounds in the worst case, protocols with an expected constant number of rounds have been demonstrated since the seminal work of Feldman and Micali [STOC'88]. Communication complexity for such protocols has gradually improved over the years, reaching $O(nL)$ plus expected $O(n^4\log n)$ bits for broadcasting a message of size $L$ bits.

This paper presents a perfectly secure broadcast protocol with expected constant rounds and communication complexity of $O(nL)$ plus expected $O(n^3 \log^2n)$ bits. In addition, we consider the problem of parallel broadcast, where each of $n$ senders wishes to broadcast a message of size $L$. We show a parallel broadcast protocol with expected constant rounds and communication complexity of $O(n^2L)$ plus expected $O(n^3 \log^2n)$ bits. Moreover, we show a lower bound for parallel broadcast, establishing that our protocol is optimal, in expectation, up to logarithmic factors.

Our main contribution is a framework for obtaining perfectly secure broadcast with an expected constant number of rounds from a statistically secure verifiable secret sharing. Moreover, we provide a new statistically secure verifiable secret sharing where the broadcast cost per participant is reduced from $O(n \log n)$ bits to only $O({\sf poly} \log n)$ bits. All our protocols are adaptively secure.
Reo Eriguchi, Kaoru Kurosawa, Koji Nuida
ePrint Report
Motivated by secure database search, we present secure computation protocols for a function $f$ in the client-servers setting, where a client can obtain $f(x)$ on a private input $x$ by communicating with multiple servers each holding $f$. Specifically, we propose generic compilers from passively secure protocols, which only guarantee security against servers that follow the protocol, to actively secure protocols, which guarantee privacy and correctness even against malicious servers. Our compilers apply to protocols computing any class of functions, and are efficient in that the overheads in communication and computational complexity are only polynomial in the number of servers, independent of the complexity of functions. We then apply our compilers to obtain concrete actively secure protocols for various functions including private information retrieval (PIR), bounded-degree multivariate polynomials and constant-depth circuits. For example, our actively secure PIR protocols achieve exponentially better computational complexity in the number of servers than the currently best-known protocols. Furthermore, our protocols for polynomials and constant-depth circuits reduce the required number of servers compared to the previous actively secure protocols. In particular, our protocol instantiated from the sparse Learning Parity with Noise (LPN) assumption is the first actively secure protocol for multivariate polynomials which achieves the minimum number of servers without assuming fully homomorphic encryption.
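For readers less familiar with the client-servers setting, the sketch below shows the classic information-theoretic 2-server XOR PIR of Chor et al. as a baseline example of a passively secure protocol in this model; it illustrates the setting only and is not one of the protocols or compilers constructed in this work.

```python
import secrets

def xor_pir_query(db_size, index):
    """Client side: generate the two queries. Each server individually sees a
    uniformly random 0/1 vector, so a single honest-but-curious server learns
    nothing about `index`."""
    q1 = [secrets.randbelow(2) for _ in range(db_size)]
    q2 = q1.copy()
    q2[index] ^= 1                 # the two queries differ exactly at `index`
    return q1, q2

def xor_pir_answer(db, query):
    """Server side: XOR of the database bits selected by the query."""
    acc = 0
    for bit, sel in zip(db, query):
        acc ^= bit & sel
    return acc

def xor_pir_reconstruct(a1, a2):
    return a1 ^ a2                 # the selected positions cancel except `index`

db = [1, 0, 1, 1, 0, 0, 1, 0]      # both servers hold the same database
q1, q2 = xor_pir_query(len(db), 5)
assert xor_pir_reconstruct(xor_pir_answer(db, q1), xor_pir_answer(db, q2)) == db[5]
```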
You Lyu, Shengli Liu, Shuai Han
ePrint Report
In this paper, we construct the first password authenticated key exchange (PAKE) scheme from isogenies with Universal Composable (UC) security in the random oracle model (ROM). We also construct the first two PAKE schemes with UC security in the quantum random oracle model (QROM), one based on the learning with errors (LWE) assumption, and the other based on the group-action decisional Diffie-Hellman (GA-DDH) assumption in the isogeny setting. To obtain our UC-secure PAKE scheme in the ROM, we propose a generic construction of PAKE from basic lossy public key encryption (LPKE) and CCA-secure PKE. We also introduce a new variant of LPKE, named extractable LPKE (eLPKE). By replacing the basic LPKE with eLPKE, our generic construction of PAKE achieves UC security in the QROM. The LPKE and eLPKE have instantiations not only from LWE but also from GA-DDH, which admit four specific PAKE schemes with UC security in the ROM or QROM, based on LWE or GA-DDH.
Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, Uri Stemmer
ePrint Report
One of the most basic problems for studying the "price of privacy over time" is the so-called private counter problem, introduced by Dwork et al. (2010) and Chan et al. (2010). In this problem, we aim to track the number of events that occur over time, while hiding the existence of every single event. More specifically, in every time step $t\in[T]$ we learn (in an online fashion) that $\Delta_t\geq 0$ new events have occurred, and must respond with an estimate $n_t\approx\sum_{j=1}^t \Delta_j$. The privacy requirement is that all of the outputs together, across all time steps, satisfy event-level differential privacy.
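To make the setting concrete, here is a minimal sketch of the classical binary-tree ("binary mechanism") counter in the spirit of Dwork et al. (2010) and Chan et al. (2010), which answers every prefix sum with additive error roughly $O(\log^{1.5} T/\varepsilon)$ under event-level differential privacy; it illustrates the known upper-bound mechanism, is unrelated to the new lower bounds, and its parameter choices are illustrative.

```python
import math, random

def laplace(scale, rng):
    # Inverse-CDF sampling of a Laplace(scale) random variable
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def binary_mechanism(stream, epsilon, seed=0):
    """Continual counter: stream[t-1] = Delta_t >= 0; returns private estimates of
    n_t = sum_{j<=t} Delta_j. Each event is folded into at most `levels` partial
    sums, each noised with Lap(levels/epsilon), giving event-level epsilon-DP."""
    rng = random.Random(seed)
    T = len(stream)
    levels = max(1, math.ceil(math.log2(T + 1)))
    scale = levels / epsilon
    alpha = [0.0] * levels        # exact partial sums per level
    alpha_hat = [0.0] * levels    # noisy partial sums per level
    estimates = []
    for t, delta in enumerate(stream, start=1):
        i = (t & -t).bit_length() - 1          # lowest set bit of t
        alpha[i] = sum(alpha[:i]) + delta      # merge lower levels into level i
        for j in range(i):
            alpha[j] = alpha_hat[j] = 0.0
        alpha_hat[i] = alpha[i] + laplace(scale, rng)
        # the prefix sum up to t is assembled from the set bits of t
        estimates.append(sum(alpha_hat[j] for j in range(levels) if (t >> j) & 1))
    return estimates

print(binary_mechanism([1, 0, 2, 0, 0, 1, 3, 0], epsilon=1.0))
```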

The main question here is how our error needs to depend on the total number of time steps $T$ and the total number of events $n$. Dwork et al. (2015) showed an upper bound of $O\left(\log(T)+\log^2(n)\right)$, and Henzinger et al. (2023) showed a lower bound of $\Omega\left(\min\{\log n, \log T\}\right)$. We show a new lower bound of $\Omega\left(\min\{n,\log T\}\right)$, which is tight w.r.t. the dependence on $T$, and is tight in the sparse case where $\log^2 n=O(\log T)$. Our lower bound has the following implications:

(1) We show that our lower bound extends to the online thresholds problem, where the goal is to privately answer many "quantile queries" when these queries are presented one-by-one. This resolves an open question of Bun et al. (2017).

(2) Our lower bound implies, for the first time, a separation between the number of mistakes obtainable by a private online learner and a non-private online learner. This partially resolves a COLT'22 open question published by Sanyal and Ramponi.

(3) Our lower bound also yields the first separation between the standard model of private online learning and a recently proposed relaxed variant of it, called private online prediction.
Pedro Branco, Nico Döttling, Akshayaram Srinivasan
ePrint Report
We give a construction of a two-round batch oblivious transfer (OT) protocol in the CRS model that is UC-secure against malicious adversaries and has (near) optimal communication cost. Specifically, to perform a batch of $k$ oblivious transfers where the sender's inputs are bits, the sender and the receiver need to communicate a total of $3k + o(k) \cdot \mathsf{poly}(\lambda)$ bits. We argue that $3k$ bits are required by any protocol with a black-box and straight-line simulator. The security of our construction is proven assuming the hardness of the Quadratic Residuosity (QR) and Learning Parity with Noise (LPN) problems.
Seungjun Baek, Giyoon Kim, Jongsung Kim
ePrint Report
Ascon, a family of algorithms that supports authenticated encryption and hashing, has been selected as the new standard for lightweight cryptography in the NIST Lightweight Cryptography Project. Ascon’s permutation and authenticated encryption have been actively analyzed, but there are relatively few analyses of its hashing modes. In this paper, we concentrate on preimage attacks on Ascon-Xof. We focus on linearizing the polynomials leaked by the hash value in order to find its inverse. In an attack on 2-round Ascon-Xof, we carefully construct the set of guess bits using a greedy algorithm in the context of guess-and-determine. This allows us to attack Ascon-Xof more efficiently than the method of Dobraunig et al., and we fully implement our attack to demonstrate its effectiveness. We also provide the number of guess bits required to linearize one output bit after 3 and 4 rounds of Ascon’s permutation, respectively. In particular, for the first time, we connect the result for 3-round Ascon to a preimage attack on Ascon-Xof with a 64-bit output. Our attacks primarily focus on analyzing weakened versions of Ascon-Xof, where the weakening involves setting all the IV values to 0 and omitting the round constants. Although our attacks do not compromise the security of the full Ascon-Xof, they provide new insights into its security.
Daniel Escudero, Yifan Song, Wenhao Wang
ePrint Report
Consider the task of secure multiparty computation (MPC) among $n$ parties with perfect security and guaranteed output delivery, supporting $t < n/3$ corruptions.
In this work we provide an MPC protocol in this setting: perfect security, G.O.D. and $t < n/3$ corruptions.
David Heath, Vladimir Kolesnikov, Lucien K. L. Ng
ePrint Report
Garbled Circuit (GC) is a basic technique for practical secure computation. GC handles Boolean circuits; it consumes significant network bandwidth to transmit encoded gate truth tables, each of which scales with the computational security parameter $\kappa$. GC optimizations that reduce bandwidth consumption are valuable.

It is natural to consider a generalization of Boolean two-input one-output gates (represented by $4$-row one-column lookup tables, LUTs) to arbitrary $N$-row $m$-column LUTs. Known techniques for this do not scale, with the naive size-$O(Nm\kappa)$ garbled LUT being the most practical approach in many scenarios.

Our novel garbling scheme -- logrow -- implements GC LUTs while sending only a number of ciphertexts logarithmic in $N$! Specifically, let $n = \lceil \log_2 N \rceil$. Our scheme allows the GC parties to evaluate a LUT with $(n-1)\kappa + nm\kappa + Nm$ bits of communication. logrow is compatible with modern GC advances, e.g. half gates and free XOR.
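As a quick sanity check of the communication formula above, the snippet below evaluates $(n-1)\kappa + nm\kappa + Nm$ against the naive $Nm\kappa$ garbled LUT for a few parameter choices; the values of $N$, $m$ and $\kappa$ are assumptions chosen for illustration, not taken from the paper.

```python
import math

KAPPA = 128  # computational security parameter (illustrative)

def logrow_bits(N, m, kappa=KAPPA):
    # communication of the logrow garbled LUT, per the formula quoted above
    n = math.ceil(math.log2(N))
    return (n - 1) * kappa + n * m * kappa + N * m

def naive_bits(N, m, kappa=KAPPA):
    # naive garbled LUT: one kappa-bit ciphertext per table cell
    return N * m * kappa

for N, m in [(16, 8), (256, 8), (4096, 16)]:
    print(f"N={N:5d} m={m:2d}  logrow={logrow_bits(N, m):9d} bits  naive={naive_bits(N, m):9d} bits")
```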

Our work improves state-of-the-art GC handling of several interesting applications, such as privacy-preserving machine learning, floating-point arithmetic, and DFA evaluation.
Anand Kumar Narayanan, Youming Qiao, Gang Tang
ePrint Report
We devise algorithms for finding equivalences of trilinear forms over finite fields modulo linear group actions. Our focus is on two problems under this umbrella, Matrix Code Equivalence (MCE) and Alternating Trilinear Form Equivalence (ATFE), since their hardness is the foundation of the NIST round-$1$ signature candidates MEDS and ALTEQ respectively.

We present new algorithms for MCE and ATFE, which are further developments of the algorithms for polynomial isomorphism and alternating trilinear form equivalence, in particular by Bouillaguet, Fouque, and Véber (Eurocrypt 2013), and Beullens (Crypto 2023). Key ingredients in these algorithms are new easy-to-compute distinguishing invariants under the respective group actions.

For MCE, we associate new isomorphism invariants to corank-$1$ points of matrix codes, which lead to a birthday-type algorithm. We present empirical justifications that these isomorphism invariants are easy to compute and distinguishing, and provide an implementation of this algorithm. This algorithm has some implications for the security of MEDS.

The invariant function for ATFE is similar, except it is associated with lower rank points. Modulo certain assumptions on turning the invariant function into canonical forms, our algorithm for ATFE improves on the runtime of the previously best known algorithm of Beullens (Crypto 2023).

Finally, we present quantum variants of our classical algorithms with cubic runtime improvements.
Markku-Juhani O. Saarinen
ePrint Report
We report on efficient and secure hardware implementation techniques for the FIPS 205 SLH-DSA Hash-Based Signature Standard. SLotH supports all 12 parameter sets of SLH-DSA. The configurable architecture contains Keccak/SHAKE, SHA2-256, and SHA2-512 cores, and can protect secret key material with side-channel secure PRF and Winternitz chains. We demonstrate that very significant performance gains can be obtained from hardware features that facilitate hash padding formats and iterative hashing specific to SLH-DSA. These features make SLH-DSA on SLotH many times faster compared to similarly-sized general-purpose hash accelerators. A small RISC-V control core executes the drivers, as is typical in RoT systems such as OpenTitan or Caliptra.

Compared to unaccelerated microcontroller implementations, the performance of SLotH's SHAKE variants is up to $300\times$ faster; signature generation with the 128f parameter set takes 4,903,978 cycles, while signature verification with the 128s parameter set takes only 179,603 cycles. The SLH-DSA-SHA2 parameter sets run at approximately half that speed. We observe that the signature verification performance of SLH-DSA's ``s'' parameter sets is generally better than that of accelerated ECDSA or Dilithium on similarly-sized RoT targets. The area of the full SLotH system is small, from 63 kGE (SHA2, Cat 1 only) to 155 kGE (all parameter sets). A Keccak Threshold Implementation adds another 130 kGE.

We provide a sensitivity analysis of SLH-DSA in relation to side-channel leakage. We show experimentally that an SLH-DSA implementation with CPU hashing will rapidly leak the SK.seed master key. We perform a 100,000-trace TVLA leakage assessment with a protected SLotH unit.
Dipayan Das, Antoine Joux
ePrint Report
The Partial Vandermonde (PV) Knapsack problem is an algebraic variant of the low-density inhomogeneous SIS problem. The problem has been used as a building block for various lattice-based constructions, including signatures (ACNS'14, ACISP'18), encryption (DCC'15, DCC'20), and signature aggregation (Eprint'20). At Crypto'22, Boudgoust, Gachon, and Pellet-Mary proposed a key distinguishing attack on the PV Knapsack exploiting algebraic properties of the problem. Unfortunately, their attack doesn't offer key recovery, except for worst-case keys.

In this paper, we propose an alternative attack on the PV Knapsack problem, which provides key recovery for a much larger set of keys. Like the Crypto'22 attack, it is based on lattice reduction and uses a dimension reduction technique to speed up the underlying lattice reduction algorithm and enhance its performance. As a side bonus, our attack transforms the PV Knapsack problem into uSVP instances, instead of the SVP instances used in the Crypto'22 attack. This also helps the lattice reduction algorithm, both from a theoretical and practical point of view.

We use our attack to re-assess the hardness of the concrete parameters used in the literature. It appears that many contain a non-negligible fraction of weak keys, which are easily identified and extremely susceptible to our attack. For example, a fraction of $2^{-19}$ of the public keys of a parameter set from ACISP'18 can be solved in about $30$ hours on a moderate server using off-the-shelf lattice reduction. This parameter set was initially claimed to provide $129$-bit security against key recovery attacks; the distinguishing attack from Crypto'22 already reduced this to $87$ bits. Similarly, the ACNS'14 proposal also includes a parameter set containing a fraction of $2^{-19}$ of weak keys; those can be solved in about $17$ hours.
Jakob Feldtkeller, Jan Richter-Brockmann, Pascal Sasdrich, Tim Güneysu
ePrint Report
Physical security is an important aspect of devices for which an adversary can manipulate the physical execution environment. Recently, more and more attention has been directed towards a security model that combines the capabilities of passive and active physical attacks, i.e., an adversary that performs fault-injection and side-channel analysis at the same time. Implementing countermeasures against such a powerful adversary is not only costly but also requires the skillful combination of masking and redundancy to counteract all reciprocal effects.

In this work, we propose a new methodology to generate combined-secure circuits. We show how to transform TI-like constructions to resist any adversary with the capability to tamper with internal gates and probe internal wires. For the resulting protection scheme, we can prove the combined security in a well-established theoretical security model.

Since the transformation preserves the advantages of TI-like structures, the resulting circuits prove to be more efficient than the state of the art in the number of required bits of randomness (up to 100%), the latency in clock cycles (up to 40%), and even the area for pipelined designs (up to 40%), for an adversary restricted to manipulating a single gate and probing a single wire.
Lars Ran, Simona Samardjiska, Monika Trimoska
ePrint Report
The Alternating Trilinear Form Equivalence (ATFE) problem was recently used by Tang et al. as a hardness assumption in the design of a Fiat-Shamir digital signature scheme ALTEQ. The scheme was submitted to the additional round for digital signatures of the NIST standardization process for post-quantum cryptography.

ATFE is a hard equivalence problem known to be in the class of equivalence problems that includes, for instance, the Tensor Isomorphism (TI), Quadratic Maps Linear Equivalence (QMLE) and the Matrix Code Equivalence (MCE) problems. Due to the increased cryptographic interest, the understanding of its practical hardness has also increased in the last couple of years. Currently, there are several combinatorial and algebraic algorithms for solving it, the best of which is a graph-theoretic algorithm that also includes an algebraic subroutine.

In this paper, we take a purely algebraic approach to the ATFE problem, but we use a coding theory perspective to model the problem. This modelling was introduced earlier for the MCE problem. Using it, we improve the cost of algebraic attacks against ATFE compared to previously known ones.

Taking into account the algebraic structure of alternating trilinear forms, we show that the obtained system has fewer variables but also fewer equations than for MCE, and gives rise to structural degree-3 syzygies. Under the assumption that outside of these syzygies the system behaves semi-regularly, we provide a concrete, non-asymptotic complexity estimate of the performance of our algebraic attack. Our results show that the complexity of our attack is below the estimated security levels of ALTEQ by more than 20 bits for NIST level I (and by more for the others); thus the scheme requires re-parametrization for all three NIST security levels.
Roozbeh Sarenche, Ren Zhang, Svetla Nikova, Bart Preneel
ePrint Report
A Bitcoin miner who owns a sufficient amount of mining power can perform selfish mining to increase his relative revenue. Studies have demonstrated that the time-averaged profit of a selfish miner starts to rise once the mining difficulty level gets adjusted in favor of the attacker. The profitability of selfish mining stems from the fact that orphan blocks are not incorporated into the current version of Bitcoin's difficulty adjustment mechanism (DAM). Therefore, it is believed that considering the count of orphan blocks in the DAM can render selfish mining unprofitable. In this paper, we disprove this belief by providing a formal analysis of the selfish mining time-averaged profit. We present a precise definition of the orphan blocks that can be incorporated into calculating the next epoch's target and then introduce two modified versions of the DAM in which both main-chain blocks and orphan blocks are incorporated. We propose two versions of smart intermittent selfish mining, where the first one dominates normal intermittent selfish mining and the second one results in selfish mining profitability under the modified DAMs. Moreover, we present the orphan exclusion attack, with the help of which the attacker can stop honest miners from reporting the orphan blocks. Using combinatorial tools, we analyze the profitability of selfish mining accompanied by the orphan exclusion attack under the modified DAMs. Our results show that even when the orphan blocks are considered in the DAM, normal selfish mining can still be profitable; however, the level of profitability under the modified DAMs is significantly lower than that observed under the current version of the Bitcoin DAM.
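As background, the snippet below is a minimal Monte-Carlo sketch of the classic selfish-mining strategy (in the style of Eyal and Sirer) that estimates the attacker's relative revenue; it does not model difficulty adjustment, orphan reporting, or the intermittent strategies analyzed in this paper, and the parameters are illustrative.

```python
import random

def selfish_mining_relative_revenue(alpha, gamma, blocks=200_000, seed=1):
    """alpha: attacker's fraction of hash power; gamma: fraction of honest miners
    extending the attacker's tip during a 1-vs-1 fork race. Returns the attacker's
    share of main-chain blocks (relative revenue)."""
    rng = random.Random(seed)
    lead, race = 0, False          # attacker's private lead; whether a fork race is on
    attacker, honest = 0, 0        # blocks that end up in the main chain
    for _ in range(blocks):
        if rng.random() < alpha:           # attacker finds the next block
            if race:
                attacker += 2              # attacker extends his own tip, wins the race
                race = False
            else:
                lead += 1                  # keep the new block private
        else:                              # honest miners find the next block
            if race:
                if rng.random() < gamma:
                    attacker += 1          # attacker's contested block is adopted
                else:
                    honest += 1            # honest contested block is adopted
                honest += 1                # plus the fresh honest block
                race = False
            elif lead == 0:
                honest += 1
            elif lead == 1:
                lead, race = 0, True       # publish the hidden block: 1-vs-1 race
            elif lead == 2:
                attacker += 2              # override with both hidden blocks
                lead = 0
            else:
                attacker += 1              # release one block, keep a safe lead
                lead -= 1
    return attacker / (attacker + honest)

print(round(selfish_mining_relative_revenue(alpha=0.35, gamma=0.5), 3))
```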
Shan Chen, Marc Fischlin
ePrint Report
Causal reasoning plays an important role in the comprehension of communication, but it has been elusive so far how causality should be properly preserved by instant messaging services. To the best of our knowledge, causality preservation is not even treated as a desired security property by most (if not all) existing secure messaging protocols like Signal. This is probably due to the intuition that causality seems already preserved when all received messages are intact and displayed according to their sending order. Our starting point is to notice that this intuition is wrong.

Until now, for messaging channels (where conversations take place), both the proper causality model and provably secure constructions have been left open. Our work fills this gap, with the goal of facilitating the formal understanding of causality preservation in messaging.

First, we focus on the common two-user secure messaging channels and model the desired causality preservation property. We take the popular Signal protocol as an example and analyze the causality security of its cryptographic core (the double-ratchet mechanism). We show its inadequacy with a simple causality attack, then fix it such that the resulting Signal channel is causality-preserving, even in a strong sense that guarantees post-compromise security. Our fix is actually generic: it can be applied to any bidirectional channel to gain strong causality security.

Then, we model causality security for the so-called message franking channels. Such a channel additionally enables end users to report individual abusive messages to a server (e.g., the service provider), where this server relays the end-to-end-encrypted communication between users. Causality security in this setting further allows the server to retrieve the necessary causal dependencies of each reported message, essentially extending isolated reported messages to message flows. This has great security merit for dispute resolution, because a benign message may be deemed abusive when isolated from the context. As an example, we apply our model to analyze Facebook’s message franking scheme. We show that a malicious user can easily trick Facebook (i.e., the server) to accuse an innocent user. Then we fix this issue by amending the underlying message franking channel to preserve the desired causality.
Jiaxin Pan, Doreen Riepel, Runzhi Zeng
ePrint Report
Weak forward secrecy (wFS) of authenticated key exchange (AKE) protocols is a passive variant of (full) forward secrecy (FS). A natural mechanism to upgrade from wFS to FS is the use of key confirmation messages which compute a message authentication code (MAC) over the transcript. Unfortunately, Gellert, Gjøsteen, Jacobsen and Jager (GGJJ, CRYPTO 2023) show that this mechanism inherently incurs a loss proportional to the number of users, leading to an overall non-tight reduction, even if wFS was established using a tight reduction.
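To recall the mechanism under discussion, here is a minimal sketch of key confirmation via MACs over the transcript; the key-derivation step and the role labels are illustrative assumptions, and this is not the concrete transformation of this paper or of GGJJ.

```python
import hmac, hashlib

def derive_mac_key(session_key: bytes) -> bytes:
    # simple KDF stand-in; a real protocol would use its own labelled KDF
    return hashlib.sha256(b"key-confirmation" + session_key).digest()

def confirmation_tag(session_key: bytes, role: bytes, transcript: bytes) -> bytes:
    """Each party MACs the transcript (with a role label) under a key derived
    from the session key and sends the tag as its key-confirmation message."""
    return hmac.new(derive_mac_key(session_key), role + transcript, hashlib.sha256).digest()

def verify_confirmation(session_key: bytes, role: bytes, transcript: bytes, tag: bytes) -> bool:
    expected = confirmation_tag(session_key, role, transcript)
    return hmac.compare_digest(expected, tag)

# usage: the initiator sends its tag, the responder verifies (and vice versa)
k, tr = b"\x01" * 32, b"transcript of all protocol messages"
tag_i = confirmation_tag(k, b"initiator", tr)
assert verify_confirmation(k, b"initiator", tr, tag_i)
```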

Inspired by GGJJ, we propose a new notion, called one-way verifiable weak forward secrecy (OW-VwFS), and prove that OW-VwFS can be transformed tightly to FS using key confirmation in the random oracle model (ROM). To implement our generic transformation, we show that several tightly wFS AKE protocols additionally satisfy our OW-VwFS notion tightly. We highlight that using the recent lattice-based protocol from Pan, Wagner, and Zeng (CRYPTO 2023) can give us the first lattice-based tightly FS AKE via key confirmation in the classical random oracle model. Besides this, we also obtain a Decisional-Diffie-Hellman-based protocol that is considerably more efficient than the previous ones.

Finally, we lift our study on FS via key confirmation to the quantum random oracle model (QROM). While our security reduction is overall non-tight, it matches the best existing bound for wFS in the QROM (Pan, Wagner, and Zeng, ASIACRYPT 2023), namely, it is square-root- and session-tight. Our analysis is in the multi-challenge setting, which is more realistic than the single-challenge setting considered by Pan et al.
Yassine Hamoudi, Qipeng Liu, Makrand Sinha
ePrint Report
Collision-resistant hashing, a fundamental primitive in modern cryptography, ensures that there is no efficient way to find distinct inputs that produce the same hash value. This property underpins the security of various cryptographic applications, making it crucial to understand its complexity. The complexity of this problem is well understood in the classical setting: $\Theta(N^{1/2})$ queries are needed to find a collision. However, the advent of quantum computing has introduced new challenges since quantum adversaries - equipped with the power of quantum queries - can find collisions much more efficiently. Brassard, Høyer, and Tapp, as well as Aaronson and Shi, established that full-scale quantum adversaries require $\Theta(N^{1/3})$ queries to find a collision, prompting a need for longer hash outputs, which impacts efficiency in terms of the key lengths needed for security.
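For concreteness, the classical $\Theta(N^{1/2})$ baseline can be demonstrated with a standard birthday search on a truncated hash; the 32-bit output length and the use of truncated SHA-256 as a stand-in random function are assumptions for illustration only.

```python
import hashlib

def birthday_collision(n_bits=32, max_tries=1 << 20):
    """Store hashes of distinct inputs until two collide on the first n_bits bits;
    by the birthday bound this takes about 2^(n_bits/2) attempts on average."""
    seen = {}
    for counter in range(max_tries):
        msg = counter.to_bytes(8, "big")
        digest = hashlib.sha256(msg).digest()[: n_bits // 8]
        if digest in seen:
            return seen[digest], msg   # two distinct inputs, same truncated hash
        seen[digest] = msg
    return None

pair = birthday_collision()
if pair:
    print("collision between inputs:", pair[0].hex(), "and", pair[1].hex())
```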

This paper explores the implications of quantum attacks in the Noisy-Intermediate Scale Quantum (NISQ) era. In this work, we investigate three different models for NISQ algorithms and achieve tight bounds for all of them: (1) A hybrid algorithm making adaptive quantum or classical queries but with a limited quantum query budget, or (2) A quantum algorithm with access to a noisy oracle, subject to a dephasing or depolarizing channel, or (3) A hybrid algorithm with an upper bound on its maximum quantum depth; i.e., a classical algorithm aided by low-depth quantum circuits.

In fact, our results handle all regimes between NISQ and full-scale quantum computers. Previously, only results for the pre-image search problem were known for these models (by Sun and Zheng; Rosmanis; and Chen, Cotler, Huang, and Li), while nothing was known about the collision finding problem.

Along with our main results, we develop an information-theoretic framework for recording query transcripts of quantum-classical algorithms. The main feature of this framework is that it allows us to record queries in two incompatible bases - classical queries in the standard basis and quantum queries in the Fourier basis - consistently. We call the framework the hybrid compressed oracle as it naturally interpolates between the classical way of recording queries and the compressed oracle framework of Zhandry for recording quantum queries.
Caicai Chen, Chris Jones
ePrint Report
Hudoba proposed a public key encryption (PKE) scheme and conjectured its security to be based on the Planted Clique problem. In this note, we show that this scheme is not secure. We do so by devising an efficient algorithm for the even neighbor independent set problem proposed by Hudoba. This leaves open the possibility of building PKE based on Planted Clique.
Qi Feng, Kang Yang, Kaiyi Zhang, Xiao Wang, Yu Yu, Xiang Xie, Debiao He
ePrint Report
EdDSA, standardized by both IRTF and NIST, is a variant of the well-known Schnorr signature based on Edwards curves, and enjoys the benefit of statelessly and deterministically deriving nonces (i.e., it does not require a reliable source of randomness or state continuity). Recently, NIST has called for multi-party threshold EdDSA signatures in a mode that derives nonces statelessly and deterministically and verifies such derivation via zero-knowledge (ZK) proofs. Multi-party full-threshold EdDSA signatures in the dishonest-majority malicious setting have the advantage of a strong security guarantee, and in particular cover the two-party case. However, it is challenging to translate the stateless and deterministic benefit of EdDSA to the multi-party setting, as no fresh randomness is available for the protocol execution.
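For context, the following is a minimal sketch of the single-signer deterministic nonce derivation of Ed25519 (RFC 8032), which is the stateless-and-deterministic behavior referred to above; the prefix value shown is purely illustrative, and the multi-party protocol of this work derives nonces via PCFs and MVZK rather than in this way.

```python
import hashlib

# Order of the Ed25519 base-point subgroup (RFC 8032)
L_ED25519 = 2**252 + 27742317777372353535851937790883648493

def deterministic_nonce(prefix: bytes, message: bytes) -> int:
    """Ed25519 derives the nonce r as a hash of a secret prefix and the message,
    so signing needs no fresh randomness and no state. The nonce commitment is
    then R = r*B and the response uses r as usual in a Schnorr-style signature."""
    h = hashlib.sha512(prefix + message).digest()
    return int.from_bytes(h, "little") % L_ED25519

prefix = hashlib.sha512(b"illustrative secret key").digest()[32:]  # second half of H(sk)
print(hex(deterministic_nonce(prefix, b"example message")))
```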

We present the notion of information-theoretic message authenticated codes (IT-MACs) over groups in the multi-verifier setting, and adopt the recent pseudorandom correlation function (PCF) to generate IT-MACs statelessly and deterministically. Furthermore, we generalize the two-party IT-MACs-based ZK protocol by Baum et al. (Crypto'21) into the multi-verifier setting, which may be of independent interest. Together with multi-verifier extended doubly-authenticated bits (mv-edabits) with errors, we design a multi-verifier zero-knowledge (MVZK) protocol to derive nonces statelessly and deterministically. Building upon the MVZK protocol, we propose a stateless deterministic multi-party EdDSA signature, tolerating all-but-one malicious corruptions. Compared to the state-of-the-art multi-party EdDSA signature by Garillot et al. (Crypto'21), we improve communication cost by a factor of $61\times$, at the cost of increasing computation cost by about $2.25\times$ and requiring three extra rounds.