IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
08 October 2025
David Heath, Nakul Khambhati, Rafail Ostrovsky, Turan Vural
Authenticated garbling, introduced by Wang et al. (CCS'17), is a leading paradigm for achieving constant-round maliciously-secure 2PC. In this work, we upgrade authenticated garbling to efficiently support tensor gates (introduced in the context of semi-honest garbling by Heath et al., CCS'21) using the one-hot garbling technique.
Our maliciously-secure garbled tensor gate computes $\boldsymbol {x}\otimes \boldsymbol{y}$ for $\boldsymbol{x}\in \{0,1\}^n,\boldsymbol{y} \in \{0,1\}^m$ with $O(n+m)\kappa + O(nm)$ bits of communication, where $\kappa$ is the computational security parameter and $n,m$ are logarithmic in $\kappa$.
This improves the best prior constant-round maliciously secure approach, which incurs $O(nm)\kappa$ communication.
Our protocol is concretely efficient and enables improvements over the state of the art for applications including integer multiplication, matrix multiplication, and more. We benchmark the online phase of our protocol and observe a $5.41\times$ improvement in communication and a $3.35\times$ improvement in wall-clock time when computing a $128\times 128$ bit tensor product.
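For orientation, the functionality the garbled tensor gate computes, the outer product of two bit-vectors over GF(2), can be sketched in the clear as follows; this illustrates only the function, not the authenticated-garbling protocol itself:

```python
def tensor(x, y):
    """Outer (tensor) product of bit-vectors over GF(2):
    output[i][j] = x[i] AND y[j], an n-by-m bit matrix."""
    return [[xi & yj for yj in y] for xi in x]
```

For $n = m = 128$ this matrix has $nm = 16{,}384$ entries, which is why communication of $O(n+m)\kappa + O(nm)$ bits improves on the prior $O(nm)\kappa$.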
Goutam Paul, Anup Kumar Kundu, Sucheta Chakrabarti
ChaCha and Salsa are two ARX based stream ciphers which are widely used in data encryption including TLS v1.3 standard, VPN software etc. Exploiting Probabilistic Neutral Bits (PNB) is one of the most significant cryptanalysis strategies for reduced round versions of these ciphers. The seminal work using PNB by Aumasson et al. (FSE 2008) claims that the PNB set mostly depends on the output bit difference occurring in the intermediate round. The subsequent works mainly relied on the differential or differential-linear cryptanalysis, or multiple distinct input-output differentials for which the bias is higher than a threshold in the intermediate round. In this paper, we propose a new PNB set construction based on multiple output bit differences with respect to a single input bit difference only. We exploit the differentials to mount key recovery attacks using a multi-step procedure depending on our new PNB set. Our attack achieves a time complexity of $2^{167.9008}$ for ChaCha20/7 and $2^{183.5361}$ for Salsa20/8, beating all the existing PNB-based attacks on ChaCha20/7 and Salsa20/8 by a significant margin. Further, both our time and data complexities for ChaCha20/7.5 are better than the latest published work by Flórez-Gutiérrez and Todo (Eurocrypt 2025). We have also verified our attack experimentally on a published toy version of ChaCha (FSE 2023).
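For reference, the ARX quarter-round at the core of ChaCha, the operation whose bit-level differentials such PNB analyses trace, is standard (RFC 8439) and can be sketched as:

```python
MASK32 = 0xFFFFFFFF

def rotl32(v, c):
    """Rotate a 32-bit word left by c bits."""
    return ((v << c) | (v >> (32 - c))) & MASK32

def quarter_round(a, b, c, d):
    """ChaCha quarter-round: additions, XORs, and left rotations
    by 16, 12, 8, 7 on 32-bit words (RFC 8439, Section 2.1)."""
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d
```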
Joachim Neu, Javier Nieto, Ling Ren
Proof-of-stake blockchains require consensus protocols that support Dynamic Availability and Reconfiguration (so-called DAR setting), where the former means that the consensus protocol should remain live even if a large number of nodes temporarily crash, and the latter means it should be possible to change the set of operating nodes over time. State-of-the-art protocols for the DAR setting, such as Ethereum, Cardano’s Ouroboros, or Snow White, require unrealistic additional assumptions, such as social consensus, or that key evolution is performed even while nodes are not participating. In this paper, we identify the necessary and sufficient adversarial condition under which consensus can be achieved in the DAR setting without additional assumptions. We then introduce a new and realistic additional assumption: honest nodes dispose of their cryptographic keys the moment they express intent to exit from the set of operating nodes. To add reconfiguration to any dynamically available consensus protocol, we provide a bootstrapping gadget that is particularly simple and efficient in the common optimistic case of few reconfigurations and no double-spending attempts.
Vladimir Kolesnikov, Stanislav Peceny, Rahul Rachuri, Srinivasan Raghuraman, Peter Rindal, Harshal Shah
Linear error-correcting codes with fast encoding and high minimum distance are a central primitive across modern cryptography. They appear most prominently in two domains: (1) pseudorandom correlation generators (PCGs), which enable sublinear-communication generation of correlations such as oblivious transfer (OT) and vector oblivious linear evaluation (VOLE), and (2) zero-knowledge (ZK) proof systems, where linear-time encoders underpin proof soundness and scalability. In both settings, the prover or sender must multiply by a large generator matrix $\mathbf{G}$, often with dimensions in the millions, making computational efficiency the dominant bottleneck.
We propose a generalized paradigm for building LPN-friendly codes with provable minimum distance. Roughly speaking, these codes are based on the idea of randomized turbo codes such as repeat accumulate codes. To prove their minimum distance, we present a generalized enumeration technique, which allows us to precisely compute the minimum distance for a broad class of codes. Although we do not prove their asymptotic behavior, the concrete parameters essentially give a linear-time encoder.
Armed with these new techniques, we construct several novel codes, the most promising of which we call Block-Accumulate codes. Our original design goal was to construct codes that run efficiently on GPUs. Surprisingly, we find that our newly constructed codes are the fastest on both GPUs and CPUs, while at the same time achieve a better minimum distance. If we restrict our attention to codes with proofs, our code is $8\times$ faster than state of the art on a CPU and $50\times$ faster on a GPU. Even if we use aggressive parameters, our code is $3$ and $20\times$ faster, respectively. Under these parameters, this yields overall PCG speedups of $2.5\times$ on the CPU and $15\times$ on the GPU, achieving about 200 million OTs or binary Beaver triples per second on the GPU (excluding the one-time 10 ms GGM seed expansion). We expect similar improvements when applied to the ZK space.
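For intuition on the turbo-code family the paper builds on, a classical repeat-accumulate encoder (repeat each bit, permute, then take a running XOR) can be sketched as follows; the repetition rate and permutation here are illustrative, not the paper's parameters:

```python
import random

def ra_encode(msg_bits, repeat=3, seed=0):
    """Minimal repeat-accumulate encoder over GF(2): repeat each
    message bit, interleave with a fixed pseudorandom permutation,
    then accumulate (running XOR). Toy parameters only."""
    repeated = [b for b in msg_bits for _ in range(repeat)]
    perm = list(range(len(repeated)))
    random.Random(seed).shuffle(perm)
    interleaved = [repeated[i] for i in perm]
    out, acc = [], 0
    for b in interleaved:      # accumulator: c_i = m_1 ^ ... ^ m_i
        acc ^= b
        out.append(acc)
    return out
```

Since all three stages are GF(2)-linear, this encoder defines a linear code; the paper's Block-Accumulate codes refine this template toward provable minimum distance and fast encoding.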
Jules Maire, Alan Pulval-Dady
Blind signatures have become a cornerstone for privacy-sensitive applications such as digital cash, anonymous credentials, and electronic voting.
The elliptic curve variant of the Digital Signature Algorithm (ECDSA) is widely adopted due to its efficiency in resource-constrained environments, such as mobile devices and blockchain systems.
Building blind ECDSA is hence a natural goal.
We present the first such construction relying solely on the ECDSA assumption.
Despite the inherent complexities of integrating blindness with ECDSA, we design a protocol that ensures both unforgeability and blindness without introducing new computational assumptions, while ensuring concurrent security.
It involves zero-knowledge proofs based on the MPC-in-the-head paradigm for complex statements combining relations on encrypted elliptic curve points, their coordinates, and discrete logarithms.
Vipul Goyal, Justin Raizes
A central challenge in data security is not just preventing theft, but detecting whether it has occurred. Classically, this is impossible because a perfect copy leaves no evidence. Quantum mechanics, on the other hand, forbids general duplication, opening up new possibilities.
We introduce Proofs of No Intrusion, which enable a classical client to remotely test whether a quantum server has been hacked and the client's data stolen. Crucially, the test does not destroy the data being tested, avoiding the need to store a backup elsewhere. We define and construct proofs of no intrusion for ciphertexts assuming fully homomorphic encryption. Additionally, we show how to equip several constructions of unclonable primitives with proofs of non-intrusion, such as unclonable decryption keys and signature tokens. Conceptually, proofs of non-intrusion can be defined for essentially any unclonable primitive.
At the heart of our techniques is a new method for non-destructively testing coset states with classical communication. It can be viewed as a non-destructive proof of knowledge of a measurement result of the coset state.
Koen de Boer, Joël Felderhoff
We present a novel analysis of a quantum algorithm computing the S-unit group for a number field from Eisenträger et al. [EHKS14a] and Biasse and Song [BS16]. We prove that this quantum algorithm runs within polynomial time, where we explicitly quantify the polynomials of the quantum gate and memory complexity (under GRH). We do so by carefully analyzing an implementation of a Continuous Hidden Subgroup Problem (CHSP) oracle function whose period is the (logarithm of the) S-unit group, and providing it to a CHSP-solving algorithm as in [BDF19].
Our analysis is novel in that it minimizes the use of the quantum memory-inefficient LLL-reduction by resorting to strategically chosen precomputations of approximations of high powers of prime ideals. Additionally, we provide a new quantum algorithm computing a discrete Gaussian superposition, an analogue of the GPV algorithm by Gentry et al. [GPV08]. Lastly, we include a full and rigorous numerical analysis of all parts of the oracle-function-computing algorithm, allowing the use of fixed-point-precision arithmetic and thus a precise quantification of the run-time and memory.
Nikita Snetkov, Mihkel Jaas Karu, Jelizaveta Vakarjuk, Alisa Pankova
Privacy-preserving technologies are a well-established area of research for modern digital systems, driven by regulatory mandates such as the GDPR and a growing demand for user privacy. One example among the available technologies is blind signatures: a primitive that supports the creation of privacy-sensitive applications, ranging from secure e-voting and anonymous digital cash to privacy-preserving credentials and identity-management systems. In this work, we introduce Coppercloud, a blind server-supported RSA signature scheme designed to enhance privacy in digital identity systems. Coppercloud enables a user to obtain a signature on a message without revealing its content to the supporting server, while distributing the signing key between the user's device and the supporting server. We formalize the security requirements for blind server-supported signing by defining an ideal functionality, and prove that Coppercloud securely realizes this functionality in the Universal Composability (UC) model.
Daniele Ballo
This paper presents a unified information-theoretic framework for steganography that simultaneously addresses reliability, statistical undetectability, computational security, and robustness over noisy channels. Unlike prior work that treated these aspects separately under idealized assumptions, the framework formalizes them within a single model, providing definitions, capacity bounds, and impossibility results based on statistical distances. It also considers computationally bounded adversaries and trade-offs between payload, detectability, and error, offering a rigorous foundation for designing secure, robust, and efficient steganographic systems.
Sulaiman Alhussaini, Sergeĭ Sergeev
One-sided linear systems of the form ``$Ax=b$'' are well-known and extensively studied over the tropical (max-plus) semiring and wide classes of related idempotent semirings. The usual approach is to first find the greatest solution to such a system in polynomial time, and then to solve the much harder problem of finding all minimal solutions. We develop an extension of this approach to the same systems over two well-known extensions of the tropical semiring, the symmetrized and the supertropical, and discuss the implications of our findings for tropical cryptography.
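In the ordinary tropical case, the greatest solution mentioned above is given by the classical residuation formula $\hat{x}_j=\min_i(b_i-a_{ij})$, and $Ax=b$ is solvable iff $A\otimes\hat{x}=b$. A minimal sketch of this standard fact (not the paper's symmetrized/supertropical extension):

```python
def maxplus_matvec(A, x):
    """Max-plus matrix-vector product: (A ⊗ x)_i = max_j (a_ij + x_j)."""
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

def principal_solution(A, b):
    """Greatest candidate solution x̂_j = min_i (b_i - a_ij);
    the system A ⊗ x = b is solvable iff A ⊗ x̂ equals b."""
    m, n = len(A), len(A[0])
    return [min(b[i] - A[i][j] for i in range(m)) for j in range(n)]
```

For example, with A = [[0, 1], [2, 0]] and b = [3, 4], the principal solution is [2, 2] and A ⊗ [2, 2] = [3, 4], so the system is solvable.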
Donald Beaver
Mental Poker enables two players to play card games at a distance without having a physical deck of cards at hand, as long as they have reliable trapdoor permutations or access to Oblivious Transfer (OT). OT itself can be stored and extended using just a one-way function. We examine whether Mental Poker can be extended from initial hands of ordinary physical poker.
On the theoretical side, this work establishes OT amplification/extraction as a cryptographic primitive using asymmetrically-viewed fully-randomizing permutations. On the pragmatic side, it is naturally related to ``card cryptography,'' although in a very restricted sense: 52-card deck protocols using fully scrambling shuffles. While secret-key exchange protocols make strong use of full shuffles, full shuffles are avoided in 2PC/AND/Solitaire card settings because of detrimental over-randomization. Within those domains, this work explains decades-old impossibility conjectures in certain circumstances. More positively, we provide several practical and information-theoretically secure ways to establish base OTs from 5-Card Draw over ordinary decks, at rates sufficient to establish a foothold for indefinite Mental Poker extension in just an afternoon of play.
Mario Marhuenda Beltrán, Mustafa Khairallah
Plaintext-awareness of AEAD schemes is one of the more obscure and easily misunderstood notions. It was originally proposed by Andreeva et al.; Mennink and Talnikar (2025) showed that the original definitions are vague and leave too much room for interpretation. They presented new definitions and analyzed the three main AEAD compositions relative to them. In particular, they showed that MAC-then-Encrypt (MtE) is not plaintext-aware, whereas an SIV-style variant is. In this work, we first show that their analysis contains a gap that voids their proof. We demonstrate this with several attacks: against their choice of extractor, with trivial complexity, and against any extractor, with birthday-bound complexity. Next, we re-establish their results by designing a new extractor that captures their intended goal, and prove a tight PA1 security bound. We also show that the result does not depend on the encryption scheme used, by constructing an extractor for sMtE[ΘCB3], a variant in which the encryption step is performed by a ΘCB3-like scheme. Afterwards, we strengthen their results by revisiting other compositions. In particular, we show that Encrypt-then-MAC (EtM) is in fact PA2-secure, while SIV-style MtE cannot be. Additionally, we show that Encode-then-Encipher is PA2-secure, but not beyond the first successful forgery. More importantly, we show that, up to some necessary assumptions, PA2 and RAE are equivalent.
Last but not least, we investigate the value of the notion of plaintext awareness, looking deeper into its relation with confidentiality. We show that the problem of confidentiality in the presence of release of unverified plaintext can be seen as a confidentiality-with-leakage problem in the simulatable-leakage framework. In this regard, we start by showing that PA1 security cannot imply confidentiality with leakage. Similarly, we compare our results to the AERUP notion (TOSC 2019) and show that a scheme can be AERUP-secure yet insecure against a rather straightforward left-or-right attack in the same model. This puts into question the meaning and relevance of the PA1 and AERUP security notions. These results can be seen as providing security in a two-phase game in which the adversary observes no leakage after the first challenge query, as we argue in the paper. On the positive side, we show that if a scheme achieves IND-CPA, INT-RUP, and PA2, then it achieves confidentiality with leakage for the appropriate leakage function. This closes a gap in the literature on the relation between confidentiality with leakage and RUP notions.
Andrea Flamini, Federico Pintore, Edoardo Signorini, Giovanni Tognolini
Non-interactive zero-knowledge proofs (NIZKPs) used as components in advanced cryptographic protocols typically require straight-line extractability to enable security analysis. While the widely used Fiat-Shamir transform produces efficient and compact NIZKPs from Sigma protocols, its security proofs rely on adversary rewinding, which prevents straight-line extractability. The Fischlin transform offers an alternative that produces straight-line extractable NIZKPs from Sigma protocols, but typically sacrifices compactness in the process. In the post-quantum setting, group-action-based Sigma protocols have proven to be truly flexible for the design of advanced cryptosystems. These Sigma protocols have a small challenge space that requires tailored optimizations to improve the compactness of the derived NIZKPs and signatures. Some specific techniques for Fiat-Shamir NIZKPs have been studied: among the most established, the fixed-weight technique leverages seed trees to encode the majority of the transcripts in the proof. However, implementing the same techniques within the Fischlin transform encounters significant obstructions; in particular, their impact is limited, and a closed-form analysis of their effectiveness appears intractable.
This work introduces the GAO (Group Action Oriented) transform, a new generic compiler that produces straight-line extractable NIZKPs from Sigma protocols while significantly simplifying the analysis of the fixed-weight framework. The GAO transform is then optimized in two different ways, defining a collision predicate (yielding the Coll-GAO transform) and adopting a technique (Stretch-and-Compress) that can be applied to improve both GAO and Coll-GAO (yielding the SC-GAO and SC-Coll-GAO transforms). The practical advantages of the SC-Coll-GAO transform are theoretically motivated and concretely tested on the LESS digital signature, a code-based candidate that recently advanced to the second round of the NIST standardization process specifically purposed for post-quantum signatures. Remarkably, when compared to the Fiat-Shamir LESS baseline, SC-Coll-GAO incurs a computational cost increase by 50-60%, while signature sizes grow by only 10-20%.
Hongrui Cui, Chun Guo, Xiaojie Guo, Xiao Wang, Kang Yang, Yu Yu
This work studies the security and constructions of correlation-robust (CR) hash functions in secure multi-party computation (MPC). Existing definitions of CR hashing are all game-based (i.e., with no simulator to achieve programmability or extractability), but MPC protocols are proven secure in simulation-based models, including both the stand-alone and universal composability models. We find that for some MPC protocols, e.g., TinyOT-like authenticated-triple generation protocols and correlated oblivious transfer (COT) extension protocols, such a mismatch can lead to a gap in security proofs, even for semi-honest adversaries in the stand-alone model.
To bridge the gap, we introduce a simulation-based security notion for CR hash functions that allows secure composition. Instead of building from scratch, we introduce such a simulator to a wide class of existing ideal-cipher-based CR hashing constructions and derive the security bound from their original game-based CR security. This enables us to obtain an efficient CR hashing construction making just one call to a blockcipher, which is much more efficient than the construction from a random oracle used in previous TinyOT-like protocols. We showcase the utility of the new CR notion in easing security proofs and mitigating the risk of errors on two classes of protocols: (1) authenticated-triple generation protocols in the TinyOT family with a countermeasure; (2) COT extension protocols with bootstrapped iterations.
Kel Zin Tan, Prashant Nalini Vasudevan
A random local function defined by a $d$-ary predicate $P$ is one where each output bit is computed by applying $P$ to $d$ randomly chosen bits of its input. These represent natural distributions of instances for constraint satisfaction problems. They were put forward by Goldreich as candidates for low-complexity one-way functions, and have subsequently been widely studied also as potential pseudo-random generators.
We present a new search-to-decision reduction for random local functions defined by any predicate of constant arity. Given any efficient algorithm that can distinguish, with advantage $\epsilon$, the output of a random local function with $m$ outputs and $n$ inputs from random, our reduction produces an efficient algorithm that can invert such functions with $\tilde{O}(m(n/\epsilon)^2)$ outputs, succeeding with probability $\Omega(\epsilon)$. This implies that if a family of local functions is one-way, then a related family with shorter output length is a family of pseudo-random generators.
Prior to our work, all such reductions that were known required the predicate to have additional sensitivity properties, whereas our reduction works for any predicate. Our results also generalise to some super-constant values of the arity $d$, and to noisy predicates.
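A random local function as described is easy to sample directly: pick $d$ random input positions for each output bit and apply the predicate to them. A minimal sketch with an illustrative 3-ary XOR predicate (not a predicate analyzed in the paper):

```python
import random

def sample_wiring(n, m, d=3, seed=0):
    """Choose, for each of m output bits, d random positions among
    the n input bits. The wiring together with a predicate defines
    the random local function."""
    rng = random.Random(seed)
    return [tuple(rng.randrange(n) for _ in range(d)) for _ in range(m)]

def evaluate(wiring, predicate, x):
    """Apply the predicate to the selected input bits of x."""
    return [predicate(*(x[i] for i in idxs)) for idxs in wiring]

def xor3(a, b, c):
    """Illustrative predicate P(a, b, c) = a XOR b XOR c."""
    return a ^ b ^ c
```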
Alex Davidson, Amit Deo, Louis Tremblay Thibault
We propose Pool: a conceptually simple post-quantum (PQ) oblivious pseudorandom function (OPRF) protocol, that is round-optimal (with input-independent preprocessing), practically efficient, and has security based on the well-understood hardness of the learning with rounding (LWR) problem. Specifically, our design permits oblivious computation of the LWR-based pseudorandom function $F_{\mathsf{sk}}(x) = \lceil H(x)^{\top} \cdot \mathsf{sk} \rfloor_{q,p}$, for random oracle $H: \{0,1\}^* \mapsto \mathbb{Z}_q^n$ and uniformly chosen $\mathsf{sk} \in \{0,1\}^n$. For 128-bits of semi-honest security, the Pool OPRF has an online communication cost of 11.9~kB, and a computational runtime of less than 2~ms on a single thread (via an open-source software implementation). This is more efficient (in either online communication cost or runtime) than constructions from well-known PQ PRFs, and is competitive even with constructions that only conjecture PQ security on lesser-known assumptions. As a result, our design gives high-performance, post-quantum variants of established OPRF applications in multi-party computation and private set operation protocols.
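Evaluated in the clear (without the oblivious protocol), the LWR-based PRF above can be sketched as follows; the SHAKE instantiation of $H$ and the toy parameters are illustrative choices, far below real security levels:

```python
import hashlib

def lwr_prf(sk, x, q=2**12, p=2**4):
    """Toy evaluation of F_sk(x) = round_{q->p}(<H(x), sk> mod q),
    with sk a binary vector and H instantiated via SHAKE-128.
    Illustrative parameters only, not a secure instantiation."""
    n = len(sk)
    # Expand the input x to a vector in Z_q^n via an XOF.
    stream = hashlib.shake_128(x).digest(2 * n)
    h = [int.from_bytes(stream[2*i:2*i+2], 'little') % q for i in range(n)]
    inner = sum(hi * si for hi, si in zip(h, sk)) % q
    return (inner * p) // q   # deterministic rounding Z_q -> Z_p
```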
Reo Eriguchi, Kazumasa Shinagawa
A Private Simultaneous Messages (PSM) protocol is a secure multiparty computation protocol with a minimal interaction pattern, which allows input parties sharing common randomness to securely reveal the output of a function by sending messages only once to an external party. Since existing PSM protocols for arbitrary functions have exponentially large communication complexity in the number $n$ of parties, it is important to explore efficient protocols by focusing on special functions of practical use. In this paper, we study the communication efficiency of PSM protocols for symmetric functions, which provide many useful functionalities for real-world applications. We present a new $n$-party PSM protocol for symmetric functions with communication complexity $n^{2d/3+O(1)}$, where $d$ is the size of the input domain of each party. Our protocol improves the currently best known communication complexity of $n^{d+O(1)}$. As applications to other related models, we show that our novel protocol implies improved communication complexity of ad-hoc PSM, where only a subset of parties actually send messages, and also leads to a more communication-efficient robust PSM protocol, which is secure against collusion of the external party and input parties. The extension to ad-hoc PSM is not a straightforward application of the previous transformation but includes an optimization technique based on the symmetry of functions.
Pratyush Dikshit, Ashkan Emami, Johannes Sedlmeir, Gilbert Fridgen
Proof-of-work (PoW)-based consensus mechanisms have long been criticized for their high resource consumption (electricity, e-waste) and their reliance on hash puzzles, which have no utility beyond cryptocurrencies. Proof-of-Useful Work (PoUW) has emerged as an alternative whose mining objective is expected to provide societal utility. Despite numerous designs, PoUW lacks practical relevance and theoretical scrutiny. In this paper, we provide a systematization of knowledge (SoK) on PoUW, focusing on security-economic trade-offs. We build a taxonomy to discuss core principles (difficulty adjustment, verifiability, etc.), architectures, trade-offs, and economic incentives. We examine more than 50 PoUW constructions, in which we find recurring shortcomings. We introduce a formal economic model of PoUW relating miner incentives, solution reusability, and external market value to the security budget. To validate our hypothesis, we employ a Toulmin-based evaluation of claims about the security and energy efficiency of these constructions. Our findings show that PoUW is not as useful as expected, since the economic and societal utility does not contribute to the security budget. Finally, we highlight design recommendations for PoUW that integrate verifiable computation, partial incentive allocation, and utility-aware difficulty adjustment.
Yashvanth Kondi
In this work, we investigate whether the cost of two-party ECDSA signing can be brought within the realm of plain ECDSA signing.
We answer the question in the affirmative for the case of communication complexity, by means of a new signing protocol. Our protocol consumes bandwidth linear in the security parameter, and hence the size of an ECDSA signature.
Our scheme makes only blackbox use of generic tools---Oblivious Transfer during key generation, and any Pseudorandom Function when signing.
While computation complexity is not asymptotically optimal, benchmarks of our protocol confirm that concrete costs are the lowest known for ECDSA signing.
Our protocol is therefore the most concretely efficient in the literature on all fronts: bandwidth, computation, and rounds.
On a technical level, our protocol is enabled by a novel Pseudorandom Correlation Function (PCF) for the Vector Oblivious Linear Evaluation correlation over a large ring. The PCF relies on one-way functions alone, and may be of independent interest.
Our scheme supports standard extensions, such as pre-signing, and including backup servers for key shares in a $(2,n)$ configuration.
Marius A. Aardal, Diego F. Aranha, Yansong Feng, Yiming Gao, Yanbin Pan
The hardness of finding isogenies of degree $d$ between supersingular elliptic curves is a fundamental assumption in isogeny-based cryptography. Let $E_1$ and $E_2$ be supersingular elliptic curves defined over $\mathbb{F}_{p^2}$, and let $d>p^{1/2}$ be smooth. At CRYPTO~2024, Benčina et al.\ proposed an algorithm with time complexity $\widetilde{O}(\max\{p^{1/2}, d/p^{5/8}\})$ in the classical setting and $\widetilde{O}(\max\{p^{1/4}, d^{1/2}/p^{1/4}\})$ in the quantum setting.
In this work, we first observe that their analysis omits a sub-exponential factor $\exp(O(\log^{3/4} p))$. We then improve their result to $\widetilde{O}(\max\{p^{1/2},\exp(O(\log^{4/5} p)) \cdot d/p^{2/3}\})$ classically and $\widetilde{O}(\max\{p^{1/4}, \exp(O(\log^{4/5} p)) \cdot d^{1/2}/p^{1/3}\})$ quantumly. Our approach relies on small-root bounds for Coppersmith’s method applied to a four-variable integer equation. To this end, we adapt the explicit asymptotic formulas for small-root bounds introduced by Feng et al.\ (CRYPTO~2025) in the modular setting to the integer setting. As an additional application, we strengthen the attack of Benčina et al.\ on a signature scheme introduced at ACNS~2024, reducing its security by 9 bits. We expect that these refined techniques for Coppersmith’s method will be valuable for further post-quantum cryptanalysis.