International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

08 October 2025

Puja Mondal, Suparna Kundu, Hikaru Nishiyama, Supriya Adhikary, Daisuke Fujimoto, Yuichi Hayashi, Angshuman Karmakar
ePrint Report
LESS is a digital signature scheme currently in the second round of the National Institute of Standards and Technology's (NIST's) ongoing additional call for post-quantum standardization. LESS is built from a zero-knowledge identification scheme via the Fiat-Shamir transformation, and its design is based on the hardness of the linear equivalence problem. The designers updated the scheme in the second round to improve efficiency and signature size. In the NIST standardization process, analysis of physical attacks such as side-channel and fault-injection attacks is considered nearly as important as mathematical security. In recent years, several such analyses have targeted LESS version 1 (LESS-v1); among these, the work by Mondal et al. at Asiacrypt 2024 is the most notable, exhibiting several attack surfaces on LESS-v1 that can be exploited through different fault attacks. However, the implementation of LESS version 2 (LESS-v2) has not yet been explored in this direction.

In this work, we analyze the new structure of the LESS-v2 signature scheme and propose a signature forgery procedure. This procedure does not require the full signing key, only some other secret-related information. Our analysis uncovered multiple attack surfaces in the design of LESS-v2 from which this secret-related information can be recovered. We assume an adversary can use interrupted-execution techniques, such as fault attacks, to recover it. We analyzed the average number of faults required under two particular fault models to recover the secret equivalent component and observed that only $1$ faulted signature is needed for most of the parameter sets. Our attacks rely on very simple and standard fault models, and we demonstrate them using both simulation and a simple experimental setup.
Minki Hhan, Tomoyuki Morimae, Yasuaki Okinaka, Takashi Yamakawa
ePrint Report
With the rapid advances in quantum computer architectures and the emerging prospect of large-scale quantum memory, it is becoming essential to classically verify that remote devices genuinely allocate the promised quantum memory with a specified number of qubits and coherence time. In this paper, we introduce a new concept, proofs of quantum memory (PoQM). A PoQM is an interactive protocol over a classical channel between a classical probabilistic polynomial-time (PPT) verifier and a quantum polynomial-time (QPT) prover, in which the verifier can verify that the prover has possessed a quantum memory with a certain number of qubits during a specified period of time. PoQM generalize the notion of proofs of quantumness (PoQ) [Brakerski, Christiano, Mahadev, Vazirani, and Vidick, JACM 2021]. Our main contributions are a formal definition of PoQM and constructions based on the hardness of LWE. Specifically, we give two constructions of PoQM. The first is a four-round protocol with negligible soundness error under the subexponential hardness of LWE. The second is a polynomial-round protocol with inverse-polynomial soundness error under the polynomial hardness of LWE. As a lower bound, we also show that PoQM imply one-way puzzles. Moreover, a certain restricted version of PoQM implies quantum computation classical communication (QCCC) key exchange.
Yang Liu, Zhen Shi, Chenhui Jin, Jiyan Zhang, Ting Cui, Dengguo Feng
ePrint Report
LOL (League of Legends) is a general framework for designing blockwise stream ciphers to achieve higher software performance in 5G/6G. Following this framework, the stream ciphers LOL-MINI and LOL-DOUBLE were proposed, each with a 256-bit key. This paper analyzes their security against correlation attacks. We also present an observation on constructing linear approximation trails: repeated linear approximations in trails may not only increase the number of active S-boxes but also make the correlations of the corresponding approximations zero. We propose a new rule to address this situation and develop multiple correlation attacks. Furthermore, we use theoretical cryptanalysis to lower the time complexity of aggregation and to construct multiple linear approximations efficiently. For LOL-MINI, we construct approximately $2^{32.17}$ linear approximations with absolute correlations ranging from $2^{-118.38}$ to $2^{-127}$. We launch the developed multiple correlation attack on LOL-MINI with a time complexity of $2^{250.56}$, a data complexity of $2^{250}$, a memory complexity of $2^{250}$, and a success probability of 99.4\%. For LOL-DOUBLE, there similarly exist approximately $2^{22.86}$ linear approximations with absolute correlations ranging from $2^{-147.23}$ to $2^{-156}$. It is worth noting that our results do not violate the security claims of LOL-MINI, since the maximum keystream length for a single key-IV pair is less than $2^{64}$.
Sabine Oechsner, Vitor Pereira, Peter Scholl
ePrint Report
Computer-aided cryptography, with particular emphasis on formal verification, promises an interesting avenue to establish strong guarantees about cryptographic primitives. The appeal of formal verification is to replace error-prone pen-and-paper proofs with a proof that was checked by a computer and, therefore, does not need to be checked by a human. In this paper, we ask how reliable these machine-checked proofs are by analyzing a formally verified implementation of the Line-Point Zero-Knowledge (LPZK) protocol (Dittmer, Eldefrawy, Graham-Lengrand, Lu, Ostrovsky and Pereira, CCS 2023). The implementation was developed in EasyCrypt and compiled into OCaml code that was claimed to be high-assurance, i.e., to offer formal guarantees of completeness, soundness, and zero knowledge. We show that despite these formal claims, the EasyCrypt model was flawed, and the supposedly high-assurance implementation had critical security vulnerabilities. Concretely, we demonstrate that: 1) the EasyCrypt soundness proof was done incorrectly, allowing an attack on the scheme that leads honest verifiers to accept false statements; and 2) the EasyCrypt formalization inherited a deficient model of zero knowledge for a class of non-interactive zero-knowledge protocols that also allows the verifier to recover the witness. In addition, we demonstrate 3) a gap in the proof of the perfect zero-knowledge property of the LPZK variant of Dittmer, Ishai, Lu and Ostrovsky (CCS 2022), on which the EasyCrypt proof is based, which, depending on the interpretation of the protocol and security claim, could allow a malicious verifier to learn the witness. Our findings highlight the importance of scrutinizing machine-checked proofs, including their models and assumptions. We offer lessons learned for both users and reviewers of tools like EasyCrypt, aimed at improving the transparency, rigor, and accessibility of machine-checked proofs. By sharing our methodology and challenges, we hope to foster a culture of deeper engagement with formal verification in the cryptographic community.
Zhenkai Hu, Haofei Liang, Xiao Wang, Xiang Xie, Kang Yang, Yu Yu, Wenhao Zhang
ePrint Report
Threshold fully homomorphic encryption (ThFHE) enables multiple parties to perform arbitrary computation over encrypted data, while the secret key is distributed across the parties. The main task in designing ThFHE is to construct threshold key-generation and decryption protocols for FHE schemes. Among existing FHE schemes, FHEW-like cryptosystems enjoy the advantage of fast bootstrapping and small parameters. However, known ThFHE solutions use the ``noise-flooding'' technique to realize threshold decryption, which requires either large parameters or switching to a scheme with large parameters via bootstrapping, leading to a slow decryption process. Besides, for key generation, existing ThFHE schemes either assume a generic MPC or a trusted setup, or incur noise growth that is linear in the number $n$ of parties. In this paper, we propose a fast ThFHE scheme, Ajax, by designing threshold key-generation and decryption protocols for FHEW-like cryptosystems. In particular, for threshold decryption, we eliminate the need for noise flooding and instead present a new technique called ``mask-then-open'', based on random double sharings over different rings, while keeping the advantage of small parameters. For threshold key generation, we show a simple approach to reduce the noise growth from $n$ times to $\max(0.038n, 2)$ times in the honest-majority setting, where at most $t=\lfloor (n-1)/2 \rfloor$ parties are corrupted. Our end-to-end implementation reports running times of 17.6 s and 0.9 ms (resp., 91.9 s and 4.4 ms) for generating a set of keys and decrypting a single ciphertext, respectively, for $n=3$ (resp., $n=21$) parties over a network with 1 Gbps bandwidth and 1 ms ping time. Compared to the state-of-the-art implementation, our protocol improves the end-to-end performance of the threshold decryption protocol by factors of $5.7\times$ to $283.6\times$ across different network latencies, from $t=1$ to $t=13$. Our approaches can also be applied to other FHE schemes such as BGV, BFV, and CKKS. In addition, we show how to strengthen our protocols to guarantee security against malicious adversaries.
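As a back-of-the-envelope illustration of the two closed-form quantities quoted above, the following Python sketch (a minimal illustration, not part of the paper's implementation; the helper name is hypothetical) computes the corruption threshold and noise-growth factor for the benchmarked party counts:

```python
# Hypothetical helper illustrating the abstract's formulas; the actual
# protocol parameters involve far more (ring choice, noise distributions).
def ajax_threshold_params(n):
    t = (n - 1) // 2              # honest-majority corruption threshold
    growth = max(0.038 * n, 2)    # noise-growth factor, vs. naive n-fold
    return t, growth

for n in (3, 21):                 # the two party counts benchmarked above
    t, g = ajax_threshold_params(n)
    print(f"n={n:2d}: t={t:2d}, noise growth {g:.2f}x (naive: {n}x)")
```

Note that for both benchmarked sizes the bound is the constant 2; the linear term $0.038n$ only dominates for $n \geq 53$.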
Rohit Chatterjee, Changrui Mu, Prashant Nalini Vasudevan
ePrint Report
We construct a public-key encryption scheme from the hardness of the (planted) MinRank problem over uniformly random instances. This corresponds to the hardness of decoding random linear rank-metric codes. Existing constructions of public-key encryption from such problems require hardness for structured instances arising from the masking of efficiently decodable codes. Central to our construction is the development of a new notion of duality for rank-metric codes.
Anik Basu Bhaumik, Suman Dutta, Siyi Wang, Anubhab Baksi, Kyungbae Jang, Amit Saha, Hwajeong Seo, Anupam Chattopadhyay
ePrint Report
The ZUC stream cipher is integral to modern mobile communication standards, including 4G and 5G, providing secure data transmission across global networks. Recently, Dutta et al. (Indocrypt 2024) presented the first quantum resource estimation of ZUC under Grover's search. Although preliminary, this work marks the beginning of quantum security analysis for ZUC. In this paper, we present an improved quantum resource estimation for ZUC, offering tighter bounds for Grover-based exhaustive key search. Beyond traditional quantum resource estimations, we also provide a concrete timescale required to execute such attacks with the specified quantum resources. Our findings show that a full-round, low-depth implementation of ZUC-128 can be realized with at most $375$ ancilla qubits, a T-count of $106536$, and a T-depth of $15816$. Furthermore, the Grover-based key search can be performed most efficiently using $1201$ logical qubits, $170681$ T gates, and a T-depth of $78189$, resulting in a runtime of $1.78\times10^{11}$ years, a 93.43% improvement over the $2.71 \times 10^{12}$ years estimated from the implementation of Dutta et al. We provide a similar analysis for ZUC-256, with a 99.23% decrease in time. These estimations assume state-of-the-art superconducting-qubit error-correction technology.
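To make the runtime figure concrete: with roughly $(\pi/4)\cdot 2^{64}$ Grover iterations for a 128-bit key and the quoted T-depth per iteration, an assumed time of about 5 µs per sequential T-layer reproduces the stated order of magnitude. This constant is a hypothetical figure of the right scale for surface-code estimates, chosen here to match the abstract; the paper's exact cost model may differ.

```python
import math

iters = math.pi / 4 * 2**64        # ~optimal Grover iterations, 128-bit key
t_depth_per_iter = 78189           # T-depth quoted above
t_layer_seconds = 5e-6             # assumed seconds per sequential T-layer
seconds_per_year = 3.156e7

years = iters * t_depth_per_iter * t_layer_seconds / seconds_per_year
print(f"{years:.2e} years")        # ~1.8e11, in line with the quoted runtime
```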
David Heath, Nakul Khambhati, Rafail Ostrovsky, Turan Vural
ePrint Report
Authenticated garbling, introduced by Wang et al. (CCS'17), is a leading paradigm for achieving constant-round maliciously secure 2PC. In this work, we upgrade authenticated garbling to efficiently support tensor gates (introduced in the context of semi-honest garbling by Heath et al., CCS'21) using the one-hot garbling technique. Our maliciously secure garbled tensor gate computes $\boldsymbol{x}\otimes \boldsymbol{y}$ for $\boldsymbol{x}\in \{0,1\}^n, \boldsymbol{y} \in \{0,1\}^m$ with $O(n+m)\kappa + O(nm)$ bits of communication, where $\kappa$ is the computational security parameter and $n,m$ are logarithmic in $\kappa$. This improves on the best prior constant-round maliciously secure approach, which incurs $O(nm)\kappa$ bits of communication. Our protocol is concretely efficient and improves over the state of the art for applications including integer multiplication, matrix multiplication, and more. We benchmark the online phase of our protocol and observe a $5.41\times$ improvement in communication and a $3.35\times$ improvement in wall-clock time when computing a $128\times 128$ bit tensor product.
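To see the communication gap, here is a quick comparison with the hidden big-O constants set to 1. This only illustrates the asymptotics; the measured $5.41\times$ improvement reflects the protocols' real constants and other costs.

```python
kappa = 128                         # computational security parameter (bits)
n = m = 128                         # the 128x128 tensor product benchmarked above

new_bits = (n + m) * kappa + n * m  # O(n+m)*kappa + O(nm), constants taken as 1
old_bits = n * m * kappa            # O(nm)*kappa, constant taken as 1
print(old_bits, new_bits, f"{old_bits / new_bits:.1f}x")  # asymptotic-only gap
```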
Goutam Paul, Anup Kumar Kundu, Sucheta Chakrabarti
ePrint Report
ChaCha and Salsa are two ARX-based stream ciphers widely used for data encryption, including in the TLS 1.3 standard and VPN software. Exploiting Probabilistic Neutral Bits (PNB) is one of the most significant cryptanalysis strategies against reduced-round versions of these ciphers. The seminal PNB work by Aumasson et al. (FSE 2008) claims that the PNB set mostly depends on the output bit difference occurring in the intermediate round. Subsequent works mainly relied on differential or differential-linear cryptanalysis, or on multiple distinct input-output differentials whose bias exceeds a threshold in the intermediate round. In this paper, we propose a new PNB set construction based on multiple output bit differences with respect to a single input bit difference. We exploit the differentials to mount key-recovery attacks using a multi-step procedure based on our new PNB set. Our attack achieves a time complexity of $2^{167.9008}$ for ChaCha20/7 and $2^{183.5361}$ for Salsa20/8, beating all existing PNB-based attacks on ChaCha20/7 and Salsa20/8 by a significant margin. Further, both our time and data complexities for ChaCha20/7.5 improve on the latest published work by Flórez-Gutiérrez and Todo (Eurocrypt 2025). We have also verified our attack experimentally on a published toy version of ChaCha (FSE 2023).
Joachim Neu, Javier Nieto, Ling Ren
ePrint Report
Proof-of-stake blockchains require consensus protocols that support Dynamic Availability and Reconfiguration (the so-called DAR setting), where the former means that the consensus protocol should remain live even if a large number of nodes temporarily crash, and the latter that it should be possible to change the set of operating nodes over time. State-of-the-art protocols for the DAR setting, such as Ethereum, Cardano's Ouroboros, or Snow White, require unrealistic additional assumptions, such as social consensus, or that key evolution is performed even while nodes are not participating. In this paper, we identify the necessary and sufficient adversarial condition under which consensus can be achieved in the DAR setting without additional assumptions. We then introduce a new and realistic additional assumption: honest nodes dispose of their cryptographic keys the moment they express intent to exit the set of operating nodes. To add reconfiguration to any dynamically available consensus protocol, we provide a bootstrapping gadget that is particularly simple and efficient in the common optimistic case of few reconfigurations and no double-spending attempts.
Vladimir Kolesnikov, Stanislav Peceny, Rahul Rachuri, Srinivasan Raghuraman, Peter Rindal, Harshal Shah
ePrint Report
Linear error-correcting codes with fast encoding and high minimum distance are a central primitive across modern cryptography. They appear most prominently in two domains: (1) pseudorandom correlation generators (PCGs), which enable sublinear-communication generation of correlations such as oblivious transfer (OT) and vector oblivious linear evaluation (VOLE), and (2) zero-knowledge (ZK) proof systems, where linear-time encoders underpin proof soundness and scalability. In both settings, the prover or sender must multiply by a large generator matrix $\mathbf{G}$, often with dimensions in the millions, making computational efficiency the dominant bottleneck.

We propose a generalized paradigm for building LPN-friendly codes with provable minimum distance. Roughly speaking, these codes are based on the idea of randomized turbo codes, such as repeat-accumulate codes; a toy encoder of this kind is sketched below. To prove their minimum distance, we present a generalized enumeration technique, which allows us to precisely compute the minimum distance for a broad class of codes. Although we do not prove their asymptotic behavior, the concrete parameters essentially give a linear-time encoder.
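For readers unfamiliar with the paradigm, a toy repeat-accumulate encoder looks as follows. This is a minimal sketch of the general idea only; the paper's Block-Accumulate codes and their distance analysis are more involved.

```python
import random

def ra_encode(bits, repeat=3, seed=0):
    """Toy repeat-accumulate encoder: repeat each bit, pass through a
    random interleaver, then accumulate (running XOR over GF(2))."""
    rep = [b for b in bits for _ in range(repeat)]  # repetition code
    random.Random(seed).shuffle(rep)                # random interleaver
    out, acc = [], 0
    for b in rep:                                   # accumulator: prefix XOR
        acc ^= b
        out.append(acc)
    return out

print(ra_encode([1, 0, 1, 1]))  # 12 output bits for 4 input bits
```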

Armed with these new techniques, we construct several novel codes, the most promising of which we call Block-Accumulate codes. Our original design goal was to construct codes that run efficiently on GPUs. Surprisingly, we find that our newly constructed codes are the fastest on both GPUs and CPUs, while at the same time achieving a better minimum distance. If we restrict our attention to codes with proofs, our code is $8\times$ faster than the state of the art on a CPU and $50\times$ faster on a GPU. Even with aggressive parameters, our code is $3\times$ and $20\times$ faster, respectively. Under these parameters, this yields overall PCG speedups of $2.5\times$ on the CPU and $15\times$ on the GPU, achieving about 200 million OTs or binary Beaver triples per second on the GPU (excluding the one-time 10 ms GGM seed expansion). We expect similar improvements when applied to the ZK space.
Jules Maire, Alan Pulval-Dady
ePrint Report
Blind signatures have become a cornerstone for privacy-sensitive applications such as digital cash, anonymous credentials, and electronic voting. The elliptic curve variant of the Digital Signature Algorithm (ECDSA) is widely adopted due to its efficiency in resource-constrained environments such as mobile devices and blockchain systems. Building blind ECDSA is hence a natural goal. We present the first such construction relying solely on the ECDSA assumption. Despite the inherent complexity of integrating blindness with ECDSA, we design a protocol that ensures both unforgeability and blindness without introducing new computational assumptions, while also achieving concurrent security. It involves zero-knowledge proofs based on the MPC-in-the-head paradigm for complex statements combining relations on encrypted elliptic curve points, their coordinates, and discrete logarithms.
Vipul Goyal, Justin Raizes
ePrint Report
A central challenge in data security is not just preventing theft, but detecting whether it has occurred. Classically, this is impossible because a perfect copy leaves no evidence. Quantum mechanics, on the other hand, forbids general duplication, opening up new possibilities.

We introduce Proofs of No Intrusion, which enable a classical client to remotely test whether a quantum server has been hacked and the client's data stolen. Crucially, the test does not destroy the data being tested, avoiding the need to store a backup elsewhere. We define and construct proofs of no intrusion for ciphertexts assuming fully homomorphic encryption. Additionally, we show how to equip several constructions of unclonable primitives, such as unclonable decryption keys and signature tokens, with proofs of no intrusion. Conceptually, proofs of no intrusion can be defined for essentially any unclonable primitive.

At the heart of our techniques is a new method for non-destructively testing coset states with classical communication. It can be viewed as a non-destructive proof of knowledge of a measurement result of the coset state.
Koen de Boer, Joël Felderhoff
ePrint Report
We present a novel analysis of a quantum algorithm for computing the S-unit group of a number field, due to Eisenträger et al. [EHKS14a] and Biasse and Song [BS16]. We prove that this quantum algorithm runs in polynomial time, explicitly quantifying the polynomials bounding the quantum gate and memory complexity (under GRH). We do so by carefully analyzing an implementation of a Continuous Hidden Subgroup Problem (CHSP) oracle function whose period is (the logarithm of) the S-unit group, and providing it to a CHSP-solving algorithm as in [BDF19]. Our analysis is novel in minimizing the use of the quantum-memory-inefficient LLL reduction, by resorting to strategically chosen precomputations of approximations of high powers of prime ideals. Additionally, we provide a new quantum algorithm computing a discrete Gaussian superposition, an analogue of the GPV algorithm by Gentry et al. [GPV08]. Lastly, we include a full and rigorous numerical analysis of all parts of the oracle-function-computing algorithm, allowing the use of fixed-point arithmetic and thus a precise quantification of the run time and memory.
Nikita Snetkov, Mihkel Jaas Karu, Jelizaveta Vakarjuk, Alisa Pankova
ePrint Report
Privacy-preserving technologies are a well-established area of research for modern digital systems, driven by regulatory mandates such as the GDPR and the growing demand for user privacy. One such technology is blind signatures: a primitive that supports the creation of privacy-sensitive applications, from secure e-voting and anonymous digital cash to privacy-preserving credentials and identity management systems. In this work, we introduce Coppercloud, a blind server-supported RSA signature scheme designed to enhance privacy in digital identity systems. Coppercloud enables a user to obtain a signature on a message without revealing its content to the supporting server, while distributing the signing key between the user's device and the supporting server. We formalize the security requirements for blind server-supported signing by defining an ideal functionality, and prove that Coppercloud securely realizes this functionality in the Universal Composability (UC) model.
Daniele Ballo
ePrint Report
This paper presents a unified information-theoretic framework for steganography that simultaneously addresses reliability, statistical undetectability, computational security, and robustness over noisy channels. Unlike prior work that treated these aspects separately under idealized assumptions, the framework formalizes them within a single model, providing definitions, capacity bounds, and impossibility results based on statistical distances. It also considers computationally bounded adversaries and trade-offs between payload, detectability, and error, offering a rigorous foundation for designing secure, robust, and efficient steganographic systems.
Sulaiman Alhussaini, Sergeĭ Sergeev
ePrint Report
One-sided linear systems of the form ``$Ax=b$'' are well-known and extensively studied over the tropical (max-plus) semiring and wide classes of related idempotent semirings. The usual approach is first to find the greatest solution of such a system in polynomial time, and then to solve the much harder problem of finding all minimal solutions. We extend this approach to the same systems over two well-known extensions of the tropical semiring, the symmetrized and supertropical semirings, and discuss the implications of our findings for tropical cryptography.
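For the plain max-plus case, the greatest solution mentioned above has a classical closed form by residuation, sketched below. This covers the base tropical semiring only; the symmetrized and supertropical extensions studied in the paper require the new machinery.

```python
def maxplus_greatest_solution(A, b):
    """Greatest candidate solution of max_j (A[i][j] + x[j]) = b[i],
    via residuation: x[j] = min_i (b[i] - A[i][j]). The system is
    solvable iff this candidate actually satisfies it."""
    n, m = len(A), len(A[0])
    x = [min(b[i] - A[i][j] for i in range(n)) for j in range(m)]
    ok = all(max(A[i][j] + x[j] for j in range(m)) == b[i] for i in range(n))
    return x, ok

print(maxplus_greatest_solution([[0, 3], [2, 1]], [4, 5]))  # ([3, 1], True)
```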
Donald Beaver
ePrint Report
Mental Poker enables two players to play card games at a distance without having a physical deck of cards at hand, as long as they have reliable trapdoor permutations or access to Oblivious Transfer (OT). OT itself can be stored and extended using just a one-way function. We examine whether Mental Poker can be extended from initial hands of ordinary physical poker.

On the theoretical side, this work establishes OT amplification/extraction as a cryptographic primitive using asymmetrically-viewed fully-randomizing permutations. On the pragmatic side, it is naturally related to ``card cryptography,'' although in a very restricted sense: 52-card-deck protocols using fully scrambling shuffles. While secret-key exchange protocols make strong use of full shuffles, full shuffles are avoided in 2PC/AND/Solitaire card settings because of detrimental over-randomization. Within those domains, this work explains decades-old impossibility conjectures in certain circumstances. More positively, we provide several practical and information-theoretically secure ways to establish base OTs from 5-Card Draw over ordinary decks, at rates sufficient to establish a foothold for indefinite Mental Poker extension in just an afternoon of play.
Mario Marhuenda Beltrán, Mustafa Khairallah
ePrint Report
Plaintext awareness of AEAD schemes is one of the more obscure and easily misunderstood notions. It was originally proposed by Andreeva et al.; Mennink and Talnikar (2025) showed that the original definitions are vague and leave too much room for interpretation. They presented new definitions and analyzed the three main AEAD compositions relative to them. In particular, they showed that MAC-then-Encrypt (MtE) is not plaintext-aware, whereas an SIV-style variant is. In this work, we first show that their analysis contains a gap, voiding their proof. We demonstrate this via several attacks: one against their choice of extractor, with trivial complexity, and one against any extractor, with birthday-bound complexity. Next, we re-establish their results by designing a new extractor that captures their intended goal, and prove a tight PA1 security bound. We also show that the result does not depend on the encryption scheme used, by designing an extractor for sMtE[ΘCB3], a variant where the encryption step is done by a ΘCB3-like scheme. Afterwards, we strengthen their results by revisiting other compositions. In particular, we show that Encrypt-then-MAC (EtM) is in fact PA2-secure, whereas SIV-style MtE cannot be PA2-secure. Additionally, we show that Encode-then-Encipher is PA2-secure, but not beyond the first successful forgery. More importantly, we show that, up to some necessary assumptions, PA2 and RAE are equivalent.

Last but not least, we investigate the value of the notion of plaintext awareness, looking deeper into its relation to confidentiality. We show that the problem of confidentiality in the presence of release of unverified plaintext can be seen as a confidentiality-with-leakage problem in the simulatable-leakage framework. In this regard, we start by showing that PA1 security cannot imply confidentiality with leakage. Similarly, we compare our results to the AERUP notion of TOSC 2019, showing that a scheme can be AERUP-secure yet insecure against a rather straightforward left-or-right attack in the same model. This puts into question the meaning and relevance of the PA1 and AERUP security notions. These results can be seen as providing security in a two-phase game, where the adversary does not observe any leakage after the first challenge query, as we argue in the paper. On the positive side, we show that if a scheme achieves IND-CPA, INT-RUP, and PA2, then it achieves confidentiality with leakage for the appropriate leakage function. This closes a gap in the literature on the relation between confidentiality-with-leakage and RUP notions.
Andrea Flamini, Federico Pintore, Edoardo Signorini, Giovanni Tognolini
ePrint Report
Non-interactive zero-knowledge proofs (NIZKPs) used as components in advanced cryptographic protocols typically require straight-line extractability to enable security analysis. While the widely-used Fiat-Shamir transform produces efficient and compact NIZKPs from Sigma protocols, its security proofs rely on adversary rewinding, which prevents straight-line extractability. The Fischlin transform offers an alternative that produces straight-line extractable NIZKPs from Sigma protocols, but typically sacrifices compactness in the process. In the post-quantum setting, group-action-based Sigma protocols have proven to be truly flexible for the design of advanced cryptosystems. These Sigma protocols have a small challenge space, which requires tailored optimizations to improve the compactness of the derived NIZKPs and signatures. Several such techniques for Fiat-Shamir NIZKPs have been studied. Among the most established, the fixed-weight technique leverages seed trees to encode the majority of the transcripts in the proof. However, implementing the same techniques within the Fischlin transform encounters significant obstructions: its impact is limited, and a closed-form analysis of its effectiveness appears intractable.
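As a reminder of where Fischlin's lack of compactness comes from: each repetition grinds over challenges until a hash test passes, so every repetition's full transcript must be included in the proof. Below is a schematic sketch of one repetition, with a placeholder `respond` callback standing in for the Sigma-protocol prover; names and parameters are illustrative, not from the paper.

```python
import hashlib

def fischlin_repetition(stmt, com, respond, i, b=8, t=12):
    """Grind over challenges until the hash has b low zero bits; the
    (com, ch, resp) triple is then included in the proof. Toy sketch of
    the Fischlin-style proof-of-work step only."""
    for ch in range(2**t):
        resp = respond(ch)
        h = hashlib.sha256(f"{stmt}|{com}|{i}|{ch}|{resp}".encode()).digest()
        if int.from_bytes(h, "big") % (1 << b) == 0:
            return ch, resp
    return None  # no valid challenge found; retry with a fresh commitment

print(fischlin_repetition("stmt", "com0", lambda ch: 3 * ch, 0))
```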

This work introduces the GAO (Group Action Oriented) transform, a new generic compiler that produces straight-line extractable NIZKPs from Sigma protocols while significantly simplifying the analysis of the fixed-weight framework. The GAO transform is then optimized in two different ways: by defining a collision predicate (yielding the Coll-GAO transform) and by adopting a technique, Stretch-and-Compress, that can be applied to improve both GAO and Coll-GAO (yielding the SC-GAO and SC-Coll-GAO transforms). The practical advantages of the SC-Coll-GAO transform are theoretically motivated and concretely tested on the LESS digital signature, a code-based candidate that recently advanced to the second round of the NIST standardization process for additional post-quantum signatures. Remarkably, compared to the Fiat-Shamir LESS baseline, SC-Coll-GAO incurs a computational cost increase of 50-60%, while signature sizes grow by only 10-20%.