IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
12 September 2025
Won Kim, Jeonghwan Lee, Hyeonhak Kim, Changmin Lee
SQIsign is an isogeny-based, post-quantum signature scheme over supersingular elliptic curves that represents isogenies via objects of a quaternion algebra, enabling very compact signatures and efficient computations. However, the SQIsign implementation relies on the GMP library, which dynamically allocates integer storage, hindering portability and complicating memory control. Furthermore, no consolidated worst-case bound on the integer coefficients representing quaternion algebra elements exists, leaving the required static precision unclear for a GMP-free implementation.
In this work, we audit every routine in the SQIsign Round-2 specification that manipulates quaternion elements and prove a uniform worst-case bound on coefficient growth. Complementing the theoretical bounds, we rerun the key generation and signing processes of the Round-2 SQIsign reference code (implemented with the GMP library), record peak operand sizes, and derive experimental bounds. Based on these bounds, we choose a fixed-size precision representation and implement SQIsign in C without dynamic allocation, i.e., without the GMP library.
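The fixed-precision approach described above can be sketched abstractly: once a worst-case bit bound on all coefficients is proven, every integer fits in a statically known number of machine limbs. The following Python sketch is our illustration, not the authors' C code; the 1000-bit bound is a made-up example. It shows fixed-limb arithmetic with an overflow assertion standing in for the dynamic reallocation that GMP would perform.

```python
# Fixed-limb integers: the limb count is fixed up front from a proven
# worst-case bit bound, so no dynamic allocation is ever needed.
LIMB_BITS = 64
LIMB_MASK = (1 << LIMB_BITS) - 1

def limbs_for(bound_bits):
    """Limbs needed for a proven worst-case bound, plus one carry limb."""
    return (bound_bits + LIMB_BITS - 1) // LIMB_BITS + 1

def to_limbs(x, n):
    """Encode a non-negative integer into exactly n little-endian 64-bit limbs."""
    out = []
    for _ in range(n):
        out.append(x & LIMB_MASK)
        x >>= LIMB_BITS
    assert x == 0, "value exceeds the static precision bound"
    return out

def add_limbs(a, b):
    """Constant-shape addition: same limb count in, same limb count out."""
    carry, out = 0, []
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & LIMB_MASK)
        carry = s >> LIMB_BITS
    assert carry == 0, "overflow past the static bound"
    return out

n = limbs_for(1000)                      # e.g. a (hypothetical) 1000-bit bound
a = to_limbs(2**999 + 12345, n)
b = to_limbs(2**998 + 67890, n)
s = add_limbs(a, b)
val = sum(l << (64 * i) for i, l in enumerate(s))
assert val == (2**999 + 12345) + (2**998 + 67890)
```

The key point is that the limb count `n` is a compile-time constant derived from the proven bound, so memory usage is fully static.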
Jian Guo, Shichang Wang, Tianyu Zhang
ChiLow is a family of tweakable block ciphers proposed at EUROCRYPT 2025. In this paper, we present a cryptanalysis of ChiLow based on the Meet-in-the-Middle (MITM) attack framework.
For ChiLow-32, we first present an MITM attack on full ChiLow-32 exploiting the cipher's diffusion properties, which achieves a time complexity of $2^{122.6}$ using 97 known plaintext-ciphertext (P-C) pairs. Building on this, we further introduce a refinement based on the linearization of the $\chi$ function. By using more known pairs, we significantly improve the attack, reducing the time complexity to $2^{108.6}$ with 196 known P-C pairs. For ChiLow-40, we mount attacks on reduced-round versions: a 7-round attack with time complexity $2^{127.4}$ requiring 164 known P-C pairs, and a 6-round attack with time complexity $2^{88.9}$ requiring 162 known P-C pairs.
Behzad Abdolmaleki, Ruben Baecker, Paul Gerhart, Mike Graf, Mojtaba Khalili, Daniel Rausch, Dominique Schröder
Password-Hardened Encryption (PHE) protects against offline brute-force attacks by involving an external ratelimiter that enforces rate-limited decryption without learning passwords or keys. Threshold Password-Hardened Encryption (TPHE), introduced by Brost et al. (CCS’20), distributes this trust among multiple ratelimiters. Despite its promise, the security foundations of TPHE remain unclear. We make three contributions:
(1) We uncover a flaw in the proof of Brost et al.’s TPHE scheme, which invalidates its claimed security and leaves the guarantees of existing constructions uncertain;
(2) We provide the first universal composability (UC) formalization of PHE and TPHE, unifying previous fragmented models and supporting key rotation, an essential feature for long-term security and related primitives such as updatable encryption;
(3) We present the first provably secure TPHE scheme, which is both round-optimal and UC-secure, thus composable in real-world settings; and we implement and evaluate our protocol, demonstrating practical efficiency that outperforms prior work in realistic WAN scenarios.
Mingshu Cong, Sherman S. M. Chow, Siu Ming Yiu, Tsz Hon Yuen
Sublinear proof sizes have recently become feasible in verifiable machine learning (VML), yet no approach achieves the trio of strictly linear prover time, logarithmic proof size and verification time, and architecture privacy. Hurdles persist because we lack a succinct commitment to the full neural network and a framework for heterogeneous models, leaving verification dependent on architecture knowledge. Existing limits motivate our new approach: a unified proof-composition framework that casts VML as the design of zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) for matrix computations. Representing neural networks with linear and non-linear layers as a directed acyclic graph of atomic matrix operations enables topology-aware composition without revealing the graph. Modeled this way, we split proving into a reduction layer and a compression layer that attests to the reduction with a proof of proof. At the reduction layer, inspired by reduction of knowledge (Crypto '23), root-node proofs are reduced to leaf-node proofs under an interface standardized for heterogeneous linear and non-linear operations. Next, a recursive zkSNARK compresses the transcript into a single proof while preserving architecture privacy.
Complexity-wise, for a matrix expression with $M$ atomic operations on $n \times n$ matrices, the prover runs in $O(M n^2)$ time while proof size and verification time are $O(\log(M n))$, outperforming known VML systems. Honed for this framework, we formalize relations directly in matrices or vectors---a more intuitive form for VML than traditional polynomials. Our LiteBullet proof, an inner-product proof based on folding and its connection to sumcheck (Crypto '21), yields a polynomial-free alternative. With these ingredients, we reconcile heterogeneity, zero-knowledge, succinctness, and architecture privacy in a single VML system.
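The directed-acyclic-graph view of a neural network as atomic matrix operations can be illustrated with a toy evaluator. This is our sketch, not the paper's framework: the node names and the two op types ("matmul", "relu") are placeholder choices standing in for heterogeneous linear and non-linear layers.

```python
# Evaluate a network given as a DAG of atomic matrix operations,
# listed in topological order. Pure-Python matrices (lists of rows).

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def relu(A):
    return [[max(0, x) for x in row] for row in A]

def eval_dag(nodes, inputs):
    """nodes: list of (name, op, operand_names); inputs: dict of leaf matrices."""
    env = dict(inputs)
    for name, op, args in nodes:
        if op == "matmul":
            env[name] = matmul(env[args[0]], env[args[1]])
        elif op == "relu":
            env[name] = relu(env[args[0]])
    return env

W = [[1, -1], [0, 2]]
X = [[3], [4]]
env = eval_dag([("h", "matmul", ("W", "X")), ("y", "relu", ("h",))],
               {"W": W, "X": X})
assert env["y"] == [[0], [8]]
```

In the paper's setting, each such node would come with a proof for its atomic relation, and the composition layer would attest to the whole topological evaluation without revealing the graph.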
Gustavo Banegas, Andreas Hellenbrand, Matheus Saldanha
Isogeny-based cryptography has emerged as a promising post-quantum alternative, with CSIDH and its constant-time variants CTIDH and dCTIDH offering efficient group-action protocols. However, CTIDH and dCTIDH rely on dummy operations in differential addition chains (DACs) and in the Matryoshka structure, which can be exploited by fault-injection attacks.
In this work, we present the first dummy-free implementation of dCTIDH. Our approach combines two recent ideas: dacshund, which enforces equal-length DACs within each batch without padding, and a reformulated Matryoshka structure that removes dummy multiplications and validates all intermediate points.
Our analysis shows that small primes such as $3, 5$, and $7$ severely restrict feasible dacshund configurations, motivating new parameter sets that exclude them. We implement dummy-free dCTIDH-2048-194 and dCTIDH-2048-205, achieving group-action costs of roughly $357{,}000$–$362{,}000$ $\mathbb{F}_p$-multiplications, with median evaluation times of $1.59$–$1.60$ Gcyc. These results do not surpass dCTIDH, but they outperform CTIDH by roughly $5\%$ while eliminating dummy operations entirely. Compared to dCSIDH, our construction is more than $4\times$ faster.
To the best of our knowledge, this is the first efficient implementation of a CSIDH-like protocol that is simultaneously deterministic, constant-time, and fully dummy-free.
Lennart Braun, Geoffroy Couteau, Kelsey Melissaris, Mahshid Riahinia, Elahe Sadeghi
We introduce a new and efficient pseudorandom correlation function whose security reduces to the sparse LPN assumption in the random oracle model. Our construction is the first to achieve high concrete efficiency while relying on well-established assumptions: previous candidates either required introducing new assumptions or had poor concrete performance. We complement our result with an in-depth analysis of the sparse LPN assumption, providing new insights into how to evaluate the strength of concrete sets of parameters.
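For intuition, a sparse LPN sample consists of a public vector with only $t$ nonzero positions and a noisy inner product with the secret. The sketch below is our illustration with arbitrary toy parameters, not the paper's parameter sets; it generates such samples over $\mathbb{F}_2$.

```python
import random

# Sparse LPN over F_2: each public row a has exactly t nonzero positions
# out of n, and b = <a, s> + e (mod 2) with Bernoulli(tau) noise e.
def sparse_lpn_samples(n, t, tau, m, seed=0):
    rng = random.Random(seed)
    s = [rng.randrange(2) for _ in range(n)]            # secret
    samples = []
    for _ in range(m):
        support = rng.sample(range(n), t)               # t-sparse row (by support)
        e = 1 if rng.random() < tau else 0              # noise bit
        b = (sum(s[i] for i in support) + e) % 2
        samples.append((support, b))
    return s, samples

s, samples = sparse_lpn_samples(n=64, t=5, tau=0.125, m=100)
# Sanity check: with tau = 0 every sample satisfies b = <a, s> mod 2 exactly.
s0, clean = sparse_lpn_samples(n=64, t=5, tau=0.0, m=100, seed=1)
assert all(b == sum(s0[i] for i in sup) % 2 for sup, b in clean)
```

The sparsity $t$ is what distinguishes this from standard LPN, and is the parameter whose concrete hardness the paper's analysis addresses.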
Wenquan Zhou, An Wang, Yaoling Ding, Annv Liu, Jingqi Zhang, Jiakun Li, Liehuang Zhu
Non-invasive security constitutes an essential component of hardware security, primarily involving side-channel analysis (SCA), with various international standards explicitly mandating rigorous testing. However, current SCA assessments heavily depend on expert manual procedures, resulting in significant expenditures of time and resources. Automated evaluation frameworks are not yet available. In recent years, Large Language Models (LLMs) have been widely adopted in various fields such as language generation, owing to their emergent capabilities. Particularly, LLM agents equipped with tool-usage capabilities have significantly expanded the potential of these models to interact with the physical world.
Motivated by these recent advances in LLM agents, we propose SCA-GPT, an LLM agent framework tailored for SCA tasks. The framework incorporates a Retrieval-Augmented Generation (RAG)-based expert knowledge base along with multiple SCA tools. We present a domain-specific expert knowledge base construction approach and two complementary evaluation metrics. Retrieval experiments validate the effectiveness of our knowledge base construction, achieving an average weighted score of 88.7% and an nDCG@5 of 90%, which demonstrates the contribution of structured expert entries to retrieval accuracy. By effectively infusing expert knowledge, SCA-GPT achieves fully automated, end-to-end ISO/IEC 17825-compliant tests. We conduct comprehensive experiments across three leading LLMs (DeepSeek V3, Kimi K2, and GLM 4.5), using datasets spanning seven cryptographic algorithms (e.g., AES, RSA, ECC, Kyber) and deploying on four hardware platforms, including smart cards, microcontrollers, and FPGAs. Results show that DeepSeek V3, Kimi K2, and GLM 4.5 achieve accuracies of 91.0%, 87.7%, and 82.0%, respectively, with the agent reducing testing time by an average of 76% compared with manual procedures. Notably, SCA-GPT is the first advanced LLM agent specifically designed for SCA tasks.
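The nDCG@5 metric reported above is the standard normalized discounted cumulative gain; the following sketch shows how it is computed. The relevance scores are invented example data, not the paper's measurements.

```python
import math

# nDCG@k: DCG of the returned ranking, normalized by the DCG of
# the ideal (descending-relevance) ranking of the same items.
def dcg_at_k(rels, k):
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))

def ndcg_at_k(rels, k=5):
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

# A perfectly ordered result list scores 1.0; misordering lowers the score.
assert ndcg_at_k([3, 2, 1, 0, 0]) == 1.0
assert ndcg_at_k([0, 2, 3, 1, 0]) < 1.0
```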
Furkan Kerim Çabaş, Oğuz Yayla
Secure multi-party computation (MPC) protocols translating between arithmetic and binary data types have recently gained traction. Such conversions were introduced by Rotaru and Wood in 2019 under the name daBits and improved by Escudero et al. as edaBits. An edaBit consists of secret shares of a value in the arithmetic domain together with secret shares, in the binary domain, of that value's bit decomposition. These protocols serve as preprocessing for MPC protocols in order to improve efficiency. Furthermore, the fluid MPC setting, introduced by Choudhuri, Goel, Green, Jain and Kaptchuk in 2021, is a framework in which the parties, called dynamic parties, do not have to be online throughout the whole process. The fluid MPC framework is still under active development, and conventional protocols and techniques have to be adapted to it. Our work introduces an edaBits protocol specifically designed for the fluid MPC setting under a four-party honest-majority model. The protocol, which consists of eight subprotocols, achieves security against an honest majority and departs slightly from the traditional cut-and-choose paradigm. It attains linear memory and time complexity, as well as constant communication complexity. Finally, we demonstrate two applications that employ edaBits within the fluid MPC framework under the semi-honest adversary model.
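The edaBit object itself is easy to picture: additive shares of a random $r \in \mathbb{Z}_{2^k}$ paired with XOR shares of the bits of $r$. The toy generator below is our simplification, with a trusted dealer in place of an actual protocol, and checks that the two sharings are consistent.

```python
import random

# Dealer-generated edaBit for n parties: additive shares of r mod 2^k
# plus XOR shares of each bit of r. (Real protocols avoid the dealer.)
def gen_edabit(k, n=2, rng=None):
    rng = rng or random.Random(42)
    r = rng.randrange(2 ** k)
    # Additive arithmetic shares of r mod 2^k.
    arith = [rng.randrange(2 ** k) for _ in range(n - 1)]
    arith.append((r - sum(arith)) % 2 ** k)

    def xor_shares(bit):
        sh = [rng.randrange(2) for _ in range(n - 1)]
        acc = 0
        for v in sh:
            acc ^= v
        sh.append(bit ^ acc)                 # last share fixes the XOR to bit
        return sh

    binary = [xor_shares((r >> i) & 1) for i in range(k)]
    return arith, binary

arith, binary = gen_edabit(k=8, n=4)
r = sum(arith) % 2 ** 8                      # reconstruct arithmetic value
bits = [s[0] ^ s[1] ^ s[2] ^ s[3] for s in binary]
assert r == sum(b << i for i, b in enumerate(bits))
```

The consistency assertion at the end is exactly the correlation the preprocessing protocol must produce (and, in the malicious setting, verify).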
Jiangxia Ge, Kang Yang, Yang Yu, Yu Yu
The decryption error is an important parameter that requires estimation when applying the Fujisaki-Okamoto (FO) transformation. In general, compared with the worst-case decryption error $\delta_{wc}$, the average-case decryption error $\delta_{ac}$ is simpler to estimate, as the latter is decoupled from the malicious message selected by the adversary. In light of this, Duman et al. (PKC 2023) proposed FO variants $FOAC_0:=FO_m^\bot\circ ACWC_0$ and $FOAC:=FO_m^\bot\circ ACWC$, and subsequently proved their IND-CCA security based on $\delta_{ac}$ and $\gamma$-spread in the ROM and QROM.
In this paper, we revisit the security proof of these variants and obtain the following results:
1. We present a tighter IND-CCA security proof of $FOAC_0$ in the ROM and QROM, while removing the $\gamma$-spread requirement imposed by Duman et al. Furthermore, as a direct corollary, we fill the gap in the IND-CCA security proof of BAT (CHES 2022) and give a correct one.
2. We present a tighter IND-CCA security proof of $FOAC$ in the QROM. In this proof, we also provide a tighter OW-CPA security proof of ACWC in the QROM, which reduces the loss factor of $q^2$ introduced by Duman et al. to $q$, where $q$ denotes the total number of random oracle queries. This answers an open question posed by them.
3. Based on $FO_m^{\not\bot}$, we define $FOAC^{\not\bot}:=FO_m^{\not\bot}\circ ACWC$ and provide its IND-CCA security proof in the ROM and QROM. The advantage of $FOAC^{\not\bot}$ is that it neither introduces ciphertext expansion as $FOAC_0$ does nor requires $\gamma$-spread as $FOAC$ does.
In addition, we propose a new Check Query Replacement technique to complete our QROM proofs, which may be of independent interest.
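The structure of an FO-style transform with a re-encryption check can be sketched as follows. Everything here is our illustration: the stand-in "PKE" (which sets pk = sk) is deliberately trivial and insecure, and serves only to exercise the derandomization and explicit-rejection logic common to these variants.

```python
import hashlib

def H(tag, *parts):
    """Domain-separated hash, standing in for the random oracles G, H."""
    h = hashlib.sha256(tag.encode())
    for p in parts:
        h.update(p)
    return h.digest()

def toy_enc(pk, m, r):
    """Toy 'PKE': XOR pad derived from (pk, r); deterministic once r is fixed."""
    pad = H("enc", pk, r)
    return bytes(a ^ b for a, b in zip(m, pad[:len(m)])) + r

def toy_dec(sk, c):
    m_enc, r = c[:-32], c[-32:]
    pad = H("enc", sk, r)
    return bytes(a ^ b for a, b in zip(m_enc, pad[:len(m_enc)])), r

def encaps(pk, m):
    r = H("G", m)                        # derandomization: r = G(m)
    return toy_enc(pk, m, r), H("K", m)  # ciphertext and session key

def decaps(sk, pk, c):
    m, _ = toy_dec(sk, c)
    if toy_enc(pk, m, H("G", m)) != c:   # re-encryption check
        return None                      # explicit rejection (the "bot" case)
    return H("K", m)

sk = pk = b"k" * 32                      # trivial key pair, illustration only
c, K = encaps(pk, b"hello")
assert decaps(sk, pk, c) == K
tampered = bytes([c[0] ^ 1]) + c[1:]
assert decaps(sk, pk, tampered) is None
```

Implicit-rejection variants ($\not\bot$) would return a pseudorandom key instead of `None` on failure; the re-encryption check itself is unchanged.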
Artyom Kuninets, Anton Leevik, Ekaterina Malygina, Evgeniy Melnichuk, Denis Nabokov
In this work, we investigate the application of Barnes-Wall lattices in post-quantum cryptographic schemes. We survey and analyze several constructions of Barnes-Wall lattices, including subgroup chains, the generalized $k$-ing construction, and connections with Reed-Muller codes, highlighting their equivalence over both $\mathbb{Z}[i]$ and $\mathbb{Z}$. Building on these structural insights, we introduce a new algorithm for efficient sampling from the discrete Gaussian distribution on $BW_{2^n}$ lattices. Our approach exploits the $k$-ing and squaring constructions to achieve low-variance sampling, which is particularly relevant for cryptographic applications such as digital signature schemes. We further examine the cryptographic hardness of the Lattice Isomorphism Problem (LIP), showing that Barnes-Wall lattices provide inherent resistance to hull-based and other known attacks. Our results on sampling algorithms, combined with existing advances in the cryptanalysis of LIP, indicate that Barnes-Wall lattices hold strong potential for the design of post-quantum schemes based on LIP.
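As a baseline for comparison, the textbook rejection sampler for the discrete Gaussian on $\mathbb{Z}$ looks as follows. This is our sketch of the standard building block; the paper's sampler works on $BW_{2^n}$ lattices via the $k$-ing and squaring constructions, which we do not reproduce here.

```python
import math, random

# Rejection sampling from the discrete Gaussian D_{Z,sigma}:
# propose z uniformly in a tail-cut interval, accept with
# probability exp(-z^2 / (2 sigma^2)).
def discrete_gaussian(sigma, rng, tail=12):
    bound = int(math.ceil(tail * sigma))
    while True:
        z = rng.randint(-bound, bound)
        if rng.random() < math.exp(-z * z / (2 * sigma ** 2)):
            return z

rng = random.Random(0)
xs = [discrete_gaussian(3.0, rng) for _ in range(2000)]
mean = sum(xs) / len(xs)
assert all(abs(x) <= 36 for x in xs)   # respects the tail cut
assert abs(mean) < 0.5                 # empirical mean near 0
```

Lattice samplers reduce to many such one-dimensional samples; the variance achieved per coordinate is what the paper's construction-aware approach improves.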
Mario Yaksetig, Jiayu Xu
In this work, we introduce Rayls, a new central bank digital currency (CBDC) design that provides privacy, high performance, and regulatory compliance. In our construction, commercial banks each run their own (private) ledger and are connected to an underlying decentralized programmable blockchain via a relayer.
We also introduce a novel protocol, which we call Enygma, that allows for efficient anonymous transactions between banks. Our design is 'quantum-private': a quantum adversary is not able to infer any information (i.e., payer, payee, amounts) about the transactions that take place in the network. We achieve high performance in cross-bank settlement via the use of ZK-SNARKs and 'double' batching. Concretely, our transactions consist of a set of commitments and a zero-knowledge proof. As a result, each transaction can pay more than one bank at once, and each individual commitment can contain aggregated transfers from multiple users. For example, bank A transfers $1M to a different bank B, and that amount is actually a sum of multiple users making transfers to bank B. Commercial banks can then enforce regulatory rules locally within their ledgers. Our system is in production with one of the largest clearing houses in the world and is currently being explored in a CBDC pilot.
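The batching idea, where one commitment carries the sum of many users' transfers, relies on additively homomorphic commitments. The toy Pedersen-style example below is our illustration with insecure toy parameters, not the actual Enygma construction.

```python
# Pedersen-style commitment C(v, r) = g^v * h^r mod p. The product of
# commitments commits to the sum of values (and the sum of blinds),
# which is why per-user transfers can be batched into one commitment.
p = 2 ** 127 - 1                     # a Mersenne prime; toy-sized, insecure
g, h = 3, 5                          # illustration only: h's dlog must be unknown in practice

def commit(v, r):
    return (pow(g, v, p) * pow(h, r, p)) % p

# Bank A aggregates several users' transfers to bank B into one commitment:
amounts = [250_000, 600_000, 150_000]
blinds  = [11, 22, 33]
batched = commit(sum(amounts), sum(blinds))
product = 1
for v, r in zip(amounts, blinds):
    product = (product * commit(v, r)) % p
assert product == batched            # homomorphism: the sums live in one commitment
```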
Mario Yaksetig, Pedro M. F. Pereira, Stephen Yang, Mahdi Nejadgholi, Jiayu Xu
In this work, we introduce an upgrade to Rayls, a novel central bank digital currency (CBDC) design that provides privacy, high performance, and regulatory compliance. In the Rayls construction, commercial banks each run their own (private) ledger and are connected to an underlying decentralized programmable blockchain via a relayer.
We introduce an improved and more efficient protocol for anonymous transactions between banks, which is (still) called Enygma. Our design is 'quantum-private': a quantum adversary is not able to infer any information (i.e., payer, payee, amounts) about the transactions that take place in the network. One of the main improvements of this work is that the design is now completely quantum-secure except for the current use of ZK-SNARKs; we note, however, that this component is modular and can be replaced as desired.
We achieve high performance in cross-bank settlement via the use of ZK-SNARKs, 'double' batching, and an optimized ZK implementation, with a prover three times faster than the initial implementation and a cheaper on-chain verifier. Our transactions consist of a set of commitments and a zero-knowledge proof. As a result, each transaction can pay more than one bank at once, and each individual commitment can contain aggregated transfers from multiple users. For example, bank A transfers $1M to a different bank B, and that amount is actually a sum of multiple users making transfers to bank B. Commercial banks can then enforce regulatory rules locally within their ledgers. Our system is in production with one of the largest clearing houses in the world and is currently being explored in a CBDC pilot.
Sebastian Hasler, Pascal Reisert, Ralf Küsters
State-of-the-art actively secure multi-party computation protocols, like SPDZ (Damgård et al., CRYPTO 2012), use correlated randomness, like Beaver triples, to achieve a highly efficient online phase. For a long time, the generation of the correlated randomness in the offline phase relied on classical cryptographic primitives, like somewhat homomorphic encryption or oblivious transfer, that required significant communication.
More recently, Boyle et al. (FOCS 2020) defined a new primitive called pseudorandom correlation functions (PCFs) to generate correlated randomness non-interactively. PCFs set up keys for each party in an initial interactive phase, which can then be used by the parties to generate a large number of shares of the correlated randomness without further communication. In the random oracle model (ROM), two-party PCFs can be generically constructed by evaluating a weak pseudorandom function (WPRF) under a powerful-enough homomorphic secret sharing (HSS) scheme. However, the concrete efficiency of instantiations of this approach has not been analyzed so far. There are also some works that construct PCFs based on other approaches, but they cannot be used for correlations of degree $\ge 2$ (e.g., Beaver triples) over large rings/fields (such as those used in SPDZ).
In this paper, we improve the complexity and concrete efficiency of PCFs over large rings/fields by presenting a new generic PCF based on the hardness of the Ring-Learning with Rounding problem (Ring-LWR) and FHE. We only share BFV keys in the initial interactive phase. The two parties then use the random oracle to locally sample BFV (pseudo-)ciphertexts encrypting pseudorandom plaintexts. We use a new bootstrapping algorithm for these (pseudo-)ciphertexts that reduces initially saturated noise to a level where the parties can use the homomorphic properties of the BFV scheme to correlate the encrypted randomness locally. Both parties can then produce, without further interaction, shares of the correlated randomness with their secret key share. Our new PCF works with any form of correlated randomness that can be expressed as an arithmetic circuit over a base ring $\mathbb Z_t$ or field $\mathbb F_{p^d}$, e.g., Beaver or matrix triples.
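The Ring-LWR ingredient replaces sampled noise with deterministic rounding, which is why both parties can derive the same pseudorandom value locally from random-oracle output. A scalar LWR sketch (with toy parameters of our choosing, far below cryptographic size) illustrates the map:

```python
import random

# Learning with Rounding: b = round((p/q) * <a, s>) mod p.
# No sampled error term: the "noise" is the deterministic rounding itself,
# so the same (a, s) always yields the same b.
def lwr_sample(a, s, q, p):
    inner = sum(x * y for x, y in zip(a, s)) % q
    return ((p * inner + q // 2) // q) % p      # nearest-integer rounding

rng = random.Random(7)
q, p, n = 2 ** 16, 2 ** 10, 8                   # toy moduli and dimension
s = [rng.randrange(q) for _ in range(n)]
a = [rng.randrange(q) for _ in range(n)]        # in the PCF: derived from the RO
b = lwr_sample(a, s, q, p)
assert 0 <= b < p
assert b == lwr_sample(a, s, q, p)              # deterministic and reproducible
```

In the paper's PCF, the role of `a` is played by pseudo-ciphertexts sampled locally from the random oracle, and determinism is what makes the generated correlated randomness consistent across the two parties without interaction.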
Daniel Pöllman, Tianxin Tang
Encrypted search focuses on protecting sensitive data in outsourced environments while enabling private queries. Although standard encrypted search algorithms are efficient, they often leak some information about the queries and data. One such leakage is the access pattern on the outsourced storage. Recent leakage-abuse attacks have exploited this seemingly harmless leakage to successfully recover both queries and data, shifting research priorities towards finding the right balance between privacy and performance. While some proposals leverage oblivious RAM or other oblivious data structures to hide the access pattern, they typically incur significant bandwidth costs.
In response, researchers have developed new schemes that ensure access leakage satisfies differential privacy (DP). Yet the security implications of these new guarantees remain unclear; in particular, the application and threat model differ significantly from those of conventional differential privacy. To understand these implications, we investigate two concrete instances of (encrypted) range-query schemes (which appeared in SODA '19 and CCS '22) that achieve differentially private access. We analyze their security guarantees using inference attacks to recover queries and data on real-world datasets. Our findings raise a critical concern: ensuring that access leakage is differentially private either falls short of providing strong security for the queries and data, diverging from the initial goals, or offers only weak security at a high efficiency/correctness cost. As part of our analysis, we also propose a generic security definition for DP access, and identify two general techniques for leakage mitigation, bucketization and partitioning, that may be of independent interest.
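The kind of mechanism these DP-access schemes apply can be pictured as Laplace noise added to observable quantities such as per-bucket access counts. The sketch below is our minimal example; the value of $\varepsilon$ and the counts are invented.

```python
import math, random

# Laplace mechanism on per-bucket access counts: for sensitivity-1 counts,
# adding Laplace(1/eps) noise makes each released count eps-DP.
def laplace(scale, rng):
    """Inverse-CDF sampling of the Laplace distribution with the given scale."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_counts(counts, eps, sensitivity=1, rng=None):
    rng = rng or random.Random(0)
    b = sensitivity / eps
    return [c + laplace(b, rng) for c in counts]

true = [10, 0, 25, 3]                 # e.g. accesses per range bucket
noisy = dp_counts(true, eps=1.0)
assert len(noisy) == len(true)
assert all(abs(n - t) < 20 for n, t in zip(noisy, true))
```

The paper's point is precisely that per-count guarantees like this do not automatically translate into strong protection for queries and data, because the adversary observes many correlated noisy counts.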
Alex Charlès, Aleksei Udovenko
In the area of white-box cryptography implementations, many existing protections are susceptible to attacks derived from physical cryptanalysis, which can be applied with minimal human effort and no prior design knowledge. The absence of a clear and comprehensive security model hinders the development of effective countermeasures against these attacks.
We introduce the Haystack ciphers, a formal model for the security of white-box countermeasures against such attacks. In this model, the countermeasures are represented simply as symmetric-key encryption schemes. We show that their chosen-plaintext (IND-CPA) security is closely related to the resistance of the countermeasures against computational trace-based attacks. Similarly, their chosen-ciphertext (IND-CCA) security is closely associated with the resistance against fault injection attacks in the white-box model. Secure Haystack ciphers constitute the next formal milestone for advancing white-box designs and countermeasures, the minimal requirement that is not currently clearly achieved but is plausibly feasible with available tools.
We review the white-box literature with respect to our model and bridge the gap between white-box and fault attacks, which are very powerful but have so far been only partially considered in the white-box literature. We study known fault protections from the physical cryptography literature and present new fault attacks in the white-box setting, which raises the need, and shapes the requirements, for future secure countermeasures against fault attacks.
Chi Feng, Lei Fan
This paper presents BlockLens, a supervised, trace-level framework for detecting malicious Ethereum transactions using large language models. Unlike previous approaches that rely on static features or storage-level abstractions, our method processes complete execution traces, capturing opcode sequences, memory information, gas usage, and call structures to accurately represent the runtime behavior of each transaction. This framework harnesses the exceptional reasoning capabilities of LLMs for long input sequences and is fine-tuned on transaction data.
We present a tokenization strategy aligned with Ethereum Virtual Machine (EVM) semantics that converts transaction execution traces into tokens. Each transaction captures its complete execution trace through simulated execution and is sliced into overlapping chunks using a sliding window, allowing for long-range context modeling within memory constraints. During inference, the model outputs both a binary decision and a probability score indicating the likelihood of malicious behavior.
We implemented the framework based on LLaMA 3.2-1B and fine-tuned the model using LoRA. We evaluated it on a curated dataset that includes both real-world attacks and normal DeFi transactions. Our model outperforms representative baselines, achieving higher F1 scores and recall at top-k thresholds. Additionally, this work offers interpretable chunk-level outputs that enhance explainability and facilitate actionable decision-making in security-critical environments.
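The sliding-window slicing step can be sketched as follows; the window and stride sizes are illustrative choices of ours, and the strings stand in for EVM trace tokens.

```python
# Slice a long token sequence into overlapping fixed-size chunks so each
# chunk fits the model's context while neighbors share window - stride tokens.
def chunk_trace(tokens, window=8, stride=4):
    if len(tokens) <= window:
        return [tokens]
    return [tokens[i:i + window]
            for i in range(0, len(tokens) - window + stride, stride)]

trace = [f"op{i}" for i in range(20)]
chunks = chunk_trace(trace)
assert chunks[0] == trace[:8]
assert chunks[-1][-1] == "op19"            # the final token is always covered
assert all(len(c) <= 8 for c in chunks)
assert chunks[0][4:] == chunks[1][:4]      # 4-token overlap between neighbors
```

Per-chunk model scores can then be aggregated into the transaction-level decision and probability described above.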
Sohyun Jeon, Calvin Abou Haidar, Mehdi Tibouchi
In this paper, we construct the first lattice-based threshold ring signature scheme with signature size scaling logarithmically in the size of the ring while supporting arbitrary thresholds. Our construction is also concretely efficient, achieving signature sizes of less than 150kB for ring sizes up to $N = 4096$ (with threshold size $T=N/2$, say). This is substantially more compact than previous work.
Our approach is inspired by the recent work of Aardal et al. (CRYPTO 2024) on the compact aggregation of $\mathsf{Falcon}$ signatures, which uses the $\mathsf{LaBRADOR}$ lattice-based SNARK to combine a collection of $\mathsf{Falcon}$ signatures into a single succinct argument of knowledge of those signatures. We proceed in a similar way to obtain compact threshold ring signatures from $\mathsf{Falcon}$, but crucially require that the proof system be zero-knowledge in order to ensure the privacy of signers. Since $\mathsf{LaBRADOR}$ is not a zkSNARK, we combine it with a separate (non-succinct) lattice-based zero-knowledge proof system to achieve our desired properties.
Cheng Che, Tian Tian
Differential-linear cryptanalysis was introduced by Langford and Hellman at CRYPTO'94 and has since been an important cryptanalysis method against symmetric-key primitives. The current primary framework for constructing differential-linear distinguishers divides the cipher into three parts: the differential part $E_0$, the middle connection part $E_m$, and the linear part $E_1$. This framework was first proposed at EUROCRYPT 2019, where the differential-linear connectivity table (DLCT) was introduced to evaluate the differential-linear bias of $E_m$ over a single round. Recently, the TDT method and the generalized DLCT method were proposed at CRYPTO 2024 to evaluate the differential-linear bias of $E_m$ covering multiple rounds. Unlike the DLCT framework, the DATF technique can also handle an $E_m$ covering more rounds.
In this paper, we enhance the DATF technique in differential-linear cryptanalysis from three perspectives. First, we improve the precision of differential-linear bias estimation by introducing new transitional rules, a backtracking strategy, and a partitioning technique to DATF. Second, we present a general bias computation method for Boolean functions that significantly reduces computational complexity compared to the exhaustive search used by Liu et al. in the previous DATF technique. Third, we propose an effective method for searching for differential-linear distinguishers with good biases based on DATF. Additionally, the bias computation method is of independent interest, with wide applications in other cryptanalysis methods such as differential cryptanalysis and cube attacks. Notably, all these enhancements to DATF are equally applicable to HATF. To show the validity and versatility of our new techniques, we apply the enhanced DATF to the NIST standard Ascon, the AES finalist Serpent, the NIST LWC finalist Xoodyak, and the eSTREAM finalist Grain v1. In all applications, we either present the first differential-linear distinguishers for more rounds or update the best-known ones.
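For context, the exhaustive bias computation that such methods improve upon can be sketched in a few lines. For an $n$-variable Boolean function $f$, the bias is $\varepsilon = \Pr[f(x)=0] - 1/2$, computed here by enumerating all $2^n$ inputs, which is feasible only for small $n$; this is a generic illustration, not the paper's method.

```python
def bias(f, n):
    """Exhaustive bias of a Boolean function f: {0,1}^n -> {0,1}."""
    zeros = sum(1 for x in range(2 ** n)
                if f([(x >> i) & 1 for i in range(n)]) == 0)
    return zeros / 2 ** n - 0.5

# The 3-bit majority function is balanced, so its bias is 0.
maj3 = lambda v: (v[0] & v[1]) ^ (v[1] & v[2]) ^ (v[0] & v[2])
print(bias(maj3, 3))                     # 0.0
print(bias(lambda v: v[0] & v[1], 2))    # AND is biased: 0.25
```

The cost of this enumeration grows as $2^n$, which is why dedicated bias computation methods matter for functions arising in multi-round transitions.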
Akhil Bandarupalli, Xiaoyu Ji, Soham Jog, Aniket Kate, Chen-Da Liu-Zhang, Yifan Song
Verifiable Secret Sharing (VSS) is a fundamental primitive in threshold cryptography and multi-party computation. It preserves the secrecy, integrity, and availability of a shared secret among a fixed set of parties, even when a subset of them is malicious. In practical applications, especially when the secret sharing is expected to be maintained over long durations, the VSS scheme should support a dynamic setting where the involved parties may change. The primitive known as Dynamic Proactive Secret Sharing (DPSS) is beneficial here, as it facilitates the secure transfer of secrets from the original committee to a new committee. Nevertheless, prior works on DPSS protocols either rely on unrealistic time bounds on message delivery or have a high computational cost, which limits their scalability beyond tens of parties.
In this work, we present a scalable asynchronous DPSS protocol that utilizes lightweight cryptographic tools, such as hash functions and symmetric-key encryption. Our protocol achieves full security and optimal fault tolerance with amortized linear communication costs. Unlike existing solutions, our proposed protocol is also post-quantum secure. By balancing computation and communication, our approach offers practical performance at scale. Our implementation results demonstrate improved scalability and efficiency, surpassing the current state of the art with $22.1\times$ lower latency than the prior best work. Furthermore, our solution scales gracefully with increasing $n$ and reshares a batch of $100,000$ secrets between committees of $n=112$ parties each in under a minute.
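The core algebra underlying committee-to-committee resharing can be sketched with plain Shamir sharing: each old party re-shares its own share to the new committee, and each new party locally interpolates the sub-shares it receives. This is a toy, non-verifiable, non-robust sketch over an illustrative prime field; the paper's protocol adds asynchrony, verifiability, and batching, none of which is modeled here.

```python
import random

P = 2**31 - 1  # toy prime field, illustrative only

def share(secret, t, ids):
    """Degree-t Shamir shares of `secret` for parties with nonzero `ids`."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in ids}

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over at least t+1 shares."""
    ids = list(shares)
    total = 0
    for i in ids:
        num = den = 1
        for j in ids:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        # pow(den, P-2, P) is the modular inverse, since P is prime.
        total = (total + shares[i] * num * pow(den, P - 2, P)) % P
    return total

def reshare(old_shares, t, new_ids):
    """Old parties re-share their shares; new parties interpolate locally."""
    sub = {i: share(s, t, new_ids) for i, s in old_shares.items()}
    return {j: reconstruct({i: sub[i][j] for i in sub}) for j in new_ids}

old = share(42, 1, [1, 2, 3])      # old committee {1,2,3}, threshold t=1
new = reshare(old, 1, [4, 5, 6])   # new committee {4,5,6} holds fresh shares
```

After resharing, any $t+1$ new shares reconstruct the original secret, while the old shares can be deleted, which is the proactive-security step.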
Akhil Bandarupalli, Xiaoyu Ji, Aniket Kate, Chen-Da Liu-Zhang, Daniel Pöllmann, Yifan Song
Multi-party computation (MPC) enables a set of $n$ mutually distrusting parties to compute any function on their private inputs. Mainly, MPC facilitates agreement on the function’s output while preserving the secrecy of honest inputs, even against a subset of up to $t$ parties controlled by an adversary. With applications spanning from anonymous broadcast to private auctions, MPC is considered a cornerstone of distributed cryptography, and significant research effort has been aimed at making MPC practical over the last decade. However, most libraries either make strong assumptions, such as the network being bounded synchronous, or incur high computation overhead from the extensive use of expensive public-key operations, which prevents them from scaling beyond a few dozen parties.
This work presents Velox, an asynchronous MPC protocol that offers fairness against an optimal adversary corrupting up to $t<\frac{n}{3}$ parties. Velox significantly enhances practicality by leveraging lightweight cryptographic primitives, such as symmetric-key encryption and hash functions, which are 2-3 orders of magnitude faster than public-key operations, resulting in substantial computational efficiency. Moreover, Velox is highly communication-efficient, with linear amortized communication relative to circuit size and only $\mathcal{O}(n^3)$ field elements of additive overhead. Concretely, Velox requires just $9.33$ field elements per party per multiplication gate, a more than $10\times$ reduction compared to the state of the art. Finally, Velox offers post-quantum security, as lightweight cryptographic primitives retain their security against a quantum adversary.
We implement Velox comprehensively, covering both offline and online phases, and evaluate its performance on a geographically distributed testbed through a real-world application: anonymous broadcast. Our implementation securely shuffles a batch of $k=256$ messages in $4$ seconds with $n=16$ parties and $18$ seconds with $n=64$ parties, a $36\times$ and $28.6\times$ reduction in latency compared to the prior best work. At scale with $n=112$ parties, Velox shuffles the same batch of messages in under $50$ seconds end to end, illustrating its effectiveness and scalability. Overall, our work removes significant barriers faced by prior asynchronous MPC solutions, making asynchronous MPC practical and efficient for large-scale deployments involving hundreds of parties.