IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
19 May 2025
Rafael del Pino, Thomas Espitau, Guilhem Niot, Thomas Prest
We introduce simple yet efficient lattice-based threshold signatures with identifiable aborts, secure under the MLWE assumption. Central to our construction are novel Distributed Key Generation with Short Shares (sDKG) protocols over lattices, ensuring short shares, small reconstruction coefficients, and linear evaluation of honest shares. This uniquely realizes the "threshold designer's dream": signature shares double as valid signatures under the corresponding secret key shares. With two concrete instantiations (ramp and replicated secret sharings), our schemes match the compact ~10 kB size of Threshold Raccoon (del Pino et al., EUROCRYPT 2024). Further, we unveil "Death Star Detection", a new algorithm that enhances identifiable aborts by efficiently spotting adversarial correlations among short vectors, which is of interest beyond threshold signatures.
Yiwen Gao, Dongliang Cai, Yang Xu, Haibin Kan
Proximity testing for linear codes is a fundamental problem in coding theory with critical applications in cryptographic protocols, blockchain, and distributed storage systems. This work addresses the proximity gaps for linear codes, a crucial aspect for efficiently verifying whether a batch of codewords is close to a given code. We present a general framework for deriving proximity gaps from the list-decodability properties of the underlying linear code.
Our main result shows that if a code $C\subseteq \mathbb{F}_q^n$ is $(p,L)$-list-decodable, then the probability that a random combination of a batch of $t$ codewords containing a $\delta$-far codeword (for $\delta\le 1-\sqrt{1-p+\varepsilon}$) fails to be $\delta$-far from $C$ is bounded by $O(\frac{tL^2pn}{q}+\frac{t}{\varepsilon q})$. This result also establishes a form of (mutual) correlated agreement for linear codes, which can be used to strengthen soundness analyses in protocols that rely on proximity testing, thereby reducing query complexity and enabling practical instantiations over smaller finite fields. In particular, we apply our main result to randomly punctured Reed–Solomon codes and folded Reed–Solomon codes—both of which are known to achieve list-decodability up to capacity—and derive linear proximity gaps for these families under the Johnson bound.
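To see how the Johnson-bound claim follows from the main result, take a rate-$\rho$ code that is list-decodable up to capacity, i.e. $(p,L)$-list-decodable with $p = 1-\rho-\eta$ for a small $\eta > 0$; this is only the radius arithmetic, with list sizes $L$ and error terms as in the theorem:
\[
\delta \;\le\; 1-\sqrt{1-p+\varepsilon} \;=\; 1-\sqrt{\rho+\eta+\varepsilon} \;\approx\; 1-\sqrt{\rho},
\]
i.e. proximity gaps up to (roughly) the Johnson radius $1-\sqrt{\rho}$ for these code families.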
Michał Wroński, Łukasz Dzierzkowski, Mateusz Leśniak, Ewa Syta
We introduce a unified approach to quantum cryptanalysis that reduces all \emph{standard abelian hidden subgroup problems} (SAHSPs), including integer factorization, discrete logarithm in finite fields (DLP), and discrete logarithm on elliptic curves, to a single core problem: the \emph{elliptic curve discrete logarithm problem} (ECDLP).
This reduction forms the foundation of a framework for quantum cryptanalysis based on ECDLP. At the core of this framework is a \emph{semi-agnostic quantum algorithm} for solving ECDLP, which requires only partial knowledge of the problem's group structure. By operating on singular curves, we can translate problems such as integer factorization and finite field DLP into the ECDLP.
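The textbook facts underlying such translations are the group isomorphisms on the non-singular points $E_{\mathrm{ns}}$ of singular cubics (standard results, found e.g. in Washington's textbook on elliptic curves; they are stated here for orientation, and the paper's semi-agnostic algorithm is more general):
\[
\begin{aligned}
&\text{cusp } y^2 = x^3:\quad (x,y)\mapsto x/y \ \text{ gives }\ E_{\mathrm{ns}}(\mathbb{F}_p)\cong(\mathbb{F}_p,+),\\
&\text{node } y^2 = x^2(x+a),\ \alpha^2 = a \in \mathbb{F}_p^{\times}:\quad (x,y)\mapsto \tfrac{y+\alpha x}{y-\alpha x} \ \text{ gives }\ E_{\mathrm{ns}}(\mathbb{F}_p)\cong\mathbb{F}_p^{\times}.
\end{aligned}
\]
In this way an additive or multiplicative discrete logarithm becomes a discrete logarithm among points of a (singular) curve.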
Leveraging the semi-agnostic algorithm for ECDLP, we propose a \emph{universal, reconfigurable quantum circuit} design that can solve any SAHSP instance by executing the ECDLP solver with appropriate curve and field parameters, without requiring problem-specific customization.
We prove that solving ECDLP in this model is sufficient to solve all instances of SAHSPs, establishing ECDLP as a \emph{complete problem} for this class.
These results unify the structure of Shor's algorithm across all SAHSPs, eliminating the need for custom quantum circuits for each problem. This demonstrates the practical feasibility of universal quantum cryptanalysis and underscores the urgency of transitioning to cryptographic systems that remain secure against quantum-capable adversaries.
Sicheng Wei, Jingwei Hu
This paper presents an efficient protocol for private set intersection in a setting with multiple set owners and a semi-honest cloud server. The core idea is to reduce the intersection computation to secure operations over Bloom filters, enabling both scalability and efficiency. By leveraging this transformation, our protocols achieve strong privacy guarantees while minimizing computation and communication overhead.
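The abstract does not spell out the protocol, but the underlying (non-private) reduction is easy to picture: the intersection of several sets is over-approximated by the bitwise AND of their Bloom filters, and it is bit-vector operations of this kind that the protocol performs securely. A minimal plain-text sketch in Python, illustrative only and not the authors' construction:

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, [0] * m

    def _positions(self, item):
        # k bit positions derived from hashes of the item (illustrative choice)
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def contains(self, item):
        return all(self.bits[p] for p in self._positions(item))

def bitwise_and(a, b):
    out = BloomFilter(a.m, a.k)
    out.bits = [x & y for x, y in zip(a.bits, b.bits)]
    return out

# Each set owner builds a filter; ANDing the filters answers membership
# for a superset of the intersection (false positives are possible).
fa, fb = BloomFilter(), BloomFilter()
for x in {"alice", "bob", "carol"}: fa.add(x)
for x in {"bob", "carol", "dave"}: fb.add(x)
joint = bitwise_and(fa, fb)
print(sorted(x for x in {"alice", "bob", "carol", "dave"} if joint.contains(x)))
# -> ['bob', 'carol'] (up to false positives)
```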
Baraq Ghaleb, William J Buchanan
Homomorphic encryption provides many opportunities for privacy-aware processing, including methods related to machine learning. Many existing cryptographic methods have been shown to be susceptible to side-channel attacks, in which the implementation of a cryptographic method can reveal information about the private keys used, the result, or even the original plaintext. One example is the processing of the RSA exponent using the Montgomery method, where 0 and 1 bits differ in their processing time during modular exponentiation. FHE typically relies on lattice methods, whose implementations can have particular problems with side-channel leakage. This paper aims to outline a range of weaknesses within FHE implementations as related to side-channel analysis. It outlines a categorization for side-channel analysis, some case studies, and mitigation strategies.
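The RSA example can be made concrete with a toy square-and-multiply loop in which every 1-bit of the exponent costs an extra modular multiplication, so execution time correlates with the exponent's Hamming weight (a deliberately leaky sketch, not any library's implementation):

```python
import time

def modexp_leaky(base, exp, mod):
    # Classic left-to-right square-and-multiply: every 1-bit of the
    # exponent costs one extra modular multiplication.
    result = 1
    for bit in bin(exp)[2:]:
        result = (result * result) % mod       # always: square
        if bit == "1":
            result = (result * base) % mod     # only for 1-bits: multiply
    return result

mod = (1 << 2048) - 159          # arbitrary large odd modulus for the demo
exponents = {
    "low Hamming weight":  1 << 2040,         # a single 1-bit
    "high Hamming weight": (1 << 2041) - 1,   # all 1-bits
}
for name, e in exponents.items():
    t0 = time.perf_counter()
    modexp_leaky(12345, e, mod)
    print(f"{name}: {time.perf_counter() - t0:.4f}s")
# The high-weight exponent runs measurably longer: time leaks key bits.
```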
Weishen Zou, Bruno Martin, Thomas Prévost
We explore the application of the QUBO and Ising models to the integer factorization problem with implications for the security of public-key algorithms such as RSA. A key contribution is a program that applies existing algorithms to parameterize and simulate integer factorization through an Ising model in order to replicate previous works. Due to limited access to quantum hardware, we use classical heuristic methods to approximate solutions.
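As a sketch of the general encoding (a toy parameterization of ours, not the paper's): write $N = pq$ with the factor bits as binary variables and take $E(p,q) = (N - pq)^2$ as the QUBO/Ising energy, so that ground states (energy 0) are exactly the factorizations; a classical heuristic then stands in for the annealer:

```python
import random

N = 143  # = 11 * 13, a toy semiprime

def decode(state, n_bits):
    # Candidate factors are odd, so the low bit of each is hard-wired to 1.
    p = 1 + sum(b << (i + 1) for i, b in enumerate(state[:n_bits]))
    q = 1 + sum(b << (i + 1) for i, b in enumerate(state[n_bits:]))
    return p, q

def energy(state, n_bits):
    # QUBO/Ising-style cost: zero exactly on factorizations of N.
    p, q = decode(state, n_bits)
    return (N - p * q) ** 2

def greedy_ground_state(n_bits=3, restarts=500):
    # Classical stand-in for an annealer: random restarts plus
    # single-bit-flip greedy descent on the energy landscape.
    for _ in range(restarts):
        s = [random.randint(0, 1) for _ in range(2 * n_bits)]
        improved = True
        while improved:
            improved = False
            for i in range(len(s)):
                e0 = energy(s, n_bits)
                s[i] ^= 1
                if energy(s, n_bits) < e0:
                    improved = True
                else:
                    s[i] ^= 1  # revert: the flip did not lower the energy
        if energy(s, n_bits) == 0:
            return decode(s, n_bits)
    return None

print(greedy_ground_state())  # -> (11, 13) or (13, 11)
```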
Yanpei Guo, Alex Luoyuan Xiong, Wenjie Qu, Jiaheng Zhang
Scalable data availability (DA) is essential for high-throughput, decentralized blockchains, enabling lightweight nodes to verify block availability without incurring the prohibitive costs of full data replication.
Reed-Solomon (RS) code commitment schemes underpin modern DA protocols by ensuring that dispersed data fragments can be verified as part of a valid codeword, even in the presence of malicious block producers.
However, state-of-the-art schemes such as FRIDA (Crypto'24), while computationally efficient, incur substantial per-node communication overhead at the scale of thousands of network nodes, often 5.7$\times$ the size of the actual data fragment.
In this work, we introduce CONDA, a new interleaved RS code commitment scheme that significantly reduces the communication overhead while retaining FRIDA's prover efficiency. At its core is a novel evaluation consolidation technique for polynomial commitment scheme (PCS) that reduces the problem of proving $n$ evaluations at fixed points (one per verifier) to a single evaluation at a random point, using logarithmic communication. This technique is lightweight, hash-based, and compatible with any multilinear PCS.
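For orientation, the classical algebraic identity behind checking many fixed-point evaluations at once is the quotient trick below; CONDA's consolidation technique is a new, hash-based, logarithmic-communication solution to this same problem, not this identity itself:
\[
f(x_i) = y_i \ \text{for all } i \;\Longleftrightarrow\; Z(X)\mid f(X)-I(X), \qquad Z(X)=\prod_{i=1}^{n}(X-x_i),
\]
where $I(X)$ is the degree-$<n$ interpolant of the pairs $(x_i,y_i)$; the divisibility can then be spot-checked at a single random point $r$ by verifying $f(r)-I(r)=Z(r)\,q(r)$ for a committed quotient $q$.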
To further optimize for DA applications, we introduce LightLigero, a new multilinear PCS that improves upon DeepFold (Sec'25) with $O(\log n)$ reduction in proof size and only $30\%$ slowdown in prover time. Combining CONDA and LightLigero yields an efficient DA scheme for thousands of nodes.
Our implementation demonstrates a 4$\times$ reduction in communication cost compared to FRIDA, while incurring only a 25\% increase in prover time. It also achieves near-best prover time and near-best communication cost simultaneously among all code commitment schemes. CONDA also offers at least $3\times$ smaller proofs and $4\times$ faster provers than state-of-the-art verifiable secret sharing constructions such as ZXH+22 (Sec'22) and PolyFRIM (Sec'24).
Hiroki Okada, Rachel Player, Simon Pohmann
Implementations of modern FHE schemes are available in various highly-optimized libraries. Many of these libraries are designed to allow developers who may not have deep expertise in FHE to build fast and secure privacy-preserving applications. To support such users, the API of these libraries often hides the internals of the schemes in question from the user. However, this design choice makes it hard for users of these libraries to modify existing schemes, or implement new ones; work that is often valuable when conducting research on FHE schemes.
We present our new Rust library Fheanor, which aims to facilitate such research on FHE schemes. The core target user is an FHE researcher, rather than an application developer. Most importantly, the design of Fheanor is very modular, and mirrors the mathematical structure of the available FHE schemes. By exposing the mathematical structure, but still hiding implementation details, it is easy to modify or extend the functionality of FHE schemes implemented in the library and still preserve high performance. Indeed, Fheanor demonstrates performance that is close to that of HElib or SEAL, with the potential for optimizations in the future.
Fheanor implements several features that have not, or have only rarely, been implemented in previous libraries. These include non-power-of-two cyclotomic rings, single-RNS based ring arithmetic, the CLPX/GBFV scheme, and bootstrapping for BFV and BGV.
In addition, this paper presents new theoretical contributions that are also implemented in Fheanor. The first is an extension of optimal digit extraction circuits, used in BFV/BGV bootstrapping, to the case $2^{23}$. The second is a more efficient algorithm for computing the trace in the non-power-of-two cyclotomic setting.
17 May 2025
Gaëtan Cassiers, Siemen Dhooghe, Thorben Moos, Sayandeep Saha, François-Xavier Standaert
Cryptographic implementations are vulnerable to active physical attacks where adversaries inject faults to extract sensitive information. Existing fault models, such as the threshold and random fault models, assume limitations on the amount or probability of injecting faults. Such models, however, insufficiently address the case of practical fault injection methods capable of faulting a large proportion of the wires in a circuit with high probability. Prior works have shown that this insufficiency can lead to concrete key recovery attacks against implementations proven secure in these models. We address this blind spot by introducing the uniform random fault model, which relaxes assumptions on the amount/probability of faults and instead assumes a uniform probabilistic faulting of all wires in a circuit or region. We then show that security in this new model can be reduced to security in the random fault model by inserting canaries in the circuit to ensure secret-independent fault detection. We prove that combining canaries with a more classical fault countermeasure such as redundancy can lead to exponential fault security in the uniform random fault model at a polynomial cost in circuit size in the security parameter. Finally, we discuss the interactions between our work and the practical engineering challenges of fault security, shedding light on how the combination of state-of-the-art countermeasures may protect against injections of many high probability faults, while opening a path to methodologies that formally analyze the guarantees provided by such countermeasures.
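A toy rendering of the two ingredients (our simplification, not the paper's circuit-level construction): redundancy catches faults that make duplicated branches disagree, while a canary, the same computation run on public constants against a precomputed reference output, makes fault detection itself independent of the secret:

```python
import secrets

def toy_cipher(key, x):
    # Stand-in for the protected computation (not a real cipher).
    return (key * 1103515245 + x) % (1 << 31)

CANARY_IN = 0xC0FFEE
CANARY_REF = toy_cipher(0, CANARY_IN)   # precomputed on public constants

def protected_eval(key, x, fault=None):
    r1 = toy_cipher(key, x)             # redundancy: evaluate twice...
    r2 = toy_cipher(key, x)
    canary = toy_cipher(0, CANARY_IN)   # canary: same circuit, public inputs
    if fault == "data":
        r2 ^= 1                         # model a fault on one branch
    elif fault == "canary":
        canary ^= 1                     # model a fault in the canary region
    if r1 != r2 or canary != CANARY_REF:
        return "ABORT"                  # detection is secret-independent
    return r1

key = secrets.randbits(31)
print(protected_eval(key, 42))                  # normal output
print(protected_eval(key, 42, fault="data"))    # ABORT
print(protected_eval(key, 42, fault="canary"))  # ABORT
```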
Gopal Singh
The security of block ciphers such as AES-128, AES-192, and AES-256 relies on the assumption that their ciphertext outputs are computationally indistinguishable from random permutations. While distinguishers have been proposed for reduced-round variants or under non-standard models such as known-key or chosen-key settings, no effective distinguisher has been demonstrated for the full-round AES ciphers in the standard secret-key model.
This work introduces FESLA (Feature Enhanced Statistical Learning Attack), a hybrid statistical learning framework that integrates outputs from a suite of classical statistical tests with machine learning and deep learning classifiers to construct ciphertext-only distinguishers for AES-128, AES-192, and AES-256. In contrast to existing approaches based on handcrafted or bitwise features, FESLA aggregates intermediate statistical metrics as features, enabling the capture of persistent structural biases in ciphertext distributions.
Experimental evaluation across multiple datasets demonstrates consistent 100% classification accuracy using Support Vector Machines, Random Forests, Multi-Layer Perceptron, Logistic Regression, and Naïve Bayes classifiers. Generalization and robustness are confirmed through k-fold cross-validation, including on previously unseen ciphertext samples.
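A skeletal version of such a pipeline, with illustrative features and synthetic stand-in data rather than FESLA's actual feature set or AES ciphertexts: derive statistical-test-style metrics from each sample and feed them to an off-the-shelf classifier under k-fold cross-validation:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def features(block: bytes):
    bits = np.unpackbits(np.frombuffer(block, dtype=np.uint8))
    ones = bits.mean()                                        # monobit statistic
    runs = (1 + np.count_nonzero(np.diff(bits))) / len(bits)  # runs statistic
    hist = np.bincount(np.frombuffer(block, dtype=np.uint8), minlength=256)
    chi2 = ((hist - len(block) / 256) ** 2 / (len(block) / 256)).sum()
    return [ones, runs, chi2]                                 # chi-square vs. uniform

rng = np.random.default_rng(0)
X, y = [], []
# Class 0: uniform blocks; class 1: blocks with the low bit of each byte
# forced to 1 (stand-ins for "random permutation" vs. "structured" samples).
for label, gen in [(0, lambda: rng.integers(0, 256, 512, dtype=np.uint8)),
                   (1, lambda: rng.integers(0, 256, 512, dtype=np.uint8) | 1)]:
    for _ in range(200):
        X.append(features(gen().tobytes()))
        y.append(label)

print(cross_val_score(SVC(), X, y, cv=5).mean())  # ~1.0 on this toy data
```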
These results establish the first ciphertext-only distinguishers for full-round AES-128, AES-192, and AES-256 under the secret-key model, and underscore the potential of machine learning–augmented cryptanalysis based on principled statistical feature engineering.
Mahdi Rahimi
Mix networks (mixnets) safeguard client anonymity by forwarding traffic through multiple intermediary nodes (mixnodes), which reorder and delay messages to obscure communication patterns against a global passive adversary capable of monitoring all network transmissions.
The anonymity provided by mixnets is usually assessed with a discrete-event simulator, gauging a target message's indistinguishability among output messages. While useful for comparative analysis, this approach only approximates the mixnet's anonymity potential. Hence, this paper sheds light on the necessity of considering the client (originator of messages) itself to gauge anonymity accurately. We further provide an algorithm (simulator) to simulate client anonymity for Loopix mixnets. We conduct experiments to optimize general Loopix mixnet parameters, considering both message and client anonymity. Our findings indicate that message anonymity often provides an upper bound and can yield misleading results for mixnet optimization, underscoring the importance of client anonymity. Additionally, we explore scenarios where client anonymity is significantly compromised due to an insufficient number of clients. To address these cases, we propose a multimixing strategy that enhances client anonymity by effectively merging varied traffic types with different mixing characteristics.
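A common way to quantify the anonymity in question is the Shannon entropy of the adversary's posterior distribution over candidate senders; a minimal version of that metric (generic, not the paper's simulator):

```python
import math

def anonymity_bits(posterior):
    # Shannon entropy (in bits) of the adversary's probability
    # distribution over candidate senders of a target message.
    return -sum(p * math.log2(p) for p in posterior if p > 0)

# Perfect mixing among 8 clients: log2(8) = 3 bits of anonymity.
print(anonymity_bits([1 / 8] * 8))               # 3.0
# Skewed posterior (one highly likely sender): far weaker anonymity.
print(anonymity_bits([0.85, 0.05, 0.05, 0.05]))  # ~0.85 bits
```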
Debajyoti Das, Jeongeun Park
Anonymous communication is one of the fundamental tools to achieve privacy for communication over the internet. Almost all existing design strategies (e.g., onion routing/Tor, mixnets) for anonymous communication rely on the existence of some honest server/router in the network infrastructure to provide anonymity. A recent seminal work by Shi and Wu (Eurocrypt 2021) proposes the first cryptographic design for a non-interactive anonymous router (NIAR) that can use a single untrusted server or router to permute a set of messages without revealing the permutation to the untrusted router. That work is an important step towards showing the possibility of designing such protocols from standard cryptographic assumptions. However, their construction is only of theoretical nature and still leaves many open questions towards realizing such systems in practice: (1) the cryptographic building blocks (multi-client functional encryption, correlated pseudorandom functions) used in their design are difficult to implement in practice; (2) their setup phase takes the permutation as an input to generate the encryption/decryption keys, which means that the messages from the same sender in different rounds will be at the same position in the output vector, unless the setup phase is run before every round with a new permutation; (3) it is not known how to realize such a setup procedure, which initializes a random permutation obliviously, without any trusted entities in the system.
In this paper, we propose the first (somewhat) practical design, which we call sPAR, that solves the above problems using homomorphic encryption techniques. Our design also relies on a one-time setup phase; however, the setup phase does not take any specific permutation as input. Instead, our design generates a fresh permutation for every round based on random values locally generated by the clients. Existing practical instantiations of fully homomorphic encryption (FHE) schemes make our design implementable and deployable in practice. Our design presents a new direction for designing anonymous communication systems. Unlike some existing systems like Tor, sPAR does not scale to millions of users; however, we demonstrate with a proof-of-concept implementation that sPAR can easily support around a hundred users with a few seconds of latency for each message.
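The per-round permutation idea can be pictured as follows (a plain-text toy that ignores the FHE layer keeping the tags hidden from the router): each client contributes fresh local randomness, and sorting by those values induces the round's permutation:

```python
import secrets

clients = ["alice", "bob", "carol", "dave"]

def round_permutation(clients):
    # Each client samples a fresh random tag locally; sorting by the tags
    # induces this round's output permutation, unknown in advance to anyone.
    tags = {c: secrets.randbits(64) for c in clients}
    return sorted(clients, key=tags.get)

print(round_permutation(clients))  # a fresh random order every round
```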
Hongyuan Qu, Guangwu Xu
As a theoretical cornerstone of post-quantum cryptography, the Learning With Errors (LWE) problem serves as the security foundation for standardized algorithms such as Kyber and Dilithium. Recently, a framework for provable dual attacks on LWE was proposed by Pouly et al. at Eurocrypt 2024, addressing the limitations in effectiveness caused by existing methods' reliance on heuristic assumptions in LWE dual attacks. Their paper also poses an open problem: how to formally integrate modulus switching into this framework to reduce attack costs. The main purpose of this paper is to give a solution to this open problem by presenting an improved provable dual attack method that incorporates modulus switching and Chinese Remainder Theorem (CRT) techniques. First, we design a modulus switching mechanism that eliminates practical errors via the Poisson summation formula. By embedding the inherent noise from modulus switching into a rational lattice framework, our approach effectively prevents the risk of attack failure caused by the merging of such errors with LWE noise. Theoretical guarantees (Theorems 4 and 5) rigorously quantify the parameter ranges for successful attacks. Second, we introduce a CRT-based secret recovery method that aggregates partial secrets from independent sub-attacks. By leveraging the Chinese Remainder Theorem to reconstruct full secrets from congruence relations, our method adapts to arbitrary secret distributions. Furthermore, by using a tighter variant of Banaszczyk's measure inequality, we obtain a precise parameter range for the dual attack's efficacy through rigorous mathematical proof, and achieve the same complementary gap with the contradictory regime (proposed by Ducas et al.) as in Pouly et al.'s work. Experiments show a 15-29 bit improvement in attack estimates compared to the original framework.
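The CRT aggregation step, viewed in isolation, is plain Chinese remaindering: each sub-attack recovers the secret modulo a small modulus, and the full secret is rebuilt from the congruences (generic CRT below, not the paper's full recovery procedure):

```python
from math import prod

def crt(residues, moduli):
    # Reconstruct x with x = r_i (mod m_i) for pairwise coprime m_i.
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): inverse mod m (Py 3.8+)
    return x % M

secret = 123                                  # e.g. one secret coefficient
residues = [secret % m for m in (5, 7, 11)]   # outputs of three sub-attacks
print(crt(residues, (5, 7, 11)))              # -> 123
```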
Fabrice Benhamouda, Caicai Chen, Shai Halevi, Yuval Ishai, Hugo Krawczyk, Tamer Mour, Tal Rabin, Alon Rosen
Motivated by applications to efficient secure computation, we consider the following problem of encrypted matrix–vector product (EMVP). Let $\mathbb F$ be a finite field. In an offline phase, a client uploads an encryption of a matrix $M \in \mathbb F^{m\times \ell}$ to a server, keeping only a short secret key. The server stores the encrypted matrix \(\hat{M}\).
In the online phase, the client may repeatedly send encryptions \(\hat{q}_i\) of query vectors \(q_i \in \mathbb F^\ell\), which enables the client and the server to locally compute compact shares of the matrix–vector product \(Mq_i\). The server learns nothing about \(M\) or \(q_i\). The shared output can either be revealed to the client or processed by another protocol.
We present efficient EMVP protocols based on variants of the learning parity with noise (LPN) assumption and the related learning subspace with noise (LSN) assumption.
Our EMVP protocols are field-agnostic in the sense that the parties only perform arithmetic operations over \(\mathbb F\), and are close to optimal with respect to both communication and computation. In fact, for sufficiently large \(\ell\) (typically a few hundred), the online computation and communication costs of our LSN-based EMVP can be \emph{less than twice} the costs of computing \(Mq_i\) in the clear.
Combined with suitable secure post-processing protocols on the secret-shared output, our EMVP protocols are useful for a variety of secure computation tasks, including encrypted fuzzy search and secure ML.
Our technical approach builds on recent techniques for private information retrieval in the secret-key setting. The core idea is to encode the matrix \(M\) and the queries \(q_i\) using a pair of secret dual linear codes, while defeating algebraic attacks by adding noise.
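One way to picture the dual-encoding idea, in a simplified reading of ours and without the noise that actually provides security: encode the matrix with a map $A$ and the queries with a right inverse $B$ of $A$, so the encodings still multiply to $Mq$. A toy over the reals (the construction itself works over a finite field):

```python
import numpy as np

rng = np.random.default_rng(1)
m, ell, k = 3, 4, 6                      # k > ell: encodings are longer

M = rng.integers(-5, 5, (m, ell)).astype(float)   # client's matrix
q = rng.integers(-5, 5, ell).astype(float)        # one query vector

# Secret "dual" pair: A (ell x k) and a right inverse B with A @ B = I.
A = rng.standard_normal((ell, k))
B = np.linalg.pinv(A)                    # right inverse (A has full row rank)

M_enc = M @ A                            # uploaded once, offline
q_enc = B @ q                            # sent per query, online
print(np.allclose(M_enc @ q_enc, M @ q)) # True: encodings compose to M q
```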
Yaoling Ding, Haotong Xu, Annyu Liu, An Wang, Jingqi Zhang, Jing Yu, Liehuang Zhu
Side-channel analysis remains a critical threat to public-key cryptographic implementations. Simple Power Analysis (SPA) techniques can extract secret keys from a single power trace, often using clustering-based classification methods. However, traces captured in real-world environments often suffer from misalignment and variable trace lengths due to unstable clocks and random delays. As a result, clustering methods require alignment steps that may alter the original information in the traces. To address this problem, this work proposes Dynamic Time Classification (DTC), an alternative approach based on Dynamic Time Warping (DTW) for classifying cryptographic operations in SPA. Unlike clustering methods, DTC inherently compares power traces without requiring fixed-length segments, which greatly improves adaptability to traces of unequal length and thus allows different operations to be classified relatively stably. Experimental results on public-key cryptographic algorithms and post-quantum algorithm implementations show that DTC is no less accurate than clustering methods and is more robust to timing variations. This work also systematically categorizes the features of different operations and explores the effects of different SPA methods on each type of feature. It further conducts experiments with and without random delays for the different categories, compares the accuracy obtained with different alignment methods, and discusses the feasibility of DTW as a preprocessing method.
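The core primitive is compact enough to sketch: textbook DTW distance plus nearest-template classification, which tolerates shifted, stretched, and unequal-length traces (illustrative only; the paper's DTC pipeline is more involved):

```python
import numpy as np

def dtw(a, b):
    # Textbook dynamic-time-warping distance: tolerant of traces that are
    # shifted or stretched in time, and of unequal lengths.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(trace, templates):
    # Nearest template under DTW: "was this a square or a multiply?"
    return min(templates, key=lambda label: dtw(trace, templates[label]))

templates = {
    "square":   np.sin(np.linspace(0, 2 * np.pi, 80)),
    "multiply": np.sin(np.linspace(0, 4 * np.pi, 120)),
}
# A randomly delayed, slightly shortened "multiply" trace still classifies
# correctly, with no prior re-alignment of the traces.
trace = np.concatenate([np.zeros(15), np.sin(np.linspace(0, 4 * np.pi, 110))])
print(classify(trace, templates))  # -> multiply
```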
Elias Riesinger, Jürgen Fuß
During an analysis of the NIST SP 800-22r1a document, which provides a test suite to validate random number generators, and of its reference implementation, various issues were identified, including imprecise probability constants, erroneous example calculations, and discrepancies within test descriptions.
Here, we provide a technical analysis of the Statistical Test Suite, documenting inconsistencies and deficiencies in both the theoretical specification and the official C reference implementation.
The analysis also reveals concrete implementation bugs and structural limitations in the reference codebase.
Rather than revising any of the statistical tests, this work documents these flaws to support the ongoing revision process of the standard and to encourage more reliable and maintainable implementations.
Tsz Hon Yuen, Ying-Teng Chen, Shimin Pan, Jiangshan Yu, Joseph K. Liu
We introduce posterior security of digital signatures, i.e., additional security features applied after the original signature is generated. It is motivated by the scenario in which users store their secret keys in secure hardware and can only obtain a standard signature through a standardized interface. In this paper, we consider two different posterior security features: anonymity and message hiding.
We first introduce incognito signature, a new mechanism to anonymize a standard signature. Different from other ring or group signatures, the signer generates a standard (non-anonymous) signature first. The signature is then anonymized by a converter before sending to the verifier, by hiding the signer public key with a set of decoy public keys. We then introduce concealed signature which hides the message in a commitment. The standard signature is converted such that it can be verified with the commitment. The models of posterior anonymity and posterior message hiding capture the separation of the signer and the converter. Anonymity or message hiding is provided by the converter after the creation of a standard signature by the signer.
We give generic constructions of incognito signatures and concealed signatures, which can be applied to standard signatures like Schnorr. This yields the first practical anonymized ECDSA signature, whose size is logarithmic in the number of decoy public keys $n$. The existing ring signature scheme with ECDSA keys is at least 152 times longer than our scheme for $n \le 4096$.
The incognito signature and concealed signature can be composed to provide posterior anonymity and message hiding. It is useful in applications like two-tier central bank digital currency, where users want to hide their addresses (public keys) and transaction amounts (messages) when the payment is settled in the interbank layer.
Matthias Probst, Alexander Wiesent, Michael Gruber, Georg Sigl
Localized side-channel analysis makes it possible to evaluate only the relevant chip area by measuring near-field electromagnetic (EM) emanations. Compared to global power measurements, this can lead to more powerful attacks, as the signal-to-noise ratio is higher and irrelevant circuit components are not included in the recorded measurements. Especially for profiled attacks and their reproduction, the probe position in a localized scenario is of utmost importance. Ideally, a probe should be placed identically during the profiling and attack phases, as small variations can have a large impact on the success of the attack. In this work we present our methodology – ProbeNav – to accurately reposition an EM probe, optimized for localized measurements, i.e., near-field measurements. We evaluate cross-correlation, Oriented FAST and Rotated BRIEF (ORB), and particle filters to re-calibrate the coordinate system of our setup. As a result, our methodologies show that precise positioning on a STM32F303 microcontroller is possible for a profiled attack scenario with different EM probes. Furthermore, by requiring only a single trace per position, profiling is 3× faster and repositioning 28× faster in terms of the number of collected traces compared to the state of the art.
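The cross-correlation variant of the re-calibration can be sketched directly: correlate the new scan against the reference scan recorded during profiling and read the XY offset off the correlation peak (generic image registration, not the ProbeNav toolchain):

```python
import numpy as np

rng = np.random.default_rng(7)
reference = rng.standard_normal((64, 64))   # EM scan stored while profiling

# After re-mounting, the new scan of the same die is offset by (dy, dx).
dy, dx = 5, -3
new_scan = np.roll(np.roll(reference, dy, axis=0), dx, axis=1)

# Circular cross-correlation via FFT; its peak sits at the offset.
corr = np.fft.ifft2(np.fft.fft2(new_scan) * np.conj(np.fft.fft2(reference))).real
py, px = np.unravel_index(np.argmax(corr), corr.shape)
wrap = lambda v, n: v - n if v > n // 2 else v
print(wrap(py, 64), wrap(px, 64))  # -> 5 -3: correction for the XY stage
```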
Guilhem Niot
The Signal Protocol, underpinning secure messaging for billions, faces the challenge of migrating to a post-quantum world while preserving critical properties like asynchrony and deniability. While Signal's PQXDH key exchange protocol addresses post-quantum confidentiality, full migration of the X3DH protocol remains elusive. Relying on a split KEM (K-Waay, USENIX '24) offers a promising migration path, but it has so far suffered from size limitations compared to concurrent works leveraging ring signatures.
This work introduces Sparrow-KEM and Sym-Sparrow-KEM, novel asymmetric and symmetric split KEMs, i.e., ones whose keys can be used only in one direction, or interchangeably for sending and receiving, respectively. They are designed to optimize the K-Waay protocol for size efficiency. Leveraging the MLWE assumption, these constructions reduce the communication of prior post-quantum X3DH based on split KEMs by a factor of 5.1×, and provide a 40× speedup. Additionally, Sym-Sparrow-KEM is the first symmetric split KEM to offer deniability, IND-1KCA, and IND-1BatchCCA security, capturing implicit authentication properties. We provide formal security proofs for both schemes, including deniability. Our results demonstrate the feasibility of a compact and deniable post-quantum X3DH protocol based on split KEMs.
Liu Zhang, Yiran Yao, Danping Shi, Dongchen Chai, Jian Guo, Zilong Wang
The study by Gohr et al. at CRYPTO 2019 and subsequent related works have shown that neural networks can uncover previously unused features, offering novel insights into cryptanalysis. Motivated by these findings, we employ neural networks to learn features specifically related to integral properties and integrate the corresponding insights into optimized search frameworks. These findings validate the framework of using neural networks for feature exploration, providing researchers with novel insights that advance established cryptanalysis methods.
Neural networks have inspired the development of more precise integral search models. By comparing the integral distinguishers obtained via neural networks with those identified by classical methods, we observe that existing automated search models often fail to find optimal distinguishers. To address this issue, we develop a meet-in-the-middle search framework that balances model accuracy and computational efficiency. As a result, we reduce the number of active plaintext bits required for an 11-round integral distinguisher on SKINNY-64-64, and further identify a 12-round key-dependent integral distinguisher—achieving one additional round over the previous best-known result.
The integral distinguishers discovered by neural networks enable key-recovery attacks on more rounds. We identify a 7-round key-independent integral distinguisher from neural networks with only one active plaintext cell, based on linear combinations of bits. This distinguisher enables a 15-round key-recovery attack on SKINNY-n-n through a strategy with 3 rounds of forward decryption and 5 rounds of backward encryption, improving upon the previous record by one round. The same distinguisher also enhances attacks on SKINNY-n-2n and SKINNY-n-3n. Additionally, we discover an 8-round key-dependent integral distinguisher using neural networks that further reduces the time complexity of key-recovery attacks against SKINNY.