IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
20 October 2025
Alexander Frolov, Hal Triedman, Ian Miers
We are now entering an era where the large-scale deployment of anonymous credentials seems inevitable, driven both by legislation requiring age verification and the desire to distinguish humans from bots in the face of the proliferation of AI-generated content. However, the widespread deployment of anonymous credentials faces the same security and fraud concerns as existing credentials, but without the established techniques for securing them. For non-anonymous credentials on the web today, authentication is a continuous process in which servers collect large volumes of behavioral data to protect account holders (e.g., by detecting account compromise) or to combat fraudulent behavior.
In this paper, we propose Continuous Anonymous Authentication (CAA) schemes and give a concrete construction and applications for preventing credential sharing and theft. CAA schemes allow us to move the server-side collection, storage, and processing of these behavioral signals to the client while maintaining privacy and integrity. CAA schemes support, on the client side, a number of common behavioral analysis tests and analytics both for determining fraudulent behavior and updating security policies. We implement a prototype, zk-Cookies, which runs in the browser, and supports common behavioral signals such as IP address and geolocation history, browser fingerprinting, and page view history. Using this, we build a prototype application for age verification based on legacy credentials (like passports). We implement these checks efficiently in zk-SNARKs, and also show how to securely implement differentially private behavioral analytics in a zk-SNARK. The simplest version of our construction can perform the computation for an update in under 200 ms.
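As a rough illustration of client-side differentially private analytics (not the paper's zk-SNARK circuit, and with hypothetical function names), a discrete noise mechanism over an integer behavioral count could look like the following Python sketch; integer-valued noise of this kind is the sort of mechanism that can be verified inside an arithmetic circuit.

```python
import math, random

def two_sided_geometric(eps: float) -> int:
    """Discrete (two-sided geometric) noise with privacy parameter eps.
    The difference of two i.i.d. geometric variables is two-sided geometric."""
    p = 1.0 - math.exp(-eps)          # success probability of each geometric
    def geometric() -> int:
        k = 0
        while random.random() > p:
            k += 1
        return k
    return geometric() - geometric()

def dp_page_view_count(true_count: int, eps: float = 0.5) -> int:
    """Release an eps-DP noisy version of a sensitivity-1 count (e.g., page views)."""
    return true_count + two_sided_geometric(eps)

if __name__ == "__main__":
    print(dp_page_view_count(42))
```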
Marc Damie, Federico Mazzone, Florian Hahn, Andreas Peter, Jan Ramon
Function Secret Sharing (FSS) schemes enable the sharing of secret functions among multiple parties, with notable applications in anonymous communication and privacy-preserving machine learning. While two-party schemes offer logarithmic key sizes, multi-party schemes remain less practical due to significantly larger keys. Although several approaches have been proposed to improve multi-party schemes, a significant efficiency gap remains between the two-party and multi-party settings.
Our work introduces noisy FSS: a relaxation of FSS preserving the standard privacy guarantees but relaxing the correctness definition by allowing a small amount of noise in the output. We formally define noisy FSS and show how the noise introduced by the scheme can be leveraged to provide differentially private outputs in statistics applications.
To demonstrate the benefits of this relaxation, we adapt a scheme proposed by Corrigan-Gibbs et al. (S&P'15). While their scheme provides the smallest key sizes among multi-party schemes, it does not support some applications, notably in statistics, due to its non-linear share decoding. In contrast, recent works such as Goel et al. (CRYPTO'25) have larger keys but support all FSS applications. Our noisy adapted scheme offers the best of both worlds, matching the best key sizes while providing the properties necessary for statistics applications.
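To make the relaxation concrete, here is a toy Python sketch (an assumption-laden illustration, not the adapted Corrigan-Gibbs-style scheme): a point function is shared additively with linear-size keys, and a small bounded perturbation of the reconstructed output is what a noisy FSS scheme tolerates and can recycle for differential privacy.

```python
import random

P = 2**31 - 1   # toy modulus
DOMAIN = 16     # toy domain size

def share_point_function(index: int, parties: int):
    """Additively share the unit vector e_index among the parties.
    This is a trivial, linear-key-size FSS for point functions; real
    multi-party schemes compress these keys."""
    shares = [[random.randrange(P) for _ in range(DOMAIN)] for _ in range(parties - 1)]
    last = [(-sum(col)) % P for col in zip(*shares)]
    last[index] = (last[index] + 1) % P
    return shares + [last]

def reconstruct(shares):
    """Exact correctness: summing the shares yields e_index."""
    return [sum(col) % P for col in zip(*shares)]

def noisy_reconstruct(shares, noise_bound: int = 2):
    """Relaxed correctness: each coordinate may be off by a small amount,
    which is the slack a noisy FSS scheme is allowed to introduce."""
    return [(sum(col) + random.randint(0, noise_bound)) % P for col in zip(*shares)]
```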
Vincent Grosso, Carlos Andres Lara-Nino
Masking is one of the most widespread countermeasures against side-channel attacks. However, its integration into hardware implementations is subject to physical hazards that can undermine its security. To counter glitches, the most studied physical hazard, an effective solution is to resynchronize the signals by integrating additional hardware layers into the architecture. However, these solutions have an impact on the performance of the implementation. A solution to avoid these limitations is to use more shares to compute higher-degree functions. We study the cost of this approach, known as ($td+n$)-masking. We first derive optimal dependence structures for the creation of non-complete sharings, which allow us to obtain efficient implementations of substitution boxes. As a case study, we use these elements to create a second-order masked architecture for the PRESENT cipher. We perform multiple TVLA tests to validate our approach. Our results confirm that the strategy is efficient in terms of performance, at the cost of increased hardware resources.
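For readers unfamiliar with masking, the basic sharing idea (first-order Boolean masking, not the ($td+n$) higher-degree construction studied in the paper) can be sketched in a few lines of Python:

```python
import secrets

def share(x: int, n_shares: int = 3, width: int = 8):
    """Split x into n XOR-shares so that x = s_0 ^ s_1 ^ ... ^ s_{n-1};
    any proper subset of the shares is uniformly distributed."""
    shares = [secrets.randbelow(1 << width) for _ in range(n_shares - 1)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]

def unshare(shares):
    """Recombine the shares to recover the masked value."""
    x = 0
    for s in shares:
        x ^= s
    return x

assert unshare(share(0xA7)) == 0xA7
```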
Craig Gentry, Yongwoo Lee
We propose an efficient fully homomorphic encryption (FHE) scheme tailored for matrix arithmetic based on the Ring-Learning with Errors (RLWE) problem. The proposed scheme naturally supports matrix multiplication, addition, and Hadamard multiplication for batched matrices of various sizes over both complex numbers and integers. Encrypted matrix multiplication is reduced to four matrix multiplications of ciphertext elements, without the need for expensive operations such as slot-to-coefficient conversion or ring switching. In addition, the scheme efficiently supports matrix transformations, including general and conjugate transpositions, as well as matrix rotations: inter-matrix rotations across batched matrices and intra-matrix rotations within rows and columns. Moreover, the proposed FHE scheme can be directly combined with existing bootstrapping algorithms.
By eliminating the need for expensive operations such as repeated slot rotations and conversion between slot- and coefficient-encoding, the proposed construction achieves significant performance improvements. In our construction, encrypted multiplications of $n\times n$ matrices under slot encoding are decomposed into two parts: (1) matrix multiplication — four $n\times n$ matrix multiplications of ciphertext coefficients, and (2) key switching — with a total cost approximately 2–4 times that of Hadamard multiplication. We implemented the proposed scheme and utilized the FLINT library for the matrix multiplication component. Experimental results demonstrate that, even when leveraging highly optimized implementations, matrix multiplication remains the major cost, indicating that our construction substantially reduces auxiliary overheads and achieves strong overall efficiency.
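By analogy with ciphertext-ciphertext multiplication in RLWE schemes, where a ciphertext is a pair of ring elements and tensoring two pairs produces four cross products before key switching, the "four matrix multiplications" can be pictured with plain matrices as in the sketch below (an illustrative analogy only; the paper's actual encoding and key switching differ).

```python
import numpy as np

def ct_matmul(ct1, ct2):
    """Each 'ciphertext' is a pair (b, a) of n x n matrices. Multiplying two
    degree-1 ciphertexts yields a degree-2 result whose components come from
    four underlying matrix products; a key switch would then reduce it back
    to a pair. Purely illustrative; not the scheme from the paper."""
    b1, a1 = ct1
    b2, a2 = ct2
    d0 = b1 @ b2                 # product 1
    d1 = b1 @ a2 + a1 @ b2       # products 2 and 3
    d2 = a1 @ a2                 # product 4
    return d0, d1, d2
```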
Alessandra Dolmeta, Valeria Piscopo, Guido Masera, Maurizio Martina, Michael Hutter
This work presents a RISC-V extension for Post-Quantum Cryptography (PQC) called HORCRUX, which provides a unified Instruction-Set Extension (ISE) supporting all NIST-approved PQC algorithms. HORCRUX addresses the current fragmentation in hardware support, where existing extensions typically focus on individual algorithms or limited subsets of PQC schemes, and targets the common kernels shared across ML-KEM, ML-DSA, SLH-DSA, and HQC. To address the primary computational bottlenecks of all these algorithms (namely, modular arithmetic, matrix multiplication, and hash transformations), the extension introduces new RISC-V instructions executed by a tightly coupled coprocessor. This coprocessor requires only minimal hardware resources, making the architecture efficient for constrained devices while still providing substantial acceleration. An experimental evaluation on a Zynq UltraScale+ FPGA demonstrates speedups of up to 3× for lattice- and hash-based schemes, and over 30× for code-based schemes, while adding less than 2,900 LUTs and 400 FFs to the design. The extension’s modular structure maintains backward compatibility with standard RISC-V cores, offering a scalable solution for deploying PQC on highly constrained embedded systems.
Xiaohan Wan, Mingqiang Wang, Xiaopeng Cheng, Haiyang Xue, Qi Zhang
Multi-key fully homomorphic encryption (MKFHE) extends the capability of fully homomorphic encryption by enabling homomorphic computations on ciphertexts encrypted under different keys. Multi-key blind rotation is the core and most computationally intensive component of MKFHE. The NTRU-based multi-key blind rotation proposed by Xiang et al. (ASIACRYPT 2024) has the potential to achieve smaller key sizes, faster blind rotation, and lower noise compared to its RLWE-based counterpart. However, while the multi-key blind rotation in the RLWE-based scheme proposed by Kwak et al. (PKC 2024) remains compatible with their single-key counterpart, the NTRU-based versions do not.
This motivates our work to advance NTRU-based schemes in both efficiency and compatibility. We propose a novel workflow for NTRU-based multi-key blind rotation that achieves compatibility with its single-key counterpart. Our approach significantly reduces both computational and storage complexity compared to the state-of-the-art NTRU-based design, while maintaining a comparable noise magnitude. Building upon this workflow, we further construct two MKFHE schemes for bootstrapping multi-key LWE ciphertexts and multi-key matrix NTRU ciphertexts, both supporting a super-constant number of parties with respect to the ring dimension $N$. Experimental results demonstrate that our method outperforms existing NTRU-based bootstrapping for TFHE-like MKFHE in both computational efficiency and bootstrapping key size. Specifically, our 2-key gate bootstrapping takes only 26ms and requires a bootstrapping key of size 7.34MB, achieving a 3.1× speedup and a 1.9× key size reduction compared to prior NTRU-based works.
Andrej Bogdanov, Rohit Chatterjee, Yunqi Li, Prashant Nalini Vasudevan
Prange's information set algorithm is a decoding algorithm for arbitrary linear codes. It decodes corrupted codewords of any $\mathbb{F}_2$-linear code $C$ of message length $n$ up to relative error rate $O(\log n / n)$ in $\mathsf{poly}(n)$ time. We show that the error rate can be improved to $O((\log n)^2 / n)$, provided: (1) the decoder has access to a polynomial-length advice string that depends on $C$ only, and (2) $C$ is $n^{-\Omega(1)}$-balanced.
As a consequence we improve the error tolerance in decoding random linear codes if inefficient preprocessing of the code is allowed. This reveals potential vulnerabilities in cryptographic applications of Learning Noisy Parities with low noise rate.
Our main technical result is that the Hamming weight of $Hw$, where $H$ is a random sample of *short dual* codewords, measures the proximity of a word $w$ to the code in the regime of interest. Given such $H$ as advice, our algorithm corrects errors by locally minimizing this measure. We show that for most codes, the error rate tolerated by our decoder is asymptotically optimal among all algorithms whose decision is based on thresholding $Hw$ for an arbitrary polynomial-size advice matrix $H$.
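A minimal sketch of the "locally minimize the weight of $Hw$" idea, with hypothetical parameters and a greedy bit-flipping rule standing in for the paper's local minimization:

```python
import numpy as np

def decode_local_min(H: np.ndarray, w: np.ndarray, max_iters: int = 100) -> np.ndarray:
    """H holds short dual codewords as rows (the advice), so wt(H @ w mod 2)
    measures the proximity of w to the code; greedily flip the bit that
    decreases this measure most until no single flip helps."""
    w = w.copy() % 2
    weight = lambda v: int(np.sum((H @ v) % 2))
    for _ in range(max_iters):
        cur = weight(w)
        if cur == 0:
            break
        best_i, best = None, cur
        for i in range(len(w)):
            w[i] ^= 1
            cand = weight(w)
            w[i] ^= 1
            if cand < best:
                best_i, best = i, cand
        if best_i is None:   # local minimum reached
            break
        w[best_i] ^= 1
    return w
```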
Shichang Wang, Meicheng Liu, Shiqi Hou, Dongdai Lin
At CHES 2017, Banik et al. proposed a lightweight block cipher GIFT consisting of two versions, GIFT-64 and GIFT-128. Recently, many authenticated encryption schemes have adopted GIFT-128 as their underlying primitive, such as GIFT-COFB and HyENA. To promote a comprehensive understanding of the soundness of these designs, we evaluate their security against differential-linear cryptanalysis.
For this, automatic tools have been developed to search for differential-linear approximations of S-box-based ciphers. With the assistance of these automatic tools, we find 13-round differential-linear approximations for GIFT-COFB and HyENA. Based on the distinguishers, 18-round key-recovery attacks are given for the message processing phase and initialization phase of both ciphers. Moreover, the resistance of GIFT-64/128 against differential-linear cryptanalysis is also evaluated. We also find 12-round and 17-round differential-linear approximations for GIFT-64 and GIFT-128, respectively, which lead to 18-round and 19-round key-recovery attacks. Here, we stress that our attacks do not threaten the security of these ciphers.
Taiyu Wang, Cong Zhang, Hong-Sheng Zhou, Xin Wang, Pengfei Chen, Wenli Wang, Kui Ren, Chun Chen
The generic group model (GGM) is fundamental for evaluating the feasibility and limitations of group-based cryptosystems. Two prominent versions of the GGM exist in the literature: Shoup's GGM and Maurer's GGM. Zhandry (CRYPTO 2022) points out inherent limitations in Maurer's GGM by demonstrating that several textbook cryptographic primitives, which are provably secure in Shoup's GGM, cannot be proven secure in Maurer's model.
In this work, we further investigate Shoup's GGM and identify novel limitations that have been previously overlooked. Specifically, to prevent generic algorithms from generating valid group elements without querying the oracle, the model typically employs sufficiently large encoding lengths. This leads to sparse encodings, a setting referred to as the sparse generic group model (sparse GGM). We emphasize that this sparseness introduces several constraints:
--Groups with AE and Black-Box Separation: Shoup's GGM is typically instantiated with elliptic curve groups, which admit admissible encodings (AE)—functions mapping from Z_N to elliptic curve points. We establish a black-box separation, showing that the sparse GGM fails to capture cryptographic groups that are both (1) computational Diffie-Hellman (CDH) secure and (2) compatible with admissible encodings.
--Comparison with EC-GGM: We examine the relationship between the sparse GGM and the Elliptic Curve Generic Group Model (EC-GGM) introduced by Groth and Shoup (EUROCRYPT 2022), which inherently yields CDH-secure groups with admissible encodings. Within the framework of indifferentiability, we prove that EC-GGM is strictly stronger than sparse GGM.
--Dense Groups and Black-Box Separation: We revisit groups with dense encodings and establish a black-box separation between CDH-secure dense groups and the sparse GGM.
--Extension to Bilinear Settings: Our results naturally extend to the sparse Generic Bilinear Group Model (GBM), demonstrating that the aforementioned constraints still hold.
In conclusion, our findings indicate that both feasibility and impossibility results in Shoup's GGM should be reinterpreted in a fine-grained manner, encouraging further exploration of cryptographic constructions and black-box separations in EC-GGM or dense GGM.
Agha Aghayev, Nour-eddine Rahmani
Asymmetric cryptographic constructions built upon number-theoretic hardness assumptions will become insecure against large-scale quantum computers due to Shor's quantum algorithm. Hence, the adoption of quantum-resistant cryptosystems is a major task. Digital signatures are a fundamental primitive in numerous applications. Recently, a new approach by Nguyen et al. [9] has claimed post-quantum security by basing the signature algorithm's security on a variant of the discrete logarithm problem. In this paper, we present a cryptanalysis of this construction and demonstrate a practical forgery attack that allows generating an unlimited number of valid signatures, without access to a signing oracle.
Jonas Schupp, Marco Gianvecchio, Alessandro Barenghi, Patrick Karl, Gerardo Pelosi, Georg Sigl
Post-quantum cryptosystems are currently attracting significant research efforts due to the continuous improvements in quantum computing technologies, which led the National Institute of Standards and Technology (NIST) to open standardization competitions to foster proposals and public scrutiny of cryptosystems and digital signatures. While NIST has chosen, after four selection rounds, three digital signature algorithms, it has also opened a new selection process, as the chosen candidates either relied only on lattice-based computationally hard problems or had unsatisfactory performance figures. In this work, we propose two optimized implementations of the Codes and Restricted Objects Signature Scheme (CROSS) targeting the Cortex-M4 platform. One implementation targets the minimal possible stack size, while the other trades some memory space for performance, using vectorization for some performance-critical arithmetic operations. We show that all parameter sets fit within at most 24 kB of stack, which corresponds to a reduction by a factor of 15 to 45 with respect to the reference implementation. The memory footprint of our implementation, taking the size of the binary and the signature into account as well, is less than 128 kB. We additionally outline different stack reduction options which allow for a fine-grained trade-off between the memory footprint and the performance of the algorithm. Notably, we also show that our memory optimizations alone have no significant impact on the signature verification of CROSS, while we even achieve a speed-up factor of up to 1.7 when taking the stack and speed optimizations into account.
Diamante Simone CRESCENZO, Emanuele VALEA, Alberto BOSIO
Conventional cryptographic algorithms rely on hard mathematical problems to ensure an appropriate level of security. However, with the advent of quantum computing, classical cryptographic algorithms are expected to become vulnerable. For this reason, Post-Quantum Cryptography (PQC) algorithms have emerged as a response, being designed to resist quantum attacks. Most PQC algorithms rely on the Learning With Errors (LWE) problem, where generating pseudo-random controlled errors is crucial. A well-known solution is the use of hash functions followed by error samplers, implemented according to specific error distributions, whose implementation is challenging. This paper provides a proof of concept demonstrating how Approximate Computing (AxC) can be exploited in LWE-based cryptographic algorithms to alleviate this implementation bottleneck. The main idea is to use AxC circuits to run some of the algorithm's operations, introducing the required error for free thanks to the approximation. Our key contribution is demonstrating how AxC techniques can be effectively applied to LWE-based algorithms, highlighting a novel approach to generating and introducing the error. This concept has proven effective in an approximate implementation of the FrodoKEM algorithm, where we achieve a 50.3% reduction in the need for Gaussian sampling. Additionally, we observe a performance improvement of 2.19%, which further supports the feasibility of this approach. Overall, this work introduces and validates a new design direction for LWE-based cryptography through AxC, opening the way for further research.
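One classic approximate-computing building block that injects such "free" errors is the lower-part OR adder (LOA), sketched below in Python; whether FrodoKEM's pipeline uses this particular adder is an assumption made only for illustration.

```python
def loa_add(a: int, b: int, approx_bits: int = 4, width: int = 16) -> int:
    """Lower-part OR adder: the low `approx_bits` bits are OR-ed instead of
    added (cheap in hardware, drops the low carry), while the high bits are
    added exactly. The small, data-dependent deviation from a + b is exactly
    the kind of error an LWE-based scheme needs anyway."""
    mask_lo = (1 << approx_bits) - 1
    mask_all = (1 << width) - 1
    lo = (a & mask_lo) | (b & mask_lo)
    hi = ((a >> approx_bits) + (b >> approx_bits)) << approx_bits
    return (hi | lo) & mask_all

print(loa_add(1234, 5678), 1234 + 5678)   # 6910 vs 6912
```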
Dimitri Koshelev
This article considers batch hashing to elliptic curves. This kind of hash function has found numerous applications in elliptic curve cryptography. In practice, a hash-to-curve function is often evaluated by the same entity at many different inputs at once. It turns out that, under certain mild conditions, such simultaneous evaluation can be carried out several times faster than separate evaluations. In this regard, the article introduces a new class of elliptic curves over finite fields, more appropriate for multiple hashing to them. Moreover, two explicit hashing-friendly Montgomery/twisted Edwards curves (of $\approx 128$ security bits) have been generated: one of CM discriminant $-7$, i.e., a GLV-friendly curve, and one of huge CM discriminant, i.e., a CM-secure curve. The new elliptic curves are intentionally covered by so-called Klein's and Bring's curves of geometric genera $3$ and $4$, respectively. The latter are well studied in various algebraic geometry contexts, although they have not yet been (reasonably) applied in cryptography to the author's knowledge. Such a mathematical complication is justified, since conventional curves (from existing standards or of $j$-invariants $0$, $1728$) are seemingly less efficient for batch hashing.
Debranjan Pal, Anubhab Baksi, Surajit Mandal, Santanu Sarkar
A common way to perform classical cryptanalysis on symmetric-key ciphers is to encode the problem as an instance of the Mixed Integer Linear Programming (MILP) problem and then run the instance with an efficient solver. For this purpose, it is essential to model the components in a way that is compatible with the MILP formulation while preserving the characteristics of the cipher. In this work, we address the problem of efficiently encoding a substitution box (SBox for short). More specifically, we consider the evaluation tables, namely, the Difference Distribution Table (DDT) used to model differential cryptanalysis, the Linear Approximation Table (LAT) used to model linear cryptanalysis, the Division Property Table (DPT) used to model integral cryptanalysis, and the Boomerang Connectivity Table (BCT) used to model boomerang cryptanalysis.
At the current stage, the only model proposed/used in the literature is to look for the non-zero entries in the evaluation table and encode those (this ultimately leads to the minimum number of active SBoxes for the target cipher). In our first contribution, we experiment with the complementary concept of using the zero entries (i.e., treating the zero entries as nonzero entries and vice versa). In our second contribution, we observe that the top row and column of each table (which are always taken into consideration) are actually redundant, as those are inherently taken care of through additional constraints at another level (which we call the `secondary structure').
We show the efficacy of our proposals by furnishing results on the evaluation tables on multiple SBoxes. In fact, the first proposal reduces the number of constraints for certain SBox evaluation tables (depending on the properties of the table). The second proposal, on the other hand, always reduces the number of constraints from the state-of-the-art results.
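As a concrete reference point for the tables being modeled, the following Python sketch computes the DDT of the 4-bit PRESENT S-box and counts its zero entries, i.e., the impossible transitions that the first proposal encodes instead of the possible ones (the S-box choice is for illustration only):

```python
# 4-bit PRESENT S-box
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD, 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    """Difference Distribution Table: DDT[dx][dy] counts inputs x with
    S(x) ^ S(x ^ dx) == dy."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            table[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return table

T = ddt(SBOX)
zeros = [(dx, dy) for dx in range(16) for dy in range(16) if T[dx][dy] == 0]
print(f"{len(zeros)} impossible (dx, dy) transitions out of 256")
```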
Benedikt Bünz, Kevin Choi, Chelsea Komlo
In this work, we present Golden, a non-interactive Distributed Key Generation (DKG) protocol. The core innovation of Golden is how it achieves public verifiability in a lightweight manner, allowing all participants to non-interactively verify that all other participants followed the protocol correctly. For this reason, Golden can be performed with only one round of (broadcast) communication. Non-interactive DKGs are important for distributed applications; as parties may go offline at any moment, reducing rounds of communication is a desirable feature. Golden outputs Shamir secret shares of a field element sk ∈ Zp to all participants, and a public key PK = g^sk that is a discrete-logarithm commitment to sk. Further, the security of Golden requires only the hardness of discrete-logarithm assumptions, and so it can be used over any elliptic curve where these assumptions hold.
Golden is more efficient than prior related schemes in both bandwidth and computation. For 50 participants, Golden requires only 223 kb bandwidth and 13.5 seconds of total runtime for each participant, in comparison to ElGamal-based non-interactive DKG, which requires 27.8 MB bandwidth and 40.5 seconds runtime per participant.
As a building block, we define a new exponent Verifiable Random Function (eVRF). Our eVRF uses a non-interactive key exchange (NIKE) as a building block to derive a Diffie-Hellman shared secret key, and proves correctness of this key with respect to the corresponding Diffie-Hellman public keys. Within Golden, participants use this eVRF in a pairwise manner to generate a one-time pad to encrypt Shamir secret shares to their respective recipients while ensuring public verifiability. As such, Golden avoids the use of public-key encryption schemes such as ElGamal, Paillier, or class groups, departing from prior schemes in the literature. Finally, our eVRF may be of independent interest to settings where publicly-verifiable encryption is desirable.
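A minimal sketch of the pairwise pad idea, with toy parameters and without the eVRF correctness proof that makes Golden publicly verifiable (all names and group parameters here are hypothetical):

```python
import hashlib, secrets

P = 2**255 - 19    # toy prime, doubling as share field and DH modulus
G = 5              # toy generator (no security claims)

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def pairwise_pad(my_sk: int, their_pk: int, context: bytes) -> int:
    """Hash the Diffie-Hellman shared secret into a one-time pad."""
    shared = pow(their_pk, my_sk, P)
    digest = hashlib.sha256(shared.to_bytes(32, "big") + context).digest()
    return int.from_bytes(digest, "big") % P

def encrypt_share(shamir_share: int, pad: int) -> int:
    return (shamir_share + pad) % P

def decrypt_share(ct: int, pad: int) -> int:
    return (ct - pad) % P

# Both endpoints derive the same pad, so the recipient can recover its share.
a_sk, a_pk = keygen()
b_sk, b_pk = keygen()
ct = encrypt_share(123456789, pairwise_pad(a_sk, b_pk, b"round-1"))
assert decrypt_share(ct, pairwise_pad(b_sk, a_pk, b"round-1")) == 123456789
```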
19 October 2025
Felix Uhle, Nicolai Müller, Thorben Moos, Philipp Günther, Amir Moradi
Fault injection attacks target cryptographic primitives implemented in hardware by deliberately inducing faults, leading to abnormal behavior that can be exploited to extract sensitive information. To mitigate these threats, practical and low-overhead countermeasures, whether at the algorithmic or physical level, are essential. However, the real-world effectiveness of these countermeasures remains uncertain, as it is not always clear whether, or to what extent, their underlying security assumptions hold in practice. Therefore, a thorough evaluation under realistic attack scenarios, including recent techniques such as Electromagnetic Fault Injection (EMFI), is crucial. In this work, we demonstrate the resistance of a protected real-world target chip against EMFI. Specifically, our fabricated 65 nm CMOS ASIC employs concurrent error correction based on the techniques described in Impeccable Circuits II as an algorithm-level countermeasure. At the physical level, the chip integrates multiple coils of varying sizes and positions that serve as sensors for electromagnetic fields. We employ a practical and affordable attack setup featuring a commercial fault-injection probe mounted on an XYZ stage for precise positioning over the ASIC. This setup allows us to investigate the effects of various attack parameters, including the probe’s position, pulse polarity, and voltage level. Our results highlight that the coils serve as a lightweight and effective countermeasure for the practical detection of EMFI attempts. In contrast, for concurrent error correction, a gap between theory and practice emerges: the protection overhead actually makes such designs more susceptible to EMFI in real-world scenarios.
18 October 2025
Nils Fleischhacker, Guilherme Rito
Deniable authentication is a very desirable guarantee for secure messaging — and in particular Off-The-Record messaging: it allows a sender Alice to authenticate her messages while retaining plausible deniability on whether she authenticated any of her messages.
While deniable authentication is not achievable in standard Public-Key Infrastructures, Dodis et al. (TCC ’09) have shown that it is achievable in so-called Key-Registration with Knowledge setups. These guarantee: 1. dishonest parties’ key-pairs are honestly generated; and 2. dishonest parties have access to their secret keys. However, these are expensive assumptions, and it is unclear how to securely realize them in practice.
In this paper we study the weakest setup requirements that are sufficient to enable deniable authentication. One of our main findings is that deniable authentication does require an expensive — but not necessarily impractical — Key-Registration procedure. Roughly, this procedure must guarantee that if a user’s key registration is successful, then the user can extract by themselves valid secret keys from their interaction with the registration authority. We show this setup is sufficient by capturing it via an appropriate security model and then proving the anonymity of a ring signature based on it. On the other hand, we explain why this setup seems inherently necessary by listing a series of attacks that void deniability as soon as a user manages to successfully register a public key for which it can convincingly claim it does not know a valid (corresponding) secret key.
The requirements we identify for key-registration protocols are expensive, but not necessarily impractical. Our second main contribution is showing how to amortize the per-user Key-Registration cost, which brings deniable authentication guarantees a step closer to practicality.
17 October 2025
Alper Çakan, Vipul Goyal, Fuyuki Kitagawa, Ryo Nishimaki, Takashi Yamakawa
Unclonable cryptography leverages the quantum no-cloning principle to copy-protect cryptographic functionalities. While most existing works address the basic single-copy security, the stronger notion of multi-copy security remains largely unexplored.
We introduce a generic compiler that upgrades collusion-resistant unclonable primitives to achieve multi-copy security, assuming only one-way functions. Using this framework, we obtain the first multi-copy secure constructions of public-key quantum money (termed quantum coins), single-decryptor encryption, unclonable encryption, and more. We also introduce an extended notion of quantum coins, called upgradable quantum coins, which allow weak (almost-public) verification under weaker assumptions and can be upgraded to full public verification under stronger assumptions by the bank simply publishing additional classical information.
Along the way, we give a generic compiler that upgrades single-copy secure single-decryptor encryption to a collusion-resistant one, assuming the existence of functional encryption, and construct the first multi-challenge secure unclonable encryption scheme, which we believe are of independent interest.
Aurélien Boeuf, Léo Perrin
Zero-Knowledge (ZK) protocols rely internally on hash functions for their security arguments. However, the hash functions that are the most efficient in this context differ substantially from e.g. SHA-3: their round function $R$ must enable an efficient arithmetization of its verification. In practice, it means that verifying whether $y = R(x)$ involves as few finite field multiplications as possible. In turn, this design requirement implies a greater vulnerability to algebraic attacks. In fact, improvements of those attacks have proved devastating, and imply the need to completely rethink the methods used to ensure security against them. In this paper, we show that it is possible to build a simple yet efficient security argument based on a precise estimate of the so-called “algebraic degree” of a system of equations. Furthermore, we show that the increase of this quantity across rounds is tightly connected to the cost of the hash function in two different arithmetizations, namely AIR and R1CS. We precisely quantify this relation by introducing ALgebraic Figures Of Merit (ALFOMs) that capture how efficient a specific primitive (and in fact its round function) is at increasing the security per unit of cost. This new insight allows us to better understand sometimes puzzling performance differences between state-of-the-art hash functions in the R1CS and AIR cases, and to provide a fair and simple comparison of their round functions in this context. Furthermore, we present a new family of round functions we call the Moirai, which allow us to explore what a round function providing an optimal performance/security trade-off could look like.
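The generic bound underlying such degree-growth arguments (a standard estimate, not the paper's refined ALFOM analysis) is the following:

```latex
% If each round R has algebraic degree at most d, then after r rounds
\[
  \deg\!\bigl(R^{(r)}\bigr) \;\le\; d^{\,r},
\]
% so reaching a target degree D requires at least ceil(log_d D) rounds,
% while the AIR/R1CS cost grows roughly with the number of field
% multiplications performed across those rounds.
```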
Leandro Rometsch, Philipp-Florens Lehwalder, Anh-Tu Hoang, Dominik Kaaser, Stefan Schulte
Self-Sovereign Identity (SSI) frameworks enable individuals to receive and present digital credentials in a user-controlled way. Revocation mechanisms ensure that invalid or withdrawn credentials cannot be misused. These revocation mechanisms must be scalable (e.g., at national scale) and preserve core SSI principles such as privacy, user control, and interoperability. Achieving both is hard, and finding a suitable trade-off remains a key challenge in SSI research. This paper introduces UPPR, a revocation mechanism for One-Show Verifiable Credentials (oVCs) and unlinkable Anonymous Credentials (ACs). Revocations are managed using per-credential Verifiable Random Function (VRF) tokens, which are published in a Bloom filter cascade on a blockchain. Holders prove non-revocation via a VRF proof for oVCs or a single Zero-Knowledge Proof for ACs. The construction prevents revocation status tracking, allows holders to stay offline, and hides issuer revocation behavior. We analyze the privacy properties of UPPR and provide a prototype implementation on Ethereum. Our implementation enables off-chain verification at no cost. On-chain checks cost 0.56-0.84 USD, while issuers pay only 0.00002-0.00005 USD per credential to refresh the revocation state.
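A minimal sketch of the Bloom-filter membership test underlying the published revocation data (single filter only; the cascade stacks further filters to cancel false positives, and all parameters here are illustrative):

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1 << 20, k_hashes: int = 7):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        # k index positions derived from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# The issuer inserts the per-credential VRF token of every revoked credential;
# a verifier tests membership against the published filter.
revoked = BloomFilter()
revoked.add(b"vrf-token-of-revoked-credential")
print(b"vrf-token-of-revoked-credential" in revoked)  # True
print(b"some-other-token" in revoked)                 # False with high probability
```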