IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
17 June 2025
Yi Jin, Yuansheng Pan, Xiaoou He, Boru Gong, Jintai Ding
Multivariate public key cryptosystems represent a promising family of post-quantum cryptographic schemes. Extensive research has demonstrated that multivariate polynomials are particularly well-suited for constructing digital signature schemes. Notably, the Unbalanced Oil and Vinegar (UOV) signature scheme and its variants have emerged as leading candidates in NIST's recent call for additional digital signature proposals.
Security analyses of UOV variants are typically categorized into key-recovery attacks and forgery attacks, with the XL algorithm serving as one of the most significant methods for mounting key-recovery attacks. Recently, Lars Ran introduced a new attack against UOV variants that can be seen as an XL attack using exterior algebra; nevertheless, this new attack is applicable only when the underlying (finite) field of the UOV variant is of characteristic $2$.
In this work, we address this limitation by proposing a unified framework. Specifically, we first propose the notion of a reduced symmetric algebra over any field, whose strength can be gleaned from the fact that it is essentially the symmetric algebra when the characteristic $p$ of the underlying field is $0$ and the exterior algebra when $p=2$. Based upon the reduced symmetric algebra, we then propose a new XL attack against all UOV variants. Our XL attack is equivalent to Lars Ran's attack for those UOV variants whose underlying fields are of characteristic $p=2$; more importantly, it can also be applied to analyze UOV variants with odd characteristic, such as QR-UOV, submitted to NIST's PQC Standardization Project. It should be noted that for the 12 recommended QR-UOV instances, our XL attack does not outperform existing key-recovery attacks; nevertheless, it is the optimal key-recovery attack for some specific UOV instances with odd characteristic.
Shanxiang Lyu, Ling Liu, Cong Ling
The Learning Parity with Noise (LPN) problem has become a cornerstone for building lightweight, post-quantum secure encryption schemes. Despite its widespread adoption, LPN-based constructions suffer from a fundamental efficiency limitation: the essential noise term that provides security simultaneously requires error correction coding, leading to bandwidth overhead. We introduce a variant of LPN termed Learning Parity with Quantization (LPQ). While maintaining the ``learning from noisy equations'' framework, LPQ generates Bernoulli-like noise from code-aided quantization and enables simultaneous security and compression. Formally, the $\text{LPQ}_{N,n,\mathcal{C}}$ problem challenges adversaries to distinguish the triplet $(\mathbf{A}, Q_{\mathcal{C}}(\mathbf{A}\mathbf{s} \oplus \mathbf{u}), \mathbf{u})$ from uniform, where $Q_{\mathcal{C}}$ is a vector quantization function based on an $(N,K)$ code $\mathcal{C}$, and $\mathbf{u}$ serves as a public dither. We establish the hardness of LPQ through a tight reduction from the LPN problem, maintaining equivalent security guarantees. We demonstrate LPQ’s practical efficacy through a full rate (i.e., rate-1) symmetric key encryption scheme, where LPQ combined with an extendable output function (XOF) achieves optimal ciphertext efficiency ($|\text{ct}| = |\text{pt}|$).
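The shape of an LPQ sample can be illustrated with a toy quantizer. The repetition code, block size, and all helper names below are our own illustrative choices, not the paper's actual construction of $Q_{\mathcal{C}}$:

```python
import secrets

def rand_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

def matvec(A, s):
    # matrix-vector product over GF(2)
    return [sum(a & b for a, b in zip(row, s)) % 2 for row in A]

def quantize_repetition(y, block):
    """Round each length-`block` chunk of y to the nearest codeword of
    the binary repetition code (all-0 or all-1) by majority vote.  The
    rounding residual plays the role Bernoulli noise plays in LPN."""
    out = []
    for i in range(0, len(y), block):
        chunk = y[i:i + block]
        bit = 1 if 2 * sum(chunk) > len(chunk) else 0
        out.extend([bit] * len(chunk))
    return out

# One LPQ-style sample (A, Q_C(A s XOR u), u) with a public dither u.
N, n, block = 12, 8, 3
A = [rand_bits(n) for _ in range(N)]
s = rand_bits(n)                  # secret
u = rand_bits(N)                  # public dither
masked = [a ^ b for a, b in zip(matvec(A, s), u)]
sample = (A, quantize_repetition(masked, block), u)
```

The distinguishing task is then to tell such triples apart from uniform ones, with hardness inherited from LPN via the paper's reduction.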
Sana Boussam, Ninon Calleja Albillos
In recent years, Deep Learning algorithms have been explored and applied to Side-Channel Analysis in order to enhance attack performance. In some cases, the proposals came without an in-depth analysis allowing one to understand the tool, its applicability scenarios, its limitations and the advantages it brings with respect to classical statistical tools. As an example, a study presented at CHES 2021 proposed a corrective iterative framework to perform an unsupervised attack which achieves 100% key-bit recovery.
In this paper we analyze the iterative framework and the datasets it was applied to. The analysis suggests a much easier and more interpretable way to both implement such an iterative framework and perform the attack using more conventional solutions, without affecting the attack's performance.
Sana Boussam, Mathieu Carbone, Benoît Gérard, Guénaël Renault, Gabriel Zaid
The benefits of using Deep Learning techniques to enhance the performance of side-channel attacks have been demonstrated over recent years.
Most of the work carried out since then focuses on discriminative models.
However, one of their major limitations is the lack of theoretical results.
Indeed, this lack of theoretical results, especially concerning the choice of neural network architecture to consider or the loss to prioritize to build an optimal model, can be problematic for both attackers and evaluators.
Recently, Zaid et al. addressed this problem by proposing a generative model that bridges conventional profiled attacks and deep learning techniques, thus providing a model that is both explicable and interpretable.
Nevertheless, the proposed model has several limitations.
Indeed, the architecture is too complex, higher-order attacks cannot be mounted and desynchronization is not handled by this model.
In this paper, we address the first limitation, namely the architecture complexity, as without a simpler model, the other limitations cannot be treated properly.
To do so, we propose a new generative model that relies on solid theoretical results.
This model is based on a conditional variational autoencoder and converges towards the optimal statistical model, i.e., it performs an optimal attack.
By building on and extending the state-of-the-art theoretical works on dimensionality reduction, we integrate into this neural network an optimal dimensionality reduction, i.e., one achieved without any loss of information.
This results in a gain of $\mathcal{O}(D)$, with $D$ the dimension of traces, compared to Zaid et al.'s neural network in terms of architecture complexity, while at the same time enhancing explainability and interpretability.
In addition, we propose a new attack strategy based on our neural network, which reduces the attack complexity of generative models from $\mathcal{O}(N)$ to $\mathcal{O}(1)$, with $N$ the number of generated traces.
We validate all our theoretical results experimentally using extensive simulations and various publicly available datasets covering symmetric and asymmetric (both pre- and post-quantum) cryptographic implementations.
Antoine Bak
This note explains a phenomenon which appeared in the cryptanalysis of the Elisabeth-4 stream cipher, a cipher optimized for Torus Fully Homomorphic Encryption (TFHE). This primitive was broken in 2023 by a linearization attack. The authors of this attack observed that the rank of the linear system they generated was lower than expected. They provided a partial explanation for it using some properties of negacyclic lookup tables (NLUTs), one of the potential building blocks of ciphers optimized for TFHE. NLUTs are defined as functions over the integers modulo 2^n such that for all x, L(x + 2^(n−1)) = −L(x). Their explanation of the rank defect of the linear system relies on the observation that the least significant bit of L(x) does not depend on the most significant bit of x, which prevents some monomials from appearing in the algebraic normal form (ANF) of the system. In this note, we prove a stronger property of the ANF of NLUTs and use it to give a full proof of their observation on the rank of the system.
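The LSB observation follows directly from the negacyclic identity, since −y ≡ y (mod 2). A minimal sketch (variable names are ours):

```python
import random

n = 6
M = 1 << n          # work modulo 2^n
half = M >> 1       # 2^(n-1)

# Build a random negacyclic lookup table: choose L freely on
# [0, 2^(n-1)) and extend via L(x + 2^(n-1)) = -L(x) mod 2^n.
L = [random.randrange(M) for _ in range(half)]
L += [(-v) % M for v in L]

# Flipping the most significant bit of x (adding 2^(n-1)) negates
# L(x) mod 2^n, which leaves the least significant bit unchanged.
for x in range(half):
    assert L[x] & 1 == L[x + half] & 1
```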
Keitaro Hashimoto, Kyosuke Yamashita, Keisuke Hara
A multi-designated verifier signature (MDVS) is a digital signature that empowers a signer to designate specific verifiers capable of verifying signatures. Notably, designated verifiers are allowed to not only verify signatures but also simulate “fake” signatures indistinguishable from real ones produced by the original signer. Since this property is useful for realizing off-the-record (i.e., deniable) communication in group settings, MDVS is attracting attention in secure messaging. Recently, Damgård et al. (TCC’20) and Chakraborty et al. (EUROCRYPT’23) have introduced new MDVS schemes, allowing a subset of designated verifiers to simulate signatures in contrast to the conventional one, which requires all designated verifiers for signature simulation. They also define a stronger notion of security for them. This work delves into this new MDVS and offers a comprehensive formalization. We identify all possible security levels of MDVS schemes in subset simulations and prove that some of them are not feasible. Furthermore, we demonstrate that MDVS schemes meeting the security notion defined by Chakraborty et al. imply IND-CCA secure public-key encryption schemes. Beyond formalization, we present new constructions of MDVS schemes in subset simulation. Notably, we introduce a new construction of strongly secure MDVS schemes based on ring signatures and public-key encryption, accompanied by a generic conversion for achieving consistency through non-interactive zero-knowledge arguments. Finally, we evaluate the efficiency of our MDVS schemes in classical and post-quantum settings, showing their practicality.
Akshit Aggarwal, Pulkit Bharti, Yang Li, Srinibas Swain
FHE-based private information retrieval (PIR) is widely used to maintain the secrecy of client queries in a client-server architecture. There are several ways to implement FHE-based PIR. Most of these approaches result in server computation overhead. Attempts to reduce the server computation overhead result in 1) fetching incorrect results, 2) leakage of queries, 3) a large number of homomorphic operations (which is time-consuming), or 4) downloading the entire dataset on the client side. In this work, we design a three-server approach in which the first server determines the nature of the dataset, the second server stores the computation results produced by the first server, and the third server stores the dataset according to the proposed novel technique, namely the restricted bin packing algorithm (RBA). The proposed three-server approach mitigates the aforementioned limitations. We implement the designed protocol using the TenSEAL library. Our protocol enables data retrieval while keeping the client's query secure.
Takuya Kojima, Masaki Morita, Hideki Takase, Hiroshi Nakamura
Side-channel attacks are increasingly recognized as a significant threat to hardware roots of trust. As a result, cryptographic module designers must ensure that their modules are resilient to such attacks before deployment. However, efficient evaluation of side-channel vulnerabilities in cryptographic implementations remains challenging. This paper introduces an open-source framework integrating FPGA designs, power measurement tools, and high-performance side-channel analysis libraries to streamline the evaluation process. The framework provides design templates for two widely used FPGA boards in the side-channel analysis community, enabling Shell-Role architecture, a modern FPGA design pattern. This shell abstraction allows designers to focus on developing cryptographic modules while utilizing standardized software tools for hardware control and power trace acquisition. Additionally, the framework includes acceleration plugins for ChipWhisperer, the leading open-source side-channel analysis platform, to enhance the performance of correlation power analysis (CPA) attacks. These plugins exploit modern many-core processors and Graphics Processing Units (GPUs) to speed up analysis significantly. To showcase the capabilities of the proposed framework, we conducted multiple case studies and highlighted significant findings that advance side-channel research. Furthermore, we compare our CPA plugins with existing tools and show that our plugins achieve up to 8.60x speedup over the state-of-the-art CPA tools.
Valerio Cini, Russell W. F. Lai, Ivy K. Y. Woo
Indistinguishability obfuscation (iO) renders a program unintelligible without altering its functionality and is a powerful cryptographic primitive that captures the power of most known primitives. Recent breakthroughs have successfully constructed iO from well-founded computational assumptions, yet these constructions are unfortunately insecure against quantum adversaries. In the search for post-quantum secure iO, a line of research investigates constructions from fully homomorphic encryption (FHE) and tailored decryption hint release mechanisms. Proposals in this line mainly differ in their designs of decryption hints, yet all known attempts either cannot be proven from a self-contained computational assumption, or are based on novel lattice assumptions which are subsequently cryptanalysed.
In this work, we propose a new plausibly post-quantum secure construction of iO by designing a new mechanism for releasing decryption hints. Unlike prior attempts, our decryption hints follow a public Gaussian distribution subject to decryption correctness constraints and are therefore in a sense as random as they could be. To generate such hints efficiently, we develop a general-purpose tool called primal lattice trapdoors, which allow sampling trapdoored matrices whose Learning with Errors (LWE) secret can be equivocated. We prove the security of our primal lattice trapdoors construction from the NTRU assumption. The security of the iO construction is then argued, along with other standard lattice assumptions, via a new Equivocal LWE assumption, for which we provide evidence for plausibility and identify potential targets for further cryptanalysis.
Qian Lu, Yansong Feng, Yanbin Pan
At CRYPTO 2020, Dachman-Soled et al. introduced a framework to analyze the security loss of Learning with Errors (LWE), which enables the incremental integration of leaked hints into lattice-based attacks. Later, Nowakowski and May at ASIACRYPT 2023 proposed a novel method capable of integrating and combining an arbitrary number of both perfect and modular hints for the LWE secret within a unified framework, which achieves better efficiency in constructing the lattice basis and makes the attacks more practical. In this paper, we consider solving LWE with independent hints about both the secret and the errors. First, we introduce a novel approach to embed the hints for the secret into the LWE lattice by simple matrix multiplication instead of the LLL reduction used in Nowakowski and May's attack, which further reduces the time complexity of constructing the lattice basis. For example, given 234 perfect hints about CRYSTALS-KYBER 512, our method reduces the running time from 3 hours to 0.35 hours. Second, we show how to embed the hints about the errors into the obtained lattice basis.
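The basic algebraic fact a perfect hint exploits is that knowing ⟨v, s⟩ = l (mod q) lets one coordinate of the secret be expressed in terms of the others, shrinking the instance by one dimension. A toy illustration with made-up numbers (this is the underlying linear algebra, not the paper's lattice embedding):

```python
q = 97
s = [5, 11, 23]          # secret (unknown to the attacker)
v = [2, 7, 5]            # hint vector, with v[2] invertible mod q
l = sum(a * b for a, b in zip(v, s)) % q   # the perfect hint <v, s> = l

# Solve the hint equation for s[2]:
#   s[2] = v[2]^(-1) * (l - v[0]*s[0] - v[1]*s[1])  (mod q)
inv = pow(v[2], -1, q)
recovered = inv * (l - v[0] * s[0] - v[1] * s[1]) % q
assert recovered == s[2]
```

Each independent hint of this form removes one unknown; the paper's contribution is performing the corresponding basis update by matrix multiplication rather than lattice reduction.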
Yusuke Naito, Yu Sasaki, Takeshi Sugawara
Constructing a committing authenticated encryption (AE) satisfying the CMT-4 security notion is an ongoing research challenge. We propose a new mode KIVR, a black-box conversion for adding CMT-4 security to existing AEs. KIVR is a generalization of Hash-then-Enc (HtE) [Bellare and Hoang, EUROCRYPT 2022] and uses a collision-resistant hash function to generate an initial value (or nonce) and a mask for redundant bits, in addition to a temporary key. We obtain a general bound r/2 + tag-col with r-bit redundancy for a large class of CTR-based AEs, where tag-col is the security against tag-collision attacks. Unlike HtE, the security of KIVR increases linearly with r, achieving beyond-birthday-bound security. With a t-bit tag, tag-col lies in 0 ≤ tag-col ≤ t/2 depending on the target AE. We set tag-col = 0 for GCM, GCM-SIV, and CCM, and the corresponding bound r/2 is tight for GCM and GCM-SIV. With CTR-HMAC, tag-col = t/2, and the bound (r + t)/2 is tight.
16 June 2025
Eshan Chattopadhyay, Jesse Goodman
Given a sequence of $N$ independent sources $\mathbf{X}_1,\mathbf{X}_2,\dots,\mathbf{X}_N\sim\{0,1\}^n$, how many of them must be good (i.e., contain some min-entropy) in order to extract a uniformly random string? This question was first raised by Chattopadhyay, Goodman, Goyal and Li (STOC '20), motivated by applications in cryptography, distributed computing, and the unreliable nature of real-world sources of randomness. In their paper, they showed how to construct explicit low-error extractors for just $K \geq N^{1/2}$ good sources of polylogarithmic min-entropy. In a follow-up, Chattopadhyay and Goodman improved the number of good sources required to just $K \geq N^{0.01}$ (FOCS '21). In this paper, we finally achieve $K=3$.
Our key ingredient is a near-optimal explicit construction of a new pseudorandom primitive, called a leakage-resilient extractor (LRE) against number-on-forehead (NOF) protocols. Our LRE can be viewed as a significantly more robust version of Li's low-error three-source extractor (FOCS '15), and resolves an open question put forth by Kumar, Meka, and Sahai (FOCS '19) and Chattopadhyay, Goodman, Goyal, Kumar, Li, Meka, and Zuckerman (FOCS '20). Our LRE construction is based on a simple new connection we discover between multiparty communication complexity and non-malleable extractors, which shows that such extractors exhibit strong average-case lower bounds against NOF protocols.
Riddhi Ghosal, Ilan Komargodski, Brent Waters
Understanding the minimal assumptions necessary for constructing non-interactive zero-knowledge arguments (NIZKs) for NP, and placing them within the hierarchy of cryptographic primitives, has been a central goal in cryptography. Unfortunately, there are very few examples of ``generic'' constructions of NIZKs or any of their natural relaxations.
In this work, we consider the relaxation of NIZKs to the designated-verifier model (DV-NIZK) and present a new framework for constructing (reusable) DV-NIZKs for NP generically from lossy trapdoor functions and PRFs computable by polynomial-size branching programs (a class that includes NC1). Previous ``generic'' constructions of DV-NIZK for NP from standard primitives relied either on (doubly-enhanced) trapdoor permutations or on a public-key encryption scheme plus a KDM-secure secret key encryption scheme.
Notably, our DV-NIZK framework achieves statistical zero-knowledge. To our knowledge, this is the first DV-NIZK construction from any ``generic" standard assumption with statistical zero-knowledge that does not already yield a NIZK.
A key technical component of our construction is an efficient, unconditionally secure secret sharing scheme for non-monotone functions with randomness recovery for all polynomial-size branching programs. As an independent contribution we present an incomparable randomness recoverable (monotone) secret sharing for NC1 in a model with trusted setup that guarantees computational privacy assuming one-way functions. We believe that these primitives will be useful in related contexts in the future.
Christian Cachin, François-Xavier Wicht
Anonymous cryptocurrencies have attracted much attention over the past decade, yet ensuring both integrity and privacy in an open system remains challenging. Their transactions preserve privacy because they do not reveal on which earlier transactions they depend, specifically which outputs of previous transactions are spent. However, achieving privacy imposes a significant storage overhead due to two current limitations. First, the set of potentially unspent outputs of transactions grows indefinitely because the design cryptographically hides which ones have been consumed; and, second, additional data must be stored for each spent output to ensure integrity, that is, to prevent it from being spent again. We introduce a privacy-preserving payment scheme that mitigates these issues by randomly partitioning unspent outputs into fixed-size bins. Once a bin has been referenced in as many transactions as its size, it is pruned from the ledger. This approach reduces storage overhead while preserving privacy. We first highlight the scalability benefits of using smaller untraceability sets instead of considering the entire set of outputs, as done in several privacy-preserving cryptocurrencies. We then formalize the security and privacy notions required for a scalable, privacy-preserving payment system and analyze how randomized partitioning plays a key role in both untraceability and scalability. To instantiate our approach, we provide constructions based on Merkle trees and one based on cryptographic accumulators. We finally show the storage benefits of our scheme and analyze its resilience against large-scale flooding attacks using empirical transaction data.
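The bin-and-prune bookkeeping can be sketched in a few lines. The sequential bin assignment and all names below are simplifications of ours (the scheme assigns outputs to bins at random):

```python
from collections import defaultdict

BIN_SIZE = 4

class Ledger:
    """Toy model of fixed-size bins: outputs are grouped into bins,
    each spending transaction references a whole bin (its
    untraceability set), and once a bin has been referenced in as
    many transactions as its size, it is pruned from the ledger."""

    def __init__(self):
        self.bins = defaultdict(list)   # bin id -> outputs
        self.refs = defaultdict(int)    # bin id -> reference count
        self.open_bin = 0               # bin currently being filled

    def add_output(self, out):
        self.bins[self.open_bin].append(out)
        if len(self.bins[self.open_bin]) == BIN_SIZE:
            self.open_bin += 1          # bin full, open the next one

    def spend_from(self, bin_id):
        self.refs[bin_id] += 1
        if self.refs[bin_id] == BIN_SIZE:
            del self.bins[bin_id]       # prune: storage reclaimed
            return True
        return False
```

Since a full bin holds BIN_SIZE outputs and is referenced by exactly BIN_SIZE transactions before pruning, every output is eventually spent while the stored set stays bounded.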
Ritam Bhaumik, Avijit Dutta, Akiko Inoue, Tetsu Iwata, Ashwin Jha, Kazuhiko Minematsu, Mridul Nandi, Yu Sasaki, Meltem Sönmez Turan, Stefano Tessaro
This paper studies the security of key derivation functions (KDFs), a central class of cryptographic algorithms used to derive multiple independent-looking keys (each associated with a particular context) from a single secret. The main security requirement is that these keys are pseudorandom (i.e., the KDF is a pseudorandom function). This paper initiates the study of an additional security property, called key control (KC) security, first informally put forward in a recent update to the NIST Special Publication (SP) 800-108 standard for KDFs. Informally speaking, KC security demands that, given a known key, it is hard for an adversary to find a context that forces the KDF-derived key for that context to have a property that is specified a priori and is hard to satisfy (e.g., that the derived key consists mostly of 0s, or that it is a weak key for a cryptographic algorithm using it).
We provide a rigorous security definition for KC security, and then move on to the analysis of the KDF constructions specified in NIST SP 800-108. We show, via security proofs in the random oracle model, that the proposed constructions based on XOFs or hash functions can accommodate reasonable security margins (i.e., 128-bit security) when instantiated from KMAC and HMAC. We also show, via attacks, that all proposed block-cipher-based modes of operation (while implementing mitigation techniques to prevent KC security attacks affecting earlier versions of the standard) achieve at best 72-bit KC security for 128-bit blocks, as with AES.
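The flavor of a key-control attack, and its roughly 2^k cost for forcing k bits of the derived key, can be sketched against a simplified HMAC-based KDF. The label/context encoding below is our own simplification, not the exact SP 800-108 format:

```python
import hmac
import hashlib

def kdf(key, context):
    # Illustrative single-block KDF in counter mode with HMAC-SHA-256;
    # the counter/label/context encoding is simplified here.
    data = b"\x00\x00\x00\x01" + b"KC-demo" + b"\x00" + context
    return hmac.new(key, data, hashlib.sha256).digest()

def key_control_attack(key, zero_bits):
    """With a *known* key, search contexts until the derived key's top
    `zero_bits` bits are all zero; expected cost about 2**zero_bits
    KDF evaluations."""
    i = 0
    while True:
        ctx = i.to_bytes(8, "big")
        dk = kdf(key, ctx)
        if int.from_bytes(dk, "big") >> (256 - zero_bits) == 0:
            return ctx, dk
        i += 1

# Force the first 12 bits of the derived key to zero (~4096 tries).
ctx, dk = key_control_attack(b"publicly-known key", 12)
```

This brute-force baseline applies to any KDF; the paper's point is that some block-cipher modes admit structural attacks far below that generic cost.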
Markus Krabbe Larsen, Carsten Schürmann
Inductive reasoning in the form of hybrid arguments is prevalent in cryptographic security proofs, but it is not integrated into the formalisms used to model such proofs, such as state-separating proofs. In this paper we present an induction principle for hybrid arguments which says that two games are many-step indistinguishable if they are, respectively, indistinguishable from the end points of an iterated one-step indistinguishability argument. We demonstrate how to implement this induction rule in Nominal-SSProve by taking advantage of the nominal character of state variables, and illustrate its versatility by proving a general reduction from many-time CPA security to one-time CPA security for any asymmetric encryption scheme. We then specialize the result to ElGamal and reduce CPA security to the decisional Diffie-Hellman assumption.
Samuel Dittmer, Rafail Ostrovsky
In information-theoretic cryptography, randomness complexity, that is, the number of random bits needed to realize a protocol, is a key metric for private computation protocols. Although some general bounds are known, even for the relatively simple example of $n$-party AND the exact complexity is unknown.
We improve the upper bound from Couteau and Ros\'en in Asiacrypt 2022 on the (asymptotic) randomness complexity of $n$-party AND from 6 to 5 bits, that is, we give a $1$-private protocol for computing the AND of $n$ parties' inputs requiring $5$ bits of additional randomness, for all $n \geq 120$. Our construction, like that of Couteau and Ros\'en, requires a single source of randomness.
Additionally, we consider the modified setting of Goyal, Ishai, and Song (Crypto '22) where helper parties without any inputs are allowed to assist in the computation. In this setting, we show that the randomness complexity of computing a general boolean circuit $C$ $1$-privately is exactly 2 bits, and this computation can be performed with seven helper parties per gate.
Oriol Farràs, Miquel Guiot
Traceable secret sharing complements traditional schemes by enabling the identification of parties who sell their shares. In the model introduced by Boneh, Partap, and Rotem [CRYPTO’24], a group of corrupted parties generates a reconstruction box $R$ that, given enough valid shares as input, reconstructs the secret. The goal is to trace $R$ back to at least one of the corrupted parties using only black-box access to it.
While their work provides efficient constructions for threshold access structures, it does not apply to the general case. In this work, we extend their framework to general access structures and present the first traceable scheme supporting them.
In the course of our construction, we also contribute to the study of anonymous secret sharing, a notion recently introduced by Bishop et al. [CRYPTO’25], which strengthens classical secret sharing by requiring that shares do not reveal the identities of the parties holding them. We further advance this area by proposing new and stronger definitions, and presenting an anonymous scheme for general access structures that satisfies them.
Jan Bormet, Stefan Dziembowski, Sebastian Faust, Tomasz Lizurej, Marcin Mielniczuk
One of the main shortcomings of classical distributed cryptography is its reliance on a certain fraction of participants remaining honest. Typically, honest parties are assumed to follow the protocol and not leak any information, even if behaving dishonestly would benefit them economically. More realistic models used in blockchain consensus rely on weaker assumptions, namely that no large coalition of corrupt parties exists, although every party can act selfishly. This is feasible since, in a consensus protocol, active misbehavior can be detected and "punished" by other parties. However, "information leakage", where an adversary reveals sensitive information via, e.g., a subliminal channel, is often impossible to detect and, hence, much more challenging to handle.
A recent approach to address this problem was proposed by Dziembowski, Faust, Lizurej, and Mielniczuk (ACM CCS 2024), who introduced a new notion called secret sharing with snitching. This primitive guarantees that as long as no large coalition of mutually trusting parties exists, every leakage of the shared secret produces a "snitching proof" indicating that some party participated in the illegal secret reconstruction. This holds in a very strong model, where mutually distrusting parties use an MPC protocol to reconstruct any information about the shared secret. Such a "snitching proof" can be sent to a smart contract (modeled as a "judge") deployed on the blockchain, which punishes the misbehaving party financially.
In this paper, we extend the results from the work of CCS'24 by addressing its two main shortcomings. Firstly, we significantly strengthen the attack model by considering the case when mutually distrusting parties can also rely on a trusted third party (e.g., a smart contract). We call this new primitive strong secret sharing with snitching (SSSS). We present an SSSS protocol that is secure in this model. Secondly, unlike in the construction from CCS'24, our protocol does not require the honest parties to perform any MPC computations on hash functions. Besides its theoretical interest, this improvement is of practical importance, as it allows the construction of SSSS from any (even very "MPC-unfriendly") hash function.
Isaac A. Canales-Martínez, David Santos
Although studied for several years now, parameter extraction of Deep Neural Networks (DNNs) has seen major advances only in recent years. Carlini et al. (Crypto 2020) and Canales-Martínez et al. (Eurocrypt 2024) showed how to extract the parameters of ReLU-based DNNs efficiently (polynomial time and polynomial number of queries, as a function of the number of neurons) in the raw-output setting, i.e., when the attacker has access to the raw output of the DNN. On the other hand, the more realistic hard-label setting gives the attacker access only to the most likely label after the DNN's raw output has been processed. Recently, Carlini et al. (Eurocrypt 2025) presented an efficient parameter extraction attack in the hard-label setting applicable to DNNs having a large number of parameters.
The work in Eurocrypt 2025 recovers the parameters of all layers except the output layer. The techniques presented there are not applicable to this layer due to its lack of ReLUs. In this work, we fill this gap and present a technique that allows recovery of the output layer. Additionally, we show parameter extraction methods that are more efficient when the DNN has contractive layers, i.e., when the number of neurons decreases in those layers. We successfully apply our methods to some networks trained on the CIFAR-10 dataset. Asymptotically, our methods have polynomial complexity in time and number of queries. Thus, a complete extraction attack combining the techniques by Carlini et al. and ours retains polynomial complexity. Moreover, actual execution time is decreased when attacking DNNs with the required contractive architecture.
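To illustrate the setting the abstract describes (not the attack itself), the sketch below builds a toy ReLU network with a contractive architecture, where each hidden layer is narrower than the last and the final layer is linear and ReLU-free, together with a hard-label oracle that exposes only the argmax class. All widths, weights, and names here are made-up assumptions for demonstration.

```python
# Toy contractive ReLU network and a hard-label oracle -- illustrates the
# attack setting only; weights and widths are arbitrary, not from any paper.
import numpy as np

rng = np.random.default_rng(0)
widths = [16, 8, 4, 3]  # contractive: neuron counts shrink layer by layer
weights = [rng.standard_normal((a, b)) for a, b in zip(widths, widths[1:])]
biases = [rng.standard_normal(b) for b in widths[1:]]

def raw_output(x):
    """Raw-output setting: the full forward pass. ReLU on hidden layers;
    the output layer is linear, which is why ReLU-based techniques miss it."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ W + b, 0.0)
    return x @ weights[-1] + biases[-1]

def hard_label(x):
    """Hard-label setting: the attacker sees only the most likely class."""
    return int(np.argmax(raw_output(x)))
```

The gap between `raw_output` (full logits) and `hard_label` (a single integer) is what makes hard-label extraction harder: each query leaks only which decision region the input falls in, not the layer values themselves.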