IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
13 April 2025
Danilo Gligoroski, Mayank Raikwar, Sonu Kumar Jha
Recent advancements in Large Language Models (LLMs) have transformed communication, yet their role in secure messaging remains underexplored, especially in surveillance-heavy environments. At the same time, governments around the world are proposing legislation to detect, backdoor, or even ban encrypted communication, which emphasizes the need for alternative ways to communicate securely and covertly over open channels. We propose a novel cryptographic embedding framework that enables covert public-key or symmetric-key encrypted communication over public chat channels via human-like generated texts. Some unique properties of our framework are: 1. It is LLM-agnostic, i.e., it allows participants to use different local LLM models independently; 2. It is pre- or post-quantum agnostic; 3. It ensures indistinguishability from human-like chat-produced texts. Thus, it offers a viable alternative where traditional encryption is detectable and restricted.
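To make the embedding idea concrete, here is a minimal sketch of one generic way ciphertext bits can be hidden in model-generated text: at each step the sender picks between the two most probable next tokens according to the current ciphertext bit, and a receiver running the same deterministic model recovers the bits from the token ranks. The toy bigram "model" and all names below are illustrative assumptions, not the authors' construction.

```python
# Hedged sketch: hiding ciphertext bits in token choices of a shared,
# deterministic "language model". The toy bigram table stands in for a
# local LLM; it is NOT the paper's actual embedding scheme.

# Toy deterministic next-token model: for each token, candidate
# continuations ordered from most to least probable.
MODEL = {
    "<s>":     ["the", "a"],
    "the":     ["weather", "meeting"],
    "a":       ["weather", "meeting"],
    "weather": ["is", "was"],
    "meeting": ["is", "was"],
    "is":      ["nice", "odd"],
    "was":     ["nice", "odd"],
    "nice":    ["today", "anyway"],
    "odd":     ["today", "anyway"],
}

def embed(bits):
    """Encode each bit as a choice between the top-2 candidate tokens."""
    token, out = "<s>", []
    for b in bits:
        token = MODEL[token][b]   # bit 0 -> most likely, bit 1 -> runner-up
        out.append(token)
    return " ".join(out)

def extract(text):
    """Receiver re-runs the same model and recovers bits from token ranks."""
    token, bits = "<s>", []
    for word in text.split():
        bits.append(MODEL[token].index(word))
        token = word
    return bits

ciphertext_bits = [0, 1, 1, 0, 1]   # e.g., bits of an encrypted message
stego_text = embed(ciphertext_bits)
print(stego_text)                   # "the meeting was nice anyway"
assert extract(ciphertext_bits := ciphertext_bits) == extract(stego_text) or True
assert extract(stego_text) == ciphertext_bits
```

A real scheme must also keep the token distribution statistically close to the model's, which is what the indistinguishability property in the abstract refers to; this sketch ignores that requirement.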
Víctor Duarte Melo, William J Buchanan
Whilst many key exchange and digital signature methods use the NIST P-256 (secp256r1) and secp256k1 curves, there is often a demand for increased security. These curves provide 128-bit security, which can be increased to 256-bit security with NIST P-521, Curve 448, and Brainpool-P512. This paper outlines a new curve, Eccfrog512ck2, which provides 256-bit security and enhanced performance over NIST P-521. Along with this, it has side-channel resistance and is designed to avoid weaknesses such as those related to the MOV attack. It shows that Eccfrog512ck2 can achieve a 61.5% speed-up on scalar multiplication and a 33.3% speed-up on point generation over the NIST P-521 curve.
Dennis Faut, Julia Hesse, Lisa Kohl, Andy Rupp
Anonymous token schemes are cryptographic protocols for limiting the access to online resources to credible users. The resource provider issues a set of access tokens to the credible user that they can later redeem anonymously, i.e., without the provider being able to link their redemptions. When combined with credibility tests such as CAPTCHAs, anonymous token schemes can significantly increase user experience and provider security, without exposing user access patterns to providers.
Current anonymous token schemes such as the Privacy Pass protocol by Davidson et al. rely on oblivious pseudorandom functions (OPRFs), which let server and user jointly compute random-looking access tokens. For those protocols, token issuing costs are linear in the number of requested tokens.
In this work, we propose a new approach for building anonymous token schemes. Instead of relying on two-party computation to realize a privacy-preserving pseudorandom function evaluation, we propose to offload token generation to the user by using group verifiable random functions (GVRFs). GVRFs are a new cryptographic primitive that allows users to produce verifiable pseudorandomness. As opposed to standard VRFs, verification is anonymous within the group of credible users. We give a construction of group VRFs from the Dodis-Yampolskiy VRF and Equivalence-Class Signatures, based on pairings and a new Diffie-Hellman inversion assumption that we analyze in the Generic Group Model. Our construction enjoys compact public keys and proofs, while evaluation and verification costs are only slightly increased compared to the Dodis-Yampolskiy VRF.
By deploying a group VRF instead of an OPRF, we obtain an anonymous token scheme where communication as well as server-side computation during the issuing phase is constant and independent of the number of tokens a user requests. Moreover, by means of our new concept of updatable token policies, the number of unspent tokens in circulation can retrospectively (i.e., even after the credibility check) be decreased or increased in order to react to the current or expected network situation. Our tokens are further countable and publicly verifiable. This comes at the cost of higher computational efforts for token redemption and verification as well as somewhat weaker unlinkability guarantees compared to Privacy Pass.
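For context, the Dodis-Yampolskiy VRF that the construction builds on can be summarized as follows; this is the standard description of that primitive, not the paper's group-VRF extension:

```latex
% Dodis-Yampolskiy VRF over a pairing group (standard formulation).
% Secret key sk, public key pk = g^{sk}; e : G x G -> G_T a bilinear map.
\begin{align*}
  \mathsf{Eval}(sk, x):\quad & \pi = g^{1/(sk + x)}, \qquad
                               y = e(g, g)^{1/(sk + x)} \\
  \mathsf{Verify}(pk, x, y, \pi):\quad &
    e\!\left(pk \cdot g^{x},\, \pi\right) \stackrel{?}{=} e(g, g)
    \quad\text{and}\quad y \stackrel{?}{=} e(g, \pi)
\end{align*}
```

A group VRF additionally makes verification anonymous within the group of credible users, which the paper achieves via equivalence-class signatures on the key material.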
Jonathan Bootle, Vadim Lyubashevsky, Antonio Merino-Gallardo
Mixnets are powerful building blocks for providing anonymity in applications like electronic voting and anonymous messaging. The encryption schemes upon which traditional mixnets are built, as well as the zero-knowledge proofs used to provide verifiability, will, however, soon become insecure once a cryptographically-relevant quantum computer is built. In this work, we construct the most compact verifiable mixnet that achieves privacy and verifiability through encryption and zero-knowledge proofs based on the hardness of lattice problems, which are believed to be quantum-safe.
A core component of verifiable mixnets is a proof of shuffle. The starting point for our construction is the proof of shuffle of Aranha et al. (CT-RSA 2021). We first identify an issue with the soundness proof in that work, which is also present in the adaptation of this proof in the mixnets of Aranha et al. (ACM CCS 2023) and Hough et al. (IACR CiC 2025). The issue is that one cannot directly adapt classical proofs of shuffle to the lattice setting due to the splitting structure of the rings used in lattice-based cryptography. This is not just an artifact of the proof, but a problem that manifests itself in practice, and we successfully mount an attack against the implementation of the first of the mixnets. We fix the problem and introduce a general approach for proving shuffles in splitting rings that can be of independent interest.
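As background on the splitting structure invoked here (a standard fact about lattice rings, following Lyubashevsky and Seiler, EUROCRYPT 2018, not a detail specific to this paper): the rings used in lattice-based cryptography typically factor by the Chinese Remainder Theorem, so an identity verified in the full ring only enforces the corresponding identities factor by factor.

```latex
% Partial splitting of a power-of-two cyclotomic ring (background fact).
% For prime q with q \equiv 2k+1 \pmod{4k}, the polynomial X^d + 1
% factors into k irreducible polynomials of degree d/k, giving
X^d + 1 \;\equiv\; \prod_{i=1}^{k} \left( X^{d/k} - r_i \right) \pmod{q},
\qquad
R_q = \mathbb{Z}_q[X]/(X^d+1) \;\cong\; \prod_{i=1}^{k} \mathbb{Z}_q[X]/\!\left(X^{d/k} - r_i\right).
```

One plausible reading of the soundness issue, consistent with the abstract, is that a product or permutation check passed in $R_q$ can be satisfied slot-wise, i.e., by different behavior in different CRT factors, unless the proof explicitly rules this out.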
The efficiency improvement of our mixnet over prior work is achieved by switching from re-encryption mixnets (as in the works of Aranha et al. and Hough et al.) to decryption mixnets with very efficient layering based on the hardness of the LWE and LWR problems over polynomial rings. The ciphertexts in our scheme are smaller by approximately a factor of 10X and 2X over the aforementioned instantiations, while the linear-size zero-knowledge proofs are smaller by a factor of 4X and 2X.
Matilda Backendal, Sebastian Clermont, Marc Fischlin, Felix Günther
Key derivation functions (KDFs) are integral to many cryptographic protocols. Their functionality is to turn raw key material, such as a Diffie-Hellman secret, into a strong cryptographic key that is indistinguishable from random. This guarantee was formalized by Krawczyk together with the seminal introduction of HKDF (CRYPTO 2010), in a model where the KDF only takes a single key material input. Modern protocol designs, however, regularly need to combine multiple secrets, possibly even from different sources, with the guarantee that the derived key is secure as long as at least one of the inputs is good. This is particularly relevant in settings like hybrid key exchange for quantum-safe migration. Krawczyk's KDF formalism does not capture this goal, and there has been surprisingly little work on the security considerations for KDFs since then.
In this work, we thus revisit the syntax and security model for KDFs to treat multiple, possibly correlated inputs. Our syntax is assertive: We do away with salts, which are needed in theory to extract from arbitrary sources in the standard model, but in practice, they are almost never used (or even available) and sometimes even misused, as we argue. We use our new model to analyze real-world multi-input KDFs—in Signal's X3DH protocol, ETSI's TS 103 744 standard, and MLS' combiner for pre-shared keys—as well as new constructions we introduce for specialized settings—e.g., a purely blockcipher-based one. We further discuss the importance of collision resistance for KDFs and finally apply our multi-input KDF model to show how hybrid KEM key exchange can be analyzed from a KDF perspective.
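As a flavor of what a multi-input KDF interface looks like, here is a minimal sketch of a concatenation-based combiner built from HKDF-style extract-then-expand, under the usual goal that the output is pseudorandom if at least one input secret is good. This is an illustrative construction, not one of the designs analyzed in the paper, and it glosses over the collision-resistance subtleties the paper discusses.

```python
# Hedged sketch: a multi-input KDF combining several (possibly correlated)
# secrets via HKDF-style extract-then-expand (RFC 5869 structure).
# Illustrative only -- not the X3DH, ETSI TS 103 744, or MLS combiners.
import hmac
import hashlib

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def multi_input_kdf(secrets: list[bytes], context: bytes, length: int = 32) -> bytes:
    """Derive a key from several secrets; length-prefix each input so
    distinct input vectors never collide as concatenated byte strings."""
    ikm = b"".join(len(s).to_bytes(4, "big") + s for s in secrets)
    prk = hkdf_extract(b"\x00" * 32, ikm)      # fixed salt: cf. the paper's
    return hkdf_expand(prk, context, length)   # argument against salts

# Hybrid setting: an ECDH shared secret combined with a PQC KEM secret.
k = multi_input_kdf([b"ecdh-shared-secret", b"kem-shared-secret"], b"hybrid-handshake")
```

The length-prefixing is the kind of encoding detail that matters once inputs may be correlated or adversarially chosen, which is exactly the regime the paper's model addresses.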
Xiaohan Wan, Yang Wang, Haiyang Xue, Mingqiang Wang
Proxy re-encryption (PRE) schemes enable a semi-honest proxy to transform a ciphertext of one user $i$ to another user $j$ while preserving the privacy of the underlying message. Multi-hop PRE schemes allow a legal ciphertext to undergo multiple transformations, but for lattice-based multi-hop PREs, the number of transformations is typically bounded due to the growth of error terms. Recently, Zhao et al. (Esorics 2024) introduced a lattice-based unbounded multi-hop (homomorphic) PRE scheme that supports an unbounded number of hops. Nevertheless, their scheme achieves only selective CPA security. In contrast, Fuchsbauer et al. (PKC 2019) proposed a generic framework for constructing HRA-secure unbounded multi-hop PRE schemes from FHE. Despite this, when instantiated with state-of-the-art FHEW-like schemes, the overall key size and efficiency remain unsatisfactory.
In this paper, we present a lattice-based unbounded multi-hop PRE scheme with the stronger adaptive HRA security (i.e., security against honest re-encryption attacks), which is more suitable for practical applications. Our scheme features an optimized re-encryption process based on the FHEW-like blind rotation, which resolves the incompatibility between the noise flooding technique and Fuchsbauer et al.'s framework when instantiated with FHEW-like schemes. This results in reduced storage requirements for public keys and offers higher efficiency. Moreover, our optimized unbounded multi-hop PRE scheme can be modified to an unbounded homomorphic PRE, a scheme allowing for arbitrary homomorphic computations over fresh, re-encrypted, and evaluated ciphertexts.
Lucas David Meier, Felipe Valencia, Cristian-Alexandru Botocan, Damian Vizár
This paper revisits the Hamming Weight (HW) labelling function for machine-learning-assisted side-channel attacks. Contrary to what has been suggested by previous works, our investigation shows that, when paired with modern deep learning architectures and appropriate pre-processing and normalization techniques, it can perform as well as the popular identity labelling function and sometimes even beat it. In fact, we hereby introduce a new machine learning method, dubbed, that helps solve the class imbalance problem associated with HW, while significantly improving the performance of unprofiled attacks. We additionally release as open source our new, easy-to-use Python package used in our experiments, implementing a broad variety of machine-learning-driven side-channel attacks, along with a new dataset, AES_nRF, acquired on the nRF52840 SoC.
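To illustrate why HW labels create class imbalance (a well-known property of the labelling function, not a result from this paper): the Hamming weight of a uniform byte is binomially distributed, so label 4 occurs 70 times more often than label 0. A minimal sketch of HW labelling with inverse-frequency class weights follows; the helper names are our own, not the package's API.

```python
# Hedged sketch: Hamming-weight labelling for side-channel analysis, plus
# inverse-frequency class weights to counter the binomial imbalance.
# Illustrates the general idea only, not the paper's new method.
import numpy as np

HW = np.array([bin(v).count("1") for v in range(256)])  # HW lookup table

def hw_labels(intermediate_values: np.ndarray) -> np.ndarray:
    """Map 8-bit intermediates (e.g., AES S-box outputs) to HW labels 0..8."""
    return HW[intermediate_values]

def class_weights(labels: np.ndarray, num_classes: int = 9) -> np.ndarray:
    """Inverse-frequency weights: rare classes (HW 0 or 8) get large weight."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    return counts.sum() / (num_classes * np.maximum(counts, 1))

values = np.random.randint(0, 256, size=10_000)  # stand-in for S-box outputs
labels = hw_labels(values)
print(np.bincount(labels))    # binomial shape: HW 4 dominates
print(class_weights(labels))  # e.g., pass as per-class loss weights
```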
William J Buchanan, Jamie Gilchrist, Keir Finlow-Bates
The ECDSA (Elliptic Curve Digital Signature Algorithm) is used for digital signatures in many blockchain networks, including the Bitcoin and Ethereum blockchains. While it has good performance levels and strong current security, it must be handled with care, particularly regarding the nonce value used to create the signature. This paper outlines the methods that can be used to break ECDSA signatures, including revealed nonces, weak nonce choice, nonce reuse, two keys with shared nonces, and fault attacks.
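As a concrete instance of the best known of these breaks (the textbook nonce-reuse attack, not anything specific to this paper): two signatures $(r, s_1)$ and $(r, s_2)$ on message hashes $z_1, z_2$ that share a nonce $k$ reveal the private key $d$.

```python
# Hedged sketch: classic ECDSA private-key recovery from a reused nonce.
# Textbook attack; works for any curve given the group order n.
# With s_i = k^{-1} (z_i + r*d) mod n for i = 1, 2 and the same k, r:
#   k = (z1 - z2) / (s1 - s2) mod n,   d = (s1*k - z1) / r mod n.

def recover_key(n: int, r: int, s1: int, z1: int, s2: int, z2: int) -> int:
    k = (z1 - z2) * pow(s1 - s2, -1, n) % n   # recovered nonce
    d = (s1 * k - z1) * pow(r, -1, n) % n     # recovered private key
    return d

# Toy demo with a small prime group order (illustrative only; on a real
# curve, r would be the x-coordinate of k*G reduced mod n).
n, d, k, r = 9973, 1234, 42, 5555
z1, z2 = 1111, 2222
s1 = pow(k, -1, n) * (z1 + r * d) % n
s2 = pow(k, -1, n) * (z2 + r * d) % n
assert recover_key(n, r, s1, z1, s2, z2) == d
```

The other attack classes listed in the abstract (revealed or biased nonces, shared nonces across keys, faults) follow the same pattern: any extra linear relation on $k$ collapses the signature equation to a solvable system for $d$.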
Mehmet Ugurbil, Dimitris Mouris, Manuel B. Santos, José Cabrero-Holgueras, Miguel de Vega, Shubho Sengupta
The increased popularity of large language models (LLMs) raises serious privacy concerns, as users' private queries are sent to untrusted servers. Many cryptographic techniques have been proposed to provide privacy, such as secure multiparty computation (MPC), which enables the evaluation of LLMs directly on private data. However, cryptographic techniques have been deemed impractical as they introduce large communication and computation overheads. On the other hand, many obfuscation techniques have been proposed, such as split inference, where part of the model is evaluated on edge devices to hide the input data from untrusted servers, but these methods provide limited privacy guarantees.
We propose Fission, a privacy-preserving framework that improves latency while providing strong privacy guarantees. Fission utilizes an MPC network for linear computations, while nonlinearities are computed on a separate evaluator network that receives shuffled values in the clear and returns nonlinear functions evaluated at these values back to the MPC network. As a result, each evaluator only gets access to parts of the shuffled data, while the model weights remain private. We evaluate Fission on a wide set of LLMs and compare it against prior works. Fission results in up to eight times faster inference and eight times reduced bandwidth compared to prior works while retaining high accuracy. Finally, we construct an attack on obfuscation techniques from related works that shows significant information leakage, and we demonstrate how Fission enhances privacy.
Andreas Brüggemann, Nishat Koti, Varsha Bhat Kukkala, Thomas Schneider
As real-world networks such as social networks and computer networks are often complex and distributed, modeling them as multilayer graphs is gaining popularity. For instance, when studying social interactions across platforms like LinkedIn, Facebook, TikTok, and Bluesky, users may be connected on several of these platforms. To identify important nodes/users, the platforms might wish to analyze user interactions using, e.g., centrality measures that account for connections across all platforms. This raises the challenge for platforms of performing such computations while simultaneously protecting their user data, both to shelter their own business and to uphold data protection laws. This necessitates designing solutions that allow for performing secure computation on a multilayer graph that is distributed among mutually distrusting parties while keeping each party's data hidden.
The work of Asharov et al. (WWW'17) addresses this problem by designing secure solutions for centrality measures that involve computing the truncated Katz score and reach score on multilayer graphs. However, we identify several limitations in that work which render the solution inefficient or even infeasible for realistic networks with significantly more than 10k nodes. We address these limitations by designing secure solutions that are significantly more efficient and scalable. In more detail, given that real-world graphs are known to be sparse, our solutions move away from an expensive matrix-based representation to a more efficient list-based representation. We design novel, secure, and efficient solutions for computing centrality measures and prove their correctness. Our solutions drastically reduce the asymptotic complexity from the prohibitive $\mathcal{O}(|\mathsf{V}|^2)$ even for the fastest solution by Asharov et al. down to $\mathcal{O}(|\mathsf{V}|\log |\mathsf{V}|)$, for $|\mathsf{V}|$ nodes. To design our solutions, we build upon the secure graph computation framework of Koti et al. (CCS'24), providing a novel framework with improved capabilities in multiple directions. Finally, we provide an end-to-end implementation of our secure graph analysis framework and establish concrete efficiency improvements over prior work, observing several orders of magnitude improvement.
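For reference, the (plaintext) truncated Katz score sums attenuated walk counts up to a cutoff $T$, i.e., $c = \sum_{k=1}^{T} \alpha^{k} A^{k} \mathbf{1}$ for adjacency matrix $A$; the paper's contribution is evaluating such measures securely on a sparse, list-based representation. A minimal plaintext sketch on an adjacency list follows (one common orientation of the definition; merging layers by edge union is our simplification, not necessarily the paper's):

```python
# Hedged sketch: plaintext truncated Katz centrality on a sparse
# (list-based) graph, layers merged by union of edges. Standard
# definition only; the secure MPC evaluation is the paper's work.

def truncated_katz(adj: dict[int, list[int]], alpha: float, T: int) -> dict[int, float]:
    nodes = list(adj)
    score = {v: 0.0 for v in nodes}
    contrib = {v: 1.0 for v in nodes}   # alpha^k * (#length-k walks ending at v)
    for _ in range(T):
        nxt = {v: 0.0 for v in nodes}
        for u in nodes:
            for v in adj[u]:
                nxt[v] += alpha * contrib[u]   # extend each walk by one edge
        contrib = nxt
        for v in nodes:
            score[v] += contrib[v]
    return score

# Two layers over the same node set, merged by union of edges.
layers = [{1: [2], 2: [3], 3: []}, {1: [3], 2: [], 3: [1]}]
merged = {v: sorted(set(sum((l.get(v, []) for l in layers), []))) for v in {1, 2, 3}}
print(truncated_katz(merged, alpha=0.5, T=3))
```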
Jean-Sébastien Coron, Robin Köstler
We introduce a new bootstrapping equation for the CKKS homomorphic encryption scheme of approximate numbers. The original bootstrapping approach for CKKS consists in homomorphically evaluating a polynomial that approximates the modular reduction modulo q. In contrast, our new bootstrapping equation directly embeds the additive group modulo q into the complex roots of unity, which can be evaluated natively in the CKKS scheme. Due to its reduced multiplicative depth, our new bootstrapping equation achieves a 7x latency improvement for a single slot compared to the original CKKS bootstrapping, though it scales less efficiently when applied to a larger number of slots.
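The group embedding at the heart of the new equation can be stated compactly (our rendering of the idea as described in the abstract): the map below is a homomorphism from the additive group $\mathbb{Z}_q$ to the multiplicative group of $q$-th roots of unity, and complex exponentials are natively computable on CKKS plaintexts.

```latex
% The additive group mod q embedded into complex roots of unity:
\phi:\ \mathbb{Z}_q \to \mathbb{C}^{\times}, \qquad
\phi(t) = e^{2\pi i\, t/q}, \qquad
\phi(t_1 + t_2 \bmod q) = \phi(t_1)\,\phi(t_2).
% Contrast: original CKKS bootstrapping instead approximates the map
% t \mapsto t \bmod q by a polynomial (typically via a sine approximation),
% which costs extra multiplicative depth.
```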
Yadi Zhong
Recently, Dupin and Abelard proposed a broadcast encryption scheme which outperforms the Complete Subtree-based and Subset Difference broadcast encryption in terms of encryption cost and bandwidth requirement. However, Dupin and Abelard acknowledge that the worst-case bandwidth bound of the Complete Subtree approach can be reached in their scheme as well. In this paper, we answer the call to further reduce this bandwidth bottleneck. We first provide a concrete analysis showing how this worst-case upper bound is reached from concrete Boolean functions. Then we present two improved broadcast encryption schemes to significantly reduce this worst-case bandwidth consumption, further optimizing Dupin and Abelard's technique. Our proposed approach ADC-BE, composed of two algorithms, AD-BE and AC-BE, can significantly optimize this worst-case complexity from n/2 down to 1 for a system of n users, which is especially efficient for a large number of users. Our proposed schemes combine the algebraic normal form, disjunctive normal form, and conjunctive normal form to optimize a Boolean function to its minimized representation. In addition, our approaches can be made secure against quantum adversaries and are therefore post-quantum, where both algorithms AD-BE and AC-BE require minimal assumptions based on the existence of one-way functions.
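To see why the choice of normal form matters for bandwidth (our illustrative reading of the DNF/CNF trade-off, not the paper's actual algorithms): encoding the authorized set as a Boolean function over user-ID bits, a naive DNF needs roughly one clause per authorized user while a naive CNF needs roughly one per revoked user, so always picking the smaller representation caps the clause count.

```python
# Hedged sketch: encode "user is authorized" as a Boolean function over
# ID bits and compare naive DNF vs CNF clause counts. Illustrates the
# normal-form trade-off only; not the ADC-BE/AD-BE/AC-BE algorithms.

def clause_counts(n_bits: int, authorized: set[int]) -> tuple[int, int]:
    all_users = set(range(2 ** n_bits))
    dnf = len(authorized)              # one minterm per authorized user
    cnf = len(all_users - authorized)  # one maxterm per revoked user
    return dnf, cnf

# 16 users, 14 authorized: the CNF view (2 clauses) beats DNF (14 clauses).
print(clause_counts(4, set(range(16)) - {3, 7}))  # -> (14, 2)
```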
Ying Cai, Chengyi Qin, Mingqiang Wang
Asynchronous Complete Secret Sharing (ACSS) is a foundational module for asynchronous networks, playing a critical role in cryptography. It is essential for Asynchronous Secure Multi-Party Computation (AMPC) and, with termination, is widely applied in Validated Asynchronous Byzantine Agreement (VABA) and Asynchronous Distributed Key Generation (ADKG) to support secure distributed systems.
Currently, there are relatively few statistically secure ACSS protocols that can guarantee termination, and their communication complexity is relatively high. To reduce communication complexity, we propose a new multi-receiver signature scheme, ARICP, which supports linear operations on signatures. Leveraging the ARICP scheme and the properties of symmetric polynomials, we propose an ACSS protocol that ensures termination and optimal resilience ($t < n/3$) with $\mathcal{O}(n^{2}\kappa)$ bits per sharing. Compared with the best-known result among ACSS protocols that guarantee termination [CP23], the amortized communication complexity of our protocol is reduced by a factor of $\mathcal{O}(n)$.
Marco Baldi, Sebastian Bitzer, Nicholas Lilla, Paolo Santini
In Hamming Quasi-Cyclic (HQC), one of the finalists in the NIST competition for the standardization of post-quantum cryptography, decryption relies on decoding a noisy codeword through a public error-correcting code. The noise vector has a special form that depends on the secret key (a pair of sparse polynomials). However, the decoder, which is currently employed in HQC, is agnostic to the secret key, operating under the assumption that the error arises from a Binary Symmetric Channel (BSC). In this paper, we demonstrate that this special noise structure can instead be leveraged to develop more powerful decoding strategies.
We first study the problem from a coding-theoretic perspective. The current code design, which admits a non-zero decryption failure rate, is close to optimal in the setting of a decoder that is agnostic to the error structure. We show that there are code-decoder pairs with a considerably shorter code length that can guarantee unique decoding by taking the error structure into account. This result is non-constructive, i.e., we do not provide an explicit code construction and it remains open whether efficient decoding is possible. Nevertheless, it highlights the potential that can be tapped by taking the error structure into account. We then argue that, in practice, the matter of decoding in HQC can be related to solving an instance of the noisy syndrome decoding problem, in which the parity-check matrix is constituted by the polynomials in the secret key. We show that, using decoders for Low-Density Parity-Check (LDPC) and Moderate-Density Parity-Check (MDPC) codes, one can significantly reduce the magnitude of the noise and, de facto, also the Decoding Failure Rate (DFR) of the HQC decoder.
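As background on the decoder family invoked here, the sketch below shows a generic bit-flipping decoder for a sparse binary parity-check matrix, the standard workhorse for LDPC/MDPC codes; the paper's HQC-specific decoder, which builds the parity-check matrix from the secret-key polynomials, is not reproduced here.

```python
# Hedged sketch: generic bit-flipping decoding for a sparse binary
# parity-check matrix H (LDPC/MDPC-style). Standard algorithm; the
# paper's contribution is applying this idea inside HQC decryption.
import numpy as np

def bit_flip_decode(H: np.ndarray, y: np.ndarray, max_iters: int = 20) -> np.ndarray:
    """Iteratively flip the bits involved in the most unsatisfied checks."""
    y = y.copy()
    for _ in range(max_iters):
        syndrome = (H @ y) % 2              # unsatisfied parity checks
        if not syndrome.any():
            break                           # all checks satisfied
        counts = H.T @ syndrome             # per-bit unsatisfied-check counts
        y ^= (counts >= counts.max()).astype(y.dtype)  # flip the worst bits
    return y

# Toy demo: length-3 repetition-style code, one bit flipped.
H = np.array([[1, 1, 0], [0, 1, 1]])
noisy = np.array([1, 1, 1]) ^ np.array([0, 1, 0])
print(bit_flip_decode(H, noisy))  # -> [1 1 1]
```

The iterative, threshold-based nature of this loop is precisely why, as the abstract notes, a reliable closed-form DFR model is hard to devise.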
This preliminary study leaves some open questions and problems. While it shows that decoding in HQC can be improved, the modeling of the DFR gets more complicated: even for the basic decoder we propose in this paper, we have not been able to devise a reliable DFR model. This is likely due to the fact that the decoder structure resembles the iterative nature of LDPC/MDPC decoders, for which devising a reliable DFR estimation is a well-known difficult problem.
Rosario Giustolisi, Mohammadamin Rakeei, Gabriele Lenzini
Electronic voting schemes typically ensure ballot privacy by assuming that the decryption key is distributed among tallying authorities, preventing any single authority from decrypting a voter's ballot. However, this assumption may fail in a fully dishonest environment where all tallying authorities collude to break ballot privacy.
In this work, we introduce the notion of anamorphic voting, which enables voters to convey their true voting intention to an auditor while casting an (apparently) regular ballot. We present new cryptographic techniques demonstrating that several existing voting schemes can support anamorphic voting.
12 April 2025
Caicai Chen, Yuval Ishai, Tamer Mour, Alon Rosen
Private information retrieval (PIR) allows a client to privately read a chosen bit from an $N$-bit database $x$ with $o(N)$ bits of communication. Lin, Mook, and Wichs (STOC 2023) showed that by preprocessing $x$ into an encoded database $\hat x$, it suffices to access only $\mathrm{polylog}(N)$ bits of $\hat x$ per query. This requires $|\hat x|\ge N\cdot \mathrm{polylog}(N)$ and a prohibitively large server circuit size.
We consider an alternative preprocessing model (Boyle et al. and Canetti et al., TCC 2017), where the encoding $\hat x$ depends on a client's short secret key. In this secret-key PIR (sk-PIR) model we construct a protocol with $O(N^\epsilon)$ communication, for any constant $\epsilon>0$, from the Learning Parity with Noise assumption in a parameter regime not known to imply public-key encryption. This is evidence against public-key encryption being necessary for sk-PIR.
Under a new conjecture related to the hardness of learning a hidden linear subspace of $\mathbb{F}_2^n$ with noise, we construct sk-PIR with similar communication and encoding size $|\hat x|=(1+\epsilon)\cdot N$ in which the server is implemented by a Boolean circuit of size $(4+\epsilon)\cdot N$. This is the first candidate PIR scheme with such a circuit complexity.
Alberto Garoffolo, Dmytro Kaidalov, Roman Oliynykov, Daniele Di Tullio, Mariia Rodinko
The scalability of modern decentralized blockchain systems is constrained by the requirement that the participating nodes execute the entire chain's transactions without the ability to delegate the verification workload across multiple actors trustlessly. This is further limited by the need for sequential transaction execution and repeated block validation, where each node must re-execute all transactions before accepting blocks, also leading to delayed broadcasting in many architectures.
Consequently, throughput is bounded by the capacity of individual nodes, severely limiting scalability.
In this paper, we introduce GIGA, a SNARK-based protocol that enables trustless parallel execution of transactions, processing non-conflicting operations concurrently, while preserving security guarantees and state consistency. The protocol organizes transactions into non-conflicting batches which are executed and proven in parallel, distributing execution across multiple decentralized entities. These batch proofs are recursively aggregated into a single succinct proof that validates the entire block.
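One way to picture the batching step (our illustrative sketch: the grouping rule and field names are assumptions, and the SNARK proving and recursive aggregation are omitted entirely): transactions that touch overlapping state are kept together, while batches touching disjoint state can be executed and proven in parallel.

```python
# Hedged sketch: group transactions into batches touching disjoint state,
# so distinct batches can be executed and proven in parallel. Field names
# are hypothetical; GIGA's conflict rules and proofs are not shown.
from dataclasses import dataclass, field

@dataclass
class Tx:
    tx_id: str
    touched_keys: set[str] = field(default_factory=set)  # state read/written

def batch_by_disjoint_state(txs: list[Tx]) -> list[list[Tx]]:
    batches: list[tuple[set[str], list[Tx]]] = []
    for tx in txs:
        merged_keys, merged_txs = set(tx.touched_keys), [tx]
        remaining = []
        for keys, batch in batches:
            if keys & merged_keys:        # shared state: must join this batch
                merged_keys |= keys
                merged_txs = batch + merged_txs
            else:
                remaining.append((keys, batch))
        batches = remaining + [(merged_keys, merged_txs)]
    return [batch for _, batch in batches]

txs = [Tx("t1", {"alice"}), Tx("t2", {"bob"}), Tx("t3", {"alice", "carol"})]
print([[t.tx_id for t in b] for b in batch_by_disjoint_state(txs)])
# -> [['t2'], ['t1', 't3']]  (t1 and t3 share 'alice'; t2 is independent)
```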
As a result, the protocol both distributes the execution workload and removes redundant re-execution from the network, significantly improving blockchain throughput while not affecting decentralization.
Performance estimates demonstrate that, under the same system assumptions (e.g., consensus, networking, and virtual machine architecture) and under high degrees of transaction parallelism (i.e., when most transactions operate on disjoint parts of the state), our protocol may achieve over a 10000x throughput improvement compared to popular blockchain architectures that use sequential execution models, and over a 500x improvement compared to blockchain architectures employing intra-node parallelization schemes.
Furthermore, our protocol enables a significant increase in transaction computational complexity, unlocking a wide range of use cases that were previously unfeasible on traditional blockchain architectures due to the limited on-chain computational capacity.
Additionally, we propose a reward mechanism that ensures the economic sustainability of the proving network, dynamically adjusting to computational demand while fostering competition among provers based on cost-efficiency and reliability.
Jeremy Guillaume, Maxime Pelcat, Amor Nafkha, Ruben Salvador
Screaming-channel attacks enable Electromagnetic (EM) Side-Channel Attacks (SCAs) at larger distances due to higher EM leakage energies than traditional SCAs, relaxing the requirement of close access to the victim. This attack can be mounted on devices integrating Radio Frequency (RF) modules on the same die as digital circuits, where the RF can unintentionally capture, modulate, amplify, and transmit the leakage along with legitimate signals. Leakage results from digital switching activity, so the hypothesis of previous works was that this leakage would appear at multiples of the digital clock frequency, i.e., harmonics. This work demonstrates that compromising signals appear not only at the harmonics and that leakage at non-harmonics can be exploited for successful attacks. Indeed, the transformations undergone by the leaked signal are complex due to propagation effects through the substrate and power and ground planes, so the leakage also appears at other frequencies. We first propose two methodologies to locate frequencies that contain leakage and demonstrate that it appears at non-harmonic frequencies. Then, our experimental results show that screaming-channel attacks at non-harmonic frequencies can be as successful as at harmonics when retrieving a 16-byte AES key. As the RF spectrum is polluted by interfering signals, we run experiments and show successful attacks in a more realistic, noisy environment where harmonic frequencies are contaminated by multi-path fading and interference. These attacks at non-harmonic frequencies increase the attack surface by providing attackers with an increased number of potential frequencies where attacks can succeed.
Yulian Sun, Vedant Bonde, Li Duan, Yong Li
Well-trained deep neural networks (DNN), including large language models (LLM), are valuable intellectual property assets. To defend against model extraction attacks, one of the major ideas proposed in a large body of previous research is obfuscation: splitting the original DNN and storing the components separately. However, systematically analyzing the methods' security against various attacks and optimizing the efficiency of defenses are still challenging. In this paper, we propose a taxonomy of model-based extraction attacks, which enables us to identify vulnerabilities of several existing obfuscation methods. We also propose an extremely efficient model obfuscation method called O2Splitter using a trusted execution environment (TEE). The secrets we store in TEE have O(1) size, i.e., independent of model size. Although O2Splitter relies on a pseudo-random function to provide a quantifiable guarantee for protection and noise compression, it does not need any complicated training or filtering of the weights. Our comprehensive experiments show that O2Splitter can mitigate norm-clipping and fine-tuning attacks. Even for small noise (ϵ = 50), the accuracy of the obfuscated model is close to random guessing, and the tested attacks cannot extract a model with comparable accuracy. In addition, the empirical results also shed light on the relation between DP parameters in obfuscation and the risks of concrete extraction attacks.
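A minimal sketch of the O(1)-secret idea as we read it from the abstract (the PRF-seeded noise and all function names here are our assumptions, not the actual O2Splitter design): the publicly stored copy of the model carries weights masked by pseudorandom noise, and the TEE keeps only the short seed needed to regenerate and strip that noise at inference time.

```python
# Hedged sketch: obfuscate model weights with PRF-generated noise so the
# TEE stores only an O(1)-size seed. Our reading of the abstract's idea;
# NOT the actual O2Splitter construction.
import hashlib
import numpy as np

def prf_noise(seed: bytes, shape: tuple, scale: float) -> np.ndarray:
    """Deterministic noise from a short seed (hash-based PRF stand-in)."""
    digest = hashlib.sha256(seed).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return rng.normal(0.0, scale, size=shape)

def obfuscate(weights: np.ndarray, seed: bytes, scale: float) -> np.ndarray:
    return weights + prf_noise(seed, weights.shape, scale)   # stored publicly

def tee_deobfuscate(obf: np.ndarray, seed: bytes, scale: float) -> np.ndarray:
    return obf - prf_noise(seed, obf.shape, scale)           # inside the TEE

w = np.random.randn(4, 4)
seed = b"o(1)-secret-in-tee"              # the only secret the TEE keeps
w_pub = obfuscate(w, seed, scale=50.0)    # large noise: useless to attackers
assert np.allclose(tee_deobfuscate(w_pub, seed, scale=50.0), w)
```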
Bruno P. Cavalar, Eli Goldin, Matthew Gray, Peter Hall
We prove the first meta-complexity characterization of a quantum cryptographic primitive. We show that one-way puzzles exist if and only if there is some quantum samplable distribution of binary strings over which it is hard to approximate Kolmogorov complexity. Therefore, we characterize one-way puzzles by the average-case hardness of an uncomputable problem. This brings to the quantum setting a recent line of work, initiated by Liu and Pass, that characterizes classical cryptography with the average-case hardness of a meta-complexity problem. Moreover, since the average-case hardness of Kolmogorov complexity over classically polynomial-time samplable distributions characterizes one-way functions, this result poses one-way puzzles as a natural generalization of one-way functions to the quantum setting. Furthermore, our equivalence goes through probability estimation, giving us the additional equivalence that one-way puzzles exist if and only if there is a quantum samplable distribution over which probability estimation is hard. We also observe that the oracle worlds defined by Kretschmer et al. rule out any relativizing characterization of one-way puzzles by the hardness of a problem in $\mathbf{NP}$ or $\mathbf{QMA}$, which means that it may not be possible with current techniques to characterize one-way puzzles with another meta-complexity problem.