IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
26 September 2025
Stephan Krenn, Kai Samelin, Daniel Slamanig
Group signatures enable users to sign on behalf of a group while preserving anonymity, with accountability provided by a designated opener. The first rigorous model for dynamic groups (Bellare, Shi, Zhang, CT-RSA '05) captured anonymity, non-frameability, and traceability, later extended with trace-soundness (Sakai et al., PKC '12) and non-claimability (introduced as "opening-soundness" by Bootle et al., ACNS '16 & JoC '20).
In practice, the issuer and opener are often distinct entities, implemented by different organizations and/or hardware modules. We therefore formalize and prove the consequences of a model that enforces their complete separation, allows key reuse across groups, treats issuer and opener as stateless, and makes both joining and opening non-interactive.
This separation makes it necessary to reformulate traceability against a corrupt issuer and to introduce three additional unforgeability notions (key-unforgeability, certificate-unforgeability, and opening-unforgeability) for the case of a corrupt opener. Following this line of reasoning, we also develop strengthened formulations of trace-soundness and non-claimability.
We prove that in this model the eight resulting properties are fully distinct: even the conjunction of any seven does not imply the eighth. This yields the first comprehensive map of group signature security in a stateless, reusable-key, and non-interactive framework, and formally demonstrates the impact of complete issuer-opener separation.
25 September 2025
Andrey S. Shchebetov
This paper introduces and formalizes new, stringent security notions for elliptic curves, establishing a higher benchmark for cryptographic strength. We define two new classes of secure elliptic curves, which offer resilience against a broader range of known attacks, including those leveraging the curve's endomorphism ring or the twist's group structure. To construct curves satisfying these exceptional criteria, we develop a highly scalable, parallel framework based on the complex multiplication method. Our approach efficiently navigates the vast parameter space defined by safe primes and fundamental discriminants. The core of our method is an efficient scanning algorithm that postpones expensive curve construction until after orders are confirmed to meet our security definitions, enabling significant search efficiency. As a concrete demonstration of our definitions and techniques, we conducted a large-scale computational experiment. This resulted in the first known construction of 91 very strong 256-bit curves with extreme twist order (i.e., curve order is a safe prime and twist order is prime) and cofactor $u=1$ along with 4 such 512-bit curves meeting the extreme twist order criteria. Among these, one 512-bit curve has both its order (a safe prime) and its twist order being prime, while the other three have a small cofactor ($u=7$) but their twist orders are primes. All curves are defined over finite fields whose orders are safe primes. These results not only prove the existence of these cryptographically superior curves but also provide a viable path for their systematic generation, shifting the paradigm from constructing curves faster to constructing curves that are fundamentally stronger.
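The abstract's "scan first, construct later" idea can be illustrated with a small sketch (our own, not the paper's code; the helper names `is_safe_prime` and `passes_scan` are hypothetical): cheap primality checks filter candidate field/curve orders against the stated criteria (field order a safe prime, curve order a safe prime, twist order prime) before any expensive complex-multiplication construction is attempted.

```python
# Hedged sketch of the order-level filter described in the abstract.
# is_safe_prime / passes_scan are illustrative names, not the paper's API.

def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin: the first 12 prime bases suffice
    for all n below ~3.3e24, far beyond this toy's range."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def is_safe_prime(n: int) -> bool:
    """n is a safe prime if both n and (n - 1) / 2 are prime."""
    return is_prime(n) and is_prime((n - 1) // 2)

def passes_scan(p: int, order: int) -> bool:
    """Cheap filter on a candidate (field order p, curve order): the
    twist order is 2p + 2 - order since the two orders sum to 2p + 2."""
    twist = 2 * p + 2 - order
    return is_safe_prime(p) and is_safe_prime(order) and is_prime(twist)

# Toy instance: p = 47 (safe), candidate order 59 (safe), twist 37 (prime).
print(passes_scan(47, 59))
```

Only candidates surviving this filter would proceed to the expensive complex-multiplication construction step.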
Jonas Janneck, Aysan Nishaburi, Guilherme Rito
Emails are one of the main forms of digital communication, yet many of the guarantees they were designed to provide have surprisingly not yet been formalized in cryptography. This paper models an important feature of email applications: the plausible deniability of including Bcc recipients. Concretely,
- we define a basic (theoretical) email application capturing these guarantees in Constructive Cryptography (Maurer and Renner, ICS '11),
- we introduce Email Encryption: a new cryptographic primitive that is tailor-made to construct our email application,
- we define game-based notions for Email Encryption schemes, proving that their combination is sufficient to construct our simple email application, and
- we give a generic (proof-of-concept) construction of an Email Encryption scheme that provides all these guarantees.
Our work identifies and formalizes missing theoretical foundations for the security of emails, providing a first step towards practical solutions.
Serge Fehr, Yu-Hsuan Huang, Julia Kastner
We revisit the BUFF transform, which was proposed by Cremers et al. (S&P'21) as a means to achieve security properties beyond standard unforgeability for digital signature schemes. One of these properties, non-resignability (NR), has recently drawn some attention due to a strong impossibility result for the original definition of the property. Recent follow-up work then considered a variant (sNR) of the original definition, and showed that it is satisfied by the BUFF transform when the underlying hash function is modeled as a random oracle - while the original impossibility result still applies for the plain model. This raises the natural question of whether the BUFF transform satisfies sNR in a more fine-grained use of the random oracle model, when we consider a real-life iterative-hash-function design (such as Merkle-Damgaard or Sponge) and instead idealize the round function. Our discoveries in this direction are two-fold:
First, contrary to what one might expect, we show that there is a simple attack on the non-resignability property sNR of the BUFF-transform when instantiated with an iterative hash function. The attack relies on leaking an intermediate result of the hash computation to the adversary who is challenged to "resign" the message. This negative result once more shows the subtlety in the non-resignability property.
Second, on the positive side, we propose a small modification to the original BUFF transform, which we call Sandwich BUFF (for reasons to become clear), and prove the non-resignability property sNR of Sandwich BUFF both for Merkle-Damgaard-based hash functions in the random oracle model, and for Sponge-based hash functions in the random permutation model.
Jinrong Chen, Biming Zhou, Rongmao Chen, Haodong Jiang, Yi Wang, Xinyi Huang, Yunlei Zhao, Moti Yung
TLS 1.3 is at the heart of secure modern internet communications. With the rise of quantum attacks, post-quantum TLS 1.3, built on post-quantum key encapsulation mechanisms (KEMs), has naturally become a major research focus. At Eurocrypt 2022, Huguenin-Dumittan and Vaudenay demonstrated that KEMs secure against chosen-plaintext attacks (CPA) are sufficient to construct a secure TLS 1.3 handshake in the random oracle model (ROM), but their security reduction incurs an $\mathcal{O}(q^6)$ loss, where $q$ is the number of random oracle queries. Improving their security bounds was left as an open problem. To address this problem, Zhou et al. took the first step at Asiacrypt 2024, improving the loss factor to $\mathcal{O}(q^2)$ in the ROM and $\mathcal{O}(q^4)$ in the quantum ROM (QROM) for OW-CPA secure KEMs, and to $\mathcal{O}(q)$ (ROM) and $\mathcal{O}(q^2)$ (QROM) for IND-CPA secure KEMs.
In this work, we further advance the state-of-the-art by providing tighter security reductions for the post-quantum TLS 1.3 handshake based on CPA-secure KEMs. We introduce a new security notion, \textit{IND-1CCA-1MAC}, and demonstrate that the loss factors can be further improved with a slight ciphertext expansion, i.e., $\mathcal{O}(q)$ (ROM) and $\mathcal{O}(q^2)$ (QROM) for OW-CPA secure KEMs, and only $\mathcal{O}(1)$ in both models for IND-CPA secure KEMs. Furthermore, we prove that without ciphertext expansion, the loss factor of $\mathcal{O}(q)$ (ROM) and $\mathcal{O}(q^2)$ (QROM) is unavoidable. Given the above theoretical results, we investigate the security of TLS 1.3 based on CPA-secure KEMs in the hybrid key exchange and experimentally demonstrate that ciphertext expansion is indeed a viable trade-off for mitigating reduction losses.
Victor Normand, Sonia Belaïd, Matthieu Rivain
Designing practically secure masked circuits remains a central problem in the field of cryptographic implementation. While most masking schemes have been proven secure in the classical probing model, this model fails to capture more advanced side-channel attacks such as horizontal attacks. In recent years, the community has shifted toward the more realistic random probing model, which offers stronger guarantees. Yet, balancing strong security with practical efficiency continues to be a significant challenge.
In this work, we introduce new tools and constructions that significantly improve the design and analysis of random probing secure circuits. First, we formalize new security notions that combine the benefits of cardinal and general Random Probing Composability (RPC), two recently introduced notions enabling more flexible and efficient composition of secure gadgets. We then show how uniformly random permutations can be applied to transform any cardinal or general RPC gadget into a so-called uniformly cardinal RPC gadget, thereby enhancing security at low cost. Using these techniques, we propose the first non-linear multiplication gadget, inspired by the recursive construction from CHES 2016, that achieves concrete cardinal RPC security. We provide a detailed comparison with state-of-the-art multiplication gadgets in terms of both random probing advantage and implementation complexity. Building upon this gadget, we design a tighter random probing compiler that strategically uses permutations to improve security bounds while preserving efficiency. Finally, we apply our compiler to the AES and demonstrate improved performance and security compared to existing methods.
Michele Ciampi, Muhammad Ishaq, Rafail Ostrovsky, Ioannis Tzannetos, Vassilis Zikas
Payment channels are a cornerstone of a scalable blockchain infrastructure that enables transacting parties to lock assets on the blockchain and perform rapid off-chain updates with minimal latency and overhead. These protocols dramatically reduce on-chain interaction and improve throughput, with blockchain consensus only invoked in the event of disputes or final closure. While widely adopted in single-chain settings, such as the Lightning Network for Bitcoin, existing constructions suffer from at least one of the following limitations:
1) No cross-chain support. They do not enable fast trading of assets that reside on multiple isolated blockchains.
2) Non-optimal round complexity. The off-chain round complexity is not optimal (i.e., parties require more than two rounds to update the channel).
3) No Bitcoin compatibility. They require a more advanced scripting language, which rules out cross-chain interaction with chains that only have a simple (Bitcoin-like) scripting language.
In this work, we introduce a novel payment channel protocol that breaks through all the above limitations. Our construction supports bidirectional, multiasset, and off-chain interaction across an arbitrary set of blockchains, where each update of the channel requires the parties to send only one message each. Our protocol is fully compatible with Bitcoin and can be deployed on chains with only minimal scripting capabilities, making it broadly applicable to real-world blockchain networks.
Crucially, our design ensures atomic settlement across blockchains without relying on trusted intermediaries. We formally prove the security of our protocol together with a novel definition of the cross-chain payment channel that we introduce. Finally, we empirically validate our protocol, showing the minimal costs that our payment channel incurs in the setting where two parties make multiple exchanges of assets that reside on three different blockchains.
Harrison Banda, Jan Brinkmann, Juliane Krämer
In this work, we present two fault attacks against MPCitH-based signature schemes: we present a key recovery attack and a signature forgery attack, both of which only need a single successful fault injection to succeed. We analyze all five MPCitH-based schemes which are currently analyzed in round 2 of NIST's additional signature standardization process: Mirath, MQOM, PERK, RYDE, and SDitH. Our analysis shows that all five schemes are vulnerable to at least one of the attacks. We validate the practicality of our attacks using the ChipWhisperer setup and discuss countermeasures to prevent the attacks.
Daji Landis, Joseph Bonneau
Using stock market data as a source of public randomness has deep historical roots and has seen renewed interest with the development of verifiable delay functions. Prior work has estimated that asset prices contain ample entropy to prevent prediction by a passive observer, but has not considered an active attacker making trades in the marketplace. VDFs can make manipulation more difficult, forcing an attacker to precompute beacon results for some number of potential outcomes and then force the market into one of them by price manipulation. To date, there has been no analysis of the difficulty of such an attack. We propose a framework for evaluating this attack using standard finance models for price movement and the price impact of trades. We then estimate from empirical data that, even under generous assumptions, for a basket of large-cap stocks (the S&P 100) an active adversary would require huge capital reserves (on the order of billions of US dollars) and incur major losses to slippage (millions of US dollars).
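The scale of the manipulation cost can be sketched with a back-of-the-envelope estimate under the widely used square-root price-impact law, impact ≈ Y·σ·√(Q/V). This is our own illustrative assumption, not necessarily the paper's model; the coefficient Y and the inputs below are hypothetical.

```python
# Hedged sketch: invert the square-root impact law to estimate the
# dollar volume Q an attacker must trade to move a price by a target
# fraction. Y, sigma, and volume values are illustrative assumptions.

def trade_size_for_impact(target_impact: float, sigma_daily: float,
                          daily_volume: float, Y: float = 1.0) -> float:
    """Solve impact = Y * sigma_daily * sqrt(Q / V) for Q."""
    return daily_volume * (target_impact / (Y * sigma_daily)) ** 2

# Push one large-cap stock by 0.5% when its daily volatility is 2%
# and ~$2B of it trades per day:
q = trade_size_for_impact(0.005, 0.02, 2e9)
print(f"${q:,.0f}")  # ~$125,000,000 for a single stock
```

Repeating this across a basket like the S&P 100 quickly reaches the billions-of-dollars scale the abstract reports, before accounting for slippage losses.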
Alexandru Cojocaru, Juan Garay, Qipeng Liu, Fang Song
We give novel lifting theorems for security games in the quantum random oracle model (QROM) in Noisy Intermediate-Scale Quantum (NISQ) settings such as the hybrid query model, the noisy oracle and the bounded-depth models.
We provide, for the first time, a hybrid lifting theorem for hybrid algorithms that can perform both quantum and classical queries, as well as a lifting theorem for quantum algorithms with access to noisy oracles or bounded quantum depth.
At the core of our results lies a novel measure-and-reprogram framework, called hybrid coherent measure-and-reprogramming, tailored specifically for hybrid algorithms. Equipped with the lifting theorem, we are able to prove directly NISQ security and complexity results by calculating a single combinatorial quantity, relying solely on classical reasoning. As applications, we derive the first direct product theorems in the average case, in the hybrid setting - i.e., an enabling tool to determine the hybrid hardness of solving multi-instance security games. This allows us to derive in a straightforward manner the NISQ hardness of various security games, such as (i) the non-uniform hardness of salted games, (ii) the hardness of specific cryptographic tasks such as the multiple instance version of one-wayness and collision-resistance, and (iii) uniform or non-uniform hardness of many other games.
Hoeteck Wee
We present the first pairing-based ciphertext-policy attribute-based encryption (CP-ABE) scheme for the class of degree $3$ polynomials with compact parameters: the public key, ciphertext and secret keys comprise $O(n)$ group elements, where $n$ is the input length for the function. As an immediate corollary, we obtain a pairing-based broadcast encryption scheme for $N$ users with $O(N^{1/3})$-sized parameters, breaking the long-standing $\sqrt{N}$ barrier for pairing-based broadcast encryption. All of our constructions achieve adaptive security against unbounded collusions, and rely on the (bilateral) $k$-Lin assumption in prime-order bilinear groups.
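The $O(N^{1/3})$ corollary follows the standard encoding trick: index each of $N$ users by a triple of coordinates in $[0, n)$ with $n \approx N^{1/3}$, so membership in a broadcast set becomes a degree-3 condition over only $3n$ attribute slots. A minimal sketch of that encoding (our illustration, not the paper's construction):

```python
# Hedged illustration: encode user ids in [0, n^3) as coordinate triples,
# shrinking the attribute universe from N to 3n slots with n ~ N^(1/3).

import math

def cube_encode(uid: int, n: int) -> tuple:
    """Map a user id in [0, n^3) to a triple (i, j, k), each in [0, n)."""
    assert 0 <= uid < n ** 3
    i, rest = divmod(uid, n * n)
    j, k = divmod(rest, n)
    return i, j, k

def cube_decode(triple: tuple, n: int) -> int:
    i, j, k = triple
    return i * n * n + j * n + k

N = 1_000_000
n = math.ceil(N ** (1 / 3))  # 100 slots per axis vs sqrt(N) = 1000
print(n, cube_encode(123456, n))
```

With $N = 10^6$ users this gives parameters on the order of 100 group elements instead of the 1000 implied by the $\sqrt{N}$ barrier.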
24 September 2025
Jotaro Yano
Blockchains preserve public data indefinitely, creating tension between verifiability today and secrecy decades hence. In particular, pairing-based SNARKs (e.g., Groth16, PLONK) rely on discrete-log assumptions that are structurally vulnerable to Shor-type quantum attacks, motivating hash-based alternatives. This work investigates whether a fully on-chain pipeline that verifies both a ZK-STARK and a post-quantum signature can operate within Solana L1's compute and memory constraints. Our prototype adapts Winterfell 0.12 with a dedicated SHA-256 hashv syscall path to reduce hashing overhead, suppresses inlining in FRI hotspots to respect SBF (Solana BPF) stack limits, and uses a custom bump allocator synchronized with requested heap frames. Artifacts are uploaded in ≤900 byte chunks under a rolling hash chain and finalized in two steps: (i) SLH-DSA (SPHINCS+) signature verification over a length-delimited transcript, and (ii) STARK verification bound to SHA256(cipher) via public inputs. The result is an L1 verifier that is CPI-friendly, reproducible from a public repository, and engineered for predictable cost. On devnet, across n=100 runs, we measure mean finalize_sig cost of 5.01×10^5 CU (median 4.999×10^5) and mean verify_stark cost of 1.10×10^6 CU (median 1.11×10^6), with maxima below 1.19×10^6 CU; all runs fit within Solana's 1.4×10^6 CU transaction budget. At representative sizes, the derived intensities are ≈63.8 CU/Sig-byte (7,856 B) and ≈248.9 CU/Proof-byte (4,437 B), and verification scales approximately linearly with proof bytes under a fixed FRI policy. We systematize DoS and fee controls (fixed-offset appends, rolling-hash checks), justify the binding of public inputs to the ciphertext, and outline engineering levers (single-call hashing, stack discipline, phase separation) that make full L1 STARK+PQC verification practical at ≈128-bit settings.
Gyeongwon Cha, Dongjin Park, Joon-Woo Lee
Homomorphic encryption (HE) for high-precision integers has been steadily researched through various schemes; however, these approaches incurred severe overhead as the bit-width grew, requiring larger parameters to support integers of several hundred to a thousand bits.
A significant breakthrough was recently made by Boneh et al. (Crypto'25). Their scheme constructs a residue number system from the different slots of a single CKKS ciphertext. This enables arithmetic on thousand-bit integers without increasing parameters.
However, the RNS approach of Boneh et al., which performs approximate reduction, fundamentally cannot support non-arithmetic operations. Alternatively, the radix-based approach proposed by Kim (CHES'25) can perform non-arithmetic operations, but it requires $O(k)$ bootstraps for a bit-width $k$. This makes it highly inefficient, and thus impractical, for non-arithmetic operations requiring thousand-bit precision.
This paper proposes an improved radix-based CKKS scheme, centered on a 2-step algorithm that optimizes the number of bootstraps required for the digit carry operation to $O(\log k)$. The proposed scheme requires only 3-6 bootstraps to restore the result of a 32-2048 bit integer multiplication to its unique representation, which enables the efficient implementation of non-arithmetic operations such as comparison. Furthermore, our scheme extends the radix-based system, previously limited to prime-power moduli, to support an efficient homomorphic reduction algorithm for arbitrary moduli.
Furthermore, our experiments demonstrate substantial efficiency gains compared to Boneh et al. For example, for the moduli used in homomorphic signatures (Curve25519, P-384, and 2048-bit RSA), our scheme can process up to 4$\times$ more integers in a single ciphertext. Specifically for Curve25519, we also reduce the latency by 1.4$\times$, shortening the amortized time by 5.6$\times$ compared to Boneh et al. and achieving a final processing time of 1.34 seconds per data point.
George Teseleanu
Let $N = pq$ be the product of two balanced primes. Cotan and Teşeleanu (2023) introduced a family of RSA-like cryptosystems defined by $ed - k(p^n - 1)(q^n - 1) = 1$, where $n \geq 1$, encompassing classical RSA ($n=1$) and the Elkamchouchi–Elshenawy–Shaban variant ($n=2$). We present a new attack for $n=3$ that integrates continued fractions with lattice-based methods, naturally extending previous results for $n = 1, 2, 4, 6$.
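The defining key equation is easy to instantiate concretely (a toy sketch with small primes, not the attack itself): choose $e$ coprime to $(p^n - 1)(q^n - 1)$, derive $d$ as its modular inverse, and the integer $k$ in $ed - k(p^n - 1)(q^n - 1) = 1$ falls out.

```python
# Toy instance of the key equation e*d - k*(p^n - 1)*(q^n - 1) = 1
# for the RSA-like family (n = 1 recovers classical RSA). Small
# illustrative primes; the paper's attack targets balanced primes at
# cryptographic sizes.

p, q, n = 1009, 1013, 3
phi_n = (p**n - 1) * (q**n - 1)
e = 65537
d = pow(e, -1, phi_n)        # modular inverse; requires gcd(e, phi_n) = 1
k = (e * d - 1) // phi_n     # the multiplier in the key equation
assert e * d - k * phi_n == 1
print(d, k)
```

Small-$d$ (equivalently, large-$k$) instances of this equation are exactly what continued-fraction and lattice attacks exploit.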
Hanwen Feng, Zhenliang Lu, Qiang Tang, Yuchen Ye
To more accurately capture real-world network and adversarial behaviors, recent research has explored Byzantine Agreement (BA) under various mixed fault models. The breakthroughs by Loss et al. (TCC'23, TCC'24) have established the feasibility of optimally resilient BA in these settings. Specifically, their protocols tolerate up to $t$ byzantine parties, $r$ receive faulty parties, and $s$ send faulty parties in a network of $n > 2t + r + s$ parties. Initially, Loss et al. (TCC'23) consider a model in which a party may be either receive faulty or send faulty, but not both at the same time (the non-overlapping model). The extended model of Loss et al. (TCC'24) further accommodates the overlapping model, where a party can simultaneously exhibit both receive-faulty and send-faulty behavior. However, despite this flexibility, both protocols incur a prohibitively high $O(n^5)$-bit communication cost, leaving open the fundamental question of whether the optimal $O(n^2)$-bit complexity achieved by many classical BA protocols is attainable in the optimally resilient mixed fault model (with overlapping faults or not).
In this work, we answer these open questions affirmatively. We present a mixed-fault BA protocol that achieves the optimal expected $O(n^2\lambda)$ communication complexity while maintaining expected $O(1)$ round complexity and optimal (strongly adaptive) resilience. Our protocol supports the strongest overlapping model, while matching the best-known complexity of classical BA protocols. To achieve this, we develop a series of novel techniques, carefully designed to ensure efficient and secure agreement even under mixed faults. Beyond binary BA, we extend our protocol to a multi-valued BA setting, achieving an expected communication complexity of $O(\frac{n^2}{t}L + n^2\lambda^2)$ and an expected round complexity of $O(1)$, where $t$ is the number of byzantine faults, and $L$ is the bit-length of the input values. In particular, for $t = O(n)$, the communication reduces to $O(nL + n^2\lambda^2)$. Notably, our protocols operate under the same setup and cryptographic assumptions as those in Loss et al.
Tako Boris Fouotsa
Signatures based on isogeny group actions are obtained from a sigma protocol with high soundness error, say $\frac{1}{2}$ for its most basic variant. One needs to independently repeat the sigma protocol $O(\lambda)$ times to reduce the soundness error to negligible (with $\lambda$ being the security parameter). These repetitions come with a considerable efficiency and size overhead. On the other hand, quaternion isogeny-based signatures such as SQIsign and PRISM are directly obtained from a sigma protocol with a negligible soundness error. The secret key in SQIsign and PRISM is a random supersingular isogeny, and both schemes are insecure when the secret isogeny arises from the supersingular isogeny group action setting.
In this paper, we propose WaterSQI and PRISMO, variants of SQIsign and PRISM respectively, suited for secret isogenies that arise from the supersingular isogeny group action setting. They use a sigma protocol whose soundness error is negligible without requiring parallel repetitions. They are hence more compact and $O(\lambda)$ times more efficient compared to Generalised CSI-FiSh (the generalisation of CSI-FiSh to large parameters using generic isogeny group action evaluation algorithms such as Clapotis/KLaPoTi/PEGASIS). For example, for our proof-of-concept implementation with a 2000-bit prime in SageMath, PRISMO, when compared to Generalised CSI-FiSh with the same public key size, is about 3x faster for key generation, 273x faster for signing and 4900x faster for verification, while also being 29x more compact (signature size).
Banashri Karmakar, Aniket Kate, Shravani Patil, Arpita Patra, Sikhar Patranabis, Protik Paul, Divya Ravi
Multiparty computation (MPC) is a topic of growing interest for privacy-preserving computation tasks. A few MPC libraries have been developed, and newer protocols are regularly proposed to reduce the latency overhead, improve scalability, and achieve strong termination guarantees. However, most current MPC protocols are designed and implemented assuming network synchrony: in theory, they assume that all messages are delivered within a known time bound, while for experimental analysis, most assume all nodes to be honest, so that the time bounds are never exercised. When deploying MPC systems in the wild while trying to minimize latency, network synchrony is indeed a strong assumption to make: natural adverse network conditions can break the safety and/or liveness of the protocol due to simply delayed messages.
Asynchronous MPC (AMPC) protocols can overcome this challenge, as they do not assume fixed time bounds on message delivery delays; however, AMPC faces a natural threshold barrier of a two-thirds honest majority and introduces significant computation and/or communication overheads. This work aims to achieve the best of both network models by designing a practical AMPC protocol with stronger resilience guarantees matching those of synchronous MPC.
We achieve this by adopting the emerging helper-aided model, and designing protocols that achieve fairness not only in the simple honest majority setting but also in the dishonest majority setting. Our protocols follow the standard preprocessing-online paradigm, enabling a lightweight and fast input-dependent online phase. In the honest majority setting, our protocol relies solely on lightweight cryptographic operations. In the dishonest majority setting, the protocol requires oblivious transfer (OT) during preprocessing, which we prove is necessary in this setting. We implement our constructions and provide a thorough performance comparison with state-of-the-art MPC protocols in the helper-aided model. Our experiments demonstrate that our protocols substantially outperform the state-of-the-art helper-aided MPC scheme, while being significantly more resilient to network delays.
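The preprocessing-online paradigm the abstract refers to can be illustrated with the classic Beaver-triple technique: a helper deals correlated randomness offline so that the online multiplication of secret-shared values needs only local arithmetic and cheap openings. This is a minimal sketch of that general idea over additive shares, not the paper's actual protocol.

```python
import random

P = 2**61 - 1  # prime modulus for additive secret sharing


def share(x, n=2):
    """Split x into n additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares


def reconstruct(shares):
    return sum(shares) % P


# Preprocessing: a helper deals shares of a random Beaver triple (a, b, c = a*b).
a, b = random.randrange(P), random.randrange(P)
c = (a * b) % P
a_sh, b_sh, c_sh = share(a), share(b), share(c)

# Online phase: parties hold shares of secret inputs x, y.
x, y = 42, 17
x_sh, y_sh = share(x), share(y)

# Each party locally masks its input shares; d and e are then opened publicly.
d = reconstruct([(xi - ai) % P for xi, ai in zip(x_sh, a_sh)])  # d = x - a
e = reconstruct([(yi - bi) % P for yi, bi in zip(y_sh, b_sh)])  # e = y - b

# Each party computes its share of x*y = c + d*b + e*a + d*e locally;
# the public d*e term is added by a single designated party.
z_sh = [(ci + d * bi + e * ai) % P for ci, ai, bi in zip(c_sh, a_sh, b_sh)]
z_sh[0] = (z_sh[0] + d * e) % P

assert reconstruct(z_sh) == (x * y) % P
```

The point of the paradigm is visible here: all multiplications involving secrets happen in preprocessing, so the input-dependent online phase is just additions and two openings.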
Manoja Shridhar, Bala Puruvana, Alex Cravill, Joey Wolff
The convergence of cloud computing and edge intelligence has given rise to a new era of distributed computing infrastructure [1] that aims to leverage the strengths of both paradigms. Cloud computing provides on-demand scalability, high computational power, and global accessibility. Edge computing, on the other hand, is positioned closer to the end-user or data source, minimizing latency and improving real-time responsiveness. By merging these two, organizations can build efficient, low-latency solutions that meet stringent performance [2] and reliability requirements. However, this convergence also introduces new challenges related to security, data integrity, and management. Despite the many advances in cryptographic protocols and distributed systems orchestration, ensuring robust security postures in hybrid cloud-edge environments remains a daunting task.
Joseph Carolan
The analysis of quantum algorithms which query random, invertible permutations has been a long-standing challenge in cryptography. Many techniques which apply to random oracles fail, or are not known to generalize to this setting. As a result, foundational cryptographic constructions involving permutations often lack quantum security proofs. With the aim of closing this gap, we develop and prove soundness of a compressed permutation oracle. Our construction shares many of the attractive features of Zhandry's original compressed function oracle: the purification is a small list of input-output pairs which meaningfully reflect an algorithm's knowledge of the oracle.
We then apply this framework to show that the Feistel construction with seven rounds is a strong quantum PRP, resolving an open question of Zhandry (2012). We further re-prove essentially all known quantum query lower bounds in the random permutation model, notably the collision and preimage resistance of both Sponge and Davies-Meyer and the hardness of double-sided zero search and sparse predicate search, and give new lower bounds for cycle finding and the one-more problem.
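The Feistel construction at the heart of this result builds an invertible permutation on $2n$-bit blocks from (non-invertible) round functions. A minimal classical sketch, using a hash to stand in for the random round functions (purely illustrative, not the quantum security argument):

```python
import hashlib


def round_fn(key, half):
    """Stand-in for a random round function F_k, modeled with SHA-256."""
    return hashlib.sha256(key + half).digest()[: len(half)]


def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


def feistel_encrypt(keys, block):
    """r-round Feistel: each round maps (L, R) -> (R, L xor F_k(R))."""
    n = len(block) // 2
    L, R = block[:n], block[n:]
    for k in keys:
        L, R = R, xor(L, round_fn(k, R))
    return L + R


def feistel_decrypt(keys, block):
    """Invert by undoing the rounds in reverse order:
    (L', R') = (R, L xor F_k(R))  =>  (L, R) = (R' xor F_k(L'), L')."""
    n = len(block) // 2
    L, R = block[:n], block[n:]
    for k in reversed(keys):
        L, R = xor(R, round_fn(k, L)), L
    return L + R


keys = [bytes([i]) * 16 for i in range(7)]  # seven independent round keys
pt = b"attack at dawn!!"                    # one 16-byte block
ct = feistel_encrypt(keys, pt)
assert feistel_decrypt(keys, ct) == pt      # a Feistel network is always invertible
```

Note that the construction is a permutation regardless of the round functions; the question the paper settles is how many rounds suffice for this permutation to be indistinguishable from random against quantum queries in both directions.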
23 September 2025
Jeremiah Blocki, Seunghoon Lee, Brayan Sebastian Yepes-Garcia
We initiate the study of differentially private data-compression schemes, motivated by the insecurity of the popular "Compress-Then-Encrypt" framework. Data compression is a useful tool which exploits redundancy in data to reduce storage/bandwidth when files are stored or transmitted. However, if the contents of a file are confidential, then the length of a compressed file might leak confidential information about the content of the file itself. Encrypting a compressed file does not eliminate this leakage, as data encryption schemes are only designed to hide the content of a confidential message, not its length.

In our proposed Differentially Private Compress-Then-Encrypt framework, we add a random positive amount of padding to the compressed file to ensure that any leakage satisfies the rigorous privacy guarantee of $(\epsilon,\delta)$-differential privacy. The amount of padding that needs to be added depends on the sensitivity of the compression scheme to small changes in the input, i.e., to what degree changing a single character of the input message can impact the length of the compressed file. While some popular compression schemes are highly sensitive to small changes in the input, we argue that effective data compression schemes do not necessarily have high sensitivity.

Our primary technical contribution is analyzing the fine-grained sensitivity of the LZ77 compression scheme (IEEE Trans. Inf. Theory 1977), one of the most common compression schemes used in practice. We show that the global sensitivity of the LZ77 compression scheme has the upper bound $\mathcal{O}(W^{2/3}\log n)$, where $W\leq n$ denotes the size of the sliding window. When $W=n$, we show the lower bound $\Omega(n^{2/3}\log^{1/3}n)$ for the global sensitivity of the LZ77 compression scheme, which is tight up to a sublogarithmic factor.
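The Compress-Then-Pad idea can be sketched in a few lines: compress, then append a randomly sized pad so the ciphertext length is a noisy function of the true compressed length. The sketch below uses `zlib` and a truncated geometric padding distribution; the distribution, its parameters, and the 4-byte length prefix are illustrative choices, not the paper's calibrated mechanism (truncating the noise is one reason such mechanisms only achieve $(\epsilon,\delta)$-DP rather than pure $\epsilon$-DP).

```python
import math
import random
import zlib


def dp_pad_length(sensitivity, epsilon, max_pad):
    """Draw a padding amount from a truncated geometric distribution.

    The success probability is chosen so the tail decays like
    exp(-epsilon * k / sensitivity); illustrative, not a proven mechanism.
    """
    p = 1 - math.exp(-epsilon / sensitivity)
    pad = 0
    while random.random() > p and pad < max_pad:
        pad += 1
    return pad


def compress_then_pad(message, sensitivity=8, epsilon=1.0, max_pad=64):
    body = zlib.compress(message)
    pad = dp_pad_length(sensitivity, epsilon, max_pad)
    # Length-prefix the real body so the pad can be stripped after decryption.
    header = len(body).to_bytes(4, "big")
    return header + body + bytes(pad)


def unpad_then_decompress(blob):
    body_len = int.from_bytes(blob[:4], "big")
    return zlib.decompress(blob[4 : 4 + body_len])


msg = b"confidential " * 20
blob = compress_then_pad(msg)
assert unpad_then_decompress(blob) == msg
```

In a full Compress-Then-Encrypt deployment the padded blob, prefix included, would be encrypted, so an observer sees only the total length; the paper's sensitivity bounds for LZ77 govern how much padding noise such a mechanism needs.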