IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
24 September 2025
Jotaro Yano
Blockchains preserve public data indefinitely, creating tension between verifiability today and secrecy decades hence. In particular, pairing-based SNARKs (e.g., Groth16, PLONK) rely on discrete-log assumptions that are structurally vulnerable to Shor-type quantum attacks, motivating hash-based alternatives. This work investigates whether a fully on-chain pipeline that verifies both a ZK-STARK and a post-quantum signature can operate within Solana L1's compute and memory constraints. Our prototype adapts Winterfell 0.12 with a dedicated SHA-256 hashv syscall path to reduce hashing overhead, suppresses inlining in FRI hotspots to respect SBF (Solana BPF) stack limits, and uses a custom bump allocator synchronized with requested heap frames. Artifacts are uploaded in ≤900-byte chunks under a rolling hash chain and finalized in two steps: (i) SLH-DSA (SPHINCS+) signature verification over a length-delimited transcript, and (ii) STARK verification bound to SHA256(cipher) via public inputs. The result is an L1 verifier that is CPI-friendly, reproducible from a public repository, and engineered for predictable cost. On devnet, across n=100 runs, we measure mean finalize_sig cost of 5.01×10^5 CU (median 4.999×10^5) and mean verify_stark cost of 1.10×10^6 CU (median 1.11×10^6), with maxima below 1.19×10^6 CU; all runs fit within Solana's 1.4×10^6 CU transaction budget. At representative sizes, the derived intensities are ≈63.8 CU/Sig-byte (7,856 B) and ≈248.9 CU/Proof-byte (4,437 B), and verification scales approximately linearly with proof bytes under a fixed FRI policy. We systematize DoS and fee controls (fixed-offset appends, rolling-hash checks), justify the binding of public inputs to the ciphertext, and outline engineering levers (single-call hashing, stack discipline, phase separation) that make full L1 STARK+PQC verification practical at ≈128-bit settings.
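For readers who want to see the shape of the chunked-upload mechanism, here is a minimal Python sketch of a rolling SHA-256 hash chain over ≤900-byte chunks. The chunk bound and the artifact size come from the abstract; the initial tag, the exact chaining rule, and the function names are our assumptions, not the project's code.

```python
# Hypothetical sketch of the "rolling hash chain" over <=900-byte chunks:
# each chunk is appended at a fixed offset and folded into a running SHA-256
# digest, so the final digest commits to the whole uploaded artifact.
import hashlib

CHUNK = 900  # upper bound on chunk size reported in the abstract

def upload_chunks(artifact: bytes) -> bytes:
    """Split an artifact into <=900-byte chunks and chain them with SHA-256."""
    rolling = hashlib.sha256(b"init").digest()  # assumed initial tag
    offset = 0
    for i in range(0, len(artifact), CHUNK):
        chunk = artifact[i:i + CHUNK]
        assert i == offset  # fixed-offset append check: chunk must land at `offset`
        rolling = hashlib.sha256(rolling + chunk).digest()
        offset += len(chunk)
    return rolling  # the on-chain program would compare this against the claim

proof = b"\x01" * 4437  # representative proof size from the abstract
print(upload_chunks(proof).hex())
```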
Gyeongwon Cha, Dongjin Park, Joon-Woo Lee
Homomorphic encryption (HE) for high-precision integers has been steadily researched through various schemes; however, these approaches incurred severe overhead as the bit-width grew, requiring larger parameters to support integers of several hundred to a thousand bits.
A significant breakthrough was recently made by Boneh et al. (Crypto'25). Their scheme constructs a residue number system from the different slots of a single CKKS ciphertext. This enables arithmetic on thousand-bit integers without increasing parameters.
However, the RNS approach of Boneh et al., which performs approximate reduction, fundamentally cannot support non-arithmetic operations. Alternatively, the radix-based approach proposed by Kim (CHES'25) can perform non-arithmetic operations, but it requires $O(k)$ bootstraps for a bit-width $k$. This makes it highly inefficient, and thus impractical, for non-arithmetic operations requiring thousand-bit precision.
This paper proposes an improved radix-based CKKS scheme, centered on a 2-step algorithm that optimizes the number of bootstraps required for the digit carry operation to $O(\log k)$. The proposed scheme requires only 3-6 bootstraps to restore the result of a 32-2048 bit integer multiplication to its unique representation, which enables the efficient implementation of non-arithmetic operations such as comparison. Furthermore, our scheme extends the radix-based system, previously limited to prime-power moduli, to support an efficient homomorphic reduction algorithm for arbitrary moduli.
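To see why carry restoration is the expensive step, the following plaintext-domain Python toy shows a radix-16 digit multiplication whose digits overflow the radix and must be carried back to the unique base-16 representation; in the homomorphic setting each carry pass costs bootstraps, which is the count the 2-step algorithm reduces to $O(\log k)$. The radix and helper names are illustrative only, not the scheme itself.

```python
# Plaintext illustration (not the homomorphic scheme): after digit-wise
# multiplication, digits exceed the radix B and the representation is no
# longer unique; a carry pass restores it. Homomorphically, this carry
# step is the bootstrap bottleneck the paper optimizes.
B = 16  # radix; in CKKS each digit would live in its own slot

def to_digits(x, n):
    return [(x >> (4 * i)) & 0xF for i in range(n)]  # base-16, little-endian

def mul_digits(a, b):
    out = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj        # digits now exceed B: non-unique form
    return out

def carry(d):
    c = 0
    for i in range(len(d)):
        c, d[i] = divmod(d[i] + c, B)    # sequential carry; the paper batches this
    return d

a, b = 0xBEEF, 0xCAFE
digits = carry(mul_digits(to_digits(a, 4), to_digits(b, 4)))
assert sum(v * B**i for i, v in enumerate(digits)) == a * b
```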
Our experiments also demonstrate substantial efficiency gains compared to Boneh et al. For example, for moduli used in homomorphic signatures (Curve25519, P-384, and 2048-bit RSA), our scheme can process up to 4$\times$ more integers in a single ciphertext. Specifically for Curve25519, we also reduce the latency by 1.4$\times$, shortening the amortized time by 5.6$\times$ compared to Boneh et al. and achieving a final processing time of 1.34 seconds per data point.
George Teseleanu
Let $N = pq$ be the product of two balanced primes. Cotan and Teşeleanu (2023) introduced a family of RSA-like cryptosystems defined by $ed - k(p^n - 1)(q^n - 1) = 1$, where $n \geq 1$, encompassing classical RSA ($n=1$) and the Elkamchouchi–Elshenawy–Shaban variant ($n=2$). We present a new attack for $n=3$ that integrates continued fractions with lattice-based methods, naturally extending previous results for $n = 1, 2, 4, 6$.
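As a hedged toy of the continued-fraction ingredient only: in the classical $n=1$ case (Wiener's attack), $k/d$ appears among the convergents of $e/N$ when $d$ is small. The paper couples such convergent searches with lattice reduction for the $n=3$ equation; the lattice step is omitted here, and the instance below is deliberately tiny.

```python
# Toy Wiener-style convergent search (n=1 case); illustrates the
# continued-fraction component, not the paper's n=3 attack.
from fractions import Fraction

def convergents(num, den):
    """Yield the continued-fraction convergents of num/den."""
    h0, h1, k0, k1 = 0, 1, 1, 0
    while den:
        a, (num, den) = num // den, (den, num % den)
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        yield Fraction(h1, k1)

p, q = 1009, 2003                      # toy balanced primes
N, phi = p * q, (p - 1) * (q - 1)
d = 5                                  # deliberately small private exponent
e = pow(d, -1, phi)
for c in convergents(e, N):
    # candidate d' passes if decryption-style check 2^(e*d') == 2 (mod N)
    if c.denominator > 1 and pow(2, e * c.denominator, N) == 2:
        print("recovered d =", c.denominator)   # prints 5
        break
```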
Hanwen Feng, Zhenliang Lu, Qiang Tang, Yuchen Ye
To more accurately capture real-world network and adversarial behaviors, recent research has explored Byzantine Agreement (BA) under various mixed fault models. The breakthroughs by Loss et al. (TCC'23, TCC'24) established the feasibility of optimally resilient BA in these settings. Specifically, their protocols tolerate up to $t$ Byzantine parties, $r$ receive faulty parties, and $s$ send faulty parties in a network of $n > 2t + r + s$ parties. Loss et al. (TCC'23) initially considered a model in which a party can be either receive faulty or send faulty, but not both at the same time (the non-overlapping model); Loss et al. (TCC'24) further accommodates the overlapping model, where a party can simultaneously exhibit both receive faulty and send faulty behaviors. Despite this flexibility, both protocols incur a prohibitively high $O(n^5)$-bit communication cost, leaving open the fundamental question of whether the optimal $O(n^2)$-bit complexity achieved by many classical BA protocols is attainable in the optimally resilient mixed fault model (with or without overlapping faults).
In this work, we answer these open questions affirmatively. We present a mixed-fault BA protocol that achieves the optimal expected $O(n^2\lambda)$ communication complexity while maintaining expected $O(1)$ round complexity and optimal (strongly adaptive) resilience. Our protocol supports the strongest overlapping model, while matching the best-known complexity of classical BA protocols. To achieve this, we develop a series of novel techniques, carefully designed to ensure efficient and secure agreement even under mixed faults. Beyond binary BA, we extend our protocol to a multi-valued BA setting, achieving an expected communication complexity of $O(\frac{n^2}{t}L + n^2\lambda^2)$ and an expected round complexity of $O(1)$, where $t$ is the number of Byzantine faults, and $L$ is the bit-length of the input values. In particular, for $t = O(n)$, the communication reduces to $O(nL + n^2\lambda^2)$. Notably, our protocols operate under the same setup and cryptographic assumptions as those in Loss et al.
Tako Boris Fouotsa
Isogeny group action based signatures are obtained from a sigma protocol with high soundness error, say $\frac{1}{2}$ for its most basic variant. One needs to independently repeat the sigma protocol $O(\lambda)$ times to reduce the soundness error to negligible (with $\lambda$ being the security parameter). These repetitions come with a considerable efficiency and size overhead. On the other hand, quaternion isogeny-based signatures such as SQIsign and PRISM are directly obtained from a sigma protocol with a negligible soundness error. The secret key in SQIsign and PRISM is a random supersingular isogeny, and both schemes are insecure when the secret isogeny arises from the supersingular isogeny group action setting.
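The repetition overhead is easy to quantify: with soundness error $\epsilon$ per run, $\lceil \lambda / \log_2(1/\epsilon) \rceil$ independent repetitions are needed to reach $2^{-\lambda}$ soundness. A two-line calculation (ours, for illustration):

```python
# Why a soundness-1/2 sigma protocol pays an O(lambda) repetition factor.
import math

def repetitions(soundness_error: float, security_bits: int) -> int:
    return math.ceil(security_bits / math.log2(1 / soundness_error))

print(repetitions(1/2, 128))    # 128 repetitions for the basic variant
print(repetitions(1/256, 128))  # 16, e.g. with a larger challenge space
```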
In this paper, we propose WaterSQI and PRISMO, variants of SQIsign and PRISM respectively, suited for secret isogenies that arise from the supersingular isogeny group action setting. They use a sigma protocol whose soundness error is negligible without requiring parallel repetitions. They are hence more compact and $O(\lambda)$ times more efficient compared to Generalised CSI-FiSh (the generalisation of CSI-FiSh to large parameters using generic isogeny group action evaluation algorithms such as Clapotis/KLaPoTi/PEGASIS). For example, for our proof-of-concept implementation with a 2000-bit prime in SageMath, PRISMO, when compared to Generalised CSI-FiSh with the same public key size, is about 3x faster for key generation, 273x faster for signing and 4900x faster for verification, while also being 29x more compact (signature size).
Banashri Karmakar, Aniket Kate, Shravani Patil, Arpita Patra, Sikhar Patranabis, Protik Paul, Divya Ravi
Multiparty computation (MPC) is a topic of growing interest for privacy-preserving computation tasks. A few MPC libraries have been developed, and newer protocols are regularly proposed to reduce the latency overhead, improve scalability, and achieve strong termination guarantees. However, most current MPC protocols are designed and implemented assuming network synchrony: in theory, they assume that all messages are delivered within a known time bound, while for experimental analysis, most assume all nodes to be honest, so that the time bounds never come into play. When deploying MPC systems in the wild while trying to minimize latency, network synchrony is a strong assumption to make: natural adverse network conditions can break the safety and/or liveness of the protocol simply because of delayed messages.
Asynchronous MPC (AMPC) protocols can overcome the challenge as they do not assume fixed time bounds for message delivery delays; however, AMPC faces a natural threshold barrier of 2/3rd honest majority and introduces significant computation and/or communication overheads. This work aims to achieve the best-of-both network models by designing a practical AMPC protocol that has stronger resilience guarantees matching those for synchronous MPC.
We achieve this by adopting the emerging helper-aided model, and designing protocols that achieve fairness not only in the simple honest majority setting but also in the dishonest majority setting. Our protocols follow the standard preprocessing-online paradigm, enabling a lightweight and fast input-dependent online phase. In the honest majority setting, our protocol relies solely on lightweight cryptographic operations. In the dishonest majority setting, the protocol requires oblivious transfer (OT) during preprocessing, which we prove is necessary in this setting. We implement our constructions and provide a thorough performance comparison with state-of-the-art MPC protocols in the helper-aided model. Our experiments demonstrate that our protocols substantially outperform the state-of-the-art helper-aided MPC scheme, while being significantly more resilient to network delays.
Manoja Shridhar, Bala Puruvana, Alex Cravill, Joey Wolff
The convergence of cloud computing and edge intelligence has given rise to a new era of distributed computing infrastructure [1] that aims to leverage the strengths of both paradigms. Cloud computing provides on-demand scalability, high computational power, and global accessibility. Edge computing, on the other hand, is positioned closer to the end user or data source, minimizing latency and improving real-time responsiveness. By merging these two, organizations can build efficient, low-latency solutions that meet stringent performance [2] and reliability requirements. However, this convergence also introduces new challenges related to security, data integrity, and management. Despite the many advances in cryptographic protocols and distributed systems orchestration, ensuring robust security postures in hybrid cloud-edge environments remains a daunting task.
Joseph Carolan
The analysis of quantum algorithms which query random, invertible permutations has been a long-standing challenge in cryptography. Many techniques which apply to random oracles fail, or are not known to generalize to this setting. As a result, foundational cryptographic constructions involving permutations often lack quantum security proofs. With the aim of closing this gap, we develop and prove soundness of a compressed permutation oracle. Our construction shares many of the attractive features of Zhandry's original compressed function oracle: the purification is a small list of input-output pairs which meaningfully reflect an algorithm's knowledge of the oracle.
We then apply this framework to show that the Feistel construction with seven rounds is a strong quantum PRP, resolving an open question of (Zhandry, 2012). We further re-prove essentially all known quantum query lower bounds in the random permutation model, notably the collision and preimage resistance of both Sponge and Davies-Meyer, hardness of double-sided zero search and sparse predicate search, and give new lower bounds for cycle finding and the one-more problem.
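For context, a minimal Python model of the balanced Feistel construction at seven rounds, with hash-based stand-ins for the random round functions; it illustrates the invertibility that makes the permutation setting delicate to analyze, and is in no way the paper's proof machinery.

```python
# Toy 7-round Feistel permutation on 16-byte blocks. Each round applies a
# (stand-in) round function to half the state; decryption runs the rounds
# backwards, i.e. the construction is an invertible permutation.
import hashlib

ROUNDS, HALF = 7, 8  # 7 rounds on 8-byte halves

def F(key: bytes, r: int, x: bytes) -> bytes:
    return hashlib.sha256(key + bytes([r]) + x).digest()[:HALF]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_enc(key, block):
    L, R = block[:HALF], block[HALF:]
    for r in range(ROUNDS):
        L, R = R, xor(L, F(key, r, R))
    return L + R

def feistel_dec(key, block):
    L, R = block[:HALF], block[HALF:]
    for r in reversed(range(ROUNDS)):
        L, R = xor(R, F(key, r, L)), L
    return L + R

msg = b"sixteen bytes!!!"
assert feistel_dec(b"k", feistel_enc(b"k", msg)) == msg
```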
23 September 2025
Jeremiah Blocki, Seunghoon Lee, Brayan Sebastian Yepes-Garcia
We initiate the study of differentially private data-compression schemes motivated by the insecurity of the popular "Compress-Then-Encrypt" framework. Data compression is a useful tool which exploits redundancy in data to reduce storage/bandwidth when files are stored or transmitted. However, if the contents of a file are confidential then the length of a compressed file might leak confidential information about the content of the file itself. Encrypting a compressed file does not eliminate this leakage, as data encryption schemes are only designed to hide the content of a confidential message, not its length. In our proposed Differentially Private Compress-Then-Encrypt framework, we add a random positive amount of padding to the compressed file to ensure that any leakage satisfies the rigorous privacy guarantee of $(\epsilon,\delta)$-differential privacy. The amount of padding that needs to be added depends on the sensitivity of the compression scheme to small changes in the input, i.e., to what degree can changing a single character of the input message impact the length of the compressed file. While some popular compression schemes are highly sensitive to small changes in the input, we argue that effective data compression schemes do not necessarily have high sensitivity. Our primary technical contribution is analyzing the fine-grained sensitivity of the LZ77 compression scheme (IEEE Trans. Inf. Theory 1977) which is one of the most common compression schemes used in practice. We show that the global sensitivity of the LZ77 compression scheme has the upper bound $\mathcal{O}(W^{2/3}\log n)$ where $W\leq n$ denotes the size of the sliding window. When $W=n$, we show the lower bound $\Omega(n^{2/3}\log^{1/3}n)$ for the global sensitivity of the LZ77 compression scheme which is tight up to a sublogarithmic factor.
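A quick demonstration of the leakage that motivates the framework, using Python's zlib (DEFLATE, which builds on LZ77): two equal-length plaintexts compress to visibly different lengths, and encrypting the compressed files would preserve those lengths. The example inputs are ours, not the paper's.

```python
# "Compress-Then-Encrypt" leakage: encryption hides content but not length,
# and compressed length reveals how repetitive the plaintext was. The DP
# framework instead pads the compressed file with random positive padding.
import zlib

secret_a = b"attack at dawn " * 40                                       # highly redundant
secret_b = bytes(range(256)) + bytes(reversed(range(256))) + b"x" * 88   # same length, less redundant
assert len(secret_a) == len(secret_b) == 600

print(len(zlib.compress(secret_a)))  # small
print(len(zlib.compress(secret_b)))  # noticeably larger: length leaks structure
```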
Arman Riasi, Haodi Wang, Rouzbeh Behnia, Viet Vo, Thang Hoang
Artificial Intelligence as a Service (AIaaS) enables users to query a model hosted by a service provider and receive inference results from a pre-trained model. Although AIaaS makes artificial intelligence more accessible, particularly for resource-limited users, it also raises verifiability and privacy concerns for the client and server, respectively. While zero-knowledge proof techniques can address these concerns simultaneously, they incur high proving costs due to the non-linear operations involved in AI inference and suffer from precision loss because they rely on fixed-point representations to model real numbers.
In this work, we present ZIP, an efficient and precise commit-and-prove zero-knowledge SNARK for AIaaS inference (covering both linear and non-linear layers) that natively supports IEEE-754 double-precision floating-point semantics while addressing the reliability and privacy challenges inherent in AIaaS. At its core, ZIP introduces a novel relative-error-driven technique that efficiently proves the correctness of complex non-linear layers in AI inference computations without any loss of precision, and hardens existing lookup-table and range proofs with novel arithmetic constraints to defend against malicious provers. We implement ZIP and evaluate it on standard datasets (e.g., MNIST, UTKFace, and SST-2). Our experimental results show that, for non-linear activation functions, ZIP reduces circuit size by up to three orders of magnitude while maintaining the full precision required by modern AI workloads.
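A plaintext-domain sketch of the relative-error criterion as we read it; the acceptance rule, tolerance, and test function below are illustrative assumptions, not ZIP's circuits.

```python
# Relative-error acceptance for a non-linear layer: rather than proving a
# transcendental function bit-exactly, check the claimed double-precision
# output is within a relative error bound of the true value.
import math

def rel_err_ok(claimed: float, exact: float, tol: float) -> bool:
    if exact == 0.0:
        return abs(claimed) <= tol
    return abs(claimed - exact) / abs(exact) <= tol

x = 1.5
exact = math.exp(x)
claimed = sum(x**k / math.factorial(k) for k in range(12))  # degree-11 Taylor approximation
assert rel_err_ok(claimed, exact, tol=2**-20)
```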
Víctor Duarte Melo, William J Buchanan
Whilst many key exchange and digital signature systems still rely on NIST P-256 (secp256r1) and secp256k1, offering around 128-bit security, there is an increasing demand for transparent and reproducible curves at the 256-bit security level. Standard higher-security options include NIST P-521, Curve448, and Brainpool-P512. This paper presents ECCFROG522PP ('Presunto Powered'), a 522-bit prime-field elliptic curve that delivers security in the same classical $\sim$260-bit ballpark as NIST P-521, but with a fundamentally different design philosophy. All of the curve parameters are deterministically derived from a fixed public seed via BLAKE3, with zero hidden choices. The curve has prime order (cofactor 1), a verified twist with a proven approximate 505-bit prime factor, a safe embedding degree, and passes anti-MOV checks up to $k < 200$ and CM discriminant sanity checks up to 100k. Unlike prior opaque or ad-hoc constructions, ECCFROG522PP is fully reproducible: anyone can regenerate and verify it byte-for-byte using the published scripts. The intent is not to outperform NIST P-521 in raw speed, but to maximise trust, verifiability, and long-term auditability in a practical curve of equivalent security level.
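The derive-everything-from-a-seed approach can be sketched as counter-mode hashing. Below is a toy in Python using hashlib.blake2b as a stdlib stand-in for BLAKE3, with placeholder acceptance checks rather than the curve's real criteria (prime order, twist strength, embedding degree, and so on); none of this is the published generation script.

```python
# Nothing-up-my-sleeve parameter derivation, schematically: expand a fixed
# public seed with a counter until a candidate passes the required checks,
# so anyone can re-run the search and verify the result byte-for-byte.
import hashlib

SEED = b"ECCFROG522PP-demo-seed"   # illustrative seed, not the real one
P = 2**127 - 1                     # toy prime field; the real curve is 522-bit

def candidate(counter: int) -> int:
    h = hashlib.blake2b(SEED + counter.to_bytes(8, "big"), digest_size=32)
    return int.from_bytes(h.digest(), "big") % P

def acceptable(b: int) -> bool:
    # placeholder; real criteria include prime order, twist security, etc.
    return b not in (0, 1)

ctr = 0
while not acceptable(candidate(ctr)):
    ctr += 1
print("coefficient =", candidate(ctr), "at counter", ctr)
```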
Damiano Abram, Serge Fehr, Maciej Obremski, Peter Scholl
One-round secure computation is generally believed impossible due to the residual function attack: any honest-but-curious participant can replay the protocol in their head with a different input and, in this way, learn a new output. Inputless functionalities are among the few that are immune to this problem.
This paper studies one-round, multi-party computation protocols (MPC) that implement the most natural inputless functionality: one that generates a random sample from a fixed distribution. These are called distributed samplers. At Eurocrypt 2022, Abram, Scholl and Yakoubov showed how to build this primitive in the semi-honest model with dishonest majority. In this work, we give a lower bound for constructing distributed samplers with a malicious adversary in the standard model. In more detail, we show that for any construction in the stand-alone model with black-box simulation, even with a CRS and an honest majority, the output of the sampling protocol must have low entropy. This essentially implies that this type of construction is useless in applications. Our proof is based on an entropic argument, drawing a new connection between computationally secure MPC, information theory, and learning theory.
Mohammad Hashemi, Domenic Forte, Fatemeh Ganji
The rapid growth of deep learning (DL) has raised serious concerns about the security and privacy of users' data and neural network (NN) models, particularly the risk of backdoor insertion when outsourcing training or employing pre-trained models. To ensure resilience against such backdoor attacks, this work presents GuardianMPC, a novel framework leveraging secure multiparty computation (MPC). GuardianMPC is built upon garbled circuits (GC) within the LEGO protocol framework to accelerate oblivious inference on FPGAs in the presence of malicious adversaries that can manipulate the model weights and/or insert a backdoor into the architecture of a pre-trained model. In this regard, GuardianMPC is the first to offer private function evaluation in the LEGO family. GuardianMPC also supports private training to effectively counter backdoor attacks targeting NN model architectures and parameters. With optimized pre-processing, GuardianMPC significantly accelerates the online phase, achieving up to 13.44x faster computation than its software counterparts. Our experimental results for multilayer perceptrons (MLPs) and convolutional neural networks (CNNs) assess GuardianMPC's time complexity and scalability across diverse NN model architectures. Interestingly, GuardianMPC does not adversely affect training accuracy, as opposed to many existing private training frameworks. These results confirm GuardianMPC as a high-performance, model-agnostic solution for secure NN computation with robust security and privacy guarantees.
Arsalan Ali Malik, Furkan Aydin, Aydin Aysu
Fault injection attacks (FIAs) present a significant threat to the integrity of deep neural networks (DNNs), particularly in hardware-accelerated deployments on field-programmable gate arrays (FPGAs). These attacks intentionally introduce faults into the system, leading the DNN to generate incorrect outputs. This work presents the first successful targeted misclassification attack against a convolutional neural network (CNN) implemented on FPGA hardware, achieved by injecting a single clock glitch at the final layer (argmax) to manipulate the predicted output class. Our attack targets the commonly adopted argmax layer, a lightweight replacement for softmax in resource-constrained implementations. By precisely injecting a single clock glitch during the comparison phase of the argmax operation, the attack reliably induces misclassifications, forcing the model to 'skip' a specifically chosen class and output an incorrect label for it without affecting the computed scores of other classes.
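A behavioral model of the fault (ours, not the authors' hardware) clarifies why one glitched comparison suffices: if the targeted class's comparison is dropped, the running maximum never updates to it, so another class wins while all computed scores stay intact.

```python
# Software model of a single-glitch fault on a sequential argmax: the glitch
# drops exactly one comparison, so the chosen class can never become the
# running maximum, forcing a targeted misclassification.
def argmax(scores, glitch_at=None):
    best, best_idx = scores[0], 0
    for i in range(1, len(scores)):
        if i == glitch_at:
            continue                  # glitched comparison: update is dropped
        if scores[i] > best:
            best, best_idx = scores[i], i
    return best_idx

scores = [0.03, 0.91, 0.02, 0.04]     # class 1 is the true prediction
print(argmax(scores))                 # 1
print(argmax(scores, glitch_at=1))    # 3: targeted class 1 is 'skipped'
```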
Unlike prior works that only cause random misclassifications, our attack achieves a high success rate of 80–87% for a targeted class, without inducing collateral misclassifications of other classes. Our evaluations show a significant reduction in classification accuracy, with the model’s performance dropping from an initial 94.7% to an average final accuracy ranging from 7.7–14.7%. Our attack is demonstrated on a CNN model implemented using a common systolic array architecture, which is well-suited for resource-constrained edge devices and artificial intelligence (AI) accelerators. Our study confirms the vulnerability of hardware-accelerated machine learning systems to low-cost physical attacks, emphasizing the critical need for hardware-level countermeasures in safety-critical machine learning applications.
Armando Faz-Hernandez
Prio, designed under privacy-by-design principles, is a protocol for aggregating client-provided measurements between non-colluding entities. The validity of measurements is determined using a fully linear probabilistically-checkable proof (FLPCP). The Prover distributes secret shares of the measurement and the proof to multiple Verifiers. These Verifiers can only use linear queries on the input statement for validation, without accessing the actual measurement. Efficiency is key for the practical application of Prio. The FLPCP operates with polynomials represented in the Lagrange basis, using roots of unity as the nodes. However, we observe opportunities to improve its performance by embracing the Lagrange basis more extensively. For instance, we show an inversion-free $O(n)$ time-complexity algorithm for polynomial evaluation in the Lagrange basis (an alternative to the classic rational barycentric formula). By applying our methods to libprio-rs, a cutting-edge Rust implementation, the Sharding phase (proof generation) runs 36% faster and the Prep-Init phase (proof verification) is twice as fast, showing a substantial acceleration of the most time-consuming phases of Prio.
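For reference, here is the classic rational barycentric formula that the paper's inversion-free $O(n)$ method replaces; note the per-node division and the final ratio, which are exactly the inversions the new algorithm avoids. The nodes and values below are arbitrary toy data, not Prio's roots-of-unity setup.

```python
# Classic second-form barycentric evaluation of the Lagrange interpolant of
# values vs at nodes xs. With roots of unity as nodes the weights have a
# closed form; here they are computed generically for a small example.
from math import prod

def bary_weights(xs):
    return [1.0 / prod(xj - xk for k, xk in enumerate(xs) if k != j)
            for j, xj in enumerate(xs)]

def bary_eval(xs, vs, ws, x):
    num = den = 0.0
    for xj, vj, wj in zip(xs, vs, ws):
        if x == xj:
            return vj
        t = wj / (x - xj)          # one inversion per node
        num += t * vj
        den += t
    return num / den               # final inversion

xs = [0.0, 1.0, 2.0, 3.0]
vs = [x**2 + 1 for x in xs]        # values of p(X) = X^2 + 1
ws = bary_weights(xs)
print(bary_eval(xs, vs, ws, 2.5))  # 7.25
```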
Elif Ozbay Gurler, Patrick Struck
In this work we show obstacles when constructing identity-based encryption (IBE) from isogenies. We first give a modular description for IBEs, what we call a canonical IBE, that consists of two components: an identity key derivation scheme and a public-key encryption scheme. This allows us to investigate the identity key derivation scheme (where the obstacles are rooted in) in isolation. We present several approaches, showing that they can either not be realized—extracting the secret keys would require to solve the underlying hardness assumption—or result in IBE schemes that are insecure—users can use their secret keys to compute secret keys of other users. Finally, we identify properties for isogeny-based trapdoor functions that one would need in order to overcome the identified obstacles.
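Our schematic reading of the canonical-IBE decomposition, expressed as Python interfaces; the names and signatures are illustrative, not the paper's notation.

```python
# Canonical IBE = identity key derivation (IKD) + public-key encryption (PKE):
# anyone can derive a user's public key from the master public key and the
# identity, while only the master secret derives the matching secret key.
from typing import Protocol, Tuple, Any

class IKD(Protocol):
    def setup(self) -> Tuple[Any, Any]: ...                 # (mpk, msk)
    def derive_pk(self, mpk, identity: str) -> Any: ...     # public, from mpk only
    def derive_sk(self, msk, identity: str) -> Any: ...     # secret, needs msk

class PKE(Protocol):
    def enc(self, pk, msg: bytes) -> bytes: ...
    def dec(self, sk, ct: bytes) -> bytes: ...

def ibe_encrypt(ikd: IKD, pke: PKE, mpk, identity: str, msg: bytes) -> bytes:
    return pke.enc(ikd.derive_pk(mpk, identity), msg)
```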
Navid Abapour, Amir Goharshady, Catalin Dragan, Mahdi Mahdavi
Electronic voting has been shown to streamline the democratic process, making it more convenient for citizens and enhancing the accuracy and speed of election results in real-world deployments in the US, Estonia, Switzerland, and many other countries. One major challenge for e-voting, especially online voting, is ensuring that voting and tallying devices behave honestly, particularly in cases involving monetary transactions. These challenges are addressed by economic voting, where everything is on-chain; in essence, voters utilize smart contracts to conduct all voting stages. There are very few results on economic voting, and none of them is post-quantum secure. The challenge comes from having the entire voting system run by smart contracts. In this work, we propose the first post-quantum economic voting scheme, called Post-Quantum Blind Vote (PQBV), which combines hybrid on- and off-chain operations. The core idea is to utilize smart contracts that enable blind signatures during the voting process. We enhance our contribution by introducing a post-quantum blind signature with Posterior Security, as proposed by Yuen et al. (CCS 2025), which retroactively enhances the privacy of already-generated signatures. This has a significant impact on PQBV, as it is able to satisfy formal cryptographic privacy definitions, including ballot privacy. Our efficiency analysis reveals competitive performance compared to existing state-of-the-art post-quantum e-voting systems, such as Epoque (EuroS&P 2021), which operates without a blockchain.
Rebekah Mercer, Kaoutar El Khiyaoui, Angelo De Caro, Elli Androulaki
Anonymous credential schemes allow users to prove claims about themselves in a privacy-preserving manner. Put simply, a user can prove that she has an attribute value (e.g., she is a citizen) without revealing any other information about herself. Traditional schemes (including those with multiple credential issuers) operate under the assumption that each recognized issuer produces credentials for all user attributes. This assumption, however, is not practical: in the real world, no such issuer exists; an identity provider today issues credentials for a subset of a user's attributes, not all of them. A promising approach to support this setting is aggregate anonymous credentials, which allow a user to receive credentials from several issuers such that she can aggregate them and prove various claims about her attribute values in one shot. In this paper, we first introduce what we call aggregate tag-based signatures and describe an efficient instantiation. We then leverage the latter together with structure-preserving signatures and signatures of knowledge to construct an efficient aggregate anonymous credential scheme. Finally, we formally evaluate the security of the proposed schemes and run benchmarks to showcase the practicality of the resulting scheme and its relevance for decentralized identity applications.
Jesko Dujmovic, Christoph U. Günther, Krzysztof Pietrzak
We introduce and construct a new proof system called Non-interactive Arguments of Knowledge or Space (NArKoS), where a space-bounded prover can convince a verifier that they know a secret, while access to sufficient space allows one to forge indistinguishable proofs without the secret.
An application of NArKoS is space-deniable proofs, which are proofs of knowledge (say, for authentication in access control) that are sound when executed by a lightweight device, like a smart card or an RFID chip, that cannot have much storage, but are deniable (in the strong sense of online deniability) since the verifier, like a card reader, can efficiently forge such proofs.
We construct NArKoS in the random oracle model using an OR-proof combining a sigma protocol (for the proof of knowledge of the secret) with a new proof system called simulatable Proof of Transient Space (simPoTS). We give two different constructions of simPoTS, one based on labelling graphs with high pebbling complexity, a technique used in the construction of memory-hard functions and proofs of space, and a more practical construction based on the verifiable space-hard functions from TCC'24 where a prover must compute a root of a sparse polynomial. In both cases, the main challenge is making the proofs efficiently simulatable.
Jack Doerner, Iftach Haitner, Yuval Ishai, Nikolaos Makriyannis
Oblivious Linear Evaluation (OLE) is an algebraic generalization of oblivious transfer (OT) that forms a critical part of a growing number of applications. An OLE protocol over a modulus $q$ enables the receiver party to securely evaluate a line $a\cdot X+b$ chosen by the sender party on a secret point $x\in\mathbb{Z}_q$.
Motivated by the large efficiency gap between OLE and OT and by fast OT extension techniques, we revisit the question of reducing OLE to OT, aiming to improve the communication cost of known reductions.
We start by observing that the Chinese Remainder Theorem (CRT) can be combined with a prior protocol of Gilboa (Crypto'99) to reduce its communication cost from $O(\ell^2)$ to $\tilde O(\ell)$ bits, for $\ell=\log q$. Unfortunately, whereas Gilboa's protocol is secure against a semi-honest sender and a malicious receiver, a direct application of the CRT technique is only semi-honest secure (it is insecure against malicious receivers). Thus, we employ number-theoretic techniques to protect our CRT-based protocol against malicious receivers, while still retaining a concrete advantage over Gilboa's protocol (e.g., $10.2\times$ less communication for $\ell=256$). Furthermore, we obtain a fully malicious OLE-to-OT reduction by applying either information-theoretic techniques with moderate overhead, or RSA-based cryptographic techniques with very low overhead.
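A semi-honest sketch of the Gilboa-style OLE-from-OT reduction that serves as the starting point, with an idealized OT stub; the CRT packing and the malicious-receiver protections are omitted. Each of the $\ell$ OTs transfers an $\ell$-bit string, which is where the plain protocol's $O(\ell^2)$ bits come from.

```python
# Gilboa-style OLE from OT (semi-honest toy): bit-decompose the receiver's x;
# for bit i the sender offers (r_i, r_i + a*2^i mod q), and the values the
# receiver picks sum to an additive share of a*x, while the sender's share
# absorbs b and the masks r_i.
import secrets

def ideal_ot(pair, choice_bit):           # stand-in for a real OT protocol
    return pair[choice_bit]

def gilboa_ole(a, b, x, q):
    ell = q.bit_length()
    sender_share, receiver_share = b, 0
    for i in range(ell):
        r = secrets.randbelow(q)
        m0, m1 = r, (r + a * (1 << i)) % q
        t = ideal_ot((m0, m1), (x >> i) & 1)
        sender_share = (sender_share - r) % q
        receiver_share = (receiver_share + t) % q
    return sender_share, receiver_share

q = (1 << 61) - 1
a, b, x = 1234567, 7654321, 424242
s, r = gilboa_ole(a, b, x, q)
assert (s + r) % q == (a * x + b) % q     # shares reconstruct a*x + b
```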
We demonstrate the usefulness of our results in the context of OLE applications, including a post-quantum oblivious pseudorandom function (OPRF) and distributed signatures. In particular, assuming pre-existing random OT correlations, we can use our malicious-receiver OLE protocol to realize (a single instance of) the power-residue based OPRF candidate with security against a malicious client and a semi-honest server using only 1.14 KB of communication, a $16\times$ improvement over the best previous protocol in this setting. Using our RSA-based fully malicious OLE protocol, we achieve a $5\times$ communication improvement over previous OT and EC-based distributed ECDSA protocols. Compared to other ECDSA protocols (including ones that use Paillier and class groups), the communication gains are more modest, but come at only a fraction of the computational cost as we avoid all expensive group operations.