IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
05 September 2025
Gilad Asharov, Eliran Eiluz, Ilan Komargodski, Wei-Kai Lin
Oblivious RAM (ORAM) is a central cryptographic primitive that enables secure memory access while hiding access patterns.
Among existing ORAM paradigms, hierarchical ORAMs were long considered impractical despite their asymptotic optimality. However, recent advancements (FutORAMa, CCS'23) demonstrate that hierarchical ORAM-based schemes can be made efficient given sufficient client-side memory. In this work, we present a new hierarchical ORAM construction that achieves practical performance without requiring large local memory.
From a theoretical standpoint, we identify a gap in the literature concerning the asymmetric setting, where the logical word size is asymptotically smaller than the physical memory block size. In this scenario, the best-known construction (OptORAMa, J.\ ACM '23) turns every logical query into $O(\log N)$ physical memory accesses (a quantity known as ``I/O overhead''), whereas the lower bound of Komargodski and Lin (CRYPTO'21) implies that $\Omega(\log N /\log\log N)$ accesses are needed.
We close this gap by constructing an optimal ORAM for the asymmetric setting, achieving an I/O overhead of $O(\log N / \log\log N)$. Our construction features exceptionally small constants (between 1 and 4, depending on the block size) and operates without requiring large local memory. We implement our scheme and compare it to PathORAM (CCS'13) and FutORAMa, demonstrating significant improvement. For 1TB logical memory, our construction obtains a $\times 10$--$\times 30$ reduction in I/O overhead and bandwidth compared to PathORAM, and a $\times 7$--$\times 26$ improvement over FutORAMa. This improvement arises because those schemes were not designed to operate on large blocks, as in our setting; the exact factor depends on the physical block size and the local memory available.
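As a rough numerical illustration of the gap being closed, the asymptotic overheads can be compared directly. This is a back-of-the-envelope sketch with all constants set to 1; the choice of $N$ and the per-scheme formulas are simplifying assumptions of ours, not the paper's measurements:

```python
import math

def io_overhead(N: int, scheme: str) -> float:
    """Asymptotic I/O overhead per logical query, constants omitted.

    'tree'       ~ O(log^2 N)            (tree-based, PathORAM-style)
    'opt'        ~ O(log N)              (OptORAMa)
    'asymmetric' ~ O(log N / log log N)  (the lower bound, matched here)
    """
    logN = math.log2(N)
    if scheme == "tree":
        return logN ** 2
    if scheme == "opt":
        return logN
    if scheme == "asymmetric":
        return logN / math.log2(logN)
    raise ValueError(scheme)

# Illustrative choice: 1 TB of 32-byte logical words -> N = 2^35 addresses
N = 2 ** 35
for s in ("tree", "opt", "asymmetric"):
    print(s, round(io_overhead(N, s), 1))
```

Even ignoring constants, the $\log N / \log\log N$ bound is roughly a $5\times$ factor below $\log N$ at this size, which is the asymptotic room the construction exploits.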
Thomas Schneider, Huan-Chih Wang, Hossein Yalame
Energy-efficient edge devices are essential for the widespread deployment of machine learning (ML) services. However, their limited computational capabilities make local model training infeasible. While cloud-based training offers a scalable alternative, it raises serious privacy concerns when sensitive data is outsourced. Homomorphic Encryption (HE) enables computation directly on encrypted data and has emerged as a promising solution to this privacy challenge. Yet, current HE-based training frameworks face several shortcomings: they often lack support for complex models and non-linear functions, struggle to train over multiple epochs, and require cryptographic expertise from end users.
We present HE-SecureNet, a novel framework for privacy-preserving model training on encrypted data in a single-client–server setting, using hybrid HE cryptosystems. Unlike prior HE-based solutions, HE-SecureNet supports advanced models such as Convolutional Neural Networks and handles non-linear operations including ReLU, Softmax, and MaxPooling. It introduces a level-aware training strategy that eliminates costly ciphertext level alignment across epochs. Furthermore, HE-SecureNet automatically converts ONNX models into optimized secure C++ training code, enabling seamless integration into privacy-preserving ML pipelines, without requiring cryptographic knowledge.
Experimental results demonstrate the efficiency and practicality of our approach. On the Breast Cancer dataset, HE-SecureNet achieves a 5.2× speedup and 33% higher accuracy compared to ConcreteML (Zama) and TenSEAL (OpenMined). On the MNIST dataset, it reduces CNN training latency by 2× relative to Glyph (Lou et al., NeurIPS’20), and cuts communication overhead by up to 66× on MNIST and 42× on CIFAR-10 compared to MPC-based solutions.
MINKA MI NGUIDJOI Thierry Emmanuel
We introduce the Affine Iterated Inversion Problem (AIIP), a new candidate hard problem for post-quantum cryptography, based on inverting iterated polynomial maps over finite fields. Given a polynomial $f \in \mathbb{F}_q[x]$ of degree $d \geq 2$, an iteration parameter $n$, and a target $y \in \mathbb{F}_q$, AIIP requires finding an input $x$ such that $f^{(n)}(x) = y$, where $f^{(n)}$ denotes the $n$-fold composition of $f$. We establish the computational hardness of AIIP through two independent analytical frameworks: first, by establishing a formal connection to the Discrete Logarithm Problem in the Jacobian of hyperelliptic curves of exponentially large genus; second, via a polynomial-time reduction to solving structured systems of multivariate quadratic (MQ) equations. The first construction provides number-theoretic evidence for hardness by embedding an AIIP instance into the arithmetic of a high-genus curve, while the second reduction proves worst-case hardness relative to the NP-hard MQ problem. For the quadratic case $f(x) = x^2 + \alpha$, we show that the induced MQ system is heuristically indistinguishable from a random system, and we formalize a sufficient condition for its pseudorandomness under a standard cryptographic assumption. We provide a detailed security analysis against classical and quantum attacks, derive concrete parameters for standard security levels, and discuss the potential of AIIP as a foundation for digital signatures and public-key encryption. This dual hardness foundation, rooted in both algebraic geometry and multivariate algebra, positions AIIP as a versatile and promising primitive for post-quantum cryptography.
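The forward direction of AIIP is cheap to compute; only inversion is conjectured hard. A minimal sketch for the quadratic case $f(x) = x^2 + \alpha$, with toy parameters of our own choosing far below cryptographic size:

```python
def iterate_f(x: int, alpha: int, n: int, q: int) -> int:
    """Compute f^{(n)}(x) mod q for f(x) = x^2 + alpha.

    The forward map costs n modular square-and-add steps; AIIP asks to
    recover x from y = f^{(n)}(x), which is the conjectured hard direction.
    """
    for _ in range(n):
        x = (x * x + alpha) % q
    return x

# Toy instance (illustrative only): q small prime, n modest
q, alpha, n = 10007, 3, 16
x = 1234
y = iterate_f(x, alpha, n, q)  # an AIIP challenge (y, alpha, n, q)
```

Note that naively unrolling the composition gives $f^{(n)}$ degree $d^n$, which is why the problem is instead modeled via structured MQ systems or high-genus curve arithmetic.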
Kaveh Dastouri
We introduce a novel public-key cryptosystem based on the symmetric groups $S_{p_1} \times S_{p_2}$, where $p_1, p_2$ are large primes. The modulus $N = f(\lambda_1) \cdot f(\lambda_2)$, with partitions $\lambda_1 \in P(p_1)$, $\lambda_2 \in P(p_2)$, and $f(\lambda_i) = |C_{\lambda_i}| \cdot m_1(\lambda_i)$, leverages conjugacy class sizes to ensure large prime factors, including $p_1, p_2$. A partition selection strategy using non-repeated composition numbers guarantees robust security, surpassing RSA by supporting multiple large primes and deterministic key generation. Efficient decryption is achieved via known factorizations, and a lightweight symmetric hash primitive provides message authentication. We provide rigorous security analysis, practical implementation, and comparisons to multi-prime RSA, advancing algebraic cryptography for modern applications.
Anubhav Baweja, Pratyush Mishra, Tushar Mopuri, Matan Shtepel
We present the first IOPP for a linear-time encodable code that achieves linear prover time and $O(\lambda)$ query complexity, for a broad range of security parameters $\lambda$. No prior work is able to simultaneously achieve this efficiency: it either supports linear-time encodable codes but with worse query complexity [FICS; ePrint 2025], or achieves $O(\lambda)$ query complexity but only for quasilinear-time encodable codes [Minzer, Zheng; FOCS 2025]. Furthermore, we prove a matching lower bound that shows that the query complexity of our IOPP is asymptotically optimal (up to additive factors) for codes with constant rate.
We obtain our result by tackling a ubiquitous subproblem in IOPP constructions: checking that a batch of claims hold. Our novel solution to this subproblem is twofold. First, we observe that it is often sufficient to ensure that, with all but negligible probability, most of the claims hold. Next, we devise a new `lossy batching' technique which convinces a verifier of the foregoing promise with lower query complexity than that required to convince it that all the claims hold. This method differs significantly from the line-versus-point test used to achieve query-optimal IOPPs (for quasilinear-time encodable codes) in prior work [Minzer, Zheng; FOCS 2025], and may be of independent interest.
Our IOPP can handle all codes that support efficient codeswitching [Ron-Zewi, Rothblum; JACM 2024], including several linear-time encodable codes. Via standard techniques, our IOPP can be used to construct the first (to the best of our knowledge) IOP for NP with $O(n)$ prover time and $O(\lambda)$ query complexity. We additionally show that our IOPP (and by extension the foregoing IOP) is round-by-round tree-extractable and hence can be used to construct a SNARK in the random oracle model with $O(n)$ prover time and $O(\lambda \log n)$ proof size.
Nakul Khambhati, Joonwon Lee, Gary Song, Rafail Ostrovsky, Sam Kumar
Organizations increasingly need to pool their sensitive data for collaborative computation while keeping their own data private from each other. One approach is to use a family of cryptographic protocols called Secure Multi-Party Computation (MPC). Another option is to use a set of cloud services called clean rooms. Unfortunately, neither approach is satisfactory. MPC is orders of magnitude more resource-intensive than regular computation, making it impractical for workloads like data analytics and AI. Clean rooms do not give users the flexibility to perform arbitrary computations.
We propose and develop an approach and system called a secure agent and utilize it to create a virtual clean room, Flexroom, that is both performant and flexible. Secure agents enable parties to create a phantom identity that they can collectively control, using maliciously secure MPC, which issues API calls to external services with parameters that remain secret from all participating parties. Importantly, in Flexroom, the secure agent uses MPC not to perform the computation itself, but instead merely to orchestrate the computation in the cloud, acting as a distinct trusted entity jointly governed by all parties. As a result, Flexroom enables collaborative computation with unfettered flexibility, including the ability to use convenient cloud services. By design, the collaborative computation runs at plaintext speeds, so the overhead of Flexroom will be amortized over a long computation.
Ritam Bhaumik, Avijit Dutta, Tetsu Iwata, Ashwin Jha, Kazuhiko Minematsu, Mridul Nandi, Yu Sasaki, Meltem Sönmez Turan, Stefano Tessaro
We consider FB-PRF, one of the key derivation functions defined in NIST SP 800-108 constructed from a pseudorandom function in a feedback mode. The standard allows some flexibility in the specification, and we show that one specific instance of FB-PRF allows an efficient distinguishing attack.
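For context, SP 800-108's feedback mode chains each PRF output block into the next PRF input. The sketch below uses HMAC-SHA256 as the PRF and one particular choice of field widths and ordering; the standard permits several variants, and the paper's attack targets one specific permitted instance:

```python
import hmac
import hashlib

def kdf_feedback(key: bytes, label: bytes, context: bytes,
                 out_len: int, iv: bytes = b"") -> bytes:
    """SP 800-108 feedback-mode KDF sketch with HMAC-SHA256 as the PRF.

    K(i) = PRF(key, K(i-1) || [i]_32 || Label || 0x00 || Context || [L]_32)

    The 32-bit counter/length encodings and the field order shown here are
    one allowed instantiation, not the only one the standard admits.
    """
    L = (out_len * 8).to_bytes(4, "big")   # output length in bits
    out, k_prev, i = b"", iv, 1
    while len(out) < out_len:
        data = k_prev + i.to_bytes(4, "big") + label + b"\x00" + context + L
        k_prev = hmac.new(key, data, hashlib.sha256).digest()
        out += k_prev
        i += 1
    return out[:out_len]
```

The feedback input `k_prev` is what distinguishes this mode from counter mode; how (and whether) the counter and IV enter each iteration is exactly the kind of flexibility the standard leaves open.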
Yi-Fu Lai, Edoardo Persichetti
Recently, Hanzlik, Lai, Paracucchi, Slamanig, and Tang proposed several blind signature frameworks, collectively named Tanuki(s) (Asiacrypt'25), built upon cryptographic group actions. Their work introduces novel techniques and culminates in a concurrently secure blind signature framework. Straightforward instantiations based on CSIDH (CSI-FiSh) and LESS yield signature sizes of 4.5 KB and 64 KB respectively, providing the first efficient blind signatures in the isogeny-based and code-based literature allowing concurrent executions.
In this work, we improve the code-based instantiations through a careful use of the canonical form of linearly equivalent codes. However, the canonical form does not naturally support a group action structure, which is central to the security proofs of Tanuki(s). Consequently, the original security guarantees do not directly apply. To address this, we develop two distinct non-black-box reductions, for blindness and for one-more unforgeability.
These improvements do not compromise security. The result is a concurrently secure code-based blind signature scheme with a compact signature size of 4.4 KB, approximately 1% smaller than the isogeny-based one. We also provide a C implementation in which signing takes 99 ms (268 Mcycles) on an Intel i7 2.3~GHz CPU. We expect our approach to also benefit advanced constructions built on top of LESS in the future.
Yang Yang, Guomin Yang, Yingjiu Li, Pengfei WU, Rui Shi, Minming Huang, Jian Weng, HweeHwa Pang, Robert H. Deng
Service discovery is a fundamental process in wireless networks, enabling devices to find and communicate with services dynamically, and is critical for the seamless operation of modern systems like 5G and IoT. This paper introduces PriSrv+, an advanced privacy and usability-enhanced service discovery protocol for modern wireless networks and resource-constrained environments. PriSrv+ builds upon PriSrv (NDSS'24), by addressing critical limitations in expressiveness, privacy, scalability, and efficiency, while maintaining compatibility with widely-used wireless protocols such as mDNS, BLE, and Wi-Fi.
A key innovation in PriSrv+ is the development of Fast and Expressive Matchmaking Encryption (FEME), the first matchmaking encryption scheme capable of supporting expressive access control policies with an unbounded attribute universe, allowing any arbitrary string to be used as an attribute. FEME significantly enhances the flexibility of service discovery while ensuring robust message and attribute privacy. Compared to PriSrv, PriSrv+ optimizes cryptographic operations, achieving 7.62$\times$ faster encryption and 6.23$\times$ faster decryption, and dramatically reduces ciphertext sizes by 87.33$\%$. In addition, PriSrv+ reduces communication costs by 87.33$\%$ for service broadcast and 86.64$\%$ for anonymous mutual authentication compared with PriSrv. Formal security proofs confirm the security of FEME and PriSrv+. Extensive evaluations on multiple platforms demonstrate that PriSrv+ achieves superior performance, scalability, and efficiency compared to existing state-of-the-art protocols.
Shuiyin Liu, Amin Sakzad
This work presents a joint design of encoding and encryption procedures for public key encryptions (PKEs) and key encapsulation mechanisms (KEMs) such as Kyber, without relying on the assumption of independent decoding noise components, achieving reductions in both communication overhead (CER) and decryption failure rate (DFR). Our design features two techniques: ciphertext packing and lattice packing. First, we extend the Peikert-Vaikuntanathan-Waters (PVW) method to Kyber: $\ell$ plaintexts are packed into a single ciphertext. This scheme is referred to as P$_\ell$-Kyber. We prove that the P$_\ell$-Kyber is IND-CCA secure under the M-LWE hardness assumption. We show that the decryption decoding noise entries across the $\ell$ plaintexts (also known as layers) are mutually independent. Second, we propose a cross-layer lattice encoding scheme for the P$_\ell$-Kyber, where every $\ell$ cross-layer information symbols are encoded to a lattice point. This way we obtain a \emph{coded} P$_\ell$-Kyber, where the decoding noise entries for each lattice point are mutually independent. Therefore, the DFR analysis does not require the assumption of independence among the decryption decoding noise entries. Both DFR and CER are greatly decreased thanks to ciphertext packing and lattice packing. We demonstrate that with $\ell=24$ and the Leech lattice encoder, the proposed coded P$_\ell$-KYBER1024 achieves DFR $<2^{-281}$ and CER $= 4.6$, i.e., a decrease of CER by $90\%$ compared to KYBER1024. If minimizing CPU runtime is the priority, our C implementation shows that the E8 encoder provides the best trade-off among runtime, CER, and DFR. Additionally, for a fixed plaintext size matching that of standard Kyber ($256$ bits), we introduce a truncated variant of P$_\ell$-Kyber that deterministically removes ciphertext components carrying surplus information bits.
Using $\ell=8$ and E8 lattice encoder, we show that the proposed truncated coded P$_\ell$-KYBER1024 achieves a $10.2\%$ reduction in CER and improves DFR by a factor of $2^{30}$ relative to KYBER1024. Finally, we demonstrate that constructing a multi-recipient PKE and a multi-recipient KEM (mKEM) using the proposed truncated coded P$_\ell$-KYBER1024 results in a $20\%$ reduction in bandwidth consumption compared to the existing schemes.
Mahimna Kelkar, Aadityan Ganesh, Aditi Partap, Joseph Bonneau, S. Matthew Weinberg
Cryptographic protocols often make honesty assumptions---e.g., fewer than $t$ out of $n$ participants are adversarial. In practice, these assumptions can be hard to ensure, particularly given monetary incentives for participants to collude and deviate from the protocol.
In this work, we explore combining techniques from cryptography and mechanism design to discourage collusion. We formalize protocols in which colluders submit a cryptographic proof to whistleblow against their co-conspirators, revealing the dishonest behavior publicly. We provide general results on the cryptographic feasibility, and show how whistleblowing fits a number of applications including secret sharing, randomness beacons, and anonymous credentials.
We also introduce smart collusion---a new model for players to collude. Analogous to blockchain smart contracts, smart collusion allows colluding parties to arbitrarily coordinate and impose penalties on defectors (e.g., those that blow the whistle). We show that unconditional security is impossible against smart colluders even when whistleblowing is anonymous and can identify all colluding players. On the positive side, we construct a whistleblowing protocol that requires only a small deposit and can protect against smart collusion even with roughly $t$ times larger deposit.
Shuo Peng, Jiahui He, Kai Hu, Zhongfeng Niu, Shahram Rasoolzadeh, Meiqin Wang
Proposed at EUROCRYPT~2025, \chilow is a family of tweakable block ciphers and a related PRF built on the novel nonlinear $\chichi$ function, designed to enable efficient and secure embedded code encryption.
The only key-recovery results on \chilow are from the designers and reach at most 4 out of 8 rounds, which is not sufficient assurance for a low-latency cipher like \chilow: more cryptanalysis effort is needed.
Considering the low-degree $\chichi$ function, we present three kinds of cube-like attacks on \chilow-32 under both single-tweak and multi-tweak settings, including
\begin{itemize}
\item[-] a \textit{conditional cube attack} in the multi-tweak setting, which enables full key recovery for 5-round and 6-round instances with time complexities $2^{32}$ and $2^{120}$, data complexities $2^{23.58}$ and $2^{40}$, and negligible memory requirements, respectively.
\item[-] a \textit{borderline cube attack} in the multi-tweak setting, which recovers the full key of 5-round \chilow-32 with time, data, and memory complexities of $2^{32}$, $2^{18.58}$, and $2^{33.56}$, respectively. For 6-round \chilow-32, it achieves full key recovery with time, data, and memory complexities of $2^{34}$, $2^{33.58}$, and $2^{54.28}$, respectively.
Both attacks are practical.
\item [-] an \textit{integral attack} on 7-round \chilow-32 in the single-tweak setting.
By combining a 4-round borderline cube with three additional rounds, we reduce the round-key search space from $2^{96}$ to $2^{73}$. Moreover, we present a method to recover the master key based on round-key information, allowing us to recover the master key for 7-round \chilow-32 with a time complexity of $2^{127.78}$.
\end{itemize}
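The common core of all three attacks can be illustrated in a few lines: summing a low-degree Boolean function over a full subcube cancels every monomial that does not contain all the cube variables, leaving the so-called superpoly. A toy sketch (the function and cube below are our own illustration, unrelated to \chilow's round function):

```python
from itertools import product

def cube_sum(f, cube_vars, fixed):
    """XOR f over all 2^|cube_vars| assignments of the cube variables,
    with the remaining variables held at the values in `fixed`.

    For f of algebraic degree d, a dimension-d cube isolates the
    coefficient of the cube monomial (the superpoly), which is the
    quantity cube-like attacks recover to learn key information.
    """
    acc = 0
    for bits in product((0, 1), repeat=len(cube_vars)):
        x = list(fixed)
        for v, b in zip(cube_vars, bits):
            x[v] = b
        acc ^= f(x)
    return acc

# Toy degree-3 function: f = x0*x1*x2 XOR x1*x3 XOR x4
f = lambda x: (x[0] & x[1] & x[2]) ^ (x[1] & x[3]) ^ x[4]
# Summing over the cube {x0, x1, x2} leaves the constant superpoly 1,
# independent of the fixed values of x3 and x4; the cube {x0, x1}
# instead leaves the superpoly x2.
```

In an actual attack the fixed variables include key bits, so evaluating a few cube sums yields (possibly conditional) equations in the key.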
All of our attacks respect security claims made by the designers. Though our analysis does not compromise the security of the full 8-round \chilow, we hope that our results offer valuable insights into its security properties.
Hossein Hafezi, Alireza Shirzad, Benedikt Bünz, Joseph Bonneau
We present IronDict, a transparent dictionary construction based on polynomial commitment schemes. Transparent dictionaries enable an untrusted server to maintain a mutable dictionary and provably serve clients' lookup queries. A major open challenge is supporting efficient auditing by lightweight clients. Previous solutions either incurred high server costs (limiting throughput) or high client lookup verification costs, precluding their use in modern messaging key-transparency deployments with billions of users. Our construction makes black-box use of a generic multilinear polynomial commitment scheme and inherits its security notions, i.e., binding and zero-knowledge. We implement our construction with the recent KZH scheme and find that a dictionary with $1$ billion entries can be verified on a consumer-grade laptop in $35$ ms, a $300\times$ improvement over the state of the art, while also achieving $150{,}000\times$ smaller proofs ($8$ KB). In addition, our construction ensures perfect privacy with concretely efficient costs for both the client and the server. We also show fast-forwarding techniques based on incremental verifiable computation (IVC) and checkpoints to enable even faster client auditing.
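In such constructions the dictionary is viewed as the evaluation table of a multilinear polynomial, and a lookup becomes an evaluation proof for the committed polynomial. A minimal sketch of multilinear-extension evaluation (illustrative only; the commitment and proof machinery of KZH is not shown):

```python
def mle_eval(values, point, p):
    """Evaluate the multilinear extension of `values` (length 2^m,
    entries in F_p) at `point` in F_p^m, folding one variable at a time.

    A multilinear polynomial commitment commits to exactly this
    polynomial; at Boolean points it agrees with the table itself.
    """
    vals = [v % p for v in values]
    for r in point:                      # consume variables high-bit first
        half = len(vals) // 2
        vals = [(vals[i] + r * (vals[half + i] - vals[i])) % p
                for i in range(half)]
    return vals[0]

# Boolean points recover the table: (1, 0) selects index 0b10 = 2.
assert mle_eval([3, 1, 4, 1], (1, 0), 97) == 4
```

Each fold is the standard linear interpolation $v_0 + r\,(v_1 - v_0)$ in one variable, so evaluation costs $O(2^m)$ field operations, matching the table size.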
Varun Madathil, Arthur Lazzaretti, Zeyu Liu, Charalampos Papamanthou
Secure aggregation enables a central server to compute the sum of client inputs without learning any individual input, even in the presence of dropouts or partial participation. This primitive is fundamental to privacy-preserving applications such as federated learning, where clients collaboratively train models without revealing raw data.
We present a new secure aggregation protocol, TACITA, in the single-server setting that satisfies four critical properties simultaneously: (1) one-shot communication from clients with no per-instance setup, (2) input soundness, i.e., the server cannot manipulate the ciphertexts, (3) constant-size communication per client, independent of the number of participants per instance, and (4) robustness to client dropouts.
Previous works on secure aggregation that achieve one-shot communication, Willow and OPA (CRYPTO'25), do not provide input soundness and allow the server to manipulate the aggregation. They consequently do not achieve full privacy and at best offer differential privacy guarantees. We achieve full privacy at the cost of assuming a PKI. Specifically, TACITA relies on a novel cryptographic primitive we introduce and realize: succinct multi-key linearly homomorphic threshold signatures (MKLHTS), which enables verifiable aggregation of client-signed inputs with constant-size signatures. To encrypt client inputs, we adapt the Silent Threshold Encryption (STE) scheme of Garg et al. (CRYPTO 2024) to support ciphertext-specific decryption and additive homomorphism.
We formally prove security in the Universal Composability framework and demonstrate practicality through an open-source proof-of-concept implementation, showing our protocol achieves scalability without sacrificing efficiency or requiring new trust assumptions.
Victor Shoup
We present a practical, non-interactive threshold decryption scheme. It can be proven CCA secure with respect to adaptive corruptions in the random oracle model under a standard computational assumption, namely, the DDH assumption. Our scheme, called TDH2a, is a minor tweak on the TDH2 scheme presented by Shoup and Gennaro at Eurocrypt 1998, which was proven secure against static corruptions under the same assumptions. The design and analysis of TDH2a are based on a straightforward extension of the simple information-theoretic argument underlying the security of the Cramer-Shoup encryption scheme presented at Crypto 1998.
Sedric Nkotto
Kyber, a.k.a. ML-KEM, has been standardized by NIST under FIPS-203 and will certainly be implemented in several commercial products in the coming years. However, the resilience of implementations against side-channel attacks is still an open and practical concern. One of the drawbacks of ongoing side-channel analysis research related to PQC schemes is the limited availability of open-source datasets. Luckily, some open-source datasets are starting to appear, for instance the one recently published by Rezaeezade et al. in [2]. This dataset captures power consumption during a pair-pointwise multiplication occurring in the course of the ML-KEM decapsulation process and involving the decapsulation (sub)key and ciphertexts. In this paper we present a template side-channel attack targeting that operation, which yields a complete recovery of the decapsulation secret (sub)key.
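The general shape of a template attack, independent of this dataset, is a profiling phase that estimates a leakage distribution per key-dependent class, and an attack phase that picks the maximum-likelihood class for a fresh trace. A toy sketch with synthetic Hamming-weight leakage (the leakage model, noise level, and all parameters are illustrative assumptions of ours, not the paper's):

```python
import random
from statistics import mean

def build_templates(profiling):
    """Profiling phase: one template per class from traces with known
    intermediates. Here a template is just a mean at one point of
    interest; full attacks use multivariate Gaussian templates."""
    return {c: mean(samples) for c, samples in profiling.items()}

def classify(templates, sample):
    """Attack phase: maximum likelihood under equal-variance Gaussian
    templates reduces to choosing the class with the nearest mean."""
    return min(templates, key=lambda c: (templates[c] - sample) ** 2)

# Synthetic demo: the device leaks the Hamming weight (0..8) of a
# secret intermediate byte, plus Gaussian noise.
rng = random.Random(0)
profiling = {hw: [hw + rng.gauss(0, 0.1) for _ in range(200)]
             for hw in range(9)}
templates = build_templates(profiling)
guess = classify(templates, 5 + rng.gauss(0, 0.1))  # attack trace, HW = 5
```

Recovering a key byte rather than its Hamming weight requires multiple traces with varying known inputs, since several byte values share each weight class.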