International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

12 September 2025

Simon Damm, Asja Fischer, Alexander May, Soundes Marzougui, Leander Schwarz, Henning Seidler, Jean-Pierre Seifert, Jonas Thietke, Vincent Quentin Ulitzsch
ePrint Report
Lattice-based signatures like Dilithium (ML-DSA) prove knowledge of a secret key $\vec s \in \mathbb{Z}^n$ by using Integer LWE (ILWE) samples $z = \langle \vec c, \vec s \rangle + y$, for some known hash value $\vec c \in \mathbb{Z}^n$ of the message and unknown error $y$. Rejection sampling guarantees zero-knowledge, which makes the ILWE problem, which asks to recover $\vec s$ from many $z$'s, unsolvable.

Side-channel attacks partially recover $y$, thereby obtaining more informative samples resulting in a—potentially tractable—ILWE problem. The standard method to solve the resulting problem is Ordinary Least Squares (OLS), which requires independence of $y$ from $\langle \vec c, \vec s \rangle$—an assumption that is violated by zero-knowledge samples.

We present efficient algorithms for a variant of the ILWE problem that was not addressed in prior work, which we coin Concealed ILWE (CILWE). In this variant, only a fraction of the ILWE samples is zero-knowledge. We call this fraction the concealment rate. This ILWE variant naturally occurs in side-channel attacks on lattice-based signatures. A case in point is profiling side-channel attacks on Dilithium implementations that classify whether $y = 0$. This gives rise to either zero-error ILWE samples $z = \langle \vec c, \vec s \rangle$ with $y = 0$ (in case of correct classification), or ordinary zero-knowledge ILWE samples (in case of misclassification).

As we show, OLS is not practical for CILWE instances, as it requires a prohibitively large number of samples even for small (under 10%) concealment rates. A known integer linear programming-based approach can solve some CILWE instances, but suffers from two shortcomings. First, it lacks provable efficiency guarantees, as ILP is NP-hard in the worst case. Second, it does not utilize samples with small, independent errors $y$ that can occur in addition to zero-knowledge samples. We introduce two statistical regression methods to cryptanalysis, Huber and Cauchy regression. They are both efficient and can handle instances with all three types of samples. At the same time, they are capable of handling high concealment rates, up to 90% in practical experiments. While Huber regression comes with theoretically appealing correctness guarantees, Cauchy regression performs best in practice. We use this efficacy to execute a novel profiling attack against a masked Dilithium implementation. The resulting ILWE instances suffer from both concealment and small, independent errors. As such, neither OLS nor ILP can recover the secret key. Cauchy regression, however, allows us to recover the secret key in under three minutes for all NIST security levels.
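As a toy illustration of the ILWE sample model (not the paper's attack, and with purely illustrative parameters), the sketch below generates samples $z = \langle \vec c, \vec s \rangle + y$ with small errors independent of $\langle \vec c, \vec s \rangle$ and recovers the secret by rounding an ordinary least-squares estimate — the benign regime in which OLS works:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 16, 2000                          # secret dimension, number of samples
s = rng.integers(-2, 3, size=n)          # small secret coefficients
C = rng.integers(-1, 2, size=(m, n))     # known challenge vectors
y = rng.integers(-3, 4, size=m)          # small errors, independent of <c, s>
z = C @ s + y                            # ILWE samples z = <c, s> + y

# OLS estimate of the secret, rounded to the nearest integers
s_hat = np.rint(np.linalg.lstsq(C, z, rcond=None)[0]).astype(int)
```

With enough samples the per-coordinate estimation error falls well below 1/2, so rounding recovers $\vec s$ exactly; the paper's point is that this breaks down once a fraction of the samples is concealed (zero-knowledge), which is where robust estimators such as Huber and Cauchy regression come in.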
Pratish Datta, Junichi Tomida, Nikhil Vanjani
ePrint Report
We revisit decentralized multi‑authority attribute‑based encryption (MA‑ABE) through the lens of fully adaptive security -- the most realistic setting, in which an adversary can decide on‑the‑fly which users and which attribute authorities to corrupt. Previous constructions either tolerated only static authority corruption or relied on a highly complex “dual system with dual‑subsystems” proof technique that inflated ciphertexts and keys.

Our first contribution is a streamlined security analysis showing -- perhaps surprisingly -- that the classic Lewko–Waters MA-ABE scheme [EUROCRYPT 2011] already achieves full adaptive security, provided its design is carefully reinterpreted and, more crucially, its security proof is re-orchestrated to conclude with an information-theoretic hybrid in place of the original target-group-based computational step. By dispensing with dual subsystems and target-group-based assumptions, we achieve a significantly simpler and tighter security proof along with a more lightweight implementation. Our construction reduces ciphertext size by 33 percent, shrinks user secret keys by 66 percent, and requires 50 percent fewer pairing operations during decryption -- all while continuing to support arbitrary collusions of users and authorities. These improvements mark a notable advance over the state-of-the-art fully adaptive decentralized MA-ABE scheme of Datta et al. [EUROCRYPT 2023]. We instantiate the scheme in both composite‑order bilinear groups under standard subgroup‑decision assumptions and in asymmetric prime‑order bilinear groups under the Matrix‑Diffie–Hellman assumption. We further show how the Kowalczyk–Wee attribute‑reuse technique [EUROCRYPT 2019] seamlessly lifts our construction from ``one‑use’’ boolean span programs (BSP) to ``multi‑use’’ policies computable in $\mathsf{NC^{1}}$, resulting in a similarly optimized construction over the state-of-the-art by Chen et al. [ASIACRYPT 2023].

Going beyond the Boolean world, we present the first MA-ABE construction for arithmetic span program (ASP) access policies, capturing a richer class of Boolean, arithmetic, and combinatorial computations. This advancement also enables improved concrete efficiency by allowing attributes to be handled directly as field elements, thereby eliminating the overhead of converting arithmetic computations into Boolean representations. The construction -- again presented in composite and prime orders -- retains decentralization and adaptive user‑key security, and highlights inherent barriers to handling corrupted authorities in the arithmetic setting.
Zeyu Liu, Yunhao Wang, Ben Fisch
ePrint Report
Fully homomorphic encryption (FHE) is a powerful primitive that is widely used in real-world applications, with IND-CPA as its standard security guarantee. Recently, Li and Micciancio [Eurocrypt'21] introduced IND-CPA-D security, which strengthens standard IND-CPA security by allowing the attacker to access a decryption oracle for honestly generated ciphertexts (generated via either an encryption oracle or an honest homomorphic circuit evaluation process).

More recently, Jung et al. [CCS'24] and Checri et al. [Crypto'24] have shown that even exact FHE schemes like FHEW/TFHE/BGV/BFV may still not be IND-CPA-D secure, by exploiting bootstrapping failures. However, such attacks can be mitigated by setting a negligible bootstrapping failure probability.

On the other hand, Liu and Wang [Asiacrypt'24] proposed relaxed functional bootstrapping, which offers orders-of-magnitude performance improvements and furthermore allows a free function evaluation during bootstrapping. These efficiency advantages make it a competitive choice in many applications, but its ``relaxed'' nature also opens new directions for IND-CPA-D attacks. In this work, we show that the underlying secret key can be recovered within 10 minutes against all existing relaxed functional bootstrapping constructions, and even within 1 minute for some of them. Moreover, our attack works even with a negligible bootstrapping failure probability, making it immune to existing mitigation methods.

Additionally, we propose a general fix that mitigates all the existing modulus-switching-error-based attacks, including ours, in the IND-CPA-D model. This is achieved by constructing a new modulus switching procedure with essentially no overhead. Lastly, we show that IND-CPA-D may not be sufficient for some applications, even in the passive adversary model. Thus, we extend this model to IND-CPA-D with randomness (IND-CPA-DR).
Kigen Fukuda, Shin’ichiro Matsuo
ePrint Report
The emergence of Cryptographically Relevant Quantum Computers (CRQCs) poses an existential threat to the security of contemporary blockchain networks, which rely on public-key cryptography vulnerable to Shor’s algorithm. While the need for a transition to Post-Quantum Cryptography (PQC) is widely acknowledged, the evolution of blockchains from simple transactional ledgers to complex, multi-layered financial ecosystems has rendered early, simplistic migration plans obsolete. This paper provides a comprehensive analysis of the blockchain PQC migration landscape as it stands in 2025. We dissect the core technical challenges, including the performance overhead of PQC algorithms, the loss of signature aggregation efficiency vital for consensus and scalability, and the cascading complexities within Layer 2 protocols and smart contracts. Furthermore, the analysis extends to critical operational and socio-economic hurdles, such as the ethical dilemma of dormant assets and the conflicting incentives among diverse stakeholders including users, developers, and regulators. By synthesizing ongoing community discussions and roadmaps for Bitcoin, Ethereum, and others, this work establishes a coherent framework to evaluate migration requirements, aiming to provide clarity for stakeholders navigating the path toward a quantum-secure future.
Véronique Cortier, Alexandre Debant, Olivier Esseiva, Pierrick Gaudry, Audhild Høgåsen, Chiara Spadafora
ePrint Report
Internet voting in Switzerland for political elections is strongly regulated by the Federal Chancellery (FCh). It puts a great emphasis on the individual verifiability: security against a corrupted voting device is ensured via return codes, sent by postal mail. For a long time, the FCh was accepting to trust an offline component to set up data and in particular the voting material. Today, the FCh aims at removing this strong trust assumption. We propose a protocol that abides by this new will. At the heart of our system lies a setup phase where several parties create the voting material in a distributed way, while allowing one of the parties to remain offline during the voting phase. A complication arises from the fact that the voting material has to be printed, sent by postal mail, and then used by the voter to perform several operations that are critical for security. Usability constraints are taken into account in our design, both in terms of computation complexity (linear setup and tally) and in terms of user experience (we ask the voter to type a high-entropy string only once). The security of our scheme is proved in a symbolic setting, using the ProVerif prover, for various corruption scenarios, demonstrating that it fulfills the Chancellery’s requirements and sometimes goes slightly beyond them.
Sven Schäge, Marc Vorstermans
ePrint Report
We make progress on the foundational problem of determining the strongest security notion achievable by homomorphic encryption. Our results are negative. We prove that a wide class of semi-homomorphic public key encryption schemes (SHPKE) cannot be proven IND-PCA secure (indistinguishability against plaintext checkability attacks), a relaxation of IND-CCA security. This class includes widely used and versatile schemes like ElGamal PKE, Paillier PKE, and the linear encryption system by Boneh, Boyen, and Shacham (Crypto'04). Besides their homomorphic properties, these schemes have in common that they all provide efficient algorithms for recognizing valid ciphertexts (and public keys) from public values. In contrast to CCA security, where the adversary is given access to a decryption oracle, in a PCA attack, the adversary solely has access to an oracle that decides for a given ciphertext/plaintext pair $(c,m)$ if decrypting $c$ indeed gives $m$. Since the notion of IND-PCA security is weaker than IND-CCA security, it is not only easier to achieve, leading to potentially more efficient schemes in practice, but it also side-steps existing impossibility results that rule out IND-CCA security. To rule out IND-PCA security we thus have to rely on novel techniques. We provide two results, depending on whether the attacker is allowed to query the PCA oracle after it has received the challenge (IND-PCA2 security) or not (IND-PCA1 security -- the more challenging scenario). First, we show that IND-PCA2 security can be ruled out unconditionally if the number of challenges is smaller than the number of queries made after the challenge. Next, we prove that no Turing reduction can reduce the IND-PCA1 security of SHPKE schemes with $O(\kappa^3)$ PCA queries overall to interactive complexity assumptions that support $t$-time access to their challenger with $t=O(\kappa)$.
To obtain our second impossibility result, we develop a new meta-reduction-based methodology that can be used to tackle security notions where the attacker is granted access to a \emph{decision} oracle. This makes it challenging to utilize the techniques of existing meta-reduction-based impossibility results that focus on definitions where the attacker is allowed to access an inversion oracle (e.g. long-term key corruptions or a signature oracle). To obtain our result, we have to overcome several technical challenges that are entirely novel to the setting of public key encryption.

11 September 2025

Ruida Wang, Jikang Bai, Xuan Shen, Xianhui Lu, Zhihao Li, Binwu Xiang, Zhiwei Wang, Hongyu Wang, Lutan Zhao, Kunpeng Wang, Rui Hou
ePrint Report
Fully Homomorphic Encryption (FHE) enables computation over encrypted data, but deployment is hindered by the gap between plaintext and ciphertext programming models. FHE compilers aim to automate this translation, with a promising approach being FHE Instruction Set Architecture (FHE-ISA) based on homomorphic look-up tables (LUTs). However, existing FHE LUT techniques are limited to 16-bit precision and face critical performance bottlenecks. We introduce Tetris, a versatile TFHE LUT framework for high-precision FHE instructions. Tetris incorporates three advances: a) a GLWE-based design with refined noise control, enabling up to 32-bit LUTs; b) a batched TFHE circuit bootstrapping algorithm that enhances LUT performance; and c) adaptive parameterization and a parallel execution strategy optimized for high-precision evaluation. These techniques deliver: a) the first general FHE instruction set with 16-bit bivariate and 32-bit univariate operations; b) performance improvements of 2×-863× over modern TFHE LUT approaches, and 2901× lower latency than the leading CKKS-based solution [CRYPTO'25]; c) up to 65× speedups over existing FHE-ISA implementations, and 3×-40× faster execution than related FHE compilers.
Fredrik Meisingseth, Christian Rechberger, Fabian Schmid
ePrint Report
At Crypto'24, Bell et al. propose a definition of Certified Differential Privacy (DP) and a quite general blueprint for satisfying it. The core of the blueprint is the new primitive called random variable commitment schemes (RVCS). Bell et al. construct RVCS's for fair coins and binomial distributions. In this work, we prove three lemmata that enable simple modular design of new RVCS's: First, we show that the properties of RVCS's are closed under polynomial sequential composition. Second, we show that homomorphically evaluating a function $f$ on the output of an RVCS for distribution $Z$ leads to an RVCS for distribution $f(Z)$. Third, we show (roughly) that applying a `Commit-and-Prove'-style proof of knowledge for a function $f$ to the output of an RVCS for distribution $Z$ results in an RVCS for distribution $f(Z)$. These lemmata together imply that there exist RVCS's for any distribution which can be sampled exactly in strict polynomial time, under, for instance, the discrete logarithm assumption. This is, to the best of our knowledge, the first result establishing the possibility of RVCS's for arbitrary distributions. We demonstrate the usefulness of these lemmata by constructing RVCS's for arbitrarily biased coins and a discrete Laplace distribution, leading to a certified DP protocol for a discrete Laplace mechanism. We observe that the definitions in BGKW do not directly allow sampling algorithms with non-zero honest abort probability, which rules out many practical sampling algorithms. We propose a slight relaxation of the definitions which enables the use of sampling methods with negligible abort probability. It is with respect to these weakened definitions that we prove our certified discrete Laplace mechanism.
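The homomorphic-evaluation idea can be illustrated with a standard additively homomorphic Pedersen commitment: a commitment to $a$ and a commitment to $b$ multiply into a commitment to $a+b$, i.e. the committed value is transformed by the linear function without opening anything. The group below is a toy instance with illustrative parameters, not the construction from the paper:

```python
# Toy Pedersen commitment in the quadratic-residue subgroup mod a small safe prime.
# Parameters are illustrative only -- far too small for real security.
p, q = 2039, 1019          # p = 2q + 1, both prime
g, h = 4, 9                # two elements of the order-q subgroup

def commit(m: int, r: int) -> int:
    """Pedersen commitment g^m * h^r mod p."""
    return pow(g, m, p) * pow(h, r, p) % p

a, ra = 123, 77
b, rb = 456, 88

# multiplying commitments homomorphically evaluates f(x, y) = x + y
lhs = commit(a, ra) * commit(b, rb) % p
rhs = commit((a + b) % q, (ra + rb) % q)
```

Because `lhs == rhs`, a verifier holding commitments to samples of $Z$ can derive a commitment to $f(Z)$ for linear $f$ without any new interaction.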
Francesca Falzon, Zichen Gui, Michael Reichle
ePrint Report
Encrypted multi-maps (EMMs) allow a client to outsource a multi-map to an untrusted server and then later retrieve the values corresponding to a queried label. They are a core building block for various applications such as encrypted cloud storage and searchable encryption. One important metric of EMMs is memory-efficiency: most schemes incur many random memory accesses per search query, leading to larger overhead compared to plaintext queries. Memory-efficient EMMs reduce random accesses but, in most known solutions, this comes at the cost of higher query bandwidth.

This work focuses on EMMs run on SSDs and we construct two page-efficient schemes---one static and one dynamic---both with optimal search bandwidth. Our static scheme achieves $\widetilde{O}(\log N/p)$ page-efficiency and $O(1)$ client storage, where $N$ denotes the size of the EMM and $p$ the SSD's page size. Our dynamic scheme achieves forward and backward privacy with $\widetilde{O}(\log N/p)$ page-efficiency and $O(M)$ client storage, where $M$ denotes the number of labels. Among schemes with optimal server storage, these are the first to combine optimal bandwidth with good page-efficiency, saving up to $O(p)$ and $\widetilde{O}(p\log\log (N/p))$ bandwidth over the state-of-the-art static and dynamic schemes, respectively. Our implementation on real-world data shows strong practical performance.
Danilo Francati, Yevin Nikhel Goonatilake, Shubham Pawar, Daniele Venturi, Giuseppe Ateniese
ePrint Report
We prove a sharp threshold for the robustness of cryptographic watermarking for generative models. This is achieved by introducing a coding abstraction, which we call messageless secret-key codes, that formalizes sufficient and necessary requirements of robust watermarking: soundness, tamper detection, and pseudorandomness. Thus, we establish that robustness has a precise limit: for binary outputs no scheme can survive if more than half of the encoded bits are modified, and for an alphabet of size $q$ the corresponding threshold is $(1-1/q)$ of the symbols.

Complementing this impossibility, we give explicit constructions that meet the bound up to a constant slack. For every $\delta > 0$, assuming pseudorandom functions and access to a public counter, we build linear-time codes that tolerate up to $(1/2)(1-\delta)$ errors in the binary case and $(1-1/q)(1-\delta)$ errors in the $q$-ary case. Together with the lower bound, these yield the maximum robustness achievable under standard cryptographic assumptions.
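A back-of-the-envelope simulation with a plain repetition code (a stand-in for illustration, not one of the paper's constructions) shows the binary threshold behavior: majority decoding survives flip rates well below 1/2 but fails once half of the encoded bits are modified.

```python
import numpy as np

rng = np.random.default_rng(7)
k, reps = 32, 64                        # message bits, repetitions per bit
msg = rng.integers(0, 2, size=k)
code = np.repeat(msg, reps)             # toy repetition "watermark"

def flip(c, frac):
    """Flip a uniformly random fraction of the code bits."""
    idx = rng.choice(c.size, size=int(frac * c.size), replace=False)
    out = c.copy()
    out[idx] ^= 1
    return out

def decode(c):
    """Majority vote within each repetition block."""
    return (c.reshape(k, reps).sum(axis=1) > reps // 2).astype(int)

ok_below = np.array_equal(decode(flip(code, 0.25)), msg)    # 25% flipped
ok_at_half = np.array_equal(decode(flip(code, 0.50)), msg)  # 50% flipped
```

At a 25% flip rate every block retains a clear majority, while at 50% each decoded bit is essentially a coin toss — exactly the sharp binary threshold the result formalizes.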

We then test experimentally whether this limit appears in practice by looking at the recent watermarking for images of Gunn, Zhao, and Song (ICLR 2025). We show that a simple crop-and-resize operation reliably flips about half of the latent signs and consistently prevents belief-propagation decoding from recovering the codeword, erasing the watermark while leaving the image visually intact.

These results provide a complete characterization of robust watermarking, identifying the threshold at which robustness fails, constructions that achieve it, and an experimental confirmation that the threshold is already reached in practice.
Lea Thiemt, Paul Rösler, Alexander Bienstock, Rolfe Schmidt, Yevgeniy Dodis
ePrint Report
Modern messengers use advanced end-to-end encryption protocols to protect message content even if user secrets are ever temporarily exposed. Yet, encryption alone does not prevent user tracking, as protocols often attach metadata, such as sequence numbers, public keys, or even plain user identifiers. This metadata reveals the social network as well as communication patterns between users. Existing protocols that hide metadata in Signal (i.e., Sealed Sender), for MLS-like constructions (Hashimoto et al., CCS 2022), or in mesh networks (Bienstock et al., CCS 2023) are relatively inefficient or specially tailored for only particular settings. Moreover, all existing practical solutions reveal crucial metadata upon exposures of user secrets.

In this work, we introduce a formal definition of Anonymity Wrappers (AW) that generically hide metadata of underlying two-party and group messaging protocols. Our definition captures forward and post-compromise anonymity as well as authenticity in the presence of temporary state exposures. Inspired by prior wrapper designs, the idea of our provably secure AW construction is to use shared keys of the underlying wrapped (group) messaging protocols to derive and continuously update symmetric keys for hiding metadata. Beyond hiding metadata on the wire, we also avoid and hide structural metadata in users' local states for stronger anonymity upon their exposure.

We implement our construction, evaluate its performance, and provide a detailed comparison with Signal's current approach based on Sealed Sender: Our construction reduces the wire size of small 1:1 messages from 441 bytes to 114 bytes. For a group of 100 members, it reduces the wire size of outgoing group messages from 7240 bytes to 155 bytes. We see similar improvements in computation time for encryption and decryption, but these improvements come with substantial storage costs for receivers. For this reason, we develop extensions with a Bloom filter for compressing the receiver storage. Based on this, Signal considers deploying our solution.
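The Bloom-filter compression idea can be sketched generically: instead of storing a full set of identifiers, a receiver keeps a compact bit array supporting probabilistic membership tests. This is a minimal generic sketch with assumed sizing, unrelated to Signal's actual data layout:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item over an m-bit array."""

    def __init__(self, m_bits: int = 1024, k: int = 4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        # derive k positions from domain-separated SHA-256 hashes
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item: bytes):
        for j in self._positions(item):
            self.bits[j // 8] |= 1 << (j % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[j // 8] & (1 << (j % 8)) for j in self._positions(item))

bf = BloomFilter()
for i in range(10):
    bf.add(f"sender-key-{i}".encode())
```

Inserted items are never missed; absent items are falsely reported present only with small probability (roughly $(kn/m)^k$ for $n$ items), which is the storage/accuracy trade-off such a compression accepts.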
Tabitha Ogilvie
ePrint Report
Approximate Homomorphic Encryption (AHE), introduced by Cheon et al. [CKKS17], offers a powerful solution for encrypting real-valued data by relaxing the correctness requirement and allowing small decryption errors. Existing constructions from (Ring) Learning with Errors achieve standard IND-CPA security, but this does not fully capture scenarios where an adversary observes decrypted outputs. Li and Micciancio [LiMic21] showed that when decryptions are passively leaked, these schemes become vulnerable to practical key recovery attacks even against honest-but-curious attackers. They formalize security when decryptions are shared with the new notions of IND-CPA-D and KR-D security.

We propose new techniques to achieve provable IND-CPA-D and KR-D security for AHE, while adding substantially less additional decryption noise than the prior provable results. Our approach hinges on refined "game-hopping" tools in the bit-security framework, which allow bounding security loss with a lower noise overhead. We also give a noise-adding strategy independent of the number of oracle queries, removing a costly dependence inherent in the previous solution.
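The baseline countermeasure being refined here, noise flooding, fits in a few lines: before releasing an approximate decryption, add fresh noise much larger than the secret-dependent decryption error, so the released value statistically hides that error. The magnitudes below are illustrative, not the paper's calibrated parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

m_true = 12345                 # the underlying plaintext value
e = 7                          # small decryption error (secret-dependent in CKKS)
raw = m_true + e               # releasing this exactly leaks e = raw - m_true

sigma_flood = 2.0**16          # flooding noise, far larger than |e|
released = raw + rng.normal(0.0, sigma_flood)
# the observer now sees m_true + e + flood; the term e is statistically drowned out
```

The cost is precision: the released value deviates from `m_true` by roughly `sigma_flood`, which is exactly the noise overhead that tighter analyses (lower flooding, query-independent calibration) aim to shrink.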

Beyond generic noise-flooding, we show that leveraging the recently introduced HintLWE problem [KLSS23] can yield particularly large security gains for AHE ciphertexts that are the result of "rescaling", a common operation in CKKS. Our analysis uses the fact that rescale-induced noise amounts to a linear "hint" on the secret to enable a tighter reduction to LWE (via HintLWE). In many practical parameter regimes where the rescaling noise dominates, our results imply that an additional precision loss of as little as two bits is sufficient to restore a high level of security against passive key-recovery attacks for standard parameters.

Overall, our results enable a provably secure and efficient real-world deployment of Approximate Homomorphic Encryption in scenarios with realistic security requirements.
Forest Zhang, Ke Wu
ePrint Report
Fair multi-party coin toss protocols are fundamental to many cryptographic applications. While Cleve's celebrated result (STOC'86) showed that strong fairness is impossible against half-sized coalitions, recent works have explored a relaxed notion of fairness called Cooperative-Strategy-Proofness (CSP-fairness). CSP-fairness ensures that no coalition has an incentive to deviate from the protocol. Previous research has established the feasibility of CSP-fairness against majority-sized coalitions, but these results focused on specific preference structures where each player prefers exactly one outcome out of all possible choices. In contrast, real-world scenarios often involve players with more complicated preference structures, rather than simple binary choices.

In this work, we initiate the study of CSP-fair multi-party coin toss protocols that accommodate arbitrary preference profiles, where each player may have an unstructured utility distribution over all possible outcomes. We demonstrate that CSP-fairness is attainable against majority-sized coalitions even under arbitrary preferences. In particular, we give the following results:

-- We give a reduction from achieving CSP-fairness for arbitrary preference profiles to achieving CSP-fairness for structured split-bet profiles, where each player distributes the same amount of total utility across all outcomes.

-- We present two CSP-fair protocols: (1) a size-based protocol, which defends against majority-sized coalitions, and (2) a bets-based protocol, which defends against coalitions controlling a majority of the total utilities.

-- Additionally, we establish an impossibility result for CSP-fair binary coin toss with split-bet preferences, showing that our protocols are nearly optimal.
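For background, the classic commit-then-reveal coin toss — the baseline template whose fairness Cleve's result and this line of work analyze, not the paper's CSP-fair protocols — looks as follows:

```python
import hashlib
import secrets

def commit(bit: int, nonce: bytes) -> bytes:
    """Hash commitment to a single bit (hiding via the random nonce)."""
    return hashlib.sha256(nonce + bytes([bit])).digest()

# commit phase: each party broadcasts a commitment to a private random bit
parties = []
for _ in range(3):
    b = secrets.randbelow(2)
    nonce = secrets.token_bytes(16)
    parties.append((b, nonce, commit(b, nonce)))

# reveal phase: openings are checked, and the outcome is the XOR of all bits
assert all(commit(b, nonce) == c for b, nonce, c in parties)
outcome = 0
for b, _, _ in parties:
    outcome ^= b
```

The fairness problem is visible in the reveal phase: a party that dislikes the XOR after seeing the others' openings can simply abort, and it is precisely such profitable deviations that CSP-fairness rules out.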
Nouhou Abdou Idris, Yunusa Abdulsalam, Mustapha Hedabou
ePrint Report
The anticipated arrival of large-scale quantum computers threatens traditional public-key cryptographic systems. As a result, NIST has driven the development of post-quantum cryptography (PQC) algorithms. Isogeny-based schemes have compact key sizes; however, fundamental vulnerabilities have been found in smooth-degree isogeny systems. Therefore, the literature has concentrated its efforts on higher-dimensional isogenies of non-smooth degrees. In this work, we present POKE-KEM, a key encapsulation mechanism (KEM) derived from the POKE public-key encryption (PKE) scheme via a Fujisaki-Okamoto (FO) transformation, with correspondingly optimized security levels. Despite POKE’s efficiency advantages, its current formulation provides only IND-CPA security, limiting practical deployment where adaptive chosen-ciphertext security is essential. The resulting POKE-KEM construction achieves IND-CCA security under the computational hardness of the C-POKE problem in both the random oracle model (ROM) and the quantum random oracle model (QROM). Our implementation demonstrates significant practical advantages across all NIST security levels (128, 192, and 256 bits), supporting the practical viability of POKE-KEM for post-quantum cryptographic deployments. We provide a comprehensive security analysis including formal proofs of correctness, rigidity, and IND-CCA security. Our underlying construction leverages the difficulty of recovering isogenies with unknown, non-smooth degrees, resistant to known quantum attacks.
MINKA MI NGUIDJOI Thierry Emmanuel
ePrint Report
We introduce the Chaotic Entropic Expansion (CEE), a new one-way function based on iterated polynomial maps over finite fields. For polynomials $f$ in a carefully defined class $\mathcal{F}_d$, we prove that $N$ iterations preserve min-entropy of at least $\log_2 q - N \log_2 d$ bits and achieve statistical distance at most $(q-1)(d^N-1)/(2\sqrt{q})$ from uniform. We formalize security through the Affine Iterated Inversion Problem (AIIP) and provide reductions to the hardness of solving multivariate quadratic equations (MQ) and computing discrete logarithms (DLP). Against quantum adversaries, CEE achieves $O(2^{\lambda/2})$ security for $\lambda$-bit classical security. We provide comprehensive cryptanalysis and parameter recommendations for practical deployment. While slower than AES, CEE’s algebraic structure enables unique applications in verifiable computation and post-quantum cryptography within the CASH framework.
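The iteration and the entropy bound can be made concrete numerically; the polynomial $f(x) = x^d + c$ and all parameters below are hypothetical stand-ins for a member of the class $\mathcal{F}_d$, chosen only to instantiate the formula, not taken from the paper:

```python
import math

q, d, N = 1_000_003, 3, 10   # prime field size, polynomial degree, iterations
c = 42                        # hypothetical constant term; f(x) = x^d + c

def cee(x0: int) -> int:
    """Apply N iterations of f over GF(q)."""
    x = x0
    for _ in range(N):
        x = (pow(x, d, q) + c) % q
    return x

digest = cee(123456)

# min-entropy bound from the analysis: at least log2(q) - N*log2(d) bits
entropy_bound = math.log2(q) - N * math.log2(d)
```

For these toy parameters the bound evaluates to roughly 4 bits, illustrating the trade-off the formula expresses: each extra iteration costs $\log_2 d$ bits of the guaranteed min-entropy, so $q$ must be sized well above $d^N$.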
Michele Ciampi, Divya Ravi, Luisa Siniscalchi, Yu Xia
ePrint Report
When investigating the round complexity of multi-party computation (MPC) protocols, it is common to assume that in each round parties can communicate over broadcast channels. However, broadcast is an expensive resource, and as such its use should be minimized. For this reason, Cohen, Garay, and Zikas (Eurocrypt 2020) investigated the tradeoffs between the use of broadcast in two-round protocols assuming setup and the achievable security guarantees. Despite the prolific line of research that followed the results of Cohen, Garay, and Zikas, none of the existing results considered the problem of minimizing the use of broadcast while relying in a black-box way on the underlying cryptographic primitives. In this work, we fully characterize the necessary and sufficient requirements in terms of broadcast usage in the dishonest majority setting for round-optimal MPC with black-box use of minimal cryptographic assumptions. Our main result shows that to securely realize any functionality with unanimous abort in the common reference string model with black-box use of two-round oblivious transfer it is necessary and sufficient for the parties to adhere to the following pattern: in the first two rounds the parties exchange messages over peer-to-peer channels, and in the last round the messages are sent over a broadcast channel. We also extend our results to the correlated randomness setting, where we prove that one round of peer-to-peer interaction and one round of broadcast is optimal to evaluate any functionality with unanimous abort.
Shuai Han, Hongxu Yi, Shengli Liu, Dawu Gu
ePrint Report
Currently, the only tightly secure inner-product functional encryption (IPFE) schemes in the multi-user and multi-challenge setting are the IPFE scheme due to Tomida (Asiacrypt 2019) and its derivatives. However, these tightly secure schemes have large ciphertext expansion and are all based on the matrix decisional Diffie-Hellman (DDH) assumption.

To improve the efficiency of tightly secure IPFE and enrich the diversity of its underlying assumptions, we construct a set of tightly secure IPFE schemes, with very compact ciphertexts and efficient algorithms, from the matrix DDH, the decisional composite residuosity (DCR), and the learning-with-errors (LWE) assumptions, respectively. In particular,

-- our DDH-based scheme has about 5×~100× shorter public/secret keys, 2×~2.9× shorter ciphertexts and about 5×~100× faster algorithms than all existing tightly secure IPFE schemes, at the price of 3~7 bits of decreased security level, resolving an open problem raised by Tomida (Asiacrypt 2019);

-- our DCR-based scheme constitutes the first tightly secure IPFE scheme from the DCR assumption;

-- our LWE-based scheme is the first almost tightly secure IPFE scheme from lattices, hence achieving post-quantum security.

We obtain our schemes by proposing a generic and flexible construction of IPFE with a core technical tool called two-leveled inner-product hash proof system (TL-IP-HPS). Specifically, our IPFE construction has a compact design, and to achieve tight security, we propose an economical proof strategy that reuses the spaces in such compact IPFE. Interestingly, our new proof strategy also applies to the loosely secure IPFE schemes proposed by Agrawal, Libert and Stehlé (Crypto 2016), and yields a tighter bound on their security loss.
Onur Gunlu, Maciej Skorski, H. Vincent Poor
ePrint Report
Semantic communication systems focus on the transmission of meaning rather than the exact reconstruction of data, reshaping communication network design to achieve transformative efficiency in latency-sensitive and bandwidth-limited scenarios. Within this context, we investigate the rate-distortion-perception (RDP) problem for image compression, which plays a central role in producing perceptually realistic outputs under rate constraints. By employing the randomized distributed function computation (RDFC) framework, we derive an achievable non-asymptotic RDP region that captures the finite blocklength trade-offs among rate, distortion, and perceptual quality, aligning with the objectives of semantic communications. This region is further generalized to include either side information or a secrecy requirement, the latter ensuring strong secrecy against eavesdroppers through physical-layer security mechanisms and maintaining robustness in the presence of quantum-capable adversaries. The main contributions of this work are: (i) achievable bounds for non-asymptotic RDP regions subject to realism and distortion constraints; (ii) extensions to cases where side information is available at both the encoder and decoder; (iii) achievable non-asymptotic RDP bounds with strong secrecy guarantees; (iv) characterization of the asymptotic secure RDP region under a perfect realism constraint; and (v) demonstrations of significant rate reductions and the effects of finite blocklengths, side information, and secrecy constraints. These findings offer concrete guidelines for the design of low-latency, secure, and high-fidelity image compression and generative modeling systems that generate realistic outputs, with relevance for, e.g., privacy-critical applications.
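For context, the perception (realism) constraint augments the classical rate-distortion function with a divergence constraint between the source and reconstruction distributions, in the sense of Blau and Michaeli; the asymptotic forms below are background the paper's finite-blocklength bounds refine, not the paper's own statements.

```latex
% Classical rate-distortion function:
R(D) \;=\; \min_{p_{\hat X \mid X}\,:\; \mathbb{E}[d(X,\hat X)] \le D} I(X;\hat X)

% Rate-distortion-perception function: additionally require the
% reconstruction law to stay close to the source law; "perfect realism"
% corresponds to P = 0, i.e. p_{\hat X} = p_X.
R(D,P) \;=\; \min_{\substack{p_{\hat X \mid X}\,:\; \mathbb{E}[d(X,\hat X)] \le D,\\ d(p_X,\, p_{\hat X}) \le P}} I(X;\hat X)
```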
Marc Fischlin, Moritz Huppert, Sam A. Markelon
ePrint Report
Probabilistic data structures like hash tables, skip lists, and treaps support efficient operations through randomized hierarchies that enable "skipping" elements, achieving sub-linear query complexity on average for perfectly correct responses. They serve as critical components in performance-sensitive systems where correctness is essential and efficiency is highly desirable. While simpler than deterministic alternatives like balanced search trees, these structures traditionally assume that input data are independent of the structure's internal randomness and state - an assumption questionable in malicious environments - potentially leading to a significantly increased query complexity. We present adaptive attacks on all three aforementioned structures that, in the case of hash tables and skip lists, cause exponential degradation compared to the input-independent setting. While efficiency-targeting attacks on hash tables are well-studied, our attacks on skip lists and treaps provide new insights into vulnerabilities of skipping-based probabilistic data structures. Next, we propose simple and efficient modifications to the original designs of these data structures to provide provable security against adaptive adversaries. Our approach is formalized through Adaptive Adversary Property Conservation (AAPC), a general security notion that captures deviation from the expected efficiency guarantees in adversarial scenarios. We use this notion to present rigorous robustness proofs for our versions of the data structures. Lastly, we perform experiments whose empirical results closely agree with our analytical results.
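To make the "skipping" mechanism concrete, here is a minimal textbook skip list: each inserted key draws a random tower height, and a search starts at the top level, skips forward, and descends, giving logarithmic expected cost when keys are independent of the coin flips. This is the vanilla design whose independence assumption the attacks exploit, not the hardened variant the paper proposes.

```python
# Minimal textbook skip list (illustrative; not the paper's hardened design).
import random

MAX_LEVEL = 16
P = 0.5            # probability of growing a tower by one level

class Node:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)

class SkipList:
    def __init__(self):
        self.head = Node(None, MAX_LEVEL)
        self.level = 0

    def _random_level(self):
        # Geometric tower height: the structure's internal randomness.
        lvl = 0
        while random.random() < P and lvl < MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, key):
        update = [self.head] * (MAX_LEVEL + 1)
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node             # rightmost node before key at level i
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = Node(key, lvl)
        for i in range(lvl + 1):         # splice the new tower in
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def search(self, key):
        node = self.head
        for i in range(self.level, -1, -1):   # skip at high levels, then descend
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key

sl = SkipList()
for k in range(100):
    sl.insert(k)
print(sl.search(42), sl.search(1000))  # True False
```

An adaptive adversary who can infer the tower heights (e.g. via timing) can insert keys so that searches traverse mostly level-0 links, degrading the expected logarithmic cost toward linear, which is the style of degradation the attacks above formalize.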
Rujia Li, Mingfei Zhang, Xueqian Lu, Wenbo Xu, Ying Yan, Sisi Duan
ePrint Report
Ethereum, a leading blockchain platform, relies on incentive mechanisms to improve its stability. Recently, several attacks targeting the incentive mechanisms have been proposed. Examples include the so-called reorganization attacks that cause blocks proposed by honest validators to be discarded to gain more rewards. Finding these attacks, however, heavily relies on expert knowledge and may involve substantial manual effort.

We present BunnyFinder, a semi-automated framework for finding incentive flaws in Ethereum. BunnyFinder is inspired by failure injection, a technique commonly used in software testing for finding implementation vulnerabilities. Instead of finding implementation vulnerabilities, we aim to find design flaws. Our main technical contributions involve a carefully designed strategy generator that generates a large pool of attack instances, an automatic workflow that launches attacks and analyzes the results, and a workflow that integrates reinforcement learning to fine-tune the attack parameters and identify the most profitable attacks. We simulate a total of 9,354 attack instances using our framework and find the following results. First, our framework reproduces five known incentive attacks that were previously found manually. Second, we find three new attacks that can be identified as incentive flaws. Finally and surprisingly, one of our experiments also identified two implementation flaws.
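The generate-launch-analyze-refine loop described above can be caricatured as a bandit over attack parameters: repeatedly pick a candidate strategy, observe its profit, and reinforce the best one. Everything below (the strategy names, the reward model, the epsilon-greedy rule) is a hypothetical sketch for intuition, not BunnyFinder's actual design.

```python
# Hypothetical epsilon-greedy loop over candidate attack strategies.
# The hidden means stand in for "profit measured on a testnet"; all
# names and numbers are invented for illustration.
import random

random.seed(0)

STRATEGIES = [("honest", 0.0), ("reorg-shallow", 0.2), ("reorg-deep", 1.0)]

def run_attack(mean):
    """Simulated attack outcome: noisy profit around the hidden mean."""
    return random.gauss(mean, 0.1)

def tune(rounds=2000, eps=0.1):
    counts = [0] * len(STRATEGIES)
    values = [0.0] * len(STRATEGIES)     # running mean profit per strategy
    for _ in range(rounds):
        if random.random() < eps:        # explore a random strategy
            i = random.randrange(len(STRATEGIES))
        else:                            # exploit the best strategy so far
            i = max(range(len(STRATEGIES)), key=lambda j: values[j])
        profit = run_attack(STRATEGIES[i][1])
        counts[i] += 1
        values[i] += (profit - values[i]) / counts[i]   # incremental mean
    best = max(range(len(STRATEGIES)), key=lambda j: values[j])
    return STRATEGIES[best][0]

print(tune())   # settles on the most profitable strategy
```

A real search would replace `run_attack` with launching an injected-failure scenario against a local testnet and measuring validator rewards, and would tune continuous parameters rather than a small discrete menu.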