International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

04 January 2025

Wouter Legiest, Jan-Pieter D'Anvers, Bojan Spasic, Nam-Luc Tran, Ingrid Verbauwhede
ePrint Report
This paper presents a novel approach to calculating the Levenshtein (edit) distance within the framework of Fully Homomorphic Encryption (FHE), specifically targeting third-generation schemes like TFHE. Edit distance computations are essential in applications across finance and genomics, such as DNA sequence alignment. We introduce an optimised algorithm, called Leuvenshtein, that significantly reduces the cost of edit distance calculations. This algorithm reduces the number of programmable bootstraps (PBS) needed per cell of the calculation, lowering it from approximately 28 operations—required by the conventional Wagner-Fischer algorithm—to just 1. Additionally, we propose an efficient method for performing equality checks on characters, reducing ASCII character comparisons to only 2 PBS operations. Finally, we explore the potential for further performance improvements by utilizing preprocessing when one of the input strings is unencrypted. Our Leuvenshtein algorithm achieves up to $205\times$ faster performance compared to the best available TFHE implementation and up to $39\times$ faster than an optimised implementation of the Wagner-Fischer algorithm. Moreover, when offline preprocessing is possible due to the presence of one unencrypted input on the server side, an additional $3\times$ speedup can be achieved.
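For reference, the conventional Wagner-Fischer dynamic program that Leuvenshtein is compared against can be sketched in plain Python on unencrypted strings as follows; this shows only the cell-by-cell recurrence, not the paper's TFHE evaluation with a single PBS per cell:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Wagner-Fischer dynamic program (plaintext, no encryption)."""
    m, n = len(a), len(b)
    # prev[j] holds the edit distance between a[:i-1] and b[:j] for the previous row.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

assert edit_distance("kitten", "sitting") == 3
```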

03 January 2025

Dipayan Saha, Farimah Farahmandi
ePrint Report
Side-channel analysis (SCA) targets weaknesses in an algorithm's implementation rather than in the algorithm itself. The rise of machine learning (ML) and deep learning (DL) is giving adversaries advanced capabilities to perform stealthy attacks. In this paper, we propose DL-SCADS, a DL-based approach that uses signal decomposition techniques to extract secret keys from post-silicon EM/power side-channel traces. We integrate the previously proven ideas of model ensembling and automated hyperparameter tuning with signal decomposition to develop an efficient and robust side-channel attack. Extensive experiments are performed on Advanced Encryption Standard (AES) and Post-Quantum Cryptography (PQC) implementations to demonstrate the efficacy of our approach. We also tabulate the performance of side-channel attacks employing various decomposition techniques and compare them with the proposed approach across a range of datasets.
Anandarup Roy, Bimal Kumar Roy, Kouichi Sakurai, Suprita Talnikar
ePrint Report
This article explores the potential of Secret Sharing-Based Internet of Things (SBIoT) as a promising cryptographic element across diverse applications, including secure data storage in commercial cloud systems (Datachest), smart home environments (encompassing sensors, cameras, smart locks, and smart assistants), and e-health applications (protecting patient data and medical records). Beyond these applications, the paper makes two key contributions: the introduction of a novel cheater identification algorithm designed to verify the correct submission of shares during secret reconstruction, and empirical validation through experimental studies to support the theoretical advancements. This multifaceted approach not only demonstrates the versatility of SBIoT but also proposes innovative mechanisms to enhance security and integrity, contributing to the development of a more robust cryptographic framework. This article expands upon the work presented in the poster "A Combinatorial Approach to IoT Data Security" at IWSEC 2023, Yokohama, Japan.
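As background, secret reconstruction in such schemes typically follows the Shamir share-and-reconstruct pattern. The minimal (t, n) sketch below is illustrative only: the field modulus and API are assumptions, and the article's cheater-identification algorithm is not shown.

```python
import random

P = 2**127 - 1  # prime field modulus used in this sketch

def share(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them reconstruct it (Shamir)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789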
Merve Karabulut, Reza Azarderakhsh
ePrint Report
Side-channel attacks (SCA) pose a significant threat to cryptographic implementations, including those designed to withstand the computational power of quantum computers. This paper introduces the first side-channel attack on an industry-grade post-quantum cryptography implementation, Adam's Bridge. Specifically, we present a Correlation Power Analysis (CPA) attack targeting the hardware implementation of ML-DSA within Caliptra, an open-source Silicon Root of Trust framework developed through a multi-party collaboration involving Google, AMD, and Microsoft.

Our attack focuses on the modular reduction process that follows the Number Theoretic Transform-based polynomial pointwise multiplication. By exploiting side-channel leakage from a distinctive reduction algorithm unique to Adam's Bridge and leveraging the zeroization mechanism used to securely erase sensitive information by clearing internal registers, we significantly enhance the attack's efficacy. Our findings reveal that an adversary can extract Caliptra's ML-DSA secret keys using only 10,000 power traces. With access to these keys, an attacker could forge signatures for certificate generation, thereby compromising the integrity of the root of trust. This work highlights the vulnerabilities of industry-standard root-of-trust systems to side-channel attacks. It underscores the urgent need for robust countermeasures to secure commercially deployed systems against such threats.
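For readers unfamiliar with CPA, the generic sketch below shows its core step: ranking key-byte guesses by the Pearson correlation between a Hamming-weight leakage hypothesis and the measured traces. It is not the paper's attack on Adam's Bridge's modular reduction; the leakage model and array shapes are illustrative assumptions.

```python
import numpy as np

HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming-weight table

def cpa_recover_byte(traces: np.ndarray, known_inputs: np.ndarray) -> int:
    """
    Generic CPA: for each 8-bit key guess, correlate a hypothetical
    Hamming-weight leakage against the traces and return the guess with
    the highest absolute correlation at any sample point.
    traces:       (num_traces, num_samples) power/EM measurements
    known_inputs: (num_traces,) public values combined with the secret byte
    """
    best_guess, best_corr = 0, 0.0
    centered = traces - traces.mean(axis=0)
    for guess in range(256):
        # toy leakage model: HW(known_input XOR key_guess); a real attack on
        # ML-DSA would model the targeted modular-reduction intermediate instead
        hyp = HW[known_inputs ^ guess].astype(float)
        hyp -= hyp.mean()
        corr = np.abs(centered.T @ hyp) / (
            np.linalg.norm(centered, axis=0) * np.linalg.norm(hyp) + 1e-12)
        if corr.max() > best_corr:
            best_guess, best_corr = guess, corr.max()
    return best_guess
```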
Angold Wang
ePrint Report
This survey provides a comprehensive examination of zero-knowledge interactive verifiable computing, emphasizing the utilization of randomness in low-degree polynomials. We begin by tracing the evolution of general-purpose verifiable computing, starting with the foundational concepts of complexity theory developed in the 1980s, including the classes P and NP and the notion of NP-completeness. Through an exploration of the Cook-Levin Theorem and the transformation between NP problems like HAMPATH and SAT, we demonstrate the reducibility of NP problems to a unified framework, laying the groundwork for subsequent advancements.

Recognizing the limitations of NP-based proof systems in effectively verifying certain problems, we then delve into interactive proof systems (IPS) as a probabilistic extension of NP. IPS enhance verification efficiency by incorporating randomness and interaction, accepting a small probability of error in exchange for that speed. We address the practical challenges of traditional IPS, where the assumption of a prover with unlimited computational power is unrealistic, and introduce the concept of secret knowledge. This approach allows a prover with bounded computational resources to convincingly demonstrate possession of secret knowledge to the verifier, thereby enabling high-probability verification. We quantify this knowledge by assessing the verifier's ability to distinguish between a simulator and a genuine prover, referencing seminal works such as Goldwasser et al.'s "The Knowledge Complexity of Interactive Proof Systems".

The survey further explores essential mathematical theories and cryptographic protocols, including the Schwartz-Zippel lemma and Reed-Solomon error correction, which underpin the power of low-degree polynomials in error detection and interactive proof systems. We provide a detailed, step-by-step introduction to the sum-check protocol, proving its soundness and runtime characteristics.
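A minimal illustration of the Schwartz-Zippel principle underlying these protocols: two distinct polynomials of degree at most d can agree on a uniformly random field element with probability at most d/|F|, so a single random evaluation is a sound equality test. The sketch below (univariate case, illustrative modulus) is not the sum-check protocol itself, only the probabilistic check it relies on.

```python
import random

P = 2**61 - 1  # prime field modulus used in this sketch

def eval_poly(coeffs, x):
    """Evaluate a univariate polynomial (coefficients low-to-high) mod P via Horner."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def probably_equal(f, g, trials=1):
    """
    Schwartz-Zippel style identity test: if f != g as polynomials of degree <= d,
    a random evaluation point exposes the difference except with probability d / P.
    """
    d = max(len(f), len(g)) - 1  # degree bound appearing in the error bound
    for _ in range(trials):
        r = random.randrange(P)
        if eval_poly(f, r) != eval_poly(g, r):
            return False  # definitely different polynomials
    return True  # equal except with probability at most (d / P) ** trials

# x^2 + 2x + 1 compared against itself: the test always accepts
assert probably_equal([1, 2, 1], [1, 2, 1])
```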

Despite the sum-check protocol's theoretical applicability to all NP problems via SAT reduction, we highlight its limitation of requiring superpolynomial time for an honest prover in general-purpose computations. To address these limitations, we introduce the GKR protocol, a sophisticated general-purpose interactive proof system developed in the 2010s. We demonstrate how the sum-check protocol integrates into the GKR framework to achieve efficient, sound verification of computations in polynomial time. This survey not only reviews the historical and theoretical advancements in verifiable computing over the past 30 years but also offers an accessible introduction for newcomers, providing a solid foundation for understanding the significant advancements of the past decade, including developments such as ZK-SNARKs.
Daniel Nager
ePrint Report
In [Pan21], a linearization attack is proposed in order to break the cryptosystem proposed in [Gli21]. We propose here a non-linearizable operator that disables this attack, as this operator does not give rise to a quasigroup and does not obey the Latin square property.

01 January 2025

Yevgeniy Dodis, Daniel Jost
ePrint Report
In recent work [Crypto'24], Dodis, Jost, and Marcedone introduced Compact Key Storage (CKS) as a modern approach to backup for end-to-end (E2E) secure applications. As most E2E-secure applications rely on a sequence of secrets $(s_1,...,s_n)$ from which, together with the ciphertexts sent over the network, all content can be restored, Dodis et al. introduced CKS as a primitive for backing up $(s_1,...,s_n)$. The authors provided definitions as well as two practically efficient schemes (with different functionality-efficiency trade-offs). Both their security definitions and schemes, however, relied on the random oracle model (ROM).

In this paper, we first show that this reliance is inherent. More concretely, we argue that in the standard model, one cannot have a general CKS instantiation that is applicable to all "CKS-compatible games", as defined by Dodis et al., and realized by their ROM construction. Therefore, one must restrict the notion of CKS-compatible games to allow for standard model CKS instantiations.

We then introduce an alternative standard-model CKS definition that makes concessions in terms of functionality (thereby circumventing the impossibility). More precisely, we specify a CKS notion that does not recover the original secret $s_i$ but a derived key $k_i$, and then observe that this still suffices for many real-world applications. We instantiate this new notion based on minimal assumptions. For passive security, we provide an instantiation based on one-way functions only. For stronger notions, we additionally need collision-resistant hash functions and dual-PRFs, which we argue to be minimal.

Finally, we provide a modularization of the CKS protocols of Dodis et al. In particular, we present a unified protocol (and proof) for standard-model equivalents for both protocols introduced in the original work.
Jiaxing Zhao, Srinath Setty, Weidong Cui
ePrint Report
We describe the design and implementation of MicroNova, a folding-based recursive argument for producing proofs of incremental computations of the form $y = F^{(\ell)}(x)$, where $F$ is a possibly non-deterministic computation (encoded using a constraint system such as R1CS), $x$ is the initial input, $y$ is the output, and $\ell > 0$. The proof of an $\ell$-step computation is produced step-by-step such that neither the proof size nor the time to verify it depends on $\ell$. The proof at the final iteration is then compressed, to achieve further succinctness in terms of proof size and verification time. Compared to prior folding-based arguments, a distinguishing aspect of MicroNova is the concrete efficiency of the verifier—even in a resource-constrained environment such as Ethereum’s blockchain. In particular, the compressed proof consists of $O(\log{N})$ group elements and it can be verified with $O(\log{N})$ group scalar multiplications and two pairing operations, where $N$ is the number of constraints for a single invocation of $F$. MicroNova requires a universal trusted setup and can employ any existing setup material created for the popular KZG univariate polynomial commitment scheme. Finally, we implement and experimentally evaluate MicroNova. We find that MicroNova’s proofs can be efficiently verified on the Ethereum blockchain with $\approx$2.2M gas. Furthermore, MicroNova’s prover incurs minimal overheads atop the baseline Nova prover.
Hanwen Feng, Qiang Tang
ePrint Report
This paper presents the first optimal-resilient, adaptively secure asynchronous common coin protocol with $O(\lambda n^2)$ communication complexity and $O(1)$ rounds, requiring only a public silent setup. Our protocol immediately implies a sequence of quadratic-communication, constant-round asynchronous Byzantine agreement protocols and asynchronous distributed key generation with a silent setup. Along the way, we formulate a new primitive called asynchronous subset alignment and introduce a simple framework to reason about specific composition security suitable for asynchronous common coin, which may be of independent interest.
Dongming Zhang, Lei Xie, Yu Tao
ePrint Report
With the rapid growth of blockchain-based Non-Fungible Tokens (NFTs), data trading has evolved to incorporate NFTs for ownership verification. However, the NFT ecosystem faces significant challenges in copyright protection, particularly when malicious buyers slightly modify the purchased data and re-mint it as a new NFT, infringing upon the original owner's rights. In this paper, we propose a copyright-preserving data trading protocol to address this challenge.

First, we introduce the Merkle Feature Tree (MFT), an enhanced version of the traditional Merkle Tree that incorporates an AI-powered feature layer above the data layer. Second, we design a copyright challenge phase during the trading process, which recognizes the data owner with highly similar feature vectors and earlier on-chain timestamp as the legitimate owner. Furthermore, to achieve efficient and low-gas feature vector similarity computation on blockchain, we employ Locality-Sensitive Hashing (LSH) to compress high-dimensional floating-point feature vectors into single uint256 integers.
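A random-hyperplane (SimHash-style) LSH sketch shows how a high-dimensional floating-point feature vector can be compressed into a single 256-bit integer whose Hamming distance tracks similarity. The exact LSH construction and on-chain comparison used by the protocol may differ; the dimensions, seed, and threshold below are illustrative assumptions.

```python
import numpy as np

def simhash_256(vec: np.ndarray, planes: np.ndarray) -> int:
    """
    Random-hyperplane LSH: project the feature vector onto 256 fixed random
    hyperplanes and pack the sign bits into one 256-bit integer (a uint256
    on-chain). Similar vectors agree on most bits.
    """
    bits = (planes @ vec) >= 0  # 256 sign bits
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming_similarity(a: int, b: int) -> float:
    """Fraction of the 256 bits on which two hashes agree."""
    return 1.0 - bin(a ^ b).count("1") / 256

dim = 512                                 # dimensionality of the feature vectors
rng = np.random.default_rng(0)
planes = rng.standard_normal((256, dim))  # shared, fixed hyperplanes

v = rng.standard_normal(dim)
w = v + 0.01 * rng.standard_normal(dim)   # slightly perturbed copy
assert hamming_similarity(simhash_256(v, planes), simhash_256(w, planes)) > 0.9
```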

Experiments with multiple image and text feature extraction models demonstrate that LSH effectively preserves the similarity between highly similar feature vectors before and after compression, thus supporting similarity-based copyright challenges. Experimental results on the Ethereum Sepolia testnet demonstrate NMFT's scalability with sublinear growth in gas consumption while maintaining stable latency.
Qiang Liu, Joon-Woo Lee
ePrint Report
Multi-party Private Set Union (MPSU) enables multiple participants to jointly compute the union of their private sets without leaking any additional information beyond the resulting union. Liu et al. (ASIACRYPT 2023) presented the first MPSU protocol that scales to large data sets, designating one participant as the "leader" responsible for obtaining the final union. However, this approach assumes that the leader does not collude with any other participant, which can be impractical due to the inherent lack of mutual trust among participants, thereby limiting its applicability. On the other hand, the state-of-the-art protocol that allows all participants to learn the computed union was proposed by Seo et al. (PKC 2012). While their construction achieves $O(1)$ round complexity, it remains secure only if fewer than half of the participants collude, leaving open the problem of achieving stronger collusion tolerance together with multi-party output.

In this work, we address these limitations by first proposing $\Pi_\text{MPSU}^{\text{one-leader}}$, which designates one participant as leader to obtain the union result. Building upon this construction, we extend this design to $\Pi_\text{MPSU}^{\text{leaderless}}$, which enables every participant to receive the union result simultaneously. Both protocols operate under the semi-honest model, tolerate maximal collusion among participants, and efficiently handle large-scale set computation. We implement these schemes and conduct a comprehensive comparison against state-of-the-art solutions. The results show that, for input sizes of $2^{12}$ at a comparable security level, $\Pi_\text{MPSU}^{\text{one-leader}}$ achieves a $663$ times speedup in online runtime compared to the state-of-the-art. Furthermore, it also remains $22$ times faster than the half-collusion-tolerant protocol.
Sam Buxbaum, Mohammad Mahmoody
ePrint Report
In classical cryptography, one-way functions (OWFs) play a central role as the minimal primitive that (almost) all primitives imply. The situation is more complicated in quantum cryptography, in which honest parties and adversaries can use quantum computation and communication, and it is known that analogues of OWFs in the quantum setting might not be minimal.

In this work we ask whether OWFs are minimal for the intermediate setting of post-quantum cryptography, in which the protocols are classical while they must resist quantum adversaries. We show that for a wide range of natural settings, if a primitive Q implies OWFs, then so does its (uniformly or non-uniformly secure) post-quantum analogue. In particular, we show that if a primitive Q implies any other primitive P that has a 2-message security game (e.g., OWFs) through a black-box classical security reduction R, then one can always (efficiently) turn any polynomial-size quantum adversary breaking P into a polynomial-size quantum adversary breaking Q. Note that this result holds even if the implementation of P using that of Q is arbitrarily non-black-box.

We also prove extensions of this result for the case where the reduction R anticipates its oracle adversary to be deterministic, whenever either of the following conditions holds: (1) the adversary needs to win the security game of Q only with non-negligible probability (e.g., Q is collision-resistant hashing), or (2) either of P and Q has a "falsifiable" security game (this is the case when P is OWFs). Our work leaves open answering our main question when Q implies OWFs through a non-black-box security reduction, or when P uses a more complicated security game than a two-message one.
Rishiraj Bhattacharyya, Avradip Mandal, Meghna Sengupta
ePrint Report
The rising demand for data privacy in cloud-based environments has led to the development of advanced mechanisms for securely managing sensitive information. A prominent solution in this domain is the "Data Privacy Vault," a concept that is being provided commercially by companies such as Hashicorp, Basis Theory, Skyflow Inc., VGS, Evervault, Protegrity, Anonomatic, and BoxyHQ. However, no existing work has rigorously defined the security notions required for a Data Privacy Vault or proven them within a formal framework, which is the focus of this paper.

Among its other uses, data privacy vaults are increasingly being used as storage for LLM training data, which necessitates a scheme that enables users to securely store sensitive information in the cloud while allowing controlled access for performing analytics on specific non-sensitive attributes without exposing sensitive data. Conventional solutions involve users generating encryption keys to safeguard their data, but these solutions are not deterministic and are therefore unsuited for the LLM setting. To address this, we propose a novel framework that is deterministic as well as semantically secure. Our scheme operates in the Cloud Operating model, where the server is trusted but stateless and the storage is outsourced.

We provide a formal definition and a concrete instantiation of this data privacy vault scheme. We introduce a novel tokenization algorithm that serves as the core mechanism for protecting sensitive data within the vault. Our approach not only generates secure, unpredictable tokens for sensitive data but also securely stores sensitive data while enabling controlled data retrieval based on predefined access levels. Our work fills a significant gap in the existing literature by providing a formalized framework for the data privacy vault, complete with security proofs and a practical construction - not only enhancing the understanding of vault schemes but also offering a viable solution for secure data management in the era of cloud computing.
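For intuition only, deterministic tokenization can be illustrated with a PRF such as HMAC-SHA256: the same sensitive value always yields the same token, which is the property the LLM setting requires. This sketch is not the paper's tokenization algorithm or access-control mechanism; the key handling and field naming are placeholder assumptions.

```python
import hmac, hashlib

def tokenize(vault_key: bytes, field: str, value: str) -> str:
    """
    Deterministic tokenization sketch: a PRF (HMAC-SHA256) of the field name
    and sensitive value under a vault-held key. Repeated values map to the
    same token, so tokens can stand in for the data in analytics pipelines
    while the raw value stays inside the vault.
    """
    mac = hmac.new(vault_key, f"{field}:{value}".encode(), hashlib.sha256)
    return mac.hexdigest()

key = b"\x00" * 32  # placeholder key; a real vault would manage this securely
t1 = tokenize(key, "email", "alice@example.com")
t2 = tokenize(key, "email", "alice@example.com")
assert t1 == t2  # deterministic: repeated values tokenize identically
```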
Florian Krieger, Florian Hirner, Sujoy Sinha Roy
ePrint Report
Emerging cryptographic systems such as Fully Homomorphic Encryption (FHE) and Zero-Knowledge Proofs (ZKP) are computation- and data-intensive. FHE and ZKP implementations in software and hardware largely rely on the von Neumann architecture, where a significant amount of energy is lost on data movements. A promising computing paradigm is computing in memory (CIM), which enables computations to occur directly within memory, thereby reducing data movements and energy consumption. However, efficiently performing large integer multiplications - critical in FHE and ZKP - is an open question, as existing CIM methods are limited to small operand sizes. In this work, we address this question by exploring advanced algorithmic approaches for large integer multiplication, identifying the Karatsuba algorithm as the most effective for CIM applications. Thereafter, we design the first Karatsuba multiplier for resistive CIM crossbars. Our multiplier uses a three-stage pipeline to enhance throughput and, additionally, balances memory endurance with efficient array sizes. Compared to existing CIM multiplication methods, when scaled up to the bit widths required in ZKP and FHE, our design achieves improvements of up to 916x in throughput and 281x in area-time product.
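The Karatsuba recurrence the design builds on replaces four half-width multiplications with three, at the cost of a few extra additions and shifts. A textbook software sketch on plain Python integers (not the resistive crossbar mapping from the paper, and with an arbitrary base-case threshold) is:

```python
def karatsuba(x: int, y: int) -> int:
    """Karatsuba multiplication: three half-size products instead of four."""
    if x < 2**32 or y < 2**32:
        return x * y  # small enough: fall back to the native multiply
    n = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> n, x & ((1 << n) - 1)
    hi_y, lo_y = y >> n, y & ((1 << n) - 1)
    z0 = karatsuba(lo_x, lo_y)
    z2 = karatsuba(hi_x, hi_y)
    z1 = karatsuba(lo_x + hi_x, lo_y + hi_y) - z0 - z2
    return (z2 << (2 * n)) + (z1 << n) + z0

a, b = 3**300, 7**250
assert karatsuba(a, b) == a * b
```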
Ittai Abraham, Renas Bacho, Julian Loss, Gilad Stern
ePrint Report
We prove that for any $1\le k\le \log n$, given a VRF setup and assuming secure erasures, there exists a protocol for Asynchronous Distributed Key Generation (ADKG) that is resilient to a strongly adaptive adversary that can corrupt up to $f
Tanusree Sharma, Atm Mizanur Rahman, Silvia Sandhi, Yang Wang, Rifat Shahriyar, S M Taiabul Haque
ePrint Report
Cryptocurrency practices worldwide are seen as innovative, yet they navigate a fragmented regulatory environment. Many local authorities aim to balance promoting innovation, safeguarding consumers, and managing potential threats. In particular, it is unclear how people deal with cryptocurrencies in regions where trading or mining is prohibited. This insight is crucial for conveying risk-reduction strategies. To address this, we conducted semi-structured interviews with 28 cryptocurrency traders and miners from Bangladesh, where the local authority is hostile towards cryptocurrencies. Our research revealed that the participants use unique strategies to mitigate risks around cryptocurrencies. Our findings indicate a prevalent uncertainty at both personal and organizational levels concerning the interpretation of laws, a situation worsened by the actions of major financial service providers who indirectly facilitate cryptocurrency transactions. We further connect our findings to broader issues in HCI regarding folk models, informal markets and legality, and education and awareness.
Radhika Garg, Xiao Wang
ePrint Report
Secure multi-party computation (MPC) is a crucial tool for privacy-preserving computation, but it is getting increasingly complicated due to recent advancements and optimizations. Programming tools for MPC allow programmers to develop MPC applications without mastering all cryptography. However, most existing MPC programming tools fail to attract real users due to the lack of documentation, maintenance, and the ability to compose with legacy codebases. In this work, we build Smaug, a modular extension of LLVM. Smaug seamlessly brings all LLVM support to MPC programmers, including error messaging, documentation, code optimization, and frontend support to compile from various languages to LLVM intermediate representation (IR). Smaug can efficiently convert non-oblivious LLVM IR to their oblivious counterparts while applying popular optimizations as LLVM code transformations. With benchmarks written in C++ and Rust and backends for Yao and GMW protocols, we observe that Smaug performs as well as (and sometimes much better than) prior tools using domain-specific languages with similar backends. Finally, we use Smaug to compile open-source projects that implement Minesweeper and Blackjack, producing usable two-party games with ease.
Aditya Singh Rawat, Mahabir Prasad Jhanwar
ePrint Report
In classical DNSSEC, a drop-in replacement with quantum-safe cryptography would increase DNS query resolution times by $\textit{at least}$ a factor of $2\times$. Since a DNS response containing large post-quantum signatures is likely to get marked truncated ($\texttt{TC}$) by a nameserver (resulting in a wasted UDP round-trip), the client (here, the resolver) would have to retry its query over TCP, further incurring a $\textit{minimum}$ of two round-trips due to the three-way TCP handshake.

We present $\mathsf{TurboDNS}$: a backward-compatible protocol that eliminates $\textit{two}$ round-trips from the preceding flow by 1) sending TCP handshake data in the initial DNS/UDP flight itself, and 2) immediately streaming the DNS response over TCP after authenticating the client with a cryptographic cookie. Our experiments show that DNSSEC over $\mathsf{TurboDNS}$, with either Falcon-512 or Dilithium-2 as the zone signing algorithm, is practically as fast as the currently deployed ECDSA P-256 and RSA-2048 setups in resolving $\texttt{QTYPE}$ $\texttt{A}$ DNS queries.
Panagiotis Grontas, Aris Pagourtzis, Marianna Spyrakou
ePrint Report
We propose an e-voting protocol based on a novel linkable ring signature scheme with unconditional anonymity. In our system, all voters create private credentials and register their public counterparts. To vote, they create a ring (anonymity set) consisting of public credentials, together with a proof of knowledge of their secret credential via our signature. Its unconditional anonymity prevents an attacker, no matter how powerful, from deducing the identity of the voter, thus attaining everlasting privacy. Additionally, our protocol provides coercion resistance in the JCJ framework: when an adversary tries to coerce a voter, the attack can be evaded by creating a signature with a fake but indistinguishable credential, and the voter casts their real vote during a moment of privacy. Our scheme also provides verifiability and ballot secrecy.
Shweta Agrawal, Simran Kumari, Shota Yamada
ePrint Report
We provide the first attribute based encryption (ABE) scheme for Turing machines supporting unbounded collusions from lattice assumptions. In more detail, the encryptor encodes an attribute $\mathbf{x}$ together with a bound $t$ on the machine running time and a message $m$ into the ciphertext, the key generator embeds a Turing machine $M$ into the secret key, and decryption returns $m$ if and only if $M(\mathbf{x})=1$. Crucially, the input $\mathbf{x}$ and machine $M$ can be of unbounded size, the time bound $t$ can be chosen dynamically for each input, and decryption runs in input-specific time. Previously, the best known ABE for uniform computation supported only non-deterministic log space Turing machines (${\sf NL}$) from pairings (Lin and Luo, Eurocrypt 2020). In the post-quantum regime, the state of the art supports non-deterministic finite automata from LWE in the $\textit{symmetric}$-key setting (Agrawal, Maitra and Yamada, Crypto 2019).

In more detail, our results are:

1. We construct the first ABE for ${\sf NL}$ from the LWE, evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) and Tensor LWE (Wee, Eurocrypt 2022) assumptions. This yields the first (conjectured) post-quantum ABE for ${\sf NL}$.

2. Relying on LWE, evasive LWE and a new assumption called $\textit{circular tensor}$ LWE, we construct ABE for all Turing machines. At a high level, the circular tensor LWE assumption incorporates circularity into the tensor LWE (Wee, Eurocrypt 2022) assumption.

Towards our ABE for Turing machines, we obtain the first CP-ABE for circuits of unbounded depth and size from the same assumptions -- this may be of independent interest.