International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

02 July 2024

Divyanshu Bhardwaj, Sandhya Saravanan, Nishanth Chandran, Divya Gupta
ePrint Report
Decision trees are an important class of supervised learning algorithms. When multiple entities contribute data to train a decision tree (e.g., for fraud detection in the financial sector), data privacy concerns necessitate the use of a privacy-enhancing technology such as secure multi-party computation (MPC) in order to secure the underlying training data. The prior state of the art (Hamada et al.) constructs an MPC protocol for decision tree training with communication $\mathcal{O}(hmN\log N)$ when building a decision tree of height $h$ for a training dataset of $N$ samples, each having $m$ attributes.

In this work, we significantly reduce the communication complexity of secure decision tree training. We construct a protocol with communication complexity $\mathcal{O}(mN\log N + hmN + hN\log N)$, thereby achieving an improvement of $\approx \mathsf{min}(h, m, \log N)$ over Hamada et al. At the core of our technique is an improved protocol to regroup sorted private elements into additional groups (according to a flag vector) while maintaining their relative ordering. We implement our protocol in the MP-SPDZ framework and show that it requires $10\times$ less communication and is $9\times$ faster than the state of the art.
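As a rough illustration of the claimed asymptotic gain, the following sketch compares the two communication bounds for sample parameters (the parameter values are hypothetical, not taken from the paper):

```python
import math

def prior_comm(h, m, N):
    # Hamada et al.: O(h * m * N * log N)
    return h * m * N * math.log2(N)

def new_comm(h, m, N):
    # This work: O(m*N*log N + h*m*N + h*N*log N)
    return m * N * math.log2(N) + h * m * N + h * N * math.log2(N)

# Hypothetical parameters: tree height 8, 16 attributes, 2^20 samples.
h, m, N = 8, 16, 2**20
print(f"improvement factor ~ {prior_comm(h, m, N) / new_comm(h, m, N):.1f}")
print(f"min(h, m, log N)   = {min(h, m, math.log2(N)):.1f}")
```

The $\approx \mathsf{min}(h, m, \log N)$ bound is asymptotic, so the concrete ratio printed here differs from it by constant factors.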
Dag Arne Osvik, David Canright
ePrint Report
We reduce the number of bit operations required to implement AES to a new minimum, and also compute improvements to elements of some other ciphers. Exploring the algebra of AES allows choices of basis and streamlining of the nonlinear parts. We also compute a more efficient implementation of the linear part of each round. Similar computational optimizations apply to other cryptographic matrices and S-boxes. This work may be incorporated into a hardware AES implementation using minimal resources, or potentially in a bit-sliced software implementation to increase speed.
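For context, the algebraic structure that such basis choices exploit is standard: the AES S-box is inversion in $GF(2^8)$ followed by a fixed affine map over $GF(2)$. A minimal Python sketch of this decomposition (ours, not the paper's minimized circuit):

```python
# A standard decomposition (not the paper's optimized circuit):
# S-box(x) = affine(inverse(x)) over GF(2^8) mod x^8 + x^4 + x^3 + x + 1.

def gf_mul(a, b):
    # Carry-less multiply, reduced modulo the AES polynomial 0x11B.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gf_inv(a):
    # a^(2^8 - 2) = a^254 is the inverse; AES maps 0 to 0.
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r if a else 0

def sbox(x):
    y = gf_inv(x)
    res = 0x63  # affine constant
    for i in range(5):  # XOR of y rotated left by 0..4 bits
        res ^= ((y << i) | (y >> (8 - i))) & 0xFF
    return res

assert sbox(0x00) == 0x63 and sbox(0x01) == 0x7C
```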
Daniel Dore
ePrint Report
We present TaSSLE, a new lookup argument for decomposable tables with minimal commitment costs. The construction generalizes techniques introduced in Lasso (Eurocrypt '24), which take advantage of the internal structure present in such tables so that no party needs to commit to, or even construct, the entire table. This allows the use of lookups against very large tables, with applications including new design strategies for "zero-knowledge virtual machines". We show that these techniques may be combined in a generic way with any existing lookup argument to achieve similar results. We then give a construction of TaSSLE by applying this observation to a recent lookup argument, introduced in [Papini-Haböck '23], which combines logarithmic derivatives with the GKR protocol to achieve a lookup argument with minimal commitment costs.
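To make "decomposable" concrete, here is an illustrative example under our own assumptions (not TaSSLE itself): a 32-bit range-check table of size $2^{32}$ never needs to be materialized, since membership decomposes into two lookups of 16-bit limbs in a single $2^{16}$-entry subtable:

```python
# Hypothetical example: a 32-bit range check decomposes into two
# lookups of 16-bit limbs in a single 2^16-entry subtable, so the
# 2^32-entry table is never constructed.

SUBTABLE = set(range(2**16))  # the only table anyone materializes

def decomposed_range_check(v: int) -> bool:
    lo, hi = v & 0xFFFF, (v >> 16) & 0xFFFF
    return v >> 32 == 0 and lo in SUBTABLE and hi in SUBTABLE

assert decomposed_range_check(2**32 - 1)
assert not decomposed_range_check(2**32)
```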
Pierpaolo Della Monica, Ivan Visconti, Andrea Vitaletti, Marco Zecchini
ePrint Report
The Internet is full of images that are transformations (e.g., resize, blur) of confidential original images. Several scenarios (e.g., selling images over the Internet, fighting disinformation, detecting deep fakes) would highly benefit from systems allowing one to verify that an image is the result of a transformation applied to a confidential authentic image. In this paper, we focus on systems for proving and verifying the correctness of transformations of authentic images that guarantee: 1) confidentiality (i.e., the original image remains private), 2) efficient proof generation (i.e., the proof certifying the correctness of the transformation can be computed with a common laptop), even for high-resolution images, 3) authenticity (i.e., only the advertised transformations have been applied), and 4) fast detection of fraudulent proofs. Our contribution consists of the following results:
- We present new definitions, following in part those proposed by Naveh and Tromer [IEEE S&P 2016] and strengthening them to face more realistic adversaries.
- We propose techniques that leverage the way typical transformations work to efficiently instantiate zk-SNARKs, circumventing the major bottlenecks due to claims about large pre-images of cryptographic hashes.
- We present a first construction based on an ad-hoc signature scheme and an ad-hoc cryptographic hash function, obtaining all four properties above for the first time.
- We present a second construction that, unlike previous results, works with the signature scheme and cryptographic hash function included in the C2PA specifications.
Experimental results confirm the viability of our approach: with our first construction, a proof for an authentic transformation (e.g., a resize or a crop) of a high-resolution 30 MP image can be generated on a common 8-core PC in about 41 minutes, employing less than 4 GB of RAM. Our second construction is roughly one order of magnitude slower than the first. Prior results instead either require expensive computing resources or provide unsatisfying confidentiality.
Alvaro Revuelta, Sergei Tikhomirov, Aaryamann Challani, Hanno Cornelius, Simon Pierre Vivier
ePrint Report
Waku is a privacy-preserving, generalized, and decentralized messaging protocol suite. Waku uses GossipSub for message routing and Rate Limiting Nullifiers (RLN) for spam protection. GossipSub ensures fast and reliable peer-to-peer message delivery in a permissionless environment, while RLN enforces a common publishing rate limit using zero-knowledge proofs.

This paper presents a practical evaluation of message propagation latency in Waku. First, we estimate latencies analytically, building a simple mathematical model for latency under varying conditions. Second, we run a large-scale single-host simulation with 1000 nodes. Third, we set up a multi-host Waku deployment using five nodes in different locations across the world. Finally, we compare our analytical estimations to the results of the simulation and the real-world measurement.

The experimental results are in line with our theoretical model. Under realistic assumptions, medium-sized messages (25 KB) are delivered within 1 second. We conclude that Waku can achieve satisfactory latency for typical use cases, such as decentralized messengers, while providing scalability and anonymity.
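As a back-of-the-envelope sketch of this kind of latency model, one can estimate total latency as the number of gossip hops times per-hop delay plus per-hop serialization time. The hop-count heuristic, delay, and bandwidth figures below are our own simplified assumptions, not the authors' exact model:

```python
import math

def estimated_latency(n_nodes, degree, msg_bytes,
                      hop_delay_s=0.05, bandwidth_bps=50e6):
    # Assumed GossipSub behavior: messages reach the whole mesh in
    # roughly log_degree(n) hops; each hop adds propagation delay plus
    # the time to serialize the message onto the wire.
    hops = math.ceil(math.log(n_nodes, degree))
    per_hop = hop_delay_s + (msg_bytes * 8) / bandwidth_bps
    return hops * per_hop

# 1000 nodes, mesh degree 6, 25 KB message (as in the paper's setup):
# ~4 hops at ~54 ms each, i.e. about 0.2 s, consistent with sub-second delivery.
print(f"{estimated_latency(1000, 6, 25_000):.2f} s")
```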
Anubhab Baksi
ePrint Report
In this work, we take a look at two recently proposed block ciphers, DEFAULT and BAKSHEESH, both of which are descendants of another block cipher named GIFT. We show that both ciphers can be interpreted within the partial non-linear layer category, thanks to their S-boxes having at least one non-trivial linear structure. We also re-evaluate the security claim of DEFAULT.
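For readers unfamiliar with the property in question: $\delta$ is a non-trivial linear structure of an S-box $S$ if $S(x \oplus \delta) \oplus S(x)$ takes the same value for all $x$. A small checker is sketched below; the toy 4-bit S-box is our own example, not the DEFAULT or BAKSHEESH S-box:

```python
def linear_structures(sbox):
    # Return all non-trivial delta such that S(x ^ delta) ^ S(x)
    # is constant over the whole domain.
    n = len(sbox)
    found = []
    for delta in range(1, n):
        diffs = {sbox[x ^ delta] ^ sbox[x] for x in range(n)}
        if len(diffs) == 1:
            found.append((delta, diffs.pop()))
    return found

# Toy example: an affine 4-bit S-box has every delta as a linear structure.
toy = [x ^ 0b1001 for x in range(16)]
print(linear_structures(toy)[:3])  # [(1, 1), (2, 2), (3, 3)]
```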
Damien Robert
ePrint Report
We survey different (efficient or not) representations of isogenies, with a particular focus on the recent "higher dimensional" isogeny representation, and algorithms to manipulate them.
Santiago Arranz Olmos, Gilles Barthe, Chitchanok Chuengsatiansup, Benjamin Grégoire, Vincent Laporte, Tiago Oliveira, Peter Schwabe, Yuval Yarom, Zhiyuan Zhang
ePrint Report
It is fundamental that executing cryptographic software must not leak secrets through side channels. For software-visible side channels, it was long believed that "constant-time" programming would be sufficient as a systematic countermeasure. However, this belief was shattered in 2018 by attacks exploiting speculative execution—so-called Spectre attacks. Recent work shows that language support suffices to protect cryptographic code with minimal overhead against one class of such attacks, Spectre v1, but leaves open the question of whether this result can be extended to cover other classes of Spectre attacks. In this paper, we answer this question in the affirmative: we design, validate, implement, and verify an approach to protect cryptographic implementations against all known classes of Spectre attacks—the main challenge in this endeavor being attacks that exploit the return stack buffer, known as Spectre-RSB. Our approach combines a new value-dependent information-flow type system that enforces speculative constant-time in an idealized model of transient execution with a compiler transformation that realizes this idealized model on the generated low-level code. Using the Coq proof assistant, we prove that the type system is sound with respect to the idealized semantics and that the compiler transformation preserves speculative constant-time. We implement our approach in the Jasmin framework for high-assurance cryptography and demonstrate that the overhead incurred by full Spectre protections is below 2% for most cryptographic primitives and reaches only about 5–7% for the more complex post-quantum key-encapsulation mechanism Kyber.
Mukul Kulkarni, Keita Xagawa
ePrint Report
NIST began standardizing additional post-quantum signatures in 2022. Among the 40 candidates, a few have demonstrated security beyond existential unforgeability, namely strong existential unforgeability and the BUFF (beyond unforgeability features) securities. Recently, Aulbach, Düzlü, Meyer, Struck, and Weishäupl (PQCrypto 2024) examined the BUFF securities of 17 of the 40 candidates. Unfortunately, for the so-called MPC-in-the-Head (MPCitH) signature schemes, nothing was known about strong existential unforgeability or the BUFF securities.

This paper studies the strong securities of all nine MPCitH signature candidates: AIMer, Biscuit, FAEST, MIRA, MiRitH, MQOM, PERK, RYDE, and SDitH.

We show that the MPCitH signature schemes are strongly existentially unforgeable under chosen-message attacks in the (quantum) random oracle model. To do so, we introduce a new property of the underlying multi-pass identification scheme, which we call _non-divergency_. This property can be considered a weakened version of the computational unique response for three-pass identification defined by Kiltz, Lyubashevsky, and Schaffner (EUROCRYPT 2018) and its extension to multi-pass identification defined by Don, Fehr, and Majenz (CRYPTO 2020). In addition, we show that the SSH11 protocol proposed by Sakumoto, Shirai, and Hiwatari (CRYPTO 2011) is _not_ computational unique response, contrary to the claim of Don et al. (CRYPTO 2020).

We also survey the BUFF securities of the nine MPCitH candidates in the quantum random oracle model. In particular, we show that Biscuit and MiRitH lack some of the BUFF securities.
Shahriar Ebrahimi, Parisa Hassanizadeh
ePrint Report
Remote attestation (RA) protocols have been widely used to evaluate the integrity of software on remote devices. Currently, state-of-the-art RA protocols lack a crucial feature: transparency. This means that the details of the final attestation verification are not openly accessible or verifiable by the public. Furthermore, the interactivity of these protocols often limits attestation to trusted parties who possess privileged access to confidential device data, such as pre-shared keys and initial measurements. These constraints impede the widespread adoption of these protocols in various applications.

In this paper, we introduce zRA, a non-interactive, transparent, and publicly provable RA protocol based on zkSNARKs. zRA enables verification of device attestations without the need for pre-shared keys or access to confidential data, ensuring a trustless and open attestation process. This eliminates the reliance on online services or secure storage on the verifier side. Moreover, zRA does not impose any additional security assumptions beyond the fundamental cryptographic schemes and the essential trust-anchor components on the prover side (i.e., ROM and MPU). The zero-knowledge attestation proofs generated by devices have constant size regardless of the network complexity and number of attestations, and they do not reveal sensitive information about the internal state of the device, allowing verification by anyone in a public and auditable manner.

We conduct an extensive security analysis and demonstrate the scalability of zRA compared to prior work. Our analysis suggests that zRA excels especially in peer-to-peer and Pub/Sub network structures. To validate practicality, we implement an open-source prototype of zRA using the Circom language. We show that zRA can be securely deployed on public permissionless blockchains, serving as an archival platform for attestation data to achieve resilience against DoS attacks.
Guofeng Tang, Bo Pang, Long Chen, Zhenfeng Zhang
ePrint Report
A threshold signature scheme distributes the ability to generate signatures through distributed key generation and signing protocols. A threshold signature scheme should be functionally interchangeable, meaning that a signature produced by a threshold scheme should be verifiable by the same algorithm used for non-threshold signatures. To resist future attacks from quantum adversaries, lattice-based threshold signatures are desirable. However, the performance of existing lattice-based threshold signing protocols is still far from practical.

This paper presents the first lattice-based $t$-out-of-$n$ threshold signature scheme with functional interchangeability that has been implemented. To build a $t$-out-of-$n$ access structure for arbitrary $t \leq n$, we first present a novel $t$-out-of-$n$ version of the SPDZ MPC protocol. We avoid using the MPC protocol to evaluate hash operations, for high concrete efficiency. Moreover, we design an efficient distributed rejection sampling protocol. Consequently, the online phase of our distributed signing protocol takes only 0.5 seconds in the two-party setting and 7.3 seconds in the 12-party setting according to our implementation. As a byproduct, our scheme also provides a periodic key-refreshment mechanism and offers proactive security.
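A $t$-out-of-$n$ access structure is typically realized with Shamir secret sharing, sketched below over an illustrative prime field (this is the standard technique, not the paper's SPDZ-based protocol):

```python
import random

P = 2**61 - 1  # illustrative prime field (Mersenne prime)

def share(secret, t, n):
    # Degree-(t-1) polynomial with constant term = secret; share i is f(i).
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0; any t shares suffice.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(42, t=3, n=5)
assert reconstruct(shares[:3]) == 42
```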
Trisha Datta, Binyi Chen, Dan Boneh
ePrint Report
Verifying image provenance has become an important topic, especially in the realm of news media. To address this issue, the Coalition for Content Provenance and Authenticity (C2PA) developed a standard to verify image provenance that relies on digital signatures produced by cameras. However, photos are usually edited before being published, and a signature on an original photo cannot be verified given only the published edited image. In this work, we describe VerITAS, a system that uses zero-knowledge proofs (zk-SNARKs) to prove that only certain edits have been applied to a signed photo. While past work has created image editing proofs for photos, VerITAS is the first to do so for realistically large images (30 megapixels). Our key innovation enabling this leap is the design of a new proof system that enables proving knowledge of a valid signature on a large amount of witness data. We run experiments on realistically large images that are more than an order of magnitude larger than those tested in prior work. In the case of a computationally weak signer, such as a camera, we are able to generate proofs of valid edits for a 90 MB image in under an hour, costing about \$2.42 on AWS per image. In the case of a more powerful signer, we are able to generate proofs of valid edits for 90 MB images in under five minutes, costing about \$0.09 on AWS per image. Either way, proof verification time is about 2 seconds in the browser. Our techniques apply broadly whenever there is a need to prove that an efficient transformation was applied correctly to a large amount of signed private data.

01 July 2024

Truong Son Nguyen, Lun Wang, Evgenios M. Kornaropoulos, Ni Trieu
ePrint Report
Researchers across various fields seek to understand causal relationships but often find controlled experiments impractical. To address this, statistical tools for causal discovery from naturally observed data have become crucial. Non-linear regression models, such as Gaussian process regression, are commonly used in causal inference but have limitations due to their high cost when adapted for secure computation. Support vector regression (SVR) offers an alternative but remains costly in a multi-party computation (MPC) context due to conditional branches and support vector updates.

In this paper, we propose Aitia, the first two-party secure computation protocol for bivariate causal discovery. The protocol is based on optimized multi-party computation design choices and is secure in the semi-honest setting. At the core of our approach is BSGD-SVR, a new non-linear regression algorithm designed for MPC applications, achieving both high accuracy and low computation and communication costs. Specifically, we reduce the training complexity of the non-linear regression model from approximately $\mathcal{O}(N^3)$ to $\mathcal{O}(N^2)$, where $N$ is the number of training samples. We implement Aitia using CrypTen and assess its performance across various datasets. Empirical evaluations show a significant speedup of $3.6\times$ to $340\times$ compared to the baseline approach.
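As a plaintext sketch of the idea behind SGD-trained kernel SVR (our own illustration; the MPC-friendly specifics of BSGD-SVR are in the paper): stochastic updates against an epsilon-insensitive loss avoid ever solving the $\mathcal{O}(N^3)$ kernel system, since each update touches only the current support vectors:

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def sgd_kernel_svr_sketch(X, y, epochs=5, lr=0.1, eps=0.1, lam=0.01):
    # Online kernel SVR with epsilon-insensitive loss. Each update costs
    # O(#support vectors), so training is O(N^2) per epoch rather than
    # the O(N^3) of a direct kernel solve.
    sv, alpha = [], []
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            pred = sum(a * rbf(x_s, x_t) for x_s, a in zip(sv, alpha))
            alpha = [a * (1 - lr * lam) for a in alpha]  # regularization decay
            err = pred - y_t
            if abs(err) > eps:  # outside the eps-tube: add a support vector
                sv.append(x_t)
                alpha.append(-lr * np.sign(err))
    return sv, alpha

X = np.linspace(0, 1, 50).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()
sv, alpha = sgd_kernel_svr_sketch(X, y)
```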
Zhou Zhang, Song Bian, Zian Zhao, Ran Mao, Haoyi Zhou, Jiafeng Hua, Yier Jin, Zhenyu Guan
ePrint Report
Fully homomorphic encryption (FHE)-based database outsourcing is drawing growing research interest. At its current state, there exist two primary obstacles to FHE-based encrypted databases (EDBs): i) low data precision, and ii) high computational latency. To tackle this precision-performance dilemma, we introduce ArcEDB, a novel FHE-based SQL evaluation infrastructure that simultaneously achieves high data precision and fast query evaluation. Based on a set of new plaintext encoding schemes, we are able to execute arbitrary-precision ciphertext-to-ciphertext homomorphic comparisons orders of magnitude faster than existing methods. Meanwhile, we propose efficient conversion algorithms between the encoding schemes to support highly composite SQL statements, including advanced filter-aggregation and multi-column synchronized sorting. We perform comprehensive experiments to study the performance characteristics of ArcEDB. In particular, we show that ArcEDB can be up to $57\times$ faster in homomorphic filtering and up to $20\times$ faster over end-to-end SQL queries when compared to state-of-the-art FHE-based EDB solutions. Using ArcEDB, a SQL query over a 10K-row time-series EDB with 64-bit timestamps runs in under one minute.

30 June 2024

Stefan Dziembowski, Shahriar Ebrahimi, Parisa Hassanizadeh
ePrint Report
With the rise of generative AI technology, the media's credibility as a source of truth has been significantly compromised. This highlights the need to verify the authenticity of media and its originality. Ensuring the integrity of media during capture using the device itself presents a straightforward solution to this challenge. However, raw captured media often require certain refinements or redactions before publication. Zero-knowledge proofs (ZKP) offer a solution by allowing attestation of the correctness of specific transformations applied to an authorized image. While shown to be feasible, previous approaches faced challenges in practice due to their high prover complexity.

In this paper, we aim to develop a practical framework for efficiently proving the authenticity of HD and 4K images on commodity hardware. Our goal is to minimize prover complexity by utilizing folding-based zkSNARK techniques, resulting in VIMz, the first practical verifiable image manipulation system of this kind. VIMz leverages Nova's folding scheme to achieve low-complexity recursive zkSNARK proofs of authentic image manipulation. Our implementation results demonstrate a substantial reduction in prover complexity—up to a 3$\times$ speedup in time and a 96$\times$ reduction in memory (from 309 GB in [Kang et al., arXiv 2022] to only 3.2 GB). Moreover, the low memory consumption allows VIMz to prove the correctness of multiple chained transformations simultaneously, further increasing performance (up to 3.5$\times$). Additionally, we propose a trustless smart contract system that autonomously verifies proofs of media authenticity, achieving trustless copyright and ownership management in line with the standards of the Coalition for Content Provenance and Authenticity (C2PA). Such a system serves as a foundational infrastructure for constructing trustless media marketplaces with diverse applications.
Chandan Kumar, Sikhar Patranabis, Debdeep Mukhopadhyay
ePrint Report
We present a variant of Function Secret Sharing (FSS) schemes tailored for point, comparison, and interval functions, featuring compact key sizes at the expense of an additional comparison operation. While existing FSS constructions are primarily geared towards $2$-party scenarios, exceptions such as the work by Boyle et al. (Eurocrypt 2015) and Riposte (S&P 2015) have introduced FSS schemes for $p$-party scenarios ($p \geq 3$). This paper aims to achieve the most compact $p$-party FSS key size to date. We achieve a noteworthy reduction in key size: a $2^p$-factor decrease compared to state-of-the-art FSS constructions (including computationally efficient constructions using symmetric-key primitives) of the distributed point function (DPF). Compared to the previous public-key-based FSS design for DPF, we also obtain a key-size reduction equal to a $2^{n/2}$-sized row vector, where $2^n$ is the domain size of the point function. This reduction in key size comes at the cost of a comparison operation required by the decoder (hence called a non-linear decoder), a departure from prior schemes. In $p$-party scenarios, our construction outperforms existing FSS constructions in key size, remaining on par with Riposte in evaluation time and showing significant improvement over Boyle et al.

In addition to constructing FSS for distributed point functions (DPF), we extend our approach to distributed comparison and interval functions, achieving the most efficient key size to date. Our distributed comparison function exhibits a key-size reduction by a factor of $q^{p-1}$, where $q$ denotes the size of the algebraic group used in the scheme's construction. The reduced key size of the comparison function has practical implications, particularly in applications like privacy-preserving machine learning (PPML), where thousands of comparison functions are employed in each neural network layer. To demonstrate the effectiveness of our improvements, we design and prototype-implement a scalable privacy-preserving framework for neural networks over distributed models. Specifically, we implement a distributed rectified linear unit (ReLU) activation function using our distributed comparison function, showcasing the efficacy of our proposed scheme.
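As background on what a distributed point function provides, consider a deliberately naive two-party sketch whose keys have size $O(2^n)$, the very cost that real constructions (and this paper) aim to compress: two keys that individually look random but whose pointwise XOR is the point function $f_{\alpha,\beta}$:

```python
import secrets

def naive_dpf_gen(alpha, beta, domain_bits=8):
    # Two keys whose pointwise XOR is the point function f_{alpha,beta}.
    # Key size is O(2^n) here; real DPF constructions compress this.
    n = 2 ** domain_bits
    k0 = [secrets.randbelow(256) for _ in range(n)]
    k1 = list(k0)
    k1[alpha] ^= beta
    return k0, k1

def dpf_eval(key, x):
    return key[x]

k0, k1 = naive_dpf_gen(alpha=42, beta=7)
assert all((dpf_eval(k0, x) ^ dpf_eval(k1, x)) == (7 if x == 42 else 0)
           for x in range(256))
```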
Sanjam Garg, Aniket Kate, Pratyay Mukherjee, Rohit Sinha, Sriram Sridhar
ePrint Report
We develop a distributed service for generating correlated randomness (e.g., permutations) for multiple parties, where each party's output is private but publicly verifiable. This service provides users with a low-cost way to play online poker in real time, without a trusted party. Our service is backed by a committee of compute providers, who run a multi-party computation (MPC) protocol to produce an (identity-based) encrypted permutation of a deck of cards in an offline phase, well ahead of when the players' identities are known. When the players join, in what we call the online phase, they decrypt their designated cards immediately after deriving the identity-based decryption keys, a much simpler computation. In addition, the MPC protocol generates a publicly verifiable proof that the output is a permutation.

In our construction, we introduce a new notion of succinctly verifiable multi-identity-based encryption (SVME), which extends the existing notion of verifiable encryption to a multi-identity-based setting but with a constant-sized proof – this may be of independent interest. We instantiate this for a permutation relation (defined over a small set) along with identity-based encryption, polynomial commitments, and succinct proofs; our choices are made to enable a distributed computation where the card deck is always secret-shared. Moreover, we design a new protocol to efficiently generate a secret sharing of a random permutation of a small set, which is run prior to the distributed SVME. Running these protocols offline simplifies the online phase substantially, as parties only derive their identity-specific keys privately via secure channels with the MPC committee and then decrypt locally to obtain their decks.

We provide a rigorous UC-based formalization in a highly modularized fashion. Finally, we demonstrate practicality with an implementation showing that, for 8 MPC parties, generating a secret publicly verifiable permutation of 64 cards takes under 3 seconds, while accessing cards for a player takes under 0.3 seconds.
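One standard way to obtain a random permutation that no single party knows is to compose per-party random shuffles; the result is uniform as long as at least one contributor samples honestly. The sketch below illustrates only this general idea, not the paper's secret-shared, publicly verifiable protocol:

```python
import random

def compose_permutations(n_parties, deck_size=64):
    # Each party contributes a private uniformly random permutation;
    # the composition is uniform if at least one party is honest.
    deck = list(range(deck_size))
    for _ in range(n_parties):
        local = list(range(deck_size))
        random.shuffle(local)          # this party's private permutation
        deck = [deck[i] for i in local]
    return deck

print(compose_permutations(8)[:8])
```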
Joseph Johnston
ePrint Report
Interactive proofs and arguments of knowledge can be generalized to the concept of interactive reductions of knowledge, where proving knowledge of a witness for one NP language is reduced to proving knowledge of a witness for another NP language. We take this generalization and specialize it to a class of reductions we refer to as "quirky interactive reductions of knowledge" (or QUIRKs). This name reflects our particular design choices within the broad and diverse world of interactive reduction methods. A central design choice is allowing the prover to rewind or regress to any previous reduction and repeat it as many times as desired. We prove completeness and extractability properties for QUIRKs. We also offer tools for constructing extraction algorithms along with several simple examples of usage.
Jonathan Takeshita, Nirajan Koirala, Colin McKechney, Taeho Jung
ePrint Report
Fully Homomorphic Encryption (FHE) allows computation on encrypted data. Various software libraries have implemented the approximate-arithmetic FHE scheme CKKS, which is highly useful for applications in machine learning and data analytics; each of these libraries has differing performance and features. It is useful for developers and researchers to learn details about these libraries' performance and their differences. Some previous work has profiled FHE and CKKS implementations for this purpose, but these comparisons are limited in their fairness and completeness.

In this article, we compare four major libraries supporting the CKKS scheme. Working with the maintainers of each of the PALISADE, Microsoft SEAL, HElib, and HEAAN libraries, we devise methods for fair comparisons of these libraries, even with their widely varied development strategies and library architectures. To show the practical performance of these libraries, we present HEProfiler, a simple and extensible framework for profiling C++ FHE libraries. Our experimental evaluation is complete in both the scope of tasks tested and the metrics evaluated, allowing us to draw conclusions about the behavior of different libraries under a wide range of real-world workloads. This is the first work giving experimental comparisons of different bootstrapping-capable CKKS libraries.
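A generic timing harness in the spirit of such profiling might look as follows (a minimal sketch of ours; HEProfiler itself is a C++ framework, and no real library API is reproduced here):

```python
import statistics
import time

def profile(op, *args, runs=10):
    # Time an operation over several runs and report the median,
    # mitigating warm-up effects and scheduling noise.
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        op(*args)
        times.append(time.perf_counter() - start)
    return statistics.median(times)

# Usage with any callable, e.g. a CKKS multiply bound from some library
# (ckks_multiply, ct_a, ct_b are hypothetical placeholders):
# median_s = profile(ckks_multiply, ct_a, ct_b)
```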
Matteo Campanelli, Dario Fiore, Rosario Gennaro
ePrint Report
Lookup arguments allow an untrusted prover to commit to a vector $\vec f \in \mathbb{F}^n$ and show that its entries reside in a predetermined table $\vec t \in \mathbb{F}^N$. One of their key applications is to augment general-purpose SNARKs, making them more efficient on subcomputations that are hard to arithmetize. In order for this "augmentation" to work, a SNARK and a lookup argument should have some basic level of compatibility with respect to the commitment on $\vec f$. However, not all existing efficient lookup arguments are fully compatible with other efficient general-purpose SNARKs. This incompatibility can occur, for example, whenever SNARKs use multilinear extensions under the hood (e.g., Spartan) but the lookup argument is univariate in flavor (e.g., Caulk or $\mathsf{cq}$).

In this paper we discuss how to widen the spectrum of "super-efficient" lookup arguments (where the proving time is independent of the size of the lookup table): we present a new construction inspired by $\mathsf{cq}$ and based on multilinear polynomial encodings (MLE). Our construction is the first lookup argument for any table that is also natively compatible with MLE-based SNARKs at costs comparable with other state-of-the-art lookup arguments, particularly when the large table is unstructured. This case arises in various applications, such as using lookups to prove that the program in a virtual machine is fetching the right instruction, and when proving the correct computation of floating-point arithmetic (e.g., in verifiable machine learning).

We also introduce a second more general construction: a compiler that, given any super-efficient lookup argument compatible with univariate SNARKs, converts it into a lookup argument compatible with MLE-based SNARKs with a very small overhead. Finally, we discuss SNARKs that we can compose with our constructions as well as approaches for this composition to work effectively.
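For reference, the multilinear extension at the heart of such encodings is the unique multilinear polynomial that agrees with a table $f$ on the Boolean hypercube: $\tilde f(x) = \sum_{b \in \{0,1\}^k} f(b) \prod_i \big(x_i b_i + (1-x_i)(1-b_i)\big)$. A direct, non-optimized evaluation over an illustrative prime field of our choosing:

```python
P = 2**61 - 1  # illustrative prime field

def mle_eval(f, x):
    # Evaluate the multilinear extension of table f (length 2^k) at point x.
    total = 0
    for b in range(len(f)):
        weight = 1
        for i, xi in enumerate(x):
            bit = (b >> i) & 1
            weight = weight * (xi if bit else (1 - xi)) % P
        total = (total + f[b] * weight) % P
    return total

f = [3, 1, 4, 1]                      # k = 2, so f is indexed by {0,1}^2
assert mle_eval(f, [0, 1]) == f[2]    # (x_0, x_1) = (0, 1) selects index 0b10
```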