International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

12 June 2023

Emre Karabulut, Aydin Aysu
ePrint Report
Sampling random values from a discrete Gaussian distribution with high precision is a major and computationally intensive operation of upcoming or existing cryptographic standards. FALCON is one such algorithm that the National Institute of Standards and Technology chose to standardize as a next-generation, quantum-secure digital signature algorithm. The discrete Gaussian sampling of FALCON has both flexibility and efficiency needs—it constitutes 72% of total signature generation in reference software and requires sampling from a variable mean and standard deviation. Unfortunately, there are no prior works on accelerating this complete sampling procedure. In this paper, we propose a hardware-software co-design for accelerating FALCON’s discrete Gaussian sampling subroutine. The proposed solution handles the flexible computations for setting the variable parameters in software and executes core operations with low latency, parameterized, and custom hardware. The hardware parameterization allows trading off area vs. performance. On a Xilinx SoC FPGA Architecture, the results show that compared to the reference software, our solution can accelerate the sampling up to 9.83× and the full signature scheme by 2.7×. Moreover, we quantified that our optimized multiplier circuits can improve the throughput over a straightforward implementation by 60%.
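The kind of computation being accelerated can be sketched with a plain rejection sampler over the integers for a variable mean and standard deviation. This is only an illustrative sketch with an assumed tail-cut parameter; FALCON's actual sampler (SamplerZ) is constant-time and far more efficient, and none of the names below come from its reference code.

```python
import math, random

def sample_discrete_gaussian(mu, sigma, tail=10):
    """Rejection-sample z ~ D_{Z, mu, sigma}: draw candidates uniformly
    from mu +/- tail*sigma and accept with probability
    exp(-(z - mu)^2 / (2 sigma^2))."""
    lo = math.floor(mu - tail * sigma)
    hi = math.ceil(mu + tail * sigma)
    while True:
        z = random.randint(lo, hi)
        if random.random() < math.exp(-((z - mu) ** 2) / (2 * sigma ** 2)):
            return z

# variable (mu, sigma), as FALCON's signing requires
samples = [sample_discrete_gaussian(0.37, 1.5) for _ in range(10000)]
mean = sum(samples) / len(samples)
```

The rejection loop is exactly the kind of data-dependent control flow that is awkward in software and motivates a parameterized hardware sampler.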
Michael Raymond, Gillian Evers, Jan Ponti, Diya Krishnan, Xiang Fu
ePrint Report
A succinct zero-knowledge proof of regular language membership, i.e., a proof that a secret string behind an encryption (or hash) belongs to a regular language, is useful, e.g., for asserting that an encrypted email is free of malware. The great challenge in practice is that the regular language used is often huge. We present zkreg, a distributed commit-and-prove system that handles such complexity. In zkreg, cryptographic operations are encoded using arithmetic circuits, and input acceptance is modeled as a zero-knowledge subset problem using Σ-protocols. We introduce a Feedback Commit-and-Prove (FB-CP) scheme, which connects Σ-protocols and the Groth16 system with O(1) proof size and verifier cost. We present a close-to-optimal univariate instantiation of zk-VPD, a zero-knowledge variation of the KZG polynomial commitment scheme, based on which an efficient zk-subset protocol is developed. We develop a 2-phase proof scheme to further exploit the locality of Aho-Corasick automata. To demonstrate the performance and scalability of zkreg, we prove that all ELF files (encrypted and hashed) in a Linux CentOS 7 installation are malware free. Applying the inner pairing product argument, we obtain an aggregated proof of 1.96 MB which can be verified in 6.5 seconds.
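The underlying statement, membership of a string in a regular language, amounts to acceptance by a finite automaton. The sketch below shows that plain acceptance relation on a toy DFA with a hypothetical transition-table encoding; it is nothing zero-knowledge or Aho-Corasick-specific, just the relation being proven.

```python
def dfa_accepts(delta, start, accepting, s):
    """Simulate a DFA given as a dict delta[(state, symbol)] -> state;
    a missing transition rejects."""
    state = start
    for ch in s:
        if (state, ch) not in delta:
            return False
        state = delta[(state, ch)]
    return state in accepting

# toy language over {a, b}: strings containing the substring "ab"
delta = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 2, (2, 'b'): 2,
}
```

zkreg's challenge is proving this acceptance for an enormous automaton without revealing the string.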
Hoeteck Wee
ePrint Report
We present a new, simple candidate broadcast encryption scheme for $N$ users with parameter size poly$(\log N)$. We prove security of our scheme under a non-standard variant of the LWE assumption where the distinguisher additionally receives short Gaussian pre-images, while avoiding zeroizing attacks. This yields the first candidate optimal broadcast encryption that is plausibly post-quantum secure, and enjoys a security reduction to a simple assumption. As a secondary contribution, we present a candidate ciphertext-policy attribute-based encryption (CP-ABE) scheme for circuits of a-priori bounded polynomial depth where the parameter size is independent of the circuit size, and prove security under an additional non-standard assumption.
Sanjam Garg, Aarushi Goel, Abhishek Jain, Guru-Vamsi Policharla, Sruthi Sekar
ePrint Report
A decade of active research has led to practical constructions of zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs) that are now being used in a wide variety of applications. Despite this astonishing progress, overheads in proof generation time remain significant.

In this work, we envision a world where consumers with low computational resources can outsource the task of proof generation to a group of untrusted servers in a privacy-preserving manner. The main requirement is that these servers should be able to collectively generate proofs at a faster speed (than the consumer). Towards this goal, we introduce a framework called zk-SNARKs-as-a-service ($\mathsf{zkSaaS}$) for faster computation of zk-SNARKs. Our framework allows for distributing proof computation across multiple servers such that each server is expected to run for a shorter duration than a single prover. Moreover, the privacy of the prover's witness is ensured against any minority of colluding servers.

We design custom protocols in this framework that can be used to obtain faster runtimes for widely used zk-SNARKs, such as Groth16 [EUROCRYPT 2016], Marlin [EUROCRYPT 2020], and Plonk [EPRINT 2019]. We implement a proof-of-concept zkSaaS for the Groth16 and Plonk provers. In comparison to generating these proofs on commodity hardware, we show that not only can we generate proofs for a larger number of constraints (without memory exhaustion), but we also get a $\approx 22\times$ speed-up when run with 128 parties for $2^{25}$ constraints with Groth16 and $2^{21}$ gates with Plonk.
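The basic privacy mechanism behind such outsourcing can be illustrated with additive secret sharing: the witness is split so that any strict subset of servers learns nothing. A minimal sketch under an assumed toy prime modulus; zkSaaS itself uses more sophisticated (e.g., packed) sharing together with distributed protocols for the heavy prover computations.

```python
import secrets

P = (1 << 61) - 1   # illustrative prime field (a Mersenne prime)

def share(x, n):
    """Split x into n additive shares mod P; any n-1 of them are
    uniformly random and reveal nothing about x."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

witness = 123456789
shards = share(witness, 128)   # one shard per server, as in the 128-party runs
```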
Prabhanjan Ananth, Yao-Ting Lin, Henry Yuen
ePrint Report
A fundamental result in classical cryptography is that pseudorandom generators are equivalent to one-way functions and in fact implied by nearly every classical cryptographic primitive requiring computational assumptions. In this work, we consider a variant of pseudorandom generators called quantum pseudorandom generators (QPRGs), which are quantum algorithms that (pseudo)deterministically map short random seeds to long pseudorandom strings. We provide evidence that QPRGs can be as useful as PRGs by providing cryptographic applications of QPRGs such as commitments and encryption schemes.

Our main result is showing that QPRGs can be constructed assuming the existence of logarithmic-length quantum pseudorandom states. This raises the possibility of basing QPRGs on assumptions weaker than one-way functions. We also consider quantum pseudorandom functions (QPRFs) and show that QPRFs can be based on the existence of logarithmic-length pseudorandom function-like states.

Our primary technical contribution is a method for pseudodeterministically extracting uniformly random strings from Haar-random states.
Alexander Bienstock, Sarvar Patel, Joon Young Seo, Kevin Yeo
ePrint Report
In this paper, we study oblivious key-value stores (OKVS) that enable encoding $n$ key-value pairs into length-$m$ encodings while hiding the input keys. The goal is to obtain a high rate, $n/m$, with efficient encoding and decoding algorithms. We present $\mathsf{RB\text{-}OKVS}$, built on random band matrices, which obtains near-optimal rates as high as 0.97, whereas prior works could only achieve rates up to 0.81 with similar encoding times.
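To illustrate the OKVS idea, here is a toy encoder over GF(2) using random band rows: each key hashes to a short band of random bits, and encoding solves the resulting sparse linear system. This is a hedged sketch with assumed parameters (integer keys, XOR-shared byte values, a low rate of 1/3); it is not the $\mathsf{RB\text{-}OKVS}$ construction or its parameterization.

```python
import random

def band_row(key, m, w, seed=0):
    """Hash an (integer) key to a width-w band of random bits at a random
    offset -- the row of the linear system associated with this key."""
    rng = random.Random(key * 1000003 + seed)
    off = rng.randrange(m - w + 1)
    bits = rng.getrandbits(w) | 1          # keep the band nonzero
    return bits << off                     # row encoded as an m-bit mask

def encode(pairs, m, w, seed=0):
    """Solve row(k) . s = v over GF(2) for all (k, v); values XOR-combine."""
    pivots = {}                            # leading column -> (row, value)
    for k, v in pairs:
        r = band_row(k, m, w, seed)
        while r:
            col = r.bit_length() - 1
            if col not in pivots:
                pivots[col] = (r, v)
                break
            pr, pv = pivots[col]
            r ^= pr
            v ^= pv
        else:
            if v:                          # inconsistent system
                raise ValueError("encoding failed; retry with a new seed")
    s = [0] * m                            # back-substitute; free slots stay 0
    for col in sorted(pivots):
        r, v = pivots[col]
        rest = r ^ (1 << col)
        while rest:
            c = rest.bit_length() - 1
            v ^= s[c]
            rest ^= 1 << c
        s[col] = v
    return s

def decode(s, key, m, w, seed=0):
    """Decoding is just the sparse dot product row(key) . s."""
    r = band_row(key, m, w, seed)
    out = 0
    while r:
        c = r.bit_length() - 1
        out ^= s[c]
        r ^= 1 << c
    return out

# toy usage: 40 byte-valued pairs into m = 120 slots (rate 1/3)
pairs = [(k, (k * 73 + 5) % 256) for k in range(40)]
m, w = 120, 24
s, ok_seed = None, None
for trial in range(20):                    # rare unsolvable systems: reseed
    try:
        s, ok_seed = encode(pairs, m, w, trial), trial
        break
    except ValueError:
        pass
```

The band structure is what makes elimination cheap; pushing the rate toward 0.97 while keeping encoding fast is the paper's contribution.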

Using $\mathsf{RB\text{-}OKVS}$, we obtain state-of-the-art protocols for private set intersection (PSI) and union (PSU). Our semi-honest PSI has up to 12% smaller communication and 13% reductions in monetary cost with slightly larger computation. We also obtain similar improvements for both malicious and circuit PSI. For PSU, our protocol obtains improvements of up to 22% in communication, 40% in computation and 21% in monetary cost. In general, we obtain the most communication- and cost-efficient protocols for all the above primitives.

Finally, we present the first connection between OKVS and volume-hiding encrypted multi-maps (VH-EMM) where the goal is to outsource storage of multi-maps while hiding the number of values associated with each key (i.e., volume). We present $\mathsf{RB\text{-}MM}$ with 16% smaller storage, 5x faster queries and 8x faster setup than prior works.
Arka Rai Choudhuri, Sanjam Garg, Aarushi Goel, Sruthi Sekar, Rohit Sinha
ePrint Report
We propose $\mathcal{S}\mathfrak{ublon}\mathcal{K}$, a new zero-knowledge succinct non-interactive argument of knowledge (zkSNARK). $\mathcal{S}\mathfrak{ublon}\mathcal{K}$ builds on $\mathcal{P}\mathfrak{lon}\mathcal{K}$ [EPRINT'19], a popular state-of-the-art practical zkSNARK. Our new construction preserves all the great features of $\mathcal{P}\mathfrak{lon}\mathcal{K}$, i.e., it supports constant-size proofs, constant-time proof verification, a circuit-independent universal setup, as well as support for custom gates and lookup gates. Moreover, $\mathcal{S}\mathfrak{ublon}\mathcal{K}$ achieves improved prover running time over $\mathcal{P}\mathfrak{lon}\mathcal{K}$. In $\mathcal{P}\mathfrak{lon}\mathcal{K}$, the prover runtime grows with circuit size. Instead, in $\mathcal{S}\mathfrak{ublon}\mathcal{K}$, the prover runtime grows with the size of the "active part" of the circuit. For instance, consider circuits encoding conditional execution, where only a fraction of the circuit is exercised by the input. For such circuits, the prover runtime in $\mathcal{S}\mathfrak{ublon}\mathcal{K}$ grows only with the exercised execution path.

As an example, consider the zkRollup circuit. This circuit involves executing one of $n$ code segments $k$ times. For this case, using $\mathcal{P}\mathfrak{lon}\mathcal{K}$ involves proving a circuit of size $n\cdot k$ code segments. In $\mathcal{S}\mathfrak{ublon}\mathcal{K}$, the prover costs are close to proving a $\mathcal{P}\mathfrak{lon}\mathcal{K}$ proof for a circuit of size roughly $k$ code segments. Concretely, based on our implementation, for parameter choices derived from rollup contracts on Ethereum, $n =8, k = \{2^{10}\ldots 2^{16}\}$, the $\mathcal{S}\mathfrak{ublon}\mathcal{K}$ prover is approximately $4.6\times$ faster than the $\mathcal{P}\mathfrak{lon}\mathcal{K}$ prover. Proofs in $\mathcal{S}\mathfrak{ublon}\mathcal{K}$ are 2.4KB, and can be verified in under 50ms.
Aarushi Goel, Mathias Hall-Andersen, Aditya Hegde, Abhishek Jain
ePrint Report
We study secure multi-party computation (MPC) protocols for branching circuits that contain multiple sub-circuits (i.e., branches) and the output of the circuit is that of a single active branch. Crucially, the identity of the active branch must remain hidden from the protocol participants.

While such circuits can be securely computed by evaluating each branch and then multiplexing the output, such an approach incurs a communication cost linear in the size of the entire circuit. To alleviate this, a series of recent works have investigated the problem of reducing the communication cost of branching executions inside MPC (without relying on fully homomorphic encryption). Most notably, the stacked garbling paradigm [Heath and Kolesnikov, CRYPTO'20] yields garbled circuits for branching circuits whose size only depends on the size of the largest branch. Presently, however, it is not known how to obtain similar communication improvements for secure computation involving more than two parties.

In this work, we provide a generic framework for branching multi-party computation that supports any number of parties. The communication complexity of our scheme is proportional to the size of the largest branch and the computation is linear in the size of the entire circuit. We provide an implementation and benchmarks to demonstrate practicality of our approach.
Anindya Bhandari, Allison Bishop
ePrint Report
In this paper, we pose the problem of defining technical privacy guarantees for anonymizing stories. This is a challenge that arises in practice in contexts like journalism and memoir writing, and can be crucial in determining whether a story gets told and how effectively it is presented. While recent decades have seen leaps and bounds in the development of approaches like differential privacy for numerical and categorical data sets, none of these techniques are directly applicable to settings where narrative structure is crucial to preserve. Here we begin an exploration of what kind of definitions could be achievable in such settings, and discuss the building blocks and interdisciplinary collaborations that could shape new solutions. Having scientific guarantees for anonymity in narrative forms could have far-reaching effects: in the peace of mind of potential tellers and subjects of stories, as well as in the legal sphere. This could ultimately expand the kinds of stories that can be told.
Hien Chu, Paul Gerhart, Tim Ruffing, Dominique Schröder
ePrint Report
Threshold signatures are digital signature schemes in which a set of $n$ signers specify a threshold $t$ such that any subset of size $t$ is authorized to produce signatures on behalf of the group. There has recently been a renewed interest in this primitive, largely driven by the need to secure highly valuable signing keys, e.g., DNSSEC keys or keys protecting digital wallets in the cryptocurrency ecosystem. Of special interest is FROST, a practical Schnorr threshold signature scheme, which is currently undergoing standardization in the IETF and whose security was recently analyzed at CRYPTO'22.

We continue this line of research by focusing on FROST's unforgeability when combined with a practical distributed key generation (DKG) algorithm. Existing proofs of this setup either use non-standard heuristics, idealized group models like the AGM, or idealized key generation. Moreover, existing proofs do not consider all practically relevant optimizations that have been proposed. We close this gap between theory and practice by presenting the Schnorr threshold signature scheme Olaf, which combines the most efficient known FROST variant, FROST3, with a variant of Pedersen's DKG protocol (as commonly used for FROST), and by proving its unforgeability. Our proof relies on the AOMDL assumption (a weaker and falsifiable variant of the OMDL assumption) and, like proofs of regular Schnorr signatures, on the random oracle model.
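The threshold structure can be illustrated with Shamir sharing and Lagrange recombination modulo a prime group order. In FROST the secret key is never reconstructed; the same Lagrange coefficients are instead applied to partial signatures. A sketch with an illustrative modulus, not Olaf's actual DKG or signing protocol.

```python
import random

Q = (1 << 127) - 1   # illustrative prime "group order" (a Mersenne prime)

def shamir_share(secret, t, n):
    """Degree-(t-1) polynomial sharing: any t of the n shares determine
    the secret; fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(Q) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod Q
            acc = (acc * x + c) % Q
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def lagrange_at_zero(points):
    """Recombine t shares at x = 0. FROST applies these same Lagrange
    coefficients to partial signatures, so the key is never reassembled."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % Q
                den = den * (xi - xj) % Q
        total = (total + yi * num * pow(den, -1, Q)) % Q
    return total

shares = shamir_share(0xDEADBEEF, t=3, n=5)
```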
Céline Chevalier, Guirec Lebrun, Ange Martinelli
ePrint Report
Hybrid post-quantum cryptography is a cautious approach that aims to guard against the threat posed by quantum computers, through the simultaneous use of Post-Quantum (PQ) and classical (i.e., pre-quantum) cryptosystems, in case the post-quantum schemes in use turn out to be insecure.

Regarding the hybridization of Key Encapsulation Mechanisms (KEMs), most recent studies focus on how to safely combine the symmetric keys output by a parallel execution of classical and post-quantum KEMs. As simple as this architecture is, it however appears not to be the most efficient, computationally speaking as well as regarding the bandwidth of the exchanges.

Hence, we propose a new method to hybridize several KEMs more effectively, by combining the underlying Public Key Encryption schemes (PKEs) in an innovative variant of the cascade composition that we call "leaking-cascade". We prove that this architecture constitutes an IND-CPA-secure robust combiner for the encryption schemes, which makes it possible to create an IND-CCA2 KEM upon the generated hybrid PKE. The leaking-cascade is at least as computationally effective as the commonly used parallel combination and, for the ciphertexts produced, offers a bandwidth gain that may exceed 13% compared to the latter.
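For contrast, the commonly used parallel combiner that the leaking-cascade improves upon can be sketched as hashing both session keys (and both ciphertexts, for context binding) into one hybrid key. The toy KEM below is a placeholder rather than a real scheme, and the exact key-derivation format is an assumption for illustration.

```python
import hashlib, secrets

def toy_kem_encaps():
    """Placeholder KEM: returns (ciphertext, shared_key). A real hybrid
    would run, e.g., an elliptic-curve KEM and a lattice KEM here."""
    k = secrets.token_bytes(32)
    return k.hex().encode(), k             # "ciphertext" is a stand-in

def parallel_combine(k_classical, k_pq, ct_classical, ct_pq):
    """Parallel combiner: hash both session keys together with both
    ciphertexts (length-prefixed, to bind the context unambiguously)."""
    h = hashlib.sha3_256()
    for part in (k_classical, k_pq, ct_classical, ct_pq):
        h.update(len(part).to_bytes(4, "big") + part)
    return h.digest()

ct_c, k_c = toy_kem_encaps()               # classical leg
ct_q, k_q = toy_kem_encaps()               # post-quantum leg
hybrid = parallel_combine(k_c, k_q, ct_c, ct_q)
```

Both KEMs run in full and both ciphertexts are sent, which is exactly the bandwidth cost the leaking-cascade aims to reduce.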
Emanuele Giunta
ePrint Report
Non-Interactive Zero-Knowledge proofs (NIZK) allow a prover to convince a verifier that a statement is true by sending only one message and without conveying any other information. In the CRS model, many instantiations have been proposed from group-theoretic assumptions. On the one hand, some of these constructions use the group structure in a black-box way but rely on pairings, an example being the celebrated Groth-Sahai proof system. On the other hand, a recent line of research realized NIZKs from sub-exponential DDH in pairing-free groups using Correlation Intractable Hash functions, but at the price of making non black-box usage of the group.

As of today no construction is known to simultaneously reduce its security to pairing-free group problems and to use the underlying group in a black-box way.

This is indeed not a coincidence: in this paper, we prove that for a large class of NIZKs, either the pairing-free group is used in a non-black-box way by relying on element representations, or security reduces to external hardness assumptions. More specifically, our impossibility applies to two incomparable cases. The first covers Arguments of Knowledge (AoK), which prove that a preimage under a given one-way function is known. The second covers NIZKs (not necessarily AoK) for hard subset problems, which capture relations such as DDH, Decision-Linear, and Matrix-DDH.
Jean-Sébastien Coron, François Gérard, Matthias Trannoy, Rina Zeitoun
ePrint Report
We present novel and improved high-order masking gadgets for Dilithium, a post-quantum signature scheme that has been standardized by the National Institute of Standards and Technology (NIST). Our proposed gadgets include the ShiftMod gadget, which is used for efficient arithmetic shifts and serves as a component in other masking gadgets. Additionally, we propose a new algorithm for Boolean-to-arithmetic masking conversion of a $\mu$-bit integer $x$ modulo any integer $q$, with a complexity that is independent of both $\mu$ and $q$. This algorithm is used in Dilithium to mask the generation of the random variable $y$ modulo $q$. Moreover, we describe improved techniques for masking the Decompose function in Dilithium. Our new gadgets are proven to be secure in the $t$-probing model.
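The arithmetic sharing such gadgets operate on can be sketched as follows: a secret mod $q$ is split into $t+1$ shares, and linear operations act share-wise. This only shows the basic masking data type (using Dilithium's modulus); the paper's ShiftMod, Boolean-to-arithmetic, and Decompose gadgets handle the hard nonlinear steps and are not reproduced here.

```python
import secrets

Q = 8380417   # Dilithium's modulus q = 2^23 - 2^13 + 1

def mask(x, order):
    """Split x into order+1 arithmetic shares mod Q; any `order` of them
    are uniformly random, matching the t-probing model's secrecy goal."""
    shares = [secrets.randbelow(Q) for _ in range(order)]
    shares.append((x - sum(shares)) % Q)
    return shares

def unmask(shares):
    return sum(shares) % Q

def masked_add(a, b):
    """Linear gadgets act share-wise and never touch the secrets."""
    return [(ai + bi) % Q for ai, bi in zip(a, b)]
```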

We demonstrate the effectiveness of our countermeasures by presenting a complete high-order masked implementation of Dilithium that utilizes the improved gadgets described above. We provide practical results obtained from a C implementation and compare the performance improvements provided by our new gadgets with those of previous work.
Anisha Mukherjee, Aikata Aikata, Ahmet Can Mert, Yongwoo Lee, Sunmin Kwon, Maxim Deryabin, Sujoy Sinha Roy
ePrint Report
The promising field of homomorphic encryption enables functions to be evaluated on encrypted data, producing results that mimic the same computations done on plaintexts. It therefore comes as no surprise that many ventures at constructing homomorphic encryption schemes have come into the limelight in recent years. Most popular are those that rely on the hard lattice problem called the Ring Learning with Errors (RLWE) problem. One major limitation of these homomorphic encryption schemes is that, in order to securely increase the maximum multiplicative depth, they need to increase the polynomial size, thereby also increasing the complexity of the design. We aim to bridge this gap by proposing ModHE, a homomorphic encryption (HE) scheme based on the Module Learning with Errors (MLWE) problem that allows us to break big computations into smaller ones. Given the popularity of module lattice-based post-quantum schemes, it is an evidently interesting research endeavor to also formulate module lattice-based homomorphic encryption schemes.

While our proposed scheme is general, as a case study, we port the well-known RLWE-based CKKS scheme to the MLWE setting. The module version of the scheme completely stops the polynomial-size blowups when aiming for a greater circuit depth. Additionally, it presents greater opportunities for designing flexible, reusable, and parallelizable hardware architecture. A hardware implementation is provided to support our claims. We also acknowledge that as we try to decrease the complexity of computations, the amount of computations (such as relinearizations) increases. We hope that the potential and limitations of using such a hardware-friendly scheme will spark further research.
Ivan Damgård, Hannah Keller, Boel Nelson, Claudio Orlandi, Rasmus Pagh
ePrint Report
Given a collection of vectors $\mathbf{x}^{(1)},\dots,\mathbf{x}^{(n)} \in \{0,1\}^d$, the selection problem asks to report the index of an "approximately largest" entry in $\mathbf{x}=\sum_{j=1}^n \mathbf{x}^{(j)}$. Selection abstracts a host of problems; in machine learning it can be used for hyperparameter tuning, feature selection, or to model empirical risk minimization. We study selection under differential privacy, where a released index guarantees privacy for individual vectors. Though selection can be solved with an excellent utility guarantee in the central model of differential privacy, the distributed setting where no single entity is trusted to aggregate the data lacks solutions. Specifically, strong privacy guarantees with high utility are offered in high trust settings, but not in low trust settings. For example, in the popular shuffle model of distributed differential privacy, there are strong lower bounds suggesting that the utility of the central model cannot be obtained. In this paper we design a protocol for differentially private selection in a trust setting similar to the shuffle model - with the crucial difference that our protocol tolerates corrupted servers while maintaining privacy. Our protocol uses techniques from secure multi-party computation (MPC) to implement a protocol that: (i) has utility on par with the best mechanisms in the central model, (ii) scales to large, distributed collections of high-dimensional vectors, and (iii) uses $k\geq 3$ servers that collaborate to compute the result, where the differential privacy guarantee holds assuming an honest majority. Since general-purpose MPC techniques are not sufficiently scalable, we propose a novel application of integer secret sharing, and evaluate the utility and efficiency of our protocol both theoretically and empirically. Our protocol is the first to demonstrate that large-scale differentially private selection is possible in a distributed setting.
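For intuition, the central-model baseline that the protocol emulates is essentially report-noisy-max: perturb each aggregate count and release only the argmax index. A minimal sketch with an assumed Laplace noise scale; the paper's contribution is achieving this utility with MPC across $k$ servers rather than a trusted aggregator.

```python
import random

def noisy_argmax(counts, epsilon):
    """Report-noisy-max: perturb each count with Laplace(2/epsilon) noise
    (sampled as a difference of two exponentials) and release only the
    argmax index, never the counts themselves."""
    noisy = [c + random.expovariate(epsilon / 2) - random.expovariate(epsilon / 2)
             for c in counts]
    return max(range(len(counts)), key=noisy.__getitem__)
```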
Marina Krček, Thomas Ordas, Stjepan Picek
ePrint Report
In the first step of fault injection attacks, it is necessary to perform fault injections for target characterization to improve the chances of finding vulnerabilities that can be exploited in the second step. The second step is the attack, where the vulnerabilities are used to break the security. This work considers the parameter search for laser fault injection. While our approach can also be adjusted to find exploitable faults, the objective here is to find as many faults as possible on the target device; the goal comes from a security evaluation perspective, where a thorough target characterization is required. Previous works propose several methods, such as the memetic algorithm or hyperparameter tuning techniques. However, we notice a problem concerning the convergence of such methods to one specific target region, which is beneficial for an attack where one parameter combination could be enough. Indeed, these search algorithms find many vulnerabilities, but most seem to come from the same area on the target, which could mean some crucial vulnerabilities are missed. In this work, we propose considering the location coverage of the algorithms and offer two methods that promote diversity in the tested parameter combinations to increase it while still finding many faults. We compare the grid memetic algorithm and evolution strategies to the performance of the memetic algorithm and random search. Our results show a benefit from introducing diversity to increase location coverage, although the overall number of vulnerabilities is decreased compared to the memetic algorithm. However, the number of unique locations with vulnerabilities is similar between the three evolutionary algorithms, with evolution strategies providing the most distant locations.
Aviv Yaish, Maya Dotan, Kaihua Qin, Aviv Zohar, Arthur Gervais
ePrint Report
The Decentralized Finance (DeFi) ecosystem has proven to be immensely popular in facilitating financial operations such as lending and exchanging assets, with Ethereum-based platforms holding a combined amount of more than 30 billion USD. The public availability of these platforms' code together with real-time data on all user interactions and platform liquidity has given rise to sophisticated automatic tools that recognize profit opportunities on behalf of users and seize them.

In this work, we formalize three core DeFi primitives which together are responsible for a daily volume of over 100 million USD in Ethereum-based platforms alone: (1) lending and borrowing funds, (2) liquidation of insolvent loans, and (3) using flash-swaps to close arbitrage opportunities between cryptocurrency exchanges. The profit which can be made from each primitive is then cast as an optimization problem that can be readily solved.
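As a toy instance of casting a primitive as an optimization problem, consider arbitrage between two constant-product pools: profit is concave in the input amount, so the optimum can be found numerically. All reserve numbers and the 0.3% fee below are illustrative assumptions, not the paper's formal model.

```python
def swap_out(dx, rx, ry, fee=0.003):
    """Constant-product AMM: output of Y for dx of X against reserves."""
    dx_eff = dx * (1 - fee)
    return ry * dx_eff / (rx + dx_eff)

def arb_profit(dx, pool_a, pool_b):
    """Buy Y with dx of X in pool A, sell that Y back for X in pool B."""
    dy = swap_out(dx, pool_a[0], pool_a[1])
    return swap_out(dy, pool_b[1], pool_b[0]) - dx

def optimal_arb(pool_a, pool_b, hi=1e6, iters=200):
    """Profit is concave in dx, so ternary search finds the optimum."""
    lo = 0.0
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if arb_profit(m1, pool_a, pool_b) < arb_profit(m2, pool_a, pool_b):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

pool_a = (1000.0, 2000.0)   # X:Y reserves; Y is cheap here
pool_b = (1000.0, 1500.0)   # Y is dearer here, so X -> Y -> X profits
best = optimal_arb(pool_a, pool_b, hi=1000.0)
```

The point of the paper's case studies is that deployed tools often stop short of this optimum, leaving profit on the table.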

We use our formalization to analyze several case studies for each primitive, showing that popular platforms and tools which promise to automatically optimize profits for users actually fall short. In specific instances, the profits can be increased by more than 100%, with the highest amount of ``missed'' revenue from a single suboptimal action equal to 428.14 ETH, or roughly 517K USD.

Finally, we show that many missed opportunities to make a profit do not go unnoticed by other users. Indeed, suboptimal transactions are sometimes immediately followed by ``trailing'' back-running transactions which extract additional profits using similar actions. By analyzing a subset of such events, we uncover that some users who frequently create such trailing transactions are heavily tied to specific miners, meaning that all of their transactions appear only in blocks mined by one miner in particular. As some of the backrun non-optimal transactions are private, we hypothesize that the users who create them are, in fact, miners (or users collaborating with miners) who use inside information known only to them to make a profit, thus gaining an unfair advantage.
Zhichun Lu, Ren Zhang
ePrint Report
For years, Bitcoin miners put little effort into adopting several widely-acclaimed block acceleration techniques, which, as some argued, would secure their revenues. Their indifference inspires a theory that slower block propagation is beneficial for some miners. In this study, we analyze and confirm this counterintuitive theory. Specifically, by modeling inadvertent slower blocks, we show that a mining coalition that controls more than a third of the total mining power can earn unfair revenue by propagating blocks slower to outsiders. Afterward, we explore the strategies of an attacker that consciously exploits this phenomenon. The results indicate that an attacker with 45% of the total mining power can earn 58% of the total revenue. This attack is alarming as it is equally fundamental but more stealthy than the well-known selfish mining attack. Finally, we discuss its detection and defense mechanisms.
Krzysztof MAŃK
ePrint Report
Randomness testing is one of the essential and easiest tools for evaluating cryptographic primitives. The faster we can test, the greater the volume of data that can be tested, and thus the more detailed the analysis that becomes possible. This paper presents a range of observations made for a well-known frequency test for overlapping vectors in binary sequence testing. We obtain the precise chi-square statistic, computed in $O\left(dt 2^{dt}\right)$ time instead of $O\left(2^{2dt}\right)$, without precomputed tables.
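As a baseline for comparison, the plain (non-overlapping) chi-square frequency test on $d$-bit blocks can be computed directly; the paper's contribution concerns the harder overlapping-vector variant. Function names below are illustrative.

```python
from collections import Counter

def _all_vectors(d):
    """All 2^d binary vectors of length d."""
    return [tuple((i >> j) & 1 for j in range(d)) for i in range(2 ** d)]

def chisq_block_frequency(bits, d):
    """Chi-square statistic of the frequency test on non-overlapping
    d-bit blocks: sum over all vectors of (observed - expected)^2 / expected."""
    n = len(bits) // d
    counts = Counter(tuple(bits[i * d:(i + 1) * d]) for i in range(n))
    expected = n / 2 ** d
    return sum((counts.get(v, 0) - expected) ** 2 / expected
               for v in _all_vectors(d))
```

For a perfectly balanced input the statistic is 0; a constant input maximizes it, and for random data it follows a chi-square distribution with $2^d - 1$ degrees of freedom.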
Zhengjun Cao, Lihua Liu
ePrint Report
We show that the key agreement scheme [J. Syst. Archit., 116: 102053, 2021] is flawed. It makes use of symmetric-key encryption to transfer data between the user and server. But the symmetric key is easily retrieved by an adversary, which results in the loss of data confidentiality and makes the scheme vulnerable to an impersonation attack.