International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

28 August 2023

Sujaya Maiyya, Sharath Vemula, Divyakant Agrawal, Amr El Abbadi, Florian Kerschbaum
ePrint Report
We present Waffle, a datastore that protects an application’s data access patterns from a passive persistent adversary. Waffle achieves this without prior knowledge of the input data access distribution, making it the first of its kind to adaptively handle input sequences under a passive persistent adversary. Waffle maintains a constant bandwidth and client-side storage overhead, which can be adjusted to suit the application owner’s preferences. This flexibility allows the owner to fine-tune system parameters and strike a balance between security and performance. Our evaluation, using the Yahoo! Cloud Serving Benchmark (YCSB) and Redis as the backend storage, demonstrates promising results. The insecure baseline outperforms Waffle by a mere 5-6x, whereas Waffle outperforms Pancake—a state-of-the-art oblivious datastore under passive persistent adversaries—by 45-57%, and a concurrent ORAM system, TaoStore, by 102x.
Shahar Papini, Ulrich Haböck
ePrint Report
In this informal note, we instantiate the Goldwasser-Kalai-Rothblum (GKR) protocol to prove fractional sumchecks as they appear in lookup arguments based on logarithmic derivatives, with the following impact on the prover cost of logUp (IACR ePrint 2022/1530): when looking up $M\geq 1$ columns in a (for the sake of simplicity) single-column table, the prover has to commit only to a single extra column, i.e. the multiplicities of the table entries. In order to carry the GKR fractional sumcheck over to the univariate setting, we furthermore introduce a simple, yet (as far as we know) novel transformation for turning a univariate polynomial commitment scheme into a multilinear one. The transformation complements existing approaches and might be of independent interest for its elegant way of proving arbitrary powers of the lexicographic shift over the Boolean hypercube.
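For context, the fractional sumcheck in question has the shape of the standard logarithmic-derivative lookup identity (recalled here from the logUp line of work, not a new claim of this note): looking up witness values $w_1,\dots,w_N$ in a table $t_1,\dots,t_n$ with multiplicities $m_j$ amounts to verifying

$$\sum_{i=1}^{N} \frac{1}{X + w_i} \;=\; \sum_{j=1}^{n} \frac{m_j}{X + t_j}$$

at a random challenge; proving such a sum of fractions layer by layer with GKR is what lets the prover avoid committing to the intermediate fraction columns.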
Augustin Bariant
ePrint Report
With the increasing interest in advanced protocols for Multi-Party Computation, Fully Homomorphic Encryption and Zero-Knowledge proofs, a need for cryptographic algorithms with new constraints has emerged. These algorithms, called Arithmetization-Oriented ciphers, seek to minimize the number of field multiplications in large finite fields $\mathbb{F}_{2^n}$ or $\mathbb{F}_{p}$. Among them, Ciminion is an encryption algorithm proposed by Dobraunig et al. at Eurocrypt 2021.

In this paper, we present a new univariate modeling of a variant of Ciminion proposed by the designers. This instance restricts the attacker to at most $2^{s/2}$ data, where $s$ is the security level. Because the designers chose to reduce the number of rounds in that specific attacker model, we are able to attack the cipher for large security levels. We also propose slight modifications of Ciminion that would overcome this vulnerability.
Zibo Zhou, Zongyang Zhang, Jin Dong
ePrint Report
Proof-carrying data (PCD) is a powerful cryptographic primitive that allows mutually distrustful parties to perform distributed computation defined on directed acyclic graphs in an efficiently verifiable manner. Important efficiency parameters include the prover's cost at each step and the recursion overhead, which measures the additional cost beyond proving the computation itself.

In this paper, we construct a PCD scheme with the smallest prover cost and recursion overhead in the literature. Specifically, the prover's cost at each step is dominated by only one $O(|C|)$-sized multi-scalar multiplication (MSM), and the recursion overhead is dominated by only one $2r$-sized MSM, where $|C|$ is the computation size and $r$ is the number of incoming edges at a given step. In contrast, the state-of-the-art PCD scheme requires $4r+12$ $O(|C|)$-sized MSMs for the prover's cost, and six $2r$-sized MSMs plus one $6r$-sized MSM for the recursion overhead. In addition, our PCD scheme supports a more expressive constraint system for computations, the customizable constraint system (CCS), which handles high-degree constraints efficiently, in contrast with the rank-1 constraint system (R1CS) used by existing PCD schemes, which supports only quadratic constraints.
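To put these counts side by side (a back-of-the-envelope comparison using only the figures quoted above; $r = 2$ is an illustrative choice, not a benchmark from the paper): for a step with two incoming edges,

$$\underbrace{4r+12}_{=\,20}\ O(|C|)\text{-sized MSMs} \;\longrightarrow\; 1, \qquad \underbrace{6\cdot(2r)+6r}_{=\,36\ \text{scalars}} \;\longrightarrow\; \underbrace{2r}_{=\,4\ \text{scalars}}\ \text{of recursion overhead.}$$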

Underlying our PCD scheme is a multi-folding scheme that reduces the task of checking multiple instances to the task of checking one. We generalize the existing construction to support an arbitrary number of instances.
Christoffer Raun, Benjamin Estermann, Liyi Zhou, Kaihua Qin, Roger Wattenhofer, Arthur Gervais, Ye Wang
ePrint Report
The emergence of blockchain technologies as central components of financial frameworks has amplified the extraction of market inefficiencies, such as arbitrage, through Miner Extractable Value (MEV) from Decentralized Finance smart contracts. Exploiting these opportunities often requires fee payments to miners and validators, colloquially termed bribes. The recent development of centralized MEV relayers has led to these payments shifting from the public transaction pool to private channels, with the objective of mitigating information leakage and curtailing execution risk. This transition instigates highly competitive first-price auctions for MEV. However, effective bidding strategies for these auctions remain unclear. This paper examines the bidding behavior of MEV bots using Flashbots' private channels, shedding light on the opaque dynamics of these auctions. We gather and analyze transaction data for the entire operational period of Flashbots, providing an extensive view of the current Ethereum MEV extraction landscape. Additionally, we engineer machine learning models that forecast winning bids while increasing profitability, capitalizing on our comprehensive transaction data analysis. Given our unique status as an adaptive entity, the findings reveal that our machine learning models can secure victory in more than 50% of Flashbots auctions, consequently yielding superior returns in comparison to current bidding strategies in arbitrage MEV auctions. Furthermore, the study highlights the relative advantages of adaptive constant bidding strategies in sandwich MEV auctions.
Shuping Mao, Zhiyu Zhang, Lei Hu, Luying Li, Peng Wang
ePrint Report
With the development of quantum attacks, many classically secure constructions are no longer secure in the quantum setting. How to evaluate the quantum security of a structure and give a tight security bound has become a challenging research topic. As a tweakable block cipher structure based on block ciphers, $\mathsf{TNT}$ was proven to have classical beyond-birthday-bound $O(2^{3n/4})$ security. We prove that $\mathsf{TNT}$ is a quantum-secure tweakable block cipher with a bound of $O(2^{n/6})$. In addition, we show a tight quantum PRF security bound of $O(2^{n/3})$ when $\mathsf{TNT}$ is based on random functions, which is better than the $O(2^{n/4})$ bound given by Bhaumik et al. and solves their open problem. Our proof uses the recording standard oracle with errors technique of Hosoyamada and Iwata, which is based on Zhandry’s compressed oracle technique.
Jun Yan
ePrint Report
In this work, we show that general non-interactive quantum commitments (allowing quantum computation and communication) to classical messages are compatible with currently known quantum-rewinding techniques. Specifically, we first propose a definition of collapse-binding for quantum commitments which generalizes its post-quantum counterpart and is shown to work well with quantum rewinding. Then we show that the collapse-binding thus defined is equivalent to the conceivably minimal unique-message binding. This in particular implies that canonical quantum bit commitments are collapse-binding and can be used to instantiate many cryptographic applications.

Additionally, we rephrase the flavor conversion of canonical quantum bit commitments as a hardness conversion, which can then be used to establish a stronger quantum indistinguishability that works well with quantum rewinding, just as in the post-quantum setting. Such indistinguishability allows us to establish the security of the Goldreich-Kahan construction of constant-round zero-knowledge proofs for NP instantiated with canonical quantum bit commitments. We thus, for the first time, construct a constant-round (in fact, four-round) quantum computational zero-knowledge proof for NP based on the minimal complexity assumption needed for complexity-based quantum cryptography.
Alessandro Coglio, Eric McCarthy, Eric Smith, Collin Chin, Pranav Gaddamadugu, Michel Dellepere
ePrint Report
We provide a preliminary report of our ongoing work in formally defining and verifying, in a compositional way, the R1CS gadgets generated by Aleo's snarkVM. The approach is applicable to other systems that generate gadgets in a similar manner, and that may use non-R1CS representations.
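As a quick reminder of the object being verified (a generic textbook example, not one of snarkVM's gadgets): an R1CS gadget is a list of constraints $\langle a_i, z\rangle \cdot \langle b_i, z\rangle = \langle c_i, z\rangle$ over a witness vector $z$. For instance, a squaring gadget $y = x^2$ can be expressed as

$$z = (1, x, y),\qquad a = (0,1,0),\ b = (0,1,0),\ c = (0,0,1)\ \Longrightarrow\ \langle a,z\rangle\cdot\langle b,z\rangle = x\cdot x = y = \langle c,z\rangle .$$

Compositional verification then amounts to checking that such constraint sets, when wired together, compute the intended function.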
Anaïs Barthoulot, Olivier Blazy, Sébastien Canard
ePrint Report
In 1993, Benaloh and De Mare introduced the cryptographic accumulator, a primitive that allows a set of values to be represented by a short object (the accumulator) and offers the possibility to prove that some input values are in the accumulator. For this purpose, so-called asymmetric accumulators require the creation of an additional cryptographic object, called a witness. Over the years, several instantiations of accumulators were proposed, based on number-theoretic assumptions, hash functions, bilinear pairings or, more recently, lattices.

In this work, we present the first instantiation of an asymmetric cryptographic accumulator that allows private computation of the accumulator but public witness creation. This is obtained thanks to our unique combination of the pairing-based accumulator of Nguyen with dual pairing vector spaces. We moreover introduce the new concept of dually computable cryptographic accumulators, in which we offer two ways to compute the representation of a set: either privately (using a dedicated secret key) or publicly (using only the scheme's public key), while witness creation is the same in both cases. All our accumulator constructions have constant-size accumulated values and witnesses, and satisfy the accumulator security property of collision resistance, meaning that it is not possible to forge a witness for an element that is not in the accumulated set.

As a second contribution, we show how our new concept of dually computable cryptographic accumulators can be used to build a Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme. Our resulting scheme permits policies expressed as disjunctions of conjunctions (without ``NO'' gates), and is adaptively secure in the standard model. This is the first CP-ABE scheme having both constant-size user secret keys and constant-size ciphertexts (i.e. independent of the number of attributes in the scheme or the policy size). For the first time, we provide a way to use cryptographic accumulators for both key management and the encryption process.
Hanwen Feng, Qiang Tang
ePrint Report
We initiate the study of witness-authenticating NIZK proof systems (waNIZKs), in which one can use a witness $w$ of a statement $x$ to identify whether a valid proof for $x$ was indeed generated using $w$. Such a new identification functionality enables more diverse applications, and it also puts new requirements on soundness: (1) no adversary can generate a valid proof that will not be identified by any witness; and (2) no adversary can forge a proof using some valid witness to frame others. To work around the obvious obstacle towards conventional zero-knowledgeness, we define entropic zero-knowledgeness, which requires the proof to leak no partial information if the witness has sufficient computational entropy. We give a formal treatment of this new primitive. The modeling turns out to be quite involved; multiple subtle points arise and particular care is required. We present general constructions from standard assumptions. We also demonstrate three applications: in non-malleable (perfectly one-way) hash functions, group signatures with verifier-local revocation, and plaintext-checkable public-key encryption. Our waNIZK provides a new tool to advance the state of the art in all these applications.
Jacqueline Brendel, Sebastian Clermont, Marc Fischlin
ePrint Report
The Fast IDentity Online (FIDO) Alliance develops open standards to replace password-based authentication methods by token-based solutions. The latest protocol suite, FIDO2, provides such a promising alternative, which many key players have already adopted or are willing to adopt. The central authentication mechanism, WebAuthn, uses cryptographic keys stored on the device to authenticate clients to a relying party via a challenge-response protocol. Yet, this approach leaves several open issues about post-quantum secure instantiations and methods for recovery of credentials.

Recently, Frymann et al. (CCS 2020, ACNS 2023, EuroS&P 2023) made significant progress in advancing the security of FIDO2 systems. Following a suggestion by the device manufacturer Yubico, they considered a WebAuthn-compliant mechanism to store recovery information at the relying party. If required, the client can recover essential data with the help of a backup authenticator device. They analyzed the Diffie-Hellman based scheme, showing that it provides basic authentication and privacy features. One of their solutions also provides a post-quantum secure variant, but only for a weaker version of authentication security.

Our starting point is to note that the security definitions of Frymann et al., especially the privacy notion, do not seem to capture real threats appropriately. We thus strengthen the notions. Despite this strengthening, we show a generic construction based on (anonymous) KEMs and signature schemes. It follows that, using post-quantum secure instances like Kyber and Dilithium, one immediately obtains a post-quantum and strongly secure solution.
Antonio de la Piedra, Marloes Venema, Greg Alpár
ePrint Report
Attribute-based encryption (ABE) is a popular type of public-key encryption that enforces access control cryptographically, and has spurred the proposal of many use cases. To satisfy the requirements of the setting, tailor-made schemes are often introduced. However, designing secure schemes---as well as verifying that they are secure---is notoriously hard. Several of these schemes have turned out to be broken, making them dangerous to deploy in practice.

To overcome these shortcomings, we introduce ACABELLA. ACABELLA simplifies generating and verifying security proofs for pairing-based ABE schemes. It consists of a framework for security proofs that are easy to verify manually and an automated tool that efficiently generates these security proofs. Creating such security proofs generally takes no more than a few seconds. The output is easy to understand, and the proofs can be verified manually. In particular, the verification of a security proof generated by ACABELLA boils down to performing simple linear algebra.

The ACABELLA tool is open source and also available via a web interface. With its help, experts can simplify their proof process by verifying or refuting the security claims of their schemes and practitioners can get an assurance that the ABE scheme of their choice is secure.

24 August 2023

Peter Gaži, Aggelos Kiayias, Alexander Russell
ePrint Report
We study the problem of committee selection in the context of proof-of-stake consensus mechanisms or distributed ledgers. These settings determine a family of participating parties---each of which has been assigned a non-negative "stake"---and are subject to an adversary that may corrupt a subset of the parties. The challenge is to select a committee of participants that accurately reflects the proportion of corrupt and honest parties, as measured by stake, in the full population. The trade-off between committee size and the probability of selecting a committee that over-represents the corrupt parties is a fundamental factor in both security and efficiency of proof-of-stake consensus, as well as committee-run layer-two protocols.

We propose and analyze several new committee selection schemes that improve upon existing techniques by adopting low-variance assignment of certain committee members that hold significant stake. These schemes provide notable improvements to the size--security trade-off arising from the stake distributions of many deployed ledgers.
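To illustrate the kind of design space involved (a toy sketch under our own simplifying assumptions, not one of the schemes analyzed in the paper), the snippet below contrasts plain i.i.d. stake-weighted sampling of seats with a hybrid rule that assigns whole seats to large stakeholders deterministically and only randomizes the fractional remainder, reducing the variance contributed by heavy parties:

```python
import random

def iid_committee(stakes, k, rng):
    """Each of the k seats is sampled independently, party p winning a seat
    with probability stakes[p] / total_stake (high variance for large holders)."""
    parties = list(stakes)
    return rng.choices(parties, weights=[stakes[p] for p in parties], k=k)

def low_variance_committee(stakes, k, rng):
    """Assign floor(k * stake_share) seats to each party deterministically,
    then fill the few remaining seats by stake-weighted sampling."""
    total = sum(stakes.values())
    seats, residual = [], {}
    for p, s in stakes.items():
        share = k * s / total
        seats += [p] * int(share)            # deterministic, low-variance seats
        residual[p] = share - int(share)     # fractional part handled randomly
    remaining = k - len(seats)
    if remaining > 0:
        parties = list(residual)
        seats += rng.choices(parties, weights=[residual[p] for p in parties], k=remaining)
    return seats

if __name__ == "__main__":
    stakes = {"A": 40, "B": 30, "C": 20, "D": 11}   # hypothetical stake distribution
    rng = random.Random(1)
    print(iid_committee(stakes, 10, rng))
    print(low_variance_committee(stakes, 10, rng))
```

In the hybrid rule, a party holding a large fraction of the stake always receives close to its proportional number of seats, which is the low-variance behavior the abstract alludes to; only the small fractional remainders are left to chance.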
Ashwin Jha, Mustafa Khairallah, Mridul Nandi, Abishanka Saha
ePrint Report
Liskov, Rivest and Wagner laid the theoretical foundations for tweakable block ciphers (TBCs). In a seminal paper, they proposed two (up to) birthday-bound secure design strategies --- LRW1 and LRW2 --- to convert any block cipher into a TBC. Several follow-up works consider cascading LRW-type TBCs to construct beyond-the-birthday-bound (BBB) secure TBCs. Landecker et al. demonstrated that just two-round cascading of LRW2 already gives BBB security. Bao et al. undertook a similar exercise in the context of LRW1 with TNT --- a three-round cascading of LRW1 --- which has been shown to achieve BBB security as well.

In this paper, we present a CCA distinguisher on TNT that achieves a non-negligible advantage with $ O(2^{n/2}) $ queries, directly contradicting the security claims made by the designers. We provide a rigorous and complete advantage calculation coupled with experimental verification that further supports our claim. Next, we provide new and simple proofs of birthday-bound CCA security for both TNT and its single-key variant, which confirm the tightness of our attack.

On a more positive note, we show that adding just one more block cipher call, giving a construction we refer to as 4-LRW1, not only reestablishes BBB security, but also amplifies it up to $ 2^{3n/4} $ queries. As a side effect of this endeavour, we propose a new abstraction of the cascaded LRW design philosophy, referred to as the LRW+ paradigm, comprising two block cipher calls sandwiched between a pair of tweakable universal hashes. This helps us provide a modular proof approach covering all cascaded LRW constructions with at least $ 2 $ rounds, including 4-LRW1 and its more established relative, the well-known CLRW2, or more aptly, 2-LRW2.
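For orientation, TNT turns a block cipher into a tweakable one using three block-cipher calls, with the tweak XORed into the state between consecutive calls. The sketch below is our own illustration of that composition (AES-128 from the Python `cryptography` package stands in for the underlying block cipher; names and parameters are ours), intended only to show the structure under analysis, not the attack or proofs:

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16  # AES block size in bytes (n = 128 bits)

def E(key: bytes, block: bytes) -> bytes:
    """One call to the underlying block cipher (raw single-block AES-128)."""
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def tnt_encrypt(k1: bytes, k2: bytes, k3: bytes, tweak: bytes, msg: bytes) -> bytes:
    """TNT as a three-round cascade of LRW1: the tweak is XORed into the state
    between consecutive block-cipher calls."""
    x = E(k1, msg)
    x = E(k2, xor(x, tweak))
    return E(k3, xor(x, tweak))

if __name__ == "__main__":
    k1, k2, k3 = (bytes([i]) * BLOCK for i in (1, 2, 3))
    tweak, msg = bytes(BLOCK), b"attack at dawn!!"   # 16-byte tweak and message
    print(tnt_encrypt(k1, k2, k3, tweak, msg).hex())
```

Adding a fourth call after another tweak XOR would give the 4-LRW1 construction studied in the paper.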
Tianyi Liu, Tiancheng Xie, Jiaheng Zhang, Dawn Song, Yupeng Zhang
ePrint Report
In the past decade, blockchains have seen various financial and technological innovations, with cryptocurrencies reaching a market cap of over 1 trillion dollars. However, scalability is one of the key issues hindering the deployment of blockchains in many applications. To improve transaction throughput, zkRollup and zkEVM techniques using the cryptographic primitive of zero-knowledge proofs (ZKPs) have been proposed, and many companies are adopting these technologies in their layer-2 solutions. However, in these technologies, proof generation for the ZKP is the bottleneck, and companies have to deploy powerful machines with terabytes of memory to batch a large number of transactions into a single ZKP.

In this work, we improve the scalability of these techniques by proposing new schemes of fully distributed ZKPs. Our schemes can improve the efficiency and the scalability of ZKPs using multiple machines, while the communication among the machines is minimal. With our schemes, the ZKP generation can be distributed to multiple participants in a model similar to the mining pools. Our protocols are based on Plonk, an efficient zero-knowledge proof system with a universal trusted setup. The first protocol is for data-parallel circuits. For a computation of $M$ sub-circuits of size $T$ each, using $M$ machines, the prover time is $O(T\log T + M \log M)$, while the prover time of the original Plonk on a single machine is $O(MT\log (MT))$. Our protocol incurs only $O(1)$ communication per machine, and the proof size and verifier time are both $O(1)$, the same as the original Plonk. Moreover, we show that with minor modifications, our second protocol can support general circuits with arbitrary connections while preserving the same proving, verifying, and communication complexity. The technique is general and may be of independent interest for other applications of ZKP.
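As a back-of-the-envelope sanity check using only the asymptotics quoted above (ignoring constants; not a measurement from the paper): distributing the work over $M$ machines reduces the per-machine proving time by roughly

$$\frac{MT\log(MT)}{T\log T + M\log M} \;\approx\; M\cdot\frac{\log(MT)}{\log T} \;\ge\; M \qquad (T \gg M),$$

which is consistent with the reported $64\times$ improvement when using 64 machines.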

We implement Pianist (Plonk vIA uNlimited dISTribution), a fully distributed ZKP system using our protocols. Pianist can generate the proof for 8192 transactions in 313 seconds on 64 machines. This improves the scalability of the Plonk scheme by 64$\times$. The communication per machine is only 2.1 KB, regardless of the number of machines and the size of the circuit. The proof size is 2.2 KB and the verifier time is 3.5 ms. We further show that Pianist has similar improvements for general circuits. On a randomly generated circuit with $2^{25}$ gates, it only takes 5s to generate the proof using 32 machines, 24.2$\times$ faster than Plonk on a single machine.
Yuval Ishai, Aayush Jain, Paul Lou, Amit Sahai, Mark Zhandry
ePrint Report
A wiretap coding scheme for a pair of noisy channels $(\mathsf{ChB},\mathsf{ChE})$ enables Alice to reliably communicate a message to Bob by sending its encoding over $\mathsf{ChB}$, while hiding the message from an adversary Eve who obtains the same encoding over $\mathsf{ChE}$.

A necessary condition for the feasibility of wiretap coding is that $\mathsf{ChB}$ is not a degradation of $\mathsf{ChE}$, namely Eve cannot simulate Bob’s view. While insufficient in the information-theoretic setting, a recent work of Ishai, Korb, Lou, and Sahai (Crypto 2022) showed that the non-degradation condition is sufficient in the computational setting, assuming idealized flavors of obfuscation. The question of basing a similar feasibility result on standard cryptographic assumptions was left open, even in simple special cases.

In this work, we settle the question for all discrete memoryless channels where the (common) input alphabet of $\mathsf{ChB}$ and $\mathsf{ChE}$ is binary, and with arbitrary finite output alphabet, under standard (sub-exponential) hardness assumptions: namely those assumptions that imply indistinguishability obfuscation (Jain-Lin-Sahai 2021, 2022), and injective PRGs. In particular, this establishes the feasibility of computational wiretap coding when $\mathsf{ChB}$ is a binary symmetric channel with crossover probability $p$ and $\mathsf{ChE}$ is a binary erasure channel with erasure probability $e$, where $e>2p$.
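For intuition on why the threshold $e>2p$ appears (a standard degradation calculation, recalled here rather than taken from the paper): from a $\mathsf{BEC}(e)$ output, Eve can simulate a binary symmetric channel by replacing every erasure with a uniformly random bit, which flips the transmitted bit with probability

$$e\cdot\tfrac{1}{2} \;=\; \tfrac{e}{2},$$

and she can only add further noise on top of that. Hence $\mathsf{BSC}(p)$ is a degradation of $\mathsf{BEC}(e)$ exactly when $p \ge e/2$, i.e. $e \le 2p$, so the non-degradation condition becomes $e > 2p$.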

On the information-theoretic side, our result builds on a new polytope characterization of channel degradation for pairs of binary-input channels, which may be of independent interest.
Kanav Gupta, Neha Jawalkar, Ananta Mukherjee, Nishanth Chandran, Divya Gupta, Ashish Panwar, Rahul Sharma
ePrint Report
Secure 2-party computation (2PC) enables secure inference that offers protection for both proprietary machine learning (ML) models and sensitive inputs to them. However, the existing secure inference solutions suffer from high latency and communication overheads, particularly for transformers. Function secret sharing (FSS) is a recent paradigm for obtaining efficient 2PC protocols with a preprocessing phase. We provide SIGMA, the first end-to-end system for secure transformer inference based on FSS. By constructing new FSS-based protocols for complex machine learning functionalities, such as Softmax and GeLU, and also accelerating their computation on GPUs, SIGMA improves the latency of secure inference of transformers by $11-19\times$ over the state-of-the-art that uses preprocessing and GPUs. We present the first secure inference of generative pre-trained transformer (GPT) models. In particular, SIGMA executes GPT-Neo with 1.3 billion parameters in 7.4s and HuggingFace's GPT2 in 1.6s.
Sarah Arpin, James Clements, Pierrick Dartois, Jonathan Komada Eriksen, Péter Kutas, Benjamin Wesolowski
ePrint Report
Orientations of supersingular elliptic curves encode the information of an endomorphism of the curve. Computing the full endomorphism ring is a known hard problem, so one might consider how hard it is to find one such orientation. We prove that access to an oracle which tells if an elliptic curve is $\mathfrak{O}$-orientable for a fixed imaginary quadratic order $\mathfrak{O}$ provides non-trivial information towards computing an endomorphism corresponding to the $\mathfrak{O}$-orientation. We provide explicit algorithms and in-depth complexity analysis.

We also consider the question in terms of quaternion algebras. We provide algorithms which compute an embedding of a fixed imaginary quadratic order into a maximal order of the quaternion algebra ramified at $p$ and $\infty$. We provide code implementations in SageMath which are efficient for finding embeddings of imaginary quadratic orders of discriminants up to $O(p)$, even for cryptographically sized $p$.
Florian Hirner, Michael Streibl, Ahmet Can Mert, Sujoy Sinha Roy
ePrint Report
We present a hardware implementation of the MAYO post-quantum digital signature scheme, which was submitted to the US National Institute of Standards and Technology's call for diversification of quantum-resistant public-key cryptographic standards. The scheme is based on the Unbalanced Oil and Vinegar signature scheme, which relies on the fact that solving systems of multivariate polynomial equations is NP-complete. MAYO uses a unique whipping technique in combination with emulsifier maps to offer a significant reduction in key size compared to other Unbalanced Oil and Vinegar signature schemes. In this paper, we demonstrate how to design a hardware architecture for the MAYO post-quantum signature scheme. We also provide a comprehensive analysis and propose multiple optimization techniques to reduce resource utilization and accelerate computation on hardware platforms.
Huina Li, Le He, Shiyao Chen, Jian Guo, Weidong Qiu
ePrint Report
Ascon is the final winner of the NIST lightweight cryptography standardization competition (2018-2023). In this paper, we focus on preimage attacks against round-reduced Ascon. The preimage attack framework, utilizing the linear structure with the allocating model, was initially proposed by Guo et al. at ASIACRYPT 2016 and subsequently improved by Li et al. at EUROCRYPT 2019, demonstrating high effectiveness in breaking the preimage resistance of Keccak. We extend this preimage attack framework to Ascon from two aspects. Firstly, we propose a linearize-and-guess approach by analyzing the algebraic properties of the Ascon permutation. As a result, the complexity of finding a preimage for 2-round Ascon-XOF with a 64-bit hash value can be significantly reduced from $2^{39}$ guesses to $2^{27.56}$ guesses. To support the effectiveness of our approach, we find an actual preimage of the all-'0' hash in practical time. Secondly, we develop a SAT-based automatic preimage attack framework using the linearize-and-guess approach, which can efficiently and exhaustively search for optimal structures. Consequently, we present the best theoretical preimage attacks on 3-round and 4-round Ascon-XOF so far.