International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

29 August 2023

Ayan Sajwan, Girish Mishra
ePrint Report
This research paper explores the vulnerabilities of the lightweight block cipher SPECK 32/64 through the application of differential analysis and deep learning techniques. The primary objectives of the study are to investigate the cipher's weaknesses and to compare the effectiveness of ResNet, as used by Aron Gohr at CRYPTO 2019, with that of DenseNet. The methodology involves analyzing differential characteristics to identify potential weaknesses in the cipher's structure. Experimental results and analysis demonstrate the efficacy of both approaches in compromising the security of SPECK 32/64.
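For reference, the cipher targeted by such distinguishers is compact enough to sketch in a few lines. Below is a minimal Python sketch of SPECK 32/64 following the designers' public specification; the helper names are our own:

```python
# Minimal sketch of SPECK 32/64 (16-bit words, 22 rounds), following the
# public specification by Beaulieu et al.; helper names are our own.
MASK = 0xFFFF

def ror(x, r):
    # Rotate right within 16 bits.
    return ((x >> r) | (x << (16 - r))) & MASK

def rol(x, r):
    # Rotate left within 16 bits.
    return ((x << r) | (x >> (16 - r))) & MASK

def round_keys(k0, l):
    # Key schedule: reuse the round function with the round index as "key".
    keys, l = [k0], list(l)
    for i in range(21):
        l.append(((keys[i] + ror(l[i], 7)) & MASK) ^ i)
        keys.append(rol(keys[i], 2) ^ l[-1])
    return keys

def encrypt(x, y, keys):
    # One round: x = ((x >>> 7) + y) xor k; y = (y <<< 2) xor x.
    for k in keys:
        x = ((ror(x, 7) + y) & MASK) ^ k
        y = rol(y, 2) ^ x
    return x, y
```

A differential analysis such as the one in the paper studies how XOR differences between two such encryptions propagate through the modular addition in each round.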
Carmen Wabartha, Julian Liedtke, Nicolas Huber, Daniel Rausch, Ralf Kuesters
ePrint Report
Modern e-voting systems provide what is called verifiability, i.e., voters are able to check that their votes have actually been counted despite potentially malicious servers and voting authorities. Some of these systems, called tally-hiding systems, provide increased privacy by revealing only the actual election result, e.g., the winner of the election, but no further information that is supposed to be kept secret. However, due to these very strong privacy guarantees, supporting complex voting methods at a real-world scale has proven to be very challenging for tally-hiding systems.

A widespread class of elections, and at the same time one of the most involved, is parliamentary elections with party-based seat allocation. These elections are performed for millions of voters, dozens of parties, and hundreds of individual candidates competing for seats; they also use very sophisticated multi-step algorithms to compute the final assignment of seats to candidates based on, e.g., party lists, hundreds of electoral constituencies, possibly additional votes for individual candidates, overhang seats, and special exceptions for minorities. So far, it has not been investigated whether and to what extent such elections can be performed in a verifiable tally-hiding manner.

In this work, we design and implement the first verifiable (fully) tally-hiding e-voting system for an election from this class, namely, for the German parliament (Bundestag). As part of this effort, we propose several new tally-hiding building blocks that are of independent interest. We perform benchmarks based on actual election data, which show, perhaps surprisingly, that our proposed system is practical even at a real-world scale. Our work thus serves as a foundational feasibility study for this class of elections.
Nicolas Gama, Anand Kumar Narayanan, Ryder LiuLin, Dongze Yue
ePrint Report
Most current lattice-based cryptosystems rely on finding Gaussian samples from a lattice that are close to a given target. To that end, two popular distributions have historically been defined and studied: the rounded Gaussian distribution and the discrete Gaussian distribution. The first one is nearly trivial to sample: simply round the coordinates of continuous Gaussian samples to their nearest integers. Unfortunately, the security of the resulting cryptosystems is not as well understood. By contrast, the second distribution is only implicitly defined by restricting the support of the continuous Gaussian distribution to the discrete lattice points. Thus, algorithms to sample from this distribution are more involved, even in dimension one. The justification for exerting this computational effort is that the resulting lattice-based cryptographic schemes are validated by rigorous security proofs, often by leveraging the fact that the distribution is radial and that discrete Gaussians behave well under convolutions, enabling arithmetic between samples as well as decomposition across dimensions.

In this work, we unify both worlds. We construct out of infinite series, the cumulative density function of a new continuous distribution that acts as surrogate for the cumulative distribution of the discrete Gaussian. If $\mu$ is a center and $x$ a sample of this distribution, then rounding $\mu+x$ yields a faithful Discrete Gaussian sample. This new sampling algorithm naturally splits into a pre-processing/offline phase and a very efficient online phase. The online phase is simple and has a trivial constant time implementation. Modulo the offline phase, our algorithm offers both the efficiency of rounding and the security guarantees associated with discrete Gaussian sampling.
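To illustrate the two sampling styles the abstract contrasts (this is not the paper's surrogate-CDF construction), here is a minimal Python sketch of the trivial rounded-Gaussian sampler next to a naive rejection sampler for the discrete Gaussian; the parameter names and tail cut-off are our own assumptions:

```python
import math
import random

def rounded_gaussian(mu, sigma):
    # The "nearly trivial" sampler: round a continuous Gaussian sample.
    return round(random.gauss(mu, sigma))

def discrete_gaussian(mu, sigma, tail=12):
    # Naive rejection sampler for the discrete Gaussian D_{Z, sigma, mu}:
    # Pr[z] is proportional to exp(-(z - mu)^2 / (2 sigma^2)) for integer z.
    lo, hi = math.floor(mu - tail * sigma), math.ceil(mu + tail * sigma)
    while True:
        z = random.randint(lo, hi)
        if random.random() < math.exp(-((z - mu) ** 2) / (2 * sigma ** 2)):
            return z
```

The rejection loop above is neither constant-time nor efficient, which is exactly the gap the paper's offline/online split is designed to close: sample once from the surrogate continuous distribution offline, then merely round online.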
Markus Krausz, Georg Land, Florian Stolz, Dennis Naujoks, Jan Richter-Brockmann, Tim Güneysu, Lucie Kogelheide
ePrint Report
In this work, we examine widespread components of various Post-Quantum Cryptography (PQC) schemes that exhibit disproportionately high overhead when implemented in software in a side-channel secure manner: fixed-weight polynomial sampling, Cumulative Distribution Table (CDT) sampling, and rotation of polynomials by a secret offset. These components are deployed in a range of lattice-based and code-based Key Encapsulation Mechanisms (KEMs) and signature schemes from NIST's fourth round of PQC standardization and the signature on-ramp. Masking – to defend against power Side-Channel Analysis (SCA) – on top of the required constant-time methods leads in some of these cases to impractical runtimes. To solve this issue, we start by identifying a small set of core operations that are crucial for the performance of all three components. We accelerate these operations with an Instruction Set Extension (ISE) featuring masked instructions, which are generic and low-level, can be used in a wide range of cryptographic applications, and thereby tackle performance, microarchitectural power leakage, and cryptographic agility simultaneously. We implement dedicated masked instructions for our core operations as an add-on to the RISC-V core by Gao et al., which features masked instructions for Boolean and arithmetic operations, and evaluate several algorithmic approaches in standard and bitsliced implementations on different ISE constellations. Our instructions allow some masked components to run more than one order of magnitude faster and are first-order power side-channel secure, as our practical evaluation confirms.
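To give a flavor of the masking the abstract refers to, here is a textbook-style first-order Boolean-masked AND gadget in Python — an illustration of the general technique, not the paper's ISE or the Gao et al. core:

```python
import secrets

def masked_and(x0, x1, y0, y1):
    # First-order masked AND on 1-bit Boolean shares: the inputs satisfy
    # x = x0 ^ x1 and y = y0 ^ y1, and the output shares satisfy
    # z0 ^ z1 = x & y. Fresh randomness r ensures no single intermediate
    # value depends on an unshared secret.
    r = secrets.randbits(1)
    z0 = ((x0 & y0) ^ r) ^ (x0 & y1)
    z1 = ((x1 & y0) ^ r) ^ (x1 & y1)
    return z0, z1
```

In software, every such nonlinear gate costs several instructions plus fresh randomness, which is why masked fixed-weight or CDT sampling becomes so expensive and why dedicated masked instructions pay off.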

28 August 2023

Xiaoyang Dong, Shun Li, Phuong Pham, Guoyan Zhang
ePrint Report
At ASIACRYPT 2022, Benedikt, Fischlin, and Huppert proposed the first quantum herding attacks on iterative hash functions. Their attack needs an exponential amount of quantum random access memory (qRAM). As the existence of large qRAM is questionable, Benedikt et al. left the open question of building low-qRAM quantum herding attacks.

In this paper, we answer this open question by building a quantum herding attack whose time complexity increases only slightly, from Benedikt et al.'s $2^{0.43n}$ to our $2^{0.46n}$, while the size of the qRAM is reduced from their $2^{0.43n}$ to our $\mathcal{O}(n)$. Besides, we also introduce various low-qRAM quantum attacks on the hash concatenation combiner, the hash XOR combiner, Hash-Twice, and Zipper hash functions.
Sujaya Maiyya, Sharath Vemula, Divyakant Agrawal, Amr El Abbadi, Florian Kerschbaum
ePrint Report
We present Waffle, a datastore that protects an application's data access patterns from a passive persistent adversary. Waffle achieves this without prior knowledge of the input data access distribution, making it the first of its kind to adaptively handle input sequences under a passive persistent adversary. Waffle maintains a constant bandwidth and client-side storage overhead, which can be adjusted to suit the application owner's preferences. This flexibility allows the owner to fine-tune system parameters and strike a balance between security and performance. Our evaluation, utilizing the Yahoo! Cloud Serving Benchmark (YCSB) and Redis as the backend storage, demonstrates promising results. The insecure baseline outperforms Waffle by only 5-6x, whereas Waffle outperforms Pancake—a state-of-the-art oblivious datastore under passive persistent adversaries—by 45-57%, and a concurrent ORAM system, TaoStore, by 102x.
Shahar Papini, Ulrich Haböck
ePrint Report
In this informal note, we instantiate the Goldwasser-Kalai-Rothblum (GKR) protocol to prove fractional sumchecks as present in lookup arguments based on logarithmic derivatives, with the following impact on the prover cost of logUp (IACR ePrint 2022/1530): when looking up $M\geq 1$ columns in a (for the sake of simplicity) single-column table, the prover has to commit only to a single extra column, i.e., the multiplicities of the table entries. In order to carry the GKR fractional sumcheck over to the univariate setting, we furthermore introduce a simple, yet (as far as we know) novel transformation for turning a univariate polynomial commitment scheme into a multilinear one. The transformation complements existing approaches and might be of independent interest for its elegant way to prove arbitrary powers of the lexicographic shift over the Boolean hypercube.
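The fractional sumcheck in logUp boils down to a logarithmic-derivative identity that is easy to sanity-check in a toy setting. The sketch below is our own illustration (the field and names are assumptions): over $\mathbb{F}_p$, the looked-up values match the table exactly when $\sum_i 1/(x-a_i) = \sum_j m_j/(x-t_j)$ at a random point $x$, where $m_j$ counts how often table entry $t_j$ occurs among the looked-up values.

```python
# Toy check of the logarithmic-derivative lookup identity behind logUp.
from collections import Counter

P = 2**61 - 1  # a Mersenne prime standing in for the proof field

def inv(v):
    # Fermat inversion; v must be nonzero mod P.
    return pow(v, P - 2, P)

def lookup_holds(witness, table, x):
    # Multiplicity of each table entry among the looked-up values.
    m = Counter(witness)
    lhs = sum(inv((x - a) % P) for a in witness) % P
    rhs = sum(m[t] * inv((x - t) % P) for t in table) % P
    return lhs == rhs
```

A witness value outside the table contributes a pole on the left-hand side that no table term can cancel, so the check fails with overwhelming probability over the random point.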
Augustin Bariant
ePrint Report
With the increasing interest in advanced protocols for Multi-Party Computation, Fully Homomorphic Encryption, and Zero-Knowledge proofs, a need for cryptographic algorithms with new constraints has emerged. These algorithms, called Arithmetization-Oriented ciphers, seek to minimize the number of field multiplications in large finite fields $\mathbb{F}_{2^n}$ or $\mathbb{F}_{p}$. Among them, Ciminion is an encryption algorithm proposed by Dobraunig et al. at Eurocrypt 2021.

In this paper, we present a new univariate modeling of a variant of Ciminion proposed by the designers. This instance restricts the attacker to at most $2^{s/2}$ data, where $s$ is the security level. Because the designers chose to reduce the number of rounds in that specific attacker model, we are able to attack the cipher for large security levels. We also propose slight modifications of Ciminion that would overcome this vulnerability.
Zibo Zhou, Zongyang Zhang, Jin Dong
ePrint Report
Proof-carrying data (PCD) is a powerful cryptographic primitive that allows mutually distrustful parties to perform distributed computations defined on directed acyclic graphs in an efficiently verifiable manner. Important efficiency parameters include the prover's cost at each step and the recursion overhead, which measures the additional cost beyond proving the computation.

In this paper, we construct a PCD scheme with the smallest prover's cost and recursion overhead in the literature. Specifically, the prover's cost at each step is dominated by only one $O(|C|)$-sized multi-scalar multiplication (MSM), and the recursion overhead is dominated by only one $2r$-sized MSM, where $|C|$ is the computation size and $r$ is the number of incoming edges at a given step. In contrast, the state-of-the-art PCD scheme requires $4r+12$ $O(|C|)$-sized MSMs for the prover's cost, and six $2r$-sized MSMs plus one $6r$-sized MSM for the recursion overhead. In addition, our PCD scheme supports a more expressive constraint system for computations—the customizable constraint system (CCS), which handles high-degree constraints efficiently, in contrast with the rank-1 constraint system (R1CS) used in existing PCD schemes, which supports only quadratic constraints.

Underlying our PCD scheme is a multi-folding scheme that reduces the task of checking multiple instances to the task of checking one. We generalize the existing construction to support an arbitrary number of instances.
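Since MSMs dominate the prover cost discussed above, a brief sketch may help: an MSM computes $\sum_i s_i \cdot G_i$ in a group, and provers use the bucket (Pippenger) method rather than the naive sum. The Python sketch below runs over a toy additive group $\mathbb{Z}_Q$ of our own choosing — real provers use elliptic-curve points, but the windowing logic is identical:

```python
# Bucket (Pippenger) multi-scalar multiplication over a toy additive group.
Q = 2**31 - 1  # toy group order (our assumption; real MSMs use curve groups)

def msm_naive(scalars, points):
    return sum(s * g for s, g in zip(scalars, points)) % Q

def msm_pippenger(scalars, points, c=4):
    nbits = max(s.bit_length() for s in scalars)
    windows = []
    for w in range(0, nbits, c):
        # Sort points into buckets by the current c-bit chunk of the scalar.
        buckets = [0] * (1 << c)
        for s, g in zip(scalars, points):
            idx = (s >> w) & ((1 << c) - 1)
            if idx:
                buckets[idx] = (buckets[idx] + g) % Q
        # Running-sum trick: acc ends up equal to sum_k k * buckets[k].
        acc = run = 0
        for b in reversed(buckets[1:]):
            run = (run + b) % Q
            acc = (acc + run) % Q
        windows.append(acc)
    # Recombine the windows: result = sum_j 2^(j*c) * windows[j].
    return sum((1 << (j * c)) * acc for j, acc in enumerate(windows)) % Q
```

The bucket method turns $n$ full scalar multiplications into roughly $n$ group additions per window, which is why the count and size of MSMs is the natural cost metric for PCD provers.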
Christoffer Raun, Benjamin Estermann, Liyi Zhou, Kaihua Qin, Roger Wattenhofer, Arthur Gervais, Ye Wang
ePrint Report
The emergence of blockchain technologies as central components of financial frameworks has amplified the extraction of market inefficiencies, such as arbitrage, through Miner Extractable Value (MEV) from Decentralized Finance smart contracts. Exploiting these opportunities often requires paying fees to miners and validators, colloquially termed bribes. The recent development of centralized MEV relayers has shifted these payments from the public transaction pool to private channels, with the objective of mitigating information leakage and curtailing execution risk. This transition instigates highly competitive first-price auctions for MEV. However, effective bidding strategies for these auctions remain unclear. This paper examines the bidding behavior of MEV bots using Flashbots' private channels, shedding light on the opaque dynamics of these auctions. We gather and analyze transaction data for the entire operational period of Flashbots, providing an extensive view of the current Ethereum MEV extraction landscape. Additionally, we engineer machine learning models that forecast winning bids while increasing profitability, capitalizing on our comprehensive transaction data analysis. Acting as an adaptive entity, our machine learning models can win more than 50% of Flashbots auctions, yielding superior returns compared to current bidding strategies in arbitrage MEV auctions. Furthermore, the study highlights the relative advantages of adaptive constant bidding strategies in sandwich MEV auctions.
Shuping Mao, Zhiyu Zhang, Lei Hu, Luying Li, Peng Wang
ePrint Report
With the development of quantum attacks, many classically secure structures are no longer secure in the quantum setting. How to evaluate the quantum security of a structure and give a tight security bound has become a challenging research topic. As a tweakable block cipher structure based on block ciphers, $\mathsf{TNT}$ was proven to have classical beyond-birthday-bound $O(2^{3n/4})$ security. We prove that $\mathsf{TNT}$ is a quantum-secure tweakable block cipher with a bound of $O(2^{n/6})$. In addition, we show a tight quantum PRF security bound of $O(2^{n/3})$ when $\mathsf{TNT}$ is based on random functions, which is better than the $O(2^{n/4})$ bound given by Bhaumik et al. and solves their open problem. Our proof uses the recording standard oracle with errors technique of Hosoyamada and Iwata, based on Zhandry's compressed oracle technique.
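For readers unfamiliar with $\mathsf{TNT}$ ("tweak-and-tweak"): it builds a tweakable block cipher from three block-cipher calls by XORing the tweak between them, $C = E_{k_3}(T \oplus E_{k_2}(T \oplus E_{k_1}(M)))$. A minimal Python sketch on 16-bit values, with toy affine permutations standing in for real block ciphers (our own stand-ins, purely illustrative and not secure):

```python
# Toy sketch of the TNT tweakable-block-cipher structure on 16-bit values.
W = 1 << 16

def toy_E(k, x):
    # Keyed permutation of Z_{2^16}: an odd multiplier makes it invertible.
    # Stand-in for a real block cipher (assumption; not secure).
    return ((2 * k + 1) * x + k) % W

def tnt(k1, k2, k3, tweak, m):
    # C = E_k3(T xor E_k2(T xor E_k1(M)))
    x = toy_E(k1, m)
    x ^= tweak
    x = toy_E(k2, x)
    x ^= tweak
    return toy_E(k3, x)
```

For every fixed key and tweak this composition is a permutation of the message space, while different tweaks select different permutations — the defining property of a tweakable block cipher.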
Jun Yan
ePrint Report
In this work, we show that general non-interactive quantum commitments (allowing quantum computation and communication) to classical messages are compatible with currently known quantum-rewinding techniques. Specifically, we first propose a definition of collapse-binding for quantum commitments which generalizes its post-quantum counterpart and is shown to work well with quantum rewinding. Then we show that collapse-binding thus defined is equivalent to the conceivably minimal unique-message-binding. This in particular implies that canonical quantum bit commitments are collapse-binding and can be used to instantiate many cryptographic applications.

Additionally, we rephrase the flavor conversion of canonical quantum bit commitments as a hardness conversion, which can then be used to establish a stronger quantum indistinguishability that works well with quantum rewinding, just as in the post-quantum setting. This indistinguishability allows us to establish the security of the Goldreich-Kahan construction of constant-round zero-knowledge proofs for NP instantiated with canonical quantum bit commitments. We thus construct, for the first time, a constant-round (in fact, four-round) quantum computational zero-knowledge proof for NP based on the minimal complexity assumption needed for complexity-based quantum cryptography.
Alessandro Coglio, Eric McCarthy, Eric Smith, Collin Chin, Pranav Gaddamadugu, Michel Dellepere
ePrint Report
We provide a preliminary report of our ongoing work in formally defining and verifying, in a compositional way, the R1CS gadgets generated by Aleo's snarkVM. The approach is applicable to other systems that generate gadgets in a similar manner, and that may use non-R1CS representations.
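As background for what is being verified: an R1CS instance is a list of constraint triples $(a, b, c)$, and an assignment $z$ satisfies it iff $\langle a,z\rangle \cdot \langle b,z\rangle = \langle c,z\rangle$ over the field for every triple. A minimal Python sketch with a classic toy gadget for $x^3 + x + 5 = \mathit{out}$ (the variable layout and field here are our own choices, not snarkVM's):

```python
# Minimal R1CS satisfaction check over a toy prime field.
P = 2**61 - 1

def dot(v, z):
    return sum(vi * zi for vi, zi in zip(v, z)) % P

def r1cs_satisfied(constraints, z):
    # Each constraint (a, b, c) requires <a,z> * <b,z> == <c,z> mod P.
    return all((dot(a, z) * dot(b, z)) % P == dot(c, z)
               for a, b, c in constraints)

# Gadget for x^3 + x + 5 = out, with z = (1, x, out, x*x, x^3).
CUBIC = [
    ((0, 1, 0, 0, 0), (0, 1, 0, 0, 0), (0, 0, 0, 1, 0)),  # x * x   = x2
    ((0, 0, 0, 1, 0), (0, 1, 0, 0, 0), (0, 0, 0, 0, 1)),  # x2 * x  = x3
    ((5, 1, 0, 0, 1), (1, 0, 0, 0, 0), (0, 0, 1, 0, 0)),  # x3+x+5  = out
]
```

Formal verification of a gadget, as pursued in the report, amounts to proving that every satisfying assignment of such a constraint list corresponds exactly to the intended computation — a property a runtime check like this can only spot-test.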
Anaïs Barthoulot, Olivier Blazy, Sébastien Canard
ePrint Report
In 1993, Benaloh and De Mare introduced cryptographic accumulators, a primitive that allows the representation of a set of values by a short object (the accumulator) and offers the possibility to prove that some input values are in the accumulator. For this purpose, so-called asymmetric accumulators require the creation of an additional cryptographic object, called a witness. Over the years, several instantiations of accumulators were proposed, based on number-theoretic assumptions, hash functions, bilinear pairings, or, more recently, lattices.

In this work, we present the first instantiation of an asymmetric cryptographic accumulator that allows private computation of the accumulator but public witness creation. This is obtained thanks to our unique combination of the pairing-based accumulator of Nguyen with dual pairing vector spaces. We moreover introduce the new concept of dually computable cryptographic accumulators, in which we offer two ways to compute the representation of a set: either privately (using a dedicated secret key) or publicly (using only the scheme's public key), while there is a single witness creation procedure for both cases. All our accumulator constructions have constant-size accumulated values and witnesses, and satisfy the accumulator security property of collision resistance, meaning that it is not possible to forge a witness for an element that is not in the accumulated set.

As a second contribution, we show how our new concept of dually computable cryptographic accumulators can be used to build a Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme. Our resulting scheme permits policies expressed as disjunctions of conjunctions (without ``NO'' gates) and is adaptively secure in the standard model. This is the first CP-ABE scheme having both constant-size user secret keys and ciphertexts (i.e., independent of the number of attributes in the scheme or the policy size). For the first time, we provide a way to use cryptographic accumulators for both key management and the encryption process.
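To make the accumulator/witness mechanics concrete, here is a textbook RSA-style accumulator sketch (not the pairing-based construction of this paper): the accumulator is $g$ raised to the product of the elements, the witness for $x$ is the same value computed without $x$, and verification checks $\mathit{wit}^x = \mathit{acc}$. The parameters below are toy values of our own choosing — a real scheme needs an RSA modulus of unknown factorization and elements mapped to primes:

```python
# Textbook RSA-style accumulator sketch with toy parameters (insecure size).
N = 61 * 53  # toy composite modulus (assumption; real schemes use ~2048 bits)
g = 2        # public base

def accumulate(elems):
    # acc = g^(product of elems) mod N, computed by iterated exponentiation.
    acc = g
    for x in elems:
        acc = pow(acc, x, N)
    return acc

def witness(elems, x):
    # Witness for x: the accumulator of everything except x.
    return accumulate([e for e in elems if e != x])

def verify(acc, x, wit):
    # Membership check: wit^x must close the gap back to acc.
    return pow(wit, x, N) == acc
```

Note that here witness creation needs only public values, which is the flavor of "public witness creation" the paper achieves in the pairing setting with a privately computed accumulator.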
Hanwen Feng, Qiang Tang
ePrint Report
We initiate the study of witness-authenticating NIZK proof systems (waNIZKs), in which one can use a witness $w$ of a statement $x$ to identify whether a valid proof for $x$ was indeed generated using $w$. Such a new identification functionality enables more diverse applications, and it also puts new requirements on soundness: (1) no adversary can generate a valid proof that will not be identified by any witness; (2) no adversary can forge a proof using some valid witness to frame others. To work around the obvious obstacle to conventional zero-knowledge, we define entropic zero-knowledge, which requires the proof to leak no partial information as long as the witness has sufficient computational entropy. We give a formal treatment of this new primitive. The modeling turns out to be quite involved; multiple subtle points arise, and particular care is required. We present general constructions from standard assumptions. We also demonstrate three applications: non-malleable (perfectly one-way) hash functions, group signatures with verifier-local revocation, and plaintext-checkable public-key encryption. Our waNIZK provides a new tool to advance the state of the art in all these applications.
Jacqueline Brendel, Sebastian Clermont, Marc Fischlin
ePrint Report
The Fast IDentity Online (FIDO) Alliance develops open standards to replace password-based authentication methods with token-based solutions. The latest protocol suite, FIDO2, provides such a promising alternative, which many key players have already adopted or are willing to adopt. The central authentication mechanism, WebAuthn, uses cryptographic keys stored on the device to authenticate clients to a relying party via a challenge-response protocol. Yet, this approach leaves several open issues about post-quantum secure instantiations and methods for recovery of credentials. Recently, Frymann et al. (CCS 2020, ACNS 2023, EuroS&P 2023) made significant progress in advancing the security of FIDO2 systems. Following a suggestion by the device manufacturer Yubico, they considered a WebAuthn-compliant mechanism to store recovery information at the relying party. If required, the client can recover essential data with the help of a backup authenticator device. They analyzed the Diffie-Hellman-based scheme, showing that it provides basic authentication and privacy features. One of their solutions also provides a post-quantum secure variant, but only for a weaker version of authentication security. Our starting point is to note that the security definitions of Frymann et al., especially the privacy notion, do not seem to capture real threats appropriately. We thus strengthen the notions. Despite this strengthening, we show a generic construction based on (anonymous) KEMs and signature schemes. It follows that, using post-quantum secure instances like Kyber and Dilithium, one immediately obtains a post-quantum and strongly secure solution.
Antonio de la Piedra, Marloes Venema, Greg Alpár
ePrint Report
Attribute-based encryption (ABE) is a popular type of public-key encryption that enforces access control cryptographically, and has spurred the proposal of many use cases. To satisfy the requirements of the setting, tailor-made schemes are often introduced. However, designing secure schemes---as well as verifying that they are secure---is notoriously hard. Several of these schemes have turned out to be broken, making them dangerous to deploy in practice.

To overcome these shortcomings, we introduce ACABELLA. ACABELLA simplifies generating and verifying security proofs for pairing-based ABE schemes. It consists of a framework for security proofs that are easy to verify manually and an automated tool that efficiently generates these security proofs. Creating such security proofs generally takes no more than a few seconds. The output is easy to understand, and the proofs can be verified manually. In particular, the verification of a security proof generated by ACABELLA boils down to performing simple linear algebra.

The ACABELLA tool is open source and also available via a web interface. With its help, experts can simplify their proof process by verifying or refuting the security claims of their schemes and practitioners can get an assurance that the ABE scheme of their choice is secure.

24 August 2023

Peter Gaži, Aggelos Kiayias, Alexander Russell
ePrint Report
We study the problem of committee selection in the context of proof-of-stake consensus mechanisms or distributed ledgers. These settings determine a family of participating parties---each of which has been assigned a non-negative "stake"---and are subject to an adversary that may corrupt a subset of the parties. The challenge is to select a committee of participants that accurately reflects the proportion of corrupt and honest parties, as measured by stake, in the full population. The trade-off between committee size and the probability of selecting a committee that over-represents the corrupt parties is a fundamental factor in both security and efficiency of proof-of-stake consensus, as well as committee-run layer-two protocols.

We propose and analyze several new committee selection schemes that improve upon existing techniques by adopting low-variance assignment of certain committee members that hold significant stake. These schemes provide notable improvements to the size--security trade-off arising from the stake distributions of many deployed ledgers.
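The baseline and the low-variance idea can be sketched in a few lines of Python. The split rule below (deterministic seats for whole stake quotas, sampling only the fractional remainders) is our own simplification of the paper's approach, not its exact scheme:

```python
# Stake-proportional committee sampling, plus a low-variance variant that
# deterministically seats whole stake quotas and samples only remainders.
import random

def sample_committee(stakes, seats, rng=random):
    # Baseline: each seat drawn i.i.d. with probability proportional to stake.
    parties = list(stakes)
    return rng.choices(parties, weights=[stakes[p] for p in parties], k=seats)

def low_variance_committee(stakes, seats, rng=random):
    total = sum(stakes.values())
    fixed, residual = [], {}
    for p, s in stakes.items():
        q, r = divmod(s * seats, total)
        fixed += [p] * q   # guaranteed seats for each whole quota of stake
        residual[p] = r    # leftover stake competes for the remaining seats
    rest = seats - len(fixed)
    return fixed + (sample_committee(residual, rest, rng) if rest else [])
```

A party holding half the stake then gets exactly half the seats (up to one fractional seat) instead of a binomially distributed share, which is the variance reduction that tightens the size-security trade-off.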
Ashwin Jha, Mustafa Khairallah, Mridul Nandi, Abishanka Saha
ePrint Report
Liskov, Rivest, and Wagner laid the theoretical foundations for tweakable block ciphers (TBCs). In a seminal paper, they proposed two (up to) birthday-bound secure design strategies --- LRW1 and LRW2 --- to convert any block cipher into a TBC. Several follow-up works consider cascading LRW-type TBCs to construct beyond-the-birthday-bound (BBB) secure TBCs. Landecker et al. demonstrated that just two-round cascading of LRW2 already gives BBB security. Bao et al. undertook a similar exercise in the context of LRW1 with TNT --- a three-round cascading of LRW1 --- which has been shown to achieve BBB security as well. In this paper, we present a CCA distinguisher on TNT that achieves a non-negligible advantage with $O(2^{n/2})$ queries, directly contradicting the security claims made by the designers. We provide a rigorous and complete advantage calculation coupled with experimental verification that further supports our claim. Next, we provide new and simple proofs of birthday-bound CCA security for both TNT and its single-key variant, which confirm the tightness of our attack. On a more positive note, we show that adding just one more block cipher call, referred to as 4-LRW1, does not just reestablish BBB security, but amplifies it up to $2^{3n/4}$ queries. As a side effect of this endeavour, we propose a new abstraction of the cascaded LRW design philosophy, referred to as the LRW+ paradigm, comprising two block cipher calls sandwiched between a pair of tweakable universal hashes. This helps us provide a modular proof approach covering all cascaded LRW constructions with at least $2$ rounds, including 4-LRW1 and its more established relative, the well-known CLRW2, or more aptly, 2-LRW2.
Tianyi Liu, Tiancheng Xie, Jiaheng Zhang, Dawn Song, Yupeng Zhang
ePrint Report
In the past decade, blockchains have seen various financial and technological innovations, with cryptocurrencies reaching a market cap of over 1 trillion dollars. However, scalability is one of the key issues hindering the deployment of blockchains in many applications. To improve transaction throughput, zkRollup and zkEVM techniques using the cryptographic primitive of zero-knowledge proofs (ZKPs) have been proposed, and many companies are adopting these technologies in their layer-2 solutions. However, in these technologies, proof generation is the bottleneck, and companies have to deploy powerful machines with terabytes of memory to batch a large number of transactions in a ZKP.

In this work, we improve the scalability of these techniques by proposing new schemes of fully distributed ZKPs. Our schemes can improve the efficiency and the scalability of ZKPs using multiple machines, while the communication among the machines is minimal. With our schemes, the ZKP generation can be distributed to multiple participants in a model similar to the mining pools. Our protocols are based on Plonk, an efficient zero-knowledge proof system with a universal trusted setup. The first protocol is for data-parallel circuits. For a computation of $M$ sub-circuits of size $T$ each, using $M$ machines, the prover time is $O(T\log T + M \log M)$, while the prover time of the original Plonk on a single machine is $O(MT\log (MT))$. Our protocol incurs only $O(1)$ communication per machine, and the proof size and verifier time are both $O(1)$, the same as the original Plonk. Moreover, we show that with minor modifications, our second protocol can support general circuits with arbitrary connections while preserving the same proving, verifying, and communication complexity. The technique is general and may be of independent interest for other applications of ZKP.

We implement Pianist (Plonk vIA uNlimited dISTribution), a fully distributed ZKP system using our protocols. Pianist can generate the proof for 8192 transactions in 313 seconds on 64 machines. This improves the scalability of the Plonk scheme by 64$\times$. The communication per machine is only 2.1 KB, regardless of the number of machines and the size of the circuit. The proof size is 2.2 KB and the verifier time is 3.5 ms. We further show that Pianist has similar improvements for general circuits. On a randomly generated circuit with $2^{25}$ gates, it only takes 5s to generate the proof using 32 machines, 24.2$\times$ faster than Plonk on a single machine.