International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

06 October 2023

Jacob Leshno, Rafael Pass, Elaine Shi
ePrint Report
Traditional payment processors are the subject of antitrust concerns and regulations. Open decentralized ledgers (e.g., Bitcoin) provide an alternative. They do not rely on a central authority, avoiding antitrust and monopoly concerns. However, the open nature of these systems gives rise to many challenges, including fundamental questions about their security. To address these questions, we consider a framework that combines economic theory and distributed systems theory and define economic security for general permissionless decentralized ledgers. Analysis of Bitcoin's Nakamoto protocol shows that block rewards are ineffective in providing economic security due to limitations of incentives in environments with many anonymous participants. We present an alternative protocol showing that an open decentralized ledger can be economically secure.

Julia Len, Melissa Chase, Esha Ghosh, Kim Laine, Radames Cruz Moreno
ePrint Report
Key Transparency (KT) refers to a public key distribution system with transparency mechanisms proving its correct operation, i.e., proving that it reports consistent values for each user's public key. While prior work on KT systems has offered new designs to tackle this problem, relatively little attention has been paid to the issue of scalability. Indeed, it is not straightforward to actually build a scalable and practical KT system from existing constructions, which may be too complex, inefficient, or non-resilient against machine failures.

In this paper, we present OPTIKS, a full featured and optimized KT system that focuses on scalability. Our system is simpler and more performant than prior work, supporting smaller storage overhead while still meeting strong notions of security and privacy. Our design also incorporates a crash-tolerant and scalable server architecture, which we demonstrate by presenting extensive benchmarks. Finally, we address several real-world problems in deploying KT systems that have received limited attention in prior work, including account decommissioning and user-to-device mapping.
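
For background, transparency systems of this kind are typically built on an authenticated dictionary whose lookups come with Merkle-style inclusion proofs. The sketch below shows only that generic building block (the hash choice, padding rule, and helper names are illustrative assumptions); it says nothing about OPTIKS's actual data structures.

import hashlib

def H(*parts):
    # Toy node hash; a real system would add domain separation.
    return hashlib.sha256(b"|".join(parts)).digest()

def build_tree(leaves):
    # Return the levels of a Merkle tree, leaves first, root last.
    level = [H(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    # Collect the sibling hashes needed to recompute the root from one leaf.
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((level[index ^ 1], index % 2))   # (sibling, am-I-the-right-child)
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = H(leaf)
    for sibling, is_right in proof:
        node = H(sibling, node) if is_right else H(node, sibling)
    return node == root

# Example: prove that bob's key is in the directory snapshot.
leaves = [b"alice:pk_a", b"bob:pk_b", b"carol:pk_c"]
levels = build_tree(leaves)
assert verify(levels[-1][0], b"bob:pk_b", prove(levels, 1))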

José Bacelar Almeida, Denis Firsov, Tiago Oliveira, Dominique Unruh
ePrint Report
We give a semantic characterization of leakage-freeness through timing side-channels for Jasmin programs. Our characterization also covers probabilistic Jasmin programs that are not constant-time. In addition, we provide a characterization in terms of probabilistic relational Hoare logic and prove the equivalence of both definitions. We also prove that our new characterizations are compositional. Finally, we relate the new definitions to existing ones from prior work, which apply only to deterministic programs.

To test our definitions, we use the Jasmin toolchain to develop a rejection sampling algorithm and prove (in EasyCrypt) that the implementation is leakage-free whilst not being constant-time.
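
As a toy illustration of the distinction the paper formalizes (this is ordinary Python, not the paper's Jasmin code): the loop below is not constant-time, since its iteration count varies from call to call, yet that variation depends only on fresh randomness and the public bound n, never on a secret, so its timing leaks nothing.

import secrets

def sample_below(n):
    # Rejection-sample a uniform integer in [0, n).
    # Not constant-time: the number of loop iterations is random.
    # Still leakage-free in the timing sense: the running time is
    # independent of any secret input.
    k = n.bit_length()
    while True:
        x = secrets.randbits(k)
        if x < n:
            return x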

Marcel Tiepelt, Edward Eaton, Douglas Stebila
ePrint Report
The KHAPE-HMQV protocol is a state-of-the-art highly efficient asymmetric password-authenticated key exchange protocol that provides several desirable security properties, but has the drawback of being vulnerable to quantum adversaries due to its reliance on discrete logarithm-based building blocks: solving a single discrete logarithm allows the attacker to perform an offline dictionary attack and recover the password. We show how to modify KHAPE-HMQV to make the protocol quantum-annoying: a classical adversary who has the additional ability to solve discrete logarithms can only break the protocol by solving a discrete logarithm for each guess of the password. While not fully resistant to attacks by quantum computers, a quantum-annoying protocol could offer some resistance to quantum adversaries for whom discrete logarithms are relatively expensive. Our modification to the protocol is small: encryption (using an ideal cipher) is added to one message. Our analysis uses the same ideal cipher model assumption as the original analysis of KHAPE, and quantum annoyingness is modelled using an extension of the generic group model which gives a classical adversary a discrete logarithm oracle.
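
Schematically, the generic quantum-annoying mechanism (not the exact KHAPE-HMQV message flow) wraps a Diffie-Hellman share in an ideal cipher keyed by the password,
$$c \;=\; \mathsf{IC}.\mathsf{Enc}_{pw}\!\left(g^{y}\right),$$
so an adversary testing a guess $pw'$ obtains some group element $B' = \mathsf{IC}.\mathsf{Dec}_{pw'}(c)$ and must solve the discrete logarithm of $B'$ before it can tell whether the guess was correct; each password guess therefore costs one discrete-logarithm computation.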

Michele Ciampi, Rafail Ostrovsky, Luisa Siniscalchi, Hendrik Waldner
ePrint Report
In this work we study the problem of minimizing the round complexity for securely evaluating multiparty functionalities while making black-box use of polynomial-time assumptions. In Eurocrypt 2016, Garg et al. showed that, assuming all parties have access to a broadcast channel, at least four rounds of communication are required to securely realize non-trivial functionalities in the plain model. A sequence of follow-up works matched this lower bound under a variety of assumptions. Unfortunately, none of these works makes black-box use of the underlying cryptographic primitives. In Crypto 2021, Ishai, Khurana, Sahai, and Srinivasan came closer to matching the four-round lower bound, obtaining a five-round protocol that makes black-box use of oblivious transfer and PKE with pseudorandom public keys.

In this work, we show how to realize any input-less functionality (e.g., coin-tossing, generation of key-pairs, and so on) in four rounds while making black-box use of two-round oblivious transfer. As an additional result, we construct the first four-round MPC protocol for generic functionalities that makes black-box use of the underlying primitives, achieving security against non-aborting adversaries.

Our protocols are based on a new primitive called list two-party computation. This primitive offers relaxed security compared to the standard notion of secure two-party computation. Despite this relaxation, we argue that this tool suffices for our applications. List two-party computation is of independent interest, as we argue it can also be used for the generation of setups, like oblivious transfer correlated randomness, in three rounds. Prior to our work, generating such a setup required at least four rounds of interaction or a trusted third party.

Tomoki Moriya, Hiroshi Onuki, Yusuke Aikawa, Tsuyoshi Takagi
ePrint Report
In 2022, Moriya, Onuki, Aikawa, and Takagi proposed a new framework named generalized Montgomery coordinates to treat one-coordinate-type formulas for computing isogenies. This framework generalizes some already known one-coordinate-type formulas for elliptic curves. Their result shows that a formula to compute image points under isogenies is unique in the framework of generalized Montgomery coordinates; however, a formula to compute image curves is not unique. Therefore, a question arises: which formula is the most efficient for computing image curves in the framework of generalized Montgomery coordinates?

In this paper, we analyze the costs of formulas to compute image curves of $3$-isogenies in the framework of generalized Montgomery coordinates. From our result, the lower bound of the costs is $1\mathbf{M}+1\mathbf{S}$ as a formula whose output and input are in affine coordinates, $2\mathbf{S}$ as an affine formula whose output is projective, and $2\mathbf{M}+3\mathbf{S}$ as a projective formula.

Hiroki Okada, Rachel Player, Simon Pohmann, Christian Weinert
ePrint Report
Private information retrieval (PIR) protocols allow clients to access database entries without revealing the queried indices. They have many real-world applications, including privately querying patent-, compromised credential-, and contact databases. While existing PIR protocols that have been implemented perform reasonably well in practice, they all have suboptimal asymptotic complexities.

A line of work has explored so-called doubly-efficient PIR (DEPIR), which refers to single-server PIR protocols with optimal asymptotic complexities. Recently, Lin, Mook, and Wichs (STOC 2023) presented the first protocol that completely satisfies the DEPIR constraints and can be rigorously proven secure. Unfortunately, their proposal is purely theoretical in nature. It is even speculated that such protocols are completely impractical, and hence no implementation of any DEPIR protocol exists.

In this work, we challenge this assumption. We propose several optimizations for the protocol of Lin, Mook, and Wichs that improve both asymptotic and concrete running times, as well as storage requirements, by orders of magnitude. Furthermore, we implement the resulting protocol and show that for batch queries it outperforms state-of-the-art protocols.

Neyire Deniz Sarier
ePrint Report
In [1], Sarier presents a practical biometric-based non-transferable credential scheme that maintains the efficiency of the underlying Brands credential. In this paper, we design a new Blockchain-Based E-Voting (BBEV) scheme that combines the system of [1] with encrypted Attribute-Based Credentials for a non-transferable code-voting approach, achieving an efficient, usable, anonymous, transparent, auditable, verifiable, receipt-free, and coercion-resistant remote voting system for small/medium-scale elections. To the best of our knowledge, this is the first user-centric BBEV scheme that relies on the one-show Brands credential both for biometric authentication and for pre-encrypted ballot generation, leading to a natural prevention of double voting. Even though the system is instantiated with the Bitcoin blockchain, due to its prevalence and the various coin mixers available for anonymity, the design is generic, i.e., independent of the cryptocurrency. Thus, the new BBEV scheme can be extended to large-scale elections on public blockchains with higher throughput and cheaper transaction fees. Finally, a cost analysis based on data from the last USA presidential election shows that the new BBEV is advantageous over the traditional approach if implemented for three consecutive elections.

03 October 2023

Douala, Cameroon, 10 July 2024
Event Calendar
Event date: 10 July 2024
Submission deadline: 16 February 2024
Notification: 26 April 2024

Amaury Pouly, Yixin Shen
ePrint Report
Learning with Errors (LWE) is an important problem for post-quantum cryptography (PQC) that underlies the security of several NIST PQC selected algorithms. Several recent papers have claimed improvements on the complexity of so-called dual attacks on LWE. These improvements make dual attacks comparable to or even better than primal attacks in certain parameter regimes. Unfortunately, those improvements rely on a number of untested and hard-to-test statistical assumptions. Furthermore, a recent paper claims that the whole premise of those improvements might be incorrect.

The goal of this paper is to improve the situation by proving the correctness of a dual attack without relying on any statistical assumption. Although our attack is greatly simplified compared to the recent ones, it shares all the important statistical elements with those attacks and can serve as a basis for the analysis of more advanced attacks.

Our main contribution is to clearly identify a set of parameters under which our attack (and presumably other recent dual attacks) can work. Furthermore, our analysis completely departs from the existing statistics-based analysis and is instead rooted in geometry. We also compare the regime in which our algorithm works to the ``contradictory regime'' of Ducas and Pulles. We observe that those two regimes are essentially complementary but also that their statistical model does not seem to match what happens in our attack.
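
For readers unfamiliar with the setting, the textbook shape of a dual distinguisher (not the specific attack analyzed here) is the following: given an LWE instance $(\mathbf{A}, \mathbf{b})$ with $\mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e} \bmod q$, one finds short vectors $\mathbf{w}$ satisfying $\mathbf{w}^{\top}\mathbf{A} \equiv \mathbf{0} \pmod q$, so that
$$\mathbf{w}^{\top}\mathbf{b} \;\equiv\; \mathbf{w}^{\top}\mathbf{e} \pmod q$$
is small when $\mathbf{b}$ is an LWE sample but uniform when $\mathbf{b}$ is random; aggregating many such inner products yields the distinguishing statistic whose behavior is the subject of the statistical assumptions discussed above.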

Ran Cohen, Julian Loss, Tal Moran
ePrint Report
Byzantine agreement (BA) asks for a set of parties to reach agreement in an adversarial setting. A central question is how to construct efficient BA protocols that scale well with the number of parties. In particular, the communication complexity is a critical barrier for large-scale implementations.

State-of-the-art, scalable BA protocols typically work by sampling a small, unpredictable committee of parties that will send messages in each round. These messages must reach all honest parties, to allow the next round's committee to function. In practice, this is usually accomplished by propagating messages over a gossip network, implemented over a partial communication graph. Most formulations of gossip networks have an ideal guarantee that every message delivered to any honest party will be delivered to every other honest party. Unfortunately, realizing this guarantee necessarily makes the protocol vulnerable to denial-of-service attacks, since an adversary can flood the network with many messages that the protocol must deliver to all parties.

In this paper, we make several contributions towards realizing the goal of efficient, scalable Byzantine agreement over a gossip network:

1. We define ``gossip with abort,'' a relaxed gossip model that can be efficiently realized with minor modifications to existing gossip protocols, yet allows for significant savings in communication compared to the full point-to-point model.

2. Our protocols work in a graded PKI model, in which honest parties only have partial agreement about the set of participants in the protocol. This model arises naturally in settings without trusted setup, such as the ``permissionless'' setting underlying many blockchain protocols.

3. We construct a new, player-replaceable BA protocol in the graded PKI model. The concrete communication complexity of our protocol, for typical parameter values, is more than 25 times better than that of the current state-of-the-art BA protocols in the honest-majority setting.

Tomoki Moriya
ePrint Report
Isogeny-based cryptography is one of the candidates for post-quantum cryptography. One of its benefits is compactness. In particular, the key exchange scheme SIDH allowed the use of a $4\lambda$-bit prime for the security parameter $\lambda$.

Unfortunately, SIDH was broken in 2022 by several studies. Since then, some isogeny-based key exchange and public-key encryption schemes have been proposed; however, most of these schemes use primes whose sizes are not guaranteed to be linearly related to the security parameter $\lambda$. As far as we know, the remaining schemes have either not been implemented, due to the need to compute isogenies of high-dimensional abelian varieties, or they need to use a ``weak'' curve (\textit{i.e.}, a curve whose endomorphism ring is known) as the starting curve.

In this study, we propose a novel compact isogeny-based key encapsulation mechanism named IS-CUBE, built via Kani's theorem and a $3$-dimensional SIDH diagram. The prime used in IS-CUBE has a size of about $8\lambda$ bits, and its starting curve is a random supersingular elliptic curve. The core idea of IS-CUBE comes from the hardness of some already known computational problems and a novel computational problem, the Computational Long Isogeny with Torsion (CLIT) problem: the problem of computing a hidden isogeny from two given supersingular elliptic curves and information about torsion points of relatively small order. In our proof-of-concept \textsf{sagemath} implementation of IS-CUBE, public key generation takes about $4.34$ sec, encapsulation about $0.61$ sec, and decapsulation about $17.13$ sec for $\lambda = 128$.

Konstantina Miteloudi, Joppe Bos, Olivier Bronchain, Björn Fay, Joost Renes
ePrint Report
This paper explores the challenges and potential solutions of implementing the recommended upcoming post-quantum cryptography standards (the CRYSTALS-Dilithium and CRYSTALS-Kyber algorithms) on resource-constrained devices. The high computational cost of polynomial operations, fundamental to cryptography based on ideal lattices, presents significant challenges for an efficient implementation. This paper proposes a hardware/software co-design strategy using RISC-V extensions to optimize resource utilization and speed up the number-theoretic transforms (NTTs). The primary contributions include a lightweight custom arithmetic logic unit (ALU), integrated into a 4-stage pipelined 32-bit RISC-V processor. This ALU is tailored to the NTT computations and supports modular arithmetic as well as NTT butterfly operations. Furthermore, an extension to the RISC-V instruction set is introduced, with ten new instructions accessing the custom ALU to perform the necessary operations. The new instructions reduce the cycle count of the Kyber and Dilithium NTTs by more than 80% compared to optimized assembly, while being more lightweight than other works in the literature.
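
For concreteness, the inner operation such an ALU accelerates is the NTT butterfly over $\mathbb{Z}_q$. The sketch below shows the standard Cooley-Tukey and Gentleman-Sande butterflies in plain Python, using Kyber's modulus $q = 3329$ only as an example; it makes no claim about the paper's ALU design or instruction encoding.

Q = 3329  # Kyber's modulus, used here purely as an example

def ct_butterfly(a, b, zeta, q=Q):
    # Forward (Cooley-Tukey) butterfly: (a, b) -> (a + zeta*b, a - zeta*b) mod q.
    t = (zeta * b) % q
    return (a + t) % q, (a - t) % q

def gs_butterfly(a, b, zeta, q=Q):
    # Inverse (Gentleman-Sande) butterfly: (a, b) -> (a + b, zeta*(a - b)) mod q.
    return (a + b) % q, (zeta * (a - b)) % q

# A full NTT applies log2(n) layers of such butterflies with precomputed
# powers of a root of unity; the modular multiply-and-reduce step dominates,
# which is why dedicated hardware support pays off.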

Helger Lipmaa, Roberto Parisella, Janno Siim
ePrint Report
In the algebraic group model (AGM), an adversary must return, with each group element it outputs, a linear representation with respect to the input group elements. In many groups, it is easy to sample group elements obliviously, without knowing such a linear representation. Since the AGM does not model this, it can be used to prove the security of spurious knowledge assumptions; we show that several well-known zk-SNARKs use such assumptions. We propose AGM with oblivious sampling (AGMOS), an AGM variant in which the adversary has access to an oracle that allows sampling group elements obliviously from some distribution. We show that AGM and AGMOS are different by studying the family of ``total knowledge-of-exponent'' assumptions: they are all secure in the AGM, but most are not secure in the AGMOS if the discrete logarithm (DL) assumption holds. We show an important separation in the case of the KZG commitment scheme. We also show that many known AGM reductions go through in the AGMOS, assuming a novel falsifiable assumption, TOFR. Finally, we prove that TOFR is secure in a version of the GGM with oblivious sampling.
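
Concretely, the AGM requirement referred to above is that whenever the adversary outputs a group element $Z \in \mathbb{G}$ after having received elements $g_1, \ldots, g_n$, it must also output integers $z_1, \ldots, z_n$ such that
$$Z \;=\; \prod_{i=1}^{n} g_i^{z_i}.$$
Obliviously sampled elements (e.g., obtained by hashing into the group) come with no such representation, which is precisely what the AGMOS oracle is meant to capture.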

Jens Ernstberger, Stefanos Chaliasos, George Kadianakis, Sebastian Steinhorst, Philipp Jovanovic, Arthur Gervais, Benjamin Livshits, Michele Orrù
ePrint Report
Zero-Knowledge Proofs (ZKPs), especially Succinct Non-interactive ARguments of Knowledge (SNARKs), have garnered significant attention in modern cryptographic applications. Given the multitude of emerging tools and libraries, assessing their strengths and weaknesses is nuanced and time-consuming. Often, claimed results are generated in isolation, and omissions in details render them irreproducible. The lack of comprehensive benchmarks, guidelines, and support frameworks to navigate the ZKP landscape effectively is a major barrier in the development of ZKP applications.

In response to this need, we introduce zk-Bench, the first benchmarking framework and estimator tool designed for performance evaluation of public-key cryptography, with a specific focus on the practical assessment of general-purpose ZKP systems. To simplify navigating the complex set of metrics and qualitative properties, we offer a comprehensive open-source evaluation platform, which enables the rigorous dissection and analysis of tools for ZKP development to uncover their trade-offs throughout the entire development stack, from low-level arithmetic libraries to high-level tools for SNARK development.

Using zk-Bench, we (i) collect data across $13$ different elliptic curves implemented across $9$ libraries, (ii) evaluate $5$ tools for ZKP development, and (iii) provide a tool for estimating the performance of cryptographic protocols, instantiated for the $\mathcal{P}\mathfrak{lon}\mathcal{K}$ proof system, achieving an accuracy of 6–32% for ZKP circuits with up to millions of gates. By evaluating zk-Bench for various hardware configurations, we find that certain tools for ZKP development favor compute-optimized hardware, while others benefit from memory-optimized hardware. We observed performance enhancements of up to $40\%$ for memory-optimized configurations and $50\%$ for compute-optimized configurations, contingent on the specific ZKP development tool utilized.

Michał Wroński, Elżbieta Burek, Mateusz Leśniak
ePrint Report
The security level of a cipher is a key parameter. While general-purpose quantum computers significantly threaten modern symmetric ciphers, other quantum approaches, such as quantum annealing, have been considered less concerning. However, this paper argues that a quantum annealer specifically designed to attack the Grain 128 and Grain 128a ciphers could soon be technologically feasible. Such an annealer would require 5,751 (6,751) qubits and 77,496 (94,708) couplers, with a qubit connectivity of 225 (249). Notably, the forthcoming D-Wave Advantage 2 with Zephyr topology will feature over 7,000 qubits, 60,000 couplers, and a qubit connectivity of 20. This work thus shows that modern stream ciphers like Grain 128 and Grain 128a could be vulnerable to quantum annealing attacks. Although the exact complexity of quantum annealing is unknown, heuristic estimates suggest a $\sqrt{N}$ exponential advantage over simulated annealing for problems with $N$ variables. We detail how to transform algebraic attacks on Grain ciphers into the QUBO problem, making our attack potentially more efficient than classical brute-force methods. We also show that applying our attack to rescaled versions of the Grain cipher, namely the Grain $l$ and Grain $la$ versions, where $l$ represents the key size, leads to our attack overtaking both brute-force and Grover's attacks for sufficiently large $l$, provided that quantum annealing has an exponential advantage over simulated annealing. Specifically, it suffices for the time complexity of quantum annealing on problems with $N$ variables to be $e^{N^{\beta}}$ for any $\beta < 1$. Finally, given the general nature of our attack method, all new ciphers should be scrutinized for vulnerability to quantum annealing attacks and should at least match the AES cipher in terms of security level.
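
As a generic illustration of the kind of reduction involved (not the paper's specific transformation of the Grain equations), a single Boolean constraint $z = x \wedge y$ over binary variables can be encoded as the quadratic penalty
$$P(x, y, z) \;=\; xy - 2xz - 2yz + 3z,$$
which equals $0$ exactly when $z = x \wedge y$ and is at least $1$ otherwise; summing such penalties over all gate constraints of an algebraic description of the cipher yields a QUBO instance whose minimum-energy assignments correspond to solutions of the original system.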

Seyoon Ragavan, Vinod Vaikuntanathan
ePrint Report
We improve the space efficiency of Regev's quantum factoring algorithm [Reg23] while keeping the circuit size the same. Our main result constructs a quantum factoring circuit using $O(n \log n)$ qubits and $O(n^{3/2} \log n)$ gates. In contrast, Regev's circuit requires $O(n^{3/2})$ qubits, while Shor's circuit requires $O(n^2)$ gates. As with Regev's algorithm, to factor an $n$-bit integer $N$, one runs this circuit independently $\approx \sqrt{n}$ times and applies Regev's classical post-processing procedure.

Our optimization is achieved by implementing efficient and reversible exponentiation with Fibonacci numbers in the exponent, rather than the usual powers of 2.
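
The classical counterpart of this idea is easy to state: the powers $a^{F_k} \bmod N$ can be produced with a single multiplication per step, each new power being the product of the two previous ones, with no squarings at all. The sketch below shows only this classical ladder as an illustration; it is not the reversible quantum circuit constructed in the paper.

def fibonacci_powers(a, k, N):
    # Return [a^F_1, a^F_2, ..., a^F_k] mod N, with F_1 = F_2 = 1.
    # Each new entry is the product of the previous two, mirroring
    # F_i = F_{i-1} + F_{i-2}; only multiplications are used.
    if k == 1:
        return [a % N]
    powers = [a % N, a % N]
    for _ in range(k - 2):
        powers.append((powers[-1] * powers[-2]) % N)
    return powers

# Example: a^1, a^1, a^2, a^3, a^5, a^8 modulo N.
assert fibonacci_powers(2, 6, 1000) == [2, 2, 4, 8, 32, 256]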

Susumu Kiyoshima
ePrint Report
A succinct non-interactive argument (SNARG) is called holographic if the verifier runs in time sub-linear in the input length when given oracle access to an encoding of the input. We present holographic SNARGs for P and Batch-NP under the learning with errors (LWE) assumption. Our holographic SNARG for P has a verifier that runs in time $\mathsf{poly}(\lambda, \log T, \log n)$ for $T$-time computations and $n$-bit inputs ($\lambda$ is the security parameter), while our holographic SNARG for Batch-NP has a verifier that runs in time $\mathsf{poly}(\lambda, T, \log k)$ for $k$ instances of $T$-time computations. Before this work, constructions with the same asymptotic efficiency were known in the designated-verifier setting or under the sub-exponential hardness of the LWE assumption. We obtain our holographic SNARGs (in the public-verification setting under the polynomial hardness of the LWE assumption) by constructing holographic SNARGs for certain hash computations and then applying known/trivial transformations.

As an application, we use our holographic SNARGs to weaken the assumption needed for a recent public-coin 3-round zero-knowledge (ZK) argument [Kiyoshima, CRYPTO 2022]. Specifically, we use our holographic SNARGs to show that a public-coin 3-round ZK argument exists under the same assumptions as the state-of-the-art private-coin 3-round ZK argument [Bitansky et al., STOC 2018].

David Pointcheval
ePrint Report
Electronic voting is one of the most interesting applications of modern cryptography, as it involves many innovative tools (such as homomorphic public-key encryption, non-interactive zero-knowledge proofs, and distributed cryptography) to guarantee several a priori contradictory security properties: the integrity of the tally and the privacy of the individual votes. While many efficient solutions exist when voters are honest-but-curious, i.e., follow the official procedure but may try to learn more than just the public result, preventing attacks from malicious voters is much more complex: when voters may have an incentive to send biased ballots, the privacy of the ballots is much harder to satisfy, whereas this is the crucial security property for electronic voting. We present a new technique to prove that an ElGamal ciphertext contains a message from a specific subset (quasi-adaptive NIZK of subset membership), using linearly-homomorphic signatures. The proofs are both quite efficient to generate, allowing the use of low-power devices to vote, and randomizable, which is important for the strong receipt-freeness property. They are well-suited to preventing vote-selling and replay attacks, which are the main threats against privacy in electronic voting, with security proofs in the generic group model and the random oracle model.
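
For context, the randomizability mentioned above builds on the standard re-randomization property of ElGamal (this is textbook ElGamal, not the new proof system): a ciphertext for a vote $m$ under public key $h = g^{x}$ can be publicly re-randomized without changing the plaintext,
$$(c_1, c_2) = \left(g^{r},\, h^{r} m\right) \;\longmapsto\; \left(c_1 g^{r'},\, c_2 h^{r'}\right) = \left(g^{r + r'},\, h^{r + r'} m\right),$$
and the accompanying subset-membership proof must be randomizable alongside the ciphertext for receipt-freeness to hold.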

Yilei Chen, Zihan Hu, Qipeng Liu, Han Luo, Yaxin Tu
ePrint Report
The learning with errors problem (LWE) is one of the most important building blocks for post-quantum cryptography. To better understand the quantum hardness of LWE, it is crucial to explore quantum variants of LWE, show quantum algorithms for those variants, or prove they are as hard as standard LWE.

To this end, Chen, Liu, and Zhandry [Eurocrypt 2022] define the $\sf{S|LWE\rangle}$ problem, which encodes the error of LWE samples into quantum amplitudes. They then show efficient quantum algorithms for $\sf{S|LWE\rangle}$ with a few interesting amplitudes. However, the hardness of the most interesting amplitude, Gaussian, was either not addressed by Chen et al. or only known for some restricted settings (for example, when the number of $\sf{S|LWE\rangle}$ samples is very small, it is well known that $\sf{S|LWE\rangle}$ is as hard as standard LWE).
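
Informally, and with normalization and the phase issue discussed below omitted, an $\sf{S|LWE\rangle}$ sample with amplitude $f$ consists of a classical vector $\mathbf{a} \in \mathbb{Z}_q^{n}$ together with a quantum state of the shape
$$\sum_{e \in \mathbb{Z}_q} f(e)\, \big|\, \langle \mathbf{a}, \mathbf{s}\rangle + e \bmod q \,\big\rangle,$$
i.e., the LWE error is carried in the amplitude rather than being measured.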

In this paper, we show new hardness and algorithms for $\sf{S|LWE\rangle}$ with Gaussian and other amplitudes. Our main results are

1. There exist quantum reductions from standard LWE or worst-case GapSVP to $\sf{S|LWE\rangle}$ with Gaussian amplitude with unknown phase, and arbitrarily many $\sf{S|LWE\rangle}$ samples.

2. There is a $2^{\widetilde{O}(\sqrt{n})}$-time algorithm for $\sf{S|LWE\rangle}$ with Gaussian amplitude with known phase, given $2^{\widetilde{O}(\sqrt{n})}$ many quantum samples. The algorithm is modified from Kuperberg's sieve, and in fact works for more general amplitudes as long as the amplitudes and phases are completely known.

One way of interpreting our result is: to show a sub-exponential time quantum algorithm for standard LWE, all we need is to handle phases in $\sf{S|LWE\rangle}$ amplitudes better, either in the algorithm or the reduction.