International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

09 October 2023

Sofía Celi, Scott Griffy, Lucjan Hanzlik, Octavio Perez Kempner, Daniel Slamanig
ePrint Report
Digital signature schemes with specific properties have recently seen various real-world applications with a strong emphasis on privacy-enhancing technologies. They have been extensively used to develop anonymous credentials schemes and to achieve an even more comprehensive range of functionalities in the decentralized web.

Substantial work has been done to formalize different types of signatures where an allowable set of transformations can be applied to message-signature pairs to obtain new related pairs. Most of the previous work focused on transformations with respect to the message being signed, but little has been done to study what happens when transformations apply to the signing keys. A first attempt to thoroughly formalize such aspects was carried out by Derler and Slamanig (ePrint '16, Designs, Codes and Cryptography '19), followed by more recent efforts by Backes et al. (ASIACRYPT '18) and Eaton et al. (ePrint '23). However, the literature on the topic is vast, and different terminology is used across contributions, which makes it difficult to compare related works and understand the range of applications covered by a given construction.

In this work, we present a unified view of signatures with randomizable keys and revisit their security properties. We focus on state-of-the-art constructions and related applications, identifying existing challenges. Our systematization allows us to highlight gaps, open questions and directions for future research on signatures with randomizable keys.

06 October 2023

Seung Geol Choi, Dana Dachman-Soled, Mingyu Liang, Linsheng Liu, Arkady Yerukhimovich
ePrint Report
The min-hash sketch is a well-known technique for low-communication approximation of the Jaccard index between two input sets. Moreover, there is a folklore belief that min-hash sketch based protocols protect the privacy of the inputs. In this paper, we investigate this folklore to quantify the privacy of the min-hash sketch.

We begin our investigation by considering the privacy of min-hash in a centralized setting where the hash functions are chosen by the min-hash functionality and are unknown to the participants. We show that in this case the min-hash output satisfies the standard definition of differential privacy (DP) without any additional noise. This immediately yields a privacy-preserving sublinear-communication semi-honest 2-PC protocol based on FHE where the hash function is evaluated homomorphically.

To improve the efficiency of this protocol, we next consider an implementation in the random oracle model. Here, the protocol participants jointly sample public prefixes for domain separation of the random oracle, and locally evaluate the resulting hash functions on their input sets. Unfortunately, we show that in this public hash function setting, the min-hash output is no longer DP. We therefore consider the notion of distributional differential privacy (DDP) introduced by Bassily et al. (FOCS 2013). We show that if the honest party's set has sufficiently high min-entropy then the output of the min-hash functionality achieves DDP, again without any added noise. This yields a more efficient semi-honest two-party protocol in the random oracle model, where parties first locally hash their input sets and then perform a 2PC for comparison. By proving that our protocols satisfy DP and DDP respectively, our results formally confirm and qualify the folklore belief that min-hash based protocols protect the privacy of their inputs.
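For background, the classic min-hash estimator that both protocols build on fits in a few lines. The Python sketch below is purely illustrative; the seeded-prefix hashing loosely mirrors the domain-separation idea above, the set contents and parameters are made up, and this is not the paper's privacy-preserving protocol.

```python
import hashlib

def minhash_signature(s, num_hashes=128):
    """Classic min-hash sketch: for each seeded hash function, keep the
    minimum hash value over the set's elements."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.sha256(f"{seed}:{x}".encode()).digest()[:8], "big")
            for x in s))
    return sig

def estimate_jaccard(sig_a, sig_b):
    """The fraction of coordinates on which the two sketches agree is an
    unbiased estimate of |A intersect B| / |A union B|."""
    matches = sum(a == b for a, b in zip(sig_a, sig_b))
    return matches / len(sig_a)

A = {"apple", "banana", "cherry", "date"}
B = {"banana", "cherry", "date", "fig"}
est = estimate_jaccard(minhash_signature(A), minhash_signature(B))
print(f"estimated Jaccard: {est:.2f}  (true: {len(A & B) / len(A | B):.2f})")
```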

Shiyu Shen, Hao Yang, Wenqian Li, Yunlei Zhao
ePrint Report
The threat posed by quantum computing has precipitated an urgent need for post-quantum cryptography. Recently, the post-quantum digital signature draft FIPS 204 has been published, delineating the details of ML-DSA, which is derived from CRYSTALS-Dilithium. Despite these advancements, server environments, especially those equipped with GPU devices and requiring high-throughput signing, remain entrenched in classical schemes. A conspicuous gap remains in GPU implementations and server-specific designs for ML-DSA.

In this paper, we propose the first server-oriented GPU design tailored for the ML-DSA signing procedure in high-throughput servers. We introduce several innovative theoretical optimizations to bolster performance, including depth-prior sparse ternary polynomial multiplication, the branch elimination method, and the rejection-prioritized checking order. Furthermore, exploiting server-oriented features, we propose a comprehensive GPU hardware design, augmented by a suite of GPU implementation optimizations to further amplify performance. Additionally, we present variants for sampling sparse polynomials, thereby streamlining our design. The deployment of our implementation on both server-grade and commercial GPUs demonstrates significant speedups, ranging from 170.7× to 294.2× over the CPU baseline, and an improvement of up to 60.9% compared to related work, affirming the effectiveness and efficiency of the proposed GPU architecture for the ML-DSA signing procedure.
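For context, sparse ternary polynomial multiplication is worth optimizing because the ML-DSA challenge polynomial has only a few nonzero coefficients, all in $\{-1,+1\}$, so multiplying by it reduces to a handful of signed, rotated additions. The following Python sketch shows only the generic technique, not the paper's depth-prior GPU variant; the toy inputs are made up.

```python
# Schoolbook multiplication by a sparse ternary polynomial in Z_q[x]/(x^n + 1).
# When c has only tau nonzero coefficients in {-1, +1}, computing c*s needs
# just tau signed, shifted additions instead of a full n^2 product.
# n and q are the ML-DSA ring parameters; the example vectors are toy values.

n, q = 256, 8380417

def sparse_ternary_mul(sparse_idx, sparse_sign, s):
    """sparse_idx[i] is the degree of the i-th nonzero coefficient,
    sparse_sign[i] its sign (+1 or -1); s is a dense polynomial of length n."""
    r = [0] * n
    for k, sgn in zip(sparse_idx, sparse_sign):
        for j, sj in enumerate(s):
            d = k + j
            if d < n:
                r[d] = (r[d] + sgn * sj) % q
            else:
                # reduction mod x^n + 1: x^(n+i) = -x^i
                r[d - n] = (r[d - n] - sgn * sj) % q
    return r

# toy example: c = x^3 - x^10, s = 1 + 2x
print(sparse_ternary_mul([3, 10], [1, -1], [1, 2] + [0] * (n - 2))[:16])
```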

Dragan Lambić
ePrint Report
In this paper, a reduced set of submatrices for a faster evaluation of the MDS property of a circulant matrix with entries that are powers of two is proposed. A proposition is made that, under the condition that all entries of a t × t circulant matrix are powers of 2, it is sufficient to check only its 2 × 2 submatrices in order to evaluate the MDS property. Although there is no theoretical proof to support this proposition at this point, experimental results on a sample of 100,000 randomly generated matrices indicate that it is true. The proposed MDS test benefits the efficiency of search methods for generating circulant MDS matrices regardless of the correctness of the proposition; if the proposition is correct, however, its impact on the speed of such search methods will be substantial, enabling the generation of MDS matrices of large sizes. Also, a modified version of the make_binary_powers function is presented. Based on this modified function and the proposed MDS test, some examples of efficient 16 × 16 MDS matrices are presented, as well as examples of efficient 24 × 24 matrices whose MDS property should be further validated.
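To make the proposed test concrete, here is a minimal Python sketch of the 2 × 2-only check over GF(2^8). The reduction polynomial (the AES one, x^8 + x^4 + x^3 + x + 1) and the example matrix are illustrative assumptions, since the abstract does not fix the field.

```python
# Sketch of the proposed 2x2-only MDS check for a circulant matrix whose
# entries are powers of 2, i.e. powers of x in GF(2^8).

def gf_mul(a, b, poly=0x11B):
    """Carry-less multiply in GF(2^8) with the AES reduction polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def circulant(first_row):
    n = len(first_row)
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

def all_2x2_nonsingular(m):
    """Every 2x2 submatrix [[a, b], [c, d]] must have nonzero determinant;
    in characteristic 2 that means gf_mul(a, d) != gf_mul(b, c)."""
    n = len(m)
    return all(
        gf_mul(m[r1][c1], m[r2][c2]) != gf_mul(m[r1][c2], m[r2][c1])
        for r1 in range(n) for r2 in range(r1 + 1, n)
        for c1 in range(n) for c2 in range(c1 + 1, n))

# 4x4 circulant with power-of-two entries x^0, x^1, x^3, x^7
print(all_2x2_nonsingular(circulant([0x01, 0x02, 0x08, 0x80])))  # True
```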

Charlotte Lefevre, Yanis Belkheyar, Joan Daemen
ePrint Report
We present a construction, called Kirby, for building a variable-input-length pseudorandom function (VIL-PRF) from a $b$-bit permutation. For this construction we prove a tight bound of $b/2$ bits of security on the PRF distinguishing advantage in the random permutation model and in the multi-user setting. Similar to full-state keyed sponge/duplex, it supports full-state absorbing and additionally supports full-state squeezing, where the latter can at most squeeze $b-c$ bits per permutation call for a security level of $c$ bits. This advantage is especially relevant on constrained platforms when using a permutation with small width $b$. For instance, for $b=256$ at equal security strength the squeezing rate of Kirby is twice that of keyed sponge/duplex. We define a simple mode on top of Kirby that turns it into a deck function with parallel expansion. This deck function is suited for lightweight applications in the sense that it has a low memory footprint. Moreover, for short inputs it can be used for low-latency stream encryption: the time between the availability of the input and the keystream is only a single permutation call. Another feature that sets Kirby apart from other constructions is that leakage of an intermediate state does not allow recovering the key or \textit{earlier states}.

Rujia Li, Yuanzhao Li, Qin Wang, Sisi Duan, Qi Wang, Mark Ryan
ePrint Report
With the increasing scale and complexity of online activities, accountability, as an after-the-fact mechanism, has become an effective complementary approach to ensure system security. Decades of research have delved into the connotation of accountability. They fail, however, to achieve \textit{practical} accountability of decryption. This paper seeks to address this gap. We consider the scenario where a client (called the encryptor, her) encrypts her data and then chooses a delegate (a.k.a. the decryptor, him) that stores the data for her. If the decryptor does not behave correctly, his behavior will, with non-negligible probability, be detected, making the decryptor \textit{accountable} for decryption.

We make three contributions. First, we review key definitions of accountability known so far. Based on extensive investigations, we formalize new definitions of accountability specifically targeting the decryption process, denoted as \textit{accountable decryption}, and discuss the (im)possibilities when capturing this concept. We also define the corresponding security goals. Second, we present a novel hardware-assisted solution aligning with these definitions. Instead of fully trusting the TEE like previous TEE-based accountability solutions, we take a further step, making the TEE work in the ``trust, but verify'' model where a compromised state is detectable. Third, we implement a full-fledged system and conduct evaluations. The results demonstrate that our solution is efficient. Even in a scenario involving $300,000$ log entries, the decryption process concludes in approximately $5.5$ms, and malicious decryptors can be identified within $69$ms.

Matteo Campanelli, Antonio Faonio, Dario Fiore, Tianyu Li, Helger Lipmaa
ePrint Report
Lookup arguments allow one to prove that the elements of a committed vector come from a (bigger) committed table. They enable novel approaches to reduce the prover complexity of general-purpose zkSNARKs, implementing “non-arithmetic operations” such as range checks, XOR and AND more efficiently. We extend the notion of lookup arguments along two directions and improve their efficiency:

(1) we extend vector lookups to matrix lookups (where we can prove that a committed matrix is a submatrix of a committed table).

(2) We consider the notion of zero-knowledge lookup argument that keeps the privacy of both the sub-vector/sub-matrix and the table.

(3) We present new zero-knowledge lookup arguments, dubbed cq+, zkcq+ and cq++, that are more efficient than the state of the art, namely the recent work by Eagen, Fiore and Gabizon named cq.

Finally, we give a novel application of zero-knowledge matrix lookup arguments to the domain of zero-knowledge decision trees, where the model provider releases a commitment to a decision tree and can prove statistics over the committed data structure in zero knowledge. Our lookup-based scheme has succinct verification, prover time complexity asymptotically better than the state of the art, and is secure in a strong security model where the commitment to the decision tree can be malicious.
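Stripped of commitments and zero-knowledge, the relation that a lookup argument proves is easy to state. The Python sketch below checks one natural row-wise reading of the matrix-lookup relation (repeats allowed); the precise relation is defined in the paper, so treat this as illustration only.

```python
# Plain (non-ZK) relation behind a lookup argument: every row of the
# matrix M appears among the rows of the table T, repeats allowed.
# A vector lookup is the special case of single-element rows.

def matrix_lookup_holds(M, T):
    table_rows = {tuple(row) for row in T}
    return all(tuple(row) in table_rows for row in M)

# classic use: a range check, proving all committed values lie in [0, 256)
T = [[v] for v in range(256)]
M = [[17], [255], [17], [3]]
print(matrix_lookup_holds(M, T))  # True
```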

Siemen Dhooghe, Artemii Ovchinnikov
ePrint Report
Modern block ciphers designed for hardware and masked with Threshold Implementations (TIs) provide provable security against first-order attacks. However, the application of TIs leaves designers with a trade-off between security and cost, for example the cost of generating the required random bits, which comes with increased overhead in terms of area and latency. Decreasing the number of random bits used for masking reduces this overhead. We propose to reduce the randomness used to mask the secrets, such as the plaintext. For that purpose, we suggest relaxing the requirement of uniformity of the input shares and reusing randomness for their masking in first-order TIs. We apply our countermeasures to first-order TIs of the Prince and Midori64 ciphers with three shares. Since the designs with non-uniform masks are no longer perfectly first-order probing secure, we provide further analysis by calculating bounds on the advantage of a noisy threshold-probing adversary. We then make use of the PROLEAD tool, which implements statistical tests verifying robust probing security, and compare its output with our estimates. Finally, we evaluate the designs on FPGA to highlight the practical security of our solution. We observe that their security holds while requiring four times less randomness than uniform TIs.
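To illustrate the randomness budget being traded off, here is a minimal Python sketch of three-share Boolean masking as used in TIs. The reuse hook is a toy stand-in for the idea of relaxing input-share uniformity; it is not the paper's actual construction.

```python
import secrets

# Three-share Boolean masking: x = s0 ^ s1 ^ s2. A uniform sharing draws
# two fresh random shares per secret; reusing (r0, r1) across several
# secrets saves random bits but makes the sharing non-uniform, which is
# the trade-off analyzed in the paper.

def share3(x, r0=None, r1=None, bits=64):
    r0 = secrets.randbits(bits) if r0 is None else r0  # fresh unless reused
    r1 = secrets.randbits(bits) if r1 is None else r1
    return (r0, r1, x ^ r0 ^ r1)

def unshare3(shares):
    s0, s1, s2 = shares
    return s0 ^ s1 ^ s2

x = 0xDEADBEEF
assert unshare3(share3(x)) == x
```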

Jacob Leshno, Rafael Pass, Elaine Shi
ePrint Report
Traditional payment processors are the subject of antitrust concerns and regulations. Open decentralized ledgers (e.g., Bitcoin) provide an alternative. They do not rely on a central authority, avoiding antitrust and monopoly concerns. However, the open nature of these systems gives rise to many challenges, including fundamental questions about their security. To address this question, we consider a framework that combines economic theory and distributed systems theory and define economic security for general permissionless decentralized ledgers. Analysis of Bitcoin’s Nakamoto protocol shows that block rewards are ineffective in providing economic security due to limitations of incentives in environments with many anonymous participants. We present an alternative protocol showing that an open decentralized ledger can be economically secure.

Julia Len, Melissa Chase, Esha Ghosh, Kim Laine, Radames Cruz Moreno
ePrint Report
Key Transparency (KT) refers to a public key distribution system with transparency mechanisms proving its correct operation, i.e., proving that it reports consistent values for each user's public key. While prior work on KT systems has offered new designs to tackle this problem, relatively little attention has been paid to the issue of scalability. Indeed, it is not straightforward to actually build a scalable and practical KT system from existing constructions, which may be too complex, inefficient, or non-resilient against machine failures.

In this paper, we present OPTIKS, a full-featured and optimized KT system that focuses on scalability. Our system is simpler and more performant than prior work, supporting smaller storage overhead while still meeting strong notions of security and privacy. Our design also incorporates a crash-tolerant and scalable server architecture, which we demonstrate by presenting extensive benchmarks. Finally, we address several real-world problems in deploying KT systems that have received limited attention in prior work, including account decommissioning and user-to-device mapping.

José Bacelar Almeida, Denis Firsov, Tiago Oliveira, Dominique Unruh
ePrint Report
We give a semantic characterization of leakage-freeness through timing side-channels for Jasmin programs. Our characterization also covers probabilistic Jasmin programs that are not constant-time. In addition, we provide a characterization in terms of probabilistic relational Hoare logic and prove the equivalence of both definitions. We also prove that our new characterizations are compositional. Finally, we relate our new definitions to existing ones from prior work, which only apply to deterministic programs.

To test our definitions, we use the Jasmin toolchain to develop a rejection sampling algorithm and prove (in EasyCrypt) that the implementation is leakage-free whilst not being constant-time.
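As a flavour of the case study: a rejection sampler is a natural example of code that can be leakage-free without being constant-time, because its running time reveals only the number of rejections, which is independent of the value finally accepted. The following is a generic Python sketch of such a sampler, not the paper's Jasmin implementation.

```python
import secrets

# Uniform value in [0, bound) from uniform random bytes. The loop runs a
# random number of times, so the code is not constant-time, yet the
# rejection count is independent of the accepted output, which is the
# intuition behind proving such code leakage-free.

def sample_uniform(bound, byte_len=4):
    limit = (256 ** byte_len // bound) * bound  # largest multiple of bound
    while True:
        r = int.from_bytes(secrets.token_bytes(byte_len), "big")
        if r < limit:            # accept; otherwise retry
            return r % bound

print(sample_uniform(8380417))
```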

Marcel Tiepelt, Edward Eaton, Douglas Stebila
ePrint Report
The KHAPE-HMQV protocol is a state-of-the-art highly efficient asymmetric password-authenticated key exchange protocol that provides several desirable security properties, but has the drawback of being vulnerable to quantum adversaries due to its reliance on discrete logarithm-based building blocks: solving a single discrete logarithm allows the attacker to perform an offline dictionary attack and recover the password. We show how to modify KHAPE-HMQV to make the protocol quantum-annoying: a classical adversary who has the additional ability to solve discrete logarithms can only break the protocol by solving a discrete logarithm for each guess of the password. While not fully resistant to attacks by quantum computers, a quantum-annoying protocol could offer some resistance to quantum adversaries for whom discrete logarithms are relatively expensive. Our modification to the protocol is small: encryption (using an ideal cipher) is added to one message. Our analysis uses the same ideal cipher model assumption as the original analysis of KHAPE, and quantum annoyingness is modelled using an extension of the generic group model which gives a classical adversary a discrete logarithm oracle.

Michele Ciampi, Rafail Ostrovsky, Luisa Siniscalchi, Hendrik Waldner
ePrint Report
In this work we study the problem of minimizing the round complexity of securely evaluating multiparty functionalities while making black-box use of polynomial-time assumptions. In Eurocrypt 2016, Garg et al. showed that, assuming all parties have access to a broadcast channel, at least four rounds of communication are required to securely realize non-trivial functionalities in the plain model. A sequence of follow-up works matched this lower bound under a variety of assumptions. Unfortunately, none of these works makes black-box use of the underlying cryptographic primitives. In Crypto 2021, Ishai, Khurana, Sahai, and Srinivasan came closer to matching the four-round lower bound, obtaining a five-round protocol that makes black-box use of oblivious transfer and PKE with pseudorandom public keys.

In this work, we show how to realize any input-less functionality (e.g., coin-tossing, generation of key-pairs, and so on) in four rounds while making black-box use of two-round oblivious transfer. As an additional result, we construct the first four-round MPC protocol for generic functionalities that makes black-box use of the underlying primitives, achieving security against non-aborting adversaries.

Our protocols are based on a new primitive called list two-party computation. This primitive offers relaxed security compared to the standard notion of secure two-party computation. Despite this relaxation, we argue that this tool suffices for our applications. List two-party computation is of independent interest, as we argue it can also be used for the generation of setups, like oblivious transfer correlated randomness, in three rounds. Prior to our work, generating such a setup required at least four rounds of interaction or a trusted third party.

Tomoki Moriya, Hiroshi Onuki, Yusuke Aikawa, Tsuyoshi Takagi
ePrint Report
In 2022, Moriya, Onuki, Aikawa, and Takagi proposed a new framework named generalized Montgomery coordinates to treat one-coordinate type formulas for computing isogenies. This framework generalizes some already known one-coordinate type formulas for elliptic curves. Their result shows that a formula to compute image points under isogenies is unique in the framework of generalized Montgomery coordinates; however, a formula to compute image curves is not. This raises a question: which formula is the most efficient for computing image curves in the framework of generalized Montgomery coordinates?

In this paper, we analyze the costs of formulas to compute image curves of $3$-isogenies in the framework of generalized Montgomery coordinates. From our result, the lower bound of the costs is $1\mathbf{M}+1\mathbf{S}$ for a formula whose input and output are in affine coordinates, $2\mathbf{S}$ for an affine formula whose output is projective, and $2\mathbf{M}+3\mathbf{S}$ for a projective formula.

Hiroki Okada, Rachel Player, Simon Pohmann, Christian Weinert
ePrint Report
Private information retrieval (PIR) protocols allow clients to access database entries without revealing the queried indices. They have many real-world applications, including privately querying patent, compromised-credential, and contact databases. While existing PIR protocols that have been implemented perform reasonably well in practice, they all have suboptimal asymptotic complexities.

A line of work has explored so-called doubly-efficient PIR (DEPIR), which refers to single-server PIR protocols with optimal asymptotic complexities. Recently, Lin, Mook, and Wichs (STOC 2023) presented the first protocol that completely satisfies the DEPIR constraints and can be rigorously proven secure. Unfortunately, their proposal is purely theoretical in nature. It is even speculated that such protocols are completely impractical, and hence no implementation of any DEPIR protocol exists.

In this work, we challenge this assumption. We propose several optimizations for the protocol of Lin, Mook, and Wichs that improve both asymptotic and concrete running times, as well as storage requirements, by orders of magnitude. Furthermore, we implement the resulting protocol and show that for batch queries it outperforms state-of-the-art protocols.

Neyire Deniz Sarier
ePrint Report
In [1], Sarier presents a practical biometric-based non-transferable credential scheme that maintains the efficiency of the underlying Brands credential. In this paper, we design a new Blockchain-Based E-Voting (BBEV) scheme that combines the system of [1] with encrypted Attribute-Based Credentials in a non-transferable code-voting approach to achieve an efficient, usable, anonymous, transparent, auditable, verifiable, receipt-free and coercion-resistant remote voting system for small/medium-scale elections. To the best of our knowledge, this is the first user-centric BBEV scheme that depends on the one-show Brands credential both for biometric authentication and for pre-encrypted ballot generation, leading to a natural prevention of double voting. Even though the system is instantiated on the Bitcoin Blockchain, due to its prevalence and the various coin mixers available for anonymity, the design is generic, i.e. independent of the cryptocurrency. Thus, the new BBEV scheme can be extended to large-scale elections on public Blockchains with higher throughput/cheaper transaction fees. Finally, a cost analysis based on data from the last USA presidential election shows that the new BBEV is advantageous over the traditional approach if implemented for three consecutive elections.

03 October 2023

Douala, Cameroon, 10 July 2024
Event Calendar
Event date: 10 July 2024
Submission deadline: 16 February 2024
Notification: 26 April 2024

Amaury Pouly, Yixin Shen
ePrint Report
Learning with Errors (LWE) is an important problem for post-quantum cryptography (PQC) that underpins the security of several NIST-selected PQC algorithms. Several recent papers have claimed improvements on the complexity of so-called dual attacks on LWE. These improvements make dual attacks comparable to or even better than primal attacks in certain parameter regimes. Unfortunately, those improvements rely on a number of untested and hard-to-test statistical assumptions. Furthermore, a recent paper claims that the whole premise of those improvements might be incorrect.

The goal of this paper is to improve the situation by proving the correctness of a dual attack without relying on any statistical assumption. Although our attack is greatly simplified compared to the recent ones, it shares all the important statistical elements with those attacks and can serve as a basis for the analysis of more advanced attacks.

Our main contribution is to clearly identify a set of parameters under which our attack (and presumably other recent dual attacks) can work. Furthermore, our analysis completely departs from the existing statistics-based analysis and is instead rooted in geometry. We also compare the regime in which our algorithm works to the ``contradictory regime'' of Ducas and Pulles. We observe that those two regimes are essentially complementary but also that their statistical model does not seem to match what happens in our attack.

Ran Cohen, Julian Loss, Tal Moran
ePrint Report
Byzantine agreement (BA) asks for a set of parties to reach agreement in an adversarial setting. A central question is how to construct efficient BA protocols that scale well with the number of parties. In particular, the communication complexity is a critical barrier for large-scale implementations.

State-of-the-art, scalable BA protocols typically work by sampling a small, unpredictable committee of parties that will send messages in each round. These messages must reach all honest parties, to allow the next round's committee to function. In practice, this is usually accomplished by propagating messages over a gossip network, implemented over a partial communication graph. Most formulations of gossip networks have an ideal guarantee that every message delivered to any honest party will be delivered to every other honest party. Unfortunately, realizing this guarantee necessarily makes the protocol vulnerable to denial-of-service attacks, since an adversary can flood the network with many messages that the protocol must deliver to all parties.

In this paper, we make several contributions towards realizing the goal of efficient, scalable Byzantine agreement over a gossip network:

1. We define ``gossip with abort,'' a relaxed gossip model that can be efficiently realized with minor modifications to existing gossip protocols, yet allows for significant savings in communication compared to the full point-to-point model.

2. Our protocols work in a graded PKI model, in which honest parties only have partial agreement about the set of participants in the protocol. This model arises naturally in settings without trusted setup, such as the ``permissionless'' setting underlying many blockchain protocols.

3. We construct a new, player-replaceable BA protocol in the graded PKI model. The concrete communication complexity of our protocol, for typical parameter values, is more than 25 times better than the current state-of-the-art BA protocols in the honest-majority setting.

Tomoki Moriya
ePrint Report
Isogeny-based cryptography is one of the candidates for post-quantum cryptography. One of its benefits is compactness. In particular, the key exchange scheme SIDH allowed the use of a $4\lambda$-bit prime for the security parameter $\lambda$.

Unfortunately, SIDH was broken in 2022. Since then, several isogeny-based key exchange and public key encryption schemes have been proposed; however, most of these schemes use primes whose sizes are not guaranteed to be linear in the security parameter $\lambda$. As far as we know, the remaining schemes either have not been implemented, due to the cost of computing isogenies of high-dimensional abelian varieties, or need to use a ``weak'' curve (\textit{i.e.}, a curve whose endomorphism ring is known) as the starting curve.

In this study, we propose a novel compact isogeny-based key encapsulation mechanism named IS-CUBE, built via Kani's theorem and a $3$-dimensional SIDH diagram. The prime used in IS-CUBE has a size of about $8\lambda$ bits, and its starting curve is a random supersingular elliptic curve. The core idea of IS-CUBE comes from the hardness of some already known computational problems and of a novel one, the Computational Long Isogeny with Torsion (CLIT) problem: computing a hidden isogeny given two supersingular elliptic curves and information about torsion points of relatively small order. In our PoC implementation of IS-CUBE via \textsf{sagemath}, it takes about $4.34$ sec for public key generation, $0.61$ sec for encapsulation, and $17.13$ sec for decapsulation for $\lambda = 128$.