International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

24 March 2023

Jiaxin Guan, Daniel Wichs, Mark Zhandry
ePrint Report
Consider a state-level adversary who observes and stores large amounts of encrypted data from all users on the Internet, but does not have the capacity to store it all. Later, it may target certain "persons of interest" in order to obtain their decryption keys. We would like to guarantee that, if the adversary's storage capacity is only (say) $1\%$ of the total encrypted data size, then even if it can later obtain the decryption keys of arbitrary users, it can only learn something about the contents of (roughly) $1\%$ of the ciphertexts, while the rest will maintain full security. This can be seen as an extension of incompressible cryptography (Dziembowski CRYPTO '06, Guan, Wichs and Zhandry EUROCRYPT '22) to the multi-user setting. We provide solutions in both the symmetric key and public key setting with various trade-offs in terms of computational assumptions and efficiency. As the core technical tool, we study an information-theoretic problem which we refer to as "somewhere randomness extraction". Suppose $X_1, \ldots, X_t$ are correlated random variables whose total joint min-entropy rate is $\alpha$, but we know nothing else about their individual entropies. We choose $t$ random and independent seeds $S_1, \ldots, S_t$ and attempt to individually extract some small amount of randomness $Y_i = \mathsf{Ext}(X_i;S_i)$ from each $X_i$. We'd like to say that roughly an $\alpha$-fraction of the extracted outputs $Y_i$ should be indistinguishable from uniform even given all the remaining extracted outputs and all the seeds. We show that this indeed holds for specific extractors based on Hadamard and Reed-Muller codes.
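
To make the extraction step concrete, here is a minimal Python sketch of the seeded step $Y_i = \mathsf{Ext}(X_i;S_i)$ using the Hadamard (inner-product) extractor mentioned in the abstract, reduced to a single output bit per source; the sources below are uniform placeholders rather than genuinely correlated high-entropy variables.

```python
import secrets

def hadamard_extract(x: int, seed: int, n: int) -> int:
    """One-bit Hadamard (inner-product) extractor over GF(2):
    output <x, seed> mod 2, i.e. the parity of the bitwise AND."""
    return bin(x & seed & ((1 << n) - 1)).count("1") % 2

# Toy setup: t sources X_1..X_t, each extracted individually with its
# own independent uniform seed S_1..S_t, as in the paper's setting.
n, t = 128, 4
sources = [secrets.randbits(n) for _ in range(t)]  # stand-ins for X_i
seeds = [secrets.randbits(n) for _ in range(t)]    # seeds S_i
outputs = [hadamard_extract(x, s, n) for x, s in zip(sources, seeds)]
print(outputs)  # roughly an alpha-fraction should look uniform
```
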
Manuel Barbosa, François Dupressoir, Benjamin Grégoire, Andreas Hülsing, Matthias Meijers, Pierre-Yves Strub
ePrint Report
This work presents a novel machine-checked tight security proof for $\mathrm{XMSS}$ — a stateful hash-based signature scheme that is (1) standardized in RFC 8391 and NIST SP 800-208, and (2) employed as a primary building block of $\mathrm{SPHINCS}^{+}$, one of the signature schemes recently selected for standardization as a result of NIST’s post-quantum competition. In 2020, Kudinov, Kiktenko, and Fedorov pointed out a flaw affecting the tight security proofs of $\mathrm{SPHINCS}^{+}$ and $\mathrm{XMSS}$. For the case of $\mathrm{SPHINCS}^{+}$, this flaw was fixed in a subsequent tight security proof by Hülsing and Kudinov. Unfortunately, employing the fix from this proof to construct an analogous tight security proof for $\mathrm{XMSS}$ would merely demonstrate security with respect to an insufficient notion. At the cost of modeling the message-hashing function as a random oracle, we complete the tight security proof for $\mathrm{XMSS}$ and formally verify it using the EasyCrypt proof assistant. As part of this endeavor, we formally verify the crucial step common to (the security proofs of) $\mathrm{SPHINCS}^{+}$ and $\mathrm{XMSS}$ that was previously found to be flawed, thereby confirming that the core of the aforementioned security proof by Hülsing and Kudinov is correct. As this is the first work to formally verify proofs for hash-based signature schemes in EasyCrypt, we develop several novel libraries for the fundamental cryptographic concepts underlying such schemes — e.g., hash functions and digital signature schemes — establishing a common starting point for future formal verification efforts. These libraries will be particularly helpful in formally verifying proofs of other hash-based signature schemes such as $\mathrm{LMS}$ or $\mathrm{SPHINCS}^{+}$.
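
For readers unfamiliar with the scheme, XMSS authenticates many one-time key pairs under a single Merkle root. Below is a minimal, self-contained Python sketch of that tree mechanic, with SHA-256 standing in for the scheme's tweakable hash functions and none of XMSS's addressing or bitmasks; it is illustrative only, not the RFC 8391 algorithm.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary hash tree over the leaves (power-of-two count)."""
    layer = [H(leaf) for leaf in leaves]
    while len(layer) > 1:
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def auth_path(leaves, idx):
    """Sibling hashes needed to recompute the root from leaf idx."""
    layer = [H(leaf) for leaf in leaves]
    path = []
    while len(layer) > 1:
        path.append(layer[idx ^ 1])
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        idx //= 2
    return path

def verify(leaf, idx, path, root):
    """Recompute the root from one leaf and its authentication path."""
    node = H(leaf)
    for sib in path:
        node = H(node + sib) if idx % 2 == 0 else H(sib + node)
        idx //= 2
    return node == root

leaves = [bytes([i]) * 4 for i in range(8)]  # stand-ins for OTS public keys
root = merkle_root(leaves)
assert verify(leaves[5], 5, auth_path(leaves, 5), root)
```
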
Simone Galimberti, Maria Potop-Butucaru
ePrint Report
We study the rational behaviors of participants in DAG-based distributed ledgers. We analyze generic algorithms that encapsulate the main actions of participants in a DAG-based distributed ledger: voting for a block and checking its validity. Since these actions have costs, and since validating a block rewards the users who participated in the validation procedure, we use game theory to study how strategic participants behave while trying to maximize their gains. We consider scenarios with different types of participants and investigate whether there exist equilibria where the properties of the protocols are guaranteed. The analysis focuses on equilibria with trembling participants (i.e., rational participants who can take unintended actions with low probability). We find that in the presence of trembling participants, there exist equilibria where protocol properties may be violated.
Claude Carlet, Abderrahman Daif, Sylvain Guilley, Cédric Tavernier
ePrint Report
The implementation of cryptographic algorithms must be protected against physical attacks. Side-channel and fault injection analyses are two prominent such implementation-level attacks. Protections against either do exist; they are characterized by security orders: the higher the order, the more difficult the attack. In this paper, we leverage the fast discrete Fourier transform to reduce the complexity of high-order masking, and we extend it to allow for fault detection and/or correction. The security paradigm is that of code-based masking. Coding theory lends itself both to mixing the information and masking material at a prescribed order, and to detecting and/or correcting errors purposely injected by an attacker. For the first time, we show that quasi-linear masking (pioneered by Goudarzi, Joux and Rivain at ASIACRYPT 2018) can be achieved alongside cost amortisation. This technique consists of masking several symbols/bytes with the same masking material, thereby improving the efficiency of the masking. Similarly, it optimizes the detection capability of the codes, since linear codes become more efficient as the information to protect grows longer. Namely, we prove mathematically that our scheme features side-channel security order of $d+1-t$, detects $d$ faults and corrects $\lfloor(d-1)/2\rfloor$ faults, where $2d+1$ is the encoding length and $t$ is the information size ($t\geq1$). Applied to AES, one can get side-channel protection of order $d=7$ when masking one column/line ($t=4$ bytes) at once. In addition to the theory, which makes use of the Frobenius Additive Fast Fourier Transform, we show performance results, both in software and hardware.
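
As a quick sanity check on the stated trade-off, the following sketch simply evaluates the abstract's formulas for a given encoding length $2d+1$ and information size $t$; the numbers are illustrative arithmetic only.

```python
def masking_params(d: int, t: int) -> dict:
    """Evaluate the trade-off stated in the abstract: side-channel
    order d+1-t, detection of d faults, correction of (d-1)//2
    faults, for encoding length 2d+1 and information size t >= 1."""
    assert d >= t >= 1
    return {
        "encoding_length": 2 * d + 1,
        "side_channel_order": d + 1 - t,
        "faults_detected": d,
        "faults_corrected": (d - 1) // 2,
    }

# Example: d = 7 with t = 4 bytes (one AES column) masked at once.
print(masking_params(7, 4))
```
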
Carsten Baum, Bernardo David, Elena Pagnin, Akira Takahashi
ePrint Report
Time-based cryptographic primitives such as Time-Lock Puzzles (TLPs) and Verifiable Delay Functions (VDFs) have recently found many applications to the efficient design of secure protocols such as randomness beacons or multiparty computation with partial fairness. However, current TLP and VDF candidate constructions rely on the average hardness of sequential computational problems. Unfortunately, obtaining concrete parameters for these is notoriously hard, as there cannot be a large gap between the honest parties’ and the adversary’s runtime when solving the same problem. Moreover, even a constant improvement in algorithms for solving these problems can render parameter choices, and thus deployed systems, insecure, unless very conservative and therefore highly inefficient parameters are chosen.

In this work, we investigate how to construct time-based cryptographic primitives from communication delay, which has a known lower bound given the physical distance between devices: the speed of light. In order to obtain high delays, we explore the sequential communication delay that arises when sending a message through a constellation of satellites. This has the advantage that distances between protocol participants are guaranteed as positions of satellites are observable, so delay lower bounds can be easily computed. At the same time, building cryptographic primitives for this setting is challenging due to the constrained resources of satellites and possible corruptions of parties within the constellation.
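
To see why observable distances yield usable delay bounds, here is a back-of-the-envelope check; the hop count and distances are made up for illustration.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def min_delay_s(path_km: float) -> float:
    """Physical lower bound on the one-way delay of a message that
    traverses a relay path: no signal outruns light."""
    return path_km * 1_000.0 / C

# E.g., 10 inter-satellite hops of 4,000 km each accrue at least ~0.13 s.
print(f"{min_delay_s(10 * 4_000):.3f} s")
```
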

We address these challenges by constructing efficient proofs of sequential communication delay to convince a verifier that a message has accrued delay by traversing a path among satellites. As part of this construction, we propose the first ordered multisignature scheme with security under a version of the discrete logarithm assumption, which enjoys constant-size signatures and, modulo preprocessing, computational complexity independent of the number of signers. Building on our proofs of sequential communication delay, we show new constructions of Publicly Verifiable TLPs and VDFs whose delay guarantees are rooted in physical communication delay lower bounds. Our protocols, as well as the ordered multisignature, are analysed in the Universal Composability framework using novel models for sequential communication delays and (ordered) multisignatures. A direct application of our results is a randomness beacon that only accesses expensive communication resources in case of cheating.
Nico Döttling, Dimitris Kolonelos, Russell W. F. Lai, Chuanwei Lin, Giulio Malavolta, Ahmadreza Rahimi
ePrint Report
Laconic cryptography is an emerging paradigm that enables cryptographic primitives with sublinear communication complexity in just two messages. In particular, a two-message protocol between Alice and Bob is called laconic if its communication and computation complexity are essentially independent of the size of Alice's input. This can be thought of as a dual notion of fully-homomorphic encryption, as it enables "Bob-optimized" protocols. This paradigm has led to tremendous progress in recent years. However, all existing constructions of laconic primitives are considered only of theoretical interest: They all rely on non-black-box cryptographic techniques, which are highly impractical.

This work shows that non-black-box techniques are not necessary for basic laconic cryptography primitives. We propose a completely algebraic construction of laconic encryption, a notion that we introduce in this work, which serves as the cornerstone of our framework. We prove that the scheme is secure under the standard Learning With Errors assumption (with polynomial modulus-to-noise ratio). We provide the first proof-of-concept implementations of laconic primitives, demonstrating that the construction is indeed practical: for a database size of $2^{50}$, encryption and decryption run on the order of single-digit milliseconds.

Laconic encryption can be used as a black box to construct other laconic primitives. Specifically, we show how to construct:

- Laconic oblivious transfer
- Registration-based encryption scheme
- Laconic private-set intersection protocol

All of the above have essentially optimal parameters and similar practical efficiency. Furthermore, our laconic encryption can be preprocessed such that the online encryption step is entirely combinatorial and therefore much more efficient. Using similar techniques, we also obtain identity-based encryption with an unbounded identity space and tight security proof (in the standard model).
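
To illustrate the "Bob-optimized" two-message shape that laconic primitives share, here is a hypothetical interface sketch for laconic oblivious transfer; all names and signatures are invented for illustration and do not reflect the paper's constructions.

```python
from dataclasses import dataclass

@dataclass
class Digest:
    value: bytes  # short commitment to the receiver's large input

def receiver_hash(database: bytes) -> Digest:
    """Message 1: compress a huge database into a short digest."""
    ...

def sender_respond(digest: Digest, index: int, m0: bytes, m1: bytes) -> bytes:
    """Message 2: the sender's work depends on the digest,
    essentially independently of |database|."""
    ...

def receiver_open(ciphertext: bytes, database: bytes, index: int) -> bytes:
    """Recover m_b, where b = database[index]."""
    ...
```
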
Daniel Collins, Simone Colombo, Loïs Huguenin-Dumittan
ePrint Report
This work discusses real world deniability in messaging. We highlight how the different models for cryptographic deniability do not ensure practical deniability. To overcome this situation, we propose a model for real world deniability that takes into account the entire messaging system. We then discuss how deniability is (not) used in practice and the challenges arising from the design of a deniable system. We propose a simple, yet powerful solution for deniability: applications should enable direct modification of local messages; we discuss the impacts of this strong deniability property.
Kang Hoon Lee, Ji Won Yoon
ePrint Report
In the recent history of fully homomorphic encryption, bootstrapping has been actively studied across many HE schemes. As bootstrapping is the essential process that transforms somewhat homomorphic encryption schemes into fully homomorphic ones, enhancing its performance is one of the key factors in improving the utility of homomorphic encryption.

In this paper, we propose an extended bootstrapping for TFHE, which we call EBS. One of the main drawbacks of TFHE bootstrapping is that its precision is mainly determined by the polynomial dimension $N$. Thus, to bootstrap with high precision, one must enlarge $N$ or resort to an alternative method. Our EBS makes it possible to select a small $N$ for the parameters while bootstrapping in a higher dimension to maintain high precision. Moreover, it can easily be parallelized for faster computation, and it readily adapts to the known variants of TFHE bootstrapping based on the original bootstrapping algorithm.

We implement our EBS along with the known full-domain bootstrapping methods ($\mathsf{FDFB}$, $\mathsf{TOTA}$, $\mathsf{Comp}$) and show how much EBS improves the precision of these methods. We provide experimental results and a thorough analysis, showing that EBS can bootstrap with high precision even with a small $N$, and thus with a smaller key size and lower complexity than choosing a large $N$ from the outset.
Keita Emura
ePrint Report
As a multi-receiver variant of public key authenticated encryption with keyword search (PAEKS), broadcast authenticated encryption with keyword search (BAEKS) was proposed by Liu et al. (ACISP 2021). BAEKS focuses on receiver anonymity, where no information about the receiver is leaked from ciphertexts, reminiscent of anonymous broadcast encryption. There is room for improving their security definitions, however: for example, the two challenge sets of receivers are selected before the setup phase, and the adversary is not allowed to corrupt any receiver. In this paper, we propose a generic construction of BAEKS derived from PAEKS that provides ciphertext anonymity and consistency in a multi-receiver setting. The proposed construction extends the generic construction proposed by Libert et al. (PKC 2012) for anonymous broadcast encryption and supports adaptive corruptions. We also demonstrate that the PAEKS scheme of Qin et al. (ProvSec 2021) provides ciphertext anonymity and consistency in a multi-receiver setting.
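
For intuition, the shape of such a Libert et al.-style generic construction might look as follows: one PAEKS ciphertext per receiver, shuffled so that nothing in the ciphertext points at the intended receivers. This is a hypothetical sketch of the idea only, not the paper's construction, which additionally has to arrange consistency and security under adaptive corruptions.

```python
import secrets

def baeks_encrypt(paeks_enc, sender_sk, receiver_pks, keyword):
    """Hypothetical sketch: encrypt the keyword to every receiver
    individually, then shuffle the ciphertexts so their order leaks
    nothing about the receiver set."""
    cts = [paeks_enc(sender_sk, pk, keyword) for pk in receiver_pks]
    for i in range(len(cts) - 1, 0, -1):  # Fisher-Yates shuffle
        j = secrets.randbelow(i + 1)
        cts[i], cts[j] = cts[j], cts[i]
    return cts
```
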
Antigoni Polychroniadou, Gilad Asharov, Benjamin Diamond, Tucker Balch, Hans Buehler, Richard Hua, Suwen Gu, Greg Gimler, Manuela Veloso
ePrint Report
Inventory matching is a standard mechanism for trading financial stocks by which buyers and sellers can be paired. In the financial world, banks often undertake the task of finding such matches between their clients. The related stocks can be traded without adversely impacting the market price for either client. If matches between clients are found, the bank can offer the trade at advantageous rates. If no match is found, the parties have to buy or sell the stock in the public market, which introduces additional costs.

A problem with the process as presently conducted is that the involved parties must share their orders to buy or sell a particular stock, along with the intended quantity (number of shares), with the bank. Clients worry that if this information were to “leak” somehow, other market participants would become aware of their intentions and cause the price to move adversely against them before their transaction finalizes.

We provide a solution, Prime Match, that enables clients to match their orders efficiently with reduced market impact while maintaining privacy. In the case where there are no matches, no information is revealed. Our main cryptographic innovation is a two-round secure linear comparison protocol for computing the minimum of two quantities without preprocessing and with malicious security, which may be of independent interest. We report benchmarks of our Prime Match system, which runs in production and has been adopted by a large US bank, J.P. Morgan. The system is designed using a star topology network, which provides clients with a centralized node (the bank) as an alternative to the idealized assumption of point-to-point connections, which would be impractical and undesirable for clients to implement in reality.

Prime Match is the first secure multiparty computation solution running live in the traditional financial world.
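
The plaintext functionality at the core of the system is tiny; what Prime Match contributes is computing it without revealing the inputs. A sketch of that functionality, with illustrative values:

```python
def match_size(buy_qty: int, sell_qty: int) -> int:
    """The function the secure comparison protocol evaluates on
    hidden quantities: the crossable size is their minimum."""
    return min(buy_qty, sell_qty)

# One client wants to buy 10,000 shares, another to sell 7,500:
# 7,500 shares can be matched internally at advantageous rates.
print(match_size(10_000, 7_500))
```
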
Wai-Kong Lee, Raymond K. Zhao, Ron Steinfeld, Amin Sakzad, Seong Oun Hwang
ePrint Report
The US National Institute of Standards and Technology initiated a standardization process for post-quantum cryptography in 2017, with the aim of selecting key encapsulation mechanisms and signature schemes that can withstand the threat from emerging quantum computers. In 2022, Falcon was selected as one of the standard signature schemes, eventually attracting efforts to optimize the implementation of Falcon on various hardware architectures for practical applications. Recently, Mitaka was proposed as an alternative to Falcon, allowing parallel execution of most of its operations. These recent advancements motivate us to develop high-throughput implementations of the Falcon and Mitaka signature schemes on Graphics Processing Units (GPUs), a massively parallel architecture widely available on cloud service platforms. In this paper, we propose the first parallel implementation of Falcon on various GPUs. An iterative version of the sampling process in Falcon, which is also the most time-consuming Falcon operation, was developed. This allows us to implement Falcon signature generation without relying on expensive recursive function calls on GPUs. In addition, we propose a parallel random samples generation approach to accelerate the performance of Mitaka on GPUs. We evaluate our implementation techniques on state-of-the-art GPU architectures (RTX 3080, A100, T4 and V100). Experimental results show that our Falcon-512 implementation achieves 58,595 signatures/second and 2,721,562 verifications/second on an A100 GPU, which is 20.03× and 29.51× faster than the highly optimized AVX2 implementation on CPU. Our Mitaka implementation achieves 161,985 signatures/second and 1,421,046 verifications/second on the same GPU. Due to the adoption of a parallelizable sampling process, Mitaka signature generation enjoys ≈2–20× higher throughput than Falcon on various GPUs. The high-throughput signature generation and verification achieved by this work can be very useful in various emerging applications, including the Internet of Things.
Tomer Ashur, Erik Takke
ePrint Report
In SAC’14, Biham and Carmeli presented a novel attack on DES, involving a variation of partitioning cryptanalysis. This was further extended in ToSC’18 by Biham and Perle into conditional linear cryptanalysis in the context of Feistel ciphers. In this work, we formalize this cryptanalytic technique for block ciphers in general and derive several properties. This conditional approximation is then used to approximate the function $\mathrm{inv}\colon \mathrm{GF}(2^8) \to \mathrm{GF}(2^8),\ x \mapsto x^{254}$, which forms the only source of non-linearity in the AES. By extending the approximation to encompass the full AES round function, a linear distinguisher for four-round AES in the known-plaintext model is constructed, the existence of which is often understood to be impossible. We furthermore demonstrate a key-recovery attack capable of extracting 32 bits of information from 4-round AES using $2^{125.62}$ data and time. In addition to suggesting a new approach to advancing the cryptanalysis of the AES, this result moreover demonstrates a caveat in the standard interpretation of the Wide Trail Strategy — the design framework underlying many SPN-based ciphers published in recent years.
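
The approximated function is the familiar AES S-box inversion. As a concrete reference point, here is a short Python implementation of $x \mapsto x^{254}$ in $\mathrm{GF}(2^8)$ with the AES reduction polynomial, written as a naive repeated-multiplication loop for clarity rather than speed.

```python
def gf256_mul(a: int, b: int) -> int:
    """Multiplication in GF(2^8) modulo the AES polynomial
    x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def aes_inv(x: int) -> int:
    """inv: x -> x^254; since x^255 = 1 for x != 0, this is the
    field inverse, and 0 maps to 0."""
    y = 1
    for _ in range(254):
        y = gf256_mul(y, x)
    return y

assert aes_inv(0) == 0
assert all(gf256_mul(x, aes_inv(x)) == 1 for x in range(1, 256))
```
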
Dahlia Malkhi, Kartik Nayak
ePrint Report
In this paper, we observe that it is possible to solve partially-synchronous BFT while simultaneously achieving $O(n^2)$ worst-case communication, optimistically linear communication, a two-phase commit regime within a view, and optimistic responsiveness. Prior work falls short in achieving one or more of these properties; e.g., the most closely related work, HotStuff, requires a three-phase view while achieving all other properties. We demonstrate that these properties are achievable through a two-phase HotStuff variant named HotStuff-2.

The quest for two-phase HotStuff variants that achieve all the above desirable properties has been long, producing a series of results that remain sub-optimal and, at the same time, rely on somewhat heavy hammers. HotStuff-2 demonstrates that none of these are necessary: HotStuff-2 is remarkably simple, adding no substantive complexity to the original HotStuff protocol.

The main takeaway is that two phases are enough for BFT after all.
Giuseppe D'Alconzo
ePrint Report
Starting from the problem of $d$-Tensor Isomorphism ($d$-TI), we study the relation between various Code Equivalence problems in different metrics. In particular, we show a reduction from the sum-rank metric (CE${}_{sr}$) to the rank metric (CE${}_{rk}$). To obtain this result, we investigate reductions between tensor problems. We define the Monomial Isomorphism problem for $d$-tensors ($d$-TI${}^*$), where, given two $d$-tensors, we ask whether there are $d-1$ invertible matrices and a monomial matrix sending one tensor to the other. We link this problem to the well-studied $d$-TI, and the TI-completeness of $d$-TI${}^*$ is shown. From this result, we obtain a reduction from CE${}_{sr}$ to CE${}_{rk}$. A similar result was previously known in the literature, but it requires an additional assumption on the automorphisms of matrix codes. Since many constructions based on the hardness of Code Equivalence problems are emerging in cryptography, we analyze how such reductions can be taken into account in the design of cryptosystems based on CE${}_{sr}$.
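
In symbols, the $d$-TI${}^*$ condition from the abstract can be written as follows; this is one natural way to formalize "sending one tensor into the other" via the tensor-product action, not necessarily the paper's exact notation.

```latex
% Given d-tensors T_1, T_2, decide whether
\[
  \exists\, A_1 \in \mathrm{GL}_{n_1}(\mathbb{F}),\ \dots,\
  A_{d-1} \in \mathrm{GL}_{n_{d-1}}(\mathbb{F}),\
  M \ \text{monomial}:\quad
  (A_1 \otimes \cdots \otimes A_{d-1} \otimes M)\, T_1 = T_2 .
\]
```
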
Danilo Francati, Daniele Friolo, Monosij Maitra, Giulio Malavolta, Ahmadreza Rahimi, Daniele Venturi
ePrint Report
Registered encryption (Garg {\em et al.}, TCC'18) is an emerging paradigm that tackles the key-escrow problem associated with identity-based encryption by replacing the private-key generator with a much weaker entity known as the key curator. The key curator holds no secret information and is responsible for: (i) updating the master public key whenever a new user registers its own public key with the system; and (ii) providing helper decryption keys to the users already in the system, so that they remain able to decrypt. For practical purposes, tasks (i) and (ii) need to be efficient, in the sense that the size of the public parameters, of the master public key, and of the helper decryption keys, as well as the running time for key generation and user registration, and the number of updates, must be small.

In this paper, we generalize the notion of registered encryption to the setting of functional encryption (FE). Our contributions are twofold: On the one hand, we show that registered FE exists assuming indistinguishability obfuscation and somewhere statistically binding hash functions. On the other hand, we show an efficient construction of registered FE for the special case of inner-product predicates, over asymmetric bilinear groups of prime order, with provable security in the generic group model.
Joël Alwen, Marta Mularczyk, Yiannis Tselekounis
ePrint Report
Continuous Group Key Agreement (CGKA) lets an evolving group of clients agree on a sequence of group keys. An important application of CGKA is scalable asynchronous end-to-end (E2E) encrypted group messaging.

A major problem preventing the use of CGKA over unreliable infrastructure is so-called forks. A fork occurs when group members have diverging views of the group's history (and thus its current state), e.g., due to network or server failures. Once communication channels are restored, members resolve a fork by agreeing on the state of the group again. Today's CGKA protocols make fork resolution challenging, as natural resolution strategies seem to conflict with the way the protocols enforce group state agreement and forward secrecy. Meanwhile, secure group messaging protocols that do support fork resolution do not scale nearly as well as CGKA does.

In this work, we pave the way to practical scalable E2E messaging over unreliable infrastructure. To that end, we generalize CGKA to Fork Resilient-CGKA which allows clients to process significantly more types of out-of-order network traffic. This is important for many natural fork resolution procedures as they are based, in part, on replaying missed traffic. Next, we give two FR-CGKA constructions: a practical one based on the CGKA underlying the MLS messaging standard and an optimally secure one (albeit with only theoretical efficiency). To further assist with fork resolution, we introduce a simple new abstraction to describe a client's local protocol state. The abstraction describes all and only the information relevant to natural fork resolution, making it easier for higher-level fork resolution procedures to work with and reason about. We define a black-box extension of an FR-CGKA which maintains such a description of a client's internal state. Finally, as a proof of concept, we give a basic fork resolution protocol.
Liam Eagen, Ariel Gabizon
ePrint Report
Given two KZG-committed polynomials $f(X),g(X)\in \mathbb{F}_{
Justin Holmgren, Ruta Jawale
ePrint Report
The goal of a covert learning algorithm is to learn a function $f$ by querying it, while ensuring that an adversary, who sees all queries and their responses, is unable to (efficiently) learn any more about $f$ than they could learn from random input-output pairs. We focus on a relaxation that we call local covertness, in which queries are distributed across $k$ servers and we only limit what is learnable by $k - 1$ colluding servers.

For any constant $k$, we give a locally covert algorithm for efficiently learning any Fourier-sparse function (technically, our notion of learning is improper, agnostic, and with respect to the uniform distribution). Our result holds unconditionally and for computationally unbounded adversaries. Prior to our work, such an algorithm was known only for the special case of $O(\log n)$-juntas, and only with $k = 2$ servers (Ishai et al., CRYPTO 2019).

Our main technical observation is that the original Goldreich-Levin algorithm only utilizes i.i.d. pairs of correlated queries, where each half of every pair is uniformly random. We give a simple generalization of this algorithm in which pairs are replaced by $k$-tuples in which any $k - 1$ components are jointly uniform. The cost of this generalization is that the number of queries needed grows exponentially with $k$.
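
The tuple structure itself is just additive secret sharing of the query point over $\mathrm{GF}(2)^n$: any $k-1$ shares are jointly uniform, so $k-1$ colluding servers see only noise. A minimal sketch of that structure follows; the actual learning algorithm layers Goldreich-Levin-style processing on top of such tuples.

```python
import secrets
from functools import reduce

def share_query(x: int, k: int, nbits: int) -> list:
    """Split query point x in GF(2)^nbits into a k-tuple whose XOR
    is x; any k-1 components are jointly uniform."""
    shares = [secrets.randbits(nbits) for _ in range(k - 1)]
    shares.append(reduce(lambda a, b: a ^ b, shares, x))
    return shares

# Toy check: XORing all k shares reconstructs x.
x, k, n = 0b1011, 3, 4
assert reduce(lambda a, b: a ^ b, share_query(x, k, n)) == x
```
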
Rhys Weatherley
ePrint Report
NIST selected the ASCON family of cryptographic primitives for standardization in February 2023 as the final step in the Lightweight Cryptography Competition. The ASCON submission to the competition provided Authenticated Encryption with Associated Data (AEAD), hashing, and Extendable Output Function (XOF) modes. Real-world cryptographic systems often need more than packet encryption and simple hashing. Keyed message authentication, key derivation, cryptographically secure pseudo-random number generation (CSPRNG), password hashing, and encryption of sensitive values in memory are also important. This paper defines additional modes that can be deployed on top of ASCON, based on proven designs from the literature.
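
As one example of the kind of mode meant here, a keyed MAC can be derived from a XOF by absorbing a length-prefixed key before the message. The sketch below uses SHAKE128 as a stand-in, since ASCON-XOF is not in Python's standard library; it illustrates the general pattern, not the paper's concrete mode.

```python
import hashlib, hmac

def xof_mac(key: bytes, message: bytes, taglen: int = 16) -> bytes:
    """Keyed MAC from a XOF: absorb len(key) || key || message and
    squeeze a taglen-byte tag."""
    x = hashlib.shake_128()
    x.update(len(key).to_bytes(8, "big") + key)  # length-prefix the key
    x.update(message)
    return x.digest(taglen)

tag = xof_mac(b"secret key", b"hello")
assert hmac.compare_digest(tag, xof_mac(b"secret key", b"hello"))
```
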
Dmitrii Koshelev
ePrint Report
The present article provides a novel hash function $\mathcal{H}$ to any elliptic curve of $j$-invariant $\neq 0, 1728$ over a finite field $\mathbb{F}_{\!q}$ of large characteristic. As for most hash functions, the unique bottleneck of $\mathcal{H}$ consists in extracting a square root in $\mathbb{F}_{\!q}$. However, $\mathcal{H}$ is designed in such a way that the root can be found by (Cipolla-Lehmer-)Müller's algorithm in constant time. Violation of this security condition is known to be the only obstacle to applying the given algorithm in the cryptographic context. When the field $\mathbb{F}_{\!q}$ is highly $2$-adic and $q \equiv 1 \ (\mathrm{mod} \ 3)$, the new batching technique is the state-of-the-art hashing solution except for some sporadic curves. Indeed, Müller's algorithm costs $\approx 2\log_2(q)$ multiplications in $\mathbb{F}_{\!q}$. In turn, (constant-time) Tonelli-Shanks's square root algorithm has asymptotic complexity $O(\log(q) + \nu^2)$, where $\nu$ is the $2$-adicity of $\mathbb{F}_{\!q}$. As an example, Müller's algorithm needs $\approx 4561$ fewer multiplications in the field $\mathbb{F}_{\!q}$ (whose $\nu = 96$) of the standardized curve NIST P-224, an acceleration of about $11$ times.