International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

10 May 2024

Justine Paillet, Pierre-Augustin Berthet, Cédric Tavernier
ePrint Report
FALCON is a candidate for standardization of the new Post-Quantum Cryptography (PQC) primitives by the National Institute of Standards and Technology (NIST). However, it remains a challenge to define efficient countermeasures against side-channel attacks (SCA) for this algorithm. FALCON is a lattice-based signature scheme that relies on rational numbers, which is unusual in the cryptography field. While recent work proposed a solution to mask the addition and the multiplication, some roadblocks remain, most noticeably how to protect the floor function. In this work, we propose to complete the existing first trials of hardening FALCON against SCA. We provide mathematical proofs of our methods as well as formal security proofs in the probing model using the Non-Interference concepts.
Nicolai Müller, Amir Moradi
ePrint Report
Fault Injection (FI) attacks, which involve intentionally introducing faults into a system to cause it to behave in an unintended manner, are widely recognized and pose a significant threat to the security of cryptographic primitives implemented in hardware, making fault tolerance an increasingly critical concern. However, protecting cryptographic hardware primitives securely and efficiently, even with well-established and documented methods such as redundant computation, can be a time-consuming, error-prone, and expertise-demanding task. In this research, we present a comprehensive and fully-automated software solution for the Automated Generation of Fault-Resistant Circuits (AGEFA). Our application employs a generic and extensively researched methodology for the secure integration of countermeasures based on Error-Correcting Codes (ECCs) into cryptographic hardware circuits. Our software tool allows designers without hardware security expertise to develop fault-tolerant hardware circuits with pre-defined correction capabilities under a comprehensive fault adversary model. Moreover, our tool applies to masked designs without violating the masking security requirements, in particular to designs generated by the tool AGEMA. We evaluate the effectiveness of our approach through experiments on various block ciphers and demonstrate its ability to produce fault-tolerant circuits. Additionally, we assess the security of examples generated by AGEFA against Side-Channel Analysis (SCA) and FI using state-of-the-art leakage and fault evaluation tools.
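
The ECC-based redundancy that AGEFA automates can be pictured with a toy single-error-correcting code. The sketch below is plain Python, not AGEFA output, and only a software stand-in for what the tool synthesizes as hardware: it encodes four data bits with a Hamming(7,4) code and corrects one injected bit flip.

```python
# Toy illustration of ECC-based fault tolerance: Hamming(7,4) encode/correct.
# Software sketch only; AGEFA integrates such codes into hardware circuits.

def hamming74_encode(d):
    """d: list of 4 data bits [d1, d2, d3, d4] -> 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the faulty bit, 0 if none
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
codeword[5] ^= 1                      # inject a single-bit fault
assert hamming74_correct(codeword) == [1, 0, 1, 1]
```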
Sejun Kim, Wen Wang, Duhyeong Kim, Adish Vartak, Michael Steiner, Rosario Cammarota
ePrint Report
Fully Homomorphic Encryption (FHE) is a transformative technology that enables computations on encrypted data without requiring decryption, promising enhanced data privacy. However, its adoption has been limited due to significant performance overheads. Recent advances include the proposal of domain-specific, highly parallel hardware accelerators designed to overcome these limitations. This paper introduces PICA, a comprehensive compiler framework designed to simplify the programming of these specialized FHE accelerators and their integration with existing FHE libraries. PICA leverages a novel polynomial Instruction Set Architecture (p-ISA), which abstracts polynomial rings and their arithmetic operations and serves as a fundamental data type for the creation of compact, efficient code implementing high-level operations on polynomial rings, referred to as kernels, encompassing FHE primitives such as arithmetic and ciphertext management. We detail a kernel generation framework that translates high-level FHE operations into pseudo-code using p-ISA, and a subsequent tracing framework that incorporates p-ISA functionalities and kernels into established FHE libraries. Additionally, we introduce a mapper to coordinate multiple FHE kernels for optimal application performance on targeted hardware accelerators. Our evaluations demonstrate PICA's efficacy in creating compact and efficient code when compared with an x64 architecture, particularly in managing complex FHE operations such as relinearization, where we observe a 25.24x instruction count reduction even when a large batch size (8192) is taken into account.
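
The polynomial-ring data type that p-ISA abstracts can be made concrete with a minimal sketch. The snippet below is a plain Python illustration of arithmetic in Z_q[X]/(X^n + 1), the ring underlying most FHE schemes; it is not PICA's p-ISA or generated kernel code, just the arithmetic those kernels express.

```python
# Minimal sketch of arithmetic in Z_q[X]/(X^n + 1) via schoolbook negacyclic
# multiplication. Illustration only; p-ISA kernels target hardware accelerators.

def poly_add(a, b, q):
    return [(x + y) % q for x, y in zip(a, b)]

def poly_mul_negacyclic(a, b, q):
    n = len(a)
    res = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                res[k] = (res[k] + a[i] * b[j]) % q
            else:                       # X^n = -1 wraps around with a sign flip
                res[k - n] = (res[k - n] - a[i] * b[j]) % q
    return res

q, n = 97, 4
a = [1, 2, 0, 0]      # 1 + 2X
b = [0, 0, 0, 3]      # 3X^3
print(poly_mul_negacyclic(a, b, q))   # (1 + 2X) * 3X^3 = 3X^3 - 6  ->  [91, 0, 0, 3]
```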
Zahra Javar, Bruce M. Kapron
ePrint Report
We extend the Linicrypt framework for characterizing hash function security as proposed by McQuoid, Swope, and Rosulek (TCC 2018) to support constructions in the ideal cipher model. In this setting, we give a characterization of collision- and second-preimage-resistance in terms of a linear-algebraic condition on Linicrypt programs, and present an efficient algorithm for determining whether a program satisfies the condition. As an application, we consider the case of the block-cipher-based hash functions proposed by Preneel, Govaerts, and Vandewalle (Crypto 1993), and show that the semantic analysis of PGV given by Black et al. (J. Crypto. 2010) can be captured as a special case of our characterization. In addition, we model hash functions constructed through the Merkle-Damgård transformation within the Linicrypt framework. Finally, we apply this model to an analysis of how various attacks on the underlying compression functions can compromise the collision resistance of the resulting hash function.
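
As a concrete reference point for the constructions analyzed here, the sketch below builds a hash via the Merkle-Damgård transformation from a block-cipher-based compression function (Davies-Meyer, one of the classical PGV modes). The 64-bit Feistel "cipher" is a toy chosen only to keep the example self-contained; it is not secure and is not part of the paper.

```python
# Merkle-Damgård from a Davies-Meyer compression function h_i = E_{m_i}(h_{i-1}) ^ h_{i-1}.
# The toy 64-bit Feistel cipher below only keeps the example self-contained; it is NOT secure.

MASK32 = 0xFFFFFFFF

def toy_block_cipher(key, block, rounds=16):
    """Encrypt a 64-bit block under a 64-bit key with a small Feistel network."""
    left, right = block >> 32, block & MASK32
    for r in range(rounds):
        round_key = (key >> ((r % 2) * 32)) & MASK32
        f = ((right * 2654435761 + round_key + r) & MASK32) ^ (right >> 5)
        left, right = right, left ^ f
    return (left << 32) | right

def davies_meyer(h, m):
    return toy_block_cipher(m, h) ^ h

def md_hash(msg: bytes, iv=0x0123456789ABCDEF):
    # Merkle-Damgård strengthening: append the bit length, pad to 8-byte blocks.
    padded = msg + b"\x80" + b"\x00" * ((-len(msg) - 9) % 8) + (8 * len(msg)).to_bytes(8, "big")
    h = iv
    for i in range(0, len(padded), 8):
        m = int.from_bytes(padded[i:i + 8], "big")
        h = davies_meyer(h, m)
    return h

print(hex(md_hash(b"linicrypt example")))
```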
Remco Bloemen, Daniel Kales, Philipp Sippl, Roman Walch
ePrint Report
In this work we tackle privacy concerns in biometric verification systems that typically require server-side processing of sensitive data (e.g., fingerprints and Iris Codes). Concretely, we design a solution that allows us to query whether a given Iris Code is similar to one contained in a given database, while all queries and datasets are being protected using secure multiparty computation (MPC). Addressing the substantial performance demands of operational systems like World ID and aid distributions by the Red Cross, we propose new protocols to improve performance by more than three orders of magnitude compared to the recent state-of-the-art system Janus (S&P 24). Our final protocol can achieve a throughput of over a million Iris Code comparisons per second on a single CPU core, while protecting the privacy of both the query and database Iris Codes. We additionally investigate GPU acceleration for some building blocks of our protocol, which results in further speedups of over 38x compared to the respective multi-threaded CPU implementation.
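
Iris Codes are conventionally compared with a masked fractional Hamming distance. The sketch below shows that plaintext comparison rule (an assumption about the underlying matcher, with an illustrative threshold); the paper's contribution is evaluating this kind of comparison inside MPC so that neither the query nor the database codes are revealed.

```python
# Plaintext masked fractional Hamming distance between two iris codes
# (bit strings with validity masks). Unprotected reference only; the paper
# performs such comparisons under secure multiparty computation.

def iris_distance(code_a: int, mask_a: int, code_b: int, mask_b: int) -> float:
    valid = mask_a & mask_b                     # bits reliable in both captures
    disagreements = (code_a ^ code_b) & valid
    return bin(disagreements).count("1") / bin(valid).count("1")

def is_match(code_a, mask_a, code_b, mask_b, threshold=0.36):
    # threshold is illustrative, not a value taken from the paper
    return iris_distance(code_a, mask_a, code_b, mask_b) < threshold

# Toy 16-bit "iris codes": 2 disagreeing bits out of 14 valid ones -> distance ~0.143.
print(iris_distance(0b1010101010101010, 0xFFFF,
                    0b1010101010011010, 0b1111111111111100))
```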
Krishnendu Chatterjee, Amirali Ebrahim-Zadeh, Mehrdad Karrabi, Krzysztof Pietrzak, Michelle Yeo, Djordje Zikelic
ePrint Report
We study selfish mining attacks in longest-chain blockchains like Bitcoin, but where the proof of work is replaced with efficient proof systems -- like proofs of stake or proofs of space -- and consider the problem of computing an optimal selfish mining attack which maximizes expected relative revenue of the adversary, thus minimizing the chain quality. To this end, we propose a novel selfish mining attack that aims to maximize this objective and formally model the attack as a Markov decision process (MDP). We then present a formal analysis procedure which computes an $\epsilon$-tight lower bound on the optimal expected relative revenue in the MDP and a strategy that achieves this $\epsilon$-tight lower bound, where $\epsilon>0$ may be any specified precision. Our analysis is fully automated and provides formal guarantees on the correctness. We evaluate our selfish mining attack and observe that it achieves superior expected relative revenue compared to two considered baselines.

Concurrent work [Sarenche FC'24] performs an automated analysis of selfish mining in predictable longest-chain blockchains based on efficient proof systems. Predictable means that the randomness for the challenges is fixed for many blocks (as used, e.g., in Ouroboros), while we consider unpredictable (Bitcoin-like) chains where the challenge is derived from the previous block.
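
For readers unfamiliar with the MDP machinery, the toy snippet below runs textbook value iteration on a two-state MDP to show how an optimal expected (discounted) value is computed. The states, probabilities, and rewards are invented for illustration; the paper's expected relative revenue is a ratio objective over the selfish-mining MDP, so its epsilon-tight bounds need the specialized procedure described above rather than this iteration.

```python
# Textbook value iteration on a tiny invented MDP, for intuition only. The paper
# optimizes expected *relative* revenue (a ratio objective) over a selfish-mining
# MDP, which requires the more refined analysis described in the abstract.

def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-9):
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (R[(s, a)] + gamma * V[s2]) for s2, p in P[(s, a)].items())
                for a in actions[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Two abstract states: "ahead" (adversary holds a private block) and "behind".
states = ["ahead", "behind"]
actions = {"ahead": ["publish", "withhold"], "behind": ["adopt"]}
P = {
    ("ahead", "publish"):  {"behind": 1.0},
    ("ahead", "withhold"): {"ahead": 0.3, "behind": 0.7},
    ("behind", "adopt"):   {"ahead": 0.3, "behind": 0.7},
}
R = {("ahead", "publish"): 1.0, ("ahead", "withhold"): 0.0, ("behind", "adopt"): 0.0}
print(value_iteration(states, actions, P, R))
```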
Tao Lu, Haoyu Wang, Wenjie Qu, Zonghui Wang, Jinye He, Tianyang Tao, Wenzhi Chen, Jiaheng Zhang
ePrint Report
In recent years, cloud vendors have started to supply paid services for data analysis by providing interfaces of their well-trained neural network models. However, customers lack tools to verify whether outcomes supplied by cloud vendors are correct inferences from particular models, in the face of lazy or malicious vendors. The cryptographic primitive called zero-knowledge proof (ZKP) addresses this problem. It enables the outcomes to be verifiable without leaking information about the models. Unfortunately, existing ZKP schemes for neural networks have high computational overheads, especially for the non-linear layers in neural networks.

In this paper, we propose an efficient and extensible ZKP framework for neural networks. Our work improves the performance of the proofs for non-linear layers. Compared to previous works relying on the technology of bit decomposition, we convert complex non-linear relations into range and exponent relations, which significantly reduces the number of constraints required to prove non-linear layers. Moreover, we adopt a modular design to make our framework compatible with more neural networks. Specifically, we propose two enhanced range and lookup proofs as basic blocks. They are efficient in proving the satisfaction of range and exponent relations. Then, we constrain the correct calculation of primitive non-linear operations using a small number of range and exponent relations. Finally, we build our ZKP framework from the primitive operations to the entire neural networks, offering the flexibility for expansion to various neural networks.

We implement our ZKPs for convolutional and transformer neural networks. The evaluation results show that our work achieves over $168.6\times$ (up to $477.2\times$) speedup for separated non-linear layers and $41.4\times$ speedup for the entire ResNet-101 convolutional neural network, when compared with the state-of-the-art work, Mystique. In addition, our work can prove GPT-2, a transformer neural network with $117$ million parameters, in $287.1$ seconds, achieving $35.7\times$ speedup over ZKML, which is a state-of-the-art work supporting transformer neural networks.
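
To make the idea of replacing bit decomposition with range relations concrete: a ReLU output y = max(x, 0), for example, is pinned down by the constraints y >= 0, y - x >= 0, and y * (y - x) = 0, i.e., two range checks and one multiplication. The checker below verifies these constraints over plaintext integers; it is our own illustrative reformulation, not the paper's actual constraint system or proof protocol.

```python
# Illustration of expressing a non-linear layer operation (ReLU) with range relations:
#   y = max(x, 0)   <=>   y >= 0,  y - x >= 0,  y * (y - x) == 0.
# Checked here on plaintext values; the paper proves such relations in zero knowledge
# with dedicated range and lookup arguments.

def relu_constraints_hold(x: int, y: int) -> bool:
    return y >= 0 and y - x >= 0 and y * (y - x) == 0

assert relu_constraints_hold(5, 5)        # honest: relu(5) = 5
assert relu_constraints_hold(-3, 0)       # honest: relu(-3) = 0
assert not relu_constraints_hold(5, 0)    # wrong claimed output is rejected
assert not relu_constraints_hold(-3, 2)   # wrong claimed output is rejected
```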
Rune Fiedler, Felix Günther
ePrint Report
Signal recently deployed a new handshake protocol named PQXDH to protect against "harvest-now-decrypt-later" attacks by a future quantum computer. To this end, PQXDH adds a post-quantum KEM to the Diffie-Hellman combinations of the prior X3DH handshake.

In this work, we give a reductionist security analysis of Signal's PQXDH handshake in a game-based security model that captures the targeted "maximum-exposure" security, allowing fine-grained compromise of users' long-term, semi-static, and ephemeral key material. We augment prior such models to capture not only the added KEM component but also the signing of public keys, which prior analyses did not capture but which adds an additional flavor of post-quantum security in PQXDH. We then establish a fully parameterized, concrete security bound for the session key security of PQXDH, in particular shedding light on a KEM binding property we require for PQXDH's security, and how to avoid it.

Our discussion of KEM binding complements the tool-based analysis of PQXDH by Bhargavan, Jacomme, Kiefer, and Schmidt, which pointed out a potential re-encapsulation attack if the KEM shared secret does not bind the public key. We show that both Kyber (used in PQXDH) and its current NIST draft standard ML-KEM (foreseen to replace Kyber once standardized) satisfy a novel binding notion we introduce and rely on for our PQXDH analysis, which may be of independent interest.
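
The binding issue can be illustrated with a key-derivation sketch: if the final session key depends only on the raw shared secrets, a KEM whose shared secret does not bind the public key leaves room for re-encapsulation mixups, whereas also hashing the KEM public key and ciphertext into the KDF ties the key to one specific encapsulation. The combiner below is a generic HKDF-style sketch of that idea with invented labels, not Signal's actual PQXDH key derivation or the paper's binding notion.

```python
# Generic sketch of "binding" the KEM encapsulation into the session key: derive
# the key not only from the shared secrets but also from the KEM public key and
# ciphertext. Illustration only; not PQXDH's actual KDF.
import hashlib
import hmac

def hkdf(ikm: bytes, info: bytes, salt: bytes = b"\x00" * 32, length: int = 32) -> bytes:
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def derive_session_key(dh_outputs: list, kem_shared_secret: bytes,
                       kem_public_key: bytes, kem_ciphertext: bytes) -> bytes:
    ikm = b"".join(dh_outputs) + kem_shared_secret
    # Binding: the context fed to the KDF pins down which public key and
    # ciphertext the KEM shared secret came from.
    info = b"example-handshake-v1" + kem_public_key + kem_ciphertext
    return hkdf(ikm, info)
```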
Tomoyuki Morimae, Shogo Yamada, Takashi Yamakawa
ePrint Report
Unpredictable functions (UPFs) play essential roles in classical cryptography, including message authentication codes (MACs) and digital signatures. In this paper, we introduce a quantum analog of UPFs, which we call unpredictable state generators (UPSGs). UPSGs are implied by pseudorandom function-like state generators (PRFSs), which are a quantum analog of pseudorandom functions (PRFs), and therefore UPSGs could exist even if one-way functions do not exist, similar to other recently introduced primitives like pseudorandom state generators (PRSGs), one-way state generators (OWSGs), and EFIs. In classical cryptography, UPFs are equivalent to PRFs, but in the quantum case the equivalence is not clear, and UPSGs could be weaker than PRFSs. Despite this, we demonstrate that all known applications of PRFSs are also achievable with UPSGs. These include IND-CPA-secure secret-key encryption and EUF-CMA-secure MACs with unclonable tags. Our findings suggest that, for many applications, quantum unpredictability, rather than quantum pseudorandomness, is sufficient.
Yun Li, Daniel Escudero, Yufei Duan, Zhicong Huang, Cheng Hong, Chao Zhang, Yifan Song
ePrint Report
Multiple works have designed or used maliciously secure honest majority MPC protocols over $\mathbb{Z}_{2^k}$ using replicated secret sharing (e.g. Koti et al. USENIX’21, and the references therein). A recent trend in the design of such MPC protocols is to first execute a semi-honest protocol, and then use a check that verifies the correctness of the computation requiring only a sublinear amount of communication in terms of the circuit size. The so-called Galois ring extensions are needed in order to execute such checks over $\mathbb{Z}_{2^k}$, but these rings incur incredibly high computation overheads, which completely undermine any potential benefits the ring $\mathbb{Z}_{2^k}$ had to begin with. In this work we revisit the task of designing sublinear distributed product checks on replicated secret-shared data over $\mathbb{Z}_{2^k}$ among three parties with an honest majority. We present a novel technique for verifying the correctness of a set of multiplication (in fact, inner product) triples, with a cost that is sublinear in the number of multiplications. Most importantly, unlike previous works, our tools entirely avoid Galois ring extensions, and only require computation over rings of the form $\mathbb{Z}_{2^l}$. In terms of communication, our checks are 3∼5× lighter than existing checks using ring extensions, which is already quite remarkable. However, our most noticeable improvement is in terms of computation: avoiding extensions allows our checks to be 17.7∼44.2× better than previous approaches, for many parameter regimes of interest. Our experimental results show that checking a 10 million gate circuit with the 3PC protocol from (Boyle et al., CCS’19) takes about two minutes, while our approach takes only 2.82 seconds. Finally, our techniques are not restricted to the three-party case, and we generalize them to replicated secret sharing with an arbitrary number of parties n. Even though the share size in this scheme grows exponentially with n, prior works have used it for n = 4 or n = 5 (or even general n for feasibility results), and our distributed checks also represent improvements in these contexts.
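
For context, the data layout these checks operate on is 3-party replicated secret sharing over $\mathbb{Z}_{2^k}$: a value is split into three additive shares and each party holds two of them, so any two parties can reconstruct. The sketch below shows only this sharing and reconstruction; the paper's contribution, the sublinear product check on such shares, is not reproduced here.

```python
# 3-party replicated secret sharing over Z_{2^k}: x = x1 + x2 + x3 (mod 2^k),
# party i holds the pair (x_i, x_{i+1}). Sharing/reconstruction only; the
# paper's sublinear multiplication check is not shown.
import secrets

K = 64
MOD = 1 << K

def share(x: int):
    x1, x2 = secrets.randbelow(MOD), secrets.randbelow(MOD)
    x3 = (x - x1 - x2) % MOD
    return [(x1, x2), (x2, x3), (x3, x1)]   # shares held by parties 0, 1, 2

def reconstruct(share_i, share_j, i: int, j: int):
    # Any two parties jointly hold all three additive shares.
    values = {}
    for p, (a, b) in ((i, share_i), (j, share_j)):
        values[p % 3] = a
        values[(p + 1) % 3] = b
    return sum(values.values()) % MOD

shares = share(123456789)
assert reconstruct(shares[0], shares[1], 0, 1) == 123456789
assert reconstruct(shares[1], shares[2], 1, 2) == 123456789
```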
Elijah Pelofske
ePrint Report
RSA is an incredibly successful and useful asymmetric encryption algorithm. One of the types of implementation flaws in RSA is low entropy in key generation, specifically in the prime number creation stage. This can occur due to flawed usage of random prime number generator libraries, or on computers that lack a source of external entropy. These implementation flaws result in some RSA keys sharing prime factors, which means that the full factorization of the public modulus can be recovered incredibly efficiently by performing a GCD computation between the two public key moduli that share the prime factor. However, since one does not know a priori which of the composite moduli share a prime factor, to determine if any such shared prime factors exist, an all-to-all GCD attack (also known as a batch GCD attack, or a bulk GCD attack) can be performed on the available public keys so as to recover any shared prime factors. This study describes a novel all-to-all batch GCD algorithm, which will be referred to as the binary tree batch GCD algorithm, that is more efficient than the current best batch GCD algorithm (the remainder tree batch GCD algorithm). A comparison against the best existing batch GCD method (a product tree followed by a remainder tree computation) is given using a dataset of random RSA moduli that are constructed such that some of the moduli share prime factors. The proposed binary tree batch GCD algorithm has better runtime than the existing remainder tree batch GCD algorithm, although asymptotically it has nearly identical scaling, and its complexity depends on how many shared prime factors exist in the set of RSA keys. In practice, the implementation of the proposed binary tree batch GCD algorithm achieves a roughly 6x speedup compared to the standard remainder tree batch GCD approach.
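
For reference, the baseline that the proposed binary tree algorithm is compared against (the product-tree/remainder-tree batch GCD) can be sketched in a few lines; the paper's new binary tree variant is not reproduced here.

```python
# Baseline batch GCD via a product tree followed by a remainder tree
# (the existing method the paper's binary tree algorithm is compared against).
from math import gcd, prod

def product_tree(moduli):
    tree = [list(moduli)]
    while len(tree[-1]) > 1:
        prev = tree[-1]
        tree.append([prod(prev[i:i + 2]) for i in range(0, len(prev), 2)])
    return tree

def batch_gcd(moduli):
    tree = product_tree(moduli)
    remainders = tree.pop()                      # top level: product of all moduli
    while tree:
        level = tree.pop()
        remainders = [remainders[i // 2] % (n * n) for i, n in enumerate(level)]
    # gcd(N, (P/N) mod N) reveals any prime factor N shares with another modulus.
    return [gcd(n, r // n) for n, r in zip(moduli, remainders)]

# 35 and 77 share the factor 7; 221 shares nothing with the others.
print(batch_gcd([35, 77, 221]))   # [7, 7, 1]
```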
Vladimir Braverman, Kevin Garbe, Eli Jaffe, Rafail Ostrovsky
ePrint Report
We present a framework for privacy-preserving streaming algorithms which combine the memory efficiency of streaming algorithms with strong privacy guarantees. These algorithms enable a number of servers to compute aggregate statistics efficiently on large quantities of user data without learning the users' inputs. While there exists limited prior work that fits within our model, our work is the first to formally define a general framework, interpret existing methods within this general framework, and develop new tools broadly applicable to this model. To highlight our model, we designed and implemented a new privacy-preserving streaming algorithm to compute heavy hitters, which are the most frequent elements in a data stream. We provide a performance comparison between our system and Poplar, the only other private statistics algorithm which supports heavy hitters, benchmarking both systems on the same hardware platform. Of note, Poplar requires linear space compared to our poly-logarithmic space, meaning our system is the first to compute heavy hitters within the privacy-preserving streaming model. A small memory footprint allows our algorithm (among other benefits) to run efficiently on very large input sizes without running out of memory or crashing.
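
For readers unfamiliar with the streaming setting: a (non-private) heavy-hitters sketch such as Misra-Gries keeps only a handful of counters, far below the stream length, and still recovers every element occurring more than n/k times. The sketch below is that classical plaintext algorithm; the paper's contribution is computing such statistics across servers without any of them seeing the underlying inputs.

```python
# Classical (non-private) Misra-Gries heavy-hitters sketch: k-1 counters suffice
# to find every element occurring more than n/k times in a stream of length n.
# The paper computes this kind of statistic privately across multiple servers.

def misra_gries(stream, k):
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement all counters; drop any that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters   # candidate heavy hitters (a second pass can verify exact counts)

stream = ["a", "b", "a", "c", "a", "a", "b", "d", "a"]
print(misra_gries(stream, k=3))   # "a" (5 of 9 occurrences) survives as a candidate
```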
Gennady Khalimov, Yevgen Kotukh, Maksym Kolisnyk, Svitlana Khalimova, Oleksandr Sievierinov
ePrint Report
The discourse herein pertains to a directional encryption cryptosystem predicated upon logarithmic signatures interconnected via a system of linear equations (we call it LINE). A logarithmic signature serves as a foundational cryptographic primitive within the algorithm, characterized by distinct cryptographic attributes including nonlinearity, noncommutativity, unidirectionality, and factorizability by key. The confidentiality of the cryptosystem is contingent upon the presence of an incomplete system of equations and the substantial ambiguity inherent in the matrix transformations integral to the algorithm. Classical cryptanalysis endeavors are constrained by the potency of the secret matrix transformation and the indeterminacy surrounding solutions to the system of linear equations featuring logarithmic signatures. Such cryptanalysis methodologies, being exhaustive in nature, invariably exhibit exponential complexity. The absence of inherent group computations within the algorithm, and by extension, the inability to exploit group properties associated with the periodicity of group elements, serves to reduce quantum cryptanalysis to Grover's search algorithm. LINE, predicated upon an incomplete system of linear equations, embodies security levels ranging from 1 to 5, as stipulated by NIST, and thus presents a promising candidate for the construction of post-quantum cryptosystems.
Victor Shoup
ePrint Report
The Asynchronous Common Subset (ACS) problem is a fundamental problem in distributed computing. Very recently, Das et al. (2024) developed a new ACS protocol with several desirable properties: (i) it provides optimal resilience, tolerating up to $t < n/3$ corrupt parties out of $n$ parties in total, (ii) it does not rely on a trusted setup, (iii) it utilizes only "lightweight" cryptography, which can be instantiated using just a hash function, and (iv) it has expected round complexity $O(1)$ and expected communication complexity $O(\kappa n^3)$, where $\kappa$ is the output length of the hash function. The purpose of this paper is to give a detailed, self-contained exposition and analysis of this protocol from the point of view of modern theoretical cryptography, fleshing out a number of details of the definitions and proofs, providing a complete security analysis based on concrete security assumptions on the hash function (i.e., without relying on random oracles), and developing all of the underlying theory in the universal composability framework.
Richard Wassmer
ePrint Report
This paper's purpose is to give a new method of analyzing Beale Cipher 1 and Cipher 3 and to show that there is no key which will decipher them into sentences. Previous research has largely used statistical methods to either decipher them or prove they have no solution. Some of these methods show that there is a high probability, but not certainty that they are unsolvable. Both ciphers remain unsolved. The methods used in this paper are not statistical ones based on thousands of samples. The evidence given here shows there is a high correlation between locations of certain numbers in the ciphers with locations in the written text that was given with these ciphers in the 1885 pamphlet called "The Beale Papers". Evidence is correlated with a long monotonically increasing Gillogly String in Cipher 1, when translated with the Declaration of Independence given in the pamphlet. The Beale Papers' writer was anonymous, and words in the three written letters in the 1885 pamphlet are compared with locations of numbers in the ciphers to show who the writer was. Emphasis is on numbers which are controllable by the encipherer. Letter location sums are used when they are the most plausible ones found. Evidence supports the statement that Cipher 1 and Cipher 3 are unintelligible. It also supports the statement that they were designed to have no intelligible sentences because they were part of a complex game made by the anonymous writer of The Beale Papers.
Jesko Dujmovic, Mohammad Hajiabadi
ePrint Report
Private information retrieval (PIR) is a fundamental cryptographic primitive that allows a user to fetch a database entry without revealing to the server which database entry it learns. PIR becomes non-trivial if the server communication is less than the database size. We show that building (even) very weak forms of single-server PIR protocols, without pre-processing, requires the number of public-key operations to scale linearly in the database size. This holds irrespective of the number of symmetric-key operations performed by the parties. We then use this bound to examine the related problem of communication-efficient oblivious transfer (OT) extension. Oblivious transfer is a crucial building block in secure multi-party computation (MPC). In most MPC protocols, OT invocations are the main bottleneck in terms of computation and communication. OT extension techniques allow one to minimize the number of public-key operations in MPC protocols. One drawback of all existing OT extension protocols is their communication overhead. In particular, the sender’s communication is roughly double what is information-theoretically optimal. We show that OT extension with close to optimal sender communication is impossible, illustrating that the communication overhead is inherent. Our techniques go much further; we can show many lower bounds on communication-efficient MPC. For example, we prove that to build high-rate string OT from generic groups, the sender needs to do linearly many group operations.
Pierre Briaud
ePrint Report
Recently, [eprint/2024/250] and [eprint/2024/347] proposed two algebraic attacks on the $\mathsf{Anemoi}$ permutation [Crypto '23]. In this note, we construct a Gröbner basis for the ideal generated by the naive modeling of the $\mathsf{CICO}$ problem associated to $\mathsf{Anemoi}$, in odd and in even characteristics, for one and several branches. We also infer the degree of the ideal from this Gröbner basis, while previous works relied on upper bounds.

07 May 2024

Sapienza University of Rome, Italy
Job Posting
A postdoctoral position is open in the Computer Science Department at Sapienza University of Rome, in Italy, hosted by Prof. Daniele Venturi.

The position involves performing research in theoretical and applied cryptography, particularly on public-key cryptography and secure multi-party computation. The successful candidate will start as soon as possible, and in any case not later than December 2024.

The position is offered for 1 year and can be extended contingent upon funding and satisfactory performance. The salary is competitive with similar positions at top institutions in Europe.

Applicants should ideally have a strong background in provable security, and are expected to be highly motivated and mathematically mature. Applications should be sent by email to venturi@di.uniroma1.it and must include: (i) a CV with a list of publications, (ii) a short research statement, and (iii) names and contact information of 2-3 potential references.

Closing date for applications:

Contact: Daniele Venturi

Heriot-Watt University, UK
Job Posting
The proposed PhD research aims to address the critical need for efficient baseband processing architectures in next-generation communication systems like 6G. Baseband processing is pivotal for enhancing communication system performance, impacting factors such as data throughput, energy efficiency, latency, and overall reliability. This study will delve into optimizing baseband processing algorithms by exploring innovative techniques from machine learning, parallel computing, and hardware acceleration. The objective is to design architectures capable of meeting the stringent demands of emerging communication standards while prioritizing energy efficiency and cost-effectiveness. By tackling these challenges, this research seeks to significantly contribute to advancing communication systems, enabling faster, more reliable, and scalable wireless networks.

Closing date for applications:

Contact: Dr. Mohd Tasleem Khan


06 May 2024

Lukas Aumayr, Zeta Avarikioti, Matteo Maffei, Giulia Scaffino, Dionysis Zindros
ePrint Report
Designing light clients for Proof-of-Work blockchains has been a foundational problem since Nakamoto's SPV construction in the Bitcoin paper. Over the years, communication was reduced from O(C) down to O(polylog(C)), where C is the system's lifetime. We present Blink, the first provably secure O(1) light client that does not require a trusted setup.