International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

30 January 2023

Gabrielle De Micheli, Duhyeong Kim, Daniele Micciancio, Adam Suhl
ePrint Report
Amortized bootstrapping offers a way to simultaneously refresh many ciphertexts of a fully homomorphic encryption scheme, at a total cost comparable to that of refreshing a single ciphertext. An amortization method for FHEW-style cryptosystems was first proposed by Micciancio and Sorrell (ICALP 2018), who showed that the amortized cost of bootstrapping $n$ FHEW-style ciphertexts can be reduced from $O(n)$ basic cryptographic operations to just $O(n^{\epsilon})$, for any constant $\epsilon>0$. However, despite the promising asymptotic saving, the algorithm was rather impractical due to a large constant (exponential in $1/\epsilon$) hidden in the asymptotic notation. In this work, we propose an alternative amortized bootstrapping method with much smaller overhead, still achieving $O(n^\epsilon)$ asymptotic amortized cost, but with a hidden constant that is only linear in $1/\epsilon$, and with reduced noise growth. This is achieved by following the general strategy of Micciancio and Sorrell, but replacing their use of the Nussbaumer transform with a much more practical Number Theoretic Transform, with multiplication by twiddle factors implemented using ring automorphisms. A key technical ingredient is a new "scheme switching" technique proposed in this paper, which may be of independent interest.
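As an editorial illustration of the transform the abstract refers to, the Python sketch below shows a minimal textbook radix-2 Number Theoretic Transform; the prime, the root of unity, and the input are toy values chosen for the example, and the paper's key idea of realizing twiddle-factor multiplications via ring automorphisms is not shown here.

# Minimal textbook Number Theoretic Transform (NTT) over Z_q, for illustration only.
# Parameters below (q, root) are toy values, not the paper's; the paper's technique of
# implementing twiddle-factor multiplications via ring automorphisms is NOT shown here.
def ntt(a, q, root):
    """Recursive radix-2 NTT of list `a` (length a power of two) modulo q.
    `root` must be a primitive len(a)-th root of unity mod q."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], q, pow(root, 2, q))
    odd = ntt(a[1::2], q, pow(root, 2, q))
    out = [0] * n
    w = 1
    for k in range(n // 2):
        t = (w * odd[k]) % q           # multiplication by the twiddle factor w = root^k
        out[k] = (even[k] + t) % q
        out[k + n // 2] = (even[k] - t) % q
        w = (w * root) % q
    return out

# Toy usage: q = 17, and 9 = 3^2 is a primitive 8-th root of unity mod 17,
# so we can transform an 8-coefficient polynomial.
print(ntt([1, 2, 3, 4, 0, 0, 0, 0], 17, 9))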
Vahid Amin-Ghafari, Mohammad Ali Orumiehchiha, Saeed Rostami
ePrint Report
A few small-state stream ciphers (SSCs) have been proposed for constrained environments. All SSCs preceding the LILLE stream cipher suffered from distinguishing attacks and fast correlation attacks. The designers of LILLE claimed that, being based on the well-studied two-key Even-Mansour scheme, it is resistant to various types of attacks. This paper proposes a distinguishing attack on LILLE, the first attack since 2018. The data and time complexities to attack LILLE-40 are $2^{50.7}$ and $2^{41.2}$, respectively. We practically verified our attack on a halved version of LILLE-40. A countermeasure is suggested to strengthen LILLE against the proposed attack. We hope our attack opens the door to more cryptanalyses of LILLE.
Ripon Patgiri, Laiphrakpam Dolendro Singh
ePrint Report
In this paper, we propose a variable-sized, one-way, and randomized secure hash algorithm, VORSHA for short. We present six variants of VORSHA, which are able to generate a randomized secure hash value. VORSHA is the first secure hash algorithm to randomize the secure hash value fully. The key embodiment of our proposed algorithm is to generate a pool of pseudo-random bits using the primary hash functions and selects a few bits from the pool of bits to form the final randomized secure hash value. Each hash value of the primary hash function produces a single bit (either 0 or 1) for the pool of pseudo-random bits. Thus, VORSHA randomized the generated bit string to produce the secure hash value, and we term it as a randomized secure hash value. Moreover, the randomized secure hash value is tested using NIST-SP 800-22 statistical test suite, and the generated randomized secure hash value of VORSHA has passed all 15 statistical tests of NIST-SP 800-22. It proves that the VORSHA is able to generate a highly unpredictable yet consistent secure hash value. Moreover, VORSHA features a memory-hardness property to restrict a high degree of parallelism, which features a tiny memory footprint for legal users but massive memory requirements for adversaries. Furthermore, we demonstrate how to prevent Rainbow Table as a Service (RTaaS) attack using VORSHA. The source code is available at https://github.com/patgiri/VORSHA.
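As an editorial sketch of the bit-pool idea described above, and not the authors' VORSHA construction, the following Python toy derives one pool bit per invocation of a primary hash and selects a subset of pool bits to form a randomized digest; the pool size, selection rule, output length, and all function names are arbitrary choices made up for this example.

# Toy illustration of the bit-pool idea, NOT the VORSHA algorithm itself.
import hashlib
import secrets

def primary_bit(data: bytes, counter: int) -> int:
    """Derive one pool bit from a primary-hash invocation (parity of the digest's first byte)."""
    digest = hashlib.sha256(data + counter.to_bytes(4, "big")).digest()
    return digest[0] & 1

def toy_randomized_hash(message: bytes, pool_size: int = 512, out_bits: int = 256) -> str:
    # Pool of pseudo-random bits, one bit per primary-hash call.
    pool = [primary_bit(message, i) for i in range(pool_size)]
    # Randomized selection of pool positions; a real scheme would derive the selection
    # from secret/keyed state rather than from OS randomness.
    positions = sorted(secrets.SystemRandom().sample(range(pool_size), out_bits))
    bits = "".join(str(pool[p]) for p in positions)
    return hex(int(bits, 2))

print(toy_randomized_hash(b"example message"))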
Bologna, Italia, 25 May - 26 May 2023
Event Calendar
Event date: 25 May to 26 May 2023
Submission deadline: 24 February 2023
Notification: 7 April 2023
Neuchâtel, Switzerland, 27 June - 30 June 2023
Event Calendar
Event date: 27 June to 30 June 2023
Submission deadline: 17 January 2023
Notification: 27 April 2023
Chicago, USA, 2 July - 8 July 2023
Event Calendar
Event date: 2 July to 8 July 2023
Submission deadline: 5 March 2023
Guangzhou, China, 4 December - 8 December 2023
Asiacrypt
Event date: 4 December to 8 December 2023

29 January 2023

Lyon, France, 23 April 2023
Event Calendar
Event date: 23 April 2023
Submission deadline: 7 March 2023
Canterbury, United Kingdom, 14 August - 16 August 2023
Event Calendar
Event date: 14 August to 16 August 2023
Submission deadline: 3 March 2023
Yokohama, Japan, 29 August - 31 August 2023
Event Calendar
Event date: 29 August to 31 August 2023
Submission deadline: 26 March 2023
Notification: 30 May 2023

28 January 2023

Ling Sun, Meiqin Wang
ePrint Report
Automatic methods for differential and linear characteristic search are by now well established. Typically, the designers of novel ciphers also provide preliminary analytical findings on the differential and linear properties obtained using automatic techniques. However, neither MILP-based nor SAT/SMT-based approaches have fully resolved the problem of searching for actual differential and linear characteristics of ciphers with large S-boxes. To tackle this issue, we present three strategies for developing SAT models for 8-bit S-boxes that are geared toward differential probabilities and linear correlations. While these approaches cannot guarantee a minimum model size, the time needed to obtain the models is drastically reduced. The newly proposed SAT model for large S-boxes enables us to establish that the upper bound on the differential probability for 14 rounds of SKINNY-128 is $2^{-131}$, thereby completing the unsuccessful work of Abdelkhalek et al. We also analyse the seven AES-based constructions C1 - C7 designed by Jean and Nikolic and compute the minimum number of active S-boxes necessary to cause an internal collision using the SAT method. For two constructions, C3 and C5, the current lower bound on the number of active S-boxes is increased, resulting in a more precise security analysis for these two structures.
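For context, the following Python sketch computes the difference distribution table (DDT), the object whose nonzero entries any SAT model of an S-box's differential behaviour must encode; the 4-bit S-box used here (PRESENT's) is purely illustrative and much smaller than the 8-bit S-boxes targeted by the paper.

# Sketch of the object an S-box SAT model encodes: the difference distribution table.
# Each nonzero entry DDT[a][b] is a valid (input-difference, output-difference) transition
# with probability DDT[a][b] / 2^n.  The 4-bit S-box below (PRESENT's) is only illustrative.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD, 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for a in range(n):
            b = sbox[x] ^ sbox[x ^ a]
            table[a][b] += 1
    return table

T = ddt(SBOX)
# For this S-box the highest nonzero-difference entry is 4, i.e. probability 2^-2.
best = max(T[a][b] for a in range(1, 16) for b in range(16))
print("maximum nonzero-difference DDT entry:", best)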
Kyle Storrier, Adithya Vadapalli, Allan Lyons, Ryan Henry
ePrint Report
We introduce Grotto, a framework and C++ library for space- and time-efficient $(2+1)$-party piecewise polynomial (i.e., spline) evaluation on secrets additively shared over $\mathbb{Z}_{2^{n}}$. Grotto improves on the state-of-the-art approaches based on distributed comparison functions (DCFs) in almost every metric, offering asymptotically superior communication and computation costs with the same or lower round complexity. At the heart of Grotto is a novel observation about the structure of the "tree" representation underlying the most efficient distributed point functions (DPFs) from the literature, alongside an efficient algorithm that leverages this structure to do with a single DPF what state-of-the-art approaches require many DCFs to do. Our open-source Grotto implementation supports evaluating dozens of useful functions out of the box, including trigonometric and hyperbolic functions (and their inverses); various logarithms; roots, reciprocals, and reciprocal roots; sign testing and bit counting; and over two dozen of the most common (univariate) activation functions from the deep-learning literature.
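As an editorial reference point for the functionality being computed on shares, here is cleartext piecewise-polynomial (spline) evaluation in Python; the breakpoints and coefficients are arbitrary examples, not taken from the paper, and nothing secret-shared or secure is shown.

# Cleartext reference for the functionality computed on additive shares: a spline is a list
# of breakpoints and per-interval coefficients; evaluation picks the interval containing x
# and evaluates that interval's polynomial.  Example breakpoints/coefficients are made up.
from bisect import bisect_right

def eval_spline(breakpoints, coeffs, x):
    """breakpoints: sorted interval endpoints; coeffs[i]: coefficients (low degree first)
    of the polynomial used on [breakpoints[i], breakpoints[i+1])."""
    i = bisect_right(breakpoints, x) - 1
    i = max(0, min(i, len(coeffs) - 1))
    return sum(c * x**k for k, c in enumerate(coeffs[i]))

# A crude 2-piece ReLU-like spline on [-4, 4):
breaks = [-4.0, 0.0, 4.0]
pieces = [[0.0, 0.0],    # f(x) = 0 on [-4, 0)
          [0.0, 1.0]]    # f(x) = x on [ 0, 4)
print(eval_spline(breaks, pieces, -1.5), eval_spline(breaks, pieces, 2.5))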
Alan Szepieniec, Alexander Lemmens, Jan Ferdinand Sauer, Bobbin Threadbare
ePrint Report
This paper specifies a new arithmetization-oriented hash function called Tip5. It uses the SHARK design strategy with low-degree power maps in combination with lookup tables, and is tailored to the field with $p=2^{64}-2^{32}+1$ elements.

The context motivating this design is the recursive verification of STARKs. This context imposes particular design constraints, and therefore the hash function's arithmetization is discussed at length.
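As an editorial illustration of the field the design is tailored to, the Python lines below show plain arithmetic modulo $p=2^{64}-2^{32}+1$ together with a low-degree power map of the kind mentioned above; this is not the Tip5 permutation itself (round constants, the lookup-table layer, and the linear layer are omitted), and the exponent 7 is used only as an example.

# The Tip5 base field: p = 2^64 - 2^32 + 1.  Plain field arithmetic and an illustrative
# low-degree power map; NOT the Tip5 permutation.
P = 2**64 - 2**32 + 1

def fadd(a, b): return (a + b) % P
def fmul(a, b): return (a * b) % P
def power_map(x, d=7): return pow(x, d, P)   # illustrative low-degree power map

x = 123456789
print(fmul(x, x), power_map(x))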
Jonathan Komada Eriksen, Lorenz Panny, Jana Sotáková, Mattia Veroni
ePrint Report
Constructing a supersingular elliptic curve whose endomorphism ring is isomorphic to a given quaternion maximal order (one direction of the Deuring correspondence) is known to be polynomial-time assuming the generalized Riemann hypothesis [KLPT14; Wes21], but notoriously daunting in practice when not working over carefully selected base fields. In this work, we speed up the computation of the Deuring correspondence in general characteristic, i.e., without assuming any special form of the characteristic. Our algorithm follows the same overall strategy as earlier works, but we add simple (yet effective) optimizations to multiple subroutines to significantly improve the practical performance of the method. To demonstrate the impact of our improvements, we show that our implementation achieves highly practical running times even for examples of cryptographic size. One implication of these findings is that cryptographic security reductions based on KLPT-derived algorithms (such as [EHLMP18; Wes22]) have become tighter, and therefore more meaningful in practice. Another is the pure bliss of fast(er) computer algebra: We provide a Sage implementation which works for general primes and includes many necessary tools for computational number theorists' and cryptographers' needs when working with endomorphism rings of supersingular elliptic curves. This includes the KLPT algorithm, translation of ideals to isogenies, and finding supersingular elliptic curves with known endomorphism ring for general primes. Finally, the Deuring correspondence has recently received increased interest because of its role in the SQISign signature scheme [DeF+20]. We provide a short and self-contained summary of the state-of-the-art algorithms without going into any of the cryptographic intricacies of SQISign.
Georg Land, Adrian Marotzke, Jan Richter-Brockmann, Tim Güneysu
ePrint Report
Streamlined NTRU Prime is a lattice-based Key Encapsulation Mechanism (KEM) that, together with X25519, is currently the default algorithm in OpenSSH 9. Being based on lattice assumptions, it is believed to be secure also against attackers with access to large-scale quantum computers. While Post-Quantum Cryptography (PQC) schemes have been subject to extensive research in recent years, challenges remain with respect to protection mechanisms against attackers that have additional side-channel information, such as the power consumption of a device processing secret data. As a countermeasure to such attacks, masking has been shown to be a promising and effective approach. For public-key schemes, including recent PQC schemes, a mixture of Boolean and arithmetic masking is usually applied at the algorithmic level. Our generic hardware implementation of Streamlined NTRU Prime decapsulation, however, follows an approach that until now was assumed to be applicable only to symmetric cryptography: gate-level masking. There, a hardware design consisting of logic gates is transformed into a secure implementation by replacing each gate with a composably secure gadget that operates on uniformly random shares of secret values. In our work, we show the feasibility of applying this approach also to PQC schemes and present the first Public-Key Cryptography (PKC) implementation, pre- or post-quantum, masked at gate level, considering several trade-offs and design choices. We synthesize our implementation both for Artix-7 Field-Programmable Gate Arrays (FPGAs) and 45 nm Application-Specific Integrated Circuits (ASICs), yielding practically feasible results regarding area, randomness demand, and latency. Finally, we also analyze the applicability of our concept to Kyber, which will be standardized by the National Institute of Standards and Technology (NIST).
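To illustrate the gate-level masking principle mentioned above (each gate is replaced by a gadget operating on shares), here is a first-order, 2-share, DOM-style AND gadget sketched in Python; it is only a conceptual toy, not the paper's hardware gadgets or their composably secure instantiation.

# Conceptual sketch of gate-level Boolean masking: a 2-share AND gadget with fresh
# randomness refreshing the cross terms.  Software toy only; not the paper's hardware.
import secrets

def share(x):
    """Split a bit x into two uniformly random Boolean shares."""
    r = secrets.randbits(1)
    return r, x ^ r

def masked_and(a_shares, b_shares):
    a0, a1 = a_shares
    b0, b1 = b_shares
    r = secrets.randbits(1)                  # fresh randomness for the gadget
    c0 = (a0 & b0) ^ ((a0 & b1) ^ r)
    c1 = (a1 & b1) ^ ((a1 & b0) ^ r)
    return c0, c1                            # c0 ^ c1 == (a & b)

a, b = 1, 1
c0, c1 = masked_and(share(a), share(b))
assert c0 ^ c1 == (a & b)
print("recombined AND:", c0 ^ c1)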

27 January 2023

Anamaria Costache, Lea Nürnberger, Rachel Player
ePrint Report
In this work, we investigate the BGV scheme as implemented in HElib. We begin by performing an implementation-specific noise analysis of BGV. This allows us to derive much tighter bounds than was previously possible. To confirm this, we compare our bounds against the state of the art. We find that, while our bounds are at most $1.8$ bits off the experimentally observed values, they are as much as $29$ bits tighter than previous work. Finally, to illustrate the importance of our results, we propose new and optimised parameters for HElib. In HElib, the special modulus is chosen to be $k$ times larger than the current ciphertext modulus $Q_i$. For a ratio of subsequent ciphertext moduli $\log\left( \frac{Q_i}{Q_{i-1}}\right) = 54$ (a very common choice in HElib), we can optimise $k$ by up to $26$ bits. This means that we can either enable more multiplications without having to switch to larger parameters, or reduce the size of the evaluation keys, thus reducing communication costs in relevant applications. We argue that our results are near-optimal.
Runchao Han, Jiangshan Yu
ePrint Report
The security of many protocols, such as voting and blockchains, relies on a secure source of randomness. Decentralised Randomness Beacons (DRBs) have been considered a promising approach, where a set of participants jointly generates a sequence of random outputs. While DRBs have been extensively studied, existing work fails to capture the advantage gained when some participants learn random outputs earlier than other participants. In time-sensitive protocols whose execution depends on the randomness from a DRB, such an advantage allows the adversary to behave adaptively according to the random outputs, compromising the fairness and/or security of these protocols.

In this paper, we formalise a new property, delivery-fairness, to quantify this advantage. In particular, we distinguish two aspects of delivery-fairness, namely length-advantage, i.e., how many random outputs an adversary can learn earlier than correct participants, and time-advantage, i.e., how much earlier an adversary can learn a given random output than correct participants. In addition, we prove a lower bound on delivery-fairness, showing the optimal guarantee achievable. We further analyse the delivery-fairness guarantees of state-of-the-art DRBs and discuss insights which, as we show through case studies, can help improve the delivery-fairness of existing systems towards this optimum.
Senyang Huang, Rui Qi Sim, Chitchanok Chuengsatiansup, Qian Guo, Thomas Johansson
ePrint Report
In this paper, we present the first chosen-ciphertext (CC) cache-timing attacks on the reference implementation of HQC. We build a cache-timing-based distinguisher for implementing a plaintext-checking (PC) oracle. The PC oracle uses side-channel information to check whether a given ciphertext decrypts to a given message. This is enabled by a vulnerability we identify in the generation of two vectors in the reference implementation of HQC. We also propose a new method of using PC oracles for chosen-ciphertext side-channel attacks against HQC, which may be of independent interest.

We show a general proof-of-concept attack using the Flush&Reload technique and also derive, in more detail, a practical attack on an HQC execution inside Intel SGX, where the Prime&Probe technique is used. We show the exact path to key recovery by explaining the detailed steps, using the PC oracle. In both scenarios, the new attack requires $53,857$ traces on average, with far fewer PC oracle calls than the timing attack of Guo et al. (CHES 2022) on an HQC implementation.
Xiaoen Lin, Le He, Hongbo Yu
ePrint Report
This paper combines techniques from several previous papers, with some modifications, to improve the previous cryptanalysis of 3-round Keccak-256. Furthermore, we propose a fast rebuilding method to improve the efficiency of solving the equation systems. As a result, the number of guesses needed to find a preimage for 3-round Keccak-256 is decreased from $2^{65}$ to $2^{52}$, and the solving time for each guess is decreased from $2^{9}$ 3-round Keccak calls to $2^{5.3}$ 3-round Keccak calls. We give a preimage of the all-'0' digest for 3-round Keccak-256 to demonstrate the effectiveness of our methodology.
Ye Dong, Xiaojun Chen, Weizhan Jing, Kaiyun Li, Weiping Wang
ePrint Report
Secure neural network inference has been a promising solution to private Deep-Learning-as-a-Service, enabling the service provider and the user to execute neural network inference without revealing their private inputs. However, the expensive overhead of current schemes remains an obstacle in real applications. In this work, we present \textsc{Meteor}, an online communication-efficient and fast secure 3-party computation neural network inference system against a semi-honest adversary in the honest-majority setting. The main contributions of \textsc{Meteor} are two-fold: i) We propose a new and improved 3-party secret sharing scheme stemming from the \textit{linearity} of replicated secret sharing, and design efficient protocols for the basic cryptographic primitives, including linear operations, multiplication, most significant bit extraction, and multiplexing. ii) Furthermore, we build efficient and secure blocks for widely used neural network operators such as Matrix Multiplication, ReLU, and Maxpool, along with several specific optimizations for better efficiency. Our total communication including the setup phase is slightly larger than that of SecureNN (PoPETs'19) and \textsc{Falcon} (PoPETs'21), two state-of-the-art solutions, but the gap is not significant when the online phase is the optimization priority. Using \textsc{Meteor}, we perform extensive evaluations on various neural networks. Compared to SecureNN and \textsc{Falcon}, we reduce the online communication costs by up to $25.6\times$ and $1.5\times$, and improve the running time by at most $9.8\times$ (resp. $8.1\times$) and $1.5\times$ (resp. $2.1\times$) in the LAN (resp. WAN) setting for online inference.
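As an editorial sketch of the baseline the abstract says Meteor's sharing stems from, the Python lines below show standard (2,3)-replicated additive secret sharing over $\mathbb{Z}_{2^{64}}$ with sharing, reconstruction, and a local linear operation; Meteor's own improved scheme and its multiplication and MSB protocols are not reproduced here, and the modulus is an illustrative choice.

# Standard (2,3)-replicated additive secret sharing over Z_{2^64}: party i holds the pair
# (x_i, x_{i+1 mod 3}), linear operations are local, any two parties can reconstruct.
import secrets

MOD = 2**64

def share(x):
    x0, x1 = secrets.randbelow(MOD), secrets.randbelow(MOD)
    x2 = (x - x0 - x1) % MOD
    return [(x0, x1), (x1, x2), (x2, x0)]    # shares of parties P0, P1, P2

def reconstruct(shares):
    # Any two parties jointly hold all three additive shares.
    (x0, x1), (_, x2) = shares[0], shares[1]
    return (x0 + x1 + x2) % MOD

def local_add(sh_a, sh_b):
    # Linear operations are computed locally on each party's pair of shares.
    return [((a0 + b0) % MOD, (a1 + b1) % MOD) for (a0, a1), (b0, b1) in zip(sh_a, sh_b)]

a, b = 42, 1000
print(reconstruct(local_add(share(a), share(b))))   # prints 1042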