International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

30 November 2022

Jonghyun Kim, Jong Hwan Park
ePrint Report
NTRU was the first practical public-key encryption scheme constructed on a lattice over a polynomial-based ring, and it has remained secure against significant cryptanalytic attacks for decades. Despite such a long history, NTRU and the variants proposed to date suffer from several drawbacks, such as the difficulty of achieving a worst-case correctness error with a moderate modulus, inconvenient sampling distributions for messages, and relatively slow algorithms compared to other lattice-based schemes.

In this work, we suggest a new NTRU-based key encapsulation mechanism (KEM), called NTRU+, which overcomes almost all existing drawbacks. NTRU+ is constructed from two new generic transformations called $\mathsf{ACWC}_{2}$ and $\overline{\mathsf{FO}}^{\perp}$. $\mathsf{ACWC}_{2}$ is used to easily achieve a worst-case correctness error, and $\overline{\mathsf{FO}}^{\perp}$ (a variant of the Fujisaki-Okamoto transform) is used to achieve chosen-ciphertext security without re-encryption. Both $\mathsf{ACWC}_{2}$ and $\overline{\mathsf{FO}}^{\perp}$ are defined using a randomness-recovery algorithm and an encoding method. In particular, our simple encoding method, called $\mathsf{SOTP}$, allows us to sample a message from a natural bit-string space with an arbitrary distribution. We provide four parameter sets for NTRU+ and give implementation results, using NTT-friendly rings over cyclotomic trinomials.
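
As an illustrative aside (not from the paper): the NTT-friendly rings over cyclotomic trinomials mentioned above are of the form $\mathbb{Z}_q[x]/(x^n - x^{n/2} + 1)$. A minimal Python sketch of schoolbook multiplication in such a ring, with hypothetical toy parameters rather than NTRU+ parameters, looks as follows.

    # Illustrative only: schoolbook multiplication in Z_q[x]/(x^n - x^{n/2} + 1);
    # toy values, not NTRU+ parameters.
    def poly_mul_trinomial(a, b, n, q):
        c = [0] * (2 * n - 1)
        for i in range(n):
            for j in range(n):
                c[i + j] = (c[i + j] + a[i] * b[j]) % q
        # Reduce degrees >= n using x^n = x^{n/2} - 1.
        for k in range(2 * n - 2, n - 1, -1):
            if c[k]:
                c[k - n // 2] = (c[k - n // 2] + c[k]) % q
                c[k - n] = (c[k - n] - c[k]) % q
                c[k] = 0
        return c[:n]

    print(poly_mul_trinomial([1, 2, 0, 3], [0, 1, 1, 0], n=4, q=17))   # -> [14, 15, 6, 5]
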
Jon-Lark Kim, Jihoon Hong, Terry Shue Chien Lau, YounJae Lim, Chik How Tan, Theo Fanuela Prabowo, Byung-Sun Won
ePrint Report
We propose a REinforced modified Dual-Ouroboros based on Gabidulin codes, called REDOG for short. This is a code-based cryptosystem built on Gabidulin codes, the well-known rank-metric codes. The public key sizes of REDOG are 14KB, 33KB, and 63KB at the security levels of 128, 192, and 256 bits, respectively. There is no decoding failure in decryption, and REDOG is IND-CPA secure. As a new result, we report the performance of our implementation of REDOG, including the time for key generation, encryption, and decryption at each security level.
Marta Bellés-Muñoz, Jorge Jiménez Urroz, Javier Silva
ePrint Report
A recent area of interest in cryptography is recursive composition of proof systems. One of the approaches to make recursive composition efficient involves cycles of pairing-friendly elliptic curves of prime order. However, known constructions have very low embedding degrees. This entails large parameter sizes, which makes the overall system inefficient.

In this paper, we explore $2$-cycles composed of curves from families parameterized by polynomials, and show that such cycles do not exist unless a strong condition holds. As a consequence, we prove that no $2$-cycles can arise from the known families, except for those cycles already known. Additionally, we show some general properties of cycles, and provide a detailed computation of the density of pairing-friendly cycles among all cycles.
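
For context (standard terminology, not specific to this paper), a $2$-cycle of prime-order curves consists of two curves whose group orders and base fields interlock:

\[
E_1/\mathbb{F}_p, \qquad E_2/\mathbb{F}_q, \qquad \#E_1(\mathbb{F}_p) = q, \qquad \#E_2(\mathbb{F}_q) = p,
\]

with $p \neq q$ prime, so that the scalar field of each curve is the base field of the other; this is what allows recursive proof composition to alternate between the two curves.
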
Han Wu, Guangwu Xu
ePrint Report
The primal attack, the BKW attack, and the dual attack are three well-known attacks on LWE. To build efficient post-quantum cryptosystems in practice, the structured variants of LWE (i.e., MLWE/RLWE) are often used. Some effort has been spent on addressing concerns about additional vulnerabilities introduced by the algebraic structure, including refining the primal and BKW attacks for MLWE/RLWE, and no effective attack based on ideal lattices or module lattices has been proposed so far. It is thus an interesting problem to consider how to enhance the dual attack against LWE using the rich algebraic structure of MLWE (including RLWE). In this paper, we present a first attempt at this problem by observing that each short vector found by BKZ automatically generates another $n-1$ vectors of the same length, and all of these short vectors can be used for distinguishing. To this end, we prove an interesting property showing that the rotations are consistent with certain linear transformations, and we construct a new kind of intersection lattice using some tricks. Moreover, we notice that the coefficient vectors of different rotations of the same polynomial are near-orthogonal in high-dimensional spaces. This is validated by extensive experiments and is treated as an extension of the assumption underlying the original dual attack against LWE. Taking Newhope512 as an example, we show that with our enhanced dual attack, the required blocksize and the time complexity (in both the classical and quantum cases) all decrease. We remark that our improvement is not significant, and we also touch on its limitations. Our results do not reveal a severe security problem for MLWE/RLWE compared to general LWE; this is consistent with the findings of previous work applying the primal and BKW attacks to MLWE/RLWE.
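
As an illustrative aside (not from the paper): in a power-of-two cyclotomic ring such as $\mathbb{Z}[x]/(x^n+1)$, multiplying a polynomial by $x$ rotates its coefficient vector (with a sign flip) and preserves its Euclidean length, which is the kind of property that lets one short vector yield $n-1$ further vectors of the same length. A toy Python check:

    # Illustrative only: negacyclic rotation in Z[x]/(x^n + 1) preserves length.
    def rotate(v):
        return [-v[-1]] + v[:-1]          # coefficients of x * f(x) mod x^n + 1

    v = [3, -1, 4, 1, -5, 9, -2, 6]       # hypothetical short vector, n = 8
    rots = [v]
    for _ in range(len(v) - 1):
        rots.append(rotate(rots[-1]))
    norms = {sum(c * c for c in w) for w in rots}
    print(len(rots), norms)               # 8 rotations, all with the same squared norm
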
Mashrukh Zayed, Adnan Anwar, Ziaur Rahman, Sk. Shezan Arefin, Rafiqul Islam
ePrint Report
On the Internet of Connected Vehicles, a vehicle has to communicate bi-directionally with several devices to establish a shared network for inter-vehicle and intra-vehicle connectivity. These connection protocols are commonly structured to connect all the individual components with an implicit degree of trust, which is supposed to protect the whole system from unauthorized users. Technologies like Automotive Ethernet tend to increase security by reducing the implicit trust within the local network devices. However, the lack of individual security protocols in vehicle-to-vehicle communication still leaves the system vulnerable to hacks, external attacks, and further disruption. This is where Zero Trust Architecture can become a reliable technology for the exchange of information between vehicles. Zero trust is a security model in which no one is trusted by default and verification is required from anyone or any device wishing to connect to the intra-vehicle network. In this paper, we have scoped the preliminary and most vital step of this system: verifying the owner identity of a vehicle in a zero-trust manner. Our approach involves recognizing vehicle license plates and using the license information to retrieve the vehicle owner details to establish trust before allowing connection to the network. Our proposed methodology achieves 85% to 99% accuracy on the license recognition part within recognizable distances using PyTesseract OCR. Reliability of the zero-trust solution is gained through the necessary information retrieved using GET and POST requests to and from the corresponding driving license information databases.
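
A minimal sketch of the kind of OCR-plus-lookup step described above, assuming PyTesseract, Pillow, and requests are installed; the file name, endpoint URL, and field names are hypothetical and not taken from the paper:

    # Illustrative only (not the authors' pipeline): OCR a plate image, then
    # query an owner-record endpoint with the recognized string.
    import pytesseract, requests
    from PIL import Image

    img = Image.open("plate.jpg").convert("L")                  # hypothetical input image
    plate = pytesseract.image_to_string(img, config="--psm 7").strip()
    if plate:
        resp = requests.get("https://example.org/owners", params={"plate": plate})
        owner = resp.json()                                     # verify before trusting
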
Yi Chen, Zhenzhen Bao, Yantian Shen, Hongbo Yu
ePrint Report
In the seminal work published by Gohr at CRYPTO 2019, neural networks were successfully exploited to perform differential attacks on Speck32/64, the smallest member of the block cipher family Speck. The deep learning aided key-recovery attack by Gohr achieves a considerable improvement in time complexity over the state-of-the-art result from conventional cryptanalysis. A further question is whether the advantage of deep learning aided attacks can be kept on large-state members of Speck and other primitives. Since several key points in Gohr's key-recovery framework seem unsuited to large-state ciphers, this question has stayed open for years.

This work provides an answer to this question by proposing a deep learning aided multi-stage key-recovery framework. To apply this key-recovery framework on large-state members of Speck, multiple neural distinguishers (NDs) are trained and carefully combined into groups. Employing the groups of NDs under the multi-stage key-recovery framework, practical attacks are designed and trialed. Experimental results show the effectiveness of the framework. The practical attacks are then extended into theoretical attacks that cover more rounds. To do that, multi-round classical differentials (CDs) are used together with the NDs. To find the CDs’ neutral bits to boost signals from the distinguishers, an efficient algorithm is proposed.

As a result, considerable improvement in terms of both time and data complexity of differential key-recovery attacks on round-reduced Speck with the largest, i.e., the 128-bit state, is obtained. Besides, efficient differential attacks are achieved on round-reduced Speck with 96-bit and 64-bit states. Since most real-world block ciphers have a state size of no less than 64 bits, this work paves the way for performing cryptanalysis using deep learning on more block ciphers. The code is available at https://github.com/AI-Lab-Y/NAAF.
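
For readers unfamiliar with the target cipher, a toy Python sketch of one round of Speck32/64 (the smallest member mentioned above, with 16-bit words and rotation constants 7 and 2) is given below; this is background on the cipher itself, not part of the attack framework.

    # Illustrative only: one round of Speck32/64.
    MASK = 0xFFFF
    def ror(v, r): return ((v >> r) | (v << (16 - r))) & MASK
    def rol(v, r): return ((v << r) | (v >> (16 - r))) & MASK

    def speck32_round(x, y, k):
        x = (ror(x, 7) + y) & MASK    # modular addition after right rotation
        x ^= k                        # round-key injection
        y = rol(y, 2) ^ x             # left rotation and XOR
        return x, y
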
Andreas Freitag
ePrint Report
Digital Identities play an essential role in our digital lives. Today, most Digital Identities are based on central architectures: central Digital Identity providers control and know our data and thereby our identity. Self-Sovereign Identities are based on a decentralized data storage and data exchange architecture, where the user is in sole control of his data and identity. Most issued credentials need the possibility of revocation. For a centrally managed Digital Identity system, revocation is not a problem; in decentralized architectures, revocation is more challenging. Revocation can be done with different methods, e.g., list-based approaches, cryptographic accumulators, and credential updates. A revocation method must be privacy preserving and must scale. This paper gives an overview of the available revocation methods, including a survey to define requirements, assesses revocation groups against those requirements, highlights shortcomings of the methods, and introduces a new revocation method called Linked Validity Verifiable Credentials.

28 November 2022

Kaveh Aasaraai, Emanuele Cesena, Rahul Maganti, Nicolas Stalder, Javier Varela, Kevin Bowers
ePrint Report
Number-Theoretic-Transform (NTT) is a variation of Fast-Fourier-Transform (FFT) on finite fields. NTT is being increasingly used in blockchain and zero-knowledge proof applications. Although FFT and NTT are widely studied for FPGA implementation, we believe CycloneNTT is the first to solve this problem for large data sets ($\ge2^{24}$, 64-bit numbers) that would not fit in the on-chip RAM. CycloneNTT uses a state-of-the-art butterfly network and maps the dataflow to hybrid FIFOs composed of on-chip SRAM and external memory. This manifests into a quasi-streaming data access pattern minimizing external memory access latency and maximizing throughput. We implement two variants of CycloneNTT optimized for DDR and HBM external memories. Although historically this problem has been shown to be memory-bound, CycloneNTT's quasi-streaming access pattern is optimized to the point that when using HBM (Xilinx C1100), the architecture becomes compute-bound. On the DDR-based platform (AWS F1), the latency of the application is equal to the streaming of the entire dataset $\log N$ times to/from external memory. Moreover, exploiting HBM's larger number of channels, and following a series of additional optimizations, CycloneNTT only requires $\frac{1}{6}\log N$ passes.
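
As an illustrative aside (not the hardware design itself), the transform CycloneNTT computes is the textbook FFT over a finite field; a tiny recursive Python version with toy parameters ($q=17$, $n=8$, primitive $8$-th root of unity $9$):

    # Illustrative only: textbook recursive NTT over F_q; the paper targets
    # 64-bit fields and n >= 2^24, here q = 17 and n = 8 for readability.
    def ntt(a, omega, q):
        n = len(a)
        if n == 1:
            return a
        even = ntt(a[0::2], omega * omega % q, q)
        odd  = ntt(a[1::2], omega * omega % q, q)
        out, w = [0] * n, 1
        for i in range(n // 2):
            t = w * odd[i] % q
            out[i] = (even[i] + t) % q           # butterfly: sum
            out[i + n // 2] = (even[i] - t) % q  # butterfly: difference
            w = w * omega % q
        return out

    print(ntt([1, 2, 3, 4, 5, 6, 7, 8], omega=9, q=17))
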
Dan Boneh, Aditi Partap, Lior Rotem
ePrint Report
An accountable threshold signature (ATS) is a threshold signature scheme where every signature identifies the quorum of signers who generated that signature. They are widely used in financial settings where signers need to be held accountable for threshold signatures they generate. In this paper we initiate the study of proactive refresh for accountable threshold signatures. Proactive refresh is a protocol that lets the group of signers refresh their shares of the secret key, without changing the public key or the threshold. We give several definitions for this notion achieving different levels of security. We observe that certain natural constructions for an ATS cannot be proactively refreshed because the secret key generated at setup is needed for accountability. We then construct three types of ATS schemes with proactive refresh. The first is a generic construction that is efficient when the number of signers is small. The second is a hybrid construction that performs well for a large number of signers and satisfies a strong security definition. The third is a collection of very practical constructions derived from ATS versions of the Schnorr and BLS signature schemes; however these practical constructions only satisfy our weaker notion of security.
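
For context, the textbook idea behind proactive refresh of Shamir-style shares is that the parties jointly add a fresh sharing of zero: the secret (and hence the public key) is unchanged, while old shares become useless. A minimal Python sketch of that generic mechanism with hypothetical parameters (this is background, not the ATS-specific constructions of the paper):

    # Illustrative only: Shamir sharing plus a zero-sharing refresh.
    import random
    P = 2**127 - 1                      # hypothetical prime modulus

    def share(secret, t, n):
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
                for i in range(1, n + 1)]

    def refresh(shares, t):
        zero_shares = dict(share(0, t, len(shares)))   # fresh sharing of zero
        return [(i, (s + zero_shares[i]) % P) for i, s in shares]
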
Srinivasan Raghuraman, Yibin Yang
ePrint Report
Fitzi, Garay, Maurer, and Ostrovsky (J. Cryptology 2005) showed that in the presence of a dishonest majority, no primitive of cardinality $n - 1$ is complete for realizing an arbitrary $n$-party functionality with guaranteed output delivery. In this work, we show that in the presence of $n - 1$ corrupt parties, no unreactive primitive of cardinality $n - 1$ is complete for realizing an arbitrary $n$-party functionality with fairness. We show more generally that for $t > \frac{n}{2}$, in the presence of $t$ malicious parties, no unreactive primitive of cardinality $t$ is complete for realizing an arbitrary $n$-party functionality with fairness. We complement this result by noting that $(t+1)$-wise fair exchange is complete for realizing an arbitrary $n$-party functionality with fairness. In order to prove our results, we utilize the primitive of fair coin tossing and introduce the notion of predictability in coin tossing protocols, which we believe is of independent interest.
Daniele Friolo, Matteo Salvino, Daniele Venturi
ePrint Report
The Fujisaki-Okamoto (FO) transform (CRYPTO 1999 and JoC 2013) turns any weakly (i.e., IND-CPA) secure public-key encryption (PKE) scheme into a strongly (i.e., IND-CCA) secure key encapsulation mechanism (KEM) in the random oracle model (ROM). Recently, the FO transform regained momentum as part of CRYSTALS-Kyber, selected by NIST as the PKE winner of the post-quantum cryptography standardization project.

Following Fischlin (ICALP 2005), we study the complete non-malleability of KEMs obtained via the FO transform. Intuitively, a KEM is completely non-malleable if no adversary can maul a given public key and ciphertext into a new public key and ciphertext encapsulating a related key for the underlying blockcipher.

On the negative side, we find that KEMs derived via FO are not completely non-malleable in general. On the positive side, we show that complete non-malleability holds in the ROM by assuming the underlying PKE scheme meets an additional property, or by a slight tweak of the transformation.
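
As background for the transform discussed above, here is a minimal Python sketch of one common FO-style KEM variant (with re-encryption and explicit rejection); the hash labels and the underlying PKE calls are placeholders, and this is not necessarily the exact variant analyzed in the paper.

    # Illustrative only: shape of an FO-style KEM built from a PKE scheme.
    import hashlib, os

    def H(*parts):
        h = hashlib.sha3_256()
        for p in parts:
            h.update(p)
        return h.digest()

    def encaps(pk, pke_encrypt):
        m = os.urandom(32)
        r = H(b"G", m)                      # derandomize: coins derived from m
        c = pke_encrypt(pk, m, r)
        return c, H(b"K", m, c)             # shared key binds m and c

    def decaps(sk, pk, c, pke_encrypt, pke_decrypt):
        m = pke_decrypt(sk, c)
        if m is None or pke_encrypt(pk, m, H(b"G", m)) != c:
            return None                     # re-encryption check fails: reject
        return H(b"K", m, c)
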
Alexandre Debant, Lucca Hirschi
ePrint Report
We conduct a security analysis of the e-voting protocol used for the largest political election using e-voting in the world, the 2022 French legislative election for the citizens overseas. Due to a lack of system and threat model specifications, we built and contributed such specifications by studying the French legal framework and by reverse-engineering the code base accessible to the voters. Our analysis reveals that this protocol is affected by two design-level and implementation-level vulnerabilities. We show how those allow a standard voting server attacker and even more so a channel attacker to defeat the election integrity and ballot privacy due to 6 attack variants. We propose and discuss 5 fixes to prevent those attacks. Our specifications, the attacks, and the fixes were acknowledged by the relevant stakeholders during our responsible disclosure. Our attacks are in the process of being prevented with our fixes for future elections. Beyond this specific protocol, we draw general conclusions and lessons from this instructive experience where an e-voting protocol meets the real-world constraints of a large-scale and political election.
Yann Disser, Daniel Günther, Thomas Schneider, Maximilian Stillger, Arthur Wigandt, Hossein Yalame
ePrint Report
A Universal Circuit (UC) is a Boolean circuit of size $\Theta(n \log n)$ that can simulate any Boolean function up to a certain size $n$. Valiant (STOC'76) provided the first two UC constructions of asymptotic sizes $\sim5 n\log n$ and $\sim4.75 n\log n$, and today's most efficient construction of Liu et al. (CRYPTO'21) has size $\sim3n\log n$. Evaluating a public UC with a secure Multi-Party Computation (MPC) protocol allows efficient Private Function Evaluation (PFE), where a private function is evaluated on private data. Most existing UC constructions simulate circuits consisting of 2-input gates.

In this work, we study UCs that simulate circuits consisting of ($\rho \rightarrow \omega$)-Lookup Tables (LUTs) that map $\rho$ inputs to $\omega$ outputs. Existing UC constructions can easily be extended to ($\rho \rightarrow$ 1)-LUTs (we call this the fixed UC construction), and we further extend this to ($\rho \rightarrow \omega$)-LUTs. Unfortunately, the size of the fixed UC construction is linear in the largest input size $\rho$ of the LUTs, i.e., even if only a single LUT in the circuit has a large input size, the size of the whole UC is dominated by this LUT size. To circumvent this, we design a \emph{dynamic} UC construction, where the dimensions of the individual LUTs are public. We implement the fixed and dynamic UC constructions based on the UC construction by Liu et al., which is also the first implementation of their construction. We show that the concrete size of our dynamic UC construction improves by at least $2\times$ over Liu et al.'s UC for all benchmark circuits, which are representative of many PFE applications.
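
As a small illustration of the notation (not from the paper): a ($\rho \rightarrow \omega$)-LUT is simply a truth table with $\rho$ input bits and $\omega$ output bits, a standard 2-input Boolean gate being the ($2 \rightarrow 1$) special case. A hypothetical ($3 \rightarrow 2$)-LUT computing the majority and parity of its inputs in Python:

    # Illustrative only: a (3 -> 2)-LUT as an explicit truth table.
    lut = {bits: (int(sum(bits) >= 2), sum(bits) % 2)
           for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]}
    print(lut[(1, 0, 1)])   # -> (1, 0): majority is 1, parity is 0
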
Seunghwan Park, Chi-Gon Jung, Aesun Park, Joongeun Choi, Honggoo Kang
ePrint Report
The quantum-resistant Key Encapsulation Mechanism (PQC-KEM) design aims to replace the cryptography in legacy security protocols. It would be nice if a PQC-KEM were faster and lighter than ECDH or DH for easy migration to legacy security protocols. However, this seems impossible due to the nature of the underlying problems that remain secure in a quantum environment. Therefore, it makes sense to determine the threshold for such a scheme by analyzing the maximum bandwidth the legacy security protocol can accommodate. We set the bandwidth threshold at 1,244 bytes based on IKEv2 (RFC 7296), a security protocol with strict constraints on payload size in the initial exchange for secret key sharing. We propose TiGER, an IND-CCA secure KEM based on RLWE(R). TiGER has a ciphertext (1,152 bytes) and a public key (928 bytes) smaller than 1,244 bytes, even at the AES256 security level. To our knowledge, TiGER is the only scheme with such an achievement. Moreover, TiGER satisfies security levels 1, 3, and 5 of the NIST competition. Based on the reference implementation, TiGER is 1.7-2.6x faster than Kyber and 2.2-4.4x faster than LAC.
Philipp Hoenisch, Subhra Mazumdar, Pedro Moreno-Sanchez, Sushmita Ruj
ePrint Report
Security and privacy issues with centralized exchange services have motivated the design of atomic swap protocols for decentralized trading across currencies. These protocols follow a standard blueprint similar to the 2-phase commit in databases: (i) both users first lock their coins under a certain (cryptographic) condition and a timeout; (ii-a) the coins are swapped if the condition is fulfilled; or (ii-b) coins are released after the timeout. The quest for these protocols is to minimize the requirements from the scripting language supported by the swapped coins, thereby supporting a larger range of cryptocurrencies. The recently proposed universal atomic swap protocol [IEEE S&P’22] demonstrates how to swap coins whose scripting language only supports the verification of a digital signature on a transaction. However, the timeout functionality is cryptographically simulated with verifiable timelock puzzles, a computationally expensive primitive that hinders its use in battery-constrained devices such as mobile phones. In this state of affairs, we question whether the 2-phase commit paradigm is necessary for atomic swaps in the first place. In other words, is it possible to design a secure atomic swap protocol where the timeout is not used by (at least one of the two) users? In this work, we present LightSwap, the first secure atomic swap protocol that does not require the timeout functionality (not even in the form of a cryptographic puzzle) by one of the two users. LightSwap is thus better suited for scenarios where a user, running an instance of LightSwap on her mobile phone, wants to exchange coins with an online exchange service running an instance of LightSwap on a computer. We show how LightSwap can be used to swap Bitcoin and Monero, an interesting use case since Monero does not provide any scripting functionality support other than linkable ring signature verification.
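
For context, the "standard blueprint" referred to above is the hash-time-locked two-phase pattern sketched below in Python-style pseudocode; the names and timeline are illustrative of the generic approach, not of LightSwap, whose point is precisely that one of the two users avoids the timeout.

    # Illustrative only: generic hash-time-locked atomic swap blueprint.
    import hashlib, secrets

    s = secrets.token_bytes(32)              # Alice's secret
    h = hashlib.sha256(s).digest()           # hash lock shared by both chains

    # Phase (i): both parties lock coins under the hash lock and a timeout.
    lock_A = {"claim_if": ("preimage of", h), "refund_to": "Alice", "after": "T1"}
    lock_B = {"claim_if": ("preimage of", h), "refund_to": "Bob",   "after": "T2"}  # T2 < T1

    # Phase (ii-a): Alice claims on chain B by revealing s; Bob reuses s on chain A.
    # Phase (ii-b): if s is never revealed, both refunds become valid after the timeouts.
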
Shah Fahd
ePrint Report
A Feistel Network (FN) based block cipher relies on a Substitution Box (S-Box) to achieve non-linearity. The S-Box is carefully designed to achieve optimal cryptographic security bounds, and research over the last three decades shows that considerable effort has been devoted to the mathematical design of S-Boxes. To import the exact cryptographic profile of an S-Box, a designer focuses on Affine Equivalent (AE) or Extended Affine (EA) equivalent S-Boxes. In this research, we argue that the robustness of surjective mappings is invariant under AE transformation but not under EA transformation. It is proved that the EA equivalent of a surjective mapping does not necessarily preserve robustness against Differential Cryptanalysis (DC) in the light of Seberry's criteria. The generated EA-equivalent S-Box(es) of DES and other $6 \times 4$ mappings do not show a good robustness profile compared to the original mappings. This article concludes that a careful selection of affine permutation parameters during the design phase is significant for achieving high robustness against DC and Differential Power Analysis (DPA) attacks.
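
For reference, the two equivalence notions compared above are usually written as follows (standard definitions, not specific to this paper): mappings $F$ and $G$ are affine equivalent (AE) or extended affine equivalent (EA) if

\[
G = A_2 \circ F \circ A_1 \quad \text{(AE)}, \qquad\qquad G = A_2 \circ F \circ A_1 \oplus A_3 \quad \text{(EA)},
\]

where $A_1, A_2$ are affine permutations and $A_3$ is an affine mapping; the claim above is that robustness survives the first transformation but not necessarily the second.
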
Moumita Dutta, Chaya Ganesh, Sikhar Patranabis, Nitin Singh
ePrint Report
Traditional notions of secure multiparty computation (MPC) allow mutually distrusting parties to jointly compute a function over their private inputs, but typically do not specify how these inputs are chosen. Motivated by real-world applications where corrupt inputs could adversely impact privacy and operational legitimacy, we consider a notion of authenticated MPC where the inputs are authenticated, e.g., signed using a digital signature by some trusted authority. We propose a generic and efficient compiler that transforms any linear secret sharing based MPC protocol into one with input authentication.

Our compiler incurs significantly lower computational costs and competitive communication overheads when compared to the best existing solutions, while entirely avoiding the (potentially expensive) protocol-specific techniques and pre-processing requirements that are inherent to these solutions. For $n$-party MPC protocols with abort security where each party has $\ell$ inputs, our compiler incurs $O(n\log \ell)$ communication overall and a computational overhead of $O(\ell)$ group exponentiations per party (the corresponding overheads for the most efficient existing solution are $O(n^2)$ and $O(\ell n)$). Finally, for a corruption threshold $t
Along the way, we make several technical contributions that are of independent interest. This includes the notion of distributed proofs of knowledge and concrete realizations of the same for several relations of interest, such as proving knowledge of many popularly used digital signature schemes, and proving knowledge of opening of a Pedersen commitment. We also illustrate the practicality of our approach by extending the well-known MP-SPDZ library with our compiler, thus yielding prototype authenticated MPC protocols.
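
As a small aside on one of the relations mentioned above: a Pedersen commitment to $m$ with randomness $r$ is $C = g^m h^r$, and "knowledge of an opening" means knowledge of such a pair $(m, r)$. A toy Python check with hypothetical parameters (real deployments use a prime-order group in which the discrete-log relation between $g$ and $h$ is unknown):

    # Illustrative only: committing and opening a Pedersen commitment.
    import random
    p = 2**127 - 1                      # hypothetical modulus
    g, h = 5, 7                         # generators with (assumed) unknown DL relation

    def commit(m, r=None):
        r = random.randrange(p - 1) if r is None else r
        return pow(g, m, p) * pow(h, r, p) % p, r

    C, r = commit(42)
    assert C == commit(42, r)[0]        # the opening (m, r) verifies the commitment
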
Trey Li
ePrint Report
In 1993, Bernstein and Vazirani proposed a quantum algorithm for the Bernstein-Vazirani problem: given oracle access to the function $f(a_1,\dots,a_n) = a_1x_1+\cdots + a_nx_n \pmod 2$ with respect to a secret string $x = x_1\dots x_n \in \{0,1\}^n$, where $a_1,\dots,a_n \in \{0,1\}$, find $x$. We give a quantum algorithm for a new problem called the oracle subset product problem: given oracle access to the function $f(a_1,\dots,a_n) = a_1^{x_1}\cdots a_n^{x_n}$ with respect to a secret string $x = x_1\dots x_n\in\{0,1\}^n$, where $a_1,\dots,a_n\in \mathbb Z$, find $x$. As with the Bernstein-Vazirani algorithm, it is a quantum algorithm for a problem that is already solvable in polynomial time by classical algorithms; the advantage over classical algorithms is that it makes only one call to the function instead of $n$ calls.
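
To make the classical baseline concrete (an illustration, not the quantum algorithm): each bit $x_i$ can be recovered classically with one query per position, e.g., by setting $a_i = 2$ and all other inputs to $1$, so that $f$ returns $2^{x_i}$; the quantum algorithm's advantage is doing this with a single oracle call. A toy Python sketch:

    # Illustrative only: the n-query classical solution to the oracle subset
    # product problem; the hidden string and oracle below are hypothetical.
    import math

    def classical_recover(f, n):
        x = []
        for i in range(n):
            a = [1] * n
            a[i] = 2                               # probe position i
            x.append(0 if f(*a) == 1 else 1)
        return x

    secret = [1, 0, 1, 1]
    f = lambda *a: math.prod(ai**xi for ai, xi in zip(a, secret))
    print(classical_recover(f, 4))                 # -> [1, 0, 1, 1]
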
Matt Davison, Ken King, Trevor Miller
ePrint Report
The tech industry is currently making the transition from Web 2.0 to Web 3.0, and with this transition, authentication and authorization have been reimagined. Users can now sign in to websites with their unique public/private key pair rather than generating a username and password for every site. However, many useful features, like role-based access control, dynamic resource owner privileges, and expiration tokens, currently don't have efficient Web 3.0 solutions. Our solution aims to provide a flexible foundation for resource providers to implement the aforementioned features on any blockchain through a two-step process. The first step, authorization, creates an on-chain asset which is to be presented as an access token when interacting with a resource. The second step, authentication, verifies ownership of an asset by querying the blockchain and through cryptographic digital signatures. Our solution also aims to be a multi-chain standard, whereas current Web 3.0 sign-in standards are limited to a single blockchain.
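
A minimal sketch of the signature-based ownership check underlying the second step, using an Ed25519 challenge-response via the Python 'cryptography' package; the challenge format and key handling are hypothetical and not part of the proposed standard:

    # Illustrative only: prove control of the key behind an address by signing a
    # server-chosen nonce; the on-chain asset/role lookup is omitted.
    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    wallet_key = Ed25519PrivateKey.generate()   # user's key pair (hypothetical)
    nonce = os.urandom(32)                      # challenge from the resource provider
    signature = wallet_key.sign(nonce)          # user signs the challenge

    try:
        wallet_key.public_key().verify(signature, nonce)
        print("key ownership verified")
    except InvalidSignature:
        print("verification failed")
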
Carlos Aguilar-Melchor, Nicolas Gama, James Howe, Andreas Hülsing, David Joseph, Dongze Yue
ePrint Report
This paper presents a code-based signature scheme based on the well-known syndrome decoding (SD) problem. The scheme builds upon a recent line of research which uses the Multi-Party-Computation-in-the-Head (MPCitH) approach to construct efficient zero-knowledge proofs, such as Syndrome Decoding in the Head (SDitH), and builds signature schemes from them using the Fiat-Shamir transform.

At the heart of our proposal is a new approach to amplify the soundness of any MPC protocol that uses additive secret sharing. An MPCitH protocol with $N$ parties can be repeated $D$ times using parallel composition to reach the same soundness as a protocol run with $N^D$ parties. However, the former comes with $D$ times higher communication costs, often dominated by the use of $D$ `auxiliary' states (which in general have a significantly bigger impact on size than random states). Instead, we begin by generating $N^D$ shares, arranged into a $D$-dimensional hypercube of side $N$ containing only one `auxiliary' state. From this hypercube we derive $D$ sharings of size $N$, which are used to run $D$ instances of an $N$-party MPC protocol. This approach leads to an MPCitH protocol with $1/N^D$ soundness error, requiring $N^D$ offline computation, only $ND$ online computation, and only one `auxiliary' state. As the (potentially offline) share generation phase is generally inexpensive, this leads to trade-offs that are superior to just using parallel composition.

Our novel method of share generation and aggregation not only improves certain MPCitH protocols in general but also yields concrete improvements to signature schemes. Specifically, we apply it to the work of Feneuil, Joux, and Rivain (CRYPTO'22) on code-based signatures, and obtain a new signature scheme that achieves a 3.3x improvement in global runtime and a 15x improvement in online runtime for their shortest signature size (8.5 kB). It is also possible to leverage the fact that most computations are offline to define parameter sets leading to smaller signatures: 6.7 kB for 60 ms offline, or 5.6 kB for 700 ms offline. For NIST security level 1, the online signature cost is around 3 million cycles (1 ms on commodity processors), regardless of signature size.
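
As an illustrative aside, the hypercube aggregation for additive shares can be seen in miniature with $D = 2$ and $N = 4$: from $N^D = 16$ leaf shares of a value (only one of which is fixed as the 'auxiliary' share), summing along each axis yields $D$ different $N$-party additive sharings of the same value. A toy Python check of the aggregation idea only, with hypothetical parameters, not the full protocol:

    # Illustrative only: D = 2, N = 4 hypercube of additive shares.
    import random
    Q = 2**31 - 1                                   # hypothetical modulus
    N, secret = 4, 12345

    leaves = [[random.randrange(Q) for _ in range(N)] for _ in range(N)]
    total_others = sum(map(sum, leaves)) - leaves[0][0]
    leaves[0][0] = (secret - total_others) % Q      # fix the single 'auxiliary' leaf

    axis0 = [sum(row) % Q for row in leaves]        # sharing #1 (rows)
    axis1 = [sum(col) % Q for col in zip(*leaves)]  # sharing #2 (columns)
    assert sum(axis0) % Q == sum(axis1) % Q == secret % Q
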