International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

06 June 2023

Noga Amit, Guy Rothblum
ePrint Report
We study the following question: what cryptographic assumptions are needed for obtaining constant-round computationally-sound argument systems? We focus on argument systems with almost-linear verification time for subclasses of $\mathbf{P}$, such as depth-bounded computations. Kilian's celebrated work [STOC 1992] provides such 4-message arguments for $\mathbf{P}$ (actually, for $\mathbf{NP}$) using collision-resistant hash functions. We show that $one$-$way\ functions$ suffice for obtaining constant-round arguments of almost-linear verification time for languages in $\mathbf{P}$ that have log-space uniform circuits of linear depth and polynomial size. More generally, the complexity of the verifier scales with the circuit depth. Furthermore, our argument systems (like Kilian's) are doubly-efficient; that is, the honest prover strategy can be implemented in polynomial-time. Unconditionally sound interactive proofs for this class of computations do not rely on any cryptographic assumptions, but they require a linear number of rounds [Goldwasser, Kalai and Rothblum, STOC 2008]. Constant-round interactive proof systems of linear verification complexity are not known even for $\mathbf{NC}$ (indeed, even for $\mathbf{AC}^1$).
Charles Bouillaguet, Ambroise Fleury, Pierre-Alain Fouque, Paul Kirchner
ePrint Report
The Number Field Sieve (NFS) is the state-of-the-art algorithm for integer factoring, and sieving is a crucial step in the NFS. It is a very time-consuming operation whose goal is to collect many relations. The ultimate goal is to generate random smooth integers mod $N$ together with their prime decomposition, where smooth is defined on the rational and algebraic sides according to two prime factor bases.

In modern factorization tools, such as \textsf{Cado-NFS}, sieving is split into different stages depending on the size of the primes, but defining good parameters for all stages is based on heuristic and practical arguments. At the beginning, candidates are sieved by small primes on both sides, and if they pass the test, they continue to the next stages with bigger primes, up to the final one, where we factor the remaining part using the ECM algorithm. On the one hand, the first stages are fast, but many false relations pass them, and a lot of time is spent on useless relations. On the other hand, the final stages are more time-demanding but output fewer relations. It is not easy to evaluate the performance of the best strategy on the overall sieving step, since it depends on the distribution of the numbers that result at each stage.

In this article, we examine different sieving strategies to speed up this step, since many improvements have already been made to all other steps of the NFS. Based on the relations collected during the RSA-250 factorization and all its parameters, we study different strategies to better understand this step. Many strategies have been defined since the discovery of the NFS, and we provide an experimental evaluation of them here.
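To make the staged filtering described above concrete, here is a toy Python sketch. It is purely illustrative and unrelated to the actual Cado-NFS code (which sieves over lattices of $(a,b)$ pairs and uses ECM for the final cofactors): candidates are first reduced by small primes, survivors with a small enough cofactor continue to a stage with bigger primes, and whatever remains stands in for the final stage. All bounds are hypothetical choices for the example.

```python
# Toy illustration of staged relation filtering (NOT actual NFS sieving, which
# works over lattices of (a, b) pairs and uses ECM for the final cofactors).
from sympy import isprime, primerange

STAGE1_PRIMES = list(primerange(2, 100))        # "small" primes, cheap stage
STAGE2_PRIMES = list(primerange(100, 10_000))   # "bigger" primes, slower stage
COFACTOR_BOUND_1 = 10**7                        # hypothetical survivor bound, stage 1
COFACTOR_BOUND_2 = 10**5                        # hypothetical survivor bound, stage 2

def strip(n, primes):
    """Divide out every prime in `primes`; return the remaining cofactor."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n

def filter_candidates(candidates):
    relations = []
    for n in candidates:
        c = strip(n, STAGE1_PRIMES)          # stage 1: fast, lets many false relations through
        if c > COFACTOR_BOUND_1:
            continue
        c = strip(c, STAGE2_PRIMES)          # stage 2: slower, but few candidates reach it
        if c > COFACTOR_BOUND_2:
            continue
        if c == 1 or isprime(c):             # stand-in for the final (ECM) stage
            relations.append(n)
    return relations

print(len(filter_candidates(range(10**9, 10**9 + 5_000))), "toy relations found")
```

The trade-off the abstract studies is visible even in this toy: loosening the first bound lets more useless candidates reach the expensive later stages, while tightening it discards genuine relations early.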
Benoit Libert
ePrint Report
Vector commitment schemes are compressing commitments to vectors that make it possible to succinctly open a commitment for individual vector positions without revealing anything about other positions. We describe vector commitments enabling constant-size proofs that the committed vector is small (i.e., binary, ternary, or of small norm). As a special case, we obtain range proofs featuring the shortest proof length in the literature with only $3$ group elements per proof. As another application, we obtain short pairing-based NIZK arguments for lattice-related statements. In particular, we obtain short proofs (comprised of $3$ group elements) showing the validity of ring LWE ciphertexts and public keys. Our constructions are proven simulation-extractable in the algebraic group model and the random oracle model.
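As background for why commitments to binary vectors imply range proofs, recall the standard bit-decomposition reduction (a textbook fact, not a description of this paper's specific protocol): to show that $v \in [0, 2^k)$, one commits to the bit vector $(b_0, \dots, b_{k-1})$ and proves \[ v \;=\; \sum_{i=0}^{k-1} b_i\, 2^i \qquad \text{and} \qquad b_i\,(b_i - 1) \;=\; 0 \quad \text{for } 0 \le i < k, \] so a constant-size proof that the committed vector is binary, together with a proof of the linear relation, yields a constant-size range proof.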
Solane El Hirch, Joan Daemen, Raghvendra Rohit, Rusydi H. Makarim
ePrint Report
We introduce a new type of mixing layer for the round function of cryptographic permutations, called circulant twin column parity mixer (CPM), that is a generalization of the mixing layers in KECCAK-f and XOODOO. While these mixing layers have a bitwise differential branch number of 4 and a computational cost of 2 (bitwise) additions per bit, the circulant twin CPMs we build have a bitwise differential branch number of 12 at the expense of an increase in computational cost: depending on the dimension, this ranges between $3$ and $3.34$ XORs per bit. Our circulant twin CPMs operate on a state in the form of a rectangular array and can serve as the mixing layer in a round function that has as non-linear step a layer of S-boxes operating in parallel on the columns. When sandwiched between two ShiftRow-like mappings, we obtain a columnwise branch number of 12, and hence 12 active S-boxes per two rounds are guaranteed in differential trails. Remarkably, the linear branch numbers (bitwise and columnwise alike) of these mappings are only 4. However, we define the transpose of a circulant twin CPM that has a linear branch number of 12 and a differential branch number of 4. We give a concrete instantiation of a permutation using such a mixing layer, named Gaston. It operates on a state of $5 \times 64$ bits and uses $\chi$ operating on columns for its non-linear layer. Most notably, the Gaston round function is lightweight in that it takes as few bitwise operations as that of the NIST lightweight standard ASCON. We show that the best 3-round differential and linear trails of Gaston have much higher weights than those of ASCON. Permutations like Gaston can be very competitive in applications that rely for their security exclusively on good differential properties, such as keyed hashing as in the compression phase of Farfalle.
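For readers unfamiliar with column parity mixers, the sketch below shows a plain (non-twin) column parity mixer in the spirit of the $\theta$ step of KECCAK-f, acting on a small rectangular bit array; the circulant twin CPM of the paper generalizes this kind of mapping. The state dimensions and the two column offsets are arbitrary choices for the example, not those of Gaston.

```python
# Minimal sketch of a plain column parity mixer on an r x c bit array, in the
# spirit of KECCAK-f's theta step (NOT the circulant twin CPM of the paper).
import random

ROWS, COLS = 5, 64          # e.g. a 5 x 64 state, as used by Gaston

def column_parity_mix(state):
    """state[i][j] is the bit in row i, column j."""
    # 1. Column parity: XOR of all bits in each column.
    parity = [0] * COLS
    for j in range(COLS):
        for i in range(ROWS):
            parity[j] ^= state[i][j]
    # 2. Each bit absorbs the parities of two nearby columns. The offsets -1
    #    and +1 are the example's choice; real designs pick offsets (and, for
    #    twin CPMs, richer combinations) to maximize the branch number.
    effect = [parity[(j - 1) % COLS] ^ parity[(j + 1) % COLS] for j in range(COLS)]
    return [[state[i][j] ^ effect[j] for j in range(COLS)] for i in range(ROWS)]

state = [[random.randint(0, 1) for _ in range(COLS)] for _ in range(ROWS)]
mixed = column_parity_mix(state)
print(sum(a != b for ra, rb in zip(state, mixed) for a, b in zip(ra, rb)), "bits changed")
```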
Alexandru Cojocaru, Juan Garay, Fang Song
ePrint Report
In this work we examine the hardness of solving various search problems by hybrid quantum-classical strategies, namely, by algorithms that have both quantum and classical capabilities. Specifically, for search problems that are allowed to have multiple solutions and in which the input is sampled according to uniform or Bernoulli distributions, we establish their hybrid quantum-classical query complexities---i.e., given a fixed number of classical and quantum queries, we determine the probability of solving the search task. At a technical level, our results generalize the framework for hybrid quantum-classical search algorithms recently proposed by Rosmanis. Namely, for an arbitrary distribution $D$ on Boolean functions, the probability that an algorithm equipped with $\tau_c$ classical queries and $\tau_q$ quantum queries succeeds in finding a preimage of $1$ for a function sampled from $D$ is at most $\nu_D \cdot (2\sqrt{\tau_c} + 2\tau_q + 1)^2$, where $\nu_D$ captures the average (over $D$) fraction of preimages of $1$. As applications of our results, we first revisit and generalize the formal security treatment of the Bitcoin protocol called the Bitcoin backbone [Eurocrypt 2015], to a setting where the adversary has both quantum and classical capabilities, presenting a new hybrid honest-majority condition necessary for the protocol to operate properly. Secondly, we re-examine the generic security of hash functions [PKC 2016] against quantum-classical hybrid adversaries.
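As a quick illustration of how the stated bound behaves, the snippet below evaluates the success-probability cap $\nu_D \cdot (2\sqrt{\tau_c} + 2\tau_q + 1)^2$ for a few hypothetical query budgets; the value of $\nu_D$ and the budgets are made-up examples, not parameters from the paper.

```python
# Evaluate the abstract's success-probability cap nu_D * (2*sqrt(tau_c) + 2*tau_q + 1)^2
# for made-up parameter choices (nu_D and the query budgets are examples only).
from math import sqrt

def hybrid_bound(nu_d, tau_c, tau_q):
    return min(1.0, nu_d * (2 * sqrt(tau_c) + 2 * tau_q + 1) ** 2)

nu_d = 2.0 ** -40          # average fraction of preimages of 1 (hypothetical)
for tau_c, tau_q in [(2**30, 0), (0, 2**15), (2**30, 2**15)]:
    print(f"tau_c={tau_c}, tau_q={tau_q}: success probability <= "
          f"{hybrid_bound(nu_d, tau_c, tau_q):.3e}")
```

Note how classical queries enter under a square root while quantum queries enter linearly inside the square, reflecting the Grover-type advantage of the quantum part.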
Timo Glaser, Alexander May, Julian Nowakowski
ePrint Report
Modern (lattice-based) cryptosystems typically sample their secret keys component-wise and independently from a discrete probability distribution $\chi$. For instance, KYBER has secret key entries from the centered binomial distribution, DILITHIUM from the uniform distribution, and FALCON from the discrete Gaussian. As attacks may require guessing of a subset of the secret key coordinates, the complexity of enumerating such sub-keys is of fundamental importance. Any length-$n$ sub-key with entries sampled from $\chi$ has entropy $\operatorname{H}(\chi)n$, where $\operatorname{H}(\chi)$ denotes the entropy of $\chi$. If $\chi$ is the uniform distribution, then it is easy to see that any length-$n$ sub-key can be enumerated with $2^{\operatorname{H}(\chi)n}$ trials. However, for sub-keys sampled from general probability distributions, Massey (1994) ruled out that the number of key guesses can be upper bounded by a function of the entropy alone. In this work, we bypass Massey's impossibility result by focusing on the typical cryptographic setting, where key entries are sampled independently component-wise from some distribution $\chi$, i.e., we focus on $\chi^n$.

We study the optimal key guessing algorithm that enumerates keys in $\chi^n$ in descending order of probability, but we abort at a certain probability. As our main result, we show that for any discrete probability distribution~$\chi$ our aborted key guessing algorithm tries at most $2^{\operatorname{H}(\chi)n}$ keys, and its success probability asymptotically converges to $\frac 1 2$. Our algorithm allows for a quantum version with at most $2^{\operatorname{H}(\chi) n/ 2}$ key guesses. In other words, for any distribution $\chi$, we achieve a Grover-type square root speedup, which we show to be optimal.

For the underlying key distributions of KYBER and FALCON, we explicitly compute the expected number of key guesses and their success probabilities for our aborted key guessing for all sub-key lengths $n$ of practical interest. Our experiments strongly indicate that our aborted key guessing, while sacrificing only a factor of two in success probability, improves over the usual (non-aborted) key guessing by a run time factor exponential in $n$.
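To make the entropy bound concrete, the snippet below computes $\operatorname{H}(\chi)$ for a centered binomial distribution with parameter $\eta=2$ (the shape used for KYBER-style secrets; the exact parameter here is only an example, not a claim about any specific parameter set) and the implied classical and quantum guess counts $2^{\operatorname{H}(\chi)n}$ and $2^{\operatorname{H}(\chi)n/2}$ for a few sub-key lengths.

```python
# Entropy of a centered binomial distribution and the implied guessing costs
# 2^{H(chi) n} (classical) and 2^{H(chi) n / 2} (quantum, Grover-type).
from math import comb, log2

def centered_binomial(eta):
    """P[X = k] for X = (sum of eta bits) - (sum of eta other bits)."""
    return {k: comb(2 * eta, k + eta) / 2 ** (2 * eta) for k in range(-eta, eta + 1)}

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

chi = centered_binomial(2)          # example: eta = 2
H = entropy(chi)
print(f"H(chi) = {H:.4f} bits per coordinate")
for n in (32, 64, 128):
    print(f"n={n:4d}: classical ~2^{H * n:6.1f} guesses, quantum ~2^{H * n / 2:6.1f}")
```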
Bart Mennink, Charlotte Lefevre
ePrint Report
The Ascon authenticated encryption scheme has recently been selected as the winner of the NIST Lightweight Cryptography competition. Despite its fame, however, there is no known generic security analysis of its mode: most importantly, all related generic security results only use the key to initialize the state and do not take into account the key blinding applied internally and at the end. In this work we present a thorough multi-user security analysis of the Ascon mode, in which the key blinding in particular is taken into account. Most importantly, our analysis includes an authenticity study in various attack settings. It includes a description of a new security model of authenticity under state recovery, which captures the idea that the mode aims to still guarantee authenticity and security against key recovery even if an inner state is revealed to the adversary in some way, for instance through leakage. We prove that Ascon satisfies this security property, thanks to its unique key blinding technique.
Shun Watanabe, Kenji Yasunaga
ePrint Report
Hardness amplification is one of the important reduction techniques in cryptography, and it has been extensively studied in the literature. The standard XOR lemma known in the literature evaluates the hardness in terms of the probability of correct prediction; the hardness is amplified from mildly hard (prediction probability close to $1$) to very hard ($1/2 + \varepsilon$) at the cost of an $\varepsilon^2$ multiplicative decrease in the circuit size. Translated into the bit-security framework introduced by Micciancio-Walter (EUROCRYPT 2018) and Watanabe-Yasunaga (ASIACRYPT 2021), such a statement may incur a bit-security loss by a factor of $\log(1/\varepsilon)$. To resolve this issue, we derive a new variant of the XOR lemma in terms of the R\'enyi advantage, which directly characterizes the bit security. In the course of proving this result, we prove a new variant of the hardcore lemma in terms of the conditional squared advantage; our proof uses a boosting algorithm that may output the $\bot$ symbol in addition to $0$ and $1$, which may be of independent interest.
Takanori Isobe, Ryoma Ito, Fukang Liu, Kazuhiko Minematsu, Motoki Nakahashi, Kosei Sakamoto, Rentaro Shiba
ePrint Report
In real-world applications, the overwhelming majority of cases require (authenticated) encryption or hashing with relatively short input, say up to 2K bytes. Almost all TCP/IP packets are 40 to 1.5K bytes, and the maximum packet lengths of major protocols, e.g., Zigbee, Bluetooth Low Energy, and Controller Area Network (CAN), are less than 128 bytes. However, existing schemes are not well optimized for short input. To bridge the gap between real-world needs (in the future) and the limited performance of state-of-the-art hash functions and authenticated encryption with associated data (AEAD) schemes for short input, we design a family of wide-block permutations, Areion, that fully leverages the power of AES instructions, which are widely deployed in many devices. As for its applications, we propose several hash functions and AEADs. Areion significantly outperforms existing schemes for short input and is even competitive for relatively long messages. Indeed, our hash function is surprisingly fast, and its performance is less than three cycles/byte on the latest Intel architecture for any message size. It is significantly faster than existing state-of-the-art schemes for short messages of up to around 100 bytes, which are the most widely used input sizes in real-world applications, on both the latest CPU architectures (Ice Lake, Tiger Lake, and Alder Lake) and mobile platforms (Pixel 7, iPhone 14, and iPad Pro with Apple M2).
Fabio Campos, Jorge Chavez-Saab, Jesús-Javier Chi-Domínguez, Michael Meyer, Krijn Reijnders, Francisco Rodríguez-Henríquez, Peter Schwabe, Thom Wiggers
ePrint Report
The isogeny-based scheme CSIDH is considered to be the only efficient post-quantum non-interactive key exchange (NIKE) and has small bandwidth requirements, thus appearing to be an attractive alternative to classical Diffie--Hellman schemes. A crucial CSIDH design point, still under debate, is its quantum security when using prime fields of 512 to 1024 bits. Most work has focused on prime fields of that size, and the practicality of CSIDH with large parameters, 2000 to 9000 bits, has so far not been thoroughly assessed, even though analyses of its quantum security suggest these parameter sizes.

We fill this gap by providing two CSIDH instantiations: A deterministic and dummy-free instantiation based on SQALE, aiming at high security against physical attacks, and a speed-optimized constant-time instantiation that adapts CTIDH to larger parameter sizes. We provide implementations of both variants, including efficient field arithmetic for fields of such size, and high-level optimizations. Our deterministic and dummy-free version, dCSIDH, is almost twice as fast as SQALE, and, dropping determinism, CTIDH at these parameters is thrice as fast as dCSIDH. We investigate their use in real-world scenarios through benchmarks of TLS using our software.

Although our instantiations of CSIDH have smaller communication requirements than post-quantum KEM and signature schemes, both implementations still result in a too-large handshake latency (tens of seconds), which hinders further consideration of using CSIDH in practice for conservative parameter-set instantiations.
Jiangxia Ge, Tianshu Shan, Rui Xue
ePrint Report
The Fujisaki-Okamoto (\textsf{FO}) transformation (CRYPTO 1999 and Journal of Cryptology 2013) and its KEM variants (TCC 2017) are used to construct \textsf{IND-CCA}-secure PKE or KEM schemes in the random oracle model (ROM).

In the post-quantum setting, the ROM is extended to the quantum random oracle model (QROM), and the \textsf{IND-CCA} security of the \textsf{FO} transformation and its KEM variants in the QROM has been extensively analyzed. Grubbs et al. (EUROCRYPT 2021) and Xagawa (EUROCRYPT 2022) then focused on security properties other than \textsf{IND-CCA} security, such as the anonymity against chosen-ciphertext attacks (\textsf{ANO-CCA}) of the \textsf{FO} transformation in the QROM.

Beyond the post-quantum setting, Boneh and Zhandry (CRYPTO 2013) considered quantum adversaries that can perform quantum chosen-ciphertext attacks (\textsf{qCCA}). However, to the best of our knowledge, there are few results on the \textsf{IND-qCCA} or \textsf{ANO-qCCA} security of the \textsf{FO} transformation and its KEM variants in the QROM.

In this paper, we define a class of security games called oracle-hiding games, and provide a lifting theorem for them. This theorem lifts the security reduction of oracle-hiding games in the ROM to one in the QROM. With this theorem, we prove the \textsf{IND-qCCA} and \textsf{ANO-qCCA} security of the transformations $\textsf{FO}^{\slashed{\bot}}$, $\textsf{FO}^{\bot}$, $\textsf{FO}_m^{\slashed{\bot}}$ and $\textsf{FO}_m^\bot$, which are KEM variants of \textsf{FO}, in the QROM.
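For orientation, the implicit-rejection variant $\textsf{FO}^{\slashed{\bot}}$ follows the usual blueprint of the TCC 2017 modular treatment: encrypt a random message with coins derived from it, derive the session key from the message and ciphertext, and fall back to a keyed pseudorandom value when re-encryption fails. The sketch below is a schematic Python rendering with a placeholder PKE (all function names are hypothetical), not a secure implementation and not the paper's proofs.

```python
# Schematic sketch of an implicit-rejection FO KEM (FO-without-bot style).
# `pke_enc` / `pke_dec` stand for a PKE that is deterministic given the coins;
# they are placeholders, not a real scheme.
import hashlib, os

def G(m):            # coin-derivation hash
    return hashlib.sha3_256(b"G" + m).digest()

def H(m, c):         # key-derivation hash
    return hashlib.sha3_256(b"H" + m + c).digest()

def encaps(pk, pke_enc):
    m = os.urandom(32)
    c = pke_enc(pk, m, G(m))          # encrypt m with derandomized coins G(m)
    return c, H(m, c)                 # session key K = H(m, c)

def decaps(sk, pk, s, c, pke_enc, pke_dec):
    m = pke_dec(sk, c)
    if m is not None and pke_enc(pk, m, G(m)) == c:
        return H(m, c)                # re-encryption check passed
    return H(s, c)                    # implicit rejection: pseudorandom key from secret seed s
```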

Moreover, we prove the \textsf{ANO-qCCA} security of the hybrid PKE schemes built via the KEM-DEM paradigm, where the underlying KEM schemes are obtained by $\textsf{FO}^{\slashed{\bot}}$, $\textsf{FO}^{\bot}$, $\textsf{FO}_m^{\slashed{\bot}}$ and $\textsf{FO}_m^\bot$. Notably, for those hybrid PKE schemes, our security reduction shows that their anonymity is independent of the security of their underlying DEM schemes. Hence, our result simplifies the anonymity analysis of hybrid PKE schemes obtained from the \textsf{FO} transformation.
Andrea Basso, Tako Boris Fouotsa
ePrint Report
The Supersingular Isogeny Diffie-Hellman (SIDH) protocol has been the main and most efficient isogeny-based encryption protocol, until a series of breakthroughs led to a polynomial-time key-recovery attack. While some countermeasures have been proposed, the resulting schemes are significantly slower and larger than the original SIDH.

In this work, we propose a new countermeasure technique that leads to significantly more efficient and compact protocols. To do so, we introduce the concept of artificially oriented curves, which are curves with an associated pair of subgroups. We show that this information is sufficient to build parallel isogenies and thus obtain an SIDH-like key exchange, while also revealing significantly less information compared to previous constructions.

After introducing artificially oriented curves, we formalize several related computational problems and thoroughly assess their presumed hardness. We then translate the SIDH key exchange to the artificially oriented setting, obtaining the key-exchange protocols binSIDH, or binary SIDH, and terSIDH, or ternary SIDH, which respectively rely on fixed-degree and variable-degree isogenies.

Lastly, we provide a proof-of-concept implementation of the proposed protocols. Despite being implemented in a high-level language, terSIDH has very competitive running times, which suggests that terSIDH might be the most efficient isogeny-based encryption protocol.
Yaobin Shen, François-Xavier Standaert
ePrint Report
We consider the design of a tweakable block cipher from a block cipher whose inputs and outputs are of size $n$ bits. The main goal is to achieve $2^n$ security with a large tweak (i.e., more than $n$ bits). Previously, Mennink at FSE'15 and Wang et al. at Asiacrypt'16 proposed constructions that can achieve $2^n$ security. Yet, these constructions only support tweaks of at most $n$ bits. As evident from recent research, a tweakable block cipher with a large tweak is generally helpful as a building block for modes of operation, with typical applications including MACs, authenticated encryption, leakage-resilient cryptography and full-disk encryption.

We begin with how to design a tweakable block cipher with a $2n$-bit tweak and $n$-bit security from two block cipher calls. For this purpose, we do an exhaustive search for tweakable block ciphers with $2n$-bit tweaks built from two block cipher calls, and show that all of them suffer from birthday-bound attacks. Next, we investigate the possibility of designing a tweakable block cipher with a $2n$-bit tweak and $n$-bit security from three block cipher calls. We start with some conditions for building such a tweakable block cipher and propose a natural construction, called G1, that likely meets them. After inspection, we find a weakness in G1 that leads to a birthday-bound attack. Based on G1, we then propose another construction, called G2, that avoids this weakness. We finally prove that G2 achieves $n$-bit security with a $2n$-bit tweak.
Sahiba Suryawanshi, Dhiman Saha
ePrint Report
The current work makes a systematic attempt to describe the effect of the relative order of round constant (RCon) addition in the round function of an SPN cipher on its algebraic structure. The observations are applied to the SymSum distinguisher, introduced by Saha et al. at FSE 2017, which is one of the best distinguishers on the SHA-3 hash function reported in the literature. Results show that a certain ordering of RCon (referred to as Type-LCN) makes the distinguisher less effective, but it still works with some limitations. Results in the form of new SymSum distinguishers are reported on concrete Type-LCN constructions: the NIST LWC competition finalist Xoodyak-Hash and its internal permutation Xoodoo. New linear structures are also reported on Xoodoo that augment the distinguisher to penetrate more rounds. Final results include SymSum distinguishers on 7 rounds of Xoodoo and 5 rounds of Xoodyak-Hash with complexity $2^{128}$ and $2^{32}$, respectively. All practical distinguishers have been verified. The characterization encompassing the algebraic structure and the effect of RCon provided by the current work improves the understanding of SymSum in general and constitutes one of the first such results on Xoodyak-Hash and Xoodoo.

30 May 2023

Steve Thakur
ePrint Report
We describe a pairing-based SNARK with a universal updateable CRS that can be instantiated with any pairing-friendly curve endowed with a sufficiently large prime scalar field. We use the monomial basis, thus sidestepping the need for large smooth order subgroups in the scalar field. In particular, the scheme can be instantiated with outer curves to widely used curves such as Ed25519, secp256k1 and BN254. This allows us to largely circumvent the overhead of non-native field arithmetic for succinct proofs of statements in these base fields. These include proofs of valid signatures in Ed25519 and secp256k1 and one layer recursion with BN254.

The proof size is constant \( (10\; \mathbb{G}_1\), \(20\;\mathbb{F}_p)\), as is the verification runtime, which is dominated by a single pairing check (i.e. two pairings). The Prover time is dominated by the \(10\) multi-scalar multiplications in \(\mathbb{G}_1\) (with a combined MSM length of $22\cdot |\mathrm{Circuit}|$) and, to a lesser extent, the computation of a single sum of polynomial products over the scalar field.

The scheme supports succinct lookup arguments for subsets as well as subsequences. Our construction relies on homomorphic table commitments, which makes them amenable to vector lookups. The Prover algorithm runs in time $O(M\cdot \log(M))$, where $M = \max \{|\text{Circuit}| , \;|\text{Table}|\}.$

When the scalar field has low $2$-adicity, as is inevitably the case with any outer curve to Ed25519, secp256k1 or BN254, we use the Schönhage-Strassen algorithm or the multimodular FFT algorithm to compute the sum of polynomial products that occurs in one of the steps of the proof generation. Despite the small (but discernible) slowdown compared to polynomial products in FFT-friendly fields, we empirically found that the MSMs dominate the proof generation time. To that end, we have included some benchmarks for polynomial products in FFT-unfriendly fields.
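As a small aside on polynomial products over FFT-unfriendly prime fields, one simple technique, shown below purely as an illustration (it is not the Schönhage-Strassen or multimodular FFT routine the abstract refers to), is Kronecker substitution: pack the coefficients into large integers, perform a single big-integer multiplication using the host language's arbitrary-precision arithmetic, and unpack the result.

```python
# Kronecker substitution: multiply polynomials over F_p without roots of unity,
# by reducing to one big-integer product. Illustrative only.
def polymul_mod_p(a, b, p):
    """Product of polynomials a, b (coefficient lists, entries in [0, p)) mod p."""
    # Each coefficient of the product is < min(len(a), len(b)) * (p - 1)^2,
    # so give every coefficient `bits` bits in the packed integers.
    bits = (min(len(a), len(b)) * (p - 1) ** 2).bit_length() + 1
    pack = lambda poly: sum(c << (i * bits) for i, c in enumerate(poly))
    prod = pack(a) * pack(b)                       # single big-integer multiplication
    mask = (1 << bits) - 1
    return [((prod >> (i * bits)) & mask) % p
            for i in range(len(a) + len(b) - 1)]

p = 2**255 - 19                                    # any large prime works
print(polymul_mod_p([1, 2, 3], [4, 5], p))         # (1 + 2x + 3x^2)(4 + 5x)
```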

Furthermore, the scheme supports custom gates, albeit at the cost of a larger proof size. As an application of the techniques in this paper, we describe a protocol that supports multiple \( \mathbf{univariate}\) custom gates $\mathcal{G}_i$ of high degree that are sparsely distributed, in the sense that \[ \sum_{i} \deg(\mathcal{G}_i)\cdot \#(\mathcal{G}_i\;\text{gates}) \; = \; O(|\text{Circuit}|). \] This comes at the cost of three additional $\mathbb{G}_1$ elements and does not blow up the proof generation time, i.e. it does not entail MSMs or FFTs of length larger than the circuit size.

At the moment, Panther Protocol's Rust implementation in a 576-bit pairing-friendly outer curve to Ed25519 has a (not yet optimized) Prover time of 45 seconds for a million gate circuit on a 64 vCPU AWS machine.
Chenghong Wang, David Pujo, Kartik Nayak, Ashwin Machanavajjhala
ePrint Report
Safety, liveness, and privacy are three critical properties for any private proof-of-stake (PoS) blockchain. However, prior work (SP'21) has shown that to obtain safety and liveness, a PoS blockchain must in theory forgo privacy. Specifically, to ensure safety and liveness, PoS blockchains elect parties based on stake proportion, potentially exposing a party's stake even with private transaction processing. In this work, we make two key contributions. First, we present the first stake inference attack applicable to both deterministic and randomized PoS, with exponentially lower running time in comparison with state-of-the-art designs. Second, we use differentially private stake distortion to achieve privacy in PoS blockchains and design two stake distortion mechanisms that any PoS protocol can use. We further evaluate our proposed methods using Ethereum 2.0, a widely recognized PoS blockchain in operation. Results demonstrate effective stake inference risk mitigation, reasonable privacy, and preservation of essential safety and liveness properties.
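To give a flavor of what differentially private stake distortion could look like, here is a generic Laplace-mechanism illustration (emphatically not the two mechanisms designed in the paper): publish a noisy stake value whose noise scale is calibrated to the privacy budget $\varepsilon$ and to the amount of stake one wishes to hide.

```python
# Generic Laplace-mechanism illustration of stake distortion (NOT the paper's
# mechanisms): publish stake + Laplace noise with scale sensitivity / epsilon.
import random

def distorted_stake(stake, epsilon, sensitivity):
    """Publish stake plus Laplace(0, sensitivity/epsilon) noise; clamp at zero."""
    b = sensitivity / epsilon
    # Difference of two i.i.d. exponentials with rate 1/b is Laplace(0, b).
    noise = random.expovariate(1 / b) - random.expovariate(1 / b)
    return max(0.0, stake + noise)          # clamping is post-processing, so DP is preserved

print(distorted_stake(stake=10_000.0, epsilon=0.5, sensitivity=32.0))
```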
Zhipeng Wang, Xihan Xiong, William J. Knottenbelt
ePrint Report
The ecosystem around blockchain and Decentralized Finance (DeFi) is seeing more and more interest from centralized regulators. For instance, recently, the US government placed sanctions on the largest DeFi mixer, Tornado.Cash (TC). To our knowledge, this is the first time that centralized regulators sanction a decentralized and open-source blockchain application. It has led various blockchain participants, e.g., miners/validators and DeFi platforms, to censor TC-related transactions. The blockchain community has extensively discussed that censoring transactions could affect users’ privacy.

In this work, we analyze the efficiency and possible security implications of censorship on the different steps during the life cycle of a blockchain transaction, i.e., generation, propagation, and validation. We reveal that fine-grained censorship will reduce the security of block validators and centralized transaction propagation services, and can potentially cause Denial of Service (DoS) attacks. We also find that DeFi platforms adopt centralized third-party services to censor user addresses at the frontend level, which blockchain users could easily bypass. Moreover, we present a tainting attack whereby an adversary can prevent users from interacting normally with DeFi platforms by sending TC-related transactions.
Dmitrii Koshelev
ePrint Report
This article is dedicated to a new method for generating two ``independent'' $\mathbb{F}_{\!q}$-points $P_0$, $P_1$ on almost any ordinary elliptic curve $E$ over a finite field $\mathbb{F}_{\!q}$ of large characteristic. In particular, the method is relevant for all standardized and real-world elliptic curves of $j$-invariants different from $0$, $1728$. The points $P_0$, $P_1$ are characterized by the fact that nobody (not even a generator) knows the discrete logarithm $\log_{P_0}(P_1)$ in the group $E(\mathbb{F}_{\!q})$. Moreover, only one square root extraction in $\mathbb{F}_{\!q}$ (instead of two) is required, in comparison with all previous generation methods.
Alessio Meneghetti, Edoardo Signorini
ePrint Report
A sequential aggregate signature (SAS) scheme allows multiple users to sequentially combine their respective signatures in order to reduce communication costs. Historically, early proposals required the use of trapdoor permutations (e.g., RSA). In recent years, a number of attempts have been made to extend SAS schemes to post-quantum assumptions. Many post-quantum signatures have been proposed in the hash-and-sign paradigm, which requires the use of trapdoor functions and appears to be an ideal candidate for sequential aggregation attempts. However, the difficulty of achieving post-quantum one-way permutations makes it hard to obtain similarly general constructions. Direct attempts at generalizing permutation-based schemes have been proposed, but they either lack formal security or require additional properties of the trapdoor function, which are typically not available for multivariate or code-based functions. In this paper, we propose a history-free sequential aggregate signature based on generic trapdoor functions, generalizing existing techniques. We prove the security of our scheme in the random oracle model by adopting the probabilistic hash-and-sign with retry paradigm, and we instantiate our construction with three post-quantum schemes, comparing their compression capabilities. Finally, we discuss how direct extensions of permutation-based SAS schemes are not possible without additional properties, showing the insecurity of two existing multivariate schemes when instantiated with Unbalanced Oil and Vinegar.
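For context, the probabilistic hash-and-sign with retry paradigm mentioned above can be summarized as: hash the message together with a fresh salt, try to invert the trapdoor function on the digest, and retry with a new salt whenever inversion fails. The sketch below is a generic schematic under that description (the trapdoor function and its preimage sampler are abstract placeholders), not any of the schemes instantiated in the paper.

```python
# Generic probabilistic hash-and-sign with retry (schematic; the trapdoor
# function F and its preimage sampler are abstract placeholders).
import os

def sign(sk, message, invert, hash_to_range):
    """Retry with fresh salts until the digest has a preimage under F."""
    while True:
        salt = os.urandom(16)
        y = hash_to_range(message, salt)
        x = invert(sk, y)            # may fail (return None) for some digests
        if x is not None:
            return salt, x

def verify(pk, message, signature, apply_F, hash_to_range):
    salt, x = signature
    return apply_F(pk, x) == hash_to_range(message, salt)
```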
Andrea Di Giusto, Chiara Marcolla
ePrint Report
The Brakerski-Gentry-Vaikuntanathan (BGV) scheme is a Fully Homomorphic Encryption (FHE) cryptosystem based on the Ring Learning With Errors (RLWE) problem. Ciphertexts in this scheme contain an error term that grows with operations and causes decryption failure when it surpasses a certain threshold. For this reason, the parameters of BGV need to be estimated carefully, with a trade-off between security and error margin. The ciphertext space of BGV is the ring $\mathcal R_q=\mathbb Z_q[x]/(\Phi_m(x))$, where usually the degree $n$ of the cyclotomic polynomial $\Phi_m(x)$ is chosen as a power of two for efficiency reasons. However, the jump between two consecutive power-of-two polynomials can sometimes also cause a jump in security, resulting in parameters that are much bigger than needed.

In this work, we explore the non-power-of-two instantiations of BGV. Although our theoretical research encompasses results applicable to any cyclotomic ring, our main investigation is focused on the case of $m=2^s 3^t$, i.e., cyclotomic polynomials with degree $n=2^s 3^{t-1}$. We provide a thorough analysis of the noise growth in this new setting using the canonical norm and compare our results with the power-of-two case considering practical aspects like NTT algorithms. We find that in many instances, the parameter estimation process yields better results for the non-power-of-two setting.
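To see how much finer the granularity of available ring degrees becomes, the snippet below lists the degrees $n=\varphi(m)=2^s 3^{t-1}$ for conductors $m=2^s 3^t$ and compares them with the power-of-two-only choices; the degree range shown is an arbitrary example.

```python
# Available BGV ring degrees n = phi(m) for m = 2^s * 3^t, compared with the
# power-of-two-only choices. The range [2^10, 2^16] is just an example.
def degrees_2s3t(lo, hi):
    out = set()
    s = 1
    while 2 ** s <= hi:
        t = 1
        while (n := 2 ** s * 3 ** (t - 1)) <= hi:   # phi(2^s * 3^t) = 2^s * 3^(t-1)
            if n >= lo:
                out.add(n)
            t += 1
        s += 1
    return sorted(out)

print("powers of two:        ", [2 ** k for k in range(10, 17)])
print("2^s * 3^t degrees:    ", degrees_2s3t(2 ** 10, 2 ** 16))
```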