International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

06 June 2023

Thomas Attema, Serge Fehr, Nicolas Resch
ePrint Report
A classic result in the theory of interactive proofs shows that a special-sound $\Sigma$-protocol is automatically a proof of knowledge. This result is very useful to have, since the latter property is typically tricky to prove from scratch, while the former is often easy to argue---if it is satisfied. While classic $\Sigma$-protocols often are special-sound, this is unfortunately not the case for many recently proposed, highly efficient interactive proofs, at least not in this strict sense. Motivated by this, the original result was recently generalized to $k$-special sound $\Sigma$-protocols (for arbitrary, polynomially bounded $k$), and to multi-round versions thereof. This generalization is sufficient to analyze (e.g.) Bulletproofs-like protocols, but is still insufficient for many other examples.

In this work, we push the relaxation of the special soundness property to the extreme, by allowing an arbitrary access structure $\Gamma$ to specify for which subsets of challenges it is possible to compute a witness, when given correct answers to these challenges (for a fixed first message). Concretely, for any access structure $\Gamma$, we identify parameters $t_\Gamma$ and $\kappa_\Gamma$, and we show that any $\Gamma$-special sound $\Sigma$-protocol is a proof of knowledge with knowledge error $\kappa_\Gamma$ if $t_\Gamma$ is polynomially bounded. Similarly for multi-round protocols.
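
For orientation, recall the parameters in the classical threshold case: a $k$-special-sound $\Sigma$-protocol with challenge space $\mathcal{C}$ is a proof of knowledge with knowledge error $\kappa = (k-1)/|\mathcal{C}|$. Indeed, $k$-special soundness is exactly $\Gamma$-special soundness for the threshold access structure $\Gamma = \{S \subseteq \mathcal{C} : |S| \ge k\}$; the $\Gamma$-notion replaces this threshold with an arbitrary access structure.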

We apply our general result to a couple of simple but important example protocols, where we obtain a tight knowledge error as an immediate corollary. Beyond these simple examples, we analyze the FRI protocol. Here, showing the general special soundness notion is non-trivial, but can be done (for a certain range of parameters) by recycling some of the techniques used to argue ordinary soundness of the protocol (as an IOP). Again as a corollary, we then derive that the FRI protocol, as an interactive proof by using a Merkle-tree commitment, is a proof of knowledge with almost optimal knowledge error.

Finally, building on the technique for the parallel repetition of $k$-special sound $\Sigma$-protocols, we show the same strong parallel repetition result for $\Gamma$-special sound $\Sigma$-protocols and their multi-round variants.

Elette Boyle, Geoffroy Couteau, Niv Gilboa, Yuval Ishai, Lisa Kohl, Nicolas Resch, Peter Scholl
ePrint Report
The computational overhead of a cryptographic task is the asymptotic ratio between the computational cost of securely realizing the task and that of realizing the task with no security at all.
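
In symbols (with hypothetical cost notation, restating the definition above): for a task given by a family $\{f_n\}$ with insecure cost $C_{\mathrm{plain}}(f_n)$ and secure-realization cost $C_{\mathrm{sec}}(f_n)$, the computational overhead is $\limsup_{n \to \infty} C_{\mathrm{sec}}(f_n)/C_{\mathrm{plain}}(f_n)$; "constant computational overhead" means this ratio is $O(1)$, independent of the desired level of security.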

Ishai, Kushilevitz, Ostrovsky, and Sahai (STOC 2008) showed that secure two-party computation of Boolean circuits can be realized with constant computational overhead, independent of the desired level of security, assuming the existence of an oblivious transfer (OT) protocol and a local pseudorandom generator (PRG). However, this only applies to the case of semi-honest parties. A central open question in the area is the possibility of a similar result for malicious parties. This question is open even for the simpler task of securely realizing many instances of a constant-size function, such as OT of bits.

We settle the question in the affirmative for the case of OT, assuming: (1) a standard OT protocol, (2) a slightly stronger "correlation-robust" variant of a local PRG, and (3) a standard sparse variant of the Learning Parity with Noise (LPN) assumption. An optimized version of our construction requires fewer than 100 bit operations per party per bit-OT. For 128-bit security, this improves over the best previous protocols by 1-2 orders of magnitude.

We achieve this by constructing a constant-overhead pseudorandom correlation generator (PCG) for the bit-OT correlation. Such a PCG generates $N$ pseudorandom instances of bit-OT by locally expanding short, correlated seeds. As a result, we get an end-to-end protocol for generating $N$ pseudorandom instances of bit-OT with $o(N)$ communication, $O(N)$ computation, and security that scales sub-exponentially with $N$.

Finally, we present applications of our main result to realizing other secure computation tasks with constant computational overhead. These include protocols for general circuits with a relaxed notion of security against malicious parties, protocols for realizing $N$ instances of natural constant-size functions, and reducing the main open question to a potentially simpler question about fault-tolerant computation.

André Schrottenloher, Marc Stevens
ePrint Report
The meet-in-the-middle (MITM) technique has led to many key-recovery attacks on block ciphers and preimage attacks on hash functions. MITM attacks aim at efficiently finding the internal states that conform to a constrained computational path in the given design. The path is split into two independent computations (forward and backward) which are performed separately and then matched pairwise.
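
To make the split-and-match idea concrete, here is a minimal sketch of the classic MITM key recovery against double encryption, using an invented 16-bit toy cipher (not any real design): a forward table of midpoints is matched against backward computations, costing roughly $2 \cdot 2^{16}$ cipher calls instead of $2^{32}$.

```python
# Toy meet-in-the-middle attack on double encryption E_k2(E_k1(p)).
# Illustrative sketch only: the 16-bit "cipher" below is made up.

def toy_encrypt(key: int, block: int) -> int:
    """A made-up 16-bit toy round: XOR, add, rotate."""
    x = (block ^ key) & 0xFFFF
    x = (x + 0x9E37) & 0xFFFF
    return ((x << 3) | (x >> 13)) & 0xFFFF

def toy_decrypt(key: int, block: int) -> int:
    x = ((block >> 3) | (block << 13)) & 0xFFFF
    x = (x - 0x9E37) & 0xFFFF
    return (x ^ key) & 0xFFFF

def mitm_attack(plain: int, cipher: int):
    """Recover (k1, k2) with ~2*2^16 work instead of 2^32:
    tabulate all forward midpoints E_k1(plain), match backward."""
    forward = {}
    for k1 in range(1 << 16):
        forward.setdefault(toy_encrypt(k1, plain), []).append(k1)
    candidates = []
    for k2 in range(1 << 16):
        mid = toy_decrypt(k2, cipher)
        for k1 in forward.get(mid, []):
            candidates.append((k1, k2))
    return candidates

# Usage: build a known plaintext/ciphertext pair and attack it.
k1, k2, p = 0x1234, 0xBEEF, 0x0042
c = toy_encrypt(k2, toy_encrypt(k1, p))
assert (k1, k2) in mitm_attack(p, c)
```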

Nowadays, cryptographers use automatic tools that reduce the search for MITM attacks to an optimization problem. Bao et al. (EUROCRYPT 2021) introduced a low-level modeling based on Mixed Integer Linear Programming (MILP) for MITM attacks on hash functions, which was extended to key-recovery attacks by Dong et al. (CRYPTO 2021). However, the modeling only covers AES-like designs. Schrottenloher and Stevens (CRYPTO 2022) proposed a different approach aiming at higher-level simplified models. Their modeling, however, was limited to cryptographic permutations.

In this paper, we extend the latter simplified modeling to also cover block ciphers with simple key schedules. The resulting modeling enables us to target a large array of primitives, typically lightweight SPN ciphers where the key schedule has a slow diffusion, or none at all. We give several applications such as full breaks of the PIPO-256 and FUTURE block ciphers, and reduced-round classical and quantum attacks on SATURNIN-Hash.

Morten Dahl, Daniel Demmler, Sarah Elkazdadi, Arthur Meyre, Jean-Baptiste Orfila, Dragos Rotaru, Nigel P. Smart, Samuel Tap, Michael Walter
ePrint Report
We outline a secure and efficient methodology to do threshold distributed decryption for LWE-based Fully Homomorphic Encryption schemes. The only ``difficult'' case is that of TFHE (due to the small parameters used in this scheme). We show that the standard technique of ``noise flooding'' can also be used with schemes with small parameters, by switching to a scheme with slightly larger parameters and then exploiting the efficient bootstrapping operations which TFHE offers. Our protocol is proved secure via a simulation argument, making its integration in bigger protocols easier to manage.
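
As a rough illustration of the flooding idea (a toy LWE-style scheme with invented parameters, not TFHE itself): each party's partial decryption adds uniform noise that is much larger than the encryption noise, hiding its key share, yet still far below the rounding threshold.

```python
# Minimal sketch of threshold decryption with "noise flooding" for a
# toy LWE-style ciphertext. All parameters are illustrative only.
import secrets

Q = 1 << 64          # ciphertext modulus (toy choice)
DELTA = 1 << 60      # plaintext scaling factor
N = 16               # LWE dimension (toy)
PARTIES = 3
FLOOD = 1 << 40      # flooding noise: >> encryption noise, << DELTA

def keygen():
    s = [secrets.randbelow(2) for _ in range(N)]  # binary secret
    shares = [[secrets.randbelow(Q) for _ in range(N)]
              for _ in range(PARTIES - 1)]
    last = [(s[i] - sum(sh[i] for sh in shares)) % Q for i in range(N)]
    return s, shares + [last]                     # additive sharing of s

def encrypt(s, m):
    a = [secrets.randbelow(Q) for _ in range(N)]
    e = secrets.randbelow(7) - 3                  # small noise
    b = (sum(ai * si for ai, si in zip(a, s)) + DELTA * m + e) % Q
    return a, b

def partial_decrypt(share, a):
    # Each party adds large uniform "flooding" noise to hide its share.
    flood = secrets.randbelow(2 * FLOOD) - FLOOD
    return (sum(ai * si for ai, si in zip(a, share)) + flood) % Q

def combine(b, partials):
    noisy = (b - sum(partials)) % Q
    return ((noisy + DELTA // 2) // DELTA) % (Q // DELTA)  # round

s, shares = keygen()
a, b = encrypt(s, m=5)
assert combine(b, [partial_decrypt(sh, a) for sh in shares]) == 5
```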

Zhengjun Cao, Lihua Liu
ePrint Report
We show that the searchable encryption scheme [IEEE Trans. Parallel Distrib. Syst., 32 (3), 2021, 561--574] cannot work because the Data Provider's secret key $sk_{DP}$ and the Request User's secret key $sk_{RU}$ are not available to the Cloud Platform (CP) or the Internal Server (IS). The CP and IS cannot finish the secure bit-decomposition protocol, which requires CP or IS to decrypt the blinded integer so as to securely handle the least significant bit of the target integer.

Seny Kamara, Tarik Moataz
ePrint Report
Sub-linear encrypted search algorithms (ESAs) are highly efficient search algorithms that operate on end-to-end encrypted data. ESAs can be built using a variety of cryptographic primitives and can achieve different trade-offs between efficiency, expressiveness and leakage. Since the introduction of ESAs, cryptographers have focused on both minimizing and attacking their leakage, but an important open problem in the field has been to provide a theoretical framework with which leakage can be analyzed and better understood.

In this work, we propose such a framework. We model leakage profiles as Bayesian networks and capture leakage attacks as statistical inference algorithms on these networks. We then formalize a notion we call coherence which, roughly speaking, captures the quality of the inference given some observed leakage and an auxiliary distribution. In this work, we focus on partial and full query recovery attacks, though our framework can be extended to capture data recovery attacks as well.

We then use our framework to study the coherence of two common leakage patterns---the query equality pattern and the volume pattern---against two well-known and powerful statistical inference techniques. In each case, we provide generic bounds on the coherence in the sense that they apply to arbitrary query and auxiliary distributions and concrete analyses for specific pairs of query and auxiliary distributions.

Cody Freitag, Brent Waters, David J. Wu
ePrint Report
Witness encryption is a generalization of public-key encryption where the public key can be any NP statement x and the associated decryption key is any witness w for x. While early constructions of witness encryption relied on multilinear maps and indistinguishability obfuscation (iO), recent works have provided direct constructions of witness encryption that are more efficient than iO (and also seem unlikely to yield iO). Motivated by this progress, we revisit the possibility of using witness encryption to realize advanced cryptographic primitives previously known only in "obfustopia."

In this work, we give new constructions of trustless encryption systems from plain witness encryption (in conjunction with the learning-with-errors assumption): (1) flexible broadcast encryption (a broadcast encryption scheme where users choose their own secret keys and users can encrypt to an arbitrary set of public keys); and (2) registered attribute-based encryption (a system where users choose their own keys and then register their public key together with a set of attributes with a deterministic and transparent key curator). Both primitives were previously only known from iO. We also show how to use our techniques to obtain an optimal broadcast encryption scheme in the random oracle model.

Underlying our constructions is a novel technique for using witness encryption based on a new primitive which we call function-binding hash functions. Whereas a somewhere statistically binding hash function statistically binds a digest to a few bits of the input, a function-binding hash function statistically binds a digest to the output of a function of the inputs. As we demonstrate in this work, function-binding hash functions provide us new ways to leverage the power of plain witness encryption and use it as the foundation of advanced cryptographic primitives. Finally, we show how to build function-binding hash functions for the class of disjunctions of block functions from leveled homomorphic encryption; this in combination with witness encryption yields our main results.

Giuseppe Persiano, Kevin Yeo
ePrint Report
Oblivious RAMs (ORAMs) are an important cryptographic primitive that enables outsourcing data to a potentially untrusted server while hiding patterns of access to the data. ORAMs provide strong guarantees even in the face of a {\em persistent adversary} that views the transcripts of all operations and resulting memory contents. Unfortunately, the strong guarantees against persistent adversaries come at the cost of efficiency, as ORAMs are known to require $\Omega(\log n)$ overhead.

In an attempt to obtain faster constructions, prior works considered security against {\em snapshot adversaries} that only have limited access to operational transcripts and memory. We consider $(s,\ell)$-snapshot adversaries that perform $s$ data breaches and view the transcripts of $\ell$ total queries. Promisingly, Du, Genkin and Grubbs [Crypto'22] presented an ORAM construction with $O(\log \ell)$ overhead protecting against $(1,\ell)$-snapshot adversaries that obtain the transcript of $\ell$ consecutive operations from a single breach. For small values of $\ell$, this outperforms standard ORAMs.

In this work, we tackle whether it is possible to push this construction beyond a single breach. Unfortunately, we show that protecting against even slightly stronger snapshot adversaries becomes difficult. As our main result, we present an $\Omega(\log n)$ lower bound for any ORAM protecting against a $(3,1)$-snapshot adversary that performs three breaches and sees the transcript of only one query. In other words, our lower bound holds even if an adversary observes only memory contents during two breaches while managing to view the transcript of only one query in the other breach. Therefore, we surprisingly show that protecting against a snapshot adversary with three data breaches is as difficult as protecting against a persistent adversary.

Seny Kamara, Abdelkarim Kati, Tarik Moataz, Jamie DeMaria, Andrew Park, Amos Treiber
ePrint Report
Encrypted search algorithms (ESAs) enable private search on encrypted data and can be constructed from a variety of cryptographic primitives. All known sub-linear ESAs leak information and, therefore, the design of leakage attacks is an important way to ascertain whether a given leakage profile is exploitable in practice. Recently, Oya and Kerschbaum (Usenix '22) presented an attack called IHOP that targets the query equality pattern---which reveals if and when two queries are for the same keyword---of a sequence of dependent queries.

In this work, we continue the study of query equality leakage on dependent queries and present two new attacks in this setting, which can work either as known-distribution or known-sample attacks. They model query distributions as Markov processes and leverage insights and techniques from stochastic processes and machine learning. We implement our attacks and evaluate them on real-world query logs. Our experiments show that they outperform the state of the art in most settings, but also have practical limitations.
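
As a toy illustration of the setting (keywords and transition probabilities invented here), dependent queries drawn from a Markov process induce exactly the query-equality pattern that such attacks consume:

```python
# Toy model of dependent queries as a Markov process, plus the
# query-equality pattern leaked to the server. All values made up.
import random

KEYWORDS = ["alice", "bob", "carol"]
# Row-stochastic transition matrix over KEYWORDS (hypothetical).
TRANSITIONS = {
    "alice": [0.6, 0.3, 0.1],
    "bob":   [0.2, 0.5, 0.3],
    "carol": [0.1, 0.4, 0.5],
}

def sample_query_sequence(length: int, seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    q = [rng.choice(KEYWORDS)]
    for _ in range(length - 1):
        q.append(rng.choices(KEYWORDS, TRANSITIONS[q[-1]])[0])
    return q

def query_equality_pattern(queries: list[str]) -> list[int]:
    """What the server observes: which earlier query each query
    repeats, but never the keyword itself (first occurrences get
    fresh ids)."""
    ids: dict[str, int] = {}
    return [ids.setdefault(q, len(ids)) for q in queries]

qs = sample_query_sequence(8)
print(qs)                          # hidden from the server
print(query_equality_pattern(qs))  # e.g. [0, 0, 1, 2, 1, ...]
```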

Dennis Dayanikli, Anja Lehmann
ePrint Report
Password-based credentials (PBCs), introduced by Zhang et al. (NDSS'20), provide an elegant solution to secure, yet convenient user authentication. Therein the user establishes a strong cryptographic access credential with the server. To avoid the assumption of secure storage on the user side, the user does not store the credential directly, but only a password-protected version of it. The ingenuity of PBCs is that the password-based credential cannot be offline attacked, offering essentially the same strong security as standard key-based authentication. This security relies on a secret key of the server that is needed to verify whether an authentication token derived from a password-based credential and password is correct. However, the work by Zhang et al. assumes that this server key never gets compromised, and their protocol loses all security in case of a breach. As a passive leak of the server's stored verification data is one of the main threats in user authentication, our work aims to strengthen PBCs so that they remain secure even when the server's key is compromised.

We first show that the desired security against server compromise is impossible to achieve in the original framework. We then introduce a modified version of PBCs that circumvents our impossibility result and formally define a set of security properties, each being optimal for the respective corruption setting. Finally, we propose a surprisingly simple construction that provably achieves our stronger security guarantees, and is generically composed from basic building blocks.

Benedikt Auerbach, Charlotte Hoffmann, Guillermo Pascual-Perez
ePrint Report
The generic-group model (GGM) aims to capture algorithms working over groups of prime order that only rely on the group operation, but do not exploit any additional structure given by the concrete implementation of the group. In this model, it is possible to prove information-theoretic lower bounds on the hardness of problems like the discrete logarithm (DL) or computational Diffie-Hellman (CDH). Thus, since its introduction, it has served as a valuable tool to assess the concrete security provided by cryptographic schemes based on such problems. A work on the related algebraic-group model (AGM) introduced a method, used by many subsequent works, to adapt GGM lower bounds for one problem to another by means of conceptually simple reductions.

In this work, we propose an alternative approach to extend GGM bounds from one problem to another. Following an idea by Yun [EC15], we show that, in the GGM, the security of a large class of problems can be reduced to that of geometric search-problems. By reducing the security of the resulting geometric-search problems to variants of the search-by-hypersurface problem, for which information theoretic lower bounds exist, we give alternative proofs of several results that used the AGM approach.

The main advantage of our approach is that our reduction from geometric search-problems works, as well, for the GGM with preprocessing (more precisely the bit-fixing GGM introduced by Coretti, Dodis and Guo [Crypto18]). As a consequence, this opens up the possibility of transferring preprocessing GGM bounds from one problem to another, also by means of simple reductions. Concretely, we prove novel preprocessing bounds on the hardness of the $d$-strong discrete logarithm, the $d$-strong Diffie-Hellman inversion, and multi-instance CDH problems, as well as a large class of Uber assumptions. Additionally, our approach applies to Shoup's GGM in its full generality, while the recent works of Zhang, Zhou, and Katz [AC22] and Zhandry [Crypto22] highlight that this is not the case for the AGM approach.

David Jacquemin, Anisha Mukherjee, Sujoy SINHA ROY, Péter Kutas
ePrint Report
Isogeny-based cryptographic constructions are well-known in the domain of post-quantum security. One such instance is SQISign, which boasts the most compact key and signature sizes among all post-quantum signature schemes. However, its current implementation is not free from side-channel vulnerabilities. At certain steps within the signing procedure, it relies on Cornacchia's algorithm to represent an integer as a sum of squares of two integers. This algorithm in turn uses a ‘half-GCD’ subroutine that is based on a non-constant-time version of the Euclidean algorithm. We show that if the inputs of Cornacchia's algorithm leak, then one can retrieve the signing key in polynomial time. We propose two timing-attack-resistant versions of Cornacchia's algorithm. The first version is based on a lattice reduction algorithm. We show that randomising the starting basis with a unimodular matrix makes the execution time independent of the input. The second version uses a constant-time ‘half-GCD’ algorithm that runs a fixed number of times for a given upper bound on the size of the inputs.
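
For concreteness, a standard variable-time version of Cornacchia's algorithm for $p = x^2 + y^2$ looks as follows (a sketch with a toy input, not the SQISign implementation); the data-dependent number of Euclidean iterations is precisely the timing signal the constant-time variants are designed to remove.

```python
# Sketch of (variable-time) Cornacchia for p = x^2 + y^2, p prime,
# p = 1 (mod 4). The Euclidean loop length depends on the input:
# exactly the kind of timing behaviour attacked above.
import math

def sqrt_of_minus_one(p: int) -> int:
    # For a quadratic non-residue a, a^((p-1)/4) is a root of -1.
    for a in range(2, p):
        if pow(a, (p - 1) // 2, p) == p - 1:  # a is a non-residue
            return pow(a, (p - 1) // 4, p)
    raise ValueError("no non-residue found")

def cornacchia(p: int) -> tuple[int, int]:
    assert p % 4 == 1
    r_prev, r = p, sqrt_of_minus_one(p)
    bound = math.isqrt(p)
    while r > bound:              # Euclidean remainders: length leaks!
        r_prev, r = r, r_prev % r
    x = r
    y = math.isqrt(p - x * x)
    assert x * x + y * y == p
    return x, y

print(cornacchia(1000033))  # a prime congruent to 1 mod 4
```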

Hamza Abusalah
ePrint Report
SNACKs are succinct non-interactive arguments of chain knowledge. They allow for efficient and generic solutions to blockchain light-client bootstrapping. Abusalah et al. construct SNACKs in the random oracle model for any \emph{single-chain} blockchain from any graph-labeling proof of sequential work (PoSW) scheme. Their SNACK construction is a PoSW-like protocol over the augmented blockchain. Unlike single-chain blockchains, such as proof-of-work and proof-of-stake blockchains, proof-of-space (PoSpace) blockchains are composed of two chains: a \emph{canonical} proof chain and a data chain. These two chains are related using a signature scheme. In this work, we construct PoSW-enabled SNACKs for any PoSpace blockchain. Combined with the results of Abusalah et al., this gives the first solution to light-client bootstrapping in PoSpace blockchains. The space cost of our construction is \emph{two} hash values in each augmented PoSpace block. Generating SNACK proofs for a PoSpace blockchain is identical to generating SNACK proofs for single-chain blockchains and amounts to looking up a succinct number of augmented blocks.

Ohad Klein, Ilan Komargodski
ePrint Report
We study the local leakage resilience of Shamir's secret sharing scheme. In Shamir's scheme, a random polynomial $f$ of degree $t$ is sampled over a field of size $p>n$, conditioned on $f(0)=s$ for a secret $s$. Any $t$ shares $(i, f(i))$ can be used to fully recover $f$ and thereby $f(0)$. But any $t-1$ evaluations of $f$ at non-zero coordinates are completely independent of $f(0)$. Recent works ask whether the secret remains hidden even if, say, only 1 bit of information is leaked from each share, independently. This question is well motivated due to the wide range of applications of Shamir's scheme. For instance, it is known that if Shamir's scheme is leakage resilient in some range of parameters, then known secure computation protocols are secure in a local leakage model.
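
A minimal sketch of the scheme, plus a one-bit local leakage function (share parity) of the kind studied here, following the abstract's convention that any $t$ shares determine the secret; parameters and the leakage choice are illustrative only.

```python
# Minimal sketch of Shamir sharing over a prime field, plus one-bit
# local leakage. Toy parameters; use a CSPRNG in any real setting.
import random

P = 2**61 - 1  # prime field size (any prime p > n works)

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Random degree-(t-1) polynomial f with f(0) = secret; any t of
    the n shares (i, f(i)) determine the secret, t-1 reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(points: list[tuple[int, int]]) -> int:
    """Lagrange interpolation of f(0) from t shares."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def local_leakage(shares: list[tuple[int, int]]) -> list[int]:
    """One bit leaked independently from each share: its parity."""
    return [y & 1 for _, y in shares]

shares = share(secret=42, t=3, n=7)
assert reconstruct(shares[:3]) == 42   # any t = 3 shares suffice
print(local_leakage(shares))           # what the leakage adversary sees
```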

Over characteristic-2 fields, the answer is known to be negative (e.g., Guruswami and Wootters, STOC '16). Benhamouda, Degwekar, Ishai, and Rabin (CRYPTO '18) were the first to give a positive answer assuming computation is done over prime-order fields. They showed that if $t \ge 0.907n$, then Shamir's scheme is leakage resilient. Since then, there have been extensive efforts to improve the above threshold and, after a series of works, the current record shows leakage resilience for $t\ge 0.78n$ (Maji et al., ISIT '22). All existing analyses of Shamir's leakage resilience for general leakage functions follow a single framework for which there is a known barrier for any $t \le 0.5 n$.

In this work, we develop a new analytical framework that allows us to significantly improve upon the previous record and obtain additional new results. Specifically, we show:
$\bullet$ Shamir's scheme is leakage resilient for any $t \ge 0.69n$.
$\bullet$ If the leakage functions are guaranteed to be ``balanced'' (i.e., splitting the domain of possible shares into 2 roughly equal-size parts), then Shamir's scheme is leakage resilient for any $t \ge 0.58n$.
$\bullet$ If the leakage functions are guaranteed to be ``unbalanced'' (i.e., splitting the domain of possible shares into 2 parts of very different sizes), then Shamir's scheme is leakage resilient as long as $t \ge 0.01 n$. Such a result is $provably$ impossible to obtain using the previously known technique.

All of the above apply more generally to any MDS codes-based secret sharing scheme.

Confirming leakage resilience is most important in the range $t \leq n/2$, as in many applications, Shamir’s scheme is used with thresholds $t\leq n/2$. As opposed to the previous approach, ours does not seem to have a barrier at $t=n/2$, as demonstrated by our third contribution.

Mariya Georgieva Belorgey, Sofia Dandjee, Nicolas Gama, Dimitar Jetchev, Dmitry Mikushin
ePrint Report
We propose a novel protocol, Falkor, for secure aggregation for Federated Learning in the multi-server scenario, based on masking of local models via a stream cipher built on AES in counter mode and accelerated by GPUs running on the aggregating servers. The protocol is resilient to client dropout and reduces the client/server communication cost by a factor equal to the number of aggregating servers (compared to the naïve baseline method). It scales simultaneously in the two major complexity aspects: 1) large number of clients; 2) highly complex machine learning models such as CNNs, RNNs, Transformers, etc. The AES-CTR-based masking function in our aggregation protocol is built on the concept of counter-based cryptographically-secure pseudorandom number generators (csPRNGs) as described in [SMDS'11] and subsequently used by Facebook for their torchcsprng csPRNG. We improve upon torchcsprng through careful use of shared memory on the GPU device, following a recent idea of Cihangir Tezcan [Tezcan'21], and obtain a 100x speedup in the masking function compared to a single CPU core.
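
The masking principle can be sketched in a few lines; below, hashlib-based counter-mode keystreams stand in for the paper's GPU-accelerated AES-CTR, and the multi-server seed handling and dropout recovery of the real protocol are omitted.

```python
# Minimal sketch of stream-cipher masking for secure aggregation.
# hashlib.sha256 in counter mode stands in for AES-CTR; the real
# protocol's seed sharing and dropout machinery are omitted.
import hashlib, secrets

MOD = 1 << 32  # models are aggregated as fixed-point words mod 2^32

def keystream(seed: bytes, n_words: int) -> list[int]:
    """Counter-mode stream: block_i = H(seed || i), split into words."""
    words, counter = [], 0
    while len(words) < n_words:
        block = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        words += [int.from_bytes(block[k:k+4], "big") for k in range(0, 32, 4)]
        counter += 1
    return words[:n_words]

def mask(model: list[int], seed: bytes) -> list[int]:
    return [(w + m) % MOD for w, m in zip(model, keystream(seed, len(model)))]

# Each client sends a masked model; the servers (via the multi-server
# protocol, omitted here) learn only the sum of all keystreams, which
# they subtract from the masked sum to recover the aggregate.
models = [[secrets.randbelow(100) for _ in range(4)] for _ in range(3)]
seeds  = [secrets.token_bytes(16) for _ in range(3)]
masked_sum = [sum(col) % MOD
              for col in zip(*[mask(m, s) for m, s in zip(models, seeds)])]
stream_sum = [sum(col) % MOD
              for col in zip(*[keystream(s, 4) for s in seeds])]
aggregate  = [(ms - ss) % MOD for ms, ss in zip(masked_sum, stream_sum)]
assert aggregate == [sum(col) % MOD for col in zip(*models)]
```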

In addition, we prove the semantic security of the AES-CTR-based masking function. Finally, we demonstrate the scalability of our protocol in two real-world Federated Learning scenarios: 1) efficient training of large logistic regression models with 50 features and 50M data points distributed across 1000 clients that can drop out, aggregated securely via three servers (running secure multi-party computation (SMPC)); 2) training a recurrent neural network (RNN) model for sentiment analysis of Twitter feeds coming from a large number of Twitter users (more than 250,000 users). In case 1), our secure aggregation algorithm runs in less than a minute, compared to a pure MPC computation (on 3 parties) that takes 27 hours and uses 400GB RAM machines as well as a 1 gigabit-per-second network. In case 2), the total training takes around $10$ minutes using our GPU-powered secure aggregation versus 10 hours using a single CPU core.

Gideon Samid
ePrint Report
For decades now, mathematical complexity has been regarded as the sole means of creating a sufficient distance between a ciphertext and its generating plaintext. Alas, mathematical complexity operates under the irremovable shadow of stealth cryptanalysis. By its nature, mathematical complexity is vulnerable to smarter mathematicians and better-equipped adversaries, which is sufficient motivation to explore an alternative means of projecting security. Applying the Innovation Solution Protocol, such an alternative has been found: randomness. Not alongside mathematical complexity, but as its replacement. Unlike complexity, randomness is not vulnerable to smarter mathematicians and better-equipped adversaries. It removes the shadow under which all modern ciphers operate by proposing a framework wherein the message transmitter may apply arbitrary quantities of ad-hoc randomness with which to secure a transmission over a secret key of arbitrarily large size; where only a part thereof may participate in any instance of encryption; and where security increases in proportion to the amount of randomness involved. Handling large quantities of randomness is 'messy' and inconvenient, but the user, not the cipher designer, decides how much inconvenience to put up with in order to build sufficient security to meet the pressing threat. With sufficient randomness, transmission security may exceed that of the One-Time Pad (OTP), inasmuch as even the size of the plaintext is not determinable. Ciphers that shift the security responsibility to the user are called "Trans-Vernam", honoring Gilbert S. Vernam's OTP, or "Tesla Ciphers", reflecting the fact that Tesla offered a new power source for the automotive industry, much as the Tesla ciphers offer a new security source for cyberspace. The Tesla cryptographic modality has its security substantiated with a mathematical proof. It is quantum-ready and AI-resistant. It is battery-friendly and ultra-fast. This proposal does, however, call into question a long-established cryptographic premise, with all that this involves.

Noga Amit, Guy Rothblum
ePrint Report
We study the following question: what cryptographic assumptions are needed for obtaining constant-round computationally-sound argument systems? We focus on argument systems with almost-linear verification time for subclasses of $\mathbf{P}$, such as depth-bounded computations. Kilian's celebrated work [STOC 1992] provides such 4-message arguments for $\mathbf{P}$ (actually, for $\mathbf{NP}$) using collision-resistant hash functions. We show that one-way functions suffice for obtaining constant-round arguments of almost-linear verification time for languages in $\mathbf{P}$ that have log-space uniform circuits of linear depth and polynomial size. More generally, the complexity of the verifier scales with the circuit depth. Furthermore, our argument systems (like Kilian's) are doubly-efficient; that is, the honest prover strategy can be implemented in polynomial time. Unconditionally sound interactive proofs for this class of computations do not rely on any cryptographic assumptions, but they require a linear number of rounds [Goldwasser, Kalai and Rothblum, STOC 2008]. Constant-round interactive proof systems of linear verification complexity are not known even for $\mathbf{NC}$ (indeed, even for $\mathbf{AC}^1$).

Charles Bouillaguet, Ambroise Fleury, Pierre-Alain Fouque, Paul Kirchner
ePrint Report
The Number Field Sieve (NFS) is the state-of-the-art algorithm for integer factoring, and sieving is a crucial step in the NFS. It is a very time-consuming operation whose goal is to collect many relations: random smooth integers mod $N$ together with their prime decomposition, where smoothness is defined on the rational and algebraic sides according to two prime factor bases.

In modern factorization tools such as \textsf{Cado-NFS}, sieving is split into different stages depending on the size of the primes, but choosing good parameters for all stages is based on heuristic and practical arguments. At the beginning, candidates are sieved by small primes on both sides and, if they pass the test, they continue to the next stages with bigger primes, up to the final one where we factor the remaining part using the ECM algorithm. On the one hand, the first stages are fast, but many false relations pass them, and we spend a lot of time on useless relations. On the other hand, the final stages are more time-demanding but output fewer relations. It is not easy to evaluate the performance of the best strategy on the overall sieving step, since it depends on the distribution of numbers that results at each stage.
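
The staging logic can be illustrated with a toy filter cascade (all bounds below are invented for illustration): early stages are cheap and permissive, later ones expensive and exact, and the trade-off is where to place the thresholds.

```python
# Toy illustration of staged candidate filtering in sieving: early,
# cheap stages discard most candidates; later stages handle the
# survivors' larger cofactors. All bounds here are invented.
import math

SMALL  = [2, 3, 5, 7, 11, 13]        # stage-1 factor base
MEDIUM = [17, 19, 23, 29, 31, 37]    # stage-2 factor base

def divide_out(n: int, primes) -> int:
    """Remaining cofactor after removing all listed prime factors."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n

survivors_1, survivors_2, relations = [], [], []
for n in range(2, 200000):
    # Stage 1: keep n only if small primes remove a large part of it.
    cof = divide_out(n, SMALL)
    if cof > math.isqrt(n):
        continue
    survivors_1.append(n)
    # Stage 2: bigger primes, stricter test; false positives remain.
    cof = divide_out(cof, MEDIUM)
    if cof > 1000:
        continue
    survivors_2.append(n)
    # Final stage (the role played by ECM): resolve the leftover
    # cofactor exactly; cof == 1 means n is fully 37-smooth.
    if cof == 1:
        relations.append(n)

print(len(survivors_1), len(survivors_2), len(relations))
```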

In this article, we examine different sieving strategies to speed up this step, given that many improvements have already been made to all the other steps of the NFS. Based on the relations collected during the RSA-250 factorization and all its parameters, we study different strategies to better understand this step. Many strategies have been defined since the discovery of the NFS, and we provide here an experimental evaluation.

Benoit Libert
ePrint Report
Vector commitment schemes are compressing commitments to vectors that make it possible to succinctly open a commitment for individual vector positions without revealing anything about other positions. We describe vector commitments enabling constant-size proofs that the committed vector is small (i.e., binary, ternary, or of small norm). As a special case, we obtain range proofs featuring the shortest proof length in the literature with only $3$ group elements per proof. As another application, we obtain short pairing-based NIZK arguments for lattice-related statements. In particular, we obtain short proofs (comprised of $3$ group elements) showing the validity of ring LWE ciphertexts and public keys. Our constructions are proven simulation-extractable in the algebraic group model and the random oracle model.

Solane El Hirch, Joan Daemen, Raghvendra Rohit, Rusydi H. Makarim
ePrint Report
We introduce a new type of mixing layer for the round function of cryptographic permutations, called circulant twin column parity mixer (CPM), that is a generalization of the mixing layers in KECCAK-f and XOODOO. While these mixing layers have a bitwise differential branch number of 4 and a computational cost of 2 (bitwise) additions per bit, the circulant twin CPMs we build have a bitwise differential branch number of 12 at the expense of an increase in computational cost: depending on the dimension, this ranges between $3$ and $3.34$ XORs per bit. Our circulant twin CPMs operate on a state in the form of a rectangular array and can serve as the mixing layer in a round function that has as its non-linear step a layer of S-boxes operating in parallel on the columns. When sandwiched between two ShiftRow-like mappings, we obtain a columnwise branch number of 12, which guarantees 12 active S-boxes per two rounds in differential trails. Remarkably, the linear branch numbers (bitwise and columnwise alike) of these mappings are only 4. However, we define the transpose of a circulant twin CPM, which has a linear branch number of 12 and a differential branch number of 4. We give a concrete instantiation of a permutation using such a mixing layer, named Gaston. It operates on a state of $5 \times 64$ bits and uses $\chi$ operating on columns for its non-linear layer. Most notably, the Gaston round function is lightweight in that it takes as few bitwise operations as that of the NIST lightweight standard ASCON. We show that the best 3-round differential and linear trails of Gaston have much higher weights than those of ASCON. Permutations like Gaston can be very competitive in applications that rely for their security exclusively on good differential properties, such as keyed hashing as in the compression phase of Farfalle.
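
For reference, the differential branch number of a linear map $M$ is the standard notion $\mathcal{B}_d(M) = \min_{x \neq 0}\bigl(\mathrm{wt}(x) + \mathrm{wt}(M(x))\bigr)$, with $\mathrm{wt}$ the (bitwise or columnwise) Hamming weight; the linear branch number is defined analogously via the transpose $M^{\top}$, which is why transposing a CPM swaps its differential and linear branch numbers, as noted above.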