International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

20 June 2022

David Heath, Vladimir Kolesnikov, Jiahui Lu
ePrint Report
Katz et al., CCS 2018 (KKW) is a popular and efficient MPC-in-the-head non-interactive ZKP (NIZK) scheme, which is the technical core of the post-quantum signature scheme Picnic, currently under consideration for standardization by NIST. The KKW approach is simultaneously concretely efficient, even on commodity hardware, and free of trusted setup. Importantly, the approach scales linearly in the circuit size, with low constants, with respect to proof generation time, proof verification time, proof size, and RAM consumption. However, KKW works with Boolean circuits only and hence incurs significant cost for circuits that include arithmetic operations.

In this work, we extend KKW with a suite of efficient arithmetic operations over arbitrary rings, as well as Boolean conversions. Rings $\mathbb{Z}_{2^k}$ are important for NIZK as they naturally match the basic operations of modern programs and CPUs. In particular, we:
* present a suitable ring representation consistent with KKW,
* construct efficient conversion operators that translate between arithmetic and Boolean representations, and
* demonstrate how to efficiently operate over the arithmetic representation, including a vector dot product of length-$n$ vectors with cost equal to that of a single multiplication.
These improvements substantially improve KKW for circuits with arithmetic. As one example, we can multiply 100 × 100 square matrices of 32-bit numbers using a 3200x smaller proof size than standard KKW (a 100x improvement from our dot product construction and 32x from moving to an arithmetic representation).

We discuss in detail proof size and resource consumption and argue the practicality of running large proofs on commodity hardware.
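
To make the dot-product claim concrete, here is a minimal Python sketch (not the KKW protocol itself) of masked-wire evaluation over $\mathbb{Z}_{2^{32}}$, in which an entire length-$n$ dot product is opened with a single ring element per party, the same online cost as one multiplication. All helper names and the two-party setting are illustrative assumptions.

```python
# Toy sketch: masked-wire evaluation over Z_{2^32} in which a length-n dot
# product costs ONE opened ring element per party. Not the KKW protocol;
# an illustration of why the online cost matches a single multiplication.
import secrets

K = 32
MOD = 1 << K

def share2(v):
    """Split v into 2 additive shares mod 2^K."""
    s0 = secrets.randbelow(MOD)
    return s0, (v - s0) % MOD

# --- Dealer (offline, input-independent) --------------------------------
n = 100
lam_x = [secrets.randbelow(MOD) for _ in range(n)]        # input masks
lam_y = [secrets.randbelow(MOD) for _ in range(n)]
lam_z = secrets.randbelow(MOD)                            # output mask
dot_lam = sum(a * b for a, b in zip(lam_x, lam_y)) % MOD  # ONE preprocessed value
shares = {}
for name, vals in [("lx", lam_x), ("ly", lam_y)]:
    pairs = [share2(v) for v in vals]
    shares[name] = ([p[0] for p in pairs], [p[1] for p in pairs])
dlam0, dlam1 = share2(dot_lam)
lz0, lz1 = share2(lam_z)

# --- Online phase -------------------------------------------------------
x = [secrets.randbelow(MOD) for _ in range(n)]            # real inputs
y = [secrets.randbelow(MOD) for _ in range(n)]
hx = [(xi + l) % MOD for xi, l in zip(x, lam_x)]          # public masked wires
hy = [(yi + l) % MOD for yi, l in zip(y, lam_y)]

def party_msg(pid):
    lx, ly = shares["lx"][pid], shares["ly"][pid]
    dl = (dlam0, dlam1)[pid]
    lz = (lz0, lz1)[pid]
    # Local share of  z + lam_z  where z = <x, y>:
    t = sum(a * b for a, b in zip(hx, hy)) % MOD if pid == 0 else 0
    t -= sum(a * b for a, b in zip(hx, ly))
    t -= sum(a * b for a, b in zip(hy, lx))
    return (t + dl + lz) % MOD                            # ONE ring element

hz = (party_msg(0) + party_msg(1)) % MOD                  # public masked output
assert (hz - lam_z) % MOD == sum(a * b for a, b in zip(x, y)) % MOD
```
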
Dmitrii Koshelev
ePrint Report
This article develops a novel method of generating "independent" points on an ordinary elliptic curve $E$ over a finite field. Such points are actively used in the Pedersen vector commitment scheme and its modifications. In particular, the new approach is relevant for the Pasta curves (of $j$-invariant $0$), which are very popular in this type of elliptic-curve cryptography. These curves are defined over highly $2$-adic fields, hence successive generation of points via a hash function to $E$ is an expensive solution. Our method also satisfies the NUMS (Nothing Up My Sleeve) principle, but works faster on average. More precisely, instead of finding each point separately in constant time, we suggest sampling several points at once with some probability.
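
For contrast, here is the straightforward "try-and-increment" generation that the paper improves upon: each NUMS point costs a fresh (and, over highly 2-adic fields, expensive) square-root computation. A toy curve stands in for Pasta; all parameters and labels are illustrative assumptions.

```python
# Baseline NUMS point generation via try-and-increment hashing — the
# per-point-expensive approach that batched sampling improves on.
# Toy curve, NOT an actual Pasta curve.
import hashlib

p = 2**31 - 1            # toy prime; p % 4 == 3, so sqrt is one exponentiation
B = 7                    # toy curve: y^2 = x^3 + 7 over F_p

def sqrt_mod(a):
    r = pow(a, (p + 1) // 4, p)
    return r if r * r % p == a else None

def nums_point(label: str):
    """Derive a curve point from a public label; nothing up my sleeve."""
    ctr = 0
    while True:
        h = hashlib.sha256(f"{label}/{ctr}".encode()).digest()
        x = int.from_bytes(h, "big") % p
        y = sqrt_mod((pow(x, 3, p) + B) % p)
        if y is not None:
            return x, min(y, p - y)   # canonical choice of square root
        ctr += 1                      # ~2 tries expected per point

# "Independent" Pedersen generators: no known discrete logs between them.
gens = [nums_point(f"pedersen-gen-{i}") for i in range(4)]
for x, y in gens:
    assert (y * y - x**3 - B) % p == 0
```
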
Kanav Gupta, Deepak Kumaraswamy, Nishanth Chandran, Divya Gupta
ePrint Report
Secure machine learning (ML) inference can provide meaningful privacy guarantees to both the client (holding sensitive input) and the server (holding sensitive weights of the ML model) while realizing inference-as-a-service. Although many specialized protocols exist for this task, including those in the preprocessing model (where a majority of the overheads are moved to an input-independent offline phase), they all still suffer from large online complexity. Specifically, the protocol phase that executes once the parties know their inputs has high communication, round complexity, and latency. Function Secret Sharing (FSS) based techniques offer an attractive solution to this in the trusted dealer model (where a dealer provides input-independent correlated randomness to both parties), and 2PC protocols obtained from these techniques have a very lightweight online phase.

Unfortunately, current FSS-based 2PC works (AriaNN, PoPETS 2022; Boyle et al., Eurocrypt 2021; Boyle et al., TCC 2019) fall short of providing a complete solution to secure inference. First, they lack support for math functions (e.g., sigmoid and reciprocal square root) and hence are insufficient for a large class of inference algorithms (e.g., recurrent neural networks). Second, they restrict all values in the computation to be of the same bitwidth, which prevents them from benefitting from efficient float-to-fixed converters such as TensorFlow Lite that crucially use low-bitwidth representations and mixed-bitwidth arithmetic.

In this work, we present LLAMA -- an end-to-end, FSS-based secure inference library supporting precise low-bitwidth computations (required by converters) as well as provably precise math functions -- thus overcoming all the drawbacks listed above. We perform an extensive evaluation of LLAMA and show that, compared with non-FSS based libraries supporting mixed-bitwidth arithmetic and math functions (SIRNN, IEEE S&P 2021), it has at least an order of magnitude lower communication, rounds, and runtime. We integrate LLAMA with the EzPC framework (IEEE EuroS&P 2019) and demonstrate its robustness by evaluating it on large benchmarks (such as ResNet-50 on the ImageNet dataset) as well as on benchmarks considered in AriaNN -- here too, LLAMA outperforms prior work.
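
A sketch of why converters need mixed-bitwidth arithmetic: fixed-point inference keeps activations in low-bitwidth integers but accumulates products in wider ones before truncating back down. Names, scales, and bitwidths below are toy assumptions, not LLAMA's interface.

```python
# Toy mixed-bitwidth fixed-point arithmetic: 8-bit operands, 32/64-bit
# accumulation, truncation back to the working scale. Illustrative only.
import numpy as np

SCALE = 4                                  # fractional bits

def quantize(v, bits):
    q = np.round(v * (1 << SCALE)).astype(np.int64)
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return np.clip(q, lo, hi)

def fix_mul(a8, b8):
    prod = a8.astype(np.int64) * b8.astype(np.int64)   # 8x8 -> wide product
    return prod >> SCALE                               # truncate-reduce back

a = quantize(np.array([0.5, -1.25]), 8)
b = quantize(np.array([2.0, 0.75]), 8)
print(fix_mul(a, b) / (1 << SCALE))        # ~ [1.0, -0.9375]
```
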
Chunfu Jia, Shaoqiang Wu, Ding Wang
ePrint Report
As the most dominant authentication mechanism, password-based authentication suffers from catastrophic offline password guessing attacks once the authentication server is compromised and the password database is leaked. Password hardening (PH) service, an external/third-party crypto service, has recently been proposed to strengthen password storage and reduce the damage of authentication server compromise. However, all existing schemes are unreliable because they overlook an important restorable property: PH service opt-out. In existing PH schemes, once the authentication server has subscribed to a PH service, it must adopt this service forever, even if it wants to stop the external/third-party PH service and restore its original password storage (or subscribe to another PH service).

To fill this gap, we propose a new PH service called PW-Hero that comes with an option to terminate its use (i.e., opt-out). In PW-Hero, password authentication is strengthened against offline attacks by adding external secret spices to password records. With the opt-out property, authentication servers can proactively request to end the PH service after successful authentications. Password records can then be securely migrated to their traditional salted-hash state, ready for subscription to another PH service. Besides, PW-Hero achieves all existing desirable properties, such as comprehensive verifiability, rate limiting against online attacks, and user privacy. We define PW-Hero as a suite of protocols that meet the desirable properties and build a simple, secure, and efficient instance. Moreover, we develop a prototype implementation and evaluate its performance, which shows the practicality of our PW-Hero service.
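
A minimal sketch of the "external spice" idea and the opt-out migration, under strong simplifying assumptions: the PH service is modeled here as a plain keyed PRF, whereas real PH protocols evaluate it obliviously and add the rate limiting and verifiability the abstract mentions. All names are illustrative.

```python
# Toy password hardening with opt-out. NOT the PW-Hero protocol; a sketch
# of the record format before/after hardening and the migration step.
import hmac, hashlib, os

class PHService:                           # external/third-party hardener
    def __init__(self):
        self._k = os.urandom(32)
    def spice(self, blob: bytes) -> bytes:
        return hmac.new(self._k, blob, hashlib.sha256).digest()

service = PHService()
db = {}                                    # username -> (salt, record, mode)

def register(user, pw):
    salt = os.urandom(16)
    inner = hashlib.sha256(salt + pw.encode()).digest()
    db[user] = (salt, service.spice(inner), "hardened")

def login(user, pw):
    salt, record, mode = db[user]
    inner = hashlib.sha256(salt + pw.encode()).digest()
    probe = service.spice(inner) if mode == "hardened" else inner
    return hmac.compare_digest(probe, record)

def opt_out(user, pw):
    """After a successful login the server briefly holds pw, so it can
    migrate the record back to a traditional salted hash (PH opt-out)."""
    if login(user, pw):
        salt, _, _ = db[user]
        db[user] = (salt, hashlib.sha256(salt + pw.encode()).digest(), "plain")

register("alice", "hunter2")
assert login("alice", "hunter2") and not login("alice", "wrong")
opt_out("alice", "hunter2")
assert login("alice", "hunter2")           # works without the PH service now
```
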
Ilan Komargodski, Shin’ichiro Matsuo, Elaine Shi, Ke Wu
ePrint Report
It is well known that in the presence of majority coalitions, strongly fair coin toss is impossible. A line of recent works has shown that by relaxing the fairness notion to a game-theoretic one, we can overcome this classical lower bound. In particular, Chung et al. (CRYPTO'21) showed how to achieve approximately (game-theoretically) fair leader election in the presence of majority coalitions, with round complexity as small as $O(\log \log n)$ rounds.

In this paper, we revisit the round complexity of game-theoretically fair leader election. We construct $O(\log^* n)$-round leader election protocols that achieve $(1-o(1))$-approximate fairness in the presence of $(1-o(1)) n$-sized coalitions. Our protocols achieve the same round-fairness trade-offs as Chung et al.'s and have the advantage of being conceptually simpler. Finally, we also obtain game-theoretically fair protocols for committee election, which might be of independent interest.
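
For orientation, here is the classical commit-and-reveal baseline (not the paper's protocol): it elects a uniform leader when everyone opens honestly, but a coalition can bias the outcome by selectively aborting at reveal time, which is precisely the failure mode game-theoretic fairness addresses.

```python
# Toy commit-and-reveal leader election baseline. Illustrative only.
import hashlib, os

n = 8
rand  = [os.urandom(16) for _ in range(n)]
nonce = [os.urandom(16) for _ in range(n)]
comms = [hashlib.sha256(nonce[i] + rand[i]).hexdigest() for i in range(n)]

# Reveal phase: check every opening against its commitment.
for i in range(n):
    assert hashlib.sha256(nonce[i] + rand[i]).hexdigest() == comms[i]

acc = bytes(16)
for r in rand:                              # XOR all contributions
    acc = bytes(a ^ b for a, b in zip(acc, r))
leader = int.from_bytes(acc, "big") % n
print("leader:", leader)                    # uniform IF nobody aborts
```
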
Gal Arnon, Amey Bhangale, Alessandro Chiesa, Eylon Yogev
ePrint Report
Interactive oracle proofs (IOPs) are a proof system model that combines features of interactive proofs (IPs) and probabilistically checkable proofs (PCPs). IOPs have prominent applications in complexity theory and cryptography, most notably to constructing succinct arguments.

In this work, we study the limitations of IOPs, as well as their relation to those of PCPs. We present a versatile toolbox of IOP-to-IOP transformations containing tools for: (i) length and round reduction; (ii) improving completeness; and (iii) derandomization.

We use this toolbox to establish several barriers for IOPs:

-- Low-error IOPs can be transformed into low-error PCPs. In other words, interaction can be used to construct low-error PCPs; alternatively, low-error IOPs are as hard to construct as low-error PCPs. This relates IOPs to PCPs in the regime of the sliding scale conjecture for inverse-polynomial soundness error.
-- Limitations of quasilinear-size IOPs for 3SAT with small soundness error.
-- Limitations of IOPs where query complexity is much smaller than round complexity.
-- Limitations of binary-alphabet constant-query IOPs.

We believe that our toolbox will prove useful to establish additional barriers beyond our work.
Lingyue Qin, Xiaoyang Dong, Anyu Wang, Jialiang Hua, Xiaoyun Wang
ePrint Report
Designing symmetric ciphers for particular applications has become a hot topic. At EUROCRYPT 2020, Naito, Sasaki and Sugawara invented the threshold-implementation-friendly cipher SKINNYe-64-256 to meet the requirements of the authenticated encryption mode PFB_Plus. Soon after, Thomas Peyrin pointed out that SKINNYe-64-256 may fall short of its security expectations due to its new tweakey schedule. Although the security issue of SKINNYe-64-256 is still unclear, Naito et al. decided to introduce SKINNYe-64-256 v2 in response.

In this paper, we give a formal cryptanalysis of the new tweakey schedule of SKINNYe-64-256 and discover unexpected differential cancellations in the tweakey schedule. For example, we find that the number of cancellations can be up to 8 within 30 consecutive rounds, significantly more than the expected 3 cancellations. Moreover, we incorporate our new discoveries into rectangle, MITM and impossible differential attacks, adapting the corresponding automatic tools with new constraints derived from them. Finally, we find a 41-round related-tweakey rectangle attack on SKINNYe-64-256, leaving a security margin of only 3 rounds.

While the STK construction accepts tweakey material of arbitrary size, SKINNYe-64-256 and SKINNYe-64-256 v2 only support tweakey sizes up to $4n$. We introduce a new tweakey schedule design for SKINNY-64 that further extends the supported tweakey size, and we give a formal proof that it inherits the security requirements of STK and SKINNY.
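
The cancellation phenomenon can be pictured with a toy model (explicitly not the actual SKINNYe-64-256 LFSRs): each tweakey line multiplies its cell by a fixed GF(2^4) constant per round, and a "cancellation" is a round where the four line differences XOR to zero.

```python
# Toy model of differential cancellations in an STK-style tweakey schedule.
# Constants and update rule are illustrative, not SKINNYe's.
POLY = 0b10011                       # GF(2^4) modulus x^4 + x + 1

def gmul(a, b):
    r = 0
    for _ in range(4):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= POLY
    return r

def cancellations(consts, deltas, rounds=30):
    """Rounds in which the four line differences of one cell XOR to 0."""
    hits, state = [], list(deltas)
    for r in range(rounds):
        if not state[0] ^ state[1] ^ state[2] ^ state[3]:
            hits.append(r)
        state = [gmul(c, s) for c, s in zip(consts, state)]
    return hits

# Search all nonzero initial cell differences for the worst case.
consts = (1, 2, 4, 9)                # per-line update constants (toy choice)
worst = max(((d1, d2, d3, d4)
             for d1 in range(16) for d2 in range(16)
             for d3 in range(16) for d4 in range(16)
             if d1 | d2 | d3 | d4),
            key=lambda d: len(cancellations(consts, d)))
print(worst, cancellations(consts, worst))
```
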
Le He, Xiaoen Lin, Hongbo Yu
ePrint Report
This paper provides improved preimage analysis of round-reduced Keccak-384/512. Unlike the low-capacity versions, Keccak-384/512 outputs bits from two planes of the inner state: an entire 320-bit plane and a second plane containing 64/192 bits. Due to the lack of degrees of freedom, most existing preimage analyses can only control the 320-bit plane and cannot achieve good results. In this paper, we present a method to construct linear relations between corresponding bits of the two planes, which means an attacker can control both output planes simultaneously with far fewer than 320 degrees of freedom. Besides, we design several linear structures for each version, with additional restrictions that leave more degrees of freedom. As a result, the complexity of preimage attacks on 2-round Keccak-384/512 and 3-round Keccak-384/512 can be decreased to $2^{28}$/$2^{252}$ and $2^{271}$/$2^{426}$ respectively, which are the best known results so far. To support the analysis, this paper also provides the first preimage of the all-'0' digest for 2-round Keccak-384, which can be obtained within hours on a personal computer. It is worth noting that although our structures contain non-linear parts, the attack algorithms only involve solving systems of linear equations.
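
Such attacks ultimately reduce preimage search to solving linear systems over GF(2); below is a compact bitset-based Gaussian elimination of the kind these analyses rely on. It is purely illustrative and has no relation to Keccak's actual equations.

```python
# Tiny GF(2) linear solver: rows are ints whose bits are coefficients.
def solve_gf2(rows, rhs, nvars):
    """Solve A x = b over GF(2); returns one solution as an int, or None."""
    rows, rhs, pivots = rows[:], rhs[:], []
    r = 0
    for col in range(nvars):
        piv = next((i for i in range(r, len(rows)) if rows[i] >> col & 1), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rhs[r], rhs[piv] = rhs[piv], rhs[r]
        for i in range(len(rows)):           # eliminate above and below
            if i != r and rows[i] >> col & 1:
                rows[i] ^= rows[r]
                rhs[i] ^= rhs[r]
        pivots.append(col)
        r += 1
    if any(rhs[i] for i in range(r, len(rows))):
        return None                          # inconsistent system
    x = 0
    for i, col in enumerate(pivots):         # free variables set to 0
        if rhs[i]:
            x |= 1 << col
    return x

# x0^x1 = 1, x1^x2 = 0, x0^x2 = 1  ->  e.g. (x0,x1,x2) = (1,0,0)
sol = solve_gf2([0b011, 0b110, 0b101], [1, 0, 1], 3)
assert sol is not None
for row, b in zip([0b011, 0b110, 0b101], [1, 0, 1]):
    assert bin(row & sol).count("1") % 2 == b
```
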
Muhammad Fahad Khan, Khalid Saleem, Tariq Shah, Mohmmad Mazyad Hazzazi, Ismail Bahkali, Piyush Kumar Shukla
ePrint Report
The protection of confidential information is a global issue, and block encryption algorithms are the most reliable option for securing data. The famous information theorist Claude Shannon gave two desirable characteristics that should exist in a strong cipher, substitution and permutation, in his fundamental work "Communication Theory of Secrecy Systems." Block ciphers strictly follow the substitution and permutation principles in an iterative manner to generate a ciphertext. The actual strength of a block cipher against several attacks is entirely based on its substitution characteristic, which is gained by using the substitution box (S-box). In the current literature, algebraic-structure-based and chaos-based techniques are widely used for the construction of S-boxes because both techniques have favourable features for S-box construction; however, various attacks on these techniques have also been identified, including SAT solvers, linear and differential attacks, Gröbner-basis attacks, XSL attacks, interpolation attacks, XL-based attacks, finite precision effects, chaotic system degradation, predictability, weak randomness, chaotic discontinuity, and limited control parameters.

The main objective of this research is to design a novel technique for the dynamic generation of S-boxes that is safe against the cryptanalysis techniques applicable to algebraic-structure-based and chaos-based approaches. True randomness has been universally recognized as the ideal method for the design of cipher primitives, because true random numbers are unpredictable, irreversible, and unreproducible. The biggest challenges we faced during this research were how to generate true random numbers and how to utilize them to strengthen the S-box construction technique. The basic concept of the proposed technique is the extraction of true random bits from underwater acoustic waves and the dynamic generation of S-boxes using a chain of knight's tours. Rather than on algebraic structure or chaos, our proposed technique depends on the unavoidable high-quality randomness that exists in underwater acoustic waves.

The proposed method satisfies all standard evaluation tests for S-box construction and true random number generation. Two million bits have been analyzed using the NIST randomness test suite, and the results show that underwater sound waves are an impeccable entropy source for true randomness. Additionally, our dynamically generated S-boxes have strength better than or equal to the latest published S-boxes (2020 to 2021). To the best of our knowledge, this is the first time that the natural randomness of underwater acoustic waves has been used for the construction of a block cipher's substitution box.
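
One way to picture the knight's-tour component (a sketch of the idea, not the paper's exact construction): random bits, with os.urandom standing in for the paper's underwater-acoustic TRNG, steer a knight across a 16×16 board, and the visitation order defines a bijective 8-bit S-box.

```python
# Toy S-box generation from a random knight's tour on a 16x16 board
# (256 cells -> an 8-bit permutation). Illustrative only.
import os, random

N = 16
MOVES = [(1,2),(2,1),(-1,2),(-2,1),(1,-2),(2,-1),(-1,-2),(-2,-1)]
rng = random.Random(os.urandom(16))          # seed from a (stand-in) TRNG

def knight_tour():
    while True:                              # restart on dead ends
        pos = (rng.randrange(N), rng.randrange(N))
        seen, order = {pos}, [pos]
        while len(order) < N * N:
            cand = [(pos[0]+dx, pos[1]+dy) for dx, dy in MOVES
                    if 0 <= pos[0]+dx < N and 0 <= pos[1]+dy < N
                    and (pos[0]+dx, pos[1]+dy) not in seen]
            if not cand:
                break                        # dead end: try a fresh tour
            # Warnsdorff heuristic with random tie-breaking keeps it feasible.
            def degree(c):
                return sum((c[0]+dx, c[1]+dy) not in seen
                           and 0 <= c[0]+dx < N and 0 <= c[1]+dy < N
                           for dx, dy in MOVES)
            rng.shuffle(cand)
            pos = min(cand, key=degree)
            seen.add(pos)
            order.append(pos)
        if len(order) == N * N:
            return order

sbox = [0] * 256
for rank, (r, c) in enumerate(knight_tour()):
    sbox[r * N + c] = rank                   # cell index -> visit order
assert sorted(sbox) == list(range(256))      # a bijection, as an S-box must be
```
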
Marcel Dall'Agnol, Nicholas Spooner
ePrint Report
Collapsing and collapse binding were proposed by Unruh (Eurocrypt '16) as post-quantum strengthenings of collision resistance and computational binding (respectively). These notions have been very successful in facilitating the "lifting" of classical security proofs to the quantum setting. A natural question remains, however: is collapsing the weakest notion that suffices for such lifting? In this work we answer this question in the affirmative by giving a classical commit-and-open protocol which is post-quantum secure if and only if the commitment scheme (resp. hash function) used is collapse binding (resp. collapsing). This result also establishes that a variety of "weaker" post-quantum computational binding notions (sum binding, CDMS binding and unequivocality) are in fact equivalent to collapse binding. Finally, we establish a "win-win" result, showing that a post-quantum collision-resistant hash function that is not collapsing can be used to build an equivocal hash function (which can, in turn, be used to build one-shot signatures and other useful quantum primitives). This strengthens a result due to Zhandry (Eurocrypt '19) showing that the same object yields quantum lightning. For this result we make use of recent quantum rewinding techniques.
Thomas Espitau, Mehdi Tibouchi, Alexandre Wallet, Yang Yu
ePrint Report
Lattice-based digital signature schemes following the hash-and-sign design paradigm of Gentry, Peikert and Vaikuntanathan (GPV) tend to offer an attractive level of efficiency, particularly when instantiated with structured compact trapdoors. In particular, NIST post-quantum finalist Falcon is both quite fast for signing and verification and quite compact: NIST notes that it has the smallest bandwidth (measured as the combined size of public key and signature) of all round 2 digital signature candidates. Nevertheless, while Falcon-512, for instance, compares favorably to ECDSA-384 in terms of speed, its signatures are well over 10 times larger. For applications that store a large number of signatures, or that require signatures to fit in prescribed packet sizes, this can be a critical limitation.

In this paper, we explore several approaches to further improve the size of hash-and-sign lattice-based signatures, particularly those instantiated over NTRU lattices like Falcon and its recent variant Mitaka. In particular, while GPV signatures are usually obtained by sampling lattice points according to some spherical discrete Gaussian distribution, we show that it can be beneficial to sample instead according to a suitably chosen ellipsoidal discrete Gaussian: this is because only half of the sampled Gaussian vector is actually output as the signature, while the other half is recovered during verification. Making the half that actually occurs in signatures shorter reduces signature size at essentially no security loss (in a suitable range of parameters). Similarly, we show that reducing the modulus $q$ with respect to which signatures are computed can improve signature size as well as verification key size almost "for free"; this is particularly true for constructions like Falcon and Mitaka that do not make substantial use of NTT-based multiplication (and rely instead on transcendental FFT). Finally, we show that the Gaussian vectors in signatures can be represented in a more compact way with appropriate coding-theoretic techniques, improving signature size by an additional 7 to 14%. All in all, we manage to reduce the size of, e.g., Falcon signatures by 30-40% at the cost of only 4-6 bits of Core-SVP security.
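
The coding-theoretic point in miniature: discrete-Gaussian signature coefficients are small with high probability, so a variable-length code such as Golomb-Rice stores them in far fewer bits than fixed-width words. The parameter below is a toy choice, not the paper's optimized coder.

```python
# Golomb-Rice coding of Gaussian-distributed coefficients. Illustrative.
import random

def rice_encode(vals, k):
    bits = []
    for v in vals:
        u = (v << 1) if v >= 0 else ((-v) << 1) - 1            # zigzag sign
        bits += [1] * (u >> k) + [0]                           # unary quotient
        bits += [(u >> i) & 1 for i in reversed(range(k))]     # k-bit remainder
    return bits

def rice_decode(bits, n, k):
    out, i = [], 0
    for _ in range(n):
        q = 0
        while bits[i]:
            q, i = q + 1, i + 1
        i += 1                                                 # skip terminator
        r = 0
        for _ in range(k):
            r, i = (r << 1) | bits[i], i + 1
        u = (q << k) | r
        out.append(u >> 1 if u % 2 == 0 else -((u + 1) >> 1))
    return out

coeffs = [round(random.gauss(0, 40)) for _ in range(512)]      # toy "signature"
k = 6
enc = rice_encode(coeffs, k)
assert rice_decode(enc, len(coeffs), k) == coeffs
print(f"{len(enc)} bits vs {16 * len(coeffs)} bits fixed-width")
```
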
Jiaxiang Tang, Jinbao Zhu, Songze Li, Kai Zhang, Lichao Sun
ePrint Report
We consider a federated representation learning framework, where with the assistance of a central server, a group of $N$ distributed clients train collaboratively over their private data, for the representations (or embeddings) of a set of entities (e.g., users in a social network). Under this framework, for the key step of aggregating local embeddings trained at the clients in a private manner, we develop a secure embedding aggregation protocol named SecEA, which provides information-theoretic privacy guarantees for the set of entities and the corresponding embeddings at each client simultaneously, against a curious server and up to $T < N/2$ colluding clients. As the first step of SecEA, the federated learning system performs a private entity union, for each client to learn all the entities in the system without knowing which entities belong to which clients. In each aggregation round, the local embeddings are secretly shared among the clients using Lagrange interpolation, and then each client constructs coded queries to retrieve the aggregated embeddings for the intended entities. We perform comprehensive experiments on various representation learning tasks to evaluate the utility and efficiency of SecEA, and empirically demonstrate that compared with embedding aggregation protocols without (or with weaker) privacy guarantees, SecEA incurs negligible performance loss (within 5%); and the additional computation latency of SecEA diminishes for training deeper models on larger datasets.
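
The aggregation core in miniature, under toy assumptions about field size and encoding: each client Shamir-shares its embedding, shares are summed pointwise, and Lagrange interpolation recovers only the aggregate, so no coalition of at most $T$ clients learns any individual embedding.

```python
# Toy Lagrange/Shamir-based embedding aggregation. Illustrative only.
import random

P = 2**61 - 1                       # prime field
T = 2                               # privacy threshold (polynomial degree)
N = 5                               # clients

def share(secret, n=N, t=T):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def reconstruct(points):            # Lagrange interpolation at x = 0
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

embeddings = [[random.randrange(1000) for _ in range(4)] for _ in range(N)]
# Every client shares each coordinate; holders add shares pointwise.
agg_shares = [[0] * 4 for _ in range(N)]
for emb in embeddings:
    for d, v in enumerate(emb):
        for i, s in enumerate(share(v)):
            agg_shares[i][d] = (agg_shares[i][d] + s) % P
# Any T+1 share-holders can open the aggregate (and only the aggregate).
opened = [reconstruct([(i + 1, agg_shares[i][d]) for i in range(T + 1)])
          for d in range(4)]
assert opened == [sum(e[d] for e in embeddings) % P for d in range(4)]
```
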
Mark Zhandry
ePrint Report
We propose a new paradigm for justifying the security of random oracle-based protocols, which we call the Augmented Random Oracle Model (AROM). We show that the AROM captures a wide range of important random oracle impossibility results. Thus a proof in the AROM implies some resiliency to such impossibilities. We then consider three ROM transforms which are subject to impossibilities: Fiat-Shamir (FS), Fujisaki-Okamoto (FO), and Encrypt-with-Hash (EwH). We show in each case how to obtain security in the AROM by strengthening the building blocks or modifying the transform.
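
For concreteness, here is the Fiat-Shamir transform the paper revisits, instantiated for Schnorr identification. This is a textbook sketch with deliberately toy-sized group parameters; no claim is made about the paper's constructions.

```python
# Fiat-Shamir applied to Schnorr identification: the interactive challenge
# is replaced by a hash of the transcript. Toy group, illustrative only.
import hashlib, secrets

p, q, g = 23, 11, 4                     # toy group: g has prime order q mod p

def H(*parts) -> int:                   # the random oracle being instantiated
    h = hashlib.sha256(b"|".join(str(x).encode() for x in parts)).digest()
    return int.from_bytes(h, "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1    # secret key
    return x, pow(g, x, p)              # public key y = g^x

def fs_prove(x, y, msg):
    r = secrets.randbelow(q)
    a = pow(g, r, p)                    # commitment
    e = H(y, a, msg)                    # challenge = hash, no interaction
    z = (r + e * x) % q                 # response
    return a, z

def fs_verify(y, msg, proof):
    a, z = proof
    e = H(y, a, msg)
    return pow(g, z, p) == a * pow(y, e, p) % p   # g^z = a * y^e

x, y = keygen()
assert fs_verify(y, b"hello", fs_prove(x, y, b"hello"))
```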

Along the way, we give a couple of other results. We improve the assumptions needed for the FO and EwH impossibilities from indistinguishability obfuscation to circularly secure LWE; we argue that our AROM still captures this improved impossibility. We also demonstrate that there is no "best possible" hash function, by giving a pair of security properties, both of which can be instantiated in the standard model separately, but which cannot be simultaneously satisfied by a single hash function.
Federico Canale, Gregor Leander, Lukas Stennes
ePrint Report
In this paper we deepen our understanding of how to apply Simon's algorithm to break symmetric cryptographic primitives. On the one hand, we automate the search for new attacks. Using this approach we automatically find the first efficient key-recovery attacks against constructions like 5-round MISTY L-FK or 5-round Feistel-FK (with internal permutation) using Simon's algorithm. On the other hand, we study generalizations of Simon's algorithm using non-standard Hadamard matrices, with the aim of expanding the quantum symmetric cryptanalysis toolkit with properties other than periods. Our main conclusion here is that none of these generalizations can accomplish that, and we conclude that exploiting non-standard Hadamard matrices with quantum computers to break symmetric primitives will require fundamentally new attacks.
S. Dov Gordon, Phi Hung Le, Daniel McVicker
ePrint Report
The SPDZ multiparty computation protocol allows $n$ parties to securely compute arithmetic circuits over a finite field, while tolerating up to $n - 1$ active corruptions. A line of work building upon SPDZ has made considerable improvements to the protocol's performance, typically focusing on concrete efficiency. However, the communication complexity of each of these protocols is $\Omega(n^2 |C|)$.

In this paper, we present a protocol that achieves $O(n|C|)$ communication. Our construction is very similar to those in the SPDZ family of protocols, except for one modular sub-routine for computing a verified sum. There are a handful of places in the SPDZ protocols where the $n$ parties wish to sum $n$ public values. Rather than requiring each party to broadcast its input to all other parties, it is clearly cheaper to use some designated "dealer" to compute and broadcast the sum. In prior work, it was assumed that the cost of verifying the correctness of these sums is $O(n^2)$, erasing the benefit of using a dealer. We show how to amortize this cost over the computation of multiple sums, resulting in linear communication complexity whenever the circuit size satisfies $|C| > n$.
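
The amortized verification idea in miniature: a designated dealer broadcasts m claimed sums, and one public random linear combination checks the whole batch, so each party broadcasts a single field element per batch rather than per sum. Field and names are toy choices, not the paper's protocol.

```python
# Toy batched verification of dealer-announced sums. Illustrative only.
import random

P = 2**61 - 1
n, m = 4, 16                                     # parties, sums per batch
x = [[random.randrange(P) for _ in range(m)] for _ in range(n)]

# Dealer computes and broadcasts the m sums (cheating on sum 3 here).
claimed = [sum(x[i][j] for i in range(n)) % P for j in range(m)]
claimed[3] = (claimed[3] + 1) % P                # a corrupted dealer

# Batch check: public random coefficients (e.g. from a coin-flip protocol),
# then each party broadcasts ONE combined field element.
rho = [random.randrange(P) for _ in range(m)]
contrib = [sum(r * x[i][j] for j, r in enumerate(rho)) % P for i in range(n)]
lhs = sum(contrib) % P
rhs = sum(r * s for r, s in zip(rho, claimed)) % P
print("dealer honest?", lhs == rhs)              # False except w.p. ~1/P
```
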
Christian Mouchet, Elliott Bertrand, Jean-Pierre Hubaux
ePrint Report
We propose and implement a multiparty homomorphic encryption (MHE) scheme with a $t$-out-of-$N$-threshold access structure that is efficient and does not require a trusted dealer in the common reference-string model. We construct this scheme from the ring-learning-with-errors (RLWE) assumption, as an extension of the MHE scheme of Mouchet et al. (PETS 21). By means of a specially adapted share-resharing procedure, this extension can be used to relax the $N$-out-of-$N$-threshold access structure of the original scheme into a $t$-out-of-$N$-threshold one. The procedure introduces only a single round of communication during the setup phase to instantiate the $t$-out-of-$N$-threshold access structure. Afterwards, only local operations are required for any set of $t$ parties to compute a $t$-out-of-$t$ additive sharing of the secret key; this sharing can be used directly in the scheme of Mouchet et al. We show that, by performing the re-sharing over the MHE ciphertext-space with a carefully chosen exceptional set, this reconstruction procedure can be made secure and has negligible memory and CPU-time overhead. Hence, in addition to fault tolerance, lowering the corruption threshold also yields considerable efficiency benefits, by enabling the distribution of batched secret-key operations among the online parties. We implemented and open-sourced our scheme in the Lattigo library.
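
The share-resharing step in miniature, over a toy prime field rather than the scheme's ring: an N-out-of-N additive sharing is converted in one round into a t-out-of-N Shamir sharing, after which any t parties can locally form a t-out-of-t additive sharing via Lagrange coefficients, mirroring the reconstruction procedure described above.

```python
# Toy resharing: N-out-of-N additive -> t-out-of-N Shamir -> local
# t-out-of-t additive. Illustrative only.
import random

P = 2**61 - 1
N, t = 5, 3

secret = random.randrange(P)
additive = [random.randrange(P) for _ in range(N - 1)]
additive.append((secret - sum(additive)) % P)       # N-out-of-N sharing

def shamir_share(v, deg):
    coeffs = [v] + [random.randrange(P) for _ in range(deg)]
    return [sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P
            for i in range(1, N + 1)]

# Resharing round: each party Shamir-shares its additive share; party i's
# new share is the sum of what it receives (a Shamir share of the secret).
sent = [shamir_share(a, t - 1) for a in additive]
shamir = [sum(sent[j][i] for j in range(N)) % P for i in range(N)]

# Any t parties, purely locally, form a t-out-of-t ADDITIVE sharing:
group = [1, 3, 4]                                   # party indices (1-based)
def lagrange_at0(i, idxs):
    num = den = 1
    for j in idxs:
        if j != i:
            num = num * (-j) % P
            den = den * (i - j) % P
    return num * pow(den, P - 2, P) % P

additive_t = [shamir[i - 1] * lagrange_at0(i, group) % P for i in group]
assert sum(additive_t) % P == secret
```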

19 June 2022

Casablanca, Morocco, 26 October - 28 October 2022
Event Calendar
Event date: 26 October to 28 October 2022
Submission deadline: 15 July 2022
Notification: 30 August 2022
CRYPTO
Crypto 2022 will take place as a hybrid conference in Santa Barbara, USA, on August 13-18, 2022.

Registration is now open:
https://crypto.iacr.org/2022/registration.php

The deadline for early registration is July 15th.

Information on student stipends can be found on the same link.

Information on affiliated events can be found here:
https://crypto.iacr.org/2022/affiliated.php

17 June 2022

Qiqi Lai, Feng-Hao Liu, Zhedong Wang
ePrint Report
This work proposes a new two-stage lattice sampling technique, generalizing the prior two-stage sampling method of Gentry, Peikert, and Vaikuntanathan (STOC '08). Using our new technique as a key building block, we can significantly improve the security and efficiency of the current state of the art in simulation-based functional encryption. In particular, our functional encryption achieves $(Q,\mathrm{poly})$ simulation-based semi-adaptive security that allows arbitrary pre- and post-challenge key queries, and has succinct ciphertexts with only an additive $O(Q)$ overhead.

Additionally, our two-stage sampling technique yields new feasibility results for indistinguishability-based adaptively secure IB-FE (identity-based functional encryption) for inner products and semi-adaptively secure AB-FE (attribute-based functional encryption) for inner products, breaking several technical limitations of the recent work by Abdalla, Catalano, Gay, and Ursu (Asiacrypt '20).

16 June 2022

Eyal Ronen, Eylon Yogev
ePrint Report
The SPHINCS+ [CCS '19] proposal is one of the alternate candidates for digital signatures in NIST's post-quantum standardization process. The scheme is a hash-based signature and is considered one of the most secure and robust proposals. The proposal includes a fast (but large) variant and a small (but costly) variant for each security level. The main problem that might hinder its adoption is its large signature size. Although SPHINCS+ supports a tradeoff between signature size and the computational cost of signing, further reducing the signature size (below the small variants) results in a prohibitively high computational cost for the signer, as well as a higher verification cost.

This paper presents several novel methods for further compressing the signature size while requiring negligible added computational cost for the signer and achieving faster verification time. Moreover, our approach enables a much more efficient tradeoff curve between signature size and the signer's computational costs. In many parameter settings, we achieve small signatures and faster running times simultaneously. For example, for $128$-bit security, the small signature variant of SPHINCS+ is $7856$ bytes long, while our variant is only $6304$ bytes long: a compression of approximately $20\%$, while still reducing the signer's running time.

The main insight behind our scheme is that there are predefined specific subsets of messages for which the WOTS+ and FORS signatures (that SPHINCS+ uses) can be compressed and made faster (while maintaining the same security guarantees). Although most messages will not come from these subsets, we can search for suitable hashed values to sign. We sign a hash of the message concatenated with a counter that was chosen such that the hashed value is in the subset. The resulting signature is both smaller and faster to sign and verify.
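
The counter search in miniature: grind a counter until the message digest lands in a "cheap to sign" subset, here taken to be digests with z leading zero bits. This subset is an illustrative stand-in; the paper targets the specific structure of WOTS+ and FORS.

```python
# Toy counter grinding: find ctr so that H(msg || ctr) falls in a subset
# on which the (hypothetical) hash-based signature is smaller/faster.
import hashlib

def grind(msg: bytes, z: int = 12):
    target = 1 << (256 - z)
    ctr = 0
    while True:
        d = hashlib.sha256(msg + ctr.to_bytes(8, "big")).digest()
        if int.from_bytes(d, "big") < target:       # z leading zero bits
            return ctr, d                           # sign d, publish ctr
        ctr += 1                                    # expected ~2^z tries

ctr, digest = grind(b"message to sign")
print(f"counter {ctr}: digest starts with {digest.hex()[:4]}...")
# A verifier recomputes sha256(msg || ctr), checks the leading zeros, then
# verifies the (shorter) signature on the remaining digest bits.
```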

Our schemes are simple to describe and implement. We provide an implementation and benchmark results.