International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

17 August 2022

Ray Perlner, John Kelsey, David Cooper
ePrint Report
SPHINCS$^+$ is a stateless hash-based signature scheme that has been selected for standardization as part of the NIST post-quantum cryptography (PQC) standardization process. Its security proof relies on the distinct-function multi-target second-preimage resistance (DM-SPR) of the underlying keyed hash function. The SPHINCS$^+$ submission offered several instantiations of this keyed hash function, including one based on SHA-256. A recent observation by Sydney Antonov on the PQC mailing list demonstrated that the construction based on SHA-256 did not have DM-SPR at NIST category five for several of the parameter sets submitted to NIST; however, it remained an open question whether this observation leads to a forgery attack. We answer this question in the affirmative by giving a complete forgery attack that reduces the concrete classical security of these parameter sets by approximately 40 bits.

Our attack works by applying Antonov's technique to the WOTS$^+$ public keys in SPHINCS$^+$, leading to a new one-time key that can sign a very limited set of hash values. From that key, we construct a slightly altered version of the original hypertree with which we can sign arbitrary messages, yielding signatures that appear valid.
Elette Boyle, Niv Gilboa, Yuval Ishai, Victor I. Kolobov
ePrint Report
A distributed point function (DPF) is a cryptographic primitive that enables compressed additive sharing of a secret unit vector across two or more parties. Despite growing ubiquity within applications and notable research efforts, the best 2-party DPF construction to date remains the tree-based construction of Boyle et al. (CCS'16), with no significantly new approaches since.

We present a new framework for 2-party DPF construction, which applies in the setting of feasible (polynomial-size) domains. This captures in particular all DPF applications in which the keys are expanded to the full domain. Our approach is motivated by a strengthened notion we put forth, of a programmable DPF (PDPF), in which a short, input-independent "offline" key can be reused for sharing many point functions.

* PDPF from OWF: We construct a PDPF for feasible domains from the minimal assumption that one-way functions exist, where the second "online" key size is polylogarithmic in the domain size $N$.

Our approach offers multiple new efficiency features and applications:

* Privately puncturable PRFs: Our PDPF gives the first OWF-based privately puncturable PRFs (for feasible domains) with sublinear keys.

* $O(1)$-round distributed DPF Gen: We obtain a (standard) DPF with polylog-size keys that admits an analog of Doerner-shelat (CCS'17) distributed key generation, requiring only $O(1)$ rounds (versus $\log N$).

* PCG with 1 short key: Compressing useful correlations for secure computation, where one key is of minimal size. This provides up to exponential communication savings in some application scenarios.
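The shared object itself can be illustrated with a minimal sketch: plain additive sharing of the point function $f_{\alpha,\beta}$ with keys linear in the domain size. Compressing these keys to polylogarithmic size is exactly what the DPF constructions above achieve; all names and parameters here are illustrative.

```python
import secrets

def share_point_function(alpha, beta, domain_size, modulus=2**32):
    """Toy 2-party additive sharing of f(alpha) = beta, f(x) = 0 elsewhere.
    Key sizes here are linear in the domain -- a real DPF compresses them."""
    key0 = [secrets.randbelow(modulus) for _ in range(domain_size)]
    key1 = [(-k) % modulus for k in key0]          # shares of the zero vector
    key1[alpha] = (key1[alpha] + beta) % modulus   # embed beta at position alpha
    return key0, key1

def eval_share(key, x, modulus=2**32):
    return key[x] % modulus

# The two shares reconstruct the unit vector beta * e_alpha.
k0, k1 = share_point_function(alpha=5, beta=7, domain_size=16)
for x in range(16):
    value = (eval_share(k0, x) + eval_share(k1, x)) % 2**32
    assert value == (7 if x == 5 else 0)
```

Neither key alone reveals $\alpha$ or $\beta$, since each is uniformly random in isolation; only the sum of the two evaluations recovers the point function.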
Diana Davidova, Nikolay Kaleyski
ePrint Report
We describe how any function over a finite field $\mathbb{F}_{p^n}$ can be represented in terms of the values of its derivatives. In particular, we observe that a function of algebraic degree $d$ can be represented uniquely through the values of its derivatives of order $(d-1)$ up to the addition of terms of algebraic degree strictly less than $d$. We identify a set of elements of the finite field, which we call the degree $d$ extension of the basis, which has the property that for any choice of values for the elements in this set, there exists a function of algebraic degree $d$ whose values match the given ones. We discuss how to reconstruct a function from the values of its derivatives, and the complexity involved in transitioning between the truth table of the function and its derivative representation. We then specialize to the case of quadratic functions, and show how to directly convert between the univariate and derivative representations without going through the truth table. We thus generalize the matrix representation of quadratic vectorial Boolean functions due to Yu et al. to the case of arbitrary characteristic. We also show how to characterize quadratic planar functions using the derivative representation. Based on this, we adapt the method of Yu et al. for searching for quadratic APN functions with prime field coefficients to the case of planar DO functions. We use this method to find all such functions (up to CCZ-equivalence) over $\mathbb{F}_{3^n}$ for $n \le 7$. We conclude that the currently known planar DO polynomials cover all possible cases for $n \le 7$. We find representatives simpler than the known ones for the Zhou-Pott, Dickson, and Lunardon-Marino-Polverino-Trombetti-Bierbrauer families for $n = 6$. Finally, we discuss the computational resources that would be needed to push this search to higher dimensions.
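The discrete derivative in question is $D_a f(x) = f(x+a) - f(x)$, and taking a derivative drops the algebraic degree by at least one. A minimal sketch over a small prime field (the paper works over extension fields $\mathbb{F}_{p^n}$; the function below is illustrative):

```python
P = 7  # a small prime field F_p; the paper works over extension fields F_{p^n}

def derivative(f, a, p=P):
    """Discrete derivative D_a f(x) = f(x + a) - f(x) over F_p."""
    return lambda x: (f((x + a) % p) - f(x)) % p

f = lambda x: (x * x) % P            # a quadratic function, degree 2
for a in range(1, P):
    Df = derivative(f, a)
    # D_a f(x) = 2*a*x + a^2 is affine in x: the degree dropped by one
    for x in range(P):
        assert Df(x) == (2 * a * x + a * a) % P
```

Iterating this $d-1$ times on a degree-$d$ function leaves affine functions, which is what makes the order-$(d-1)$ derivative values a workable representation.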
Zhenzhen Bao, Jian Guo, Shun Li, Phuong Pham
ePrint Report
In this work, we evaluate the security of Merkle-Damgård (MD) hash functions and their combiners (XOR and concatenation combiners) in quantum settings. Two main quantum scenarios are considered: one where a substantial amount of cheap quantum random access memory (qRAM) is available, and one where qRAM is limited and expensive to access. We present generic quantum attacks on MD hash functions and hash combiners, and carefully analyze the complexities under both scenarios. The security notions considered are fundamental requirements for hash functions: resistance against collision and (second-)preimage attacks. The results are consistent with the conclusions in the classical setting: the considered resistances of MD hash functions and their combiners are far less than ideal, despite the significant differences in the expected security bounds between the classical and quantum settings. In particular, the generic attacks can be improved significantly using quantum computers under both scenarios. These results indicate that classical hash constructions require careful security re-evaluation before being deployed in post-quantum cryptographic schemes.
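The two combiner definitions are simple: the XOR combiner outputs $H_1(M) \oplus H_2(M)$ and the concatenation combiner outputs $H_1(M) \,\|\, H_2(M)$. A sketch of the definitions only (the stdlib hashes stand in for the MD hash functions the paper attacks):

```python
import hashlib

def xor_combiner(msg: bytes) -> bytes:
    """XOR combiner: H1(M) xor H2(M).  Both digests must have equal length."""
    h1 = hashlib.sha256(msg).digest()
    h2 = hashlib.sha3_256(msg).digest()
    return bytes(a ^ b for a, b in zip(h1, h2))

def concat_combiner(msg: bytes) -> bytes:
    """Concatenation combiner: H1(M) || H2(M), a 2n-bit digest."""
    return hashlib.sha256(msg).digest() + hashlib.sha3_256(msg).digest()

digest = concat_combiner(b"message")
assert len(digest) == 64  # 2n-bit output
```

The point of the paper is that, for MD-based $H_1$ and $H_2$, the $2n$-bit concatenated output does not buy $2n$-bit security against these generic attacks, classically or quantumly.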
Jian Guo, Shun Li, Guozhen Liu, Phuong Pham
ePrint Report
In ToSC'20, a new approach combining Mixed-Integer Linear Programming (MILP) and Constraint Programming (CP) tools to search for boomerang distinguishers was proposed, and it was later used for rebound attacks in ASIACRYPT'21 and CRYPTO'22. In this work, we extend these techniques to mount collision attacks on the SKINNY-128-256 MMO hashing mode in classical and quantum settings. We present the first 17-round (and 15-round) free-start collision attacks on this variant of the SKINNY hashing mode. Moreover, one more round of the inbound phase is covered, leading to the best existing classical free-start collision attack, on 19 rounds of the SKINNY-128-384 MMO hashing mode.
Jonathan Bootle, Alessandro Chiesa, Ziyi Guan, Siqi Liu
ePrint Report
Interactive oracle proofs (IOPs) are a generalization of probabilistically checkable proofs that can be used to construct succinct arguments. Improvements in the efficiency of IOPs lead to improvements in the efficiency of succinct arguments. The key efficiency goals are achieving provers that run in linear time and verifiers that run in sublinear time, where the time complexity is with respect to the (nondeterministic) time complexity of the proved computation.

For any given finite field $\mathbb{F}$, we construct IOPs for the correctness of (nondeterministic) arithmetic computations over $\mathbb{F}$ with linear-time prover and polylogarithmic query complexity. Specifically, our IOPs work for the NP-complete language R1CS, which in particular captures arithmetic circuit satisfiability, and for the algebraic automata problem. The IOPs imply succinct arguments for (nondeterministic) arithmetic computations over any finite field with linear-time proving (given black-box access to a linear-time collision-resistant hash function). The argument for algebraic automata also achieves sublinear verification time.

The construction leverages recent applications of reverse-multiplication-friendly embeddings and precomputation techniques to amortize the cost of expensive operations. These tools enable us to overcome a key limitation in prior works that required the field $\mathbb{F}$ to be large.
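For readers unfamiliar with R1CS: an instance is given by matrices $A, B, C$ over $\mathbb{F}$, and a vector $z$ (public inputs plus witness) satisfies it when $(Az) \circ (Bz) = Cz$ entrywise. A minimal satisfiability check, with a toy one-constraint instance encoding $x \cdot x = y$:

```python
P = 97  # a small prime field; real instances use large fields

def r1cs_satisfied(A, B, C, z, p=P):
    """Check the R1CS condition (A z) * (B z) == (C z), entrywise over F_p."""
    dot = lambda row, v: sum(r * x for r, x in zip(row, v)) % p
    return all(dot(a, z) * dot(b, z) % p == dot(c, z)
               for a, b, c in zip(A, B, C))

# One constraint encoding x * x = y, with z = (1, x, y):
A = [[0, 1, 0]]
B = [[0, 1, 0]]
C = [[0, 0, 1]]
assert r1cs_satisfied(A, B, C, [1, 5, 25])       # 5 * 5 = 25
assert not r1cs_satisfied(A, B, C, [1, 5, 24])
```

Arithmetic circuit satisfiability reduces to R1CS by giving each multiplication gate one such constraint, which is why linear-time provers for R1CS yield linear-time provers for circuits.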
Sayandeep Saha, Mustafa Khairallah, Thomas Peyrin
ePrint Report
Implementation-based attacks are major concerns for modern cryptography. For symmetric-key cryptography, a significant amount of exploration has taken place in this regard for primitives such as block ciphers. Concerning symmetric-key operating modes, such as Authenticated Encryption with Associated Data (AEAD), the state of the art mainly addresses passive Side-Channel Attacks (SCA) in the form of leakage-resilient cryptography. So far, only a handful of works address Fault Attacks (FA) in the context of AEADs, especially concerning the fundamental properties of integrity and confidentiality. In this paper, we address this gap by exploring mode-level issues arising from FAs. We emphasize that FAs can be fatal even when the adversary does not aim to extract the long-term secret, but rather tries to violate the basic security requirements (integrity and confidentiality). Notably, we show novel integrity attack examples on state-of-the-art AEAD constructions, and even on a prior fault-resilient AEAD construction called SIV$. On the constructive side, we first present new fault-resilience security notions for PRFs (frPRF), MACs (frMAC), and AEAD schemes (frAE); the latter can be seen as an improved version of the notion introduced by Fischlin and Günther at CT-RSA'20. Then, we propose new constructions to turn a frPRF into a fault-resilient MAC frMAC (hash-then-frPRF) and into a fault-resilient AEAD frAE (MAC-then-Encrypt-then-MAC, or MEM).
Tako Boris Fouotsa
ePrint Report
We propose a countermeasure to the Castryck-Decru attack on SIDH. The attack heavily relies on the images of torsion points. Our countermeasure consists in masking the torsion-point images in SIDH in such a way that they cannot be exploited in the attack but can still be used to complete the key exchange. This comes with a change in the form of the field characteristic and a considerable increase in the parameter sizes.
Onur Gunlu, Rafael F. Schaefer, Holger Boche, H. Vincent Poor
ePrint Report
The distributed source coding problem is extended by positing that noisy measurements of a remote source are the correlated random variables that should be reconstructed at another terminal. We consider a secure and private distributed lossy source coding problem with two encoders and one decoder such that (i) all terminals noncausally observe a noisy measurement of the remote source; (ii) a private key is available to each legitimate encoder and all private keys are available to the decoder; (iii) rate-limited noiseless communication links are available between each encoder and the decoder; (iv) the amount of information leakage to an eavesdropper about the correlated random variables is defined as secrecy leakage, and privacy leakage is measured with respect to the remote source; and (v) two passive attack scenarios are considered, where a strong eavesdropper can access both communication links and a weak eavesdropper can choose only one of the links to access. Inner and outer bounds on the rate regions defined under secrecy, privacy, communication, and distortion constraints are derived for both passive attack scenarios. When one or both sources should be reconstructed reliably, the rate region bounds are simplified.
Thomas Pornin
ePrint Report
Double-odd curves are curves with order equal to 2 modulo 4. A prime order group with complete formulas and a canonical encoding/decoding process could previously be built over a double-odd curve. In this paper, we reformulate such curves as a specific case of the Jacobi quartic. This allows using slightly faster formulas for point operations, as well as defining a more efficient encoding format, so that decoding and encoding have the same cost as classic point compression (decoding is one square root, encoding is one inversion). We define the prime-order groups jq255e and jq255s as the application of that modified encoding to the do255e and do255s groups. We furthermore define an optimized signature mechanism on these groups, that offers shorter signatures (48 bytes instead of the usual 64 bytes, for 128-bit security) and makes signature verification faster (down to less than 83000 cycles on an Intel x86 Coffee Lake core).
Henri Devillez, Olivier Pereira, Thomas Peters
ePrint Report
The verifiable encryption of bits is the main computational step that is needed to prepare ballots in many practical voting protocols. Its computational load can also be a practical bottleneck, preventing the deployment of some protocols or requiring the use of computing clusters.

We investigate the question of producing many verifiably encrypted bits in an efficient and portable way, using as a baseline the protocol that is in use in essentially all modern voting systems and libraries supporting homomorphic voting, including ElectionGuard, a state-of-the-art open source voting SDK deployed in government elections. Combining fixed base exponentiation techniques and new encryption and ZK proof mechanisms, we obtain speed-ups by more than one order of magnitude against standard implementations. Our exploration requires balancing conflicting optimization strategies, and the use of asymptotically less efficient protocols that turn out to be very effective in practice. Several of our proposed improvements are now on the ElectionGuard roadmap.
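The baseline primitive being optimized is exponential ElGamal encryption of a bit (homomorphic voting systems in this family, including ElectionGuard, pair it with a ZK proof that the plaintext is 0 or 1). A minimal sketch with toy group parameters and the ZK proof omitted; all parameters below are illustrative, not a deployed group:

```python
import secrets

# Toy group: p = 2q + 1 and g generates the order-q subgroup.
# Real deployments use a large standardized group; this is only a sketch.
p, q, g = 47, 23, 2

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def encrypt_bit(pk, bit):
    """Exponential ElGamal: (g^r, g^bit * pk^r).  Homomorphic in the
    exponent, so multiplying ciphertexts adds the encrypted bits."""
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), (pow(g, bit, p) * pow(pk, r, p)) % p

def decrypt(sk, ct):
    c1, c2 = ct
    m = (c2 * pow(c1, (q - sk) % q, p)) % p  # c2 / c1^sk = g^bit
    return 0 if m == 1 else 1

sk, pk = keygen()
assert decrypt(sk, encrypt_bit(pk, 0)) == 0
assert decrypt(sk, encrypt_bit(pk, 1)) == 1
```

Each ballot entry costs several exponentiations plus the proof, which is why fixed-base exponentiation tricks and cheaper proof mechanisms translate directly into the order-of-magnitude speed-ups reported above.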
Héctor Masip Ardevol, Jordi Baylina Melé, Daniel Lubarov, José L. Muñoz-Tapia
ePrint Report
SNARKs for some standard cryptographic primitives frequently have to encode SNARK-unfriendly operations such as XOR. Previous protocols such as [GW20] worked around this problem by introducing lookup arguments. However, these protocols were only applicable within the same circuit. RapidUp is a protocol that removes this limitation by unfolding the grand-product polynomial into two (equivalent) polynomials of the same size. Moreover, we generalize previous protocols by introducing selectors.
Jiewen Yao, Krystian Matusiewicz, Vincent Zimmer
ePrint Report
The Security Protocol and Data Model (SPDM) defines flows to authenticate the hardware identity of a computing device. It also allows for establishing a secure session for confidential and integrity-protected data communication between two devices. The present version of SPDM, namely version 1.2, relies on traditional asymmetric cryptographic algorithms that are known to be vulnerable to quantum attacks. This paper describes the means by which support for post-quantum (PQ) cryptography can be added to the SPDM protocol in order to prepare SPDM for the upcoming world of quantum computing. We examine the SPDM 1.2 protocol and discuss how to negotiate the use of post-quantum cryptographic (PQC) algorithms, how to support device identity reporting, how to authenticate the device, and how to establish a secure session when using PQC algorithms. We consider so-called hybrid modes, where both classical and PQC algorithms are used to achieve security properties, as these modes are important during the transition period. We also share our experience with implementing PQ-SPDM and provide benchmarks for some of the winning NIST PQC algorithms.
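A common pattern behind such hybrid modes is to run both key exchanges and derive the session key from both shared secrets, so the session stays secure as long as at least one of the two is unbroken. A hedged sketch of that idea only; this is an illustrative construction, not the SPDM 1.2 key schedule:

```python
import hashlib
import hmac
import secrets

def combine_shared_secrets(ss_classical: bytes, ss_pq: bytes,
                           transcript: bytes) -> bytes:
    """Hybrid key derivation sketch: the session key depends on BOTH the
    classical and the post-quantum shared secret.  An attacker must break
    both key exchanges to recover the key.  (Illustrative only.)"""
    extract = hmac.new(transcript, ss_classical + ss_pq, hashlib.sha256)
    return extract.digest()

ss_ecdhe = secrets.token_bytes(32)  # stand-in for an ECDHE shared secret
ss_kem = secrets.token_bytes(32)    # stand-in for a PQ KEM shared secret
key = combine_shared_secrets(ss_ecdhe, ss_kem, b"session-transcript-hash")
assert len(key) == 32
```

Binding the transcript into the derivation ties the key to the negotiated algorithms, which matters when classical and PQC suites are negotiated side by side.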
Ngoc Khanh Nguyen, Gregor Seiler
ePrint Report
We propose a practical sublinear-size zero-knowledge proof system for Rank-1 Constraint Satisfaction (R1CS) based on lattices. The proof size scales asymptotically with the square root of the witness size. Concretely, the size becomes $2$-$3$ times smaller than Ligero (ACM CCS 2017), which also exhibits square root scaling, for large instances of R1CS. At the core lies an interactive variant of the Schwartz-Zippel Lemma that might be of independent interest.
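The classical Schwartz-Zippel Lemma underlying that core tool states that a nonzero polynomial of degree at most $d$ vanishes at a uniformly random field point with probability at most $d/|\mathbb{F}|$, which turns polynomial identity checking into a few random evaluations. A sketch of that standard (non-interactive) form:

```python
import secrets

P = 2**31 - 1  # a prime field large enough to make the error bound d/P tiny

def probably_equal(f, g, degree_bound, trials=8, p=P):
    """Schwartz-Zippel identity test: if f != g as polynomials of degree
    <= degree_bound, one random evaluation catches the difference except
    with probability degree_bound / p; repeating drives the error down."""
    for _ in range(trials):
        x = secrets.randbelow(p)
        if f(x) % p != g(x) % p:
            return False
    return True

f = lambda x: (x + 1) * (x - 1) % P
g = lambda x: (x * x - 1) % P   # the same polynomial, written differently
h = lambda x: (x * x + 1) % P   # a different polynomial
assert probably_equal(f, g, degree_bound=2)
assert not probably_equal(f, h, degree_bound=2)
```

The paper's interactive variant of the lemma is its own contribution; the sketch above only shows the baseline statement it strengthens.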
Ananya Appan, Anirudh Chandramouli, Ashish Choudhury
ePrint Report
In this work, we study perfectly-secure multi-party computation (MPC) against general (non-threshold) adversaries. Known protocols in a synchronous network are secure against $Q^{(3)}$ adversary structures, while in an asynchronous network, known protocols are secure against $Q^{(4)}$ adversary structures. A natural question is whether there exists a single protocol which remains secure against $Q^{(3)}$ and $Q^{(4)}$ adversary structures in a synchronous and in an asynchronous network respectively, where the parties are not aware of the network type. We design the first such best-of-both-worlds protocol against general adversaries. Our result generalizes the result of Appan, Chandramouli and Choudhury (PODC 2022), which presents a best-of-both-worlds perfectly-secure protocol against threshold adversaries.

To design our protocol, we present two important building blocks which are of independent interest. The first building block is a best-of-both-worlds perfectly-secure Byzantine agreement (BA) protocol for $Q^{(3)}$ adversary structures, which remains secure both in a synchronous, as well as an asynchronous network. The second building block is a best-of-both-worlds perfectly-secure verifiable secret-sharing (VSS) protocol, which remains secure against $Q^{(3)}$ and $Q^{(4)}$ adversary structures in a synchronous network and an asynchronous network respectively.
Joël Alwen, Dominik Hartmann, Eike Kiltz, Marta Mularczyk, Peter Schwabe
ePrint Report
A multi-message multi-recipient PKE (mmPKE) encrypts a batch of messages, in one go, to a corresponding set of independently chosen receiver public keys. The resulting "multi-recipient ciphertext" can then be reduced (by any third party) to a shorter, receiver-specific, "individual ciphertext". Finally, to recover the $i$-th message in the batch from their individual ciphertext, the $i$-th receiver only needs their own decryption key. A special case of mmPKE is multi-recipient PKE, where all receivers are sent the same message. By treating (m)mPKE and their KEM counterparts as stand-alone primitives we allow for more efficient constructions than trivially composing individual PKE/KEM instances. This is especially valuable in the post-quantum setting, where PKE/KEM ciphertexts and public keys tend to be far larger than their classical counterparts.

In this work we describe a collection of new results around batched KEMs and PKE. We provide both classic and post-quantum proofs for all results. Our results are geared towards practical constructions and applications (for example in the domain of PQ-secure group messaging).

Concretely, our results include a new non-adaptive to adaptive compiler for CPA-secure mKEMs, resulting in public keys roughly half the size of the previous state of the art [Hashimoto et al., CCS'21]. We also prove their FO transform for mKEMs to be secure in the quantum random oracle model. We provide the first mKEM combiner as well as two mmPKE constructions. The first is an arbitrary-message-length black-box construction from an mKEM (e.g. one produced by combining a PQ with a classic mKEM). The second is optimized for short messages and achieves hybrid PQ/classic security more directly. When encrypting $n$ short messages (e.g. as in several recent mmPKE applications) at 256 bits of security, the mmPKE ciphertexts are $144n$ bytes shorter than with the generic construction. Finally, we provide an optimized implementation of the (CCA-secure) mKEM construction based on the NIST PQC winner Kyber and report benchmarks showing a significant speedup for batched encapsulation and up to 79% savings in ciphertext size compared to a naive solution.
Christian Badertscher, Peter Gaži, Iñigo Querejeta-Azurmendi, Alexander Russell
ePrint Report
Verifiable random functions (Micali et al., FOCS'99) allow a key-pair holder to verifiably evaluate a pseudorandom function under that particular key pair. These primitives enable fair and verifiable pseudorandom lotteries, essential in proof-of-stake blockchains such as Algorand and Cardano, and are being used to secure billions of dollars of capital. As a result, there is an ongoing IRTF effort to standardize VRFs, with a proposed ECVRF based on elliptic-curve cryptography appearing as the most promising candidate.

In this work, towards understanding the general security of VRFs and in particular the ECVRF construction, we provide an ideal functionality in the Universal Composability (UC) framework (Canetti, FOCS'01) that captures VRF security, and show that ECVRF UC-realizes this functionality.

We further show how the range of a VRF can generically be extended in a modular fashion based on the above functionality. This observation is particularly useful for protocols such as Ouroboros, since it allows reducing the number of VRF evaluations (per slot) and VRF verifications (per block) from two to one, at the price of additional (but much faster) hash-function evaluations.
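The general shape of such a range extension is to derive several independent outputs from a single VRF evaluation via domain-separated hashes, so one VRF proof covers all of them. A schematic sketch of that idea, not the paper's exact construction; the function and its inputs are illustrative:

```python
import hashlib

def extend_vrf_output(vrf_output: bytes, n_outputs: int, out_len: int = 32):
    """Schematic VRF range extension: one VRF evaluation, several
    domain-separated hash evaluations.  A verifier checks the single
    VRF proof once and recomputes only the cheap hashes."""
    return [hashlib.sha256(bytes([i]) + vrf_output).digest()[:out_len]
            for i in range(n_outputs)]

# E.g. one output for leader election, one for the block nonce:
leader_rand, nonce_rand = extend_vrf_output(b"vrf-output-bytes", 2)
assert leader_rand != nonce_rand
```

The domain-separation byte ensures the derived outputs are independent as random oracle outputs, which is what allows a protocol to halve its per-slot VRF evaluations.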

Finally, we study batch verification in the context of VRFs. We provide a UC functionality capturing a VRF with batch-verification capability, and propose modifications to ECVRF that allow for this feature. We again prove that our proposal UC-realizes the desired functionality. We provide a performance analysis showing that batch verification can yield a factor-two speedup for batches with 1024 proofs, at the cost of increasing the proof size from 80 to 128 bytes.
Kevin Lewi, Jon Millican, Ananth Raghunathan, Arnab Roy
ePrint Report
Many online applications, such as online file backup services, support the sharing of indexed data between a set of devices. These systems may offer client-side encryption of the data, so that the stored data is inaccessible to the online host. A potentially desirable goal in this setting would be to protect not just the contents of the backed-up files, but also their identifiers. However, as these identifiers are typically used for indexing, a deterministic consistent mapping across devices is necessary. Additionally, in a multi-device setting, it may be desirable to maintain an ability to revoke a device's access, e.g. by rotating encryption keys for new data.

We present a new primitive, called the Oblivious Revocable Function (ORF), which operates in the above setting and allows identifiers to be obliviously mapped to a consistent value across multiple devices, while enabling the server to permanently remove an individual device's ability to map values. This permits a stronger threat model against metadata, in which metadata cannot be derived from identifiers by a revoked device colluding with the service provider, so long as the service provider was honest at the instant of revocation. We describe a simple Diffie-Hellman-based construction that achieves ORFs and provide a proof of security in the UC framework.
Sarah Arpin, Tyler Raven Billingsley, Daniel Rayor Hast, Jun Bo Lau, Ray Perlner, Angela Robinson
ePrint Report
We present experimental findings on the decoding failure rate (DFR) of BIKE, a fourth-round candidate in the NIST Post-Quantum Standardization process, at the 20-bit security level. We select parameters according to BIKE design principles and conduct a series of experiments. We directly compute the average DFR on a range of BIKE block sizes and identify both the waterfall and error floor regions of the DFR curve. We then study the influence on the average DFR of three sets $\mathcal{C}$, $\mathcal{N}$, and $2\mathcal{N}$ of near-codewords --- vectors of low weight that induce syndromes of low weight --- defined by Vasseur in 2021. We find that error vectors leading to decoding failures have small maximum support intersection with elements of these sets; further, the distribution of intersections is quite similar to that of sampling random error vectors and counting the intersections with $\mathcal{C}$, $\mathcal{N}$, and $2\mathcal{N}$. Our results indicate that these three sets are not sufficient for classifying vectors expected to cause decoding failures. Finally, we study the role of syndrome weight in the decoding behavior and conclude that the set of error vectors that lead to decoding failures differs from random vectors by having low syndrome weight.
Daniël Kuijsters, Denise Verbakel, Joan Daemen
ePrint Report
Lightweight cryptography is characterized by the need for low implementation cost, while still providing sufficient security. This requires careful analysis of building blocks and their composition. SKINNY is an ISO/IEC-standardized family of tweakable block ciphers and is used in Romulus, a finalist of the NIST lightweight cryptography standardization process. We present non-trivial linear approximations of two-round SKINNY that have correlation one or minus one and that hold for a large fraction of all round tweakeys. Moreover, we show how these could have been avoided.
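The correlation of a linear approximation $(a, b)$ of a map $S$ over $n$-bit inputs is $c(a,b) = 2^{-n} \sum_x (-1)^{\langle a,x\rangle \oplus \langle b,S(x)\rangle}$; correlation $\pm 1$ means the linear relation holds (or fails) deterministically for every input. A minimal sketch of the computation on a toy 4-bit map (the identity, purely for illustration, not the SKINNY round function):

```python
def parity(x: int) -> int:
    """Parity of the bits of x, i.e. the inner product bit <mask, x>."""
    return bin(x).count("1") & 1

def correlation(sbox, a, b, n=4):
    """Correlation of the approximation <a, x> -> <b, S(x)>:
    c = 2^-n * sum_x (-1)^(<a,x> xor <b,S(x)>).  |c| = 1 means the
    approximation is deterministic over all inputs."""
    total = sum(1 if parity(a & x) == parity(b & sbox[x]) else -1
                for x in range(1 << n))
    return total / (1 << n)

identity = list(range(16))
assert correlation(identity, 0b0001, 0b0001) == 1.0   # trivially correlation one
assert abs(correlation(identity, 0b0001, 0b0010)) < 1.0
```

Finding masks $(a, b)$ with $|c| = 1$ over two full rounds of a real cipher, as the paper does for SKINNY, signals unexpectedly strong linear structure in the composition of its building blocks.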