International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

10 April 2023

Jesús-Javier Chi-Domínguez, Andre Esser, Sabrina Kunzweiler, Alexander May
ePrint Report
Despite recent breakthrough results in attacking SIDH, the CSIDH protocol remains a secure post-quantum key exchange protocol with appealing properties. However, to obtain efficient CSIDH instantiations one has to resort to small secret keys. In this work, we provide novel methods to analyze small-key CSIDH, thereby introducing the representation method, which has been successfully applied to attack small secret keys in code- and lattice-based schemes, to the isogeny-based world as well.

We use the recently introduced Restricted Effective Group Actions ($\mathsf{REGA}$) to illustrate the analogy between CSIDH and Diffie-Hellman key exchange. This framework allows us to introduce a $\mathsf{REGA}\text{-}\mathsf{DLOG}$ problem as a level of abstraction to computing isogenies between elliptic curves, analogous to the classic discrete logarithm problem. This in turn allows us to study $\mathsf{REGA}\text{-}\mathsf{DLOG}$ with ternary key spaces such as $\{-1, 0, 1\}^n, \{0,1,2\}^n$ and $\{-2,0,2\}^n$, which lead to especially efficient, recently proposed CSIDH instantiations. The best classic attack on these key spaces is a Meet-in-the-Middle algorithm that runs in time $3^{0.5 n}$, using also $3^{0.5 n}$ memory.

We first show that $\mathsf{REGA}\text{-}\mathsf{DLOG}$ with ternary key spaces $\{0,1,2\}^n$ or $\{-2,0,2\}^n$ can be reduced to the ternary key space $\{-1,0,1\}^n$.

We further provide a heuristic time-memory tradeoff for $\mathsf{REGA}\text{-}\mathsf{DLOG}$ with keyspace $\{-1,0,1\}^n$ based on Parallel Collision Search with memory requirement $M$ that under standard heuristics runs in time $3^{0.75 n}/M^{0.5}$ for all $M \leq 3^{n/2}$. We then use the representation technique to heuristically improve to $3^{0.675n}/M^{0.5}$ for all $M \leq 3^{0.22 n}$, and further provide more efficient time-memory tradeoffs for all $M \leq 3^{n/2}$.

Although we focus in this work on $\mathsf{REGA}\text{-}\mathsf{DLOG}$ with ternary key spaces for showing its efficacy in providing attractive time-memory tradeoffs, we also show how to use our framework to analyze larger key spaces $\{-m, \ldots, m\}^n$ with $m = 2,3$.
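To make the baseline concrete, here is a small Python sketch of the classic meet-in-the-middle attack on a REGA-DLOG-style problem with ternary key space $\{-1,0,1\}^n$, with the group action modelled, purely as an assumption for illustration, by exponentiation modulo a toy prime. It is not the paper's representation-technique algorithm and all parameters are hypothetical.

```python
# Toy meet-in-the-middle for a REGA-DLOG-style problem with ternary keys.
# This is NOT the paper's representation-technique algorithm; it only
# illustrates the classic 3^{n/2}-time / 3^{n/2}-memory baseline, with the
# group action modelled (hypothetically) as exponentiation in Z_p^*.
from itertools import product
import random

p = 2**61 - 1          # toy prime modulus (assumption, for illustration only)
g = 3                  # toy base element
n = 10                 # number of ternary key coordinates
a = [random.randrange(1, p - 1) for _ in range(n)]   # public "action" exponents

secret = [random.choice((-1, 0, 1)) for _ in range(n)]
target = pow(g, sum(e * ai for e, ai in zip(secret, a)) % (p - 1), p)

half = n // 2

def act(vec, idx):
    """Apply the toy action for coordinates idx with ternary digits vec."""
    s = sum(e * a[i] for e, i in zip(vec, idx)) % (p - 1)
    return pow(g, s, p)

# Enumerate the left half and store it: 3^{n/2} time and memory.
table = {act(vec, range(half)): vec for vec in product((-1, 0, 1), repeat=half)}

# Enumerate the right half and look for a collision.
for vec in product((-1, 0, 1), repeat=n - half):
    # target * g^{-<e_R, a_R>} equals g^{<e_L, a_L>} for the correct split
    cand = (target * pow(act(vec, range(half, n)), -1, p)) % p
    if cand in table:
        recovered = list(table[cand]) + list(vec)
        print("recovered key:", recovered, "matches:", recovered == secret)
        break
```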
George Tasopoulos, Charis Dimopoulos, Apostolos P. Fournaris, Raymond K. Zhao, Amin Sakzad, Ron Steinfeld
ePrint Report
Post-quantum cryptography (PQC) has, in the past few years, been the main driving force of the transition to quantum resistance for security primitives, protocols, and tools. TLS is one of the most widely used security protocols that needs to be made quantum safe. However, integrating PQC algorithms into TLS introduces various implementation overheads compared to traditional TLS that cannot be overlooked on battery-powered embedded devices with constrained resources. While several works evaluate the execution-time overhead of PQ TLS on embedded systems, only a few explore its energy consumption cost. In this paper, we present a thorough power and energy consumption evaluation and analysis of a PQ TLS 1.3 implementation for embedded systems. We use a custom WolfSSL PQ TLS 1.3 implementation that integrates all the NIST PQC algorithms selected for standardization, those evaluated in NIST Round 4, and the BSI recommendations. The various PQ TLS versions are deployed on an STM Nucleo evaluation board under mutual and unilateral client-server authentication scenarios, and the collected power and energy consumption results are analyzed in detail. The comparisons and overall analysis yield very interesting results, indicating that the choice of the PQC algorithm combination for TLS 1.3 on an embedded system may differ considerably depending on whether the device is used as an authenticated or unauthenticated client or server. The results also indicate that, in some cases, PQ TLS 1.3 implementations can be as energy efficient as, or more energy efficient than, traditional TLS 1.3.
Matthias Probst, Manuel Brosch, Georg Sigl
ePrint Report
Spiking neural networks are gaining attention due to their low-power properties and event-based operation, which make them suitable for use in resource-constrained embedded devices. Such edge devices allow physical access, opening the door to side-channel analysis. In this work, we reverse engineer the parameters of a feed-forward spiking neural network implementation with correlation power analysis. Localized measurements of electromagnetic emanations enable our attack, despite the inherent parallelism and resulting algorithmic noise of the network. We provide a methodology to extract valuable parameters of the integrate-and-fire neurons in all layers, as well as the layer sizes.
Shuailiang Hu
ePrint Report
Homomorphic encryption requires ciphertexts to be homomorphic, so that operations between ciphertexts are reflected in the underlying plaintexts. Fully homomorphic encryption requires that the encryption algorithm satisfy additive and multiplicative homomorphism at the same time. At present, there are many fully homomorphic encryption schemes, such as those based on ideal lattices, the AGCD problem, the LWE problem, and the RLWE problem. However, improving the efficiency, ciphertext length, and computational limits of fully homomorphic encryption schemes still requires further study.

Based on Lagrange interpolation polynomials, we propose a fully homomorphic encryption scheme whose security rests on the difficulty of finding roots of a polynomial of degree at least two modulo $n = pq$, where $p$ and $q$ are both private large primes. We construct polynomials $trap_1$ and $trap_0$ to generate the ciphertext of a message $m$, so that computations between ciphertexts act directly on the plaintexts. Our scheme is secure as long as the Rabin encryption algorithm cannot be broken.

07 April 2023

Wouter Legiest, Jan-Pieter D'Anvers, Michiel Van Beirendonck, Furkan Turan, Ingrid Verbauwhede
ePrint Report
Homomorphic encryption (HE) enables calculating on encrypted data, which makes it possible to perform privacy-preserving neural network inference. One disadvantage of this technique is that it is several orders of magnitude slower than calculation on unencrypted data. Neural networks are commonly trained using floating-point arithmetic, while most homomorphic encryption libraries calculate on integers, thus requiring a quantisation of the neural network. A straightforward approach would be to quantise to large integer sizes (e.g., 32 bit) to avoid large quantisation errors. In this work, we reduce the integer sizes of the networks, using quantisation-aware training, to allow more efficient computations. For the targeted MNIST architecture proposed by Badawi et al., we reduce the integer sizes by 33% without significant loss of accuracy, while for the CIFAR architecture, we can reduce the integer sizes by 43%. Implementing the resulting networks under the BFV homomorphic encryption scheme using SEAL, we could reduce the execution time of an MNIST neural network by 80% and that of a CIFAR neural network by 40%.
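As a rough illustration of the integer-size versus accuracy trade-off discussed above, the following Python sketch performs plain symmetric post-training quantisation of a random weight matrix. It is not the paper's quantisation-aware training pipeline and does not involve SEAL or BFV; all names and parameters are ours.

```python
# Minimal sketch of symmetric integer quantisation of a weight matrix, to
# illustrate the integer-size / accuracy trade-off the paper exploits.
# Plain post-training quantisation, not the quantisation-aware training
# used in the paper, and no SEAL/BFV involved.
import numpy as np

def quantise(weights: np.ndarray, bits: int):
    """Map float weights to signed integers of the given bit width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax        # one scale per tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float64) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64))

for bits in (32, 8, 4):
    q, s = quantise(w, bits)
    err = np.max(np.abs(dequantise(q, s) - w))
    print(f"{bits:2d}-bit integers: max quantisation error {err:.2e}")
```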
Nico Döttling, Phillip Gajland, Giulio Malavolta
ePrint Report
Laconic function evaluation (LFE) allows Alice to compress a large circuit $\mathbf{C}$ into a small digest $\mathsf{d}$. Given Alice's digest, Bob can encrypt some input $x$ under $\mathsf{d}$ in a way that enables Alice to recover $\mathbf{C}(x)$, without learning anything beyond that. The scheme is said to be $laconic$ if the size of $\mathsf{d}$, the runtime of the encryption algorithm, and the size of the ciphertext are all sublinear in the size of $\mathbf{C}$. Until now, all known LFE constructions have ciphertexts whose size depends on the $depth$ of the circuit $\mathbf{C}$, akin to the limitation of $levelled$ homomorphic encryption. In this work we close this gap and present the first LFE scheme (for Turing machines) with asymptotically optimal parameters. Our scheme assumes the existence of indistinguishability obfuscation and somewhere statistically binding hash functions.

As further contributions, we show how our scheme enables a wide range of new applications, including two previously unknown constructions:

• Non-interactive zero-knowledge (NIZK) proofs with optimal prover complexity.

• Witness encryption and attribute-based encryption (ABE) for Turing machines from falsifiable assumptions.
Marshall Ball, Hanjun Li, Huijia Lin, Tianren Liu
ePrint Report
The beautiful work of Applebaum, Ishai, and Kushilevitz (AIK) [FOCS'11] initiated the study of arithmetic variants of Yao's garbled circuits. An arithmetic garbling scheme is an efficient transformation that converts an arithmetic circuit $C: \mathcal{R}^n \rightarrow \mathcal{R}^m$ over a ring $\mathcal{R}$ into a garbled circuit $\widehat C$ and $n$ affine functions $L_i$ for $i \in [n]$, such that $\widehat C$ and $L_i(x_i)$ reveal only the output $C(x)$ and no other information about $x$. AIK presented the first arithmetic garbling scheme supporting computation over integers from a bounded (possibly exponentially large) range, based on Learning With Errors (LWE). In contrast, converting $C$ into a Boolean circuit and applying Yao's garbled circuit treats the inputs as bit strings instead of ring elements, and hence is not "arithmetic".

In this work, we present new ways to garble arithmetic circuits, which improve the state-of-the-art on efficiency, modularity, and functionality. To measure efficiency, we define the rate of a garbling scheme as the maximal ratio between the bit-length of the garbled circuit $|\widehat C|$ and that of the computation tableau $|C|\ell$ in the clear, where $\ell$ is the bit length of wire values (e.g., Yao's garbled circuit has rate $O(\lambda)$).

$\bullet$ We present the first constant-rate arithmetic garbled circuit for computation over large integers based on the Decisional Composite Residuosity (DCR) assumption, significantly improving the efficiency of the schemes of Applebaum, Ishai, and Kushilevitz.

$\bullet$ We construct an arithmetic garbling scheme for modular computation over $\mathcal{R} = \mathbb{Z}_p$ for any integer modulus $p$, based on either DCR or LWE. The DCR-based instantiation achieves rate $O(\lambda)$ for large $p$. Furthermore, our construction is modular and makes black-box use of the underlying ring and a simple key extension gadget.

$\bullet$ We describe a variant of the first scheme supporting arithmetic circuits over bounded integers that are augmented with Boolean computation (e.g., truncation of an integer value, and comparison between two values), while keeping the constant rate when garbling the arithmetic part.

To the best of our knowledge, constant-rate (Boolean or arithmetic) garbling was only achieved before using the powerful primitive of indistinguishability obfuscation, or for restricted circuits with small depth.
Giulio Malavolta, Michael Walter
ePrint Report
Quantum key distribution (QKD) allows Alice and Bob to agree on a shared secret key, while communicating over a public (untrusted) quantum channel. Compared to classical key exchange, it has two main advantages:

(i) The key is unconditionally hidden from any attacker, and

(ii) its security assumes only the existence of authenticated classical channels which, in practice, can be realized using Minicrypt assumptions, such as the existence of digital signatures.

On the flip side, QKD protocols typically require multiple rounds of interactions, whereas classical key exchange can be realized with the minimal amount of two messages. A long-standing open question is whether QKD requires more rounds of interaction than classical key exchange.

In this work, we propose a two-message QKD protocol that satisfies everlasting security, assuming only the existence of quantum-secure one-way functions. That is, the shared key is unconditionally hidden, provided computational assumptions hold during the protocol execution. Our result follows from a new quantum cryptographic primitive that we introduce in this work: the quantum-public-key one-time pad, a public-key analogue of the well-known one-time pad.
Andreas Brüggemann, Robin Hundt, Thomas Schneider, Ajith Suresh, Hossein Yalame
ePrint Report
The concept of using Lookup Tables (LUTs) instead of Boolean circuits is well known and has been widely applied in a variety of applications, including FPGAs, image processing, and database management systems. In cryptography, using such LUTs instead of conventional gates like AND and XOR results in more compact circuits and has been shown to substantially improve online performance when evaluated with secure multi-party computation. Several recent works on secure floating-point computations and privacy-preserving machine learning inference rely heavily on existing LUT techniques. However, they suffer from either a large overhead in the setup phase or subpar online performance.

We propose FLUTE, a novel protocol for secure LUT evaluation with good setup and online performance. In a two-party setting, we show that FLUTE matches or even outperforms the online performance of all prior approaches, while being competitive in terms of overall performance with the best prior LUT protocols. In addition, we provide an open-source implementation of FLUTE written in the Rust programming language, and implementations of the Boolean secure two-party computation protocols of ABY2.0 and silent OT. We find that FLUTE outperforms the state of the art by two orders of magnitude in the online phase while retaining similar overall communication.
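The following cleartext Python toy illustrates why LUT evaluation can be cheap online: with a public table and an XOR-secret-shared one-hot selection vector, each party only performs local XORs. This is only a sketch of the general idea, not the FLUTE protocol, whose contribution lies in producing such sharings securely and efficiently; the table, helper names, and sizes are ours.

```python
# Minimal cleartext toy: evaluating a public lookup table on an XOR-secret-shared
# one-hot selection vector. The online phase is purely local XORs per party.
# Not the FLUTE protocol; only an illustration of the LUT-instead-of-gates idea.
import secrets

def share_bits(bits):
    """Split a bit list into two XOR shares."""
    s0 = [secrets.randbits(1) for _ in bits]
    s1 = [b ^ r for b, r in zip(bits, s0)]
    return s0, s1

# Public 3-input LUT: here the majority function on (x2, x1, x0).
T = [0, 0, 0, 1, 0, 1, 1, 1]

x = 5                                   # private input index, 0..7
one_hot = [1 if j == x else 0 for j in range(8)]
d0, d1 = share_bits(one_hot)            # assume the setup phase produced these

# Online phase: each party XORs the table entries selected by its own share.
y0 = 0
for t, d in zip(T, d0):
    y0 ^= t & d
y1 = 0
for t, d in zip(T, d1):
    y1 ^= t & d

print("reconstructed T[x]:", y0 ^ y1, " expected:", T[x])
```

The XOR of the two local results equals $T[x]$ because $T[j] \wedge (d_0[j] \oplus d_1[j]) = (T[j] \wedge d_0[j]) \oplus (T[j] \wedge d_1[j])$ for every position $j$.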
Foteini Baldimtsi, Konstantinos Kryptos Chalkias, Francois Garillot, Jonas Lindstrom, Ben Riva, Arnab Roy, Alberto Sonnino, Pun Waiwitlikhit, Joy Wang
ePrint Report
We propose a variant of the original Boneh, Drijvers, and Neven (Asiacrypt '18) BLS multi-signature aggregation scheme best suited to applications where the full set of potential signers is fixed and known, and any subset $I$ of this group can create a multi-signature over a message $m$. This setup is very common in proof-of-stake blockchains, where a $2f+1$ majority of $3f$ validators sign transactions and/or blocks. Our scheme is secure against $\textit{rogue-key}$ attacks without requiring a proof-of-key-possession mechanism.

In our scheme, instead of randomizing the aggregated signatures, we have a one-time randomization phase of the public keys: each public key is replaced by a sticky randomized version (for which each participant can still compute the derived private key). The main benefit compared to the original Boneh et al. approach is that since our randomization process happens only once, and not per signature, we obtain significant savings during aggregation and verification. Specifically, for a subset $I$ of $t$ signers, we save $t$ exponentiations in $\mathbb{G}_2$ at aggregation and $t$ exponentiations in $\mathbb{G}_1$ at verification, or vice versa, depending on which BLS mode we prefer: $\textit{minPK}$ (public keys in $\mathbb{G}_1$) or $\textit{minSig}$ (signatures in $\mathbb{G}_1$).

Interestingly, our security proof requires a significant departure from the co-CDH based proof of Boneh et al. When $n$ (the size of the universal set of signers) is small, we prove our protocol secure in the Algebraic Group and Random Oracle models based on the Discrete Log problem. For larger $n$, our proof also requires the Random Modular Subset Sum (RMSS) problem.
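The following Python toy sketches the one-time key-randomisation idea in a small pairing-free group (a deliberately insecure stand-in, since BLS itself needs pairings): each public key is raised to a hash-derived exponent that depends on the whole key set, and the key owner can compute the matching derived private key. It illustrates the concept only and is not the paper's scheme; the group parameters and helper names are ours.

```python
# Toy sketch of the one-time public-key randomisation idea, in a pairing-free
# toy group (the order-1019 subgroup of Z_2039^*). NOT BLS and NOT secure
# parameters; it only shows that each signer can derive the private key
# matching its randomised public key, so later aggregation and verification
# need no per-signature randomisation.
import hashlib
import secrets

p, q, g = 2039, 1019, 4          # toy safe-prime group (assumption, toy sizes)

def H(*parts) -> int:
    h = hashlib.sha256("|".join(str(x) for x in parts).encode()).digest()
    return int.from_bytes(h, "big") % q or 1

# Fixed, known set of potential signers.
n = 4
sks = [secrets.randbelow(q - 1) + 1 for _ in range(n)]
pks = [pow(g, sk, p) for sk in sks]

# One-time randomisation: r_i depends on pk_i and the whole key set.
rs = [H(pk, *pks) for pk in pks]
pks_rand = [pow(pk, r, p) for pk, r in zip(pks, rs)]
sks_rand = [(sk * r) % q for sk, r in zip(sks, rs)]

# Each signer still knows the secret key of its randomised public key.
assert all(pow(g, sk, p) == pk for sk, pk in zip(sks_rand, pks_rand))
print("derived secret keys match randomised public keys")
```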
Sergey Agievich
ePrint Report
Using the representation of bent functions by bent rectangles, that is, special matrices with restrictions on columns and rows, we obtain an upper bound on the number of bent functions that improves previously known bounds in a practical range of dimensions. The core of our method is the following fact based on the recent observation by Potapov (arXiv:2107.14583): a 2-row bent rectangle is completely defined by one of its rows and the remaining values in slightly more than half of the columns.
Xichao Hu, Yongqiang Li, Lin Jiao, Zhengbin Liu, Mingsheng Wang
ePrint Report
The zero-correlation linear attack is a powerful attack on block ciphers; the lowest number of rounds (LNR) for which no distinguisher (called a zero-correlation linear approximation, ZCLA) exists reflects the resistance of a block cipher against the zero-correlation linear attack. However, due to the large search space, showing that no ZCLAs exist for a given block cipher over a certain number of rounds is a very hard task. Thus, existing works can only prove that no ZCLAs exist in a small search space, such as ZCLAs with 1-bit/nibble/word active inputs and outputs, which leaves a very large gap to showing that no ZCLAs exist in the whole search space.

In this paper, we propose the meet-in-the-middle method and the double-collision method for showing that no ZCLAs exist in the whole search space. The basic ideas of these two methods are very simple, but they work very effectively. We apply them to AES, Midori64, and ARIA, and show that no ZCLAs exist for $5$-round AES without the last MixColumns layer, $7$-round Midori64 without the last MixColumns layer, and $5$-round ARIA without the last linear layer.

As far as we know, ours is the first automatic method that can show that no ZCLAs exist in the whole search space. It provides strong evidence for the security of a block cipher against the zero-correlation linear attack from the distinguisher perspective, which is very useful when designing block ciphers.

05 April 2023

Agnese Gini, Pierrick Méaux
ePrint Report
In this article, we study the Algebraic Immunity (AI) of Weightwise Perfectly Balanced (WPB) functions. After showing a lower bound on the AI of two classes of WPB functions from the previous literature, we prove that the minimal AI of a WPB $n$-variable function is constant, equal to $2$ for $n\ge 4$. Then, we compute the distribution of the AI of WPB functions in $4$ variables, and estimate it in $8$ and $16$ variables. For these values of $n$ we observe that a large majority of WPB functions have optimal AI, and that we could not obtain an AI-$2$ WPB function by sampling at random. Finally, we address the problem of constructing WPB functions with bounded algebraic immunity, exploiting a construction from 2022 by Gini and Méaux. In particular, we present a method to generate multiple WPB functions with minimal AI, and we prove that the WPB functions with high nonlinearity exhibited by Gini and Méaux also have minimal AI. We conclude with a construction giving WPB functions with lower-bounded AI, and give as an example a family whose elements all have AI at least $n/2-\log(n)+1$.
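For readers who want to experiment with algebraic immunity on small examples, the following Python sketch brute-forces the AI of a Boolean function given as a truth-table oracle, which is feasible for $n = 4$. The example function and all helper names are ours, and none of the paper's WPB constructions are reproduced here.

```python
# Brute-force computation of the algebraic immunity (AI) of a small Boolean
# function from its truth table, feasible for n = 4. Helper sketch only; the
# paper's WPB constructions and bounds are not reproduced.
from itertools import combinations, product

n = 4

def monomials_up_to(d):
    """All monomials (as tuples of variable indices) of degree <= d."""
    mons = []
    for deg in range(d + 1):
        mons += list(combinations(range(n), deg))
    return mons

def eval_anf(mons_chosen, x):
    """Evaluate the Boolean function with the given ANF monomials at point x."""
    val = 0
    for mon in mons_chosen:
        val ^= all(x[i] for i in mon)
    return int(val)

def has_annihilator_of_degree(f, d):
    """Is there a nonzero g with deg(g) <= d and g*f = 0 as functions?"""
    mons = monomials_up_to(d)
    points = list(product((0, 1), repeat=n))
    for mask in range(1, 2 ** len(mons)):
        chosen = [m for j, m in enumerate(mons) if (mask >> j) & 1]
        if all(eval_anf(chosen, x) & f(x) == 0 for x in points):
            return True
    return False

def algebraic_immunity(f):
    for d in range(1, n + 1):
        if has_annihilator_of_degree(f, d) or \
           has_annihilator_of_degree(lambda x: 1 ^ f(x), d):
            return d
    return n

# Example: the quadratic function x0*x1 ^ x2*x3 in 4 variables.
f = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])
print("AI =", algebraic_immunity(f))
```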
Quang Dao, Paul Grubbs
ePrint Report
Increasing deployment of advanced zero-knowledge proof systems, especially zkSNARKs, has raised critical questions about their security against real-world attacks. Two classes of attacks of concern in practice are adaptive soundness attacks, where an attacker can prove false statements by choosing its public input after generating a proof, and malleability attacks, where an attacker can use a valid proof to create another valid proof it could not have created itself. Prior work has shown that simulation-extractability (SIM-EXT), a strong notion of security for proof systems, rules out these attacks. In this paper, we prove that two transparent, discrete-log-based zkSNARKs, Spartan and Bulletproofs, are simulation-extractable (SIM-EXT) in the random oracle model if the discrete logarithm assumption holds in the underlying group. Since these assumptions are required to prove standard security properties for Spartan and Bulletproofs, our results show that SIM-EXT is, surprisingly, "for free" with these schemes. Our result is the first SIM-EXT proof for Spartan and encompasses both linear- and sublinear-verifier variants. Our result for Bulletproofs encompasses both the aggregate range proof and arithmetic circuit variants, and is the first to not rely on the algebraic group model (AGM), resolving an open question posed by Ganesh et al. (EUROCRYPT '22). As part of our analysis, we develop a generalization of the tree-builder extraction theorem of Attema et al. (TCC '22), which may be of independent interest.
Yufan Jiang, Yong Li
ePrint Report
Tremendous efforts have been made to improve the efficiency of secure Multi-Party Computation (MPC), which allows n ≥ 2 parties to jointly evaluate a target function without leaking their own private inputs. It has been confirmed by previous researchers that 3-Party Computation (3PC) and outsourcing computations to GPUs can lead to huge performance improvements of MPC in computationally intensive tasks such as Privacy-Preserving Machine Learning (PPML). A natural question to ask is whether a super-linear performance gain is possible for a linear increase in resources. In this paper, we give an affirmative answer to this question. We propose Force, an extremely efficient 4PC system for PPML. To the best of our knowledge, each party in Force performs the fewest local computations and the lowest amount of data exchange between parties. This is achieved by introducing a new sharing type, X-share, along with MPC protocols for privacy-preserving training and inference that are semi-honest secure with an honest majority. Our contribution does not stop at theory. We also propose engineering optimizations and verify the high performance of the protocols with implementation and experiments. By comparing the results with state-of-the-art works such as Cheetah, Piranha, CryptGPU and CrypTen, we showcase that Force is sound and extremely efficient, as it can improve the PPML performance by a factor of 2 to 1200 compared with other recent 2PC, 3PC and 4PC systems.
Carlos Aguilar-Melchor, Martin R. Albrecht, Thomas Bailleux, Nina Bindel, James Howe, Andreas Hülsing, David Joseph, Marc Manzano
ePrint Report
We revisit batch signatures (previously considered in a draft RFC, and used in multiple recent works), where a single, potentially expensive, "inner" digital signature authenticates a Merkle tree constructed from many messages. We formalise a construction and prove its unforgeability and privacy properties.

We also show that batch signing allows us to scale slow signing algorithms, such as those recently selected for standardisation as part of NIST's post-quantum project, to high throughput, with a mild increase in latency. For the example of Falcon-512 in TLS, we can increase the number of connections per second by a factor of 3.2, at the cost of an increase in the signature size by ~14% and in the median latency by ~25%, where both are measured on the same 30-core server.

We also discuss applications where batch signatures allow us to increase throughput and to save bandwidth. For example, again for Falcon-512, once one batch signature is available, the additional bandwidth for each of the remaining N-1 is only 82 bytes.
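The following Python sketch shows the basic batch-signature flow: hash the messages into a Merkle tree, sign the root once with the inner scheme, and attach a per-message inclusion path. Ed25519 (via the `cryptography` package) stands in for the inner signature purely for illustration, whereas the paper targets schemes such as Falcon-512; this is not the paper's formal construction, and the tree layout and helper names are ours.

```python
# Sketch of the batch-signature idea: hash many messages into a Merkle tree,
# produce ONE inner signature on the root, and give each message its inclusion
# path. Ed25519 is only a stand-in for the (slow) inner scheme.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all tree levels, leaves first, root level last."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:                       # duplicate last node if odd
            cur = cur + [cur[-1]]
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def auth_path(levels, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append(level[index ^ 1])
        index //= 2
    return path

def root_from_path(leaf, index, path):
    node = leaf
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node

# Batch-sign N messages with a single inner signature.
messages = [f"msg {i}".encode() for i in range(8)]
leaves = [h(m) for m in messages]
levels = build_tree(leaves)
root = levels[-1][0]

sk = Ed25519PrivateKey.generate()
inner_sig = sk.sign(root)                      # one expensive signature total
pk = sk.public_key()

# Per-message "batch signature" = (inner_sig, index, auth path); verify msg 5.
i = 5
path = auth_path(levels, i)
assert root_from_path(h(messages[i]), i, path) == root
pk.verify(inner_sig, root)                     # raises InvalidSignature if bad
print("message", i, "verified:", len(path), "path hashes +",
      len(inner_sig), "byte inner signature")
```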
Samuel Bedassa Alemu, Julia Kastner
ePrint Report
Blind signatures were originally introduced by Chaum (CRYPTO '82) in the context of privacy-preserving electronic payment systems. Nowadays, the cryptographic primitive has also found applications in anonymous credentials and voting systems. However, many practical blind signature schemes have only been analysed in the game-based setting where a single signer is present. This is somewhat unsatisfactory as blind signatures are intended to be deployed in a setting with many signers. We address this in the following ways:

– We formalise two variants of one-more-unforgeability of blind signatures in the Multi-Signer Setting.

– We show that one-more-unforgeability in the Single-Signer Setting translates straightforwardly to the Multi-Signer Setting with a reduction loss proportional to the number of signers.

– We identify a class of blind signature schemes, which we call Key-Convertible, where this reduction loss can be traded for an increased number of signing sessions in the Single-Signer Setting, and show that many practical blind signature schemes such as blind BLS, blind Schnorr, and blind Okamoto-Schnorr, as well as two pairing-free, ROS-immune schemes by Tessaro and Zhu (Eurocrypt '22), fulfil this property.

– We further describe how the notion of key substitution attacks (Menezes and Smart, DCC '04) can be translated to blind signatures and provide a generic transformation of how they can be avoided.
Fuyuki Kitagawa, Tomoyuki Morimae, Ryo Nishimaki, Takashi Yamakawa
ePrint Report
We construct quantum public-key encryption from one-way functions. In our construction, public keys are quantum, but ciphertexts are classical. Quantum public-key encryption from one-way functions (or weaker primitives such as pseudorandom function-like states) has also been proposed in some recent works [Morimae-Yamakawa, eprint:2022/1336; Coladangelo, eprint:2023/282; Grilo-Sattath-Vu, eprint:2023/345; Barooti-Malavolta-Walter, eprint:2023/306]. However, they have a huge drawback: they are secure only when quantum public keys can be transmitted to the sender (who runs the encryption algorithm) without being tampered with by the adversary, which seems to require unsatisfactory physical setup assumptions such as secure quantum channels. Our construction is free from such a drawback: it guarantees the secrecy of the encrypted messages even if we assume only unauthenticated quantum channels. Thus, the encryption is done with adversarially tampered quantum public keys. Our construction, based only on one-way functions, is the first quantum public-key encryption that achieves the goal of classical public-key encryption, namely, to establish secure communication over insecure channels.
Eric Sageloli, Pierre Pébereau, Pierrick Méaux, Céline Chevalier
ePrint Report
We provide identity-based signature (IBS) schemes with tight security against adaptive adversaries, in the (classical or quantum) random oracle model (ROM or QROM), in both unstructured and structured lattices, based on the SIS or RSIS assumption. These signatures are short (of size independent of the message length). Our schemes build upon a work from Pan and Wagner (PQCrypto’21) and improve on it in several ways. First, we prove their transformation from non-adaptive to adaptive IBS in the QROM. Then, we simplify the parameters used and give concrete values. Finally, we simplify the signature scheme by using a non-homogeneous relation, which helps us reduce the size of the signature and get rid of one costly trapdoor delegation. On the whole, we get better security bounds, shorter signatures and faster algorithms.
Sagnik Saha, Nikolaj Ignatieff Schwartzbach, Prashant Nalini Vasudevan
ePrint Report
In the average-case $k$-SUM problem, given $r$ integers chosen uniformly at random from $\{0,\ldots,M-1\}$, the objective is to find a set of $k$ numbers that sum to 0 modulo $M$ (this set is called a "solution"). In the related $k$-XOR problem, given $r$ uniformly random Boolean vectors of length $\log M$, the objective is to find a set of $k$ of them whose bitwise-XOR is the all-zero vector. Both of these problems have widespread applications in the study of fine-grained complexity and cryptanalysis.

The feasibility and complexity of these problems depends on the relative values of $k$, $r$, and $M$. The dense regime of $M \leq r^k$, where solutions exist with high probability, is quite well-understood and we have several non-trivial algorithms and hardness conjectures here. Much less is known about the sparse regime of $M\gg r^k$, where solutions are unlikely to exist. The best answers we have for many fundamental questions here are limited to whatever carries over from the dense or worst-case settings.

We study the planted $k$-SUM and $k$-XOR problems in the sparse regime. In these problems, a random solution is planted in a randomly generated instance and has to be recovered. As $M$ increases past $r^k$, these planted solutions tend to be the only solutions with increasing probability, potentially becoming easier to find. We show several results about the complexity and applications of these problems.

Conditional Lower Bounds. Assuming established conjectures about the hardness of average-case (non-planted) $k$-SUM when $M = r^k$, we show non-trivial lower bounds on the running time of algorithms for planted $k$-SUM when $r^k\leq M\leq r^{2k}$. We show the same for $k$-XOR as well.

Search-to-Decision Reduction. For any $M>r^k$, suppose there is an algorithm running in time $T$ that can distinguish between a random $k$-SUM instance and a random instance with a planted solution, with success probability $(1-o(1))$. Then, for the same $M$, there is an algorithm running in time $\tilde{O}(T)$ that solves planted $k$-SUM with constant probability. The same holds for $k$-XOR as well.

Hardness Amplification. For any $M \geq r^k$, if an algorithm running in time $T$ solves planted $k$-XOR with success probability $\Omega(1/\text{polylog}(r))$, then there is an algorithm running in time $\tilde O(T)$ that solves it with probability $(1-o(1))$. We show this by constructing a rapidly mixing random walk over $k$-XOR instances that preserves the planted solution.

Cryptography. For some $M \leq 2^{\text{polylog}(r)}$, the hardness of the $k$-XOR problem can be used to construct Public-Key Encryption (PKE) assuming that the Learning Parity with Noise (LPN) problem with constant noise rate is hard for $2^{n^{0.01}}$-time algorithms. Previous constructions of PKE from LPN needed either a noise rate of $O(1/\sqrt{n})$, or hardness for $2^{n^{0.5}}$-time algorithms.

Algorithms. For any $M \geq 2^{r^2}$, there is a constant $c$ (independent of $k$) and an algorithm running in time $r^c$ that, for any $k$, solves planted $k$-SUM with success probability $\Omega(1/8^k)$. We get this by showing an average-case reduction from planted $k$-SUM to the Subset Sum problem. For $r^k \leq M \ll 2^{r^2}$, the best known algorithms are still the worst-case $k$-SUM algorithms running in time $r^{\lceil{k/2}\rceil-o(1)}$.
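To make the planted setting concrete, here is a small Python sketch that generates a planted $k$-XOR instance in the sparse regime and recovers the planted solution by brute force over $k$-subsets. It illustrates the problem definition only and implements none of the reductions or algorithms above; all parameter values are toy choices of ours.

```python
# Toy generator for a planted k-XOR instance in the sparse regime, plus a
# brute-force recovery of the planted solution. Illustrates the problem
# definition only; none of the paper's reductions or algorithms are used.
from itertools import combinations
import secrets

k = 3            # solution size
r = 24           # number of vectors in the instance
logM = 40        # bit length of each vector; M = 2^40 >> r^k, so sparse regime

def random_vec():
    return secrets.randbits(logM)

# Sample r-k random vectors, then plant k vectors whose XOR is zero:
# choose k-1 of them at random and set the k-th to their XOR.
instance = [random_vec() for _ in range(r - k)]
planted = [random_vec() for _ in range(k - 1)]
last = 0
for v in planted:
    last ^= v
planted.append(last)
instance += planted
secrets.SystemRandom().shuffle(instance)

# Brute force over all k-subsets (fine for toy sizes; ~2000 subsets here).
for subset in combinations(range(r), k):
    acc = 0
    for i in subset:
        acc ^= instance[i]
    if acc == 0:
        print("solution indices:", subset)
        break
```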