BKW Meets Fourier: New Algorithms for LPN with Sparse Parities

Authors:
Dana Dachman-Soled
Huijing Gong
Hunter Kippen
Aria Shahverdi
DOI: 10.1007/978-3-030-90453-1_23
Abstract: We consider the Learning Parity with Noise (LPN) problem with a sparse secret, where the secret vector $\mathbf{s}$ of dimension $n$ has Hamming weight at most $k$. We are interested in algorithms with asymptotic improvement in the \emph{exponent} beyond the state of the art. Prior work in this setting presented algorithms with runtime $n^{c \cdot k}$ for constant $c < 1$, a constant-factor improvement in the exponent over brute-force search, which runs in time ${n \choose k}$. We obtain the following results:

- We first consider the \emph{constant} error rate setting, and in this case present a new algorithm that leverages a subroutine from the acclaimed BKW algorithm [Blum, Kalai, Wasserman, J.~ACM '03] as well as techniques from Fourier analysis for $p$-biased distributions (the problem setting is illustrated in the first sketch after the abstract). Our algorithm achieves asymptotic improvement in the exponent compared to prior work when the sparsity $k = k(n) = \frac{n}{\log^{1+ 1/c}(n)}$, where $c \in o(\log \log(n))$ and $c \in \omega(1)$. The runtime and sample complexity of this algorithm are approximately equal.

- We next consider the \emph{low noise} setting, where the error is subconstant. We present a new algorithm in this setting that requires only a \emph{polynomial} number of samples and achieves asymptotic improvement in the exponent compared to prior work when the sparsity $k = \frac{1}{\eta} \cdot \frac{\log(n)}{\log(f(n))}$, the noise rate $\eta \neq 1/2$, and $\eta^2 = \Theta\left(\frac{\log(n)}{n} \cdot f(n)\right)$, for $f(n) \in \omega(1) \cap n^{o(1)}$. To obtain the improvement in sample complexity, we create subsets of samples using the \emph{design} of Nisan and Wigderson [J.~Comput.~Syst.~Sci. '94], so that any two subsets have a small intersection while the number of subsets is large (see the design sketch below). Each of these subsets is used to generate a single $p$-biased sample for the Fourier analysis step. We then show that this allows us to bound the covariance of pairs of samples, which is sufficient for the Fourier analysis.

- Finally, we show that our first algorithm extends to the setting where the noise rate is very high, i.e., $1/2 - o(1)$, and in this case it can be used as a subroutine to obtain new algorithms for learning DNFs and Juntas. Our algorithms achieve asymptotic improvement in the exponent for certain regimes. For DNFs of size $s$ with approximation factor $\epsilon$, this regime is when $\log \frac{s}{\epsilon} \in \omega \left( \frac{c}{\log n \log \log c}\right)$ and $\log \frac{s}{\epsilon} \in n^{1 - o(1)}$, for $c \in n^{1 - o(1)}$. For Juntas on $k$ variables, the regime is when $k \in \omega \left( \frac{c}{\log n \log \log c}\right)$ and $k \in n^{1 - o(1)}$, for $c \in n^{1 - o(1)}$.
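To make the problem setting concrete, the following minimal Python sketch generates LPN samples with a $k$-sparse secret and recovers the support via the naive ${n \choose k}$ brute-force search that the above algorithms improve upon. It is illustrative only, not the paper's algorithm; all parameter values are hypothetical.

import random
from itertools import combinations

def lpn_samples(n, k, eta, m, seed=0):
    # Draw a secret s of Hamming weight k (represented by its support),
    # then output m samples (a, b) with a uniform in {0,1}^n and
    # b = <a, s> + e (mod 2), where the noise bit e is 1 w.p. eta.
    rng = random.Random(seed)
    support = sorted(rng.sample(range(n), k))
    samples = []
    for _ in range(m):
        a = [rng.randrange(2) for _ in range(n)]
        b = (sum(a[i] for i in support) + (rng.random() < eta)) % 2
        samples.append((a, b))
    return samples, support

def brute_force(samples, n, k):
    # Baseline n-choose-k search: return the size-k candidate support
    # whose parity agrees with the most labels.
    def agreement(cand):
        return sum((sum(a[i] for i in cand) % 2) == b for a, b in samples)
    return sorted(max(combinations(range(n), k), key=agreement))

samples, support = lpn_samples(n=16, k=2, eta=0.1, m=300)
print(brute_force(samples, 16, 2) == support)  # True w.h.p. at this noise

At constant noise $\eta$ and enough samples, the true support wins the agreement count with high probability; the point of the results above is to beat this ${n \choose k}$ candidate enumeration asymptotically in the exponent.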
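The sample-complexity improvement in the low-noise result rests on a combinatorial design in the style of Nisan and Wigderson: a large family of equal-size subsets of a small universe, any two of which intersect in few points. Below is a sketch of the standard polynomial-based construction, assuming a prime field size q; how the paper maps samples into these subsets is not reproduced here.

from itertools import product

def nw_design(q, d):
    # Nisan-Wigderson-style design over the universe F_q x F_q (q prime):
    # for each polynomial p of degree < d over F_q, take its graph
    # S_p = {(x, p(x)) : x in F_q}. This yields q^d subsets of size q,
    # and two distinct subsets share at most d-1 points, since distinct
    # degree-<d polynomials agree on at most d-1 inputs.
    design = []
    for coeffs in product(range(q), repeat=d):
        design.append(frozenset(
            (x, sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q)
            for x in range(q)
        ))
    return design

design = nw_design(q=5, d=2)              # 25 subsets, each of size 5
assert all(len(S & T) <= 1                # pairwise intersections <= d-1
           for i, S in enumerate(design) for T in design[:i])

As the abstract states, each subset indexes a group of samples that is combined into a single $p$-biased sample for the Fourier analysis step; the small pairwise intersections are what keep the covariance between pairs of derived samples bounded.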
BibTeX
@inproceedings{tcc-2021-31554,
  title={BKW Meets Fourier: New Algorithms for LPN with Sparse Parities},
  booktitle={Theory of Cryptography, 19th International Conference, TCC 2021},
  publisher={Springer},
  doi={10.1007/978-3-030-90453-1_23},
  author={Dana Dachman-Soled and Huijing Gong and Hunter Kippen and Aria Shahverdi},
  year={2021}
}