IACR News item: 01 December 2025
Xin Li, Songtao Mao, Zhaienhe Zhou
We study Batch Learning Parity with Noise (LPN), a variant in which the oracle returns $k$ samples in a batch and draws the noise vector from a joint noise distribution $\mathcal{D}$ on $\mathbb{F}_2^k$ (instead of i.i.d.). This model captures a broad range of correlated or structured noise patterns studied in cryptography and learning theory, and was formally defined in recent work by Golowich, Moitra, and Rohatgi (FOCS 2024). Understanding which noise distributions preserve the hardness of LPN has since become an important question.
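For concreteness, here is a minimal Python sketch of one oracle query (our illustration, not the paper's; `batch_lpn_oracle` and `sample_noise` are hypothetical names). The only departure from standard LPN is that the batch noise $e \in \mathbb{F}_2^k$ is drawn jointly from $\mathcal{D}$ rather than coordinate-wise.

```python
import secrets

def batch_lpn_oracle(s, k, sample_noise):
    """One query to a (hypothetical) Batch LPN oracle.

    s            -- secret vector in F_2^n, as a list of bits
    k            -- batch size
    sample_noise -- draws a joint noise vector e in F_2^k from D
    Returns k pairs (a_i, <a_i, s> + e_i mod 2); the only change
    from standard LPN is that e = (e_1, ..., e_k) is drawn jointly.
    """
    n = len(s)
    e = sample_noise(k)  # e ~ D on F_2^k, not i.i.d. coordinate noise
    batch = []
    for i in range(k):
        a = [secrets.randbelow(2) for _ in range(n)]  # uniform a_i in F_2^n
        b = (sum(ai * si for ai, si in zip(a, s)) + e[i]) % 2
        batch.append((a, b))
    return batch
```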
On the hardness side, we design several reductions from standard LPN to Batch LPN. Our reductions provide a more comprehensive characterization of hard distributions. Specifically, we show that a Batch LPN instance is as hard as standard LPN with noise rate $\eta:=\frac{1}{2}-\varepsilon$ provided that its noise distribution $\mathcal{D}$ satisfies one of the following:
1. The noise distribution $\mathcal{D}$ satisfies a mild Fourier-analytic condition, namely $\sum_{s\neq 0}|\widehat{P}_{\mathcal{D}}(s)|\le 2\varepsilon$ (a sketch for checking this condition follows the list).
2. The noise distribution $\mathcal{D}$ is $\Omega(\eta \cdot k 2^{-k})$-dense (i.e., every error pattern occurs with probability at least $\Omega(\eta \cdot k 2^{-k})$) for $\eta < 1/k$.
3. The noise distribution $\mathcal{D}$ is a $\delta$-Santha-Vazirani source. Here our reduction improves the allowable bias $\delta$ from $O(2^{-k}\varepsilon)$ (in Golowich et al.) to $O(2^{-k/2}\varepsilon)$.
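To make condition 1 concrete, the following brute-force Python sketch (our illustration, not from the paper) computes the Fourier coefficients of a distribution on $\mathbb{F}_2^k$ and checks the bound. We assume the common normalization $\widehat{P}_{\mathcal{D}}(s)=\mathbb{E}_{e\sim\mathcal{D}}[(-1)^{\langle s,e\rangle}]$, which the abstract does not fix.

```python
import itertools

def fourier_coeffs(P, k):
    """Fourier coefficients of a distribution P on F_2^k under the
    normalization P_hat(s) = E_{e ~ P}[(-1)^{<s, e>}].
    P maps each k-bit tuple to its probability."""
    coeffs = {}
    for s in itertools.product((0, 1), repeat=k):
        coeffs[s] = sum(p * (-1) ** sum(si * ei for si, ei in zip(s, e))
                        for e, p in P.items())
    return coeffs

def satisfies_condition(P, k, eps):
    """Check sum_{s != 0} |P_hat(s)| <= 2*eps (brute force over 2^k terms)."""
    coeffs = fourier_coeffs(P, k)
    return sum(abs(c) for s, c in coeffs.items() if any(s)) <= 2 * eps

# Example: i.i.d. Bernoulli(eta) noise has P_hat(s) = (1 - 2*eta)^{wt(s)},
# so the sum above equals (1 + |1 - 2*eta|)^k - 1.
k, eta = 2, 0.25
P = {e: eta ** sum(e) * (1 - eta) ** (k - sum(e))
     for e in itertools.product((0, 1), repeat=k)}
print(satisfies_condition(P, k, eps=0.7))  # True: sum = 1.5**2 - 1 = 1.25 <= 1.4
```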
On the algorithmic side, we design an algorithm that solves Batch LPN whenever the noise distribution assigns sufficiently small probability to at least one point, which yields an algorithm-versus-hardness separation for Batch LPN. Our algorithm can be seen as an extension of Arora and Ge's (ICALP 2011) linearization attack.
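To illustrate why a low-probability error pattern helps, here is a sketch of the linearization idea in the special case where some pattern $v \in \mathbb{F}_2^k$ never occurs (our reading; the paper's algorithm, which only needs the probability to be sufficiently small, is more involved).

```latex
% Each batch (a_1,b_1),\dots,(a_k,b_k) has noise e_i = \langle a_i, s\rangle + b_i.
% If the pattern v has probability 0 under \mathcal{D}, then e \neq v always,
% i.e., not every coordinate of e matches v, which over \mathbb{F}_2 reads
\[
  \prod_{i=1}^{k}\bigl(\langle a_i, s\rangle + b_i + v_i + 1\bigr) = 0,
\]
% since each factor equals 1 exactly when e_i = v_i. Expanding gives a
% degree-k polynomial equation in s; replacing each monomial
% \prod_{j \in T} s_j with |T| \le k by a fresh variable linearizes the
% system over O(n^k) unknowns, which Gaussian elimination solves once
% enough batches are collected; this is the Arora--Ge strategy.
```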
Our reductions are based on random affine transformations, developed and analyzed through the lens of Fourier analysis, which provides a general framework for studying various LPN variants.
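The Fourier-analytic viewpoint pairs naturally with affine maps via the following standard identity (our illustration of the general mechanism, not a statement from the paper): passing the noise through an invertible affine map $e' = Me + t$ over $\mathbb{F}_2^k$ permutes and sign-twists its Fourier coefficients.

```latex
\[
  \widehat{P}_{e'}(s)
  = \mathbb{E}_{e\sim\mathcal{D}}\bigl[(-1)^{\langle s,\,Me+t\rangle}\bigr]
  = (-1)^{\langle s,t\rangle}\,\widehat{P}_{\mathcal{D}}\bigl(M^{\top}s\bigr).
\]
% Hence a random affine transformation redistributes the nonzero Fourier
% mass of the noise, which is why conditions such as
% \sum_{s\neq 0}|\widehat{P}_{\mathcal{D}}(s)| \le 2\varepsilon are the
% natural quantities to track in such reductions.
```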