IACR News item: 17 June 2025
Shanxiang Lyu, Ling Liu, Cong Ling
The Learning Parity with Noise (LPN) problem has become a cornerstone for building lightweight, post-quantum secure encryption schemes. Despite its widespread adoption, LPN-based constructions suffer from a fundamental efficiency limitation: the essential noise term that provides security simultaneously requires error correction coding, leading to bandwidth overhead. We introduce a variant of LPN termed Learning Parity with Quantization (LPQ). While maintaining the ``learning from noisy equations'' framework, LPQ generates Bernoulli-like noise from code-aided quantization and enables simultaneous security and compression. Formally, the $\text{LPQ}_{N,n,\mathcal{C}}$ problem challenges adversaries to distinguish the triplet $(\mathbf{A}, Q_{\mathcal{C}}(\mathbf{A}\mathbf{s} \oplus \mathbf{u}), \mathbf{u})$ from uniform, where $Q_{\mathcal{C}}$ is a vector quantization function based on an $(N,K)$ code $\mathcal{C}$, and $\mathbf{u}$ serves as a public dither. We establish the hardness of LPQ through a tight reduction from the LPN problem, maintaining equivalent security guarantees. We demonstrate LPQ’s practical efficacy through a full rate (i.e., rate-1) symmetric key encryption scheme, where LPQ combined with an extendable output function (XOF) achieves optimal ciphertext efficiency ($|\text{ct}| = |\text{pt}|$).
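As a rough illustration of how an LPQ sample $(\mathbf{A}, Q_{\mathcal{C}}(\mathbf{A}\mathbf{s} \oplus \mathbf{u}), \mathbf{u})$ can be formed, the Python sketch below instantiates $Q_{\mathcal{C}}$ as a nearest-codeword quantizer over the $(7,4)$ Hamming code with toy parameter sizes; the code choice, quantization rule, and parameters are illustrative assumptions, not the paper's concrete instantiation.

```python
# Toy LPQ sample: (A, Q_C(A s xor u), u), assuming a nearest-codeword
# quantizer Q_C and the (7,4) Hamming code as C (illustrative choices only).
import itertools
import numpy as np

N, K, n = 7, 4, 16  # code length, code dimension, secret length (toy sizes)

# Generator matrix of the (7,4) Hamming code in systematic form.
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
], dtype=np.uint8)

# Enumerate all 2^K codewords of C.
codewords = np.array(
    [(np.array(m, dtype=np.uint8) @ G) % 2
     for m in itertools.product([0, 1], repeat=K)],
    dtype=np.uint8)

def quantize(x: np.ndarray) -> np.ndarray:
    """Q_C: map x in F_2^N to a nearest codeword in Hamming distance."""
    dists = np.count_nonzero(codewords ^ x, axis=1)
    return codewords[np.argmin(dists)]

rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=n, dtype=np.uint8)       # secret
A = rng.integers(0, 2, size=(N, n), dtype=np.uint8)  # public matrix
u = rng.integers(0, 2, size=N, dtype=np.uint8)       # public dither

y = (A @ s + u) % 2   # A s xor u
t = quantize(y)       # Q_C(A s xor u): the published part of the LPQ sample
e = y ^ t             # induced quantization "noise", Bernoulli-like in the paper's framing

print("LPQ sample: t =", t, " dither u =", u, " quantization error e =", e)
```

In this sketch the quantization error $e = (\mathbf{A}\mathbf{s} \oplus \mathbf{u}) \oplus Q_{\mathcal{C}}(\mathbf{A}\mathbf{s} \oplus \mathbf{u})$ plays the role that sampled Bernoulli noise plays in LPN, while the published value is a codeword of $\mathcal{C}$ and can therefore be represented with $K < N$ bits, which is the compression effect the abstract refers to.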