International Association for Cryptologic Research

A Lower Bound for Proving Hardness of Learning with Rounding with Polynomial Modulus

Parker Newton , University of California, Riverside
Silas Richelson , University of California, Riverside
DOI: 10.1007/978-3-031-38554-4_26
Presentation: Slides
Conference: CRYPTO 2023
Abstract: Regev's Learning with Errors (LWE) problem (STOC 2005) is a fundamental hardness assumption for modern cryptography. The Learning with Rounding (LWR) problem was put forth by Banerjee, Peikert, and Rosen (Eurocrypt 2012) as an alternative to LWE, for use in cryptographic situations which require determinism. The only method we currently have for proving hardness of LWR is the so-called ``rounding reduction,'' which is a specific reduction from an analogous LWE problem. This reduction works whenever the LWE error is small relative to the noise introduced by rounding, but it fails otherwise. For this reason, all prior work on establishing hardness of LWR forces the LWE error to be small, either by setting other parameters extremely large (which hurts performance), or by limiting the number of LWR samples seen by the adversary (which rules out certain applications). Hardness of LWR is poorly understood when the LWE modulus ($q$) is polynomial and the number of LWE samples ($m$) seen by the adversary is an unbounded polynomial. This range of parameters is the most relevant for practical implementations, so the lack of a hardness proof in this situation is not ideal. In this work, we identify an obstacle to proving the hardness of LWR from LWE in the above setting, where $q$ is polynomial and $m$ is an unbounded polynomial. Specifically, we show that any ``pointwise'' reduction from LWE to LWR (\emph{i.e.}, any reduction which maps LWE samples independently to LWR samples) admits an efficient algorithm which directly solves LWE (without the use of an LWR solver). Consequently, LWE cannot be reduced to LWR in our pointwise reduction model with our setting of $q$ and $m$, unless LWE is easy.
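As a rough illustration (not taken from the paper), the rounding reduction described above can be sketched in a few lines of Python. The parameters $n$, $q$, $p$ and the error bound are arbitrary illustrative choices; an LWR sample rounds the inner product $\langle a, s \rangle \bmod q$ to the nearest of $p$ intervals, and the reduction simply rounds the noisy LWE value $b$ the same way:

```python
import random

random.seed(0)

n, q, p = 16, 12289, 16          # illustrative parameters, not from the paper
s = [random.randrange(q) for _ in range(n)]  # secret

def round_qp(x):
    """Round x in Z_q to Z_p, i.e. the nearest integer to (p/q) * x, mod p."""
    return ((p * x + q // 2) // q) % p

def lwe_sample(err_bound):
    """An LWE sample (a, <a,s> + e mod q) with uniform a and small error e."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-err_bound, err_bound)
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

def lwr_value(a):
    """The LWR value for a: round_qp(<a,s> mod q), with no added error."""
    return round_qp(sum(ai * si for ai, si in zip(a, s)) % q)

# The rounding reduction maps an LWE sample (a, b) to (a, round_qp(b)).
# This equals the true LWR sample unless b lands within ~err_bound of a
# rounding boundary, so it succeeds when the error is small relative to q/p.
trials, agree = 1000, 0
for _ in range(trials):
    a, b = lwe_sample(err_bound=2)
    if round_qp(b) == lwr_value(a):
        agree += 1
print(agree / trials)  # close to 1 for this small error bound
```

With error bound 2 and $q/p \approx 768$, a rounding boundary is hit only rarely; shrinking $q/p$ (or growing the error) makes the rounded LWE value disagree with the true LWR value more often, which is exactly where the reduction breaks down.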
Our model of a pointwise reduction from LWE to LWR captures all prior reductions from LWE to LWR except the rejection sampling reduction of Bogdanov \emph{et al.} (TCC 2016); while their reduction still operates in a pointwise manner, it can reject an LWE sample instead of mapping it to an LWR sample. However, we conjecture that our result still holds in this setting. Our argument proceeds roughly as follows. First, we show that any pointwise reduction from LWE to LWR must have good agreement with some affine map. Then, we use the affine agreement of a pointwise reduction together with a type of Goldreich-Levin ``prediction-implies-inversion'' argument to extract the LWE secret from LWE input samples. Both components may be of independent interest.
@inproceedings{crypto-2023-newton-richelson,
  title={A Lower Bound for Proving Hardness of Learning with Rounding with Polynomial Modulus},
  author={Parker Newton and Silas Richelson},
  booktitle={CRYPTO 2023},
  year={2023},
  doi={10.1007/978-3-031-38554-4_26}
}