IACR News item: 17 May 2025
Fabrice Benhamouda, Caicai Chen, Shai Halevi, Yuval Ishai, Hugo Krawczyk, Tamer Mour, Tal Rabin, Alon Rosen
Motivated by applications to efficient secure computation, we consider the following problem of encrypted matrix–vector product (EMVP). Let \(\mathbb F\) be a finite field. In an offline phase, a client uploads an encryption of a matrix \(M \in \mathbb F^{m\times \ell}\) to a server, keeping only a short secret key. The server stores the encrypted matrix \(\hat{M}\).
In the online phase, the client may repeatedly send encryptions \(\hat{q}_i\) of query vectors \(q_i \in \mathbb F^\ell\), enabling the client and the server to locally compute compact shares of the matrix–vector product \(Mq_i\). The server learns nothing about \(M\) or \(q_i\). The shared output can either be revealed to the client or processed by another protocol.
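To make the target functionality concrete, the following is a minimal sketch of what EMVP computes, written as an ideal functionality over a prime field (this is not the protocol: a trusted dealer sees everything and simply deals out additive shares). It only pins down the interface implied by the abstract: after the online phase, the client and server hold compact vectors that reconstruct \(Mq\); all names and parameters are illustrative.

```python
# Ideal-functionality sketch of EMVP over a toy prime field F_p (illustrative only).
import numpy as np

p = 97  # toy prime field F_p
rng = np.random.default_rng(1)

def ideal_emvp(M: np.ndarray, q: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return additive shares (client_share, server_share) of M q over F_p."""
    y = (M @ q) % p
    server_share = rng.integers(0, p, size=y.shape)   # uniformly random share
    client_share = (y - server_share) % p             # the two shares sum to M q
    return client_share, server_share

m, l = 4, 6
M = rng.integers(0, p, size=(m, l))   # client's matrix
q = rng.integers(0, p, size=(l,))     # client's query
c_sh, s_sh = ideal_emvp(M, q)
assert np.array_equal((c_sh + s_sh) % p, (M @ q) % p)   # shares reconstruct M q
```

In the real protocols, of course, no dealer exists: the server computes its share from \(\hat{M}\) and \(\hat{q}_i\) alone, and the encryptions hide \(M\) and \(q_i\) under LPN/LSN.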
We present efficient EMVP protocols based on variants of the learning parity with noise (LPN) assumption and the related learning subspace with noise (LSN) assumption.
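For background, the following sketch shows a standard LPN sample over \(\mathbb F_2\); the paper relies on structured variants of LPN and on LSN, which are not reproduced here.

```python
# Standard LPN samples over F_2: (A, A s + e) with Bernoulli(tau) noise e.
import numpy as np

rng = np.random.default_rng(2)
n, N, tau = 128, 512, 0.125                 # secret length, number of samples, noise rate

s = rng.integers(0, 2, size=n)              # secret vector
A = rng.integers(0, 2, size=(N, n))         # random public matrix
e = (rng.random(N) < tau).astype(int)       # sparse Bernoulli noise
b = (A @ s + e) % 2                         # noisy inner products over F_2
# The LPN assumption states that (A, b) is hard to distinguish from (A, uniform).
```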
Our EMVP protocols are field-agnostic in the sense that the parties only perform arithmetic operations over \(\mathbb F\), and are close to optimal with respect to both communication and computation. In fact, for sufficiently large \(\ell\) (typically a few hundred), the online computation and communication costs of our LSN-based EMVP can be \emph{less than twice} the costs of computing \(Mq_i\) in the clear.
Combined with suitable secure post-processing protocols on the secret-shared output, our EMVP protocols are useful for a variety of secure computation tasks, including encrypted fuzzy search and secure ML.
Our technical approach builds on recent techniques for private information retrieval in the secret-key setting. The core idea is to encode the matrix \(M\) and the queries \(q_i\) using a pair of secret dual linear codes, while defeating algebraic attacks by adding noise.
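The noiseless core of such a dual-code encoding can be illustrated with a pair of secret matrices \(G, H\) satisfying \(GH = I_\ell\): encoding \(M\) as \(MG\) and \(q\) as \(Hq\) preserves the product, since \((MG)(Hq) = M(GH)q = Mq\). The sketch below is a simplified reading of that idea, not the paper's construction; in the actual protocols this algebra is combined with noise so that, under LPN/LSN, the encodings hide \(M\) and \(q\), and the noisy product yields compact shares of \(Mq\) rather than the exact value. All parameter choices below are hypothetical.

```python
# Noiseless toy of a "dual pair" encoding over F_p: G H = I_l, so (M G)(H q) = M q.
import numpy as np

p = 97               # toy prime field F_p
m, l, n = 4, 6, 10   # M is m x l; encodings live in F_p^n with n > l
rng = np.random.default_rng(0)

# Build (G, H) with G H = I_l, hidden by a secret column permutation
# (a stand-in for the secret codes of the real construction).
A = rng.integers(0, p, size=(l, n - l))
G0 = np.concatenate([np.eye(l, dtype=int), A], axis=1)          # l x n
H0 = np.concatenate([np.eye(l, dtype=int),
                     np.zeros((n - l, l), dtype=int)], axis=0)  # n x l
perm = rng.permutation(n)
P = np.eye(n, dtype=int)[perm]        # secret permutation matrix
G = (G0 @ P) % p
H = (P.T @ H0) % p                    # (G0 P)(P^T H0) = G0 H0 = I_l

M = rng.integers(0, p, size=(m, l))   # client's matrix
q = rng.integers(0, p, size=(l,))     # client's query

M_enc = (M @ G) % p                   # offline: encoded matrix held by the server
q_enc = (H @ q) % p                   # online: encoded query sent to the server

assert np.array_equal((M_enc @ q_enc) % p, (M @ q) % p)   # product is preserved
```

Without noise this encoding is of course insecure; the noise added to \(MG\) and \(Hq\) is what allows security to be based on LPN/LSN while keeping all arithmetic over \(\mathbb F\).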