International Association for Cryptologic Research

IACR News item: 11 December 2025

Yanpei Guo, Zhanpeng Guo, Wenjie Qu, Jiaheng Zhang
ePrint Report
A zero-knowledge proof of machine learning (zkML) enables a party to prove that it has correctly executed a committed model on a public input, without revealing any information about the model itself. An ideal zkML scheme should conceal both the model architecture and the model parameters. However, existing zkML approaches for neural networks focus primarily on hiding model parameters. For convolutional neural network (CNN) models, these schemes reveal the entire architecture, including the number and sequence of layers, kernel sizes, strides, and residual connections.

In this work, we initiate the study of architecture-private zkML for neural networks, with a focus on CNN models. Our core contributions include: 1) the parametrized rank-one constraint system (pR1CS), a generalization of R1CS that allows the prover to commit to the model architecture in a friendlier manner; and 2) a proof-of-functional-relation scheme demonstrating that the committed architecture is valid.
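To ground the generalization, the following is a minimal sketch of the standard R1CS satisfiability check that pR1CS builds on: a witness vector $z$ satisfies the instance $(A, B, C)$ when $(Az) \circ (Bz) = Cz$ holds entrywise. This illustrates plain R1CS only; the paper's pR1CS extension and its architecture commitment are not reproduced here.

```python
import numpy as np

def r1cs_satisfied(A, B, C, z):
    """Check the rank-one constraints (A z) * (B z) == (C z) entrywise."""
    return np.array_equal((A @ z) * (B @ z), C @ z)

# Toy instance: prove knowledge of x with x * x = 9,
# using the witness layout z = [1, x, x*x].
A = np.array([[0, 1, 0]])   # linear combination selecting x
B = np.array([[0, 1, 0]])   # linear combination selecting x
C = np.array([[0, 0, 1]])   # linear combination selecting x*x
z = np.array([1, 3, 9])

print(r1cs_satisfied(A, B, C, z))                  # True
print(r1cs_satisfied(A, B, C, np.array([1, 3, 10])))  # False: 3*3 != 10
```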

Our scheme matches the prover complexity of BFG+23 (CCS'23), the current state of the art in zkML for CNNs. Concretely, on the VGG16 model, when batch-proving 64 instances, our prover is only 30% slower than that of BFG+23 (CCS'23) and 2.3$\times$ faster than zkCNN (CCS'21). This demonstrates that our approach can hide the architecture in zero-knowledge proofs for neural networks at minor overhead. In particular, proving a matrix multiplication with our pR1CS can be at least 3$\times$ faster than with conventional R1CS, highlighting the effectiveness of our optimizations.
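For context on the matrix-multiplication comparison, the sketch below counts the cost of the conventional-R1CS baseline: each scalar product $a_{ik} \cdot b_{kj}$ consumes one rank-one constraint, so an $n \times n$ product takes $n^3$ multiplication constraints before optimization (additions fold into the linear combinations). This is only the baseline accounting; the pR1CS encoding itself is not reproduced here.

```python
import numpy as np

def matmul_as_rank_one_products(A, B):
    """Enumerate the scalar products a plain R1CS encoding of C = A @ B
    would constrain, one rank-one constraint per product."""
    n = A.shape[0]
    products = [(i, j, k, A[i, k] * B[k, j])
                for i in range(n) for j in range(n) for k in range(n)]
    # Summing the k-indexed products per (i, j) recovers C = A @ B.
    C = np.zeros((n, n), dtype=A.dtype)
    for i, j, k, p in products:
        C[i, j] += p
    return len(products), C

A = np.arange(4).reshape(2, 2)
B = np.arange(4, 8).reshape(2, 2)
count, C = matmul_as_rank_one_products(A, B)
print(count)                        # 8 = 2**3 multiplication constraints
print(np.array_equal(C, A @ B))     # True
```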

Additional news items may be found on the IACR news page.