International Association for Cryptologic Research
CryptoDB

ModuloNET: Neural Networks Meet Modular Arithmetic for Efficient Hardware Masking

Authors:
Anuj Dubey, North Carolina State University, Raleigh, US
Afzal Ahmad, The Hong Kong University of Science and Technology, Hong Kong
Muhammad Adeel Pasha, Lahore University of Management Sciences, Lahore, Pakistan
Rosario Cammarota, Intel Labs, San Diego, US
Aydin Aysu, North Carolina State University, Raleigh, US
Download:
DOI: 10.46586/tches.v2022.i1.506-556
URL: https://tches.iacr.org/index.php/TCHES/article/view/9306
Presentation: Slides
Abstract: Intellectual Property (IP) thefts of trained machine learning (ML) models through side-channel attacks on inference engines are becoming a major threat. Indeed, several recent works have shown reverse engineering of the model internals using such attacks, but research on building defenses remains largely unexplored. There is a critical need to efficiently and securely adapt defenses from cryptography, such as masking, to ML frameworks. Existing works, however, revealed that a straightforward adaptation of such defenses either provides partial security or leads to high area overheads. To address those limitations, this work proposes a fundamentally new direction to construct neural networks that are inherently more compatible with masking. The key idea is to use modular arithmetic in neural networks and then efficiently realize masking, in either Boolean or arithmetic fashion, depending on the type of neural network layers. We demonstrate our approach on the edge-computing friendly binarized neural networks (BNN) and show how to modify the training and inference of such a network to work with modular arithmetic without sacrificing accuracy. We then design novel masking gadgets using Domain-Oriented Masking (DOM) to efficiently mask the unique operations of ML such as the activation function and the output layer classification, and we prove their security in the glitch-extended probing model. Finally, we implement fully masked neural networks on an FPGA, show that they achieve similar latency while reducing FF and LUT costs by 34.2% and 42.6%, respectively, compared to state-of-the-art protected implementations, and demonstrate their first-order side-channel security with up to 1M traces.
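
To make the core idea concrete, below is a minimal illustrative sketch (in Python, for exposition only) of first-order arithmetic masking of a linear layer over a modulus. The modulus Q, the functions share/unshare/masked_dot, and the toy values are assumptions made for this example; they are not the paper's gadgets. ModuloNET's actual contribution covers the full masked pipeline in hardware, including DOM-based gadgets for the nonlinear activation and output-layer classification, which a linear sketch like this does not capture.

```python
# Illustrative sketch only: first-order arithmetic masking of a modular
# dot product, the kind of operation a linear layer reduces to when the
# model works over Z_Q. Names and the modulus are hypothetical.
import random

Q = 251  # hypothetical modulus, chosen only for this example


def share(x, q=Q):
    """Split a secret x in Z_q into two arithmetic shares."""
    r = random.randrange(q)      # uniformly random mask
    return (x - r) % q, r        # the two shares sum to x mod q


def unshare(shares, q=Q):
    """Recombine arithmetic shares into the secret value."""
    return sum(shares) % q


def masked_dot(x, w_shares, q=Q):
    """Dot product of public inputs x with arithmetically shared weights.
    Because the layer is linear mod q, each weight share is processed
    independently, so no intermediate depends on the unmasked weights."""
    out0 = sum(xi * w0 for xi, (w0, _) in zip(x, w_shares)) % q
    out1 = sum(xi * w1 for xi, (_, w1) in zip(x, w_shares)) % q
    return out0, out1


if __name__ == "__main__":
    w = [3, 250, 5, 12]                      # secret weights (the model IP)
    x = [1, 2, 7, 4]                         # public input activations
    w_shares = [share(wi) for wi in w]
    y_shares = masked_dot(x, w_shares)
    expected = sum(xi * wi for xi, wi in zip(x, w)) % Q
    assert unshare(y_shares) == expected
    print("masked result:", unshare(y_shares))
```

The sketch shows why modular arithmetic helps: linear layers commute with arithmetic sharing modulo Q, so masking them costs only a second pass over the shares. The hard part, and the focus of the paper's DOM gadgets, is masking the nonlinear operations (activation and argmax) without unmasking intermediates.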
BibTeX
@article{tches-2022-31659,
  title={ModuloNET: Neural Networks Meet Modular Arithmetic for Efficient Hardware Masking},
  journal={IACR Transactions on Cryptographic Hardware and Embedded Systems},
  publisher={Ruhr-Universität Bochum},
  volume={2022},
  number={1},
  pages={506--556},
  url={https://tches.iacr.org/index.php/TCHES/article/view/9306},
  doi={10.46586/tches.v2022.i1.506-556},
  author={Anuj Dubey and Afzal Ahmad and Muhammad Adeel Pasha and Rosario Cammarota and Aydin Aysu},
  year=2022
}