International Association for Cryptologic Research

IACR News item: 21 August 2022

Guilherme Perin, Lichao Wu, Stjepan Picek
ePrint Report
Masked cryptographic implementations can be vulnerable to higher-order attacks. For instance, deep neural networks have proven effective for second-order profiling side-channel attacks, even in a black-box setting (no prior knowledge of masks or implementation details). While such attacks have been successful, no explanation has been offered of why different deep neural networks can (or cannot) learn higher-order leakages and what their limitations are. In other words, we lack explainability of how neural network layers combine (or fail to combine) the unknown, random secret shares, which is a necessary step to defeat countermeasures such as Boolean masking.
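
To make the share-combination problem concrete, here is a minimal NumPy sketch, not taken from the paper: it assumes a Hamming-weight leakage model and Gaussian noise, both illustrative assumptions, and shows why first-order Boolean masking defeats single-point attacks while a second-order (centered-product) combination of two points leaks again. This recombination is what a deep network attacking raw traces must learn implicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
hw = np.vectorize(lambda x: bin(int(x)).count("1"))  # Hamming weight of a byte

n = 50_000
v = rng.integers(0, 256, size=n)   # sensitive intermediate (varies per trace)
m = rng.integers(0, 256, size=n)   # fresh random Boolean mask per trace
s1, s2 = m, v ^ m                  # shares: v = s1 XOR s2

# Noisy Hamming-weight leakage of each share at two trace samples.
l1 = hw(s1) + rng.normal(0, 1, n)
l2 = hw(s2) + rng.normal(0, 1, n)
target = hw(v)                     # what an attacker predicts

# First order: each sample alone is statistically independent of v.
print(round(np.corrcoef(l1, target)[0, 1], 3))    # ~0.00
print(round(np.corrcoef(l2, target)[0, 1], 3))    # ~0.00

# Second order: the centered product recombines the shares, since
# E[(l1 - E[l1]) * (l2 - E[l2]) | v] is a linear function of HW(v).
comb = (l1 - l1.mean()) * (l2 - l2.mean())
print(round(np.corrcoef(comb, target)[0, 1], 3))  # clearly nonzero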

In this paper, we use information-theoretic metrics to explain the internal activities of deep neural network layers. We propose a novel methodology for the explainability of deep learning-based profiling side-channel analysis to understand how secret masks are processed. Inspired by Information Bottleneck theory, our explainability methodology uses perceived information to explain and detect the different phenomena that occur in deep neural networks, such as fitting, compression, and generalization. We provide experimental results on masked AES datasets showing where, what, and why deep neural networks learn relevant features from input traces while compressing irrelevant ones, including noise. This paper opens new perspectives for understanding the role of different neural network layers in profiling side-channel attacks.
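
For reference, the sketch below implements the standard empirical perceived-information estimator from the side-channel literature, PI(K; L) ≈ H(K) + (1/N) Σ_i log2 p̂(k_i | l_i), applied to a model's output posteriors. The function name, array shapes, and toy usage are assumptions for illustration, not code from the paper.

```python
import numpy as np

def perceived_information(posteriors, true_keys, n_classes=256):
    """Empirical perceived-information estimate (in bits):
    PI ~= H(K) + mean_i log2 p_model(k_i | trace_i).

    posteriors: (N, n_classes) model output probabilities per attack trace
    true_keys:  (N,) correct key/label index for each trace
    """
    eps = 1e-12                              # guard against log2(0)
    p_correct = posteriors[np.arange(len(true_keys)), true_keys]
    h_k = np.log2(n_classes)                 # H(K) for a uniform key
    return h_k + np.mean(np.log2(p_correct + eps))

# Toy usage: posteriors carrying no key information give PI ~ 0 bits;
# a perfectly confident, always-correct model would give PI ~ log2(256) = 8.
rng = np.random.default_rng(0)
uniform = np.full((1000, 256), 1 / 256)
keys = rng.integers(0, 256, size=1000)
print(perceived_information(uniform, keys))  # ~0.0 bits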
