CryptoDB
Honggang Hu
Publications
Year
Venue
Title
2025
TCHES
MulLeak: Exploiting Multiply Instruction Leakage to Attack the Stack-optimized Kyber Implementation on Cortex-M4
Abstract
CRYSTALS-Kyber, one of the NIST PQC standardization schemes, has garnered considerable attention from researchers in recent years for its side-channel security. Various targets have been explored in previous studies; however, research on extracting secret information from stack-optimized implementations targeting the Cortex-M4 remains scarce, primarily due to the lack of memory access operations, which increases the difficulty of attacks. This paper shifts the focus to the leakage of multiply instructions and presents a novel cycle-level regression-based leakage model for the following attacks. We target the polynomial multiplications in the decryption process of the stack-optimized implementation for the Cortex-M4 and propose two regression-based profiled attacks, leveraging known-ciphertext and chosen-ciphertext methodologies, to recover the secret coefficients individually. The latter can also be extended to the protected implementation. Our practical evaluation, conducted on the stack-optimized Kyber-768 implementation from the pqm4 repository, demonstrates the effectiveness of the proposed attacks. Focusing on the leakage from the pair-pointwise multiplication, specifically the macro doublebasemul_frombytes_asm, we successfully recover all secret coefficients with a success rate exceeding 95% using a modest number of traces for each attack. This research underscores the potential vulnerabilities of PQC implementations to side-channel attacks and contributes to the ongoing discourse on the physical security of cryptographic algorithms.
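The general idea behind a regression-based (stochastic) leakage model can be sketched as follows: the power sample at a given instant is modeled as a linear combination of the bits of the operand fed to the multiply instruction, and the per-bit weights are fitted by least squares during profiling. This is a minimal illustration on synthetic data, not the paper's cycle-level model; all names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def bit_matrix(operands, n_bits=16):
    """Decompose each operand into its bits (one column per bit)."""
    return ((operands[:, None] >> np.arange(n_bits)) & 1).astype(float)

# Simulate profiling traces: an unknown per-bit weight vector plus noise.
true_weights = rng.normal(size=16)
operands = rng.integers(0, 1 << 16, size=5000)
X = bit_matrix(operands)
samples = X @ true_weights + rng.normal(scale=0.1, size=len(operands))

# Profiling phase: recover the per-bit weights by ordinary least squares.
X1 = np.hstack([X, np.ones((len(X), 1))])   # add an intercept column
beta, *_ = np.linalg.lstsq(X1, samples, rcond=None)

def predict(operand):
    """Predicted leakage for a candidate operand under the fitted model."""
    x = np.append(bit_matrix(np.array([operand]))[0], 1.0)
    return float(x @ beta)
```

In an attack phase, candidate secret coefficients would be ranked by how closely `predict` matches the observed samples (smallest residual first).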
2025
CRYPTO
Bitwise Garbling Schemes: A Model with $\frac{3}{2}\kappa$-bit Lower Bound of Ciphertexts
Abstract
At Eurocrypt 2015, Zahur, Rosulek, and Evans proposed the model of Linear Garbling Schemes. In this model, they proved a $2\lambda$-bit lower bound of ciphertexts for a broad class of garbling schemes. At Crypto 2021, Rosulek and Roy presented the innovative ``three-halves'' garbling scheme, in which AND gates cost $1.5\lambda+5$ bits and XOR gates are free. A noteworthy aspect of their scheme is the slicing-and-dicing technique, which applies universally to all AND gates when garbling a boolean circuit. Following this work, Rosulek and Roy posed several open problems.
Our research primarily addresses one of them: ``\textit{Is $1.5\lambda$ bits optimal for garbled AND gates in a more inclusive model than Linear Garbling Schemes?}''
In this paper, we propose the \textbf{Bitwise Garbling Schemes}. Our key revelation is that $1.5\lambda$ bits is indeed optimal for arbitrary garbled AND gates in our model. Moreover, we prove the necessity of the free-XOR technique: If free-XOR is forbidden, we prove a $2\lambda$-bit lower bound. As an extension, we apply our idea to construct a model for fan-in 3 gates. Somewhat unexpectedly, we prove a $\frac{7}{4}\lambda$-bit lower bound. Unfortunately, the corresponding construction is not suitable for 3-input AND gates.
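The bounds discussed in this abstract can be summarized as follows (restating only figures given above):

```latex
\begin{align*}
\text{Linear Garbling (ZRE, Eurocrypt 2015):} &\quad \ge 2\lambda \text{ bits per AND gate}\\
\text{Three-halves (RR, Crypto 2021):} &\quad 1.5\lambda + 5 \text{ bits per AND gate, free XOR}\\
\text{Bitwise Garbling (this work):} &\quad \ge 1.5\lambda \text{ bits}\ (\ge 2\lambda \text{ if free-XOR is forbidden})\\
\text{Fan-in 3 extension:} &\quad \ge \tfrac{7}{4}\lambda \text{ bits}
\end{align*}
```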
2024
ASIACRYPT
Attacking ECDSA with Nonce Leakage by Lattice Sieving: Bridging the Gap with Fourier Analysis-based Attacks
Abstract
The Hidden Number Problem (HNP) has found extensive applications in side-channel attacks against cryptographic schemes, such as ECDSA and Diffie-Hellman. There are two primary algorithmic approaches to solving the HNP: lattice-based attacks and Fourier analysis-based attacks. Lattice-based attacks exhibit better efficiency and require fewer samples when sufficiently long substrings of the nonces are known. However, they face significant challenges when only a small fraction of the nonce is leaked, such as 1-bit leakage, and their performance degrades in the presence of errors.
In this paper, we address an open question by introducing an algorithmic tradeoff that significantly bridges the gap between these two approaches. By introducing a parameter $x$ that modifies Albrecht and Heninger's lattice, we reduce the lattice dimension by approximately $(\log_2 x)/l$, where $l$ is the number of leaked bits. We present a series of new methods, including the interval reduction algorithm, several predicates, and the pre-screening technique. Furthermore, we extend our algorithms to solve the HNP with erroneous input. Our attack outperforms existing state-of-the-art lattice-based attacks against ECDSA. We obtain several records, including 1-bit and less-than-1-bit leakage on a 160-bit curve, whereas the best previous lattice-based attack for 1-bit leakage was conducted only on a 112-bit curve.
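As background for lattice attacks on the HNP, the classical Boneh–Venkatesan embedding can be sketched as follows. Given samples $(t_i, a_i)$ modulo $q$ with $|a_i - t_i\alpha \bmod q| < q/2^l$, the hidden number $\alpha$ makes a target vector unusually close to a lattice, so a CVP/SVP solver (e.g. via LLL/BKZ) can recover it. This is the standard textbook construction, not the paper's modified Albrecht–Heninger lattice with parameter $x$.

```python
def hnp_basis(ts, As, q, l):
    """Build the (m+1)x(m+1) Boneh-Venkatesan HNP lattice basis and the
    target vector, scaled by 2^l so all entries stay integral.

    ts, As: the HNP samples (t_i, a_i); q: the modulus; l: known bits.
    """
    m = len(ts)
    scale = 1 << l
    # Diagonal block: q-ary rows that let each coordinate wrap mod q.
    B = [[0] * (m + 1) for _ in range(m)]
    for i in range(m):
        B[i][i] = scale * q
    # Last row encodes the hidden number's contribution t_i * alpha.
    B.append([scale * (t % q) for t in ts] + [1])
    # Target vector for the closest-vector formulation.
    target = [scale * (a % q) for a in As] + [0]
    return B, target
```

Feeding `B` and `target` to a lattice-reduction library would complete the attack; the sketch stops at the basis construction, which is the part the abstract's dimension-reduction claim concerns.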
2020
TCHES
A Novel Evaluation Metric for Deep Learning-Based Side Channel Analysis and Its Extended Application to Imbalanced Data
Abstract
Since Kocher (CRYPTO’96) proposed the timing attack, side channel analysis (SCA) has shown great potential to break cryptosystems via physical leakage. Recently, deep learning techniques have been widely used in SCA and show equivalent or even better performance compared to traditional methods. However, it remains unknown why and when deep learning techniques are effective and efficient for SCA. Masure et al. (IACR TCHES 2020(1):348–375) illustrated that the deep learning paradigm is suitable for evaluating implementations against SCA from a worst-case scenario point of view, yet their work is limited to balanced data and a specific loss function. Besides, deep learning metrics are not consistent with side channel metrics. In most cases, they are deceptive in foreseeing the feasibility and complexity of mounting a successful attack, especially for imbalanced data. To bridge the gap between deep learning metrics and side channel metrics, we propose a novel Cross Entropy Ratio (CER) metric to evaluate the performance of deep learning models for SCA. CER is closely related to the traditional side channel metrics Guessing Entropy (GE) and Success Rate (SR) and fits the deep learning scenario. Moreover, we show that it works stably, whereas deep learning metrics such as accuracy become rather unreliable when the training data is imbalanced. Meanwhile, CER can be estimated as easily as the natural metrics of deep learning algorithms, with low computational complexity. Furthermore, we adapt the CER metric into a new loss function, namely the CER loss function, designed specifically for deep learning in the side channel scenario. In this way, we directly link the SCA objective to deep learning optimization. Our experiments on several datasets show that, for SCA with imbalanced data, the CER loss function outperforms the Cross Entropy loss function in various conditions.
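One plausible way to read a cross-entropy-ratio style metric is as the model's cross entropy under the correct key hypothesis divided by its average cross entropy under wrong-key hypotheses, with values well below 1 indicating learned key-dependent leakage. The sketch below follows that reading; it is an illustration, not the paper's exact definition, and all data is synthetic.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Average negative log-likelihood of the given labels under probs."""
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels])))

def cer(probs, true_labels, wrong_labelings):
    """Illustrative Cross Entropy Ratio: cross entropy under the correct
    labeling divided by the mean cross entropy under wrong-key labelings.
    A ratio well below 1 suggests the model exploits key-dependent leakage.
    """
    ce_correct = cross_entropy(probs, true_labels)
    ce_wrong = np.mean([cross_entropy(probs, w) for w in wrong_labelings])
    return ce_correct / ce_wrong
```

Unlike accuracy, a ratio of cross entropies compares the model against wrong-key baselines drawn from the same (possibly imbalanced) data, which is one way such a metric can remain informative when class frequencies are skewed.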
Coauthors
- Xiaolin Duan (1)
- Yiming Gao (1)
- Binang He (1)
- Chengcong Hu (1)
- Honggang Hu (4)
- Fan Huang (1)
- Jiehui Nan (1)
- Jinghui Wang (1)
- Fei Xu (1)
- Changhong Xu (1)
- Nenghai Yu (1)
- Jiajia Zhang (1)
- Mengce Zheng (2)