IACR News
Here you can see all recent updates to the IACR webpage.
07 January 2025
Benjamin Dowling, Britta Hale, Xisen Tian, Bhagya Wimalasiri
ePrint Report
Space networking has become an increasingly active area of development with the advent of commercial satellite networks such as those hosted by Starlink and Kuiper, and the increased satellite and space presence of governments around the world. Yet, historically, such network designs have not been made public, leading to limited formal cryptographic analysis of the security they offer. One of the few public protocols used in space networking is the Bundle Protocol, which is secured by Bundle Protocol Security (BPSec), an Internet Engineering Task Force (IETF) standard. We undertake a first analysis of BPSec under its default security context, building a model of the secure channel security goals stated in the IETF standard, and note issues therein with message loss detection. We prove BPSec secure, and also provide a stronger construction, one that supports the Bundle Protocol's functionality goals while also ensuring destination awareness of missing message components.
Zhihao Li, Xuan Shen, Xianhui Lu, Ruida Wang, Yuan Zhao, Zhiwei Wang, Benqiang Wei
ePrint Report
Multi-input and large-precision lookup table (LUT) evaluation pose significant challenges in Fully Homomorphic Encryption (FHE). Currently, two modes are employed to address this issue. One is tree-based functional bootstrapping (TFBS), which uses multiple blind rotations to construct a tree for LUT evaluation. The second is circuit bootstrapping, which decomposes all inputs into bits and utilizes a CMux tree for LUT evaluation. In this work, we propose a novel mode, leveled functional bootstrapping (LFBS), which utilizes the external product tree to perform multi-input functional bootstrapping. We implement TFBS and LFBS within the OpenFHE library. The results demonstrate that our method outperforms TFBS in both computational efficiency and bootstrapping key size. Specifically, for 12-bit and 16-bit input LUTs, our method is approximately two to three orders of magnitude faster than TFBS.
Finally, we introduce a novel scheme-switching framework that supports large-precision polynomial and non-polynomial evaluations. The framework leverages digital extraction techniques to enable seamless switching between the BFV and LFBS schemes.
Thomas Johansson, Mustafa Khairallah, Vu Nguyen
ePrint Report
In this paper, we introduce an oracle version of the Restricted Syndrome Decoding Problem (RSDP) and propose novel authentication protocols based on the hardness of this problem. They follow the basic structure of the HB-family of authentication protocols and later improvements but demonstrate several advantages.
An appropriate choice of multiplicative subgroup and ring structure gives rise to a very efficient hardware implementation compared to other \emph{Learning Parity with Noise} based approaches. In addition, the new protocols also have smaller key sizes, lower communication costs, and potentially better completeness/soundness compared to learning-based alternatives. This is appealing in the context of low-cost, low-powered authenticating devices such as radio frequency identification (RFID) systems.
Lastly, we show that, with additional assumptions, RSDP can be used to instantiate an authentication protocol secure against man-in-the-middle attacks.
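The HB structure these protocols build on is simple enough to sketch in a few lines. The toy Python below implements the basic (passive-secure) LPN-based HB round trip, not the paper's RSDP-based construction; all parameter choices (n = 128, tau = 0.125, 256 rounds) are illustrative only.

```python
import random

rng = random.Random(0)  # fixed seed so the sketch is reproducible

def inner(a, x):
    # Inner product over F2.
    return sum(ai & xi for ai, xi in zip(a, x)) % 2

def hb_round(x, tau=0.125):
    # One round of basic HB: the verifier sends a random challenge a,
    # the tag replies z = <a, x> + e (mod 2) with Bernoulli(tau) LPN noise e.
    a = [rng.randrange(2) for _ in x]
    e = 1 if rng.random() < tau else 0
    return a, inner(a, x) ^ e

def authenticate(tag_key, verifier_key, rounds=256, tau=0.125):
    # Accept iff the number of mismatched rounds stays below a threshold
    # placed between tau*rounds (honest tag) and rounds/2 (random guessing).
    errors = 0
    for _ in range(rounds):
        a, z = hb_round(tag_key, tau)
        if inner(a, verifier_key) != z:
            errors += 1
    return errors < rounds * (tau + 0.5) / 2

secret = [rng.randrange(2) for _ in range(128)]
```

An honest tag is accepted except with negligible probability, while a tag without the key answers each round correctly only with probability about 1/2 and is rejected; the completeness/soundness gap mentioned above is exactly the gap between these two error rates.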
Daehyeon Bae, Sujin Park, Minsig Choi, Young-Giu Jung, Changmin Jeong, Heeseok Kim, Seokhie Hong
ePrint Report
Electromagnetic side-channel analysis is a powerful method for monitoring processor activity and compromising cryptographic systems in air-gapped environments. As analytical methodologies and target devices evolve, the importance of leakage localization and probe aiming becomes increasingly apparent for capturing only the desired signals with a high signal-to-noise ratio. Despite its importance, there remains substantial reliance on unreliable heuristic approaches and inefficient exhaustive searches. Furthermore, related studies often fall short in terms of feasibility, practicality, and performance, and are limited to controlled DUTs and low-end MCUs.
To address the limitations and inefficiencies of the previous approaches, we propose a novel methodology―${\rm P{\tiny ROBE}S{\tiny HOOTER}}$―for leakage localization and probe aiming. This approach leverages new insights into the spatial characteristics of amplitude modulation and intermodulation distortion in processors. As a result, ${\rm P{\tiny ROBE}S{\tiny HOOTER}}$ provides substantial improvements in various aspects: $\boldsymbol 1)$ it is applicable not only to simple MCUs but also to complex SoCs, $\boldsymbol 2)$ it effectively handles multi-core systems and dynamic frequency scaling, $\boldsymbol 3)$ it works with uncontrollable DUTs, making it viable for constrained real-world attacks, and $\boldsymbol 4)$ it performs significantly faster than previous methods. To demonstrate this, we experimentally evaluate ${\rm P{\tiny ROBE}S{\tiny HOOTER}}$ on a high-end MCU (the NXP i.MX RT1061, featuring a single ARM Cortex-M7 core) and a complex SoC (the Broadcom BCM2711 of the Raspberry Pi 4 Model B, featuring four ARM Cortex-A72 cores).
Hao Chung, Ke Wu, Elaine Shi
ePrint Report
Today, many auctions are carried out with the help of intermediary platforms like Google and eBay. These platforms serve as a rendezvous point for the buyers and sellers, and charge a fee for their services. We refer to such auctions as platform-assisted auctions. Traditionally, the auction theory literature mainly focuses on designing auctions that incentivize the buyers to bid truthfully, assuming that the platform always faithfully implements the auction. In practice, however, platforms have been found to manipulate auctions to earn more profit, resulting in high-profile antitrust lawsuits.
We propose a new model for studying platform-assisted auctions in the permissionless setting, where anyone can register and participate in the auction. We explore whether it is possible to design a dream auction in this new model, such that honest behavior is the utility-maximizing strategy for each individual buyer, the platform, the seller, as well as platform-seller or platform-buyer coalitions. Through a collection of feasibility and infeasibility results, we carefully characterize the mathematical landscape of platform-assisted auctions.
Interestingly, our work reveals exciting connections between cryptography and mechanism design. We show how cryptography can lend itself to the design of an efficient platform-assisted auction with dream properties. Although a line of works has also used multi-party computation (MPC) or the blockchain to remove the reliance on a trusted auctioneer, our work is distinct in several dimensions. First, we initiate a systematic exploration of the game-theoretic implications when the service providers (e.g., nodes that implement the MPC or blockchain protocol) are strategic and can collude with sellers or buyers. Second, we observe that the full simulation paradigm used in the standard MPC literature is too stringent and leads to high asymptotic costs. Specifically, because every player has a different private outcome in an auction protocol, to the best of our knowledge, running any generic MPC protocol among the players would incur at least $n^2$ total cost, where $n$ is the number of buyers. We propose a new notion of simulation called {\it utility-dominated emulation} that is sufficient for guaranteeing the game-theoretic properties needed in an auction. Under this new notion of simulation, we show how to design auction protocols with quasilinear efficiency, which gives an $n$-fold improvement over any generic approach.
06 January 2025
Fukuoka, Japan, 25 November - 27 November 2025
Event Calendar
Event date: 25 November to 27 November 2025
Submission deadline: 3 April 2025
Notification: 2 June 2025
Tarragona, Spain, 12 May - 14 May 2025
Event Calendar
Event date: 12 May to 14 May 2025
SnT, University of Luxembourg (Esch-sur-Alzette, Luxembourg)
Job Posting
The CryptoLUX team at the University of Luxembourg is looking for a post-doc to work on the PQseal project on cryptanalysis of post-quantum signatures. The duration of the position is 1.5 years (18 months), with a planned start in March-April 2025. The postdoc will work closely with Aleksei Udovenko, who leads the project funded by the highly competitive FNR CORE Junior grant.
The candidate should have obtained, or be about to obtain, a PhD in Mathematics or Computer Science. The research profile includes cryptanalysis (of any kind) and/or equation-system solving (e.g., Gröbner bases), as well as parallel computing. Preference will be given to applicants with experience in multivariate and/or code-based cryptosystems and their cryptanalysis, and with familiarity with computer algebra systems (SageMath, Magma).
Prospective candidates should send their CV with a list of publications to aleksei.udovenko at uni.lu (the same address can be used for any questions about the position). The deadline for applications is 1 February 2025, but applications will be considered upon receipt, so early application is encouraged.
Closing date for applications:
Contact: Aleksei Udovenko (aleksei.udovenko at uni.lu)
Free University of Brussels
Job Posting
The Faculty of Sciences of the Université libre de Bruxelles (ULB) invites applications for a full-time academic position in Computer Science to begin October 1st, 2025. All areas of Computer Science, including Cryptography, will be considered.
ULB and the Computer Science Department is strongly committed to improving gender equality. Therefore, a proactive attitude to supporting and encouraging underrepresented groups, particularly women in Computer Science, will be one of the criteria for appointment. To combat the under-representation of women in STEM disciplines, the University will ensure that a minimum of 30% of women with the required qualifications are on the shortlist. Additionally, other criteria being equal, the appointment will be given to a woman. The scheme also includes awareness raising of unconscious biases among members of selection boards.
The Computer Science Department of ULB is a leading research and dynamic education center. Located in the capital of Europe, it offers a wide range of funding and research opportunities. Brussels is a cosmopolitan city with an excellent quality of life and easy access to all major cities in Europe.
The position involves both teaching and research and some commitment to administrative tasks.
For candidates not fluent in French, a temporary period of teaching in English shall be granted.
Closing date for applications:
Contact: Professor John Jacono (E-mail: presidence.DI@ulb.be), Head of the Computer Science Department.
Applicants with expertise in Cryptography and related areas are also welcome to contact Christophe Petit at christophe.petit@ulb.be
More information: https://www.ulb.be/fr/vacances-d-emplois-academiques-et-scientifiques/faculte-des-sciences
Maxime Bombar, Nicolas Resch, Emiel Wiedijk
ePrint Report
Cryptography based on the presumed hardness of decoding codes -- i.e., code-based cryptography -- has recently seen increased interest due to its plausible security against quantum attackers. Notably, of the four proposals for the NIST post-quantum standardization process that were advanced to their fourth round for further review, two were code-based. The most efficient proposals -- including HQC and BIKE, the NIST submissions alluded to above -- in fact rely on the presumed hardness of decoding structured codes. Of particular relevance to our work, HQC is based on quasi-cyclic codes, which are codes generated by matrices consisting of two cyclic blocks.
In particular, the security analysis of HQC requires a precise understanding of the Decryption Failure Rate (DFR), whose analysis relies on the following heuristic: given random "sparse" vectors $e_1,e_2$ (say, each coordinate is i.i.d. Bernoulli) multiplied by fixed "sparse" quasi-cyclic matrices $A_1,A_2$, the weight of the resulting vector $e_1A_1+e_2A_2$ is very concentrated around its expectation. In the documentation, the authors model the distribution of $e_1A_1+e_2A_2$ as a vector with independent coordinates (and correct marginal distribution). However, we uncover cases where this modeling fails. While this does not invalidate the (empirically verified) heuristic that the weight of $e_1A_1+e_2A_2$ is concentrated, it does suggest that the behavior of the noise is a bit more subtle than previously predicted. Lastly, we also discuss implications of our result for potential worst-case to average-case reductions for quasi-cyclic codes.
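The concentration heuristic is easy to reproduce empirically. The numpy sketch below uses toy parameters (block length 512, weight 20, nothing like HQC's actual ones): it fixes two sparse circulant blocks over F2, samples sparse error vectors, and records the weight of e1A1 + e2A2 across trials.

```python
import numpy as np

rng = np.random.default_rng(1)

def circulant_mul(e, a):
    # e times the circulant matrix generated by a, over F2:
    # row i of the matrix is a cyclically shifted by i positions.
    out = np.zeros(len(a), dtype=np.int64)
    for i in np.flatnonzero(e):
        out ^= np.roll(a, i)
    return out

n, w = 512, 20  # toy block length and column weight (illustrative only)
a1 = np.zeros(n, dtype=np.int64)
a1[rng.choice(n, w, replace=False)] = 1
a2 = np.zeros(n, dtype=np.int64)
a2[rng.choice(n, w, replace=False)] = 1

weights = []
for _ in range(200):
    # Each coordinate of e1, e2 is i.i.d. Bernoulli(w/n), as in the heuristic.
    e1 = (rng.random(n) < w / n).astype(np.int64)
    e2 = (rng.random(n) < w / n).astype(np.int64)
    weights.append(int((circulant_mul(e1, a1) ^ circulant_mul(e2, a2)).sum()))
```

The empirical weights cluster tightly around the expectation predicted by the independent-coordinates model; the paper's point is that this concentration, while real, does not imply the coordinates actually are independent.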
Kyungbae Jang, Vikas Srivastava, Anubhab Baksi, Santanu Sarkar, Hwajeong Seo
ePrint Report
This paper improves upon the quantum circuits required for Shor's attack on binary elliptic curves. We present two types of quantum point addition, taking both qubit count and circuit depth into consideration.
In summary, we propose an in-place point addition that improves upon the work of Banegas et al. from CHES'21, reducing the qubit count – depth product by more than $73\%$ – $81\%$ depending on the variant. Furthermore, we develop an out-of-place point addition by using additional qubits. This method achieves the lowest circuit depth and offers an improvement of over $92\%$ in the qubit count – quantum depth product (for a single step).
To the best of our knowledge, our work improves upon all previous works (including the CHES'21 paper by Banegas et al., the IEEE Access'22 paper by Putranto et al., and the CT-RSA'23 paper by Taguchi and Takayasu) in terms of circuit depth and qubit count – depth product.
Equipped with the implementations, we discuss the post-quantum security of binary elliptic curve cryptography. Under the MAXDEPTH metric (proposed by the US government's NIST), the quantum circuit with the highest depth in our work is $2^{24}$, which is significantly lower than the MAXDEPTH limit of $2^{40}$. For the gate count – full depth product, a metric for estimating quantum attack cost (used by NIST), the highest value in our work is $2^{60}$, considerably below the post-quantum security level 1 threshold (of the order of $2^{156}$).
Yuval Efron, Ertem Nusret Tas
ePrint Report
Internet-scale consensus protocols used by blockchains are designed to remain operational in the presence of unexpected temporary crash faults (the so-called sleepy model of consensus) -- a critical feature for the latency-sensitive financial applications running on these systems.
However, their leader-based architecture, where a single block proposer is responsible for creating the block at each height, makes them vulnerable to short-term censorship attacks, in which the proposers profit at the application layer by excluding certain transactions.
In this work, we introduce an atomic broadcast protocol, secure in the sleepy model, that ensures the inclusion of all transactions within a constant expected time, provided that at least one participating node is non-censoring at all times.
Unlike traditional approaches, our protocol avoids designating a single proposer per block height, instead leveraging a so-called dynamically available common subset (DACS) protocol -- the first of its kind in the sleepy model. Additionally, our construction guarantees deterministic synchronization -- once an honest node confirms a block, all other honest nodes do so within a constant time, thus addressing a shortcoming of many low-latency sleepy protocols.
Jianjun HU
ePrint Report
The Index Calculus (IC) algorithm is the most effective probabilistic algorithm for solving discrete logarithms over prime finite fields, and it has also been applied to cryptosystems based on elliptic curves. Since the IC algorithm was proposed in 1920, research on it has never stopped: both the algorithm itself and its applications, especially to discrete logarithms over prime finite fields, have been greatly developed, although research on elliptic curves has had little success. For the IC algorithm, scholars have mostly focused on how to improve the probability of solving and how to reduce the time complexity of the calculation. The IICA was the first to study the optimization of the IC algorithm using integer rounding. However, the IICA only studies rounding up, and fails to consider rounding down. We find that the rounding direction of the IICA can be either up or down, although the concept of modular multiplication is needed when rounding down. After optimizing the IICA, the probability of successfully solving a discrete logarithm is increased by nearly 2 times, and the number of transformations is also reduced to a certain extent, thus reducing the time complexity of the solution. The re-optimized IC algorithm greatly improves the probability of a successful IC solution. This result poses a serious challenge to cryptosystems based on prime finite fields.
Md Kawser Bepary, Arunabho Basu, Sajeed Mohammad, Rakibul Hassan, Farimah Farahmandi, Mark Tehranipoor
ePrint Report
The Performance Monitoring Unit (PMU), a standard feature in all modern computing systems, presents significant security risks by leaking sensitive user activities through microarchitectural event data. This work demonstrates the feasibility of remote side-channel attacks leveraging PMU data, revealing vulnerabilities that compromise user privacy and enable covert surveillance without physical access to the target machine. By analyzing the PMU feature space, we create distinct microarchitectural fingerprints for benchmark applications, which are then used to train machine learning (ML) models for benchmark detection. Subsequently, when an attacker remotely accesses the victim's PMU data, the pre-trained model identifies the applications used by the victim with high accuracy, as we show in a proof-of-concept demonstration, highlighting the potential for malicious exploitation of PMU data. We analyze stress-ng benchmarks and build our classifiers using logistic regression, decision tree, k-nearest neighbors, and random forest ML models. Our proposed models achieve an average prediction accuracy of 98%, underscoring the potential risks associated with remote side-channel analysis using PMU data and the urgent need for robust countermeasures. This work provides a foundation for future research in microarchitectural security.
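At its core, the fingerprint-matching idea can be illustrated in a few lines of numpy. The sketch below is not the paper's pipeline (which uses stress-ng traces and logistic regression, decision tree, k-NN, and random forest models); it classifies synthetic three-event PMU readings by nearest centroid, with all workload names and counts invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-workload mean PMU event counts (e.g. cache misses,
# branch mispredictions, LLC loads) -- purely synthetic "fingerprints".
centroids = {
    "crypto":  np.array([900.0, 120.0, 40.0]),
    "browser": np.array([300.0, 400.0, 220.0]),
    "idle":    np.array([50.0, 30.0, 10.0]),
}

def sample(label, noise=20.0):
    # A noisy PMU reading observed while the given workload runs.
    return centroids[label] + rng.normal(0.0, noise, size=3)

def classify(reading):
    # Match a remotely observed PMU reading to the nearest stored fingerprint.
    return min(centroids, key=lambda k: float(np.linalg.norm(reading - centroids[k])))

hits = sum(classify(sample(lbl)) == lbl
           for lbl in centroids for _ in range(100))
accuracy = hits / 300
```

Even this crude distance-based matcher separates well-spread workloads almost perfectly, which is why the richer ML models in the paper reach 98% accuracy on real event data.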
04 January 2025
José Reis, Mehmet Ugurbil, Sameer Wagh, Ryan Henry, Miguel de Vega
ePrint Report
This paper introduces new protocols for secure multiparty computation (MPC) leveraging Discrete Wavelet Transforms (DWTs) for computing nonlinear functions over large domains. By employing DWTs, the protocols significantly reduce the overhead typically associated with Lookup Table-style (LUT) evaluations in MPC. We state and prove foundational results for DWT-compressed LUTs in MPC, present protocols for 9 of the most common activation functions used in ML, and experimentally evaluate the performance of our protocols for large domain sizes in the LAN and WAN settings. Our protocols are extremely fast -- for instance, when considering 64-bit inputs, computing 1000 parallel instances of the sigmoid function, with an error less than $2^{-24}$, takes only a few hundred milliseconds and incurs just 29\,KiB of online communication (40 bytes per evaluation).
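To illustrate the underlying compression idea in the clear (the paper's protocols, of course, run under MPC over secret-shared data), the sketch below builds a sigmoid lookup table, applies an orthonormal Haar DWT, discards all but the largest 5% of coefficients, and measures the reconstruction error; the table size and retention rate are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def haar_fwd(v):
    # Multi-level orthonormal Haar DWT (length must be a power of two).
    v = v.astype(float).copy()
    n = len(v)
    while n > 1:
        half = n // 2
        a = (v[0:n:2] + v[1:n:2]) / np.sqrt(2)  # approximation coefficients
        d = (v[0:n:2] - v[1:n:2]) / np.sqrt(2)  # detail coefficients
        v[:half], v[half:n] = a, d
        n = half
    return v

def haar_inv(c):
    # Inverse of haar_fwd.
    c = c.astype(float).copy()
    n = 1
    while n < len(c):
        a, d = c[:n].copy(), c[n:2 * n].copy()
        c[0:2 * n:2] = (a + d) / np.sqrt(2)
        c[1:2 * n:2] = (a - d) / np.sqrt(2)
        n *= 2
    return c

# A 1024-entry LUT for the sigmoid activation over [-8, 8].
x = np.linspace(-8, 8, 1024)
lut = 1.0 / (1.0 + np.exp(-x))

coeffs = haar_fwd(lut)
# "Compress" the LUT: keep only the largest 5% of wavelet coefficients.
k = len(coeffs) // 20
thresh = np.sort(np.abs(coeffs))[-k]
compressed = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

approx = haar_inv(compressed)
max_err = float(np.abs(approx - lut).max())
```

Because smooth activation functions concentrate their energy in few wavelet coefficients, a small fraction of the transform suffices for a good approximation, which is what lets the MPC protocols evaluate a much smaller table than the full domain would require.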
Wouter Legiest, Jan-Pieter D'Anvers, Bojan Spasic, Nam-Luc Tran, Ingrid Verbauwhede
ePrint Report
This paper presents a novel approach to calculating the Levenshtein (edit) distance within the framework of Fully Homomorphic Encryption (FHE), specifically targeting third-generation schemes like TFHE. Edit distance computations are essential in applications across finance and genomics, such as DNA sequence alignment. We introduce an optimised algorithm, called Leuvenshtein, that significantly reduces the cost of edit distance calculations. This algorithm specifically reduces the number of programmable bootstraps (PBS) needed per cell of the calculation, lowering it from approximately 28 operations—required by the conventional Wagner-Fischer algorithm—to just 1. Additionally, we propose an efficient method for performing equality checks on characters, reducing ASCII character comparisons to only 2 PBS operations. Finally, we explore the potential for further performance improvements by utilizing preprocessing when one of the input strings is unencrypted. Our Leuvenshtein algorithm achieves up to $205\times$ faster performance compared to the best available TFHE implementation and up to $39\times$ faster than an optimised implementation of the Wagner-Fischer algorithm. Moreover, when offline preprocessing is possible due to the presence of one unencrypted input on the server side, an additional $3\times$ speedup can be achieved.
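For reference, the plaintext baseline the paper optimizes is the classic Wagner-Fischer dynamic program; under TFHE, the comparison and minimum taken in each cell are what cost the roughly 28 PBS operations per cell mentioned above. A minimal plaintext version:

```python
def edit_distance(s, t):
    # Wagner-Fischer dynamic program with a rolling row:
    # prev[j] holds the edit distance between a prefix of s and t[:j].
    m, n = len(s), len(t)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,        # delete s[i-1]
                         cur[j - 1] + 1,     # insert t[j-1]
                         prev[j - 1] + (s[i - 1] != t[j - 1]))  # substitute
        prev = cur
    return prev[n]
```

Each of the m·n cells performs one equality test and a three-way minimum; the paper's contribution is restructuring exactly this per-cell work so that it costs a single PBS under encryption.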
03 January 2025
Dipayan Saha, Farimah Farahmandi
ePrint Report
Side-channel analysis (SCA) targets not an algorithm's weaknesses but rather those of its implementation. The rise of machine learning (ML) and deep learning (DL) is giving adversaries advanced capabilities to perform stealthy attacks. In this paper, we propose DL-SCADS, a DL-based approach that leverages signal decomposition techniques to extract secret keys from post-silicon EM/power side-channel traces. We integrate the previously proven ideas of model ensembling and automated hyperparameter tuning with signal decomposition to develop an efficient and robust side-channel attack. Extensive experiments are performed on Advanced Encryption Standard (AES) and Post-Quantum Cryptography (PQC) implementations to demonstrate the efficacy of our approach. We also tabulate the performance of side-channel attacks employing various decomposition techniques and compare them with the proposed approach across a range of datasets.
Anandarup Roy, Bimal Kumar Roy, Kouichi Sakurai, Suprita Talnikar
ePrint Report
This article explores the potential of Secret Sharing-Based Internet of Things (SBIoT) as a promising cryptographic element across diverse applications, including secure data storage in commercial cloud systems (Datachest), smart home environments (encompassing sensors, cameras, smart locks, and smart assistants), and e-health applications (protecting patient data and medical records). Beyond these applications, the paper makes two key contributions: the introduction of a novel cheater identification algorithm designed to verify the correct submission of shares during secret reconstruction, and empirical validation through experimental studies to support the theoretical advancements. This multifaceted approach not only demonstrates the versatility of SBIoT but also proposes innovative mechanisms to enhance security and integrity, contributing to the development of a more robust cryptographic framework.
This article expands upon the work presented in the poster "A Combinatorial Approach to IoT Data Security" at IWSEC 2023, Yokohama, Japan.
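As background, the secret-sharing primitive underlying SBIoT-style schemes is Shamir's: the secret is the constant term of a random polynomial, and any t shares recover it by Lagrange interpolation. The Python sketch below is the textbook (t, n) scheme over a prime field, not the paper's construction, and it omits the proposed cheater-identification step.

```python
import random

P = 2**61 - 1  # prime modulus for the share field
rng = random.Random(42)

def share(secret, t, n):
    # Shamir (t, n) sharing: pick a random degree-(t-1) polynomial f with
    # f(0) = secret; share i is the point (i, f(i) mod P).
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of f at x = 0 from any t distinct shares.
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return secret
```

A cheater who submits a corrupted share makes different t-subsets reconstruct different values; detecting and attributing such inconsistencies is precisely the verification problem the article's cheater-identification algorithm addresses.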
Merve Karabulut, Reza Azarderakhsh
ePrint Report
Side-channel attacks (SCA) pose a significant threat to cryptographic implementations, including those designed to withstand the computational power of quantum computers.
This paper introduces the first side-channel attack on an industry-grade post-quantum cryptography implementation, Adam's Bridge.
Specifically, we present a Correlation Power Analysis (CPA) attack targeting the hardware implementation of ML-DSA within Caliptra, an open-source Silicon Root of Trust framework developed through a multi-party collaboration involving Google, AMD, and Microsoft.
Our attack focuses on the modular reduction process that follows the Number Theoretic Transform-based polynomial pointwise multiplication. By exploiting side-channel leakage from a distinctive reduction algorithm unique to Adam's Bridge and leveraging the zeroization mechanism used to securely erase sensitive information by clearing internal registers, we significantly enhance the attack's efficacy. Our findings reveal that an adversary can extract Caliptra's ML-DSA secret keys using only 10,000 power traces. With access to these keys, an attacker could forge signatures for certificate generation, thereby compromising the integrity of the root of trust. This work highlights the vulnerabilities of industry-standard root-of-trust systems to side-channel attacks. It underscores the urgent need for robust countermeasures to secure commercially deployed systems against such threats.
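The general shape of a CPA key-recovery step can be sketched in a few lines. The toy below is not specific to Adam's Bridge or ML-DSA: it assumes a single-point Hamming-weight leakage model with Gaussian noise and recovers one key byte by maximizing Pearson correlation over all 256 guesses; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming weights

def simulate_traces(key_byte, n_traces=2000, noise=1.0):
    # Toy leakage: the measured power at the point of interest is the
    # Hamming weight of (plaintext XOR key) plus Gaussian noise.
    pts = rng.integers(0, 256, n_traces)
    traces = HW[pts ^ key_byte] + rng.normal(0.0, noise, n_traces)
    return pts, traces

def cpa_recover(pts, traces):
    # Correlate the traces against the predicted leakage under every key
    # guess; the correct guess maximizes |Pearson correlation|.
    best, best_corr = None, -1.0
    for guess in range(256):
        hyp = HW[pts ^ guess]
        corr = abs(float(np.corrcoef(hyp, traces)[0, 1]))
        if corr > best_corr:
            best, best_corr = guess, corr
    return best

pts, traces = simulate_traces(0xA7)
recovered = cpa_recover(pts, traces)
```

Real attacks like the one above on Caliptra replace this toy leakage model with a model of the actual reduction circuit and search over many trace samples, but the correlate-and-rank step is the same.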
Angold Wang
ePrint Report
This survey provides a comprehensive examination of zero-knowledge interactive verifiable computing, emphasizing the utilization of randomness in low-degree polynomials. We begin by tracing the evolution of general-purpose verifiable computing, starting with the foundational concepts of complexity theory developed in the 1980s, including the classes P and NP and the notion of NP-completeness. Through an exploration of the Cook-Levin Theorem and the transformation between NP problems such as HAMPATH and SAT, we demonstrate the reducibility of NP problems to a unified framework, laying the groundwork for subsequent advancements.
Recognizing the limitations of NP-based proof systems in effectively verifying certain problems, we then delve into interactive proof systems (IPS) as a probabilistic extension of NP. IPS improve verification efficiency by incorporating randomness and interaction, accepting a small probability of error in exchange for that speed. We address the practical challenges of traditional IPS, where the assumption of a prover with unlimited computational power is unrealistic, and introduce the concept of secret knowledge. This approach allows a prover with bounded computational resources to convincingly demonstrate possession of secret knowledge, enabling the verifier to accept with high probability. We quantify this knowledge by assessing the verifier's ability to distinguish between a simulator and a genuine prover, referencing seminal works such as Goldwasser et al.'s "The Knowledge Complexity of Interactive Proof Systems".
The survey further explores essential mathematical theories and cryptographic protocols, including the Schwartz-Zippel lemma and Reed-Solomon error correction, which underpin the power of low-degree polynomials in error detection and interactive proof systems. We provide a detailed, step-by-step introduction to the sum-check protocol, proving its soundness and runtime characteristics.
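To make the sum-check protocol concrete, here is a minimal interactive sketch for a multilinear polynomial over a prime field (a generic illustration in the spirit of the survey, not code from it; the example polynomial g is an arbitrary choice). Because g is multilinear, each round's univariate message has degree at most 1, so the prover need only send its values at 0 and 1:

```python
import itertools
import random

P = 2**61 - 1  # prime modulus for the field

def g(x):
    """Example multilinear polynomial in three variables (an arbitrary choice)."""
    x1, x2, x3 = x
    return (2 * x1 * x2 + x1 * x3 + 5 * x2 + 1) % P

def sum_check(g, n, claimed_sum, rng):
    """Verifier side of sum-check, with the honest prover's messages computed inline.
    Returns True iff the claimed sum of g over {0,1}^n is accepted."""
    fixed = []           # random challenges chosen so far
    claim = claimed_sum  # running claim checked each round
    for i in range(n):
        rest = n - i - 1
        # Prover: round polynomial s_i(X), sent as the pair (s_i(0), s_i(1)).
        s0 = sum(g(fixed + [0] + list(b))
                 for b in itertools.product((0, 1), repeat=rest)) % P
        s1 = sum(g(fixed + [1] + list(b))
                 for b in itertools.product((0, 1), repeat=rest)) % P
        # Verifier: consistency check s_i(0) + s_i(1) == current claim.
        if (s0 + s1) % P != claim:
            return False
        r = rng.randrange(P)
        # Degree <= 1, so s_i(r) is the linear interpolation of the two values.
        claim = (s0 + r * (s1 - s0)) % P
        fixed.append(r)
    # Final check costs the verifier one evaluation of g at a random point.
    return claim == g(fixed)

H = sum(g(list(b)) for b in itertools.product((0, 1), repeat=3)) % P
accepted = sum_check(g, 3, H, random.Random(0))
```

A false claim fails the very first consistency check with overwhelming probability, which is the soundness argument the Schwartz-Zippel lemma makes precise.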
Despite the sum-check protocol's theoretical applicability to all NP problems via SAT reduction, we highlight its limitation: an honest prover requires superpolynomial time for general-purpose computations. To address this limitation, we introduce the GKR protocol, a sophisticated general-purpose interactive proof system developed in the 2010s. We demonstrate how the sum-check protocol integrates into the GKR framework to achieve efficient, sound verification of computations in polynomial time. This survey not only reviews the historical and theoretical advancements in verifiable computing over the past 30 years but also offers newcomers an accessible introduction, providing a solid foundation for understanding the significant advances of the past decade, including developments such as ZK-SNARKs.