International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

22 March 2024

Lena Heimberger, Florian Lugstein, Christian Rechberger
ePrint Report
Lattice-based cryptography has emerged as a promising candidate for building cryptographic primitives. It offers resilience against quantum attacks, enables fully homomorphic encryption, and rests on robust theoretical foundations. Zero-knowledge proofs (ZKPs) are an essential primitive for various privacy-preserving applications: anonymous credentials, group signatures, and verifiable oblivious pseudorandom functions all require ZKPs. Currently, the majority of ZKP systems are based on elliptic curves, which are susceptible to attacks by quantum computers. This project presents the first implementation of Lantern, a state-of-the-art lattice-based ZKP system that can create compact proofs, a few dozen kilobytes in size, for basic statements. We thoroughly explain the theory behind the scheme and give a full implementation in a Jupyter Notebook using SageMath to make Lantern more accessible to researchers. Our interactive implementation allows users to fully understand the scheme and its building blocks, providing a valuable resource for understanding both ZKPs and lattice cryptography. Albeit not optimized for performance, this implementation can construct a Module-LWE secret proof in 35 s on a consumer laptop. Through our contributions, we aim to advance the understanding and practical use of lattice-based ZKP systems, with particular emphasis on accessibility for the broader research community.
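The statement Lantern proves — knowledge of a short (Module-)LWE secret — can be illustrated with a toy plain-LWE instance. The sketch below is our own illustrative Python (not SageMath, and far below cryptographic parameter sizes); all parameter values are hypothetical.

```python
import random

def lwe_instance(n=16, m=32, q=3329, eta=2, seed=0):
    """Generate a toy (plain) LWE instance t = A*s + e mod q.

    A Lantern-style proof shows knowledge of short s, e satisfying
    this relation without revealing them.  Parameters here are
    illustrative, far below cryptographic size.
    """
    rng = random.Random(seed)
    A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
    s = [rng.randint(-eta, eta) for _ in range(n)]      # short secret
    e = [rng.randint(-eta, eta) for _ in range(m)]      # short error
    t = [(sum(a * x for a, x in zip(row, s)) + err) % q
         for row, err in zip(A, e)]
    return A, s, e, t

def relation_holds(A, s, e, t, q=3329):
    # The statement a ZKP would prove: A*s + e == t (mod q), with s, e short.
    return all((sum(a * x for a, x in zip(row, s)) + err - ti) % q == 0
               for row, err, ti in zip(A, e, t))
```

A proof system like Lantern would convince a verifier that such short s and e exist for the public pair (A, t) without revealing either.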
Yibin Yang, David Heath, Carmit Hazay, Vladimir Kolesnikov, Muthuramakrishnan Venkitasubramaniam
ePrint Report
We explore Zero-Knowledge proofs (ZKP) of statements expressed as programs written in high-level languages, e.g., C or assembly. At the core of executing such programs in ZK is the repeated evaluation of a CPU step, achieved by branching over the CPU’s instruction set. This approach is general and covers traversal-execution of a program’s control flow graph (CFG): here CPU instructions are straight-line program fragments (of various sizes) associated with the CFG nodes. This highlights the usefulness of ZK CPUs with a large number of instructions of varying sizes.

We formalize and design an efficient tight ZK CPU, where the cost (both computation and communication, for each party) of each step depends only on the instruction taken. This qualitatively improves over state-of-the-art, where cost scales with the size of the largest CPU instruction (largest CFG node).

Our technique is formalized in the standard commit-and-prove paradigm, so our results are compatible with a variety of (interactive and non-interactive) general-purpose ZK.

We implemented an interactive tight arithmetic (over $\mathbb{F}_{2^{61}-1}$) ZK CPU based on Vector Oblivious Linear Evaluation (VOLE) and compared it to the state-of-the-art non-tight VOLE-based ZK CPU Batchman (Yang et al. CCS'23). In our experiments, under the same hardware configuration, we achieve comparable performance when instructions are of the same size and a $5$-$18\times$ improvement when instructions are of varied size. Our VOLE-based ZK CPU can execute $100$K (resp. $450$K) multiplication gates per second in a WAN-like (resp. LAN-like) setting. It requires at most $102$ bytes per multiplication gate. Our basic building block, ZK Unbalanced Read-Only Memory (ZK UROM), may be of independent interest.
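The gap between a tight and a non-tight ZK CPU can be sketched with a toy cost model: in the non-tight setting every step pays for the largest instruction, while in the tight setting each step pays only for the instruction actually taken. The instruction names, sizes, and trace below are hypothetical.

```python
# Hypothetical instruction "sizes" (number of gates in each CFG node).
INSTR_SIZES = {"add": 10, "mul": 40, "memcpy_block": 500}

def trace_cost(trace, tight):
    """Total prover/verifier work for executing `trace`, a list of
    instruction names, under the two cost models described above."""
    worst = max(INSTR_SIZES.values())
    return sum(INSTR_SIZES[op] if tight else worst for op in trace)

# A trace dominated by small instructions, with one large outlier.
trace = ["add"] * 90 + ["mul"] * 9 + ["memcpy_block"]
```

On this trace the tight model pays 1,760 gate-units while the non-tight model pays 50,000 — the kind of gap that grows with the size of the largest CFG node.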

18 March 2024

Connor Bell, Saba Eskandarian
ePrint Report
Private messaging platforms provide strong protection against platform eavesdropping, but malicious users can use privacy as cover for spreading abuse and misinformation. In an attempt to identify the sources of misinformation on private platforms, researchers have proposed mechanisms to trace back the source of a user-reported message (CCS '19, '21). Unfortunately, the threat model considered by initial proposals allowed a single user to compromise the privacy of another user whose legitimate content the reporting user did not like. More recent work has attempted to mitigate this side effect by requiring a threshold number of users to report a message before its origins can be identified (NDSS '22). However, the state-of-the-art scheme requires the introduction of new probabilistic data structures and only achieves a "fuzzy" threshold guarantee. Moreover, false positives, where the source of an unreported message is identified, are possible.

This paper introduces a new threshold source tracking technique that allows a private messaging platform, with the cooperation of a third-party moderator, to operate a threshold reporting scheme with exact thresholds and no false positives. Unlike prior work, our techniques require no modification of the message delivery process for a standard source tracking scheme, affecting only the abuse reporting procedure, and do not require tuning of probabilistic data structures.
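The exact-threshold property — a message's source stays hidden until precisely $t$ reports arrive — is the guarantee that classic Shamir secret sharing provides. The Python below is a generic t-of-n Shamir sharing, not the paper's actual reporting protocol; treat it only as an illustration of exact thresholds with no false positives.

```python
import random

P = 2**61 - 1  # a Mersenne prime field for the toy sharing

def share(secret, t, n, rng=random):
    """Split `secret` into n shares: any t reconstruct it exactly,
    while any t-1 shares reveal nothing."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

In a reporting scheme, each report would carry one share of a tracing token; the token (and hence the source) becomes recoverable only once the $t$-th report arrives.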
Zhengjun Cao, Zhenfu Cao
ePrint Report
Quantum Fourier Transformation (QFT) requires rotation gates with extremely tiny angles. Since it is impossible to physically implement such tiny angles (corresponding to extremely weak energies), those gates must be replaced by scaled, controllable gates. The resulting version of QFT is called banded QFT (BQFT) and can be specified mathematically using the Kronecker product and binary fractions. But the systematic errors of BQFT have never been heuristically estimated. In this paper, we generate programming code for BQFT and argue that its systematic errors are not negligible, which means the physical implementation of QFT with a huge transform size remains a challenge. To the best of our knowledge, this is the first time such a result has been obtained.
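A crude way to gauge such systematic errors numerically is to compare the exact QFT matrix with a banded approximation in which each phase keeps only its first $b$ binary-fraction bits, mimicking the removal of rotations finer than $2\pi/2^b$. This entrywise estimate is our own illustration, not the paper's code.

```python
import cmath, math

def qft_entry(j, k, N):
    """Exact QFT matrix entry omega^(jk)/sqrt(N)."""
    return cmath.exp(2j * math.pi * j * k / N) / math.sqrt(N)

def banded_entry(j, k, N, b):
    """Keep only the first b bits of the binary fraction of jk/N,
    mimicking the removal of rotation gates finer than 2*pi/2^b."""
    frac = (j * k % N) / N
    trunc = math.floor(frac * 2**b) / 2**b
    return cmath.exp(2j * math.pi * trunc) / math.sqrt(N)

def max_entry_error(n_qubits, b):
    """Largest entrywise deviation between exact and banded QFT."""
    N = 2**n_qubits
    return max(abs(qft_entry(j, k, N) - banded_entry(j, k, N, b))
               for j in range(N) for k in range(N))
```

With $b$ equal to the qubit count the truncation is exact, while aggressive banding already produces visible entrywise error at three qubits.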
Stanislav Kruglik, Son Hoang Dau, Han Mao Kiah, Huaxiong Wang, Liang Feng Zhang
ePrint Report
A function secret sharing (FSS) scheme (Boyle et al., Eurocrypt 2015) is a cryptographic primitive that enables additive secret sharing of functions from a given function family $\mathcal{F}$. FSS supports a wide range of cryptographic applications, including private information retrieval (PIR), anonymous messaging systems, private set intersection, and more. Formally, given positive integers $r \geq 2$ and $t < r$, and a class $\mathcal{F}$ of functions $f: [n] \to \mathbb{G}$ for an Abelian group $\mathbb{G}$, an $r$-party $t$-private FSS scheme for $\mathcal{F}$ allows a dealer to split each $f \in \mathcal{F}$ into $r$ function shares $f_1, \ldots, f_r$ among $r$ servers. The shares satisfy $f = f_1 + \cdots + f_r$, and the function remains hidden from any coalition of up to $t$ servers. FSS for multi-point functions $f_{\alpha_1,\ldots,\alpha_l,\beta_1,\ldots,\beta_l}$, for different values of $\alpha_i$ and $l$, is known as a distributed point function (DPF); the analogue for comparison functions is a distributed comparison function (DCF).

Most existing FSS schemes are based on the existence of one-way functions or pseudo-random generators, and as a result, the hiding of the function $f$ holds only against computationally bounded adversaries. Protocols employing such schemes as building blocks are therefore only computationally secure. The few exceptions mostly focus on DPFs for four, eight, or $d(t+1)$ servers for a positive integer $d$, and none of them provide verifiability.

In this paper, we propose a DPF for $d(t+l-1)+1$ servers, where $d$ is a positive integer, offering a better key size compared to the previously proposed DPF for $d(t+1)$ servers and DCF for $dt+1$ servers (also for a positive integer $d$). We introduce verifiable extensions in which any set of servers holding $t$ keys cannot persuade us to accept a wrong value of the function. This verifiability notion differs from existing verifiable FSS schemes in that we verify not only that the function belongs to the class $\mathcal{F}$ but also the correctness of computation results. Our schemes provide a secret key size of $O(n^{1/d}\cdot s\log(p))$ for the DPF and $O(n^{1/d}\cdot s\log(p))$ for the DCF, where $p^s$ is the size of the group $\mathbb{G}$.
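For intuition, the naive information-theoretic baseline that such schemes improve on is additive sharing of the whole truth table: perfectly $(r-1)$-private, but with keys of length $n$ rather than roughly $n^{1/d}$ group elements. A Python sketch, with a hypothetical group and toy parameters:

```python
import random

def share_point_function(alpha, beta, n, r, q=2**31 - 1, rng=random):
    """Additively share the truth table of the point function
    f(alpha) = beta, f(x) = 0 otherwise, among r servers.

    Any r-1 shares are uniformly random (t = r-1 privacy), but each
    key is a full length-n table -- compressing this is exactly what
    nontrivial DPF constructions achieve."""
    table = [beta if x == alpha else 0 for x in range(n)]
    keys = [[rng.randrange(q) for _ in range(n)] for _ in range(r - 1)]
    last = [(table[x] - sum(k[x] for k in keys)) % q for x in range(n)]
    return keys + [last]

def eval_shares(keys, x, q=2**31 - 1):
    """Sum the servers' evaluations at x to recover f(x)."""
    return sum(k[x] for k in keys) % q
```

Each server evaluates its own share locally; only the sum of all $r$ evaluations reveals $f(x)$.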
Hans Schmiedel, Runchao Han, Qiang Tang, Ron Steinfeld, Jiangshan Yu
ePrint Report
Targeted Denial-of-Service (DoS) attacks are a practical concern for permissionless blockchains. Potential solutions, such as random sampling, are adopted by blockchains, but the associated security guarantees have only been informally discussed in prior work. This is because existing adversary models either do not fully capture this attack, give up certain design choices (as in the sleepy model or asynchronous network model), or are too strong to be practical (as in the mobile Byzantine adversary model).

This paper provides theoretical foundations and desired properties for consensus protocols that resist targeted DoS attacks. In particular, we define the Mobile Crash Adaptive Byzantine (MCAB) model to capture such an attack. In addition, we identify and formalize two properties for consensus protocols under the MCAB model, and analyze their trade-offs. As case studies, we prove that Ouroboros Praos and Algorand are secure in our MCAB model, giving the first formal proofs supporting their security guarantee against targeted DoS attacks, which were previously only informally discussed. We also illustrate an application of our properties to secure a streamlined BFT protocol, chained HotStuff, against targeted DoS attacks.
Louis Tremblay Thibault, Michael Walter
ePrint Report
In this work we demonstrate, for the first time, that a full FHE bootstrapping operation can be proven using a SNARK in practice. We do so by designing an arithmetic circuit for the bootstrapping operation and proving it using plonky2. We are able to prove the circuit on an AWS c6i.metal instance in about 20 minutes. The proof size is about 200 kB and verification takes less than 10 ms. As the basis of our bootstrapping operation we use TFHE's programmable bootstrapping, modified in a few places to represent it more efficiently as an arithmetic circuit (while maintaining full functionality and security). To achieve these results in a memory-efficient way, we exploit the structure of the computation and plonky2's ability to efficiently prove its own verification circuit to implement a recursion-based IVC scheme.
Ward Beullens, Lucas Dodgson, Sebastian Faller, Julia Hesse
ePrint Report
An Oblivious Pseudo-Random Function (OPRF) is a two-party protocol for jointly evaluating a Pseudo-Random Function (PRF), where a user has an input x and a server has an input k. At the end of the protocol, the user learns the evaluation of the PRF using key k at the value x, while the server learns nothing about the user's input or output.

OPRFs are a prime tool for building secure authentication and key exchange from passwords, private set intersection, private information retrieval, and many other privacy-preserving systems. While classical OPRFs run as fast as a TLS Handshake, current *quantum-safe* OPRF candidates are still practically inefficient.

In this paper, we propose a framework for constructing OPRFs from post-quantum multi-party computation. The framework captures a family of so-called "2Hash PRFs", which sandwich a function evaluation in between two hashes. The core of our framework is a compiler that yields an OPRF from a secure evaluation of any function that is key-collision resistant and one-more unpredictable. We instantiate this compiler by providing such functions built from Legendre symbols, and from AES encryption. We then give a case-tailored protocol for securely evaluating our Legendre-based function, built from oblivious transfer (OT) and zero-knowledge proofs (ZKP). Instantiated with lattice-based OT and ZKPs, we obtain a quantum-safe OPRF that completes in 0.57 seconds, with less than 1MB of communication.
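The Legendre-based instantiation can be sketched concretely: the PRF bit is the quadratic residuosity of $k+x$ modulo a prime, and the 2Hash structure wraps that evaluation between two hashes. The Python below is a plain (non-oblivious) sketch under hypothetical parameters — in the actual OPRF the middle function is evaluated inside MPC so the server never sees $x$.

```python
import hashlib

P = 2**127 - 1  # a Mersenne prime, illustrative only

def legendre_bit(k, x):
    """Legendre-symbol PRF bit: 1 if k+x is a quadratic residue mod P,
    else 0 (Euler's criterion)."""
    ls = pow((k + x) % P, (P - 1) // 2, P)
    return 1 if ls == 1 else 0

def prf_2hash(k, msg, out_bits=128):
    """2Hash structure: H2(msg, F_k(H1(msg))), where F_k outputs a
    string of Legendre bits at consecutive shifted points."""
    x = int.from_bytes(hashlib.sha256(msg).digest(), "big") % P
    inner = sum(legendre_bit(k, x + i) << i for i in range(out_bits))
    return hashlib.sha256(msg + inner.to_bytes(16, "big")).hexdigest()
```

The outer hash binds the output to the input, which is what the compiler's key-collision resistance and one-more unpredictability requirements are about.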
Nabil Alkeilani Alkadri, Nico Döttling, Sihang Pu
ePrint Report
$n$-out-of-$n$ distributed signatures are a special type of threshold $t$-out-of-$n$ signature. They are created collaboratively by a group of $n$ signers, each holding a share of the secret key. Such signatures have been studied intensively in recent years, motivated by applications such as reducing the risk of compromising secret keys in cryptocurrencies. Towards maintaining security in the presence of quantum adversaries, Damgård et al. (J Cryptol 35(2), 2022) proposed lattice-based constructions of $n$-out-of-$n$ distributed signatures and multi-signatures following the Fiat-Shamir with aborts paradigm (ASIACRYPT 2009). Due to the inherent issue of aborts, the protocols either require increasing their parameters by a factor of $n$, or they suffer from a large number of restarts that grows with $n$. This has a significant impact on their efficiency, even when $n$ is small. Moreover, the protocols use trapdoor homomorphic commitments as a further cryptographic building block, making their deployment in practice not as easy as standard lattice-based Fiat-Shamir signatures. In this work, we present a new construction of $n$-out-of-$n$ distributed signatures designed specifically for applications with a small number of signers. Our construction follows the Fiat-Shamir with aborts paradigm, but solves the problem of a large number of restarts without increasing the parameters by a factor of $n$ or using any further cryptographic primitive. To demonstrate the practicality of our protocol, we provide a software implementation and concrete parameters aiming at 128 bits of security. Furthermore, we select concrete parameters for the construction by Damgård et al. and for the most recent lattice-based multi-signature scheme by Chen (CRYPTO 2023), and show that our approach provides a significant improvement in all efficiency metrics. Our results also show that the multi-signature schemes by Damgård et al. and Chen, as well as a multi-signature variant of our protocol, produce signatures that are no smaller than a naive multi-signature derived from the concatenation of multiple standard signatures.
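The restart blow-up can be quantified: if each signer independently passes rejection sampling with probability $p$, a naive $n$-party run succeeds only when all signers pass at once, so the expected number of repetitions is $(1/p)^n$. A minimal Python illustration (the acceptance probability below is a hypothetical Dilithium-like figure, not taken from the paper):

```python
def expected_repetitions(p_single, n):
    """Expected number of joint protocol runs until all n signers
    simultaneously pass rejection sampling, each accepting
    independently with probability p_single."""
    return (1.0 / p_single) ** n
```

With p_single = 0.25, one signer expects 4 runs, but four signers already expect 256 — the exponential growth that motivates either scaling parameters by $n$ or, as here, redesigning the protocol.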
Manjeet Kaur, Tarun Yadav, Manoj Kumar, Dhananjoy Dey
ePrint Report
In this study, we investigate the newly developed low-energy lightweight block cipher (LELBC), specifically designed for resource-constrained Internet of Things (IoT) devices in smart agriculture. The designers conducted a preliminary differential cryptanalysis of LELBC through mixed-integer linear programming (MILP). This paper further delves into LELBC's differential characteristics in both single- and related-key frameworks using MILP, identifying a nine-round differential characteristic with probability $2^{-60}$ in the single-key framework and a 12-round differential characteristic with probability $2^{-60}$ in the related-key framework.
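Differential characteristics like these are chained from per-S-box transitions read off a difference distribution table (DDT). The Python below computes the DDT of PRESENT's 4-bit S-box as a stand-in — LELBC's actual S-box is not reproduced here.

```python
# PRESENT's 4-bit S-box, a standard lightweight design used here only
# to illustrate the DDT computation.
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

def ddt(sbox):
    """Difference distribution table: ddt[din][dout] counts inputs x
    with S(x) ^ S(x ^ din) == dout.  The best one-round transition
    probability for din != 0 is max_entry / 16; MILP models chain
    these transitions over many rounds."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for din in range(n):
            table[din][sbox[x] ^ sbox[x ^ din]] += 1
    return table
```

For PRESENT the maximal nonzero-difference entry is 4, i.e. a best single-S-box transition probability of $2^{-2}$; a characteristic's overall probability is the product of such per-round transitions.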

15 March 2024

Jens Ernstberger, Jan Lauinger, Yinnan Wu, Arthur Gervais, Sebastian Steinhorst
ePrint Report
Transport Layer Security (TLS) is foundational for safeguarding client-server communication. However, it does not extend integrity guarantees to third-party verification of data authenticity. If a client wants to present data obtained from a server to another party, it cannot convince that party that the data has not been tampered with.

TLS oracles ensure data authenticity beyond the client-server TLS connection, such that clients can obtain data from a server and ensure provenance to any third party, without server-side modifications. Generally, a TLS oracle involves a third party, the verifier, in a TLS session to verify that the data obtained by the client is accurate. Existing protocols for TLS oracles are communication-heavy, as they rely on interactive protocols. We present ORIGO, a TLS oracle with constant communication. Similar to prior work, ORIGO introduces a third party in a TLS session, and provides a protocol to ensure the authenticity of data transmitted in a TLS session, without forfeiting its confidentiality. Compared to prior work, we rely on intricate details specific to TLS 1.3, which allow us to prove correct key derivation, authentication and encryption within a Zero Knowledge Proof (ZKP). This, combined with optimizations for TLS 1.3, leads to an efficient protocol with constant communication in the online phase. Our work reduces online communication by $375 \times$ and online runtime by up to $4.6 \times$, compared to prior work.
Ahmed Bendary, Wendson A. S. Barbosa, Andrew Pomerance, C. Emre Koksal
ePrint Report
With the ongoing advances in machine learning (ML), cybersecurity solutions and security primitives are becoming increasingly vulnerable to successful attacks. Strong physically unclonable functions (PUFs) are a potential solution for providing high resistance to such attacks. In this paper, we propose a generalized attack model that leverages multiple chips jointly to minimize the cloning error. Our analysis shows that the entropy rate across different chips is a relevant measure for the new attack model as well as for multi-bit strong PUF classes. We explain the sources of randomness that affect unpredictability, and possible measures of it, using models of state-of-the-art strong PUFs. Moreover, we utilize min-max entropy estimators to measure the unpredictability of multi-bit strong PUF classes for the first time in the PUF community. Finally, we provide experimental results for a multi-bit strong PUF class, the hybrid Boolean network PUF, showing its high unpredictability and resistance to ML attacks.
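The unpredictability measure at play can be illustrated with the simplest estimator: empirical min-entropy, $-\log_2$ of the most frequent response's relative frequency. This plug-in estimate in Python is only a sketch; the paper's min-max entropy estimators, and bias-corrected estimators in the NIST SP 800-90B style, are more involved.

```python
import math
from collections import Counter

def min_entropy_per_symbol(samples):
    """Empirical min-entropy of a sequence of PUF responses:
    -log2 of the relative frequency of the most likely symbol.
    A crude plug-in estimate with no bias correction."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)
```

A perfectly balanced binary response yields 1 bit per symbol; any bias toward one response value drives the estimate toward 0.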
Aikaterini Mitrokotsa, Sayantan Mukherjee, Mahdi Sedaghat, Daniel Slamanig, Jenit Tomy
ePrint Report
Structure-preserving signatures (SPS) have emerged as an important cryptographic building block, as their compatibility with the Groth-Sahai (GS) NIZK framework allows constructing protocols under standard assumptions with reasonable efficiency. Over the last few years there has been significant interest in the design of threshold signature schemes, but only very recently did Crites et al. (ASIACRYPT 2023) introduce threshold SPS (TSPS) along with a fully non-interactive construction. While this is an important step, their work comes with several limitations. With respect to the construction, they require random oracles, interactive complexity assumptions, and a restriction to so-called indexed Diffie-Hellman message spaces. The latter limits the use of their construction as a drop-in replacement for SPS. When it comes to security, they only support static corruptions and do not allow partial signature queries for the forgery. In this paper, we ask whether it is possible to construct TSPS without such restrictions. We start from an SPS by Kiltz, Pan and Wee (CRYPTO 2015) which has an interesting structure, but thresholdizing it requires some modifications. Interestingly, we can prove it secure in the strongest model (TS-UF-1) for fully non-interactive threshold signatures (Bellare et al., CRYPTO 2022) and even under fully adaptive corruptions. Surprisingly, we can show the latter under a standard assumption without requiring any idealized model, whereas all known constructions of efficient threshold signatures in the discrete logarithm setting require interactive assumptions and idealized models. Concretely, our scheme in type III bilinear groups under the SXDH assumption has signatures consisting of 7 group elements. Compared to the TSPS from Crites et al. (2 group elements), this comes at a cost in efficiency. However, our scheme is secure under standard assumptions, achieves strong and adaptive security guarantees, and supports general message spaces, i.e., it represents a drop-in replacement for many SPS applications. Given these features, the increase in signature size seems acceptable even for practical applications.
Mario Yaksetig
ePrint Report
We introduce a private cryptocurrency design based on the original e-cash protocol. Our proposal allows for private payments on existing blockchain systems. In our design, the issuance of the private cash is transparent and is associated with a blockchain transfer to provide stronger security.
Niklas Nolte, Mohamed Malhou, Emily Wenger, Samuel Stevens, Cathy Yuanchen Li, Francois Charton, Kristin Lauter
ePrint Report
Sparse binary LWE secrets are under consideration for standardization for Homomorphic Encryption and its applications to private computation. Known attacks on sparse binary LWE secrets include the sparse dual attack and the hybrid sparse dual meet-in-the-middle attack, which requires significant memory. In this paper, we provide a new statistical attack with low memory requirements. The attack relies on some initial parallelized lattice reduction. The key observation is that, after lattice reduction is applied to the rows of a q-ary-like embedded random matrix A, the entries with high variance are concentrated in the early columns of the extracted matrix. This allows us to separate out the "hard part" of the LWE secret. We can first solve the sub-problem of finding the "cruel" bits of the secret in the early columns, and then find the remaining "cool" bits in linear time. We use statistical techniques to distinguish distributions to identify both the cruel and the cool bits of the secret. We provide concrete attack timings for recovering secrets in dimensions n = 256, 512, and 768. For the lattice reduction stage, we leverage recent improvements in lattice reduction (flatter) applied in parallel. We also apply our new attack in the RLWE setting for 2-power cyclotomic rings, showing that these RLWE instances are much more vulnerable to this attack than LWE.
Kostas Kryptos Chalkias, Jonas Lindstrøm, Deepak Maram, Ben Riva, Arnab Roy, Alberto Sonnino, Joy Wang
ePrint Report
In the rapidly evolving fields of encryption and blockchain technologies, the efficiency and security of cryptographic schemes significantly impact performance. This paper introduces a comprehensive framework for continuous benchmarking in one of the most popular cryptography Rust libraries, fastcrypto. What makes our analysis unique is the realization that automated benchmarking is not just a performance monitor and optimization tool, but can also be used for cryptanalysis and innovation discovery. Surprisingly, benchmarks can uncover spectacular security flaws and inconsistencies in various cryptographic implementations and standards, while at the same time identifying unique opportunities for innovation not previously known to science, such as providing a) hints for novel algorithms, b) indications for mix-and-match library functions that result in world record speeds, and c) evidence of biased or untested real-world algorithm comparisons in the literature.

Our approach transcends traditional benchmarking methods by identifying inconsistencies in multi-threaded code, which previously resulted in unfair comparisons. We demonstrate the effectiveness of our methodology in identifying the fastest algorithms for specific cryptographic operations like signing, while revealing hidden performance characteristics and security flaws. The process of continuous benchmarking allowed fastcrypto to break many crypto-operations speed records in the Rust language ecosystem.

A notable discovery in our research is the identification of vulnerabilities and unfair speed claims due to missing padding checks in high-performance Base64 encoding libraries. We also uncover insights into algorithmic implementations such as multi-scalar elliptic curve multiplications, which exhibit different performance gains when applied in different schemes and libraries. This was not evident in conventional benchmarking practices. Further, our analysis highlights bottlenecks in cryptographic algorithms where pre-computed tables can be strategically applied, accounting for L1 and L2 CPU cache limitations.

Our benchmarking framework also reveals that certain algorithmic implementations incur additional overheads due to serialization processes, necessitating a refined 'apples to apples' comparison approach. We identified unique performance patterns in some schemes, where efficiency scales with input size, aiding blockchain technologies in optimal parameter selection and data compression.

Crucially, continuous benchmarking serves as a tool for ongoing audit and security assurance. Variations in performance can signal potential security issues during upgrades, such as kleptography, hardware manipulation or supply chain attacks. This was evidenced by critical private key leakage vulnerabilities we found in one of the most popular EdDSA Rust libraries. By providing a dynamic and thorough benchmarking approach, our framework empowers stakeholders to make informed decisions, enhance security measures, and optimize cryptographic operations in an ever-changing digital landscape.
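The Base64 padding issue mentioned above belongs to a known class of malleability: decoders that ignore nonzero trailing bits accept many distinct encodings of the same byte string. The libraries the authors examined are not named here; the Python stdlib decoder below merely demonstrates the class.

```python
import base64

# Two *different* encodings that lenient decoders, including Python's
# default b64decode, map to the same byte string: "QR==" carries
# nonzero trailing bits that a strict (RFC 4648 canonical) decoder
# must reject.
canonical = b"QQ=="
malleated = b"QR=="

def is_canonical(encoding):
    """Reject non-canonical encodings by round-tripping:
    decode, re-encode, and compare."""
    return base64.b64encode(base64.b64decode(encoding)) == encoding
```

When an encoded string is signed or used as a lookup key, such malleability lets an attacker produce a second "valid" encoding of the same payload — the round-trip check closes that gap.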
Guilhem Mureau, Alice Pellet-Mary, Heorhii Pliatsok, Alexandre Wallet
ePrint Report
We formally define the Lattice Isomorphism Problem for module lattices (module-LIP) in a number field $K$. This is a generalization of the problem defined by Ducas, Postlethwaite, Pulles, and van Woerden (Asiacrypt 2022), taking into account the arithmetic and algebraic specificity of module lattices from their representation using pseudo-bases. We also provide the corresponding set of algorithmic and theoretical tools for the future study of this problem in a module setting. Our main contribution is an algorithm solving module-LIP for modules of rank $2$ in $K^2$, when $K$ is a totally real number field. Our algorithm exploits the connection between this problem, relative norm equations and the decomposition of algebraic integers as sums of two squares. For a large class of modules (including $\mathcal{O}_K^2$), and a large class of totally real number fields (including the maximal real subfield of cyclotomic fields) it runs in classical polynomial time in the degree of the field and the residue at 1 of the Dedekind zeta function of the field (under reasonable number theoretic assumptions). We provide proof-of-concept code running over the maximal real subfield of some cyclotomic fields.
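The sum-of-two-squares machinery can be made concrete in the simplest (rational-prime) case with Cornacchia's algorithm; the paper's algorithm works with relative norm equations over $\mathcal{O}_K$, which this Python sketch does not attempt.

```python
import math

def sum_of_two_squares(p):
    """Write a prime p = 1 (mod 4) as a^2 + b^2 via Cornacchia's
    algorithm: find a square root of -1 mod p, then run a Euclidean
    descent until the remainder drops below sqrt(p)."""
    assert p % 4 == 1
    # A square root of -1 mod p: z^((p-1)/4) for any non-residue z.
    x = next(pow(z, (p - 1) // 4, p) for z in range(2, p)
             if pow(z, (p - 1) // 2, p) == p - 1)
    a, b = p, x
    bound = math.isqrt(p)
    while b > bound:
        a, b = b, a % b
    return b, math.isqrt(p - b * b)
```

For example, 13 decomposes as $2^2 + 3^2$ and 29 as $2^2 + 5^2$; in the module setting the analogous decompositions live in the ring of integers of the totally real field.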
Chloé Baïsse, Antoine Moran, Guillaume Goy, Julien Maillard, Nicolas Aragon, Philippe Gaborit, Maxime Lecomte, Antoine Loiseau
ePrint Report
Soft Analytical Side Channel Attacks (SASCA) are a powerful family of Side Channel Attacks (SCA) that can recover secret values with only a small number of traces. Their effectiveness lies in the Belief Propagation (BP) algorithm, which enables efficient computation of the marginal distributions of intermediate values. Post-quantum schemes such as Kyber and, more recently, Hamming Quasi-Cyclic (HQC) have been targets of SASCA. Previous SASCA on HQC focused on Reed-Solomon (RS) codes and successfully retrieved the shared key with a high success rate at high noise levels using a single trace. In this work, we present new SASCA on HQC in which both the shared key and the secret key are targeted. Unlike previous SASCA, we take a closer look at the Reed-Muller (RM) code. The advantage of this choice is that the RM decoder is applied before the RS decoder, which is what makes it possible to attack both keys. We build a factor graph of the Fast Hadamard Transform (FHT) function from the HQC reference implementation of April 2023. The information recovered from BP allows us to retrieve the shared key with a single trace. In addition to the previous SASCA targeting HQC, we also manage to recover the secret key with two chosen-ciphertext attacks, one of which requires a single trace and remains successful up to high noise levels.
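The FHT at the heart of the targeted RM decoder is a short butterfly network — each layer's intermediate values are candidate nodes of a SASCA factor graph. A reference Python implementation (our own sketch, not the HQC code):

```python
def fht(v):
    """Fast Hadamard Transform of a length-2^k list: k butterfly
    layers of sum/difference pairs.  In RM decoding the transform of
    the (bipolar) received word localizes the most likely codeword."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v
```

Each addition and subtraction above is a concrete leaking operation in hardware, which is why a factor graph over these intermediates feeds BP so well.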
Enrico Piccione
ePrint Report
The threshold implementation technique was proposed in 2006 by Nikova et al. as a countermeasure to mitigate cryptographic side-channel attacks on hardware implementations when the effect of glitches is taken into account. This technique is based on Boolean sharing (also called masking) and was developed for securing symmetric ciphers such as AES. In 2023, Piccione et al. proposed a general construction of threshold implementations that is universal for S-boxes that are bijective vectorial Boolean functions (functions from a binary vector space $\mathbb{F}_{2}^n$ into itself). In this paper, we further generalize the construction and propose a general theory of threshold implementations for any type of S-box. We investigate the case of functions (not necessarily bijective) defined between two finite Abelian groups, using the definition of threshold implementation given by Dhooghe et al. in 2019 with additive sharing. To show that this generalized notion is as useful as the one for Boolean sharing, we prove that many classical results still hold. An important tool in this theory is the notion of functional degree, introduced by Aichinger and Moosbauer in 2021, which generalizes the algebraic degree of a vectorial Boolean function. We show that if $F$ has functional degree (at most) $d$ and the cardinality of the domain is divisible by the cardinality of the codomain, then $F$ admits a threshold implementation $\mathcal{F}$ with $s\geq d+2$ shares in input and $d+2$ shares in output. Moreover, we provide a complete overview of the available tools for studying the functional degree and of how to represent such functions using an Integer-Valued (IV) polynomial representation. We then apply our theory to the following applications: defining inner product masking in our setting, providing a threshold implementation of any multiplication map, and computing the functional degree and the IV polynomial representations of the conversion maps between $\mathbb{F}_p^n$ and $\mathbb{Z}_{p^n}$.
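For the Boolean special case, the flavor of such sharings can be seen in the classic three-share threshold implementation of an AND gate going back to Nikova et al.: correctness and non-completeness hold (uniformity is a separate concern), while the paper's general construction over Abelian groups uses $d+2$ shares rather than these three.

```python
from itertools import product

def ti_and(xs, ys):
    """Three-share threshold implementation of z = x & y over GF(2),
    with x = x1^x2^x3 and y = y1^y2^y3.  Output share i never touches
    input shares of index i (non-completeness), so a glitch in one
    share's circuit cannot leak the unshared value."""
    x1, x2, x3 = xs
    y1, y2, y3 = ys
    z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)   # independent of share 1
    z2 = (x3 & y3) ^ (x3 & y1) ^ (x1 & y3)   # independent of share 2
    z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)   # independent of share 3
    return z1, z2, z3
```

XOR-ing the three output shares covers every cross term $x_i \& y_j$ exactly once, which is why the sharing is correct for all inputs.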
Arpan Jati, Naina Gupta, Anupam Chattopadhyay, Somitra Kumar Sanadhya
ePrint Report
Side-channel analysis has become a cornerstone of modern hardware security evaluation for cryptographic accelerators. Recently, these techniques have also been applied in fields such as AI and machine learning to investigate possible threats. Security evaluations rely on standard test setups, including commercial and open-source evaluation boards such as SASEBO/SAKURA and ChipWhisperer. However, with shrinking design footprints and overlapping tasks on the same platforms, the quality of the side-channel information as well as the speed of data capture can significantly influence a security assessment.

In this work, we designed EFFLUX-F2, a hardware security evaluation board to improve the quality and speed of side-channel information capture. We also designed a measurement setup to benchmark the signal differences between target boards. Multiple experimental evaluations, including noise analysis, CPA and TVLA, performed on EFFLUX-F2 and competing evaluation boards showcase the significant superiority of our design in all aspects.
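Of the listed evaluations, TVLA is the easiest to sketch: it compares two sets of traces with Welch's t-test and flags leakage when $|t|$ exceeds roughly 4.5. A single-sample-point Python version (board-specific capture details omitted):

```python
import math

def welch_t(traces_a, traces_b):
    """Welch's t-statistic between two sets of single-point leakage
    measurements (e.g. fixed-vs-random TVLA classes); |t| > 4.5 is
    the usual failure threshold."""
    na, nb = len(traces_a), len(traces_b)
    ma = sum(traces_a) / na
    mb = sum(traces_b) / nb
    va = sum((x - ma) ** 2 for x in traces_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in traces_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)
```

In a full evaluation this statistic is computed per sample point across the trace; a lower-noise board such as the one described should push $|t|$ for genuinely leaky points higher with fewer traces.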