IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
11 September 2024
Nazlı Deniz TÜRE, Murat CENK
Digital signatures ensure authenticity and secure communication. They are used to verify the integrity and authenticity of signed documents and are widely utilized in fields such as information technology, finance, education, and law. They are crucial in securing servers against cyber attacks and in authenticating connections between clients and servers. Additionally, encryption is used in many areas, such as secure communication and cloud, server, and database security, to ensure data confidentiality. Performing batch encryption, signature generation, and signature verification simultaneously and efficiently is a beneficial approach for many systems. This work focuses on efficient batch signature generation with Dilithium, batch verification of signatures from the same user with CRYSTALS-Dilithium (NIST's post-quantum digital signature standard), and batch encryption to a single user with CRYSTALS-Kyber (NIST's post-quantum encryption/KEM standard). One of the main operations of Dilithium and Kyber is the matrix-vector product with polynomial entries, so the naive approach to generate/verify $m$ signatures with Dilithium (or encrypt $m$ messages with Kyber), where $m>1$, is to perform $m$ such multiplications. In this paper, we propose to use efficient matrix multiplications of sizes greater than four to generate/verify $m$ signatures with Dilithium and greater than two to encrypt $m$ messages with Kyber. To this end, we design batch algorithms that transform the polynomial matrix-vector multiplication in Dilithium's and Kyber's structures into polynomial matrix-matrix multiplication. We determine the batch numbers and the sizes of the matrices to be multiplied based on the number of repetitions of Dilithium's signature algorithm, and we propose batch versions of the Dilithium verification and Kyber encryption algorithms. Moreover, we analyze many efficient matrix-matrix multiplication algorithms, such as Strassen-like multiplications and commutative matrix multiplications, to design the best algorithms that are compatible with the specified dimensions and yield improvements. Various multiplication formulas are derived for the different security levels of Dilithium signature generation, Dilithium verification, and Kyber encryption. Improvements of up to 28.1%, 33.3%, and 31.5% in arithmetic complexity are observed at the three security levels of Dilithium's signature generation, respectively. We also implement the proposed batch Dilithium signature algorithm together with the efficient multiplication algorithms, obtaining improvements of 34.22%, 17.40%, and 10.15% in CPU cycle counts for the three security levels. The multiplication formulas used for batch Dilithium signature generation are also applied to batch Dilithium verification: at the three security levels, improvements in arithmetic complexity of up to 28.13%, 33.33%, and 31.25% are observed, and improvements of 49.88%, 56.60%, and 61.08% in CPU cycle counts are achieved, respectively. Implementing Kyber batch encryption with the efficient multiplication algorithms yields improvements of 12.50%, 22.22%, and 28.13% in arithmetic complexity, as well as 22.34%, 24.07%, and 30.83% in CPU cycle counts, for the three security levels.
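The core batching idea above can be made concrete in a few lines: instead of computing $m$ separate matrix-vector products $A \cdot y_i$ over a polynomial ring, the vectors are stacked as columns of a matrix $Y$ and a single matrix-matrix product $A \cdot Y$ is computed, at which point Strassen-like or commutative multiplication formulas become applicable. The Python sketch below is an illustration only, with toy parameters (N, Q, K, L, M are not the Dilithium or Kyber parameters) and schoolbook negacyclic multiplication; it merely checks that the batched product agrees with the naive one.

```python
# Toy illustration of batching polynomial matrix-vector products into one
# matrix-matrix product over R_Q = Z_Q[x]/(x^N + 1).
# N, Q, K, L, M below are illustrative toy values, not Dilithium/Kyber parameters.
import random

N, Q = 8, 257        # ring degree and modulus
K, L, M = 4, 4, 4    # A is K x L over R_Q; we batch M vectors of length L

def poly_mul(a, b):
    """Schoolbook multiplication in Z_Q[x]/(x^N + 1) (negacyclic wrap-around)."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                c[k] = (c[k] + ai * bj) % Q
            else:                          # x^N = -1
                c[k - N] = (c[k - N] - ai * bj) % Q
    return c

def poly_add(a, b):
    return [(x + y) % Q for x, y in zip(a, b)]

def rand_poly():
    return [random.randrange(Q) for _ in range(N)]

A = [[rand_poly() for _ in range(L)] for _ in range(K)]     # public matrix
Ys = [[rand_poly() for _ in range(L)] for _ in range(M)]    # M vectors y_1..y_M

def matvec(A, y):
    """Naive A * y over R_Q."""
    out = []
    for row in A:
        acc = [0] * N
        for a_ij, y_j in zip(row, y):
            acc = poly_add(acc, poly_mul(a_ij, y_j))
        out.append(acc)
    return out

naive = [matvec(A, y) for y in Ys]          # M separate matrix-vector products

# Batched: stack the vectors as columns of an L x M matrix Y and compute the
# single K x M product A * Y.  Fast matrix-multiplication formulas
# (Strassen-like or commutative schemes) can then be applied to this product.
Y = [[Ys[m][l] for m in range(M)] for l in range(L)]
batched = [[None] * M for _ in range(K)]
for i in range(K):
    for m in range(M):
        acc = [0] * N
        for l in range(L):
            acc = poly_add(acc, poly_mul(A[i][l], Y[l][m]))
        batched[i][m] = acc

# The batched result agrees column-by-column with the naive computation.
assert all(batched[i][m] == naive[m][i] for i in range(K) for m in range(M))
```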
Lars Ran, Simona Samardjiska
Recently, there has been a lot of interest in improving the understanding of the practical hardness of the 3-Tensor Isomorphism (3-TI) problem, which, given two 3-tensors, asks for an isometry between the two. The current state-of-the-art for solving this problem is the algebraic algorithm of Ran et al. '23 and the graph-theoretic algorithm of Narayanan et al. '24 that have both slightly reduced the security of the signature schemes MEDS and ALTEQ, based on variants of the 3-TI problem (Matrix Code Equivalence (MCE) and Alternating Trilinear Form Equivalence (ATFE) respectively).
In this paper, we propose a new combined technique for solving the 3-TI problem. Our algorithm, as typically done in graph-based algorithms, looks for an invariant in the graphs of the isomorphic tensors that can be used to recover the secret isometry. However, contrary to usual combinatorial approaches, our approach is purely algebraic. We model the invariant as a system of non-linear equations and solve it. Using this modelling we are able to find very rare invariant objects in the graphs of the tensors — cycles of length 3 (triangles) — that exist with probability approximately $1/q$. For solving the system of non-linear equations we use Gröbner-basis techniques adapted to tri-graded polynomial rings. We analyze the algorithm theoretically, and we provide lower and upper bounds on its complexity. We further provide experimental support for our complexity claims. Finally, we describe two dedicated versions of our algorithm tailored to the specifics of the MCE and the ATFE problems.
The implications of our algorithm are improved cryptanalysis of both MEDS and ALTEQ for the cases when a triangle exists, i.e. in approximately $1/q$ of the cases. While for MEDS, we only marginally reduce the security compared to previous work, for ALTEQ our results are much more significant with at least 60 bits improvement compared to previous work for all security levels. For Level I parameters, our attack is practical, and we are able to recover the secret key in only 1501 seconds. The code is available for testing and verification of our results.
Felix Linker, Ralf Sasse, David Basin
We present the formal verification of Apple’s iMessage PQ3, a highly performant, device-to-device messaging protocol offering strong security guarantees even against an adversary with quantum computing capabilities. PQ3 leverages Apple’s identity services together with a custom, post-quantum secure initialization phase and afterwards it employs a double ratchet construction in the style of Signal, extended to provide post-quantum, post-compromise security.
We present a detailed formal model of PQ3, a precise specification of its fine-grained security properties, and machine-checked security proofs using the TAMARIN prover. Particularly novel is the integration of post-quantum secure key encapsulation into the relevant protocol phases and the detailed security claims along with their complete formal analysis. Our analysis covers both key ratchets, including unbounded loops, which was believed by some to be out of scope of symbolic provers like TAMARIN (it is not!).
Erki Külaots, Toomas Krips, Hendrik Eerikson, Pille Pullonen-Raudvere
Multiparty computation (MPC) is an important field of cryptography that deals with protecting the privacy of data while allowing computation on that data. A key part of MPC is that the parties involved have correlated randomness that they can use to make the computation or the communication between them more efficient, while still preserving the privacy of the data. Examples of these correlations include random oblivious transfer (OT) correlations, oblivious linear-function evaluation (OLE) correlations, multiplication triples (also known as Beaver triples), and one-time truth tables. Multi-point function secret sharing (FSS) has been shown to be a great building block for pseudorandom correlation generators. The main question is how to construct fast and efficient multi-point FSS schemes. Here we propose a natural generalization of the scheme of Boyle et al. (2016) using a tree structure, a pseudorandom generator, and systems of linear equations.
Our schemes SLAMP-FSS and SLAMPR-FSS are more efficient in the evaluation phase than previously proposed multi-point FSS schemes, while also being more flexible and comparable in other efficiency parameters.
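For intuition on what a multi-point FSS scheme must achieve, the sketch below shows the naive baseline it improves upon: additively secret-sharing the entire truth table of a multi-point function between two parties, which gives correct local evaluation but keys of size linear in the domain. The parameters and helper names are illustrative; this is not SLAMP-FSS or SLAMPR-FSS.

```python
# Naive two-party secret sharing of a multi-point function f: [D] -> Z_Q,
# where f is zero except on a few programmed points.  Each party receives a
# full random-looking table of size D, so key size is O(D); multi-point FSS
# schemes aim for much shorter keys.  Toy parameters for illustration only.
import random

D, Q = 32, 2**16                 # domain size and output modulus
points = {5: 7, 17: 123}         # the multi-point function: x -> value

f = [points.get(x, 0) for x in range(D)]          # plaintext truth table
k0 = [random.randrange(Q) for _ in range(D)]      # party 0's key: random table
k1 = [(f[x] - k0[x]) % Q for x in range(D)]       # party 1's key: correction table

def eval_share(key, x):
    """Each party evaluates its share locally; the shares add up to f(x) mod Q."""
    return key[x]

for x in range(D):
    assert (eval_share(k0, x) + eval_share(k1, x)) % Q == f[x]
```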
Yekaterina Podiatchev, Ariel Orda, Ori Rottenstreich
Payment channel networks (PCNs) are a leading method to scale the transaction throughput in cryptocurrencies. Two participants can use a bidirectional payment channel for making multiple mutual payments without committing them to the blockchain. Opening a payment channel is a slow operation that involves an on-chain transaction locking a certain amount of funds. These aspects limit the number of channels that can be opened or maintained. Users may route payments through a multi-hop path and thus avoid opening and maintaining a channel for each new destination. Unlike regular networks, in PCNs capacity depends on the usage patterns and, moreover, channels may become unidirectional. Since payments often fail due to channel depletion, a protection scheme to overcome failures is of interest. We define the stopping time of a payment channel as the time at which the channel becomes depleted. We analyze the mean stopping time of a channel as well as that of a network with a set of channels and examine the stopping time of channels in particular topologies. We then propose a scheme for optimizing the capacity distribution among the channels in order to increase the minimal stopping time in the network. We conduct experiments and demonstrate the accuracy of our model and the efficiency of the proposed optimization scheme.
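The notion of a channel's stopping time can be illustrated with a toy Monte-Carlo simulation: unit payments arrive in a random direction at each step, and the stopping time is the first step at which a payment cannot be forwarded because one side's balance is exhausted. This random-walk sketch and its parameters are assumptions for illustration only, not the model or the optimization scheme analyzed in the paper.

```python
# Toy Monte-Carlo estimate of a payment channel's stopping time: the channel
# holds balances (b_ab, b_ba), unit payments arrive in a random direction each
# step, and the channel is "depleted" once a payment cannot be forwarded.
import random

def stopping_time(b_ab, b_ba, p_ab=0.5, max_steps=10**6):
    t = 0
    while t < max_steps:
        t += 1
        if random.random() < p_ab:       # payment from A to B
            if b_ab == 0:
                return t                 # depleted in this direction
            b_ab -= 1
            b_ba += 1
        else:                            # payment from B to A
            if b_ba == 0:
                return t
            b_ba -= 1
            b_ab += 1
    return max_steps

trials, capacity = 2000, 20
mean_t = sum(stopping_time(capacity // 2, capacity // 2) for _ in range(trials)) / trials
print(f"estimated mean stopping time for a balanced channel of capacity {capacity}: {mean_t:.1f}")
```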
Madické Diadji Mbodj, Anis Bkakria
We present the first Key Policy Attribute-Based Encryption (KP-ABE) scheme employing isogeny-based cryptography through class group actions, specifically utilizing the CSI-FiSh instantiation and pairing groups. We introduce a new assumption, denoted Isog-DLin, which combines the isogeny and DLin assumptions. We propose the following constructions: a small-universe KP-ABE and a large-universe KP-ABE under the Isog-DBDH assumption, and a small-universe KP-ABE under the Isog-DLin assumption. In these constructions, the master key is designed to be secure against quantum computer attacks, while the ciphertext remains secure against classical computer attacks. This dual-layered approach ensures robust security across classical and quantum computational paradigms, addressing current and potential future cryptographic challenges.
Kai Du, Jianfeng Wang, Jiaojiao Wu, Yunling Wang
Secure join queries over encrypted databases, the most expressive class of SQL queries, have attracted extensive attention recently. The state-of-the-art JXT scheme (Jutla et al., ASIACRYPT 2022) enables join queries on an encrypted relational database without pre-computing all possible joins. However, JXT can only support join queries over two tables (in an encrypted database) with high-entropy join attributes.
In this paper, we propose JXT+, an equi-join query protocol over two tables that allows join attributes with arbitrary names, whereas JXT requires the join attributes to have identical names. JXT+ reduces the query complexity from $O(\ell_1 \cdot \ell_2)$ to $O(\ell_1)$ compared to JXT, where $\ell_1$ and $\ell_2$ denote the numbers of matching records in the two tables, respectively. Furthermore, we present JXT++, the first scheme supporting equi-join queries across three or more tables over an encrypted database without pre-computation. Specifically, JXT++ supports joins on arbitrary attributes, i.e., all attributes (even low-entropy ones) can be candidates for joins, whereas JXT requires high-entropy join attributes. In addition, JXT++ can alleviate sub-query leakage on three or more tables, hiding the leakage from the matching records of a two-table join.
Finally, we implement and compare our proposed schemes with the state-of-the-art JXT. The experimental results demonstrate that both of our schemes are superior to JXT in search and storage costs. In particular, JXT+ (resp., JXT++) brings a saving of 49% (resp., 68%) in server storage cost and achieves a speedup of 51.7$\times$ (resp., 54.3$\times$) in search latency.
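The complexity gap claimed above can be understood through a plaintext analogy: a nested-loop equi-join inspects every pair of candidate records, roughly $\ell_1 \cdot \ell_2$ work, while indexing one table on the join attribute reduces the query to about $\ell_1$ probes. The toy Python sketch below illustrates only this counting argument; it says nothing about the encrypted-search machinery of JXT+ or JXT++.

```python
# Plaintext analogy for the query-complexity gap: a nested-loop equi-join
# touches every pair of records (O(l1 * l2) comparisons), whereas a hash-based
# probe touches each record of the first table once (O(l1) probes).
t1 = [{"emp_id": i, "dept": f"d{i % 3}"} for i in range(6)]          # l1 records
t2 = [{"dept_id": f"d{j}", "name": f"Dept-{j}"} for j in range(3)]   # l2 records

# O(l1 * l2): compare every pair.
pairs_checked, nested = 0, []
for r1 in t1:
    for r2 in t2:
        pairs_checked += 1
        if r1["dept"] == r2["dept_id"]:
            nested.append((r1["emp_id"], r2["name"]))

# O(l1) probes after indexing t2 once on the join attribute.
index = {r2["dept_id"]: r2 for r2 in t2}
probes, hashed = 0, []
for r1 in t1:
    probes += 1
    r2 = index.get(r1["dept"])
    if r2 is not None:
        hashed.append((r1["emp_id"], r2["name"]))

assert sorted(nested) == sorted(hashed)
print(pairs_checked, probes)   # 18 pair comparisons vs 6 probes
```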
04 September 2024
Shibam Mukherjee, Christian Rechberger, Markus Schofnegger
The area of modern zero-knowledge proof systems has seen a significant rise in popularity over the last couple of years, with new techniques and optimized constructions emerging on a regular basis.
As the field matures, the aspect of implementation attacks becomes more relevant; however, side-channel attacks on zero-knowledge proof systems have received surprisingly little treatment so far. In this paper we give an overview of potential attack vectors and show that some of the underlying finite field libraries, and implementations of heavily used components like hash functions, are vulnerable to cache attacks on CPUs.
On the positive side, we demonstrate that the computational overhead to protect against these attacks is relatively small.
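As a generic illustration of the class of leakage involved (not code from any of the audited libraries), the sketch below contrasts a modular addition whose final reduction branches on secret-dependent data with a branchless variant that selects the result via masking. In compiled finite-field code, such secret-dependent branches and table lookups are what cache and timing attacks exploit; Python big integers are of course not constant-time themselves, so this only shows the pattern.

```python
# Secret-dependent control flow vs. a branchless reduction in field arithmetic.
# Illustrative only: constant-time C implementations follow the same masking idea.
import random

P = (1 << 255) - 19        # an example prime field modulus (illustrative)

def add_branching(a, b):
    s = a + b
    if s >= P:             # branch taken or not depending on secret data
        s -= P
    return s

def add_branchless(a, b):
    d = (a + b) - P
    mask = d >> 300        # arithmetic shift: -1 (all ones) if d < 0, else 0
    return d + (P & mask)  # add P back only when the subtraction went negative

for _ in range(1000):
    a, b = random.randrange(P), random.randrange(P)
    assert add_branching(a, b) == add_branchless(a, b)
```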
Tomáš Gerlich, Jakub Breier, Pavel Sikora, Zdeněk Martinásek, Aron Gohr, Anubhab Baksi, Xiaolu Hou
The see-in-the-middle (SITM) attack combines differential cryptanalysis and the ability to observe differential patterns in the side-channel leakage traces to reveal the secret key of SPN-based ciphers. While SITM presents a fresh perspective to side-channel analysis and allows attacks on deeper cipher rounds, there are practical difficulties that come with this method. First, one must perform a visual inspection of millions of power traces. Second, there is a strong requirement to reduce noise to a minimum, achieved by averaging over 1000 traces in the original work, in order to see the patterns. Third, the presence of a jitter-based countermeasure greatly affects pattern identification, making visual inspection infeasible. In this paper we aim to tackle these difficulties by using a machine learning approach denoted DL-SITM (deep learning SITM). The fundamental idea of our approach is that, while a collision obscured by noise is imperceptible in a manual inspection, a powerful deep learning model can identify it, even when a jitter-based countermeasure is in place. As we show with a practical experiment, the proposed DL-SITM approach can distinguish the two valid differentials from over 4M differential traces with only six false positives. Extrapolating from the parameters of this experiment, we get a rough estimate of $2^{43}$ key candidates for the post-processing step of our attack, which places it easily in the practical range. At the same time, we show that even with a jitter countermeasure shifting the execution by $\pm15$ samples, the testing F1-score stays relatively high (0.974).
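The detection task can be mimicked end to end on synthetic data: traces either contain a faint, jittered leakage pattern or pure noise, and a small neural network is trained to tell them apart. The sketch below assumes numpy and scikit-learn are available and uses made-up trace parameters; it demonstrates the general DL-SITM idea of learning jitter-tolerant pattern detectors, not the authors' network or their measured traces.

```python
# Synthetic sketch of the DL-SITM idea: traces either contain a small
# differential "pattern" (class 1) or pure noise (class 0), with random jitter
# shifting the pattern's position.  A small neural network learns to detect it.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, trace_len, jitter = 4000, 200, 15

def make_trace(has_pattern):
    trace = rng.normal(0.0, 1.0, trace_len)             # measurement noise
    if has_pattern:
        pos = 100 + rng.integers(-jitter, jitter + 1)   # jittered location
        trace[pos:pos + 5] += 1.5                        # faint leakage bump
    return trace

y = rng.integers(0, 2, n_samples)
X = np.stack([make_trace(label) for label in y])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```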
Shuichi Hirahara, Zhenjian Lu, Igor C. Oliveira
We introduce $\mathsf{pKt}$ complexity, a new notion of time-bounded Kolmogorov complexity that can be seen as a probabilistic analogue of Levin's $\mathsf{Kt}$ complexity. Using $\mathsf{pKt}$ complexity, we upgrade two recent frameworks that characterize one-way functions ($\mathsf{OWF}$) via symmetry of information and meta-complexity, respectively. Among other contributions, we establish the following results:
- $\mathsf{OWF}$ can be based on the worst-case assumption that $\mathsf{BPEXP}$ is not contained infinitely often in $\mathsf{P}/\mathsf{poly}$ if the failure of symmetry of information for $\mathsf{pKt}$ in the $\textit{worst-case}$ implies its failure on $\textit{average}$.
- $\mathsf{OWF}$ exist if and only if the average-case easiness of approximating $\mathsf{pKt}$ with $\textit{two-sided}$ error implies its (mild) average-case easiness with $\textit{one-sided}$ error.
Previously, in a celebrated result, Liu and Pass (CRYPTO 2021 and CACM 2023) proved that one can base (infinitely-often) $\mathsf{OWF}$ on the assumption that $\mathsf{EXP} \nsubseteq \mathsf{BPP}$ if and only if there is a reduction from computing $\mathsf{Kt}$ on average with $\textit{zero}$ error to computing $\mathsf{Kt}$ on average with $\textit{two-sided}$ error. In contrast, our second result shows that closing the gap between two-sided error and one-sided error average-case algorithms for approximating $\mathsf{pKt}$ is both necessary and sufficient to $\textit{unconditionally}$ establish the existence of $\mathsf{OWF}$.
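For readers unfamiliar with these measures, the display below recalls Levin's $\mathsf{Kt}$ complexity, which charges a program for both its length and the logarithm of its running time; the second line is one natural way to randomize it (our paraphrase for intuition, not necessarily the paper's exact definition of $\mathsf{pKt}$), where a short program only has to exist with probability at least 2/3 over a shared random string $w$.

```latex
\mathsf{Kt}(x) = \min_{p,\,t}\bigl\{\, |p| + \lceil \log t \rceil \;:\; U(p) \text{ outputs } x \text{ within } t \text{ steps} \,\bigr\},

\mathsf{pKt}(x) \approx \min_{k,\,t}\Bigl\{\, k + \lceil \log t \rceil \;:\; \Pr_{w}\bigl[\exists\, p,\ |p| \le k,\ U(p,w) \text{ outputs } x \text{ within } t \text{ steps}\bigr] \ge \tfrac{2}{3} \,\Bigr\}.
```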
Camille Nuoskala, Hossein Abdinasibfar, Antonis Michalas
Functional Encryption (FE) is a cryptographic technique established to guarantee data privacy while allowing the retrieval of specific results from the data.
While traditional decryption methods rely on a secret key disclosing all the data, FE introduces a more subtle approach. The key generation algorithm produces function-specific decryption keys that can be adaptively provided based on policies. Such adaptive access control is a desirable feature for privacy-preserving techniques. Generic schemes have been designed to run basic functions, such as linear regression. However, they often provide a narrow set of outputs, resulting in a lack of thorough analysis. The bottom line is that, despite significant research, FE still requires appropriate constructions to unleash its full potential in securely analyzing data and providing more insights. In this article, we introduce SPADE, a novel FE scheme that features multiple users and offers fine-grained access control through partial decryption of the ciphertexts. Unlike existing FE schemes, our construction also supports qualitative data, such as genomics, expanding the applications of privacy-preserving analysis and enabling a comprehensive study of the data. SPADE is a significant advancement that balances privacy and data analysis, with clear implications in healthcare and finance.
To verify its applicability, we conducted extensive experiments on datasets used in sleep medicine (hypnogram data) and DNA analysis (genomic records).
Tobias Frauenschläger, Jürgen Mottok
In recent years, cybersecurity has become relevant for Operational Technology (OT) as well. Critical systems like industrial automation systems or transportation systems face new threats and therefore require the implementation of thorough security measures. Regulations further mandate the deployment and regular verification of these security measures. However, OT systems differ from the well-known systems of classic Information Technology (IT): mission times span decades, updates are infrequent and applied only during on-site maintenance, and diverse devices offer varying support for security measures. The growing field of crypto-agility examines approaches to integrate security measures in an agile and flexible way, making updates easier and therefore encouraging their more frequent deployment. This paper contributes to this research field in the context of secure communication in two ways. We first examine the current state of crypto-agility by providing an overview of existing measures for OT systems. Then, we propose a new architecture concept with different deployment approaches to integrate security measures in a crypto-agile way. Based on a security library with a generic interface and a flexible proxy application, our architecture is capable of securing both new OT systems and existing ones via retrofit.
Eden Aldema Tshuva, Elette Boyle, Ran Cohen, Tal Moran, Rotem Oshman
The field of distributed certification is concerned with certifying properties of distributed networks, where the communication topology of the network is represented as an arbitrary graph; each node of the graph is a separate processor, with its own internal state. To certify that the network satisfies a given property, a prover assigns each node of the network a certificate, and the nodes then communicate with one another and decide whether to accept or reject. We require soundness and completeness: the property holds if and only if there exists an assignment of certificates to the nodes that causes all nodes to accept. Our goal is to minimize the length of the certificates, as well as the communication between the nodes of the network. Distributed certification has been extensively studied in the distributed computing community, but it has so far only been studied in the information-theoretic setting, where the prover and the network nodes are computationally unbounded.
In this work we introduce and study computationally bounded distributed certification: we define locally verifiable distributed $\mathsf{SNARG}$s ($\mathsf{LVD}$-$\mathsf{SNARG}$s), which are an analog of SNARGs for distributed networks, and are able to circumvent known hardness results for information-theoretic distributed certification by requiring both the prover and the verifier to be computationally efficient (namely, PPT algorithms).
We give two $\mathsf{LVD}$-$\mathsf{SNARG}$ constructions: the first allows us to succinctly certify any network property in $\mathsf{P}$, using a global prover that can see the entire network; the second construction gives an efficient distributed prover, which succinctly certifies the execution of any efficient distributed algorithm. Our constructions rely on non-interactive batch arguments for $\mathsf{NP}$ ($\mathsf{BARG}$s) and on $\mathsf{RAM}$ $\mathsf{SNARG}$s, which have recently been shown to be constructible from standard cryptographic assumptions.
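To make "distributed certification" concrete, here is the classic information-theoretic toy example of a proof-labeling scheme, certifying that the network graph is bipartite with a one-bit certificate per node. It is unrelated to the cryptographic LVD-SNARG constructions above and is included only to illustrate the certify-then-locally-verify pattern.

```python
# Toy information-theoretic distributed certification (a proof-labeling scheme).
# Property: "the network graph is bipartite".
# Certificate: one bit per node (its side of the bipartition).
# Verification: each node accepts iff every neighbour holds the other bit.

def certify_bipartite(adj):
    """Honest prover: try to 2-colour the graph; return None if impossible."""
    colour = {}
    for start in adj:
        if start in colour:
            continue
        colour[start] = 0
        stack = [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]
                    stack.append(v)
                elif colour[v] == colour[u]:
                    return None          # odd cycle: the property does not hold
    return colour

def node_accepts(u, adj, cert):
    return all(cert[v] != cert[u] for v in adj[u])

even_cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # bipartite
cert = certify_bipartite(even_cycle)
assert cert is not None and all(node_accepts(u, even_cycle, cert) for u in even_cycle)

odd_cycle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}               # not bipartite
# The honest prover fails; indeed, on an odd cycle no bit assignment makes
# every node accept, which is exactly the soundness requirement.
assert certify_bipartite(odd_cycle) is None
```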
Sebastian Faller, Tobias Handirk, Julia Hesse, Máté Horváth, Anja Lehmann
Password-protected key retrieval (PPKR) enables users to store and retrieve high-entropy keys from a server securely. The process is bootstrapped from a human-memorizable password only, addressing the challenge of how end-users can manage cryptographic key material. The core security requirement is protection against a corrupt server, which should not be able to learn the key or offline-attack it through the password protection. PPKR is deployed at a large scale with the WhatsApp Backup Protocol (WBP), allowing users to access their encrypted messaging history when switching to a new device. Davies et al. (CRYPTO'23) formally analyzed the WBP, proving that it satisfies most of the desired security properties. The WBP uses the OPAQUE protocol for password-based key exchange as a building block and relies on the server using a hardware security module (HSM) for most of its protection. In fact, the security analysis assumes that the HSM is incorruptible, rendering most of the heavy cryptography in the WBP obsolete. In this work, we explore how provably secure and efficient PPKR can be built that either relies strongly on an HSM, but then takes full advantage of it, or requires weaker trust assumptions at the price of more advanced cryptography. To this end, we expand the definitional work of Davies et al. to allow the analysis of PPKR with fine-grained HSM corruption, such as leakage of user records or attestation keys. For each scenario, we aim to give minimal PPKR solutions. For the strongest corruption setting, namely a fully corrupted HSM, we propose a protocol with a simpler design and better efficiency than the WBP. We also fix an attack related to client authentication that was identified by Davies et al.
René Rodríguez Aldama, Enes Pasalic, Fengrong Zhang, Yongzhuang Wei
In this article, we derive the weight distribution of linear codes stemming from a subclass of (vectorial) $p$-ary plateaued functions (for a prime $p$), which includes all explicitly known examples of weakly and non-weakly regular plateaued functions. This construction of linear codes is referred to in the literature as the first generic construction. First, we partition the class of $p$-ary plateaued functions into three classes $\mathscr{C}_1, \mathscr{C}_2,$ and $\mathscr{C}_3$, according to the behavior of their dual function $f^*$. Using these classes, we refine the results presented in a series of articles \cite{Mesnager2017, MesOzSi, Pelen2020, RodPasZhaWei, WeiWangFu}. Namely, we derive the full weight distributions of codes stemming from all $s$-plateaued functions for $n+s$ odd (parametrized by the weight of the dual, $wt(f^*)$), whereas for $n+s$ even, the weight distributions are derived for the class of $s$-plateaued functions in $\mathscr{C}_1$, parametrized by two parameters (including $wt(f^*)$ and a related parameter $Z_0$). Additionally, we provide further results on the weight distributions of codes stemming from functions in subclasses of the three classes. The exact derivation of such distributions is achieved by using some well-known equations over finite fields to count certain dual preimages. In order to improve the dimension of these codes, we then study the vectorial case, thus providing the weight distributions of a few codes associated to known vectorial plateaued functions and obtaining codes with parameters $[p^n-1, 2n, p^n-p^{n-1} - p^{(n+s-2)/2}(p-1)]$. For the first time, we provide the full weight distributions of codes from (a subclass of) vectorial $p$-ary plateaued functions; this class includes all known explicit examples in the literature. The obtained codes are minimal and self-orthogonal in virtually all cases.
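For very small parameters, weight distributions can simply be computed by brute force, which is a handy sanity check against closed-form results such as those derived above. The generic Python utility below enumerates all codewords generated by a given generator matrix over $\mathbb{F}_p$; the tetracode example is standard, and the function is our own illustrative helper, not code from the paper.

```python
# Brute-force weight distribution of a small linear code over F_p, given a
# generator matrix.  Generic sanity check for small parameters only.
from itertools import product
from collections import Counter

def weight_distribution(G, p):
    """G: k x n generator matrix over F_p (list of rows); enumerates all p^k codewords."""
    k, n = len(G), len(G[0])
    dist = Counter()
    for msg in product(range(p), repeat=k):
        cw = [sum(msg[i] * G[i][j] for i in range(k)) % p for j in range(n)]
        dist[sum(1 for c in cw if c != 0)] += 1
    return dict(sorted(dist.items()))

# Example: the ternary [4, 2, 3] tetracode over F_3.
G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]
print(weight_distribution(G, 3))   # {0: 1, 3: 8}, i.e. eight codewords of weight 3
```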
Arghya Bhattacharjee, Ritam Bhaumik, Chandranan Dhar
An ongoing research challenge in symmetric cryptography is to design an authenticated encryption (AE) scheme with a commitment to the secret key, or preferably to the entire context. One way to achieve this is to apply a transform to an existing AE scheme, if possible with no output length expansion. At EUROCRYPT'22, Bellare and Hoang proposed the HtE transform, which lifts key-commitment to context-commitment. In the same year, at ESORICS'22, Chan and Rogaway proposed the CTX transform, which works on any AE scheme whose tag is not required for decryption. However, for AE schemes which are not key-committing to begin with and which use the tag for decryption, no such transform exists to date. The latter category encompasses all AE schemes based on the design paradigms SIV, MAC-then-Encrypt, and Encode-then-Encipher. In this work, we propose PACT, a transform that converts any AE scheme into a context-committing one without any output length expansion. In addition, PACT preserves both the nonce-respecting and the nonce-misuse security of the legacy AE scheme. However, this is not the case for all existing transforms: we show that a combination of CTX and SC (proposed by Bellare and Hoang, CRYPTO'24) does not preserve the nonce-misuse security of the legacy AE scheme. PACT requires only one call to a collision-resistant unkeyed hash function and one call to a block cipher. Finally, we propose a lighter transform, comPACT, which converts a nonce-respecting AE scheme into a context-committing one.
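For intuition on what a committing transform does, the sketch below wraps a toy encrypt-then-MAC AE scheme so that the transmitted tag becomes a hash of the key, nonce, associated data, and inner tag, in the spirit of the CTX transform mentioned above (which applies when the tag is not needed to recover the plaintext). It is emphatically not PACT, and the toy cipher (a SHA-256 counter-mode keystream with HMAC) is for illustration only, not a production scheme.

```python
# Intuition for a context-committing wrapper in the spirit of CTX: keep the
# ciphertext, but transmit H(K, N, A, T) instead of the inner tag T.
# Toy AE (SHA-256 counter-mode keystream + HMAC-SHA-256); illustration only.
import hashlib, hmac, os

def keystream(key, nonce, length):
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def ae_encrypt(key, nonce, ad, msg):
    ct = bytes(m ^ k for m, k in zip(msg, keystream(key, nonce, len(msg))))
    tag = hmac.new(key, nonce + ad + ct, hashlib.sha256).digest()
    return ct, tag

def committing_encrypt(key, nonce, ad, msg):
    ct, tag = ae_encrypt(key, nonce, ad, msg)
    # The transmitted tag binds the whole context (key, nonce, AD, inner tag).
    tag_star = hashlib.sha256(key + nonce + ad + tag).digest()
    return ct, tag_star

def committing_decrypt(key, nonce, ad, ct, tag_star):
    tag = hmac.new(key, nonce + ad + ct, hashlib.sha256).digest()   # recompute inner tag
    expected = hashlib.sha256(key + nonce + ad + tag).digest()
    if not hmac.compare_digest(expected, tag_star):
        raise ValueError("authentication/commitment check failed")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

key, nonce, ad = os.urandom(32), os.urandom(12), b"header"
ct, tstar = committing_encrypt(key, nonce, ad, b"attack at dawn")
assert committing_decrypt(key, nonce, ad, ct, tstar) == b"attack at dawn"
```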
Shivam Bhasin, Harishma Boyapally, Dirmanto Jap
AES implementations have been extensively analysed against side-channel attacks over the last two decades, particularly targeting resource-constrained microcontrollers. Less research, however, has been conducted on AES implementations on advanced hardware platforms. In this study, we examine the resilience of AES on an ARM Cortex-A72 processor within the Raspberry Pi 4B. Unlike their microcontroller counterparts, these platforms operate within the complex ecosystem of an operating system (OS), resulting in EM traces characterized by low signal-to-noise ratios and jitter. We discuss the inefficacy of traditional CPA attacks in the presence of noise, misalignment, and jitter (in trace and trigger signals). Interrupts and daemons cause these effects, and the resulting context-switch overheads lead to increased variability in execution times. Additionally, there are no fixed methods or set rules for pre-processing; the approach varies depending on the target device. Our experiments show that CPA is ineffective against masked and unmasked AES implementations on the ARM Cortex-A72. Therefore, we resort to deep learning-based side-channel analysis (DL-SCA) techniques, which do not require extensive data pre-processing and can effectively work with EM traces that have low signal-to-noise ratios. Using DL-SCA, we were able to recover the AES secret key. Our experiments underscore the formidable challenge posed by breaking AES on ARM Cortex processors compared to conventional microcontroller-based implementations. Importantly, our findings extend beyond previous studies, marking the first successful attack on the ARM Cortex-A72 and demonstrating the efficacy of DL-SCA even when pre-processing techniques are varied and not standardized.
Thomas Roche
Secure elements are small microcontrollers whose main purpose is to generate/store secrets and then execute cryptographic operations. They undergo the highest level of security evaluations that exists (Common Criteria) and are often considered inviolable, even in the worst-case attack scenarios. Hence, complex secure systems build their security upon them.
FIDO hardware tokens are strong authentication factors for signing in to applications (any web service supporting FIDO); they often embed a secure element, and the FIDO protocol uses the Elliptic Curve Digital Signature Algorithm (ECDSA for short) as its core cryptographic primitive. YubiKey 5 Series devices are certainly the most widespread FIDO hardware tokens; their secure element is an Infineon SLE78.
This document shows how – finding a JavaCard open platform (the Feitian A22) based on a similar Infineon SLE78 – we understood the Infineon ECDSA implementation, found a side-channel vulnerability and designed a practical side-channel attack. The attack is then demonstrated on a YubiKey 5Ci. Finally, we show that the vulnerability extends to the more recent Infineon Optiga Trust M and Infineon Optiga TPM security microcontrollers.
Our work unearths a side-channel vulnerability in the cryptographic library of Infineon Technologies, one of the biggest secure element manufacturers. This vulnerability – which went unnoticed for 14 years and through about 80 highest-level Common Criteria certification evaluations – is due to a non-constant-time modular inversion.
The attack requires physical access to the secure element (a few local electromagnetic side-channel acquisitions, i.e. a few minutes, are enough) in order to extract the ECDSA secret key. In the case of the FIDO protocol, this allows an attacker to create a clone of the FIDO device.
All YubiKey 5 Series devices (with firmware version below 5.7) are impacted by the attack, and in fact all Infineon security microcontrollers (including TPMs) that run the Infineon cryptographic library (as far as we know, any existing version) are vulnerable. These security microcontrollers are present in a vast variety of secure systems – often relying on ECDSA – like electronic passports and cryptocurrency hardware wallets, but also smart cars or homes. However, we have not (yet) checked that the EUCLEAK attack applies to any of these other products.
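For background on why such a leak is so damaging: ECDSA signing performs a modular inversion of the per-signature nonce $k$, and recovering $k$ for a single known signature immediately yields the private key. The snippet below recalls only this textbook algebra with illustrative values (the P-256 group order and a placeholder $r$); it does not model the electromagnetic side channel or the Infineon library.

```python
# Textbook ECDSA algebra: s = k^{-1} * (h + r*d) mod n, so anyone who learns the
# per-signature nonce k for one signature (r, s) on message hash h recovers the
# private key as d = (s*k - h) * r^{-1} mod n.  Values are illustrative; the
# side channel itself is not modelled here.
import secrets

n = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551  # P-256 group order

d = secrets.randbelow(n - 1) + 1    # private key
k = secrets.randbelow(n - 1) + 1    # per-signature nonce
h = secrets.randbelow(n)            # message hash, already reduced mod n
r = secrets.randbelow(n - 1) + 1    # stands in for the x-coordinate of k*G mod n

s = pow(k, -1, n) * (h + r * d) % n            # the ECDSA signing equation

d_recovered = (s * k - h) * pow(r, -1, n) % n  # key recovery from a leaked nonce
assert d_recovered == d
```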
Hyewon Sung, Sieun Seo, Taekyung Kim, Chohong Min
Bootstrapping stands as a fundamental component of fully homomorphic encryption (FHE) schemes, facilitating an infinite number of operations by recovering the ciphertext modulus. This work is aimed at significantly reducing the consumption of modulus in bootstrapping, thereby enhancing the efficiency of FHE performance, specifically for the Cheon--Kim--Kim--Song (CKKS) scheme proposed by Cheon et al. Building on the EvalRound bootstrapping method proposed by Kim et al., which includes the steps of ModRaise, CoeffToSlot, EvalRound and SlotToCoeff, we introduce $\textrm{EvalRound}^{+}$ bootstrapping. This bootstrapping inherits the advantage of EvalRound bootstrapping in CoeffToSlot and resolves its disadvantage in SlotToCoeff. Furthermore, we conduct a set of rigorous and comprehensive analyses to precisely determine the optimal choices of the parameters. The implementation of $\textrm{EvalRound}^{+}$ bootstrapping, along with optimal choices, has achieved a reduction in modulus consumption by over $40\%$ for CoeffToSlot and SlotToCoeff. Additionally, it has increased the number of levels for general multiplication by 2-4 in the most widely used bootstrapping parameter sets.
Michael Klooß, Michael Reichle, Benedikt Wagner
Blind signatures have garnered significant attention in recent years, with several efficient constructions in the random oracle model relying on well-understood assumptions. However, this progress does not apply to pairing-free cyclic groups: fully secure constructions over cyclic groups rely on pairings, remain inefficient, or depend on the algebraic group model or strong interactive assumptions. To address this gap, Chairattana-Apirom, Tessaro, and Zhu (CTZ, Crypto 2024) proposed a new scheme based on the CDH assumption. Unfortunately, their construction results in large signatures and high communication complexity.
In this work, we propose a new blind signature construction in the random oracle model that significantly improves upon the CTZ scheme. Compared to CTZ, our scheme reduces communication complexity by a factor of more than 10 and decreases the signature size by a factor of more than 45, achieving a compact signature size of only 224 Bytes. The security of our scheme is based on the DDH assumption over pairing-free cyclic groups, and we show how to generalize it to the partially blind setting.