IACR News
Here you can see all recent updates to the IACR webpage. These updates are also available:
18 November 2024
Oleksandr Kurbatov, Lasha Antadze, Ameen Soleimani, Kyrylo Riabov, Artem Sdobnov
ePrint Report
This article proposes an extension for privacy-preserving applications to introduce sanctions or prohibition lists. When initiating a particular action, the user can prove, in addition to the application logic, that they are not part of the sanctions lists (one or more) without compromising sensitive data. We will show how this solution can be integrated into applications, using the example of extending Freedom Tool (a voting solution based on biometric passports). We will also consider ways to manage these lists, versioning principles, configuring the filter data set, combining different lists, and using the described method in other privacy-preserving applications.
15 November 2024
Oxford University, Department of Computer Science
Job Posting
Oxford University’s Computer Science Department (together with St John's College at Oxford) is hiring a senior Professor in Computer Science. We are looking for somebody with a world-leading research reputation and broad academic leadership.
Closing date for applications:
Contact: Head of Department, Professor Leslie Ann Goldberg (head-of-dept@cs.ox.ac.uk).
More information: https://my.corehr.com/pls/uoxrecruit/erq_jobspec_details_form.jobspec?p_id=173846
Hao Lei, Raghvendra Rohit, Guoxiao Liu, Jiahui He, Mohamed Rachidi, Keting Jia, Kai Hu, Meiqin Wang
ePrint Report
The circulant twin column parity mixer (TCPM) is a type of mixing layer for the round function of cryptographic permutations designed by Hirch et al. at CRYPTO 2023. It has a bitwise differential branch number of 12 and a bitwise linear branch number of 4, which makes it competitive in applications where differential security is required. Hirch et al. gave a concrete instantiation of a permutation using such a mixing layer, named Gaston, and showed that the best 3-round differential and linear trails of Gaston have much higher weights than those of ASCON. In this paper, we first prove why the TCPM has linear branch number 4 and then show that Gaston's linear behavior is worse than ASCON's for more than 3 rounds. Motivated by these facts, we aim to enhance the linear security of the TCPM. We show that adding a specific set of row cyclic shifts to the TCPM can make its differential and linear branch numbers both 12. Notably, by setting a special relationship between the row shift parameters of the modified TCPM, we obtain a special kind of mixing layer called the symmetric circulant twin column parity mixer. The symmetric TCPM has the unique design property that its differential and linear branch histograms are the same, which makes the parameter selection process and the security analysis convenient. Using the symmetric TCPM, we present two new 320-bit cryptographic permutations, namely (1) Gaston-S, where we replace the mixing layer in Gaston with the symmetric TCPM, and (2) SBD, which uses a low-latency degree-4 S-box as the non-linear layer and the symmetric TCPM as the mixing layer. We evaluate the security of these permutations considering differential, linear and algebraic analysis, and then provide a performance comparison with Gaston in both hardware and software. Our results indicate that Gaston-S and SBD are competitive with Gaston in both security and performance.
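For reference, the bitwise branch numbers quoted above follow the standard definitions (restated here for convenience, not taken verbatim from the paper): for a linear mixing layer $M$ acting on $\mathbb{F}_2^n$, with $\mathrm{wt}(\cdot)$ the Hamming weight,
$$\mathcal{B}_d(M)=\min_{x\neq 0}\big(\mathrm{wt}(x)+\mathrm{wt}(Mx)\big), \qquad \mathcal{B}_\ell(M)=\min_{x\neq 0}\big(\mathrm{wt}(x)+\mathrm{wt}(M^{\top}x)\big),$$
so the statement that the TCPM has differential branch number 12 but linear branch number 4 says the first minimum equals 12 while the second is only 4.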
Asmita Adhikary, Abraham J. Basurto Becerra, Lejla Batina, Ileana Buhan, Durba Chatterjee, Senna van Hoek, Eloi Sanfelix Gonzalez
ePrint Report
Side-channel attacks pose a serious risk to cryptographic implementations, particularly in embedded systems. While current methods, such as test vector leakage assessment (TVLA), can identify leakage points, they do not provide insights into their root causes. We propose ARCHER, an architecture-level tool designed to perform side-channel analysis and root cause identification for software cryptographic implementations on RISC-V processors. ARCHER has two main components: (1) Side-Channel Analysis to identify leakage using TVLA and its variants, and (2) Data Flow Analysis to track intermediate values across instructions, explaining observed leaks.
Taking the binary file of the target implementation as input, ARCHER generates interactive visualizations and a detailed report highlighting execution statistics, leakage points, and their causes. It is the first architecture-level tool tailored for the RISC-V architecture to guide the implementation of cryptographic algorithms resistant to power side-channel attacks. ARCHER is algorithm-agnostic, supports pre-silicon analysis for both high-level and assembly code, and enables efficient root cause identification. We demonstrate ARCHER’s effectiveness through case studies on AES and ASCON implementations, where it accurately traces the source of side-channel leaks.
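To illustrate the TVLA-style leakage check that such tools build on, here is a generic fixed-vs-random Welch's $t$-test in Python (a minimal sketch, not ARCHER's code; the trace shapes, the injected leak at sample 300, and the conventional threshold of 4.5 are illustrative assumptions):

```python
import numpy as np
from scipy.stats import ttest_ind  # Welch's t-test via equal_var=False

def tvla_t_test(traces_fixed, traces_random, threshold=4.5):
    """Non-specific (fixed-vs-random) TVLA: Welch's t-test per time sample.

    traces_fixed, traces_random: arrays of shape (n_traces, n_samples).
    Returns the per-sample t-statistics and the indices flagged as leaky.
    """
    t_stat, _ = ttest_ind(traces_fixed, traces_random,
                          axis=0, equal_var=False)
    leaky = np.where(np.abs(t_stat) > threshold)[0]
    return t_stat, leaky

# Toy usage with simulated traces: inject a data-dependent bias at sample 300.
rng = np.random.default_rng(0)
fixed = rng.normal(0.0, 1.0, size=(1000, 500))
random_set = rng.normal(0.0, 1.0, size=(1000, 500))
fixed[:, 300] += 0.5
t, leaks = tvla_t_test(fixed, random_set)
print(leaks)  # expected to contain sample index 300
```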
Emanuele Di Giandomenico, Doreen Riepel, Sven Schäge
ePrint Report
In this work, we present a new paradigm for constructing Group Authenticated Key Exchange (GAKE). This result is the first tightly secure GAKE scheme in a strong security model that allows maximum exposure attacks (MEX), in which the attacker may reveal either the secret session state or the long-term secret of all communication partners. Moreover, our protocol features the strong and realistic notion of (full) perfect forward secrecy (PFS), which allows the attacker to actively modify messages before corrupting parties.
We obtain our results via a series of tightly secure transformations. Our first transformation is from weakly secure KEMs to unilateral authenticated key exchange (UAKE) with weak forward secrecy (WFS). Next, we show how to turn this into a UAKE with PFS in the random oracle model.
Finally, and as one of our major novel conceptual contributions, we describe how to build GAKE protocols from UAKE protocols, also in the random oracle model.
We apply our transformations to obtain two practical GAKE protocols with tight security. The first is based on the DDH assumption and features low message complexity. Our second result is based on the LWE assumption. In this way, we obtain the first GAKE protocol from a post-quantum assumption that is tightly secure in a strong model of security allowing MEX attacks.
Sougata Mandal
ePrint Report
At ASIACRYPT 2019, Andreeva et al. introduced a new symmetric key primitive called the $\textit{forkcipher}$, designed for lightweight applications handling short messages. A forkcipher is a keyed function with a public tweak, featuring fixed-length input and fixed-length (expanding) output. They also proposed a specific forkcipher, ForkSkinny, based on the tweakable block cipher SKINNY, and its security was evaluated through cryptanalysis. Since then, several efficient AEAD and MAC schemes based on forkciphers have been proposed, catering not only to short messages but also to various purposes such as leakage resilience and cloud security. While forkciphers have proven to be efficient solutions for designing AEAD schemes, the area of forkcipher design itself remains largely unexplored; in particular, provably secure forkcipher constructions are lacking.
In this work, we propose forkcipher designs for various tweak lengths, based on a block cipher as the underlying primitive. We provide proofs of security for these constructions, assuming the underlying block cipher behaves as an ideal block cipher. First, we present a forkcipher, $\widetilde{\textsf{F}}1$, for an $n$-bit tweak and prove its optimal ($n$-bit) security. Next, we propose another construction, $\widetilde{\textsf{F}}2$, for a $2n$-bit tweak, also proving its optimal ($n$-bit) security. Finally, we introduce a construction, $\widetilde{\textsf{F}}r$, for a general $rn$-bit tweak, achieving $n$-bit security.
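For readers unfamiliar with the primitive, the forkcipher syntax assumed throughout (written here in simplified form, ignoring the usual output-selector argument) is a keyed, tweaked map
$$\widetilde{\mathsf{F}}:\mathcal{K}\times\mathcal{T}\times\{0,1\}^n\to\{0,1\}^n\times\{0,1\}^n,$$
i.e., one $n$-bit input block expands to two $n$-bit output blocks, each branch being invertible given the key and tweak; the three constructions above instantiate $\mathcal{T}$ as $\{0,1\}^{n}$, $\{0,1\}^{2n}$ and $\{0,1\}^{rn}$, respectively.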
Umut Pekel, Oguz Yayla
ePrint Report
This article offers a forward-looking approach to an environmental sustainability problem: it constructs a carbon footprint tracking system based on blockchain technology and introduces tokenization on top of the blockchain to make everyday use as accessible and effective as possible.
This effort aims to provide a solid use case for environmental sustainability and lays the groundwork for a new kind of social construct in which the carbon footprint is a valuable unit, like money, alongside the other tokenized attributes a person may hold. The study proposes a blockchain-based solution to store the data; tokenization facilitates transacting and sharing, so that carbon footprint data can be treated as a fungible utility token.
The article explains how, and which, blockchain technology offers an effective solution to the challenges facing global carbon tracking systems. In this context, a use case is proposed, the critical features of the blockchain-based platform are examined, and the roles of the parties and the user interactions within the system are detailed.
In conclusion, this article proposes the adaptation of blockchain technology together with smart contracts and tokenization to the management of carbon footprints.
Tao Lu, Yuxun Chen, Zonghui Wang, Xiaohang Wang, Wenzhi Chen, Jiaheng Zhang
ePrint Report
Zero-knowledge proof (ZKP) is a cryptographic primitive that enables one party to prove the validity of a statement to other parties without disclosing any secret information. With its widespread adoption in applications such as blockchain and verifiable machine learning, the demand for generating zero-knowledge proofs has increased dramatically. In recent years, considerable efforts have been directed toward developing GPU-accelerated systems for proof generation. However, these systems focused on efficiently generating a single proof by reducing latency, rather than on batch generation for high throughput.
We propose a fully pipelined GPU-accelerated system for batch generation of zero-knowledge proofs. Our system has three features to improve throughput. First, we design a pipelined approach that enables each GPU thread to continuously execute its designated proof generation task without being idle. Second, our system supports recent efficient ZKP protocols with their computational modules: sum-check protocol, Merkle tree, and linear-time encoder. We customize these modules to fit our pipelined execution. Third, we adopt a dynamic loading method for the data required for proof generation, reducing the required device memory. Moreover, multi-stream technology enables the overlap of data transfers and GPU computations, reducing overhead caused by data exchanges between host and device memory.
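To make the sum-check module concrete, the following is a minimal textbook-style sum-check interaction for a multilinear polynomial over a prime field (a plain-Python sketch, not the authors' GPU pipeline; the modulus, the polynomial $g$, and the number of variables are arbitrary toy choices):

```python
import random

P = 2**61 - 1     # toy prime field modulus (not the paper's field)
N_VARS = 3

def g(x):
    """Example multilinear polynomial in three variables over F_P."""
    x1, x2, x3 = x
    return (5*x1*x2 + 3*x2*x3 + 7*x1 + 11) % P

def prover_round(fixed):
    """Round polynomial g_i(X): sum g over the remaining boolean variables,
    sent as its evaluations at X=0 and X=1 (g_i is linear in X)."""
    evals = []
    n_rest = N_VARS - len(fixed) - 1
    for x_i in (0, 1):
        total = 0
        for rest in range(2 ** n_rest):
            tail = [(rest >> j) & 1 for j in range(n_rest)]
            total = (total + g(fixed + [x_i] + tail)) % P
        evals.append(total)
    return evals

# Claimed sum of g over the boolean cube, then one sum-check round per variable.
claim = sum(g([(b >> j) & 1 for j in range(N_VARS)]) for b in range(2**N_VARS)) % P
fixed = []
for _ in range(N_VARS):
    e0, e1 = prover_round(fixed)
    assert (e0 + e1) % P == claim        # verifier's consistency check
    r = random.randrange(P)              # verifier's random challenge
    claim = (e0 + r * (e1 - e0)) % P     # evaluate the linear g_i at r
    fixed.append(r)
assert g(fixed) == claim                 # final check via one evaluation of g
print("sum-check accepted")
```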
We implement our system and evaluate it on various GPU cards. The results show that our system achieves more than 259.5× higher throughput compared to state-of-the-art GPU-accelerated systems. Moreover, we deploy our system in the verifiable machine learning application, where our system generates 9.52 proofs per second, successfully achieving sub-second proof generation for the first time in this field.
George Teseleanu
ePrint Report
Let $N=pq$ be the product of two balanced prime numbers $p$ and $q$. In 2015, Roman'kov introduced an interesting RSA-like cryptosystem that, unlike the classical RSA key equation $ed - k (p-1)(q-1) = 1$, uses the key equation $ed - k r = 1$, where $r \mid p-1$ is a large prime number. In this paper, we study whether small private key attacks based on lattices can be applied to Roman'kov's cryptosystem. More precisely, we argue that such attacks do not appear to be applicable to this scheme.
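To make the key equation concrete, here is a toy illustration of how such keys can be generated so that $ed - kr = 1$ with $r \mid p-1$ (illustrative parameters only; this sketches the key relation, not Roman'kov's full encryption scheme):

```python
from math import gcd

def is_prime(n):  # naive check, fine for toy sizes
    if n < 2:
        return False
    return all(n % i for i in range(2, int(n**0.5) + 1))

# Toy parameters: a prime r, a prime p with r | p - 1, and another prime q.
r = 1009                                   # "large" prime in the real scheme
p = next(r*m + 1 for m in range(2, 10**4) if is_prime(r*m + 1))
q = 1013
N = p * q
assert (p - 1) % r == 0

d = 17                                     # small private exponent
assert gcd(d, r) == 1
e = pow(d, -1, r)                          # public exponent modulo r
k = (e * d - 1) // r
assert e * d - k * r == 1                  # Roman'kov-style key equation
print(f"N={N}, e={e}, d={d}, k={k}, r={r}")
```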
Sihem Mesnager, Ahmet SINAK
ePrint Report
The construction of self-orthogonal codes from functions over finite fields has been widely studied in the literature. In this paper, we construct new families of self-orthogonal linear codes with few weights from trace functions and weakly regular plateaued functions over finite fields of odd characteristic. We determine all parameters of the constructed self-orthogonal codes and their dual codes. Moreover, we employ the constructed $p$-ary self-orthogonal codes to construct $p$-ary LCD codes.
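As a reminder of the standard definitions used here (not restated from the paper): a linear code $C \subseteq \mathbb{F}_p^n$ with dual $C^{\perp} = \{x \in \mathbb{F}_p^n : \langle x, c\rangle = 0 \ \text{for all } c \in C\}$ is self-orthogonal if $C \subseteq C^{\perp}$, and is a linear complementary dual (LCD) code if $C \cap C^{\perp} = \{0\}$.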
Seungwan Hong, Jiseung Kim, Changmin Lee, Minhye Seo
ePrint Report
As privacy concerns have arisen in machine learning, privacy-preserving machine learning (PPML) has received significant attention. Fully homomorphic encryption (FHE) and secure multi-party computation (MPC) are representative building blocks for PPML. However, in PPML protocols based on FHE and MPC, interaction between the client (who provides encrypted input data) and the evaluator (who performs the computation) is essential to obtain the final result in plaintext.
Functional encryption (FE) is a promising candidate to remove this constraint, but existing FE-based PPML protocols are restricted to evaluating only simple ML models, such as one-layer neural networks, or they support partially encrypted PPML, which makes them vulnerable to information leakage beyond the inference results.
In this paper, we propose, for the first time, a fully encrypted FE-based PPML protocol that supports the evaluation of arbitrary functions over encrypted data with no information leakage during computation. To achieve this, we construct a new vector functional encryption scheme for quadratic polynomials and combine it with an inner product encryption scheme. This enables multiple compositions of quadratic polynomials, allowing arbitrarily complex functions to be computed in an encrypted manner.
Our FE-based PPML protocol is secure in the malicious model, which means that an adversary cannot obtain any information about the input data even if they intentionally deviate from the protocol. We then show how to use our protocol to build a fully encrypted 2-layer neural network model with quadratic activation functions and present experimental results.
Wonhee Cho, Jiseung Kim, Changmin Lee
ePrint Report
Boneh et al. (CRYPTO'18) proposed two $t$-out-of-$N$ threshold fully homomorphic encryption ($\sf TFHE$) schemes based on the Shamir secret sharing scheme and the $\{0,1\}$-linear secret sharing scheme. They demonstrated simulation security, ensuring no information leakage during partial or final decryption. This breakthrough allows any scheme to be converted into a threshold scheme by using $\sf TFHE$.
We propose two polynomial-time algorithms to break the simulation security of the $t$-out-of-$N$ $\sf TFHE$ scheme based on Shamir secret sharing proposed by Boneh et al. First, we show that an adversary can break the simulation security by recovering the secret key under some constraints on $t$ and $N$, which does not violate the conditions of the security proof. Next, we introduce a straightforward fix that theoretically satisfies the simulation security. However, we argue that this modification remains insecure when implemented with any state-of-the-art fully homomorphic encryption library in practice. To ensure robustness against our subsequent attacks, we recommend using an error-refreshing algorithm, such as bootstrapping or modulus switching, for each addition operation.
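For context on the underlying building block, here is a generic $t$-out-of-$N$ Shamir sharing over a prime field (a toy Python sketch of plain Shamir sharing, not the $\sf TFHE$ scheme of Boneh et al. nor the attack; the modulus and parameters are illustrative):

```python
import random

P = 2**61 - 1  # toy prime modulus, not a TFHE parameter

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from exactly t shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[1:4]) == 123456789
print("reconstructed:", reconstruct(shares[2:5]))
```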
Ojaswi Acharya, Weiqi Feng, Roman Langrehr, Adam O'Neill
ePrint Report
We extend the concept of access control for functional encryption, introduced by Abdalla et al. (ASIACRYPT 2020), to function-revealing encryption (Joy and Passelègue, SCN 2018). Here “access control” means that function evaluation is only possible when a specified access policy is met. Specifically, we introduce access-controlled inner-product function-revealing encryption (AC-IPFRE) and give two applications.
On the theoretical side, we use AC-IPFRE to show that function-hiding inner-product functional encryption (FH-IPFE), introduced by Bishop et al. (ASIACRYPT 2015), is equivalent to IPFRE. To show this, we in particular generically construct AC-IPFRE from IPFRE for the “non-zero inner-product” (NZIP) access policy. This result uses an effective version of Lagrange’s Four Square Theorem. One consequence of this result is that lower bounds by Ünal (EUROCRYPT 2020) suggest that, as for FH-IPFE, bilinear pairings will be needed to build IPFRE.
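For intuition about the number-theoretic tool mentioned above: Lagrange's Four Square Theorem states that every non-negative integer is a sum of four squares. A brute-force search illustrating the statement (not the effective version used in the paper) might look as follows:

```python
from math import isqrt

def four_squares(n):
    """Return (a, b, c, d) with a*a + b*b + c*c + d*d == n (brute force)."""
    for a in range(isqrt(n), -1, -1):
        for b in range(isqrt(n - a*a), -1, -1):
            for c in range(isqrt(n - a*a - b*b), -1, -1):
                rest = n - a*a - b*b - c*c
                d = isqrt(rest)
                if d * d == rest:
                    return a, b, c, d
    raise AssertionError("unreachable by Lagrange's four-square theorem")

print(four_squares(2024))   # some (a, b, c, d) whose squares sum to 2024
```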
On the practical side, we build an outsourced approximate nearest-neighbor (ANN) search protocol and mitigate its leakage via AC-IPFRE. For this, we construct a practical AC-IPFRE scheme in the generic bilinear group model for a specific access policy for ANN search. To this end, we show that techniques of Wee (TCC 2020) implicitly give the most practical FH-IPFE scheme to date. We implement the resulting outsourced ANN search protocol and report on its performance.
Of independent interest, we show AC-IPFRE for NZIP implies attribute-hiding small-universe AC-IPFRE for arbitrary access policies. Previous work on access control for FE did not achieve attribute hiding. Overall, our results demonstrate that AC-IPFRE is of both theoretical and practical interest and set the stage for future work in the area.
Upasana Mandal, Rupali Kalundia, Nimish Mishra, Shubhi Shukla, Sarani Bhattacharya, Debdeep Mukhopadhyay
ePrint Report
Modern micro-architectural attacks use a variety of building blocks chained together to develop a final exploit. However, since in most cases the footprint of such attacks is not visible architecturally (for example, in the file system), they are trickier to defend against. In light of this, several automated defence mechanisms use Hardware Performance Counters (HPCs) to detect when micro-architectural elements are being misused for potential attacks (like flush-reload, Spectre, Meltdown, etc.). In order to bypass such defences, recent works have proposed the idea of "probabilistic interleaving": the adversary interleaves the actual attack code with benign code at very low frequency. Such a strategy floods the HPCs used for detection with a lot of unnecessary noise; recent studies have shown that probabilistically interleaved attacks can achieve an attack evasion rate of 100% (i.e., they are virtually undetectable).
In this work, we challenge this folklore. We develop a theoretical model of interleaved attacks using lightweight statistical tools like Gaussian Mixture Models and the Dip Test for Unimodality, and prove that they are detectable for the correct choices of HPCs. Furthermore, we also show a possible defence strategy against a stronger threat model than considered in the literature: one where the attacker interleaves multiple attacks instead of a single attack. Empirically, to instantiate our detector, in contrast to prior detection strategies, we choose LLMs for a number of reasons: (1) LLMs can easily contextualize data from a larger set of HPCs than generic machine learning techniques, and (2) with simple prompts, LLMs can quickly switch between different statistical analysis methods. To this end, we develop an LLM-based methodology to detect probabilistically interleaved attacks. Our experiments establish that our improved methodology achieves 100% detection of speculative attacks like Spectre v1/v2/v3, Meltdown, and Spectre v2 (with improved gadgets that even evade recent protections like Enhanced IBRS, IBPB conditional, and so on). This makes our methodology suitable for detecting speculative attacks in a non-profiled setting, where attack signatures might not be known in advance. All in all, we achieve a 100% attack detection rate, even with very low interleave frequencies (i.e., $10^{-6}$). Our detection principle and its instantiation through LLMs show that probabilistically interleaving attack code in benign execution is not a perfect evasion strategy, and that more research is still needed into developing and countering better attack evasion strategies.
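As a sketch of the statistical intuition only (hypothetical HPC counts and thresholds; this is neither the authors' detector nor their LLM pipeline), one can fit one- and two-component Gaussian mixtures to a window of HPC samples and flag the window when the two-component model fits markedly better, which is the kind of bimodality that interleaved attack bursts produce:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def looks_bimodal(hpc_samples, bic_margin=10.0):
    """Return True if a 2-component GMM fits the samples markedly better
    than a 1-component one (lower BIC by at least `bic_margin`)."""
    x = np.asarray(hpc_samples, dtype=float).reshape(-1, 1)
    bic1 = GaussianMixture(n_components=1, random_state=0).fit(x).bic(x)
    bic2 = GaussianMixture(n_components=2, random_state=0).fit(x).bic(x)
    return bic2 + bic_margin < bic1

# Toy usage: benign-only counts vs. counts with rare interleaved attack bursts.
rng = np.random.default_rng(1)
benign = rng.normal(1000, 30, size=5000)
interleaved = np.concatenate([benign, rng.normal(4000, 50, size=25)])
print(looks_bimodal(benign))        # expected: False
print(looks_bimodal(interleaved))   # expected: True
```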
Noel Elias
ePrint Report
Efficiently verifying mathematical proofs and computations has been a heavily researched topic within computer science. In particular, even repetitive steps within a proof become much more complex and inefficient to validate as proof sizes grow. To solve this problem, we suggest viewing it through the lens of Incrementally Verifiable Computation (IVC). However, many IVC methods, including the state-of-the-art Nova recursive SNARKs, require proofs to be linear and each proof step to be identical. This paper proposes Lova, a novel framework to verify mathematical proofs end-to-end that solves these problems. In particular, our approach achieves a few novelties alongside the first-of-its-kind implementation of Nova: (i) an innovative proof splicing mechanism to generate independent proof sequences, (ii) a system of linear algorithms to verify a variety of mathematical logic rules, and (iii) a novel multiplexing circuit allowing non-homogeneous proof sequences to be verified together in a single Nova proof. The resulting Lova pipeline has linear prover time, constant verifying capability, dynamic/easy modification, and optional zero-knowledge privacy to efficiently validate mathematical proofs. Code is available at https://github.com/noelkelias/lova.
Tom Gur, Jack O'Connor, Nicholas Spooner
ePrint Report
We show that for every polynomial $q^*$ there exist polynomial-size, constant-query, non-adaptive PCPs for NP which are perfect zero knowledge against (adaptive) adversaries making at most $q^*$ queries to the proof. In addition, we construct exponential-size constant-query PCPs for NEXP with perfect zero knowledge against any polynomial-time adversary. This improves upon both a recent construction of perfect zero-knowledge PCPs for #P (STOC 2024) and the seminal work of Kilian, Petrank and Tardos (STOC 1997).
Ali Raya, Vikas Kumar, Aditi Kar Gangopadhyay, Sugata Gangopadhyay
ePrint Report
NTRU-like constructions are among the most studied lattice-based schemes. The freedom of design of NTRU resulted in many variants in the literature, motivated by faster computations or more resistance against lattice attacks by changing the underlying algebra. To the best of our knowledge, BQTRU (DCC 2017), a noncommutative NTRU-like cryptosystem, is the fastest claimed variant of NTRU, built over the quaternion algebra of the bivariate ring of polynomials. The key generation and the encryption of BQTRU are claimed to be 16/7 times faster than standard NTRU for equivalent levels of security. For key recovery attacks, the authors claim that retrieving a decryption key is equivalent to solving the Shortest Vector Problem (SVP) in expanded Euclidean lattices of giant dimensions. This work disproves this claim and proposes practical key and message recovery attacks that break the moderate parameter sets of BQTRU (estimated to achieve $2^{92}$ message security and $2^{166}$ key security) on a standard desktop within less than two core weeks. Furthermore, our analysis shows that the parameter set proposed for the highest security level, claiming $2^{212}$ message security and $2^{396}$ key security, can barely achieve $2^{82}$ message security and $2^{125}$ key security. Our work not only provides cryptanalysis of BQTRU but also demonstrates the potential of extending Gentry's attack to rings beyond the cyclotomic polynomial ring.
Shiping Cai, Mingjie Chen, Christophe Petit
ePrint Report
Any isogeny between two supersingular elliptic curves can be defined over $\mathbb{F}_{p^2}$; however, this does not imply that computing such isogenies can be done with field operations in $\mathbb{F}_{p^2}$. In fact, the kernel generators of such isogenies are defined over extension fields of $\mathbb{F}_{p^2}$, generically with extension degree linear in the isogeny degree. Most algorithms related to isogeny computations are only efficient when the extension degree is small. This leads to efficient algorithms used in isogeny-based cryptographic constructions, but also limits their parameter choices at the same time. In this paper, we consider three computational subroutines regarding isogenies, focusing on cases with large extension degrees: computing a basis of $\ell$-torsion points, computing the kernel polynomial of an isogeny given a kernel generator, and computing the kernel generator of an isogeny given the corresponding quaternion ideal under the Deuring correspondence. We then apply our algorithms to the constructive Deuring correspondence algorithm from Eriksen, Panny, Sotáková and Veroni (LuCaNT'23) in the case of a generic prime characteristic, achieving around 30% speedup over their results.
Jingwei Chen, Linhan Yang, Chen Yang, Shuai Wang, Rui Li, Weijie Miao, Wenyuan Wu, Li Yang, Kang Wu, Lizhong Dai
ePrint Report
Protein sequence classification is crucial in many research areas, such as predicting protein structures and discovering new protein functions. Leveraging large language models (LLMs) holds great promise for enhancing our ability to tackle protein sequence classification problems; however, the accompanying privacy issues are becoming increasingly prominent. In this paper, we present a privacy-preserving, non-interactive, efficient, and accurate protocol called encrypted DASHformer to evaluate a transformer-based neural network for protein sequence classification named DASHformer, provided by the iDASH 2024-Track 1 competition. The presented protocol is based on our solution for this competition, which won first place. It is arguably the first secure transformer inference protocol capable of performing batch classification for multiple protein sequences in a single execution using only leveled homomorphic encryption (i.e., without bootstrapping). To achieve this, we propose a series of new techniques and algorithmic improvements, including data-driven non-polynomial function fitting, tensor packing, and a double baby-step-giant-step algorithm for computing the product of multiple encrypted matrices. These techniques and improvements enable the protocol to classify $163$ encrypted protein sequences in about $165$ seconds with $128$-bit security, achieving an amortized time of about one second per sequence.
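To illustrate the flavour of baby-step-giant-step (BSGS) used for encrypted matrix arithmetic, here is the standard single-matrix diagonal method in plain numpy, with cyclic rotations standing in for ciphertext slot rotations (a cleartext sketch of the classical technique, not the paper's double BSGS for products of multiple encrypted matrices):

```python
import numpy as np

def rot(v, k):
    """Cyclic left rotation by k slots, mimicking HE slot rotation."""
    return np.roll(v, -k)

def bsgs_matvec(M, v, g):
    """Diagonal-method M @ v using g baby rotations of v and n/g giant
    rotations of partial sums, instead of n rotations of v."""
    n = len(v)
    assert n % g == 0

    def diag(j):  # j-th generalized diagonal of M
        return np.array([M[i][(i + j) % n] for i in range(n)])

    baby = [rot(v, b) for b in range(g)]              # baby steps
    out = np.zeros(n)
    for a in range(n // g):                           # giant steps
        inner = np.zeros(n)
        for b in range(g):
            inner += rot(diag(g * a + b), -g * a) * baby[b]
        out += rot(inner, g * a)
    return out

rng = np.random.default_rng(0)
M = rng.integers(0, 10, size=(8, 8))
v = rng.integers(0, 10, size=8)
assert np.allclose(bsgs_matvec(M, v, g=4), M @ v)
print(bsgs_matvec(M, v, g=4))
```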
Sönke Jendral, Elena Dubrova
ePrint Report
In response to the quantum threat, new post-quantum cryptographic algorithms will soon be deployed to replace existing public-key schemes. MAYO is a quantum-resistant digital signature scheme whose small keys and signatures make it suitable for widespread adoption, including on embedded platforms with limited security resources. This paper demonstrates two single-trace side-channel attacks on a MAYO implementation in ARM Cortex-M4 that recover a secret key with probabilities of 99.9% and 91.6%, respectively. Both attacks use deep learning-assisted power analysis exploiting information leakage during modular multiplication to reveal a vector in the oil space. This vector is then extended to a full secret key using algebraic techniques.