IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
30 August 2024
Juan Carlos Ku-Cauich, Javier Diaz-Vargas
We use the extended Maiorana-McFarland construction to create new bent functions. Starting from a bent function of dimension s-r, we produce a new function of dimension s+r that is balanced when restricted to the set of vectors of even Hamming weight in its domain. We also compare this approach with applying the case r=1 several times. Finally, as examples, we provide an algorithm for the case r=2 and another that applies r=1 twice.
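For reference, here is a minimal sketch of the classical Maiorana-McFarland construction (not the extended variant of this paper), with bentness checked via the Walsh spectrum; the permutation pi and the function g below are arbitrary illustrative choices.

    from itertools import product

    n = 3
    pi = [3, 6, 1, 7, 0, 5, 2, 4]    # an arbitrary permutation of {0, ..., 2^n - 1}
    g  = [0, 1, 1, 0, 1, 0, 0, 1]    # an arbitrary Boolean function of y

    def inner(a, b):                 # inner product of bit vectors
        return bin(a & b).count("1") & 1

    def f(x, y):                     # f(x, y) = <x, pi(y)> xor g(y), a 2n-variable function
        return inner(x, pi[y]) ^ g[y]

    def walsh(a, b):                 # W_f(a, b) = sum_{x,y} (-1)^{f(x,y) + <a,x> + <b,y>}
        return sum((-1) ** (f(x, y) ^ inner(a, x) ^ inner(b, y))
                   for x, y in product(range(1 << n), repeat=2))

    # f is bent iff every Walsh coefficient has absolute value 2^n
    assert all(abs(walsh(a, b)) == 2 ** n
               for a, b in product(range(1 << n), repeat=2))
    print("f is bent on", 2 * n, "variables")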
Debrup Chakraborty, Sebati Ghosh, Cuauhtemoc Mancillas Lopez, Palash Sarkar
A fixed-length tweakable enciphering scheme (TES) is the appropriate cryptographic functionality for low-level disk encryption. Research on TES over the last two decades has led to a number of proposals, many of which have already been implemented using FPGAs. This paper considers the FPGA implementations of two more recent and promising TESs, namely AEZ and FAST. The relevant architectures are described and simulation results on the Xilinx Virtex 5 and Virtex 7 FPGAs are presented. For comparison, two IEEE standard schemes, XCB and EME2, are considered. The results indicate that FAST outperforms the other schemes, making it a serious candidate for future incorporation by disk manufacturers and standardisation bodies.
Kai Hu, Trevor Yap
Modular addition is often the most complex component of typical Addition-Rotation-XOR (ARX) ciphers, and the division property is the most effective tool for detecting integral distinguishers. Thus, having a precise division property model for modular addition is crucial in the search for integral distinguishers in ARX ciphers.
Current division property models for modular addition either (a) express the operation as a Boolean circuit and apply standard propagation rules for basic operations (COPY, XOR, AND), or (b) treat it as a sequence of smaller functions with carry bits, modeling each function individually. Both approaches were originally proposed for the two-subset bit-based division property (2BDP), which is theoretically imprecise and may overlook some balanced bits.
Recently, more precise versions of the division property, such as parity sets, three-subset bit-based division property without unknown subsets (3BDPwoU) or monomial prediction (MP), and algebraic transition matrices have been proposed. However, little attention has been given to modular addition within these precise models.
The propagation rule for the precise division property of a vectorial Boolean function $\boldsymbol{f}$ requires that $\boldsymbol{u}$ can propagate to $\boldsymbol{v}$ if and only if the monomial $\pi_{\boldsymbol{u}}(\boldsymbol{x})$ appears in $\pi_{\boldsymbol{v}}(\boldsymbol{f})$. Braeken and Semaev (FSE 2005) studied the algebraic structure of modular addition and showed that for $\boldsymbol{x} \boxplus \boldsymbol{y} = \boldsymbol{z}$, the monomial $\pi_{\boldsymbol{u}}(\boldsymbol{x})\pi_{\boldsymbol{v}}(\boldsymbol{y})$ appears in $\pi_{\boldsymbol{w}}(\boldsymbol{z})$ if and only if $\boldsymbol{u} + \boldsymbol{v} = \boldsymbol{w}$. Their theorem directly leads to a precise division property model for modular addition. Surprisingly, to the best of our knowledge, this model has not been applied in division property searches.
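As a quick sanity check of the statement above, the following brute-force sketch (toy bit-width, plain Python, illustrative only) computes the ANF coefficient of $\pi_{\boldsymbol{u}}(\boldsymbol{x})\pi_{\boldsymbol{v}}(\boldsymbol{y})$ in $\pi_{\boldsymbol{w}}(\boldsymbol{z})$ via a Moebius transform and compares it with the condition $\boldsymbol{u} + \boldsymbol{v} = \boldsymbol{w}$.

    from itertools import product

    n = 3  # toy bit-width

    def bits(x):
        return [(x >> i) & 1 for i in range(n)]

    def pi(u, x):
        # the product pi_u(x) = prod_{i : u_i = 1} x_i
        return all(b for b, m in zip(bits(x), bits(u)) if m)

    def coeff(u, v, w):
        # ANF coefficient of pi_u(x)*pi_v(y) in pi_w(z) with z = (x + y) mod 2^n,
        # obtained by XOR-summing over the subcube x <= u, y <= v (Moebius transform)
        acc = 0
        for x, y in product(range(1 << n), repeat=2):
            if x & ~u == 0 and y & ~v == 0:
                acc ^= int(pi(w, (x + y) & ((1 << n) - 1)))
        return acc

    # Braeken-Semaev: the monomial appears iff u + v = w (as integers)
    assert all(coeff(u, v, w) == int(u + v == w)
               for u, v, w in product(range(1 << n), repeat=3))
    print("checked for n =", n)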
In this paper, we apply Braeken and Semaev's theorem to search for integral distinguishers in ARX ciphers, leading to several new results. First, we improve the state-of-the-art integral distinguishers for all variants of the Speck family, significantly enhancing search efficiency for Speck-32/48/64/96 and detecting new integral distinguishers for Speck-48/64/96/128. Second, we determine the exact degrees of output bits for $7$-round Speck-$32$ and all/16/2 output bits for $2/3/4$-round Alzette for the first time. Third, we revisit the choice of rotation parameters in Speck instances, providing a criterion that enhances resistance against integral distinguishers. Additionally, we offer a simpler proof for Braeken and Semaev's theorem using monomial prediction, demonstrating the potential of division property methods in the study of Boolean functions.
We hope that the proposed methods will be valuable in the future design of ARX ciphers.
George Teseleanu
In 2023, Mfungo et al. presented an image encryption scheme that relies on a series of diffusion techniques and uses a chaotic map to generate three secret keys. Note that two out of three keys are dynamically generated based on the size of the original image, while the remaining key is static. The authors claim that their proposal offers $149$ bits of security. Unfortunately, we found a chosen plaintext attack that requires $2$ oracle queries and has a worst-case complexity of $\mathcal O(2^{32})$. If the attacker has access to $1$ encryption oracle query and $1$ decryption oracle query, we can lower the complexity to $\mathcal O(2^{18.58})$. Encrypting an image with Mfungo et al.'s scheme has a worst-case complexity of $\mathcal O(2^{33})$. Therefore, both our attacks are faster than encrypting an image. Our attacks are feasible because the dynamic keys remain unchanged for different plaintext images of the same size, and the static key remains the same for all images.
Yan Jiang, Youwen Zhu, Jian Wang, Yudi Zhang
Identity-based threshold signature (IDTS) enables the generation of valid signatures without revealing cryptographic keys in the signing process. While current protocols have made much progress in efficiency, many schemes are vulnerable to denial-of-service attacks in which misbehaving parties can block signature generation without being caught. The identifiable abort property is designed to withstand such attacks in some recent IDTS protocols. However, all these schemes either require many rounds of interaction to produce the resulting signature or rely on cryptographic techniques of high complexity. In this study, we put forward a novel IDTS protocol that achieves identifiable abort and resists arbitrary collusion attacks. More precisely, it ensures that corrupted parties are held responsible in case of failure and cannot jointly obtain the input of honest parties. Moreover, we present the ideal IDTS functionality and prove the security of the proposed protocol in the global random oracle model. Our scheme features non-interactive signing, compatibility with offline/online settings, and practical efficiency in the online phase. Finally, we give theoretical analyses and experimental results of our solution, showing that the signing time is less than two milliseconds and that the scheme is suitable for resource-constrained settings.
Thomas Decru, Tako Boris Fouotsa, Paul Frixons, Valerie Gilchrist, Christophe Petit
Recently, Geraud-Stewart and Naccache proposed two trapdoors based on matrix products. In this paper, we answer the call for cryptanalysis. We explore how using the trace and determinant of a matrix can be used to attack their constructions. We fully break their first construction in a polynomial-time attack. We show an information leak in the second construction using characteristic polynomials, and provide an attack using traces that decreases the bit security by about half.
26 August 2024
Yansong Feng, Zhen Liu, Abderrahmane Nitaj, Yanbin Pan
It is well known that the best small private exponent attack against RSA shows that when the private exponent $d < N^{0.292}$, one can factor the RSA modulus $N = pq$. However, the bound $N^{0.292}$ is very difficult to achieve directly, since it requires dealing with a lattice of very high dimension, which currently seems infeasible. Recently, Li et al. proposed a practical attack that can solve cases where $d$ approaches $N^{0.292}$ within a month for $1024$-bit $N$. In this paper, we propose an improved practical small private exponent attack that enumerates the most significant bits of $p + q$. Combined with several implementation techniques, we can also achieve the bound $N^{0.292}$, but in significantly less time compared to previous work.
Yansong Feng, Abderrahmane Nitaj, Yanbin Pan
Coppersmith's method, combined with the Jochemsz-May strategy, is widely used to find the small roots of multivariate polynomials for cryptanalysis.
At Asiacrypt'23, Meers and Nowakowski improved the Jochemsz-May strategy from a single polynomial equation to a system of polynomial equations and proposed a new method, called Automated Coppersmith. Note that determining asymptotic upper bounds for Coppersmith's method is typically a tedious and non-trivial task, and the manual analysis has to be performed anew whenever a new set of polynomials is considered. Under a certain heuristic assumption, Meers and Nowakowski showed that the bound can be obtained by computer using Lagrange interpolation, but this is still time-consuming. Moreover, we find that the interpolation method may sometimes get stuck in local convergence, resulting in an incorrect bound when a natural termination strategy is employed.
In this paper, we revisit the Jochemsz-May strategy as well as the work of Meers and Nowakowski and point out that the bound can be obtained by calculating the leading coefficient of a certain Hilbert function, which is exactly the volume of the corresponding Newton polytope. To this end, we draw on sumset theory and propose a series of related results and algorithms. Compared with Automated Coppersmith, we overcome the issue of getting stuck in local convergence and eliminate the time-consuming computation of $f^m$ required by Automated Coppersmith when $m$ is large, which yields a 1000x$\sim$1200x improvement in running time for some polynomials in our experiments.
Additionally, our new method offers a new perspective on understanding Automated Coppersmith, thus providing a proof of Meers and Nowakowski's Heuristic 2 for the system of a single polynomial.
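For intuition only, the toy sketch below (an assumed bivariate example, not the paper's algorithm) computes the Newton polytope of a polynomial's support as the convex hull of its exponent vectors and returns its area via the shoelace formula, the two-dimensional analogue of the volume mentioned above.

    def convex_hull(points):
        # Andrew's monotone chain; returns hull vertices in counter-clockwise order
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        lower, upper = [], []
        for p in pts:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]

    def newton_polytope_area(support):
        hull = convex_hull(support)
        area2 = sum(x1 * y2 - x2 * y1
                    for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]))
        return abs(area2) / 2

    # f(x, y) = x^2*y + x*y^2 + x + y + 1  ->  exponent vectors of its monomials
    support = [(2, 1), (1, 2), (1, 0), (0, 1), (0, 0)]
    print(newton_polytope_area(support))   # 2.5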
Yansong Feng, Abderrahmane Nitaj, Yanbin Pan
Let $(N,e)$ be a public key of the RSA cryptosystem, and $d$ the corresponding private key. In practice, a small $e$ is usually chosen for fast encryption. In this paper, we improve partial private key exposure attacks against RSA with known MSBs of $d$ and small $e$. The key idea is that, in this setting, one can usually obtain more information about the prime factors of $N$; then, by solving a univariate modular polynomial equation using Coppersmith's method, $N$ can be factored in polynomial time. Compared to previous results, we reduce the number of leaked bits of $d$ needed to mount the attack by $\log_2(e)$ bits. For $e=65537$, previous work required an additional enumeration of 17 bits to achieve our new bound, resulting in a $2^{10}$ (or 1,024x) increase in time consumption. Furthermore, our experiments show that for a $1024$-bit modulus $N$, our attack achieves the theoretical bound on an ordinary personal computer, which verifies the new method.
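To illustrate the kind of extra information a small $e$ provides, here is a hedged toy sketch (illustrative parameters, not the paper's attack): since $ed = 1 + k\varphi(N)$ with $0 < k < e$, the known MSBs of $d$ already yield an approximation of $\varphi(N)$ and hence of $p+q$, which Coppersmith-style methods can then exploit.

    from math import gcd
    from sympy import randprime

    e = 65537
    while True:                                # toy primes, far too small for real RSA
        p, q = randprime(2**47, 2**48), randprime(2**47, 2**48)
        phi = (p - 1) * (q - 1)
        if p != q and gcd(e, phi) == 1:
            break
    N = p * q
    d = pow(e, -1, phi)
    k = (e * d - 1) // phi                     # 0 < k < e; a real attack would enumerate k

    unknown_lsbs = 20                          # pretend only the low 20 bits of d are hidden
    d_msb = (d >> unknown_lsbs) << unknown_lsbs
    phi_approx = (e * d_msb - 1) // k
    s_approx = N + 1 - phi_approx              # approximation of p + q

    print("p + q      :", p + q)
    print("approx     :", s_approx)
    print("error bits :", abs((p + q) - s_approx).bit_length())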
Roberto Avanzi, Orr Dunkelman, Shibam Ghosh
Recently, the NSA has proposed a block cipher called ARADI and a mode of operation called LLAMA for memory encryption applications.
In this note, we comment on this proposal and on its suitability for the intended application, and describe an attack on LLAMA that breaks confidentiality of ciphertexts and, via a related-IV attack, allows a straightforward forgery breaking integrity of ciphertexts (INT-CTXT).
Both attacks have negligible complexity.
Giuseppe Persiano, Duong Hieu Phan, Moti Yung
The notion of (Receiver-) Anamorphic Encryption was put forth recently to show that a dictator (i.e., an overreaching government), which demands to get the receiver’s private key and even dictates messages to the sender, cannot prevent the receiver from getting an additional covert anamorphic message from a sender. The model required an initial private collaboration to share some secret. There may be settings though where an initial collaboration may be impossible or performance-wise prohibitive, or cases when we need an immediate message to be sent without private key generation (e.g., by any casual sender in need). This situation, to date, somewhat limits the applicability of anamorphic encryption.
To overcome this, in this work, we put forth the new notion of “public-key anamorphic encryption,” where, without any initialization, any sender that has not coordinated in any shape or form with the receiver can nevertheless, even with the dictator controlling the receiver’s private key, send the receiver an additional anamorphic secret message hidden from the dictator. We define the new notion with its unique new properties, and then prove that, quite interestingly, the known CCA-secure Koppula-Waters (KW) system is, in fact, public-key anamorphic.
We then describe how a public-key anamorphic scheme can support a new hybrid anamorphic encapsulation mode (KDEM), where the public-key anamorphic part serves as a bootstrapping mechanism to activate regular anamorphic messages in the same ciphertext, thus increasing the overall anamorphic channel capacity.
Looking at the state of research thus far, we observe that the initial system (Eurocrypt’22) that was shown to have regular anamorphic properties is the CCA-secure Naor-Yung (and other related schemes). Here we identify that the KW CCA-secure scheme also provides a new type of anamorphism. This hints that there may be a connection between certain types of CCA-secure schemes and certain types of anamorphic schemes (despite the fact that the goals of the two primitives are fundamentally different); this question is foundational in nature. Given this, we identify a sufficient condition for a “CCA-secure scheme that is black-box reduced from a CPA-secure scheme” to directly give rise to an “anamorphic encryption scheme”. Furthermore, we identify one extra property of the reduction that yields a public-key anamorphic scheme as defined here.
Zhengjun Cao, Lihua Liu
Smart farming uses different vehicles to manage all the operations on the farm. These vehicles should be put to good use for secure data transmission. Vangala et al.'s key agreement scheme [IEEE TIFS, 18 (2023), 904-919] is designed for agricultural IoT networks. In this note, we show that the scheme fails to provide anonymity, offering only pseudonymity. The scheme implicitly assumes that anonymity is equivalent to preventing the real identity from being recovered, but true anonymity means that the adversary cannot attribute different sessions to target users. To the best of our knowledge, this is the first work to clarify the difference between anonymity and pseudonymity in this context.
Francesco Berti, François-Xavier Standaert, Itamar Levi
Robust message authentication codes (MACs) and authenticated encryption (AE) schemes that provide authenticity in the presence of side-channel leakage are essential primitives. These constructions often rely on primitives designed for strong leakage protection, including, among others, strong unpredictable (tweakable) block ciphers.
This paper extends the strong-unpredictability security definition to the versatile new forkcipher primitive. We show how to construct secure and efficient MACs and AEs that guarantee authenticity in the presence of leakage. We present a leakage-resistant MAC, ForkMAC, and two leakage-resistant AE schemes, ForkDTE1 and ForkDTE2, which use forkciphers instead of the traditional secure (tweakable) block ciphers of the prior art. We prove and analyze their security in the presence of leakage based on a strong unpredictable forkcipher. A comparison with the state of the art in terms of both security and efficiency follows in the paper.
Key advantages of the proposed constructions are the minimal assumption they require, namely unpredictability in the presence of leakage, and their efficiency: the tag generation of ForkMAC is the most efficient among leakage-resilient MAC proposals, on par with HBC, while ForkDTE1 and ForkDTE2 have more efficient encryption than any other scheme, achieving integrity with leakage (and also providing misuse resistance).
Emanuele Bellini, Mattia Formenti, David Gérault, Juan Grados, Anna Hambitzer, Yun Ju Huang, Paul Huynh, Mohamed Rachidi, Raghvendra Rohit, Sharwan K. Tiwari
In early August 2024, three NSA researchers -- Patricia Greene, Mark Motley, and Bryan Weeks -- published the technical specifications for a new low-latency block cipher, ARADI, along with its corresponding authenticated encryption mode, LLAMA, which is specifically designed for memory encryption applications. Their manuscript offered minimal security analysis of the design, only briefly discussing the differential, linear and algebraic properties of the cipher's underlying components. In this work, we present a set of distinguishers for the round-reduced ARADI block cipher, discovered using the automated cryptanalysis tool CLAASP. More precisely, using CLAASP, we evaluate the resistance of ARADI against avalanche, statistical and continuous diffusion tests, differential and linear distinguishers, impossible differentials, algebraic attacks, and neural distinguishers.
Accordingly, we give distinguishers that reach up to 9 out of 16 rounds of ARADI. We hope these preliminary findings will encourage further in-depth cryptanalysis of the cipher to enhance confidence in its security.
Hao Cheng, Johann Großschädl, Ben Marshall, Daniel Page, Markku-Juhani O. Saarinen
Framed within the general context of cyber-security, standard cryptographic constructions often represent an enabling technology for associated solutions. Alongside or in combination with their design, therefore, the implementation of such constructions is an important challenge: beyond delivering artefacts that are usable in practice, implementation can impact many quality metrics (such as efficiency and security) which determine fitness-for-purpose. A rich design space of implementation techniques can be drawn on in order to address this challenge, but threat- and opportunity-driven innovation based on clear understanding and empirical evidence remains vital.
In at least some use-cases, software-based implementation of cryptography is important, e.g., because it delivers an attractive trade-off or is mandated for some reason. Such an implementation is heavily influenced both by 1) the Instruction Set Architecture (ISA) it is expressed using, and 2) the micro-architecture it is executed using. For example, the extent to which a general-purpose ISA can support more domain-specific requirements of a cryptographic construction will influence how the latter is mapped to the former (i.e., which implementation techniques are viable) and behavioural properties of doing so (e.g., the execution latency stemming from use of a given implementation technique).
This paper attempts to systematise the topic of cryptographic Instruction Set Extensions (ISEs), which represent an approach to provision of a platform where such support is more explicit and extensive. At a high level, the goal is to improve understanding of what is an extensive and somewhat inter-disciplinary body of literature (e.g., spanning academia and industry, hardware and software, as well as cryptographic and non-cryptographic publication venues). We argue that doing so will help to maximise the quality of subsequent work on this and associated topics.
Debao Wang, Yiwen Gao, Yongbin Zhou, Xian Huang
Side-channel analysis on complex SoC devices with high-frequency microprocessors and multitasking operating systems presents significant challenges in practice due to the high costs of trace acquisition and analysis, which generally involve tens of thousands to millions of traces. This work uses a cryptographic execution process on a Broadcom 2837 SoC as a case study to explore ways to reduce the costs of electromagnetic side-channel analysis. In the data acquisition phase, we propose an efficient electromagnetic probe positioning strategy that does not require additional tool assistance, significantly accelerating the collection of effective electromagnetic traces. In the side-channel analysis phase, we investigate the combined use of preprocessing techniques, where the optimal preprocessing approach reduces the number of required electromagnetic traces by a factor of 12, significantly improving the success rate of attacks. Additionally, we implement profiling attacks on such devices, including traditional template attacks as well as MLP-based and CNN-based side-channel analysis, demonstrating that even minimal modeling costs can yield excellent analysis performance. Our study confirms the feasibility of low-cost side-channel analysis on complex SoCs and indicates that sensitive applications running on these devices still require protection.
Enrico Talotti, Matteo Paier, Marino Miculan
The strength of elliptic curve cryptography (ECC) relies on the choice of curve. This work analyzes weak keys in standardized curves, i.e., private keys lying in small subgroups of the auxiliary group $\mathbb{Z}^*_p$. We quantify the prevalence of weak keys across standardized curves, revealing a potential vulnerability due to the numerous small divisors of the auxiliary group orders. To address this, we leverage the implicit "baby-step giant-step" algorithm, which transforms the complex elliptic curve discrete logarithm problem into a simpler problem within $\mathbb{Z}^*_p$. This enables efficient detection of weak keys in small-order subgroups.
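For background, here is a minimal generic baby-step giant-step in $\mathbb{Z}^*_p$ (a hedged sketch with toy parameters, not the paper's implicit EC variant): if a key is confined to a subgroup of small order $n$, it can be recovered in roughly $\sqrt{n}$ group operations.

    from math import isqrt

    def bsgs(g, h, n, p):
        """Solve g^x = h (mod p) for x in [0, n), where g has order n."""
        m = isqrt(n) + 1
        baby = {pow(g, j, p): j for j in range(m)}    # baby steps g^j
        giant = pow(g, -m, p)                         # g^(-m) mod p
        gamma = h
        for i in range(m):
            if gamma in baby:
                return i * m + baby[gamma]
            gamma = (gamma * giant) % p
        return None

    p, n = 607, 101              # 607 is prime and 101 divides p - 1 = 2 * 3 * 101
    g = pow(2, (p - 1) // n, p)  # an element of order n (not 1 for this choice)
    x = 57                       # a "weak" exponent confined to the small subgroup
    h = pow(g, x, p)
    print(bsgs(g, h, n, p))      # recovers 57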
Our findings highlight the importance of rigorous key testing in applications using standardized ECC. While random weak keys are unlikely, malicious actors could exploit this by manipulating key generation libraries. To this end, we show how users can assess their private key vulnerabilities and mitigate risks by eliminating weak keys. Hence, this work contributes to improved ECC security through proactive key management practices.
Aditya Singh Rawat, Mahabir Prasad Jhanwar
In a typical network, a DNS(SEC) message over 1232 bytes would either be fragmented into several UDP/IP packets or require a re-transmit over TCP. Unfortunately, IP fragmentation is considered unreliable and a non-trivial number of servers do not support TCP.
We present $\texttt{QNAME}$-Based Fragmentation ($\mathsf{QBF}$): a DNS layer fragmentation scheme that fragments/re-assembles large post-quantum DNS(SEC) messages over UDP in just 1 round-trip while using only standard DNS records. Our experiments show that DNSSEC over $\mathsf{QBF}$, with either Falcon-512, Dilithium-2 or SPHINCS$^{+}$ as the zone signing algorithm, is practically as fast as the currently deployed ECDSA-P256 and RSA-2048 setups in resolving $\texttt{QTYPE}$ $\texttt{A}$ queries.
Aditya Singh Rawat, Mahabir Prasad Jhanwar
We present $\mathsf{SL\text{-}DNSSEC}$: a backward-compatible protocol that leverages a quantum-safe KEM and a MAC to perform signature-less $\mathsf{(SL)}$ DNSSEC validations in a single UDP query/response style. Our experiments targeting NIST level I security for QTYPE A query resolution show that $\mathsf{SL\text{-}DNSSEC}$ is practically equivalent to the presently deployed RSA-2048 in terms of bandwidth usage and resolution speeds. Compared to post-quantum signatures, $\mathsf{SL\text{-}DNSSEC}$ reduces bandwidth consumption and resolution times by up to $95\%$ and $60\%$, respectively. Moreover, with response size $<$ query size $\leq 1232$ bytes, $\mathsf{SL\text{-}DNSSEC}$ obviates the long-standing issues of IP fragmentation, TCP re-transmits and DDoS amplification attacks.
Jincheol Ha, Jooyoung Lee
TFHE is a homomorphic encryption scheme supporting fast bootstrapping.
There are two kinds of bootstrapping in TFHE: programmable bootstrapping (also known as gate bootstrapping) and circuit bootstrapping.
Circuit bootstrapping offers more functionality than programmable bootstrapping, but requires heavier computational cost and larger evaluation key size.
A recent work by Wang et al., which improves circuit bootstrapping using homomorphic trace evaluation, appears to mitigate this heavy cost, but we observe some flaws in their error analysis.
In this paper, we patch the circuit bootstrapping method proposed by Wang et al. with correct error analysis and extend the ciphertext modulus from a prime modulus to a power-of-two modulus, enabling FFT-based implementation of our patched method. In addition, we propose a high precision WWL+ method by adopting GLWE keyswitching, improving the circuit bootstrapping time (resp. key size) of WoP-PBS proposed by Bergerat et al. by factors from $3.26$ to $7.22$ (resp. $2.39$ to $2.63$). We also patch the parameter selection used in the AES evaluation by the WWL+ method, obtaining $26.301$s for a single AES evaluation in a single thread.