IACR News
Here you can see all recent updates to the IACR webpage.
22 July 2021
Alexander May, Julian Nowakowski, Santanu Sarkar
ePrint Report
Let $(N,e)$ be an RSA public key, where $N=pq$ is the product of equal bitsize primes $p,q$. Let $d_p, d_q$ be the corresponding secret CRT-RSA exponents.
Using a Coppersmith-type attack, Takayasu, Lu and Peng (TLP) recently showed that one obtains the factorization of $N$ in polynomial time, provided that $d_p, d_q \leq N^{0.122}$. Building on the TLP attack, we show the first Partial Key Exposure attack on short secret exponent CRT-RSA. Namely, let $N^{0.122} \leq d_p, d_q \leq N^{0.5}$. Then we show that a constant known fraction of the least significant bits (LSBs) of both $d_p, d_q$ suffices to factor $N$ in polynomial time.
Naturally, the larger $d_p,d_q$, the more LSBs are required. For example, if $d_p, d_q$ are of size $N^{0.13}$, we have to know roughly a $\frac{1}{5}$-fraction of their LSBs, whereas for $d_p, d_q$ of size $N^{0.2}$ we already require knowledge of a $\frac{2}{3}$-fraction of the LSBs. Finally, if $d_p, d_q$ are of full size $N^{0.5}$, we have to know all of their bits. Notice that as a side product of our result we obtain a heuristic deterministic polynomial-time factorization algorithm on input $(N,e,d_p,d_q)$.
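The CRT-RSA setting and the LSB split underlying the attack can be pictured with a toy instance. The sketch below is purely illustrative (tiny primes and a hypothetical number of leaked bits, assuming sympy is available); it does not implement the Coppersmith-type attack itself.

```python
# Toy CRT-RSA instance illustrating the partial key exposure setting:
# the attacker sees only the low bits of d_p, d_q (not the attack itself).
from sympy import randprime, mod_inverse

p = randprime(2**15, 2**16)
q = randprime(2**15, 2**16)
N, e = p * q, 65537
d_p = mod_inverse(e, p - 1)             # CRT exponent: e*d_p = 1 (mod p-1)
d_q = mod_inverse(e, q - 1)             # CRT exponent: e*d_q = 1 (mod q-1)

known_bits = 8                          # hypothetical number of leaked LSBs
M = 1 << known_bits
dp_lsb, dp_msb = d_p % M, d_p // M      # attacker knows dp_lsb, not dp_msb
dq_lsb, dq_msb = d_q % M, d_q // M

assert (e * d_p) % (p - 1) == 1 and (e * d_q) % (q - 1) == 1
assert d_p == dp_msb * M + dp_lsb and d_q == dq_msb * M + dq_lsb
```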
Lior Rotem, Gil Segev
ePrint Report
The Schnorr identification and signature schemes have been amongst the most influential cryptographic protocols of the past three decades. Unfortunately, although the best-known attacks on these two schemes are via discrete-logarithm computation, the known approaches for basing their security on the hardness of the discrete logarithm problem encounter the ``square-root barrier''. In particular, in any group of order $p$ where Shoup's generic hardness result for the discrete logarithm problem is believed to hold (and is thus used for setting concrete security parameters), the best-known $t$-time attacks on the Schnorr identification and signature schemes have success probability $t^2/p$, whereas existing proofs of security only rule out attacks with success probabilities $(t^2/p)^{1/2}$ and $(q_{\mathcal{H}} \cdot t^2/p)^{1/2}$, respectively, where $q_{\mathcal{H}}$ denotes the number of random-oracle queries issued by the attacker.
We establish tighter security guarantees for identification and signature schemes which result from $\Sigma$-protocols with special soundness based on the hardness of their underlying relation, and in particular for Schnorr's schemes based on the hardness of the discrete logarithm problem. We circumvent the square-root barrier by introducing a high-moment generalization of the classic forking lemma, relying on the assumption that the underlying relation is ``$d$-moment hard'': The success probability of any algorithm in the task of producing a witness for a random instance is dominated by the $d$-th moment of the algorithm's running time.
In the concrete context of the discrete logarithm problem, already Shoup's original proof shows that the discrete logarithm problem is $2$-moment hard in the generic-group model, and thus our assumption can be viewed as a highly-plausible strengthening of the discrete logarithm assumption in any group where no better-than-generic algorithms are currently known. Applying our high-moment forking lemma in this context shows that, assuming the $2$-moment hardness of the discrete logarithm problem, any $t$-time attacker breaks the security of the Schnorr identification and signature schemes with probabilities at most $(t^2/p)^{2/3}$ and $(q_{\mathcal{H}} \cdot t^2/p)^{2/3}$, respectively.
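To make the gap concrete, the following back-of-the-envelope computation compares the best known attack advantage with the classical and high-moment bounds; the group size and attacker running time are hypothetical and not taken from the paper.

```python
import math

# Hypothetical parameters: a 256-bit group order and a 2^64-time attacker.
p, t = 2 ** 256, 2 ** 64

log_attack = 2 * math.log2(t) - math.log2(p)   # log2 of t^2/p
log_old = 0.5 * log_attack                     # classic bound (t^2/p)^(1/2)
log_new = (2 / 3) * log_attack                 # high-moment bound (t^2/p)^(2/3)

print(f"best known attack success : 2^{log_attack:.0f}")   # 2^-128
print(f"classic forking bound     : 2^{log_old:.0f}")      # 2^-64
print(f"high-moment forking bound : 2^{log_new:.1f}")      # 2^-85.3
```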
Jiaxin Pan, Benedikt Wagner
ePrint Report
We construct a short and adaptively secure identity-based signature scheme tightly based on the well-known Short Integer Solution (SIS) assumption.
Although identity-based signature schemes can be constructed tightly from either standard signature schemes secure against adaptive corruptions in the multi-user setting or from a two-level hierarchical identity-based encryption scheme, neither approach is known to yield short signature size and tight security based on the SIS assumption. Here ``short'' means the signature size is independent of the message length, in contrast to tree-based (tight) signatures.
Our approach consists of two steps: First, we give two tight generic transformations (one with random oracles and the other without) from non-adaptively secure identity-based signature schemes to adaptively secure ones. Our idea extends a similar transformation for digital signature schemes. Second, we construct a non-adaptively secure identity-based signature scheme based on the SIS assumption in the random oracle model.
Aniruddha Biswas, Palash Sarkar
ePrint Report
The influence of a set of variables on a Boolean function has three separate definitions in the literature, the first due to Ben-Or and Linial (1989), the second due to Fischer et al. (2002) and Blais (2009), and the third due to Tal (2017). The goal of the present work is to carry out a comprehensive study of the notion of influence of a set of variables on a Boolean function. To this end, we introduce a definition of this notion using the auto-correlation function. A modification of the definition leads to the notion of pseudo-influence. Somewhat surprisingly, it turns out that the auto-correlation based definition of influence is equivalent to the definition introduced by Fischer et al. (2002) and Blais (2009), and the notion of pseudo-influence is equivalent to the definition of influence considered by Tal (2017). Extensive analysis of influence and pseudo-influence as well as the Ben-Or and Linial notion of influence is carried out and the relations between these notions are established.
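As a point of reference, the sketch below computes the standard auto-correlation function and the classical single-variable influence for a small Boolean function and checks the textbook relation between them; it only fixes notation and is not the set-influence definition introduced in the paper.

```python
# Auto-correlation and single-variable influence of a small Boolean function
# (classical definitions only, not the set-influence notions of the paper).
def autocorrelation(f, n, a):
    """C_f(a) = sum over x of (-1)^(f(x) XOR f(x XOR a))."""
    return sum((-1) ** (f(x) ^ f(x ^ a)) for x in range(2 ** n))

def influence(f, n, i):
    """Pr_x[f(x) != f(x XOR e_i)] for variable i."""
    return sum(f(x) != f(x ^ (1 << i)) for x in range(2 ** n)) / 2 ** n

n = 3
maj = lambda x: int(bin(x).count("1") >= 2)     # 3-bit majority function

for i in range(n):
    inf = influence(maj, n, i)
    # textbook identity: Inf_i(f) = (2^n - C_f(e_i)) / 2^(n+1)
    assert inf == (2 ** n - autocorrelation(maj, n, 1 << i)) / 2 ** (n + 1)
    print(f"Inf_{i}(maj) = {inf}")
```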
Kemal Bicakci, Kemal Ulker, Yusuf Uzunay
ePrint Report
White-box cryptography aims at providing protection against a powerful adversary which is in complete control of the execution environment of the cryptographic operation. Most existing white-box implementations focus on symmetric encryption. In particular, we are not aware of any previous work on general-purpose digital signature schemes secure against white-box attackers. We present white-box implementations of hash-based signatures so that security against white-box attackers does not depend on the availability of a white-box secure pseudorandom function (in addition to a general one-way function). We also present a hash tree-based solution for one-time passwords that is secure in a white-box attacker context.
Stephen Holmes, Liqun Chen
ePrint Report
Not all cryptocurrencies are the same. Today, they share a common quantum vulnerability through their use of non-quantum-safe Elliptic Curve Digital Signature Algorithm (ECDSA) signatures, yet they have very different risks of quantum attack. The risk of attack for a cryptocurrency depends on a number of identified factors, such as the block interval time, the vulnerability to an attack that delays the time for an unprocessed transaction to be completed, and the behaviour of a cryptocurrency user in increasing the cost of a quantum computer attack. Shor's algorithm can be used to break ECDSA signatures with a quantum computer. This research addresses two questions: When will a quantum computer be powerful enough to execute Shor's algorithm? How fast would a quantum computer need to be to break a specific cryptocurrency? In this paper we observe that by benchmarking the speed of circuits and the time for quantum addition on quantum computers, we can determine when there is a potential threat to a specific cryptocurrency.
Cláudia Brito, Pedro Ferreira, Bernardo Portela, Rui Oliveira, João Paulo
ePrint Report
Privacy and security are prime obstacles to the wider adoption of machine learning services offered by cloud computing providers. Namely, trusting users' sensitive data to a third-party infrastructure, vulnerable to both external and internal malicious attackers, restricts many companies from leveraging the scalability and flexibility offered by cloud services.
We propose Soteria, a system for distributed privacy-preserving machine learning that combines the Apache Spark system, and its machine learning library (MLlib), with the confidentiality features provided by Trusted Execution Environments (e.g., Intel SGX).
Soteria supports two main designs, each offering specific guarantees in terms of security and performance.
The first encapsulates most of the computation done by Apache Spark on a secure enclave, thus offering stronger security. The second fine-tunes the Spark operations that must be done at the secure enclave to reduce the needed trusted computing base, and consequently the performance overhead, at the cost of an increased attack surface.
An extensive evaluation of Soteria, with classification, regression, dimensionality reduction, and clustering algorithms, shows that our system outperforms state-of-the-art solutions, reducing their performance overhead by up to 41%. Moreover, we show that privacy-preserving machine learning is achievable while providing strong security guarantees.
Shibam Ghosh, Orr Dunkelman
ePrint Report
Division properties, introduced by Todo at Eurocrypt 2015, are extremely useful in cryptanalysis and are an extension of the square attack (also called saturation attack or integral cryptanalysis). Given their importance, a large number of works have tried to offer automatic tools for finding division properties, primarily based on MILP or SAT/SMT. This paper studies better modeling techniques for finding division properties using Constraint Programming and SAT/SMT-based automatic tools. We use the fact that the Quine-McCluskey algorithm produces a concise CNF representation corresponding to the division trail table of an S-box. As a result, we can offer significantly more compact models, which allow SAT and Constraint Programming tools to outperform previous results.
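The modeling idea can be pictured at toy scale: enumerate the valid division trails of an S-box and hand them to Quine-McCluskey to obtain a compact product-of-sums (CNF) constraint. The 2-bit "S-box" trail table below is made up for illustration and is not taken from KNOT or Ascon; sympy's POSform is used as a stand-in Quine-McCluskey implementation.

```python
# Quine-McCluskey (via sympy's POSform) turns a hypothetical division trail
# table into a compact CNF constraint usable in a SAT/CP model.
from sympy import symbols
from sympy.logic.boolalg import POSform

x0, x1, y0, y1 = symbols("x0 x1 y0 y1")   # input/output division-property bits

valid_trails = [                # hypothetical valid trails (x0, x1) -> (y0, y1)
    (0, 0, 0, 0),
    (0, 1, 0, 1), (0, 1, 1, 0),
    (1, 0, 1, 0),
    (1, 1, 1, 1),
]

cnf = POSform([x0, x1, y0, y1], [list(t) for t in valid_trails])
print(cnf)   # product-of-sums clauses satisfied exactly by the listed trails
```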
To show the strength of our new approach, we look at the NIST lightweight candidates KNOT and Ascon. We show several new distinguishers with a lower data complexity for 17-round KNOT-256, KNOT-384 and 19-round KNOT-512. In addition, for 5-round Ascon, we obtain a distinguisher with a lower data complexity than the previous division-based results.
Finally, we revisit the method of extending an integral distinguisher by composing linear layers at the input and output. We provide a formulation to find the optimal number of linear combinations that need to be considered. As a result of this new formulation, we prove that 18-round KNOT-256 and KNOT-384 have no integral distinguisher using the conventional division property, and we show this more efficiently than the previous methods.
James Bartusek
ePrint Report
We construct a constant-round composable protocol for blind and verifiable classical delegation of quantum computation, and show applications to secure quantum computation with classical communication. In particular, we give the first maliciously-secure multi-party protocols for BQP (bounded-error quantum polynomial-time) computation that only require a single party to have quantum capabilities. Assuming QLWE (the quantum hardness of learning with errors), we obtain the following.
- A six-round protocol between one quantum server and multiple classical clients in the CRS (common random string) model.
- A three-round protocol between one quantum server and multiple classical clients in the PKI (public-key infrastructure) + QRO (quantum random oracle) model.
- A two-message protocol between quantum sender and classical receiver (a quantum non-interactive secure computation protocol), in the QRO model.
The only previously known approach for obtaining composable security of blind classical verification of quantum computation (Gheorghiu and Vidick, FOCS 2019) has inverse polynomial security and requires polynomially many rounds of interaction.
The property we require of classical verification of quantum computation that enables composability is malicious blindness, which stipulates that the prover does not learn anything about the verifier's delegated computation, even if it is able to observe whether or not the verifier accepted the interaction. To construct a protocol with malicious blindness, we use a classical verification protocol for sampBQP computation (Chung et al., Arxiv 2020), which in general has inverse polynomial soundness error, to prove honest evaluation of QFHE (quantum fully-homomorphic encryption) ciphertexts with negligible soundness error. Obtaining a constant-round protocol requires a strong parallel repetition theorem for classical verification of quantum computation, which we show following the "nearly orthogonal projector" proof strategy (Alagic et al., TCC 2020).
Edward Eaton, Douglas Stebila, Roy Stracovsky
ePrint Report
Anonymity networks, such as the Tor network, are highly decentralized and make heavy use of ephemeral identities. Both of these characteristics run in direct opposition to a traditional public key infrastructure, so entity authentication in an anonymity network can be a challenge. One system that Tor relies on is key-blinded signatures, which allow public keys to be transformed so that authentication is still possible, but the identity public key is masked. This is used in Tor during onion service descriptor lookup, in which a .onion address is resolved to a rendezvous point through which a client and an onion service can communicate. The mechanism currently used is based on elliptic curve signatures, so a post-quantum replacement will be needed.
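For readers unfamiliar with the mechanism, the toy sketch below shows the key-blinding idea in a discrete-log setting, conceptually like Tor's current elliptic-curve scheme; it is not one of the post-quantum schemes considered in the paper, and the modulus and base are chosen purely for illustration.

```python
# Toy discrete-log key blinding: a blinded key pair is again a valid key pair,
# so the owner can authenticate under blinded_pk without exposing pk.
import secrets

p = 2 ** 127 - 1                # illustrative prime modulus
q = p - 1                       # exponents are reduced mod the group exponent
g = 3

sk = secrets.randbelow(q)       # long-term identity secret key
pk = pow(g, sk, p)              # identity public key

b = secrets.randbelow(q)        # per-epoch blinding factor
blinded_pk = pow(pk, b, p)      # computable from pk and b alone
blinded_sk = (sk * b) % q       # computable only by the key owner

assert pow(g, blinded_sk, p) == blinded_pk
```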
We consider four fully post-quantum key-blinding schemes, and prove the unlinkability and unforgeability of all schemes in the random-oracle model. We provide a generic framework for proving unlinkability of key-blinded schemes by reducing to two properties, signing with oracle reprogramming and independent blinding. Of the four schemes, two are based on Round 3 candidates in NIST's post-quantum signature standardization process, Dilithium and Picnic. The other two are based on much newer schemes, CSI-FiSh and LegRoast, which have more favourable characteristics for blinding. CSI-FiSh is based on isogenies and boasts very small public key and signature sizes, and its group action structure allows for key-blinding in a straightforward way. LegRoast uses the Picnic framework, but with the Legendre symbol PRF as a symmetric primitive, the homomorphic properties of which can be exploited to blind public keys in a novel way. Our schemes require at most small changes to parameters, and are generally almost as fast as their unblinded counterparts, except for blinded Picnic, for which signing and verifying are roughly half as fast.
Thom Wiggers, Simona Samardjiska
ePrint Report
The best algorithms for the Learning Parity with Noise (LPN) problem require sub-exponential time and memory. This often makes memory, and not time, the limiting factor for practical attacks, which seem to be out of reach even for relatively small parameters. In this paper, we try to bring the state-of-the-art in solving LPN closer to the practical realm. We improve upon the existing algorithms by modifying the Coded-BKW algorithm to work under various memory constraints. We correct and expand previous analysis and experimentally verify our findings. As a result we were able to mount practical attacks on the largest parameters reported to date using only $2^{39}$ bits of memory.
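For readers unfamiliar with BKW-style algorithms, the sketch below builds a tiny LPN instance and performs a single plain BKW reduction step (collide on a block of coordinates and XOR the pair). It illustrates the memory/noise trade-off at the heart of such algorithms with made-up parameters, not the Coded-BKW variant or the memory-constrained analysis of the paper.

```python
# Toy LPN instance and one plain BKW reduction step (illustrative parameters).
import secrets, random

n, m, tau = 16, 4096, 0.1           # dimension, number of samples, noise rate
s = secrets.randbits(n)             # secret, packed as an n-bit integer

def lpn_sample():
    a = secrets.randbits(n)
    e = 1 if random.random() < tau else 0
    return a, bin(a & s).count("1") % 2 ^ e      # b = <a, s> + e over GF(2)

samples = [lpn_sample() for _ in range(m)]

# Bucket samples by their top `block` bits and XOR colliding pairs: the new
# samples have those bits zeroed, at the cost of a doubled noise rate.
block, buckets, reduced = 4, {}, []
for a, b in samples:
    key = a >> (n - block)
    if key in buckets:
        a2, b2 = buckets.pop(key)
        reduced.append((a ^ a2, b ^ b2))
    else:
        buckets[key] = (a, b)

assert all(a >> (n - block) == 0 for a, _ in reduced)
print(len(reduced), "reduced samples with the top", block, "bits zeroed")
```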
Jan Bobolz, Fabian Eidens, Raphael Heitjohann, Jeremy Fell
ePrint Report
We present a cryptographic Java library called Cryptimeleon designed for prototyping and benchmarking privacy-preserving cryptographic schemes.
The library is geared towards researchers who want to implement their schemes (1) as a sanity check for their constructions, and (2) to obtain benchmark numbers for their papers.
To ease the implementation process, Cryptimeleon "speaks the language" of paper writers.
It offers a similar degree of abstraction as is commonly used in research papers.
For example, bilinear groups can be used as the familiar black-box and Schnorr-style proofs can be described on the level of Camenisch-Stadler notation.
It employs several optimizations (such as multi-exponentiation) transparently, allowing the developer to phrase computations as written in the paper instead of having to conform to an artificial API for better performance.
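For illustration, the sketch below shows the classic interleaved (Shamir's trick) multi-exponentiation that such optimizations build on; it is a generic Python sketch over integers mod p, not Cryptimeleon's Java implementation.

```python
# Interleaved multi-exponentiation: one shared square-and-multiply pass
# computes prod(b_i ^ e_i) mod p faster than separate exponentiations.
def multi_exp(bases, exps, p):
    result = 1
    for bit in range(max(e.bit_length() for e in exps) - 1, -1, -1):
        result = result * result % p
        for b, e in zip(bases, exps):
            if (e >> bit) & 1:
                result = result * b % p
    return result

p = 2 ** 127 - 1
assert multi_exp([3, 5], [10 ** 20, 10 ** 21], p) == \
       pow(3, 10 ** 20, p) * pow(5, 10 ** 21, p) % p
```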
Cryptimeleon implements (among others) finite fields, elliptic curve groups and pairings, hashing, Schnorr-style zero-knowledge proofs, accumulators, digital signatures, secret sharing, group signatures, attribute-based encryption, and other modern cryptographic constructions.
In this paper, we present the library, its capabilities, and explain important design decisions.
Gregor Leander, Thorben Moos, Amir Moradi, Shahram Rasoolzadeh
ePrint Report
We introduce SPEEDY, a family of ultra-low-latency block ciphers. We mix engineering expertise into each step of the ciphers' design process in order to create a secure encryption primitive with an extremely low latency in CMOS hardware. The centerpiece of our constructions is a high-speed 6-bit substitution box whose coordinate functions are realized as two-level NAND trees. In contrast to other low-latency block ciphers such as PRINCE, PRINCEv2, MANTIS and QARMA, we neither constrain ourselves by demanding decryption at low overhead, nor by requiring a super low area or energy. This freedom, together with our gate- and transistor-level considerations, allows us to create an ultra-low-latency cipher which outperforms all known solutions in single-cycle encryption speed. Our main result, SPEEDY-6-192, is a 6-round 192-bit block and 192-bit key cipher which can be executed faster in hardware than any other known encryption primitive (including Gimli in an Even-Mansour scheme and the Orthros pseudorandom function) and offers 128-bit security. One round more, i.e., SPEEDY-7-192, provides full 192-bit security. SPEEDY primarily targets hardware security solutions embedded in high-end CPUs, where area and energy restrictions are secondary while high performance is the number one priority.
Lichao Wu, Guilherme Perin, Stjepan Picek
ePrint Report
In the last decade, machine learning-based side-channel attacks became a standard option when investigating profiling side-channel attacks. At the same time, the previous state-of-the-art technique, the template attack, started losing its importance and came to be considered more as a baseline to compare against. As such, most of the results reported that machine learning (and especially deep learning) could significantly outperform the template attack. This does not mean the template attack does not have certain advantages, even when compared to deep learning. The most significant one is that it does not have any hyperparameters to tune, making it easier to use.
We take another look at the template attack, and we devise a feature engineering phase allowing template attacks to compete or even outperform state-of-the-art deep learning-based side-channel attacks. More precisely, we show how a deep learning technique called the triplet model can be used to find highly efficient embeddings of input data, which can then be fed into the template attack resulting in powerful attacks.
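To recall what the baseline looks like, the sketch below runs a classic pooled-Gaussian template attack on synthetic traces; it illustrates the profiling and matching phases only, with made-up leakage, and is not the paper's triplet-embedding pipeline.

```python
# Minimal pooled-Gaussian template attack on synthetic traces.
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_profiling, n_attack, n_samples = 4, 2000, 50, 8

# Synthetic leakage: each class (e.g. an intermediate value) shifts the mean trace.
means_true = rng.normal(0, 1, (n_classes, n_samples))
def traces(labels):
    return means_true[labels] + rng.normal(0, 0.5, (len(labels), n_samples))

prof_labels = rng.integers(0, n_classes, n_profiling)
prof_traces = traces(prof_labels)

# Profiling phase: one mean per class and a single pooled covariance matrix.
mu = np.array([prof_traces[prof_labels == c].mean(axis=0) for c in range(n_classes)])
cov_inv = np.linalg.inv(np.cov((prof_traces - mu[prof_labels]).T))

# Attack phase: pick the class maximizing the Gaussian log-likelihood over all traces.
true_class = 2
atk = traces(np.full(n_attack, true_class))
diff = atk[:, None, :] - mu[None, :, :]                       # (traces, classes, samples)
loglik = -0.5 * np.einsum("tcs,su,tcu->tc", diff, cov_inv, diff)
print("recovered class:", loglik.sum(axis=0).argmax(), "(true:", true_class, ")")
```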
Jonas Ruchti, Michael Gruber, Michael Pehl
ePrint Report
Physical Unclonable Functions (PUFs) have been increasingly used as an alternative to non-volatile memory for the storage of cryptographic secrets. Research on side-channel and fault attacks with the goal of extracting these secrets has begun to gain interest, but no fault injection attack targeting the necessary error correction within a PUF device has been shown so far. This work demonstrates one such attack on a hardware fuzzy commitment scheme implementation and thus reveals a new potential threat to current PUF-based key storage systems. After presenting evidence for the overall viability of the profiled attack by performing it on an FPGA implementation, countermeasures are analysed: we discuss the efficacy of hashing helper data with the PUF-derived key to prevent the attack, as well as codeword masking, a countermeasure effective against a side-channel attack. The analysis shows the limits of these approaches. In particular, it demonstrates the criticality of timing in codeword masking by confirming the attack's effectiveness on ostensibly protected hardware.
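For context, a fuzzy commitment scheme in its simplest form is sketched below with a 5-fold repetition code and synthetic PUF responses; it shows what is stored and reconstructed in such a design, not the attacked hardware implementation or the fault attack itself.

```python
# Toy fuzzy commitment: helper data = codeword(key) XOR PUF response; the key
# is reconstructed from a noisy re-measurement via error correction.
import secrets

def rep_encode(bits):                 # 5-fold repetition code
    return [b for b in bits for _ in range(5)]

def rep_decode(bits):                 # majority vote per 5-bit block
    return [int(sum(bits[i:i + 5]) >= 3) for i in range(0, len(bits), 5)]

key = [secrets.randbelow(2) for _ in range(16)]          # secret to be stored
puf = [secrets.randbelow(2) for _ in range(80)]          # enrollment response
helper = [c ^ r for c, r in zip(rep_encode(key), puf)]   # public helper data

noisy = [r ^ (secrets.randbelow(20) == 0) for r in puf]  # ~5% re-measurement noise
recovered = rep_decode([h ^ r for h, r in zip(helper, noisy)])
print("key recovered:", recovered == key)
```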
Arpita Patra, Akshayaram Srinivasan
ePrint Report
We give constructions of three-round secure multiparty computation (MPC) protocols for general functions that make black-box use of a two-round oblivious transfer. For the case of semi-honest adversaries, we make use of a two-round, semi-honest secure oblivious transfer in the plain model. This resolves the round-complexity of black-box (semi-honest) MPC protocols from minimal assumptions and answers an open question of Applebaum et al. (ITCS 2020). For the case of malicious adversaries, we make use of a two-round maliciously-secure oblivious transfer in the common random/reference string model that satisfies a (mild) variant of adaptive security for the receiver.
Mike Hamburg, Julius Hermelink, Robert Primas, Simona Samardjiska, Thomas Schamberger, Silvan Streit, Emanuele Strieder, Christine van Vredendaal
ePrint Report
Single-trace attacks are a considerable threat to implementations of classic public-key schemes, and their implications on newer lattice-based schemes are still not well understood.
Two recent works have presented successful single-trace attacks targeting the Number Theoretic Transform (NTT), which is at the heart of many lattice-based schemes.
However, these attacks either require a quite powerful side-channel adversary or are restricted to specific scenarios such as the encryption of ephemeral secrets.
It is still an open question if such attacks can be performed by simpler adversaries while targeting more common public-key scenarios.
In this paper, we answer this question positively. First, we present a method for crafting ring/module-LWE ciphertexts that result in sparse polynomials at the input of inverse NTT computations, independent of the used private key. We then demonstrate how this sparseness can be incorporated into a side-channel attack, thereby significantly improving noise resistance of the attack compared to previous works. The effectiveness of our attack is shown on the use-case of CCA2 secure Kyber $k$-module-LWE, where $k\in\{2,3,4\}$. Our $k$-trace attack on the long-term secret can handle noise up to a $\sigma \leq 1.2$ in the noisy Hamming weight leakage model, also for masked implementations. A $2k$-trace variant for Kyber1024 even allows noise $\sigma \leq 2.2$ also in the masked case, with more traces allowing us to recover keys up to $\sigma \leq 2.7$. Single-trace attack variants have a noise tolerance depending on the Kyber parameter set, ranging from $\sigma \leq 0.5$ to $\sigma \leq 0.7$. As a comparison, similar previous attacks in the masked setting were only successful with $\sigma \leq 0.5$.
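To give a feel for the object being targeted, the sketch below implements a deliberately tiny cyclic NTT modulo 17 and shows how a sparse input produces highly structured transform values. Kyber's actual NTT is negacyclic over $\mathbb{Z}_{3329}$ with 256 coefficients, and the ciphertext-crafting and side-channel steps of the attack are of course not reproduced here.

```python
# Tiny cyclic NTT mod 17 (length 4, root of unity w = 4): a sparse input
# yields a constant, highly structured output, which is the kind of effect
# the crafted ciphertexts exploit at the inverse NTT.
q, n, w = 17, 4, 4

def ntt(a):
    return [sum(a[j] * pow(w, j * k, q) for j in range(n)) % q for k in range(n)]

def intt(A):
    n_inv = pow(n, -1, q)
    return [n_inv * sum(A[k] * pow(w, -j * k, q) for k in range(n)) % q
            for j in range(n)]

a = [3, 0, 0, 0]
print(ntt(a))                 # [3, 3, 3, 3]
assert intt(ntt(a)) == a
```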
Mathilde Chenu, Benjamin Smith
ePrint Report
We investigate the isogeny graphs of supersingular elliptic curves over \(\mathbb{F}_{p^2}\) equipped with a \(d\)-isogeny to their Galois conjugate. These curves are interesting because they are, in a sense, a generalization of curves defined over \(\mathbb{F}_p\), and there is an action of the ideal class group of \(\mathbb{Q}(\sqrt{-dp})\) on the isogeny graphs. We investigate constructive and destructive aspects of these graphs in isogeny-based cryptography, including generalizations of the CSIDH cryptosystem and the Delfs--Galbraith algorithm.
Jose Maria Bermudo Mera, Angshuman Karmakar, Suparna Kundu, Ingrid Verbauwhede
ePrint Report
In this paper, we introduce Scabbard, a suite of post-quantum key-encapsulation mechanisms. Our suite contains three different schemes, Florete, Espada, and Sable, based on the hardness of the module- or ring-learning-with-rounding problem. In this work, we first show how the latest advancements in lattice-based cryptography can be utilized to create new, better schemes and even improve the state of the art in post-quantum cryptography.
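As a refresher on the underlying hardness assumption, the sketch below generates a single learning-with-rounding-style sample, where deterministic rounding from modulus $q$ down to $p$ replaces the added noise of LWE; the parameters are illustrative and are not those of Florete, Espada, or Sable.

```python
# One LWR-style sample: b = round((p/q) * <a, s>) mod p, with a small secret.
import secrets

q, p, n = 8192, 1024, 256                            # illustrative moduli, dimension
s = [secrets.randbelow(3) - 1 for _ in range(n)]     # small secret in {-1, 0, 1}

def lwr_sample():
    a = [secrets.randbelow(q) for _ in range(n)]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    b = (p * inner + q // 2) // q % p                # rounding instead of added noise
    return a, b

a, b = lwr_sample()
print(b)
```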
We put particular focus on designing schemes that can optimally exploit the parallelism offered by certain hardware platforms and are also suitable for resource-constrained devices. We show that this can be achieved without compromising the security of the schemes or penalizing their performance on other platforms.
To substantiate our claims, we provide optimized implementations of our three new schemes on a wide range of platforms including general-purpose Intel processors using both portable C and vectorized instructions, embedded platforms such as Cortex-M4 microcontrollers, and hardware platforms such as FPGAs. We show that on each platform, our schemes can outperform the state-of-the-art in speed, memory footprint, or area requirements.
Keita Emura, Ryoma Ito, Sachiko Kanamori, Ryo Nojima, Yohei Watanabe
ePrint Report
Searchable symmetric encryption (SSE) has attracted significant attention because it can prevent data leakage from external devices, e.g., clouds. SSE appears to be effective for constructing such a secure system; however, it is not trivial to construct such a system from SSE in practice because other parts must be designed, e.g., user login management, defining the keyword space, and sharing secret keys among multiple users who usually do not have public key certificates. In this paper, we describe the implementation of two systems from the state-free dynamic SSE (DSSE) (Watanabe et al., ePrint 2021), i.e., a secure storage system (for a single user) and a chat system (for multiple users). In addition to the DSSE protocol, we employ a secure multipath key exchange (SXKEX) protocol (Costea et al., CCS 2018), which is secure against some classes of unsynchronized active attackers. It allows the chat system users without certificates to share a secret key of the DSSE protocol in a secure manner. To realize end-to-end encryption, the shared key must be kept secret; thus, we must consider how to preserve the secret on, for example, a user's local device. However, this requires additional security assumptions, e.g., tamper resistance, and it seems difficult to assume that all users have such devices. Thus, we propose a secure key agreement protocol by combining the SXKEX and login information (a password) that does not require an additional tamper-resistant device. Combining the proposed key agreement protocol and the underlying state-free DSSE protocol allows users who know the password to use the systems on multiple devices.