International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

23 December 2023

Brett Falk, Rafail Ostrovsky, Matan Shtepel, Jacob Zhang
ePrint Report
We design and implement GigaDORAM, a novel 3-server Distributed Oblivious Random Access Memory (DORAM) protocol. Oblivious RAM allows a client to read and write to memory on an untrusted server while ensuring the server itself learns nothing about the client's access pattern. Distributed Oblivious RAM (DORAM) allows a group of servers to efficiently access a secret-shared array at a secret-shared index.

A recent generation of DORAM implementations (e.g. FLORAM, DuORAM) has focused on building DORAM protocols based on Function Secret-Sharing (FSS). These protocols have low communication complexity and low round complexity but linear computational complexity of the servers. Thus, they work for moderate-size databases, but at a certain size, these FSS-based protocols become computationally inefficient.

In this work, we introduce GigaDORAM, a hierarchical-solution-based DORAM featuring poly-logarithmic computation and communication, but with an over $100\times$ reduction in rounds per query compared to previous hierarchical DORAM protocols. In our implementation, we show that for moderate to large databases where FSS-based solutions become computation-bound, our protocol is orders of magnitude more efficient than the best existing DORAM protocols. When $N = 2^{31}$, our DORAM is able to perform over 700 queries per second.
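For readers new to the primitive, the following minimal sketch (plain Python, insecure toy parameters, our own illustration rather than GigaDORAM's protocol) shows a private lookup with an additively shared selection vector over a replicated plaintext database. It makes the linear-computation bottleneck of FSS-style approaches concrete: each server's answer is an inner product over all N entries. Real FSS-based DORAMs compress the shared selection vector and also secret-share the database itself, which this toy omits.

# Toy illustration (not GigaDORAM): lookup with a secret-shared index.
# Two servers hold additive shares (mod P) of the one-hot indicator e_i for a
# secret index i, plus a replicated plaintext database. Each server does a full
# linear scan, which is why FSS-style protocols become computation-bound for
# large N.
import secrets

P = (1 << 61) - 1                      # a prime modulus (toy choice)
N = 16                                 # database size
db = [7 * k + 3 for k in range(N)]     # toy database (replicated in plaintext)
i = 9                                  # the client's secret index

# Client: additively share the one-hot vector e_i between the two servers.
e = [1 if k == i else 0 for k in range(N)]
share0 = [secrets.randbelow(P) for _ in range(N)]
share1 = [(e[k] - share0[k]) % P for k in range(N)]

# Each server: inner product of its share with the database -> O(N) work.
ans0 = sum(share0[k] * db[k] for k in range(N)) % P
ans1 = sum(share1[k] * db[k] for k in range(N)) % P

# Client: reconstruct db[i] from the two answer shares.
assert (ans0 + ans1) % P == db[i]
print("recovered", (ans0 + ans1) % P)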
Diego F. Aranha, Anamaria Costache, Antonio Guimarães, Eduardo Soria-Vazquez
ePrint Report
Homomorphic encryption (HE) enables computation on encrypted data, which in turn facilitates the outsourcing of computation on private data. However, HE offers no guarantee that the returned result was honestly computed by the cloud. In order to have such guarantee, it is necessary to add verifiable computation (VC) into the system.

The most efficient recent works in VC over HE focus on verifying operations on the ciphertext space of the HE scheme, which usually lacks the algebraic structure that would make it compatible with existing VC systems. For example, multiplication of ciphertexts in the current most efficient HE schemes requires non-algebraic operations such as real division and rounding. Therefore, existing works for VC over HE have to either give up on those efficient HE schemes, or incur a large overhead (an amount of constraints proportional to the ciphertext ring's size) in order to emulate these non-algebraic operations.

In this work, we move away from that paradigm by placing the verification checks in the plaintext space of HE, all while the prover remains computing on ciphertexts. We achieve this by introducing a general transformation for Interactive Oracle Proofs (IOPs) to work over HE, whose result we denote as HE-IOPs. We apply this same transformation to the FRI [Ben-Sasson et al., ICALP 2018] IOP of proximity and we show how to compile HE-Reed Solomon-encoded IOPs and HE-$\delta$-correlated-IOPs with HE-FRI into HE-IOPs.

Furthermore, our construction is compatible with a prover that provides input in zero-knowledge, and only relies on building blocks that are plausibly quantum-safe.

Aligning the security parameters of HE and FRI is a difficult task for which we introduce several optimizations. We demonstrate their efficiency with a proof-of-concept implementation in Python and show that, for an encrypted Reed Solomon codeword with degree bound $2^{11}$ and rate $1/16$ in a (plaintext) field of size $2^{256}$, we can run FRI's commit phase in just 43 minutes on a single thread on a c6i.metal instance (which could be reduced to less than a minute in a multi-threaded implementation in a large server). Verification takes less than 0.2 seconds, and, based on micro-benchmarks of the employed techniques, we show it could be up to 100 times faster in a fully optimized implementation.
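For orientation, the benchmark parameters determine the length of the committed (encrypted) codeword; a one-line calculation, under the usual convention that the rate is the degree bound divided by the evaluation-domain size:

\[
  |D| \;=\; \frac{\text{degree bound}}{\text{rate}} \;=\; 2^{11}\cdot 16 \;=\; 2^{15}
\]

i.e., the committed Reed-Solomon codeword consists of $2^{15}$ (encrypted) field elements.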
Yue Guo, Harish Karthikeyan, Antigoni Polychroniadou
ePrint Report
Anonymous Zether, proposed by Bunz et al. (FC, 2020) and subsequently improved by Diamond (IEEE S&P, 2021), is an account-based confidential payment mechanism that uses a smart contract to achieve privacy (i.e., the identities of the receivers of transactions and the payloads are hidden). In this work, we look at simplifying the existing protocol while also achieving batching of transactions for multiple receivers and ensuring consensus and forward secrecy. To the best of our knowledge, this work is the first to formally study the notion of forward secrecy in the blockchain setting, borrowing a very popular and useful idea from the world of secure messaging. Specifically, we introduce:
- FUL-Zether, a forward-secure version of Zether (Bunz et al., FC, 2020).
- PRIvate DEcentralized Confidential Transactions (PriDe CT), a much-simplified version of Anonymous Zether that achieves competitive performance and enables batching of transactions for multiple receivers.
- PRIvate DEcentralized Forward-secure Until Last update Confidential Transactions (PriDeFUL CT), a forward-secure version of PriDe CT.

We also present an open-source, Ethereum-based implementation of our system. PriDe CT uses linear homomorphic encryption, as Anonymous Zether does, but with simpler zero-knowledge proofs. PriDeFUL CT uses an updatable public-key encryption scheme to achieve forward secrecy through a new DDH-based construction in the standard model.

In terms of transaction sizes, Quisquis (Asiacrypt, 2019), which is the only cryptocurrency that supports batchability (albeit in the UTXO model), has 15 times more group elements than PriDe CT. Meanwhile, for a ring of $N$ receivers, Anonymous Zether requires $6\log N$ more terms even without accounting for the ability to batch in PriDe CT. Further, our implementation indicates that, for $N=32$, even if there were 7 intended receivers, PriDe CT outperforms Anonymous Zether in proving time and gas consumption.
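The linear homomorphic encryption used by Zether-style schemes is, in essence, ElGamal with the balance in the exponent, so encrypted balances can be adjusted by homomorphically adding encrypted amounts. Below is a toy sketch of that additive homomorphism only (tiny, insecure group parameters chosen by us for illustration; not the paper's scheme, proofs, or parameters).

# Toy exponential ElGamal over a small prime-order subgroup (insecure parameters).
# Illustrates the additive homomorphism Zether-style payment schemes rely on:
# component-wise product of ciphertexts encrypts the sum of the plaintexts.
import secrets

p = 1019          # small prime (toy); its multiplicative group has order 2 * 509
q = 509           # prime subgroup order
g = 4             # element of order q in Z_p^*
x = secrets.randbelow(q) or 1        # secret key
h = pow(g, x, p)                     # public key

def enc(m):
    r = secrets.randbelow(q) or 1
    return (pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p)   # (g^r, g^m * h^r)

def add(ct1, ct2):
    # Component-wise product encrypts the sum of the plaintexts.
    return (ct1[0] * ct2[0] % p, ct1[1] * ct2[1] % p)

def dec_to_group(ct):
    c1, c2 = ct
    return c2 * pow(c1, q - x, p) % p   # = g^m; recovering m needs a small-range DLOG

balance, amount = 40, 7
assert dec_to_group(add(enc(balance), enc(amount))) == pow(g, balance + amount, p)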
Marloes Venema, Leon Botros
ePrint Report
Predicate encryption (PE) is a type of public-key encryption that captures many useful primitives such as attribute-based encryption (ABE). Although much progress has been made to generically achieve security against chosen-plaintext attacks (CPA) efficiently, in practice, we also require security against chosen-ciphertext attacks (CCA). Because achieving CCA-security on a case-by-case basis is a complicated task, several generic conversion methods have been proposed, which typically target different subclasses of PE such as ciphertext-policy ABE. As is common, such conversion methods may sacrifice some efficiency. Notably, for ciphertext-policy ABE, all proposed generic transformations incur a significant decryption overhead. Furthermore, depending on the setting in which PE is used, we may also want to require that messages are signed. To do this, predicate signature schemes can be used. However, such schemes provide a strong notion of privacy for the signer, which may be stronger than necessary for some practical settings at the cost of efficiency.

In this work, we propose the notion of predicate extension, which transforms the predicate used in a PE scheme to include one additional attribute, in both the keys and the ciphertexts. Using predicate extension, we can generically obtain CCA-security and signatures from a CPA-secure PE scheme. For the CCA-security transform, we observe that predicate extension implies a two-step approach to achieving CCA-security. This insight broadens the applicability of existing transforms for specific subclasses of PE to cover all PE. We also propose a new transform that incurs slightly less overhead than existing transforms. Furthermore, we show that predicate extension allows us to create a new type of signatures, which we call PE-based signatures. PE-based signatures are weaker than typical predicate signatures in the sense that they do not provide privacy for the signer. Nevertheless, such signatures may be more suitable for some practical settings owing to their efficiency or reduced interactivity. Lastly, to show that predicate extensions may facilitate a more efficient way to achieve CCA-security generically than existing methods, we propose a novel predicate-extension transformation for a large class of pairing-based PE, covered by the pair and predicate encodings frameworks. In particular, this yields the most efficient generic CCA-conversion for ciphertext-policy ABE.
Xun Liu, Shang Gao, Tianyu Zheng, Bin Xiao
ePrint Report
The succinct non-interactive argument of knowledge (SNARK) technique is widely used in blockchain systems to replace costly on-chain computation with the verification of a succinct proof. However, when dealing with multiple proofs, most existing applications require each proof to be verified independently, resulting in a heavy load on nodes and high transaction fees for users. To improve the efficiency of verifying multiple proofs, we introduce SnarkFold, a universal SNARK-proof aggregation scheme based on incrementally verifiable computation (IVC). Unlike previous proof aggregation approaches based on inner product arguments, which have logarithmic proof size and verification cost, SnarkFold achieves constant verification time and proof size. One core technical advance in SnarkFold, of independent interest, is the ``split IVC'': rather than using one running instance to fold/accumulate the computation, we employ two (or more) running instances of different types in the recursive circuit, avoiding the need to convert them into a single structure. This distinguishing feature is particularly well-suited for proof aggregation scenarios, as constructing arithmetic circuits for pairings can be expensive. We further demonstrate how to fold Groth16 proofs with SnarkFold. With some further optimizations, SnarkFold achieves the highest efficiency among all approaches.

22 December 2023

Thomas Attema, Serge Fehr, Michael Klooß, Nicolas Resch
ePrint Report
The Fiat-Shamir transformation is a general principle to turn any public-coin interactive proof into a non-interactive one (with security then typically analyzed in the random oracle model). While initially used for 3-round protocols, many recent constructions use it for multi-round protocols. However, in general the soundness error of the Fiat-Shamir transformed protocol degrades exponentially in the number of rounds. On the positive side, it was shown that for the special class of $(k_1,\dots,k_\mu)$-special-sound protocols the loss is actually only linear in the number of random oracle queries, and independent of the number of rounds, which is optimal.

A natural next question is whether this positive result extends to the Fiat-Shamir transformation of so-called $(\Gamma_1,\dots,\Gamma_\mu)$-special-sound protocols, a notion recently defined and analyzed in the interactive case, with the aim to capture the most general notion of special-soundness.

We show in this work that this is indeed the case. Concretely, we show that the Fiat-Shamir transformation of any $(\Gamma_1,\dots,\Gamma_\mu)$-special-sound interactive proof is knowledge sound under the same condition under which the original interactive proof is knowledge sound. Furthermore, also here the loss is linear in the number of random-oracle queries and independent of the number of rounds.

In light of the above, one might suspect that our argument follows as a straightforward combination of the above-mentioned prior works. However, this is not the case. The approach used for $(k_1,\dots,k_\mu)$-special-sound protocols, which is based on an extractor that samples without replacement, does not (seem to) generalize; on the other hand, the other approach, which uses an extractor based on sampling with replacement, comes with an additional loss that would blow up in the recursive multi-round analysis.
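To make "linear in the number of random-oracle queries" concrete, the bound in the $(k_1,\dots,k_\mu)$-special-sound case, and per this work also in the $(\Gamma_1,\dots,\Gamma_\mu)$-special-sound case, has roughly the following shape (our paraphrase, not a formula quoted from the paper; $Q$ denotes the number of random-oracle queries and $\kappa$ the knowledge error of the interactive proof):

\[
  \kappa_{\mathrm{FS}} \;=\; O\big((Q+1)\cdot\kappa\big), \qquad \text{independent of the number of rounds } \mu,
\]

in contrast to a generic loss that grows exponentially with $\mu$.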
Hanbeom Shin, Insung Kim, Sunyeop Kim, Seonggyeom Kim, Deukjo Hong, Jaechul Sung, Seokhie Hong
ePrint Report
At EUROCRYPT 2017, Grassi et al. proposed the multiple-of-8 property for 5-round AES, where the number $n$ of right pairs is a multiple of 8. At ToSC 2019, Boura et al. generalized the multiple-of property to general SPN block ciphers and applied it to the block cipher SKINNY. In this paper, we show that for SKINNY, $n$ is not only a multiple of 8 but in fact a fixed value. Unlike the previous proof of the generalized multiple-of property, which uses equivalence classes, we investigate the propagation of the set to compute the exact number $n$. We experimentally verified that the presented property holds. We extend this property by one more round using the absence of a whitening key in SKINNY and use it to construct 6-round distinguishers on SKINNY-64 and SKINNY-128. The success probability of both distinguishers is almost 1, and the total complexities are $2^{16}$ and $2^{32}$, respectively. We verified that this property only holds for SKINNY, not for AES and MIDORI, and provide the conditions under which it exists for AES-like ciphers.
Jinpeng Liu, Ling Sun
ePrint Report
HALFLOOP-96 is a 96-bit tweakable block cipher used in high frequency radio to secure automatic link establishment messages. In this paper, we concentrate on its differential properties in the contexts of conventional, related-tweak, and related-key differential attacks. Using automatic techniques, we determine the minimum number of active S-boxes and the maximum differential probability in each of the three configurations. The resistance of HALFLOOP-96 to differential attacks in the conventional and related-tweak configurations is good, and the longest distinguishers in both configurations consist of five rounds. In contrast, the security of the cipher against differential attacks in the related-key configuration is inadequate. The most effective related-key distinguisher we can find spans eight rounds. The 8-round related-key differential distinguisher is then utilised to initiate a 9-round weak-key attack. With $2^{92.96}$ chosen plaintexts, 38.77 bits of equivalent information about the keys can be recovered. Even though the attack does not pose a significant security threat to HALFLOOP-96, its security margin in the related-key configuration is exceedingly narrow. Therefore, improper use of the cipher in applications must be avoided.
Prashant Agrawal, Abhinav Nakarmi, Mahabir Prasad Jhanwar, Subodh Vishnu Sharma, Subhashis Banerjee
ePrint Report
We introduce the notion of traceable mixnets. In a traditional mixnet, multiple mix-servers jointly permute and decrypt a list of ciphertexts to produce a list of plaintexts, along with a proof of correctness, such that the association between individual ciphertexts and plaintexts remains completely hidden. However, in many applications, the privacy-utility tradeoff requires answering some specific queries about this association, without revealing any information beyond the query result. We consider queries of the following types:
a) given a ciphertext in the mixnet input list, whether it encrypts one of a given subset of plaintexts in the output list, and
b) given a plaintext in the mixnet output list, whether it is a decryption of one of a given subset of ciphertexts in the input list.

Traceable mixnets allow the mix-servers to jointly prove answers to the above queries to a querier such that neither the querier nor a threshold number of mix-servers learn any information beyond the query result. Further, if the querier is not corrupted, the corrupted mix-servers do not even learn the query result. We first comprehensively formalise these security properties of traceable mixnets and then propose a construction of traceable mixnets using novel distributed zero-knowledge proofs (ZKPs) of set membership and of a statement we call reverse set membership. Although set membership has been studied in the single-prover setting, the main challenge in our distributed setting lies in making sure that none of the mix-servers learn the association between ciphertexts and plaintexts during the proof. We implement our distributed ZKPs and show that they are faster than the state of the art by at least one order of magnitude.
Chloe Cachet, Ariel Hamlin, Maryam Rezapour, Benjamin Fuller
ePrint Report
Fuzzy extractors derive stable keys from noisy sources non-interactively (Dodis et al., SIAM Journal on Computing 2008). Since their introduction, research has focused on two tasks: 1) showing security for as many distributions as possible, and 2) providing stronger security guarantees, including allowing one to enroll the same value multiple times (reusability), security against an active attacker (robustness), and preventing leakage about the enrolled value (privacy).

Existing constructions of reusable fuzzy extractors are direct and do not support as many distributions as the best non-reusable constructions. Constructions of robust fuzzy extractors require strong assumptions even in the CRS model. Given the need for progress on the basic fuzzy extractor primitive, it is prudent to seek generic mechanisms to transform a fuzzy extractor into one that is robust, private, and reusable so that it can inherit further improvements. This work asks if one can generically upgrade fuzzy extractors to achieve robustness, privacy, and reusability. We show positive and negative results: we show upgrades for robustness and privacy, but we provide a negative result on reuse.

1. We upgrade (private) fuzzy extractors to be robust under weaker assumptions than previously known in the common reference string model.
2. We show a generic upgrade for a private fuzzy extractor using multi-bit compute and compare (MBCC) obfuscation (Wichs and Zirdelis, FOCS 2017) that requires less entropy than prior work.
3. We show one cannot arbitrarily compose private fuzzy extractors. It is known that one cannot reuse an arbitrary fuzzy extractor; each enrollment can leak a constant fraction of the input entropy. We show that one cannot build a reusable private fuzzy extractor by considering other enrollments as auxiliary input. In particular, we show that, assuming MBCC obfuscation and collision-resistant hash functions, there does not exist a private fuzzy extractor secure against unpredictable auxiliary inputs, strengthening a negative result of Brzuska et al. (Crypto 2014).
Sreyosi Bhattacharyya, Palash Sarkar
ePrint Report
The first contribution of this work is a generalisation of Stern's information set decoding (ISD) algorithm. Stern's algorithm, a variant of Stern's algorithm due to Dumer, as well as a recent generalisation of Stern's algorithm due to Bernstein and Chou are obtained as special cases of our generalisation. Our second contribution is to introduce the notion of a set of effective time/memory trade-off (TMTO) points for any ISD algorithm for given ranges of values of the parameters of the algorithm. Such a set succinctly and uniquely captures the entire landscape of TMTO points with only a minor loss in precision. We further describe a method to compute a set of effective TMTO points. As an application, we compute sets of effective TMTO points for the five variants of the Classic McEliece cryptosystem corresponding to the new algorithm as well as to Stern's, Dumer's, and Bernstein and Chou's algorithms. The results show that while Dumer's and Bernstein and Chou's algorithms do not provide any interesting TMTO points beyond what is achieved by Stern's algorithm, the new generalisation that we propose provides about twice the number of effective TMTO points obtained from Stern's algorithm. Consequences of the obtained TMTO points for the classification of the variants of Classic McEliece into appropriate NIST categories are discussed.

21 December 2023

Abderrahmane Nitaj, Tajjeeddine Rachidi
ePrint Report
Artificial intelligence (AI) is a modern technology that offers many advantages in daily life, such as predicting the weather, finding directions, classifying images and videos, and even automatically generating code, text, and videos. Other essential technologies, such as blockchain and cybersecurity, also benefit from AI. As a core component used in blockchain and cybersecurity, cryptography can benefit from AI in order to enhance the confidentiality and integrity of cyberspace. In this paper, we review the algorithms underlying four prominent cryptosystems, namely the Advanced Encryption Standard, the Rivest-Shamir-Adleman cryptosystem, Learning With Errors, and the Ascon family of cryptographic algorithms for authenticated encryption. Where possible, we pinpoint areas where AI can be used to help improve their security.
Eli Bradley, Brent Waters, David J. Wu
ePrint Report
Succinctness and zero-knowledge are two fundamental properties in the study of cryptographic proof systems. Several recent works have formalized the connections between these two notions by showing how to realize non-interactive zero-knowledge (NIZK) arguments from succinct non-interactive arguments. Specifically, Champion and Wu (CRYPTO 2023) as well as Bitansky, Kamath, Paneth, Rothblum, and Vasudevan (ePrint 2023) recently showed how to construct a NIZK argument for NP from a (somewhere-sound) non-interactive batch argument (BARG) and a dual-mode commitment scheme (and in the case of the Champion-Wu construction, a local pseudorandom generator). The main open question is whether a BARG suffices for a NIZK (just assuming one-way functions).

In this work, we first show that an adaptively-sound BARG for NP together with a one-way function implies a computational NIZK argument for NP. We then show that existing BARGs from standard algebraic assumptions, which only achieve the weaker notion of somewhere soundness, are also adaptively sound if we assume sub-exponential security. This transformation may also be of independent interest. Taken together, we obtain a NIZK argument for NP from one-way functions and a sub-exponentially-secure somewhere-sound BARG for NP.

If we instead assume plain public-key encryption, we show that a standard polynomially-secure somewhere-sound batch argument for NP suffices for the same implication. As a corollary, this means a somewhere-sound BARG can be used to generically upgrade any semantically-secure public-key encryption scheme into one secure against chosen-ciphertext attacks. More broadly, our results demonstrate that constructing non-interactive batch arguments for NP is essentially no easier than constructing NIZK arguments for NP.
Tomoyuki Morimae, Alexander Poremba, Takashi Yamakawa
ePrint Report
We study digital signatures with revocation capabilities and show two results. First, we define and construct digital signatures with revocable signing keys from the LWE assumption. In this primitive, the signing key is a quantum state which enables a user to sign many messages and yet, the quantum key is also revocable, i.e., it can be collapsed into a classical certificate which can later be verified. Once the key is successfully revoked, we require that the initial recipient of the key loses the ability to sign. We construct digital signatures with revocable signing keys from a newly introduced primitive which we call two-tier one-shot signatures, which may be of independent interest. This is a variant of one-shot signatures, where the verification of a signature for the message ``0'' is done publicly, whereas the verification for the message ``1'' is done in private. We give a construction of two-tier one-shot signatures from the LWE assumption. As a complementary result, we also construct digital signatures with quantum revocation from group actions, where the quantum signing key is simply ``returned'' and then verified as part of revocation.

Second, we define and construct digital signatures with revocable signatures from OWFs. In this primitive, the signer can produce quantum signatures which can later be revoked. Here, the security property requires that, once revocation is successful, the initial recipient of the signature loses the ability to find accepting inputs to the signature verification algorithm. We construct this primitive using a newly introduced two-tier variant of tokenized signatures. For the construction, we show a new lemma which we call the adaptive hardcore bit property for OWFs, which may enable further applications.
Hanjun Li, Huijia Lin, Antigoni Polychroniadou, Stefano Tessaro
ePrint Report
This paper introduces LERNA, a new framework for single-server secure aggregation. Our protocols are tailored to the setting where multiple consecutive aggregation phases are performed with the same set of clients, a fraction of which can drop out in some of the phases. We rely on an initial secret sharing setup among the clients which is generated once-and-for-all, and reused in all following aggregation phases. Compared to prior works [Bonawitz et al. CCS’17, Bell et al. CCS’20], the reusable setup eliminates one round of communication between the server and clients per aggregation—i.e., we need two rounds for semi-honest security (instead of three), and three rounds (instead of four) in the malicious model. Our approach also significantly reduces the server’s computational costs by only requiring the reconstruction of a single secret-shared value (per aggregation). Prior work required reconstructing a secret-shared value for each client involved in the computation.

We provide instantiations of LERNA based on the Decisional Composite Residuosity (DCR) and the (Ring) Learning with Rounding ((R)LWR) assumptions, and we evaluate a version based on the latter assumption. In addition to savings in round complexity (which result in reduced latency), our experiments show that the server's computational costs are reduced by two orders of magnitude in comparison to the state of the art. In settings with a large number of clients, we also reduce the computational costs up to twenty-fold for most clients, while a small set of “heavy clients” is subject to a workload that is still smaller than that of prior work.
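For readers unfamiliar with the baseline, the cited single-server secure-aggregation protocols are built around pairwise masks that cancel in the server's sum. The sketch below shows only that cancellation idea (the classical Bonawitz et al.-style masking, not LERNA's reusable-setup construction; it omits dropouts, secret sharing of masks, and all cryptography).

# Pairwise-masking idea behind single-server secure aggregation (toy, no crypto):
# client i sends x_i + sum_{j>i} m_ij - sum_{j<i} m_ji (mod M); the masks cancel
# in the server's sum, so only the aggregate of the x_i is revealed.
import secrets

M = 1 << 32                      # modulus for masked updates (toy choice)
inputs = [3, 14, 15, 92, 65]     # the clients' private values
n = len(inputs)

# Pairwise masks m[i][j] agreed between clients i < j (e.g., via key agreement).
m = [[secrets.randbelow(M) for _ in range(n)] for _ in range(n)]

masked = []
for i, x in enumerate(inputs):
    y = x
    for j in range(n):
        if j > i:
            y += m[i][j]      # mask added by the smaller-indexed client
        elif j < i:
            y -= m[j][i]      # and subtracted by the larger-indexed client
    masked.append(y % M)

# Server: sums the masked updates; all pairwise masks cancel.
assert sum(masked) % M == sum(inputs) % M
print(sum(masked) % M)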
Wenzhe Yang
ePrint Report
The splitting field $F$ of the polynomial $Y^n-2$ is an extension over $\mathbb{Q}$ generated by $\zeta_n=\exp(2 \pi \sqrt{-1} /n)$ and $\sqrt[n]{2}$. When $n$ ($\geq 8$) is a power-of-two integer, the degree of $F$ over $\mathbb{Q}$ is $n^2/4$. In this paper, we lay the foundation for applying the Order-LWE in $\mathcal{R}=\mathbb{Z}[\zeta_n, \sqrt[n]{2}]$ to cryptographic uses. More specifically, we compute the Galois group $\text{Gal}\left(F/\mathbb{Q} \right)$ and the canonical embedding of $F$ into $\mathbb{C}^{n^2/4}$. Then we study the trace pairings of the integral basis $\zeta_n^{k_0} \sqrt[n]{2}^{k_1}$ and obtain its dual explicitly, which will be crucial when we study the error distributions on the ideal lattices associated with $\mathcal{R}$.
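For readers who want to see where $n^2/4$ comes from, a standard tower argument (our sketch, assuming $n = 2^k \geq 8$) is consistent with the stated degree:

\[
  [F:\mathbb{Q}] \;=\; [\mathbb{Q}(\zeta_n):\mathbb{Q}]\cdot[F:\mathbb{Q}(\zeta_n)]
  \;=\; \varphi(n)\cdot\frac{n}{2} \;=\; \frac{n}{2}\cdot\frac{n}{2} \;=\; \frac{n^2}{4},
\]

since $\varphi(2^k) = n/2$, and $[F:\mathbb{Q}(\zeta_n)] \leq n/2$ because $\sqrt{2} = \zeta_8 + \zeta_8^{-1} \in \mathbb{Q}(\zeta_n)$ makes $\sqrt[n]{2}$ a root of $Y^{n/2} - \sqrt{2}$; the stated value $n^2/4$ corresponds to this bound being tight.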

Moreover, we design a Two-Variable Number Theoretic Transform (2NTT) algorithm for the quotient $\mathcal{R}_p=\mathcal{R}/p\mathcal{R}$, where $p$ is a prime number such that $Y^n \equiv 2 \bmod p$ has $n$ distinct solutions. Compared to the one-variable NTT, a crucial advantage of 2NTT is that it enjoys a quadratic saving of twiddle factors. Hence, it is very interesting to see how to leverage this quadratic saving to boost the performance of 2NTT in practical implementations.
Wicher Malten, Mehmet Ugurbil, Miguel de Vega
ePrint Report
In 1982, Yao introduced the problem of comparing two private values, thereby launching the study of protocols for secure multi-party computation (MPC). Since then, comparison protocols have undergone extensive study and found widespread applications.

We survey state-of-the-art comparison protocols for an arbitrary number of parties, decompose them into smaller primitives and analyse their communication complexity under the usual assumption that the underlying MPC protocol does preprocessing and computes linear operations without communication. We then develop two new comparison protocols and explain why they are faster than similar protocols, including those that are commonly used in practice: they reduce the number of online multiplications, without increasing preprocessing or round complexity. More concretely, online bandwidth is reduced by more than half for the standard comparison protocols whose round complexity is logarithmic in the bit-length, whereas for constant-round comparison protocols the reduction is two-thirds.
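As background on how such protocols are usually decomposed, many MPC comparison protocols reduce comparing two $\ell$-bit values to extracting one bit of an $(\ell+1)$-bit sum. The check below is a plain, non-secret-shared illustration of that standard reduction (not the new protocols from the paper).

# Standard reduction used by many MPC comparison protocols (plain, non-MPC check):
# for l-bit values 0 <= a, b < 2^l, bit l of z = 2^l + a - b equals [a >= b],
# so a secure comparison reduces to securely extracting one bit of z.
l = 8
for a in range(2 ** l):
    for b in range(2 ** l):
        z = (1 << l) + a - b
        assert ((z >> l) & 1) == (1 if a >= b else 0)
print("reduction verified for all", 2 ** (2 * l), "pairs")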
Cas Cremers, Alexander Dax, Niklas Medinger
ePrint Report
Key Encapsulation Mechanisms (KEMs) are a critical building block for hybrid encryption and modern security protocols, notably in the post-quantum setting. Given the asymmetric public key of a recipient, the primitive establishes a shared secret key between sender and recipient. In recent years, a large number of abstract designs and concrete implementations of KEMs have been proposed, notably in the context of the NIST selection process for post-quantum primitives. The traditional security notion for KEMs has been the IND-CCA notion that was designed for public-key encryption (PKE). In recent work, additional properties, such as robustness and anonymity, were lifted from the PKE setting to the KEM setting.

In this work we introduce several stronger security notions for KEMs. Our new properties formalize in which sense outputs of the KEM uniquely determine, i.e., bind, other values. Our new notions are based on two orthogonal observations: First, unlike PKEs, KEMs establish a unique key, which leads to natural binding properties for the established keys. Our new binding properties can be used, e.g., to prove the absence of attacks that were not captured by prior security notions, such as re-encapsulation attacks. If we regard KEMs as one-pass key exchanges, our key-binding properties correspond to implicit key agreement properties. Second, to prove the absence of weak keys, we have to consider not only honestly generated key pairs but also adversarially-generated key pairs.

We define a hierarchy of security notions for KEMs based on our observations. We position properties from the literature within our hierarchy, provide separating examples, and give examples of real world KEMs in the context of our hierarchy.
Sebastian Hasler, Pascal Reisert, Marc Rivinius, Ralf Küsters
ePrint Report
In recent years, actively secure SPDZ-like protocols for dishonest majority, like SPD$\mathbb Z_{2^k}$, Overdrive2k, and MHz2k, over base rings $\mathbb Z_{2^k}$ have become more and more efficient. In this paper, we present a new actively secure MPC protocol Multipars that outperforms these state-of-the-art protocols over $\mathbb Z_{2^k}$ by more than a factor of 2 in the two-party setup in terms of communication. Multipars is the first actively secure N-party protocol over $\mathbb Z_{2^k}$ that is based on linear homomorphic encryption (LHE) in the offline phase (instead of oblivious transfer or somewhat homomorphic encryption in previous works). The strong performance of Multipars relies on a new adaptive packing for BGV ciphertexts that allows us to reduce the parameter size of the encryption scheme and the overall communication cost. Additionally, we use modulus switching for further size reduction, a new type of enhanced CPA security over $\mathbb Z_{2^k}$, a truncation protocol for Beaver triples, and a new LHE-based offline protocol without sacrificing over $\mathbb Z_{2^k}$.

We have implemented Multipars and therewith provide the fastest preprocessing phase over $\mathbb Z_{2^k}$. Our evaluation shows that Multipars offers at least a factor of 8 lower communication costs and up to a factor of 15 faster runtime in the WAN setting compared to the currently best available actively secure MPC implementation over $\mathbb Z_{2^k}$.
Ruize Wang, Kalle Ngo, Joel Gärtner, Elena Dubrova
ePrint Report
We present a side-channel attack on CRYSTALS-Dilithium, a post-quantum secure digital signature scheme, with two variants of post-processing. The side-channel attack exploits information leakage in the secret key unpacking procedure of the signing algorithm to recover the coefficients of the polynomials in the secret key vectors ${\bf s}_1$ and ${\bf s}_2$ by profiled deep learning-assisted power analysis. In the first variant, one half of the coefficients of ${\bf s}_1$ and ${\bf s}_2$ is recovered by power analysis and the rest is derived by solving a system of linear equations based on ${\bf t} = {\bf A}{\bf s}_1 + {\bf s}_2$, where ${\bf A}$ and ${\bf t}$ are parts of the public key. This case assumes knowledge of ${\bf t}_0$, the least significant bits of the vector ${\bf t}$. The second variant waives this requirement. However, to succeed, it needs a larger portion of ${\bf s}_1$ to be recovered by power analysis. The remainder of ${\bf s}_1$ is obtained by lattice reduction. Once the full ${\bf s}_1$ is recovered, all the other information necessary for generating valid signatures can be trivially derived from the public key. We evaluate both variants on an ARM Cortex-M4 implementation of Dilithium-2. The profiling stage (trace capture and neural network training) takes less than 10 hours. In the attack assuming that ${\bf t}_0$ is known, the probability of successfully recovering the full vector ${\bf s}_1$ from a single trace captured from a device different from the profiling device is non-negligible (9%). The success rate approaches 100% if multiple traces are available for the attack. Our results demonstrate the necessity of protecting the secret key of CRYSTALS-Dilithium from single-trace attacks and call for a reassessment of the role of compression of the public key vector ${\bf t}$ in the security of CRYSTALS-Dilithium implementations.
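The linear-algebra step in the first variant is elementary once ${\bf s}_1$ and the full ${\bf t}$ are known. The sketch below shows it with toy dimensions over plain integer vectors rather than Dilithium's polynomial rings (only the modulus matches Dilithium; everything else is a simplifying assumption, and real deployments publish ${\bf t}$ only in compressed form).

# Toy version of the linear-algebra step: given public (A, t) with t = A*s1 + s2 (mod q)
# and s1 recovered by the side channel, the remaining secret follows as s2 = t - A*s1.
import numpy as np

q = 8380417                      # Dilithium's modulus; the rest is simplified
rng = np.random.default_rng(1)

k, l, eta = 4, 4, 2
A = rng.integers(0, q, size=(k, l))
s1 = rng.integers(-eta, eta + 1, size=l)   # assumed recovered via power analysis
s2 = rng.integers(-eta, eta + 1, size=k)   # unknown second secret vector
t = (A @ s1 + s2) % q                      # public key part (uncompressed toy t)

s2_rec = (t - A @ s1) % q
s2_rec = np.where(s2_rec > q // 2, s2_rec - q, s2_rec)   # re-center small coefficients
assert np.array_equal(s2_rec, s2)
print(s2_rec)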