IACR News
Here you can see all recent updates to the IACR webpage.
25 February 2022
Gabriel Zaid, Lilian Bossuet, Mathieu Carbone, Amaury Habrard, Alexandre Venelli
ePrint Report
Over recent years, the cryptanalysis community has leveraged the potential of Deep Learning research to enhance attacks. In particular, several studies have recently highlighted the benefits of Deep Learning based Side-Channel Attacks (DLSCA) against real-world cryptographic implementations. While this new research area in applied cryptography provides impressive results for recovering a secret key even when countermeasures are implemented (e.g., desynchronization, masking schemes), the lack of theoretical results makes the construction of appropriate models a notoriously hard problem. In this work, we propose the first solution that bridges DL and SCA. Based on theoretical results, we develop the first generative model, called Conditional Variational AutoEncoder based on Stochastic Attacks (cVAE-SA), designed from the well-known Stochastic Attacks introduced by Schindler et al. in 2005. This model reduces the black-box property of DL and eases architecture design for every real-world crypto-system, as we define theoretical complexity bounds which depend only on the dimension of the (reduced) trace and the targeted variable over $\mathbb{F}_{2}^{n}$. We validate our theoretical proposition through simulations and public datasets on a wide range of use-cases, including multi-task learning, the curse of dimensionality, and masking schemes.
Qun Liu, Weijia Wang, Yanhong Fan, Lixuan Wu, Ling Sun, Meiqin Wang
ePrint Report
Lightweight cryptography features a small footprint and/or low computational complexity. Low-cost implementations of linear layers usually play an important role in lightweight cryptography. Although it has been shown by Boyar et al. that finding the optimal implementation of a linear layer is a Shortest Linear Program (SLP) problem and is NP-hard, there exist a variety of heuristic methods to search for near-optimal solutions. This paper considers the low-latency criterion and focuses on the heuristic search for lightweight implementations of linear layers. Most prior approaches iteratively combine the inputs (of linear layers) to reach the output, which can be regarded as a forward search. To better meet the low-latency criterion, we propose a new framework of backward search that attempts to iteratively split every output (into an XOR of two bits) until all inputs appear. By bounding the time of splitting, the new framework can find a sub-optimal solution with a minimized depth of circuits.
We apply our new search algorithm to linear layers of block ciphers and find many low-latency candidates for implementations. Notably, for AES MixColumns, we provide an implementation with 103 XOR gates and a depth of 3, which is among the best hardware implementations of the AES linear layer. Besides, we obtain better implementations in terms of XOR gates for 54.3% of the 4256 Maximum Distance Separable (MDS) matrices proposed by Li et al. at FSE 2019. We also obtain an involutory MDS matrix (in M4(GL(8, F_2))) whose implementation uses the lowest number (i.e., 86, saving 2 from the state-of-the-art result) of XORs with the minimum depth.
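As a rough illustration of the backward-search idea, the sketch below splits each output bit into a balanced XOR tree and counts gates and circuit depth. The 4x4 toy matrix is a hypothetical example, not a cipher's actual linear layer, and the sketch ignores the subexpression sharing that a real heuristic would exploit.

```python
def split_cost(bits):
    """Backward split of one output: recursively cut the list of input bits
    to XOR into two halves until single bits remain.
    Returns (xor_gates, depth); subexpression sharing is ignored."""
    k = len(bits)
    if k <= 1:
        return 0, 0
    gl, dl = split_cost(bits[:k // 2])
    gr, dr = split_cost(bits[k // 2:])
    return gl + gr + 1, max(dl, dr) + 1

# Hypothetical 4-output toy layer: each row lists the input bits XORed together.
M = [[0, 1, 2], [1, 2, 3], [0, 3], [0, 1, 2, 3]]
gates = sum(split_cost(row)[0] for row in M)
depth = max(split_cost(row)[1] for row in M)
print(gates, depth)   # 8 XOR gates at depth 2
```

Balanced splitting minimizes the depth of each output's tree; the paper's contribution is finding shared splits across outputs that also keep the total gate count low.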
Gregor Haas, Aydin Aysu
ePrint Report
Cryptographic instruction set extensions are commonly used for ciphers which would otherwise face unacceptable side channel risks. A prominent example of such an extension is the ARMv8 Cryptographic Extension, or ARM CE for short, which defines dedicated instructions to securely accelerate AES. However, while these extensions may be resistant to traditional "digital" side channel attacks, they may still be vulnerable to physical side channel attacks.
In this work, we demonstrate the first such attack on a standard ARM CE AES implementation. We specifically focus on the implementation used by Apple’s CoreCrypto library which we run on the Apple A10 Fusion SoC. To that end, we implement an optimized side channel acquisition infrastructure involving both custom iPhone software and accelerated analysis code. We find that an adversary which can observe 5-30 million known-ciphertext traces can reliably extract secret AES keys using electromagnetic (EM) radiation as a side channel. This corresponds to an encryption operation on less than half of a gigabyte of data, which could be acquired in less than 2 seconds on the iPhone 7 we examined. Our attack thus highlights the need for side channel defenses for real devices and production, industry-standard encryption software.
Markku-Juhani O. Saarinen
ePrint Report
FIPS 140-3 is the main standard defining security requirements for cryptographic modules in the U.S. and Canada; commercially viable hardware modules generally need to be compliant with it. The scope of FIPS 140-3 will also expand to the new NIST Post-Quantum Cryptography (PQC) standards when migration from older RSA and Elliptic Curve cryptography begins. FIPS 140-3 mandates the testing of the effectiveness of ``non-invasive attack mitigations'', or side-channel attack countermeasures. At the higher security levels 3 and 4, the testing methods and metrics are expected to be based on ISO 17825, which is based on the older Test Vector Leakage Assessment (TVLA) methodology. We discuss how to apply ISO 17825 to hardware modules that implement lattice-based PQC standards for public-key cryptography -- Key Encapsulation Mechanisms (KEMs) and Digital Signatures. We find that simple ``random key'' vs. ``fixed key'' tests are unsatisfactory due to the close linkage between public and private components of PQC keypairs. While the general statistical testing approach and requirements can remain consistent with older public-key algorithms, a non-trivial challenge in creating ISO 17825 testing procedures for PQC is the careful design of test vectors used for power, electromagnetic, and timing measurements so that relevant Critical Security Parameter (CSP) leakage is captured.
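The fixed-vs-random testing style underlying TVLA and ISO 17825 can be sketched as a pointwise Welch's t-test over two sets of traces. The simulated traces and the leaking sample index below are illustrative assumptions, not measurements from any real module.

```python
import numpy as np

def welch_t(a, b):
    """Pointwise Welch's t-statistic between two sets of traces (rows = traces).
    In TVLA / ISO 17825 style testing, |t| > 4.5 at any sample flags leakage."""
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    return (ma - mb) / np.sqrt(va / len(a) + vb / len(b))

rng = np.random.default_rng(0)
n_traces, n_samples = 5000, 100
fixed = rng.normal(0.0, 1.0, (n_traces, n_samples))
fixed[:, 40] += 0.2                       # hypothetical data-dependent leak
random_in = rng.normal(0.0, 1.0, (n_traces, n_samples))
t = welch_t(fixed, random_in)
print(int(np.argmax(np.abs(t))))          # the leaking sample index, 40
```

The paper's point is precisely that for PQC the hard part is not this statistic but choosing the "fixed" and "random" test vector classes so that CSP-dependent leakage actually shows up in it.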
Omri Shmueli
ePrint Report
Quantum tokenized signature schemes (Ben-David and Sattath, QCrypt 2017) allow a sender to generate and distribute quantum unclonable states which grant their holder a one-time permission to sign in the name of the sender. Such schemes are a strengthening of public-key quantum money schemes, as they imply public-key quantum money where some channels of communication in the system can be made classical.
An even stronger primitive is semi-quantum tokenized signatures, where the sender is classical and can delegate the generation of the token to a (possibly malicious) quantum receiver. Semi-quantum tokenized signature schemes imply a powerful version of public-key quantum money satisfying two key features:
1. The bank is classical and the scheme can execute on a completely classical communication network. In addition, the bank is \emph{stateless} and after the creation of a banknote, does not hold any information nor trapdoors except the balance of accounts in the system. Such quantum money scheme solves the main open problem presented by Radian and Sattath (AFT 2019). 2. Furthermore, the classical-communication transactions between users in the system are \emph{direct} and do not need to go through the bank. This enables the transactions to be both classical and private.
While fully-quantum tokenized signatures (where the sender is quantum and generates the token by itself) are known based on quantum-secure indistinguishability obfuscation and injective one-way functions, the semi-quantum version is not known under any computational assumption. In this work we construct a semi-quantum tokenized signature scheme based on quantum-secure indistinguishability obfuscation and the sub-exponential hardness of the Learning with Errors problem. In the process, we show new properties of quantum coset states and a new hardness result on indistinguishability obfuscation of classical subspace membership circuits.
Ben Nassi, Ras Swissa, Yuval Elovici, Boris Zadov
ePrint Report
In this paper, we introduce "the little seal bug" attack, an optical side-channel attack which exploits lightweight reflective objects (e.g., an iced coffee can, a smartphone stand, a souvenir) as optical implants for the purpose of recovering the content of a conversation. We show how fluctuations in the air pressure on the surface of a shiny object can be exploited by eavesdroppers to recover speech passively and externally, using equipment not likely to be associated with spying. These air pressure fluctuations, which occur in response to sound, cause the shiny object to vibrate and reflect light that is modulated by the nearby sound; as a result, seemingly innocuous objects like an empty beverage can, desk ornament, or smartphone stand, which are often placed on desks, can provide the infrastructure required for eavesdroppers to recover the content of a victim's conversation held while the victim is sitting at his/her desk.
First, we conduct a series of experiments aimed at learning the characteristics of optical measurements obtained from shiny objects that reflect light, by using a photodiode to analyze the movement of a shiny weight in response to sound. Based on our findings, we propose an optical-acoustical transformation (OAT) to recover speech from the optical measurements obtained from light reflected from shiny objects. Finally, we compare the performance of the little seal bug attack to related methods presented in other studies. We show that eavesdroppers located 35 meters away from a victim can use the little seal bug attack to recover speech at the sound level of a virtual meeting with fair intelligibility.
Mark Zhandry
ePrint Report
Generic groups are an important tool for analyzing the feasibility and infeasibility of group-based cryptosystems. There are two distinct widespread versions of generic groups, Shoup's and Maurer's, the main difference being whether or not group elements are given explicit labels. The two models are often treated as equivalent. In this work, however, we demonstrate that the models are in fact quite different, and care is needed when stating generic group results:
- We show that numerous textbook constructions are *not* captured by Maurer, but are captured by Shoup. In the other direction, any construction captured by Maurer *is* captured by Shoup.
- For constructions that exist in both models, we show that security is equivalent for "single stage" games, but Shoup security is strictly stronger than Maurer security for some "multi-stage" games.
- The existing generic group un-instantiability results do not apply to Maurer. We fill this gap with a new un-instantiability result.
- We explain how the known black box separations between generic groups and identity-based encryption do not fully apply to Shoup, and resolve this by providing such a separation.
- We give a new un-instantiability result for the *algebraic* group model.
Monika Henzinger, Jalaj Upadhyay
ePrint Report
We study fine-grained error bounds for differentially private algorithms for averaging and counting in the continual observation model. For this, we use the completely bounded spectral norm (cb norm) from operator algebra. For a matrix $W$, its cb norm is defined as
\[
\|{W}\|_{\mathsf{cb}} = \max_{Q} \left\{ \frac{\|{Q \bullet W}\|}{\|{Q}\|} \right\},
\]
where $Q \bullet W$ denotes the Schur product and $\|{\cdot}\|$ denotes the spectral norm. We bound the cb norm of two fundamental matrices studied in differential privacy under the continual observation model: the counting matrix $M_{\mathsf{counting}}$ and the averaging matrix $M_{\mathsf{average}}$. For $M_{\mathsf{counting}}$, we give lower and upper bounds whose additive gap is $1 + \frac{1}{\pi}$. Our factorization also has two desirable properties sufficient for the streaming setting: the factorization consists of lower-triangular matrices, and the number of distinct entries in the factorization is exactly $T$. This allows us to compute the factorization on the fly while requiring the curator to store only a $T$-dimensional vector. For $M_{\mathsf{average}}$, we show an additive gap of $\approx 0.64$ between the lower and upper bounds.
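A small numeric sketch of the definition: any fixed matrix $Q$ certifies a lower bound $\|Q \bullet W\|/\|Q\|$ on $\|W\|_{\mathsf{cb}}$, so sampling a few $Q$'s gives a (loose) lower bound for the counting matrix. The dimension and sampling strategy below are illustrative choices, not the paper's method.

```python
import numpy as np

def schur_ratio(W, Q):
    """||Q . W|| / ||Q|| in spectral norm; every Q lower-bounds ||W||_cb."""
    return np.linalg.norm(Q * W, 2) / np.linalg.norm(Q, 2)

T = 64
M_counting = np.tril(np.ones((T, T)))     # prefix-sum / counting matrix
rng = np.random.default_rng(1)
best = max(schur_ratio(M_counting, rng.standard_normal((T, T)))
           for _ in range(200))
print(best)   # a loose numeric lower bound on ||M_counting||_cb
```

Taking $Q$ to be the identity already gives the ratio $1$; the hard part, which the paper addresses analytically, is pinning the cb norm down to within a small additive gap.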
Daniel Rausch, Ralf Kuesters, Céline Chevalier
ePrint Report
Universal Composability is a widely used concept for the design and analysis of protocols. Since Canetti's original UC model and the model by Pfitzmann and Waidner, several different models for universal composability have been proposed, including, for example, the IITM model, GNUC, CC, but also extensions and restrictions of the UC model, such as JUC, GUC, and SUC. These were motivated by the lack of expressivity of existing models, ease of use, or flaws in previous models. Cryptographers choose between these models based on their needs at hand (e.g., support for joint state and global state) or simply their familiarity with a specific model. While all models follow the same basic idea, there are huge conceptual differences, which raise fundamental and practical questions: (How) do the concepts and results proven in one model relate to those in another model? Do the different models and the security notions formulated therein capture the same classes of attacks? Most importantly, can cryptographers re-use results proven in one model in another model, and if so, how?
In this paper, we initiate a line of research with the aim to address this lack of understanding, consolidate the space of models, and enable cryptographers to re-use results proven in other models. As a start, here we focus on Canetti's prominent UC model and the IITM model proposed by Kuesters et al. The latter is an interesting candidate for comparison with the UC model since it has been used to analyze a wide variety of protocols, supports a very general protocol class and provides, among others, seamless treatment of protocols with shared state, including joint and global state. Our main technical contribution is an embedding of the UC model into the IITM model showing that all UC protocols, security and composition results carry over to the IITM model. Hence, protocol designers can profit from the features of the IITM model while being able to use all their results proven in the UC model. We also show that, in general, one cannot embed the full IITM model into the UC model.
Thibauld Feneuil, Jules Maire, Matthieu Rivain, Damien Vergnaud
ePrint Report
We propose (honest verifier) zero-knowledge arguments for the modular subset sum problem. Given a set of integers, this problem asks whether a subset adds up to a given integer t modulo a given integer q. This NP-complete problem has been considered since the 1980s as an interesting alternative in cryptography to hardness assumptions based on number theory, and it is in particular believed to provide post-quantum security. Previous combinatorial approaches, notably one due to Shamir, yield arguments with cubic communication complexity (in the security parameter). More recent methods, based on the MPC-in-the-head technique, also produce arguments with cubic communication complexity and only for a prime modulus q.
We improve this approach by using a secret sharing over small integers (rather than modulo q) to reduce the size of the arguments and remove the prime modulus restriction. Since this sharing may reveal information on the secret subset, we introduce the idea of rejection to the MPC-in-the-head paradigm. Special care has to be taken to balance completeness and soundness and to preserve the zero-knowledge property of our arguments. We combine this idea with two techniques to prove that the secret vector (which selects the subset) indeed has binary coordinates. Our new techniques have the significant advantage of resulting in arguments whose size is independent of the modulus q.
Our new protocols for the subset sum problem achieve an asymptotic improvement by producing arguments of quadratic size (versus cubic size for previous proposals). This improvement is also practical: for a 256-bit modulus q, the best variant of our protocols yields 13KB arguments, while previous proposals gave 1180KB arguments for the best general protocol and 122KB for the best protocol restricted to a prime modulus. Our techniques can also be applied to vectorial variants of the subset sum problem, in particular the inhomogeneous short integer solution (ISIS) problem, for which they provide competitive alternatives to state-of-the-art protocols. We also show the application of our protocol to building efficient zero-knowledge arguments of plaintext and/or key knowledge in the context of fully homomorphic encryption. When applied to the TFHE scheme, the obtained arguments are more than 20 times smaller than those obtained with previous protocols.
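For reference, the relation these arguments prove can be stated in a few lines. The brute-force search below only illustrates the statement on a hypothetical toy instance; it is not the protocol itself.

```python
import itertools

def subset_sum_relation(a, x, t, q):
    """The NP relation behind the arguments: x is binary and <a, x> = t (mod q)."""
    return all(b in (0, 1) for b in x) and \
        sum(ai * xi for ai, xi in zip(a, x)) % q == t % q

# Hypothetical toy instance.
a, q = [3, 7, 12, 30], 41
x_secret = (1, 0, 1, 1)
t = sum(ai * xi for ai, xi in zip(a, x_secret)) % q      # 45 mod 41 = 4
witnesses = [x for x in itertools.product((0, 1), repeat=len(a))
             if subset_sum_relation(a, x, t, q)]
print(t, witnesses)   # 4 [(1, 0, 1, 1)]
```

A zero-knowledge argument convinces a verifier that such an x exists (and is known to the prover) without revealing it; the two "binarity" techniques mentioned above enforce the `b in (0, 1)` part of this relation.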
Yanbo Chen, Yunlei Zhao
ePrint Report
An aggregate signature (AS) scheme allows an unspecified aggregator to compress many signatures into a short aggregation. AS schemes can save storage costs and accelerate verification. They are desirable for applications where many signatures need to be stored, transferred, or verified together, like blockchain systems, network routing, e-voting, and certificate chains. However, constructing AS schemes based on general groups, only requiring the hardness of the discrete logarithm problem, is quite tricky and has been a long-standing research question.
Recently, Chalkias et al. (CT-RSA 2021) proposed a half-aggregate scheme for Schnorr signatures. We observe that the scheme lacks a tight security proof and does not support incremental aggregation well, i.e., adding more signatures into a pre-existing aggregation. Chalkias et al. also presented an aggregate scheme for Schnorr signatures whose security can be tightly reduced to the security of Schnorr signatures in the random oracle model (ROM). However, the scheme is rather expensive and does not achieve half-aggregation. Whether there exists half-aggregation of Schnorr signatures with a tight reduction in the ROM is a fundamental question of both theoretical and practical interest.
This work's contributions are threefold. We first give a tight security proof for the scheme in CT-RSA 2021 in the ROM and the algebraic group model (AGM). Second, we provide a new half-aggregate scheme for Schnorr signatures that perfectly supports incremental aggregation, whose security also tightly reduces to Schnorr's security in the AGM+ROM. Third, we present a Schnorr-based sequential aggregate signature (SAS) scheme that is as tightly secure as the Schnorr signature scheme in the ROM (without the AGM). Our work may pave the way for applying Schnorr aggregation in real-world cryptographic applications.
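To see the algebra that makes half-aggregation possible, here is a heavily simplified sketch over a toy Schnorr group: all the nonces $R_i$ are kept, and the responses $s_i$ are compressed into a single value $z = \sum_i l_i s_i$. The tiny group parameters and the way the coefficients $l_i$ are hashed are illustrative stand-ins, not the CT-RSA 2021 construction or this paper's scheme.

```python
import hashlib, secrets

p, q, g = 2039, 1019, 4          # toy Schnorr group: g has prime order q mod p

def H(*parts):
    """Hash to Z_q (illustrative domain separation via string joining)."""
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def sign(x, P, m):
    k = secrets.randbelow(q - 1) + 1
    R = pow(g, k, p)
    return R, (k + H(R, P, m) * x) % q   # s = k + c*x mod q

def aggregate(sigs):
    """Keep all R_i; compress the s_i into z = sum of l_i * s_i mod q."""
    Rs = [R for R, _ in sigs]
    ls = [H("agg", i, *Rs) for i in range(len(sigs))]
    return Rs, sum(l * s for l, (_, s) in zip(ls, sigs)) % q

def agg_verify(Rs, z, pubs, msgs):
    """g^z must equal the product of (R_i * P_i^{c_i})^{l_i} mod p."""
    ls = [H("agg", i, *Rs) for i in range(len(Rs))]
    rhs = 1
    for R, P, m, l in zip(Rs, pubs, msgs, ls):
        rhs = rhs * pow(R * pow(P, H(R, P, m), p) % p, l, p) % p
    return pow(g, z, p) == rhs

keys = [keygen() for _ in range(3)]
msgs = ["tx1", "tx2", "tx3"]
sigs = [sign(x, P, m) for (x, P), m in zip(keys, msgs)]
Rs, z = aggregate(sigs)
print(agg_verify(Rs, z, [P for _, P in keys], msgs))  # True
```

Correctness follows from linearity: since $g^{s_i} = R_i P_i^{c_i}$ for each valid signature, $g^{\sum l_i s_i} = \prod (R_i P_i^{c_i})^{l_i}$. The security questions the paper studies (tightness of the reduction, incremental aggregation) are about how the $l_i$ are derived, which this sketch deliberately oversimplifies.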
Onur Gunlu, Matthieu Bloch, Rafael F. Schaefer, Aylin Yener
ePrint Report
This work considers mitigation of information leakage between communication and sensing operations in joint communication and sensing systems. Specifically, a discrete memoryless state-dependent broadcast channel model is studied in which (i) the presence of feedback enables a transmitter to simultaneously achieve reliable communication and channel state estimation; (ii) one of the receivers is treated as an eavesdropper whose state should be estimated but which should remain oblivious to a part of the transmitted information. The model abstracts the challenges behind security for joint communication and sensing if one views the channel state as a characteristic of the receiver, e.g., its location. For independent identically distributed (i.i.d.) states, perfect output feedback, and when part of the transmitted message should be kept secret, a partial characterization of the secrecy-distortion region is developed. The characterization is exact when the broadcast channel is either physically-degraded or reversely-physically-degraded. The characterization is also extended to the situation in which the entire transmitted message should be kept secret. The benefits of a joint approach compared to separation-based secure communication and state-sensing methods are illustrated with a binary joint communication and sensing model.
Keita Emura, Shiho Moriai, Takuma Nakajima, Masato Yoshimi
ePrint Report
Cache systems are crucial for reducing communication overhead on the Internet. The importance of communication privacy is being increasingly and widely recognized; therefore, we anticipate that nearly all end-to-end communication will be encrypted via secure sockets layer/transport layer security (SSL/TLS) in the near future. Herein we consider a catch-22 situation, wherein the cache server checks whether content has been cached or not, i.e., the cache server needs to observe it, thereby violating end-to-end encryption. We avoid this catch-22 situation by proposing an encrypted cache system which we call Cache-22. To maximize its deployability, we avoid heavy, advanced cryptographic tools, and instead base our Cache-22 system purely on traditional SSL/TLS communication. It employs tags for searching, and its design concept enables the service provider to decide, e.g., via an authentication process, whether or not a particular user should be allowed to access particular content. We provide a prototype implementation of the proposed system using the color-based cooperative cache proposed by Nakajima et al. (IEICE Trans. 2017) under several ciphersuites containing post-quantum key exchanges in addition to ECDHE (Elliptic Curve-based). We consider NIST Post-Quantum Cryptography round 3 finalists and alternate candidates: lattice-based (Kyber, SABER, NTRU), code-based (BIKE), and isogeny-based (SIKE). Compared to direct HTTPS communication between a service provider and a user, employing our Cache-22 system has the merit of drastically reducing communication between the cache server and the service provider (by approximately 95%), which is effective in a hierarchical network with a cost disparity.
Hanyu Jia, Xiangxue Li
ePrint Report
We consider private function evaluation (PFE) in the malicious adversary model. The current state of the art in PFE from Valiant's universal circuits (Liu, Yu, et al., CRYPTO 2021) does not avoid the logarithmic factor in circuit size. In constructing linear active PFE, one essential building block is to prove the correctness of an extended permutation (EP, Mohassel and Sadeghian at EUROCRYPT 2013) by zero-knowledge protocols with linear complexity. The linear instantiation $\mathcal{ZK}_{EP}$ by Mohassel, Sadeghian, and Smart (ASIACRYPT 2014) is a three-phase protocol, and each phase (dummy placement, replication, and permutation) is of size $2g$. Its overhead thus seems rather heavy, reducing its practicality. We present in this paper a novel and efficient framework $\mathcal{ZK}_{DS}$ for proving the correctness of an EP. We show that \underline{d}ouble \underline{s}huffles suffice for EP (exponentiations and communication overheads are about 27% and 31% of those of $\mathcal{ZK}_{EP}$, respectively). The data owner(s) generate the randomness for the first shuffle, whose outputs determine the outgoing wires of the circuit defined by the function. The function owner reuses and extends the randomness in the second shuffle, whose outputs determine the incoming wires.
From $\mathcal{ZK}_{DS}$, we build an online/offline PFE framework with linear active security. The online phase could be instantiated by any well-studied secure function evaluation (SFE) with linear active security (e.g., Tiny-OT at CRYPTO 2012). The offline phase depends only on the private function $f$ and uses $\mathcal{ZK}_{DS}$ to prove the EP relationship between outgoing wires and incoming wires in the circuit $\mathcal{C}_f$ derived from $f$.
Per Austrin, Hao Chung, Kai-Min Chung, Shiuan Fu, Yao-Ting Lin, Mohammad Mahmoody
ePrint Report
We study the following question, first publicly posed by Hosoyamada and Yamakawa in 2018. Can parties Alice and Bob with quantum computing power and classical communication rely only on a random oracle (that can be queried in quantum superposition) to agree on a key that is private from eavesdroppers?
We make the first progress on the question above and prove the following.
When only one of the parties is classical and the other is quantum-powered, as long as they ask a total of $d$ oracle queries and agree on a key with probability $1$, there is always a way to break the key agreement by asking $O(d^2)$ classical oracle queries. When both parties can make quantum queries to the random oracle, we introduce a natural conjecture, which, if true, would imply attacks with $poly(d)$ classical queries to the random oracle. Our conjecture, roughly speaking, states that the product of any two degree-$d$ real-valued polynomials over the Boolean hypercube of influence at most $1/poly(d)$ is nonzero. We then prove our conjecture for exponentially small influences, which leads to an (unconditional) classical $2^{O(md)}$-query attack on any such key agreement protocol, where $m$ is the oracle's output length.
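The notion of influence in the conjecture can be made concrete for small $n$ by expanding a function over the parity (Fourier) basis of the Boolean hypercube. The sketch below computes the total influence $\sum_S |S| \hat{f}(S)^2$ for majority on 3 bits; it is purely an illustration of the definition, not of the conjecture's regime.

```python
from itertools import product
from math import prod

def fourier(f, n):
    """Fourier coefficients f_hat(S) of f : {-1,1}^n -> R (S as a bitmask)."""
    pts = list(product((-1, 1), repeat=n))
    return {S: sum(f(x) * prod(x[i] for i in range(n) if S >> i & 1)
                   for x in pts) / len(pts)
            for S in range(1 << n)}

def total_influence(f, n):
    """Total influence = sum over S of |S| * f_hat(S)^2."""
    return sum(bin(S).count("1") * c * c for S, c in fourier(f, n).items())

maj3 = lambda x: 1 if sum(x) > 0 else -1   # majority of 3 bits
print(total_influence(maj3, 3))            # 1.5
```

Majority of 3 bits expands as $\frac{1}{2}(x_1 + x_2 + x_3) - \frac{1}{2}x_1 x_2 x_3$, giving total influence $3 \cdot \frac{1}{4} + 3 \cdot \frac{1}{4} = 1.5$; the conjecture concerns degree-$d$ polynomials whose influence is far smaller, at most $1/poly(d)$.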
Since our attacks are classical, we then ask whether it is always possible to find classical attacks on key agreements with imperfect completeness in the quantum random oracle model. We prove a barrier for this approach by showing that if the folklore “Simulation Conjecture” (first formally stated by Aaronson and Ambainis in 2009) about the possibility of simulating efficient-query quantum algorithms using efficient-query classical algorithms is false, then there is in fact a secure key agreement in the quantum random oracle model that cannot be broken classically.
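The notion of influence that the conjecture bounds can be made concrete. Below is a small, self-contained Python sketch, with function names and normalization of our own choosing (the paper's formal definitions may differ), that computes the Fourier coefficients of a real-valued function on the Boolean hypercube $\{-1,1\}^n$ by brute force and derives each coordinate's influence from them:

```python
from itertools import product

# Toy illustration of "influence" for real-valued functions on the Boolean
# hypercube {-1, 1}^n, the quantity bounded in the conjecture above.
# Names and normalization are ours, not the paper's.

def fourier_coefficients(f, n):
    """Brute-force Fourier coefficients f_hat(S) of f over {-1, 1}^n."""
    points = list(product([-1, 1], repeat=n))
    coeffs = {}
    for S in product([0, 1], repeat=n):  # subsets of coordinates as 0/1 masks
        total = 0.0
        for x in points:
            parity = 1
            for i in range(n):
                if S[i]:
                    parity *= x[i]
            total += f(x) * parity
        coeffs[S] = total / len(points)
    return coeffs

def influence(f, n, i):
    """Inf_i(f) = sum of f_hat(S)^2 over all sets S containing coordinate i."""
    coeffs = fourier_coefficients(f, n)
    return sum(c * c for S, c in coeffs.items() if S[i])

# The "dictator" function f(x) = x_0 has influence 1 on coordinate 0
# and influence 0 on every other coordinate.
f = lambda x: x[0]
print(influence(f, 3, 0))  # 1.0
print(influence(f, 3, 1))  # 0.0
```

A low-influence polynomial is one where every coordinate's influence is small; the conjecture asserts that two such degree-$d$ polynomials cannot multiply to the zero function.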
Luke Beckwith, Duc Tri Nguyen, Kris Gaj
ePrint Report
Many currently deployed public-key cryptosystems are based on the difficulty of the discrete logarithm and integer factorization problems. However, given an adequately sized quantum computer, these problems can be solved in time polynomial in the key size. Due to this future threat of quantum computing to current cryptographic standards, alternative algorithms that remain secure against quantum attacks are being evaluated for future use. As part of this evaluation, high-performance implementations of the candidate algorithms must be investigated. This work presents a high-performance FPGA implementation of all operations of CRYSTALS-Dilithium and of one operation of FALCON (signature verification). In particular, we present a Dilithium design that achieves the best latency of any FPGA implementation to date and, to the best of our knowledge, the first FALCON hardware implementation. We compare our results with the hardware implementations of all viable NIST Round 3 post-quantum digital signature candidates.
Nishanth Chandran, Bhavana Kanukurthi, Sai Lakshmi Bhavana Obbattu, Sruthi Sekar
ePrint Report
Leakage-resilient secret sharing (LRSS) allows a dealer to share a secret amongst $n$ parties such that any authorized subset of the parties can recover the secret from their shares, while an adversary that obtains the shares of any unauthorized subset of parties, along with bounded leakage from the other shares, learns no information about the secret. Non-malleable secret sharing (NMSS) guarantees that even shares tampered with by an adversary will reconstruct to either the original message or something independent of it.
The most important parameter of LRSS and NMSS schemes is the size of each share. For LRSS, in the "local leakage model" (i.e., when the leakage functions on each share are independent of each other and bounded), Srinivasan and Vasudevan (CRYPTO 2019) gave a scheme for threshold access structures with a share size of approximately ($3 \cdot$(message length) $+ \mu$), where $\mu$ is the number of bits of leakage tolerated from every share. For NMSS, the best known result (again due to the above work) has a share size of ($11 \cdot$(message length)).
In this work, we build LRSS and NMSS schemes with much improved share sizes. Additionally, our LRSS scheme obtains optimal share and leakage size. In particular, we get the following results:
-We build an information-theoretic LRSS scheme for threshold access structures with a share size of ((message length) + $\mu$).
-As an application of the above result, we obtain an NMSS scheme with a share size of ($4 \cdot$(message length)). Further, for the special case of sharing random messages, we obtain a share size of ($2 \cdot$(message length)).
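The improvement in share size can be checked with a quick back-of-the-envelope computation. The formulas below are the approximate sizes quoted in the abstract, not exact parameters from the paper, and the concrete message and leakage lengths are illustrative:

```python
# Approximate threshold-LRSS share sizes (in bits) for a message of msg_len
# bits with mu bits of leakage tolerated per share, as quoted above.

def share_size_crypto19(msg_len, mu):
    return 3 * msg_len + mu      # Srinivasan-Vasudevan (CRYPTO 2019), approx.

def share_size_this_work(msg_len, mu):
    return msg_len + mu          # this work's LRSS (optimal share size)

msg_len, mu = 128, 64            # illustrative values
print(share_size_crypto19(msg_len, mu))   # 448
print(share_size_this_work(msg_len, mu))  # 192
```

Since a share must encode the message and absorb $\mu$ bits of leakage, (message length) $+ \mu$ is the natural lower bound, which is why the new scheme's share size is optimal.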
Ky Nguyen, Duong Hieu Phan, David Pointcheval
ePrint Report
Multi-Client Functional Encryption ($\mathsf{MCFE}$) has been considered an important primitive for making functional encryption useful in practice. It covers the ability to compute a joint function over data from multiple parties, similar to Multi-Input Functional Encryption ($\mathsf{MIFE}$), but it handles information leakage better than $\mathsf{MIFE}$. Both primitives are aimed at applications in multi-user settings where decryption can be correctly output for legitimate users only. In such a setting, the problem of dealing with access control in a fine-grained manner is particularly relevant. In this paper, we introduce a framework for $\mathsf{MCFE}$ with fine-grained access control and propose constructions for both the single-client and multi-client settings, with selective and adaptive security.
The only known work that combines functional encryption in the multi-user setting with access control was proposed by Abdalla $\mathit{et al.}$ (Asiacrypt '20); it relies on a generic transformation from single-client schemes to obtain $\mathsf{MIFE}$ schemes whose ciphertext size suffers a quadratic factor in $n$ (where $n$ denotes the number of clients). We present a duplicate-and-compress technique to transform the single-client scheme and obtain an $\mathsf{MCFE}$ scheme with fine-grained access control with only a linear factor of $n$ in the ciphertext size. Our final scheme thus outperforms that of Abdalla $\mathit{et al.}$ by a factor of $n$, even though $\mathsf{MCFE}$ is harder to achieve than $\mathsf{MIFE}$ (one can obtain $\mathsf{MIFE}$ from $\mathsf{MCFE}$ by fixing all the labels in $\mathsf{MCFE}$ to a public constant).
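The gap between the two transformations can be sketched numerically. The unit cost per ciphertext component below is illustrative, not a parameter from either paper; only the asymptotic factors in $n$ come from the abstract:

```python
# Ciphertext-size growth in the number of clients n: the generic transform of
# Abdalla et al. pays a quadratic factor, while duplicate-and-compress pays a
# linear one. The per-component base cost is an illustrative placeholder.

def ct_size_quadratic(n, base=1):
    return base * n * n   # Abdalla et al. (Asiacrypt '20)

def ct_size_linear(n, base=1):
    return base * n       # duplicate-and-compress (this work)

for n in (10, 100, 1000):
    print(n, ct_size_quadratic(n), ct_size_linear(n))
```

At $n$ clients the quadratic construction is exactly $n$ times larger, which is the factor-$n$ improvement claimed above.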
Ward Beullens
ePrint Report
This work introduces new key-recovery attacks against the Rainbow signature scheme, which is one of the three finalist signature schemes still in the NIST Post-Quantum Cryptography standardization project. The new attacks outperform previously known attacks for all the parameter sets submitted to NIST and make key recovery practical for the SL 1 parameters. Concretely, given a Rainbow public key for the SL 1 parameters of the second-round submission, our attack returns the corresponding secret key after on average 53 hours (one weekend) of computation time on a standard laptop.
Jan Bobolz, Fabian Eidens, Stephan Krenn, Sebastian Ramacher, Kai Samelin
ePrint Report
Attribute-based credential systems enable users to authenticate in a privacy-preserving manner.
However, in such schemes verifying a user's credential requires knowledge of the issuer's public key, which by itself might already reveal private information about the user.
In this paper, we tackle this problem by introducing the notion of issuer-hiding attribute-based credential systems. In such a system, the verifier can define a set of acceptable issuers in an ad-hoc manner, and the user can then prove that her credential was issued by one of the accepted issuers -- without revealing which one.
We then provide a generic construction, as well as a concrete instantiation based on Groth's structure-preserving signature scheme (ASIACRYPT '15) and simulation-sound extractable NIZKs, for which we also provide concrete benchmarks to demonstrate its practicality.
The online complexity of all constructions is independent of the number of acceptable issuers, which also makes them suitable for highly federated scenarios.