IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
20 September 2021
Gizem Kara, Oğuz Yayla
A number of arithmetization-oriented ciphers have emerged in recent years for use in advanced cryptographic protocols such as secure multi-party computation (MPC), fully homomorphic encryption (FHE) and zero-knowledge proofs (ZK). Standard block ciphers like AES and the hash functions SHA2/SHA3 have proven efficient in software and hardware but are not optimal for this field; for this reason, new kinds of cryptographic primitives have been proposed recently. However, unlike for traditional primitives, there is no standard approach to designing and analyzing such block ciphers and hash functions, so their security analysis needs to be done carefully. In 2018, StarkWare launched a public STARK-Friendly Hash (SFH) Challenge to select an efficient and secure hash function to be used within ZK-STARKs, transparent and post-quantum secure proof systems. The block cipher JARVIS is one of the first ciphers designed for STARK applications, but shortly after its publication, the cipher was shown to be vulnerable to a Gröbner basis attack. This paper describes a Gröbner basis attack on the new block ciphers MiMC and GMiMCerf (SFH candidates) and on the variants of JARVIS. We present the complexity of the Gröbner basis attack on JARVIS-like ciphers. We then give results from our experiments for the attack on reduced-round MiMC and a structure we found in the Gröbner basis attack for GMiMCerf.
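As background for readers new to arithmetization-oriented designs, the following toy Python sketch (illustrative only; toy parameters, not taken from the paper) shows MiMC-$p/p$ encryption, whose single low-degree round operation $x \mapsto (x+k+c_i)^3$ over a prime field is exactly what makes Gröbner basis analysis the natural attack vector:

# Illustrative toy sketch (not from the paper): the MiMC-p/p block cipher.
# Toy round count for readability; full MiMC uses about log_3(p) rounds
# (~41 for a 64-bit field).
import random

P = 2**64 - 59   # prime with gcd(3, P-1) = 1, so cubing is a permutation
ROUNDS = 10      # toy value; see comment above

def mimc_encrypt(x, k, constants):
    for c in constants:
        x = pow((x + k + c) % P, 3, P)   # cubing keeps the algebraic degree low
    return (x + k) % P                    # final key addition

random.seed(0)
constants = [0] + [random.randrange(P) for _ in range(ROUNDS - 1)]  # c_0 = 0
print(hex(mimc_encrypt(12345, 67890, constants)))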
Aljosha Judmayer, Nicholas Stifter, Philipp Schindler, Edgar Weippl
The term miner extractable value (MEV) has been coined to describe the value which can be extracted by a miner by manipulating the order of transactions within a given timeframe. MEV has been deemed an important factor in assessing the overall economic stability of a cryptocurrency. This stability also influences the economically rational choice of the security parameter k, by which a merchant defines the number of required confirmation blocks in cryptocurrencies based on Nakamoto consensus. Unfortunately, to the best of our knowledge, no exact definition of MEV currently exists. In this paper, we provide a definition in accordance with its usage throughout the community and show that a narrow definition of MEV fails to capture the extractable value of other actors like users. Moreover, we show that there is no globally unique MEV which can readily be determined. We further highlight why it is hard, or even impossible, to estimate extractable value precisely, considering the uncertainties in real world systems. Finally, we outline a peculiar yet straightforward technique for choosing the security parameter k, which can act as a workaround to transfer the risk of an insufficiently chosen k to another merchant.
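For context on the confirmation parameter (this is Nakamoto's classical catch-up computation, well known from the Bitcoin whitepaper, and not a result of this paper): a merchant who waits for $z$ confirmation blocks implicitly bounds the probability that an attacker controlling a fraction $q$ of the hash rate rewrites history, which can be computed as follows:

# Nakamoto's attacker-success probability (Bitcoin whitepaper, section 11);
# background only, not a contribution of the paper above.
import math

def attacker_success(q, z):
    """Probability an attacker with hash share q overtakes z confirmations."""
    p = 1.0 - q
    if q >= p:
        return 1.0
    lam = z * (q / p)
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        s -= poisson * (1.0 - (q / p) ** (z - k))
    return s

for z in (1, 3, 6, 12):
    print(z, attacker_success(0.10, z))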
Amit Agarwal, James Bartusek, Vipul Goyal, Dakshita Khurana, Giulio Malavolta
We propose the first maliciously secure multi-party computation (MPC) protocol for general functionalities in two rounds, without any trusted setup. Since polynomial-time simulation is impossible in two rounds, we achieve the relaxed notion of superpolynomial-time simulation security [Pass, EUROCRYPT 2003]. Prior to our work, no such maliciously secure protocols were known even in the two-party setting for functionalities where both parties receive outputs. Our protocol is based on the sub-exponential security of standard assumptions plus a special type of non-interactive non-malleable commitment.
At the heart of our approach is a two-round multi-party conditional disclosure of secrets (MCDS) protocol in the plain model from bilinear maps, which is constructed from techniques introduced in [Benhamouda and Lin, TCC 2020].
At the heart of our approach is a two-round multi-party conditional disclosure of secrets (MCDS) protocol in the plain model from bilinear maps, which is constructed from techniques introduced in [Benhamouda and Lin, TCC 2020].
David Lanzenberger, Ueli Maurer
We revisit one of the most fundamental hardness amplification constructions, originally proposed by Yao (FOCS 1982).
We present a hardness amplification theorem for the direct product of certain games that is simpler, more general, and stronger than previously known hardness amplification theorems of the same kind.
Our focus is two-fold. First, we aim to provide close-to-optimal concrete bounds, as opposed to asymptotic ones. Second, in the spirit of abstraction and reusability, our goal is to capture the essence of direct product hardness amplification as generally as possible.
Furthermore, we demonstrate how our amplification theorem can be applied to obtain hardness amplification results for non-trivial interactive cryptographic games such as MAC forgery or signature forgery games.
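Schematically, direct product theorems of this kind state that if no admissible adversary wins a game $G$ with probability greater than $\epsilon$, then no comparably resourced adversary wins all $n$ independent copies of $G$ with probability much greater than $\epsilon^n$, i.e., $\Pr[\mathcal{A} \text{ wins } G^n] \le \epsilon^n + \delta$ for a small slack $\delta$; the paper's contribution lies in the generality of the games covered and the concreteness of such bounds.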
Hanwen Feng, Qiang Tang
Robust (fuzzy) extractors are very useful for, e.g., authenticated key exchange from a shared weak secret and remote biometric authentication against active adversaries. They enable two parties to extract the same uniform randomness with the help of a ``helper'' string. More importantly, they have a built-in authentication mechanism: tampering with the ``helper'' string will be detected. Unfortunately, as shown by Dodis and Wichs, in the information-theoretic setting, a robust extractor for an $(n,k)$-source requires $k>n/2$, which is in sharp contrast with randomness extractors, which only require $k=\omega(\log n)$. Existing works either rely on random oracles or introduce a CRS and work only for CRS-independent sources (even in the computational setting).
In this work, we give a systematic study of robust (fuzzy) extractors for general CRS-{\em dependent} sources. We show that in the information-theoretic setting, the same entropy lower bound holds even in the CRS model; we then show that we {\em can} have robust extractors in the computational setting for general CRS-dependent sources with only minimal entropy. We further extend our construction to robust fuzzy extractors. Along the way, we propose a new primitive called $\kappa$-MAC, which is unforgeable with a weak key and hides all partial information about the key (both against auxiliary input); it may be of independent interest.
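To make the interface concrete, here is a minimal sketch of what a robust extractor provides in the computational setting, with PBKDF2/HMAC standing in for the extract-and-authenticate structure. This is emphatically not the paper's construction or its $\kappa$-MAC, just the expected API: derive randomness from a weak secret plus a public helper string, and detect tampering with that helper string.

# Conceptual sketch only: the robust-extractor interface, with off-the-shelf
# primitives as stand-ins (NOT the paper's construction).
import hmac, hashlib, os

def gen(w: bytes):
    seed = os.urandom(16)                          # public helper randomness
    okm = hashlib.pbkdf2_hmac("sha256", w, seed, 100_000, dklen=64)
    r_out, r_mac = okm[:32], okm[32:]              # output key / MAC key
    tag = hmac.new(r_mac, seed, hashlib.sha256).digest()
    return r_out, (seed, tag)                      # extracted key, helper

def rep(w: bytes, helper):
    seed, tag = helper
    okm = hashlib.pbkdf2_hmac("sha256", w, seed, 100_000, dklen=64)
    r_out, r_mac = okm[:32], okm[32:]
    good = hmac.compare_digest(tag, hmac.new(r_mac, seed, hashlib.sha256).digest())
    return r_out if good else None                 # None = tampering detected

w = b"weak shared secret"
r, helper = gen(w)
assert rep(w, helper) == r
tampered_seed = bytes([helper[0][0] ^ 1]) + helper[0][1:]
assert rep(w, (tampered_seed, helper[1])) is None  # modified helper rejected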
Sarvar Patel, Giuseppe Persiano, Joon Young Seo, Kevin Yeo
Encrypted multi-maps enable outsourcing the storage of a multi-map to an untrusted server while maintaining the ability to query privately. We focus on encrypted Boolean multi-maps that support arbitrary Boolean queries over the multi-map. Kamara and Moataz [Eurocrypt17] presented the first encrypted multi-map, BIEX, that supports CNF queries with optimal communication, worst-case sublinear search time and non-trivial leakage.
We improve on previous work by presenting a new construction, CNFFilter, for CNF queries with significantly less leakage than BIEX, while maintaining both optimal communication and worst-case sublinear search time. As a direct consequence, our construction offers additional resistance to leakage-abuse attacks in comparison to prior works. For most CNF queries, CNFFilter avoids leaking the result sets of the singleton queries for labels appearing in the CNF query. As an example, for a CNF query of the form (l1 ∨ l2) ∧ l3, our scheme does not leak the result sizes of queries to l1, l2 or l3 individually. By contrast, BIEX does leak some of this information. This is just one example of the reduced leakage obtained by CNFFilter. The core of CNFFilter is a new filtering algorithm that performs set intersections with significantly less leakage compared to prior works.
We implement CNFFilter and show that CNFFilter achieves faster search times and similar communication overhead compared to BIEX at the cost of a small increase in server storage.
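As a plaintext illustration of the query semantics (no encryption involved; the label names mirror the example above): a multi-map sends each label to a set of values, and a CNF query returns the values satisfying every clause, where each clause is a disjunction of labels. CNFFilter's contribution is answering such queries without revealing the per-label result sets; the sketch below merely computes the result.

# Plaintext CNF-over-multi-map semantics (illustration only).
from functools import reduce

multimap = {
    "l1": {1, 2, 3},
    "l2": {3, 4},
    "l3": {2, 3, 4, 5},
}

def cnf_query(mm, clauses):
    """clauses: list of label lists; [["l1","l2"],["l3"]] means (l1 v l2) ^ l3."""
    clause_sets = [set().union(*(mm.get(l, set()) for l in clause))
                   for clause in clauses]
    return reduce(set.intersection, clause_sets)

print(cnf_query(multimap, [["l1", "l2"], ["l3"]]))  # {2, 3, 4}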
Lalita Devadas, Willy Quach, Vinod Vaikuntanathan, Hoeteck Wee, Daniel Wichs
We present a construction of indistinguishability obfuscation (iO) that relies on the learning with errors (LWE) assumption together with a new notion of succinctly sampling pseudo-random LWE samples. We then present a candidate LWE sampler whose security is related to the hardness of solving systems of polynomial equations. Our construction improves on the recent iO candidate of Wee and Wichs (Eurocrypt 2021) in two ways: first, we show that a much weaker and simpler notion of LWE sampling suffices for iO; and second, our candidate LWE sampler is secure based on a compactly specified and falsifiable assumption about random polynomials, with a simple error distribution that facilitates cryptanalysis.
Kai Hu, Siwei Sun, Yosuke Todo, Meiqin Wang, Qingju Wang
Determining the exact algebraic structure or some partial information of the superpoly for a given cube is a necessary step in the cube attack -- a generic cryptanalytic technique for symmetric-key primitives with some secret and public tweakable inputs.
Currently, the division property based approach is the most powerful tool for exact superpoly recovery. However, as the algebraic normal form (ANF) of the targeted output bit gets increasingly complicated as the number of rounds grows, existing methods for superpoly recovery quickly hit their bottlenecks. For example, previous methods were stuck at rounds 842, 190, and 892 for Trivium, Grain-128AEAD, and Kreyvium, respectively. In this paper, we propose a new framework for recovering the exact ANFs of massive superpolies based on the monomial prediction technique (ASIACRYPT 2020, an alternative language for the division property).
In this framework, the targeted output bit is first expressed as a polynomial of the bits of some intermediate states. For each term appearing in the polynomial, the monomial prediction technique is applied to determine its superpoly if the corresponding MILP model can be solved within a preset time limit. Terms unresolved within the time limit are further expanded as polynomials of the bits of some deeper intermediate states with symbolic computation, whose terms are again processed with monomial predictions. The above procedure is iterated until all terms are resolved. Finally, all the sub-superpolies are collected and assembled into the superpoly of the targeted bit. We apply the new framework to Trivium, Grain-128AEAD, and Kreyvium. As a result, the exact ANFs of the superpolies for 843-, 844- and 845-round Trivium, 191-round Grain-128AEAD and 894-round Kreyvium are recovered. Moreover, with the help of the Möbius transform, we present a novel key-recovery technique based on superpolies involving all key bits by exploiting their sparse structures, which leads to the best key-recovery attacks on the targets considered.
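As a toy illustration of the underlying cube-attack notion (standard material, not the paper's framework): summing a Boolean function over all assignments of a chosen cube of public variables yields the value of that cube's superpoly at the given key. The sketch below verifies this for a hand-made function whose superpoly for the cube $\{v_0, v_1\}$ is $k_0 \oplus k_2$.

# Toy cube-attack superpoly recovery (illustration only).
from itertools import product

def f(key, iv):
    """Toy keyed Boolean function; a real target would be e.g. Trivium output."""
    k0, k1, k2 = key
    v0, v1, v2 = iv
    return (v0 & v1 & k0) ^ (v0 & v1 & k2) ^ (v2 & k1) ^ (k0 & k1) ^ v1

def cube_sum(key, cube_positions, n_iv=3):
    """XOR f over all assignments of the cube variables (others fixed to 0)."""
    acc = 0
    for bits in product((0, 1), repeat=len(cube_positions)):
        iv = [0] * n_iv
        for pos, b in zip(cube_positions, bits):
            iv[pos] = b
        acc ^= f(key, iv)
    return acc

# Cube {v0, v1}: the superpoly of the monomial v0*v1 in f is k0 ^ k2.
for key in product((0, 1), repeat=3):
    assert cube_sum(key, [0, 1]) == key[0] ^ key[2]
print("superpoly of cube {v0,v1} is k0 xor k2, verified over all keys")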
Suvradip Chakraborty, Stefan Dziembowski, Malgorzata Galazka, Tomasz Lizurej, Krzysztof Pietrzak, Michelle Yeo
Digital hardware Trojans are integrated circuits whose implementation differs from the specification in an arbitrary and malicious way. For example, the circuit can differ from its specified input/output behavior after some fixed number of queries (known as ``time bombs'') or on some particular input (known as ``cheat codes'').
To detect such Trojans, countermeasures using multiparty computation (MPC) or verifiable computation (VC) have been proposed. At a high level, to realize a circuit with specification $\cF$, one has more sophisticated circuits $\cF^\diamond$ manufactured (where $\cF^\diamond$ specifies an MPC or VC of $\cF$), and then embeds these $\cF^\diamond$'s into a \emph{master circuit}, which must be trusted but is relatively simple compared to $\cF$. Those solutions have a significant overhead, as $\cF^\diamond$ is much more complex than $\cF$, and the master circuits are not exactly trivial either.
In this work, we show that in restricted settings, where $\cF$ has no evolving state and is queried on independent inputs, we can achieve a relaxed security notion using very simple constructions. In particular, we do not change the specification of the circuit at all (i.e., $\cF=\cF^\diamond$). Moreover, the master circuit basically just queries a subset of its manufactured circuits and checks whether they all agree.
The security we achieve guarantees that, if the manufactured circuits are initially tested on up to $T$ inputs, the master circuit will catch Trojans that try to deviate on significantly more than a $1/T$ fraction of the inputs. This bound is optimal for the type of construction considered, and we provably achieve it using a construction where $12$ instantiations of $\cF$ need to be embedded into the master. We also discuss an extremely simple construction with just $2$ instantiations for which we conjecture that it already achieves the optimal bound.
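A minimal sketch of the redundancy-based defence described above (a toy model, not the paper's exact master circuit): the master queries several independently manufactured copies of $\cF$ and flags a Trojan whenever their outputs disagree. A Trojan that both survives $T$ test queries and deviates on a $\rho$ fraction of inputs must keep $\rho$ well below roughly $1/T$, which is the trade-off the paper's bound captures.

# Toy model of the compare-the-copies master circuit (illustration only).
import random

def honest_F(x):
    return (x * x + 1) % 257                      # stand-in for specification F

def trojan_F(x):
    return 0 if x % 1000 == 0 else honest_F(x)    # deviates on ~1/1000 of inputs

def master(copies, x):
    outs = {f(x) for f in copies}
    if len(outs) > 1:
        raise RuntimeError("copies disagree on input %d: Trojan detected" % x)
    return outs.pop()

copies = [honest_F, honest_F, trojan_F]           # one manufacturer is malicious
random.seed(1)
caught = False
for i in range(5000):
    try:
        master(copies, random.randrange(10**6))
    except RuntimeError as e:
        print("query %d: %s" % (i, e))
        caught = True
        break
if not caught:
    print("Trojan went undetected in 5000 queries")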
Fabrice Benhamouda, Elette Boyle, Niv Gilboa, Shai Halevi, Yuval Ishai, Ariel Nof
Secure multiparty computation (MPC) enables $n$ parties, of which up to $t$ may be corrupted, to perform joint computations on their private inputs while revealing only the outputs. Optimizing the asymptotic and concrete costs of MPC protocols has become an important line of research. Much of this research focuses on the setting of an honest majority, where $n \ge 2t+1$, which gives rise to concretely efficient protocols that are either information-theoretic or make a black-box use of symmetric cryptography. Efficiency can be further improved in the case of a {\em strong} honest majority, where $n>2t+1$.
Motivated by the goal of minimizing the communication and latency costs of MPC with a strong honest majority, we make two related contributions. \begin{itemize}[leftmargin=*] \item {\bf Generalized pseudorandom secret sharing (PRSS).} Linear correlations serve as an important resource for MPC protocols and beyond. PRSS enables secure generation of many pseudorandom instances of such correlations without interaction, given replicated seeds of a pseudorandom function. We extend the PRSS technique of Cramer et al.\ (TCC 2005) for sharing degree-$d$ polynomials to new constructions leveraging a particular class of combinatorial designs. Our constructions yield a dramatic efficiency improvement when the degree $d$ is higher than the security threshold $t$, not only for standard degree-$d$ correlations but also for several useful generalizations. In particular, correlations for locally converting between slot configurations in ``share packing'' enable us to avoid the concrete overhead of prior works.
\item {\bf Cheap straggler resilience.} In reality, communication is not fully synchronous: protocol executions suffer from variance in communication delays and occasional node or message-delivery failures. We explore the benefits of PRSS-based MPC with a strong honest majority toward robustness against such failures, in turn yielding improved latency. In doing so we develop a novel technique for defending against a subtle ``double-dipping'' attack, which applies to the best existing protocols, with almost no extra cost in communication or rounds.
\end{itemize}
Combining the above tools requires further work, including new methods for batch verification via distributed zero-knowledge proofs (Boneh et al., CRYPTO 2019) that apply to packed secret sharing. Overall, our work demonstrates new advantages of the strong honest majority setting, and introduces new tools---in particular, generalized PRSS---that we believe will be of independent use within other cryptographic applications.
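For readers unfamiliar with PRSS, the sketch below implements the classic replicated-seed construction of Cramer, Damgård, and Ishai that the paper generalizes (toy parameters, illustration only): each set $A$ of $t$ parties is excluded from one seed $s_A$, and party $i$ locally computes its Shamir share of a fresh pseudorandom secret as $\sum_{A \not\ni i} \mathrm{PRF}(s_A, \mathrm{cnt}) \cdot f_A(i)$, where $f_A$ has degree $t$, $f_A(0)=1$ and $f_A(j)=0$ for $j \in A$. No interaction is needed once the seeds are distributed.

# Classic replicated-seed PRSS (toy parameters; not the paper's generalization).
import hashlib, itertools

P = 2**61 - 1
N, T = 5, 2

def prf(seed: bytes, cnt: int) -> int:
    return int.from_bytes(hashlib.sha256(seed + cnt.to_bytes(8, "big")).digest(),
                          "big") % P

def f_A(A, x):
    """Degree-T polynomial with f_A(0)=1 and f_A(j)=0 for j in A, at point x."""
    val = 1
    for j in A:
        val = val * (x - j) % P
        val = val * pow(-j % P, P - 2, P) % P     # divide by (0 - j)
    return val

seeds = {A: hashlib.sha256(repr(A).encode()).digest()
         for A in itertools.combinations(range(1, N + 1), T)}

def share(i, cnt):
    """Party i's non-interactive Shamir share of the cnt-th pseudorandom secret."""
    return sum(prf(s, cnt) * f_A(A, i) for A, s in seeds.items() if i not in A) % P

def reconstruct(points):
    """Lagrange interpolation at 0 from T+1 (or more) shares."""
    sec = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj % P) % P
                den = den * ((xi - xj) % P) % P
        sec = (sec + yi * num * pow(den, P - 2, P)) % P
    return sec

cnt = 7
shares = [(i, share(i, cnt)) for i in range(1, N + 1)]
expected = sum(prf(s, cnt) for s in seeds.values()) % P   # the shared secret
assert reconstruct(shares[:T + 1]) == expected == reconstruct(shares[2:])
print("pseudorandom secret:", expected)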
Julius Hermelink, Peter Pessl, Thomas Pöppelmann
NIST's PQC standardization process is in the third round, and a first final choice between one of three remaining lattice-based key encapsulation mechanisms is expected by the end of 2021. This makes studying the implementation-security aspect of the candidates a pressing matter. However, while the development of side-channel attacks and corresponding countermeasures has seen continuous interest, fault attacks are still a vastly underdeveloped field.
In fact, a first practical fault attack on lattice-based KEMs was demonstrated only very recently by Pessl and Prokop. However, while their attack can bypass some standard fault countermeasures, it may be defeated using shuffling, and their use of skipping faults also makes it highly implementation-dependent. Thus, the vulnerability of implementations against fault attacks and the concrete need for countermeasures is still not well understood.
In this work, we shed light on this problem and demonstrate new attack paths. Concretely, we show that the combination of fault injections with chosen-ciphertext attacks is a significant threat to implementations and can bypass several countermeasures. We present an attack on Kyber which combines ciphertext manipulation - flipping a single bit of an otherwise valid ciphertext - with a fault that "corrects" the ciphertext again during decapsulation. By then using the Fujisaki-Okamoto transform as an oracle, i.e., observing whether or not decapsulation fails, we derive inequalities involving secret data, from which we may recover the private key. Our attack is not defeated by many standard countermeasures such as shuffling in time or Boolean masking, and the fault may be introduced over a large execution-time interval at several places. In addition, we improve a known recovery technique to efficiently and practically recover the secret key from a smaller number of inequalities compared to the previous method.
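The following toy analogue (not the paper's attack, and with none of Kyber's structure) conveys why inequalities on secret data are so useful: each oracle answer contributes one linear inequality on a small secret, and a modest number of answers prunes the candidate set down to the secret and, at worst, a few scaled variants of it.

# Toy key recovery from sign-of-linear-form oracle answers (illustration only).
import random
from itertools import product

random.seed(2)
N = 4
secret = [random.randrange(-2, 3) for _ in range(N)]

def oracle(a):
    # Stand-in for "did the faulted decapsulation fail?": one sign bit of a
    # known linear function of the secret.
    return sum(x * y for x, y in zip(a, secret)) >= 0

candidates = list(product(range(-2, 3), repeat=N))     # 5^4 = 625 candidates
for _ in range(60):                                    # 60 oracle queries
    a = [random.randrange(-3, 4) for _ in range(N)]
    bit = oracle(a)
    candidates = [c for c in candidates
                  if (sum(x * y for x, y in zip(a, c)) >= 0) == bit]

print("remaining candidates:", candidates)
assert tuple(secret) in candidates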
Ofri Nevo, Ni Trieu, Avishay Yanai
We address the problem of multiparty private set intersection against a malicious adversary.
First, we show that when one can assume no collusion amongst corrupted parties, there exists an extremely efficient protocol using only symmetric-key primitives.
Second, we present a protocol secure against an adversary corrupting any strict subset of the parties. Our protocol is based on the recently introduced primitives: oblivious programmable PRF (OPPRF) and oblivious key-value store (OKVS).
Our protocols follow the client-server model where each party is either a client or a server. However, in contrast to previous works where the client has to engage in an expensive interactive cryptographic protocol, our clients need only send a single key to each server and a single message to a {\em pivot} party (where the message size is on the order of the set size). Our experiments show that the client's load improves by up to $10 \times$ (compared to both semi-honest and malicious settings) and that this factor increases with the number of parties.
We implemented our protocol and conducted an extensive experiment over both LAN and WAN and up to 32 parties with up to $2^{20}$ items each. We provide a comparison of the performance of our protocol and the state-of-the-art for both the semi-honest setting (by Chandran et al.) and the malicious setting (by Ben Efraim et al. and Garimella et al.).
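For readers unfamiliar with OKVS, here is the classic (but inefficient) polynomial-interpolation instance as an illustration; the paper relies on far more efficient OKVS constructions. Encode produces a structure $D$ with Decode$(D, k) = v$ for every encoded pair $(k, v)$, while decodes at unqueried keys look like random field elements.

# Polynomial-interpolation OKVS (classic illustrative instance, not the
# OKVS used in the paper).
import hashlib

P = 2**61 - 1

def h(key: str) -> int:
    return int.from_bytes(hashlib.sha256(key.encode()).digest(), "big") % P

def encode(pairs):
    """Coefficients (descending) of the Lagrange interpolation of {(h(k), v)}."""
    pts = [(h(k), v % P) for k, v in pairs]
    n = len(pts)
    coeffs = [0] * n
    for xi, yi in pts:
        num, den = [1], 1                     # product of (x - xj), j != i
        for xj, _ in pts:
            if xj != xi:
                num = [(a - xj * b) % P for a, b in zip(num + [0], [0] + num)]
                den = den * ((xi - xj) % P) % P
        scale = yi * pow(den, P - 2, P) % P
        coeffs = [(c + scale * a) % P for c, a in zip(coeffs, num)]
    return coeffs

def decode(coeffs, key: str) -> int:
    x, acc = h(key), 0
    for c in coeffs:                          # Horner evaluation
        acc = (acc * x + c) % P
    return acc

D = encode([("alice", 11), ("bob", 22), ("carol", 33)])
assert decode(D, "bob") == 22 and decode(D, "alice") == 11
print(decode(D, "dave"))                      # pseudorandom-looking field element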
Denis Diemert, Kai Gellert, Tibor Jager, Lin Lyu
The standard security notion for digital signatures is "single-challenge" (SC) EUF-CMA security, where the adversary outputs a single message-signature pair and "wins" if it is a forgery. Auerbach et al. (CRYPTO 2017) introduced memory-tightness of reductions and argued that the right security goal in this setting is actually a stronger "multi-challenge" (MC) definition, where an adversary may output many message-signature pairs and "wins" if at least one is a forgery. Currently, no construction from simple standard assumptions is known to achieve full tightness with respect to time, success probability, and memory simultaneously. Previous works showed that memory-tight signatures cannot be achieved via certain natural classes of reductions (Auerbach et al., CRYPTO 2017; Wang et al., EUROCRYPT 2018). These impossibility results may give the impression that the construction of memory-tight signatures is difficult or even impossible.
We show that this impression is false, by giving the first constructions of signature schemes with full tightness in all dimensions in the MC setting. To circumvent the known impossibility results, we first introduce the notion of canonical reductions in the SC setting. We prove a general theorem establishing that every signature scheme with a canonical reduction is already memory-tightly secure in the MC setting, provided that it is strongly unforgeable, the adversary receives only one signature per message, and assuming the existence of a tightly-secure pseudorandom function. We then achieve memory-tight many-signatures-per-message security in the MC setting by a simple additional generic transformation. This yields the first memory-tightly, strongly EUF-CMA-secure signature schemes in the MC setting. Finally, we show that standard security proofs often already can be viewed as canonical reductions. Concretely, we show this for signatures from lossy identification schemes (Abdalla et al., EUROCRYPT 2012), two variants of RSA Full-Domain Hash (Bellare and Rogaway, EUROCRYPT 1996), and two variants of BLS signatures (Boneh et al., ASIACRYPT 2001).
Julia Hesse, Dennis Hofheinz, Lisa Kohl, Roman Langrehr
We investigate the quality of security reductions for non-interactive key exchange (NIKE) schemes. Unlike for many other cryptographic building blocks (like public-key encryption, signatures, or zero-knowledge proofs), all known NIKE security reductions to date are non-tight, i.e., lose a factor of at least the number of users in the system. In that sense, NIKE forms a particularly elusive target for tight security reductions.
The main technical obstacle to achieving tightly secure NIKE schemes is adaptive corruptions. Hence, in this work, we explore security notions and schemes that lie between selective security and fully adaptive security. Concretely:
- We exhibit a tradeoff between key size and reduction loss. We show that a tighter reduction can be bought with larger public and secret NIKE keys. Concretely, we present a simple NIKE scheme with a reduction loss of $O(N^2 \log(\nu)/\nu^2)$, and public and secret keys of $O(\nu)$ group elements, where $N$ denotes the overall number of users in the system and $\nu$ is a freely adjustable scheme parameter.
Our scheme achieves full adaptive security even against multiple "test queries" (i.e., adversarial challenges), but requires keys of size $O(N)$ to achieve (almost) tight security under the matrix Diffie-Hellman assumption. Still, already this simple scheme circumvents existing lower bounds.
- We show that this tradeoff is inherent. We contrast the security of our simple scheme with a lower bound for all NIKE schemes in which shared keys can be expressed as an ``inner product in the exponent''. This result covers the original Diffie-Hellman NIKE scheme, as well as a large class of its variants, and in particular our simple scheme. Our lower bound gives a tradeoff between the ``dimension'' of any such scheme (which directly corresponds to key sizes in existing schemes) and the reduction quality. For $\nu = O(N)$, this shows that our simple scheme and reduction are optimal (up to a logarithmic factor).
- We exhibit a tradeoff between security and key size for tight reductions. We show that it is possible to circumvent the inherent tradeoff above by relaxing the desired security notion. Concretely, we consider the natural notion of semi-adaptive security, where the adversary has to commit to a single test query after seeing all public keys. As a feasibility result, we put forward the first scheme that enjoys compact public keys and tight semi-adaptive security under the conjunction of the matrix Diffie-Hellman and learning with errors assumptions.
We believe that our results shed new light on the role of adaptivity in NIKE security, and also illustrate the special role of NIKE when it comes to tight security reductions.
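For reference, the original Diffie-Hellman NIKE, the archetype of the ``inner product in the exponent'' schemes covered by the lower bound, is sketched below (toy parameters, NOT secure): each user publishes $g^x$, and any two users non-interactively derive the shared key $g^{x_i x_j}$.

# Textbook Diffie-Hellman NIKE (background sketch; toy group, not secure).
import secrets

Q = 2**127 - 1            # toy prime modulus; illustration only
G = 3

def keygen():
    x = secrets.randbelow(Q - 2) + 1
    return x, pow(G, x, Q)        # (secret key, public key)

def shared_key(sk_i, pk_j):
    return pow(pk_j, sk_i, Q)     # K_ij = g^{x_i * x_j}

sk_a, pk_a = keygen()
sk_b, pk_b = keygen()
assert shared_key(sk_a, pk_b) == shared_key(sk_b, pk_a)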
Michel Abdalla, Manuel Barbosa, Jonathan Katz, Julian Loss, Jiayu Xu
The algebraic-group model (AGM), which lies between the generic group model and the standard model of computation, provides a means by which to analyze the security of cryptosystems against so-called algebraic adversaries. We formalize the AGM within the framework of universal composability, providing formal definitions for this setting and proving an appropriate composition theorem.
This extends the applicability of the AGM to more-complex protocols, and lays the foundations for analyzing algebraic adversaries in a composable fashion.
Our results also clarify the meaning of composing proofs in the AGM with other proofs and they highlight a natural form of independence between idealized groups that seems inherent to the AGM and has not been made formal before---these insights also apply to the composition of game-based proofs in the AGM.
We show the utility of our model by proving several important protocols universally composable for algebraic adversaries, specifically: (1) the Chou-Orlandi protocol for oblivious transfer, and (2) the SPAKE2 and CPace protocols for password-based authenticated key exchange.
17 September 2021
Colin O'Flynn
Electromagnetic Fault Injection (EMFI) is a well-known method of introducing faults for security analysis of digital devices. Such faults are analogous to the faults known to occur naturally in digital devices, a well-known problem in the design of safety-critical systems.
Numerous standards have been developed for safety-critical systems, including the development of standards for increasing the rate of naturally occurring faults using particle sources. In this work, we demonstrate that desktop EMFI tooling can be used to accomplish similar testing, but with more control, effectively speeding up the evaluation process. We demonstrate that using EMFI tooling for safety evaluation allows us to recreate a highly publicized safety issue present in an automotive ECU -- one that could not easily be recreated previously with other techniques.
Akira Ito, Rei Ueno, Naofumi Homma
In this paper, we present solutions to some open problems for constructing efficient deep learning-based side-channel attacks (DL-SCAs) through a theoretical analysis. There are two major open problems in DL-SCAs: (i) the effect of the difference in secret key values used for the profiling and attack phases is unclear, and (ii) the optimality of the negative log-likelihood (NLL) loss function used in the conventional learning method is unknown. These two problems have hindered the accurate performance evaluation and optimization of DL-SCAs. To address problem (i), we clarify the strict conditions under which the use of different correct keys in the profiling and attack phases affects the performance of DL-SCA. For problem (ii), we analyze the relationship between the NLL loss and the direct performance metrics of DL-SCAs (i.e., success rate (SR) and guessing entropy (GE)) and prove that minimum NLL loss is sufficient but not necessary to achieve the optimal distinguisher of DL-SCA. This explains why DL-SCA succeeds even when the NLL loss is large and motivated us to design a new loss function. Based on the above analysis, we propose a new loss function called the probability concentration inequality (PCI) loss function. We derive the PCI loss as an upper bound of GE and a lower bound of the SR using a probability concentration inequality. Minimizing the PCI loss during training can directly optimize the GE and SR of the subsequent attack phase. In this paper, we describe the characteristics of the PCI loss and the NLL loss and introduce a new learning method that takes full advantage of these characteristics. We also analytically investigate the difference between the PCI loss and the ranking loss reported in a previous work for a similar purpose and explain the advantage of the PCI loss over the ranking loss. Finally, we validate the analysis and demonstrate the effectiveness of the proposed DL-SCA using the PCI loss through experimental attacks on public datasets.
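As an illustration of the quantities involved (standard DL-SCA key ranking, not the paper's PCI loss): a trained model outputs, per trace, a probability vector over an intermediate value, and each key guess is scored by its summed log-likelihood, so the SR and GE analyzed in the paper are functions of the correct key's rank under this score. A minimal numpy sketch with a stand-in model:

# Standard DL-SCA key ranking from per-trace probability vectors
# (illustration only; the "model output" is synthetic).
import numpy as np

SBOX = np.array([0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2])
rng = np.random.default_rng(0)
true_key, n_traces = 7, 200

plaintexts = rng.integers(0, 16, n_traces)
labels = SBOX[plaintexts ^ true_key]                 # leaking intermediate value

# Stand-in for model predictions: mostly-correct softmax outputs.
probs = np.full((n_traces, 16), 0.02)
probs[np.arange(n_traces), labels] = 0.70
probs[np.arange(n_traces), rng.integers(0, 16, n_traces)] += 0.10
probs /= probs.sum(axis=1, keepdims=True)

# Score every key guess by its total log-likelihood over all traces.
scores = np.array([
    np.log(probs[np.arange(n_traces), SBOX[plaintexts ^ k]]).sum()
    for k in range(16)
])
rank = (scores > scores[true_key]).sum()
print("best guess:", scores.argmax(), "rank of true key:", rank)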
Eunsang Lee, Joon-Woo Lee, Young-Sik Kim, Jong-Seon No
Since the sign function can be used to implement the comparison operation, max function, and rectified linear unit (ReLU) function, several studies have been conducted to efficiently evaluate the sign function in the Cheon-Kim-Kim-Song (CKKS) scheme, one of the most promising fully homomorphic encryption schemes. Recently, Lee et al. (IEEE Trans. Depend. Sec. Comp.) proposed a practically optimal approximation method of sign function on the CKKS scheme using a composition of minimax approximate polynomials. However, homomorphic comparison, max function, and ReLU function algorithms that use this approximation method have not yet been successfully implemented on the residue number system variant CKKS (RNS-CKKS) scheme, and the sets of degrees of the component polynomials used by the algorithms are not optimized for the RNS-CKKS scheme. In this paper, we propose the optimized homomorphic comparison, max function, and ReLU function algorithms on the RNS-CKKS scheme using a composition of minimax approximate polynomials for the first time. We propose a fast algorithm for inverse minimax approximation error, a subroutine required to find the optimal set of degrees of component polynomials. This proposed algorithm makes it possible to find the optimal set of degrees of component polynomials with higher degrees than the previous study. In addition, we propose a method to find the degrees of component polynomials optimized for the RNS-CKKS scheme using the proposed algorithm for inverse minimax approximation error. We successfully implement the homomorphic comparison, max function, and ReLU function algorithms on the RNS-CKKS scheme with a low comparison failure rate ($< 2^{-15}$) and provide the various parameter sets according to the precision parameter $\alpha$. We reduce the depth consumption of the homomorphic comparison, max function, and ReLU function algorithms by one depth for several $\alpha$. In addition, the numerical analysis demonstrates that the proposed homomorphic comparison, max function, and ReLU function algorithms reduce running time by 6%, 7%, and 6% on average compared with the previous best-performing algorithms, respectively.
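As a toy illustration of the composite-polynomial idea (the simplest such iteration, in the spirit of the minimax compositions the paper optimizes, but not the paper's polynomials): iterating the odd polynomial $f(x) = (3x - x^3)/2$ drives any input in $[-1,-\epsilon] \cup [\epsilon,1]$ toward $\pm 1$, and each iteration costs one low-degree homomorphic polynomial evaluation.

# Iterated low-degree polynomial approximation of sign(x) on [-1, 1]
# (conceptual illustration, computed here in the clear).
def f(x):
    return (3.0 * x - x ** 3) / 2.0

def approx_sign(x, iters=12):
    for _ in range(iters):
        x = f(x)
    return x

for x in (-0.8, -0.05, 0.03, 0.5, 1.0):
    print("sign(%+.2f) ~ %+.6f" % (x, approx_sign(x)))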
Susumu Kiyoshima
We study the problem of obtaining 2-round interactive arguments for NP with weak zero-knowledge (weak ZK) [Dwork et al., 2003] or with strong witness indistinguishability (strong WI) [Goldreich, 2001] under polynomially hard falsifiable assumptions. We consider both the delayed-input setting [Jain et al., 2017] and the standard non-delayed-input setting, where in the delayed-input setting, (i) prover privacy is only required to hold against delayed-input verifiers (which learn statements in the last round of the protocol) and (ii) soundness is required to hold even against adaptive provers (which choose statements in the last round of the protocol).
Concretely, we show the following black-box (BB) impossibility results by relying on standard cryptographic primitives.
1. It is impossible to obtain 2-round delayed-input weak ZK arguments under polynomially hard falsifiable assumptions if BB reductions are used to prove soundness. This result holds even when non-black-box techniques are used to prove weak ZK.
2. It is impossible to obtain 2-round non-delayed-input strong WI arguments and 2-round publicly verifiable delayed-input strong WI arguments under polynomially hard falsifiable assumptions if a natural type of BB reductions, called "oblivious" BB reductions, are used to prove strong WI.
3. It is impossible to obtain 2-round delayed-input strong WI arguments under polynomially hard falsifiable assumptions if BB reductions are used to prove both soundness and strong WI (the BB reductions for strong WI are required to be oblivious as above). Compared with the above result, this result no longer requires public verifiability in the delayed-input setting.
Tsz Hon Yuen, Muhammed F. Esgin, Joseph K. Liu, Man Ho Au, Zhimin Ding
We introduce a novel generic ring signature construction, called DualRing, which can be built from several canonical identification schemes (such as Schnorr identification). DualRing differs from the classical ring signatures by its formation of two rings: a ring of commitments and a ring of challenges. It has a structural difference from the common ring signature approaches based on accumulators or zero-knowledge proofs of the signer index. Comparatively, DualRing has a number of unique advantages.
Considering the DL-based setting using the Schnorr identification scheme, our DualRing structure allows the signature size to be compressed to logarithmic size via an argument of knowledge system such as Bulletproofs. We further improve the Bulletproofs argument system to eliminate about half of the computation while maintaining the same proof size. We call this the Sum Argument; it can be of independent interest. This DL-based construction, named DualRing-EC, using Schnorr identification with the Sum Argument, has the shortest ring signature size in the literature without using a trusted setup.
Considering the lattice-based setting, we instantiate DualRing with a canonical identification scheme based on M-LWE and M-SIS. In practice, we achieve the shortest lattice-based ring signature, named DualRing-LB, when the ring size is between 4 and 2000. DualRing-LB is also $5 \times$ faster in signing and verification than the fastest lattice-based scheme by Esgin et al. (CRYPTO'19).
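For reference, the canonical identification scheme underlying DualRing-EC is textbook Schnorr identification, sketched below with toy, insecure parameters; a DualRing signature arranges many such (commitment, challenge, response) triples into the two rings described above.

# Textbook Schnorr identification with a Fiat-Shamir challenge
# (background sketch; toy group, NOT secure).
import hashlib, secrets

Q = 2**127 - 1          # toy prime modulus; illustration only
G = 3

def H(*vals) -> int:
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

sk = secrets.randbelow(Q - 2) + 1      # secret key x
pk = pow(G, sk, Q)                     # public key g^x

# Prover: commitment R = g^r, challenge c = H(pk, R, m), response z = r + c*x.
r = secrets.randbelow(Q - 2) + 1
R = pow(G, r, Q)
c = H(pk, R, "message")
z = (r + c * sk) % (Q - 1)             # exponents live mod the group order

# Verifier checks g^z == R * pk^c.
assert pow(G, z, Q) == (R * pow(pk, c, Q)) % Q
print("Schnorr transcript verified")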