International Association for Cryptologic Research

CryptoDB

Papers from Journal of Cryptology 2019

2019
JOFC
(Efficient) Universally Composable Oblivious Transfer Using a Minimal Number of Stateless Tokens
We continue the line of work initiated by Katz (Eurocrypt 2007) on using tamper-proof hardware tokens for universally composable secure computation. As our main result, we show an oblivious-transfer (OT) protocol in which two parties each create and transfer a single, stateless token and can then run an unbounded number of OTs. We also show a more efficient protocol, based only on standard symmetric-key primitives (block ciphers and collision-resistant hash functions), that can be used if a bounded number of OTs suffice. Motivated by this result, we investigate the number of stateless tokens needed for universally composable OT. We prove that our protocol is optimal in this regard for constructions making black-box use of the tokens (in a sense we define). We also show that non-black-box techniques can be used to obtain a construction using only a single stateless token.
2019
JOFC
A Practical Forgery Attack on Lilliput-AE
Lilliput-AE is a tweakable block cipher submitted as a candidate to the NIST lightweight cryptography standardization process. It is based upon the lightweight block cipher Lilliput, whose cryptanalysis so far suggests that it has a large security margin. In this note, we present an extremely efficient forgery attack on Lilliput-AE: Given a single arbitrary message of length about $$2^{36}$$ bytes, we can instantly produce another valid message that leads to the same tag, along with the corresponding ciphertext. The attack uses a weakness in the tweakey schedule of Lilliput-AE which leads to the existence of a related-tweak differential characteristic with probability 1 in the underlying block cipher. The weakness we exploit, which does not exist in Lilliput, demonstrates the potential security risk in using a very simple tweakey schedule in which the same part of the key/tweak is reused in every round, even when round constants are employed to prevent slide attacks. Following this attack, the Lilliput-AE submission to NIST was tweaked.
2019
JOFC
An Optimal Distributed Discrete Log Protocol with Applications to Homomorphic Secret Sharing
The distributed discrete logarithm (DDL) problem was introduced by Boyle, Gilboa and Ishai at CRYPTO 2016. A protocol solving this problem was the main tool used in the share conversion procedure of their homomorphic secret sharing (HSS) scheme which allows non-interactive evaluation of branching programs among two parties over shares of secret inputs. Let g be a generator of a multiplicative group $${\mathbb {G}}$$. Given a random group element $$g^{x}$$ and an unknown integer $$b \in [-M,M]$$ for a small M, two parties A and B (that cannot communicate) successfully solve DDL if $$A(g^{x}) - B(g^{x+b}) = b$$. Otherwise, the parties err. In the DDL protocol of Boyle et al., A and B run in time T and have error probability that is roughly linear in M/T. Since it has a significant impact on the HSS scheme’s performance, a major open problem raised by Boyle et al. was to reduce the error probability as a function of T. In this paper we devise a new DDL protocol that substantially reduces the error probability to $$O(M \cdot T^{-2})$$. Our new protocol improves the asymptotic evaluation time complexity of the HSS scheme by Boyle et al. on branching programs of size S from $$O(S^2)$$ to $$O(S^{3/2})$$. We further show that our protocol is optimal up to a constant factor for all relevant cryptographic group families, unless one can solve the discrete logarithm problem in a short interval of length R in time $$o(\sqrt{R})$$. Our DDL protocol is based on a new type of random walk that is composed of several iterations in which the expected step length gradually increases. We believe that this random walk is of independent interest and will find additional applications.
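To make the DDL success condition concrete, here is a minimal Python sketch of a distinguished-point walk protocol in the spirit of the basic approach of Boyle et al. (not this paper's improved multi-iteration random walk): each party steps forward through the group until it reaches a hashed "distinguished" element, and the two step counts differ by exactly b whenever neither walk jumps over the other's starting point. The group, generator, hash, and all parameters below are illustrative choices, not taken from the paper.

```python
# Toy DDL sketch via distinguished points (illustrative parameters throughout).
# A party walks h, h*g, h*g^2, ... and outputs the number of steps until it
# reaches a "distinguished" element; if A (starting from g^x) and B (starting
# from g^{x+b}) stop at the same element, A's count minus B's count equals b.
import hashlib
import random

P = 2**61 - 1   # toy prime modulus; the group is Z_P^* (illustrative)
G = 3           # toy base element

def distinguished(h: int) -> bool:
    # An element is "distinguished" if its hash starts with a zero byte (~2^-8).
    return hashlib.sha256(str(h).encode()).digest()[0] == 0

def ddl(h: int, max_steps: int = 1 << 16) -> int:
    for steps in range(max_steps):
        if distinguished(h):
            return steps
        h = (h * G) % P
    return max_steps  # give up (an error event)

b, trials, ok = 3, 200, 0
for _ in range(trials):
    x = random.randrange(1, P)
    ok += ddl(pow(G, x, P)) - ddl(pow(G, x + b, P)) == b
print(f"agreement rate: {ok / trials:.2f}")
```

The failures occur exactly when a distinguished point falls between the two starting points, so the empirical error rate is roughly b times the distinguished-point density; that is the M/T-type error behavior that the paper's protocol improves to $$O(M \cdot T^{-2})$$.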
2019
JOFC
Automated Analysis of Cryptographic Assumptions in Generic Group Models
We initiate the study of principled, automated methods for analyzing hardness assumptions in generic group models, following the approach of symbolic cryptography. We start by defining a broad class of generic and symbolic group models for different settings—symmetric or asymmetric (leveled) k-linear groups—and by proving “computational soundness” theorems for the symbolic models. Based on this result, we formulate a very general master theorem that formally relates the hardness of a (possibly interactive) assumption in these models to solving problems in polynomial algebra. Then, we systematically analyze these problems. We identify different classes of assumptions and obtain decidability and undecidability results. Next, we develop and implement automated procedures for verifying the conditions of master theorems, and thus the validity of hardness assumptions in generic group models. The concrete outcome of this work is an automated tool which takes as input the statement of an assumption and outputs either a proof of its generic hardness or an algebraic attack against the assumption.
2019
JOFC
Beyond Conventional Security in Sponge-Based Authenticated Encryption Modes
The Sponge function is known to achieve $$2^{c/2}$$ security, where c is its capacity. This bound was carried over to its keyed variants, such as SpongeWrap, to achieve a $$\min \{2^{c/2},2^\kappa \}$$ security bound, with $$\kappa$$ the key length. Similarly, many CAESAR competition submissions were designed to comply with the classical $$2^{c/2}$$ security bound. We show that Sponge-based constructions for authenticated encryption can achieve the significantly higher bound of $$\min \{2^{b/2},2^c,2^\kappa \}$$, with $$b>c$$ the permutation size, by proving that the CAESAR submission NORX achieves this bound. The proof relies on rigorous computation of multi-collision probabilities, which may be of independent interest. We additionally derive a generic attack based on multi-collisions that matches the bound. We show how to apply the proof to five other Sponge-based CAESAR submissions: Ascon, CBEAM/STRIBOB, ICEPOLE, Keyak, and two out of the three PRIMATEs. A direct application of the result shows that the parameter choices of some of these submissions are overly conservative. Simple tweaks render the schemes considerably more efficient without sacrificing security. We finally consider the remaining one of the three PRIMATEs, APE, and derive a blockwise adaptive attack in the nonce-respecting setting with complexity $$2^{c/2}$$, therewith demonstrating that the techniques cannot be applied to APE.
2019
JOFC
Blockcipher-Based Authenticated Encryption: How Small Can We Go?
This paper presents a lightweight blockcipher-based authenticated encryption mode mainly focusing on minimizing the implementation size, i.e., hardware gates or working memory on software. The mode is called $$\textsf{COFB}$$, for COmbined FeedBack. $$\textsf{COFB}$$ uses an n-bit blockcipher as the underlying primitive and relies on the use of a nonce for security. In addition to the state required for executing the underlying blockcipher, $$\textsf{COFB}$$ needs only an n/2-bit state as a mask. To date, all existing constructions in which masks have been applied use masks of at least n bits. Thus, we have shown the possibility of reducing the size of a mask without significantly degrading the security level. Moreover, the mode requires one blockcipher call to process one input block. We show $$\textsf{COFB}$$ is provably secure up to $$O(2^{n/2}/n)$$ queries, which is almost up to the standard birthday bound. We first present an idealized mode $$\textsf{iCOFB}$$ along with the details of its provable security analysis. Next, we extend the construction to the practical mode COFB. We instantiate COFB with two 128-bit blockciphers, AES-128 and GIFT-128, and present their implementation results on FPGAs. We present two implementations, with and without the CAESAR hardware API. When instantiated with AES-128 and implemented without the CAESAR hardware API, COFB requires only slightly more than 1000 look-up tables (LUTs) while maintaining almost the same level of provable security as standard AES-based AE, such as GCM. When instantiated with GIFT-128, COFB performs much better in hardware area: it consumes less than 1000 LUTs while maintaining the same security level. However, when implemented with the CAESAR hardware API, there are significant overheads both in hardware area and in throughput: COFB with AES-128 requires about 1475 LUTs, and COFB with GIFT-128 slightly more than 1000 LUTs. Though there are overheads, both figures still compare favorably with other authenticated encryption constructions.
2019
JOFC
Classical Leakage Resilience from Fault-Tolerant Quantum Computation
Physical implementations of cryptographic algorithms leak information, which makes them vulnerable to the so-called side-channel attacks. The problem of secure computation in the presence of leakage is generally known as leakage resilience. In this work, we establish a connection between leakage resilience and fault-tolerant quantum computation. We first prove that for a general leakage model, there exists a corresponding noise model in which fault tolerance implies leakage resilience. Then we show how to use constructions for fault-tolerant quantum computation to implement classical circuits that are secure in specific leakage models.
2019
JOFC
Constant-Round Maliciously Secure Two-Party Computation in the RAM Model
The random-access memory model of computation allows programs constant-time memory lookups and is more applicable in practice today, covering many important algorithms. This is in contrast to the classic setting of secure two-party computation (2PC), which mostly follows the approach in which the desired functionality must be represented as a Boolean circuit. In this work, we design the first constant-round maliciously secure two-party protocol in the RAM model. Our starting point is the garbled RAM construction of Gentry et al. (EUROCRYPT, pp 405–422, 2014) that readily induces a constant-round semi-honest two-party protocol for any RAM program assuming identity-based encryption schemes. We show how to enhance the security of their construction into the malicious setting while facing several challenges that stem from handling the data memory. Next, we show how to apply our techniques to a more recent garbled RAM construction by Garg et al. (STOC, pp 449–458, 2015) that is based on one-way functions.
2019
JOFC
Cryptanalysis of the CLT13 Multilinear Map
In this paper, we describe a polynomial-time cryptanalysis of the (approximate) multilinear map proposed by Coron, Lepoint, and Tibouchi at Crypto 2013 (CLT13). This scheme includes a zero-testing functionality that determines whether the message of a given encoding is zero or not. This functionality is useful for designing several of its applications, but it leaks unexpected values, such as linear combinations of the secret elements. By collecting the outputs of the zero-testing algorithm, we construct a matrix containing the hidden information as eigenvalues, and then recover all the secret elements of the CLT13 scheme via diagonalization of the matrix. In addition, we provide polynomial-time algorithms to directly break the security assumptions of many applications based on the CLT13 scheme. These algorithms include solving subgroup membership, decision linear, and graded external Diffie–Hellman problems. These algorithms mainly rely on the computation of the determinants of the matrices and their greatest common divisor, instead of performing their diagonalization.
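The linear-algebra core of the attack, hidden values surfacing as eigenvalues, can be illustrated in a few lines, stripped of all CLT13-specific encoding details. If the collected zero-testing outputs can be arranged as $$W_1 = U D_1 V$$ and $$W_2 = U D_2 V$$ with secret diagonals $$D_1, D_2$$, then $$W_1 W_2^{-1} = U (D_1 D_2^{-1}) U^{-1}$$, so diagonalization reveals the ratios of the secrets. The matrices and dimensions below are invented for illustration; this is a sketch of the principle, not the attack itself.

```python
# Illustration of the "secrets as eigenvalues" step (not CLT13 itself):
# W1 = U D1 V and W2 = U D2 V give W1 W2^{-1} = U (D1 D2^{-1}) U^{-1},
# so the ratios d1[i]/d2[i] of the hidden diagonals are its eigenvalues.
import numpy as np

rng = np.random.default_rng(1)
n = 4
U, V = rng.random((n, n)), rng.random((n, n))            # unknown invertible "noise"
d1, d2 = rng.integers(1, 50, n), rng.integers(1, 50, n)  # hidden secret values

W1 = U @ np.diag(d1.astype(float)) @ V
W2 = U @ np.diag(d2.astype(float)) @ V

recovered = np.sort(np.linalg.eigvals(W1 @ np.linalg.inv(W2)).real)
print(np.allclose(recovered, np.sort(d1 / d2)))  # True: ratios of secrets recovered
```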
2019
JOFC
Cryptanalytic Time–Memory–Data Trade-offs for FX-Constructions and the Affine Equivalence Problem
The FX-construction was proposed in 1996 by Kilian and Rogaway as a generalization of the DESX scheme. The construction increases the security of an n-bit core block cipher with a $$\kappa$$-bit key by using two additional n-bit masking keys. Recently, several concrete instances of the FX-construction were proposed, including PRINCE, PRIDE and MANTIS (presented at ASIACRYPT 2012, CRYPTO 2014 and CRYPTO 2016, respectively). In this paper, we devise new cryptanalytic time–memory–data trade-off attacks on FX-constructions. By fine-tuning the parameters to the recent FX-construction proposals, we show that the security margin of these ciphers against practical attacks is smaller than expected. Our techniques combine a special form of time–memory–data trade-offs, typically applied to stream ciphers, with a cryptanalytic technique by Fouque, Joux and Mavromati. In the final part of the paper, we show that the techniques we use in cryptanalysis of the FX-construction are applicable to additional schemes. In particular, we use related methods in order to devise new time–memory trade-offs for solving the affine equivalence problem. In this problem, the input consists of two functions $$F,G: \{0,1\}^n \rightarrow \{0,1\}^n$$, and the goal is to determine whether there exist invertible affine transformations $$A_1,A_2$$ over $$GF(2)^n$$ such that $$G = A_2 \circ F \circ A_1$$.
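For reference, the FX-construction itself is a one-liner: $$\mathrm{FX}_{k,k_1,k_2}(x) = k_2 \oplus E_k(x \oplus k_1)$$. The sketch below instantiates it in Python with a toy 64-bit Feistel network standing in for the core cipher E; the Feistel round function, key sizes, and round count are illustrative assumptions, not PRINCE, PRIDE, or MANTIS.

```python
# FX construction sketch: FX_{k,k1,k2}(x) = k2 XOR E_k(x XOR k1), adding two
# n-bit whitening keys around an n-bit core cipher E with a kappa-bit key.
# The toy 64-bit Feistel core below is illustrative, not a real cipher.
import hashlib

MASK32 = (1 << 32) - 1

def f(k: int, r: int, i: int) -> int:
    """Toy round function: hash of (key, round index, right half)."""
    h = hashlib.sha256(b"%d|%d|%d" % (k, i, r)).digest()
    return int.from_bytes(h[:4], "big")

def feistel_encrypt(k: int, x: int, rounds: int = 8) -> int:
    l, r = x >> 32, x & MASK32
    for i in range(rounds):
        l, r = r, l ^ f(k, r, i)      # standard Feistel step (a permutation)
    return (l << 32) | r

def fx_encrypt(k: int, k1: int, k2: int, x: int) -> int:
    return k2 ^ feistel_encrypt(k, x ^ k1)

ct = fx_encrypt(k=0xDEADBEEF, k1=0x1122334455667788, k2=0x99AABBCCDDEEFF00,
                x=0x0123456789ABCDEF)
print(hex(ct))
```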
2019
JOFC
Efficient Constant-Round Multi-party Computation Combining BMR and SPDZ
Recently, there has been huge progress in the field of concretely efficient secure computation, even while providing security in the presence of malicious adversaries. This is especially the case in the two-party setting, where constant-round protocols exist that remain fast even over slow networks. However, in the multi-party setting, all concretely efficient fully secure protocols, such as SPDZ, require many rounds of communication. In this paper, we present a constant-round multi-party secure computation protocol that is fully secure in the presence of malicious adversaries and for any number of corrupted parties. Our construction is based on the constant-round protocol of Beaver et al. (the BMR protocol) and is the first version of that protocol that is concretely efficient for the dishonest majority case. Our protocol includes an online phase that is extremely fast and mainly consists of each party locally evaluating a garbled circuit. For the offline phase, we present both a generic construction (using any underlying MPC protocol) and a highly efficient instantiation based on the SPDZ protocol. Our estimates show the protocol to be considerably more efficient than previous fully secure multi-party protocols.
2019
JOFC
Efficient Dissection of Bicomposite Problems with Cryptanalytic Applications
In this paper, we show that a large class of diverse problems have a bicomposite structure which makes it possible to solve them with a new type of algorithm called dissection, which has much better time/memory tradeoffs than previously known algorithms. A typical example is the problem of finding the key of multiple encryption schemes with r independent n-bit keys. All the previous error-free attacks required time T and memory M satisfying $$TM = 2^{rn}$$, and even if “false negatives” are allowed, no attack could achieve $$TM < 2^{3rn/4}$$. Our new technique yields the first algorithm which never errs and finds all the possible keys with a smaller product of TM, such as $$T=2^{4n}$$ time and $$M=2^{n}$$ memory for breaking the sequential execution of $$r=7$$ block ciphers. The improvement ratio we obtain increases in an unbounded way as r increases, and if we allow algorithms which can sometimes miss solutions, we can get even better tradeoffs by combining our dissection technique with parallel collision search. To demonstrate the generality of the new dissection technique, we show how to use it in a generic way in order to improve rebound attacks on hash functions and to solve with better time complexities (for small memory complexities) hard combinatorial search problems, such as the well-known knapsack problem.
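As a baseline for the multiple-encryption problem described above, here is the classical meet-in-the-middle attack on double encryption (the case r = 2), which realizes one point on the $$TM = 2^{rn}$$ curve that dissection improves for larger r: tabulate the forward half-encryptions, then look up backward half-decryptions. The 16-bit toy cipher and 12-bit key space are invented for illustration.

```python
# Meet-in-the-middle on double encryption E_{k2}(E_{k1}(p)) = c: time and
# memory ~2^KEYBITS each, versus 2^(2*KEYBITS) for brute force.
import hashlib

def rk(k: int, i: int) -> int:
    """Toy round key derived from (key, round index)."""
    return int.from_bytes(hashlib.sha256(b"%d|%d" % (k, i)).digest()[:2], "big")

def enc(k: int, x: int) -> int:
    for i in range(4):
        x = (x + rk(k, i)) & 0xFFFF
        x = ((x << 5) | (x >> 11)) & 0xFFFF   # rotate left by 5
    return x

def dec(k: int, x: int) -> int:
    for i in reversed(range(4)):
        x = ((x >> 5) | (x << 11)) & 0xFFFF   # rotate right by 5
        x = (x - rk(k, i)) & 0xFFFF
    return x

KEYBITS = 12
p = 0x1234
k1_true, k2_true = 0x5A3, 0x0C7
c = enc(k2_true, enc(k1_true, p))

mid = {}                                       # ~2^KEYBITS memory
for k1 in range(1 << KEYBITS):
    mid.setdefault(enc(k1, p), []).append(k1)

cands = [(k1, k2) for k2 in range(1 << KEYBITS)  # ~2^KEYBITS time
         for k1 in mid.get(dec(k2, c), [])]
# ~2^8 surviving candidates on a 16-bit block; a second (p, c) pair filters them.
print(len(cands), (k1_true, k2_true) in cands)
```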
2019
JOFC
Efficient Fully Structure-Preserving Signatures and Shrinking Commitments
In structure-preserving signatures, public keys, messages, and signatures are all collections of source group elements of some bilinear groups. In this paper, we introduce fully structure-preserving signature schemes, with the additional requirement that even secret keys are group elements. This strong property allows efficient non-interactive proofs of knowledge of the secret key, which is useful in designing cryptographic protocols under simulation-based security where online extraction of the secret key is needed. We present efficient constructions under simple standard assumptions and pursue even more efficient constructions with the extra property of randomizability based on the generic bilinear group model. An essential building block for our efficient standard model construction is a shrinking structure-preserving trapdoor commitment scheme, which is by itself an important primitive and of independent interest as it appears to contradict a known impossibility result that structure-preserving commitments cannot be shrinking. We argue that a relaxed binding property lets us circumvent the impossibility while still retaining the usefulness of the primitive in important applications as mentioned above.
2019
JOFC
Efficient RSA Key Generation and Threshold Paillier in the Two-Party Setting
The problem of generating an RSA composite in a distributed manner without leaking its factorization is particularly challenging and useful in many cryptographic protocols. Our first contribution is the first non-generic fully simulatable protocol for distributively generating an RSA composite with security against malicious behavior. Our second contribution is a complete Paillier (EUROCRYPT, pp 223–238, 1999) threshold encryption scheme in the two-party setting with security against malicious attacks. We further describe how to extend our protocols to the multiparty setting with dishonest majority. Our RSA key generation protocol comprises the following subprotocols: (i) a distributed protocol for generation of an RSA composite and (ii) a biprimality test for verifying the validity of the generated composite. Our Paillier threshold encryption scheme uses the RSA composite for the public key and comprises the following subprotocols: (i) a distributed generation of the corresponding secret key shares and (ii) a distributed decryption protocol for decrypting according to Paillier.
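For context, the (centralized, non-threshold) Paillier scheme whose key the two parties end up sharing is compact enough to sketch, and its additive homomorphism is what makes a threshold version attractive inside larger protocols. This is the textbook scheme with toy primes, not the paper's distributed protocol.

```python
# Textbook Paillier with toy primes (illustrative size only): encryption is
# additively homomorphic, i.e. Dec(Enc(m1) * Enc(m2) mod n^2) = m1 + m2 mod n.
import math
import random

p, q = 104723, 104729
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
L = lambda u: (u - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m: int) -> int:
    r = random.randrange(1, n)               # gcd(r, n) = 1 with overwhelming probability
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = enc(41), enc(1)
assert dec((c1 * c2) % n2) == 42             # additive homomorphism
print("Paillier round trip and homomorphism OK")
```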
2019
JOFC
Feasibility and Infeasibility of Secure Computation with Malicious PUFs
A recent line of work has explored the use of physically unclonable functions (PUFs) for secure computation, with the goals of (1) achieving universal composability without additional setup and/or (2) obtaining unconditional security (i.e., avoiding complexity-theoretic assumptions). Initial work assumed that all PUFs, even those created by an attacker, are honestly generated. Subsequently, researchers have investigated models in which an adversary can create malicious PUFs with arbitrary behavior. Both malicious PUFs that may be stateful and malicious PUFs that can have arbitrary behavior but are guaranteed to be stateless have been considered. We settle the main open questions regarding secure computation in the malicious-PUF model: We prove that unconditionally secure oblivious transfer is impossible, even in the stand-alone setting, if the adversary can construct (malicious) stateful PUFs. We show that if the attacker is limited to creating (malicious) stateless PUFs, then universally composable two-party computation is possible, unconditionally.
2019
JOFC
Four-State Non-malleable Codes with Explicit Constant Rate
Non-malleable codes (NMCs), introduced by Dziembowski, Pietrzak and Wichs (ITCS 2010), provide a powerful guarantee in scenarios where the classical notion of error-correcting codes cannot provide any guarantee: a decoded message is either the same or completely independent of the underlying message, regardless of the number of errors introduced into the codeword. Informally, NMCs are defined with respect to a family of tampering functions $$\mathcal {F}$$ and guarantee that any tampered codeword decodes either to the same message or to an independent message, so long as it is tampered using a function $$f \in \mathcal {F}$$. One of the well-studied tampering families for NMCs is the t-split-state family, where the adversary tampers each of the t “states” of a codeword, arbitrarily but independently. Cheraghchi and Guruswami (TCC 2014) obtain a rate-1 non-malleable code for the case where $$t = \mathcal {O}(n)$$ with n being the codeword length and, in (ITCS 2014), show an upper bound of $$1-1/t$$ on the best achievable rate for any t-split-state NMC. For $$t=10$$, Chattopadhyay and Zuckerman (FOCS 2014) achieve a constant-rate construction where the constant is unknown. In summary, there is no known construction of an NMC with an explicit constant rate for any $$t = o(n)$$, let alone one that comes close to matching Cheraghchi and Guruswami’s upper bound! In this work, we construct an efficient non-malleable code in the t-split-state model, for $$t=4$$, that achieves a constant rate of $$\frac{1}{3+\zeta }$$, for any constant $$\zeta > 0$$, and error $$2^{-\Omega (\ell / \log ^{c+1} \ell )}$$, where $$\ell$$ is the length of the message and $$c > 0$$ is a constant.
2019
JOFC
From Cryptomania to Obfustopia Through Secret-Key Functional Encryption
Functional encryption lies at the frontiers of current research in cryptography; some variants have been shown sufficiently powerful to yield indistinguishability obfuscation (IO), while other variants have been constructed from standard assumptions such as LWE. Indeed, most variants have been classified as belonging to either the former or the latter category. However, one mystery that has remained is the case of secret-key functional encryption with an unbounded number of keys and ciphertexts. On the one hand, this primitive is not known to imply anything outside of minicrypt, the land of secret-key cryptography, but, on the other hand, we do not know how to construct it without the heavy hammers in obfustopia. In this work, we show that (subexponentially secure) secret-key functional encryption is powerful enough to construct indistinguishability obfuscation if we additionally assume the existence of (subexponentially secure) plain public-key encryption. In other words, secret-key functional encryption provides a bridge from cryptomania to obfustopia. On the technical side, our result relies on two main components. As our first contribution, we show how to use secret-key functional encryption to get “exponentially efficient indistinguishability obfuscation” (XIO), a notion recently introduced by Lin et al. (PKC’16) as a relaxation of IO. Lin et al. show how to use XIO and the LWE assumption to build IO. As our second contribution, we improve on this result by replacing its reliance on the LWE assumption with any plain public-key encryption scheme. Lastly, we ask whether secret-key functional encryption can be used to construct public-key encryption itself and therefore take us all the way from minicrypt to obfustopia. A result of Asharov and Segev (FOCS’15) shows that this is not the case under black-box constructions, even for exponentially secure functional encryption. We show, through a non-black-box construction, that subexponentially secure secret-key functional encryption indeed leads to public-key encryption. The resulting public-key encryption scheme, however, is at most quasi-polynomially secure, which is insufficient to take us to obfustopia.
2019
JOFC
From Minicrypt to Obfustopia via Private-Key Functional Encryption
Private-key functional encryption enables fine-grained access to symmetrically encrypted data. Although private-key functional encryption (supporting an unbounded number of keys and ciphertexts) seems significantly weaker than its public-key variant, its known realizations all rely on public-key functional encryption. At the same time, however, up until recently it was not known to imply any public-key primitive, demonstrating our poor understanding of this primitive. Bitansky et al. (TCC 2016-B) showed that sub-exponentially secure private-key functional encryption bridges from nearly exponential security in Minicrypt to slightly super-polynomial security in Cryptomania, and from sub-exponential security in Cryptomania to Obfustopia. Specifically, given any sub-exponentially secure private-key functional encryption scheme and a nearly exponentially secure one-way function, they constructed a public-key encryption scheme with slightly super-polynomial security. Assuming, in addition, a sub-exponentially secure public-key encryption scheme, they then constructed an indistinguishability obfuscator (or a public-key functional encryption scheme if the given building blocks are polynomially secure). We show that quasi-polynomially secure private-key functional encryption bridges from sub-exponential security in Minicrypt all the way to Cryptomania. First, given any quasi-polynomially secure private-key functional encryption scheme, we construct an indistinguishability obfuscator for circuits with inputs of poly-logarithmic length. Then, we observe that such an obfuscator can be used to instantiate many natural applications of indistinguishability obfuscation. Specifically, relying on sub-exponentially secure one-way functions, we show that quasi-polynomially secure private-key functional encryption implies not just public-key encryption but leads all the way to public-key functional encryption for circuits with inputs of poly-logarithmic length. Moreover, relying on sub-exponentially secure injective one-way functions, we show that quasi-polynomially secure private-key functional encryption implies a hard-on-average distribution over instances of a PPAD-complete problem. Underlying our constructions is a new transformation from single-input functional encryption to multi-input functional encryption in the private-key setting. The previously known such transformation (Brakerski et al., J Cryptol 31(2):434–520, 2018) required a sub-exponentially secure single-input scheme, and obtained a scheme supporting only a slightly super-constant number of inputs. Our transformation both relaxes the underlying assumption and supports more inputs: Given any quasi-polynomially secure single-input scheme, we obtain a scheme supporting a poly-logarithmic number of inputs.
2019
JOFC
From Physical to Stochastic Modeling of a TERO-Based TRNG
Security in random number generation for cryptography is closely related to the entropy rate at the generator output. This rate has to be evaluated using an appropriate stochastic model. The stochastic model proposed in this paper is dedicated to the transition effect ring oscillator (TERO)-based true random number generator (TRNG) proposed by Varchola and Drutarovsky (CHES 2010, Springer). The advantage and originality of this model are that it is derived from a physical model based on a detailed study and on the precise electrical description of the noisy physical phenomena that contribute to the generation of random numbers. We compare the proposed electrical description with data generated in two different technologies: TERO TRNG implementations in 40 and 28 nm CMOS ASICs. Our experimental results are in very good agreement with those obtained with both the physical model of TERO’s noisy behavior and the stochastic model of the TERO TRNG, which we also confirmed using the AIS 31 test suites.
2019
JOFC
Fully Secure Functional Encryption with a Large Class of Relations from the Decisional Linear Assumption
This paper presents a fully secure (adaptively secure) practical functional encryption scheme for a large class of relations that are specified by non-monotone access structures combined with inner-product relations. The security is proven under a standard assumption, the decisional linear assumption, in the standard model. Our scheme is constructed on the concept of dual pairing vector spaces, and a hierarchical reduction technique on this concept is employed for the security proof. The proposed functional encryption scheme covers, as special cases, (1) key-policy, ciphertext-policy and unified-policy attribute-based encryption with non-monotone access structures, (2) (hierarchical) attribute-hiding functional encryption with inner-product relations and functional encryption with nonzero inner-product relations and (3) spatial encryption and a more general class of encryption than spatial encryption.
2019
JOFC
Generic Attacks on Hash Combiners
Hash combiners are a practical way to make cryptographic hash functions more tolerant to future attacks and compatible with existing infrastructure. A combiner combines two or more hash functions in a way that is hopefully more secure than each of the underlying hash functions, or at least remains secure as long as one of them is secure. Two classical hash combiners are the exclusive-or (XOR) combiner $$\mathcal {H}_1(M) \oplus \mathcal {H}_2(M)$$ and the concatenation combiner $$\mathcal {H}_1(M) \Vert \mathcal {H}_2(M)$$. Both of them process the same message using the two underlying hash functions in parallel. Apart from parallel combiners, there are also cascade constructions sequentially calling the underlying hash functions to process the message repeatedly, such as Hash-Twice $$\mathcal {H}_2(\mathcal {H}_1(IV, M), M)$$ and the Zipper hash $$\mathcal {H}_2(\mathcal {H}_1(IV, M), \overleftarrow{M})$$, where $$\overleftarrow{M}$$ is the reverse of the message M. In this work, we study the security of these hash combiners by devising the best-known generic attacks. The results show that the security of most of the combiners is not as high as commonly believed. We summarize our attacks and their computational complexities (ignoring the polynomial factors) as follows:
1. Several generic preimage attacks on the XOR combiner: A first attack with a best-case complexity of $$2^{5n/6}$$ obtained for messages of length $$2^{n/3}$$. It relies on a novel technical tool named the interchange structure and is applicable to combiners whose underlying hash functions follow the Merkle–Damgård construction or the HAIFA framework. A second attack with a best-case complexity of $$2^{2n/3}$$ obtained for messages of length $$2^{n/2}$$. It exploits properties of functional graphs of random mappings and achieves a significant improvement over the first attack, but is only applicable when the underlying hash functions use the Merkle–Damgård construction. An improvement upon the second attack with a best-case complexity of $$2^{5n/8}$$ obtained for messages of length $$2^{5n/8}$$. It further exploits properties of functional graphs of random mappings and uses longer messages. These attacks show a rather surprising result: regarding preimage resistance, the sum of two n-bit narrow-pipe hash functions following the considered constructions can never provide n-bit security.
2. A generic second-preimage attack on the concatenation combiner of two Merkle–Damgård hash functions. This attack finds second preimages faster than $$2^n$$ for challenges longer than $$2^{2n/7}$$ and has a best-case complexity of $$2^{3n/4}$$ obtained for challenges of length $$2^{3n/4}$$. It also exploits properties of functional graphs of random mappings.
3. The first generic second-preimage attack on the Zipper hash with underlying hash functions following the Merkle–Damgård construction. The best-case complexity is $$2^{3n/5}$$, obtained for challenge messages of length $$2^{2n/5}$$.
4. An improved generic second-preimage attack on Hash-Twice with underlying hash functions following the Merkle–Damgård construction. The best-case complexity is $$2^{13n/22}$$, obtained for challenge messages of length $$2^{13n/22}$$.
The last three attacks show that regarding second-preimage resistance, the concatenation and cascade of two n-bit narrow-pipe Merkle–Damgård hash functions do not provide much more security than can be provided by a single n-bit hash function. Our main technical contributions include the following:
1. The interchange structure, which enables simultaneously controlling the behaviours of two hash computations sharing the same input.
2. The simultaneous expandable message, a set of messages whose lengths cover a whole appropriate range and which form a multi-collision for both of the underlying hash functions.
3. New ways to exploit the properties of functional graphs of random mappings generated by fixing the message-block input to the underlying compression functions.
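The parallel combiners above are one-liners over any two concrete hash functions. The sketch below instantiates them with Python's stdlib hashes and loosely emulates the cascade constructions by prefixing H1's digest to the message (faithful Hash-Twice and Zipper would feed H1's chaining value in as H2's IV, which stdlib APIs do not expose); all instantiation choices are illustrative.

```python
# XOR and concatenation combiners, plus loose stdlib emulations of the
# cascades (a faithful Hash-Twice/Zipper needs access to MD chaining values).
import hashlib

def H1(m: bytes) -> bytes: return hashlib.sha256(m).digest()
def H2(m: bytes) -> bytes: return hashlib.sha3_256(m).digest()

def xor_combiner(m: bytes) -> bytes:          # H1(M) xor H2(M)
    return bytes(a ^ b for a, b in zip(H1(m), H2(m)))

def concat_combiner(m: bytes) -> bytes:       # H1(M) || H2(M)
    return H1(m) + H2(m)

def hash_twice(m: bytes) -> bytes:            # ~ H2(H1(IV, M), M)
    return H2(H1(m) + m)

def zipper(m: bytes) -> bytes:                # ~ H2(H1(IV, M), reverse(M))
    return H2(H1(m) + m[::-1])

msg = b"the same message, processed twice"
for f in (xor_combiner, concat_combiner, hash_twice, zipper):
    print(f.__name__, f(msg).hex()[:16], "...")
```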
2019
JOFC
Hardness-Preserving Reductions via Cuckoo Hashing
The focus of this work is hardness-preserving transformations of somewhat limited pseudorandom function families (PRFs) into ones with more versatile characteristics. Consider the problem of domain extension of pseudorandom functions: given a PRF that takes as input elements of some domain $$\mathcal {U}$$, we would like to come up with a PRF over a larger domain. Can we do it with little work and without significantly impacting the security of the system? One approach is to first hash the larger domain into the smaller one and then apply the original PRF. Such a reduction, however, is vulnerable to a “birthday attack”: after $$\sqrt{|\mathcal {U}|}$$ queries to the resulting PRF, a collision (i.e., two distinct inputs having the same hash value) is very likely to occur. As a consequence, the resulting PRF is insecure against an attacker making this number of queries. In this work, we show how to go beyond the aforementioned birthday attack barrier by replacing the above simple hashing approach with a variant of cuckoo hashing, a hashing paradigm that resolves collisions in a table by using two hash functions and two tables, cleverly assigning each element to one of the two tables. We use this approach to obtain: (i) a domain extension method that requires just two calls to the original PRF, can withstand as many queries as the original domain size, and has a distinguishing probability that is exponentially small in the amount of non-cryptographic work; and (ii) a security-preserving reduction from non-adaptive to adaptive PRFs.
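A simplified two-call variant conveys the intuition, though the paper's actual cuckoo-hashing-based construction differs in its details: instead of one hash into the small domain, use two independent hash functions into two separately keyed copies of the PRF and XOR the results, so that a birthday collision now requires both hashes to collide on the same pair of inputs. All names and parameters below are invented for illustration.

```python
# Hedged sketch of a two-call domain extension (simplified variant, not the
# paper's exact construction): G(x) = F(1, h1(x)) XOR F(2, h2(x)).
import hashlib
import hmac

SMALL = 2**16  # size of the original PRF's small domain (illustrative)

def prf(key: bytes, table: int, u: int) -> bytes:
    """Stand-in small-domain PRF: HMAC-SHA256, domain-separated per table."""
    return hmac.new(key, b"%d|%d" % (table, u), hashlib.sha256).digest()

def h(i: int, x: bytes) -> int:
    """Two public (illustrative) hash functions into the small domain."""
    return int.from_bytes(hashlib.sha256(b"%d|" % i + x).digest()[:4], "big") % SMALL

def extended_prf(key: bytes, x: bytes) -> bytes:
    a = prf(key, 1, h(1, x))
    b = prf(key, 2, h(2, x))
    return bytes(s ^ t for s, t in zip(a, b))   # exactly two PRF calls

print(extended_prf(b"secret key", b"an arbitrarily long input").hex())
```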
2019
JOFC
Improved Combinatorial Algorithms for the Inhomogeneous Short Integer Solution Problem
The paper is about algorithms for the inhomogeneous short integer solution problem: given $$(\mathbf{A}, \mathbf{s})$$, find a short vector $$\mathbf{x}$$ such that $$\mathbf{A}\mathbf{x} \equiv \mathbf{s} \pmod{q}$$. We consider algorithms for this problem due to Camion and Patarin; Wagner; Schroeppel and Shamir; Minder and Sinclair; Howgrave-Graham and Joux (HGJ); Becker, Coron and Joux (BCJ). Our main results include: applying the Hermite normal form (HNF) to get faster algorithms; a heuristic analysis of the HGJ and BCJ algorithms in the case of density greater than one; an improved cryptanalysis of the SWIFFT hash function; a new method that exploits symmetries to speed up algorithms for Ring-SIS in some cases.
2019
JOFC
Improved Key Recovery Attacks on Reduced-Round AES with Practical Data and Memory Complexities
Determining the security of AES is a central problem in cryptanalysis, but progress in this area has been slow and only a handful of cryptanalytic techniques have led to significant advancements. At Eurocrypt 2017, Grassi et al. presented a novel type of distinguisher for AES-like structures, but so far all the published attacks which were based on this distinguisher were inferior to previously known attacks in their complexity. In this paper, we combine the technique of Grassi et al. with several other techniques in a novel way to obtain the best known key recovery attack on 5-round AES in the single-key model, reducing its overall complexity from about $$2^{32}$$ to less than $$2^{22}$$. Extending our techniques to 7-round AES, we obtain the best known attacks on reduced-round AES-192 which use practical amounts of data and memory, breaking the record for such attacks which was obtained in 2000 by the classical Square attack. In addition, we use our techniques to improve the Gilbert–Minier attack (2000) on 7-round AES, reducing its memory complexity from $$2^{80}$$ to $$2^{40}$$.
2019
JOFC
Key Establishment à la Merkle in a Quantum World
In 1974, Ralph Merkle proposed the first unclassified protocol for secure communications over insecure channels. When legitimate communicating parties are willing to spend an amount of computational effort proportional to some parameter N, an eavesdropper cannot break into their communication without spending a time proportional to $$N^2$$, which is quadratically more than the legitimate effort. In a quantum world, however, Merkle’s protocol is immediately broken by Grover’s algorithm, but it is easily repaired if we are satisfied with a quantum protocol against which a quantum adversary needs to spend a time proportional to $$N^{3/2}$$ in order to break it. Can we do better? We give two new key establishment protocols in the spirit of Merkle’s. The first one, which requires the legitimate parties to have access to a quantum computer, resists any quantum adversary who is not willing to make an effort at least proportional to $$N^{5/3}$$, except with vanishing probability. Our second protocol is purely classical, yet it requires any quantum adversary to work asymptotically harder than the legitimate parties, again except with vanishing probability. In either case, security is proved for a typical run of the protocols: the probabilities are taken over the random (or quantum) choices made by the legitimate participants in order to establish their key as well as over the random (or quantum) choices made by the adversary who is trying to be privy to it.
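For readers unfamiliar with the 1974 protocol that both new protocols build on, here is a toy classical version of Merkle's puzzles: Alice publishes N puzzles, Bob brute-forces a single random one (so each side does roughly N work in total), and an eavesdropper expects to open about N/2 puzzles before finding the one Bob chose, for roughly N^2 work. The puzzle encoding, KDF, and parameters are illustrative.

```python
# Toy classical Merkle puzzles (encoding and parameters are illustrative).
import hashlib
import os
import random

N = 512  # number of puzzles; each also takes ~N work to open

def pad(seed: int) -> bytes:
    return hashlib.sha256(b"puzzle|%d" % seed).digest()

def tag(pid: int, key: bytes) -> bytes:
    return hashlib.sha256(b"chk|%d|" % pid + key).digest()[:4]

# Alice: N puzzles, each hiding (puzzle id, session key) under a weak seed.
puzzles, alice_keys = [], []
for pid in range(N):
    key = os.urandom(16)
    alice_keys.append(key)
    pt = pid.to_bytes(4, "big") + key
    blob = bytes(a ^ b for a, b in zip(pt, pad(random.randrange(N))))
    puzzles.append((blob, tag(pid, key)))

# Bob: pick one puzzle at random and brute-force its seed (~N hashes).
blob, chk = random.choice(puzzles)
for seed in range(N):
    pt = bytes(a ^ b for a, b in zip(blob, pad(seed)))
    pid, key = int.from_bytes(pt[:4], "big"), pt[4:]
    if tag(pid, key) == chk:
        break

# Bob announces pid in the clear; Alice looks up the matching key.
assert key == alice_keys[pid]
print("shared key established via puzzle", pid)
```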
2019
JOFC
Koblitz Curves over Quadratic Fields
In this work, we revisit an old idea that Koblitz presented in his landmark paper (Proceedings of CRYPTO 1991, LNCS, vol 576, Springer, Berlin, pp 279–287, 1991), where he suggested the possibility of defining anomalous elliptic curves over the base field $${\mathbb {F}}_4$$. We present a careful implementation of the base and quadratic field arithmetic required for computing the scalar multiplication operation on such curves. We also introduce two ordinary Koblitz-like elliptic curves defined over $${\mathbb {F}}_4$$ that are equipped with efficient endomorphisms. To the best of our knowledge, these endomorphisms have not been reported before. In order to achieve a fast reduction procedure, we adopted a redundant trinomial strategy that embeds elements of the field $${\mathbb {F}}_{4^{m}}$$, with m a prime number, into a ring of higher order defined by an almost irreducible trinomial. We also suggest a number of techniques that allow us to take full advantage of the native vector instructions of high-end microprocessors. Our software library achieves the fastest timings reported for the computation of the timing-protected scalar multiplication on Koblitz curves, and competitive timings with respect to the speed records established recently in the computation of the scalar multiplication over binary and prime fields.
2019
JOFC
Leakage Resilience from Program Obfuscation
The literature on leakage-resilient cryptography contains various leakage models that provide different levels of security. In the bounded leakage model (Akavia et al.—TCC 2009), it is assumed that there is a fixed upper bound L on the number of bits the attacker may leak on the secret key in the entire lifetime of the scheme. Alternatively, in the continual leakage model (Brakerski et al.—FOCS 2010, Dodis et al.—FOCS 2010), the lifetime of a cryptographic scheme is divided into “time periods” between which the scheme’s secret key is updated. Furthermore, in its attack the adversary is allowed to obtain some bounded amount of leakage on the current secret key during each time period. In the continual leakage model, a challenging problem has been to provide security against leakage on key updates, that is, leakage that is a function of not only the current secret key but also the randomness used to update it. We propose a modular approach to overcome this problem based on program obfuscation. Namely, we present a compiler that transforms any public key encryption or signature scheme that achieves a slight strengthening of continual leakage resilience, which we call consecutive continual leakage resilience, to one that is continual leakage resilient with leakage on key updates, assuming indistinguishability obfuscation (Barak et al.—CRYPTO 2001, Garg et al.—FOCS 2013). Under stronger forms of obfuscation, the leakage rate tolerated by our compiled scheme is essentially as good as that of the starting scheme. Our compiler is derived by making a connection between the problems of leakage on key updates and so-called sender-deniable encryption (Canetti et al.—CRYPTO 1997), which was recently constructed based on indistinguishability obfuscation by Sahai and Waters (STOC 2014). In the bounded leakage model, we give an approach to constructing leakage-resilient public key encryption from program obfuscation based on the public key encryption scheme of Sahai and Waters (STOC 2014). In particular, we achieve leakage-resilient public key encryption tolerating L bits of leakage for any L from $$\mathsf {iO}$$ and one-way functions. We build on this to achieve leakage-resilient public key encryption with optimal leakage rate of $$1-o(1)$$ based on stronger forms of obfuscation and collision-resistant hash functions. Such a leakage rate is not known to be achievable in a generic way based on public key encryption alone. We then develop additional techniques to construct public key encryption that is (consecutive) continual leakage resilient under appropriate assumptions, which we believe is of independent interest.
2019
JOFC
Making Masking Security Proofs Concrete (Or How to Evaluate the Security of Any Leaking Device), Extended Version
We investigate the relationship between theoretical studies of leaking cryptographic devices and concrete security evaluations with standard side-channel attacks. Our contributions are in four parts. First, we connect the formal analysis of the masking countermeasure proposed by Duc et al. (Eurocrypt 2014) with the Eurocrypt 2009 evaluation framework for side-channel key recovery attacks. In particular, we re-state their main proof for the masking countermeasure based on a mutual information metric, which is frequently used in concrete physical security evaluations. Second, we discuss the tightness of the Eurocrypt 2014 bounds based on experimental case studies. This allows us to conjecture a simplified link between the mutual information metric and the success rate of a side-channel adversary, ignoring technical parameters and proof artifacts. Third, we introduce heuristic (yet well-motivated) tools for the evaluation of the masking countermeasure when its independent leakage assumption is not perfectly fulfilled, as it is frequently encountered in practice. Thanks to these tools, we argue that masking with non-independent leakages may provide improved security levels in certain scenarios. Finally, we consider the tradeoff between the measurement complexity and the key enumeration time complexity in divide-and-conquer side-channel attacks and show that these complexities can be lower bounded based on the mutual information metric, using simple and efficient algorithms. The combination of these observations enables significant reductions of the evaluation costs for certification bodies.
2019
JOFC
Multi-theorem Preprocessing NIZKs from Lattices
Non-interactive zero-knowledge (NIZK) proofs are fundamental to modern cryptography. Numerous NIZK constructions are known in both the random oracle and the common reference string (CRS) models. In the CRS model, there exist constructions from several classes of cryptographic assumptions such as trapdoor permutations, pairings, and indistinguishability obfuscation. However, at the time of the initial publication of this work, we did not have constructions of NIZKs from standard lattice assumptions. In this work, we take an initial step toward constructing multi-theorem NIZKs for general $$\mathsf {NP}$$ languages from standard lattice assumptions by considering a relaxation to the preprocessing model and a new model we call the designated-prover model. In the preprocessing model, a setup algorithm generates secret proving and verification keys for the prover and the verifier, respectively. In the designated-prover model, the proving key is secret, but the verification key is public. In both settings, the proving key is used to construct proofs and the verification key is used to check proofs. Finally, in the multi-theorem setting, both the proving and verification keys should be reusable for an unbounded number of theorems without compromising soundness or zero-knowledge. Previous constructions of NIZKs in the preprocessing model that rely on weaker assumptions like one-way functions or oblivious transfer are only secure in a single-theorem setting. Thus, constructing multi-theorem NIZKs in these relaxed models does not seem to be inherently easier than constructing them in the CRS model. In this work, we first construct a multi-theorem preprocessing NIZK argument from context-hiding homomorphic signatures. In fact, the construction is a designated-prover NIZK. We also show that using homomorphic commitments, we can get statistically sound proofs in the preprocessing and designated-prover models. Together with lattice-based instantiations of homomorphic signatures and commitments, we obtain the first multi-theorem NIZKs in the preprocessing and designated-prover models from standard lattice assumptions. Finally, we show how to generalize our construction to obtain a universally composable NIZK (UC-NIZK) in the preprocessing model from standard lattice assumptions. Our UC-NIZK relies on a simple preprocessing protocol based on a new primitive we call blind homomorphic signatures.
2019
JOFC
Multidimensional Linear Cryptanalysis
Linear cryptanalysis, introduced by Matsui, is a statistical attack which exploits a binary linear relation between plaintext, ciphertext and key, either in Algorithm 1 for recovering one bit of information of the secret key of a block cipher, or in Algorithm 2 for ranking candidate values for a part of the key. The statistical model is based on the expected and observed bias of a single binary value. Multiple linear approximations have been used with the goal of making the linear attack more efficient. More bits of information of the key can potentially be recovered, possibly using less data. But then more elaborate statistical models are needed to capture the joint behaviour of several, not necessarily independent, binary variables. More options are also available for generalising the statistics of a single variable to several variables. The multidimensional extension of linear cryptanalysis introduced in this paper considers using multiple linear approximations that form a linear subspace. Different extensions of Algorithm 1 and Algorithm 2 will be presented and studied. The methods will be based on known statistical tools such as the goodness-of-fit test and the log-likelihood ratio. The efficiency of the different methods will be measured and compared in theory and experiments using the concept of advantage introduced by Selçuk. The block cipher Serpent with a reduced number of rounds will be used as a test bed. The multidimensional linear cryptanalysis will also be compared with previous methods that use the biasedness of multiple linear approximations. It will be shown in the simulations that the multidimensional method is potentially more powerful. Its main theoretical advantage is that the statistical model can be given without the assumption of statistical independence of the linear approximations.
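The statistical objects involved are easy to make concrete. The sketch below computes the empirical correlation of a single linear approximation of a toy S-box, and then the chi-squared goodness-of-fit statistic over the $$2^m$$ classes induced by m base approximations spanning a linear subspace, which is the raw material of the multidimensional method. The S-box and masks are illustrative, not Serpent's.

```python
# Empirical correlation of one linear approximation <a,x> xor <b,S(x)>, and
# the chi-squared statistic over the classes induced by m base approximations.
import random

def parity(v: int) -> int:  # <mask, value> over GF(2)
    return bin(v).count("1") & 1

SBOX = list(range(256))
random.seed(0)
random.shuffle(SBOX)        # toy random 8-bit S-box (illustrative)

samples = [(x, SBOX[x]) for x in range(256)]

# One-dimensional statistic: correlation of a single approximation.
a, b = 0x0B, 0xA3           # input/output masks (illustrative)
corr = sum(1 - 2 * (parity(a & x) ^ parity(b & y)) for x, y in samples) / len(samples)
print("correlation:", corr)

# Multidimensional: m = 3 base masks span 2^3 classes; compare the observed
# class distribution against uniform with a chi-squared statistic.
base = [(0x0B, 0xA3), (0x61, 0x14), (0x90, 0x4D)]
counts = [0] * 8
for x, y in samples:
    cls = 0
    for i, (am, bm) in enumerate(base):
        cls |= (parity(am & x) ^ parity(bm & y)) << i
    counts[cls] += 1
expected = len(samples) / 8
chi2 = sum((c - expected) ** 2 / expected for c in counts)
print("chi-squared over 8 classes:", chi2)
```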
2019
JOFC
Non-black-box Simulation in the Fully Concurrent Setting, Revisited
We give a new proof of the existence of $$O(n^{\epsilon })$$-round public-coin concurrent zero-knowledge arguments for $$\mathcal {NP}$$, where $$\epsilon >0$$ is an arbitrary constant. The security is proven in the plain model under the assumption that collision-resistant hash functions exist. The existence of such concurrent zero-knowledge arguments was previously proven by Goyal (STOC’13) in the plain model under the same assumption. In the proof, we use a new variant of the non-black-box simulation technique of Barak (FOCS’01). An important property of our simulation technique is that the simulator runs in a “straight-line” manner in the fully concurrent setting. Compared with the simulation technique of Goyal, which also has such a property, the analysis of our simulation technique is (arguably) simpler.
2019
JOFC
Oblivious Network RAM and Leveraging Parallelism to Achieve Obliviousness
Oblivious RAM (ORAM) is a cryptographic primitive that allows a trusted CPU to securely access untrusted memory, such that the access patterns reveal nothing about sensitive data. ORAM is known to have broad applications in secure processor design and secure multiparty computation for big data. Unfortunately, due to a logarithmic lower bound by Goldreich and Ostrovsky (J ACM 43(3):431–473, 1996), ORAM is bound to incur a moderate cost in practice. In particular, with the latest developments in ORAM constructions, we are quickly approaching this limit, and the room for performance improvement is small. In this paper, we consider new models of computation in which the cost of obliviousness can be fundamentally reduced in comparison with the standard ORAM model. We propose the oblivious network RAM model of computation, where a CPU communicates with multiple memory banks, such that the adversary observes only which bank the CPU is communicating with, but not the address offset within each memory bank. In other words, obliviousness within each bank comes for free—either because the architecture prevents a malicious party from observing the address accessed within a bank, or because another solution is used to obfuscate memory accesses within each bank—and hence we only need to obfuscate communication patterns between the CPU and the memory banks. We present new constructions for obliviously simulating general or parallel programs in the network RAM model. We describe applications of our new model in distributed storage applications with a network adversary.
2019
JOFC
On Black-Box Complexity of Universally Composable Security in the CRS Model
In this work, we study the intrinsic complexity of black-box Universally Composable (UC) secure computation based on general assumptions. We present a thorough study of various corruption models while focusing on achieving security in the common reference string (CRS) model. Our results involve the following:
Static UC secure computation. We design the first static UC oblivious transfer protocol based on public-key encryption and stand-alone semi-honest oblivious transfer. As a corollary, we obtain the first black-box constructions of UC secure computation assuming only two-round semi-honest oblivious transfer.
One-sided UC secure computation. We design adaptive UC two-party computation with single corruptions assuming public-key encryption with oblivious ciphertext generation.
Adaptive UC secure computation. We design an adaptively secure UC commitment scheme assuming only public-key encryption with oblivious ciphertext generation. As a corollary, we obtain the first black-box constructions of adaptive UC secure computation assuming only (trapdoor) simulatable public-key encryption (as well as a variety of concrete assumptions). We remark that such a result was not known even under non-black-box constructions.
2019
JOFC
On the Impossibility of Structure-Preserving Deterministic Primitives
In structure-preserving cryptography over bilinear groups, cryptographic schemes are restricted to exchange group elements only, and their correctness must be verifiable only by evaluating pairing product equations. Several primitives, such as structure-preserving signatures, commitments, and encryption schemes, have been proposed. Although deterministic primitives, such as verifiable pseudorandom functions or verifiable unpredictable functions, play an important role in the construction of cryptographic protocols, no structure-preserving realizations of them are known. This is no coincidence: In this paper, we show that it is impossible to construct algebraic structure-preserving deterministic primitives that provide provability, uniqueness, and unpredictability. This includes verifiable random functions, unique signatures, and verifiable unpredictable functions as special cases. The restriction of structure-preserving primitives to be algebraic is natural; otherwise, it would not be known how to verify correctness only by evaluating pairing product equations. We further extend our negative result to pseudorandom functions and deterministic public key encryption as well as non-strictly structure-preserving primitives, where target group elements are also allowed in their ranges and public keys.
2019
JOFC
On the Tightness of Forward-Secure Signature Reductions
In this paper, we revisit the security of factoring-based signature schemes built via the Fiat–Shamir transform and show that they can admit tighter reductions to certain decisional complexity assumptions such as the quadratic-residuosity, the high-residuosity, and the $$\phi$$-hiding assumptions. We do so by proving that the underlying identification schemes used in these schemes are a particular case of the lossy identification notion introduced by Abdalla et al. at Eurocrypt 2012. Next, we show how to extend these results to the forward-security setting based on ideas from the Itkis–Reyzin forward-secure signature scheme. Unlike the original Itkis–Reyzin scheme, our construction can be instantiated under different decisional complexity assumptions and has a much tighter security reduction. Moreover, we also show that the tighter security reductions provided by our proof methodology can result in concrete efficiency gains in practice, both in the standard and forward-security setting, as long as the use of stronger security assumptions is deemed acceptable. Finally, we investigate the design of forward-secure signature schemes whose security reductions are fully tight.
2019
JOFC
On Tight Security Proofs for Schnorr Signatures
The Schnorr signature scheme is the most efficient signature scheme based on the discrete logarithm problem, and a long line of research investigates the existence of a tight security reduction for this scheme in the random oracle model. Almost all recent works present lower tightness bounds; most recently, Seurin (EUROCRYPT 2012) showed that under certain assumptions the non-tight security proof for Schnorr signatures in the random oracle model by Pointcheval and Stern (EUROCRYPT’96) is essentially optimal. All previous works in this direction rule out tight reductions from the (one-more) discrete logarithm problem. In this paper, we introduce a new meta-reduction technique, which shows lower bounds for the large and very natural class of generic reductions. A generic reduction is independent of a particular representation of group elements. Most reductions in state-of-the-art security proofs have this property. It is desirable, because then the reduction applies generically to any concrete instantiation of the group. Our approach shows unconditionally that there is no tight generic reduction from any natural non-interactive computational problem $$\Pi$$ defined over algebraic groups to breaking Schnorr signatures, unless solving $$\Pi$$ is easy. In an additional application of the new meta-reduction technique, we also unconditionally rule out any (even non-tight) generic reduction from natural non-interactive computational problems defined over algebraic groups to breaking Schnorr signatures in the non-programmable random oracle model.
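For reference, the scheme whose reductions are being bounded: Schnorr signatures over a toy Schnorr group, with SHA-256 playing the random oracle's role. The tiny primes (p = 2q + 1 = 2039) and all encoding choices are for illustration only.

```python
# Schnorr signatures over a toy group: p = 2q + 1 with p, q prime, and g = 4
# generating the order-q subgroup. Parameters are far too small for security.
import hashlib
import random

p, q, g = 2039, 1019, 4

def H(*parts) -> int:
    data = b"|".join(str(s).encode() for s in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)               # secret key
    return x, pow(g, x, p)                   # (sk, pk = g^x)

def sign(x: int, msg: str):
    r = random.randrange(1, q)               # per-signature nonce
    R = pow(g, r, p)                         # commitment
    c = H(R, msg)                            # challenge (random oracle)
    s = (r + c * x) % q                      # response
    return c, s

def verify(y: int, msg: str, sig) -> bool:
    c, s = sig
    R = (pow(g, s, p) * pow(y, -c, p)) % p   # g^s * y^{-c} = g^r
    return c == H(R, msg)

sk, pk = keygen()
sig = sign(sk, "hello")
print(verify(pk, "hello", sig), verify(pk, "tampered", sig))  # True False
```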
2019
JOFC
Probabilistic Termination and Composability of Cryptographic Protocols
When analyzing the round complexity of multi-party protocols, one often overlooks the fact that underlying resources, such as a broadcast channel, can by themselves be expensive to implement. For example, it is well known that a broadcast channel cannot be implemented by a (deterministic) protocol in a sublinear (in the number of corrupted parties) number of rounds. The seminal works of Rabin and Ben-Or from the early 1980s demonstrated that limitations such as the above can be overcome by using randomization and allowing parties to terminate at different rounds, igniting the study of protocols over point-to-point channels with probabilistic termination and expected constant round complexity. However, absent a rigorous simulation-based definition, the suggested protocols are proven secure in a property-based manner or via ad hoc simulation-based frameworks, therefore guaranteeing limited, if any, composability. In this work, we put forth the first simulation-based treatment of multi-party cryptographic protocols with probabilistic termination. We define secure multi-party computation (MPC) with probabilistic termination in the UC framework and prove a universal composition theorem for probabilistic-termination protocols. Our theorem makes it possible to compile a protocol using deterministic-termination hybrids into a protocol that uses expected-constant-round protocols for emulating these hybrids, preserving the expected round complexity of the calling protocol. We showcase our definitions and compiler by providing the first composable protocols (with simulation-based security proofs) for the following primitives, relying on point-to-point channels: (1) expected-constant-round perfect Byzantine agreement, (2) expected-constant-round perfect parallel broadcast, and (3) perfectly secure MPC with round complexity independent of the number of parties.
2019
JOFC
Round-Efficient Black-Box Construction of Composable Multi-Party Computation
We present a round-efficient black-box construction of a general multi-party computation (MPC) protocol that satisfies composability in the plain model. The security of our protocol is proven in the angel-based UC framework [Prabhakaran and Sahai, STOC ’04] under the minimal assumption of the existence of semi-honest oblivious transfer protocols. The round complexity of our protocol is $$\max(\widetilde{O}(\log^2 n), O(R_{\mathsf{OT}}))$$ when the round complexity of the underlying oblivious transfer protocol is $$R_{\mathsf{OT}}$$. Since constant-round semi-honest oblivious transfer protocols can be constructed under standard assumptions (such as the existence of enhanced trapdoor permutations), our result gives a $$\widetilde{O}(\log^2 n)$$-round protocol under these assumptions. Previously, only an $$O(\max(n^{\epsilon}, R_{\mathsf{OT}}))$$-round protocol was shown, where $$\epsilon > 0$$ is an arbitrary constant. We obtain our MPC protocol by constructing a $$\widetilde{O}(\log^2 n)$$-round CCA-secure commitment scheme in a black-box way under the assumption of the existence of one-way functions.
2019
JOFC
Small CRT-Exponent RSA Revisited
Since May (Crypto ’02) revealed the vulnerability of small CRT-exponent RSA using Coppersmith’s lattice-based method, several papers have studied the problem, and two major improvements have been made. (1) Bleichenbacher and May (PKC ’06) proposed an attack for small $$d_q$$ when the prime factor p is significantly smaller than the other prime factor q; the attack works for $$p < N^{0.468}$$. (2) Jochemsz and May (Crypto ’07) proposed an attack for small $$d_p$$ and $$d_q$$ when the prime factors p and q are balanced; the attack works for $$d_p, d_q < N^{0.073}$$. Although a decade has passed since their proposals, these two attacks are still considered the state of the art, and no improvements have been made thus far. Since the attacks appear to have been studied with all the applicable techniques for Coppersmith’s methods proposed by Durfee–Nguyen (Asiacrypt ’00), Jochemsz–May (Asiacrypt ’06), and Herrmann–May (Asiacrypt ’09, PKC ’10), improving the previous results seems technically hard, and a novel technique is required for further progress. In this paper, we propose two improved attacks on small CRT-exponent RSA: a small $$d_q$$ attack for $$p < N^{0.5}$$ (an improvement of Bleichenbacher–May’s) and a small $$d_p$$ and $$d_q$$ attack for $$d_p, d_q < N^{0.122}$$ (an improvement of Jochemsz–May’s). The latter result also improves on our result in the proceedings version (Eurocrypt ’17), which required $$d_p, d_q < N^{0.091}$$. We use Coppersmith’s lattice-based method to solve modular equations and obtain the improvements from a novel lattice construction that exploits useful algebraic structures of the CRT-RSA key generation equation. We give explicit proofs of our attacks and verify their validity by computer experiments. In addition to the two main attacks, we also propose small $$d_q$$ attacks on several variants of RSA.
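The algebraic structure exploited here comes from the CRT-RSA key generation equations $$e \cdot d_p \equiv 1 \pmod{p-1}$$ and $$e \cdot d_q \equiv 1 \pmod{q-1}$$, i.e., $$e \cdot d_p = 1 + k(p-1)$$ and $$e \cdot d_q = 1 + \ell(q-1)$$ for some integers $$k, \ell$$; when $$d_p$$ and $$d_q$$ are small, the unknowns in these equations are small enough that suitably constructed lattices in Coppersmith’s method can recover them.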
2019
JOFC
Structure-Preserving Signatures on Equivalence Classes and Constant-Size Anonymous Credentials
Structure-preserving signatures (SPS) are a powerful building block for cryptographic protocols. We introduce SPS on equivalence classes (SPS-EQ), which allow joint randomization of messages and signatures. Messages are projective equivalence classes defined on group-element vectors, so multiplying a vector by a scalar yields a different representative of the same class. Our scheme lets one adapt a signature for one representative to a signature for another representative without knowledge of any secret. Moreover, given a signature, an adapted signature for a different representative is indistinguishable from a fresh signature on a random message. We propose a definitional framework for SPS-EQ and an efficient construction in Type-3 bilinear groups, which we prove secure against generic forgers. We also introduce set-commitment schemes that let one open subsets of the committed set. From this and SPS-EQ, we then build an efficient multi-show attribute-based anonymous credential system for an arbitrary number of attributes. Our ABC system avoids costly zero-knowledge proofs and only requires a short interactive proof to thwart replay attacks. It is the first credential system whose bandwidth required for credential showing is independent of the number of its attributes, i.e., constant-size. We propose strengthened game-based security definitions for ABC and prove our scheme anonymous against malicious organizations in the standard model; finally, we discuss a concurrently secure variant in the CRS model.
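Concretely (paraphrasing the message space used in this line of work), messages are vectors $$M \in (\mathbb{G}_1^*)^{\ell}$$ and the equivalence classes are $$[M] = \{\mu \cdot M : \mu \in \mathbb{Z}_p^*\}$$; signature adaptation takes a signature on the representative M and a scalar $$\mu$$ and outputs a valid signature on the representative $$\mu \cdot M$$ of the same class, without using the signing key.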
2019
JOFC
TFHE: Fast Fully Homomorphic Encryption Over the Torus
This work describes a fast fully homomorphic encryption scheme over the torus (TFHE) that revisits, generalizes, and improves the fully homomorphic encryption (FHE) based on GSW and its ring variants. The simplest FHE schemes consist of bootstrapped binary gates. In this gate bootstrapping mode, we show that the scheme FHEW of Ducas and Micciancio (Eurocrypt 2015) can be expressed only in terms of an external product between a GSW and an LWE ciphertext. As a consequence of this result and of other optimizations, we decrease the running time of their bootstrapping from 690 to 13 ms on a single core, using a 16 MB bootstrapping key instead of 1 GB, while preserving the security parameter. In leveled homomorphic mode, we propose two methods to manipulate packed data, in order to decrease the ciphertext expansion and to optimize the evaluation of lookup tables and arbitrary functions in $$\mathrm{RingGSW}$$-based homomorphic schemes. We also extend the automata logic introduced in Gama et al. (Eurocrypt 2016) to the efficient leveled evaluation of weighted automata, and present a new homomorphic counter, called $$\mathrm{TBSR}$$, that supports all the elementary operations that occur in a multiplication. These improvements speed up the evaluation of most arithmetic functions in packed leveled mode, with a noise overhead that remains additive. We then present a new circuit bootstrapping that converts $$\mathsf{LWE}$$ ciphertexts into low-noise $$\mathrm{RingGSW}$$ ciphertexts in just 137 ms, which makes the leveled mode of TFHE composable and is fast enough to speed up arithmetic functions compared to the gate bootstrapping approach. Finally, we provide an alternative practical analysis of LWE-based schemes, which directly relates the security parameter to the error rate of LWE and the entropy of the LWE secret key, and we propose concrete parameter sets and timing comparisons for all our constructions.
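At the core of the gate bootstrapping result is the external product, which multiplies a $$\mathrm{RingGSW}$$ ciphertext C encrypting a small scalar s into an (R)LWE ciphertext c encrypting m, producing an LWE-type encryption of $$s \cdot m$$: roughly, $$C \boxdot c = \mathrm{Dec}_g(c) \cdot C$$, where $$\mathrm{Dec}_g$$ denotes the gadget decomposition of c into small digits. The noise contributed by each such product is only additive, which is what keeps the bootstrapping loop fast.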
2019
JOFC
The Communication Complexity of Private Simultaneous Messages, Revisited
Private simultaneous message (PSM) protocols were introduced by Feige, Kilian, and Naor (STOC ’94) as a minimal non-interactive model for information-theoretic three-party secure computation. While it is known that every function $$f:\{0,1\}^k\times \{0,1\}^k \rightarrow \{0,1\}$$ admits a PSM protocol with exponential communication of $$2^{k/2}$$ (Beimel et al., TCC ’14), the best known (non-explicit) lower-bound is $$3k-O(1)$$ bits. To prove this lower-bound, FKN identified a set of simple requirements, showed that any function that satisfies these requirements is subject to the $$3k-O(1)$$ lower-bound, and proved that a random function is likely to satisfy the requirements. We revisit the FKN lower-bound and prove the following results: (Counterexample) We construct a function that satisfies the FKN requirements but has a PSM protocol with communication of $$2k+O(1)$$ bits, revealing a gap in the FKN proof. (PSM lower-bounds) We show that by imposing additional requirements, the FKN argument can be fixed, leading to a $$3k-O(\log k)$$ lower-bound for a random function. We also get a similar lower-bound for a function that can be computed by a polynomial-size circuit (or even a polynomial-time Turing machine, under standard complexity-theoretic assumptions). This yields the first non-trivial lower-bound for an explicit Boolean function, partially resolving an open problem of Data, Prabhakaran, and Prabhakaran (Crypto ’14, IEEE Information Theory ’16). We further extend these results to the setting of imperfect PSM protocols, which may have small correctness or privacy error. (CDS lower-bounds) We show that the original FKN argument applies (as is) to a weak form of PSM protocols that is strongly related to the setting of Conditional Disclosure of Secrets (CDS). This connection yields a simple combinatorial criterion for establishing linear $$\varOmega(k)$$-bit CDS lower-bounds. As a corollary, we settle the complexity of the inner-product predicate, resolving an open problem of Gay, Kerenidis, and Wee (Crypto ’15).
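In the PSM model, Alice holds x, Bob holds y, and both share a common random string r hidden from the referee; each sends a single message, $$m_A = A(x, r)$$ and $$m_B = B(y, r)$$, and the referee computes $$f(x,y) = \mathrm{Dec}(m_A, m_B)$$ while learning nothing else about x and y. The communication cost discussed above is $$|m_A| + |m_B|$$.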
2019
JOFC
The Magic of ELFs
We introduce the notion of an Extremely Lossy Function (ELF). An ELF is a family of functions whose image size is tunable anywhere from injective to polynomial-sized. Moreover, for any efficient adversary, for a sufficiently large polynomial r (necessarily chosen to be larger than the running time of the adversary), the adversary cannot distinguish the injective case from the case of image size r. We develop a handful of techniques for using ELFs and show that such extreme lossiness is useful for instantiating random oracles in several settings. In particular, we show how to use ELFs to build secure point-function obfuscation with auxiliary input, as well as polynomially many hardcore bits for any one-way function. Such applications were previously known only from strong knowledge assumptions—for example, polynomially many hardcore bits were only known from differing-inputs obfuscation, a notion whose plausibility has been seriously challenged. We also use ELFs to build a simple hash function with output intractability, a new notion we define that may be useful for generating common reference strings. Next, we give a construction of ELFs relying on the exponential hardness of the decisional Diffie–Hellman problem, which is plausible in elliptic curve groups. Combined with the applications above, our work gives several practical constructions relying on qualitatively different—and arguably better—assumptions than prior works.
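Slightly more formally (a paraphrase of the functionality, not the paper’s exact definition): an ELF generator takes a domain size M and a parameter r and outputs a function $$f:[M] \rightarrow [N]$$; for $$r = M$$ the function is injective with overwhelming probability, for $$r < M$$ its image has size at most r, and for every adversary of a given polynomial running time there is a polynomial r for which the two modes cannot be distinguished with noticeable advantage.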
2019
JOFC
Unifying Leakage Models: From Probing Attacks to Noisy Leakage
A recent trend in cryptography is to formally show the leakage resilience of cryptographic implementations in a given leakage model. One of the most prominent leakage models—the so-called bounded leakage model—assumes that the amount of leakage an adversary receives is a priori bounded. Unfortunately, several works have pointed out that the assumption of bounded leakage is hard to verify in practice. A more realistic assumption is that leakages are sufficiently noisy, following the engineering observation that real-world physical leakages are inherently perturbed by physical noise. While already the seminal work of Chari et al. (in: CRYPTO, pp 398–412, 1999) studied the security of side-channel countermeasures in the noisy model, only recently did Prouff and Rivain (in: Johansson T, Nguyen PQ (eds) EUROCRYPT, volume 7881 of Lecture Notes in Computer Science, pp 142–159, Springer, 2013) offer a full formal analysis of the masking countermeasure in a physically motivated noise model. In particular, the authors show that a block-cipher implementation that uses the Boolean masking scheme is secure against a very general class of noisy leakage functions. While this is an important step toward better understanding the security of masking schemes, the analysis of Prouff and Rivain has several shortcomings, in particular its reliance on leak-free gates. In this work, we provide an alternative security proof in the same noise model that overcomes these challenges. We achieve this goal by a new reduction from noisy leakage to the important model of probing adversaries (Ishai et al. in: CRYPTO, pp 463–481, 2003). This reduction is the main technical contribution of our work; it significantly simplifies the formal security analysis of masking schemes against realistic side-channel leakages.
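As a toy illustration of the masking countermeasure analyzed here (a minimal sketch in the simplest setting, not a construction from the paper): a secret is split into n XOR-shares so that any n−1 of them are jointly uniform, which is exactly the property the probing model captures.

```python
# Minimal Boolean masking sketch (illustrative only).
# Any proper subset of the n shares is uniformly distributed and thus
# independent of the secret, which is all an (n-1)-probing adversary sees.
import secrets

def share(secret: int, n: int) -> list[int]:
    """Split a secret byte into n XOR-shares."""
    shares = [secrets.randbelow(256) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last ^= s
    return shares + [last]

def reconstruct(shares: list[int]) -> int:
    """XOR all shares back together to recover the secret."""
    out = 0
    for s in shares:
        out ^= s
    return out

x = 0xA7
sh = share(x, 3)
assert reconstruct(sh) == x  # correctness check
```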
2019
JOFC
Updating Key Size Estimations for Pairings
Recent progress on the Number Field Sieve (NFS) has forced a new estimation of the security of pairings. In this work, we study the best attacks against some of the most popular pairings and propose new key sizes using an analysis that is more precise than that in a recent article by Menezes, Sarkar, and Singh. We also select pairing-friendly curves for standard security levels.
2019
JOFC
Verifiable Random Functions from Non-interactive Witness-Indistinguishable Proofs
Verifiable random functions (VRFs) are pseudorandom functions where the owner of the seed, in addition to computing the function’s value y at any point x, can also generate a non-interactive proof $$\pi$$ that y is correct, without compromising pseudorandomness at other points. As VRFs are a natural primitive with a wide range of applications, considerable effort has been directed toward their construction. While these efforts have resulted in a variety of algebraic constructions (from bilinear maps or the RSA problem), the relation between VRFs and other general primitives is still not well understood. We present new constructions of VRFs from general primitives, the main one being non-interactive witness-indistinguishable proofs (NIWIs). This includes: (1) a selectively secure VRF assuming NIWIs and non-interactive commitments. As usual, the VRF can be made adaptively secure assuming subexponential hardness of the underlying primitives. (2) An adaptively secure VRF assuming (polynomially hard) NIWIs, non-interactive commitments, and (single-key) constrained pseudorandom functions for a restricted class of constraints. The above primitives can be instantiated under various standard assumptions, which yields corresponding VRF instantiations under assumptions different from those known so far. One notable example is a non-uniform construction of VRFs from subexponentially hard trapdoor permutations, or more generally, from verifiable pseudorandom generators (the construction can be made uniform under a standard derandomization assumption). This partially answers an open question by Dwork and Naor (FOCS ’00). The construction and its analysis are quite simple. Both draw from ideas commonly used in the context of indistinguishability obfuscation.
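In terms of syntax, a VRF is a triple of algorithms $$\mathrm{Gen}(1^{\lambda}) \rightarrow (pk, sk)$$, $$\mathrm{Eval}(sk, x) \rightarrow (y, \pi)$$, and $$\mathrm{Verify}(pk, x, y, \pi) \in \{0, 1\}$$, satisfying provability (honestly generated outputs verify), uniqueness (for every pk and x, at most one value y admits an accepting proof), and pseudorandomness (the value at any unqueried point is indistinguishable from random).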
2019
JOFC
What Security Can We Achieve Within 4 Rounds?
Katz and Ostrovsky (Crypto 2004) proved that five rounds are necessary for stand-alone general black-box constructions of secure two-party protocols and at least four rounds are necessary if only one party needs to receive the output. Recently, Ostrovsky, Richelson and Scafuro (Crypto 2015) proved optimality of this result by showing how to realize stand-alone, secure two-party computation under general assumptions (with black-box proof of security) in four rounds where only one party receives the output, and an extension to five rounds where both parties receive the output. In this paper, we study the question of what security is achievable for stand-alone two-party protocols within four rounds and show the following results: 1. A 4-round two-party protocol for coin-tossing that achieves $$1/p$$-security (i.e., simulation fails with probability at most $$1/p + \mathsf{negl}$$), in the presence of malicious corruptions. 2. A 4-round two-party protocol for general functionalities where both parties receive the output, that achieves $$1/p$$-security and privacy in the presence of malicious adversaries corrupting one of the parties, and full security in the presence of non-aborting malicious adversaries corrupting the other party. 3. A 3-round oblivious-transfer protocol that achieves $$1/p$$-security against arbitrary malicious senders, while simultaneously guaranteeing a meaningful notion of privacy against malicious corruptions of either party. 4. Finally, we show that the simulation-based security guarantees for our 3-round protocols are optimal by proving that $$1/p$$-simulation security is impossible to achieve against both parties in three rounds or less when requiring some minimal guarantees on the privacy of their inputs.
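Here, $$1/p$$-security is simulation-based security with an a priori polynomial slack: for a polynomial p fixed in advance, every real-world adversary admits a simulator such that the real and ideal executions can be distinguished with advantage at most $$1/p + \mathsf{negl}$$, in contrast to full security, where the advantage must be negligible.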
2019
JOFC
White-Box Cryptography: Don’t Forget About Grey-Box Attacks
Although all current scientific white-box approaches to standardized cryptographic primitives have been publicly broken, these attacks require knowledge of the internal data representation used by the implementation. In practice, the level of implementation knowledge required is only attainable through significant reverse-engineering efforts. In this paper, we describe new approaches to assessing the security of white-box implementations which require neither knowledge about the look-up tables used nor expensive reverse-engineering efforts. We introduce the differential computation analysis (DCA) attack, the software counterpart of the differential power analysis attack as applied by the cryptographic hardware community. Similarly, the differential fault analysis (DFA) attack is the software counterpart of fault-injection attacks on cryptographic hardware. For DCA, we developed plugins for widely available dynamic binary instrumentation (DBI) frameworks to produce software execution traces which contain information about the memory addresses being accessed. For the DFA attack, we developed modified emulators and plugins for DBI frameworks that allow injecting faults at selected moments within the execution of the encryption or decryption process, as well as a framework to automate static fault injection. To illustrate their effectiveness, we show how DCA and DFA can extract the secret key from numerous publicly available non-commercial white-box implementations of standardized cryptographic algorithms. These approaches allow one to extract the secret key material from white-box implementations significantly faster and without specific knowledge of the white-box design, in an automated or semi-automated manner.
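To make the DCA idea concrete, a hypothetical sketch follows (the trace format, S-box placeholder, and function names are assumptions for illustration, not the paper’s tooling): software traces of memory/register bits are grouped by a key-dependent prediction and scored by a difference of means, exactly as in classical DPA, but on noise-free software traces.

```python
# Hypothetical DCA sketch: difference-of-means key recovery on software
# execution traces (0/1 bit samples), assuming a first-round S-box target.
import numpy as np

SBOX = list(range(256))  # placeholder: substitute the target cipher's S-box

def dca_first_key_byte(plaintexts: np.ndarray, traces: np.ndarray) -> int:
    """plaintexts: shape (N,), first input byte of each execution;
    traces: shape (N, L) matrix of 0/1 samples recorded via DBI."""
    best_key, best_score = 0, -1.0
    for k in range(256):
        # Predict one bit of the first-round S-box output under key guess k.
        pred = np.array([SBOX[p ^ k] & 1 for p in plaintexts])
        if pred.all() or not pred.any():
            continue  # cannot split the traces into two groups
        # Per-sample difference of means between the two predicted groups;
        # the correct guess yields a pronounced peak somewhere in the trace.
        diff = np.abs(traces[pred == 1].mean(axis=0) -
                      traces[pred == 0].mean(axis=0))
        score = float(diff.max())
        if score > best_score:
            best_key, best_score = k, score
    return best_key
```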