International Association for Cryptologic Research

CryptoDB

Itai Dinur

Publications

Year
Venue
Title
2024
EUROCRYPT
Tight Indistinguishability Bounds for the XOR of Independent Random Permutations by Fourier Analysis
Itai Dinur
The XOR of two independent permutations (XoP) is a well-known construction for achieving security beyond the birthday bound when implementing a pseudorandom function using a block cipher (i.e., a pseudorandom permutation). The idealized construction (where the permutations are uniformly chosen and independent) and its variants have been extensively analyzed over nearly 25 years. The best-known asymptotic information-theoretic indistinguishability bound for the XoP construction is $O(q/2^{1.5n})$, derived by Eberhard in 2017. A generalization of the XoP construction outputs the XOR of $r \geq 2$ independent permutations, and has also received significant attention in both the single-user and multi-user settings. In particular, for $r = 3$, the best-known bound (obtained by Choi et al. [ASIACRYPT'22]) is about $q^2/2^{2.5 n}$ in the single-user setting and $\sqrt{u} q_{\max}^2/2^{2.5 n}$ in the multi-user setting (where $u$ is the number of users and $q_{\max}$ is the number of queries per user). In this paper, we prove an indistinguishability bound of $q/2^{(r - 0.5)n}$ for the (generalized) XoP construction in the single-user setting, and a bound of $\sqrt{u} q_{\max}/2^{(r - 0.5)n}$ in the multi-user setting. In particular, for $r=2$, we obtain the bounds $q/2^{1.5n}$ and $\sqrt{u} q_{\max}/2^{1.5n}$ in the single-user and multi-user settings, respectively. For $r=3$ the corresponding bounds are $q/2^{2.5n}$ and $\sqrt{u} q_{\max}/2^{2.5n}$. All of these bounds hold assuming $q < 2^{n}/2$ (or $q_{\max} < 2^{n}/2$). Compared to previous works, we improve all the best-known bounds for the (generalized) XoP construction in the multi-user setting, and the best-known bounds for the generalized XoP construction for $r \geq 3$ in the single-user setting (assuming $q \geq 2^{n/2}$). For the basic two-permutation XoP construction in the single-user setting, our concrete bound of $q/2^{1.5n}$ stands in contrast to the asymptotic bound of $O(q/2^{1.5n})$ by Eberhard. Since all of our bounds are matched (up to constant factors) for $q > 2^{n/2}$ by attacks published by Patarin in 2008 (and their generalizations to the multi-user setting), they are all tight. We obtain our results by Fourier analysis of Boolean functions. Most of our technical work involves bounding (sums of) Fourier coefficients of the density function associated with sampling without replacement. While the proof of Eberhard relies on similar bounds, our proof is elementary and significantly simpler.
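For intuition, here is a minimal Python model of the idealized (generalized) XoP construction, with a tiny, purely illustrative choice of $n$ so the permutation tables fit in memory. The demo checks the classical full-codebook property that underlies Patarin-style distinguishers at $q$ close to $2^n$: the XOR of all XoP outputs is zero, whereas for a random function it is uniform.

```python
import random
from functools import reduce

def random_permutation(n_bits, seed):
    """A uniformly random permutation of {0, ..., 2^n - 1} as a lookup table."""
    rng = random.Random(seed)
    table = list(range(2 ** n_bits))
    rng.shuffle(table)
    return table

def xop(perms, x):
    """Generalized XoP: XOR the outputs of r independent permutations at x."""
    return reduce(lambda acc, p: acc ^ p[x], perms, 0)

# Toy demo (n = 8, r = 2): each permutation's outputs XOR to 0 over the full
# codebook, so all 2^n XoP outputs XOR to 0 as well -- a distinguishing
# property that kicks in only at q ~ 2^n, consistent with the tight bounds.
n = 8
perms = [random_permutation(n, seed) for seed in (1, 2)]
assert reduce(lambda acc, x: acc ^ xop(perms, x), range(2 ** n), 0) == 0
```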
2023
EUROCRYPT
On Differential Privacy and Adaptive Data Analysis with Bounded Space
We study the space complexity of the two related fields of differential privacy and adaptive data analysis. Specifically: (1) Under standard cryptographic assumptions, we show that there exists a problem $P$ that requires exponentially more space to be solved efficiently with differential privacy, compared to the space needed without privacy. To the best of our knowledge, this is the first separation between the space complexity of private and non-private algorithms. (2) The line of work on adaptive data analysis focuses on understanding the number of samples needed for answering a sequence of adaptive queries. We revisit previous lower bounds at a foundational level, and show that they are a consequence of a space bottleneck rather than a sampling bottleneck. To obtain our results, we define and construct an encryption scheme with multiple keys that is built to withstand a limited amount of key leakage in a very particular way.
2023
EUROCRYPT
Efficient Detection of High Probability Statistical Properties of Cryptosystems via Surrogate Differentiation
A central problem in cryptanalysis is to find all the significant deviations from randomness in a given $n$-bit cryptographic primitive. When $n$ is small (e.g., an $8$-bit S-box), this is easy to do, but for large $n$, the only practical way to find such statistical properties was to exploit the internal structure of the primitive and to speed up the search with a variety of heuristic rules of thumb. However, such bottom-up techniques can miss many properties, especially in cryptosystems which are designed to have hidden trapdoors. In this paper we consider the top-down version of the problem in which the cryptographic primitive is given as a structureless black box, and reduce the complexity of the best known techniques for finding all its significant differential and linear properties by a large factor of $2^{n/2}$. Our main new tool is the idea of using surrogate differentiation. In the context of finding differential properties, it enables us to simultaneously find information about all the differentials of the form $f(x) \oplus f(x \oplus \alpha)$ in all possible directions $\alpha$ by differentiating $f$ in a single arbitrarily chosen direction $\gamma$ (which is unrelated to the $\alpha$'s). In the context of finding linear properties, surrogate differentiation can be combined in a highly effective way with the Fast Fourier Transform. For $64$-bit cryptographic primitives, this technique makes it possible to automatically find in about $2^{64}$ time all their differentials with probability $p \geq 2^{-32}$ and all their linear approximations with bias $|p| \geq 2^{-16}$; previous algorithms for these problems required at least $2^{96}$ time. Similar techniques can be used to significantly improve the best known time complexities of finding related-key differentials, second-order differentials, and boomerangs. In addition, we show how to run variants of these algorithms that require no memory, and how to detect such statistical properties even in trapdoored cryptosystems whose designers specifically try to evade our techniques.
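To make the core observation concrete, the following toy Python sketch (small $n$ only; the black-box function and all parameters are illustrative) tabulates the surrogate derivative $g(x) = f(x) \oplus f(x \oplus \gamma)$ in one fixed direction $\gamma$. If the differential $\alpha \rightarrow \beta$ holds for both pairs $(x, x \oplus \alpha)$ and $(x \oplus \gamma, x \oplus \gamma \oplus \alpha)$, then $g(x) = g(x \oplus \alpha)$, so collision counting on $g$ surfaces candidate high-probability differentials in all directions $\alpha$ at once.

```python
from collections import defaultdict
from itertools import combinations

def candidate_differentials(f, n_bits, gamma=1):
    """Toy surrogate differentiation: bucket every x by the surrogate
    derivative g(x) = f(x) ^ f(x ^ gamma), then count the input difference
    x1 ^ x2 of every colliding pair. High counts flag candidate
    high-probability differentials, without looping over all alpha."""
    buckets = defaultdict(list)
    for x in range(2 ** n_bits):
        buckets[f(x) ^ f(x ^ gamma)].append(x)
    counts = defaultdict(int)
    for xs in buckets.values():
        for x1, x2 in combinations(xs, 2):
            alpha = x1 ^ x2
            if alpha != gamma:      # x and x ^ gamma always collide in g
                counts[alpha] += 1
    return counts

# Example on a hypothetical 8-bit black box; real 64-bit targets need the
# paper's time/memory optimizations on top of this basic counting step.
f = lambda x: ((x * 31 + 17) % 256) ^ (x >> 3)
top = sorted(candidate_differentials(f, 8).items(), key=lambda kv: -kv[1])[:5]
```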
2022
TOSC
2022
EUROCRYPT
Refined Cryptanalysis of the GPRS Ciphers GEA-1 and GEA-2
Itai Dinur, Dor Amzaleg
At EUROCRYPT 2021, Beierle et al. presented the first public analysis of the GPRS ciphers GEA-1 and GEA-2. They showed that although GEA-1 uses a 64-bit session key, it can be recovered with the knowledge of only 65 bits of keystream in time $2^{40}$ using $44$ GiB of memory. The attack exploits a weakness in the initialization process of the cipher that was presumably hidden intentionally by the designers to reduce its security. While no such weakness was found for GEA-2, the authors presented an attack on this cipher with time complexity of about $2^{45}$. The main practical obstacle is the required knowledge of 12800 bits of keystream used to encrypt a full GPRS frame. Variants of the attack are applicable (but more expensive) when given fewer consecutive keystream bits, or when the available keystream is fragmented (it contains no long consecutive block). In this paper, we improve and complement the previous analysis of GEA-1 and GEA-2. For GEA-1, we devise an attack in which the memory complexity is reduced by a factor of about $2^{13} = 8192$ from $44$ GiB to about 4 MiB, while the time complexity remains $2^{40}$. Our implementation recovers the GEA-1 session key in an average time of 2.5 hours on a modern laptop. For GEA-2, we describe two attacks that complement the analysis of Beierle et al. The first attack obtains a linear tradeoff between the number of consecutive keystream bits available to the attacker (denoted by $\ell$) and the time complexity. It improves upon the previous attack in the range of (roughly) $\ell \leq 7000$. Specifically, for $\ell = 1100$ the complexity of our attack is about $2^{54}$, while the previous one is not faster than the $2^{64}$ brute force complexity. In case the available keystream is fragmented, our second attack reduces the memory complexity of the previous attack by a factor of $512$ from 32 GiB to 64 MiB with no time complexity penalty. Our attacks are based on new combinations of stream cipher cryptanalytic techniques and algorithmic techniques used in other contexts (such as solving the $k$-XOR problem).
2021
TOSC
2021
EUROCRYPT
Cryptanalytic Applications of the Polynomial Method for Solving Multivariate Equation Systems over GF(2)
Itai Dinur
At SODA 2017, Lokshtanov et al. presented the first worst-case algorithms with exponential speedup over exhaustive search for solving polynomial equation systems of degree $d$ in $n$ variables over finite fields. These algorithms were based on the polynomial method in circuit complexity, which is a technique for proving circuit lower bounds that has recently been applied in algorithm design. Subsequent works further improved the asymptotic complexity of polynomial method-based algorithms for solving equations over the field $\mathbb{F}_2$. However, the asymptotic complexity formulas of these algorithms hide significant low-order terms, and hence they outperform exhaustive search only for very large values of $n$. In this paper, we devise a concretely efficient polynomial method-based algorithm for solving multivariate equation systems over $\mathbb{F}_2$. We analyze our algorithm's performance for solving random equation systems, and bound its complexity by about $n^2 \cdot 2^{0.815n}$ bit operations for $d = 2$ and $n^2 \cdot 2^{\left(1 - 1/(2.7d)\right) n}$ for any $d \geq 2$. We apply our algorithm in cryptanalysis of recently proposed instances of the Picnic signature scheme (an alternate third-round candidate in NIST's post-quantum standardization project) that are based on the security of the LowMC block cipher. Consequently, we show that 2 out of 3 new instances do not achieve their claimed security level. As a secondary application, we also improve the best-known preimage attacks on several round-reduced variants of the Keccak hash function. Our algorithm combines various techniques used in previous polynomial method-based algorithms with new optimizations, some of which exploit randomness assumptions about the system of equations. In its cryptanalytic application to Picnic, we demonstrate how to further optimize the algorithm for solving structured equation systems that are constructed from specific cryptosystems.
2021
CRYPTO
MPC-Friendly Symmetric Cryptography from Alternating Moduli: Candidates, Protocols, and Applications
We study new candidates for symmetric cryptographic primitives that leverage alternation between linear functions over $\mathbb{Z}_2$ and $\mathbb{Z}_3$ to support fast protocols for secure multiparty computation (MPC). This continues the study of weak pseudorandom functions of this kind initiated by Boneh et al. (TCC 2018) and Cheon et al. (PKC 2021). We make the following contributions. Candidates: we propose new designs of symmetric primitives based on alternating moduli, including candidate one-way functions, pseudorandom generators, and weak pseudorandom functions, and we propose concrete parameters based on cryptanalysis. Protocols: we provide a unified approach for securely evaluating modulus-alternating primitives in different MPC models; for the original candidate of Boneh et al., our protocols obtain at least a 2x improvement in all performance measures, and we report efficiency benchmarks of an optimized implementation. Applications: we showcase the usefulness of our candidates for a variety of applications, including short "Picnic-style" signature schemes, as well as protocols for oblivious pseudorandom functions, hierarchical key derivation, and distributed key generation for function secret sharing.
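The alternating-moduli structure is easy to see in code. Below is a schematic NumPy sketch; the matrix shapes, the seeding, and the output domain are illustrative placeholders, not the paper's candidates or parameters. The point is only the composition of a linear layer over $\mathbb{Z}_2$ with a linear layer over $\mathbb{Z}_3$, so that neither field sees the whole computation as linear.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, t = 256, 256, 81                # illustrative sizes only

A = rng.integers(0, 2, size=(m, n))   # secret matrix over Z_2
B = rng.integers(0, 3, size=(t, m))   # secret matrix over Z_3

def alternating_moduli(x):
    """x: 0/1 vector of length n. A linear map mod 2, whose output bits are
    then reinterpreted as Z_3 elements and fed to a linear map mod 3."""
    w = A.dot(x) % 2                  # linear over Z_2
    return B.dot(w) % 3               # linear over Z_3

x = rng.integers(0, 2, size=n)
y = alternating_moduli(x)             # length-t vector over Z_3
```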
2021
TCC
Distributed Merkle's Puzzles
Itai Dinur, Ben Hasson
Merkle's puzzles were proposed in 1974 by Ralph Merkle as a key agreement protocol between two players based on symmetric-key primitives. In order to agree on a secret key, each player makes $T$ queries to a random function (oracle), while any eavesdropping adversary has to make $\Omega(T^2)$ queries to the random oracle in order to recover the key with high probability. The quadratic gap between the query complexity of the honest players and the eavesdropper was shown to be optimal by Barak and Mahmoody [CRYPTO'09]. We consider Merkle's puzzles in a distributed setting, where the goal is to allow all pairs among $M$ honest players with access to a random oracle to agree on secret keys. We devise a protocol in this setting, where each player makes $T$ queries to the random oracle and communicates at most $T$ bits, while any adversary has to make $\Omega(M \cdot T^2)$ queries to the random oracle (up to logarithmic factors) in order to recover any one of the keys with high probability. Therefore, the amortized (per-player) complexity of achieving secure communication (for a fixed security level) decreases with the size of the network. Finally, we prove that the gap of $T \cdot M$ between the query complexity of each honest player and the eavesdropper is optimal.
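For intuition, here is a toy single-pair version of the classical oracle-model protocol (the hash stand-in, labels, and parameters are illustrative). Each player queries $T$ random points in a domain of size $T^2$, so a common point exists with constant probability, while an eavesdropper who sees only the published tags must search on the order of $T^2$ points. The paper's distributed protocol amortizes this over all $M$ players.

```python
import hashlib
import random

def H(label, x):
    """Stand-in for the random oracle."""
    return hashlib.sha256(f"{label}|{x}".encode()).hexdigest()

def merkle_pair(T, seed_a=1, seed_b=2):
    """Toy Merkle puzzles: Alice and Bob each make T oracle queries over a
    domain of size T^2; they agree on H('key', x) for a common point x."""
    N = T * T
    ra, rb = random.Random(seed_a), random.Random(seed_b)
    alice = {H("tag", x): x for x in (ra.randrange(N) for _ in range(T))}
    for _ in range(T):                # Bob searches for a tag collision
        x = rb.randrange(N)
        if H("tag", x) in alice:
            return H("key", x)        # shared secret key
    return None                       # no common point this run; retry

key = merkle_pair(T=256)              # succeeds with constant probability
```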
2020
TOSC
2020
EUROCRYPT
Tight Time-Space Lower Bounds for Finding Multiple Collision Pairs and Their Applications
Itai Dinur
We consider a collision search problem (CSP), where given a parameter $C$, the goal is to find $C$ collision pairs in a random function $f:[N] \rightarrow [N]$ (where $[N] = \{0,1,\ldots,N-1\}$) using $S$ bits of memory. Algorithms for CSP have numerous cryptanalytic applications, such as space-efficient attacks on double and triple encryption. The best known algorithm for CSP is parallel collision search (PCS), published by van Oorschot and Wiener, which achieves the time-space tradeoff $T^2 \cdot S = \tilde{O}(C^2 \cdot N)$. In this paper, we prove that any algorithm for CSP satisfies $T^2 \cdot S = \tilde{\Omega}(C^2 \cdot N)$, hence the best known time-space tradeoff is optimal (up to poly-logarithmic factors in $N$). On the other hand, we give strong evidence that proving similar unconditional time-space tradeoff lower bounds on CSP applications (such as breaking double and triple encryption) may be very difficult, and would imply a breakthrough in complexity theory. Hence, we propose a new restricted model of computation and prove that under this model, the best known time-space tradeoff attack on double encryption is optimal.
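A compact sketch of the PCS algorithm that the lower bound is matched against (the distinguished-point rate and chain-length cap are illustrative choices): chains of $f$-iterates are stored only by their distinguished endpoints, so memory scales with the number of chains rather than with the number of $f$-evaluations.

```python
import random

def pcs(f, N, num_pairs, dp_bits=6, seed=0):
    """Toy van Oorschot-Wiener parallel collision search: find collision
    pairs of f: [N] -> [N], storing only endpoint -> (start, length)."""
    rng = random.Random(seed)
    mask = (1 << dp_bits) - 1
    chains, pairs = {}, []
    while len(pairs) < num_pairs:
        start = x = rng.randrange(N)
        length = 0
        while (x & mask) != 0 and length < (20 << dp_bits):
            x, length = f(x), length + 1
        if (x & mask) != 0:
            continue                      # abandon rare over-long chains
        if x not in chains:
            chains[x] = (start, length)   # first chain to this endpoint
            continue
        s1, l1 = chains[x]                # two chains share an endpoint:
        s2, l2 = start, length            # re-walk them to locate the collision
        if l1 < l2:
            (s1, l1), (s2, l2) = (s2, l2), (s1, l1)
        for _ in range(l1 - l2):
            s1 = f(s1)                    # align the longer chain
        if s1 == s2:
            continue                      # one chain is a suffix of the other
        while f(s1) != f(s2):
            s1, s2 = f(s1), f(s2)
        pairs.append((s1, s2))            # distinct inputs with f(s1) == f(s2)
    return pairs
```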
2020
EUROCRYPT
On the Streaming Indistinguishability of a Random Permutation and a Random Function
Itai Dinur
An adversary with $S$ bits of memory obtains a stream of $Q$ elements that are uniformly drawn from the set $\{1,2,\ldots,N\}$, either with or without replacement. This corresponds to sampling $Q$ elements using either a random function or a random permutation. The adversary's goal is to distinguish between these two cases. This problem was first considered by Jaeger and Tessaro (EUROCRYPT 2019), who proved that the adversary's advantage is upper bounded by $\sqrt{Q \cdot S/N}$. Jaeger and Tessaro used this bound as a streaming switching lemma that allowed them to prove that known time-memory tradeoff attacks on several modes of operation (such as counter-mode) are optimal up to a factor of $O(\log N)$ if $Q \cdot S \approx N$. However, the bound's proof assumed an unproven combinatorial conjecture. Moreover, if $Q \cdot S \ll N$ there is a gap between the upper bound of $\sqrt{Q \cdot S/N}$ and the $Q \cdot S/N$ advantage obtained by known attacks. In this paper, we prove a tight upper bound (up to poly-logarithmic factors) of $O(\log Q \cdot Q \cdot S/N)$ on the adversary's advantage in the streaming distinguishing problem. The proof does not require a conjecture and is based on a hybrid argument that gives rise to a reduction from the unique-disjointness communication complexity problem to streaming.
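The matching attack is simple enough to state in a few lines. This is a hedged sketch: the model measures memory in bits, so a set of roughly $S$ remembered elements stands in for $S$ bits up to a logarithmic factor. Remembering a fixed prefix of the stream and flagging any repeat yields an advantage on the order of $Q \cdot S/N$.

```python
def distinguish(stream, S):
    """Memory-bounded streaming distinguisher: a repeat among the ~S
    remembered elements is impossible for a permutation (sampling without
    replacement), so any repeat certifies a random function."""
    seen = set()
    for x in stream:
        if x in seen:
            return "function"
        if len(seen) < S:
            seen.add(x)            # remember only the first ~S elements
    return "permutation"           # no repeat observed: guess permutation
```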
2020
CRYPTO
Out of Oddity -- New Cryptanalytic Techniques against Symmetric Primitives Optimized for Integrity Proof Systems
The security and performance of many integrity proof systems, such as SNARKs, STARKs and Bulletproofs, depend heavily on the underlying hash function. For this reason, several new proposals have recently been developed. These primitives obviously require an in-depth security evaluation, especially since their implementation constraints have led to less standard design approaches. This work compares the security levels offered by two recent families of such primitives, namely GMiMC and HadesMiMC. We exhibit low-complexity distinguishers against the GMiMC and HadesMiMC permutations for most parameters proposed in recently launched public challenges for STARK-friendly hash functions. In the more concrete setting of the sponge construction corresponding to the practical use in the ZK-STARK protocol, we present a practical collision attack on a round-reduced version of GMiMC and a preimage attack on some instances of HadesMiMC. To achieve those results, we adapt and generalize several cryptographic techniques to fields of odd characteristic.
2019
EUROCRYPT
Linear Equivalence of Block Ciphers with Partial Non-Linear Layers: Application to LowMC
LowMC is a block cipher family designed in 2015 by Albrecht et al. It is optimized for practical instantiations of multi-party computation, fully homomorphic encryption, and zero-knowledge proofs. LowMC is used in the Picnic signature scheme, submitted to NIST's post-quantum standardization project, and is a substantial building block in other novel post-quantum cryptosystems. Many LowMC instances use a relatively recent design strategy (initiated by Gérard et al. at CHES 2013) of applying the non-linear layer to only a part of the state in each round, where the shortage of non-linear operations is partially compensated by heavy linear algebra. Since the high linear algebra complexity has been a bottleneck in several applications, one of the open questions raised by the designers was to reduce it, without introducing additional non-linear operations (or compromising security). In this paper, we consider LowMC instances with block size $n$, partial non-linear layers of size $s \le n$ and $r$ encryption rounds. We redesign LowMC's linear components in a way that preserves its specification, yet improves LowMC's performance in essentially every aspect. Most of our optimizations are applicable to all SP-networks with partial non-linear layers and shed new light on this relatively new design methodology. Our main result shows that when $s < n$, each LowMC instance belongs to a large class of equivalent instances that differ in their linear layers. We then select a representative instance from this class for which encryption (and decryption) can be implemented much more efficiently than for an arbitrary instance. This yields a new encryption algorithm that is equivalent to the standard one, but reduces the evaluation time and storage of the linear layers from $r \cdot n^2$ bits to about $r \cdot n^2 - (r-1)(n-s)^2$. Additionally, we reduce the size of LowMC's round keys and constants and optimize its key schedule and instance generation algorithms. All of these optimizations give substantial improvements for small $s$ and a reasonable choice of $r$. Finally, we formalize the notion of linear equivalence of block ciphers and prove the optimality of some of our results. Comprehensive benchmarking of our optimizations in various LowMC applications (such as Picnic) reveals improvements by factors that typically range between 2x and 40x in runtime and memory consumption.
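The design strategy in question is easy to sketch. Below is a schematic round, not LowMC's actual specification; the sbox and linear_layer callables are placeholders. The non-linear layer touches only $s$ of the $n$ state bits, and a dense linear layer then mixes everything, which is exactly the $r \cdot n^2$ cost that the equivalent-instance result reduces.

```python
def partial_round(state, round_key, sbox, linear_layer, s):
    """One round of an SP-network with a partial non-linear layer: apply
    S-boxes to the s low bits only, pass the remaining bits through
    unchanged, then mix the full n-bit state and add the round key."""
    lo = state & ((1 << s) - 1)
    state = (state ^ lo) | sbox(lo)   # non-linear layer on s bits only
    state = linear_layer(state)       # dense n x n GF(2) map: the bottleneck
    return state ^ round_key
```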
2019
EUROCRYPT
Multi-target Attacks on the Picnic Signature Scheme and Related Protocols
Itai Dinur, Niv Nadler
Picnic is a signature scheme that was presented at ACM CCS 2017 by Chase et al. and submitted to NIST's post-quantum standardization project. Among all submissions to NIST's project, Picnic is one of the most innovative, making use of recent progress in construction of practically efficient zero-knowledge (ZK) protocols for general circuits. In this paper, we devise multi-target attacks on Picnic and its underlying ZK protocol, ZKB++. Given access to $S$ signatures, produced by a single or by several users, our attack can (information theoretically) recover the $\kappa$-bit signing key of a user in complexity of about $2^{\kappa - 7}/S$. This is faster than Picnic's claimed $2^{\kappa}$ security against classical (non-quantum) attacks by a factor of $2^7 \cdot S$ (as each signature contains about $2^7$ attack targets). Whereas in most multi-target attacks, the attacker can easily sort and match the available targets, this is not the case in our attack on Picnic, as different bits of information are available for each target. Consequently, it is challenging to reach the information theoretic complexity in a computational model, and we had to perform cryptanalytic optimizations by carefully analyzing ZKB++ and its underlying circuit. Our best attack for $\kappa = 128$ has time complexity of $T = 2^{77}$ for $S = 2^{64}$. Alternatively, we can reach the information theoretic complexity of $T = 2^{64}$ for $S = 2^{57}$, given that all signatures are produced with the same signing key. Our attack exploits a weakness in the way that the Picnic signing algorithm uses a pseudo-random generator. The weakness is fixed in the recent Picnic 2.0 version. In addition to our attack on Picnic, we show that a recently proposed improvement of the ZKB++ protocol (due to Katz, Kolesnikov and Wang) is vulnerable to a similar multi-target attack.
2019
JOFC
Efficient Dissection of Bicomposite Problems with Cryptanalytic Applications
In this paper, we show that a large class of diverse problems have a bicomposite structure which makes it possible to solve them with a new type of algorithm called dissection, which has much better time/memory tradeoffs than previously known algorithms. A typical example is the problem of finding the key of multiple encryption schemes with $r$ independent $n$-bit keys. All the previous error-free attacks required time $T$ and memory $M$ satisfying $TM = 2^{rn}$, and even if "false negatives" are allowed, no attack could achieve $TM < 2^{3rn/4}$. Our new technique yields the first algorithm which never errs and finds all the possible keys with a smaller product of $TM$, such as $T = 2^{4n}$ time and $M = 2^{n}$ memory for breaking the sequential execution of $r = 7$ block ciphers. The improvement ratio we obtain increases in an unbounded way as $r$ increases, and if we allow algorithms which can sometimes miss solutions, we can get even better tradeoffs by combining our dissection technique with parallel collision search. To demonstrate the generality of the new dissection technique, we show how to use it in a generic way in order to improve rebound attacks on hash functions and to solve with better time complexities (for small memory complexities) hard combinatorial search problems, such as the well-known knapsack problem.
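For orientation, the classical meet-in-the-middle attack on double encryption is the kind of tradeoff point that dissection improves upon for larger $r$. This is a minimal sketch of that baseline (a toy cipher interface is assumed), not of dissection itself: with $T \approx M \approx 2^n$ it sits exactly on the $TM = 2^{rn}$ curve for $r = 2$.

```python
def mitm_double(enc, dec, key_bits, pt, ct):
    """Meet-in-the-middle on E_{k2}(E_{k1}(pt)) = ct: tabulate the forward
    half, scan the backward half. Time and memory are both ~2^{key_bits},
    so T*M = 2^{2n} -- the error-free benchmark dissection beats for r > 2."""
    forward = {enc(k1, pt): k1 for k1 in range(1 << key_bits)}
    return [(forward[m], k2) for k2 in range(1 << key_bits)
            if (m := dec(k2, ct)) in forward]   # candidate key pairs
```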
2019
JOFC
An Optimal Distributed Discrete Log Protocol with Applications to Homomorphic Secret Sharing
The distributed discrete logarithm (DDL) problem was introduced by Boyle, Gilboa and Ishai at CRYPTO 2016. A protocol solving this problem was the main tool used in the share conversion procedure of their homomorphic secret sharing (HSS) scheme which allows non-interactive evaluation of branching programs among two parties over shares of secret inputs. Let $g$ be a generator of a multiplicative group $\mathbb{G}$. Given a random group element $g^{x}$ and an unknown integer $b \in [-M,M]$ for a small $M$, two parties $A$ and $B$ (that cannot communicate) successfully solve DDL if $A(g^{x}) - B(g^{x+b}) = b$. Otherwise, the parties err. In the DDL protocol of Boyle et al., $A$ and $B$ run in time $T$ and have error probability that is roughly linear in $M/T$. Since it has a significant impact on the HSS scheme's performance, a major open problem raised by Boyle et al. was to reduce the error probability as a function of $T$. In this paper we devise a new DDL protocol that substantially reduces the error probability to $O(M \cdot T^{-2})$. Our new protocol improves the asymptotic evaluation time complexity of the HSS scheme by Boyle et al. on branching programs of size $S$ from $O(S^2)$ to $O(S^{3/2})$. We further show that our protocol is optimal up to a constant factor for all relevant cryptographic group families, unless one can solve the discrete logarithm problem in a short interval of length $R$ in time $o(\sqrt{R})$. Our DDL protocol is based on a new type of random walk that is composed of several iterations in which the expected step length gradually increases. We believe that this random walk is of independent interest and will find additional applications.
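The non-interactive agreement at the heart of DDL is easy to model. Below is a toy version of the basic distinguished-point protocol with the original roughly $M/T$ error; the group, hash stand-in, and parameters are illustrative, and the paper's improved protocol replaces this fixed-rate walk with iterations of growing step length.

```python
import hashlib

def ddl(h, g, p, T):
    """Basic DDL party: from group element h, repeatedly multiply by g until
    a pseudorandom 'distinguished' predicate (density ~1/T) fires, and output
    the step count. A at g^x and B at g^{x+b} (0 <= b <= M) walk onto the
    same distinguished point unless one lands inside the length-b gap, so
    A's count minus B's count equals b with probability roughly 1 - M/T."""
    steps = 0
    while True:
        tag = hashlib.sha256(str(h).encode()).digest()
        if int.from_bytes(tag[:8], "big") % T == 0:
            return steps
        h = (h * g) % p
        steps += 1

# Toy demo in Z_p^* with illustrative parameters: the difference of the two
# (non-communicating) outputs recovers b, except with probability ~b/T.
p, g, x, b, T = 1_000_003, 5, 424242, 3, 1000
out = ddl(pow(g, x, p), g, p, T) - ddl(pow(g, x + b, p), g, p, T)  # == b w.h.p.
```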
2019
JOFC
Cryptanalytic Time–Memory–Data Trade-offs for FX-Constructions and the Affine Equivalence Problem
Itai Dinur
The FX-construction was proposed in 1996 by Kilian and Rogaway as a generalization of the DESX scheme. The construction increases the security of an $n$-bit core block cipher with a $\kappa$-bit key by using two additional $n$-bit masking keys. Recently, several concrete instances of the FX-construction were proposed, including PRINCE, PRIDE and MANTIS (presented at ASIACRYPT 2012, CRYPTO 2014 and CRYPTO 2016, respectively). In this paper, we devise new cryptanalytic time–memory–data trade-off attacks on FX-constructions. By fine-tuning the parameters to the recent FX-construction proposals, we show that the security margin of these ciphers against practical attacks is smaller than expected. Our techniques combine a special form of time–memory–data trade-offs, typically applied to stream ciphers, with a cryptanalytic technique by Fouque, Joux and Mavromati. In the final part of the paper, we show that the techniques we use in cryptanalysis of the FX-construction are applicable to additional schemes. In particular, we use related methods in order to devise new time–memory trade-offs for solving the affine equivalence problem. In this problem, the input consists of two functions $F,G: \{0,1\}^n \rightarrow \{0,1\}^n$, and the goal is to determine whether there exist invertible affine transformations $A_1,A_2$ over $GF(2)^n$ such that $G = A_2 \circ F \circ A_1$.
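The construction itself is a one-liner; here is a minimal sketch with a placeholder core-cipher interface (the callables and bit-width conventions are assumptions of the sketch).

```python
def fx_encrypt(core_enc, k, k1, k2, x):
    """FX-construction (Kilian-Rogaway): whiten an n-bit core cipher E_k
    with two extra n-bit masking keys: FX_{k,k1,k2}(x) = k2 XOR E_k(x XOR k1)."""
    return k2 ^ core_enc(k, x ^ k1)

def fx_decrypt(core_dec, k, k1, k2, y):
    """Inverse direction: x = k1 XOR D_k(y XOR k2)."""
    return k1 ^ core_dec(k, y ^ k2)
```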
2019
JOFC
Generic Attacks on Hash Combiners
Hash combiners are a practical way to make cryptographic hash functions more tolerant to future attacks and compatible with existing infrastructure. A combiner combines two or more hash functions in a way that is hopefully more secure than each of the underlying hash functions, or at least remains secure as long as one of them is secure. Two classical hash combiners are the exclusive-or (XOR) combiner $\mathcal{H}_1(M) \oplus \mathcal{H}_2(M)$ and the concatenation combiner $\mathcal{H}_1(M) \Vert \mathcal{H}_2(M)$. Both of them process the same message using the two underlying hash functions in parallel. Apart from parallel combiners, there are also cascade constructions sequentially calling the underlying hash functions to process the message repeatedly, such as Hash-Twice $\mathcal{H}_2(\mathcal{H}_1(IV, M), M)$ and the Zipper hash $\mathcal{H}_2(\mathcal{H}_1(IV, M), \overleftarrow{M})$, where $\overleftarrow{M}$ is the reverse of the message $M$. In this work, we study the security of these hash combiners by devising the best-known generic attacks. The results show that the security of most of the combiners is not as high as commonly believed. We summarize our attacks and their computational complexities (ignoring the polynomial factors) as follows:

1. Several generic preimage attacks on the XOR combiner: a first attack with a best-case complexity of $2^{5n/6}$ obtained for messages of length $2^{n/3}$, which relies on a novel technical tool named the interchange structure and is applicable for combiners whose underlying hash functions follow the Merkle–Damgård construction or the HAIFA framework; a second attack with a best-case complexity of $2^{2n/3}$ obtained for messages of length $2^{n/2}$, which exploits properties of functional graphs of random mappings and achieves a significant improvement over the first attack, but is only applicable when the underlying hash functions use the Merkle–Damgård construction; and an improvement upon the second attack with a best-case complexity of $2^{5n/8}$ obtained for messages of length $2^{5n/8}$, which further exploits properties of functional graphs of random mappings and uses longer messages. These attacks show a rather surprising result: regarding preimage resistance, the sum of two $n$-bit narrow-pipe hash functions following the considered constructions can never provide $n$-bit security.

2. A generic second-preimage attack on the concatenation combiner of two Merkle–Damgård hash functions. This attack finds second preimages faster than $2^n$ for challenges longer than $2^{2n/7}$ and has a best-case complexity of $2^{3n/4}$ obtained for challenges of length $2^{3n/4}$. It also exploits properties of functional graphs of random mappings.

3. The first generic second-preimage attack on the Zipper hash with underlying hash functions following the Merkle–Damgård construction. The best-case complexity is $2^{3n/5}$, obtained for challenge messages of length $2^{2n/5}$.

4. An improved generic second-preimage attack on Hash-Twice with underlying hash functions following the Merkle–Damgård construction. The best-case complexity is $2^{13n/22}$, obtained for challenge messages of length $2^{13n/22}$.

The last three attacks show that regarding second-preimage resistance, the concatenation and cascade of two $n$-bit narrow-pipe Merkle–Damgård hash functions do not provide much more security than that which can be provided by a single $n$-bit hash function. Our main technical contributions include the following: 1. the interchange structure, which enables simultaneously controlling the behaviours of two hash computations sharing the same input; 2. the simultaneous expandable message, which is a set of messages of length covering a whole appropriate range and being multi-collision for both of the underlying hash functions; 3. new ways to exploit the properties of functional graphs of random mappings generated by fixing the message block input to the underlying compression functions.
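The four combiners under attack are all one-liners over two hash functions. Here is a toy sketch using domain-separated SHA-256 instances as stand-ins; the real constructions chain an IV/state through $\mathcal{H}_1$ and $\mathcal{H}_2$ rather than hashing concatenated digests, so this only illustrates the data flow.

```python
import hashlib

def H1(m: bytes) -> bytes: return hashlib.sha256(b"H1|" + m).digest()
def H2(m: bytes) -> bytes: return hashlib.sha256(b"H2|" + m).digest()

def xor_combiner(m: bytes) -> bytes:        # H1(M) XOR H2(M)
    return bytes(a ^ b for a, b in zip(H1(m), H2(m)))

def concat_combiner(m: bytes) -> bytes:     # H1(M) || H2(M)
    return H1(m) + H2(m)

def hash_twice(m: bytes) -> bytes:          # H2(H1(IV, M), M), schematically
    return hashlib.sha256(b"H2|" + H1(m) + m).digest()

def zipper(m: bytes) -> bytes:              # H2(H1(IV, M), reverse(M))
    return hashlib.sha256(b"H2|" + H1(m) + m[::-1]).digest()
```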
2018
EUROCRYPT
2018
CRYPTO
An Optimal Distributed Discrete Log Protocol with Applications to Homomorphic Secret Sharing
The distributed discrete logarithm (DDL) problem was introduced by Boyle et al. at CRYPTO 2016. A protocol solving this problem was the main tool used in the share conversion procedure of their homomorphic secret sharing (HSS) scheme which allows non-interactive evaluation of branching programs among two parties over shares of secret inputs. Let $g$ be a generator of a multiplicative group $\mathbb{G}$. Given a random group element $g^{x}$ and an unknown integer $b \in [-M,M]$ for a small $M$, two parties $A$ and $B$ (that cannot communicate) successfully solve DDL if $A(g^{x}) - B(g^{x+b}) = b$. Otherwise, the parties err. In the DDL protocol of Boyle et al., $A$ and $B$ run in time $T$ and have error probability that is roughly linear in $M/T$. Since it has a significant impact on the HSS scheme's performance, a major open problem raised by Boyle et al. was to reduce the error probability as a function of $T$. In this paper we devise a new DDL protocol that substantially reduces the error probability to $O(M \cdot T^{-2})$. Our new protocol improves the asymptotic evaluation time complexity of the HSS scheme by Boyle et al. on branching programs of size $S$ from $O(S^2)$ to $O(S^{3/2})$. We further show that our protocol is optimal up to a constant factor for all relevant cryptographic group families, unless one can solve the discrete logarithm problem in a short interval of length $R$ in time $o(\sqrt{R})$. Our DDL protocol is based on a new type of random walk that is composed of several iterations in which the expected step length gradually increases. We believe that this random walk is of independent interest and will find additional applications.
2017
CRYPTO
2016
EUROCRYPT
2016
CRYPTO
2016
JOFC
2015
EUROCRYPT
2015
EUROCRYPT
2015
EUROCRYPT
2015
CRYPTO
2015
ASIACRYPT
2014
CRYPTO
2014
JOFC
2014
ASIACRYPT
2014
FSE
2014
FSE
2013
ASIACRYPT
2013
FSE
2012
CRYPTO
2012
FSE
2012
FSE
2011
FSE
2011
FSE
2011
ASIACRYPT
2009
EUROCRYPT
2009
FSE

Program Committees

Crypto 2024
Crypto 2023
FSE 2023
Asiacrypt 2022
Crypto 2022
Eurocrypt 2022
Crypto 2021
TCC 2021
Eurocrypt 2019
FSE 2019
FSE 2018
Eurocrypt 2017
FSE 2017
FSE 2016
Crypto 2016
FSE 2015
Asiacrypt 2014