CryptoDB
Papers from Journal of Cryptology 2013
Year
Venue
Title
2013
JOFC
A Comparison of Cryptanalytic Tradeoff Algorithms
Abstract
Three time-memory tradeoff algorithms are compared in this paper. Specifically, the classical tradeoff algorithm by Hellman, the distinguished point tradeoff method, and the rainbow table method, in their non-perfect table versions, are treated. We show that, under parameters and assumptions that are typically considered in theoretic discussions of the tradeoff algorithms, the Hellman and distinguished point tradeoffs perform very closely to each other, and the rainbow table method performs somewhat better than the other two algorithms. Our method of comparison can easily be applied to other situations, where the conclusions could be different. The analysis of tradeoff efficiency presented in this paper does not ignore the effects of false alarms and also covers techniques for reducing storage, such as ending point truncations and index tables. Our comparison of algorithms fully takes into account success probabilities and precomputation efforts.
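As a rough illustration of the chain structure underlying these tradeoffs, the sketch below builds a Hellman-style chain (one fixed step function) and a rainbow-style chain (a position-dependent reduction) over a toy one-way function. The function, search-space size, and chain length are illustrative assumptions, not parameters from the paper.

```python
import hashlib

# Toy search space and one-way function; purely illustrative, not the paper's setup.
N = 2**20

def f(x: int) -> int:
    digest = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big") % N

def hellman_chain(start: int, t: int) -> int:
    """Hellman-style chain: the same step function is applied at every position."""
    x = start
    for _ in range(t):
        x = f(x)
    return x

def rainbow_chain(start: int, t: int) -> int:
    """Rainbow-style chain: the reduction depends on the position in the chain,
    which is what lets a single long table replace many Hellman tables."""
    x = start
    for i in range(t):
        x = (f(x) + i) % N
    return x

if __name__ == "__main__":
    print(hellman_chain(12345, 1000), rainbow_chain(12345, 1000))
```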
2013
JOFC
A Note on Constant-Round Zero-Knowledge Proofs of Knowledge
Abstract
In this note, we show the existence of constant-round computational zero-knowledge proofs of knowledge for all $\mathcal{NP}$. The existence of constant-round zero-knowledge proofs was proven by Goldreich and Kahan (Journal of Cryptology, 1996), and the existence of constant-round zero-knowledge arguments of knowledge was proven by Feige and Shamir (CRYPTO, 1989). However, the existence of constant-round zero-knowledge proofs of knowledge for all $\mathcal{NP}$ has remained folklore, to the best of our knowledge, since no proof of this fact has been published.
2013
JOFC
A Note on the Bivariate Coppersmith Theorem
Abstract
In 1997, Coppersmith proved a famous theorem for finding small roots of bivariate polynomials over ℤ, with important applications to cryptography. While it seems to have been overlooked until now, we found the proof of the most commonly cited version of this theorem to be incomplete. Filling in the gap requires technical manipulations which we carry out in this paper.
2013
JOFC
Compact Proofs of Retrievability
Abstract
In a proof-of-retrievability system, a data storage center must prove to a verifier that he is actually storing all of a client’s data. The central challenge is to build systems that are both efficient and provably secure—that is, it should be possible to extract the client’s data from any prover that passes a verification check. In this paper, we give the first proof-of-retrievability schemes with full proofs of security against arbitrary adversaries in the strongest model, that of Juels and Kaliski. Our first scheme, built from BLS signatures and secure in the random oracle model, features a proof-of-retrievability protocol in which the client’s query and server’s response are both extremely short. This scheme allows public verifiability: anyone can act as a verifier, not just the file owner. Our second scheme, which builds on pseudorandom functions (PRFs) and is secure in the standard model, allows only private verification. It features a proof-of-retrievability protocol with an even shorter server’s response than our first scheme, but the client’s query is long. Both schemes rely on homomorphic properties to aggregate a proof into one small authenticator value.
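To illustrate the kind of homomorphic aggregation the abstract refers to, here is a minimal sketch in the spirit of a privately verifiable, PRF-based variant: block tags of the form PRF(i) + α·m_i combine with the challenged blocks into a single aggregate pair (μ, σ) that the key holder can check. The modulus, HMAC-based PRF, and parameter sizes below are stand-in assumptions for illustration, not the scheme as specified in the paper.

```python
import hashlib
import hmac
import secrets

p = 2**61 - 1                      # small prime standing in for the group order
prf_key = secrets.token_bytes(32)  # PRF key held by the client/verifier
alpha = secrets.randbelow(p)       # secret scalar held by the client/verifier

def prf(i: int) -> int:
    """HMAC-SHA256 as a stand-in PRF mapping a block index into Z_p."""
    mac = hmac.new(prf_key, i.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(mac, "big") % p

# The client tags each file block m_i with sigma_i = PRF(i) + alpha * m_i (mod p).
blocks = [secrets.randbelow(p) for _ in range(16)]
tags = [(prf(i) + alpha * m) % p for i, m in enumerate(blocks)]

# Challenge: a few random indices i with random coefficients nu_i.
rng = secrets.SystemRandom()
challenge = [(i, rng.randrange(p)) for i in rng.sample(range(len(blocks)), 4)]

# The server aggregates one value over the data and one over the tags.
mu = sum(nu * blocks[i] for i, nu in challenge) % p
sigma = sum(nu * tags[i] for i, nu in challenge) % p

# The verifier checks sigma == sum_i nu_i * PRF(i) + alpha * mu (mod p).
assert sigma == (sum(nu * prf(i) for i, nu in challenge) + alpha * mu) % p
print("aggregated response verifies")
```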
2013
JOFC
Enhancements of Trapdoor Permutations
Abstract
We take a closer look at several enhancements of the notion of trapdoor permutations. Specifically, we consider the notions of enhanced trapdoor permutation (Goldreich, Foundations of Cryptography: Basic Applications, 2004) and doubly enhanced trapdoor permutation (Goldreich, Computational Complexity: A Conceptual Perspective, 2011) as well as intermediate notions (Rothblum, A Taxonomy of Enhanced Trapdoor Permutations, 2010). These enhancements arose in the study of Oblivious Transfer and NIZK, but they address natural concerns that may arise also in other applications of trapdoor permutations. We clarify why these enhancements are needed in such applications, and show that they actually suffice for these needs.
2013
JOFC
FlipIt: The Game of “Stealthy Takeover”
Abstract
Recent targeted attacks have increased significantly in sophistication, undermining the fundamental assumptions on which most cryptographic primitives rely for security. For instance, attackers launching an Advanced Persistent Threat (APT) can steal full cryptographic keys, violating the very secrecy of “secret” keys that cryptographers assume in designing secure protocols. In this article, we introduce a game-theoretic framework for modeling various computer security scenarios prevalent today, including targeted attacks. We are particularly interested in situations in which an attacker periodically compromises a system or critical resource completely, learns all its secret information, and is not immediately detected by the system owner or defender. We propose a two-player game between an attacker and defender called FlipIt or The Game of “Stealthy Takeover.” In FlipIt, players compete to control a shared resource. Unlike most existing games, FlipIt allows players to move at any given time, taking control of the resource. The identity of the player controlling the resource, however, is not revealed until a player actually moves. To move, a player pays a certain move cost. The objective of each player is to control the resource a large fraction of the time while minimizing his total move cost. FlipIt provides a simple and elegant framework in which we can formally reason about the interaction between attackers and defenders in practical scenarios. In this article, we restrict ourselves to games in which one of the players (the defender) plays with a renewal strategy, one in which the intervals between consecutive moves are chosen independently at random from a fixed probability distribution. We consider attacker strategies ranging in increasing sophistication from simple periodic strategies (with moves spaced at equal time intervals) to more complex adaptive strategies, in which moves are determined based on feedback received during the game. For different classes of strategies employed by the attacker, we determine strongly dominant strategies for both players (when they exist), that is, strategies that achieve higher benefit than all other strategies in a particular class. When strongly dominant strategies do not exist, our goal is to characterize the residual game consisting of strategies that are not strongly dominated by other strategies. We also prove equivalence or strict inclusion of certain classes of strategies under different conditions. Our analysis of different FlipIt variants teaches cryptographers, system designers, and the community at large some valuable lessons:
1. Systems should be designed under the assumption of repeated total compromise, including theft of cryptographic keys. FlipIt provides guidance on how to implement a cost-effective defensive strategy.
2. Aggressive play by one player can motivate the opponent to drop out of the game (essentially not to play at all). Therefore, moving fast is a good defensive strategy, but it can only be implemented if move costs are low. We believe that virtualization has a huge potential in this respect.
3. Close monitoring of one’s resources is beneficial in detecting potential attacks faster, gaining insight into the attacker’s strategies, and scheduling defensive moves more effectively.
Interestingly, FlipIt finds applications in other security realms besides modeling of targeted attacks. Examples include cryptographic key rotation, password changing policies, refreshing virtual machines, and cloud auditing.
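As a concrete, purely illustrative way to see how control time and move costs interact, the sketch below simulates a toy FlipIt instance with a periodic defender and an attacker playing an exponential renewal strategy. All parameters, and the assumption that the defender owns the resource at time zero, are illustrative and not taken from the paper.

```python
import random

def periodic_times(period: float, horizon: float):
    """Move times of a periodic player (first move at `period`)."""
    t = period
    while t < horizon:
        yield t
        t += period

def simulate_flipit(defender_period: float, attacker_rate: float,
                    horizon: float = 10_000.0, seed: int = 1):
    """Toy FlipIt run: periodic defender vs. exponential-renewal attacker.
    Returns the fraction of time each player controls the resource.
    Assumes the defender owns the resource at time zero."""
    rng = random.Random(seed)
    events = [(t, "defender") for t in periodic_times(defender_period, horizon)]
    t = 0.0
    while True:
        t += rng.expovariate(attacker_rate)   # i.i.d. exponential inter-move times
        if t >= horizon:
            break
        events.append((t, "attacker"))
    events.sort()
    owner, last = "defender", 0.0
    control = {"defender": 0.0, "attacker": 0.0}
    for time, player in events:
        control[owner] += time - last
        owner, last = player, time
    control[owner] += horizon - last
    return control["defender"] / horizon, control["attacker"] / horizon

if __name__ == "__main__":
    d, a = simulate_flipit(defender_period=1.0, attacker_rate=0.5)
    print(f"defender controls {d:.2%}, attacker controls {a:.2%}")
```

Subtracting each player's move cost times his move rate from his control fraction gives the benefit the paper's analysis reasons about.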
2013
JOFC
Fully Leakage-Resilient Signatures
Abstract
A signature scheme is fully leakage resilient (Katz and Vaikuntanathan, ASIACRYPT ’09) if it is existentially unforgeable under an adaptive chosen-message attack even in a setting where an adversary may obtain bounded (yet arbitrary) leakage information on all intermediate values that are used throughout the lifetime of the system. This is a strong and meaningful notion of security that captures a wide range of side-channel attacks. One of the main challenges in constructing fully leakage-resilient signature schemes is dealing with leakage that may depend on the random bits used by the signing algorithm, and constructions of such schemes are known only in the random-oracle model. Moreover, even in the random-oracle model, known schemes are only resilient to leakage of less than half the length of their signing key. In this paper we construct the first fully leakage-resilient signature schemes without random oracles. We present a scheme that is resilient to any leakage of length $(1-o(1))L$ bits, where $L$ is the length of the signing key. Our approach relies on generic cryptographic primitives, and at the same time admits rather efficient instantiations based on specific number-theoretic assumptions. In addition, we show that our approach extends to the continual-leakage model, recently introduced by Dodis, Haralambiev, López-Alt, and Wichs (FOCS ’10), and by Brakerski, Tauman Kalai, Katz, and Vaikuntanathan (FOCS ’10). In this model the signing key is allowed to be refreshed, while its corresponding verification key remains fixed, and the amount of leakage is assumed to be bounded only in between any two successive key refreshes.
2013
JOFC
Polynomial Runtime and Composability
Abstract
We devise a notion of polynomial runtime suitable for the simulation-based security analysis of multi-party cryptographic protocols. Somewhat surprisingly, straightforward notions of polynomial runtime lack expressivity for reactive tasks and/or lead to an unnatural simulation-based security notion. Indeed, the problem has been recognized in previous works, and several notions of polynomial runtime have already been proposed. However, our new notion, dubbed reactive polynomial time, is the first to combine the following properties: it is simple enough to support simple security/runtime analyses; it is intuitive in the sense that all intuitively feasible protocols and attacks (and only those) are considered polynomial-time; and it supports secure composition of protocols in the sense of a universal composition theorem. We work in the Universal Composability (UC) protocol framework. We remark that while the UC framework already features a universal composition theorem, we develop new techniques to prove secure composition in the case of reactively polynomial-time protocols and attacks.
2013
JOFC
Quark: A Lightweight Hash
Abstract
The need for lightweight (that is, compact, low-power, low-energy) cryptographic hash functions has been repeatedly expressed by professionals, notably to implement cryptographic protocols in RFID technology. At the time of writing, however, no algorithm exists that provides satisfactory security and performance. The ongoing SHA-3 Competition will not help, as it concerns general-purpose designs and focuses on software performance. This paper thus proposes a novel design philosophy for lightweight hash functions, based on the sponge construction in order to minimize memory requirements. Inspired by the stream cipher Grain and by the block cipher KATAN (amongst the lightest secure ciphers), we present the hash function family Quark, composed of three instances: u-Quark, d-Quark, and s-Quark. As a sponge construction, Quark can be used for message authentication, stream encryption, or authenticated encryption. Our hardware evaluation shows that Quark compares well to previous tentative lightweight hash functions. For example, our lightest instance u-Quark conjecturally provides at least 64-bit security against all attacks (collisions, multicollisions, distinguishers, etc.), fits in 1379 gate-equivalents, and consumes on average 2.44 μW at 100 kHz in 0.18 μm ASIC. For 112-bit security, we propose s-Quark, which can be implemented with 2296 gate-equivalents with a power consumption of 4.35 μW.
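For readers unfamiliar with the sponge construction mentioned above, the sketch below shows the generic absorb/squeeze pattern with a toy permutation standing in for Quark’s. The rate, capacity, padding rule, and permutation here are illustrative assumptions and do not match the actual Quark parameters or its permutation.

```python
import hashlib

RATE, CAPACITY = 1, 15           # bytes; illustrative, not Quark's real parameters
WIDTH = RATE + CAPACITY          # sponge state width b = r + c

def permutation(state: bytes) -> bytes:
    """Toy stand-in for the Quark permutation (NOT the real one)."""
    return hashlib.sha256(state).digest()[:WIDTH]

def sponge_hash(message: bytes, out_len: int = 16) -> bytes:
    # Pad with a single 0x01 byte, then zeros up to a multiple of the rate.
    padded = message + b"\x01" + b"\x00" * (-(len(message) + 1) % RATE)
    state = bytes(WIDTH)
    # Absorbing phase: XOR each r-byte block into the state, then permute.
    for i in range(0, len(padded), RATE):
        block = padded[i:i + RATE] + bytes(CAPACITY)
        state = permutation(bytes(a ^ b for a, b in zip(state, block)))
    # Squeezing phase: output r bytes of the state per permutation call.
    out = b""
    while len(out) < out_len:
        out += state[:RATE]
        state = permutation(state)
    return out[:out_len]

if __name__ == "__main__":
    print(sponge_hash(b"hello").hex())
```

Because the output is squeezed from the same state that absorbed the input, the one construction supports hashing, message authentication, stream encryption, and authenticated encryption, as the abstract notes.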
2013
JOFC
Round-Optimal Password-Based Authenticated Key Exchange
Abstract
We show a general framework for constructing password-based authenticated key-exchange protocols with optimal round complexity—one message per party, sent simultaneously—in the standard model, assuming the existence of a common reference string. When our framework is instantiated using bilinear-map-based cryptosystems, the resulting protocol is also (reasonably) efficient. Somewhat surprisingly, our framework can be adapted to give protocols in the standard model that are universally composable while still using only one (simultaneous) round.