Towards Low-Energy Leakage-Resistant Authenticated Encryption from the Duplex Sponge Construction
The ongoing NIST lightweight cryptography standardization process highlights the importance of resistance to side-channel attacks, which has renewed the interest in Authenticated Encryption schemes (AEs) with light(er)-weight side-channel secure implementations. To address this challenge, our first contribution is to investigate the leakage-resistance of a generic duplex-based stream cipher. When the capacity of the duplex is c bits, we prove the classical bound, i.e., ≈ 2^{c/2}, under an assumption of non-invertible leakage. Based on this, we propose a new 1-pass AE mode TETSponge, which carefully combines a tweakable block cipher that must have strong protections against side-channel attacks and is scarcely used, with a duplex-style permutation that only needs weak side-channel protections and is used to frugally process the message and associated data. It offers: (i) provable integrity (resp. confidentiality) guarantees in the presence of leakage during both encryption and decryption (resp. encryption only), and (ii) some level of nonce misuse robustness. We conclude that TETSponge is an appealing option for the implementation of low-energy AE in settings where side-channel attacks are a concern. We also provide the first rigorous methodology for the leakage-resistance of sponge/duplex-based AEs based on a minimal non-invertibility assumption on leakages, which leads to various insights on designs and implementations.
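The duplex-based keystream generation analyzed above can be sketched as follows. This is a minimal illustration of the duplex flow only: the permutation is a hash-based stand-in (a real duplex uses a cryptographic permutation such as Keccak-p), and the rate/capacity split and padding are toy choices, not TETSponge's parameters.

```python
import hashlib

RATE, CAPACITY = 16, 32  # bytes; a toy split, not TETSponge's parameters


def permutation(state: bytes) -> bytes:
    # Placeholder 48-byte permutation built from SHA-256; only the
    # data flow matters here, not the primitive.
    return hashlib.sha256(state).digest() + \
        hashlib.sha256(b"x" + state).digest()[: RATE + CAPACITY - 32]


def duplex_stream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Absorb key and nonce, then repeatedly squeeze the rate part as
    # keystream; the capacity part never leaves the state.
    state = permutation((key + nonce).ljust(RATE + CAPACITY, b"\x00"))
    out = b""
    while len(out) < length:
        out += state[:RATE]          # squeeze RATE bytes of keystream
        state = permutation(state)   # advance the duplex state
    return out[:length]


def encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    # Stream encryption: XOR with the squeezed keystream.
    ks = duplex_stream(key, nonce, len(msg))
    return bytes(m ^ k for m, k in zip(msg, ks))
```

Since encryption is a keystream XOR, decryption is the same call with the ciphertext, provided the nonce is not reused.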
Spook: Sponge-Based Leakage-Resistant Authenticated Encryption with a Masked Tweakable Block Cipher
This paper defines Spook: a sponge-based authenticated encryption with associated data algorithm. It is primarily designed to provide security against side-channel attacks at a low energy cost. For this purpose, Spook combines a leakage-resistant mode of operation with bitslice ciphers enabling efficient and low-latency implementations. The leakage-resistant mode of operation leverages a re-keying function to prevent differential side-channel analysis, a duplex sponge construction to efficiently process the data, and a tag verification based on a Tweakable Block Cipher (TBC) providing strong data integrity guarantees in the presence of leakage. The underlying bitslice ciphers are optimized for masking countermeasures against side-channel attacks. Spook is an efficient single-pass algorithm. It ensures state-of-the-art black-box security with several prominent features: (i) nonce misuse-resilience, (ii) beyond-birthday security with respect to the TBC block size, and (iii) multi-user security at minimum cost with a public tweak. Besides the specifications and design rationale, we provide the first software and hardware implementation results of (unprotected) Spook, which confirm the limited overheads implied by the use of two primitives sharing internal components. We also show that the integrity of Spook with leakage, so far analyzed with unbounded leakages for the duplex sponge and a strongly protected TBC modeled as leak-free, can be proven under a much weaker unpredictability assumption for the TBC. We finally discuss external cryptanalysis results and tweaks to improve both the security margins and efficiency of Spook.
Mode-Level vs. Implementation-Level Physical Security in Symmetric Cryptography: A Practical Guide Through the Leakage-Resistance Jungle
Triggered by the increasing deployment of embedded cryptographic devices (e.g., for the IoT), the design of authentication, encryption and authenticated encryption schemes enabling improved security against side-channel attacks has become an important research direction. Over the last decade, a number of modes of operation have been proposed and analyzed under different abstractions. In this paper, we investigate the practical consequences of these findings. For this purpose, we first translate the physical assumptions of leakage-resistance proofs into minimum security requirements for implementers. Thanks to this (heuristic) translation, we observe that (i) security against physical attacks can be viewed as a tradeoff between mode-level and implementation-level protection mechanisms, and (ii) the security requirements to guarantee confidentiality and integrity in the presence of leakage can be concretely different for the different parts of an implementation. We illustrate the first point by analyzing several modes of operation with gradually increased leakage-resistance. We illustrate the second point by exhibiting leveled implementations, where different parts of the investigated schemes have different security requirements against leakage, leading to performance improvements when high physical security is needed. We finally initiate a comparative discussion of the different solutions to instantiate the components of a leakage-resistant authenticated encryption scheme.
TEDT, a Leakage-Resist AEAD Mode for High Physical Security Applications
We propose TEDT, a new Authenticated Encryption with Associated Data (AEAD) mode leveraging Tweakable Block Ciphers (TBCs). TEDT provides the following features: (i) It offers full leakage-resistance, that is, it limits the exploitability of physical leakages via side-channel attacks, even if these leakages happen during every message encryption and decryption operation. Moreover, the leakage integrity bound is asymptotically optimal in the multi-user setting. (ii) It offers nonce misuse-resilience, that is, the repetition of nonces does not impact the security of ciphertexts produced with fresh nonces. (iii) It can be implemented with a remarkably low energy cost when strong resistance to side-channel attacks is needed, supports online encryption and handles static and incremental associated data efficiently. Concretely, TEDT encourages so-called leveled implementations, in which two TBCs are implemented: the first one needs strong and energy-demanding protections against side-channel attacks but is used in a limited way, while the other only requires weak and energy-efficient protections and performs the bulk of the computation. As a result, TEDT leads to more energy-efficient implementations compared to traditional AEAD schemes, whose side-channel security requires uniformly protecting every (T)BC execution.
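The leveled-implementation idea above can be sketched in code: a strongly protected TBC call is reserved for re-keying and tag computation, while the bulk of the message is processed under ephemeral keys with weakly protected calls. All names and details here are illustrative (the TBC is a keyed-hash stand-in, and the key-derivation, hashing and tag computation differ from the real TEDT specification).

```python
from hashlib import blake2b


def tbc(key: bytes, tweak: bytes, block: bytes) -> bytes:
    # Stand-in tweakable block cipher via keyed BLAKE2b; a real
    # instantiation would use an actual TBC.
    return blake2b(block, key=key, salt=tweak[:16], digest_size=16).digest()


def encrypt(master_key: bytes, nonce: bytes, msg: bytes):
    # Protected call #1: derive a fresh session key from the nonce.
    k = tbc(master_key, nonce, b"\x00" * 16)
    ct = b""
    for i in range(0, len(msg), 16):
        # Weakly protected bulk processing under the ephemeral key.
        pad = tbc(k, i.to_bytes(16, "big"), b"\x01" * 16)
        ct += bytes(a ^ b for a, b in zip(msg[i:i + 16], pad))
        # Update the ephemeral key after each block (re-keying).
        k = tbc(k, b"\x02" * 16, b"\x00" * 16)
    # Protected call #2: tag a digest of nonce and ciphertext.
    digest = blake2b(nonce + ct, digest_size=16).digest()
    tag = tbc(master_key, b"\x01" * 16, digest)
    return ct, tag
```

The point of the structure is that only two calls per message touch the long-term `master_key`, so only those need heavy (e.g., masked) protections; every other call uses a key that is discarded after one block.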
On Leakage-Resilient Authenticated Encryption with Decryption Leakages
At CCS 2015, Pereira et al. introduced a pragmatic model enabling the study of leakage-resilient symmetric cryptographic primitives based on the minimal use of a leak-free component. This model was recently used to prove the good integrity and confidentiality properties of an authenticated encryption scheme called DTE when the adversary is only given encryption leakages. In this paper, we extend this work by analyzing the case where decryption leakages are also available. We first exhibit attacks exploiting such leakages against the integrity of DTE (and variants) and show how to mitigate them. We then consider message confidentiality in a context where an adversary can observe decryption leakages but not the corresponding messages. The latter is motivated by applications such as secure bootloading and bitstream decryption. We finally formalize the confidentiality requirements that can be achieved in this case and propose a new construction satisfying them, while providing integrity properties with leakage that are as good as those of DTE.
Universally Composable Security Analysis of TLS---Secure Sessions with Handshake and Record Layer Protocols
We present a security analysis of the complete TLS protocol in the Universally Composable security framework. This analysis evaluates the composition of the key exchange functionalities realized by the TLS handshake with the message transmission of the TLS record layer to emulate secure communication sessions, and is based on the adaptation of the secure channel model of Canetti and Krawczyk to the setting where peer identities are not necessarily known prior to the protocol invocation and may remain undisclosed. Our analysis shows that TLS, including the Diffie-Hellman and key-transport suites in the uni-directional and bi-directional models of authentication, securely emulates secure communication sessions.
Modeling Computational Security in Long-Lived Systems, Version 2
For many cryptographic protocols, security relies on the assumption that adversarial entities have limited computational power. This type of security degrades progressively over the lifetime of a protocol. However, some cryptographic services, such as timestamping services or digital archives, are long-lived in nature; they are expected to be secure and operational for a very long time (i.e. super-polynomial). In such cases, security cannot be guaranteed in the traditional sense: a computationally secure protocol may become insecure if the attacker has a super-polynomial number of interactions with the protocol. This paper proposes a new paradigm for the analysis of long-lived security protocols. We allow entities to be active for a potentially unbounded amount of real time, provided they perform only a polynomial amount of work per unit of real time. Moreover, the space used by these entities is allocated dynamically and must be polynomially bounded. We propose a new notion of long-term implementation, which is an adaptation of computational indistinguishability to the long-lived setting. We show that long-term implementation is preserved under polynomial parallel composition and exponential sequential composition. We illustrate the use of this new paradigm by analyzing some security properties of the long-lived timestamping protocol of Haber and Kamat.
On the Role of Scheduling in Simulation-Based Security
In a series of papers, Küsters et al. investigated the relationships between various notions of simulation-based security. Two main factors, the placement of a "master process" and the existence of "forwarder processes", were found to affect the relationship between different definitions. In this extended abstract, we add a new dimension to the analysis of simulation-based security, namely, the scheduling of concurrent processes. We show that, when we move from sequential scheduling (as used in previous studies) to task-based nondeterministic scheduling, the same syntactic definition of security gives rise to incomparable semantic notions of security. Under task-based scheduling, the hierarchy based on the placement of a "master process" is no longer relevant, because no such designation is necessary to obtain meaningful runs of a system. On the other hand, the existence of "forwarder processes" remains an important factor.
Verifying Statistical Zero Knowledge with Approximate Implementations
Statistical zero-knowledge (SZK) properties play an important role in designing cryptographic protocols that enforce honest behavior while maintaining privacy. This paper presents a novel approach for verifying SZK properties, using recently developed techniques based on approximate simulation relations. We formulate statistical indistinguishability as an implementation relation in the Task-PIOA framework, which allows us to express computational restrictions. The implementation relation is then proven using approximate simulation relations. This technique separates proof obligations into two categories: those requiring probabilistic reasoning and those that do not; the latter are good candidates for mechanization. We illustrate the general method by verifying the SZK property of the well-known identification protocol of Girault, Poupard and Stern.
A Block Cipher based PRNG Secure Against Side-Channel Key Recovery
We study the security of a block cipher-based pseudorandom number generator (PRNG), both in the black-box world and in the physical world, separately. We first show that the construction is a secure PRNG in the black-box world, relying on standard computational assumptions. Then, we demonstrate its security against a Bayesian side-channel key recovery adversary. As a main result, we show that our construction guarantees that the success rate of the adversary increases only in a limited and controlled way with the number of physical observations. Besides, we observe that, under common assumptions on side-channel attack strategies, increasing the security parameter (typically the block cipher key size) by a polynomial factor involves an increase of the side-channel attack complexity by an exponential factor, as usually expected for secure cryptographic primitives. Therefore, we believe this work provides a first interesting example of the way the algorithmic design of a cryptographic scheme influences its side-channel resistance.
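The re-keying idea behind such PRNG constructions can be sketched as a forward-secure generator in which each key is used for only two fixed block-cipher calls before being overwritten, bounding how many traces a side-channel adversary can collect per key. This is a simplified sketch, not the paper's exact construction, and the block cipher is a hash-based stand-in.

```python
from hashlib import sha256


def E(k: bytes, x: bytes) -> bytes:
    # Stand-in for a 128-bit block cipher call E_k(x); a real
    # instantiation would use an actual block cipher such as AES-128.
    return sha256(k + x).digest()[:16]


def prng(seed: bytes, n_outputs: int):
    # Forward-secure PRNG: each iteration outputs E_k(1) and
    # immediately replaces the key with E_k(0), so every key is
    # used for exactly two invocations on fixed inputs.
    k = seed
    for _ in range(n_outputs):
        yield E(k, b"\x01" * 16)   # output block
        k = E(k, b"\x00" * 16)     # key update (re-keying)
```

Because the adversary only ever sees leakage from two encryptions of constant plaintexts under any given key, standard divide-and-conquer side-channel attacks cannot accumulate observations against a single key.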
How to Model Bounded Computation in Long-Lived Systems
In most interesting cases, the security of cryptographic protocols relies on the assumption that adversarial entities have limited computational power, and it is generally accepted that security degrades progressively over time. However, some cryptographic services (e.g., time-stamping services or digital archives) are long-lived in nature; that is, their lifetime need not be bounded by a polynomial. In such cases, it is impossible to guarantee security in the traditional sense: even information theoretically secure protocols can fail if the attacker is given sufficient run time. This work proposes a new paradigm for long-lived computation, where computational restrictions are stated in terms of space and processing rates. In this setting, entities may live for an unbounded amount of real time, subject to the condition that only a polynomial amount of work can be done per unit real time. Moreover, the space used by these entities is allocated dynamically and must be polynomially bounded. We propose a key notion of approximate implementation, which is an adaptation of computational indistinguishability to the long-lived setting. We show that approximate implementation is preserved under polynomial parallel composition, and under exponential sequential composition. This provides core foundations for an exciting new area, namely, the analysis of long-lived cryptographic systems.
Using Probabilistic I/O Automata to Analyze an Oblivious Transfer Protocol
The Probabilistic I/O Automata framework of Lynch, Segala and Vaandrager provides tools for precisely specifying protocols and reasoning about their correctness using multiple levels of abstraction, based on implementation relationships between these levels. We enhance this framework to allow analyzing protocols that use cryptographic primitives. This requires resolving and reconciling issues such as nondeterministic behavior and scheduling, randomness, resource-bounded computation, and computational hardness assumptions. The enhanced framework allows for more rigorous and systematic analysis of cryptographic protocols. To demonstrate the use of this framework, we present an example analysis that we have done for an Oblivious Transfer protocol.
- CHES 2012
- Boaz Barak (1)
- Davide Bellizia (2)
- David Bernhard (2)
- Francesco Berti (3)
- Olivier Bronchain (2)
- Ran Canetti (4)
- Gaëtan Cassiers (2)
- Ling Cheung (5)
- Véronique Cortier (1)
- Yevgeniy Dodis (1)
- Sébastien Duval (1)
- Sebastian Gajek (1)
- David Galindo (1)
- Vincent Grosso (1)
- Chun Guo (4)
- Dilsun Kaynar (3)
- Hugo Krawczyk (1)
- Gregor Leander (1)
- Gaëtan Leurent (1)
- Itamar Levi (1)
- Moses Liskov (1)
- Nancy Lynch (4)
- Tal Malkin (1)
- Mark Manulis (1)
- Sayan Mitra (1)
- Charles Momin (2)
- Thomas Peters (5)
- Christophe Petit (1)
- Krzysztof Pietrzak (1)
- Ahmad-Reza Sadeghi (1)
- Jörg Schwenk (1)
- Roberto Segala (1)
- François-Xavier Standaert (8)
- Balazs Udvarhelyi (1)
- Bogdan Warinschi (2)
- Friedrich Wiemer (1)
- Yu Yu (2)
- Moti Yung (1)