International Association for Cryptologic Research

CryptoDB

François-Xavier Standaert

Affiliation: UCL, Belgium

Publications

Year
Venue
Title
2020
TOSC
Efficient Side-Channel Secure Message Authentication with Better Bounds
Chun Guo, François-Xavier Standaert, Weijia Wang, Yu Yu
We investigate constructing message authentication schemes from symmetric cryptographic primitives, with the goal of achieving security when most intermediate values during tag computation and verification are leaked (i.e., mode-level leakage-resilience). Existing efficient proposals typically follow the plain Hash-then-MAC paradigm T = TGen_K(H(M)). When the domain of the MAC function TGen_K is {0, 1}^128, e.g., when instantiated with the AES, forgery is possible within time 2^64 and data complexity 1. To dismiss such cheap attacks, we propose two modes: LRW1-based Hash-then-MAC (LRWHM) that is built upon the LRW1 tweakable blockcipher of Liskov, Rivest, and Wagner, and Rekeying Hash-then-MAC (RHM) that employs internal rekeying. Built upon secure AES implementations, LRWHM is provably secure up to (beyond-birthday) 2^78.3 time complexity, while RHM is provably secure up to 2^121 time. Thus in practice, their main security threat is expected to be side-channel key recovery attacks against the AES implementations. Finally, we benchmark the performance of instances of our modes based on the AES and SHA3 and confirm their efficiency.
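To make the baseline concrete, here is a minimal sketch of the plain Hash-then-MAC paradigm T = TGen_K(H(M)) that the above modes improve upon. It is an illustration only, under stated assumptions: SHA3-256 stands in for H, and HMAC plays the role of the keyed function TGen_K (the paper instantiates TGen_K with the AES); the 128-bit truncation mirrors the {0, 1}^128 tag domain that enables the 2^64 forgery mentioned above.

```python
# Sketch of plain Hash-then-MAC: T = TGen_K(H(M)), truncated to 128 bits.
# Assumptions: SHA3-256 stands in for H, HMAC-SHA3 stands in for TGen_K.
import hashlib
import hmac

def hash_then_mac(key: bytes, message: bytes) -> bytes:
    digest = hashlib.sha3_256(message).digest()              # h = H(M)
    tag = hmac.new(key, digest, hashlib.sha3_256).digest()   # T = TGen_K(h)
    return tag[:16]                                          # 128-bit tag domain

key = b"\x00" * 16
print(hash_then_mac(key, b"example message").hex())
```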
2020
TCHES
Side-Channel Countermeasures’ Dissection and the Limits of Closed Source Security Evaluations 📺
Olivier Bronchain, François-Xavier Standaert
We take advantage of a recently published open source implementation of the AES protected with a mix of countermeasures against side-channel attacks to discuss both the challenges in protecting COTS devices against such attacks and the limitations of closed source security evaluations. The target implementation has been proposed by the French ANSSI (Agence Nationale de la Sécurité des Systèmes d’Information) to stimulate research on the design and evaluation of side-channel secure implementations. It combines additive and multiplicative secret sharings into an affine masking scheme that is additionally mixed with a shuffled execution. Its preliminary leakage assessment did not detect data dependencies with up to 100,000 measurements. We first exhibit the gap between such a preliminary leakage assessment and advanced attacks by demonstrating how a countermeasures’ dissection exploiting a mix of dimensionality reduction, multivariate information extraction and key enumeration can recover the full key with less than 2,000 measurements. We then discuss the relevance of open source evaluations to analyze such implementations efficiently, by pointing out that certain steps of the attack are hard to automate without implementation knowledge (even with machine learning tools), while performing them manually is straightforward. Our findings are not due to design flaws but to the general difficulty of preventing side-channel attacks in COTS devices with limited noise. We anticipate that high security on such devices requires significantly more shares.
2020
TCHES
Efficient and Private Computations with Code-Based Masking 📺
Weijia Wang, Pierrick Méaux, Gaëtan Cassiers, François-Xavier Standaert
Code-based masking is a very general type of masking scheme that covers Boolean masking, inner product masking, direct sum masking, and so on. The merits of the generalization are twofold. Firstly, the higher algebraic complexity of the sharing function decreases the information leakage in “low noise conditions” and may increase the “statistical security order” of an implementation (with linear leakages). Secondly, the underlying error-correction codes can offer improved fault resistance for the encoded variables. Nevertheless, this higher algebraic complexity also implies additional challenges. On the one hand, a generic multiplication algorithm applicable to any linear code is still unknown. On the other hand, masking schemes with higher algebraic complexity usually come with implementation overheads, as for example witnessed by inner-product masking. In this paper, we contribute to these challenges in two directions. Firstly, we propose a generic algorithm that allows us (to the best of our knowledge for the first time) to compute on data shared with linear codes. Secondly, we introduce a new amortization technique that can significantly mitigate the implementation overheads of code-based masking, and illustrate this claim with a case study. Precisely, we show that, although performing every single code-based masked operation is relatively complex, processing multiple secrets in parallel leads to much better performances. This property enables code-based masked implementations of the AES to compete with the state-of-the-art in randomness complexity. Since our masked operations can be instantiated with various linear codes, we hope that these investigations open new avenues for the study of code-based masking schemes, by specializing the codes for improved performances, better side-channel security or improved fault tolerance.
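As a point of reference for the generalization described above, the following sketch shows Boolean masking, the simplest member of the code-based masking family: a byte is split into d random shares whose XOR recombines to the secret. It illustrates the sharing step only, not the paper's generic multiplication algorithm for arbitrary linear codes.

```python
# Boolean masking as the simplest special case of code-based masking (sketch).
import secrets
from functools import reduce

def share(x: int, d: int) -> list:
    """Split the byte x into d random shares whose XOR equals x."""
    shares = [secrets.randbelow(256) for _ in range(d - 1)]
    shares.append(reduce(lambda a, b: a ^ b, shares, x))  # last share completes the XOR
    return shares

def unshare(shares: list) -> int:
    """Recombine the shares by XOR."""
    return reduce(lambda a, b: a ^ b, shares)

secret = 0x2A
sh = share(secret, 4)
assert unshare(sh) == secret
print(sh, hex(unshare(sh)))
```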
2020
TOSC
Towards Low-Energy Leakage-Resistant Authenticated Encryption from the Duplex Sponge Construction
Chun Guo, Olivier Pereira, Thomas Peters, François-Xavier Standaert
The ongoing NIST lightweight cryptography standardization process highlights the importance of resistance to side-channel attacks, which has renewed the interest for Authenticated Encryption schemes (AEs) with light(er)-weight side-channel secure implementations. To address this challenge, our first contribution is to investigate the leakage-resistance of a generic duplex-based stream cipher. When the capacity of the duplex is c bits, we prove the classical ≈ 2^(c/2) bound under an assumption of non-invertible leakage. Based on this, we propose a new 1-pass AE mode TETSponge, which carefully combines a tweakable block cipher that must have strong protections against side-channel attacks and is scarcely used, and a duplex-style permutation that only needs weak side-channel protections and is used to frugally process the message and associated data. It offers: (i) provable integrity (resp. confidentiality) guarantees in the presence of leakage during both encryption and decryption (resp. encryption only), (ii) some level of nonce misuse robustness. We conclude that TETSponge is an appealing option for the implementation of low-energy AE in settings where side-channel attacks are a concern. We also provide the first rigorous methodology for the leakage-resistance of sponge/duplex-based AEs based on a minimal non-invertibility assumption on leakages, which leads to various insights on designs and implementations.
2020
TOSC
Low AND Depth and Efficient Inverses: a Guide on S-boxes for Low-latency Masking
In this work, we perform an extensive investigation and construct a portfolio of S-boxes suitable for secure lightweight implementations, which aligns well with the ongoing NIST Lightweight Cryptography competition. In particular, we target good functional properties on the one hand and efficient implementations in terms of AND depth and AND gate complexity on the other. Moreover, we also consider the implementation of the inverse S-box and the possibility for it to share resources with the forward S-box. We take our exploration beyond the conventional small (and even) S-box sizes. Our investigation is twofold: (1) we note that implementations of existing S-boxes are not optimized for the criteria which define masking complexity (AND depth and AND gate complexity) and improve a tool published at FSE 2016 by Stoffelen in order to fill this gap. (2) We search for new S-box designs which take these implementation properties into account from the start. We perform a systematic search based on the properties of not only the S-box but also its inverse as well as an exploration of larger S-box sizes using length-doubling structures. The result of our investigation is not only a wide selection of very good S-boxes, but we also provide complete descriptions of their circuits, enabling their integration into future work.
2020
TCHES
Understanding Screaming Channels: From a Detailed Analysis to Improved Attacks 📺
Giovanni Camurati, Aurélien Francillon, François-Xavier Standaert
Recently, some wireless devices have been found vulnerable to a novel class of side-channel attacks, called Screaming Channels. These leaks might appear if sensitive information processed by the CPU is unintentionally broadcast by a radio transmitter placed on the same chip. Previous work focuses on identifying the root causes, and on mounting an attack at a distance considerably larger than the one achievable with conventional electromagnetic side channels, which was demonstrated in the low-noise environment of an anechoic chamber. However, a detailed understanding of the leak, attacks that take full advantage of the novel vector, and security evaluations in more practical scenarios are still missing. In this paper, we conduct a thorough experimental analysis of the peculiar properties of Screaming Channels. For example, we learn about the coexistence of intended and unintended data, the role of distance and other parameters on the strength of the leak, the distortion of the leak model, and the portability of the profiles. With such insights, we build better attacks. We profile a device connected via cable with 10000·500 traces. Then, 5 months later, we attack a different instance at 15m in an office environment. We recover the AES-128 key with 5000·1000 traces and key enumeration up to 2^23. Leveraging spatial diversity, we mount some attacks in the presence of obstacles. As a first example of application to a real system, we show a proof-of-concept attack against the authentication method of Google Eddystone beacons. On the one hand, this work lowers the bar for more realistic attacks, highlighting the importance of the novel attack vector. On the other hand, it provides a broader security evaluation of the leaks, helping the defender and radio designers to evaluate the risk and the need for countermeasures.
2020
TOSC
Spook: Sponge-Based Leakage-Resistant Authenticated Encryption with a Masked Tweakable Block Cipher
This paper defines Spook: a sponge-based authenticated encryption with associated data algorithm. It is primarily designed to provide security against side-channel attacks at a low energy cost. For this purpose, Spook mixes a leakage-resistant mode of operation with bitslice ciphers, enabling efficient and low-latency implementations. The leakage-resistant mode of operation leverages a re-keying function to prevent differential side-channel analysis, a duplex sponge construction to efficiently process the data, and a tag verification based on a Tweakable Block Cipher (TBC) providing strong data integrity guarantees in the presence of leakages. The underlying bitslice ciphers are optimized for the masking countermeasures against side-channel attacks. Spook is an efficient single-pass algorithm. It ensures state-of-the-art black box security with several prominent features: (i) nonce misuse-resilience, (ii) beyond-birthday security with respect to the TBC block size, and (iii) multi-user security at minimum cost with a public tweak. Besides the specifications and design rationale, we provide the first software and hardware implementation results of (unprotected) Spook, which confirm the limited overheads implied by the use of two primitives sharing internal components. We also show that the integrity of Spook with leakage, so far analyzed with unbounded leakages for the duplex sponge and a strongly protected TBC modeled as leak-free, can be proven with a much weaker unpredictability assumption for the TBC. We finally discuss external cryptanalysis results and tweaks to improve both the security margins and efficiency of Spook.
2020
CRYPTO
Mode-Level vs. Implementation-Level Physical Security in Symmetric Cryptography: A Practical Guide Through the Leakage-Resistance Jungle 📺
Triggered by the increasing deployment of embedded cryptographic devices (e.g., for the IoT), the design of authentication, encryption and authenticated encryption schemes enabling improved security against side-channel attacks has become an important research direction. Over the last decade, a number of modes of operation have been proposed and analyzed under different abstractions. In this paper, we investigate the practical consequences of these findings. For this purpose, we first translate the physical assumptions of leakage-resistance proofs into minimum security requirements for implementers. Thanks to this (heuristic) translation, we observe that (i) security against physical attacks can be viewed as a tradeoff between mode-level and implementation-level protection mechanisms, and (ii) security requirements to guarantee confidentiality and integrity in front of leakage can be concretely different for the different parts of an implementation. We illustrate the first point by analyzing several modes of operation with gradually increased leakage-resistance. We illustrate the second point by exhibiting leveled implementations, where different parts of the investigated schemes have different security requirements against leakage, leading to performance improvements when high physical security is needed. We finally initiate a comparative discussion of the different solutions to instantiate the components of a leakage-resistant authenticated encryption scheme.
2020
TCHES
Modeling Soft Analytical Side-Channel Attacks from a Coding Theory Viewpoint 📺
Qian Guo, Vincent Grosso, François-Xavier Standaert, Olivier Bronchain
One important open question in side-channel analysis is to find out whether all the leakage samples in an implementation can be exploited by an adversary, as suggested by masking security proofs. For attacks exploiting a divide-and-conquer strategy, the answer is negative: only the leakages corresponding to the first/last rounds of a block cipher can be exploited. Soft Analytical Side-Channel Attacks (SASCA) have been introduced as a powerful solution to mitigate this limitation. They represent the target implementation and its leakages as a code (similar to a Low Density Parity Check code) that is decoded thanks to belief propagation. Previous works have shown the low data complexities that SASCA can reach in practice. In this paper, we revisit these attacks by modeling them with a variation of the Random Probing Model used in masking security proofs, that we denote as the Local Random Probing Model (LRPM). Our study establishes interesting connections between this model and the erasure channel used in coding theory, leading to the following benefits. First, the LRPM allows bounding the security of concrete implementations against SASCA in a fast and intuitive manner. We use it in order to confirm that the leakage of any operation in a block cipher can be exploited, although the leakages of external operations dominate in known-plaintext/ciphertext attack scenarios. Second, we show that the LRPM is a tool of choice for the (nearly worst-case) analysis of masked implementations in the noisy leakage model, taking advantage of all the operations performed, and leading to new tradeoffs between their amount of randomness and physical noise level. Third, we show that it can considerably speed up the evaluation of other countermeasures such as shuffling.
2019
EUROCRYPT
2019
TCHES
Towards Globally Optimized Masking: From Low Randomness to Low Noise Rate 📺
Gaëtan Cassiers, François-Xavier Standaert
We improve the state-of-the-art masking schemes in two important directions. First, we propose a new masked multiplication algorithm that satisfies a recently introduced notion called Probe-Isolating Non-Interference (PINI). It captures a sufficient requirement for designing masked implementations in a trivial way, by combining PINI multiplications and linear operations performed share by share. Our improved algorithm has the best reported randomness complexity for large security orders (while the previous PINI multiplication was best for small orders). Second, we analyze the security of most existing multiplication algorithms in the literature against so-called horizontal attacks, which aim to reduce the noise of the actual leakages measured by an adversary, by combining the information of multiple target intermediate values. For this purpose, we leave the (abstract) probing model and consider a specialization of the (more realistic) noisy leakage / random probing models. Our (still partially heuristic but quantitative) analysis allows confirming the improved security of an algorithm by Battistello et al. from CHES 2016 in this setting. We then use it to propose new improved algorithms, leading to better tradeoffs between randomness complexity and noise rate, and suggesting the possibility to design efficient masked multiplication algorithms with constant noise rate in F2.
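For readers unfamiliar with masked multiplications, the sketch below shows the classical ISW-style masked AND gadget over F2 that such algorithms build upon and refine. It is an illustration only, not the PINI multiplication proposed in the paper.

```python
# Classical ISW masked AND over F2 (sketch): given XOR-sharings of bits a and b,
# compute an XOR-sharing of a AND b using pairwise random refresh bits.
import secrets
from functools import reduce

def isw_and(a_shares, b_shares):
    d = len(a_shares)
    r = [[0] * d for _ in range(d)]
    for i in range(d):
        for j in range(i + 1, d):
            r[i][j] = secrets.randbelow(2)
            r[j][i] = (r[i][j] ^ (a_shares[i] & b_shares[j])) ^ (a_shares[j] & b_shares[i])
    c = []
    for i in range(d):
        ci = a_shares[i] & b_shares[i]
        for j in range(d):
            if j != i:
                ci ^= r[i][j]
        c.append(ci)
    return c

def share_bit(x, d):
    s = [secrets.randbelow(2) for _ in range(d - 1)]
    s.append(x ^ reduce(lambda u, v: u ^ v, s, 0))
    return s

def unshare(s):
    return reduce(lambda u, v: u ^ v, s)

a, b = 1, 1
c = isw_and(share_bit(a, 3), share_bit(b, 3))
assert unshare(c) == (a & b)
print(c)
```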
2019
TCHES
Glitch-Resistant Masking Revisited 📺
Thorben Moos, Amir Moradi, Tobias Schneider, François-Xavier Standaert
Implementing the masking countermeasure in hardware is a delicate task. Various solutions have been proposed for this purpose over the last years: we focus on Threshold Implementations (TIs), Domain-Oriented Masking (DOM), the Unified Masking Approach (UMA) and Generic Low Latency Masking (GLM). The latter generally come with innovative ideas to cope with physical defaults such as glitches. Yet, and in contrast to the situation in software-oriented masking, these schemes have not been formally proven at arbitrary security orders and their composability properties were left unclear. So far, only a 2-cycle implementation of the seminal masking scheme by Ishai, Sahai and Wagner has been shown secure and composable in the robust probing model – a variation of the probing model aimed to capture physical defaults such as glitches – for any number of shares. In this paper, we argue that this lack of proofs for TIs, DOM, UMA and GLM makes the interpretation of their security guarantees difficult as the number of shares increases. For this purpose, we first put forward that the higher-order variants of all these schemes are affected by (local or composability) security flaws in the (robust) probing model, due to insufficient refreshing. We then show that composability and robustness against glitches cannot be analyzed independently. We finally detail how these abstract flaws translate into concrete (experimental) attacks, and discuss the additional constraints robust probing security implies on the need of registers. Despite not systematically leading to improved complexities at low security orders, e.g., with respect to the required number of measurements for a successful attack, we argue that these weaknesses provide a case for the need of security proofs in the robust probing model (or a similar abstraction) at higher security orders.
2019
TCHES
Reducing a Masked Implementation’s Effective Security Order with Setup Manipulations 📺
Itamar Levi, Davide Bellizia, François-Xavier Standaert
Couplings are a type of physical default that can violate the independence assumption needed for the secure implementation of the masking countermeasure. Two recent works by De Cnudde et al. put forward qualitatively that couplings can cause information leakages of lower order than theoretically expected. However, the (quantitative) amplitude of these lower-order leakages (e.g., measured as the amplitude of a detection metric such as Welch’s T statistic) was usually lower than the one of the (theoretically expected) dth-order leakages. So the actual security level of these implementations remained unaffected. In addition, in order to make the couplings visible, the authors sometimes needed to amplify them internally (e.g., by tweaking the placement and routing or iterating linear operations on the shares). In this paper, we first show that the amplitude of low-order leakages in masked implementations can be amplified externally, by tweaking side-channel measurement setups in a way that is under control of a power analysis adversary. Our experiments put forward that the “effective security order” of both hardware (FPGA) and software (ARM-32) implementations can be reduced, leading to concrete reductions of their security level. For this purpose, we move from the detection-based analyses of previous works to attack-based evaluations, allowing us to confirm the exploitability of the lower-order leakages that we amplify. We also provide a tentative explanation for these effects based on couplings, and describe a model that can be used to predict them as a function of the measurement setup’s external resistor and the implementation’s supply voltage. We posit that the effective security orders observed are mainly due to “externally-amplified couplings” that can be systematically exploited by actual adversaries.
2019
TCHES
Multi-Tuple Leakage Detection and the Dependent Signal Issue 📺
Olivier Bronchain, Tobias Schneider, François-Xavier Standaert
Leakage detection is a common tool to quickly assess the security of a cryptographic implementation against side-channel attacks. The Test Vector Leakage Assessment (TVLA) methodology using Welch’s t-test, proposed by Cryptography Research, is currently the most popular example of such tools, thanks to its simplicity and good detection speed compared to attack-based evaluations. However, like any statistical test, it is based on certain assumptions about the processed samples and its detection performances strongly depend on parameters like the measurement’s Signal-to-Noise Ratio (SNR), their degree of dependency, and their density, i.e., the ratio between the amount of informative and non-informative points in the traces. In this paper, we argue that the correct interpretation of leakage detection results requires knowledge of these parameters which are a priori unknown to the evaluator, and, therefore, poses a non-trivial challenge to evaluators (especially if restricted to only one test). For this purpose, we first explore the concept of multi-tuple detection, which is able to exploit differences between multiple informative points of a trace more effectively than tests relying on the minimum p-value of concurrent univariate tests. To this end, we map the common Hotelling’s T^2-test to the leakage detection setting and, further, propose a specialized instantiation of it which trades computational overheads for a dependency assumption. Our experiments show that there is not one test that is the optimal choice for every leakage scenario. Second, we highlight the importance of the assumption that the samples at each point in time are independent, which is frequently assumed in leakage detection, e.g., with Welch’s t-test. Using simulated and practical experiments, we show that (i) this assumption is often violated in practice, and (ii) deviations from it can affect the detection performances, making the correct interpretation of the results more difficult. Finally, we consolidate our findings by providing guidelines on how to use a combination of established and newly-proposed leakage detection tools to infer the measurement parameters. This enables a better interpretation of the tests’ results than the current state-of-the-art (yet still relying on heuristics for the most challenging evaluation scenarios).
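As background for the discussion above, here is a minimal sketch of the univariate fixed-vs-random TVLA flow based on Welch's t-test, applied independently at each time sample with the usual |t| > 4.5 threshold. The traces are simulated, and the sketch does not implement the multi-tuple Hotelling's T^2 tests introduced in the paper.

```python
# Univariate fixed-vs-random leakage detection with Welch's t-test (sketch).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_traces, n_samples = 1000, 200
fixed = rng.normal(0.0, 1.0, (n_traces, n_samples))
random_ = rng.normal(0.0, 1.0, (n_traces, n_samples))
fixed[:, 50] += 0.5                      # inject a small first-order leak at sample 50

t_stat, _ = stats.ttest_ind(fixed, random_, axis=0, equal_var=False)  # Welch's t-test
leaky = np.flatnonzero(np.abs(t_stat) > 4.5)
print("samples flagged as leaking:", leaky)
```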
2019
CRYPTO
Leakage Certification Revisited: Bounding Model Errors in Side-Channel Security Evaluations 📺
Leakage certification aims at guaranteeing that the statistical models used in side-channel security evaluations are close to the true statistical distribution of the leakages, hence can be used to approximate a worst-case security level. Previous works in this direction were only qualitative: for a given amount of measurements available to an evaluation laboratory, they rated a model as “good enough” if the model assumption errors (i.e., the errors due to an incorrect choice of model family) were small with respect to the model estimation errors. We revisit this problem by providing the first quantitative tools for leakage certification. For this purpose, we provide bounds for the (unknown) Mutual Information metric that corresponds to the true statistical distribution of the leakages based on two easy-to-compute information theoretic quantities: the Perceived Information, which is the amount of information that can be extracted from a leaking device thanks to an estimated statistical model, possibly biased due to estimation and assumption errors, and the Hypothetical Information, which is the amount of information that would be extracted from a hypothetical device exactly following the model distribution. This positive outcome derives from the observation that while the estimation of the Mutual Information is in general a hard problem (i.e., estimators are biased and their convergence is distribution-dependent), it is significantly simplified in the case of statistical inference attacks where a target random variable (e.g., a key in a cryptographic setting) has a constant (e.g., uniform) probability. Our results therefore provide a general and principled path to bound the worst-case security level of an implementation. They also significantly speed up the evaluation of any profiled side-channel attack, since they imply that the estimation of the Perceived Information, which embeds an expensive cross-validation step, can be bounded by the computation of a cheaper Hypothetical Information, for any estimated statistical model.
2019
TCHES
TEDT, a Leakage-Resist AEAD Mode for High Physical Security Applications 📺
We propose TEDT, a new Authenticated Encryption with Associated Data (AEAD) mode leveraging Tweakable Block Ciphers (TBCs). TEDT provides the following features: (i) It offers full leakage-resistance, that is, it limits the exploitability of physical leakages via side-channel attacks, even if these leakages happen during every message encryption and decryption operation. Moreover, the leakage integrity bound is asymptotically optimal in the multi-user setting. (ii) It offers nonce misuse-resilience, that is, the repetition of nonces does not impact the security of ciphertexts produced with fresh nonces. (iii) It can be implemented with a remarkably low energy cost when strong resistance to side-channel attacks is needed, supports online encryption and handles static and incremental associated data efficiently. Concretely, TEDT encourages so-called leveled implementations, in which two TBCs are implemented: the first one needs strong and energy demanding protections against side-channel attacks but is used in a limited way, while the other only requires weak and energy-efficient protections and performs the bulk of the computation. As a result, TEDT leads to more energy-efficient implementations compared to traditional AEAD schemes, whose side-channel security requires to uniformly protect every (T)BC execution.
2019
JOFC
Making Masking Security Proofs Concrete (Or How to Evaluate the Security of Any Leaking Device), Extended Version
Alexandre Duc, Sebastian Faust, François-Xavier Standaert
We investigate the relationship between theoretical studies of leaking cryptographic devices and concrete security evaluations with standard side-channel attacks. Our contributions are in four parts. First, we connect the formal analysis of the masking countermeasure proposed by Duc et al. (Eurocrypt 2014) with the Eurocrypt 2009 evaluation framework for side-channel key recovery attacks. In particular, we re-state their main proof for the masking countermeasure based on a mutual information metric, which is frequently used in concrete physical security evaluations. Second, we discuss the tightness of the Eurocrypt 2014 bounds based on experimental case studies. This allows us to conjecture a simplified link between the mutual information metric and the success rate of a side-channel adversary, ignoring technical parameters and proof artifacts. Third, we introduce heuristic (yet well-motivated) tools for the evaluation of the masking countermeasure when its independent leakage assumption is not perfectly fulfilled, as it is frequently encountered in practice. Thanks to these tools, we argue that masking with non-independent leakages may provide improved security levels in certain scenarios. Finally, we consider the tradeoff between the measurement complexity and the key enumeration time complexity in divide-and-conquer side-channel attacks and show that these complexities can be lower bounded based on the mutual information metric, using simple and efficient algorithms. The combination of these observations enables significant reductions of the evaluation costs for certification bodies.
2018
EUROCRYPT
2018
TCHES
Leakage Detection with the χ2-Test 📺
Amir Moradi, Bastian Richter, Tobias Schneider, François-Xavier Standaert
We describe how Pearson’s χ2-test can be used as a natural complement to Welch’s t-test for black box leakage detection. In particular, we show that by using these two tests in combination, we can mitigate some of the limitations due to the moment-based nature of existing detection techniques based on Welch’s t-test (e.g., for the evaluation of higher-order masked implementations with insufficient noise). We also show that Pearson’s χ2-test is naturally suited to analyze threshold implementations with information lying in multiple statistical moments, and can be easily extended to a distinguisher for key recovery attacks. As a result, we believe the proposed test and methodology are interesting complementary ingredients of the side-channel evaluation toolbox, for black box leakage detection and non-profiled attacks, and as a preliminary before more demanding advanced analyses.
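To illustrate the complementarity described above, the sketch below applies Pearson's χ2-test to a contingency table of binned leakage values for two classes whose means are equal but whose distributions differ, a case where a first-order Welch's t-test on the means is blind. The data are simulated and the binning is an arbitrary illustrative choice.

```python
# Pearson's chi2-test as a complement to Welch's t-test for leakage detection (sketch).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Two classes with (almost) equal means but different shapes: a moment-based
# t-test on the means sees nothing, while a chi2-test on binned values does.
fixed = rng.normal(0.0, 1.0, 5000)
random_ = np.concatenate([rng.normal(-1.5, 0.5, 2500), rng.normal(1.5, 0.5, 2500)])

t_stat, t_p = stats.ttest_ind(fixed, random_, equal_var=False)     # Welch's t-test
bins = np.histogram_bin_edges(np.concatenate([fixed, random_]), bins=9)
table = np.vstack([np.histogram(fixed, bins)[0], np.histogram(random_, bins)[0]])
chi2, chi2_p, dof, _ = stats.chi2_contingency(table)               # Pearson's chi2-test

print(f"Welch t-test: t = {t_stat:.2f}, p = {t_p:.2f}")
print(f"chi2-test:    chi2 = {chi2:.1f}, p = {chi2_p:.2e}")
```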
2018
TCHES
Composable Masking Schemes in the Presence of Physical Defaults & the Robust Probing Model
Composability and robustness against physical defaults (e.g., glitches) are two highly desirable properties for secure implementations of masking schemes. While tools exist to guarantee them separately, no current formalism enables their joint investigation. In this paper, we solve this issue by introducing a new model, the robust probing model, that is naturally suited to capture the combination of these properties. We first motivate this formalism by analyzing the excellent robustness and low randomness requirements of first-order threshold implementations, and highlighting the difficulty to extend them to higher orders. Next, and most importantly, we use our theory to design and prove the first higher-order secure, robust and composable multiplication gadgets. While admittedly inspired by existing approaches to masking (e.g., Ishai-Sahai-Wagner-like, threshold, domain-oriented), these gadgets exhibit subtle implementation differences with these state-of-the-art solutions (none of which is provably composable and robust). Hence, our results illustrate how sound theoretical models can guide practically-relevant implementations.
2017
EUROCRYPT
2017
ASIACRYPT
2017
CHES
Gimli : A Cross-Platform Permutation
This paper presents Gimli, a 384-bit permutation designed to achieve high security with high performance across a broad range of platforms, including 64-bit Intel/AMD server CPUs, 64-bit and 32-bit ARM smartphone CPUs, 32-bit ARM microcontrollers, 8-bit AVR microcontrollers, FPGAs, ASICs without side-channel protection, and ASICs with side-channel protection.
2017
TOSC
On Leakage-Resilient Authenticated Encryption with Decryption Leakages
Francesco Berti, Olivier Pereira, Thomas Peters, François-Xavier Standaert
At CCS 2015, Pereira et al. introduced a pragmatic model enabling the study of leakage-resilient symmetric cryptographic primitives based on the minimal use of a leak-free component. This model was recently used to prove the good integrity and confidentiality properties of an authenticated encryption scheme called DTE when the adversary is only given encryption leakages. In this paper, we extend this work by analyzing the case where decryption leakages are also available. We first exhibit attacks exploiting such leakages against the integrity of DTE (and variants) and show how to mitigate them. We then consider message confidentiality in a context where an adversary can observe decryption leakages but not the corresponding messages. The latter is motivated by applications such as secure bootloading and bitstream decryption. We finally formalize the confidentiality requirements that can be achieved in this case and propose a new construction satisfying them, while providing integrity properties with leakage that are as good as those of DTE.
2017
CHES
A Systematic Approach to the Side-Channel Analysis of ECC Implementations with Worst-Case Horizontal Attacks
Romain Poussier, Yuanyuan Zhou, François-Xavier Standaert
The large number and variety of side-channel attacks against scalar multiplication algorithms make their security evaluations complex, in particular when time constraints make exhaustive analyses impossible. In this paper, we present a systematic way to evaluate the security of such implementations against horizontal attacks. As horizontal attacks allow extracting most of the information in the leakage traces of scalar multiplications, they are suitable to avoid risks of overestimated security levels. For this purpose, we additionally propose to use linear regression in order to accurately characterize the leakage function and therefore approach worst-case security evaluations. We then show how to apply our tools in the contexts of ECDSA and ECDH implementations, and validate them against two targets: a Cortex-M4 and a Cortex-A8 micro-controller.
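To illustrate the characterization step mentioned above, this sketch uses ordinary least squares (linear regression) to recover per-bit leakage coefficients of a byte from simulated noisy measurements. It illustrates the regression idea only, not the full horizontal attack of the paper; the simulated coefficients and noise level are arbitrary.

```python
# Linear-regression leakage characterization on simulated traces (sketch):
# regress the measured sample on the bits of the processed intermediate byte.
import numpy as np

rng = np.random.default_rng(2)
values = rng.integers(0, 256, 20000)
bits = ((values[:, None] >> np.arange(8)) & 1).astype(float)      # 20000 x 8 bit matrix
true_coeffs = np.array([1.0, 0.9, 1.1, 1.0, 0.8, 1.2, 1.0, 0.95])  # simulated leakage model
leakage = bits @ true_coeffs + rng.normal(0.0, 2.0, len(values))   # noisy measurements

basis = np.hstack([np.ones((len(values), 1)), bits])               # constant + 8 bits
coeffs, *_ = np.linalg.lstsq(basis, leakage, rcond=None)
print("estimated per-bit coefficients:", np.round(coeffs[1:], 2))
```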
2017
CHES
Very High Order Masking: Efficient Implementation and Security Evaluation
Anthony Journault, François-Xavier Standaert
In this paper, we study the performances and security of recent masking algorithms specialized to parallel implementations in a 32-bit embedded software platform, for the standard AES Rijndael and the bitslice cipher Fantomas. By exploiting the excellent features of these algorithms for bitslice implementations, we first extend the recent speed records of Goudarzi and Rivain (presented at Eurocrypt 2017) and report realistic timings for masked implementations with 32 shares. We then observe that the security level provided by such implementations is difficult to quantify with current evaluation tools. We therefore propose a new “multi-model” evaluation methodology which takes advantage of different (more or less abstract) security models introduced in the literature. This methodology allows us to both bound the security level of our implementations in a principled manner and to assess the risks of overstated security based on well understood parameters. Concretely, it leads us to conclude that these implementations withstand worst-case adversaries with >2^64 measurements under falsifiable assumptions.
2016
EUROCRYPT
2016
EUROCRYPT
2016
CRYPTO
2016
CHES
2016
CHES
2016
ASIACRYPT
2016
ASIACRYPT
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
FSE
2015
EUROCRYPT
2015
ASIACRYPT
2015
CHES
2015
CHES
2014
EUROCRYPT
2014
EPRINT
Moments-Correlating DPA
Amir Moradi, François-Xavier Standaert
2014
EPRINT
2014
EPRINT
2014
EPRINT
2014
EPRINT
2014
EPRINT
2014
ASIACRYPT
2014
CHES
2014
FSE
2013
CRYPTO
2013
CHES
2013
CHES
2013
EUROCRYPT
2012
EUROCRYPT
2012
CHES
2012
CHES
2012
CHES
2012
CHES
2012
ASIACRYPT
2011
CRYPTO
2011
CRYPTO
2011
EUROCRYPT
2011
CHES
2011
CHES
2011
JOFC
2010
CHES
2010
ASIACRYPT
2010
EPRINT
The World is Not Enough: Another Look on Second-Order DPA
In a recent work, Mangard et al. showed that under certain assumptions, the (so-called) standard univariate side-channel attacks using a distance-of-means test, correlation analysis and Gaussian templates are essentially equivalent. In this paper, we show that in the context of multivariate attacks against masked implementations, this conclusion does not hold anymore. In other words, while a single distinguisher can be used to compare the susceptibility of different unprotected devices to first-order DPA, understanding second-order attacks requires to carefully investigate the information leakages and the adversaries exploiting these leakages, separately. Using a framework put forward by Standaert et al. at Eurocrypt 2009, we provide the first analysis that considers these two questions in the case of a masked device exhibiting a Hamming weight leakage model. Our results lead to new intuitions regarding the efficiency of various practically-relevant distinguishers. Further, we also investigate the case of second- and third-order masking (i.e. using three and four shares to represent one value). It turns out that moving to higher-order masking only leads to significant security improvements if the secret sharing is combined with a sufficient amount of noise. Eventually, we show that an information theoretic analysis allows determining this necessary noise level, for different masking schemes and target security levels, with high accuracy and smaller data complexity than previous methods.
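The following sketch illustrates the kind of second-order leakage discussed above: with Boolean masking and Hamming-weight leakage, neither the mask sample nor the masked-value sample correlates with the sensitive value on its own, but their centered product does. The traces are simulated and the sketch only shows the combination step, not the paper's information theoretic analysis; the noise level is an arbitrary illustrative choice.

```python
# Second-order combination against Boolean masking (sketch): centered product of
# the two leaking samples correlates with HW(v), while each sample alone does not.
import numpy as np

rng = np.random.default_rng(3)
hw = np.array([bin(i).count("1") for i in range(256)])   # Hamming weight lookup table

n = 100000
v = rng.integers(0, 256, n)                 # sensitive values
m = rng.integers(0, 256, n)                 # random masks
l1 = hw[m] + rng.normal(0, 1.0, n)          # leakage of the mask share
l2 = hw[v ^ m] + rng.normal(0, 1.0, n)      # leakage of the masked value share

combined = (l1 - l1.mean()) * (l2 - l2.mean())           # centered product
corr1 = np.corrcoef(l1, hw[v])[0, 1]                     # first-order: ~0
corr2 = np.corrcoef(combined, hw[v])[0, 1]               # second-order: clearly nonzero
print(f"first-order correlation ~ {corr1:.3f}, second-order correlation ~ {corr2:.3f}")
```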
2009
CHES
2009
CHES
2009
CHES
2009
EUROCRYPT
2008
FSE
2008
EPRINT
Information Theoretic Evaluation of Side-Channel Resistant Logic Styles
Francois Mace, François-Xavier Standaert, Jean-Jacques Quisquater
We propose to apply an information theoretic metric to the evaluation of side-channel resistant logic styles. Due to the long design and development time required for the physical evaluation of such hardware countermeasures, our analysis is based on simulations. Although they do not aim to replace the need of actual measurements, we show that simulations can be used as a meaningful first step in the validation chain of a cryptographic product. For illustration purposes, we apply our methodology to gate-level simulations of different logic styles and stress that it allows a significant improvement of the previously considered evaluation methods. In particular, our results allow putting forward the respective strengths and weaknesses of actual countermeasures and determining to which extent they can practically lead to secure implementations (with respect to a noise parameter), if adversaries were provided with simulation-based side-channel traces. Most importantly, the proposed methodology can be straightforwardly adapted to adversaries provided with any other kind of leakage traces (including physical ones).
2008
EPRINT
Improving the Rules of the DPA Contest
A DPA contest has been launched at CHES 2008. The goal of this initiative is to make it possible for researchers to compare different side-channel attacks in an objective manner. For this purpose, a set of 80000 traces corresponding to the encryption of 80000 different plaintexts with the Data Encryption Standard and a fixed key has been made available. In this short note, we discuss the rules that the contest uses to rate the effectiveness of different distinguishers. We first describe practical examples of attacks in which these rules can be misleading. Then, we suggest an improved set of rules that can be implemented easily in order to obtain a better interpretation of the comparisons performed.
2008
CHES
2007
CHES
2007
EPRINT
Towards Security Limits in Side-Channel Attacks
In this paper, we consider a recently introduced framework that investigates physically observable implementations from a theoretical point of view. The model allows quantifying the effect of practically relevant leakage functions with a combination of security and information theoretic metrics. More specifically, we apply our evaluation methodology to an exemplary block cipher. We first consider a Hamming weight leakage function and evaluate the efficiency of two commonly investigated countermeasures, namely noise addition and masking. Then, we show that the proposed methodology allows capturing certain non-trivial intuitions, e.g. about the respective effectiveness of these countermeasures. Finally, we justify the need of combined metrics for the evaluation, comparison and understanding of side-channel attacks.
2007
EPRINT
A Block Cipher based PRNG Secure Against Side-Channel Key Recovery
We study the security of a block cipher-based pseudorandom number generator (PRNG), both in the black box world and in the physical world, separately. We first show that the construction is a secure PRNG in the black box world, relying on standard computational assumptions. Then, we demonstrate its security against a Bayesian side-channel key recovery adversary. As a main result, we show that our construction guarantees that the success rate of the adversary increases with the number of physical observations only in a limited and controlled way. Besides, we observe that, under common assumptions on side-channel attack strategies, increasing the security parameter (typically the block cipher key size) by a polynomial factor involves an increase of a side-channel attack complexity by an exponential factor, as usually expected for secure cryptographic primitives. Therefore, we believe this work provides a first interesting example of the way the algorithmic design of a cryptographic scheme influences its side-channel resistance.
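To make the rekeying intuition above concrete, here is a minimal sketch of a PRNG that uses each key for a single output block before refreshing it, so that side-channel observations of any single key are bounded. SHA-256 in a PRF-like role stands in for the block cipher of the actual construction; the class name and labels are illustrative assumptions, not the paper's specification.

```python
# Rekeying PRNG sketch: one output block per key, then the key is refreshed.
import hashlib

class RekeyingPRNG:
    def __init__(self, key: bytes):
        self.key = key

    def _prf(self, label: bytes) -> bytes:
        # SHA-256 used in a PRF-like role as a stand-in for the block cipher.
        return hashlib.sha256(self.key + label).digest()

    def next_block(self) -> bytes:
        out = self._prf(b"output")      # one output block under the current key
        self.key = self._prf(b"rekey")  # immediately refresh the key
        return out

prng = RekeyingPRNG(b"\x00" * 16)
print(prng.next_block().hex())
print(prng.next_block().hex())
```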
2006
CHES
2006
CHES
2006
EPRINT
A Unified Framework for the Analysis of Side-Channel Key Recovery Attacks (extended version)
François-Xavier Standaert, Tal G. Malkin, Moti Yung
The fair evaluation and comparison of side-channel attacks and countermeasures has been a long standing open question, limiting further developments in the field. Motivated by this challenge, this work makes a step in this direction and proposes a framework for the analysis of cryptographic implementations that includes a theoretical model and an application methodology. The model is based on commonly accepted hypotheses about side-channels that computations give rise to. It allows quantifying the effect of practically relevant leakage functions with a combination of information theoretic and security metrics, measuring the quality of an implementation and the strength of an adversary, respectively. From a theoretical point of view, we demonstrate formal connections between these metrics and discuss their intuitive meaning. From a practical point of view, the model implies a unified methodology for the analysis of side-channel key recovery attacks. The proposed solution allows getting rid of most of the subjective parameters that were limiting previous specialized and often ad hoc approaches in the evaluation of physically observable devices. It typically determines the extent to which basic (but practically essential) questions such as "How to compare two implementations?" or "How to compare two side-channel adversaries?" can be answered in a sound fashion.
2005
CHES
2004
CHES
2004
FSE
2003
CHES
2002
CHES

Program Committees

FSE 2020
Eurocrypt 2020
CHES 2018
FSE 2018
Asiacrypt 2017
FSE 2016
FSE 2015
Eurocrypt 2015
Crypto 2015
FSE 2014
Eurocrypt 2014
FSE 2013
Asiacrypt 2013
FSE 2012
Crypto 2012
Crypto 2011
CHES 2010
Asiacrypt 2010
Asiacrypt 2009
CHES 2006
CHES 2005

Coauthors

Cédric Archambeau (4)
Stéphane Badel (1)
Josep Balasch (2)
Boaz Barak (1)
Gilles Barthe (1)
Lejla Batina (1)
Sonia Belaïd (1)
Davide Bellizia (3)
Daniel J. Bernstein (1)
Francesco Berti (3)
Begül Bilgin (1)
Andrey Bogdanov (1)
David Bol (1)
Hai Brenner (1)
Philip Brisk (1)
Olivier Bronchain (6)
Nicolas Bruneau (1)
Philippe Bulens (1)
Giovanni Camurati (1)
Claude Carlet (1)
Gaëtan Cassiers (4)
Alessandro Cevrero (1)
Baudoin Collard (1)
Lauren De Meyer (1)
Yves Deville (1)
Yevgeniy Dodis (1)
Nicolas Donckers (1)
Alexandre Duc (3)
François Dupressoir (1)
François Durvaux (7)
Sébastien Duval (2)
Stefan Dziembowski (1)
Sebastian Faust (9)
Martin Feldhofer (1)
Denis Flandre (2)
Aurélien Francillon (1)
Rong Fu (1)
Lubos Gaspar (2)
Benoît Gérard (5)
Benedikt Gierlichs (5)
Cezary Glowacz (1)
Benjamin Grégoire (1)
Vincent Grosso (15)
Dawu Gu (1)
Sylvain Guilley (1)
Qian Guo (1)
Zheng Guo (1)
Chun Guo (5)
Julien M. Hendrickx (1)
Gottfried Herold (1)
Annelie Heuser (1)
Cédric Hocquet (1)
Paolo Ienne (1)
Anthony Journault (3)
Antoine Joux (1)
Dina Kamel (2)
Markus Kasper (2)
Stéphanie Kerckhof (2)
Theo Kluter (1)
Lars R. Knudsen (1)
Stefan Kölbl (1)
Hugo Krawczyk (1)
Gregor Leander (2)
Yusuf Leblebici (1)
Jean-Didier Legat (3)
Gaëtan Leurent (3)
Itamar Levi (3)
Junrong Liu (1)
Stefan Lucks (1)
Francois Mace (2)
Jean-Baptiste Mairy (1)
Tal Malkin (3)
Stefan Mangard (3)
Daniel Masny (1)
Clément Massart (1)
Pedro Maat Costa Massolino (1)
Pierrick Méaux (2)
Marcel Medwed (6)
Florian Mendel (1)
Giacomo de Meulenaer (1)
Charles Momin (2)
Thorben Moos (1)
Amir Moradi (3)
Kashif Nawaz (1)
María Naya-Plasencia (1)
Ventzislav Nikov (1)
Alex Olshevsky (1)
Yossef Oren (1)
Siddika Berna Örs (1)
Elisabeth Oswald (2)
Clara Paglialonga (2)
Eric Peeters (4)
Olivier Pereira (8)
Thomas Peters (5)
Christophe Petit (1)
Krzysztof Pietrzak (1)
Gilles Piret (1)
Romain Poussier (4)
Santos Merino Del Pozo (3)
Bart Preneel (1)
Emmanuel Prouff (1)
Jean-Jacques Quisquater (10)
Francesco Regazzoni (1)
Mathieu Renauld (4)
Oscar Reparaz (1)
Bastian Richter (1)
Olivier Rioul (1)
Matthieu Rivain (1)
Alon Rosen (1)
Gaël Rouvroy (3)
Tobias Schneider (4)
Joachim Schüth (1)
Peter Schwabe (1)
John P. Steinberger (1)
Pierre-Yves Strub (1)
Yannick Teglia (1)
Elmar Tischhauser (1)
Yosuke Todo (1)
Balazs Udvarhelyi (1)
Kerem Varici (1)
Nicolas Veyrat-Charvillon (14)
Benoît Viguier (1)
Weijia Wang (3)
Friedrich Wiemer (1)
Avishai Wool (1)
Sen Xu (1)
Yu Yu (4)
Moti Yung (3)
Yuanyuan Zhou (1)