International Association for Cryptologic Research

CryptoDB

François-Xavier Standaert

Affiliation: UCL, Belgium

Publications

Year / Venue / Title
2019
EUROCRYPT
2019
TCHES
Towards Globally Optimized Masking: From Low Randomness to Low Noise Rate
Gaëtan Cassiers François-Xavier Standaert
We improve the state-of-the-art masking schemes in two important directions. First, we propose a new masked multiplication algorithm that satisfies a recently introduced notion called Probe-Isolating Non-Interference (PINI). It captures a sufficient requirement for designing masked implementations in a trivial way, by combining PINI multiplications and linear operations performed share by share. Our improved algorithm has the best reported randomness complexity for large security orders (while the previous PINI multiplication was best for small orders). Second, we analyze the security of most existing multiplication algorithms in the literature against so-called horizontal attacks, which aim to reduce the noise of the actual leakages measured by an adversary by combining the information of multiple target intermediate values. For this purpose, we leave the (abstract) probing model and consider a specialization of the (more realistic) noisy leakage / random probing models. Our (still partially heuristic but quantitative) analysis allows us to confirm the improved security of an algorithm by Battistello et al. from CHES 2016 in this setting. We then use it to propose new improved algorithms, leading to better tradeoffs between randomness complexity and noise rate, and suggesting the possibility of designing efficient masked multiplication algorithms with constant noise rate in F2.
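A minimal sketch of the building blocks this abstract refers to, assuming Boolean masking over F2 (this shows the classical ISW-style multiplication, not the paper's PINI algorithm): linear operations act share by share, while multiplications consume fresh randomness. In a real implementation the evaluation order matters for probing security; this only illustrates correctness.

```python
import secrets

def xor_all(xs):
    acc = 0
    for x in xs:
        acc ^= x
    return acc

def share(bit, n):
    """Split a bit into n Boolean shares whose XOR equals the bit."""
    s = [secrets.randbits(1) for _ in range(n - 1)]
    return s + [bit ^ xor_all(s)]

def isw_and(a, b):
    """ISW-style multiplication: returns a sharing of the AND of the secrets."""
    n = len(a)
    c = [a[i] & b[i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbits(1)                   # fresh random bit per pair
            c[i] ^= r
            c[j] ^= r ^ (a[i] & b[j]) ^ (a[j] & b[i])
    return c

x, y = share(1, 3), share(1, 3)
assert xor_all(isw_and(x, y)) == 1
```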
2019
TCHES
Glitch-Resistant Masking Revisited
Thorben Moos Amir Moradi Tobias Schneider François-Xavier Standaert
Implementing the masking countermeasure in hardware is a delicate task. Various solutions have been proposed for this purpose over the last years: we focus on Threshold Implementations (TIs), Domain-Oriented Masking (DOM), the Unified Masking Approach (UMA) and Generic Low Latency Masking (GLM). These schemes generally come with innovative ideas to cope with physical defaults such as glitches. Yet, and in contrast to the situation in software-oriented masking, they have not been formally proven at arbitrary security orders and their composability properties were left unclear. So far, only a 2-cycle implementation of the seminal masking scheme by Ishai, Sahai and Wagner has been shown secure and composable in the robust probing model, a variation of the probing model aimed at capturing physical defaults such as glitches, for any number of shares. In this paper, we argue that this lack of proofs for TIs, DOM, UMA and GLM makes the interpretation of their security guarantees difficult as the number of shares increases. For this purpose, we first put forward that the higher-order variants of all these schemes are affected by (local or composability) security flaws in the (robust) probing model, due to insufficient refreshing. We then show that composability and robustness against glitches cannot be analyzed independently. We finally detail how these abstract flaws translate into concrete (experimental) attacks, and discuss the additional constraints that robust probing security implies on the need for registers. Despite not systematically leading to improved complexities at low security orders, e.g., with respect to the required number of measurements for a successful attack, we argue that these weaknesses make a case for security proofs in the robust probing model (or a similar abstraction) at higher security orders.
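For illustration, a minimal sketch of a share-refreshing gadget (our own toy example, not taken from the paper): this cheap "ring" refresh preserves the masked value, yet refreshes of exactly this simple kind are known to be insufficient for composability at higher orders, the sort of flaw a robust-probing analysis makes explicit.

```python
import secrets

def ring_refresh(shares):
    """Re-randomize a Boolean sharing without changing the masked value."""
    n = len(shares)
    r = [secrets.randbits(1) for _ in range(n)]
    # each share absorbs one fresh mask and passes another to its neighbour;
    # the XOR of all output shares equals the XOR of all input shares
    return [shares[i] ^ r[i] ^ r[(i + 1) % n] for i in range(n)]
```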
2019
TCHES
Reducing a Masked Implementation’s Effective Security Order with Setup Manipulations
Itamar Levi Davide Bellizia François-Xavier Standaert
Couplings are a type of physical default that can violate the independence assumption needed for the secure implementation of the masking countermeasure. Two recent works by De Cnudde et al. put forward qualitatively that couplings can cause information leakages of lower order than theoretically expected. However, the (quantitative) amplitude of these lower-order leakages (e.g., measured as the amplitude of a detection metric such as Welch’s T statistic) was usually lower than that of the (theoretically expected) d-th order leakages, so the actual security level of these implementations remained unaffected. In addition, in order to make the couplings visible, the authors sometimes needed to amplify them internally (e.g., by tweaking the placement and routing or iterating linear operations on the shares). In this paper, we first show that the amplitude of low-order leakages in masked implementations can be amplified externally, by tweaking side-channel measurement setups in a way that is under the control of a power analysis adversary. Our experiments put forward that the “effective security order” of both hardware (FPGA) and software (ARM-32) implementations can be reduced, leading to concrete reductions of their security level. For this purpose, we move from the detection-based analyses of previous works to attack-based evaluations, allowing us to confirm the exploitability of the lower-order leakages that we amplify. We also provide a tentative explanation for these effects based on couplings, and describe a model that can be used to predict them as a function of the measurement setup’s external resistor and the implementation’s supply voltage. We posit that the effective security orders observed are mainly due to “externally-amplified couplings” that can be systematically exploited by actual adversaries.
2019
TCHES
Multi-Tuple Leakage Detection and the Dependent Signal Issue
Olivier Bronchain Tobias Schneider François-Xavier Standaert
Leakage detection is a common tool to quickly assess the security of a cryptographic implementation against side-channel attacks. The Test Vector Leakage Assessment (TVLA) methodology using Welch’s t-test, proposed by Cryptography Research, is currently the most popular example of such tools, thanks to its simplicity and good detection speed compared to attack-based evaluations. However, like any statistical test, it is based on certain assumptions about the processed samples, and its detection performance strongly depends on parameters like the measurement’s Signal-to-Noise Ratio (SNR), the samples’ degree of dependency, and their density, i.e., the ratio between the amount of informative and non-informative points in the traces. In this paper, we argue that the correct interpretation of leakage detection results requires knowledge of these parameters, which are a priori unknown to the evaluator, and therefore poses a non-trivial challenge (especially if the evaluator is restricted to only one test). For this purpose, we first explore the concept of multi-tuple detection, which is able to exploit differences between multiple informative points of a trace more effectively than tests relying on the minimum p-value of concurrent univariate tests. To this end, we map the common Hotelling’s T2-test to the leakage detection setting and, further, propose a specialized instantiation of it which trades computational overheads for a dependency assumption. Our experiments show that there is no single test that is the optimal choice for every leakage scenario. Second, we highlight the importance of the assumption that the samples at each point in time are independent, which is frequently made in leakage detection, e.g., with Welch’s t-test. Using simulated and practical experiments, we show that (i) this assumption is often violated in practice, and (ii) deviations from it can affect the detection performance, making the correct interpretation of the results more difficult. Finally, we consolidate our findings by providing guidelines on how to use a combination of established and newly proposed leakage detection tools to infer the measurement parameters. This enables a better interpretation of the tests’ results than the current state of the art (yet still relying on heuristics for the most challenging evaluation scenarios).
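The two detection flavours discussed above can be sketched on simulated traces (a toy example with assumed parameters: 5,000 traces per class, one leaking sample, the common |t| > 4.5 threshold; the paper's specialized T2 instantiation is not reproduced here):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n, samples = 5000, 100
fixed = rng.normal(0, 1, (n, samples))
rand_ = rng.normal(0, 1, (n, samples))
fixed[:, 40] += 0.2                                # inject a small first-order leak

# Univariate TVLA-style detection: Welch's t-test at every sample
t, _ = ttest_ind(fixed, rand_, axis=0, equal_var=False)
print(np.where(np.abs(t) > 4.5)[0])                # should flag sample 40

# Multi-tuple detection: two-sample Hotelling's T^2 over a tuple of points
pts = np.array([39, 40, 41])
d = fixed[:, pts].mean(0) - rand_[:, pts].mean(0)
S = (np.cov(fixed[:, pts], rowvar=False) + np.cov(rand_[:, pts], rowvar=False)) / 2
T2 = (n / 2) * d @ np.linalg.solve(S, d)           # large when the tuple leaks
print(T2)
```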
2019
CRYPTO
Leakage Certification Revisited: Bounding Model Errors in Side-Channel Security Evaluations
Leakage certification aims at guaranteeing that the statistical models used in side-channel security evaluations are close to the true statistical distribution of the leakages, hence can be used to approximate a worst-case security level. Previous works in this direction were only qualitative: for a given amount of measurements available to an evaluation laboratory, they rated a model as “good enough” if the model assumption errors (i.e., the errors due to an incorrect choice of model family) were small with respect to the model estimation errors. We revisit this problem by providing the first quantitative tools for leakage certification. For this purpose, we provide bounds for the (unknown) Mutual Information metric that corresponds to the true statistical distribution of the leakages based on two easy-to-compute information theoretic quantities: the Perceived Information, which is the amount of information that can be extracted from a leaking device thanks to an estimated statistical model, possibly biased due to estimation and assumption errors, and the Hypothetical Information, which is the amount of information that would be extracted from a hypothetical device exactly following the model distribution. This positive outcome derives from the observation that while the estimation of the Mutual Information is in general a hard problem (i.e., estimators are biased and their convergence is distribution-dependent), it is significantly simplified in the case of statistical inference attacks where a target random variable (e.g., a key in a cryptographic setting) has a constant (e.g., uniform) probability. Our results therefore provide a general and principled path to bound the worst-case security level of an implementation. They also significantly speed up the evaluation of any profiled side-channel attack, since they imply that the estimation of the Perceived Information, which embeds an expensive cross-validation step, can be bounded by the computation of a cheaper Hypothetical Information, for any estimated statistical model.
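In the notation suggested by the abstract (reconstructed here, so details may differ from the paper: f denotes the true leakage distribution and m-hat the estimated model), the three quantities and the sandwich bound, which the paper makes precise for targets with constant probability, read:

```latex
% Reconstructed from the abstract's description; not copied from the paper.
\begin{align*}
\mathrm{MI}(X;L) &= \mathrm{H}(X) + \sum_{x} \Pr[x] \int_{l} f(l \mid x)\, \log_2 p(x \mid l)\, dl\\
\mathrm{PI}(X;L) &= \mathrm{H}(X) + \sum_{x} \Pr[x] \int_{l} f(l \mid x)\, \log_2 \hat{m}(x \mid l)\, dl\\
\mathrm{HI}(X;L) &= \mathrm{H}(X) + \sum_{x} \Pr[x] \int_{l} \hat{m}(l \mid x)\, \log_2 \hat{m}(x \mid l)\, dl\\
\mathrm{PI}(X;L) &\le \mathrm{MI}(X;L) \le \mathrm{HI}(X;L)
\end{align*}
```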
2019
TCHES
TEDT, a Leakage-Resist AEAD Mode for High Physical Security Applications
We propose TEDT, a new Authenticated Encryption with Associated Data (AEAD) mode leveraging Tweakable Block Ciphers (TBCs). TEDT provides the following features: (i) It offers full leakage-resistance, that is, it limits the exploitability of physical leakages via side-channel attacks, even if these leakages happen during every message encryption and decryption operation. Moreover, the leakage integrity bound is asymptotically optimal in the multi-user setting. (ii) It offers nonce misuse-resilience, that is, the repetition of nonces does not impact the security of ciphertexts produced with fresh nonces. (iii) It can be implemented with a remarkably low energy cost when strong resistance to side-channel attacks is needed, supports online encryption and handles static and incremental associated data efficiently. Concretely, TEDT encourages so-called leveled implementations, in which two TBCs are implemented: the first one needs strong and energy-demanding protections against side-channel attacks but is used in a limited way, while the other only requires weak and energy-efficient protections and performs the bulk of the computation. As a result, TEDT leads to more energy-efficient implementations compared to traditional AEAD schemes, whose side-channel security requires uniformly protecting every (T)BC execution.
2018
EUROCRYPT
2018
TCHES
Leakage Detection with the χ2-Test
Amir Moradi Bastian Richter Tobias Schneider François-Xavier Standaert
We describe how Pearson’s χ2-test can be used as a natural complement to Welch’s t-test for black box leakage detection. In particular, we show that by using these two tests in combination, we can mitigate some of the limitations due to the moment-based nature of existing detection techniques based on Welch’s t-test (e.g., for the evaluation of higher-order masked implementations with insufficient noise). We also show that Pearson’s χ2-test is naturally suited to analyze threshold implementations with information lying in multiple statistical moments, and can be easily extended to a distinguisher for key recovery attacks. As a result, we believe the proposed test and methodology are interesting complementary ingredients of the side-channel evaluation toolbox, for black box leakage detection and non-profiled attacks, and as a preliminary before more demanding advanced analyses.
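A minimal sketch of the χ2-test as a leakage detector (our own toy example with assumed parameters): the samples at one point in time are binned for the fixed and random classes, and Pearson's χ2-test checks whether the two histograms follow the same distribution. Unlike a t-test on means, this also catches differences confined to higher statistical moments.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
fixed = rng.normal(0, 1.0, 10000)
rand_ = rng.normal(0, 1.3, 10000)        # same mean, different variance

# shared bin edges, then a 2 x bins contingency table of counts
edges = np.histogram_bin_edges(np.concatenate([fixed, rand_]), bins=7)
table = np.array([np.histogram(fixed, edges)[0],
                  np.histogram(rand_, edges)[0]])
chi2, p, dof, _ = chi2_contingency(table)
print(chi2, p)   # tiny p-value: leakage detected despite equal means
```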
2018
TCHES
Composable Masking Schemes in the Presence of Physical Defaults & the Robust Probing Model
Composability and robustness against physical defaults (e.g., glitches) are two highly desirable properties for secure implementations of masking schemes. While tools exist to guarantee them separately, no current formalism enables their joint investigation. In this paper, we solve this issue by introducing a new model, the robust probing model, that is naturally suited to capture the combination of these properties. We first motivate this formalism by analyzing the excellent robustness and low randomness requirements of first-order threshold implementations, and highlighting the difficulty of extending them to higher orders. Next, and most importantly, we use our theory to design and prove the first higher-order secure, robust and composable multiplication gadgets. While admittedly inspired by existing approaches to masking (e.g., Ishai-Sahai-Wagner-like, threshold, domain-oriented), these gadgets exhibit subtle implementation differences from these state-of-the-art solutions (none of which is provably composable and robust). Hence, our results illustrate how sound theoretical models can guide practically-relevant implementations.
2017
EUROCRYPT
2017
ASIACRYPT
2017
CHES
Gimli : A Cross-Platform Permutation
This paper presents Gimli, a 384-bit permutation designed to achieve high security with high performance across a broad range of platforms, including 64-bit Intel/AMD server CPUs, 64-bit and 32-bit ARM smartphone CPUs, 32-bit ARM microcontrollers, 8-bit AVR microcontrollers, FPGAs, ASICs without side-channel protection, and ASICs with side-channel protection.
2017
TOSC
On Leakage-Resilient Authenticated Encryption with Decryption Leakages
Francesco Berti Olivier Pereira Thomas Peters François-Xavier Standaert
At CCS 2015, Pereira et al. introduced a pragmatic model enabling the study of leakage-resilient symmetric cryptographic primitives based on the minimal use of a leak-free component. This model was recently used to prove the good integrity and confidentiality properties of an authenticated encryption scheme called DTE when the adversary is only given encryption leakages. In this paper, we extend this work by analyzing the case where decryption leakages are also available. We first exhibit attacks exploiting such leakages against the integrity of DTE (and variants) and show how to mitigate them. We then consider message confidentiality in a context where an adversary can observe decryption leakages but not the corresponding messages. The latter is motivated by applications such as secure bootloading and bitstream decryption. We finally formalize the confidentiality requirements that can be achieved in this case and propose a new construction satisfying them, while providing integrity properties with leakage that are as good as those of DTE.
2017
CHES
A Systematic Approach to the Side-Channel Analysis of ECC Implementations with Worst-Case Horizontal Attacks
Romain Poussier Yuanyuan Zhou François-Xavier Standaert
The large number and variety of side-channel attacks against scalar multiplication algorithms make their security evaluations complex, in particular when time constraints rule out exhaustive analyses. In this paper, we present a systematic way to evaluate the security of such implementations against horizontal attacks. As horizontal attacks allow extracting most of the information in the leakage traces of scalar multiplications, they are suitable to avoid risks of overestimated security levels. For this purpose, we additionally propose to use linear regression in order to accurately characterize the leakage function and therefore approach worst-case security evaluations. We then show how to apply our tools in the contexts of ECDSA and ECDH implementations, and validate them against two targets: Cortex-M4 and Cortex-A8 microcontrollers.
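A minimal sketch of the linear-regression characterization step (our own toy example with made-up coefficients): the leakage of an 8-bit intermediate is modeled as a linear combination of its bits plus noise, and ordinary least squares recovers the device's leakage function without assuming an exact Hamming-weight model.

```python
import numpy as np

rng = np.random.default_rng(2)
vals = rng.integers(0, 256, 20000)                    # 8-bit intermediate values
bits = (vals[:, None] >> np.arange(8)) & 1            # basis: the individual bits
true_w = np.array([1.0, 0.9, 1.1, 1.0, 0.8, 1.2, 1.0, 0.95])
leak = bits @ true_w + rng.normal(0, 0.5, vals.size)  # simulated noisy leakage

X = np.column_stack([np.ones(vals.size), bits])       # add an intercept column
coef, *_ = np.linalg.lstsq(X, leak, rcond=None)       # fitted leakage model
print(coef[1:])                                       # approximately true_w
```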
2017
CHES
Very High Order Masking: Efficient Implementation and Security Evaluation
Anthony Journault François-Xavier Standaert
In this paper, we study the performance and security of recent masking algorithms specialized to parallel implementations in a 32-bit embedded software platform, for the standard AES Rijndael and the bitslice cipher Fantomas. By exploiting the excellent features of these algorithms for bitslice implementations, we first extend the recent speed records of Goudarzi and Rivain (presented at Eurocrypt 2017) and report realistic timings for masked implementations with 32 shares. We then observe that the security level provided by such implementations is difficult to quantify with current evaluation tools. We therefore propose a new “multi-model” evaluation methodology which takes advantage of different (more or less abstract) security models introduced in the literature. This methodology allows us both to bound the security level of our implementations in a principled manner and to assess the risks of overstated security based on well understood parameters. Concretely, it leads us to conclude that these implementations withstand worst-case adversaries with more than 2^64 measurements under falsifiable assumptions.
2016
EUROCRYPT
2016
EUROCRYPT
2016
CRYPTO
2016
CHES
2016
CHES
2016
ASIACRYPT
2016
ASIACRYPT
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
FSE
2015
EUROCRYPT
2015
ASIACRYPT
2015
CHES
2015
CHES
2014
EUROCRYPT
2014
EPRINT
Moments-Correlating DPA
Amir Moradi François-Xavier Standaert
2014
EPRINT
2014
EPRINT
2014
EPRINT
2014
EPRINT
2014
EPRINT
2014
ASIACRYPT
2014
CHES
2014
FSE
2013
CRYPTO
2013
CHES
2013
CHES
2013
EUROCRYPT
2012
EUROCRYPT
2012
CHES
2012
CHES
2012
CHES
2012
CHES
2012
ASIACRYPT
2011
CRYPTO
2011
CRYPTO
2011
EUROCRYPT
2011
CHES
2011
CHES
2011
JOFC
2010
CHES
2010
ASIACRYPT
2010
EPRINT
The World is Not Enough: Another Look on Second-Order DPA
In a recent work, Mangard et al. showed that under certain assumptions, the (so-called) standard univariate side-channel attacks using a distance-of-means test, correlation analysis and Gaussian templates are essentially equivalent. In this paper, we show that in the context of multivariate attacks against masked implementations, this conclusion does not hold anymore. In other words, while a single distinguisher can be used to compare the susceptibility of different unprotected devices to first-order DPA, understanding second-order attacks requires carefully investigating the information leakages and the adversaries exploiting these leakages, separately. Using a framework put forward by Standaert et al. at Eurocrypt 2009, we provide the first analysis that considers these two questions in the case of a masked device exhibiting a Hamming weight leakage model. Our results lead to new intuitions regarding the efficiency of various practically-relevant distinguishers. Further, we also investigate the case of second- and third-order masking (i.e., using three and four shares to represent one value). It turns out that moving to higher-order masking only leads to significant security improvements if the secret sharing is combined with a sufficient amount of noise. Finally, we show that an information theoretic analysis allows determining this necessary noise level, for different masking schemes and target security levels, with high accuracy and smaller data complexity than previous methods.
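A toy simulation of the setting studied here (assumed parameters throughout; the S-box is a placeholder bijection rather than a real cipher component): first-order Boolean masking with Hamming-weight leakage, attacked at second order by combining the two share leakages with a centered product before correlating with key-dependent predictions.

```python
import numpy as np

HW = np.array([bin(x).count("1") for x in range(256)])
SBOX = (np.arange(256) * 167 + 13) % 256           # placeholder bijection

rng = np.random.default_rng(3)
key, n = 42, 50000
pt = rng.integers(0, 256, n)
mask = rng.integers(0, 256, n)
l1 = HW[mask] + rng.normal(0, 1, n)                # leakage of the mask
l2 = HW[SBOX[pt ^ key] ^ mask] + rng.normal(0, 1, n)  # masked S-box output

comb = (l1 - l1.mean()) * (l2 - l2.mean())         # centered-product combination
scores = [abs(np.corrcoef(comb, HW[SBOX[pt ^ k]])[0, 1]) for k in range(256)]
print(np.argmax(scores))                           # recovers key = 42
```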
2009
CHES
2009
CHES
2009
CHES
2009
EUROCRYPT
2008
FSE
2008
EPRINT
Information Theoretic Evaluation of Side-Channel Resistant Logic Styles
Francois Mace François-Xavier Standaert Jean-Jacques Quisquater
We propose to apply an information theoretic metric to the evaluation of side-channel resistant logic styles. Due to the long design and development time required for the physical evaluation of such hardware countermeasures, our analysis is based on simulations. Although they do not aim to replace the need for actual measurements, we show that simulations can be used as a meaningful first step in the validation chain of a cryptographic product. For illustration purposes, we apply our methodology to gate-level simulations of different logic styles and stress that it allows a significant improvement over the previously considered evaluation methods. In particular, our results allow putting forward the respective strengths and weaknesses of actual countermeasures and determining to what extent they can practically lead to secure implementations (with respect to a noise parameter), if adversaries were provided with simulation-based side-channel traces. Most importantly, the proposed methodology can be straightforwardly adapted to adversaries provided with any other kind of leakage traces (including physical ones).
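The information theoretic metric itself can be sketched numerically (our own toy setup, not the paper's gate-level simulations: a uniform 4-bit secret, Hamming-weight leakage, Gaussian noise of standard deviation sigma), via Monte Carlo estimation of the mutual information between secret and leakage.

```python
import numpy as np

HW = np.array([bin(x).count("1") for x in range(16)])
sigma = 1.0
rng = np.random.default_rng(4)

def mi_bits(n=200000):
    x = rng.integers(0, 16, n)
    l = HW[x] + rng.normal(0, sigma, n)
    # unnormalized Gaussian density of l under every candidate secret
    dens = np.exp(-(l[:, None] - HW[None, :]) ** 2 / (2 * sigma ** 2))
    # p(x|l)/p(x) with a uniform prior over the 16 candidates
    ratio = dens[np.arange(n), x] / dens.sum(1) * 16
    return np.log2(ratio).mean()       # MI = E[log2 p(x|l)/p(x)]

print(mi_bits())                       # shrinks as the noise parameter grows
```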
2008
EPRINT
Improving the Rules of the DPA Contest
A DPA contest was launched at CHES 2008. The goal of this initiative is to make it possible for researchers to compare different side-channel attacks in an objective manner. For this purpose, a set of 80,000 traces corresponding to the encryption of 80,000 different plaintexts with the Data Encryption Standard and a fixed key has been made available. In this short note, we discuss the rules that the contest uses to rate the effectiveness of different distinguishers. We first describe practical examples of attacks in which these rules can be misleading. Then, we suggest an improved set of rules that can be implemented easily in order to obtain a better interpretation of the comparisons performed.
2008
CHES
2007
CHES
2007
EPRINT
Towards Security Limits in Side-Channel Attacks
In this paper, we consider a recently introduced framework that investigates physically observable implementations from a theoretical point of view. The model allows quantifying the effect of practically relevant leakage functions with a combination of security and information theoretic metrics. More specifically, we apply our evaluation methodology to an exemplary block cipher. We first consider a Hamming weight leakage function and evaluate the efficiency of two commonly investigated countermeasures, namely noise addition and masking. Then, we show that the proposed methodology allows capturing certain non-trivial intuitions, e.g., about the respective effectiveness of these countermeasures. Finally, we justify the need for combined metrics for the evaluation, comparison and understanding of side-channel attacks.
2007
EPRINT
A Block Cipher based PRNG Secure Against Side-Channel Key Recovery
We study the security of a block cipher-based pseudorandom number generator (PRNG), both in the black box world and in the physical world, separately. We first show that the construction is a secure PRNG in the black box world, relying on standard computational assumptions. Then, we demonstrate its security against a Bayesian side-channel key recovery adversary. As a main result, we show that our construction guarantees that the success rate of the adversary increases with the number of physical observations only in a limited and controlled way. Besides, we observe that, under common assumptions on side-channel attack strategies, increasing the security parameter (typically the block cipher key size) by a polynomial factor implies an increase of the side-channel attack complexity by an exponential factor, as usually expected for secure cryptographic primitives. Therefore, we believe this work provides a first interesting example of the way the algorithmic design of a cryptographic scheme influences its side-channel resistance.
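A minimal sketch of a re-keying PRNG in the spirit of this construction (assumed details, not the paper's exact scheme; pycryptodome's AES stands in for the block cipher): each key encrypts only two fixed constants, one to produce output and one to derive the next key, so any single key is exposed through a bounded number of leakages.

```python
from Crypto.Cipher import AES  # pycryptodome

C0 = bytes(16)                 # fixed, distinct constants
C1 = bytes([1]) + bytes(15)

def prng(seed: bytes, nblocks: int):
    """Yield nblocks of output; each key is used for exactly two encryptions."""
    k = seed
    for _ in range(nblocks):
        e = AES.new(k, AES.MODE_ECB)
        yield e.encrypt(C1)    # one output block per key
        k = e.encrypt(C0)      # forward-secure key update

stream = b"".join(prng(bytes(16), 4))
```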
2006
CHES
2006
CHES
2006
EPRINT
A Unified Framework for the Analysis of Side-Channel Key Recovery Attacks (extended version)
François-Xavier Standaert Tal G. Malkin Moti Yung
The fair evaluation and comparison of side-channel attacks and countermeasures has been a long-standing open question, limiting further developments in the field. Motivated by this challenge, this work takes a step in this direction and proposes a framework for the analysis of cryptographic implementations that includes a theoretical model and an application methodology. The model is based on commonly accepted hypotheses about the side-channels that computations give rise to. It allows quantifying the effect of practically relevant leakage functions with a combination of information theoretic and security metrics, measuring the quality of an implementation and the strength of an adversary, respectively. From a theoretical point of view, we demonstrate formal connections between these metrics and discuss their intuitive meaning. From a practical point of view, the model implies a unified methodology for the analysis of side-channel key recovery attacks. The proposed solution allows getting rid of most of the subjective parameters that limited previous specialized and often ad hoc approaches in the evaluation of physically observable devices. It typically determines the extent to which basic (but practically essential) questions such as "How to compare two implementations?" or "How to compare two side-channel adversaries?" can be answered in a sound fashion.
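The framework's security metrics can be sketched on a toy Bayesian (template-style) attack against a 4-bit key with Hamming-weight leakage (all parameters assumed): the first-order success rate is the probability that the correct key is ranked first, and the guessing entropy is its average rank.

```python
import numpy as np

HW = np.array([bin(x).count("1") for x in range(16)])
rng = np.random.default_rng(5)
sigma, q, trials = 1.0, 20, 1000          # noise, traces per attack, attacks

ranks = []
for _ in range(trials):
    key = rng.integers(16)
    pt = rng.integers(0, 16, q)
    leak = HW[pt ^ key] + rng.normal(0, sigma, q)
    # Gaussian log-likelihood of every key candidate given the traces
    ll = -((leak[:, None] - HW[pt[:, None] ^ np.arange(16)]) ** 2).sum(0)
    order = np.argsort(-ll)               # best candidate first
    ranks.append(int(np.where(order == key)[0][0]) + 1)

ranks = np.array(ranks)
print("success rate:", (ranks == 1).mean())   # security metric 1
print("guessing entropy:", ranks.mean())      # security metric 2
```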
2005
CHES
2004
CHES
2004
FSE
2003
CHES
2002
CHES

Program Committees

FSE 2020
Eurocrypt 2020
CHES 2018
FSE 2018
Asiacrypt 2017
FSE 2016
FSE 2015
Eurocrypt 2015
Crypto 2015
FSE 2014
Eurocrypt 2014
FSE 2013
Asiacrypt 2013
FSE 2012
Crypto 2012
Crypto 2011
CHES 2010
Asiacrypt 2010
Asiacrypt 2009
CHES 2006
CHES 2005

Coauthors

Cédric Archambeau (4)
Stéphane Badel (1)
Josep Balasch (2)
Boaz Barak (1)
Gilles Barthe (1)
Lejla Batina (1)
Sonia Belaïd (1)
Davide Bellizia (1)
Daniel J. Bernstein (1)
Francesco Berti (2)
Andrey Bogdanov (1)
David Bol (1)
Hai Brenner (1)
Philip Brisk (1)
Olivier Bronchain (2)
Nicolas Bruneau (1)
Philippe Bulens (1)
Claude Carlet (1)
Gaëtan Cassiers (1)
Alessandro Cevrero (1)
Baudoin Collard (1)
Yves Deville (1)
Yevgeniy Dodis (1)
Nicolas Donckers (1)
Alexandre Duc (2)
François Dupressoir (1)
François Durvaux (7)
Stefan Dziembowski (1)
Sebastian Faust (8)
Martin Feldhofer (1)
Denis Flandre (2)
Rong Fu (1)
Lubos Gaspar (2)
Benoît Gérard (5)
Benedikt Gierlichs (5)
Cezary Glowacz (1)
Benjamin Grégoire (1)
Vincent Grosso (13)
Dawu Gu (1)
Sylvain Guilley (1)
Zheng Guo (1)
Chun Guo (1)
Julien M. Hendrickx (1)
Gottfried Herold (1)
Annelie Heuser (1)
Cédric Hocquet (1)
Paolo Ienne (1)
Anthony Journault (3)
Antoine Joux (1)
Dina Kamel (2)
Markus Kasper (2)
Stéphanie Kerckhof (2)
Theo Kluter (1)
Lars R. Knudsen (1)
Stefan Kölbl (1)
Hugo Krawczyk (1)
Gregor Leander (1)
Yusuf Leblebici (1)
Jean-Didier Legat (3)
Gaëtan Leurent (2)
Itamar Levi (1)
Junrong Liu (1)
Stefan Lucks (1)
Francois Mace (2)
Jean-Baptiste Mairy (1)
Tal Malkin (3)
Stefan Mangard (3)
Daniel Masny (1)
Clément Massart (1)
Pedro Maat Costa Massolino (1)
Pierrick Méaux (1)
Marcel Medwed (6)
Florian Mendel (1)
Giacomo de Meulenaer (1)
Thorben Moos (1)
Amir Moradi (3)
Kashif Nawaz (1)
María Naya-Plasencia (1)
Ventzislav Nikov (1)
Alex Olshevsky (1)
Yossef Oren (1)
Siddika Berna Örs (1)
Elisabeth Oswald (2)
Clara Paglialonga (2)
Eric Peeters (4)
Olivier Pereira (5)
Thomas Peters (2)
Christophe Petit (1)
Krzysztof Pietrzak (1)
Gilles Piret (1)
Romain Poussier (4)
Santos Merino Del Pozo (3)
Bart Preneel (1)
Emmanuel Prouff (1)
Jean-Jacques Quisquater (10)
Francesco Regazzoni (1)
Mathieu Renauld (4)
Oscar Reparaz (1)
Bastian Richter (1)
Olivier Rioul (1)
Matthieu Rivain (1)
Alon Rosen (1)
Gaël Rouvroy (3)
Tobias Schneider (4)
Joachim Schüth (1)
Peter Schwabe (1)
John P. Steinberger (1)
Pierre-Yves Strub (1)
Yannick Teglia (1)
Elmar Tischhauser (1)
Yosuke Todo (1)
Kerem Varici (1)
Nicolas Veyrat-Charvillon (14)
Benoît Viguier (1)
Weijia Wang (1)
Avishai Wool (1)
Sen Xu (1)
Yu Yu (3)
Moti Yung (3)
Yuanyuan Zhou (1)