IACR News
Here you can see all recent updates to the IACR webpage.
09 February 2022
Ziqi Zhou, Onur Günlü, Rafael G. L. D'Oliveira, Muriel Médard, Parastoo Sadeghi, Rafael F. Schaefer
ePrint Report
We extend to multi-valued functions a previous framework for designing differentially private (DP) mechanisms via randomized graph colorings, which was restricted to binary functions corresponding to colorings in a graph. As before, datasets are nodes in the graph and any two neighboring datasets are connected by an edge. In our setting, we assume each dataset has a preferential ordering for the possible outputs of the mechanism, which we refer to as a rainbow. Different rainbows partition the graph of datasets into different regions. We show that when the DP mechanism is pre-specified at the boundary of such regions, at most one optimal mechanism can exist. Moreover, if the mechanism is to behave identically for all same-rainbow boundary datasets, the problem can be greatly simplified and solved by means of a morphism to a line graph. We then show closed-form expressions for the line graph in the case of ternary functions. The treatment of ternary queries in this paper displays enough richness to extend to higher-dimensional query spaces with preferential query ordering, but the optimality proof does not seem to follow directly from the ternary case.
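To make the setting concrete, here is a minimal, hypothetical sketch in Python: datasets sit on a path graph (neighbours differ in one record), a toy exponential mechanism answers a ternary query, and the epsilon-DP constraint is checked along every edge. The mechanism and parameters are illustrative stand-ins, not the paper's rainbow construction.

```python
import math

eps = 1.0
datasets = range(5)                      # nodes of the dataset graph
edges = [(d, d + 1) for d in range(4)]   # neighbouring datasets differ in one record
outputs = (0, 1, 2)                      # a ternary (multi-valued) query

def mech(d):
    # toy exponential mechanism (NOT the paper's rainbow construction):
    # score has sensitivity 1 across neighbouring datasets
    w = [math.exp(eps * -abs(d - 2 * o) / 2) for o in outputs]
    z = sum(w)
    return [x / z for x in w]

# the epsilon-DP constraint the framework works with, checked on every edge
for d, d2 in edges:
    for o in outputs:
        assert mech(d)[o] <= math.exp(eps) * mech(d2)[o] + 1e-12
        assert mech(d2)[o] <= math.exp(eps) * mech(d)[o] + 1e-12
```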
Guangpu Gao, Weiguo Zhang, Yongjuan Wang
ePrint Report
Bent functions are optimal combinatorial objects and have been studied over the last four decades. Secondary constructions play a central role in the design of bent functions, since they may generate bent functions outside the primary classes. In this study, we improve a theoretical framework for the secondary construction of bent functions in terms of the composition of Boolean functions. Based on this framework, we propose several constructions of bent functions through the composition of a balanced Boolean function and dually isomorphic (DI) bent functions, defined herein. In addition, we present a construction of self-dual bent functions.
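As background, a Boolean function is bent exactly when its Walsh-Hadamard spectrum is flat. A minimal self-contained check (illustrative only, not the paper's construction), using the classic Maiorana-McFarland bent function on four variables:

```python
from itertools import product

n = 4
def f(x):  # Maiorana-McFarland bent function: x0*x1 XOR x2*x3
    return (x[0] & x[1]) ^ (x[2] & x[3])

def dot(a, x):  # inner product over F_2
    return sum(ai & xi for ai, xi in zip(a, x)) % 2

points = list(product((0, 1), repeat=n))
spectrum = [sum((-1) ** (f(x) ^ dot(a, x)) for x in points) for a in points]
assert all(abs(w) == 2 ** (n // 2) for w in spectrum)  # bent: flat Walsh spectrum
```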
Muhammed F. Esgin, Ron Steinfeld, Raymond K. Zhao
ePrint Report
We introduce verifiable partially-decryptable commitments (VPDC), as a building block for constructing efficient privacy-preserving protocols supporting auditability by a trusted party. A VPDC is an extension of a commitment along with an accompanying proof, convincing a verifier that (i) the given commitment is well-formed and (ii) a certain part of the committed message can be decrypted using a (secret) trapdoor known to a trusted party.
We first formalize VPDCs and then introduce a general decryption feasibility result that overcomes the challenges in relaxed proofs arising in the lattice setting. Our general result can be applied to a wide class of Fiat-Shamir based protocols and may be of independent interest.
Next, we show how to extend the commonly used lattice-based `Hashed-Message Commitment' (HMC) scheme into a succinct and efficient VPDC. In particular, we devise a novel `gadget'-based Regev-style (partial) decryption method, compatible with efficient relaxed lattice-based zero-knowledge proofs. We prove the soundness of our VPDC in the setting of adversarial proofs, where a prover tries to create a valid VPDC output that fails in decryption.
To demonstrate the effectiveness of our results, we extend a private blockchain payment protocol, MatRiCT, by Esgin et al. (ACM CCS '19) into a formally auditable construction, which we call MatRiCT-Au, with very low communication and computation overheads over MatRiCT.
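As a conceptual aid only, the "partially decryptable" idea can be pictured with a discrete-log toy: the committer binds to a message pair, and additionally encrypts the auditable part under the auditor's key. This is NOT the paper's lattice-based HMC construction, it omits the well-formedness proof entirely, and all parameters are illustrative toy values.

```python
import secrets

p = 2**127 - 1            # toy prime modulus (illustrative, NOT secure parameters)
g, h = 3, 5               # toy "independent" bases

def auditor_keygen():
    sk = secrets.randbelow(p - 1)
    return sk, pow(g, sk, p)

def commit(m1, m2, pk):
    r, k = secrets.randbelow(p - 1), secrets.randbelow(p - 1)
    com = (pow(g, m1 + m2, p) * pow(h, r, p)) % p   # Pedersen-style commitment
    ct = (pow(g, k, p), (m2 * pow(pk, k, p)) % p)   # ElGamal part the auditor can open
    return com, ct, r                               # r is the opening randomness

def audit_decrypt(ct, sk):  # trapdoor recovery of the decryptable part m2
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - sk, p)) % p

sk, pk = auditor_keygen()
com, ct, r = commit(41, 7, pk)
assert audit_decrypt(ct, sk) == 7
```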
Muhammed F. Esgin, Ron Steinfeld, Dongxi Liu, Sushmita Ruj
ePrint Report
In this work, we study hybrid exact/relaxed zero-knowledge proofs from lattices, where the proved relation is exact in one part and relaxed in the other. Such proofs arise in important real-life applications such as those requiring verifiable PRF evaluation and have so far not received significant attention as a standalone problem.
We first introduce a general framework, LANES+, for realizing such hybrid proofs efficiently by combining standard relaxed proofs of knowledge (RPoK) and the LANES framework (due to a series of works at Crypto'20, Asiacrypt'20, and ACM CCS'20). The latter framework is a powerful lattice-based proof system that can prove exact linear and multiplicative relations. The advantage of LANES+ is its ability to realize hybrid proofs more efficiently by exploiting RPoK for the high-dimensional part of the secret witness while leaving a low-dimensional secret witness part for the exact proof, which is proven at a significantly lower cost via LANES.
We apply our LANES+ framework to construct substantially shorter proofs of rounding, which is a central tool for verifiable deterministic lattice-based cryptography. Based on our rounding proof, we then design an efficient long-term verifiable random function (VRF), named LaV. LaV leads to the shortest VRF outputs among the proposals of standard (i.e., long-term and stateless) VRFs based on quantum-safe assumptions. Of independent interest, we also present a general framework for the construction of efficient VRFs (in the random oracle model) and generalized results for challenge difference invertibility, a fundamental soundness security requirement for many proof systems.
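For intuition about the rounding relation that such proofs attest: modulus switching from Z_q to Z_p introduces a bounded error term, and a proof of rounding must show the prover rounded honestly. A minimal sketch with toy moduli (illustrative values, not the paper's parameters or proof system):

```python
q, p = 2**15, 2**8        # toy moduli with q > p (illustrative only)

def round_qp(x):          # coefficient-wise rounding / modulus switching Z_q -> Z_p
    return ((p * x + q // 2) // q) % p

x = 12345                 # a Z_q coefficient, e.g. an LWR-style value before rounding
y = round_qp(x)
e = p * x - q * y         # the relation a proof of rounding certifies: p*x = q*y + e
assert abs(e) <= q // 2   # with the rounding error bounded by q/2
```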
Sebastian Faust, Juliane Krämer, Maximilian Orlt, Patrick Struck
ePrint Report
Related-key attacks (RKA) are powerful cryptanalytic attacks, where the adversary can tamper with the secret key of a cryptographic scheme. Since their invention, RKA security has been an important design goal in cryptography, and various works aim at designing cryptographic primitives that offer protection against related-key attacks. At EUROCRYPT'03, Bellare and Kohno introduced the first formal treatment of related-key attacks focusing on pseudorandom functions and permutations. This was later extended to cover other primitives such as signatures and public key encryption schemes, but until now, a comprehensive formal security analysis of authenticated encryption schemes with associated data (AEAD) in the RKA setting has been missing. The main contribution of our work is to close this gap for the relevant class of nonce-based AEAD schemes.
To this end, we revisit the common approach of constructing AEAD from encryption and message authentication. We extend the traditional security notion of AEAD to the RKA setting and consider an adversary that can tamper with the keys $K_e$ and $K_m$ of the underlying encryption scheme and MAC, respectively. We study two security models. In our weak setting, we require that tampering changes both $K_e$ and $K_m$, while in our strong setting, tampering can be arbitrary, i.e., only one key might be affected. We then study the security of the standard composition methods by analysing the nonce-based AEAD schemes N1 (Encrypt-and-MAC), N2 (Encrypt-then-MAC), and N3 (MAC-then-Encrypt) due to Namprempre, Rogaway, and Shrimpton (EUROCRYPT'14). We show that these schemes are weakly RKA secure, while they can be broken under a strong related-key attack.
Finally, based on the N3 construction, we give a novel AEAD scheme that achieves our stronger notion.
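For reference, a minimal sketch of the N2 (Encrypt-then-MAC) composition with the independent keys $K_e$ and $K_m$ that the RKA adversary tampers with in the paper's models. It assumes the third-party `cryptography` package for AES-CTR and illustrates only the generic composition, not the paper's constructions; in the strong RKA model, tampering may affect just one of the two keys.

```python
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def n2_encrypt(ke, km, nonce, ad, pt):
    enc = Cipher(algorithms.AES(ke), modes.CTR(nonce)).encryptor()
    ct = enc.update(pt) + enc.finalize()
    tag = hmac.new(km, nonce + ad + ct, hashlib.sha256).digest()  # MAC the ciphertext
    return ct, tag

def n2_decrypt(ke, km, nonce, ad, ct, tag):
    if not hmac.compare_digest(tag, hmac.new(km, nonce + ad + ct, hashlib.sha256).digest()):
        return None                       # reject before touching the plaintext
    dec = Cipher(algorithms.AES(ke), modes.CTR(nonce)).decryptor()
    return dec.update(ct) + dec.finalize()

ke, km, nonce = os.urandom(16), os.urandom(32), os.urandom(16)
ct, tag = n2_encrypt(ke, km, nonce, b"header", b"attack at dawn")
assert n2_decrypt(ke, km, nonce, b"header", ct, tag) == b"attack at dawn"
```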
Christian Janson, Patrick Struck
ePrint Report
In this work, we study the security of sponge-based authenticated encryption schemes against quantum attackers. In particular, we analyse the sponge-based authenticated encryption scheme SLAE as put forward by Degabriele et al. (ASIACRYPT'19). We show that the scheme achieves security in the post-quantum (QS1) setting in the quantum random oracle model by using the one-way to hiding lemma. Furthermore, we analyse the scheme in a fully-quantum (QS2) setting. There we provide a set of attacks showing that SLAE does not achieve ciphertext indistinguishability and hence overall does not provide the desired level of security.
Pierre Galissant, Louis Goubin
ePrint Report
Cryptography is increasingly deployed in applications running on open devices in which the software is extremely vulnerable to attacks, since the attacker has complete control over the execution platform and the software implementation itself. This creates a challenge for cryptography: design implementations of cryptographic algorithms that are secure, not only in the black-box model, but also in this attack context that is referred to as the white-box adversary model. Moreover, emerging applications such as mobile payment, mobile contract signing or blockchain-based technologies have created a need for white-box implementations of public-key cryptography, and especially of signature algorithms.
However, while many attempts have been made to construct white-box implementations of block ciphers, almost no white-box implementations have been published for asymmetric schemes. We present here a concrete white-box implementation of the well-known HFE signature algorithm for a specific set of internal polynomials. For a security level of $2^{80}$, the public key size is approximately 62.5 MB and the white-box implementation of the signature algorithm has a size of approximately 256 GB.
Marco Cianfriglia, Elia Onofri, Silvia Onofri, Marco Pedicini
ePrint Report
In 2009, Dinur and Shamir proposed the cube attack, an algebraic cryptanalysis technique that only requires black-box access to a target cipher. Since then, this attack has received both criticism and endorsement from the crypto community; this work aims to revise and collect the many attacks that have been proposed building on it.
We categorise all of these attacks in five classes; for each class, we provide a brief summary description along with the state-of-the-art references and the most recent cryptanalysis results. Furthermore, we extend and refine the new notation we proposed in 2021 and use it to provide a consistent definition for each attack family. Finally, in the appendix, we provide an in-depth description of the kite attack framework, a cipher-independent tool we first proposed in 2018 that implements the kite attack on GPUs. To prove its effectiveness, we use Mickey2.0 as a use case, showing how to embed it in the framework.
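For readers new to the technique: the attacker XORs the cipher's output over all assignments of a chosen set of public "cube" variables; terms of low degree in the cube variables cancel, leaving the superpoly in the key bits. A self-contained toy (an illustrative polynomial, not a real cipher):

```python
from itertools import product

def f(x, k):   # toy output bit: x = public/IV bits, k = secret key bits
    return (x[0] & x[1] & (k[0] ^ k[1])) ^ (x[0] & k[2]) ^ x[1] ^ (k[0] & k[2])

def cube_sum(k):
    # XOR of f over the cube {x0, x1}; terms of lower degree in the cube
    # variables appear an even number of times and cancel
    return sum(f(xs, k) for xs in product((0, 1), repeat=2)) % 2

for k in product((0, 1), repeat=3):
    assert cube_sum(k) == k[0] ^ k[1]   # recovered superpoly is linear in the key
```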
Maya Dotan, Saar Tochner, Aviv Zohar, Yossi Gilad
ePrint Report
Payment channel networks (PCNs) provide a faster and cheaper alternative to transactions recorded on the blockchain. Clients can trustlessly establish payment channels with relays by locking coins and then send signed payments that shift coin balances over the network's channels. Although payments are never published, anyone can track a client's payment by monitoring changes in coin balances over the network's channels. We present Twilight, the first PCN that provides a rigorous differential privacy guarantee to its users.
Relays in Twilight run a noisy payment-processing mechanism that hides the payments they carry. This mechanism increases the relay's cost, so Twilight uses a trusted execution environment (TEE) to prevent selfish relays from skipping it.
The TEE does not store the channel's state, which minimizes the trusted computing base. Crucially, Twilight ensures that even if a relay breaks the TEE's security, it cannot break the integrity of the PCN. We analyze Twilight in terms of privacy and cost and study the trade-off between them. We implement Twilight using Intel's SGX framework and evaluate its performance using relays deployed on two continents. We show that a route consisting of 4 relays handles 820 payments/sec.
Zheng Xu, Yongqiang Li, Lin Jiao, Mingsheng Wang, Willi Meier
ePrint Report
Firstly, we improve the evaluation theory of differential propagation for modular additions and XORs, respectively. By introducing the concept of $additive$ $sums$ and using signed differences, we can add more information about value propagation to XOR differential propagation and thereby calculate the probabilities of differential characteristics more precisely. Based on our theory, we propose the first modeling method that describes general ARX differential propagation without relying on the Markov cipher assumption.

Secondly, we propose an automatic search tool for differential characteristics (DCs) with more precise probabilities in ARX ciphers. We find that some DCs that used to be considered valid are in fact impossible, and that some probabilities were previously underestimated. In applications, for CHAM-64/128 (one of the underlying block ciphers in COMET, one of the 32 second-round candidates in NIST's lightweight cryptography standardization process), we find that there is no valid 39-round DC with the probability of $2^{-63}$ computed using previous methods, and for two 39-round DCs starting from the 1st round we correct the probabilities to $2^{-64}$ and $2^{-64}$, instead of the $2^{-65}$ and $2^{-65}$ computed using previous methods. However, if we search for DCs starting from the 5th round, the two DCs become invalid, which means that the round constants can affect the security of ARX ciphers against differential cryptanalysis. For Alzette with $c = \tt{0xb7e15162}$ (one of the S-boxes in SPARKLE, one of the 10 finalists in NIST's lightweight cryptography standardization process), we correct the probabilities of two 4-round DCs to $0$ and $2^{-22}$, instead of the $2^{-23}$ and $2^{-23}$ computed using previous methods; for XTEA, we correct the probabilities of two 10-round DCs to $0$ and $2^{-49}$, instead of the $2^{-58}$ and $2^{-56}$ computed using previous methods. Moreover, for Alzette with $c = \tt{0xb7e15162}$, XTEA, the $\tt{quarterround}$ function of Salsa20, and the round function of Chaskey, we find some invalid DCs that Leurent's ARX Toolkit cannot detect.

Thirdly, we propose a SAT-based automatic search tool for impossible differential characteristics (IDs) in ARX ciphers, and we find some distinguishers ignored by previous methods. In applications, for CHAM-64/128, we find five 20-round and nineteen 19-round IDs starting from the 3rd round for the first time. However, if we search for IDs starting from the 1st round, we cannot find any 20-round ID, which again means that the round constants can affect the security of ARX ciphers against impossible differential cryptanalysis. Moreover, we find more IDs for 18-round, 16-round, 14-round, and 12-round CHAM-64/128, respectively. According to our results, a differential (resp. impossible differential) attack constructed by previous methods, which place a DC (resp. an ID) anywhere in a block cipher, may be invalid.
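For context, the classical per-addition probability that such Markov-assumption tools build on (and which this paper refines) is the Lipmaa-Moriai formula for XOR differentials through modular addition. The sketch below (an illustration, not the authors' tool) implements it and verifies it by brute force for a 4-bit toy word size:

```python
def xdp_add(a, b, c, n):
    # Lipmaa-Moriai (2001): probability of XOR differential (a, b -> c)
    # through addition modulo 2^n
    mask = (1 << n) - 1
    eq = lambda x, y, z: ~(x ^ y) & ~(x ^ z) & mask   # bits where x, y, z agree
    if eq(a << 1, b << 1, c << 1) & (a ^ b ^ c ^ (b << 1)) & mask:
        return 0.0                                    # impossible differential
    return 2.0 ** -bin(~eq(a, b, c) & (mask >> 1)).count("1")

def brute(a, b, c, n):
    m = 1 << n
    return sum((((x + y) % m) ^ (((x ^ a) + (y ^ b)) % m)) == c
               for x in range(m) for y in range(m)) / m**2

n = 4
for a in range(1 << n):
    for b in range(1 << n):
        for c in range(1 << n):
            assert abs(xdp_add(a, b, c, n) - brute(a, b, c, n)) < 1e-12
```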
Emanuele Bellini, Rusydi H. Makarim
ePrint Report
This paper proposes functional cryptanalysis, a flexible and versatile approach to analyse symmetric-key primitives with two primary features. Firstly, it is a generalization of multiple attacks including (but not limited to) differential, rotational and rotational-xor cryptanalysis. Secondly, it is a theoretical framework that unifies all of the aforementioned cryptanalysis techniques and at the same time opens up possibilities for the development of new cryptanalytic approaches. The main idea of functional cryptanalysis is the use of binary relations in the form of functions, hence the name, instead of binary operations as in the classical setting of "differential"-like cryptanalysis. We establish the theoretical foundations of functional cryptanalysis from standard terminology. This work also presents an interpretation of functional cryptanalysis from the point of view of commutative algebra. In particular, we exhibit an algorithm to compute the functional probability (hence differential, rotational, and rotational-xor probability) using Gröbner bases. We demonstrate the applicability of functional cryptanalysis against reduced-round Xoodoo and compare it against the best differential. To avoid dealing with invalid differential trails, we propose a method to construct a valid differential trail using Satisfiability Modulo Theory (SMT). To the best of our knowledge, this is the first time an SMT model is used to construct a valid differential, while previous approaches rely on Mixed-Integer Linear Programming (MILP) models. Lastly, we remark that the use of non-translation functionals shares analogous advantages and limitations with the use of nonlinear approximations in linear cryptanalysis.
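To illustrate the core definition: the functional probability of an input functional g_in and an output functional g_out with respect to a map F is the probability over x that F(g_in(x)) = g_out(F(x)); differential cryptanalysis is the special case where both functionals are XOR-translations, and rotational cryptanalysis the case where they are rotations. A small exhaustive computation on a toy 8-bit map (illustrative only; the paper computes such probabilities symbolically via Gröbner bases):

```python
N = 8
MASK = (1 << N) - 1
rotl = lambda x, r: ((x << r) | (x >> (N - r))) & MASK

def functional_probability(F, g_in, g_out):
    # Pr_x [ F(g_in(x)) == g_out(F(x)) ] over the whole domain
    return sum(F(g_in(x)) == g_out(F(x)) for x in range(1 << N)) / (1 << N)

F = lambda x: ((x + rotl(x, 3)) & MASK) ^ 0x5A   # toy ARX-style round map

# differential functionals: XOR-translations by a fixed difference
p_diff = functional_probability(F, lambda x: x ^ 0x80, lambda y: y ^ 0x80)
# rotational functionals: rotate input and output by one bit
p_rot = functional_probability(F, lambda x: rotl(x, 1), lambda y: rotl(y, 1))
print(p_diff, p_rot)
```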
Eduardo Lopes Cominetti, Marcos Vinicius M. Silva, Marcos A. Simplicio Jr., Harsh Kupwade Patil, Jefferson E. Ricardini
ePrint Report
Vehicular-to-Everything (V2X) communications enable vehicles to exchange messages with other entities, including nearby vehicles and pedestrians. V2X is thus essential for establishing an Intelligent Transportation System (ITS), where vehicles use information from their surroundings to reduce traffic congestion and improve safety. To avoid abuse, V2X messages should be digitally signed using valid digital certificates. Messages sent by unauthorized entities can then be discarded, while misbehavior can lead to the revocation of the corresponding certificates. One challenge in this scenario is that messages must be verified shortly after arrival (e.g., within centiseconds), whereas vehicles may receive thousands of them per second. To handle this issue, some solutions propose prioritization or delayed-verification mechanisms, while others involve signature schemes that support batch verification. In this manuscript, we discuss two mechanisms that complement such proposals, enabling the authentication of a sequence of messages from the same source with a single signature verification. Our analysis shows that the technique can reduce the number of verified signatures by around 90% for reliable communication channels, and by more than 65% for a maximum packet loss rate of 20%.
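The mechanisms themselves are not detailed here, but the general flavour of amortising one signature over a message sequence can be seen in a classic hash-chaining sketch (a hypothetical illustration, not necessarily the authors' technique): the sender links message digests and signs only the final link, so a verifier checks one signature and then authenticates each message with a hash.

```python
import hashlib

def chain_tags(messages):
    # link each message to its predecessor; signing tags[-1] once covers them all
    tag, tags = b"\x00" * 32, []
    for m in messages:
        tag = hashlib.sha256(tag + m).digest()
        tags.append(tag)
    return tags

msgs = [b"pos:1", b"pos:2", b"pos:3"]
tags = chain_tags(msgs)
# verifier: check one signature on tags[-1], then recompute the chain to
# authenticate every message using hash operations only
assert chain_tags(msgs)[-1] == tags[-1]
```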
Christoph Egger, Russell W. F. Lai, Viktoria Ronge, Ivy K. Y. Woo, Hoover H. F. Yin
ePrint Report
In a ring-signature-based anonymous cryptocurrency, signers of a transaction are hidden among a set of potential signers, called a ring, whose size is much smaller than the number of all users. The ring-membership relations specified by the sets of transactions thus induce bipartite transaction graphs, whose distribution is in turn induced by the ring sampler underlying the cryptocurrency.
Since efficient graph analysis could be performed on transaction graphs to potentially deanonymise signers, it is crucial to understand the resistance of (the transaction graphs induced by) a ring sampler against graph analysis. Of particular interest is the class of partitioning ring samplers. Although previous works showed that they provide almost optimal local anonymity, their resistance against global, e.g. graph-based, attacks was unclear.
In this work, we analyse transaction graphs induced by partitioning ring samplers. Specifically, we show (partly analytically and partly empirically) that, somewhat surprisingly, by setting the ring size to be at least logarithmic in the number of users, a graph-analysing adversary is no better than one that performs random guessing in deanonymisation, up to a constant factor of 2.
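A toy simulation of a partitioning ring sampler in the paper's regime (illustrative; all parameters hypothetical): users are randomly partitioned into rings of logarithmic size, and an adversary confined to ring-membership information succeeds only with probability about 1/(ring size) per signer.

```python
import math, random

n = 1 << 12                                  # number of users (toy value)
k = max(2, math.ceil(math.log2(n)))          # ring size logarithmic in n
users = list(range(n))
random.shuffle(users)
# partitioning ring sampler: disjoint rings of size k (ragged tail dropped)
rings = [users[i:i + k] for i in range(0, n - n % k, k)]

# an adversary who only sees ring membership guesses uniformly within a ring;
# treat ring[0] as the true signer (rings are already shuffled)
trials = 10_000
hits = sum(random.choice(random.choice(rings)) == 0 for _ in range(trials)) / trials

# here user 0 sits in exactly one ring, so a cleaner estimate per transaction:
ring = rings[0]
hits = sum(random.choice(ring) == ring[0] for _ in range(trials)) / trials
print(hits, "vs", 1 / k)                     # empirical guess rate ~ 1/k
```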
Yue Qin, Ruoyu Ding, Chi Cheng, Nina Bindel, Yanbin Pan, Jintai Ding
ePrint Report
Key exchange protocols from the learning with errors (LWE) problem share many similarities with the Diffie–Hellman–Merkle (DHM) protocol, which plays a central role in securing our Internet. Therefore, there has been a long-standing effort to design authenticated key exchange directly from LWE, mirroring the advantages of DHM-based protocols. In this paper, we revisit signal leakage attacks and show that the severity of these attacks against LWE-based (authenticated) key exchange is still underestimated.
In particular, by converting the problem of launching a signal leakage attack into a coding problem, we can significantly reduce the needed number of queries to reveal the secret key. Specifically, for DXL-KE we reduce the queries from 1,266 to only 29, while for DBS-KE, we need only 748 queries, a great improvement over the previous 1,074,434 queries. Moreover, our new view of signals as binary codes enables recognizing vulnerable schemes more easily. As such, we completely recover the secret key of a password-based authenticated key exchange scheme by Dabra et al. with only 757 queries and partially reveal the secret used in a two-factor authentication by Wang et al. with only one query. The experimental evaluation supports our theoretical analysis and demonstrates the efficiency and effectiveness of our attacks. Our results caution against underestimating the power of signal leakage attacks as they are applicable even in settings with a very restricted number of interactions between adversary and victim.
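To convey the principle behind these attacks (a simplified toy, not the exact attack on any of the named schemes): the two-region signal function leaks how often k*s mod q crosses the region boundaries as the adversary sweeps a scalar k, and the flip count reveals the magnitude of a secret coefficient s.

```python
q = 12289                        # toy modulus (as used in several RLWE schemes)

def signal(y):                   # 0 in the inner region [-q/4, q/4], else 1
    y %= q
    return 0 if y <= q // 4 or y >= q - q // 4 else 1

def flips(s):                    # sweep the adversary's scalar, count signal flips
    sig = [signal(k * s) for k in range(q)]
    return sum(sig[k] != sig[k - 1] for k in range(q))

for s in (1, 2, 3, 5, 8):
    print(s, flips(s))           # flip count grows as ~2*|s|, exposing the secret
```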
Gideon Samid
ePrint Report
Thousands of digital money protocols compete for attention; the vast majority of them are minor variations of the Satoshi Nakamoto 2008 proposal. It is time to extract the underlying principles of the Bitcoin revolution and re-assemble them in a way that preserves its benefits and gets rid of its faults. BitMint*LeVeL is a move in this direction. It upholds the fundamental migration of money from hidden bank accounts to cryptographically protected, publicly exposed digital coins; it enables a cyber version of peer-to-peer cash transactions. Bitcoin and its variants rely on a fixed public/private key algorithm. Being 'fixed' turns it into a static target for advanced cryptanalysis. The LeVeL protocol lets each coin holder pick their own public/private key algorithm. An attacker would have to compromise all the algorithms used by all previous coin owners -- a substantial security upgrade relative to Bitcoin. LeVeL applies to self-referential money like Bitcoin or fiat currency, and to other-referential money serving as a claim check for assets, like gold or fiat currency. Bitcoin decentralization is groundbreaking, but it gives too much aid and comfort to wrongdoers. BitMint*LeVeL re-imagines decentralization via the notion of the InterMint: money is minted by many smoothly interchangeable mints competing for traders. Lastly, BitMint*LeVeL is built on top of the original BitMint protocol, which was implemented in the legacy banking system, and thus it offers a smooth migration into cyberspace. 1.2 billion people worldwide have no bank account but do have cell phones. The LeVeL offers social accountability and financial inclusion.
Michael Gruber, Georg Sigl
ePrint Report
Protection against physical attacks is a major requirement for cryptographic implementations running on devices which are accessible to an attacker. Side-channel attacks are the most common type of physical attack, and the most frequently exploited side channel is the device's power consumption. In this work we propose a novel open-source tool called TOFU, which synthesizes VCD simulation traces into power traces with adjustable leakage models. Additionally, we propose a workflow based solely on open-source tools. The functionality of TOFU and the proposed workflow was verified by a CPA of an AES hardware implementation. We also provide figures for TOFU's running time for trace synthesis with respect to the corresponding VCD file size. Furthermore, we provide TOFU's source code.
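The core synthesis step can be pictured in a few lines of Python (a hypothetical mini-version for illustration, not TOFU's actual code): every simulation timestep yields one power sample under a pluggable leakage model, here the Hamming distance between consecutive signal values.

```python
# events: {time: {signal_name: new_value}}, as one would extract from a VCD file
def synthesize_power(events, leakage=lambda prev, cur: bin(prev ^ cur).count("1")):
    trace, state = [], {}
    for t, changes in sorted(events.items()):
        sample = 0
        for sig, val in changes.items():
            sample += leakage(state.get(sig, 0), val)   # adjustable leakage model
            state[sig] = val
        trace.append((t, sample))
    return trace

trace = synthesize_power({0: {"reg": 0x00}, 10: {"reg": 0xFF}, 20: {"reg": 0x0F}})
print(trace)   # [(0, 0), (10, 8), (20, 4)] under the Hamming-distance model
```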
Pierre Karpman, Charlotte Lefevre
ePrint Report
We propose new algorithms for solving a class of large-weight syndrome decoding problems in random ternary codes. This is the main generic problem underlying the security of the recent Wave signature scheme (Debris-Alazard et al., 2019), and it has so far received limited attention. At SAC 2019, Bricout et al. proposed a reduction to a binary subset sum problem requiring many solutions, and used it to obtain the fastest known algorithm. However, as is often the case in the coding-theory literature, its memory cost is proportional to its time cost, which makes it unattractive in most applications.
In this work we propose a range of memory-efficient algorithms for this problem, which describe a near-continuous time-memory trade-off curve. These are obtained by using the same reduction as Bricout et al. and carefully instantiating the derived subset sum problem with exhaustive-search algorithms from the literature, in particular dissection (Dinur et al., 2012) and dissection in tree (Dinur, 2019). We also spend significant effort adapting those algorithms to decrease their granularity, thereby allowing them to be used smoothly in a syndrome decoding context when not all the solutions to the subset sum problem are required. For a proposed parameter set for Wave, one of our best instantiations is estimated to cost $2^{177}$ bit operations and to require $2^{88.5}$ bits of storage, while we estimate these figures to be $2^{152}$ and $2^{144}$ for the best algorithm from Bricout et al.
Gyu-Chol Kim, Jae-Yong Sin, Yong-Bok Jong
ePrint Report
In order to prove ElGamal CCA (chosen-ciphertext attack) security in the random oracle model, it is necessary to use a group (an ICDH group) in which the ICDH assumption holds. Until now, the only known ICDH groups have been bilinear groups, where the ICDH assumption is equivalent to the CDH assumption. In this paper, we introduce another ICDH group, in which the ICDH assumption holds under the RSA assumption. Based on this group, we propose a CCA-secure ElGamal encryption scheme. We also describe how decryption can be sped up by reducing the CRT (Chinese Remainder Theorem) exponents in CCA-secure ElGamal.
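The CRT speed-up mentioned at the end builds on the standard trick of reducing exponents modulo p-1 and q-1 and recombining the half-size results with Garner's formula. A generic sketch of that textbook technique (illustrative, not the paper's exact scheme):

```python
def crt_pow(c, d, p, q):
    # compute c^d mod p*q: reduce the exponent modulo p-1 and q-1 (Fermat),
    # exponentiate modulo each prime, then recombine via Garner's formula
    dp, dq = d % (p - 1), d % (q - 1)     # reduced CRT exponents
    mp, mq = pow(c, dp, p), pow(c, dq, q)
    h = (pow(q, -1, p) * (mp - mq)) % p
    return mq + q * h

p, q, d, c = 11, 13, 7, 9                 # toy parameters with gcd(c, p*q) = 1
assert crt_pow(c, d, p, q) == pow(c, d, p * q)
```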
Andrei-Alexandru Brebu, Mihai Iacov, Emil Simion
ePrint Report
Cloud computing has emerged as a necessity for hosting data on cloud servers so that information can be accessed and shared remotely. It was quickly adopted because it provides quality of service for various remotely available, easy-to-configure, and easy-to-use products, such as IaaS (Infrastructure as a Service) or PaaS (Platform as a Service). However, this new paradigm of data hosting brings new challenges. Some of the security challenges require independent audit services to verify the integrity of cloud-hosted data. With many end users and companies moving from on-premise to cloud models for their business, cloud data security is a critical concern that needs to be managed. First, we identify security requirements. Second, we look at potential solutions to ensure data integrity in cloud storage. Last, we propose a data auditing solution that can be used to detect corrupt data or file anomalies in the storage system.
Brice Colombier, Vlad-Florin Drăgoi, Pierre-Louis Cayrel, Vincent Grosso
ePrint Report
The NIST standardization process for post-quantum cryptography has been drawing the attention of researchers to the submitted candidates. One direction of research consists in implementing those candidates on embedded systems, which exposes them to physical attacks in return. The Classic McEliece cryptosystem, which is among the four finalists of round 3 in the Key Encapsulation Mechanism category, was recently targeted by a laser fault injection attack leading to message recovery. Regrettably, the attack setting is very restrictive. Indeed, it does not tolerate errors in the faulty syndrome. Moreover, it depends on the very strong attacker model of laser fault injection, and is not applicable to optimised implementations of the algorithm that make optimal usage of the machine-word capacity. In this article, we propose a change of attack angle and perform a message-recovery attack that relies on side-channel information only. We improve on the previously published work in several key aspects. First, we show that side-channel information is sufficient to obtain a faulty syndrome in $\mathbb{N}$, as required by the attack. This is done by leveraging classic machine learning techniques that recover the Hamming weight information very accurately. Second, we put forward a computationally efficient method, based on a simple dot product, to recover the message from the, possibly noisy, syndrome in $\mathbb{N}$. We show that this new method, which additionally leverages existing information-set decoding algorithms from coding theory, is very robust to noise. Finally, we present a countermeasure against the proposed attack.
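The flavour of the dot-product recovery can be conveyed with a toy numpy experiment (illustrative sizes, not Classic McEliece parameters): given the syndrome computed over the integers rather than over F_2, parity-check columns belonging to the error support correlate more strongly with it, and information-set decoding then cleans up residual errors.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, t = 400, 300, 10                      # toy code length, redundancy, error weight
H = rng.integers(0, 2, size=(r, n))         # toy parity-check matrix
e = np.zeros(n, dtype=int)
e[rng.choice(n, t, replace=False)] = 1      # secret low-weight vector to recover
s_int = H @ e                               # syndrome over N, as leaked by the side channel

score = H.T @ s_int                         # simple dot-product statistic
guess = set(np.argsort(score)[-t:])         # columns most correlated with s_int
truth = set(np.flatnonzero(e))
print(len(guess & truth), "of", t, "support positions ranked on top")
```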