International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

10 September 2019

Elena Kirshanova, Erik Mårtensson, Eamonn W. Postlethwaite, Subhayan Roy Moulik
ePrint Report
The Shortest Vector Problem (SVP) is one of the mathematical foundations of lattice based cryptography. Lattice sieve algorithms are amongst the foremost methods of solving SVP. The asymptotically fastest known classical and quantum sieves solve SVP in a \(d\)-dimensional lattice in \(2^{cd + o(d)}\) time steps with \(2^{c'd + o(d)}\) memory for constants \(c, c'\). In this work, we give various quantum sieving algorithms that trade computational steps for memory.

We first give a quantum analogue of the classical \(k\)-Sieve algorithm [Herold--Kirshanova--Laarhoven, PKC'18] in the Quantum Random Access Memory (QRAM) model, achieving an algorithm that heuristically solves SVP in \(2^{0.2989d + o(d)}\) time steps using \(2^{0.1395d + o(d)}\) memory. This should be compared to the state-of-the-art algorithm [Laarhoven, Ph.D. Thesis, 2015] which, in the same model, solves SVP in \(2^{0.2653d + o(d)}\) time steps and memory. In the QRAM model these algorithms can be implemented using \(poly(d)\)-width quantum circuits.

Secondly, we frame the \(k\)-Sieve as the problem of \(k\)-clique listing in a graph and apply quantum \(k\)-clique finding techniques to the \(k\)-Sieve.

Finally, we explore the large quantum memory regime by adapting parallel quantum search [Beals et al., Proc. Roy. Soc. A'13] to the \(2\)-Sieve and giving an analysis in the quantum circuit model. We show how to heuristically solve SVP in \(2^{0.1037d + o(d)}\) time steps using \(2^{0.2075d + o(d)}\) quantum memory.
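For a rough sense of scale, the snippet below (my own illustration, not from the paper) simply evaluates the quoted leading exponents at a sample dimension, ignoring the \(o(d)\) terms and the differing cost models (QRAM queries versus quantum circuit operations).

```python
# Illustration only: compare the leading 2^{cd} terms quoted in the abstract
# at a hypothetical dimension d, ignoring o(d) factors and cost-model details.
d = 400  # sample dimension, chosen arbitrarily

regimes = {
    "quantum k-Sieve, QRAM (this work)": (0.2989, 0.1395),
    "Laarhoven 2015, QRAM":              (0.2653, 0.2653),
    "parallel 2-Sieve (this work)":      (0.1037, 0.2075),  # memory here is quantum memory
}

for name, (c_time, c_mem) in regimes.items():
    print(f"{name:36s} time ~ 2^{c_time * d:.0f}, memory ~ 2^{c_mem * d:.0f}")
```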
Eleftherios Kokoris-Kogias, Alexander Spiegelman, Dahlia Malkhi, Ittai Abraham
ePrint Report
In this paper, we present the first fully asynchronous distributed key generation (ADKG) algorithm, as well as the first distributed key generation algorithm that can create keys with a dual $(f,2f+1)$-threshold, which are necessary for scalable consensus (and which so far have required a trusted dealer).

In order to create a DKG with a dual $(f,2f+1)$-threshold, we first answer in the affirmative the open question posed by Cachin et al. of how to create an AVSS protocol with recovery thresholds $ f+1 < k \le 2f+1$, which is of independent interest. Our High-threshold AVSS (\textit{HAVSS}) uses an asymmetric bi-variate polynomial, where the shared secret is hidden from any set of $k$ nodes, but an honest node that did not participate in the sharing phase can still recover its share with only $n-2f$ shares and hence contribute to the secret reconstruction.

Another building block for ADKG is a novel \textit{Eventually Perfect} Common Coin (EPCC) abstraction and protocol that enables the participants to create a common coin that might fail to agree at most $f+1$ times (even if invoked a polynomial number of times). Using \textit{EPCC} we implement an Eventually Efficient Asynchronous Binary Agreement (EEABA) in which each instance takes $O(n^2)$ bits and $O(1)$ rounds in expectation, except for at most $f+1$ instances which may take $O(n^4)$ bits and $O(n)$ rounds in total.

Using EEABA we construct the first fully Asynchronous Distributed Key Generation (ADKG) protocol, which has the same overhead and expected runtime as the best partially-synchronous DKG ($O(n^4)$ words, $O(n)$ rounds). As a corollary of our ADKG we can also create the first Validated Asynchronous Byzantine Agreement (VABA) in the authenticated setting that does not need a trusted dealer to set up threshold signatures of degree $n-f$. Our VABA has an overhead of expected $O(n^2)$ words and $O(1)$ time per instance, after an initial $O(n^4)$ words and $O(n)$ time bootstrap via ADKG.
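As a minimal arithmetic sketch of the dual threshold an asymmetric bivariate polynomial provides (my own illustration, not the HAVSS protocol: verification, asynchrony and the distribution of shares are all omitted, and the degree choice deg_x = f, deg_y = 2f is my reading of "asymmetric"), the snippet below shows that re-deriving one node's share needs only $f+1$ cross-evaluations while reconstructing the secret needs $2f+1$ shares.

```python
import random

P = 2**61 - 1                    # prime field, chosen arbitrarily for the toy
f = 2
n = 3 * f + 1


def lagrange_at(points, x0):
    """Evaluate at x0 the lowest-degree polynomial through `points` (mod P)."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total


# Dealer's polynomial: deg_x = f, deg_y = 2f, with phi(0, 0) = secret.
secret = 1234567
coeff = [[random.randrange(P) for _ in range(2 * f + 1)] for _ in range(f + 1)]
coeff[0][0] = secret
phi = lambda x, y: sum(c * pow(x, i, P) * pow(y, j, P)
                       for i, row in enumerate(coeff)
                       for j, c in enumerate(row)) % P

# Share recovery: node m re-derives its share phi(0, m) from only f+1 points
# phi(l, m) (in the protocol these would come from f+1 helpers), because
# phi(x, m) has degree f in x.
m = n
share_m = lagrange_at([(l, phi(l, m)) for l in range(1, f + 2)], 0)
assert share_m == phi(0, m)

# Secret reconstruction: phi(0, y) has degree 2f in y, so 2f+1 shares phi(0, i)
# are needed to interpolate the secret phi(0, 0).
shares = [(i, phi(0, i)) for i in range(1, 2 * f + 1)] + [(m, share_m)]
assert lagrange_at(shares, 0) == secret
print("recovered secret:", lagrange_at(shares, 0))
```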
Estuardo Alpirez Bock, Chris Brzuska, Marc Fischlin, Christian Janson, Wil Michiels
ePrint Report
The goal of white-box cryptography is to provide security even when the cryptographic implementation is executed in adversarially controlled environments. White-box implementations nowadays appear in commercial products such as mobile payment applications, e.g., those certified by Mastercard. Interestingly, there, white-box cryptography is championed as a tool for secure storage of payment tokens, and importantly, the white-boxed storage functionality is bound to a hardware functionality to prevent code-lifting attacks.

In this paper, we show that the approach of using hardware binding and obfuscation for secure storage is conceptually sound. Following security specifications by Mastercard, we first define security for a white-box key derivation function (WKDF) that is bound to a hardware functionality. WKDFs with hardware binding model a secure storage functionality, as the WKDFs in turn can be used to derive encryption keys for secure storage. We then provide a proof-of-concept construction of WKDFs based on pseudorandom functions (PRFs) and obfuscation. To show that our use of cryptographic primitives is sound, we perform a cryptographic analysis and reduce the security of our WKDF to the cryptographic assumptions of indistinguishability obfuscation and PRF-security. The hardware functionality that our WKDF is bound to is a PRF-like functionality. Obfuscation helps us to hide the secret key used for the verification, essentially emulating a signature functionality as is provided by the Android key store.

We rigorously define the required security properties of a hardware-bound white-box payment application (WPAY) for generating and encrypting valid payment requests. We construct a WPAY, which uses a WKDF as a secure building block. We thereby show that a WKDF can be securely combined with any secure symmetric encryption scheme, including those based on standard ciphers such as AES.
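The paper's WKDF relies on obfuscation and a PRF-like hardware functionality; as a much simpler point of reference, the toy below (my own sketch, not the paper's construction) shows the general shape of hardware-bound key derivation: a key derived from both a software key and a response from a device-held secret, so that code-lifting alone does not reproduce it. The class and function names are hypothetical.

```python
import hmac
import hashlib
import os


class ToyHardware:
    """Stand-in for a device-bound PRF (e.g., a key held by a secure element)."""
    def __init__(self):
        self._device_secret = os.urandom(32)   # never exported in this toy

    def prf(self, challenge: bytes) -> bytes:
        return hmac.new(self._device_secret, challenge, hashlib.sha256).digest()


def derive_storage_key(software_key: bytes, hw: ToyHardware, label: bytes) -> bytes:
    """Derive a key that depends on both the (white-box) software key and the
    hardware response, so it cannot be recomputed off-device."""
    hw_binding = hw.prf(b"WKDF-binding|" + label)
    return hmac.new(software_key, hw_binding + label, hashlib.sha256).digest()


hw = ToyHardware()
k_storage = derive_storage_key(os.urandom(32), hw, b"payment-token-store")
print(k_storage.hex())
```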
Carolyn Whitnall, Elisabeth Oswald
ePrint Report
The ISO standardisation of `Testing methods for the mitigation of non-invasive attack classes against cryptographic modules' (ISO/IEC 17825:2016) specifies the use of the Test Vector Leakage Assessment (TVLA) framework as the sole measure to assess whether or not an implementation of (symmetric) cryptography is vulnerable to differential side-channel attacks. It is the only publicly available standard of this kind, and the first side-channel assessment regime to exclusively rely on a TVLA instantiation.

TVLA essentially specifies statistical leakage detection tests with the aim of removing the burden of having to test against an ever increasing number of attack vectors. It offers the tantalising prospect of `conformance testing': if a device passes TVLA, then, one is led to hope, the device would be secure against all (first-order) differential side-channel attacks.

In this paper we provide a statistical assessment of the specific instantiation of TVLA in this standard. This task leads us to inquire whether (or not) it is possible to assess the side-channel security of a device via leakage detection (TVLA) only. We find a number of grave issues in the standard and its adaptation of the original TVLA guidelines. We propose some innovations on existing methodologies and finish by giving recommendations for best practice and the responsible reporting of outcomes.
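TVLA is commonly instantiated as a fixed-versus-random Welch's t-test applied to each trace sample, flagged when $|t|$ exceeds 4.5; the sketch below (on synthetic traces) shows only that detection step, not the standard's pass/fail procedure that the paper scrutinises.

```python
import numpy as np

rng = np.random.default_rng(0)

def welch_t(fixed: np.ndarray, random_: np.ndarray) -> np.ndarray:
    """Per-sample Welch's t-statistic between two groups of traces
    (arrays of shape (n_traces, n_samples))."""
    m1, m2 = fixed.mean(axis=0), random_.mean(axis=0)
    v1, v2 = fixed.var(axis=0, ddof=1), random_.var(axis=0, ddof=1)
    return (m1 - m2) / np.sqrt(v1 / len(fixed) + v2 / len(random_))

# Synthetic example: 1000 traces x 200 samples, with a small data-dependent
# difference injected at sample 100 of the "fixed" group.
fixed = rng.normal(0, 1, (1000, 200))
rand = rng.normal(0, 1, (1000, 200))
fixed[:, 100] += 0.3

t = welch_t(fixed, rand)
print("samples exceeding |t| > 4.5:", np.flatnonzero(np.abs(t) > 4.5))
```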
Fuyuki Kitagawa, Takahiro Matsuda, Keisuke Tanaka
ePrint Report
We propose two efficient public key encryption (PKE) schemes satisfying key dependent message security against chosen ciphertext attacks (KDM-CCA security). The first one is KDM-CCA secure with respect to affine functions. The other one is KDM-CCA secure with respect to polynomial functions. Both of our schemes are based on the KDM-CPA secure PKE schemes proposed by Malkin, Teranishi, and Yung (EUROCRYPT 2011). Although our schemes satisfy KDM-CCA security, their efficiency overheads compared to Malkin et al.'s schemes are very small. Thus, efficiency of our schemes is drastically improved compared to the existing KDM-CCA secure schemes.

We achieve our results by extending the construction technique by Kitagawa and Tanaka (ASIACRYPT 2018). Our schemes are obtained via semi-generic constructions using an IND-CCA secure PKE scheme as a building block. We prove the KDM-CCA security of our schemes based on the decisional composite residuosity (DCR) assumption and the IND-CCA security of the building block PKE scheme.

Moreover, our security proofs are tight if the IND-CCA security of the building block PKE scheme is tightly reduced to its underlying computational assumption. By instantiating our schemes using existing tightly IND-CCA secure PKE schemes, we obtain the first tightly KDM-CCA secure PKE schemes whose ciphertext consists only of a constant number of group elements.

09 September 2019

Oxford, United Kingdom, 14 April - 16 April 2020
Event Calendar
Event date: 14 April to 16 April 2020
Submission deadline: 1 December 2019
Notification: 15 January 2020
Raymond K. Zhao, Ron Steinfeld, Amin Sakzad
ePrint Report
The arbitrary-centered discrete Gaussian sampler is a fundamental subroutine in implementing lattice trapdoor sampling algorithms. However, existing approaches typically rely either on a fast implementation of another discrete Gaussian sampler or on pre-computations for specific discrete Gaussian distributions with fixed centers and standard deviations. These approaches may only support sampling from standard deviations within a limited range, or cannot efficiently sample from arbitrary standard deviations determined on-the-fly at run-time.

In this paper, we propose a compact and scalable rejection sampling algorithm by sampling from a continuous normal distribution and performing rejection sampling on rounded samples. Our scheme does not require pre-computations related to any specific discrete Gaussian distribution. Our scheme can sample from both arbitrary centers and arbitrary standard deviations determined on-the-fly at run-time. In addition, we show that our scheme requires only a low number of trials per sample, close to 2 on average, and maintains good performance when scaling up the standard deviation. We also provide a concrete error analysis of our scheme based on the Rényi divergence.
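For comparison, the snippet below is a naive baseline rejection sampler for an arbitrary-centered discrete Gaussian, not the paper's algorithm: it uses a uniform proposal over a tail-cut interval, so its expected number of trials grows with the tail-cut parameter rather than staying near 2, but it is easy to verify and also supports centers and standard deviations chosen at run-time.

```python
import math
import random

def discrete_gaussian(center: float, sigma: float, tau: float = 6.0) -> int:
    """Sample x ~ D_{Z,sigma,center} restricted to |x - center| <= tau*sigma,
    via a uniform proposal and acceptance probability exp(-(x-c)^2/(2 sigma^2))."""
    lo = math.floor(center - tau * sigma)
    hi = math.ceil(center + tau * sigma)
    while True:
        x = random.randint(lo, hi)
        if random.random() < math.exp(-((x - center) ** 2) / (2 * sigma ** 2)):
            return x

samples = [discrete_gaussian(center=3.3, sigma=4.2) for _ in range(10000)]
print(sum(samples) / len(samples))   # empirical mean, should be close to 3.3
```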
Rishab Goyal, Venkata Koppula, Satyanarayana Vusirikala, Brent Waters
ePrint Report
In a lockable obfuscation scheme a party takes as input a program $P$, a lock value $\alpha$, and a message $m$, and produces an obfuscated program $\tilde{P}$. The obfuscated program can be evaluated on an input $x$ to learn the message $m$ if $P(x)= \alpha$. The security of such schemes states that if $\alpha$ is randomly chosen (independently of $P$ and $m$), then one cannot distinguish an obfuscation of $P$ from a ``dummy'' obfuscation. Existing constructions of lockable obfuscation achieve provable security under the Learning with Errors assumption. One limitation of these constructions is that they achieve only statistical correctness and allow for a possible one-sided error, where the obfuscated program could output the message $m$ on some input $x$ with $P(x) \neq \alpha$.

In this work we motivate the problem of studying perfect correctness in lockable obfuscation for the case where the party performing the obfuscation might wish to inject a backdoor or hole in correctness. We begin by studying the existing constructions and identify two components that are susceptible to imperfect correctness. The first is in the LWE-based pseudo random generators (PRGs) that are non-injective, while the second is in the last level testing procedure of the core constructions.

We address each in turn. First, we build upon previous work to design injective PRGs that are provably secure from the LWE assumption. Next, we design an alternative last level testing procedure that has additional structure to prevent correctness errors. We then provide a surgical proof of security (to avoid redundancy) that connects our construction to the construction by Goyal, Koppula, and Waters (GKW). Specifically, we show how for a random value $\alpha$ an obfuscation under our new construction is indistinguishable from an obfuscation under the existing GKW construction.
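For readers unfamiliar with the primitive, the snippet below is an (entirely insecure) functional toy of the lockable-obfuscation interface only: it makes the correctness condition concrete but hides nothing, and it is not related to the GKW-style construction discussed above.

```python
from typing import Callable, Optional
import secrets

def obfuscate(P: Callable[[int], int], alpha: int, m: str) -> Callable[[int], Optional[str]]:
    def P_tilde(x: int) -> Optional[str]:
        # Perfect correctness by construction: outputs m exactly when P(x) == alpha.
        return m if P(x) == alpha else None
    return P_tilde

# Usage: a random lock value, as in the security definition.
alpha = secrets.randbits(64)
P = lambda x: pow(x, 3, 2**64 - 59)          # some program P, for illustration
P_tilde = obfuscate(P, alpha, "secret message")
print(P_tilde(5))                             # None unless P(5) happens to equal alpha
```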
Jintai Ding, Seungki Kim, Tsuyoshi Takagi, Yuntao Wang
ePrint Report
We introduce stochastic sandpile models which imitate numerous aspects of the practical behavior of the LLL algorithm with compelling accuracy. In addition, we argue that the physics and mathematics of sandpile models provide satisfactory heuristic explanations for many of the mysteries of LLL, and pleasant implications for lattice-based cryptography as a whole. Based on these successes, we suggest a paradigm in which one regards blockwise reduction algorithms as 1-d stochastic self-organized criticality (SOC) models and studies them as such.
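For readers unfamiliar with the terminology, the toy below is a generic 1-d Manna-type stochastic sandpile, not the paper's LLL-calibrated model: sites topple at a threshold, grains move to random neighbours, and one studies avalanche statistics, which is the usual setting for self-organized criticality.

```python
import random

N = 50                                    # 1-d lattice of N sites
height = [0] * N

def drop_and_relax() -> int:
    """Drop one grain, topple until stable, return the avalanche size."""
    height[random.randrange(N)] += 1
    avalanche = 0
    stack = [i for i in range(N) if height[i] >= 2]
    while stack:
        i = stack.pop()
        if height[i] < 2:
            continue
        avalanche += 1
        height[i] -= 2
        for _ in range(2):                # each grain moves to a random neighbour
            j = i + random.choice((-1, 1))
            if 0 <= j < N:                # grains falling off the edge are lost
                height[j] += 1
                if height[j] >= 2:
                    stack.append(j)
        if height[i] >= 2:
            stack.append(i)
    return avalanche

sizes = [drop_and_relax() for _ in range(50000)]
print("mean avalanche size:", sum(sizes) / len(sizes))
```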

06 September 2019

Olivier Bronchain, François-Xavier Standaert
ePrint Report
We take advantage of a recently published open source implementation of the AES protected with a mix of countermeasures against side-channel attacks to discuss both the challenges in protecting COTS devices against such attacks and the limitations of closed source security evaluations. The target implementation has been proposed by the French ANSSI (Agence Nationale de la Sécurité des Systèmes d'Information) to stimulate research on the design and evaluation of side-channel secure implementations. It combines additive and multiplicative secret sharings into an affine masking scheme that is additionally mixed with a shuffled execution. Its preliminary leakage assessment did not detect data dependencies with up to 100,000 measurements. We first exhibit the gap between such a preliminary leakage assessment and advanced attacks by showing how a dissection of the countermeasures, exploiting a mix of dimensionality reduction, multivariate information extraction and key enumeration, can recover the full key with less than 2,000 measurements. We then discuss the relevance of open source evaluations for analyzing such implementations efficiently, by showing that certain steps of the attack are hard to automate without implementation knowledge (even with machine learning tools), while performing them manually is trivial. Our findings are not due to design flaws but rather stem from the general difficulty of preventing side-channel attacks in COTS devices with limited noise. We anticipate that high security on such devices requires significantly more shares.
Philippe Elbaz-Vincent, Cyril Hugounenq, Sébastien Riou
ePrint Report
We propose SPAE, a single-pass, patent-free, authenticated encryption with associated data (AEAD) scheme for AES. The algorithm has been developed to address the needs of a growing trend in IoT systems: storing code and data on a low-cost flash memory external to the main SoC. Existing AEAD algorithms such as OCB, GCM, CCM, EAX, and SIV provide the required functionality; however, in practice each of them suffers from various drawbacks for this particular use case. Academic contributions such as ASCON and AEGIS-128 are suitable and efficient, but they require the development of new hardware accelerators and they use primitives which are not ‘approved’ by governmental institutions such as NIST, BSI, and ANSSI. From a silicon manufacturer's point of view, an efficient AEAD which uses existing AES hardware is much more enticing: AES is already required by most industry standards involving symmetric encryption (GSMA, EMVCo, FIDO, Bluetooth, ZigBee, to name a few). This paper exposes the properties of an ideal AEAD for external memory encryption, presents the SPAE algorithm, and analyzes various security aspects. The performance of SPAE on actual hardware is better than that of OCB, GCM, and CCM.
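SPAE itself is not reproduced here; as a point of reference, the snippet below only recalls the AEAD interface (key, nonce, associated data, ciphertext plus tag) that the abstract compares against, using AES-GCM from the Python cryptography package. Binding the flash block address via the associated data is an illustrative assumption, not a statement about SPAE's design.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

nonce = os.urandom(12)
aad = b"flash-block-0x0040"                  # authenticated but not encrypted
ciphertext = aead.encrypt(nonce, b"code or data page", aad)
assert aead.decrypt(nonce, ciphertext, aad) == b"code or data page"
```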
Francesco Lucente Stabile, Carey Patrick Atkins
ePrint Report
The LSA cryptosystem is an asymmetric encryption algorithm which is based on both group and number theory, follows Kerckhoffs’s principle, and relies on a specific case of Gauss’s Generalization of Wilson’s Theorem. Unlike prime factorization based algorithms, the eavesdropping cryptanalyst has no indication that he has successfully decrypted the ciphertext. For this reason, we aim to show that LSA is not only more secure than existing asymmetric algorithms but also has the potential to be significantly computationally faster.
Sikkim, India, 18 March - 20 March 2020
Event Calendar
Event date: 18 March to 20 March 2020
Submission deadline: 1 November 2019
Notification: 15 November 2019

05 September 2019

Siemen Dhooghe, Svetla Nikova, Vincent Rijmen
ePrint Report
Threshold Implementations (TI) are secure algorithmic countermeasures against side-channel attacks in the form of differential power analysis. The strength of TI lies in its minimal algorithmic requirements. These requirements have been studied for more than 10 years and many efficient implementations of symmetric primitives have been proposed. Thus, over the years the practice of protecting implementations has matured; the theory behind threshold implementations, however, has remained the same. In this work, we revise this theory by looking at the properties of correctness, non-completeness, and uniformity as a composable security model. We prove that this model provides first-order and higher-order univariate security in the glitch-robust probing model, which lets us expand the theoretical framework of TI. We first provide a link between uniformity and the notion of non-interference, a known composable security notion building on the probing model. We then relax the notion of non-completeness, which helps the design of secure expansion and compression functions. Lastly, we provide generalisations of the threshold notions to allow for general secret sharing schemes and provide examples of how different sharing schemes affect the security and efficiency of the countermeasure.
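As a concrete instance of the correctness and non-completeness properties referred to above, the snippet below checks the classical 3-share threshold implementation of an AND gate (standard TI material, not taken from this paper): each output share avoids one input share index, and the shares recombine to the unshared AND. This minimal sharing is known not to be uniform on its own, which is one reason uniformity receives separate treatment.

```python
from itertools import product

def ti_and(a, b):
    a1, a2, a3 = a
    b1, b2, b3 = b
    c1 = (a2 & b2) ^ (a2 & b3) ^ (a3 & b2)   # independent of share index 1
    c2 = (a3 & b3) ^ (a1 & b3) ^ (a3 & b1)   # independent of share index 2
    c3 = (a1 & b1) ^ (a1 & b2) ^ (a2 & b1)   # independent of share index 3
    return c1, c2, c3

# Exhaustive correctness check: the output shares recombine to a AND b
# for every sharing of every input.
for bits in product((0, 1), repeat=6):
    a, b = bits[:3], bits[3:]
    c = ti_and(a, b)
    assert (c[0] ^ c[1] ^ c[2]) == ((a[0] ^ a[1] ^ a[2]) & (b[0] ^ b[1] ^ b[2]))
print("correctness holds for all 64 sharings")
```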
Elena Andreeva, Virginie Lallemand, Antoon Purnal, Reza Reyhanitabar, Arnab Roy, Damian Vizar
ePrint Report
Highly efficient encryption and authentication of short messages is an essential requirement for enabling security in constrained scenarios such as the CAN FD in automotive systems (max. message size 64 bytes), massive IoT, critical communication domains of 5G, and Narrowband IoT, to mention a few. In addition, one of the NIST lightweight cryptography project requirements is that AEAD schemes shall be “optimized to be efficient for short messages (e.g., as short as 8 bytes)”.

In this work we introduce and formalize a novel primitive in symmetric cryptography called a forkcipher. A forkcipher is a keyed function expanding a fixed-length input to a fixed-length output. We define its security as indistinguishability under a chosen-ciphertext attack. We give a generic construction, validated via the new iterate-fork-iterate design paradigm.
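As a purely illustrative toy (mine, not ForkSkinny and with no security claim), the snippet below shows the iterate-fork-iterate shape: iterate to a middle state, fork, and finish each branch under an independent key, producing two output blocks from one input block. Full AES calls stand in for the round segments only for convenience.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_block(key: bytes, block: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def toy_fork_encrypt(k_mid: bytes, k0: bytes, k1: bytes, pt: bytes):
    assert len(pt) == 16
    middle = aes_block(k_mid, pt)                          # iterate: common initial part
    return aes_block(k0, middle), aes_block(k1, middle)    # fork, then iterate each branch

k_mid, k0, k1 = (os.urandom(16) for _ in range(3))
c0, c1 = toy_fork_encrypt(k_mid, k0, k1, b"16-byte message!")
print(c0.hex(), c1.hex())
```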

We then propose ForkSkinny as a concrete forkcipher instance with a public tweak and based on SKINNY: a tweakable lightweight block cipher constructed using the TWEAKEY framework. We conduct extensive cryptanalysis of ForkSkinny against classical and structure-specific attacks.

We demonstrate the applicability of forkciphers by designing three new provably-secure, nonce-based AEAD modes which offer performance and security tradeoffs and are optimized for efficiency on very short messages. Considering a reference block size of 16 bytes, and ignoring possible hardware optimizations, our new AEAD schemes beat the best SKINNY-based AEAD modes. More generally, we show that forkciphers are suited for lightweight applications dealing with predominantly short messages, while at the same time allowing the handling of arbitrary message sizes.

Furthermore, our hardware implementation results show that when we exploit the inherent parallelism of ForkSkinny we achieve the best performance when directly compared with the most efficient mode instantiated with the SKINNY block cipher.
Thinh Dang, Dustin Moody
ePrint Report
Elliptic curves are typically defined by Weierstrass equations. Given a kernel, the well-known Vélu's formula shows how to explicitly write down an isogeny between Weierstrass curves. However, it is not clear how to do the same on other forms of elliptic curves without isomorphisms mapping to and from the Weierstrass form. Previous papers have shown some isogeny formulas for (twisted) Edwards, Huff, and Montgomery forms of elliptic curves. Continuing this line of work, this paper derives an explicit formula for isogenies between elliptic curves in (twisted) Hessian form.
Shizhu Tian, Christina Boura, Léo Perrin
ePrint Report
In order to study the resistance of a block cipher against boomerang attacks, a tool called the Boomerang Connectivity Table (BCT) for S-boxes was recently introduced. Very little is known today about the properties of this table, especially for bijective S-boxes defined for $n$ variables with $n\equiv 0 \mod{4}$. In this work we study the boomerang uniformity of some popular constructions used for building large S-boxes, e.g. for 8 variables, from smaller ones. We show that the BCTs of all the studied constructions have abnormally high values in some positions. This observation permits us in some cases to link the boomerang properties of an S-box with other well-known cryptanalytic techniques on such constructions, while in other cases it leads to the discovery of new ones. A surprising outcome concerns notably the Feistel and MISTY networks: while these two structures are very similar, their boomerang uniformity can be very different. Next, we investigate the boomerang uniformity under EA-equivalence for the Gold and the inverse functions (as used respectively in MPC-friendly ciphers and the AES) and we prove that the boomerang uniformity is EA-invariant in these cases. Finally, we present an algorithm for inverting a given BCT and provide experimental results on the size of the BCT-equivalence classes for some $4$- and $8$-bit S-boxes.
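For concreteness, the sketch below computes a BCT and the boomerang uniformity of a small S-box, using the standard definition of the table by Cid et al. (EUROCRYPT 2018); the 4-bit PRESENT S-box is used only as a convenient example and is not one of the constructions studied in the paper.

```python
# BCT[a][b] = #{ x : S^{-1}(S(x) ^ b) ^ S^{-1}(S(x ^ a) ^ b) = a };
# the boomerang uniformity is the maximum over nonzero (a, b).
S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
     0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]          # PRESENT S-box
Sinv = [S.index(y) for y in range(16)]

def bct(sbox, sbox_inv, n=4):
    size = 1 << n
    table = [[0] * size for _ in range(size)]
    for a in range(size):
        for b in range(size):
            for x in range(size):
                if sbox_inv[sbox[x] ^ b] ^ sbox_inv[sbox[x ^ a] ^ b] == a:
                    table[a][b] += 1
    return table

T = bct(S, Sinv)
boomerang_uniformity = max(T[a][b] for a in range(1, 16) for b in range(1, 16))
print("boomerang uniformity of the PRESENT S-box:", boomerang_uniformity)
```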
Shi Bai, Katharina Boudgoust, Dipayan Das, Adeline Roux-Langlois, Weiqiang Wen, Zhenfei Zhang
ePrint Report
At CRYPTO 2017, Rosca et al. introduced a new variant of the Learning With Errors (LWE) problem, called the Middle-Product LWE (MP-LWE). The hardness of this new assumption is based on the hardness of the Polynomial LWE (P-LWE) problem parameterized by a set of polynomials, making it more secure against the possible weakness of a single defining polynomial. As a cryptographic application, they also provide an encryption scheme based on the MP-LWE problem. In this paper, we propose a deterministic variant of their encryption scheme, which does not need Gaussian sampling and is thus simpler than the original one. Still, it has the same quasi-optimal asymptotic key and ciphertext sizes. The main ingredient for this purpose is the Learning With Rounding (LWR) problem, which has already been used to derandomize LWE-type encryption. The hardness of our scheme is based on a new assumption called Middle-Product Computational Learning With Rounding, an adaptation of the computational LWR problem over rings introduced by Chen et al. at ASIACRYPT 2018. We prove that this new assumption is as hard as the decisional version of MP-LWE and thus benefits from worst-case to average-case hardness guarantees.
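As a reminder of the derandomizing ingredient, the sketch below generates a plain (non-middle-product, non-ring) LWR sample with toy parameters: the Gaussian noise of LWE is replaced by deterministic rounding of the inner product from $\mathbb{Z}_q$ to $\mathbb{Z}_p$. The parameter values are illustrative assumptions only.

```python
import random

q, p, n = 2**12, 2**8, 16                           # toy moduli and dimension
s = [random.randrange(q) for _ in range(n)]         # secret vector

def lwr_sample(secret):
    a = [random.randrange(q) for _ in range(n)]
    inner = sum(ai * si for ai, si in zip(a, secret)) % q
    b = (inner * p + q // 2) // q % p               # round (p/q) * <a, s> into Z_p
    return a, b

a, b = lwr_sample(s)
print(b)
```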
Aisling Connolly, Pooya Farshim, Georg Fuchsbauer
ePrint Report
We study the security of symmetric primitives against key-correlated attacks (KCA), whereby an adversary can arbitrarily correlate keys, messages, and ciphertexts. Security against KCA is required whenever a primitive should securely encrypt key-dependent data, even when it is used under related keys. KCA is a strengthening of the previously considered notions of related-key attack (RKA) and key-dependent message (KDM) security. This strengthening is strict, as we show that 2-round Even–Mansour fails to be KCA secure even though it is both RKA and KDM secure. We provide feasibility results in the ideal-cipher model for KCAs and show that 3-round Even–Mansour is KCA secure under key offsets in the random-permutation model. We also give a natural transformation that converts any authenticated encryption scheme to a KCA-secure one in the random-oracle model. Conceptually, our results allow for a unified treatment of RKA and KDM security in idealized models of computation.
Pierrick Méaux
ePrint Report
In different contexts such as filtered LFSR, Goldreich's PRG, and FLIP stream ciphers, the security of a cryptographic primitive mostly depends on the algebraic properties of one Boolean function. Since the Seventies, more and more efficient attacks have been exhibited in this context, related to more and more general algebraic properties, such as the degree, the algebraic immunity, and finally, the fast algebraic immunity. Once the properties to estimate the attack complexities are identified, it remains to determine the exact parameters of interesting families of functions with these properties. Then, these functions can be combined in secondary constructions to guarantee the good algebraic properties of a main function. In particular, the family of symmetric functions, and more precisely the subclass of majority functions, has been intensively studied in the area of cryptography, because of their practical advantages and good properties.

The degree of all these functions is known, and they have been proven to reach the optimal algebraic immunity, but still very little is known about their fast algebraic immunity. For a function in $n=2^m+j$ variables, an upper bound is known for all $m$ and $j$, proving that these functions do not reach the optimal fast algebraic immunity. However, the exact fast algebraic immunity is known only for very few families indexed by $j$, where the parameter is exhibited for all members of the family once $m$ is big enough. Recent works gave exact values for $j=0$ and $j=1$ (in the first case), and for $j=2$ and $j=3$ with $m\geq2$ (in the second case). In this work, we determine the exact fast algebraic immunity for all possible values of $j$, for all members of the family, assuming $m\geq 1+\log_2(j+1)$.