International Association for Cryptologic Research

IACR News Central

Here you can see all recent updates to the IACR webpage.

27 March 2017
The invariant subspace attack is a novel cryptanalytic technique that breaks several recently proposed lightweight block ciphers. In this paper, we propose a new method to bound the dimension of certain invariant subspaces in a class of lightweight block ciphers that have a structure similar to that of the AES but with 4-bit S-boxes. Under assumptions on the diffusion layer, the dimension of any invariant subspace is at most 32 when the inputs to the S-boxes are linearly independent. This observation brings new insights into the invariant subspace attack, as well as lightweight countermeasures that enhance resistance against it.
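To make the underlying notion concrete, the following toy Python sketch (ours, not the paper's bounding technique) brute-forces invariant cosets of a single 4-bit S-box: a subspace $U$ and offset $a$ such that the S-box maps the coset $a \oplus U$ onto the coset $S(a) \oplus U$. The choice of the PRESENT S-box is purely illustrative.

    from itertools import combinations

    def span(basis):
        """All elements of the GF(2)-linear span of `basis`."""
        space = {0}
        for b in basis:
            space |= {x ^ b for x in space}
        return space

    def invariant_cosets(sbox, dim):
        """Yield (a, U) with sbox(a ^ U) == sbox(a) ^ U, dim(U) = dim.

        Brute force over all bases; duplicate subspaces may repeat for
        dim >= 2, which is fine for a toy search on a 4-bit S-box.
        """
        n = len(sbox)
        for basis in combinations(range(1, n), dim):
            U = span(basis)
            if len(U) != 2 ** dim:      # skip linearly dependent bases
                continue
            for a in range(n):
                if {sbox[a ^ u] ^ sbox[a] for u in U} == U:
                    yield a, sorted(U)

    # 4-bit PRESENT S-box, chosen by us as a small example target
    PRESENT = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
               0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
    for a, U in invariant_cosets(PRESENT, dim=1):
        print(f"S maps the coset of {a} over {U} onto a coset of the same subspace")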
26 March 2017
In the study of cryptography in $\text{NC}^0$, it was previously known that Goldreich's candidate pseudorandom generator (PRG) is insecure when instantiated with a predicate $P$ in $4$ or fewer variables, if one wants to achieve polynomial stretch (that is, stretching $n$ bits to $n^{1+\epsilon}$ bits for some constant $\epsilon>0$). The current standard candidate predicate for this setting is the ``tri-sum-and'' predicate $\text{TSA}(x) = \text{XOR}_3 \oplus \text{AND}_2(x) = x_1\oplus x_2\oplus x_3\oplus x_4x_5$, yielding a candidate PRG of locality $5$. Moreover, Goldreich's PRG, when instantiated with TSA as the predicate, is known to be secure against several families of attacks, including $\mathbb F_2$-linear attacks and attacks using SDP hierarchies such as the Lasserre/Parrilo sum-of-squares hierarchy.
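As a point of reference, here is a minimal and purely illustrative Python sketch of Goldreich's PRG instantiated with the TSA predicate. In the actual candidate the 5-tuples of seed positions form a fixed, suitably expanding hypergraph; here they are freshly sampled at random, which is an assumption of this sketch only.

    import random

    def tsa(bits):
        """TSA(x) = x1 xor x2 xor x3 xor (x4 and x5)."""
        x1, x2, x3, x4, x5 = bits
        return x1 ^ x2 ^ x3 ^ (x4 & x5)

    def goldreich_prg(seed, m, rng):
        """Toy Goldreich PRG: each output bit applies TSA to 5 seed positions."""
        n = len(seed)
        return [tsa([seed[i] for i in rng.sample(range(n), 5)])
                for _ in range(m)]

    rng = random.Random(0)
    seed = [rng.randrange(2) for _ in range(64)]
    out = goldreich_prg(seed, 64 ** 2, rng)   # polynomial stretch: n -> n^2 bits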

However, it was previously unknown if $\text{TSA}$ is an ``optimal'' predicate according to other complexity measures: in particular, decision tree (DT-)complexity (i.e., the smallest depth of a binary decision tree computing $P$) and $\mathbb Q$-degree (i.e., the degree of $P$ as a polynomial over $\mathbb Q$), which are important measures of complexity in cryptographic applications such as the construction of an indistinguishability obfuscation scheme. In this work, we ask: Can Goldreich's PRG be instantiated with a predicate with DT-complexity or $\mathbb Q$-degree less than $5$?

We show that this is indeed possible: we give a candidate predicate for Goldreich's PRG with DT-complexity $4$ and $\mathbb Q$-degree $3$; in particular, this candidate PRG therefore has the property that every output bit is a degree 3 polynomial in its input. Moreover, Goldreich's PRG instantiated with our predicate has security properties similar to what is known for $\text{TSA}$, namely security against $\mathbb F_2$-linear attacks and security against attacks from SDP hierarchies such as the Lasserre/Parrilo sum-of-squares hierarchy.

We also show that all predicates with either DT-complexity less than $4$ or $\mathbb Q$-degree less than $3$ yield insecure PRGs, so our candidate predicate simultaneously achieves the best possible locality, DT-complexity, $\mathbb Q$-degree, and $\mathbb F_2$-degree according to all known attacks.
We show how to obfuscate a large and expressive class of programs, which we call compute-and-compare programs, under the learning-with-errors (LWE) assumption. Each such program $CC[f,y]$ is parametrized by an arbitrary polynomial-time computable function $f$ along with a target value $y$ and we define $CC[f,y](x)$ to output $1$ if $f(x)=y$ and $0$ otherwise. In other words, the program performs an arbitrary computation $f$ and then compares its output against a target $y$. Our obfuscator satisfies distributional virtual-black-box security, which guarantees that the obfuscated program does not reveal any partial information about the function $f$ or the target value $y$ as long as they are chosen from some distribution where $y$ has sufficient pseudo-entropy given $f$. We also extend our result to multi-bit compute-and-compare programs $MBCC[f,y,z](x)$ which output a message $z$ if $f(x)=y$.
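The functionality itself (though of course not its secure obfuscation) is a one-liner; the following sketch only pins down the input/output behaviour of $CC[f,y]$ and $MBCC[f,y,z]$, with a made-up $f$ of ours for the example:

    def CC(f, y):
        """Compute-and-compare program CC[f, y]: output 1 iff f(x) == y."""
        return lambda x: 1 if f(x) == y else 0

    def MBCC(f, y, z):
        """Multi-bit variant MBCC[f, y, z]: output the message z iff f(x) == y."""
        return lambda x: z if f(x) == y else None

    # toy example: f is an arbitrary polynomial-time function, y a target value
    P = CC(lambda x: (31 * x + 7) % 256, y=17)
    assert P(182) == P(438) == 1 and P(0) == 0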

Compute-and-compare programs are powerful enough to capture many interesting obfuscation tasks as special cases. This includes obfuscating conjunctions, and therefore we improve on the prior work of Brakerski et al. (ITCS '16), which constructed a conjunction obfuscator under a non-standard ``entropic'' ring-LWE assumption, whereas here we obfuscate a significantly broader class of programs under standard LWE. We show that our obfuscator has several interesting applications. For example, we can take an arbitrary encryption scheme and publish an obfuscated plaintext equality tester that allows users to test whether a ciphertext encrypts some target value $y$; as long as $y$ has sufficient pseudo-entropy, this will not harm semantic security. We can also use our obfuscator to generically upgrade attribute-based encryption to predicate encryption, as well as witness encryption to indistinguishability obfuscation that is secure for all null circuits. Furthermore, we show that our obfuscator gives new circular-security counter-examples for public-key bit encryption and for unbounded-length key cycles.

Our result uses the graph-induced multi-linear maps of Gentry, Gorbunov and Halevi (TCC '15), but only in a carefully restricted manner which is provably secure under LWE. Our technique is inspired by ideas introduced in a recent work of Goyal, Koppula and Waters (EUROCRYPT '17) in a seemingly unrelated context.
25 March 2017
We propose simple generic constructions of indistinguishability obfuscation (IO). Our key tool is the exponentially-efficient indistinguishability obfuscator (XIO), which is the same as IO except that the size of an obfuscated circuit (or the running time of the obfuscator) is only slightly smaller than that of a brute-force canonicalizer that outputs the entire truth table of the circuit to be obfuscated. The ``compression factor'' of an XIO indicates how much the XIO compresses the brute-force canonicalizer. In this study, we show that XIO is powerful enough to achieve cutting-edge cryptography. In particular, we propose the following constructions:

1. A single-key weakly succinct secret-key functional encryption (SKFE) scheme constructed from XIO (even with a bad compression factor) and a one-way function.
2. A single-key weakly succinct public-key functional encryption (PKFE) scheme constructed from XIO with a good compression factor and a public-key encryption scheme.
3. A single-key weakly succinct PKFE scheme constructed from XIO (even with a bad compression factor) and an identity-based encryption scheme.

It is known that a sub-exponentially secure single-key weakly succinct PKFE scheme implies IO, and that single-key weakly succinct (resp. multi-key non-succinct) SKFE implies XIO with a bad (resp. good) compression factor. Thus, we obtain two methods of constructing IO: one uses multi-key SKFE and a plain public-key encryption scheme, and the other uses single-key weakly succinct SKFE (or XIO) and an identity-based encryption scheme. It is not known whether single-key weakly succinct SKFE implies IO (with fully black-box reductions in a certain model, it is impossible), but our single-key weakly succinct SKFE scheme yields many interesting by-products.
ePrint Report Lockable Obfuscation Rishab Goyal, Venkata Koppula, Brent Waters
In this paper we introduce the notion of lockable obfuscation. In a lockable obfuscation scheme there exists an obfuscation algorithm $\mathsf{Obf}$ that takes as input a security parameter $\lambda$, a program $P$, a message $\mathsf{msg}$ and a ``lock value'' $\alpha$, and outputs an obfuscated program $\widetilde{P}$. One can evaluate the obfuscated program $\widetilde{P}$ on any input $x$; the evaluation outputs the message $\mathsf{msg}$ if $P(x) = \alpha$, and otherwise outputs a rejecting symbol $\perp$.
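As a quick reference point, the input/output behaviour of a lockable obfuscation can be sketched in a few lines (only the behaviour; the actual construction hides $P$, $\mathsf{msg}$ and $\alpha$ under LWE, and the example program and lock value below are ours):

    BOT = object()  # stands in for the rejecting symbol

    def obf(P, msg, alpha):
        """Functional behaviour of Obf(1^lambda, P, msg, alpha)."""
        def P_tilde(x):
            return msg if P(x) == alpha else BOT
        return P_tilde

    P_tilde = obf(lambda x: x % 1000, msg="open sesame", alpha=77)
    assert P_tilde(2077) == "open sesame" and P_tilde(5) is BOT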

We proceed to provide a construction of lockable obfuscation and prove it secure under the Learning with Errors (LWE) assumption. Notably, our proof only requires LWE with polynomial hardness and does not require complexity leveraging.

We follow this by describing multiple applications of lockable obfuscation. First, we show how to transform any attribute-based encryption (ABE) scheme into one in which the attributes used to encrypt the message are hidden from any user that is not authorized to decrypt the message. (Such a system is also known as predicate encryption with one-sided security.) The only previous construction, due to Gorbunov, Vaikuntanathan and Wee, is based on a specific ABE scheme of Boneh et al. By enabling the transformation of any ABE scheme, we can inherit different forms and features of the underlying scheme, such as multi-authority support, adaptive security from polynomial hardness, regular-language policies, etc.

We also show applications of lockable obfuscation to separation and uninstantiability results. We first show how to create new separation results in circular encryption that were previously based on indistinguishability obfuscation. This yields new separation results from learning with errors, including a public-key bit-encryption scheme that is IND-CPA secure but not circular secure. The tool of lockable obfuscation allows these constructions to be realized almost immediately by translating previous constructions based on indistinguishability obfuscation. In a similar vein, we provide random-oracle uninstantiability results for the Fujisaki-Okamoto transformation (and related transformations) from lockable obfuscation combined with fully homomorphic encryption. Again, we take advantage of the fact that previous work used indistinguishability obfuscation to obfuscate programs in a form that can easily be translated to lockable obfuscation.
Non-malleable commitment is a fundamental cryptographic tool for preventing man-in-the-middle attacks. Since its proposal by Dolev, Dwork, and Naor in 1991, a rich line of research has steadily reduced the number of communication rounds needed for non-malleable commitment, towards the ultimate goal of constructing non-interactive non-malleable commitment from well-studied hardness assumptions. However, this successful research recently hit a barrier: Pass showed that 2-round non-malleable commitment cannot be based on any falsifiable assumption, even a subexponentially secure one, via black-box security reductions [Pass, STOC 2011], and the only known construction of non-interactive non-malleable commitment is based on a new and non-falsifiable assumption [Pandey, Pass, Vaikuntanathan, Crypto 2008].

In this work, we present a construction of 2-round non-malleable commitment from time-lock puzzles, the "mechanisms for sending messages to the future" proposed by Rivest, Shamir, and Wagner in 1996, whose original construction has withstood cryptanalysis for two decades. In addition, our construction uses a subexponentially secure injective one-way function and a non-interactive witness-indistinguishable proof system. The key to circumventing Pass's impossibility result lies in leveraging the different nature of hardness provided by time-lock puzzles and by classical cryptographic primitives. Conventionally, cryptographic hardness is defined against adversaries with bounded time (or equivalently circuit size); in contrast, the hardness of time-lock puzzles holds against adversaries with bounded parallel time (or circuit depth). This difference allows us to construct commitment schemes that are simultaneously harder than each other according to different complexity measures, which implies a weak form of non-malleability. This weak non-malleability is then strengthened through a new 2-round non-malleability amplification technique, and the final protocol is non-malleable even in the fully concurrent setting.

To the best of our knowledge, this is the first time that time-lock puzzles have been used constructively outside of time-released cryptography, and it opens an interesting direction of combining hardness with respect to different complexity measures to achieve cryptographic goals.
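For readers unfamiliar with the primitive, the original Rivest-Shamir-Wagner construction illustrates the depth-versus-size separation this abstract relies on: generating a puzzle is cheap for whoever knows $\varphi(N)$, while solving it requires $t$ inherently sequential squarings. A toy Python sketch of ours, with deliberately small and insecure parameters:

    import random

    def rsw_puzzle(p, q, t, secret):
        """Toy RSW time-lock puzzle: hide `secret` behind t squarings.

        The generator knows phi(N), so it reduces the exponent 2^t mod
        phi(N) and builds the puzzle in O(log N) multiplications; a solver
        without the factorization must do t squarings one after another,
        which resists parallelism (bounded depth rather than bounded size).
        Assumes gcd(x, N) = 1, which holds with overwhelming probability.
        """
        N, phi = p * q, (p - 1) * (q - 1)
        x = random.randrange(2, N)
        key = pow(x, pow(2, t, phi), N)   # x^(2^t) mod N, the fast way
        return N, x, secret ^ key

    def solve(N, x, boxed, t):
        y = x
        for _ in range(t):                # the slow, inherently sequential way
            y = (y * y) % N
        return boxed ^ y

    # toy primes only; real parameters would use a large RSA modulus
    N, x, boxed = rsw_puzzle(104729, 1299709, t=10000, secret=42)
    assert solve(N, x, boxed, 10000) == 42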
In leakage-resilient symmetric cryptography, two important concepts have been proposed in order to decrease the success rate of differential side-channel attacks. The first is to limit the attacker's data complexity by restricting the number of observable inputs; the second is to create correlated algorithmic noise by using parallel S-boxes with equal inputs. The latter hinders the typical divide-and-conquer approach of differential side-channel attacks and makes key recovery much more difficult in practice. The use of localized electromagnetic (EM) measurements has already been shown to limit the effectiveness of such measures in previous works based on PRESENT S-boxes and 90nm FPGAs, but it has been left for future investigation in recent publications based on AES S-boxes. We aim at providing helpful results and insights from LDA-preprocessed, multivariate, localized EM attacks against a 45nm FPGA implementation using AES S-boxes. We show that even in the case of densely placed S-boxes (with identical routing constraints), and even when limiting the data complexity to the minimum of only two inputs, the guessing entropy of the key is reduced to only $2^{48}$, which remains well within the key-enumeration capabilities of today's adversaries. Relaxing the S-box placement constraints further reduces the guessing entropy, and increasing the data complexity decreases it down to a direct key recovery. While our results are empirical and reflect a single device and implementation, they emphasize that multivariate localized EM attacks threaten such AES-based leakage-resilient constructions more than currently believed.
ePrint Report High Order Masking of Look-up Tables with Common Shares Jean-Sebastien Coron, Franck Rondepierre, Rina Zeitoun
Masking is an effective countermeasure against side-channel attacks. In this paper, we improve the efficiency of the high-order masking of look-up tables countermeasure introduced at Eurocrypt 2014, based on a combination of three techniques, still with a proof of security in the Ishai-Sahai-Wagner (ISW) probing model. The first technique consists in proving security under the stronger t-SNI definition, which enables the use of n=t+1 shares instead of n=2t+1 shares against t-th order attacks. The second technique consists in progressively incrementing the number of shares within the countermeasure, from a single share to n, thereby reducing its complexity. The third technique consists in adapting the common-shares approach introduced by Coron et al. at CHES 2016, so that half of a randomized look-up table can be pre-computed for multiple S-boxes. Combined, our three techniques lead to a factor 10.7 improvement in efficiency, asymptotically for a large number of shares n. For a practical implementation with a reasonable number of shares, we get a 4.8 speed-up factor compared to the initial countermeasure for AES.
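For orientation, the randomized-table idea underlying this countermeasure can be shown at first order; the paper's scheme handles arbitrary order t with n shares plus the refinements above, so the simplified sketch below is ours:

    import random

    def masked_sbox_lookup(sbox, x1, x2):
        """First-order masked S-box lookup (simplified; the paper treats order t).

        The input is shared as x = x1 xor x2; the output comes back as two
        shares y1 xor y2 = S(x), without ever recombining x in the clear.
        """
        s = random.randrange(256)                     # fresh output mask
        T = [sbox[u ^ x2] ^ s for u in range(256)]    # randomized table
        return T[x1], s                               # shares of S(x)

    # sanity check with a toy random "S-box"
    S = list(range(256)); random.shuffle(S)
    x = 0xA7; x2 = random.randrange(256); x1 = x ^ x2
    y1, y2 = masked_sbox_lookup(S, x1, x2)
    assert y1 ^ y2 == S[x]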
Rational proofs, introduced by Azar and Micali (STOC 2012), are a variant of interactive proofs in which the prover is rational and may deviate from the protocol to increase his reward. Guo et al.\ (ITCS 2014) demonstrated that rational proofs are relevant to delegation of computation. By restricting the prover to be computationally bounded, they presented a one-round delegation scheme with sublinear verification for functions computable by log-space uniform circuits of logarithmic depth. In this work, we study rational proofs in which the verifier is also rational, and may deviate from the protocol to decrease the prover's reward. We construct a three-message delegation scheme with sublinear verification for functions computable by log-space uniform circuits of polylogarithmic depth, in the random oracle model.
Side-channel analysis and fault attacks are two powerful methods to analyze and break cryptographic implementations. Recently, secure multiparty computation has been applied to prevent side-channel attacks. While multiparty computation is known to be fault resistant as well, the particular schemes popular for side-channel protection do not currently offer this feature. In this paper we introduce a new secure multiparty circuit that prevents both fault attacks and side-channel analysis. The new scheme builds on an existing side-channel countermeasure and extends it to preserve errors and propagate them until the end of the circuit. A new recombination operation randomizes the output in the case of an error, ensuring that nothing can be learned from the faulty output. After introducing the new secure multiparty circuit, we show how it can be applied to AES and present a performance and security analysis.
ePrint Report Efficient Sanitizable Signatures without Random Oracles Russell W. F. Lai, Tao Zhang, Sherman S. M. Chow, Dominique Schröder
Sanitizable signatures, introduced by Ateniese et al. (ESORICS '05), allow the signer to delegate the sanitization right over signed messages. The sanitizer can modify the message and update the signature accordingly, so that the sanitized part of the message is kept private. For stronger protection of sensitive information, it is desirable that no one can link sanitized message-signature pairs of the same document. This idea was formalized by Brzuska et al. (PKC '10) as unlinkability, and was followed up recently by Fleischhacker et al. (PKC '16). Unfortunately, the existing generic constructions of sanitizable signatures, unlinkable or not, are based on building blocks with specially crafted features for which efficient (standard-model) instantiations are absent. Constructions based on existing primitives, or on a conceptually simple new primitive, are more desirable.

In this work, we present two such generic constructions, leading to efficient instantiations in the standard model. The first is based on rerandomizable tagging, a new primitive which may be of independent interest; it captures the core accountability mechanism of sanitizable signatures. The second is based on accountable ring signatures (CARDIS '04, ESORICS '15). As an intermediate result, we propose the first accountable ring signature scheme in the standard model.
Recently, gray-box attacks on white-box cryptographic implementations have succeeded. These attacks are more efficient than white-box attacks because they can be performed without detailed knowledge of the target implementation. Their success is due to the unbalanced encodings used to generate the white-box lookup tables. In this paper, we propose a method to protect white-box implementations against gray-box attacks. The basic idea is to apply Boolean masking before encoding intermediate values during white-box lookup-table generation. Compared to the existing white-box AES implementation, the lookup-table size and the number of table lookups increase by factors of about 1.5 and 1.6, respectively.
Polytopic cryptanalysis was introduced at EUROCRYPT 2016 as a cryptanalytic technique for low-data-complexity attacks on block ciphers. In this paper, we give an account of how the technique was developed, quickly go over the basic ideas and techniques of polytopic cryptanalysis, look into how the technique differs from previously existing cryptographic techniques, and discuss whether the attack angle can be useful for developing improved cryptanalytic techniques.
This paper puts forward an efficient broadcast encryption scheme in the public-key setting, employing the ternary-tree subset difference method for revocation. It provides outsider anonymity, preventing the revoked users from learning any information about the message and concealing the set of subscribed users from the revoked users. Our approach uses the composite-order bilinear group setting and exhibits a significant improvement in broadcast efficiency. The proposed scheme compares favourably with existing similar schemes in the standard model. The public key and secret key sizes are poly-logarithmic, while the ciphertext size is sublinear in the total number of users. Our scheme achieves selective security against chosen-plaintext attacks in the standard model under reasonable assumptions.
ePrint Report A note on how to (pre-)compute a ladder Thomaz Oliveira, Julio López, Francisco Rodríguez-Henríquez
In the RFC 7748 memorandum, the Internet Research Task Force specified a Montgomery-ladder scalar multiplication function based on two recently proposed prime elliptic curves. The purpose of this function is to support the Diffie-Hellman key exchange algorithm included in the coming version of the Transport Layer Security cryptographic protocol. In this paper, we introduce a Montgomery-ladder variant that accelerates the fixed-point multiplication function when applied in the Diffie-Hellman key-pair generation step. Our function combines a right-to-left variant of the Montgomery ladder with the pre-computation of multiples of the base point; requiring only modest memory resources and a small implementation effort with respect to the RFC 7748 code, it obtains significant performance improvements on desktop architectures. To our knowledge, this is the first proposal of a Montgomery-ladder procedure for prime elliptic curves that admits the extensive use of pre-computation.
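For context, the classic left-to-right Montgomery ladder looks as follows; this is a group-agnostic toy sketch of ours, whereas the paper's contribution is a right-to-left variant that additionally pre-computes multiples of the base point:

    def montgomery_ladder(k, P, add, dbl, identity):
        """Classic left-to-right Montgomery ladder: both registers are
        updated at every step, so the sequence of group operations is
        independent of the bits of k."""
        R0, R1 = identity, P
        for bit in bin(k)[2:]:              # most significant bit first
            if bit == '0':
                R0, R1 = dbl(R0), add(R0, R1)
            else:
                R0, R1 = add(R0, R1), dbl(R1)
        return R0                           # [k]P

    # toy check in the additive group of integers modulo 2**255 - 19
    p = 2**255 - 19
    assert montgomery_ladder(77, 5, lambda a, b: (a + b) % p,
                             lambda a: 2 * a % p, 0) == 77 * 5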
We solve the problem of finding the success rate of an optimal side-channel attack targeting the first and the last round of a block cipher at once. We relate the results to the properties of the direct and inverse substitution boxes (when they are bijective), in terms of confusion coefficients.
Most modern actively secure multiparty computation protocols make use of a function and input independent pre-processing phase. This pre-processing phase is tasked with producing some form of correlated randomness and distributing it to the parties. Whilst the “online” phase of such protocols is exceedingly fast, the bottleneck comes in the pre-processing phase. In this paper we examine situations where the computing parties in the online phase may want to outsource the pre-processing phase to another set of parties, or to a sub-committee. We examine how this can be done, and also describe situations where this may be a benefit.
ePrint Report Side-channel Analysis of Lightweight Ciphers: Does Lightweight Equal Easy? Annelie Heuser, Stjepan Picek, Sylvain Guilley, Nele Mentens
Side-channel attacks represent a powerful category of attacks against cryptographic devices. Still, side-channel analysis for lightweight ciphers is much less investigated than, for instance, for AES. Although intuition may suggest that lightweight ciphers are weaker in terms of side-channel resistance, this remains to be confirmed and quantified. In this paper, we consider various side-channel analysis metrics that provide insight into the resistance of lightweight ciphers against side-channel attacks. In particular, for the non-profiled scenario we use the theoretical confusion coefficient and empirical correlation power analysis. Furthermore, we conduct a profiled side-channel analysis using various machine-learning attacks on PRESENT and AES. Our results show that the difference between AES and lightweight ciphers is smaller than one would expect. Interestingly, we observe that the studied 4-bit S-boxes differ in their side-channel resilience, while the difference in the 8-bit ones is only theoretically present.
This paper explores a new type of MAC called message-recovery MACs (MRMACs). MRMACs have an additional input $R$ that gets recovered upon verification. Receivers must execute verification in order to recover $R$, making the verification process unskippable. Such a feature helps avoid mis-implemented verification algorithms. The syntax and security notions of MRMACs are rigorously formulated. In particular, we formalize the notion of unskippability and present a construction of an unskippable MRMAC from a tweakable cipher and a universal hash function, with formal security proofs. We extend the idea of MRMACs to a new type of authenticated encryption called verification-unskippable AE (VUAE), and propose a generic Enc-then-MRMAC composition which realizes VUAE. The encryption part needs to satisfy a new security notion called one-time undecipherability. We provide three constructions that are one-time undecipherable, proven secure under various security models.
Sampling integers with Gaussian distribution is a fundamental problem that arises in almost every application of lattice cryptography, and it can be both time-consuming and challenging to implement. Most previous work has focused on the optimization and implementation of integer Gaussian sampling in the context of specific applications, with fixed sets of parameters. We present new algorithms for discrete Gaussian sampling that are generic (application-independent), efficient, and easily implemented in constant time without incurring a substantial slow-down, making them more resilient to side-channel (e.g., timing) attacks. As an additional contribution, we present new analytical techniques that simplify the precision/security evaluation of floating-point cryptographic algorithms, and an experimental comparison of our algorithms with previous algorithms from the literature.
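As a baseline for comparison, the textbook rejection sampler for the discrete Gaussian $D_{\mathbb{Z},\sigma}$ is easy to state; it is exactly the kind of variable-time, per-sample floating-point routine the paper's generic constant-time algorithms aim to improve upon. The sketch below is ours:

    import math, random

    def sample_discrete_gaussian(sigma, tail=12):
        """Baseline rejection sampler for D_{Z,sigma}: simple, but neither
        constant-time nor optimized."""
        bound = int(math.ceil(tail * sigma))     # truncate negligible tails
        while True:
            x = random.randint(-bound, bound)
            # accept with probability rho(x) = exp(-x^2 / (2 sigma^2))
            if random.random() < math.exp(-x * x / (2 * sigma * sigma)):
                return x

    samples = [sample_discrete_gaussian(3.2) for _ in range(10000)]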
ePrint Report Pseudorandomness of Ring-LWE for Any Ring and Modulus Chris Peikert, Oded Regev, Noah Stephens-Davidowitz
We give a polynomial-time quantum reduction from worst-case (ideal) lattice problems directly to the decision version of (Ring-)LWE. This extends to decision all the worst-case hardness results that were previously known for the search version, for the same or even better parameters and with no algebraic restrictions on the modulus or number field. Indeed, our reduction is the first that works for decision Ring-LWE with any number field and any modulus.
ePrint Report Threshold Fully Homomorphic Encryption Aayush Jain, Peter M. R. Rasmussen, Amit Sahai
We formally define and give the first construction of (leveled) threshold fully homomorphic encryption for any access structure induced by a monotone boolean formula and in particular for the threshold access structure. Our construction is based on the learning with errors assumption and can be instantiated with any existing homomorphic encryption scheme that satisfies fairly general conditions, such as Gentry, Sahai, Waters (CRYPTO 2013) and Brakerski, Gentry, Vaikuntanathan (ITCS 2012).

From threshold homomorphic encryption, we construct function secret sharing and distributed pseudorandom functions for the aforementioned access structures. No such constructions were known prior to this work.
20 March 2017
The analysis of real-world protocols, in particular key exchange protocols and protocols building on them, is a very complex, error-prone, and tedious task. Besides the complexity of the protocols themselves, one important reason for this is that the security of every protocol has to be reduced, time and again, to the security of the underlying cryptographic primitives.

We would therefore like to get rid of reduction proofs for real-world key exchange protocols as much as possible, and in many cases altogether, also for higher-level protocols that use the exchanged keys. Some first steps have been taken in this direction, but existing work is still quite limited and, for example, does not support Diffie-Hellman (DH) key exchange, a prevalent cryptographic primitive in real-world protocols.

In this paper, building on work by K{\"u}sters and Tuengerthal, we provide an ideal functionality in the universal composability setting which supports several common cryptographic primitives, including DH key exchange. This functionality helps to avoid reduction proofs in the analysis of real-world protocols and often eliminates them completely. We also propose a new general ideal key exchange functionality which allows higher-level protocols to use exchanged keys in an ideal way. As a proof of concept, we apply our framework to three practical DH key exchange protocols, namely ISO 9798-3, SIGMA, and OPTLS.
ePrint Report New Limits for AES Known-Key Distinguishers Lorenzo Grassi, Christian Rechberger
Known-key distinguishers have been introduced to better understand the security of block ciphers in situations where the key cannot be considered secret.

AES is often considered a target of such analyses, simply because AES or its building blocks are used in many settings that go beyond classical encryption. The most recent known-key model of Gilbert (proposed at Asiacrypt 2014) makes it possible to consider two 4-round distinguishers combined in an inside-out fashion (8 core rounds) and to extend them by one round in each direction (two extension rounds). The resulting 10-round distinguisher has a time complexity of $2^{64}$. In that work, arguments were put forward suggesting that two extension rounds seem to be the limit in the known-key model, and that likely only a distinguisher exploiting the balance property can be extended in this way.

In this paper we disprove both conjectures and arrive at the following results. We first show that the technique proposed by Gilbert can also be used to extend a known-key distinguisher based on truncated differential trails. This allows us to improve all known-key distinguishers currently in the literature for up to 10 rounds of AES; in particular, we set up a 9-round known-key distinguisher for AES with a time complexity of $2^{23}$ and a 10-round known-key distinguisher with a time complexity of $2^{50}$. Secondly, we show that more than two extension rounds are possible. As a result, we describe the first known-key distinguishers on 12 rounds of AES, obtained by extending an 8-round known-key distinguisher by two rounds in each direction (four extension rounds), with a time complexity of $2^{82}$.

We conclude with a discussion of why it seems infeasible to set up similar distinguishers on 14 rounds using the same strategy.
ePrint Report Towards Easy Key Enumeration Changhai Ou, Degang Sun, Zhu Wang, Xinping Zhou, Juan Ai
Key enumeration solutions are post-processing schemes for the output sequences of side-channel distinguishers, whose application is hindered by very large key-candidate spaces and computational power requirements: the attacker may spend days or months enumerating a huge key-candidate space (e.g., $2^{40}$). In this paper, we aim at pre-processing and reducing the key-candidate space by deleting impossible key candidates before enumeration. A new distinguisher named Group Collision Attack (GCA) is given. Moreover, we introduce key verification into key recovery, and a new divide-and-conquer strategy named Key Grouping Enumeration (KGE) is proposed. KGE divides the huge key space into several groups and uses GCA to delete impossible key combinations and output possible ones in each group. KGE then recombines the remaining key candidates in each group using verification. The number of remaining key candidates becomes much smaller through these two deletion steps, at a small computational cost. Thus, the attacker can use KGE as a pre-processing tool for key enumeration and enumerate the key more easily and quickly in a much smaller candidate space.
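The divide-and-conquer shape of KGE can be sketched generically; the scoring by GCA and the verification step are the paper's, while the concrete `score`, `threshold` and `verify` used below are placeholders of ours:

    from itertools import product

    def kge_preprocess(groups, score, threshold, verify):
        """Toy sketch of the KGE idea: prune per-group candidates, then
        recombine the (much smaller) surviving space with verification."""
        survivors = [[c for c in g if score(c) >= threshold] for g in groups]
        return [k for k in product(*survivors) if verify(k)]

    # toy usage: two 4-bit key groups, hypothetical distinguisher scores
    groups = [list(range(16)), list(range(16))]
    keys = kge_preprocess(groups,
                          score=lambda c: 1.0 if c % 3 == 0 else 0.1,
                          threshold=0.5,
                          verify=lambda k: sum(k) % 2 == 0)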
We conduct a reduction-based security analysis of the Extensible Authentication Protocol (EAP), a widely used three-party authentication framework. EAP is often found in enterprise networks, where it allows a client and an authenticator to establish a shared key with the help of a mutually trusted server. Viewing EAP as a three-party authenticated key exchange protocol, we show that the general EAP construction achieves a security notion we call 3P-AKE$^w$. The 3P-AKE$^w$ security notion captures the idea of \emph{weak forward secrecy} and is a simplified three-party version of the well-known eCK model in the two-pass variant. Our analysis is modular and reflects the compositional nature of EAP.

Additionally, we show that the security of EAP can easily be upgraded to provide \emph{full} forward secrecy simply by adding a subsequent key-confirmation step between the client and the authenticator. In practice this key-confirmation step is often carried out in the form of a 2P-AKE protocol which uses EAP to bootstrap its authentication. A concrete example is the extremely common IEEE~802.11 protocol used in WLANs. In enterprise settings EAP is often used in conjunction with IEEE~802.11 in order to allow the wireless client to authenticate itself to a wireless access point (the authenticator) through some centrally administrated server. Building on our modular results for EAP, we get as our second major result the first reduction-based security result for IEEE~802.11 combined with EAP.
Masking with random values is an effective countermeasure against side-channel attacks. For cryptographic algorithms combining arithmetic and Boolean masking, it is necessary to switch from arithmetic to Boolean masking and vice versa. Following a recent approach by Hutter and Tunstall, we describe a high-order Boolean-to-arithmetic conversion algorithm whose complexity is independent of the register size $k$. Our new algorithm is proven secure in the Ishai-Sahai-Wagner (ISW) framework for private circuits. In practice, for small orders, our new countermeasure is one order of magnitude faster than previous work. We also describe a 3rd-order attack against the 3rd-order Hutter-Tunstall algorithm, and a constant, 4th-order attack against the $t$-th order Hutter-Tunstall algorithms, for any $t \geq 4$.
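For background, Goubin's classic first-order Boolean-to-arithmetic conversion, whose complexity is likewise independent of the register size, fits in a few lines (our sketch for context; the paper's contribution is the extension to arbitrary order $t$):

    import random

    MASK = 0xFFFFFFFF  # work modulo 2^32

    def goubin_bool_to_arith(x_prime, r):
        """First-order Boolean-to-arithmetic conversion (Goubin, CHES 2001).

        Input:  x = x_prime ^ r.   Output: A with x = (A + r) mod 2^32.
        Uses the fact that g -> (x' ^ g) - g is affine over GF(2): XORing
        its values at g, at g ^ r, and at 0 (which is x') yields its value
        at r, namely x - r, while every intermediate stays masked by g.
        """
        g = random.randrange(2**32)                  # fresh random mask
        t1 = ((x_prime ^ g) - g) & MASK
        t2 = ((x_prime ^ (g ^ r)) - (g ^ r)) & MASK
        return t1 ^ x_prime ^ t2

    x, r = 0xDEADBEEF, random.randrange(2**32)
    A = goubin_bool_to_arith(x ^ r, r)
    assert (A + r) & MASK == x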
ePrint Report A Lattice-Based Universal Thresholdizer for Cryptographic Systems Dan Boneh, Rosario Gennaro, Steven Goldfeder, Sam Kim
We develop a general approach to thresholdizing a large class of (non-threshold) cryptographic schemes. We show how to add threshold functionality to CCA-secure public-key encryption (PKE), signature schemes, pseudorandom functions, and other primitives. To do so, we introduce a general tool, called a universal thresholdizer, from which many threshold systems are possible. The tool builds upon a lattice-based fully homomorphic encryption (FHE) system. Applying the tool to a (non-threshold) lattice-based signature gives the first single-round threshold signature from the learning with errors (LWE) problem; applying it to a (non-threshold) lattice-based CCA-secure PKE gives a single-round lattice-based threshold CCA-secure PKE.
Recent works (Lin, EUROCRYPT'16, ePrint'16; Lin and Vaikuntanathan, FOCS'16; Ananth and Sahai, EUROCRYPT'17) establish a tight connection between constructions of indistinguishability obfuscation from $L$-linear maps and pseudo-random generators (PRGs) with output locality $L$. This approach, however, appears unsuitable for obtaining instantiations from bilinear maps, as no polynomial-stretch PRG with locality lower than 5 exists.

This paper presents new candidate constructions of indistinguishability obfuscation from (i) $L$-linear maps for any $L \ge 2$, and (ii) PRGs with block-wise locality $L$. A PRG has block-wise locality $L$ if every output bit depends on at most $L$ (disjoint) input blocks, each consisting of up to $\log \lambda$ input bits. In particular, we give:

A construction of a general-purpose indistinguishability obfuscator from $L$-linear maps and a subexponentially-secure PRG with block-wise locality $L$ and polynomial stretch.

A construction of general-purpose functional encryption from $L$-linear maps and any slightly super-polynomially secure PRG with block-wise locality $L$ and polynomial stretch.

All our constructions are based on the SXDH assumption on $L$-linear maps and the subexponential Learning With Errors (LWE) assumption. In the special case of $L = 2$, our constructions can alternatively be based on bilinear maps with the Matrix Diffie-Hellman assumption and the 3-party Decisional Diffie-Hellman assumption, without assuming LWE. Concurrently, we initiate the study of candidate PRGs with block-wise locality $L \ge 2$ based on Goldreich's local functions, and of their security. In particular, known lower bounds on the locality of PRGs do not apply to block-wise locality for any $L \ge 2$, and the security of instantiations with block-wise locality $L \ge 3$ is backed by validation similar to that of constructions with (conventional) locality $5$. We complement this with hardness-amplification techniques that weaken the pseudorandomness requirement on our candidates to qualitatively weaker requirements.
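To make the notion of block-wise locality concrete, here is a toy and entirely illustrative sampler of ours: the seed is split into blocks of roughly $\log \lambda$ bits, and each output bit reads at most $L$ whole blocks (the predicate and block hypergraph below are placeholders, not the paper's candidates):

    import random

    def blockwise_local_prg(seed_blocks, m, L, rng):
        """Toy PRG with block-wise locality L: every output bit reads at
        most L whole seed blocks and applies a fixed predicate to them."""
        out = []
        for _ in range(m):
            chosen = rng.sample(range(len(seed_blocks)), L)
            bits = [b for i in chosen for b in seed_blocks[i]]
            # hypothetical predicate: XOR of all bits plus one AND term
            out.append((sum(bits) + bits[0] * bits[1]) % 2)
        return out

    rng = random.Random(1)
    seed = [[rng.randrange(2) for _ in range(6)] for _ in range(64)]
    y = blockwise_local_prg(seed, 2048, L=2, rng=rng)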
ePrint Report Proof of Luck: an Efficient Blockchain Consensus Protocol Mitar Milutinovic, Warren He, Howard Wu, Maxinder Kanwal
In this paper, we present designs for multiple blockchain consensus primitives and a novel blockchain system, all based on the use of trusted execution environments (TEEs), such as Intel SGX-enabled CPUs. First, we show how using TEEs for existing proof-of-work schemes can make mining equitably distributed by preventing the use of ASICs. Next, we extend the design with proof-of-time and proof-of-ownership consensus primitives to make mining energy- and time-efficient. Further improving on these designs, we present a blockchain using a proof-of-luck consensus protocol. Our proof-of-luck blockchain uses a TEE platform's random number generation to choose a consensus leader, which offers low-latency transaction validation, deterministic confirmation time, negligible energy consumption, and equitably distributed mining. Lastly, we discuss a potential protection against up to a constant number of compromised TEEs.
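The leader-election core of proof of luck is simple to sketch. The following toy model of ours ignores attestation, chain scoring across rounds, and all systems aspects, and only shows the TEE-randomness idea:

    import random

    def tee_draw_luck(rng):
        """Inside the TEE: draw an unbiased luck value for this round.
        Real SGX code would also attest that the value came from the
        enclave; this toy model skips attestation entirely."""
        return rng.random()

    def elect_leader(participants, rng):
        # every participant's TEE draws a luck value; the luckiest leads
        luck = {p: tee_draw_luck(rng) for p in participants}
        return max(luck, key=luck.get)

    rng = random.Random()
    print(elect_leader(["alice", "bob", "carol"], rng))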
