Asiacrypt 2016

List of Accepted Papers

  • Size-Hiding Computation for Multiple Parties
    Kazumasa Shinagawa; Koji Nuida; Takashi Nishide; Goichiro Hanaoka; Eiji Okamoto
    University of Tsukuba, AIST; AIST; University of Tsukuba; AIST; University of Tsukuba
  • Abstract:

    Lindell, Nissim, and Orlandi (ASIACRYPT 2013) studied feasibility and infeasibility of general two-party protocols that hide not only the contents of the inputs of parties, but also some sizes of the inputs and/or the output.
    In this paper, we extend their results to $n$-party protocols for $n \geq 2$, and prove that it is infeasible to securely compute every function while hiding two or more (input or output) sizes.
    Then, to circumvent the infeasibility, we naturally extend the communication model so that an adversary can learn neither the contents of the messages nor the number of bits exchanged among honest parties.
    We note that size-hiding computation is not trivial even with such a size-hiding channel, since size-hiding computation of some functions remains infeasible, as we show in the text.
    Then, as our main result, we give a necessary and sufficient condition for feasibility of size-hiding computation of an arbitrary function, in terms of which of the input and output sizes must be hidden from which of the $n$ parties.
    In particular, it is now possible to let each input/output size be hidden from some parties, while the previous model only allows the size of at most one input to be hidden.
    Our results are based on a security model slightly stronger than the honest-but-curious model.

  • Trick or Tweak: On the (In)security of OTR’s Tweaks
    Raphael Bost; Olivier Sanders
    DGA/Université de Rennes 1; Orange Labs
  • Abstract:

    A tweakable blockcipher (TBC) is a powerful tool for designing authenticated encryption schemes, as illustrated by Minematsu’s Offset Two Rounds (OTR) construction. It extends a standard blockcipher with an additional input, called a tweak, which adds variability to the primitive. More specifically, each tweak is expected to define a different, independent pseudo-random permutation.

    In this work we focus on OTR’s way to instantiate a TBC and show that it does not achieve such a property for a large set of parameters. Indeed, we describe collisions between the input masks derived from the tweaks and explain how they result in practical attacks against this scheme, breaking privacy, authenticity, or both, using a single encryption query, with advantage at least 1/4.

    We stress, however, that our results do not invalidate the OTR construction as a whole, but simply show that the TBC’s input masks should be designed differently.

  • Authenticated Encryption with Variable Stretch
    Reza Reyhanitabar; Serge Vaudenay; Damian Vizár
    NEC Laboratories Europe, Germany; EPFL, Switzerland; EPFL, Switzerland
  • Abstract:

    In conventional authenticated-encryption (AE) schemes, the ciphertext expansion, a.k.a. stretch or tag length, is a constant or a parameter of the scheme that must be fixed per key. However, using variable-length tags per key can be desirable in practice, or may occur as a result of misuse. The RAE definition by Hoang, Krovetz, and Rogaway (Eurocrypt 2015), aiming at the “best-possible” AE security, supports variable stretch among other strong features, but achieving the RAE goal incurs a particular inefficiency: neither encryption nor decryption can be online. The problem of enhancing the well-established nonce-based AE (nAE) model and the standard schemes thereof to support variable tag lengths per key, without sacrificing desirable functional and efficiency properties such as online encryption, has recently regained interest, as evidenced by extensive discussion threads on the CFRG forum and in the CAESAR competition. Yet a formal definition for this goal has been lacking.

    First, we show that several recently proposed heuristic measures, which try to augment the known schemes by inserting the tag length into the nonce and/or associated data, fail to deliver any meaningful security in this setting. Second, we provide a formal definition for the notion of nonce-based variable-stretch AE (nvAE) as a natural extension of the traditional nAE model. We then show a second, modular approach to formalizing the goal, which combines the nAE notion with a new property we call “key-equivalent separation by stretch” (kess); we prove that (after a mild adjustment to the syntax) any nAE scheme which additionally fulfills the kess property achieves the nvAE goal. Finally, we show that the nvAE goal is efficiently and provably achievable, for instance by simple tweaks to off-the-shelf schemes such as OCB.

  • Adaptive Oblivious Transfer and Generalization
    Olivier Blazy; Céline Chevalier; Paul Germouty
    XLim, Université de Limoges; CRED, Univerité Pantheon Assas Paris II; XLim, Université de Limoges
  • Abstract:

    Oblivious Transfer (OT) protocols were introduced in the seminal paper of Rabin, and allow a user to retrieve a given number of lines (usually one) of a database without revealing to the server which ones. The server is assured that only this given number of lines can be accessed per interaction, so the other lines remain protected, while the user is assured that the server does not learn which lines were requested. This primitive is of great practical interest, for example in secure multi-party computation, and directly relates to Symmetrically Private Information Retrieval (SPIR).

    Recent Oblivious Transfer instantiations secure in the UC framework suffer from a serious drawback: after the first query, the global complexity of the scheme does not improve, so each subsequent query still has a global complexity of O(|DB|), meaning there is no gain over running completely independent queries. In this paper, we propose a new protocol solving this issue, which allows subsequent queries with complexity O(\log(|DB|)) while keeping round optimality, and we prove the protocol secure in the UC framework with adaptive corruptions, assuming reliable erasures.

    As a second contribution, we show that the techniques we use for Oblivious Transfer can be generalized into a new framework we call Oblivious Language-Based Envelope (OLBE). This is of practical interest, since it seems increasingly unrealistic to consider a database with uncontrolled access in access-control scenarios. Our approach generalizes Oblivious Signature-Based Envelope (OSBE) to handle more expressive credentials and requests from the user. Naturally, OLBE encompasses both OT and OSBE, but it also makes it possible to achieve Oblivious Transfer with fine-grained access control over each line. For example, a user can access a line if and only if he possesses a certificate granting him access to that line.

    We show how to instantiate such primitives generically and efficiently, and prove them secure in the Universal Composability framework with adaptive corruptions, assuming reliable erasures. We provide new UC ideal functionalities where needed, or show that the existing ones fit our new framework.

    The security of these designs preserves both the secrecy of the database values and the user credentials. This symmetry allows us to view our new approach as a generalization of the notion of SPIR.

  • Structure-Preserving Smooth Projective Hashing
    Olivier Blazy; Céline Chevalier
    XLim, Université de Limoges; CRED, Université Pantheon Assas Paris II
  • Abstract:

    Smooth projective hashing has proven to be an extremely useful primitive, in particular when used in conjunction with commitments to provide implicit decommitment. This has led to applications proven secure in the UC framework, even in the presence of an adversary performing adaptive corruptions, for example Password-Authenticated Key Exchange (PAKE) and 1-out-of-m Oblivious Transfer (OT). However, such solutions still lack efficiency, since their cost scales heavily with the underlying message length.
    Structure-preserving cryptography aims at providing elegant and efficient schemes based on classical assumptions and standard group operations on group elements. A recent trend focuses on constructions of structure-preserving signatures, which require messages, signatures, and verification keys to lie in the base group, while the verification equations consist only of pairing-product equations. Classical constructions of Smooth Projective Hash Functions suffer from the same limitation as classical signatures: at least one part of the computation (messages for signatures, witnesses for SPHFs) is a scalar.
    In this work, we introduce and instantiate the concept of Structure-Preserving Smooth Projective Hash Functions and, as applications, give more efficient instantiations of one-round PAKE and three-round OT, as well as information retrieval based on Anonymous Credentials, all UC-secure against adaptive adversaries.

  • Public-Key Cryptosystems Resilient to Continuous Tampering and Leakage of Arbitrary Functions
    Eiichiro Fujisaki; Keita Xagawa
    NTT Secure Platform Laboratories
  • Abstract:

    We present the first chosen-ciphertext secure public-key encryption schemes resilient to continuous tampering with arbitrary (efficiently computable) functions. Since it is impossible to realize such schemes without a self-destruction or key-updating mechanism, our proposals allow for either of them. As in previous works on this type of tampering attack, our schemes simultaneously tolerate bounded or continuous memory leakage attacks. Unlike the previous works, our schemes have efficient instantiations. We also prove that there is no secure digital signature scheme resilient to arbitrary tampering functions against a stronger variant of the continuous tampering attack, even if it has a self-destruction mechanism.

  • Efficient KDM-CCA Secure Public Key Encryption for Polynomial Functions
    Shuai Han; Shengli Liu; Lin Lyu
    Shanghai Jiao Tong University
  • Abstract:

    KDM[F]-CCA secure public-key encryption (PKE) protects the security of a message f(sk), with f \in F, computed directly from the secret key, even if the adversary has access to a decryption oracle. An efficient KDM[F_{aff}]-CCA secure PKE scheme for affine functions was proposed by Lu, Li and Jia (LLJ, Eurocrypt 2015). We point out that their security proof does not go through under the DDH assumption.

    In this paper, we introduce a new concept, \emph{Authenticated Encryption with Auxiliary-Input} (AIAE), and define for it new security notions dealing with related-key attacks, namely \emph{IND-RKA security} and \emph{weak INT-RKA security}. We also construct such an AIAE w.r.t. a set of restricted affine functions from the DDH assumption. With our AIAE,

    — we construct the first efficient KDM[F_{aff}]-CCA secure PKE w.r.t. affine functions with compact ciphertexts, which consist only of a constant number of group elements;

    — we construct the first efficient KDM[F_{poly}^d]-CCA secure PKE w.r.t. polynomial functions of bounded degree d with almost compact ciphertexts, where the number of group elements in a ciphertext is polynomial in d and independent of the security parameter.

    Both of our PKE schemes are based on the DDH and DCR assumptions, and are free of NIZK proofs and pairings.

  • Linear Structures: Applications to Cryptanalysis of Round-Reduced Keccak
    Jian Guo; Meicheng Liu; Ling Song
    NTU, Singapore; NTU, Singapore and Chinese Academy of Sciences, China; NTU, Singapore and Chinese Academy of Sciences, China
  • Abstract:

    In this paper, we analyze the security of round-reduced versions of the Keccak hash function family. Based on the work pioneered by Aumasson and Meier, and Dinur et al., we formalize and develop a technique named linear structure, which allows linearization of the underlying permutation of Keccak for up to 3 rounds with a large space of free variables. As a direct application, it extends the best zero-sum distinguishers by 2 rounds without increasing the complexities. We also apply linear structures to preimage attacks against Keccak. By carefully studying the properties of the underlying Sbox, we show bilinear structures and find ways to convert information about the output bits into linear functions of the input bits. These findings, combined with linear structures, lead us to preimage attacks against up to 4-round Keccak with reduced complexities. An interesting feature of such preimage attacks is their low complexity for small variants. As extreme examples, we can now find preimages of 3-round SHAKE128 with complexity 1, as well as the first practical solutions to two 3-round instances of the Keccak challenge. Both the zero-sum distinguishers and the preimage attacks are verified by implementations. We note that the attacks here are still far from threatening the security of the full 24-round Keccak.

  • Simpira v2: A Family of Efficient Permutations Using the AES Round Function
    Shay Gueron; Nicky Mouha
    University of Haifa, Israel and Intel Corporation, Israel; ESAT/COSIC, KU Leuven, Belgium, Inria, France and NIST, USA
  • Abstract:

    This paper introduces Simpira, a family of cryptographic permutations that supports inputs of $128 \times b$ bits, where $b$ is a positive integer. Its design goal is to achieve high throughput on virtually all modern 64-bit processors, that nowadays already have native instructions for AES. To achieve this goal, Simpira uses only one building block: the AES round function. For $b=1$, Simpira corresponds to 12-round AES with fixed round keys, whereas for $b\ge 2$, Simpira is a Generalized Feistel Structure (GFS) with an $F$-function that consists of two rounds of AES. We claim that there are no structural distinguishers for Simpira with a complexity below $2^{128}$, and analyze its security against a variety of attacks in this setting. The throughput of Simpira is close to the theoretical optimum, namely, the number of AES rounds in the construction. For example, on the Intel Skylake processor, Simpira has throughput below 1 cycle per byte for $b \le 4$ and $b=6$. For larger permutations, where moving data in memory has a more pronounced effect, Simpira with $b=32$ (512 byte inputs) evaluates 732 AES rounds, and performs at 824 cycles (1.61 cycles per byte), which is less than $13\%$ off the theoretical optimum. If the data is stored in interleaved buffers, this overhead is reduced to less than $1\%$. The Simpira family offers an efficient solution when processing wide blocks, larger than 128 bits, is desired.
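
    As a quick sanity check on the quoted $b=32$ figures (our own arithmetic, assuming the theoretical optimum is one cycle per AES round, as the phrasing above suggests):

    \[ \frac{824 - 732}{732} \approx 12.6\% < 13\% \text{ overhead}, \qquad \frac{824~\text{cycles}}{512~\text{bytes}} \approx 1.61~\text{cycles/byte}. \]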

  • Balloon Hashing: A Memory-Hard Function Providing Provable Protection Against Sequential Attacks
    Dan Boneh; Henry Corrigan-Gibbs; Stuart Schechter
    Stanford University; Stanford University; Microsoft Research
  • Abstract:

    We present the Balloon password-hashing algorithm. This is the first practical cryptographic hash function that: (i) has proven memory-hardness properties in the random-oracle model, (ii) uses a password-independent access pattern, and (iii) meets—and often exceeds—the performance of the best heuristically secure password-hashing algorithms. Memory-hard functions require a large amount of working space to evaluate efficiently and, when used for password hashing, they dramatically increase the cost of offline dictionary attacks. In this work, we leverage a previously unstudied property of a certain class of graphs (“random sandwich graphs”) to analyze the memory-hardness of the Balloon algorithm. The techniques we develop are general: we also use them to give a proof of security of the scrypt and Argon2i password-hashing functions, in the random-oracle model. Our security analysis uses a sequential model of computation, which essentially captures attacks that run on single-core machines. Recent work shows how to use massively parallel special-purpose machines (e.g., with hundreds of cores) to attack memory-hard functions, including Balloon. We discuss this important class of attacks, which is outside of our adversary model, and propose practical defenses against them. To motivate the need for security proofs in the area of password hashing, we demonstrate and implement a practical attack against Argon2i that successfully evaluates the function with less space than was previously claimed possible. Finally, we use experimental results to compare the performance of the Balloon hashing algorithm to other memory-hard functions.
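
    To illustrate property (ii), the password-independent access pattern, the following is a much-simplified sketch of a balloon-style memory-hard loop (our illustration, not the actual Balloon specification; the buffer size, round count, and mixing rule are simplified stand-ins):

    import hashlib

    def H(*parts: bytes) -> bytes:
        """Hash helper: concatenate the parts and apply SHA-256."""
        h = hashlib.sha256()
        for p in parts:
            h.update(p)
        return h.digest()

    def balloon_style_hash(password: bytes, salt: bytes,
                           space: int = 1024, rounds: int = 3) -> bytes:
        """Simplified balloon-style hash: fill a large buffer, then mix it.
        The mixing indices are derived from the salt and loop counters only,
        never from the password, so the memory access pattern is
        password-independent (property (ii) above)."""
        # Expand: sequentially fill the buffer from the password and salt.
        buf = [H(password, salt)]
        for i in range(1, space):
            buf.append(H(buf[i - 1], i.to_bytes(8, "big")))
        # Mix: repeatedly hash each block with a pseudorandomly chosen one.
        for r in range(rounds):
            for m in range(space):
                # The index depends only on salt, r, m -- not on the password.
                idx = int.from_bytes(
                    H(salt, r.to_bytes(8, "big"), m.to_bytes(8, "big")),
                    "big") % space
                # buf[m - 1] wraps to the last block when m == 0.
                buf[m] = H(buf[m - 1], buf[m], buf[idx])
        return buf[-1]

    # Example: derive a 32-byte digest using about 32 KiB of buffer.
    digest = balloon_style_hash(b"correct horse", b"unique salt")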

  • When are Fuzzy Extractors Possible?
    Benjamin Fuller; Adam Smith; Leonid Reyzin
    Boston University and MIT Lincoln Laboratory; Pennsylvania State University; Boston University
  • Abstract:

    Fuzzy extractors (Dodis et al., Eurocrypt 2004) convert repeated noisy readings of a high-entropy secret into the same uniformly distributed key. A minimum condition for the security of the key is the hardness of guessing a value that is similar to the secret, because the fuzzy extractor converts such a guess to the key.

    We define fuzzy min-entropy to quantify this property of a noisy source of secrets. Fuzzy min-entropy measures the success of the adversary when provided with only the functionality of the fuzzy extractor, that is, the ideal security possible from a noisy distribution. High fuzzy min-entropy is necessary for the existence of a fuzzy extractor.
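
    Concretely, for a source over a metric space with distance balls $B_t(\cdot)$ of radius equal to the error tolerance $t$, fuzzy min-entropy is defined as

    \[ \mathrm{H}^{\mathrm{fuzz}}_{t,\infty}(W) \;=\; -\log\Bigl(\max_{w^*} \Pr\bigl[W \in B_t(w^*)\bigr]\Bigr), \]

    that is, the negative logarithm of the best probability of guessing a point within distance $t$ of the secret, which is exactly the attack described above.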

    We ask: is high fuzzy min-entropy a sufficient condition for key extraction from noisy sources? If only computational security is required, recent progress on program obfuscation gives evidence that fuzzy min-entropy is indeed sufficient. In contrast, information-theoretic fuzzy extractors are not known for many practically relevant sources of high fuzzy min-entropy.

    In this paper, we show that fuzzy min-entropy is sufficient for information-theoretically secure fuzzy extraction: for every source distribution W for which security is possible, we give a secure fuzzy extractor.

    Our construction relies on the fuzzy extractor knowing the precise distribution of the source W. A more ambitious goal is to design a single extractor that works for all possible sources. Our second main result is that this more ambitious goal is impossible: we give a family of sources with high fuzzy min-entropy for which no single fuzzy extractor is secure. We show three flavors of this impossibility result: for standard fuzzy extractors, for fuzzy extractors that are allowed to sometimes be wrong, and for secure sketches, which are the main ingredient of most fuzzy extractor constructions.

  • Dual System Encryption Framework in Prime-Order Groups via Computational Pair Encodings
    Nuttapong Attrapadung
    AIST, Japan
  • Abstract:

    We propose a new generic framework for achieving fully secure attribute-based encryption (ABE) in prime-order bilinear groups. Previous generic frameworks by Wee (TCC’14) and Attrapadung (Eurocrypt’14) were given in composite-order bilinear groups. Both provide abstractions of the dual-system encryption techniques introduced by Waters (Crypto’09). Our framework can be considered a prime-order version of Attrapadung’s framework and works in a similar manner: it relies on a main component called pair encodings, and it generically compiles any secure pair encoding scheme for a predicate into a fully secure ABE scheme for that predicate. One feature of our new compiler is that although the resulting ABE schemes are newly defined in prime-order groups, we require essentially the same security notions of pair encodings as before. Besides the security of pair encodings, our framework assumes only the Matrix Diffie-Hellman assumption (Escala et al., Crypto’13), which includes the Decisional Linear assumption as a special case.

    Recently and independently, prime-order frameworks were also proposed by Chen et al. (Eurocrypt’15) and by Agrawal and Chase (TCC’16-A). The main difference is that their frameworks can deal only with information-theoretic encodings, while ours can also deal with computational ones, which admit wider applications. We demonstrate this by obtaining the first fully secure prime-order realizations of ABE for regular languages; ABE for monotone span programs with the short-ciphertext, short-key, or completely unbounded property; and ABE for branching programs with the short-ciphertext, short-key, or unbounded property.

  • Salvaging Weak Security Bounds for Blockcipher-Based Constructions
    Thomas Shrimpton; R. Seth Terashima
    University of Florida; Qualcomm Technologies, Inc.
  • Abstract:

    The concrete security bounds for some blockcipher-based constructions sometimes become worrisome or even vacuous; for example, when a lightweight blockcipher is used, when large amounts of data are processed, or when a large number of connections need to be kept secure. Rotating keys helps, but introduces a “hybrid factor” $m$ equal to the number of keys used. In such instances, analysis in the ideal-cipher model (ICM) can give a sharper picture of security, but this heuristic is called into question when cryptanalysis of the real-world blockcipher reveals weak keys, related-key attacks, etc.

    To address both concerns, we introduce a new analysis model, the ideal-cipher model under key-oblivious access (ICM-KOA). Like the ICM, the ICM-KOA can give sharp security bounds when standard-model bounds do not. Unlike the ICM, results in the ICM-KOA are less brittle to current and future cryptanalytic results on the blockcipher used to instantiate the ideal cipher. Also, results in the ICM-KOA immediately imply results in the ICM \textit{and} the standard model, giving multiple viewpoints on a construction with a single effort. The ICM-KOA provides a conceptual bridge between ideal ciphers and tweakable blockciphers (TBC): blockcipher-based constructions secure in the ICM-KOA have TBC-based analogs that are secure under standard-model TBC security assumptions. Finally, the ICM-KOA provides a natural framework for analyzing blockcipher key-update strategies that use the blockcipher to derive the new key. This is done, for example, in the NIST CTR-DRBG and in the hardware RNG that ships on Intel chips.

  • An Efficient Approach to Tight Reduction in the Multi-instance, Multi-ciphertext Model
    Junqing Gong; Jie Chen; Xiaolei Dong; Zhenfu Cao
    Shanghai Jiao Tong University; East China Normal University, ENS de Lyon; East China Normal University; East China Normal University
  • Abstract:

    In 2015, Hofheinz et al. [PKC, 2015] extended Chen and Wee’s almost-tight reduction technique for identity-based encryption (IBE) [CRYPTO, 2013] to the multi-instance, multi-ciphertext (MIMC, or multi-challenge) setting, where the adversary is allowed to obtain multiple challenge ciphertexts from multiple IBE instances, and gave the first almost-tightly secure IBE in this setting using composite-order bilinear groups. Several prime-order realizations have been proposed lately. However, there seems to be a dilemma: one obtains either high system performance (in terms of ciphertext/key size and encryption/decryption cost) or weak/standard security assumptions, but not both. A natural question is: can we achieve high performance without relying on stronger, non-standard assumptions?

    In this paper, we answer the question in the affirmative by describing a prime-order IBE scheme with the same performance as the most efficient solutions so far, but whose security still relies on the standard k-linear (k-Lin) assumption. Our technical starting point is Blazy et al.’s almost-tightly secure IBE [CRYPTO, 2014]. We revisit their concrete IBE scheme and associate it with the framework of nested dual system groups. This allows us to extend Blazy et al.’s almost-tightly secure IBE to the MIMC setting using Gong et al.’s method [PKC, 2016]. We emphasize that, when instantiating our construction under the Symmetric eXternal Diffie-Hellman assumption (SXDH = 1-Lin), we obtain the most efficient concrete IBE scheme with an almost-tight reduction in the MIMC setting, whose performance is even comparable to the most efficient IBE schemes in the classical model (i.e., the single-instance, single-ciphertext setting). Besides pursuing high performance, our IBE scheme also achieves a weaker form of anonymity pointed out by Attrapadung et al. [AsiaCrypt, 2015].

  • A New Algorithm for the Unbalanced Meet-in-the-Middle Problem
    Ivica Nikolić ; Yu Sasaki
    Nanyang Technological University, Singapore; NTT Secure Platform Laboratories, Tokyo, Japan
  • Abstract:

    A collision search for a pair of $n$-bit unbalanced functions (one is $R$ times more expensive than the other) is an instance of the meet-in-the-middle problem, solved with the familiar standard algorithm that follows the tradeoff $TM=N$, where $T$ and $M$ are the time and memory complexities and $N=2^n$.
    By combining two ideas, unbalanced interleaving and van Oorschot-Wiener parallel collision search, we construct an alternative algorithm that follows the tradeoff $T^2 M = R^2 N$, where $M \le R$.
    Among other applications, the algorithm solves the well-known open problem of how to reduce the memory of unbalanced collision search.
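
    To make the gain concrete, here is a hypothetical example with our own numbers: for $n = 128$ (so $N = 2^{128}$), imbalance $R = 2^{16}$, and the maximal allowed memory $M = R$,

    \[ \text{standard: } T = \frac{N}{M} = 2^{112}, \qquad \text{new: } T = \sqrt{\frac{R^2 N}{M}} = \sqrt{RN} = 2^{72}. \]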

  • A General Polynomial Selection Method and New Asymptotic Complexities for the Tower Number Field Sieve Algorithm
    Palash Sarkar; Shashank Singh
    Indian Statistical Institute
  • Abstract:

    In a recent work, Kim and Barbulescu extended the tower number field sieve algorithm to obtain improved asymptotic complexities in the medium prime case for the discrete logarithm problem on $\mathbb{F}_{p^n}$ where $n$ is not a prime power. Their method does not work when $n$ is a composite prime power. For this case, we obtain new asymptotic complexities, e.g., $L(1/3,(64/9)^{1/3})$ (resp. $L(1/3,1.88)$ for the multiple number field variation) when $n$ is composite and a power of 2; the previously best known complexity for this case is $L(1/3,(96/9)^{1/3})$ (resp. $L(1/3,2.12)$). These complexities may have consequences for the selection of key sizes for pairing-based cryptography. The new complexities are achieved through a general polynomial selection method. This method, which we call Algorithm-$\mathcal{C}$, extends a previous polynomial selection method proposed at Eurocrypt 2016 to the tower number field case. As special cases, it is possible to obtain the generalised Joux-Lercier and the Conjugation methods of polynomial selection proposed at Eurocrypt 2015, as well as the extension of these methods to the tower number field scenario by Kim and Barbulescu. A thorough analysis of the new algorithm is carried out in both concrete and asymptotic terms.
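
    For reference, the complexities above are in the standard $L$-notation: writing $Q = p^n$ for the field size,

    \[ L(1/3, c) \;=\; \exp\Bigl((c + o(1))\,(\ln Q)^{1/3}(\ln\ln Q)^{2/3}\Bigr), \]

    so a smaller constant $c$, e.g., $(64/9)^{1/3} \approx 1.92$ versus $(96/9)^{1/3} \approx 2.20$, means an asymptotically faster algorithm.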

  • The Kernel Matrix Diffie-Hellman Assumption
    Paz Morillo; Carla Rafols; Jorge Luis Villar
    Universitat Politecnica de Catalunya (Spain); Universitat Pompeu Fabra (Spain); Universitat Politecnica de Catalunya (Spain)
  • Abstract:

    We put forward a new family of computational assumptions, the Kernel Matrix Diffie-Hellman Assumption. Given some matrix A sampled from some distribution D, the kernel assumption says that it is hard to find “in the exponent” a nonzero vector in the left kernel of A. This family is a natural computational analogue of the Matrix Decisional Diffie-Hellman Assumption (MDDH) proposed by Escala et al., and as such it allows extending the advantages of their algebraic framework to computational assumptions.
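
    In symbols, using the implicit notation $[x] = g^x$ of Escala et al., the assumption for a matrix distribution $\mathcal{D}$ states (our rendering of the informal description above): given $[\mathbf{A}]$ with $\mathbf{A} \leftarrow \mathcal{D}$, it is hard to output a vector $[\mathbf{x}]$ with $\mathbf{x} \neq \mathbf{0}$ such that $\mathbf{x}^{\top}\mathbf{A} = \mathbf{0}$.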

    The k-Decisional Linear Assumption is an example of a family of decisional assumptions of strictly increasing hardness as k grows. We show that for any such family of MDDH assumptions, the corresponding Kernel assumptions likewise become strictly weaker as k grows. This requires ruling out the existence of certain black-box reductions between flexible problems (i.e., computational problems with a non-unique solution).

  • Applying MILP Method to Searching Integral Distinguishers Based on Division Property for 6 Lightweight Block Ciphers
    Zejun Xiang; Wentao Zhang; Zhenzhen Bao; Dongdai Lin
    The State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences
  • Abstract:

    Division property is a generalized integral property proposed by Todo at EUROCRYPT 2015; very recently, Todo~\emph{et~al.} proposed bit-based division property and applied it to SIMON32 at FSE 2016. However, this technique can only be applied to block ciphers with block size no larger than 32, due to its high time and memory complexity. In this paper, we extend the Mixed Integer Linear Programming (MILP) method, which has been used to search for differential characteristics and linear trails of block ciphers, to the search for integral distinguishers, based on division property, of block ciphers with block size larger than 32.

    Firstly, we study how to model the division property propagations of three basic operations (copy, bitwise AND, XOR) and of an Sbox operation by linear inequalities, based on which we are able to construct a linear inequality system that accurately describes the division property propagations of a block cipher given an initial division property. Secondly, by choosing an appropriate objective function, we convert a search algorithm under Todo’s framework into an MILP problem, which we then use to search for integral distinguishers. As an application of our technique, we have searched for integral distinguishers of SIMON, SIMECK, PRESENT, RECTANGLE, LBlock and TWINE. Our results show that we can find 14-, 16-, 18-, 22- and 26-round integral distinguishers for SIMON32, 48, 64, 96 and 128, respectively. Moreover, for the two SP-network lightweight block ciphers PRESENT and RECTANGLE, we found 9-round integral distinguishers for both ciphers, which are two rounds longer than the best integral distinguishers in the literature~\cite{wuIntegralPresent}\cite{zhang2012extending}. For LBlock and TWINE, our results are consistent with the best known ones with respect to the longest distinguishers.
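
    For intuition, the three basic operations admit linear models of the following flavor, with all variables binary (our sketch, consistent with the standard bit-based division-property propagation rules; the paper’s exact inequality systems, in particular for Sboxes, are more involved):

    \[ \text{copy } x \rightarrow (y_1, y_2): \; y_1 + y_2 - x = 0; \qquad \text{XOR } (x_1, x_2) \rightarrow y: \; x_1 + x_2 - y = 0; \qquad \text{AND } (x_1, x_2) \rightarrow y: \; y \ge x_1,\; y \ge x_2,\; y \le x_1 + x_2. \]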

  • Towards Tightly Secure Lattice Short Signature and Id-Based Encryption
    Xavier Boyen; Qinyi Li
    Queensland University of Technology
  • Abstract:

    Constructing short signatures with tight security from standard assumptions is a long-standing open problem. We present an adaptively secure, short (and stateless) signature scheme, featuring a constant security loss relative to a conservative hardness assumption, Short Integer Solution (SIS), and the security of a concretely instantiated pseudorandom function (PRF). This gives a class of tightly secure short lattice signature schemes whose security is based on SIS and the underlying assumption of the instantiated PRF.

    Our signature construction further extends to give a class of tightly and adaptively secure “compact” Identity-Based Encryption (IBE) schemes, reducible with constant security loss from Regev’s vanilla Learning With Errors (LWE) hardness assumption and the security of a concretely instantiated PRF. Our approach is a novel combination of a number of techniques, including the Katz-Wang signature paradigm, Agrawal et al.’s lattice-based IBE, and Boneh et al.’s key-homomorphic encryption.

    Our results, for the first time, eliminate the dependency between the number of the adversary’s queries and the security loss of short signature/IBE schemes in the context of lattice-based cryptography. They also indicate that tightly secure PRFs (with constant security loss) would imply tightly, adaptively secure short signature and IBE schemes (with constant security loss).

  • More Powerful and Reliable Second-level Statistical Randomness Tests for NIST SP 800-22
    Shuangyi Zhu; Yuan Ma; Jia Zhuang; Jingqiang Lin; Jiwu Jing
    Institute of Information Engineering, Chinese Academy of Sciences
  • Abstract:

    Random number generators (RNGs) are essential for cryptographic systems, and statistical tests are usually employed to assess the randomness of their outputs. As the most commonly used statistical test suite, the NIST SP 800-22 suite includes 15 test items, each of which consists of two-level tests. For the test items based on the binomial distribution, we find that their second-level tests are flawed due to an inconsistency between the assessed distribution and the assumed one; that is, a sequence that passes the test could still have statistical flaws in the assessed aspect. For this reason, we propose the Q-value as the metric for these second-level tests, replacing the original P-value without any other modification, while the first-level tests are kept unchanged. We provide a correctness proof of the proposed Q-value-based second-level tests and a theoretical analysis demonstrating that the modification improves not only the detectability but also the reliability. That is, a tested sequence that does not satisfy the randomness hypothesis has a higher probability of being rejected by the improved test, and a sequence that satisfies the hypothesis has a higher probability of passing it. The experimental results on several deterministic RNGs indicate that the Q-value-based method is able to detect statistical flaws that the original SP 800-22 suite cannot under the same test parameters.
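
    For background, a conventional second-level test checks that the first-level P-values are uniform on [0, 1), typically via a chi-square test over ten bins. The sketch below shows this standard P-value-based procedure that the paper improves upon (our illustration of the conventional method, not the proposed Q-value variant):

    import random
    from scipy.stats import chisquare

    def second_level_uniformity(p_values, bins=10, alpha=0.0001):
        """Standard second-level test: under the randomness hypothesis the
        first-level P-values are uniform on [0, 1), so bin them and run a
        chi-square goodness-of-fit test against the uniform distribution."""
        counts = [0] * bins
        for p in p_values:
            counts[min(int(p * bins), bins - 1)] += 1
        _, p_t = chisquare(counts)  # uniform expectation is the default
        return p_t >= alpha         # pass iff uniformity is not rejected

    # Example: 1000 first-level P-values from one of the 15 test items.
    ok = second_level_uniformity([random.random() for _ in range(1000)])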

  • Nonlinear Invariant Attack – Practical Attack on Full SCREAM, iSCREAM, and Midori64
    Yosuke Todo; Gregor Leander; Yu Sasaki
    NTT Secure Platform Laboratories; Ruhr University Bochum; NTT Secure Platform Laboratories
  • Abstract:

    In this paper we introduce a new type of attack, called the nonlinear invariant attack. As application examples, we present new attacks that are able to distinguish the full versions of the (tweakable) block ciphers Scream, iScream and Midori64 in a weak-key setting. These attacks require only a handful of plaintext-ciphertext pairs and have minimal computational costs. Moreover, the nonlinear invariant attack on the underlying (tweakable) block cipher can be extended to a ciphertext-only attack in well-known modes of operation such as CBC or CTR. The plaintext of the authenticated encryption schemes SCREAM and iSCREAM can be practically recovered from the ciphertexts alone in the nonce-respecting setting. This is the first result breaking a security claim of SCREAM. Moreover, the plaintext of Midori64 used in well-known modes of operation can practically be recovered. All of our attacks are experimentally verified.

  • Statistical Fault Attacks on Nonce-Based Authenticated Encryption Schemes
    Christoph Dobraunig; Maria Eichlseder; Thomas Korak; Victor Lomne; Florian Mendel
    Graz University of Technology, Austria; Graz University of Technology, Austria; Graz University of Technology, Austria; ANSSI, Paris, France; Graz University of Technology, Austria
  • Abstract:

    Since the first demonstration of fault attacks by Boneh et al. on RSA, a multitude of fault attack techniques on various cryptosystems have been proposed. Most of these techniques, like Differential Fault Analysis, Safe Error Attacks, and Collision Fault Analysis, require processing two inputs that are either identical or related, in order to generate pairs of correct and faulty ciphertexts. However, when targeting authenticated encryption schemes, this is in practice usually precluded by the unique nonce required by most of these schemes.

    In this work, we present the first practical fault attacks on several nonce-based authenticated encryption modes for AES. This includes attacks on the ISO/IEC standards GCM, CCM, EAX, and OCB, as well as several second-round candidates of the ongoing CAESAR competition. All attacks are based on the Statistical Fault Attacks by Fuhr et al., which use a biased fault model and operate only on collections of faulty ciphertexts. In doing so, we put effort into reducing the assumptions made about the capabilities of an attacker as much as possible. In the attacks, we only assume that we are able to influence some byte (or a larger structure) of the internal AES state before the last application of MixColumns, so that the value of this byte is afterwards non-uniformly distributed.

    In order to show the practical relevance of Statistical Fault Attacks and for evaluating our assumptions on the capabilities of an attacker, we perform several fault-injection experiments targeting real hardware. For instance, laser fault injections targeting an AES co-processor of a smartcard microcontroller, which is used to implement modes like GCM or CCM, show that 4 bytes (resp. all 16 bytes) of the last round key can be revealed with a small number of faulty ciphertexts.

  • Towards Practical Whitebox Cryptography: Optimizing Efficiency and Space Hardness
    Andrey Bogdanov; Takanori Isobe; Elmar Tischhauser
    Technical University of Denmark; Sony Corporation; Technical University of Denmark
  • Abstract:

    Whitebox cryptography aims to provide security for cryptographic algorithms in an untrusted environment where the adversary has full access to their implementation. Typical security goals for whitebox cryptography include key extraction security and decomposition security: Indeed, it should be infeasible to recover the secret key from the implementation and it should be hard to decompose the implementation by finding a more compact representation without recovering the secret key, which mitigates code lifting.

    Whereas all published whitebox implementations for standard cryptographic algorithms such as DES or AES are prone to practical key extraction attacks, there have been two dedicated design approaches for whitebox block ciphers: ASASA by Biryukov et al. at ASIACRYPT’14 and SPACE by Bogdanov and Isobe at CCS’15. While ASASA suffers from decomposition attacks, SPACE reduces the security against key extraction and decomposition attacks in the white box to the security of a standard block cipher such as AES in the standard blackbox setting. However, due to its security-prioritized design strategy, SPACE imposes a sometimes prohibitive performance overhead in the real world, as it needs many AES calls to encrypt a single block.

    In this paper, we address this issue by designing a family of dedicated whitebox block ciphers, SPNbox, and a family of underlying small block ciphers, with software efficiency and constant-time execution in mind. While still relying on standard blackbox block cipher security for resistance against key extraction and decomposition, SPNbox attains speed-ups of up to 6.5 times in the black box and up to 18 times in the white box on Intel Skylake and ARMv8 CPUs, compared to SPACE. The designs allow for constant-time implementations in the blackbox setting and meet the practical requirements of whitebox cryptography in real-world applications such as DRM or mobile payments. Moreover, we formalize resistance to decomposition in the form of weak and strong space hardness at various security levels, and obtain bounds on space hardness in all those adversarial models.

    Thus, for the first time, SPNbox provides a practical whitebox block cipher that features well-understood key extraction security, rigorous analysis towards decomposition security, demonstrated real-world efficiency on various platforms and constant-time implementations. This paves the way to enhancing susceptible real-world applications with whitebox cryptography.

  • How to Circumvent the Two-Ciphertext Lower Bound for Linear Garbling Schemes
    Carmen Kempka; Ryo Kikuchi; Koutarou Suzuki
    NTT Secure Platform Laboratories
  • Abstract:

    At EUROCRYPT 2015, Zahur et al. argued that all linear, and thus efficient, garbling schemes need at least two $k$-bit elements to garble an AND gate with security parameter $k$. We show how to circumvent this lower bound, and propose an efficient garbling scheme which requires less than two $k$-bit elements per AND gate for most circuit layouts. Our construction slightly deviates from the linear garbling model and does not contradict any claims in the lower-bound proof. With our proof-of-concept construction, we hope to spur new ideas for more practical garbling schemes.

    Our construction can directly be applied to semi-private function evaluation by garbling XOR, XNOR, NAND, OR, NOR and AND gates in the same way, and keeping the evaluator oblivious of the gate function.

  • Optimization of LPN Solving Algorithms
    Sonia Bogos; Serge Vaudenay
    EPFL
  • Abstract:

    In this article we focus on constructing an algorithm that automates the generation of LPN solving algorithms from the considered parameters.
    When searching for an algorithm to solve an LPN instance, we make use of the existing techniques and optimize their use. We formalize an LPN algorithm as a path in a graph G, and our algorithm searches for the optimal paths in this graph. Our results bring improvements over the existing work; i.e., we improve the covering-code results from ASIACRYPT’14 and EUROCRYPT’16. Furthermore, we propose concrete practical codes and a method to find good codes.

  • From 5-pass MQ-based identification to MQ-based signatures
    Ming-Shing Chen; Andreas Hulsing; Joost Rijneveld; Simona Samardjiska; Peter Schwabe
    Academia Sinica; Eindhoven University of Technology; Radboud University; SS Cyril and Methodius University; Radboud University
  • Abstract:

    This paper presents MQDSS, the first signature scheme with a security reduction based on the problem of solving a multivariate system of quadratic equations (MQ problem). In order to construct this scheme we give a new security reduction for the Fiat-Shamir transform from a large class of 5-pass identification schemes and show that a previous attempt from the literature to obtain such a proof does not achieve the desired goal. We give concrete parameters for MQDSS and provide a detailed security analysis showing that the resulting instantiation MQDSS-31-64 achieves 128 bits of post-quantum security. Finally, we describe an optimized implementation of MQDSS-31-64 for recent Intel processors with full protection against timing attacks and report benchmarks of this implementation.

  • Characterisation and Estimation of the Key Rank Distribution in the Context of Side Channel Evaluations
    Dan Martin; Luke Mather; Elisabeth Oswald; Martijn Stam
    University of Bristol
  • Abstract:

    Quantifying the side channel security of implementations has been a significant research question for several years, in academia but also among real-world side channel practitioners. As part of security evaluations, efficient key rank estimation algorithms were devised, which, in contrast to analyses based on subkey recovery, give a holistic picture of the security level after a side channel attack. However, it has been observed that the outcomes of rank estimations show a huge spread in precisely the range of key ranks where enumeration could lead to key recovery. These observations raise the question of whether this is due to insufficient rank estimation procedures or is an inherent property of the key rank. Furthermore, if it were inherent, how could key rank outcomes be translated into practically meaningful figures, suitable for analysing the risk that real-world side channel attacks pose? This paper is a direct response to these questions. We experimentally identify the key rank distribution and show that it is independent of different distinguishers and signal-to-noise ratios. Then we offer a theoretical explanation for the observed key rank distribution and determine how many samples thereof are required for a robust estimation of some key parameters. We discuss how this can be naturally integrated into real-world side channel evaluation practices. We conclude our research by connecting non-parametric order statistics, in particular percentiles, in a practically meaningful way with business goals.

  • Constant-Round Asynchronous Multi-Party Computation Based on One-Way Functions
    Sandro Coretti; Juan A. Garay; Martin Hirt; Vassilis Zikas
    ETH Zurich; Yahoo Research; ETH Zurich; RPI
  • Abstract:

    Secure multi-party computation (MPC) allows several mutually distrustful parties to securely compute a joint function of their inputs and exists in two main variants: in synchronous MPC, parties are connected by a synchronous network with a global clock, and protocols proceed in rounds with strong delivery guarantees, whereas asynchronous MPC protocols can be deployed even in networks that deliver messages in an arbitrary order and impose arbitrary delays on them.

    The two models—synchronous and asynchronous—have to a large extent developed in parallel with results on both feasibility and asymptotic efficiency improvements in either track. The most notable gap in this parallel development is with respect to round complexity. In particular, although under standard assumptions on a synchronous communication network (availability of secure channels and broadcast), synchronous MPC protocols with (exact) constant rounds have been constructed, to the best of our knowledge, thus far no constant-round asynchronous MPC protocols are known, with the best protocols requiring a number of rounds that is linear in the multiplicative depth of the arithmetic circuit computing the desired function.

    In this work we close this gap by providing the first constant-round asynchronous MPC protocol. Our protocol is optimally resilient (i.e., it tolerates up to t < n/3 corrupted parties), is adaptively secure, and makes black-box use of a pseudo-random function. It works under the standard network assumptions for protocols in the asynchronous setting, namely, a complete network of point-to-point (secure) asynchronous channels with eventual delivery and asynchronous Byzantine agreement (consensus). We provide formal definitions of these primitives and a proof of security in the Universal Composability framework.

  • Multi-Key Homomorphic Authenticators
    Dario Fiore; Aikaterini Mitrokotsa; Luca Nizzardo; Elena Pagnin
    IMDEA Software Institute, Madrid, Spain; Chalmers University of Technology, Gothenburg, Sweden; IMDEA Software Institute, Madrid, Spain; Chalmers University of Technology, Gothenburg, Sweden
  • Abstract:

    Homomorphic authenticators (HAs) enable a client to authenticate a large collection of data elements $m_1,…,m_t$ and outsource them, along with the corresponding authenticators, to an untrusted server. At any later point, the server can generate a short authenticator vouching for the correctness of the output y of a function $f$ computed on the outsourced data, i.e., $y = f(m_1,…,m_t)$. Recently, researchers have focused on HAs as a solution, with minimal communication and interaction, to the problem of delegating computation on outsourced data. The notion of HAs studied so far, however, only supports executions (and proofs of correctness) of computations over data authenticated by a single user. Motivated by realistic scenarios (ubiquitous computing, sensor networks, etc.) in which large datasets include data provided by multiple users, we study the concept of multi-key homomorphic authenticators. In a nutshell, multi-key HAs are like HAs with the extra feature of allowing the holder of public evaluation keys to compute on data authenticated under different secret keys. In this paper, we first introduce and formally define multi-key HAs. Secondly, we propose a construction of a multi-key homomorphic signature based on standard lattices and supporting the evaluation of circuits of bounded polynomial depth. Thirdly, we provide a construction of multi-key homomorphic MACs based only on pseudorandom functions and supporting the evaluation of low-degree arithmetic circuits. Albeit less expressive and only secretly verifiable, the latter construction presents interesting efficiency properties.

  • Reverse Cycle Walking and Its Applications
    Sarah Miracle; Scott Yilek
    University of St. Thomas
  • Abstract:

    We study the problem of constructing a block-cipher on a “possibly-strange” set S using a block-cipher on a larger set T. Such constructions are useful in format-preserving encryption, where for example the set S might contain “valid 9-digit social security numbers” while T might be the set of 30-bit strings. Previous work has solved this problem using a technique called cycle walking, first formally analyzed by Black and Rogaway. Assuming the size of S is a constant fraction of the size of T, cycle walking allows one to encipher a point x in S by applying the block-cipher on T a small expected number of times, and O(N) times in the worst case, where N = |T|, without any degradation in security. We introduce an alternative to cycle walking that we call reverse cycle walking, which lowers the worst-case number of times we must apply the block-cipher on T from O(N) to O(log N). Additionally, when the underlying block-cipher on T is secure against q = (1-e)N adversarial queries, we show that applying reverse cycle walking gives us a cipher on S secure even if the adversary is allowed to query all of the domain points. Such fully-secure ciphers have been the target of numerous recent papers.
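
    For reference, classical cycle walking is simple to state in code. The following toy sketch uses a hypothetical permutation on T = {0, ..., 2^30 - 1} restricted to S = valid 9-digit values (our illustration of the classical technique; the paper’s reverse cycle walking proceeds differently):

    def toy_permutation(x, key, bits=30):
        """Stand-in for a secure block cipher on T = {0, ..., 2^bits - 1};
        a real instantiation would use an actual small-domain cipher."""
        mask = (1 << bits) - 1
        for r in range(4):  # a few rounds of invertible ARX-style mixing
            x = (x * 0x9E3779B1 + key + r) & mask  # odd multiplier: bijective
            x ^= x >> 13                           # xorshift: bijective
        return x

    def cycle_walk_encipher(x, key, size_s=10**9):
        """Classical cycle walking: encipher x in S = {0, ..., size_s - 1}
        by iterating the cipher on the larger set T until the output lands
        back in S. The expected number of iterations is |T| / |S|, but the
        worst case is O(|T|) -- the cost that reverse cycle walking reduces
        to O(log |T|)."""
        y = toy_permutation(x, key)
        while y >= size_s:              # not a valid 9-digit value: keep going
            y = toy_permutation(y, key)
        return y

    # Example: encipher a social-security-number-like value.
    c = cycle_walk_encipher(123456789, key=0xDEADBEEF)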

  • Efficient Public-Key Distance Bounding Protocol
    Handan Kılınç; Serge Vaudenay
    EPFL
  • Abstract:

    Distance bounding protocols are becoming more and more important, because they are the most accurate solution to defeat relay attacks. They involve two parties: a verifier and a prover. The prover shows that (s)he is close enough to the verifier. In some applications, such as payment systems, using public-key distance bounding protocols is practical, as no pre-shared secret is necessary between the payer and the payee. However, public-key cryptography requires many more computations than symmetric-key cryptography. In this work, we focus on the efficiency problem in public-key distance bounding protocols and on their formal security proofs. We construct two protocols (one without privacy, one with) which require fewer computations on the prover side compared to the existing protocols, while keeping the highest security level. Our construction is generic, based on a key agreement model. It can be instantiated with only one and three elliptic-curve computations on the prover side for the two protocols, respectively. We prove the security of our constructions formally and in detail.

  • Side-Channel Analysis Protection and Low-Latency in Action – Case Study of PRINCE and Midori
    Amir Moradi; Tobias Schneider
    Ruhr University Bochum, Germany
  • Abstract:

    During the last years, the industry sector has shown particular interest in solutions which allow encrypting and decrypting data within one clock cycle. Known as low-latency cryptography, such ciphers are desirable for pervasive applications with real-time security requirements. On the other hand, pervasive applications are very likely under the control of the end user, and may operate in a hostile environment. Hence, in such scenarios it is necessary to provide security against side-channel analysis (SCA) attacks while still keeping the low-latency feature.
    Since the single-clock-cycle concept requires an implementation in a fully-unrolled fashion, the application of masking schemes – the most widely studied countermeasure – is not straightforward. The contribution of this work is to present and discuss the difficulties and challenges that hardware engineers face when integrating SCA countermeasures into low-latency constructions. In addition to several design architectures, practical evaluations, and discussions of the problems and potential solutions with respect to the case study PRINCE (also compared with Midori), the final message of this paper is a couple of suggestions for future low-latency designs to – hopefully – ease the integration of SCA countermeasures.

  • Design Strategies for ARX with Provable Bounds: SPARX and LAX
    Daniel Dinu; Leo Perrin; Aleksei Udovenko; Vesselin Velichkov; Johann Großschadl; Alex Biryukov
    SnT, University of Luxembourg
  • Abstract:

    We present, for the first time, a general strategy for designing ARX symmetric-key primitives with provable resistance against single-trail differential and linear cryptanalysis. The latter has been a long-standing open problem in the area of ARX design. The {\it wide-trail design strategy} (WTS), which is at the basis of many S-box based ciphers, including the AES, is not suitable for ARX designs due to the lack of S-boxes in the latter. In this paper we address this limitation by proposing the \emph{long trail design strategy} (LTS) — a dual of the WTS that is applicable (but not limited) to ARX constructions. In contrast to the WTS, which prescribes the use of small and efficient S-boxes at the expense of heavy linear layers with strong mixing properties, the LTS advocates the use of large (ARX-based) S-boxes together with sparse linear layers. With the help of the so-called \textit{long-trail argument}, a designer can bound the maximum differential and linear probabilities for any number of rounds of a cipher built according to the LTS.

    To illustrate the effectiveness of the new strategy, we propose SPARX — a family of ARX-based block ciphers designed according to the LTS. \textsc{Sparx} has 32-bit ARX-based S-boxes and has provable bounds against differential and linear cryptanalysis. In addition, SPARX is very efficient on a number of embedded platforms. Its optimized software implementation ranks in the top 6 of the most software-efficient ciphers along with \textsc{Simon}, \textsc{Speck}, Chaskey, LEA and RECTANGLE.

    As a second contribution we propose another strategy for designing ARX ciphers with provable properties, that is completely independent of the LTS. It is motivated by a challenge proposed earlier by Wall{\'{e}}n and uses the differential properties of modular addition to minimize the maximum differential probability across multiple rounds of a cipher. A new primitive, called LAX, is designed following those principles. LAX partly solves the Wall{\'{e}}n challenge.

  • Unknown-Input Attacks in the Parallel Setting: Improving the Security and Performances of the CHES 2012 Leakage-Resilient PRF
    Marcel Medwed; Francois-Xavier Standaert; Ventzislav Nikov; Martin Feldhofer
    NXP, Austria; UCL, Belgium; NXP, Belgium; NXP, Austria
  • Abstract:

    In this work we present a leakage-resilient PRF which makes use of parallel block cipher implementations with unknown inputs. To the best of our knowledge, this is the first work to study and exploit unknown inputs as a form of key-dependent algorithmic noise. It turns out that such noise renders the problem of side-channel key recovery intractable under very mild and easily satisfiable assumptions. That is, the construction stays secure even in a noise-free setting, independent of the number of traces and of the power model used. The contributions of this paper are as follows. First, we present a PRF construction which offers attractive security properties, even when instantiated with the AES. Second, we study the effect of unknown-input attacks in parallel implementations. We put forward their intractability and explain it by studying the inevitable model errors obtained when building templates in such a scenario. Third, we compare the security of our construction to the CHES 2012 one and show that it is superior in many ways. That is, a standard block cipher can be used, the security holds for all intermediate variables, and it can even partially tolerate local EM attacks and some typical implementation mistakes or hardware deficiencies. Finally, we discuss the performance of a standard-cell implementation.

  • MiMC: Efficient Encryption and Cryptographic Hashing with Minimal Multiplicative Complexity
    Martin R. Albrecht; Lorenzo Grassi; Christian Rechberger; Arnab Roy; Tyge Tiessen
    Royal Holloway, University of London; TU Graz; TU Graz; DTU; DTU
  • Abstract:

    We explore cryptographic primitives with low multiplicative complexity. This is motivated by recent progress in practical applications of secure multi-party computation (MPC), fully homomorphic encryption (FHE) and zero-knowledge proofs (ZK), where primitives from symmetric cryptography are needed and where linear computations are, compared to non-linear operations, essentially “free”. Starting with the cipher design strategy “LowMC” from Eurocrypt 2015, a number of bit-oriented proposals have been put forward, focusing on applications where the multiplicative depth of the circuit describing the cipher is the most important optimization goal.

    Surprisingly, although many MPC/FHE/ZK protocols natively support operations in \GF{p} for large $p$, very few candidates exist, even considering all of symmetric cryptography, which natively work in such fields. To that end, our proposal for both block ciphers and cryptographic hash functions is to reconsider and simplify the round function of the Knudsen-Nyberg cipher from 1995. The mapping $F(x) := x^3$ is used as the main component there, and it is also the main component of our family of proposals, called “MiMC”. We study various attack vectors for this construction and also give a new attack vector that outperforms others in relevant settings.
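
    To make the simplicity concrete, here is a minimal sketch of an $x^3$-based keyed permutation over \GF{p} in the spirit of MiMC (our simplification with hypothetical parameters; the real MiMC fixes the field, derives the round count from its security analysis, and specifies the round constants):

    import math

    P = 2**64 - 59                   # hypothetical prime field modulus
    assert math.gcd(3, P - 1) == 1   # ensures x -> x^3 is a bijection mod P
    N_ROUNDS = 42                    # hypothetical; chosen by security analysis
    CONSTANTS = [pow(7, i, P) for i in range(N_ROUNDS)]  # hypothetical constants

    def mimc_style_encrypt(x, key):
        """Each round adds the key and a round constant, then cubes: the
        whole cipher is built from the single component F(x) = x^3."""
        for c in CONSTANTS:
            x = pow((x + key + c) % P, 3, P)
        return (x + key) % P         # final key addition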

    Due to its very low number of multiplications, the design lends itself well to a large class of new applications, especially those where the depth does not matter but the total number of multiplications in the circuit dominates all aspects of the implementation. With a number of rounds which we deem secure based on our security analysis, we report on significant performance improvements in a representative use-case involving SNARKs.
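    To make the design idea concrete, here is a toy Python sketch of a MiMC-like permutation: each round adds the key and a round constant, then cubes over $\mathrm{GF}(p)$. The prime, round count, and constants below are illustrative placeholders (chosen so that $\gcd(3, p-1) = 1$, which makes cubing a permutation), not the parameters proposed in the paper.

    ```python
    import math, random

    p = 999983                 # toy prime with p % 3 == 2, so x -> x^3 permutes GF(p)
    assert math.gcd(3, p - 1) == 1
    ROUNDS = 16                # toy value; the paper derives the round count from p
    random.seed(42)
    CONSTS = [0] + [random.randrange(p) for _ in range(ROUNDS - 1)]

    def mimc_encrypt(x: int, k: int) -> int:
        for c in CONSTS:
            x = pow((x + k + c) % p, 3, p)     # one round: add key/constant, cube
        return (x + k) % p                     # final key addition

    def mimc_decrypt(y: int, k: int) -> int:
        d = pow(3, -1, p - 1)                  # cube-root exponent, valid since gcd(3, p-1) = 1
        y = (y - k) % p
        for c in reversed(CONSTS):
            y = (pow(y, d, p) - k - c) % p     # invert the cube, undo the additions
        return y

    m, k = 123456, 424242
    assert mimc_decrypt(mimc_encrypt(m, k), k) == m
    ```

    Note how decryption must take cube roots, which is far more expensive than cubing; this asymmetry is one reason such designs target applications that only evaluate the forward direction inside MPC/FHE/ZK circuits.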

  • Reactive Garbling: Foundation, Instantiation, Application
    Jesper Buus Nielsen; Samuel Ranellucci
    Aarhus University; Aarhus University
  • Abstract:

    Garbled circuits are a cryptographic technique which has been used, among other things, for the construction of two- and three-party secure computation, private function evaluation and secure outsourcing. Garbling schemes are a primitive which formalizes the syntax and security properties of garbled circuits.

    We define a generalization of garbling schemes called \emph{reactive garbling schemes}. We consider functions and garbled functions taking multiple inputs and giving multiple outputs. Two garbled functions can be linked together: an encoded output of one garbled function can be transformed into an encoded input of the other garbled function without communication between the parties. Reactive garbling schemes also allow partial evaluation of garbled functions even when only some of the encoded inputs are provided. It is possible to further evaluate the linked garbled functions when more garbled inputs become available, and to later garble more functions and link them to the ongoing garbled evaluation.

    We provide rigorous definitions for reactive garbling schemes and define a new notion of security for them called confidentiality. We provide both simulation-based and indistinguishability-based notions of security, and show that the simulation-based notion implies the indistinguishability-based one.

    We present an instantiation of reactive garbling schemes. We present an application of reactive garbling schemes to reactive two-party computation secure against a malicious, static adversary. We demonstrate how reactive garbling schemes can be used to give abstract black-box descriptions and proofs of several advanced applications of garbled circuits in the literature, including Minilego and Lindell’s forge-and-lose technique.

  • Cliptography: Clipping the Power of Kleptographic Attacks
    Alexander Russell; Qiang Tang; Moti Yung; Hong-Sheng Zhou
    University of Connecticut; Cornell University; Columbia University; Virginia Commonwealth University
  • Abstract:

    Kleptography, introduced 20 years ago by Young and Yung [Crypto ’96], considers the (in)security of malicious implementations (or instantiations) of standard cryptographic primitives that embed a “backdoor” into the system. Remarkably, crippling subliminal attacks are possible even if the subverted cryptosystem produces output indistinguishable from a truly secure “reference implementation.” Bellare, Paterson, and Rogaway [Crypto ’14] recently initiated a formal study of such attacks on symmetric key encryption algorithms, demonstrating that a kleptographic attack can be mounted in broad generality against randomized components of cryptographic systems.

    We enlarge the scope of current work on the problem by permitting adversarial subversion of (randomized) key generation; in particular, we initiate the study of cryptography in the \emph{complete subversion model}, where \emph{all} relevant cryptographic primitives are subject to kleptographic attacks. We construct secure one-way permutations and trapdoor one-way permutations in this “complete subversion” model, describing a general, rigorous immunization strategy to clip the power of kleptographic subversions. Our strategy can be viewed as a formal treatment of the folklore “nothing up my sleeve” wisdom in cryptographic practice. We also describe a related “split program” model that can directly inform practical deployment. We additionally apply our general immunization strategy to directly yield a backdoor-free PRG. This notably amplifies previous results of Dodis, Ganesh, Golovnev, Juels, and Ristenpart [Eurocrypt ’15], which require an honestly generated random key.

    We then examine two standard applications of (trapdoor) one-way permutations in this complete subversion model and construct “higher level” primitives via black-box reductions. We showcase a digital signature scheme that preserves existential unforgeability when {\em all} algorithms (including key generation, which was not considered to be under attack before) are subject to kleptographic attacks. Additionally, we demonstrate that the classic Blum–Micali pseudorandom generator (PRG), using an “immunized” one-way permutation, yields a backdoor-free PRG.

    Alongside development of these secure primitives, we set down a hierarchy of kleptographic attack models which we use to organize past results and our new contributions; this taxonomy may be valuable for future work.
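    As a reference point for the Blum–Micali construction mentioned above, here is a minimal Python sketch of the classic (unimmunized) PRG: iterate the conjectured one-way permutation $x \mapsto g^x \bmod p$ and output one hardcore bit per step. The tiny safe prime is a toy chosen for illustration and offers no security; the paper's contribution, immunizing the underlying permutation, is not modeled here.

    ```python
    p = 227    # toy safe prime (227 = 2*113 + 1); 2 is a primitive root mod 227
    g = 2

    def blum_micali(seed: int, nbits: int) -> list:
        x = seed % (p - 1) or 1
        out = []
        for _ in range(nbits):
            out.append(1 if x < (p - 1) // 2 else 0)   # hardcore predicate: "x lies in the lower half"
            x = pow(g, x, p)                           # apply the one-way permutation
        return out

    print(blum_micali(seed=101, nbits=16))
    ```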

  • Collapse-binding quantum commitments without random oracles
    Dominique Unruh
    University of Tartu
  • Abstract:

    We construct collapse-binding commitments in the standard model. Collapse-binding commitments were introduced in (Unruh, Eurocrypt 2016) to model the computational-binding property of commitments against quantum adversaries, but only constructions in the random oracle model were known.

    Furthermore, we show that collapse-binding commitments imply selected other security definitions for quantum commitments, answering an open question from (Unruh, Eurocrypt 2016).

  • How to Build Fully Secure Tweakable Blockciphers from Classical Blockciphers
    Lei Wang; Jian Guo; Guoyan Zhang; Jingyuan Zhao; Dawu Gu
    Shanghai Jiao Tong University, China; Nanyang Technological University, Singapore; Shandong University, China; Institute of Information Engineering, Chinese Academy of Sciences, China; Shanghai Jiao Tong University, China
  • Abstract:

    This paper focuses on building a tweakable blockcipher from a classical blockcipher whose input and output wires all have a size of $n$ bits. The main goal is to achieve full $2^n$ security. Such a tweakable blockcipher was proposed by Mennink at FSE’15, and to the best of our knowledge it is also the only tweakable blockcipher so far to claim full $2^n$ security. However, we find a key-recovery attack on Mennink’s proposal (in the proceedings version) with a complexity of about $2^{n/2}$ adversarial queries. The attack demonstrates that Mennink’s proposal has at most $2^{n/2}$ security, and therefore invalidates its security claim. In this paper, we study a construction of tweakable blockciphers, denoted $\widetilde{E}[s]$, that is built on $s$ invocations of a blockcipher and additional simple XOR operations. Since, as proven in previous work, at least two invocations of a blockcipher with linear mixing are necessary to possibly bypass the birthday-bound barrier of $2^{n/2}$ security, we carry out an investigation of the instances of $\widetilde{E}[s]$ with $s \ge 2$, and find $32$ highly efficient tweakable blockciphers $\widetilde{E1}$, $\widetilde{E2}$, $\ldots$, $\widetilde{E32}$ that achieve $2^n$ provable security. Each of these tweakable blockciphers uses two invocations of a blockcipher, one of which uses a tweak-dependent key generated by XORing the tweak to the key (or to a secret subkey derived from the key). We point out that the provable security of these tweakable blockciphers is obtained in the ideal blockcipher model due to the usage of the tweak-dependent key.
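    For intuition, here is a hedged Python sketch of the general shape described above: two calls to an ordinary blockcipher $E$, one of them keyed with the tweak-dependent key $k \oplus t$, combined with XOR whitening. This is only an illustration of the shape and is emphatically not one of the paper's $32$ proven constructions; the toy four-round Feistel standing in for $E$ is likewise just a placeholder.

    ```python
    import hashlib

    N = 16  # block, key, and tweak size in bytes (n = 128 bits)

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def E(k: bytes, m: bytes) -> bytes:
        # Toy keyed permutation: a 4-round Feistel over two 8-byte halves.
        l, r = m[:8], m[8:]
        for i in range(4):
            f = hashlib.sha256(bytes([i]) + k + r).digest()[:8]
            l, r = r, xor(l, f)
        return l + r

    def tweakable_E(k: bytes, t: bytes, m: bytes) -> bytes:
        delta = E(xor(k, t), t)                    # mask derived under the tweak-dependent key
        return xor(E(k, xor(m, delta)), delta)     # XEX-style whitening around the second call

    k, t = b"K" * N, b"T" * N
    c = tweakable_E(k, t, b"sixteen byte msg")
    ```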

  • Taylor Expansion of Maximum Likelihood Attacks for Masked and Shuffled Implementations
    Nicolas Bruneau; Sylvain Guilley; Annelie Heuser; Olivier Rioul; Francois-Xavier Standaert; Yannick Teglia
    Telecom ParisTech, STMicroelectronics; Telecom ParisTech; Telecom ParisTech; Telecom ParisTech; Universite catholique de Louvain; STMicroelectronics
  • Abstract:

    The maximum likelihood side-channel distinguisher of a template attack scenario is expanded into lower degree attacks according to the increasing powers of the signal-to-noise ratio (SNR).
    By exploiting this decomposition we show that it is possible to build highly multivariate attacks which remain efficient when the likelihood cannot be computed in practice due to its computational complexity.
    The shuffled table recomputation is used as an illustration to derive a new attack which outperforms the ones presented by Bruneau et al. at CHES 2015 across the full range of SNRs.
    This attack combines two attack degrees and is able to exploit high-dimensional leakage, which explains its efficiency.

  • Universal Forgery and Key Recovery Attacks on ELmD Authenticated Encryption Algorithm
    Asli Bay; Oguzhan Ersoy; Ferhat Karakoc
    Tubitak Bilgem; Tubitak Bilgem, Bogazici University; Tubitak Bilgem
  • Abstract:

    As one of the second-round CAESAR candidates, ELmD is claimed to provide misuse resistance against forgeries and security against block-wise adaptive adversaries, as well as 128-bit security against key recovery attacks.
    We scrutinize ELmD in such a way that we provide universal forgery attacks as well as key recovery attacks. First, based on the collision attacks on similar structures such as Marble, AEZ, and COPA, we present universal forgery attacks. Second, by exploiting the structure of ELmD, we acquire the ability to query the block cipher used in ELmD. Finally, for one of the proposed versions of ELmD, we mount key recovery attacks reducing the effective key strength by more than 60 bits.

  • Deja Q All Over Again: Tighter and Broader Reductions of q-Type Assumptions
    Melissa Chase; Mary Maller; Sarah Meiklejohn
    Microsoft Research Redmond; University College London; University College London
  • Abstract:

    In this paper, we demonstrate that various cryptographic constructions (including ones for broadcast, attribute-based, and hierarchical identity-based encryption) can rely for security on only the static subgroup hiding assumption when instantiated in composite-order bilinear groups, as opposed to the dynamic q-type assumptions on which their security was previously based. This specific goal is accomplished by more generally extending the recent Deja Q framework (Chase and Meiklejohn, Eurocrypt 2014) in two main directions. First, by teasing out common properties of existing reductions, we expand the q-type assumptions that can be covered by the framework; i.e., we demonstrate broader classes of assumptions that can be reduced to subgroup hiding. Second, while the original framework applied only to asymmetric composite-order bilinear groups, we provide a reduction to subgroup hiding that works in symmetric (as well as asymmetric) composite-order groups. As a bonus, our new reduction achieves a tightness of log(q) rather than q.

  • Indistinguishable Proofs of Work or Knowledge
    Foteini Baldimtsi; Aggelos Kiayias; Thomas Zacharias; Bingsheng Zhang
  • Abstract:

    We introduce a new class of protocols called Proofs of Work or Knowledge (PoWorKs). In a PoWorK, a prover can convince a verifier that she has either performed work or that she possesses knowledge of a witness to a public statement, without the verifier being able to distinguish which of the two has taken place. We formalize PoWorK in terms of three properties: completeness, $f$-soundness and indistinguishability (where $f$ is a function that determines the tightness of the proof of work aspect), and present a construction that transforms 3-move HVZK protocols into 3-move public-coin PoWorKs. To formalize the work aspect in a PoWorK protocol we define cryptographic puzzles that adhere to certain uniformity conditions, which may also be of independent interest. We instantiate our puzzles in the random oracle (RO) model as well as via constructing “dense” versions of suitably hard one-way functions.
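    As a point of reference for the puzzle component, here is a minimal Python sketch of a hashcash-style puzzle in the RO model: a solution is a nonce making the hash start with a given number of zero bits. This is only a generic illustration; the paper's puzzles must additionally satisfy uniformity (density) conditions that this sketch does not model.

    ```python
    import hashlib, itertools

    def H(statement: bytes, nonce: int) -> int:
        return int.from_bytes(hashlib.sha256(statement + nonce.to_bytes(8, "big")).digest(), "big")

    def solve(statement: bytes, difficulty: int) -> int:
        # Expected ~2^difficulty hash evaluations: this is the "work".
        for nonce in itertools.count():
            if H(statement, nonce) >> (256 - difficulty) == 0:
                return nonce

    def verify(statement: bytes, nonce: int, difficulty: int) -> bool:
        return H(statement, nonce) >> (256 - difficulty) == 0

    n = solve(b"powork-demo", difficulty=16)
    assert verify(b"powork-demo", n, 16)
    ```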

    We then showcase PoWorK protocols by presenting a number of applications. We first show how non-interactive PoWorKs can be used to reduce spam email by forcing users sending an email to either prove to the mail server that they are approved contacts of the recipient or to perform computational work. As opposed to previous approaches that applied proofs of work to this problem, our proposal of using PoWorKs is privacy-preserving, as it hides the list of the receiver’s approved contacts from the mail server. Our second application shows how PoWorK can be used to compose cryptocurrencies that are based on proofs of work (“Bitcoin-like”) with cryptocurrencies that are based on knowledge relations (these include cryptocurrencies that are based on “proof of stake”, and others). The resulting PoWorK-based cryptocurrency inherits the robustness properties of the underlying two systems, while PoWorK-indistinguishability ensures a uniform population of miners. Finally, we show that PoWorK protocols imply straight-line quasi-polynomial simulatable arguments of knowledge, and based on our construction we obtain an efficient straight-line concurrent 3-move statistically quasi-polynomial simulatable argument of knowledge.

  • Efficient Public-Key Cryptography with Bounded Leakage and Tamper Resilience
    Antonio Faonio; Daniele Venturi
    Aarhus University; University of Trento
  • Abstract:

    We revisit the question of constructing public-key encryption and signature schemes with security in the presence of bounded leakage and memory tampering attacks. For signatures we obtain the first construction in the standard model; for public-key encryption we obtain the first construction free of pairings (avoiding non-interactive zero-knowledge proofs). Our constructions are based on generic building blocks and, as we show, also admit efficient instantiations under fairly standard number-theoretic assumptions.

    The model of bounded tamper resistance was recently put forward by Damgård et al. (Asiacrypt 2013) as an attractive path to achieve security against arbitrary memory tampering attacks without making hardware assumptions (such as the existence of a protected self-destruct or key-update mechanism), the only restriction being on the number of allowed tampering attempts (which is a parameter of the scheme). This makes it possible to circumvent known impossibility results for unrestricted tampering (Gennaro et al., TCC 2010), while still being able to capture realistic tampering attacks.

  • Partitioning via Non-Linear Polynomial Functions: More Compact IBEs from Ideal Lattices and Bilinear Maps
    Shuichi Katsumata; Shota Yamada
    The University of Tokyo; AIST
  • Abstract:

    In this paper, we present new adaptively secure identity-based encryption (IBE) schemes. One of the distinguishing properties of the schemes is that they achieve shorter public parameters than previous schemes. Both of our schemes follow the general framework presented in the recent IBE scheme of Yamada (Eurocrypt 2016), combined with novel techniques tailored to the underlying algebraic structure to overcome the difficulties arising in our specific setting. Specifically, we obtain the following:

    Our first scheme is proven secure under the ring learning with errors assumption (RLWE) and achieves the best (asymptotic) space efficiency among existing schemes from the same assumption. The main technical contribution is in our new security proof that exploits the ring structure in a crucial way. Our technique allows us to greatly weaken the underlying hardness assumption (e.g., we assume the hardness of RLWE with a fixed polynomial approximation factor whereas Yamada’s scheme requires a super-polynomial approximation factor) while improving the overall efficiency.

    Our second IBE scheme is constructed from bilinear maps and is secure under the $3$-computational bilinear Diffie-Hellman exponent assumption. This is the first IBE scheme over bilinear maps with sub-linear public parameter size that is based on the hardness of a computational/search problem, rather than a decisional problem such as DDH or DLIN.

  • How to generate and use Universal Samplers
    Dennis Hofheinz; Tibor Jager; Dakshita Khurana; Amit Sahai; Brent Waters; Mark Zhandry
    Karlsruher Institut für Technologie; Ruhr University Bochum; UCLA, Center for Encrypted Functionalities; UCLA, Center for Encrypted Functionalities; UT Austin; MIT, Princeton University
  • Abstract:

    A random oracle is an idealization that allows us to model a hash function as an oracle that will output a uniformly random string given any input. We introduce the notion of a universal sampler scheme that extends the notion of a random oracle to a method of sampling securely from arbitrary distributions.

    We describe several applications that provide a natural motivation for this notion; these include generating the trusted parameters for many schemes from just a single trusted setup. We further demonstrate the versatility of universal samplers by showing how they give rise to simple constructions of identity-based encryption and multiparty key exchange. In particular, we construct adaptively secure non-interactive multiparty key exchange in the random oracle model based on indistinguishability obfuscation, obtaining the first known construction of adaptively secure NIKE without complexity leveraging.

    We give a solution that shows how to transform any random oracle into a universal sampler scheme, based on indistinguishability obfuscation. At the heart of our construction and proof is a new technique we call “delayed backdoor programming” that we believe will have other applications.

  • NIZKs with an Untrusted CRS: Security in the Face of Parameter Subversion
    Mihir Bellare; Georg Fuchsbauer; Alessandra Scafuro
    University of California San Diego (UCSD); Inria, ENS, CNRS and PSL Research University; Boston University and Northeastern University
  • Abstract:

    Motivated by the subversion of “trusted” public parameters in mass-surveillance activities, this paper studies the security of NIZKs in the presence of a maliciously chosen common reference string. We provide definitions for subversion soundness, subversion witness indistinguishability and subversion zero knowledge. We then provide both negative and positive results, showing that certain combinations of goals are unachievable but giving protocols to achieve other combinations.

  • Faster fully homomorphic encryption: Bootstrapping in less than 0.1 seconds
    Ilaria Chillotti; Nicolas Gama; Mariya Georgieva; Malika Izabachene
    UVSQ, France; Inpher, Switzerland and UVSQ, France; Gemalto, France; CEA, List, France
  • Abstract:

    In this paper, we revisit fully homomorphic encryption (FHE) based on GSW and its ring variants. We notice that the internal product of GSW can be replaced by a simpler external product between a GSW and an LWE ciphertext.

    We show that the bootstrapping scheme FHEW of Ducas and Micciancio [DM15] can be expressed only in terms of this external product. As a result, we obtain a speed-up from less than 1 second to less than 0.1 seconds. We also reduce the 1GB bootstrapping key size to 24MB, preserving the same security levels, and we improve the noise propagation overhead by replacing exact decomposition algorithms with approximate ones.

    Moreover, our external product allows us to explain the unique asymmetry in the noise propagation of GSW samples and makes it possible to evaluate deterministic automata homomorphically, as in [GINX14], in an efficient way with a noise overhead only linear in the length of the tested word.

    Finally, we provide an alternative practical analysis of LWE-based schemes, which directly relates the security parameter to the error rate of LWE and the entropy of the LWE secret key.

  • Signature Schemes with Efficient Protocols and Dynamic Group Signatures from Lattice Assumptions
    Benoit Libert; San Ling; Fabrice Mouhartem; Khoa Nguyen; Huaxiong Wang
    Ecole Normale Supérieure de Lyon (France); Nanyang Technological University (Singapore); Ecole Normale Supérieure de Lyon (France); Nanyang Technological University (Singapore); Nanyang Technological University (Singapore)
  • Abstract:

    A recent line of work, initiated by Gordon, Katz and Vaikuntanathan (Asiacrypt 2010), gave lattice-based constructions allowing users to authenticate while remaining hidden in a crowd. Despite five years of efforts, known constructions are still limited to static sets of users, which cannot be dynamically updated. This work provides new tools enabling the design of anonymous authentication systems whereby new users can join the system at any time.

    Our first contribution is a signature scheme with efficient protocols, which allows users to obtain a signature on a committed value and subsequently prove knowledge of a signature on a committed message. This construction is well-suited to the design of anonymous credentials and group signatures. It indeed provides the first lattice-based group signature supporting dynamically growing populations of users.

    As a critical component of our group signature, we provide a simple joining mechanism for introducing new group members using our signature scheme. This technique is combined with zero-knowledge arguments allowing registered group members to prove knowledge of a secret short vector of which the corresponding public syndrome was certified by the group manager. These tools provide similar advantages to those of structure-preserving signatures in the realm of bilinear groups. Namely, they allow group members to generate their own public key without having to prove knowledge of the underlying secret key. This results in a two-message joining protocol supporting concurrent enrollments, which can be used in other settings such as group encryption.

    Our zero-knowledge arguments are presented in a unified framework where: (i) The involved statements reduce to arguing possession of a $\{-1,0,1\}$-vector $\mathbf{x}$ with a particular structure and satisfying $\mathbf{P}\cdot \mathbf{x} = \mathbf{v} \bmod q$ for some public matrix $\mathbf{P}$ and vector $\mathbf{v}$; (ii) The reduced statements can be handled using permuting techniques for Stern-like protocols. Our framework can serve as a blueprint for proving many other relations in lattice-based cryptography.
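    To make the reduced statement concrete, the following Python sketch (with toy parameters) checks the bare algebraic relation being argued: a ternary witness $\mathbf{x}$ with entries in $\{-1,0,1\}$ satisfying $\mathbf{P}\cdot \mathbf{x} = \mathbf{v} \bmod q$. The Stern-like zero-knowledge machinery itself is of course not modeled here.

    ```python
    q = 97                                   # toy modulus

    def satisfies(P, x, v):
        # Witness must be ternary, and P*x must equal v modulo q.
        assert all(e in (-1, 0, 1) for e in x)
        return all(sum(p_ij * x_j for p_ij, x_j in zip(row, x)) % q == v_i % q
                   for row, v_i in zip(P, v))

    P = [[3, 1, 4], [1, 5, 9]]
    x = [1, -1, 0]                           # the secret ternary witness
    v = [2, -4]                              # P*x over the integers, compared mod q
    assert satisfies(P, x, v)
    ```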

  • Digital Signatures Based on the Hardness of Ideal Lattice Problems in all Rings
    Vadim Lyubashevsky
    IBM Research – Zurich
  • Abstract:

    Many practical lattice-based schemes are built upon the Ring-SIS or Ring-LWE problems, which are problems that are based on the presumed difficulty of finding low-weight solutions to linear equations over polynomial rings $\Z_q[x]/\langle f\rangle$. Our belief in the asymptotic computational hardness of these problems rests in part on the fact that there are reductions showing that solving them is as hard as finding short vectors in all lattices that correspond to ideals of the polynomial ring $\Z[x]/\langle f\rangle$. These reductions, however, do not give us an indication as to the effect that the polynomial $f$, which defines the ring, has on the average-case or worst-case problems.

    As of today, no weaknesses have been found in the Ring-SIS or Ring-LWE problems when one uses an $f$ which leads to a meaningful worst-case to average-case reduction, but there have been some recent algorithms for related problems that heavily use the algebraic structure of the underlying rings. It is thus conceivable that some rings could give rise to more difficult instances of Ring-SIS and Ring-LWE than other rings. A more ideal scenario would therefore be an average-case problem, allowing for efficient cryptographic constructions, that is based on the hardness of finding short vectors in ideals of $\Z[x]/\langle f\rangle$ for \emph{every} $f$.

    In this work, we show that the above may actually be possible. We construct a digital signature scheme based (in the random oracle model) on a simple adaptation of the Ring-SIS problem which is as hard to break as worst-case problems in $\Z[x]/\langle f\rangle$ for every $f$ whose degree is bounded by the parameters of the scheme. Up to constant factors, our scheme is as efficient as the highly practical schemes that work over the ring $\Z[x]/\langle x^n+1\rangle$.
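    For readers less familiar with the notation, here is a minimal Python sketch of arithmetic in the ring $\Z_q[x]/\langle x^n+1\rangle$ mentioned above: schoolbook polynomial multiplication in which $x^n$ wraps around to $-1$ (a negacyclic convolution). The parameters are toys chosen only to illustrate the ring structure.

    ```python
    n, q = 8, 12289        # toy degree and modulus, for illustration only

    def ring_mul(a, b):
        # Multiply two degree-<n polynomials modulo x^n + 1 and modulo q.
        res = [0] * n
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                k = i + j
                if k < n:
                    res[k] = (res[k] + ai * bj) % q
                else:
                    res[k - n] = (res[k - n] - ai * bj) % q   # x^n = -1 in this ring
        return res

    one = [1] + [0] * (n - 1)
    a = [3, 1, 4, 1, 5, 9, 2, 6]
    assert ring_mul(a, one) == a          # multiplying by 1 changes nothing
    ```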

  • Selective Opening Security from Simulatable Data Encapsulation
    Felix Heuer; Bertram Poettering
    Ruhr University Bochum
  • Abstract:

    In the realm of public-key encryption, the confidentiality notion of security against selective opening (SO) attacks considers adversaries that obtain challenge ciphertexts and are allowed to adaptively open them, meaning that the corresponding message and randomness are revealed. SO security is stronger than IND-CCA and often required when formally arguing towards the security of multi-user applications. While different ways of achieving SO-secure schemes are known, they generally employ expensive asymmetric building blocks like lossy trapdoor functions or lossy encryption, so such constructions are routinely left aside by practitioners and standardization bodies. So far, formal arguments towards the SO security of schemes used in practice (e.g., for email encryption) are not known.
    In this work we shift the focus from the asymmetric to the symmetric building blocks of PKE and prove the following statement: If a PKE scheme is composed of a key encapsulation mechanism (KEM) and a blockcipher-based data encapsulation mechanism (DEM), and the DEM has specific combinatorial properties, then the PKE scheme offers SO security in the ideal cipher model. Fortunately, as we show, the required properties hold for popular modes of operation like CTR, CBC and CCM. This paper not only establishes the corresponding theoretical framework of analysis, but also contributes very concretely to practical cryptography by concluding that selective opening security is given for many real-world schemes.
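    For concreteness, here is a minimal Python sketch of a CTR-mode DEM over an abstract blockcipher $E$ (here a toy stand-in; the paper's analysis is in the ideal cipher model). The combinatorial point is that every message block is XORed with the blockcipher output on a fresh counter value.

    ```python
    import hashlib

    BLOCK = 16

    def E(k: bytes, x: bytes) -> bytes:
        # Toy stand-in for an ideal blockcipher, used here only as a keystream source.
        return hashlib.sha256(k + x).digest()[:BLOCK]

    def ctr_dem(k: bytes, iv: int, msg: bytes) -> bytes:
        out = bytearray()
        for i in range(0, len(msg), BLOCK):
            pad = E(k, (iv + i // BLOCK).to_bytes(BLOCK, "big"))   # E_k(counter)
            out += bytes(m ^ p for m, p in zip(msg[i:i + BLOCK], pad))
        return bytes(out)

    k = b"0123456789abcdef"
    ct = ctr_dem(k, iv=7, msg=b"attack at dawn")
    assert ctr_dem(k, 7, ct) == b"attack at dawn"   # in CTR mode, decryption = encryption
    ```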

  • Multi-Input Functional Encryption with Unbounded-Message Security
    Vipul Goyal; Aayush Jain; Adam O’Neill
    Microsoft Research Labs, India; UCLA, USA; Georgetown University, USA
  • Abstract:

    Multi-input functional encryption (MIFE) was introduced by Goldwasser \emph{et al.} (EUROCRYPT 2014) as a compelling extension of functional encryption. In MIFE, a receiver is able to compute a joint function of multiple, independently encrypted plaintexts. Goldwasser \emph{et al.} (EUROCRYPT 2014) show various applications of MIFE to running SQL queries over encrypted databases, computing over encrypted data streams, etc.

    The previous construction of MIFE due to Goldwasser \emph{et al.} (EUROCRYPT 2014) based on indistinguishability obfuscation had a major shortcoming: it could only support encrypting an \emph{a priori bounded} number of messages. Once that bound is exceeded, security is no longer guaranteed to hold. In addition, it could only support \emph{selective security}, meaning that the challenge messages and the set of “corrupted” encryption keys had to be declared by the adversary up-front.

    In this work, we show how to remove these restrictions by relying instead on \emph{sub-exponentially secure} indistinguishability obfuscation. This is done by carefully adapting an alternative MIFE scheme of Goldwasser \emph{et al.} that previously overcame these shortcomings (except for selective security with respect to the set of “corrupted” encryption keys) by relying on differing-inputs obfuscation, which is now seen as an implausible assumption. Our techniques are rather generic, and we hope they are useful in converting other constructions using differing-inputs obfuscation to ones using sub-exponentially secure indistinguishability obfuscation instead.

  • Zero-Knowledge Accumulators and Set Algebra
    Esha Ghosh; Olga Ohrimenko; Dimitrios Papadopoulos; Roberto Tamassia; Nikos Triandopoulos
    Brown University; Microsoft Research, Cambridge UK; University of Maryland, College Park; Brown University; Stevens Institute of Technology
  • Abstract:

    Cryptographic accumulators allow one to succinctly represent a set by an accumulation value with respect to which short (non-)membership proofs about the set can be efficiently constructed and verified. Traditionally, their security captures soundness but offers no privacy: Convincing proofs reliably encode set membership, but they may well leak information about the accumulated set.

    In this paper we put forward a strong privacy-preserving enhancement by introducing and devising zero-knowledge accumulators that additionally provide hiding guarantees: Accumulation values and proofs leak nothing about a dynamic set that evolves via element insertions/deletions. We formalize the new property using the standard real-ideal paradigm, namely demanding that an adaptive adversary with access to query/update oracles cannot tell whether he interacts with honest protocol executions or with a simulator fully ignorant of the set (even of the type of updates on it). We rigorously compare the new primitive to existing ones for privacy-preserving verification of set membership (or other relations) and derive interesting implications among related security definitions, showing that zero-knowledge accumulators offer stronger privacy than the recent related works of Naor et al. [TCC 2015] and Derler et al. [CT-RSA 2015]. We construct the first dynamic universal zero-knowledge accumulator, which we show to be perfect zero-knowledge and secure under the $q$-Strong Bilinear Diffie-Hellman assumption.

    Finally, we extend our new privacy notion and our new construction to provide privacy-preserving proofs also for an authenticated dynamic set collection, a primitive for efficiently verifying more elaborate set operations beyond set membership. We introduce a primitive that supports a zero-knowledge verifiable set algebra: Succinct proofs for union, intersection and set difference queries over a dynamically evolving collection of sets can be efficiently constructed and optimally verified, while, for the first time, they leak nothing about the collection beyond the query result.

  • From Identification to Signatures, Tightly: A Framework and Generic Transforms
    Mihir Bellare; Bertram Poettering; Douglas Stebila
    University of California San Diego; Ruhr University Bochum; McMaster University
  • Abstract:

    This paper provides a framework to treat the problem of building signature schemes from identification schemes in a unified and systematic way. The outcomes are (1) alternatives to the Fiat-Shamir transform that, applied to trapdoor identification schemes, yield signature schemes with tight security reductions to standard assumptions, and (2) an understanding and characterization of existing transforms in the literature. One of our transforms has the added advantage of producing signatures shorter than those produced by the Fiat-Shamir transform. Reduction tightness is important because it allows the implemented scheme to use small parameters (thereby being as efficient as possible) while retaining provable security.

  • Zero-Knowledge Arguments for Matrix-Vector Relations and Lattice-Based Group Encryption
    Benoit Libert; San Ling; Fabrice Mouhartem; Khoa Nguyen; Huaxiong Wang
    Ecole Normale Supérieure de Lyon, France; Nanyang Technological University, Singapore; Ecole Normale Supérieure de Lyon, France; Nanyang Technological University, Singapore; Nanyang Technological University, Singapore
  • Abstract:

    Group encryption (GE) is the natural encryption analogue of group signatures in that it allows verifiably encrypting messages for some anonymous member of a group while providing evidence that the receiver is a properly certified group member. Should the need arise, an opening authority is capable of identifying the receiver of any ciphertext. As introduced by Kiayias, Tsiounis and Yung (Asiacrypt’07), GE is motivated by applications in the context of oblivious retriever storage systems, anonymous third parties and hierarchical group signatures. This paper provides the first realization of group encryption under lattice assumptions. Our construction is proved secure in the standard model (assuming interaction in the proving phase) under the Learning-With-Errors (LWE) and Short-Integer-Solution (SIS) assumptions. As a crucial component of our system, we describe a new zero-knowledge argument system allowing one to demonstrate that a given ciphertext is a valid encryption under some hidden but certified public key, which requires proving quadratic statements about LWE relations. Specifically, our protocol allows arguing knowledge of witnesses consisting of $\mathbf{X} \in \Z_q^{m \times n}$, $\mathbf{s} \in \Z_q^n$ and a small-norm $\mathbf{e} \in \Z^m$ which underlie a public vector $\mathbf{b}=\mathbf{X} \cdot \mathbf{s} + \mathbf{e} \in \Z_q^m$ while simultaneously proving that the matrix $\mathbf{X} \in \Z_q^{m \times n}$ has been correctly certified. We believe our proof system to be useful in other applications involving zero-knowledge proofs in the lattice setting.
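    As a plain-computation companion to the statement above, the following toy Python sketch generates and checks an LWE relation $\mathbf{b}=\mathbf{X}\cdot\mathbf{s}+\mathbf{e} \bmod q$ with a small-norm error. The paper's actual contribution, arguing knowledge of $(\mathbf{X}, \mathbf{s}, \mathbf{e})$ in zero knowledge while also proving $\mathbf{X}$ certified, is naturally not captured by this sketch.

    ```python
    import random

    m_dim, n_dim, q, B = 6, 3, 257, 2     # toy sizes; B bounds the error entries
    random.seed(1)

    X = [[random.randrange(q) for _ in range(n_dim)] for _ in range(m_dim)]
    s = [random.randrange(q) for _ in range(n_dim)]
    e = [random.randrange(-B, B + 1) for _ in range(m_dim)]   # small-norm error
    b = [(sum(X[i][j] * s[j] for j in range(n_dim)) + e[i]) % q for i in range(m_dim)]

    # Re-checking the relation for the claimed witness (X, s, e):
    assert all(
        (sum(X[i][j] * s[j] for j in range(n_dim)) + e[i]) % q == b[i]
        for i in range(m_dim)
    )
    assert all(abs(ei) <= B for ei in e)
    ```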

  • Universal Composition with Responsive Environments
    Jan Camenisch; Robert R. Enderlein; Stephan Krenn; Ralf Küsters; Daniel Rausch
    IBM Research – Zurich, Rüschlikon, Switzerland; IBM Research – Zurich, Rüschlikon, Switzerland and Department of Computer Science, ETH Zurich, Zurich, Switzerland; AIT Austrian Institute of Technology GmbH, Vienna, Austria; University of Trier, Trier, Germany; University of Trier, Trier, Germany
  • Abstract:

    In universal composability frameworks, adversaries (or environments) and protocols/ideal functionalities often have to exchange meta-information on the network interface, such as algorithms, keys, signatures, ciphertexts, signaling information, and corruption-related messages. For these purely modeling-related messages, which do not reflect actual network communication, it would often be very reasonable and natural to expect that adversaries/environments provide the requested information immediately or give control back to the protocol/functionality immediately after having received some information. However, in none of the existing models for universal composability is this guaranteed. We call this the non-responsiveness problem. As discussed in the paper, while non-responsiveness does not formally invalidate any of the universal composability models, it has many disadvantages, such as unnecessarily complex specifications and less expressivity. Also, this problem has often been ignored in the literature, leading to ill-defined and flawed specifications. Protocol designers really should not have to care about this problem at all, but currently they have to: giving the adversary/environment the option to not respond immediately to modeling-related requests does not translate to any real attack scenario.

    This paper solves the non-responsiveness problem and its negative consequences by avoiding this artificial modeling problem altogether. We propose the new concepts of responsive environments and adversaries. Such environments and adversaries must provide a valid response to modeling-related requests before any other protocol/functionality is activated. Hence, protocol designers do not have to worry any longer about artifacts resulting from such requests not being answered promptly. Our concepts apply to all existing models for universal composability, as exemplified for the UC, GNUC, and IITM models, with full definitions and proofs (simulation relations, transitivity, equivalence of various simulation notions, composition theorems) provided for the IITM model.

  • Selective-Opening Security in the Presence of Randomness Failures
    Viet Tung Hoang; Jonathan Katz; Adam O’Neill; Mohammad Zaheri
    UC Santa Barbara; University of Maryland; Georgetown University; Georgetown University
  • Abstract:

    We initiate the study of public-key encryption (PKE) secure against selective-opening attacks (SOA) in the presence of randomness failures, i.e., when the sender may (inadvertently) use low-quality randomness. In the SOA setting, an adversary can adaptively corrupt senders; this notion is natural to consider in tandem with randomness failures since an adversary may target senders by multiple means.

    Concretely, we first treat SOA security of nonce-based PKE. After formulating an appropriate definition of SOA-secure nonce-based PKE, we provide efficient constructions in the non-programmable random-oracle model, based on lossy trapdoor functions.

    We then lift our notion of security to the setting of “hedged” PKE, which ensures security as long as the sender’s seed, message, and nonce jointly have high entropy. This unifies the notions and strengthens the protection that nonce-based PKE provides against randomness failures even in the non-SOA setting. We lift our definitions and constructions of SOA-secure nonce-based PKE to the hedged setting as well.

  • Cryptographic applications of capacity theory: On the optimality of Coppersmith’s method for univariate polynomials
    Ted Chinburg; Brett Hemenway; Nadia Heninger; Zachary Scherr
    University of Pennsylvania
  • Abstract:

    We draw a new connection between Coppersmith’s method for finding small solutions to polynomial congruences modulo integers and the capacity theory of adelic subsets of algebraic curves. Coppersmith’s method uses lattice basis reduction to construct an auxiliary polynomial that vanishes at the desired solutions. Capacity theory provides a toolkit for proving when polynomials with certain boundedness properties do or do not exist. Using capacity theory, we prove that Coppersmith’s bound for univariate polynomials is optimal in the sense that there are no auxiliary polynomials of the type he used that would allow finding roots of size $N^{1/d+\epsilon}$ for any monic degree-$d$ polynomial modulo $N$. Our results rule out the existence of polynomials of any degree and do not rely on lattice algorithms, thus eliminating the possibility of improvements for special cases or even superpolynomial-time improvements to Coppersmith’s bound. We extend this result to constructions of auxiliary polynomials using binomial polynomials, and rule out the existence of any auxiliary polynomial of this form that would find solutions of size $N^{1/d+\epsilon}$ unless $N$ has a very small prime factor.

  • A Key Recovery Attack on MDPC with CCA Security Using Decoding Errors
    Thomas Johansson; Paul Stankovski; Qian Guo
    Lund University
  • Abstract:

    Algorithms for secure encryption in a post-quantum world are currently receiving a lot of attention in the research community, including several larger projects and a standardization effort from NIST. One of the most promising algorithms is the code-based scheme called QC-MDPC, which has excellent performance and a small public key size.

    In this work we present a very efficient key recovery attack on the QC-MDPC scheme, using the fact that decryption relies on an iterative decoding step which can fail with some small probability. We identify a dependence between the secret key and the failure in decoding. This can be used to build what we refer to as a distance spectrum for the secret key, which is the set of all distances between any two ones in the secret key. In a reconstruction step we then determine the secret key from the distance spectrum. The attack has been implemented and tested on a proposed instance of QC-MDPC for 80-bit security. It successfully recovers the secret key in minutes.
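    To illustrate the central object, here is a minimal Python sketch that computes the distance spectrum of a binary (cyclic) key directly. In the attack this set is instead estimated statistically from decoding failures; the toy key below is just for illustration.

    ```python
    def distance_spectrum(key_bits):
        # All cyclic distances between any two ones in the key.
        r = len(key_bits)
        ones = [i for i, bit in enumerate(key_bits) if bit]
        dists = set()
        for i in ones:
            for j in ones:
                if i != j:
                    d = (j - i) % r
                    dists.add(min(d, r - d))   # distances are cyclic
        return dists

    print(distance_spectrum([1, 0, 0, 1, 0, 1, 0, 0]))   # toy length-8 key -> {2, 3}
    ```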

    A slightly modified version of the attack can be applied to proposed versions of the QC-MDPC scheme that provide IND-CCA security. The attack is a bit more complex in this case, but its cost still lies very much below the security level. The reason why we can break schemes with proven CCA security is that the model for these proofs typically does not include the possibility of decoding errors.

  • On the Security of Supersingular Isogeny Cryptosystems
    Steven Galbraith; Christophe Petit; Barak Shani; Yan Bo Ti
    Auckland; Oxford; Auckland; Auckland
  • Abstract:

    We study cryptosystems based on supersingular isogenies.
    This is an active area of research in post-quantum cryptography.
    Our first contribution is to give a very powerful active attack on the supersingular isogeny encryption scheme.
    This attack can only be prevented by using a (relatively expensive) countermeasure.
    Our second contribution is to show that the security of all schemes of this type depends on the difficulty of computing the endomorphism ring of a supersingular elliptic curve.
    This result gives significant insight into the difficulty of the isogeny problem that underlies the security of these schemes.
    Our third contribution is to give a reduction that uses partial knowledge of shared keys to determine an entire shared key. This can be used to retrieve the secret key, given information leaked from a side-channel attack on the key exchange protocol.
    A corollary of this work is the first bit security result for the supersingular isogeny key exchange: Computing any component of the $j$-invariant is as hard as computing the whole $j$-invariant.

    Our paper therefore provides an improved understanding of the security of these cryptosystems.
    We stress that our work does not imply that these systems are insecure, or that they should not be used.
    However, it highlights that implementations of these schemes will need to take account of the risks associated with various active and side-channel attacks.

  • Verifiable Functional Encryption
    Saikrishna Badrinarayanan; Vipul Goyal; Aayush Jain; Amit Sahai
    UCLA; Microsoft Research, India; UCLA; UCLA
  • Abstract:

    In light of security challenges that have emerged in a world with complex networks and cloud computing, the notion of functional encryption has recently been put forward. In this work, we show that in several applications of functional encryption (even those cited in the earliest works on functional encryption), the formal notion of functional encryption is actually \emph{not} sufficient to guarantee security. This is essentially because the case of a malicious authority and/or encryptor is not considered. To address this concern, we put forth the concept of \emph{verifiable functional encryption}, which captures the basic requirement of output correctness: even if the ciphertext is maliciously generated (and even if the setup and key generation are malicious), the decryptor is still guaranteed a meaningful notion of correctness which we show is crucial in several applications.

    We formalize the notion of verifiable functional encryption and, following prior work in the area, put forth a simulation-based and an indistinguishability-based notion of security. We show that simulation-based verifiable functional encryption is unconditionally impossible even in the most basic setting where there may only be a single key and a single ciphertext. We then give general positive results for the indistinguishability setting: a general compiler from any functional encryption scheme into a verifiable functional encryption scheme with the only additional assumption being the Decision Linear Assumption over Bilinear Groups (DLIN). We also give a generic compiler in the secret-key setting for functional encryption which maintains both message privacy and function privacy. Our positive results are general and also apply to other simpler settings such as Identity-Based Encryption, Attribute-Based Encryption and Predicate Encryption. We also give an application of verifiable functional encryption to the recently introduced primitive of functional commitments.

    Finally, in the context of indistinguishability obfuscation, there is a fundamental question of whether the correct program was obfuscated. In particular, the recipient of the obfuscated program needs a guarantee that the program indeed does what it was intended to do. This question turns out to be closely related to verifiable functional encryption. We initiate the study of verifiable obfuscation with a formal definition and construction of verifiable indistinguishability obfuscation.

  • How to Obtain Fully Structure-Preserving (Automorphic) Signatures from Structure-Preserving Ones
    Yuyu Wang; Zongyang Zhang; Takahiro Matsuda; Goichiro Hanaoka; Keisuke Tanaka
    Tokyo Institute of Technology/AIST; AIST; AIST; AIST; Tokyo Institute of Technology/JST CREST
  • Abstract:

    In this paper, we bridge the gap between structure-preserving signatures (SPSs) and fully structure-preserving signatures (FSPSs). In SPSs, all the messages, signatures, and verification keys consist only of group elements, while in FSPSs, even signing keys are required to be a collection of group elements. To achieve our goal, we introduce two new primitives called trapdoor signature and signature with auxiliary key, both of which can be derived from SPSs. By carefully combining both primitives, we obtain generic constructions of FSPSs from SPSs. Upon instantiating the above two primitives, we get many instantiations of FSPS with unilateral and bilateral message spaces. Different from previously proposed FSPSs, many of our instantiations also have the automorphic property, i.e., a signer can sign his own verification key. As by-product results, one of our instantiations has the shortest verification key size, signature size, and lowest verification cost among all previous constructions based on standard assumptions, and one of them is the first FSPS scheme in type I bilinear groups.

  • A Tale of Two Shares: Why Two-Share Threshold Implementation Seems Worthwhile - and Why it is Not
    Cong Chen; Mohammad Farmani; Thomas Eisenbarth
    Worcester Polytechnic Institute
  • Abstract:

    This work explores the possibilities for practical Threshold Implementation (TI) with only two shares, in order to obtain a smaller design that needs less randomness but is still first-order leakage resistant.
    We present the first two-share Threshold Implementations of two lightweight block ciphers, Simon and Present. The implementation results show that two-share TI improves the compactness but usually further reduces the throughput when compared with first-order resistant three-share schemes. Our leakage analysis shows that two-share TI can retain perfect first-order resistance. However, the analysis also exposes a strong second-order leakage. All results are backed up by simulation as well as analysis of actual implementations.
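    For background, here is a minimal Python sketch of the two-share Boolean masking that two-share TI builds on: a secret value $x$ is processed as $(x_1, x_2)$ with $x = x_1 \oplus x_2$, so neither share alone reveals $x$. Implementing the ciphers' nonlinear layers on such shares while preserving TI's non-completeness is exactly the delicate part studied in the paper and is not shown here.

    ```python
    import secrets

    def share(x: int) -> tuple:
        r = secrets.randbits(8)     # fresh randomness per sharing
        return (r, x ^ r)           # each share alone is uniformly distributed

    def unshare(x1: int, x2: int) -> int:
        return x1 ^ x2

    x = 0b10110011
    assert unshare(*share(x)) == x
    ```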

  • Iterated Random Oracle: A Universal Approach for Finding Loss in Security Reduction
    Fuchun Guo; Willy Susilo; Yi Mu; Rongmao Chen; Jianchang Lai; Guomin Yang
    University of Wollongong, Australia
  • Abstract:

    The indistinguishability security of a public-key cryptosystem can be reduced to a computationally hard assumption in the random oracle model, where the solution to a computationally hard problem is hidden in one of the adversary’s queries to the random oracle. Usually, there is a finding loss when finding the correct solution within the query set, especially when the decisional variant of the computational problem is also hard. The problem of finding loss must be addressed to obtain tight(er) reductions of this type. In EUROCRYPT 2008, Cash, Kiltz and Shoup proposed a novel approach using a trapdoor test that can solve the finding loss problem. The simulator can find the correct solution with overwhelming probability, if there exists a trapdoor test for the adopted hard problem. The proposed approach is efficient and can be used for many Diffie-Hellman computational assumptions. The only limitation is the requirement of a trapdoor test, which must be found for the adopted computational assumptions.

    In this paper, we introduce a universal approach for finding loss, namely the Iterated Random Oracle, which can be applied to all computational assumptions. The finding loss in our proposed approach is very small. For $2^{60}$ queries to the random oracle, the success probability of finding the correct solution within the query set will be as large as $1/64$, compared to $1/2^{60}$ for a random pick. We show how to apply the iterated random oracle for a security transformation from a key encapsulation mechanism with one-way security to normal encryption with indistinguishability security. The security reduction is very tight due to the small finding loss. The transformation does not expand the ciphertext size. We also give an application of the iterated random oracle to key exchange.

  • A Shuffle Argument Secure in the Generic Model
    Prastudy Fauzi; Helger Lipmaa; Michał Zając
    University of Tartu, Estonia
  • Abstract:

    We propose a new random-oracle-less NIZK shuffle argument. It has a simple structure, where the first verification equation ascertains that the prover has committed to a permutation matrix, the second verification equation ascertains that the same permutation was used to permute the ciphertexts, and the third verification equation ascertains that the input ciphertexts were “correctly” formed. The new argument has $3.5$ times more efficient verification than the hitherto most efficient shuffle argument by Fauzi and Lipmaa (CT-RSA 2016). Compared to the Fauzi-Lipmaa shuffle argument, we (i) remove the use of knowledge assumptions and prove our scheme sound in the generic bilinear group model, and (ii) prove standard soundness instead of culpable soundness.
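    As a plain-computation reference for the first verification equation, here is a minimal Python sketch of the property it enforces: the committed matrix must be a permutation matrix (Boolean entries with exactly one 1 in every row and column). The actual argument of course checks this on commitments, without revealing the matrix.

    ```python
    def is_permutation_matrix(M):
        n = len(M)
        return (all(len(row) == n for row in M)                         # square
                and all(e in (0, 1) for row in M for e in row)          # Boolean entries
                and all(sum(row) == 1 for row in M)                     # one 1 per row
                and all(sum(M[i][j] for i in range(n)) == 1             # one 1 per column
                        for j in range(n)))

    assert is_permutation_matrix([[0, 1, 0], [0, 0, 1], [1, 0, 0]])
    assert not is_permutation_matrix([[1, 1, 0], [0, 0, 1], [0, 0, 0]])
    ```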

  • Efficient and Provable White-Box Primitives
    Pierre-Alain Fouque; Pierre Karpman; Paul Kirchner; Brice Minaud
    University Rennes 1, France, and Institut Universitaire de France; Inria and Ecole Polytechnique, France, and Nanyang Technological University, Singapore; Ecole Normale Supérieure, France; University Rennes 1, France and Royal Holloway University, United Kingdom
  • Abstract:

    In recent years there have been several attempts to build white-box block ciphers whose implementations aim to be incompressible. This includes the weak white-box ASASA construction by Bouillaguet, Biryukov and Khovratovich from Asiacrypt 2014, and the recent space-hard construction by Bogdanov and Isobe from CCS 2015. In this article we propose the first constructions aiming at the same goal while offering provable security guarantees. Moreover we propose concrete instantiations of our constructions, which prove to be quite efficient and competitive with prior work. Thus provable security comes with a surprisingly low overhead.

  • Cryptographic Reverse Firewall via Malleable Smooth Projective Hash Functions
    Rongmao Chen; Yi Mu; Guomin Yang; Willy Susilo; Fuchun Guo; Mingwu Zhang
    University of Wollongong/National University of Defense Technology, China; University of Wollongong; University of Wollongong; University of Wollongong; University of Wollongong; Hubei University of Technology, China
  • Abstract:

    Motivated by the revelations of Edward Snowden, post-Snowden cryptography has become a prominent research direction in recent years. In Eurocrypt 2015, Mironov and Stephens-Davidowitz proposed a novel concept named cryptographic reverse firewall (CRF) which can resist exfiltration of secret information from an arbitrarily compromised machine. In this work, we continue this line of research and present generic CRF constructions for several widely used cryptographic protocols based on a new notion named malleable smooth projective hash function. Our contributions can be summarized as follows.

    We introduce the notion of malleable smooth projective hash function, which is an extension of the smooth projective hash function (SPHF) introduced by Cramer and Shoup (Eurocrypt’02) with the new properties of key malleability and element rerandomizability. We demonstrate the feasibility of our new notion using graded rings proposed by Benhamouda et al. (Crypto’13), and present an instantiation from the k-linear assumption. Moreover, we show that the malleable SPHF can also be based on other standard assumptions.

    We show how to generically construct CRFs via malleable SPHFs in a modular way for some widely used cryptographic protocols. Specifically, we propose generic constructions of CRFs for the unkeyed message-transmission protocol and the oblivious signature-based envelope (OSBE) protocol of Blazy, Pointcheval and Vergnaud (TCC’12). We also present a new malleable SPHF from the linear encryption of valid signatures for instantiating the OSBE protocol with CRFs.

    We further study the two-pass oblivious transfer (OT) protocol and show that the malleable SPHF does not suffice for its CRF constructions. We then develop a new OT framework from graded rings and show how to construct OT-CRFs by modifying the malleable SPHF framework. This new framework encompasses the DDH-based OT-CRF constructions proposed by Mironov and Stephens-Davidowitz (Eurocrypt’15), and yields a new construction under the $k$-linear assumption.