International Association for Cryptologic Research

IACR News Central

Here you can see all recent updates to the IACR webpage.
23 February 2017
ePrint Report AES-GCM-SIV: Specification and Analysis Shay Gueron, Adam Langley, Yehuda Lindell
In this paper, we describe and analyze the security of the AES-GCM-SIV mode of operation, as defined in the CFRG specification \cite{CFRG}. This mode differs from the original GCM-SIV mode that was designed in \cite{GL2015} in two main aspects. First, the CTR encryption uses a 127-bit pseudo-random counter instead of a 95-bit pseudo-random value concatenated with a 32-bit counter. This construction leads to improved security bounds when encrypting short messages. In addition, a new key derivation function is used for deriving a fresh set of keys for each nonce. This addition allows for encrypting up to $2^{50}$ messages with the same key, compared to the significant limitation of only $2^{32}$ messages that were allowed with GCM-SIV (which inherited this same limit from AES-GCM). As a result, the new construction is well suited for real world applications that need a nonce-misuse resistant Authenticated Encryption scheme. We explain the limitations of GCM-SIV, which motivate the new construction, prove the security properties of AES-GCM-SIV, and show how these properties support real usages. Implementations are publicly available in \cite{ShayGit}. We remark that AES-GCM-SIV is already integrated into Google's BoringSSL library \cite{BoringSSL}, and its deployment for ticket encryption in QUIC \cite{QUIC} is underway.
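
As a rough illustration of the per-nonce key-derivation idea (a sketch only; the block layout and truncation below are illustrative, and the CFRG specification \cite{CFRG} is authoritative), one can derive fresh keys by running AES as a PRF over counter-and-nonce blocks:

# Sketch of the per-nonce key-derivation idea in AES-GCM-SIV: instead of
# using the master key directly, a fresh set of keys is derived for every
# nonce by running AES as a PRF over (counter || nonce) blocks. The exact
# block layout and output lengths below are illustrative, not the CFRG spec.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def derive_per_nonce_keys(master_key: bytes, nonce: bytes):
    assert len(nonce) == 12
    enc = Cipher(algorithms.AES(master_key), modes.ECB()).encryptor()
    blocks = []
    for counter in range(6):  # enough blocks for a MAC key and an AES-256 key
        block = counter.to_bytes(4, "little") + nonce
        blocks.append(enc.update(block)[:8])  # illustrative truncation
    auth_key = b"".join(blocks[:2])   # 128-bit authentication key
    enc_key = b"".join(blocks[2:])    # 256-bit encryption key
    return auth_key, enc_key

master = os.urandom(32)
auth_key, enc_key = derive_per_nonce_keys(master, os.urandom(12))
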
ePrint Report Cloud Storage File Recoverability (Full Version) Christian A. Gorke, Christian Janson, Frederik Armknecht, Carlos Cid
Data loss is perceived as one of the major threats for cloud storage. Consequently, the security community developed several challenge-response protocols that allow a user to remotely verify whether an outsourced file is still intact. However, two important practical problems have not yet been considered. First, clients commonly outsource multiple files of different sizes, raising the question of how to formalize such a scheme and, in particular, how to ensure that all files can be audited simultaneously. Second, in case auditing of the files fails, existing schemes do not give the client any method to prove whether the original files are still recoverable.

We address both problems and describe appropriate solutions. The first problem is tackled by providing a new type of "Proofs of Retrievability" scheme, enabling a client to check all files simultaneously in a compact way. The second problem is solved by defining a novel procedure called "Proofs of Recoverability", enabling a client to obtain assurance of whether a file is recoverable or irreparably damaged. Finally, we present a combination of both schemes allowing the client to check the recoverability of all her original files, thus ensuring cloud storage file recoverability.
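
For intuition, a toy challenge-response audit in the spirit of retrievability proofs might look as follows (an illustrative sketch, not the scheme of the paper; block size, tag construction, and challenge count are arbitrary choices):

# Toy challenge-response audit in the spirit of proofs of retrievability:
# the client stores a per-block HMAC tag, then spot-checks random blocks.
# This is an illustrative sketch, not the scheme from the paper.
import hmac, hashlib, os, random

BLOCK = 4096

def setup(key: bytes, data: bytes):
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    tags = [hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]
    return blocks, tags  # blocks go to the server, tags stay with the client

def audit(key: bytes, server_blocks, tags, challenges=10):
    for i in random.sample(range(len(tags)), min(challenges, len(tags))):
        response = server_blocks[i]  # server answers the challenged index
        expected = hmac.new(key, str(i).encode() + response,
                            hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tags[i]):
            return False
    return True

key = os.urandom(32)
blocks, tags = setup(key, os.urandom(BLOCK * 64))
print(audit(key, blocks, tags))  # True while the file is intact
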
After the trials of remote internet voting for local elections in 2011 and parliamentary elections in 2013, a number of local referendums have renewed interest in internet voting in Norway.

The voting scheme used in Norway is not quantum-safe and it has limited voter verifiability. In this case study, we consider how we can use fully homomorphic encryption to construct a quantum-safe voting scheme with better voter verifiability.

While fully homomorphic cryptosystems are not yet efficient enough for the system we sketch to be implemented and run today, we expect future improvements in fully homomorphic encryption that may eventually make these techniques practical.
ePrint Report SymSum: Symmetric-Sum Distinguishers Against Round Reduced SHA3 Dhiman Saha, Sukhendu Kuila, Dipanwita Roy Chowdhury
In this work we show the existence of special sets of inputs for which the sum of the images under SHA3 exhibits a symmetric property. We develop an analytical framework which accounts for the existence of these sets. The framework consists of identifying a generic property of iterated SPN-based functions pertaining to the round-constant addition, and combining it with the notion of $m$-fold vectorial derivatives for differentiation over specially selected subspaces. Based on this, we propose a new distinguisher called SymSum for the SHA3 family which penetrates up to 9 rounds and outperforms the ZeroSum distinguisher by a factor of four. Interestingly, the current work is the first analysis of SHA3/Keccak that relies on round constants but is independent of their Hamming weights.
In the \emph{conditional disclosure of secrets} problem (Gertner et al., J. Comput. Syst. Sci., 2000) Alice and Bob, who hold inputs $x$ and $y$ respectively, wish to release a common secret $s$ to Carol (who knows both $x$ and $y$) if and only if the input $(x,y)$ satisfies some predefined predicate $f$. Alice and Bob are allowed to send a single message to Carol, which may depend on their inputs and some joint randomness, and the goal is to minimize the communication complexity while providing information-theoretic security.
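
For intuition, the textbook one-message CDS for the equality predicate (a standard example, not taken from this paper) can be sketched as follows:

# A classic information-theoretic CDS for the equality predicate f(x, y) = [x == y]:
# with shared randomness (r, w), Alice sends a = r*x + w + s and Bob sends
# b = r*y + w. Carol computes a - b = r*(x - y) + s, which equals s iff x == y
# and is uniformly random otherwise. Textbook example, not from this paper.
import secrets

P = 2**61 - 1  # a prime modulus

def cds_equality(x, y, s):
    r = secrets.randbelow(P)
    w = secrets.randbelow(P)          # shared randomness, unknown to Carol
    a = (r * x + w + s) % P           # Alice's single message
    b = (r * y + w) % P               # Bob's single message
    return (a - b) % P                # Carol's reconstruction

s = 42
assert cds_equality(7, 7, s) == s     # secret released when x == y
print(cds_equality(7, 8, s))          # uniform garbage when x != y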

Following Gay, Kerenidis, and Wee (Crypto 2015), we study the communication complexity of CDS protocols and derive the following positive and negative results.

- (Closure) A CDS for $f$ can be turned into a CDS for its complement $\bar{f}$ with only a minor blow-up in complexity. More generally, for a (possibly non-monotone) predicate $h$, we obtain a CDS for $h(f_1,\ldots,f_m)$ whose cost is essentially linear in the formula size of $h$ and polynomial in the CDS complexity of the $f_i$.

- (Amplification) It is possible to reduce the privacy and correctness error of a CDS from constant to $2^{-k}$ with a multiplicative overhead of $O(k)$. Moreover, this overhead can be amortized over $k$-bit secrets.

- (Amortization) Every predicate $f$ over $n$-bit inputs admits a CDS for multi-bit secrets whose amortized communication complexity per secret bit grows linearly with the input length $n$ for sufficiently long secrets. In contrast, the best known upper bound for single-bit secrets is exponential in $n$.

- (Lower bounds) There exists a (non-explicit) predicate $f$ over $n$-bit inputs for which any perfect (single-bit) CDS requires communication of at least $\Omega(n)$. This is an exponential improvement over the previously known $\Omega(\log n)$ lower bound.

- (Separations) There exists an (explicit) predicate whose CDS complexity is exponentially smaller than its randomized communication complexity. This matches a lower bound of Gay et al. and, combined with another result of theirs, yields an exponential separation between the communication complexity of linear CDS and non-linear CDS. This is the first provable gap between the communication complexity of linear CDS (which captures most known protocols) and non-linear CDS.

Our results solve several open problems posed by Gay et al., and have applications to secret-sharing schemes for forbidden-graph access structures.
We propose a Somewhat Homomorphic Encryption (SHE) scheme based on the Learning With Rounding (LWR) problem. The LWR problem is similar to the more classical Learning With Errors (LWE) problem and was proposed as a deterministic variant of it; setting up an LWR instance does not require the generation of Gaussian noise. Thus our SHE scheme can be instantiated without the need for expensive Gaussian noise sampling. Our initial scheme provides lower ciphertext sizes for small plaintext spaces than existing leading schemes such as BGV.
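
For intuition, the difference between LWE and LWR sample generation (with toy, insecure parameters) can be sketched as:

# Sketch contrasting LWE and LWR sample generation: LWE adds Gaussian noise
# to <a, s> mod q, while LWR deterministically rounds <a, s> from mod q down
# to mod p. Parameters are toy values for illustration only.
import random

n, q, p = 16, 2**13, 2**5
s = [random.randrange(q) for _ in range(n)]          # secret

def lwe_sample(sigma=3.2):
    a = [random.randrange(q) for _ in range(n)]
    e = round(random.gauss(0, sigma))                # Gaussian noise
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

def lwr_sample():
    a = [random.randrange(q) for _ in range(n)]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    b = (inner * p) // q                             # deterministic rounding
    return a, b
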
ePrint Report Analysis of AES, SKINNY, and Others with Constraint Programming Siwei Sun, David Gerault, Pascal Lafourcade, Qianqian Yang, Yosuke Todo, Kexin Qiao, Lei Hu
Searching for different types of distinguishers is a common task in symmetric-key cryptanalysis. In this work, we employ the constraint programming (CP) technique to tackle such problems. First, we show that a simple application of the CP approach proposed by Gerault et al. leads to the solution of the open problem of determining the exact lower bound on the number of active S-boxes for 6-round AES-128 in the related-key model. Subsequently, we show that the same approach can be applied to searching for integral distinguishers, impossible differentials, and zero-correlation linear approximations, in both the single-key and related-(twea)key models. We implement the method using the open-source constraint solver Choco and apply it to the block ciphers PRESENT, SKINNY, and HIGHT (an ARX construction). As a result, we find 16 related-tweakey impossible differentials for 12-round SKINNY-64-128, based on which we construct an 18-round attack on SKINNY-64-128 (one target version for the crypto competition \url{https://sites.google.com/site/skinnycipher} announced at ASK 2016). Moreover, we show that in some cases, when equipped with proper strategies (ordering heuristic, restart, and dynamic branching strategies), the CP approach can be very efficient. Therefore, we suggest that the constraint programming technique should become a convenient tool in the hands of symmetric-key cryptanalysts.
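
As a flavor of the technique, here is a toy CP model (using the python-constraint package, in a made-up minimal setting) for byte-level differential activity through one AES MixColumns column, whose branch number is 5; the paper's models are far richer:

# Toy constraint-programming model in the spirit of the paper: byte-level
# (truncated) differential activity through one AES MixColumns column.
# MixColumns has branch number 5, so an active column must have at least 5
# active input+output bytes. A pedagogical sketch using python-constraint,
# far simpler than the paper's full models.
from constraint import Problem

problem = Problem()
inputs = [f"in{i}" for i in range(4)]
outputs = [f"out{i}" for i in range(4)]
problem.addVariables(inputs + outputs, [0, 1])  # 1 = byte is active

def branch_number(*acts):
    total = sum(acts)
    return total == 0 or total >= 5  # MixColumns branch-number constraint

problem.addConstraint(branch_number, inputs + outputs)
problem.addConstraint(lambda *acts: sum(acts) > 0, inputs)  # non-trivial trail

solutions = problem.getSolutions()
best = min(sum(sol.values()) for sol in solutions)
print("minimum active S-boxes in one active column:", best)  # 5
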
ePrint Report Security Notions for Bidirectional Channels Giorgia Azzurra Marson, Bertram Poettering
This paper closes a definitional gap in the context of modeling cryptographic two-party channels. We note that, while most security models for channels consider exclusively unidirectional communication, real-world protocols like TLS and SSH are typically used for bidirectional interaction. The motivational question behind this paper is: can analyses conducted with the unidirectional setting in mind -- including the current ones for TLS and SSH -- also vouch for security in the case of bidirectional channel usage? And, in the first place, what does security in the bidirectional setting actually mean?

After developing confidentiality and integrity notions for bidirectional channels, we analyze a standard way of combining two unidirectional channels to realize one bidirectional channel. Although it turns out that this construction is, in general, not as secure as commonly believed, we confirm that for many practical schemes security is provided also in the bidirectional sense.
22 February 2017
ePrint Report Conditional Cube Attack on Round-Reduced ASCON Zheng Li, Xiaoyang Dong, Xiaoyun Wang
This paper evaluates the security level of the authenticated encryption scheme Ascon against cube-like methods. Ascon, submitted by Dobraunig et al., is one of the 16 survivors of the third round of the CAESAR competition. The cube-like method was first used by Dinur et al. to analyze Keccak keyed modes. At CT-RSA 2015, Dobraunig et al. applied this method to 5/6-round reduced Ascon, whose structure is similar to Keccak keyed modes. However, Ascon's non-linear layer is more complex and its state is much smaller, which makes it hard for attackers to select enough cube variables that do not multiply with each other after the first round. This seems to be the reason why the best previous key-recovery attack reaches only 6-round Ascon, while for Keccak keyed modes (Keccak-MAC and Keyak) the number of attacked rounds is no less than 7.

In this paper, we generalize the conditional cube attack proposed by Huang et al., find new cubes depending on some key-bit conditions for 5/6-round reduced Ascon, and translate the previous theoretical 6-round attack with $2^{66}$ time complexity into a practical one with $2^{40}$ time complexity. Moreover, we propose the first 7-round key-recovery attack on Ascon. By introducing the cube-like key-subset technique, we divide the full key space into many subsets according to different key conditions. For each key subset, we launch the cube tester to determine whether the key falls into it. Finally, we recover the full key by testing all the key subsets. The total time complexity is about $2^{103.9}$. In addition, for a weak-key subset of size $2^{117}$, the attack is more efficient and costs only $2^{77}$ time. These attacks do not threaten the full-round (12 rounds) Ascon.
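
For readers unfamiliar with cube attacks, the following toy sketch (a made-up Boolean function, not Ascon) shows how summing over a cube isolates a superpoly in the key bits:

# Minimal illustration of a cube sum: XOR-summing a Boolean function over all
# assignments of the cube variables kills every term that misses some cube
# variable, leaving only the coefficient of the maxterm (the superpoly) in
# the remaining key variables. Toy function, not Ascon itself.
from itertools import product

def f(k0, k1, v0, v1, v2):
    # toy keyed Boolean function over GF(2); the maxterm v0*v1*v2 carries k0^k1
    return ((v0 & v1 & v2) & (k0 ^ k1)) ^ (v0 & v1) ^ (k0 & v2) ^ k1

def cube_sum(key):
    total = 0
    for vs in product((0, 1), repeat=3):   # sum over the cube {v0, v1, v2}
        total ^= f(*key, *vs)
    return total  # value of the superpoly k0 ^ k1 at this key

for key in product((0, 1), repeat=2):
    print(key, cube_sum(key))              # cube sums leak key relations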
ePrint Report Cube-like Attack on Round-Reduced Initialization of Ketje Sr Xiaoyang Dong, Zheng Li, Xiaoyun Wang, Ling Qin
This paper studies the Keccak-based authenticated encryption (AE) scheme Ketje Sr against cube-like attacks. Ketje is one of the 16 remaining candidates of the third round of the CAESAR competition, and its primary recommendation is Ketje Sr. Although the cube-like method has been successfully applied to Ketje's sister ciphers, including Keccak-MAC and Keyak -- another Keccak-based AE scheme -- similar attacks are missing for Ketje. For Ketje Sr, the state (400-bit) is much smaller than that of Keccak-MAC and Keyak (1600-bit), so the 128-bit key and cubes of the same dimension occupy more lanes in Ketje Sr. Hence, the number of key bits independent of the cube sum is very small, which means the divide-and-conquer method (applied to the 7-round attack on Keccak-MAC by Dinur et al.) cannot be translated to Ketje Sr trivially. This property seems to be the barrier to translating the previous cube-like attacks to Ketje Sr.

In this paper, we evaluate Ketje Sr against the divide-and-conquer method. First, by applying the linear structure technique, we find some 32/64-dimension cubes of Ketje Sr that do not multiply with each other, nor with some bits of the key, in the first round. In addition, we introduce new dynamic variables, in place of the auxiliary variables used in Dinur et al.'s divide-and-conquer attack, to reduce the diffusion of the key as well as of the cube variables. Finally, we successfully launch 6/7-round key-recovery attacks on Ketje Sr v1 and v2 (v2 is the version presented for the third round of the CAESAR competition). In the 7-round attack, the complexity of the online phase for Ketje Sr v1 is $2^{113}$, while for Ketje Sr v2 it is $2^{97}$ (the preprocessing complexity is the same). We conclude that 7-round reduced Ketje Sr v2 is weaker than v1 against our attacks. In addition, some results on other Ketje instances and on Ketje Sr with smaller nonces are given. These are the first results on Ketje and bridge the cryptanalytic gap between it and its sister ciphers -- Keyak and the Keccak keyed modes.
This work introduces PassPhone, a new smartphone-based authentication scheme that outsources user verification to a trusted third party without sacrificing privacy: the trusted third party cannot learn the relation between users and service providers, nor can service providers learn their users' relations to other service providers. When employed as a second factor in conjunction with, for instance, passwords as a first factor, our scheme maximizes the deployability of two-factor authentication for service providers while maintaining user privacy. We conduct a twofold formal analysis of our scheme, the first regarding its general security, and the second regarding the anonymity and unlinkability of its users. Moreover, we provide an automatic analysis using AVISPA, a comparative evaluation to existing schemes under Bonneau et al.'s framework, and an evaluation of a prototypical implementation.
Algebraic manipulation detection codes are a class of error detecting codes which have found numerous applications in cryptography. In this paper we extend these codes to defeat general algebraic attacks - we call such codes general algebraic manipulation detection (GAMD) codes. Positive results are shown for the existence of GAMDs for the families of tampering functions corresponding to point additions and affine functions over a finite field. Compared to non-malleable codes, we demonstrate both positive and negative results regarding the existence of GAMDs for arbitrary families of tampering functions.
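
As background, the classic AMD encoding $E(s) = (s, x, x^3 + s\cdot x)$ over a prime field, which the GAMD notion generalizes, can be sketched as follows (a textbook example, not the paper's construction):

# Sketch of a classic algebraic manipulation detection (AMD) encoding
# E(s) = (s, x, x^3 + s*x) over F_p: any additive offset (a, b, c) chosen
# independently of x is detected except with probability <= 2/p, since the
# verification equation becomes a nonzero polynomial of degree <= 2 in x.
# Standard textbook AMD code, shown as background for the GAMD generalization.
import secrets

P = 2**61 - 1

def encode(s):
    x = secrets.randbelow(P)
    return (s, x, (pow(x, 3, P) + s * x) % P)

def verify(word):
    s, x, t = word
    return (pow(x, 3, P) + s * x) % P == t

def tamper(word, a, b, c):  # additive (algebraic) manipulation
    s, x, t = word
    return ((s + a) % P, (x + b) % P, (t + c) % P)

w = encode(123)
assert verify(w)
print(verify(tamper(w, 1, 0, 0)))  # False with overwhelming probability
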
ePrint Report Trust Is Risk: A Decentralized Financial Trust Platform Orfeas Stefanos Thyfronitis Litos, Dionysis Zindros
Centralized reputation systems use stars and reviews and thus require algorithm secrecy to avoid manipulation. In autonomous open source decentralized systems this luxury is not available. We create a reputation network for decentralized marketplaces where the trust each user gives to the other users is quantifiable and expressed in monetary terms. We introduce a new model for bitcoin wallets in which user coins are split among trusted associates. Direct trust is defined using shared bitcoin accounts via bitcoin's 1-of-2 multisig. Indirect trust is subsequently defined transitively. This enables formal game theoretic arguments pertaining to risk analysis. We prove that risk and maximum flows are equivalent in our model and that our system is Sybil-resilient. Our system allows for concrete financial decisions on the subjective monetary amount a pseudonymous party can be trusted with. Risk remains invariant under a direct trust redistribution operation followed by a purchase.
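
The risk/max-flow equivalence can be illustrated with a small sketch (made-up trust values, using the networkx library):

# Sketch of the paper's "indirect trust as maximum flow" idea using networkx:
# edges carry the direct trust (e.g., in satoshi) one party extends to another
# via shared 1-of-2 multisig outputs; indirect trust from a source to a target
# is the maximum flow between them. Graph values are made up for illustration.
import networkx as nx

g = nx.DiGraph()
g.add_edge("alice", "bob", capacity=50)     # alice directly entrusts bob 50
g.add_edge("alice", "carol", capacity=10)
g.add_edge("bob", "dave", capacity=40)
g.add_edge("carol", "dave", capacity=30)

flow_value, _ = nx.maximum_flow(g, "alice", "dave")
print("indirect trust alice -> dave:", flow_value)  # 40 + 10 = 50
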
In 2003, Schnorr introduced random sampling to find very short lattice vectors, as an alternative to enumeration. An improved variant has been used in the past few years by Kashiwabara et al. to solve the largest Darmstadt SVP challenges. However, the behaviour of random sampling and its variants is not well understood: all analyses so far rely on a questionable heuristic assumption, namely that the lattice vectors produced by some algorithm are uniformly distributed over certain parallelepipeds. In this paper, we introduce lattice enumeration with discrete pruning, which generalizes random sampling and its variants, and provides a novel geometric description based on partitions of the $n$-dimensional space. We obtain what is arguably the first sound analysis of random sampling, by showing how discrete pruning can be rigorously analyzed under the well-known Gaussian heuristic, in the same model as the Gama-Nguyen-Regev analysis of pruned enumeration from EUROCRYPT '10, albeit using different tools: we show how to efficiently compute the volume of the intersection of a ball with a box, and to efficiently approximate a large sum of many such volumes, based on statistical inference. Furthermore, we show how to select good parameters for discrete pruning by enumerating integer points in an ellipsoid. Our analysis is backed up by experiments and allows, for the first time, to reasonably estimate the success probability of random sampling and its variants, and to make comparisons with previous forms of pruned enumeration. Our work unifies random sampling and pruned enumeration and shows that they are complementary: both have different characteristics and offer different trade-offs to speed up enumeration.
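
The ball-box intersection volume mentioned above can be sanity-checked with a naive Monte Carlo estimate (a sketch only; the paper computes these volumes far more efficiently):

# Monte Carlo estimate of vol(ball ∩ box), the quantity the paper computes
# (exactly and via statistical inference) to analyze discrete pruning. This
# naive sampler is only a sanity-check sketch, not the paper's method.
import random

def ball_box_volume(radius, box, samples=200_000):
    # box is a list of (lo, hi) intervals; sample uniformly from the box
    hits = 0
    for _ in range(samples):
        point = [random.uniform(lo, hi) for lo, hi in box]
        if sum(x * x for x in point) <= radius * radius:
            hits += 1
    box_vol = 1.0
    for lo, hi in box:
        box_vol *= (hi - lo)
    return box_vol * hits / samples

print(ball_box_volume(1.2, [(0, 1)] * 3))  # volume of a ball slice in a unit box
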
ePrint Report Linear Cryptanalysis: Key Schedules and Tweakable Block Ciphers Thorsten Kranz, Friedrich Wiemer, Gregor Leander
This paper serves as a systematization of knowledge of linear cryptanalysis and provides novel insights in the areas of key-schedule design and tweakable block ciphers. We examine, in a step-by-step manner, the linear hull theorem in a general and consistent setting. Based on this, we study the influence of the choice of key schedule on linear cryptanalysis, a notoriously difficult but important subject. Moreover, we investigate how tweakable block ciphers can be analyzed with respect to linear cryptanalysis, a topic that surprisingly has not been scrutinized until now.
We address the problem of substring searchable encryption. A single user produces a big stream of data and later wants to learn the positions in the string at which certain patterns occur. Although current techniques exploit auxiliary data structures to achieve efficient substring search on the server side, the cost at the user side may be prohibitive. We revisit substring searchable encryption in order to reduce the storage cost of the auxiliary data structures. Our solution builds on a suffix array, which allows optimal storage cost $O(n)$, with a small hidden factor, in the size $n$ of the string. On top of that, we build an encrypted index that allows the server to answer substring queries without learning either the query or the result. We identify the leakage of the scheme following the work of Curtmola et al. \cite{sse2} and analyze the security of the protocol in the real/ideal framework. Moreover, we implemented our scheme and the state-of-the-art protocol \cite{Chase} to demonstrate the performance advantage of our solution with precise benchmark results. We improve the storage overhead of the encrypted index by a factor of $\mathbf{1.8}$ and its computation time by a factor of $\mathbf{4}$ on $10^6$-character data streams.
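
For reference, the plaintext suffix-array mechanics underlying the index (without any of the cryptographic layers) look like this:

# Plaintext sketch of the suffix-array machinery underlying the scheme: O(n)
# storage (one integer per position) and substring search by binary search
# over the sorted suffixes. The encrypted index wraps this structure; the
# cryptographic layer is omitted here.
def suffix_array(s: str):
    return sorted(range(len(s)), key=lambda i: s[i:])  # naive toy construction

def occurrences(s: str, sa, p: str):
    m = len(p)
    lo, hi = 0, len(sa)
    while lo < hi:                       # first suffix whose m-prefix >= p
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + m] < p:
            lo = mid + 1
        else:
            hi = mid
    start, hi = lo, len(sa)
    while lo < hi:                       # first suffix whose m-prefix > p
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + m] <= p:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])          # all positions where p occurs

text = "abracadabra"
print(occurrences(text, suffix_array(text), "abra"))  # [0, 7]
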
We define the concept of an encryptor combiner. Roughly, such a combiner takes as input n public keys for a public key encryption scheme, and produces a new combined public key. Anyone knowing a secret key for one of the input public keys can learn the secret key for the combined public key, but an outsider who just knows the input public keys (who can therefore compute the combined public key for himself) cannot decrypt ciphertexts from the combined public key. We actually think of public keys more generally as encryption procedures, which can correspond to, say, encrypting to a particular identity under an IBE scheme or encrypting to a set of attributes under an ABE scheme.

We show that encryptor combiners satisfying certain natural properties can give natural constructions of multi-party non-interactive key exchange, low-overhead broadcast encryption, and hierarchical identity-based encryption. We then show how to construct two different encryptor combiners. Our first is built from universal samplers (which can in turn be built from indistinguishability obfuscation) and is sufficient for each application above, in some cases improving on existing obfuscation-based constructions. Our second is built from lattices, and is sufficient for hierarchical identity-based encryption. Thus, encryptor combiners serve as a new abstraction that (1) is a useful tool for designing cryptosystems, (2) unifies constructing hierarchical IBE from vastly different assumptions, and (3) provides a target for instantiating obfuscation applications from better tools.
ePrint Report Practical Functional Encryption for Quadratic Functions with Applications to Predicate Encryption Carmen Elisabetta Zaira Baltico, Dario Catalano, Dario Fiore, Romain Gay
We present two practically efficient functional encryption schemes for a large class of quadratic functionalities. Specifically, our constructions enable the computation of so-called bilinear maps on encrypted vectors. This represents a practically relevant class of functions that includes, for instance, multivariate quadratic polynomials (over the integers). Our realizations work over asymmetric bilinear groups and are surprisingly efficient and easy to implement. For instance, in our most efficient scheme the public key and each ciphertext consist of $2n+1$ and $4n+2$ group elements respectively, where $n$ is the dimension of the encrypted vectors, while secret keys are only two group elements. Our two schemes build on similar ideas, but develop them in a different way in order to achieve distinct goals. Our first scheme is proved (selectively) secure under standard assumptions, while our second construction is concretely more efficient and is proved (adaptively) secure in the generic group model.

As a byproduct of our functional encryption schemes, we show new predicate encryption schemes for degree-two polynomial evaluation, where ciphertexts consist of only $O(n)$ group elements. This significantly improves the $O(n^2)$ bound one would get from inner product encryption-based constructions.
A recent work of Boyle et al. (Crypto 2016) suggests that ``group-based'' cryptographic protocols, namely ones that only rely on a cryptographically hard (Abelian) group, can be surprisingly powerful. In particular, they present succinct two-party protocols for securely computing branching programs and NC1 circuits under the DDH assumption, providing the first alternative to fully homomorphic encryption.

In this work we further explore the power of group-based secure computation protocols, improving both their asymptotic and concrete efficiency. We obtain the following results.

- Black-box use of the group. We modify the succinct protocols of Boyle et al. so that they only make black-box use of the underlying group, eliminating an expensive non-black-box setup phase.

- Round complexity. For any constant number of parties, we obtain 2-round MPC protocols based on a PKI setup under the DDH assumption. Prior to our work, such protocols were only known using fully homomorphic encryption or indistinguishability obfuscation.

- Communication complexity. Under DDH, we present a secure 2-party protocol for any NC1 or log-space computation with $n$ input bits and $m$ output bits using $n + (1+o(1))m + \poly(\lambda)$ bits of communication, where $\lambda$ is a security parameter. In particular, our protocol can generate $n$ instances of bit-oblivious-transfer using $(4+o(1)) \cdot n$ bits of communication. This gives the first constant-rate OT protocol under DDH.

- Computation complexity. We present several techniques for improving the computational cost of the share conversion procedure of Boyle et al., improving the concrete efficiency of group-based protocols by several orders of magnitude.
ePrint Report Bitcoin as a Transaction Ledger: A Composable Treatment Christian Badertscher, Ueli Maurer, Daniel Tschudi, Vassilis Zikas
Bitcoin is perhaps the most prominent example of a distributed cryptographic protocol that is extensively used in practice. Nonetheless, existing security proofs are property-based, and as such they do not support composition.

In this work we put forth a universally composable treatment of the Bitcoin protocol. We specify the goal that Bitcoin aims to achieve as a ledger shared functionality (a.k.a. global setup) in the (G)UC model of Canetti et al. [TCC'07]. Our ledger functionality is weaker than the one recently proposed by Kiayias, Zhou, and Zikas [EUROCRYPT'16], but unlike the latter, which is arguably not implementable given the Bitcoin assumptions, we prove that the one proposed here is securely UC-realized under standard assumptions by an appropriate abstraction of Bitcoin as a UC protocol. We further show how known property-based approaches can be cast as special instances of our treatment, and how their underlying assumptions can be cast in (G)UC without restricting the environment or the adversary.
Pattern matching is essential to applications such as filtering content in data streams, searching on genomic data, and searching for correlations in medical data. However, increasing concerns over user and data privacy, exacerbated by threats of mass surveillance, have made the use of encryption practically standard for personal data. Hence, entities performing pattern matching on data they do not own must now be able to provide the functionality of keyword search on \emph{encrypted} data.

Existing solutions in searchable encryption suffer from one of two main disadvantages: either an exhaustive list of keywords needs to be hardcoded in the input ciphertexts, or the input must be \emph{tokenized}, massively increasing the size of the ciphertext. In both cases, the symmetric-key approach provides faster encryption, but also induces a token-regeneration step at each instantiation (i.e., essentially, for each user). Such approaches are not well suited when either the data owner is unable to choose all the relevant keywords, or when a single searcher (e.g., an IDS, a firewall, or an independent medical researcher) must screen ciphertexts from many different owners. Fast symmetric searchable encryption (SSE) alternatives also come with extensive leakage, which is not well understood and has recently come under attack.

In this work, we introduce Searchable Encryption with Shiftable Trapdoors (SEST), a new primitive which allows pattern matching with universal tokens, i.e., trapdoors that can function on ciphertexts produced by multiple entities and that allow matching keywords of arbitrary length against arbitrary ciphertexts. Our approach relies on a public-key encryption scheme and on bilinear pairings. We essentially project the plaintext bit by bit onto a multiplicative basis consisting of powers of a secret key. The keyword is projected onto the same basis, with the order of its bits encoded as a polynomial of degree equal to the keyword length. The searching entity receives unforgeable trapdoors for requested keywords and can match these against the input ciphertexts, thus finding out whether the pattern matched, and at which position of the plaintext the keyword can be found. In addition, very minor modifications to our solution enable it to handle regular expressions, such as fully or partly unknown characters in a keyword (namely wildcards or interval/subset searches).

Our scheme is a variation of Rabin-Karp and has many potential applications in deep-packet inspection on encrypted streams, searching on genomic data, as well as searching on encrypted structured data. Compared to other alternatives in the literature, our trapdoor size is only linear in the keyword length (and independent of the plaintext size), and we prove that the leakage to the searcher is only the trivial one, namely the ability to distinguish based on different search results of a single trapdoor on two different plaintexts. Although our proofs use a (marginally) interactive assumption, we argue that this is a relatively small price to pay for the flexibility and privacy that we are able to attain.
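
For reference, classical Rabin-Karp matching with a public rolling hash, of which the scheme is described as a variation, looks like this (the SEST construction replaces the public hash with pairing-based projections onto powers of a secret key):

# Classical Rabin-Karp pattern matching with a rolling polynomial hash,
# shown as the plaintext analogue of the scheme. The window hash is updated
# in O(1) per shift; full comparison confirms candidate matches.
def rabin_karp(text: str, pattern: str, base=256, mod=(1 << 61) - 1):
    n, m = len(text), len(pattern)
    if m > n:
        return []
    lead = pow(base, m - 1, mod)         # weight of the outgoing character
    ph = th = 0
    for i in range(m):                   # initial hashes of pattern and window
        ph = (ph * base + ord(pattern[i])) % mod
        th = (th * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        if ph == th and text[i:i + m] == pattern:
            hits.append(i)
        if i < n - m:                    # roll the window one character
            th = ((th - ord(text[i]) * lead) * base + ord(text[i + m])) % mod
    return hits

print(rabin_karp("abracadabra", "abra"))  # [0, 7]
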
20 February 2017
We study the notion of {\em ad hoc secure computation}, recently introduced by Beimel et al. (ITCS 2016), in the context of the {\em Private Simultaneous Messages} (PSM) model of Feige et al. (STOC 1994). In ad hoc secure computation we have $n$ parties that may potentially participate in a protocol but, at the actual time of execution, only $k$ of them, whose identity is {\em not} known in advance, actually participate. This situation is particularly challenging in the PSM setting, where protocols are non-interactive (a single message from each participating party to a special output party) and where the parties rely on pre-distributed, correlated randomness (which in the ad hoc setting has to take into account all possible sets of participants).

We present several different constructions of ad hoc PSM protocols from standard PSM protocols. These constructions imply, in particular, that efficient information-theoretic ad hoc PSM protocols exist for NC1 and different classes of log-space computation, and that efficient computationally secure ad hoc PSM protocols for polynomial-time computable functions can be based on a one-way function. As an application, we obtain an information-theoretic implementation of {\em order-revealing encryption} whose security holds for two messages.

We also consider the case where the actual number of participating parties $t$ may be larger than the minimal $k$ for which the protocol is designed to work. In this case, it is unavoidable that the output party learns the output corresponding to each subset of $k$ out of the $t$ participants. Therefore, a ``best possible security'' notion, requiring that this will be the {\em only} information that the output party learns, is needed. We present connections between this notion and the previously studied notion of {\em $t$-robust PSM} (also known as ``non-interactive MPC''). We show that constructions in this setting for even simple functions (like AND or threshold) can be translated into non-trivial instances of program obfuscation (such as {\em point function obfuscation} and {\em fuzzy point function obfuscation}, respectively). We view these results as a negative indication that protocols with ``best possible security'' are impossible to realize efficiently in the information-theoretic setting or require strong assumptions in the computational setting.
We address the problem of whether $t$-circular-secure encryption can be based on $(t-1)$-circular-secure encryption or, when $t = 1$, on semantic (CPA) security. While for $t = 1$ a folklore construction, based on CPA-secure encryption, can be used to build a 1-circular-secure encryption with the same secret-key and message space, no such constructions are known for the bit-encryption case, which is of particular importance in fully-homomorphic encryption. Also, for $t \geq 2$, all constructions of $t$-circular-secure encryption (bitwise or otherwise) are based on specific assumptions.

We make progress toward these problems by ruling out all fully black-box constructions of

-- 1-seed circular-secure public-key bit encryption from CPA-secure public-key encryption;

-- t-seed circular-secure public-key encryption from (t-1)-seed circular-secure public-key encryption, for any $t \geq 2$.

Informally, seed-circular security is a variant of the circular security notion in which the seed of the key-generation algorithm, instead of the secret key, is encrypted. We also show how to extend our first result to rule out a large and non-trivial class of constructions of 1-circular-secure bit encryption, which we dub key-isolating constructions.

Our separation model follows that of Gertner, Malkin and Reingold (FOCS '01), which is a weaker separation model than that of Impagliazzo and Rudich.
It is widely known that double encryption does not substantially increase the security of a block cipher. Indeed, the classical meet-in-the-middle attack recovers the $2k$-bit secret key at the cost of roughly $2^k$ off-line enciphering operations, in addition to very few known plaintext-ciphertext pairs -- that is, essentially as efficiently as attacking the underlying cipher with a $k$-bit key.
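
The attack is easy to demonstrate at toy scale; the sketch below mounts it against double AES encryption with an artificially reduced 16-bit key space (purely illustrative):

# Toy meet-in-the-middle attack on double encryption: with 16-bit keys
# (k1, k2), build a table of E_k1(P) for all k1, then for each k2 check
# whether D_k2(C) appears in the table. Cost ~2*2^16 cipher calls instead
# of 2^32. AES with a 16-bit effective key is used purely for illustration.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def expand(k16: int) -> bytes:               # 16-bit toy key -> AES-128 key
    return k16.to_bytes(2, "big") * 8

def enc(k16, block):
    return Cipher(algorithms.AES(expand(k16)), modes.ECB()).encryptor().update(block)

def dec(k16, block):
    return Cipher(algorithms.AES(expand(k16)), modes.ECB()).decryptor().update(block)

k1, k2 = 0x1234, 0xBEEF
P = os.urandom(16)
C = enc(k2, enc(k1, P))                      # double encryption

table = {enc(g1, P): g1 for g1 in range(1 << 16)}   # forward half
for g2 in range(1 << 16):                           # backward half
    mid = dec(g2, C)
    if mid in table:
        print("candidate key pair:", hex(table[mid]), hex(g2))
        break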

This paper revisits double encryption under the lens of multi-user security. We prove that its security degrades only very mildly with an increasing number of users, as opposed to single encryption, where security drops linearly. More concretely, we give a tight bound for the multi-user security of double encryption as a pseudorandom permutation in the ideal-cipher model, and describe matching attacks.

Our contribution is also conceptual: To prove our result, we enhance and generalize the generic technique recently proposed by Hoang and Tessaro for lifting single-user to multi-user security. We believe this technique to be broadly applicable.
ePrint Report Privacy-Preserving Search of Similar Patients in Genomic Data Gilad Asharov, Shai Halevi, Yehuda Lindell, Tal Rabin
The growing availability of genomic data holds great promise for advancing medicine and research, but unlocking its full potential requires adequate methods for protecting the privacy of individuals whose genome data we use. One example of this tension is running Similar Patient Queries on remote genomic data: in this setting, a doctor who holds the genome of his/her patient may try to find other individuals with ``close'' genomic data, and use the data of these individuals to help diagnose and find effective treatment for that patient's conditions. This is clearly a desirable mode of operation; however, the privacy-exposure implications are considerable, so we would like to carry out the above ``closeness'' computation in a privacy-preserving manner.

Secure-computation techniques offer a way out of this dilemma, but the high cost of computing edit distance privately poses a great challenge. Wang et al. recently proposed an efficient solution [ACM CCS '15] for situations where the genome sequences are so close that the edit distance between two genomes can be well approximated just by looking at the indexes at which they differ from the reference genome. However, this solution does not extend well to cases with high divergence among individual genomes, and different techniques are needed there.
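
The flavor of that index-based approximation can be sketched as follows (substitutions-only toy variant calling, not the method of Wang et al. in full):

# Sketch of the index-based approximation attributed to Wang et al.: when two
# genomes are close to a public reference, their edit distance is well
# approximated from the positions (and values) at which each differs from the
# reference. Toy data, substitutions only, simplified scoring.
def variants(reference: str, genome: str):
    # toy variant calling: substitutions only, equal-length sequences
    return {i: c for i, (r, c) in enumerate(zip(reference, genome)) if r != c}

def approx_edit_distance(ref, g1, g2):
    v1, v2 = variants(ref, g1), variants(ref, g2)
    shared = {i for i in v1 if i in v2 and v1[i] == v2[i]}
    return len(set(v1) | set(v2)) - len(shared)  # positions where g1, g2 differ

ref = "ACGTACGTACGT"
print(approx_edit_distance(ref, "ACGAACGTACGT", "ACGAACGTACGA"))  # 1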

In this work we put forward a new approach for highly efficient secure computation of an approximation of the edit distance, which works well even in settings with much higher divergence. We present contributions on two fronts. First, the design of an approximation method that yields an efficient private computation. Second, further optimizations of the two-party protocol. Our tests indicate that the approximation method works well even in regions of the genome where the distance between individuals is 5\% or more with many insertions and deletions (compared to 99.5\% similarity with mostly substitutions, as considered by Wang et al.). As for speed, our protocol implementation takes just a few seconds to run on databases with thousands of records, each of length thousands of alleles, and it scales almost linearly with both the database size and the length of the sequences in it. As an example, on the datasets of the recent iDASH competition, it takes less than two seconds to find the nearest five records to a query in a size-500 dataset of length-3500 sequences. This is 2-3 orders of magnitude faster than using state-of-the-art secure protocols for exact computation.
Constraint-hiding constrained PRFs (CHCPRFs), initially studied by Boneh, Lewi and Wu [PKC 2017], are constrained PRFs where the constrained key hides the description of the constraint. Envisioned with powerful applications such as searchable encryption, private-detectable watermarking and symmetric deniable encryption, the only known candidates of CHCPRFs are based on indistinguishability obfuscation or multilinear maps with strong security properties.
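
As background, the classic GGM-style prefix-constrained PRF is easy to sketch; note that its constrained key reveals the prefix, which is exactly the gap CHCPRFs close (a textbook construction, not the paper's):

# Background sketch: a GGM-style prefix-constrained PRF, evaluated by
# walking a binary tree of PRG (here: hash) expansions. The constrained key
# is simply the tree node at the prefix, so the constraint is visible to the
# key holder; CHCPRFs, the subject of the paper, additionally hide it.
import hashlib

def step(node: bytes, bit: str) -> bytes:       # one tree-descent step
    return hashlib.sha256(node + bit.encode()).digest()

def prf(key: bytes, x: str) -> bytes:           # evaluate by walking the tree
    node = key
    for bit in x:
        node = step(node, bit)
    return node

def constrain(key: bytes, prefix: str) -> bytes:
    return prf(key, prefix)                     # node value at the prefix

def constrained_eval(ckey: bytes, prefix: str, x: str) -> bytes:
    assert x.startswith(prefix)                 # only inputs under the prefix
    node = ckey
    for bit in x[len(prefix):]:
        node = step(node, bit)
    return node

k = b"\x00" * 32
ck = constrain(k, "010")
assert constrained_eval(ck, "010", "010110") == prf(k, "010110")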

In this paper we construct CHCPRFs for all NC1 circuits from the Learning with Errors assumption. The construction draws heavily from the graph-induced multilinear maps by Gentry, Gorbunov and Halevi [TCC 2015], as well as the existing lattice-based PRFs. In fact, our construction can be viewed as an instance of the GGH15 approach where security can be reduced to LWE.

We also show how to build reusable garbled circuits (RGC), or equivalently private-key function-hiding functional encryption with 1-key security, from CHCPRFs. This provides an approach to constructing RGC different from that of Goldwasser et al. [STOC 2013].
The Principal Ideal Problem (resp. Short Principal Ideal Problem), abbreviated PIP (resp. SPIP), consists of finding a generator (resp. short generator) of a principal ideal in the ring of integers of a number field. Several lattice-based cryptosystems rely on the presumed hardness of these two problems. In practice, most of them do not use an arbitrary number field but a power-of-two cyclotomic field. The Smart and Vercauteren fully homomorphic encryption scheme and the multilinear map of Garg, Gentry, and Halevi epitomize this common restriction. Recently, Cramer, Ducas, Peikert, and Regev showed that solving the SPIP in such cyclotomic rings boils down to solving the PIP. In this paper, we present a heuristic algorithm that solves the PIP in prime-power cyclotomic fields in subexponential time $L(1/2)$ in the discriminant of the number field. This is achieved by descending to its totally real subfield. The implementation of our algorithm allows us to recover in practice the secret key of the Smart and Vercauteren scheme for the smallest proposed parameters (in dimension 256).
ePrint Report Partitioned Group Password-Based Authenticated Key Exchange Dario Fiore, Maria Isabel Gonzalez Vasco, Claudio Soriente
Group Password-Based Authenticated Key Exchange (GPAKE) allows a group of users to establish a secret key, as long as all of them share the same password. However, in existing GPAKE protocols, as soon as one user runs the protocol with a non-matching password, all the others abort and no key is established. In this paper we seek a more flexible, yet secure, GPAKE and put forward the notion of partitioned GPAKE. Partitioned GPAKE tolerates users that run the protocol on different passwords. Through a protocol run, any subgroup of users that indeed share a password establishes a session key, factoring out the ``noise'' of inputs by users holding different passwords. At the same time, any two keys, each established by a different subgroup of users, are pairwise independent if the corresponding subgroups hold different passwords. We also introduce the notion of password privacy for partitioned GPAKE, which is a kind of affiliation-hiding property ensuring that an adversary should not be able to tell whether any given set of users shares a password. Finally, we propose an efficient instantiation of partitioned GPAKE building on an unforgeable symmetric encryption scheme and a PAKE by Bellare et al. Our proposal is proven secure in the random oracle/ideal cipher model, and requires only two communication rounds.
Lattice-based cryptography is a promising candidate for building cryptographic primitives that are secure against quantum algorithms. The Learning with Errors (LWE) problem is one of the most important hardness assumptions on which lattice-based constructions base their security. Recently, Albrecht et al. (Journal of Mathematical Cryptology, 2015) presented the Sage module LWE-Estimator for estimating the hardness of concrete LWE instances, making the choice of parameters for lattice-based primitives easier and better comparable. The effectiveness of algorithms for solving LWE often depends on the number of LWE instances, called LWE samples, that are given. To give lower bounds on the hardness of concrete instances, it is assumed that each algorithm is given enough samples to run with optimal runtime per instance; that is, the optimal number of samples is assumed to be available. However, in cryptographic applications the optimal number of samples is often not available, only a small number of samples. This leads to a more conservative choice of parameters than necessary in applications.

This work aims to improve the parameter choice with respect to the described problem. Our contribution is twofold: First, we analyze the hardness of LWE instances given a restricted number of samples. For this, we briefly describe algorithms proposed in the literature to solve LWE and estimate their computational cost while taking a restricted number of samples into account. Second, we extend the Sage module LWE-Estimator based on our theoretical results. Furthermore, we evaluate the resulting implementation and show that restricting the number of samples has a significant impact on the hardness of LWE instances.
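
For intuition, the sample count $m$ is simply the number of rows of the LWE instance an attacker gets to see (toy, insecure parameters):

# Sketch of what "number of samples" means: m LWE samples form a system
# (A, b = A*s + e mod q) with A of shape (m, n). Many solving algorithms
# want m to be large, while applications often expose only m close to n,
# which is the restriction the extended estimator accounts for. Toy values.
import random

def lwe_instance(n=32, m=64, q=3329, sigma=3.0):
    s = [random.randrange(q) for _ in range(n)]
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    b = [(sum(a * si for a, si in zip(row, s)) + round(random.gauss(0, sigma))) % q
         for row in A]
    return A, b, s  # an attacker sees only (A, b), with exactly m rows

A, b, _ = lwe_instance()
print(len(A), "samples of dimension", len(A[0]))
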
ePrint Report Revisiting AES Related-Key Differential Attacks with Constraint Programming David Gérault, Pascal Lafourcade, Marine Minier, Christine Solnon
The Advanced Encryption Standard (AES) is one of the most studied symmetric encryption schemes. During the last years, several attacks have been discovered in different adversary models. In this paper, we focus on related-key differential attacks, where the adversary may introduce differences in plaintext pairs and also in keys. We show that Constraint Programming (CP) can be used to model these attacks, and that it allows us to efficiently find all optimal related-key differential characteristics for AES-128, AES-192 and AES-256. In particular, we improve the best related-key differential for the whole AES-256 and give the best related-key differential on 10 rounds of AES-192, which is the differential trail with the longest path. Those results allow us to improve existing related-key distinguishers, basic related-key attacks and $q$-multicollisions on AES-256.
