International Association for Cryptologic Research

IACR News Central

Here you can see all recent updates to the IACR webpage.

22 February 2017
This work introduces PassPhone, a new smartphone-based authentication scheme that outsources user verification to a trusted third party without sacrificing privacy: neither can the trusted third party learn the relation between users and service providers, nor can service providers learn their users' relations to other service providers. When employed as a second factor in conjunction with, for instance, passwords as a first factor, our scheme maximizes the deployability of two-factor authentication for service providers while maintaining user privacy. We conduct a twofold formal analysis of our scheme, the first regarding its general security, and the second regarding the anonymity and unlinkability of its users. Moreover, we provide an automatic analysis using AVISPA, a comparative evaluation to existing schemes under Bonneau et al.'s framework, and an evaluation of a prototypical implementation.
Algebraic manipulation detection codes are a class of error-detecting codes which have found numerous applications in cryptography. In this paper we extend these codes to defeat general algebraic attacks; we call such codes general algebraic manipulation detection (GAMD) codes. Positive results are shown for the existence of GAMDs for the families of tampering functions corresponding to point additions and affine functions over a finite field. Compared to non-malleable codes, we demonstrate both positive and negative results regarding the existence of GAMDs for arbitrary families of tampering functions.
ePrint Report Trust Is Risk: A Decentralized Financial Trust Platform Orfeas Stefanos Thyfronitis Litos, Dionysis Zindros
Centralized reputation systems use stars and reviews and thus require algorithm secrecy to avoid manipulation. In autonomous open source decentralized systems this luxury is not available. We create a reputation network for decentralized marketplaces where the trust each user gives to the other users is quantifiable and expressed in monetary terms. We introduce a new model for bitcoin wallets in which user coins are split among trusted associates. Direct trust is defined using shared bitcoin accounts via bitcoin's 1-of-2 multisig. Indirect trust is subsequently defined transitively. This enables formal game theoretic arguments pertaining to risk analysis. We prove that risk and maximum flows are equivalent in our model and that our system is Sybil-resilient. Our system allows for concrete financial decisions on the subjective monetary amount a pseudonymous party can be trusted with. Risk remains invariant under a direct trust redistribution operation followed by a purchase.
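To make the risk/max-flow equivalence concrete, here is a minimal Python sketch (not the authors' implementation) that computes indirect trust between two users as a maximum flow over direct-trust capacities. The names and satoshi amounts are invented for illustration, and Edmonds-Karp stands in for whatever flow algorithm a real deployment would use.

```python
from collections import deque, defaultdict

def max_flow(edges, source, sink):
    """Edmonds-Karp maximum flow; edges maps (u, v) -> capacity."""
    residual = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), cap in edges.items():
        residual[(u, v)] += cap
        adj[u].add(v)
        adj[v].add(u)  # reverse edge needed for the residual graph
    total = 0
    while True:
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:   # BFS for an augmenting path
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(residual[(u, v)] for u, v in path)  # bottleneck capacity
        for u, v in path:
            residual[(u, v)] -= aug
            residual[(v, u)] += aug
        total += aug

# Hypothetical direct trust (satoshis in shared 1-of-2 multisig outputs):
edges = {("alice", "bob"): 50_000, ("alice", "carol"): 20_000,
         ("bob", "dave"): 30_000, ("carol", "dave"): 40_000}
print(max_flow(edges, "alice", "dave"))  # 50000: Alice's indirect trust to Dave
```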
In 2003, Schnorr introduced random sampling to find very short lattice vectors, as an alternative to enumeration. An improved variant has been used in the past few years by Kashiwabara et al. to solve the largest Darmstadt SVP challenges. However, the behaviour of random sampling and its variants is not well understood: all analyses so far rely on a questionable heuristic assumption, namely that the lattice vectors produced by some algorithm are uniformly distributed over certain parallelepipeds. In this paper, we introduce lattice enumeration with discrete pruning, which generalizes random sampling and its variants, and provides a novel geometric description based on partitions of the n-dimensional space. We obtain what is arguably the first sound analysis of random sampling, by showing how discrete pruning can be rigorously analyzed under the well-known Gaussian heuristic, in the same model as the Gama-Nguyen-Regev analysis of pruned enumeration from EUROCRYPT '10, albeit using different tools: we show how to efficiently compute the volume of the intersection of a ball with a box, and how to efficiently approximate a large sum of many such volumes, based on statistical inference. Furthermore, we show how to select good parameters for discrete pruning by enumerating integer points in an ellipsoid. Our analysis is backed up by experiments and makes it possible, for the first time, to reasonably estimate the success probability of random sampling and its variants, and to make comparisons with previous forms of pruned enumeration. Our work unifies random sampling and pruned enumeration and shows that they are complementary: both have different characteristics and offer different trade-offs to speed up enumeration.
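The volume of the intersection of a ball with a box is central to the analysis above. The paper computes it efficiently via statistical inference; the naive Monte Carlo estimator below, assuming nothing beyond uniform sampling, merely illustrates which quantity is being computed.

```python
# Naive Monte Carlo illustration (not the paper's method): estimate
# vol(B(R) intersect box), where the box is a product of intervals [a_i, b_i].
import random

def ball_box_volume(radius, box, samples=200_000, seed=1):
    """Estimate the volume of {x : ||x|| <= radius} intersected with the box."""
    rng = random.Random(seed)
    box_volume = 1.0
    for a, b in box:
        box_volume *= (b - a)
    hits = 0
    for _ in range(samples):
        point = [rng.uniform(a, b) for a, b in box]
        if sum(x * x for x in point) <= radius * radius:
            hits += 1
    return box_volume * hits / samples

# Sanity check in dimension 2: quarter disc of radius 1 has area pi/4.
print(ball_box_volume(1.0, [(0, 1), (0, 1)]))  # ~0.785
```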
ePrint Report Linear Cryptanalysis: Key Schedules and Tweakable Block Ciphers Thorsten Kranz, Friedrich Wiemer, Gregor Leander
This paper serves as a systematization of knowledge of linear cryptanalysis and provides novel insights in the areas of key schedule design and tweakable block ciphers. We examine the linear hull theorem step by step in a general and consistent setting. Based on this, we study the influence of the choice of key schedule on linear cryptanalysis, a notoriously difficult but important subject. Moreover, we investigate how tweakable block ciphers can be analyzed with respect to linear cryptanalysis, a topic that surprisingly has not been scrutinized until now.
We address the problem of substring searchable encryption. A single user produces a big stream of data and later wants to learn the positions in the string where certain patterns occur. Although current techniques exploit auxiliary data structures to achieve efficient substring search on the server side, the cost at the user side may be prohibitive. We revisit the work of substring searchable encryption in order to reduce the storage cost of auxiliary data structures. Our solution builds on the suffix array, which allows optimal storage cost $O(n)$ with a small hidden factor, where $n$ is the size of the string. On top of that we build an encrypted index that allows the server to answer substring queries without learning either the query or the result. We identify the leakages of the scheme following the work of Curtmola et al. [sse2] and we analyze the security of the protocol in the real-ideal framework. Moreover, we implemented our scheme and the state-of-the-art protocol [Chase] to demonstrate the performance advantage of our solution with precise benchmark results. We improve the storage overhead of the encrypted index by a factor of 1.8 and its computation time by a factor of 4 on $10^6$-character data streams.
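For readers unfamiliar with the underlying data structure, the plaintext sketch below shows how a suffix array supports substring search. It illustrates only the unencrypted building block, not the paper's encrypted index, and the naive construction here is quadratic rather than optimal.

```python
# Plaintext suffix-array demo: substring search via binary search over
# lexicographically sorted suffixes.
import bisect

def suffix_array(s):
    """All suffix start positions, sorted by suffix (naive O(n^2 log n))."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def find_occurrences(s, sa, pattern):
    """Positions where pattern occurs, via binary search on the suffix array."""
    suffixes = [s[i:] for i in sa]  # naive; real code compares in place
    lo = bisect.bisect_left(suffixes, pattern)
    hi = bisect.bisect_right(suffixes, pattern + "\uffff")
    return sorted(sa[lo:hi])

text = "banana"
sa = suffix_array(text)
print(find_occurrences(text, sa, "ana"))  # [1, 3]
```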
We define the concept of an encryptor combiner. Roughly, such a combiner takes as input n public keys for a public key encryption scheme, and produces a new combined public key. Anyone knowing a secret key for one of the input public keys can learn the secret key for the combined public key, but an outsider who just knows the input public keys (who can therefore compute the combined public key for himself) cannot decrypt ciphertexts from the combined public key. We actually think of public keys more generally as encryption procedures, which can correspond to, say, encrypting to a particular identity under an IBE scheme or encrypting to a set of attributes under an ABE scheme.

We show that encryptor combiners satisfying certain natural properties can give natural constructions of multi-party non-interactive key exchange, low-overhead broadcast encryption, and hierarchical identity-based encryption. We then show how to construct two different encryptor combiners. Our first is built from universal samplers (which can in turn be built from indistinguishability obfuscation) and is sufficient for each application above, in some cases improving on existing obfuscation-based constructions. Our second is built from lattices, and is sufficient for hierarchical identity-based encryption. Thus, encryptor combiners serve as a new abstraction that (1) is a useful tool for designing cryptosystems, (2) unifies constructing hierarchical IBE from vastly different assumptions, and (3) provides a target for instantiating obfuscation applications from better tools.
ePrint Report Practical Functional Encryption for Quadratic Functions with Applications to Predicate Encryption Carmen Elisabetta Zaira Baltico, Dario Catalano, Dario Fiore, Romain Gay
We present two practically efficient functional encryption schemes for a large class of quadratic functionalities. Specifically, our constructions enable the computation of so-called bilinear maps on encrypted vectors. This represents a practically relevant class of functions that includes, for instance, multivariate quadratic polynomials (over the integers). Our realizations work over asymmetric bilinear groups and are surprisingly efficient and easy to implement. For instance, in our most efficient scheme the public key and each ciphertext consist of $2n+1$ and $4n+2$ group elements respectively, where $n$ is the dimension of the encrypted vectors, while secret keys are only two group elements. Our two schemes build on similar ideas, but develop them in a different way in order to achieve distinct goals. Our first scheme is proved (selectively) secure under standard assumptions, while our second construction is concretely more efficient and is proved (adaptively) secure in the generic group model.

As a byproduct of our functional encryption schemes, we show new predicate encryption schemes for degree-two polynomial evaluation, where ciphertexts consist of only $O(n)$ group elements. This significantly improves the $O(n^2)$ bound one would get from inner product encryption-based constructions.
A recent work of Boyle et al. (Crypto 2016) suggests that ``group-based'' cryptographic protocols, namely ones that only rely on a cryptographically hard (Abelian) group, can be surprisingly powerful. In particular, they present succinct two-party protocols for securely computing branching programs and NC1 circuits under the DDH assumption, providing the first alternative to fully homomorphic encryption.

In this work we further explore the power of group-based secure computation protocols, improving both their asymptotic and concrete efficiency. We obtain the following results.

- Black-box use of the group. We modify the succinct protocols of Boyle et al. so that they only make black-box use of the underlying group, eliminating an expensive non-black-box setup phase.

- Round complexity. For any constant number of parties, we obtain 2-round MPC protocols based on a PKI setup under the DDH assumption. Prior to our work, such protocols were only known using fully homomorphic encryption or indistinguishability obfuscation.

- Communication complexity. Under DDH, we present a secure 2-party protocol for any NC1 or log-space computation with n input bits and m output bits using $n + (1+o(1))m + \mathrm{poly}(\lambda)$ bits of communication, where $\lambda$ is a security parameter. In particular, our protocol can generate n instances of bit-oblivious-transfer using $(4+o(1)) \cdot n$ bits of communication. This gives the first constant-rate OT protocol under DDH.

- Computation complexity. We present several techniques for improving the computational cost of the share conversion procedure of Boyle et al., improving the concrete efficiency of group-based protocols by several orders of magnitude.
ePrint Report Bitcoin as a Transaction Ledger: A Composable Treatment Christian Badertscher, Ueli Maurer, Daniel Tschudi, Vassilis Zikas
Bitcoin is perhaps the most prominent example of a distributed cryptographic protocol that is extensively used in practice. Nonetheless, existing security proofs are property-based, and as such they do not support composition.

In this work we put forth a universally composable treatment of the Bitcoin protocol. We specify the goal that Bitcoin aims to achieve as a shared ledger functionality, a.k.a. a global setup, in the (G)UC model of Canetti et al. [TCC'07]. Our ledger functionality is weaker than the one recently proposed by Kiayias, Zhou, and Zikas [EUROCRYPT'16], but unlike the latter, which is arguably not implementable under the Bitcoin assumptions, we prove that the one proposed here is securely UC-realized under standard assumptions by an appropriate abstraction of Bitcoin as a UC protocol. We further show how known property-based approaches can be cast as special instances of our treatment and how their underlying assumptions can be cast in (G)UC without restricting the environment or the adversary.
Pattern matching is essential to applications such as filtering content in data streams, searching on genomic data, and searching for correlations in medical data. However, increasing concerns over user and data privacy, exacerbated by threats of mass surveillance, have made the use of encryption practically standard for personal data. Hence, entities performing pattern matching on data they do not own must now be able to provide the functionality of keyword search on \emph{encrypted} data.

Existing solutions in searchable encryption suffer from one of two main disadvantages: either an exhaustive list of keywords needs to be hardcoded in the input ciphertexts, or the input must be \emph{tokenized}, massively increasing the size of the ciphertext. In both cases, the symmetric-key approach provides faster encryption, but also induces a token-regeneration step at each instantiation (i.e., essentially, for each user). Such approaches are not well suited when either the data owner is unable to choose all the relevant keywords, or when a single searcher (e.g., an IDS, a firewall, or an independent medical researcher) must screen ciphertexts from many different owners. Fast symmetric searchable encryption (SSE) alternatives also come with extensive leakage, which is not well understood and has recently come under attack.

In this work, we introduce Searchable Encryption with Shiftable Trapdoors (SEST), a new primitive that allows for pattern matching with universal tokens, i.e., trapdoors that can function on ciphertexts produced by multiple entities and make it possible to match keywords of arbitrary length against arbitrary ciphertexts. Our approach relies on a public-key encryption scheme and on bilinear pairings. We essentially project the plaintext bit by bit onto a multiplicative basis consisting of powers of a secret key. The keyword is also projected onto the same basis, with the order of its bits encoded as a polynomial of degree equal to the keyword length. The searching entity receives unforgeable trapdoors for requested keywords and can match these against the input ciphertexts, thus learning whether, and at which position of the plaintext, the keyword occurs. In addition, very minor modifications to our solution enable it to handle regular expressions, such as fully or partly unknown characters in a keyword (namely wildcards or interval/subset searches).

Our scheme is a variation of Rabin-Karp and has many potential applications in deep-packet inspection on encrypted streams, searching on genomic data, as well as searching on encrypted structured data. Compared to other alternatives in the literature, our trapdoor size is only linear in the keyword length (and independent of the plaintext size), and we prove that the leakage to the searcher is only the trivial one, namely the ability to distinguish based on different search results of a single trapdoor on two different plaintexts. Although our proofs use a (marginally) interactive assumption, we argue that this is a relatively small price to pay for the flexibility and privacy that we are able to attain.
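Since the scheme is described as a variation of Rabin-Karp, a plaintext sketch of the classical algorithm may help fix intuitions. The base and modulus below are arbitrary illustrative choices, and nothing here reflects the paper's pairing-based machinery.

```python
# Classical (plaintext) Rabin-Karp: slide a rolling hash over the text and
# confirm candidate positions by direct comparison.
def rabin_karp(text, pattern, base=256, mod=1_000_000_007):
    n, m = len(text), len(pattern)
    if m > n:
        return []
    # Hash of the pattern and of the first window of the text.
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    high = pow(base, m - 1, mod)  # weight of the outgoing character
    matches = []
    for i in range(n - m + 1):
        if t_hash == p_hash and text[i:i + m] == pattern:
            matches.append(i)
        if i < n - m:  # roll the window one character to the right
            t_hash = ((t_hash - ord(text[i]) * high) * base
                      + ord(text[i + m])) % mod
    return matches

print(rabin_karp("GATTACAGATTACA", "TTACA"))  # [2, 9]
```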
21 February 2017
Event date: 25 June 2017
Submission deadline: 15 March 2017
Notification: 31 March 2017
20 February 2017
We study the notion of {\em ad hoc secure computation}, recently introduced by Beimel et al. (ITCS 2016), in the context of the {\em Private Simultaneous Messages} (PSM) model of Feige et al. (STOC 2004). In ad hoc secure computation we have $n$ parties that may potentially participate in a protocol but, at the actual time of execution, only $k$ of them, whose identity is {\em not} known in advance, actually participate. This situation is particularly challenging in the PSM setting, where protocols are non-interactive (a single message from each participating party to a special output party) and where the parties rely on pre-distributed, correlated randomness (which in the ad hoc setting has to take into account all possible sets of participants).

We present several different constructions of ad hoc PSM protocols from standard PSM protocols. These constructions imply, in particular, that efficient information-theoretic ad hoc PSM protocols exist for NC1 and different classes of log-space computation, and that efficient computationally secure ad hoc PSM protocols for polynomial-time computable functions can be based on a one-way function. As an application, we obtain an information-theoretic implementation of {\em order-revealing encryption} whose security holds for two messages.

We also consider the case where the actual number of participating parties $t$ may be larger than the minimal $k$ for which the protocol is designed to work. In this case, it is unavoidable that the output party learns the output corresponding to each subset of $k$ out of the $t$ participants. Therefore, a ``best possible security'' notion, requiring that this will be the {\em only} information that the output party learns, is needed. We present connections between this notion and the previously studied notion of {\em $t$-robust PSM} (also known as ``non-interactive MPC''). We show that constructions in this setting for even simple functions (like AND or threshold) can be translated into non-trivial instances of program obfuscation (such as {\em point function obfuscation} and {\em fuzzy point function obfuscation}, respectively). We view these results as a negative indication that protocols with ``best possible security'' are impossible to realize efficiently in the information-theoretic setting or require strong assumptions in the computational setting.
We address the problem of whether t-circular-secure encryption can be based on (t-1)-circular-secure encryption or, for t = 1, on semantic (CPA) security. While for t = 1 a folklore construction, based on CPA-secure encryption, can be used to build a 1-circular-secure encryption with the same secret-key and message space, no such constructions are known for the bit-encryption case, which is of particular importance in fully-homomorphic encryption. Also, for $t \geq 2$, all known constructions of t-circular-secure encryption (bitwise or otherwise) are based on specific assumptions.

We make progress toward these problems by ruling out all fully-blackbox constructions of

-- 1-seed circular-secure public-key bit encryption from CPA-secure public-key encryption;

-- t-seed circular-secure public-key encryption from (t-1)-seed circular-secure public-key encryption, for any $t \geq 2$.

Informally, seed-circular security is a variant of the circular security notion in which the seed of the key-generation algorithm, instead of the secret key, is encrypted. We also show how to extend our first result to rule out a large and non-trivial class of constructions of 1-circular-secure bit encryption, which we dub key-isolating constructions.

Our separation model follows that of Gertner, Malkin and Reingold (FOCS’01), which is a weaker separation model than that of Impagliazzo and Rudich.
It is widely known that double encryption does not substantially increase the security of a block cipher. Indeed, the classical meet-in-the-middle attack recovers the $2k$-bit secret key at the cost of roughly $2^k$ off-line enciphering operations, in addition to very few known plaintext-ciphertext pairs; that is, essentially as efficiently as for the underlying cipher with a $k$-bit key.
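A toy sketch of this classical attack follows, with a deliberately weak 16-bit toy permutation and 8-bit keys, both invented for illustration (no real cipher behaves like this): the attacker tabulates the forward half and meets it with the backward half.

```python
MASK = 0xFFFF
A = 0x9E35                      # odd, hence invertible modulo 2^16
A_INV = pow(A, -1, 1 << 16)

def enc(k, x):                  # toy 16-bit keyed permutation (NOT a real cipher)
    return ((x ^ k) * A) & MASK

def dec(k, y):
    return ((y * A_INV) & MASK) ^ k

k1, k2 = 0x3A, 0xC5             # secret 8-bit keys of the double encryption
pairs = [(m, enc(k2, enc(k1, m))) for m in (0x1234, 0xBEEF)]  # known pairs

m0, c0 = pairs[0]
forward = {enc(g1, m0): g1 for g1 in range(256)}     # 2^8 forward encryptions
candidates = []
for g2 in range(256):                                # 2^8 backward decryptions
    mid = dec(g2, c0)
    if mid in forward:                               # the two halves "meet"
        candidates.append((forward[mid], g2))
m1, c1 = pairs[1]                                    # filter with a second pair
keys = [(g1, g2) for g1, g2 in candidates if enc(g2, enc(g1, m1)) == c1]
print(keys)   # w.h.p. exactly [(58, 197)], i.e. (0x3A, 0xC5)
```

The work is roughly $2 \cdot 2^8$ cipher calls instead of the $2^{16}$ calls of exhaustive search over both keys, which is the asymmetry the abstract describes.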

This paper revisits double encryption under the lens of multi-user security. We prove that its security degrades only very mildly with an increasing number of users, as opposed to single encryption, where security drops linearly. More concretely, we give a tight bound for the multi-user security of double encryption as a pseudorandom permutation in the ideal-cipher model, and describe matching attacks.

Our contribution is also conceptual: To prove our result, we enhance and generalize the generic technique recently proposed by Hoang and Tessaro for lifting single-user to multi-user security. We believe this technique to be broadly applicable.
ePrint Report Privacy-Preserving Search of Similar Patients in Genomic Data Gilad Asharov, Shai Halevi, Yehuda Lindell, Tal Rabin
The growing availability of genomic data holds great promise for advancing medicine and research, but unlocking its full potential requires adequate methods for protecting the privacy of individuals whose genome data we use. One example of this tension is running a Similar Patient Query on remote genomic data: in this setting, a doctor who holds the genome of his/her patient may try to find other individuals with ``close'' genomic data, and use the data of these individuals to help diagnose and find effective treatment for that patient's conditions. This is clearly a desirable mode of operation; however, the privacy implications are considerable, so we would like to carry out the above ``closeness'' computation in a privacy-preserving manner.

Secure-computation techniques offer a way out of this dilemma, but the high cost of computing edit distance privately poses a great challenge. Wang et al. recently proposed [ACM CCS '15] an efficient solution for situations where the genome sequences are so close that the edit distance between two genomes can be well approximated just by looking at the indexes in which they differ from the reference genome. However, this solution does not extend well to cases with high divergence among individual genomes, and different techniques are needed there.

In this work we put forward a new approach for highly efficient secure computation of an approximation of the edit distance that works well even in settings with much higher divergence. We present contributions on two fronts. First, the design of an approximation method that yields an efficient private computation. Second, further optimizations of the two-party protocol. Our tests indicate that the approximation method works well even in regions of the genome where the distance between individuals is 5\% or more with many insertions and deletions (compared to 99.5\% similarity with mostly substitutions, as considered by Wang et al.). As for speed, our protocol implementation takes just a few seconds to run on databases with thousands of records, each of length thousands of alleles, and it scales almost linearly with both the database size and the length of the sequences in it. As an example, on the datasets of the recent iDASH competition, it takes less than two seconds to find the nearest five records to a query in a size-500 dataset of length-3500 sequences. This is 2-3 orders of magnitude faster than using state-of-the-art secure protocols for exact computation.
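For reference, the quantity the protocol approximates is the classical edit distance, computable in the clear with the Wagner-Fischer dynamic program below. This plaintext sketch is only background for the metric, not part of the secure protocol.

```python
# Classical edit distance (Wagner-Fischer DP, O(nm) time, O(m) space):
# minimum number of insertions, deletions, and substitutions between strings.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

print(edit_distance("GATTACA", "GACTATA"))  # 2
```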
Constraint-hiding constrained PRFs (CHCPRFs), initially studied by Boneh, Lewi and Wu [PKC 2017], are constrained PRFs where the constrained key hides the description of the constraint. Envisioned with powerful applications such as searchable encryption, private-detectable watermarking and symmetric deniable encryption, the only known candidates of CHCPRFs are based on indistinguishability obfuscation or multilinear maps with strong security properties.

In this paper we construct CHCPRFs for all NC1 circuits from the Learning with Errors assumption. The construction draws heavily from the graph-induced multilinear maps by Gentry, Gorbunov and Halevi [TCC 2015], as well as the existing lattice-based PRFs. In fact, our construction can be viewed as an instance of the GGH15 approach where security can be reduced to LWE.

We also show how to build from CHCPRFs reusable garbled circuits (RGC), or equivalently private-key function-hiding functional encryption with 1-key security. This provides a different approach to constructing RGC from that of Goldwasser et al. [STOC 2013].
The Principal Ideal Problem (resp. Short Principal Ideal Problem), abbreviated PIP (resp. SPIP), consists in finding a generator (resp. short generator) of a principal ideal in the ring of integers of a number field. Several lattice-based cryptosystems rely on the presumed hardness of these two problems. In practice, most of them do not use an arbitrary number field but a power-of-two cyclotomic field. The Smart and Vercauteren fully homomorphic encryption scheme and the multilinear map of Garg, Gentry, and Halevi epitomize this common restriction. Recently, Cramer, Ducas, Peikert, and Regev showed that solving the SPIP in such cyclotomic rings boils down to solving the PIP. In this paper, we present a heuristic algorithm that solves the PIP in prime-power cyclotomic fields in subexponential time L(1/2) in the discriminant of the number field. This is achieved by descending to its totally real subfield. The implementation of our algorithm allows us to recover in practice the secret key of the Smart and Vercauteren scheme for the smallest proposed parameters (in dimension 256).
ePrint Report Partitioned Group Password-Based Authenticated Key Exchange Dario Fiore, Maria Isabel Gonzalez Vasco, Claudio Soriente
Group Password-Based Authenticated Key Exchange (GPAKE) allows a group of users to establish a secret key, as long as all of them share the same password. However, in existing GPAKE protocols, as soon as one user runs the protocol with a non-matching password, all the others abort and no key is established. In this paper we seek a more flexible, yet secure, GPAKE and put forward the notion of partitioned GPAKE. Partitioned GPAKE tolerates users that run the protocol on different passwords. Through a protocol run, any subgroup of users that indeed share a password establishes a session key, factoring out the ``noise'' of inputs by users holding different passwords. At the same time, any two keys, each established by a different subgroup of users, are pairwise independent if the corresponding subgroups hold different passwords. We also introduce the notion of password-privacy for partitioned GPAKE, which is a kind of affiliation-hiding property, ensuring that an adversary should not be able to tell whether any given set of users share a password. Finally, we propose an efficient instantiation of partitioned GPAKE building on an unforgeable symmetric encryption scheme and a PAKE by Bellare et al. Our proposal is proven secure in the random oracle/ideal cipher model, and requires only two communication rounds.
Lattice-based cryptography is a promising candidate for building cryptographic primitives that are secure against quantum algorithms. The Learning with Errors (LWE) problem is one of the most important hardness assumptions on which lattice-based constructions base their security. Recently, Albrecht et al. (Journal of Mathematical Cryptology, 2015) presented the Sage module LWE-Estimator to estimate the hardness of concrete LWE instances, making the choice of parameters for lattice-based primitives easier and better comparable. The effectiveness of algorithms for solving LWE often depends on the number of LWE instances, called LWE samples, that are given. To give lower bounds on the hardness of concrete instances, it is assumed that each algorithm is given enough samples to run with optimal runtime per instance; that is, the optimal number of samples is assumed to be available. However, in cryptographic applications the optimal number of samples is often not given, but only a small number of samples. This leads to a more conservative choice of parameters than necessary in applications.

This work aims to improve the parameter choice with respect to the described problem. Our contribution is twofold: First, we analyze the hardness of LWE instances given a restricted number of samples. For this, we briefly describe algorithms proposed in the literature to solve LWE and estimate their computational cost while taking a restricted number of samples into account. Second, we extend the Sage module LWE-Estimator based on our theoretical results. Furthermore, we evaluate the resulting implementation and show that restricting the number of samples has a significant impact on the hardness of LWE instances.
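As a point of reference for what "number of samples" means here, the toy generator below produces LWE samples. The parameters are illustrative only and far too small for any security, and the bounded noise stands in for the discrete Gaussian used in practice.

```python
# Toy LWE sample generation: a sample is (a, b = <a, secret> + e mod q).
import random

def lwe_samples(n, q, m, secret, noise_bound, seed=0):
    """Return m toy LWE samples for the given secret vector."""
    rng = random.Random(seed)
    out = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]
        e = rng.randint(-noise_bound, noise_bound)   # stand-in for a Gaussian
        b = (sum(ai * si for ai, si in zip(a, secret)) + e) % q
        out.append((a, b))
    return out

rng = random.Random(42)
n, q = 8, 97
s = [rng.randrange(q) for _ in range(n)]
# An attacker limited to m = 10 samples faces a harder problem than one given
# the algorithm-optimal number of samples; that gap is what this work quantifies.
samples = lwe_samples(n, q, m=10, secret=s, noise_bound=2)
print(len(samples))
```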
ePrint Report Revisiting AES Related-Key Differential Attacks with Constraint Programming David Gérault, Pascal Lafourcade, Marine Minier, Christine Solnon
The Advanced Encryption Standard (AES) is one of the most studied symmetric encryption schemes. Over the last years, several attacks have been discovered in different adversary models. In this paper, we focus on related-key differential attacks, where the adversary may introduce differences in plaintext pairs and also in keys. We show that Constraint Programming (CP) can be used to model these attacks, and that it allows us to efficiently find all optimal related-key differential characteristics for AES-128, AES-192 and AES-256. In particular, we improve the best related-key differential for the whole AES-256 and give the best related-key differential on 10 rounds of AES-192, which is the differential trail with the longest path. These results allow us to improve existing related-key distinguishers, basic related-key attacks and $q$-multicollisions on AES-256.
The Test Vector Leakage Assessment (TVLA) methodology is a qualitative tool relying on Welch's T-test to assess the security of cryptographic implementations against side-channel attacks. Despite known limitations (e.g., risks of false negatives and positives), it is sometimes considered a pass-fail test to determine whether such implementations are "safe" or not (without a clear definition of what "safe" means). In this note, we clarify the limited quantitative meaning of this test when used as a standalone tool. For this purpose, we first show that the straightforward application of this approach to assess the security of a masked implementation is not sufficient. More precisely, we show that even in a simple (more precisely, univariate) case study that seems best suited for the TVLA methodology, detection (or lack thereof) with Welch's T-test can be totally disconnected from the actual security level of an implementation. To this end, we put forward the case of a realistic masking scheme that looks very safe from the TVLA point of view and is nevertheless easy to break. We then discuss this result in more general terms and argue that this limitation is shared by all "moment-based" security evaluations. We conclude the note positively, by describing how to use moment-based analyses as a useful ingredient of side-channel security evaluations, to determine a "security order".
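For concreteness, the statistic at the heart of TVLA is Welch's t. The sketch below computes it for two hypothetical sets of trace samples and applies the conventional |t| > 4.5 detection threshold; it illustrates the statistic only, not a full leakage evaluation.

```python
# Welch's t-statistic between a fixed-input and a random-input trace set
# at a single time sample (values below are hypothetical power measurements).
from statistics import mean, variance

def welch_t(x, y):
    vx, vy = variance(x) / len(x), variance(y) / len(y)  # sample variances / n
    return (mean(x) - mean(y)) / (vx + vy) ** 0.5

fixed_set  = [100.1, 99.8, 100.3, 100.0, 99.9, 100.2]
random_set = [101.5, 101.9, 102.1, 101.4, 101.8, 102.0]
t = welch_t(fixed_set, random_set)
print(abs(t) > 4.5, t)  # detection flag and the statistic itself
```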
ePrint Report Modifying an Enciphering Scheme after Deployment Paul Grubbs, Thomas Ristenpart, Yuval Yarom
Assume that a symmetric encryption scheme has been deployed and used with a secret key. We later must change the encryption scheme in a way that preserves the ability to decrypt (a subset of) previously encrypted plaintexts. Frequent real-world examples are migrating from a token-based encryption system for credit-card numbers to a format-preserving encryption (FPE) scheme, or extending the message space of an already deployed FPE. The ciphertexts may be stored in systems from which it is not easy or not efficient to retrieve them (to re-encrypt the plaintext under the new scheme). We introduce methods for functionality-preserving modifications to encryption, focusing particularly on deterministic, length-preserving ciphers such as those used to perform format-preserving encryption. We provide a new technique, which we refer to as the Zig-Zag construction, that allows one to combine two ciphers with different domains in a way that results in a secure cipher on one domain. We explore its use in the two settings above, replacing token-based systems and extending message spaces. We develop appropriate security goals and prove security relative to them, assuming the underlying ciphers are themselves secure as strong pseudorandom permutations.
This paper describes a radically different privacy, security and integrity solution. Dispersed Cryptography converts the cloud from a security threat into a security asset by combining a standard stream cipher and the Quotient Ring Transform (QRT). The result is an integrated error correction/encryption algorithm. This encoding disperses data, breaking it into many smaller pieces and scattering them to different sites. No single site is critical; any can be lost without losing data. No single site can access data, even if the cryptovariable (secret key) is compromised.

The resulting system is more flexible and seamlessly adds both data integrity and security. The underlying codes are linear, and therefore have homomorphic properties and may be used in coding-based quantum-resistant cryptography.
19 February 2017
Job Posting Tenure track or tenured position in Cryptology Aalto University, Finland

The vacancy is open to talented individuals who are interested in an excellent opportunity to pursue a successful scientific career. The position is targeted primarily at candidates for the Assistant Professor level. However, candidates with an outstanding record for Associate or Full Professor levels may also be considered.

The professorship is a joint position between the Department of Computer Science (http://cs.aalto.fi/en/) and the Department of Mathematics and Systems analysis (http://math.aalto.fi/en/). With strong research groups in systems security, theoretical computer science, algebra and discrete mathematics, and stochastics, Aalto University is emerging as a leader in information security. The selected candidate is expected to establish independent research and teaching in cryptology. We solicit applications from candidates with expertise in any area of modern cryptology including, but not limited to, symmetric-key and public-key cryptography and cryptanalysis, information-theoretic and complexity-theoretic perspectives of cryptology, as well as research on implementation and application of cryptographic primitives.

Closing date for applications: 31 March 2017

Contact: Professor N. Asokan, tel. +358 50 4836465, or Professor Camilla Hollanti, tel. +358 50 5628987; for questions related to the recruitment process, HR Coordinator Laura Kuusisto-Noponen.

e-mails: firstname.lastname (at) aalto.fi or, for Prof. N. Asokan, firstinitial.lastname (at) aalto.fi

More information: http://www.aalto.fi/en/about/careers/jobs/view/1210/

17 February 2017
Job Posting Ph.D. student in the area of Implementations of Post-Quantum Cryptography Cryptographic Engineering Research Group at George Mason University, USA

Cryptographic Engineering Research Group (CERG) at George Mason University, USA, is seeking qualified candidates for the Graduate Research Assistant position in the area of efficient implementations of Post-Quantum Cryptosystems and attacks against these cryptosystems. The desired qualifications include strong mathematical background in algebra and number theory, experience in hardware design using hardware description languages, and knowledge of C and scripting languages, such as Python. Additional experience in Magma or SageMath, ASIC and FPGA design, software/hardware codesign, High-Level Synthesis, embedded software development, side-channel analysis, GPU programming, and Linux operating system is a plus.

The position is open starting in Fall 2017. Qualified candidates should apply to the ECE PhD program at George Mason University by March 15, 2017, indicating Dr. Gaj and/or Dr. Kaps as their preferred academic advisors. In parallel, early e-mail contact with Dr. Gaj at kgaj (at) gmu.edu is highly recommended.

Closing date for applications: 15 March 2017

Contact: Kris Gaj, kgaj (at) gmu.edu, http://ece.gmu.edu/~kgaj

More information: https://cryptography.gmu.edu/team

16 February 2017
ePrint Report Hashing Garbled Circuits for Free Xiong Fan, Chaya Ganesh, Vladimir Kolesnikov
We introduce {\em Free Hash}, a new approach to generating a Garbled Circuit (GC) hash at no extra cost during GC generation. This is in contrast with state-of-the-art approaches, which hash GCs at a computational cost of up to $6\times$ that of GC generation. GC hashing is at the core of the cut-and-choose technique of GC-based secure function evaluation (SFE).

Our main idea is to intertwine hash generation/verification with GC generation and evaluation. While we {\em allow} an adversary to generate a GC $\widehat{GC}$ whose hash collides with an honestly generated $GC$, such a $\widehat{GC}$ will w.h.p. fail evaluation and cheating will be discovered. Our GC hash is simply a (slightly modified) XOR of all the gate table rows of the GC. It is compatible with Free XOR and half-gates garbling, and can be made to work with many cut-and-choose SFE protocols.
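A simplified sketch of the XOR-accumulation idea follows. Note that the paper's hash is a slightly modified XOR with a security argument tied to GC evaluation, so this illustrates only the cost profile, not the actual construction; the gate tables below are random placeholders.

```python
# XOR-accumulate every row of every garbled-gate table "for free" while
# generating it: one XOR per row, no separate hashing pass.
import os

ROW_BYTES = 16  # one garbled-table row, e.g. a 128-bit ciphertext

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def hash_tables(gate_tables):
    """XOR of all rows of all gate tables (simplified Free Hash idea)."""
    acc = bytes(ROW_BYTES)
    for table in gate_tables:
        for row in table:
            acc = xor_bytes(acc, row)
    return acc

# Hypothetical circuit: three garbled gates of four rows each.
tables = [[os.urandom(ROW_BYTES) for _ in range(4)] for _ in range(3)]
print(hash_tables(tables).hex())
```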

With today's network speeds not far behind hardware-assisted fixed-key garbling throughput, eliminating the GC hashing cost will significantly improve SFE performance. Our estimates show a substantial cost reduction in typical settings, and up to a factor of $6$ in specialized applications relying on GC hashes.

We implemented the GC hashing algorithm and report on its performance.
Cryptographic APIs like PKCS#11 are interfaces to trusted hardware where keys are stored; the secret keys should never leave the trusted hardware in plaintext. In PKCS#11 it is possible to give keys conflicting roles, leading to a number of key-recovery attacks. To prevent these attacks, one can authenticate the attributes of keys when wrapping, but this is not standard in PKCS#11. Alternatively, one can configure PKCS#11 to place additional restrictions on the commands permitted by the API.

Bortolozzo et al. proposed a configuration of PKCS#11, called the Secure Templates Patch (STP), supporting symmetric encryption and key wrapping. However, the security guarantees for STP given by Bortolozzo et al. are with respect to a weak attacker model. STP has been implemented as a set of filtering rules in Caml Crush, a software filter for PKCS#11 that rejects certain API calls. The filtering rules in Caml Crush extend STP by allowing users to compute and verify MACs and so the previous analysis of STP does not apply to this configuration.

We give a rigorous analysis of STP, including the extension used in Caml Crush. Our contributions are as follows:

(i) We show that the extension of STP used in Caml Crush is insecure.

(ii) We propose a strong, computational security model for configurations of PKCS#11 where the adversary can adaptively corrupt keys and prove that STP is secure in this model.

(iii) We prove the security of an extension of STP that adds support for public-key encryption and digital signatures.
ePrint Report Composable and Robust Outsourced Storage Christian Badertscher, Ueli Maurer
The security of data outsourcing mechanisms has become a crucial aspect of today's IT infrastructures. Security goals range from ensuring storage integrity, confidentiality, and access pattern hiding, to proofs of storage, proofs of ownership, and secure deduplication techniques. Despite sharing a common setting, previous security analyses of these tasks are often performed in different models and in a stand-alone fashion, which makes it hard to assess the overall security of a protocol or application involving several security schemes. In this work, we fill this gap and provide a composable model to capture the above security goals. We instantiate the basic client-server setting in this model, where the goal of the honest client is to retain security in the presence of a malicious server. Three specific contributions of the paper, which may be of independent interest, are:

1.) We present a novel and composable definition for secure and robust outsourcing schemes. Our definition is stronger than previous definitions for oblivious RAM or software protection, and assures strong security guarantees against active attacks: not only is an attacker unable to learn the access pattern, but the scheme is also resilient to errors and prevents targeted attacks on specific locations. We provide a protocol based on the well-known Path ORAM scheme achieving this strong security goal. We justify the need for such a strong notion in practice and show that several existing schemes cannot achieve this level of security.

2.) We present a novel and composable definition for proofs of retrievability, capturing the guarantee that a successful audit implies that the current server state allows the client to retrieve his data. As part of our study, we develop an audit mechanism, based on secure and robust outsourcing schemes, that is similar to the construction by Cash et al. (Eurocrypt 2013), but is universally composable and fault-tolerant.

3.) We assess the security of the standard challenge-response audit mechanism, in which the server has to compute a hash $H(F||c)$ on the file $F$ concatenated with a uniformly random challenge $c$ chosen by the client. Being concerned with composable security, we prove that this audit mechanism is not secure, even in the random oracle model, without assuming additional restrictions on the server behavior. The security of this basic audit scheme was implicitly assumed in Ristenpart et al. (Eurocrypt 2011). To complete the picture, we state the additional assumptions for this audit mechanism to be provably secure and investigate the (in)applicability of hash-function constructions in this setting.
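For clarity, the basic challenge-response audit under discussion looks as follows, sketched with SHA-256 as a stand-in hash. This is precisely the mechanism shown above to lack composable security without further assumptions on the server; the honest-case check below says nothing about a malicious server.

```python
# Basic challenge-response storage audit: server proves it can compute
# H(F || c) for a fresh random challenge c on the outsourced file F.
import hashlib, os

def audit_challenge():
    return os.urandom(32)                      # uniformly random challenge c

def server_response(stored_file, c):
    return hashlib.sha256(stored_file + c).digest()   # H(F || c)

def client_verify(original_file, c, response):
    return hashlib.sha256(original_file + c).digest() == response

F = b"outsourced file contents"
c = audit_challenge()
print(client_verify(F, c, server_response(F, c)))  # True for an honest server
```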
In the recent IEEE communication letter ``Grouping-Proof-Distance-Bounding Protocols: Keep All Your Friends Close'' by Karlsson and Mitrokotsa, a protocol for grouping-proof distance-bounding (GPDB) is proposed. In this letter, we show that the proof generated by the proposed GPDB protocol does not actually prove anything. Furthermore, we provide a construction towards a distance-bounding grouping-proof; however, it remains unclear whether one can ever truly combine (privacy-preserving) distance-bounding and a grouping-proof.
