International Association for Cryptologic Research

# IACR News Central

Here you can see all recent updates to the IACR webpage.

20 January 2017
We present novel variants of the dual-lattice attack against LWE in the presence of an unusually short secret. These variants are informed by recent progress in BKW-style algorithms for solving LWE. Applying them to parameter sets suggested by the homomorphic encryption libraries HElib and SEAL yields revised security estimates. Our techniques scale the exponent of the dual-lattice attack by a factor of $(2\,L)/(2\,L+1)$ when $\log q = \Theta(L \log n)$, when the secret has constant Hamming weight $h$ and where $L$ is the maximum depth of supported circuits. They also allow halving the dimension of the lattice under consideration at a multiplicative cost of $2^{h}$ operations. Moreover, our techniques yield revised concrete security estimates. For example, both libraries promise 80 bits of security for LWE instances with $n=1024$ and $\log_2 q \approx 47$, while the techniques described in this work lead to estimated costs of 68 bits (SEAL) and 62 bits (HElib).
ePrint Report Practical Passive Leakage-Abuse Attacks Against Symmetric Searchable Encryption Matthieu Giraud, Alexandre Anzala-Yamajako, Olivier Bernard, Pascal Lafourcade
The problem of securely outsourcing client data with search functionality has given rise to efficient solutions called Symmetric Searchable Encryption (SSE) schemes. These schemes are provably secure with respect to an explicit leakage profile; however, determining how much information can be inferred in practice from this leakage remains difficult. First, we refine and formalize the leakage hierarchy introduced by Cash et al. in 2015. Second, we further extend the analysis of existing attacks to better understand their real-world efficiency and the practicality of their hypothesis. Finally, we present the first complete practical attacks on L4, L3 and L2 leakage profiles. Our attacks are passive and only assume the very realistic knowledge of a small sample of plaintexts; moreover, we show their devastating effect on real-world datasets.
Blind signatures are at the core of e-cash systems and have numerous other applications. In this work we construct efficient blind and partially blind signature schemes over bilinear groups in the standard model. Our schemes yield short signatures consisting of only a couple of elements from the shorter source group and have very short communication overhead consisting of $1$ group element on the user side and $3$ group elements on the signer side. At $80$-bit security, our schemes yield signatures consisting of only $40$ bytes, which is approximately $70\%$ shorter than the most efficient existing scheme with the same security in the standard model. Verification in our schemes requires only a couple of pairings. Our schemes compare favorably in every efficiency measure to all existing counterparts offering the same security in the standard model. In fact, the efficiency of our signing protocol as well as the signature size compare favorably even to many existing schemes in the random oracle model. For instance, our signatures are shorter than those of Brands' scheme, which is at the heart of the U-Prove anonymous credential system used in practice. The unforgeability of our schemes is based on new intractability assumptions of a ``one-more'' type, which we show are intractable in the generic group model, whereas their blindness holds w.r.t. malicious signing keys in the information-theoretic sense. We also give variants of our schemes for a vector of messages.
ePrint Report Anonymous contribution of data Matthew McKague, David Eyers
Many application domains depend on the collection of aggregate statistics from a large number of participants. In such situations, often the individual data points are not required. Indeed participants may wish to preserve the privacy of their specific data despite being willing to contribute to the aggregate statistics. We propose a protocol that allows a server to gather aggregate statistics, while providing anonymity to participants. Our protocol is information theoretically secure so that the server gains no information about participants’ data other than what is revealed by the aggregate statistics themselves.
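The abstract does not spell out the protocol, but the standard route to information-theoretically private aggregation is additive secret sharing across several non-colluding servers. The following Python sketch illustrates only that general technique, not necessarily this paper's construction; the modulus `P` and server count are arbitrary choices:

```python
import secrets

P = 2**61 - 1  # public prime modulus; all contributed values assumed < P

def share(value, n_servers):
    """Split `value` into additive shares that sum to it mod P.
    Any n_servers - 1 shares are uniformly random, revealing nothing."""
    shares = [secrets.randbelow(P) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def aggregate(all_shares):
    """Each server sums the shares it received; adding the per-server
    totals yields the aggregate statistic and nothing else."""
    server_totals = [sum(col) % P for col in zip(*all_shares)]
    return sum(server_totals) % P

participants = [17, 42, 99]
all_shares = [share(v, 3) for v in participants]
assert aggregate(all_shares) == sum(participants) % P
```

As long as at least one server keeps its column of shares private, the individual contributions stay hidden in the information-theoretic sense.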
19 January 2017
ePrint Report Accumulators with Applications to Anonymity-Preserving Revocation Foteini Baldimtsi, Jan Camenisch, Maria Dubovitskaya, Anna Lysyanskaya, Leonid Reyzin, Kai Samelin, Sophia Yakoubov
Membership revocation is essential for cryptographic applications, from traditional PKIs to group signatures and anonymous credentials. Of the various solutions for the revocation problem that have been explored, dynamic accumulators are one of the most promising. We propose Braavos, a new, RSA-based, dynamic accumulator. It has optimal communication complexity and, when combined with efficient zero-knowledge proofs, provides an ideal solution for anonymous revocation. For the construction of Braavos we use a modular approach: we show how to build an accumulator with better functionality and security from accumulators with fewer features and weaker security guarantees. We then describe an anonymous revocation component (ARC) that can be instantiated using any dynamic accumulator. ARC can be added to any anonymous system, such as anonymous credentials or group signatures, in order to equip it with a revocation functionality. Finally, we implement ARC with Braavos and plug it into Idemix, the leading implementation of anonymous credentials. This work resolves, for the first time, the problem of practical revocation for anonymous credential systems.
18 January 2017
ePrint Report Five Rounds are Sufficient and Necessary for the Indifferentiability of Iterated Even-Mansour Yuanxi Dai, Yannick Seurin, John Steinberger, Aishwarya Thiruvengadam
We prove that the 5-round iterated Even-Mansour (IEM) construction (which captures the high-level structure of the class of key-alternating ciphers) with a non-idealized key-schedule (such as the trivial key-schedule, where all round keys are equal) is indifferentiable from an ideal cipher. In a separate result, we also prove that five rounds are necessary by describing an attack against the corresponding 4-round construction. This closes the gap regarding the exact number of rounds for which the IEM construction with a non-idealized key-schedule is indifferentiable from an ideal cipher, which was previously only known to lie between four and twelve.
Yao's garbled circuits have been extensively used in Secure Function Evaluation (SFE). Several optimizations have been proposed to improve the efficiency of garbled circuits. Kolesnikov and Schneider (2008) proposed the free-XOR technique. Naor, Pinkas, and Sumner (1999) introduced the garbled row-reduction technique GRR3 to reduce each garbled gate to three ciphertexts, Pinkas et al. (2009) proposed GRR2, and Zahur, Rosulek, and Evans (2015) introduced a half-gates technique to design free-XOR compatible GRR2. The GRR2, half-gates, and free-XOR garbling schemes improve efficiency by leaking the locations of XOR gates in the circuit. This kind of information leakage is acceptable for SFE protocols, though it may be unacceptable for other protocols such as Private Function Evaluation (PFE). For example, these techniques could not be used to improve the efficiency of existing non-universal-circuit-based constant-round PFE protocols. The first result of this paper is a Gate Privacy preserving Garbled Row Reduction technique GPGRR2 for designing garbled circuits with at most two ciphertexts for each garbled gate. Compared with the state-of-the-art gate-privacy-preserving garbling scheme GRR3, the scheme GPGRR2 reduces the garbled circuit size by a factor of at least 33%. The second result is the design of a linear (over the integers) garbling scheme to garble a single odd gate to one ciphertext. Zahur, Rosulek, and Evans (2015) proved that a linear garbling scheme garbles each odd gate to at least $2$ ciphertexts. Our result shows that Zahur et al.'s result should be formulated more appropriately by restricting the ``linear'' concept explicitly to the field $GF({2^t})$. The third result is to use the GPGRR2 scheme to reduce the number of ciphertexts in non-universal-circuit-based PFE protocols by a factor of 25%.
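As background on the free-XOR technique mentioned above: the garbler fixes a global secret offset Δ and represents each wire's two labels as a pair differing by Δ, so XOR gates need no ciphertexts at all. A minimal sketch of that invariant follows, with labels modeled as 128-bit integers; this illustrates the idea only, not any particular implementation:

```python
import secrets

DELTA = secrets.randbits(128)  # global secret offset, chosen once by the garbler

def new_wire():
    """Return (label0, label1) with label1 = label0 XOR DELTA (free-XOR form)."""
    l0 = secrets.randbits(128)
    return (l0, l0 ^ DELTA)

A = new_wire()
B = new_wire()
# Garbling an XOR gate costs zero ciphertexts: the garbler simply defines
# the output wire's zero-label as the XOR of the input zero-labels.
C0 = A[0] ^ B[0]  # output wire labels are (C0, C0 ^ DELTA)

# The evaluator holds one label per input wire and just XORs them;
# the result is always the correct output label.
for x in (0, 1):
    for y in (0, 1):
        out_label = A[x] ^ B[y]
        assert out_label == C0 ^ ((x ^ y) * DELTA)
```

This works because the Δ terms cancel when `x == y` and survive once when they differ; it is also exactly what leaks the positions of XOR gates, which is the gate-privacy issue the paper addresses.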
In this work, we significantly improve the efficiency of non-malleable codes in the split state model, by constructing a code with codeword length $|s|+O(k)$, where $|s|$ is the length of the message, and $k$ is the security parameter. This is a substantial improvement over previous constructions, both asymptotically and concretely.

Our construction relies on a new primitive which we define and study, called $\ell$-more extractable hash functions. This notion, which may be of independent interest, is strictly stronger than the previous notion of extractable hash by Goldwasser et al. (Eprint '11) and Bitansky et al. (ITCS '12, Eprint '14), yet we can instantiate it under the same assumption used for the previous extractable hash function (a variant of the Knowledge of Exponent Assumption).
ePrint Report PePTCAP: A Privacy-enhancing Protocol for (Temporary) Car Access Provision Iraklis Symeonidis, Abdelrahaman Aly, Mustafa A. Mustafa, Bart Preneel
This paper proposes a novel physical keyless car sharing protocol that allows users to share their cars conveniently. In spite of many advantages, keyless car sharing systems come with substantial security and privacy challenges. Within this work, we design a concrete decentralised Privacy-enhanced Protocol for (Temporary) Car Access Provision, named PePTCAP. PePTCAP uses multiparty computation based on Shamir's secret sharing scheme, which allows the generation, update, revocation and distribution of an access token for a car in a privacy-preserving manner. In order to solve disputes and to deal with law enforcement requests, our protocol provides forensic evidence of car incidents based on threshold secret sharing. We perform a security and privacy analysis of PePTCAP, followed by a complexity analysis and practical efficiency and time estimations for the multiparty computation elements of PePTCAP under realistic scenarios.
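For readers unfamiliar with the building block named above, Shamir's $(t,n)$ threshold scheme hides a secret as the constant term of a random degree-$(t-1)$ polynomial: any $t$ shares reconstruct it, fewer reveal nothing. A minimal sketch over a prime field; the modulus and parameters here are illustrative, not those of PePTCAP:

```python
import secrets

P = 2**127 - 1  # prime field modulus (a Mersenne prime)

def share(secret, t, n):
    """Shamir (t, n) sharing: evaluate a random degree-(t-1) polynomial
    with constant term `secret` at points x = 1..n."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^-1 via Fermat
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[1:4]) == 123456789
```

Threshold reconstruction is what makes the forensic-evidence mechanism possible: the evidence only becomes readable once enough authorized parties cooperate.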
15 January 2017
In an inner-product functional encryption scheme, the plaintexts are vectors and the owner of the secret key can delegate the ability to compute weighted sums of the coefficients of the plaintext of any ciphertext. Recently, many inner-product functional encryption schemes have been proposed. However, none of the known schemes is secure against chosen-ciphertext attacks (IND-FE-CCA). We present a generic construction of IND-FE-CCA inner-product functional encryption from projective hash functions with homomorphic properties. We show concrete instantiations based on the DCR assumption, the DDH assumption, and, more generally, any Matrix DDH assumption.
ePrint Report Double-base scalar multiplication revisited Daniel J. Bernstein, Chitchanok Chuengsatiansup, Tanja Lange
This paper reduces the number of field multiplications required for scalar multiplication on conservative elliptic curves. For an average 256-bit integer n, this paper's multiply-by-n algorithm takes just 7.47M per bit on twisted Edwards curves -x^2+y^2=1+dx^2y^2 with small d. The previous record, 7.62M per bit, was unbeaten for seven years.

Unlike previous record-setting algorithms, this paper's multiply-by-n algorithm uses double-base chains. The new speeds rely on advances in tripling speeds and on advances in constructing double-base chains. This paper's new tripling formula for twisted Edwards curves takes just 11.4M, and its new algorithm for constructing an optimal double-base chain for n takes just (log n)^{2.5+o(1)} bit operations.

Extending this double-base algorithm to double-scalar multiplications, as in signature verification, takes 8.80M per bit to compute n_1P+n_2Q. Previous techniques used 9.34M per bit.
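To illustrate the double-base idea (without reproducing the paper's $(\log n)^{2.5+o(1)}$ chain-construction algorithm), here is a simple greedy decomposition of an integer into signed terms $\pm 2^a 3^b$; during scalar multiplication each term corresponds to a run of doublings and triplings plus one extra addition. The greedy strategy and parameter choices are illustrative only:

```python
def nearest_pow23(n):
    """Return the {2,3}-integer 2^a * 3^b closest to n (n >= 1)."""
    best = 1
    t3 = 1
    while t3 <= 2 * n:
        # for this power of 3, try the two nearest powers of 2
        a = max((n // t3).bit_length() - 1, 0)
        for cand in (t3 << a, t3 << (a + 1)):
            if abs(cand - n) < abs(best - n):
                best = cand
        t3 *= 3
    return best

def double_base(n):
    """Greedy signed double-base representation: n = sum of +/- 2^a * 3^b.
    Each step at least halves |n|, so the loop terminates."""
    terms = []
    while n != 0:
        sign = 1 if n > 0 else -1
        t = nearest_pow23(abs(n))
        terms.append(sign * t)
        n -= sign * t
    return terms

n = 314159265358979323846
terms = double_base(n)
assert sum(terms) == n
```

Fewer, larger terms mean fewer expensive additions, which is why cheap tripling formulas and good chain construction both matter for the speeds reported above.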
14 January 2017
ePrint Report Low-Complexity Cryptographic Hash Functions Benny Applebaum, Naama Haramaty, Yuval Ishai, Eyal Kushilevitz, Vinod Vaikuntanathan
Cryptographic hash functions are efficiently computable functions that shrink a long input into a shorter output while achieving some of the useful security properties of a random function. The most common type of such hash function is the {\em collision-resistant} hash function (CRH), which prevents an efficient attacker from finding a pair of inputs on which the function has the same output.

Despite the ubiquitous role of hash functions in cryptography, several of the most basic questions regarding their computational and algebraic complexity have remained open. In this work we settle most of these questions under new, but arguably quite conservative, cryptographic assumptions, whose study may be of independent interest. Concretely, we obtain the following results:

(1) Low-complexity CRH. Assuming the intractability of finding short codewords in natural families of linear error-correcting codes, there are CRH that shrink the input by a constant factor and have a {\em constant algebraic degree} over $Z_2$ (as low as 3), or even {\em constant output locality and input locality}. Alternatively, CRH with an arbitrary polynomial shrinkage can be computed by {\em linear-size} circuits.

(2) Win-win results. If low-degree CRH with good shrinkage {\em do not} exist, this has useful consequences for learning algorithms and data structures.

(3) Degree-2 hash functions. Assuming the conjectured intractability of solving a random system of quadratic equations over $Z_2$, a uniformly random degree-2 mapping is a {\em universal one-way hash function} (UOWHF). UOWHF relaxes CRH by forcing the attacker to find a collision with a random input picked by a challenger. On the other hand, a uniformly random degree-2 mapping is {\em not} a CRH. We leave the existence of degree-2 CRH open, and relate it to open questions on the existence of degree-2 randomized encodings of functions.
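To make result (3) concrete, a uniformly random degree-2 mapping over $Z_2$ can be sampled by choosing, for each output bit, a random XOR of monomials $x_i$ and $x_i x_j$ plus a constant. The toy parameters below (16 bits in, 12 bits out) are illustrative choices only:

```python
import random

random.seed(1)  # reproducible toy example
n_in, n_out = 16, 12  # a shrinking mapping: 16 input bits -> 12 output bits

# All monomials of degree at most 2 (linear terms and pairwise products).
monomials = [(i,) for i in range(n_in)] + \
            [(i, j) for i in range(n_in) for j in range(i + 1, n_in)]

# Each output bit is a uniformly random degree-2 polynomial over Z_2:
# include each monomial with probability 1/2, plus a random constant.
polys = [([m for m in monomials if random.random() < 0.5], random.getrandbits(1))
         for _ in range(n_out)]

def H(x_bits):
    """Evaluate the random degree-2 mapping on a list of 0/1 input bits."""
    out = []
    for terms, const in polys:
        bit = const
        for m in terms:
            prod = 1
            for i in m:
                prod &= x_bits[i]
            bit ^= prod
        out.append(bit)
    return out

x = [random.getrandbits(1) for _ in range(n_in)]
assert len(H(x)) == n_out  # shrinks 16 bits to 12 bits
```

The paper's point is that inverting such a mapping on a random challenge input (the UOWHF game) is plausibly hard, even though finding some colliding pair (the CRH game) is easy for degree 2.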
13 January 2017
ePrint Report Privacy-Preserving Classification on Deep Neural Network Hervé Chabanne, Amaury de Wargny, Jonathan Milgram, Constance Morel, Emmanuel Prouff
Neural Networks (NNs) are today increasingly used in Machine Learning, where they have become deeper and deeper to accurately model or classify high-level abstractions of data. Their development, however, also gives rise to important data privacy risks. This observation motivated Microsoft researchers to propose a framework called Cryptonets. The core idea is to combine simplifications of the NN with Fully Homomorphic Encryption (FHE) techniques to get both confidentiality of the manipulated data and efficiency of the processing. While efficiency and accuracy are demonstrated when the number of non-linear layers is small (e.g., $2$), Cryptonets unfortunately becomes ineffective for deeper NNs, which leaves the problem of privacy-preserving classification open in these contexts. This work successfully addresses this problem by combining the original ideas of Cryptonets' solution with the batch normalization principle introduced at ICML 2015 by Ioffe and Szegedy. We experimentally validate the soundness of our approach with a neural network with $6$ non-linear layers. When applied to the MNIST database, it matches the accuracy of the best non-secure versions, thus significantly improving upon Cryptonets.
ePrint Report Analysis of the NORX Core Permutation Alex Biryukov, Aleksei Udovenko, Vesselin Velichkov
NORX is one of the fifteen authenticated encryption algorithms that have reached the third round of the CAESAR competition. NORX is built using the sponge-based Monkey Duplex construction. In this note we analyze the core permutation $F$. We show that it has rotational symmetries on different structure levels. This yields simple distinguishing properties for the permutation, which propagate with very high probability or even probability one. We also investigate differential symmetries in NORX at the word level. A new type of truncated differentials called symmetric truncated differentials (STD) is proposed. It is shown that, under the Markov assumption, up to $2.125$ rounds of the $F$ function of NORX32 and NORX64 can be distinguished using STD. Finally, we note that our analysis covers only the permutation $F$ and does not immediately threaten the security claims of the designers.
Implementation security for lattice-based cryptography is still a vastly unexplored field. At CHES 2016, the very first side-channel attack on a lattice-based signature scheme was presented. Later, shuffling was proposed as an inexpensive means to protect the Gaussian sampling component against such attacks. However, the concrete effectiveness of this countermeasure has never been evaluated. We change that by presenting an in-depth analysis of the shuffling countermeasure. Our analysis consists of two main parts. First, we perform a side-channel attack on a Gaussian sampler implementation. We combine templates with a recovery of data-dependent branches, which are inherent to samplers. We show that an adversary can realistically recover some samples with very high confidence. Second, we present a new attack against the shuffling countermeasure in the context of Gaussian sampling and lattice-based signatures. We do not attack the shuffling algorithm as such, but exploit differing distributions of certain variables. We give a broad analysis of our attack by considering multiple modeled SCA adversaries. We show that a simple version of shuffling is not an effective countermeasure. With our attack, a profiled SCA adversary can recover the key by observing only 7000 signatures. A second version of this countermeasure, which uses Gaussian convolution in conjunction with shuffling twice, can increase side-channel security and the number of required signatures significantly. Here, roughly 285000 observations are needed for a successful attack. Yet, this number is still practical.
GlobalPlatform (GP) card specifications are the de facto standards for the smart card industry. Being highly sensitive, GP specifications were defined with respect to stringent security requirements. In this paper, we analyze the cryptographic core of these requirements; i.e., the family of Secure Channel Protocols (SCP). Our main results are twofold. First, we demonstrate a theoretical attack against SCP02, which is the most popular protocol in the SCP family. We discuss the scope of our attack by presenting an actual scenario in which a malicious entity can exploit it in order to recover encrypted messages. Second, we investigate the security of SCP03 that was introduced as an amendment in 2009. We find that it provably satisfies strong notions of security. Of particular interest, we prove that SCP03 withstands algorithm substitution attacks (ASAs) defined by Bellare et al. that may lead to secret mass surveillance. Our findings highlight the great value of the paradigm of provable security for standards and certification, since unlike extensive evaluation, it formally guarantees the absence of security flaws.
ePrint Report Honey Encryption for Language Marc Beunardeau, Houda Ferradi, Rémi Géraud, David Naccache
Honey Encryption (HE), introduced by Juels and Ristenpart (Eurocrypt 2014), is an encryption paradigm designed to produce ciphertexts yielding plausible-looking but bogus plaintexts upon decryption with wrong keys. Thus brute-force attackers need to use additional information to determine whether they indeed found the correct key.

At the end of their paper, Juels and Ristenpart leave as an open question the adaptation of honey encryption to natural language messages. A recent paper by Chatterjee et al. makes a modest attempt at the challenge and constructs a natural language honey encryption scheme relying on simple models for passwords.

In this position paper we explain why this approach cannot be extended to reasonable-size human-written documents, e.g., e-mails. We propose an alternative approach and evaluate its security.
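For intuition, honey encryption over a tiny, explicitly modeled message distribution can be sketched with a distribution-transforming encoder (DTE): each message owns a seed interval proportional to its probability, so decryption under a wrong key lands on some plausible message. The message space, probabilities and key derivation below are toy choices, not the constructions discussed in the paper:

```python
import hashlib
import secrets

SPACE = 2**32  # seed space
dist = {"meet at noon": 0.5, "abort mission": 0.3, "send funds": 0.2}

# DTE: partition the seed space into intervals proportional to probability.
items = list(dist.items())
ranges, lo = {}, 0
for i, (m, p) in enumerate(items):
    hi = SPACE if i == len(items) - 1 else lo + int(p * SPACE)
    ranges[m] = (lo, hi)
    lo = hi

def pad(key: bytes) -> int:
    """Toy key-derived one-time pad over the seed space."""
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big")

def encrypt(key: bytes, msg: str) -> int:
    lo, hi = ranges[msg]
    seed = lo + secrets.randbelow(hi - lo)  # uniform seed encoding msg
    return seed ^ pad(key)

def decrypt(key: bytes, ct: int) -> str:
    seed = ct ^ pad(key)
    return next(m for m, (lo, hi) in ranges.items() if lo <= seed < hi)

ct = encrypt(b"right key", "abort mission")
assert decrypt(b"right key", ct) == "abort mission"
assert decrypt(b"wrong key", ct) in dist  # wrong key still yields a plausible message
```

The paper's point is that building such a DTE for free-form human-written text, rather than a small enumerable message space, is where this approach breaks down.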
We propose a simple and efficient framework for obtaining communication-efficient, constant-round protocols for (malicious) secure two-party computation. Our framework uses a preprocessing phase to generate authentication information for the two parties; in the online phase, this information is used to construct a single “authenticated” garbled circuit which is transmitted and evaluated. We also discuss various instantiations of our framework:

• The preprocessing phase can be instantiated efficiently using, e.g., TinyOT. Using this approach with our improvements to TinyOT, we obtain a protocol in which secure evaluation of an AES circuit (at 128-bit computational security and 40-bit statistical security) uses roughly 6 MB of communication in total. Most of the communication is circuit independent. A single execution of our protocol performs even better than the best previous work supporting circuit-independent preprocessing when amortized over 1024 executions.

• If the preprocessing phase is instantiated using the IPS compiler, we obtain a constant-round protocol whose communication complexity is asymptotically as small as a semi-honest garbled-circuit protocol in the OT-hybrid model.

• If the preprocessing phase is carried out by a trusted server, we obtain a constant-round protocol whose communication complexity is essentially the same as in the linear-round protocol of Mohassel et al. in the analogous setting.
ePrint Report Bounded-Collusion Attribute-Based Encryption from Minimal Assumptions Gene Itkis, Emily Shen, Mayank Varia, David Wilson, Arkady Yerukhimovich
Attribute-based encryption (ABE) enables encryption of messages under access policies so that only users with attributes satisfying the policy can decrypt the ciphertext. In standard ABE, an arbitrary number of colluding users, each without an authorized attribute set, cannot decrypt the ciphertext. However, all existing ABE schemes rely on concrete cryptographic assumptions such as the hardness of certain problems over bilinear maps or integer lattices. Furthermore, it is known that ABE cannot be constructed from generic assumptions such as public-key encryption using black-box techniques.

In this work, we revisit the problem of constructing ABE that tolerates collusions of arbitrary but a priori bounded size. We present two ABE schemes secure against bounded collusions that require only semantically secure public-key encryption. Our schemes achieve significant improvement in the size of the public parameters, secret keys, and ciphertexts over the previous construction of bounded-collusion ABE from minimal assumptions by Gorbunov et al. (CRYPTO 2012). In fact, in our second scheme, the size of ABE secret keys does not grow at all with the collusion bound. As a building block, we introduce a multidimensional secret-sharing scheme that may be of independent interest. We also obtain bounded-collusion symmetric-key ABE (which requires the secret key for encryption) by replacing the public-key encryption with symmetric-key encryption, which can be built from the minimal assumption of one-way functions.
ePrint Report A Decentralized PKI In A Mobile Ecosystem Varun Chandrasekaran, Lakshminarayanan Subramanian
A public key infrastructure (PKI) supports the authentication and distribution of public encryption keys, enabling secure communication among other uses. However, PKIs rely heavily on a centralized trusted party called the certificate authority (CA) which acts as the root of trust, and is also a single point of compromise. We describe the design of a decentralized PKI utilizing the intrinsic trust of the cellular (GSM) network. Secure Mobile Identities (SMI) is a repetitive key-exchange protocol that utilizes cryptographic proofs to prove the unique identities of mobile users. In this paper, we discuss how one can create strong reputations for an identity despite the presence of an adversary who can exploit the probabilistic one-way trust property of GSM networks. Our evaluation shows minimal computational overhead on existing devices, and quick bootstrap times of under 10 minutes of mobile interaction despite the minimal trust assumptions required, suggesting easy adoption in today's operational ecosystem.
ePrint Report Scalable Multi-Party Private Set-Intersection Carmit Hazay, Muthuramakrishnan Venkitasubramaniam
In this work we study the problem of private set-intersection in the multi-party setting and design two protocols with the following improvements compared to prior work. First, our protocols are designed in the so-called star network topology, where a designated party communicates with everyone else, and take a new approach of leveraging the 2PC protocol of [FreedmanNP04]. This approach minimizes the usage of a broadcast channel, where our semi-honest protocol does not make any use of such a channel and all communication is via point-to-point channels. In addition, the communication complexity of our protocols scales with the number of parties.

More concretely, (1) our first semi-honest secure protocol implies communication complexity that is linear in the input sizes, namely $O((\sum_{i=1}^n m_i)\cdot\kappa)$ bits of communication, where $\kappa$ is the security parameter and $m_i$ is the size of $P_i$'s input set, whereas the overall computational overhead is quadratic in the input sizes only for a designated party, and linear for the rest. We further reduce this overhead by employing two types of hashing schemes. (2) Our second protocol is proven secure in the malicious setting. This protocol induces communication complexity of $O((n^2 + nm_{\max} + nm_{\min}\log m_{\max})\kappa)$ bits, where $m_{\min}$ (resp. $m_{\max}$) is the minimum (resp. maximum) over all input set sizes and $n$ is the number of parties.
ePrint Report Constant Round Adaptively Secure Protocols in the Tamper-Proof Hardware Model Carmit Hazay, Antigoni Polychroniadou, Muthuramakrishnan Venkitasubramaniam
Achieving constant-round adaptively secure protocols (where all parties can be corrupted) in the plain model is a notoriously hard problem. Very recently, three works published in TCC 2015 (Dachman-Soled et al., Garg and Polychroniadou, Canetti et al.), solved the problem in the Common Reference String (CRS) model. In this work, we present a constant-round adaptive UC-secure computation protocol for all well-formed functionalities in the tamper-proof hardware model using stateless tokens from only one-way functions. In contrast, all prior works in the CRS model require very strong assumptions, in particular, the existence of indistinguishability obfuscation.

As a corollary to our techniques, we present the first adaptively secure protocols in the Random Oracle Model (ROM) with round complexity proportional to the depth of the circuit implementing the functionality. Our protocols are secure in the Global Random Oracle Model introduced recently by Canetti, Jain and Scafuro in CCS 2014 that provides strong compositional guarantees. More precisely, we obtain an adaptively secure UC-commitment scheme in the global ROM assuming only one-way functions. In comparison, the protocol of Canetti, Jain and Scafuro achieves only static security and relies on the specific Decisional Diffie-Hellman (DDH) assumption.
We show that the recent structure-preserving signature (SPS) scheme of Kiltz, Pan and Wee [CRYPTO 2015], provably secure under the standard bilinear pairings group assumption SXDH, can be improved to have one less group element and one less pairing product equation in the signature verification step. Our improved SPS scheme only requires six group elements (five in one group, and one in the other), and two pairing product equations for verification. The number of pairing product equations is optimal, as it matches a known lower bound of Abe et al. [CRYPTO 2011]. The number of group elements in the signature also approaches the known lower bound of four for the SXDH assumption. Further, while the earlier scheme had a security reduction which incurred a security loss that is quadratic in the number of queries $Q$, our novel security reduction incurs only a $Q \log{Q}$ factor loss in security.

Structure-preserving signatures are used pervasively in group signatures, group encryptions, blind signatures, proxy signatures and many other anonymous credential applications. Our work directly leads to improvements in these schemes. Moreover, the improvements are usually of a higher multiplicative factor order, as these constructions use Groth-Sahai NIZK proofs for zero-knowledge verification of pairing-product equations.

We also give our construction under the more general and standard $\D_k$-MDDH (Matrix-DDH) assumption. The signature size in our scheme is $3k+2$ elements in one group, and one element in the other. The number of pairing product equations required for verification is only $2k$, whereas the earlier schemes required at least $2k+1$ equations.
ePrint Report Inference and Record-Injection Attacks on Searchable Encrypted Relational Databases Mohamed Ahmed Abdelraheem, Tobias Andersson, Christian Gehrmann
We point out the risks of providing security to relational databases via searchable encryption schemes by mounting a novel inference attack exploiting the structure of relational databases together with the leakage of searchable encryption schemes. We discuss some techniques to reduce the effectiveness of inference attacks against searchable encryption schemes. Moreover, we show that record-injection attacks mounted on relational databases have worse consequences than their file-injection counterparts on unstructured databases, which have been recently proposed at USENIX 2016.
We propose a new generic framework for constructing fully secure attribute based encryption (ABE) in multilinear settings. It is applicable in a generic manner to any predicates. Previous generic frameworks of this kind are given only in bilinear group settings, where applicable predicate classes are limited. Our framework provides an abstraction of dual system paradigms over composite-order graded multilinear encoding schemes in a black-box manner.

As applications, we propose new fully secure ABE systems for general predicates, namely, ABE for circuits. We obtain two schemes for each of the key-policy (KP) and ciphertext-policy (CP) variants of ABE. All four of our fully secure schemes can deal with unbounded-size circuits, while enjoying succinctness, meaning that the key and ciphertext sizes are (less than or) proportional to the corresponding circuit sizes. In the CP-ABE case, no previous scheme achieves such properties, even when considering selectively secure systems. Furthermore, our second KP-ABE scheme achieves constant-size ciphertexts, whereas our second CP-ABE scheme achieves constant-size keys. Previous ABE systems for circuits are either selectively secure (Gorbunov et al. STOC'13, Garg et al. Crypto'13, and subsequent works), or semi-adaptively secure (Brakerski and Vaikuntanathan Crypto'16), or fully secure but not succinct and restricted to bounded-size circuits (Garg et al. ePrint 2014/622, and Garg et al. TCC'16-A).
When data maintained in a decentralized fashion needs to be synchronized or exchanged between different databases, related data sets usually get associated with a unique identifier. While this approach facilitates cross-domain data exchange, it also comes with inherent drawbacks in terms of controllability. As data records can easily be linked, no central authority can limit or control the information flow. Worse, when records contain sensitive personal data, as is for instance the case in national social security systems, such linkability poses a massive security and privacy threat. An alternative approach is to use domain-specific pseudonyms, where only a central authority knows the cross-domain relation between the pseudonyms. However, current solutions require the central authority to be a fully trusted party, as otherwise it can provide false conversions and exploit the data it learns from the requests. We propose an (un)linkable pseudonym system that overcomes those limitations, and enables controlled yet privacy-friendly exchange of distributed data. We prove our protocol secure in the UC framework and provide an efficient instantiation based on discrete-logarithm related assumptions.
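As a simplified illustration of domain-specific pseudonyms (leaving aside the paper's key contribution, namely letting a not-fully-trusted authority convert pseudonyms blindly and verifiably), a keyed PRF already yields identifiers that are stable within a domain but unlinkable across domains to anyone without the key. All names and the key below are hypothetical:

```python
import hashlib
import hmac

def pseudonym(key: bytes, uid: str, domain: str) -> str:
    """Domain-specific pseudonym: deterministic within a domain,
    unlinkable across domains without the central key."""
    msg = uid.encode() + b"|" + domain.encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()[:16]

key = b"central-authority-key"  # hypothetical converter key
p_tax = pseudonym(key, "alice", "tax")
p_health = pseudonym(key, "alice", "health")

assert p_tax != p_health                         # unlinkable across domains
assert p_tax == pseudonym(key, "alice", "tax")   # stable within a domain
```

The limitation this sketch shares with current deployed solutions is that whoever holds `key` can link everything; removing that full-trust requirement is exactly what the proposed (un)linkable pseudonym system achieves.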
11 January 2017
We introduce a notion of watermarking for cryptographic functions and propose a concrete scheme for watermarking cryptographic functions. Informally speaking, a digital watermarking scheme for cryptographic functions embeds information, called a \textit{mark}, into functions such as one-way functions and decryption functions of public-key encryption. There are two basic requirements for watermarking schemes. (1) A mark-embedded function must be functionally equivalent to the original function. (2) It must be difficult for adversaries to remove the embedded mark without damaging the original functionality. In spite of its importance and usefulness, there have been only a few theoretical works on watermarking for functions (or programs), and rigorous definitions and concrete constructions of watermarking for cryptographic functions have been lacking.

To solve the above problem, we introduce a notion of watermarking for cryptographic functions and define its security. Furthermore, we present a lossy trapdoor function (LTF) based on the decisional linear (DLIN) problem and a watermarking scheme for the LTF. Our watermarking scheme is secure under the DLIN assumption in the standard model. We use techniques of dual system encryption and dual pairing vector spaces (DPVS) to construct our watermarking scheme. This is a new application of DPVS. Our watermarking for cryptographic functions is a generalized notion of copyrighted functions introduced by Naccache, Shamir, and Stern (PKC 1999) and our scheme is based on an identity-based encryption scheme whose private keys for identities (i.e., decryption functions) are marked, so our technique can be used to construct black-box traitor tracing schemes.
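Requirement (1) above, functional equivalence of the marked function, can be illustrated with a toy, non-cryptographic sketch; the embedding below merely attaches metadata to a wrapper, whereas the paper's scheme hides the mark inside the function itself. All names are illustrative.

```python
def embed_mark(f, mark: str):
    """Toy watermark embedding: wrap f so input/output behaviour is
    unchanged while a mark travels with the function object. Illustrative
    only -- a real scheme must make the mark hard to strip."""
    def marked(*args, **kwargs):
        return f(*args, **kwargs)          # identical functionality
    marked.__mark__ = mark                 # the embedded mark
    return marked

def extract_mark(g) -> str:
    return getattr(g, "__mark__", "")

dec = lambda c: c - 1                      # stand-in "decryption" function
marked_dec = embed_mark(dec, "owner:alice")

# Requirement (1): the marked function is functionally equivalent.
assert all(marked_dec(c) == dec(c) for c in range(10))
# The mark is extractable; requirement (2), unremovability, is the hard
# part and is exactly what this naive wrapper fails to provide.
assert extract_mark(marked_dec) == "owner:alice"
```

The gap between this trivial wrapper (where the mark is deleted in one line) and an unremovable mark is what motivates the dual-system and DPVS machinery in the construction above.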
ePrint Report A Generic Approach to Constructing and Proving Verifiable Random Functions Rishab Goyal, Susan Hohenberger, Venkata Koppula, Brent Waters
Verifiable Random Functions (VRFs) as introduced by Micali, Rabin and Vadhan are a special form of Pseudo Random Functions (PRFs) wherein a secret key holder can also prove validity of the function evaluation relative to a statistically binding commitment. Prior works have approached the problem of constructing VRFs by proposing a candidate in a specific number-theoretic setting --- mostly bilinear groups --- and then grappling with the challenges of proving security in the VRF environment. These constructions achieve different results and tradeoffs among practical efficiency, tightness of reductions, and cryptographic assumptions.

In this work we take a different approach. Instead of tackling the VRF problem as a whole, we demonstrate a simple and generic way of building Verifiable Random Functions from more basic and narrow cryptographic primitives. Then we can turn to exploring solutions to these primitives with a more focused mindset. In particular, we show that VRFs can be constructed generically from the following ingredients: (1) a 1-bounded constrained pseudo random function for a functionality that is "admissible hash friendly", (2) a non-interactive statistically binding commitment scheme (without trusted setup) and (3) non-interactive witness-indistinguishable proofs (NIWIs). The first primitive can be replaced with the more basic notion of a puncturable PRF if one is willing to settle for selective security or assume sub-exponential hardness of assumptions.

In addition, we support our generic approach by giving new constructions of constrained PRFs in non-bilinear groups and new constructions of perfectly binding commitments from the Learning with Errors (LWE) and Learning Parity with Noise (LPN) assumptions.
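The syntax of the generic construction can be sketched at a very high level. In this toy version, HMAC stands in for the constrained PRF (ingredient 1), a plain hash stands in for the binding commitment (ingredient 2), and the NIWI proof (ingredient 3) is omitted entirely, so only the shape of the interface is shown, none of its security. All names are illustrative.

```python
import hashlib
import hmac
import os

class ToyVRF:
    """Skeleton of the VRF syntax from the generic construction: a secret
    PRF key plus a binding commitment to it as the public key. The NIWI
    proof that makes evaluations publicly verifiable is elided."""

    def __init__(self):
        self.sk = os.urandom(32)                      # PRF key (ingredient 1)
        # Stand-in for the statistically binding commitment (ingredient 2):
        self.pk = hashlib.sha256(self.sk).hexdigest()

    def evaluate(self, x: bytes) -> str:
        # y = PRF(sk, x); a real VRF also outputs a NIWI proof (ingredient 3)
        # tying y to the committed key pk.
        return hmac.new(self.sk, x, hashlib.sha256).hexdigest()

vrf = ToyVRF()
y1 = vrf.evaluate(b"input")
# Uniqueness of outputs under a fixed key: the same input always yields
# the same value, which the proof would bind to pk.
assert y1 == vrf.evaluate(b"input")
```

What the sketch cannot show is the point of the paper: how the constraining/puncturing structure of the PRF lets the reduction answer evaluation queries while embedding a challenge, without any bilinear-group structure.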
Over the years, SSL/TLS has become an essential part of Internet security. As such, it should offer robust and state-of-the-art security, in particular for HTTPS, its foremost application. Theoretically, the protocol allows for a trade-off between secure algorithms and decent performance. Yet in practice, servers do not always support the latest version of the protocol, nor do they all enforce strong cryptographic algorithms. To assess the quality of HTTPS and other TLS deployments at large, several studies have been conducted to grasp the state of the ecosystem, and to characterize the quality of certificate chains in particular.

In this paper, we propose to analyse some of the existing data concerning TLS measurements on the Internet. We studied several datasets, from the first public ones in 2010 to more recent scans. Even though the collection methodologies and tools vary between campaigns, we propose a unified and reproducible way to analyse the TLS ecosystem across different datasets. Our approach is based on a set of open-source tools, concerto.

Our contribution is therefore threefold: an analysis of existing datasets leading to a unified methodology, the implementation of our approach with concerto, and the presentation of some results to validate our toolset.
ePrint Report SmartPool: Practical Decentralized Pooled Mining Loi Luu, Yaron Velner, Jason Teutsch, Prateek Saxena
Many blockchain-based cryptocurrencies such as Bitcoin and Ethereum use the Nakamoto consensus protocol to reach agreement on the blockchain state among a network of participant nodes. The Nakamoto consensus protocol probabilistically selects a leader via a mining process which rewards network participants (miners) for solving computational puzzles. Finding solutions for such puzzles requires an enormous amount of computation. Thus, miners often aggregate resources into {\em pools} and share rewards amongst all pool members via a {\em pooled mining} protocol. Pooled mining significantly reduces the variance of miners' payoffs and is widely adopted in popular cryptocurrencies. For example, as of this writing, more than $95\%$ of mining power in Bitcoin emanates from $10$ mining pools.
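The variance-reduction benefit of pooling can be checked with a small simulation, assuming each miner independently finds a block in a round with probability p and a pool of m identical miners splits every reward evenly; the parameters are illustrative, not taken from the paper.

```python
import random
import statistics

def solo_payoffs(p, rounds, rng):
    # Solo mining: win the full reward (1.0) with probability p each round.
    return [1.0 if rng.random() < p else 0.0 for _ in range(rounds)]

def pooled_payoffs(p, m, rounds, rng):
    # Pool of m equal miners: each member receives 1/m of every pool win.
    out = []
    for _ in range(rounds):
        wins = sum(rng.random() < p for _ in range(m))
        out.append(wins / m)
    return out

rng = random.Random(0)
p, m, rounds = 0.01, 100, 20000
solo = solo_payoffs(p, rounds, rng)
pool = pooled_payoffs(p, m, rounds, rng)

# Expected payoff per round is the same (about p) either way, but the
# pooled per-member variance is roughly m times smaller -- the point of
# pooled mining.
assert abs(statistics.mean(solo) - statistics.mean(pool)) < 0.01
assert statistics.variance(pool) < statistics.variance(solo)
```

This smoothing is what the centralized pool manager provides today, and what SmartPool aims to deliver without the manager.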

Although pooled mining benefits miners, it severely degrades decentralization, since a centralized pool manager administers the pooling protocol. Furthermore, pooled mining significantly increases the risk of transaction censorship, since pool managers decide which transactions are included in blocks. Due to this widely recognized threat, the Bitcoin community has proposed an alternative called P2Pool which decentralizes the operations of the pool manager. However, P2Pool is inefficient, increases the variance of miners' rewards, requires much more computation and bandwidth from miners, and has not gained wide adoption.

In this work, we propose a new protocol design for a decentralized mining pool. Our protocol, called SmartPool, shows how one can leverage {\em smart contracts}, autonomous agents themselves running on decentralized blockchains, to decentralize cryptocurrency mining. SmartPool guarantees high security and low reward variance for miners, and is cost-efficient. We implemented a prototype of SmartPool as an Ethereum smart contract working as a decentralized mining pool for Bitcoin. We have deployed it on the Ethereum testnet, and our experiments confirm that SmartPool is efficient and ready for practical use.