International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

13 January 2017

Mohamed Ahmed Abdelraheem, Tobias Andersson, Christian Gehrmann
ePrint Report
We point out the risks of providing security to relational databases via searchable encryption schemes by mounting a novel inference attack that exploits the structure of relational databases together with the leakage of searchable encryption schemes. We discuss some techniques to reduce the effectiveness of inference attacks against searchable encryption schemes. Moreover, we show that record-injection attacks mounted on relational databases have worse consequences than their file-injection counterparts on unstructured databases, which were recently proposed at USENIX 2016.
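The kind of leakage exploited here can be illustrated with a textbook frequency-matching sketch (ours, not the paper's attack; names and data are hypothetical): deterministic search tokens over a low-entropy relational column leak their frequency distribution, which an attacker can align with public auxiliary statistics.

```python
from collections import Counter

# Toy inference sketch: search tokens observed in an encrypted column
# are matched against known population statistics by frequency rank.
aux = ["female"] * 60 + ["male"] * 40        # public auxiliary data
enc_column = ["tA"] * 58 + ["tB"] * 42       # observed ciphertext tokens

rank_aux = [v for v, _ in Counter(aux).most_common()]
rank_enc = [t for t, _ in Counter(enc_column).most_common()]
guess = dict(zip(rank_enc, rank_aux))        # match by frequency rank
assert guess == {"tA": "female", "tB": "male"}
```

The paper's attack additionally exploits the cross-column structure of relational tables; this sketch shows only the single-column baseline.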
Nuttapong Attrapadung
ePrint Report
We propose a new generic framework for constructing fully secure attribute-based encryption (ABE) in multilinear settings. It applies generically to arbitrary predicates. Previous generic frameworks of this kind were given only in bilinear group settings, where the applicable predicate classes are limited. Our framework provides an abstraction of dual-system paradigms over composite-order graded multilinear encoding schemes in a black-box manner.

As applications, we propose new fully secure ABE systems for general predicates, namely, ABE for circuits. We obtain two schemes for each of the key-policy (KP) and ciphertext-policy (CP) variants of ABE. All four of our fully secure schemes can deal with unbounded-size circuits while enjoying succinctness, meaning that the key and ciphertext sizes are (less than or) proportional to the corresponding circuit sizes. In the CP-ABE case, no previous scheme achieves these properties, even among selectively secure systems. Furthermore, our second KP-ABE achieves constant-size ciphertexts, whereas our second CP-ABE achieves constant-size keys. Previous ABE systems for circuits are either selectively secure (Gorbunov et al. STOC'13, Garg et al. Crypto'13, and subsequent works), or semi-adaptively secure (Brakerski and Vaikuntanathan Crypto'16), or fully secure but not succinct and restricted to bounded-size circuits (Garg et al. ePrint 2014/622, and Garg et al. TCC'16-A).
Jan Camenisch, Anja Lehmann
ePrint Report
When data maintained in a decentralized fashion needs to be synchronized or exchanged between different databases, related data sets usually get associated with a unique identifier. While this approach facilitates cross-domain data exchange, it also comes with inherent drawbacks in terms of controllability. As data records can easily be linked, no central authority can limit or control the information flow. Worse, when records contain sensitive personal data, as is for instance the case in national social security systems, such linkability poses a massive security and privacy threat. An alternative approach is to use domain-specific pseudonyms, where only a central authority knows the cross-domain relation between the pseudonyms. However, current solutions require the central authority to be a fully trusted party, as otherwise it can provide false conversions and exploit the data it learns from the requests. We propose an (un)linkable pseudonym system that overcomes those limitations, and enables controlled yet privacy-friendly exchange of distributed data. We prove our protocol secure in the UC framework and provide an efficient instantiation based on discrete-logarithm related assumptions.
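As background, a minimal sketch of naive domain-specific pseudonyms via a keyed hash (our own illustration with hypothetical names, not the paper's UC-secure protocol) shows both the unlinkability goal and why a naive converter must be fully trusted:

```python
import hashlib
import hmac

# Naive domain-specific pseudonyms: a central authority derives one
# pseudonym per domain from a master key that it alone holds.
def pseudonym(master_key: bytes, domain: str, uid: str) -> str:
    msg = f"{domain}|{uid}".encode()
    return hmac.new(master_key, msg, hashlib.sha256).hexdigest()

key = b"central-authority-secret"
p_tax = pseudonym(key, "tax", "alice")
p_health = pseudonym(key, "health", "alice")

# Different domains cannot link Alice's records without the key...
assert p_tax != p_health
# ...but the authority converts between domains at will, so it must be
# fully trusted. Removing exactly this trust requirement is the goal of
# the (un)linkable pseudonym system described above.
```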

12 January 2017

Computer Science/Mathematics, University of Auckland, New Zealand
Job Posting Job Posting
Two three-year PhD scholarships are available, covering international tuition fees and a stipend of NZ$27,500 per year.

The project is to develop new theoretical foundations for practical obfuscation. The theoretical component will focus on designing and evaluating new security definitions for practical obfuscation solutions. The practical component will focus on creating practical obfuscation tools inspired by the results obtained in the theoretical component. The theoretical work will be led by Prof Steven Galbraith while Dr. Giovanni Russello will lead the practical aspects.

The ideal candidate will have an undergraduate degree in computer science, engineering, or mathematics and will have written a master's thesis on a topic related to security, cryptography, or the underlying mathematics. Experience with obfuscation and programming is preferred.

Closing date for applications: 12 March 2017

Contact: Professor Steven Galbraith

Mathematics Department

University of Auckland

aucklandobfuscationphd (at) gmail.com

More information: https://www.math.auckland.ac.nz/~sgal018/PhD-Marsden.pdf


11 January 2017

Ryo Nishimaki
ePrint Report
We introduce a notion of watermarking for cryptographic functions and propose a concrete scheme for watermarking cryptographic functions. Informally speaking, a digital watermarking scheme for cryptographic functions embeds information, called a \textit{mark}, into functions such as one-way functions and decryption functions of public-key encryption. There are two basic requirements for watermarking schemes. (1) A mark-embedded function must be functionally equivalent to the original function. (2) It must be difficult for adversaries to remove the embedded mark without damaging the original functionality. Despite its importance and usefulness, there have been only a few theoretical works on watermarking for functions (or programs), and rigorous definitions and concrete constructions of watermarking for cryptographic functions have been lacking.

To solve the above problem, we introduce a notion of watermarking for cryptographic functions and define its security. Furthermore, we present a lossy trapdoor function (LTF) based on the decisional linear (DLIN) problem and a watermarking scheme for the LTF. Our watermarking scheme is secure under the DLIN assumption in the standard model. We use techniques of dual system encryption and dual pairing vector spaces (DPVS) to construct our watermarking scheme. This is a new application of DPVS. Our watermarking for cryptographic functions is a generalized notion of copyrighted functions introduced by Naccache, Shamir, and Stern (PKC 1999) and our scheme is based on an identity-based encryption scheme whose private keys for identities (i.e., decryption functions) are marked, so our technique can be used to construct black-box traitor tracing schemes.
Rishab Goyal, Susan Hohenberger, Venkata Koppula, Brent Waters
ePrint Report
Verifiable Random Functions (VRFs), as introduced by Micali, Rabin and Vadhan, are a special form of Pseudo Random Functions (PRFs) wherein a secret key holder can also prove validity of the function evaluation relative to a statistically binding commitment. Prior works have approached the problem of constructing VRFs by proposing a candidate in a specific number-theoretic setting --- mostly bilinear groups --- and then grappling with the challenges of proving security in the VRF setting. These constructions achieved different results and tradeoffs in practical efficiency, tightness of reductions, and cryptographic assumptions.

In this work we take a different approach. Instead of tackling the VRF problem as a whole, we demonstrate a simple and generic way of building Verifiable Random Functions from more basic and narrow cryptographic primitives. We can then explore solutions to these primitives with a more focused mindset. In particular, we show that VRFs can be constructed generically from three ingredients: (1) a 1-bounded constrained pseudorandom function for a functionality that is ``admissible hash friendly'', (2) a non-interactive statistically binding commitment scheme (without trusted setup), and (3) non-interactive witness-indistinguishable proofs (NIWIs). The first primitive can be replaced with a more basic puncturable PRF if one is willing to settle for selective security or to assume sub-exponential hardness of the assumptions.

In addition, we support our generic approach by giving new constructions of constrained PRFs in non-bilinear groups and new constructions of perfectly binding commitments from the Learning with Errors (LWE) and Learning Parity with Noise (LPN) assumptions.
Olivier Levillain, Maxence Tury, Nicolas Vivet
ePrint Report
Over the years, SSL/TLS has become an essential part of Internet security. As such, it should offer robust and state-of-the-art security, in particular for HTTPS, its first application. Theoretically, the protocol allows for a trade-off between secure algorithms and decent performance. Yet in practice, servers do not always support the latest version of the protocol, nor do they all enforce strong cryptographic algorithms. To assess the quality of HTTPS and other TLS deployments at large, several studies have been conducted to grasp the state of the ecosystem and to characterize the quality of certificate chains in particular.

In this paper, we propose to analyse some of the existing data concerning TLS measurements on the Internet. We studied several datasets, from the first public ones in 2010 to more recent scans. Even though the collection methodologies and tools vary between campaigns, we propose a unified and reproducible way to analyse the TLS ecosystem across different datasets. Our approach is based on a set of open-source tools, concerto.

Our contribution is therefore threefold: an analysis of existing datasets to propose a unified methodology, the implementation of our approach with concerto, and the presentation of some results to validate our toolsets.
Loi Luu, Yaron Velner, Jason Teutsch, Prateek Saxena
ePrint Report
Many blockchain-based cryptocurrencies such as Bitcoin and Ethereum use the Nakamoto consensus protocol to reach agreement on the blockchain state among a network of participant nodes. The Nakamoto consensus protocol probabilistically selects a leader via a mining process which rewards network participants (or miners) for solving computational puzzles. Finding solutions to such puzzles requires an enormous amount of computation. Thus, miners often aggregate resources into {\em pools} and share rewards amongst all pool members via a {\em pooled mining} protocol. Pooled mining significantly reduces the variance of miners' payoffs and is widely adopted in popular cryptocurrencies. For example, as of this writing, more than $95\%$ of mining power in Bitcoin emanates from $10$ mining pools.
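The variance-reduction claim can be checked with a quick simulation (a toy model with made-up parameters, not from the paper): a miner holding 1% of the network hash power either mines solo or takes a pro-rata share of the rewards of a pool holding 20%.

```python
import random

# Toy payoff simulation: solo mining vs. joining a pool.
random.seed(1)
BLOCKS, REWARD = 10_000, 1.0
solo, pooled = [], []
for _ in range(BLOCKS):
    r = random.random()
    solo.append(REWARD if r < 0.01 else 0.0)                  # whole block or nothing
    pooled.append(REWARD * 0.01 / 0.20 if r < 0.20 else 0.0)  # pro-rata pool share

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Expected income per block is ~0.01 in both cases, but pooled payouts
# are frequent and small, so their variance is far lower.
assert var(pooled) < var(solo)
```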

Although pooled mining benefits miners, it severely degrades decentralization, since a centralized pool manager administers the pooling protocol. Furthermore, pooled mining significantly increases the risk of transaction censorship, since pool managers decide which transactions are included in blocks. Due to this widely recognized threat, the Bitcoin community has proposed an alternative called P2Pool which decentralizes the operations of the pool manager. However, P2Pool is inefficient, increases the variance of miners' rewards, requires much more computation and bandwidth from miners, and has not gained wide adoption.

In this work, we propose a new protocol design for a decentralized mining pool. Our protocol, called SmartPool, shows how one can leverage {\em smart contracts}, which are autonomous agents running on decentralized blockchains, to decentralize cryptocurrency mining. SmartPool guarantees high security and low reward variance for miners, and is cost-efficient. We implemented a prototype of SmartPool as an Ethereum smart contract working as a decentralized mining pool for Bitcoin. We have deployed it on the Ethereum testnet, and our experiments confirm that SmartPool is efficient and ready for practical use.
Nir Bitansky
ePrint Report
{\em Verifiable random functions} (VRFs) are pseudorandom functions where the owner of the seed, in addition to computing the function's value $y$ at any point $x$, can also generate a non-interactive proof $\pi$ that $y$ is correct (relative to the seed), without compromising pseudorandomness at other points. Being a natural primitive with a wide range of applications, considerable efforts have been directed towards the construction of such VRFs. While these efforts have resulted in a variety of algebraic constructions (from bilinear maps or the RSA problem), the relation between VRFs and other general primitives is still not well understood.

We present new constructions of VRFs from general primitives, the main one being {\em non-interactive witness-indistinguishable proofs} (NIWIs). This includes: \begin{itemize} \item A selectively-secure VRF assuming NIWIs and non-interactive commitments. As usual, the VRF can be made adaptively-secure assuming subexponential hardness of the underlying primitives. \item An adaptively-secure VRF assuming (polynomially-hard) NIWIs, non-interactive commitments, and {\em (single-key) constrained pseudorandom functions} for a restricted class of constraints. \end{itemize}

The above primitives have known instantiations under various standard assumptions, which yields corresponding VRF instantiations under assumptions different from those previously known. One notable example is a non-uniform construction of VRFs from subexponentially-hard trapdoor permutations, or more generally, {\em verifiable pseudorandom generators} (the construction can be made uniform under a standard derandomization assumption). This partially answers an open question of Dwork and Naor (FOCS '00).

The construction and its analysis are quite simple. Both draw from ideas commonly used in the context of {\em indistinguishability obfuscation}.
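For intuition, the RSA-based flavor of VRF mentioned above can be sketched in a few lines (textbook toy parameters; insecure and illustrative only — a real RSA-VRF needs a proper full-domain hash into $\mathbb{Z}_N$ and a careful security proof):

```python
import hashlib

# Toy RSA-based VRF:
#   Eval:   pi = H(x)^d mod N,  y = H'(pi)
#   Verify: pi^e == H(x) mod N  and  y == H'(pi)
# Uniqueness of y follows because RSA is a permutation of Z_N.
N, e, d = 3233, 17, 2753          # tiny textbook RSA modulus and exponents

def H(x: bytes) -> int:
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % N

def Hprime(pi: int) -> bytes:
    return hashlib.sha256(str(pi).encode()).digest()

def vrf_eval(x: bytes) -> tuple[bytes, int]:
    pi = pow(H(x), d, N)          # the unique e-th root of H(x) mod N
    return Hprime(pi), pi

def vrf_verify(x: bytes, y: bytes, pi: int) -> bool:
    return pow(pi, e, N) == H(x) and y == Hprime(pi)

y, pi = vrf_eval(b"some input")
assert vrf_verify(b"some input", y, pi)                  # honest proof verifies
assert not vrf_verify(b"some input", y, (pi + 1) % N)    # forged proof rejected
```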
Gottfried Herold, Elena Kirshanova
ePrint Report
We present an algorithm for the approximate $k$-List problem for the Euclidean distance that improves upon the Bai-Laarhoven-Stehlé (BLS) algorithm from ANTS'16. The improvement stems from the observation that almost all the solutions to the approximate $k$-List problem form a particular configuration in $n$-dimensional space. Due to special properties of configurations, it is much easier to verify whether a $k$-tuple forms a configuration than to check whether it gives a solution to the $k$-List problem. Thus, phrasing the $k$-List problem as a problem of finding such configurations immediately gives a better algorithm. Furthermore, the search for configurations can be sped up using techniques from Locality-Sensitive Hashing (LSH). Stated in terms of configuration search, our LSH-like algorithm offers a broader perspective on previous LSH algorithms.

For the Shortest Vector Problem, our configuration-search algorithm results in an exponential improvement for memory-efficient sieving algorithms. For $k=3$, it allows us to bring down the complexity of the BLS sieve algorithm on an $n$-dimensional lattice from $2^{0.4812n+o(n)}$ to $2^{0.3962n + o(n)}$ with the same space requirement of $2^{0.1887n + o(n)}$. Note that our algorithm beats the Gauss Sieve algorithm, whose time and space requirements are $2^{0.415n+o(n)}$ and $2^{0.208n + o(n)}$ respectively, while being easy to implement. Using LSH techniques, we can further reduce the time complexity down to $2^{0.3717n + o(n)}$ while retaining a memory complexity of $2^{0.1887n+o(n)}$.
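Ignoring the $o(n)$ terms, the quoted exponents can be compared directly; a quick script (dimension $n=250$ chosen arbitrarily for illustration):

```python
# Compare the time exponents quoted above at a sample dimension n = 250.
n = 250
times = {
    "BLS sieve (k=3)": 2 ** (0.4812 * n),
    "configuration search": 2 ** (0.3962 * n),
    "Gauss sieve": 2 ** (0.415 * n),
    "configuration search + LSH": 2 ** (0.3717 * n),
}
best = min(times, key=times.get)
assert best == "configuration search + LSH"

# Improvement of plain configuration search over the BLS sieve:
speedup_bits = (0.4812 - 0.3962) * n     # about 21 bits, i.e. ~2^21 faster
assert 21 < speedup_bits < 22
```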
Yevgeniy Dodis, Jonathan Katz, John Steinberger, Aishwarya Thiruvengadam, Zhe Zhang
ePrint Report
Many modern block ciphers are constructed based on the paradigm of substitution-permutation networks (SPNs). But, somewhat surprisingly---especially in comparison with Feistel networks, which have been analyzed by dozens of papers going back to the seminal work of Luby and Rackoff---there are essentially no provable-security results about SPNs. In this work, we initiate a comprehensive study of the security of SPNs as strong pseudorandom permutations when the underlying "S-box" is modeled as a public random permutation. We show that 3 rounds of S-boxes are necessary and sufficient for secure linear SPNs, but that even 1-round SPNs can be secure when non-linearity is allowed.
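A minimal toy SPN makes the structure concrete (our illustration, not a real cipher and not the paper's exact model; a bare rotation is a deliberately simplistic linear layer): each round XORs a round key, applies a public 4-bit S-box to every nibble, then applies a fixed linear permutation, for the 3 rounds the result above identifies as necessary and sufficient for linear SPNs.

```python
# Toy 16-bit substitution-permutation network (illustrative only).
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def sub_layer(state: int) -> int:
    out = 0
    for i in range(4):                                 # four 4-bit nibbles
        out |= SBOX[(state >> (4 * i)) & 0xF] << (4 * i)
    return out

def perm_layer(state: int) -> int:
    return ((state << 5) | (state >> 11)) & 0xFFFF     # rotate left by 5

def spn_encrypt(pt: int, keys: list[int]) -> int:
    state = pt
    for k in keys[:-1]:                                # 3 keyed rounds
        state = perm_layer(sub_layer(state ^ k))
    return state ^ keys[-1]                            # output whitening

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
# Every layer is invertible, so the SPN permutes the 16-bit space:
images = {spn_encrypt(x, keys) for x in range(1 << 16)}
assert len(images) == 1 << 16
```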
Dana Dachman-Soled, Mukul Kulkarni, Aria Shahverdi
ePrint Report
In a recent result, Dachman-Soled et al.~(TCC '15) proposed a new notion called locally decodable and updatable non-malleable codes, which, informally, provides the security guarantees of a non-malleable code while also allowing for efficient random access. They also considered locally decodable and updatable non-malleable codes that are leakage-resilient, allowing for adversaries who continually leak information in addition to tampering. Unfortunately, the locality of their construction in the continual setting was $\Omega(\log n)$, meaning that if the original message size was $n$, then $\Omega(\log n)$ positions of the codeword had to be accessed upon each decode and update instruction.

In this work, we ask whether super-constant locality is inherent in this setting. We answer the question affirmatively by showing tight upper and lower bounds. Specifically, in any threat model which allows for a rewind attack, wherein the attacker leaks a small amount of data, waits for the data to be overwritten, and then writes the original data back, we show that a locally decodable and updatable non-malleable code with block size $\chi \in \mathrm{poly}(\lambda)$ bits requires locality $\delta(n) \in \omega(1)$, where $n = \mathrm{poly}(\lambda)$ is the message length and $\lambda$ is the security parameter. On the other hand, we revisit the threat model of Dachman-Soled et al.~(TCC '15), which indeed allows the adversary to launch a rewind attack, and present a construction of a locally decodable and updatable non-malleable code with block size $\chi \in \Omega(\lambda^{1/\mu})$ bits (for constant $0 < \mu < 1$) with locality $\delta(n)$, for any $\delta(n) \in \omega(1)$ and $n = \mathrm{poly}(\lambda)$.
Tommaso Gagliardoni, Nikolaos P. Karvelas, Stefan Katzenbeisser
ePrint Report
We study the security of {\em Oblivious Random Access Machines (ORAM)} in the quantum world. First we introduce a new formal treatment of ORAMs, which is at the same time elegant and simpler than the known formalization by Goldreich and Ostrovsky. Then we define and analyze the notion of post-quantum security for ORAMs, i.e., classical ORAMs resistant against quantum adversaries. We show that merely switching to post-quantum secure encryption in a classically secure ORAM construction does not generally yield a post-quantum secure ORAM construction. On the other hand, we provide a post-quantum secure construction based on a modification of Path-ORAM, the most efficient general ORAM construction, introduced by Stefanov et al.

Furthermore, we initiate the study of {\em Quantum ORAMs (QORAMs)}, that is, ORAM constructions meant to be executed between quantum parties acting on arbitrary quantum data. We address many problems arising when formalizing Quantum ORAMs, and we provide a secure construction (based on Path-ORAM and a quantum encryption scheme introduced by Alagic et al.) which has the interesting property of making read and write operations {\em inherently equivalent}. In so doing, we develop a novel technique of quantum extractability which is of independent interest. We believe that QORAMs represent a natural and interesting step in the direction of achieving privacy in future scenarios where quantum computing is ubiquitous.
Meilof Veeningen
ePrint Report
Pinocchio is a practical zk-SNARK that allows a prover to perform cryptographically verifiable computations with verification effort sometimes less than performing the computation itself. A recent proposal showed how to make Pinocchio adaptive (or ``hash-and-prove''), i.e., to enable proofs with respect to computation-independent commitments. This enables computations to be chosen after the commitments have been produced, and data to be shared between different computations in a flexible way. Unfortunately, this proposal is not zero-knowledge. In particular, it cannot be combined with Trinocchio, a system in which Pinocchio is outsourced to three workers that do not learn the inputs thanks to multi-party computation (MPC). In this paper, we show how to make Pinocchio adaptive in a zero-knowledge way; apply it to make Trinocchio work on computation-independent commitments; present tooling to easily program flexible verifiable computations (with or without MPC); and use it to build a prototype in a medical research case study.
Venkata Koppula, Andrew Poelstra, Brent Waters
ePrint Report
Recently, Hofheinz, Jager, Khurana, Sahai, Waters and Zhandry~\cite{HJKSWZ14} proposed a new primitive called universal samplers that allows oblivious sampling from arbitrary distributions, and showed how to construct universal samplers using indistinguishability obfuscation ($\iO$) in the ROM.

One important limitation for applying universal samplers in practice is that the constructions are built upon indistinguishability obfuscation, and the costs of using current $\iO$ constructions are prohibitively large. We ask whether the cost of a (universal) sampling could be paid by one party and then shared (soundly) with all other users. We address this question by introducing the notion of universal samplers with verification. Our notion follows the general path of \cite{HJKSWZ14}, but has additional semantics that allow for validation of a sample.

In this work we define and give a construction for universal samplers with verification. Our verification procedure is simple and built upon one-time signatures, making verification of a sample much faster than computing it. Security is proved under the sub-exponential hardness of indistinguishability obfuscation, puncturable pseudorandom functions, and one-time signatures.
Jan Camenisch, David Derler, Stephan Krenn, Henrich C. Pöhls, Kai Samelin, Daniel Slamanig
ePrint Report
A chameleon-hash function is a hash function with a trapdoor, knowledge of which allows one to find arbitrary collisions in the domain of the function. In this paper, we introduce the notion of chameleon-hash functions with ephemeral trapdoors. Such hash functions feature additional, ephemeral, trapdoors which are chosen by the party computing a hash value. The holder of the main trapdoor is then unable to find a second pre-image of a hash value unless also provided with the ephemeral trapdoor used to compute it. We present a formal security model for this new primitive as well as provably secure instantiations. The first instantiation is a generic black-box construction from any secure chameleon-hash function. We further provide three direct constructions based on standard assumptions. Our new primitive has some appealing use cases, including a solution to the long-standing open problem of invisible sanitizable signatures, which we also present.
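For background, the classic discrete-log chameleon hash (Krawczyk-Rabin style) shows what "finding collisions with the trapdoor" means. This is the standard primitive, not the paper's new ephemeral-trapdoor construction, and the parameters are tiny toy values:

```python
# Discrete-log chameleon hash: H(m, r) = g^m * h^r mod p, where the
# trapdoor x = log_g(h) lets its holder find arbitrary collisions.
p, q, g = 23, 11, 2       # toy group: g has prime order q in Z_p*
x = 3                     # trapdoor
h = pow(g, x, p)          # public hash key

def chash(m: int, r: int) -> int:
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

m, r = 7, 5
m2 = 4                                     # any other message
r2 = (r + (m - m2) * pow(x, -1, q)) % q    # solve m + x*r = m2 + x*r2 (mod q)
assert chash(m, r) == chash(m2, r2)        # a collision, found via trapdoor
```

The ephemeral-trapdoor notion described above strengthens this primitive: knowing the main trapdoor $x$ alone no longer suffices to produce a second pre-image.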
Wutichai Chongchitmate, Rafail Ostrovsky
ePrint Report
Multi-key fully homomorphic encryption (MFHE) schemes allow polynomially many users, without trusted setup assumptions, to send their data (encrypted under different FHE keys chosen by users independently of each other) to an honest-but-curious server that can compute the output of an arbitrary polynomial-time computable function on this joint data and issue it back to all participating users for decryption. One of the main open problems left in MFHE was dealing with malicious users without trusted setup assumptions. We show how this can be done, generalizing previous results on circuit-private FHE. Just like standard circuit-private FHE, our security model shows that even if both ciphertexts and public keys of individual users are not well-formed, no information is revealed regarding the server computation, other than what is gained from the output on some well-formed inputs of all users. MFHE schemes have direct applications to server-assisted multiparty computation (MPC), called on-the-fly MPC, introduced by López-Alt et al. (STOC '12), where the number of users is not known in advance. In this setting, a poly-time server wants to evaluate a circuit $C$ on data uploaded by multiple clients and encrypted under different keys. Circuit privacy requires that users' work be independent of $|C|$, which is held by the server, while each client learns nothing about $C$ other than its output. We present a framework for transforming MFHE schemes with no circuit privacy into maliciously circuit-private schemes. We then construct 3-round on-the-fly MPC with circuit privacy against malicious clients in the plain model.
Georg Fuchsbauer, Romain Gay, Lucas Kowalczyk, Claudio Orlandi
ePrint Report
Access Control Encryption (ACE) is a novel paradigm for encryption which allows one to control not only what users in the system are allowed to \emph{read} but also what they are allowed to \emph{write}.

The original work of Damg{\aa}rd et al.~\cite{cryptoeprint:2016:106} introducing this notion left several open questions, in particular whether it is possible to construct ACE schemes with polylogarithmic complexity (in the number of possible identities in the system) from standard cryptographic assumptions.

In this work we answer the question in the affirmative by giving (efficient) constructions of ACE for an interesting class of predicates which includes equality, comparison, interval membership, and more.

We instantiate our constructions either on standard pairing assumptions (SXDH) or, more efficiently, in the generic group model.
Joshua Gancher, Adam Groce, Alex Ledger
ePrint Report
We present the idea of externally verifiable oblivious RAM (ORAM). Our goal is to allow a client and server carrying out an ORAM protocol to have disputes adjudicated by a third party, allowing for the enforcement of penalties against an unreliable or malicious server. We give a security definition that guarantees protection not only against a malicious server but also against a client making false accusations. We then give modifications of the Path ORAM and Ring ORAM protocols that meet this security definition. These protocols both have the same asymptotic runtimes as the semi-honest original versions and require the external verifier to be involved only when the client or server deviates from the protocol. Finally, we implement externally verified ORAM, along with an automated cryptocurrency contract to use as the external verifier.
Hossein Arabnezhad-Khanoki, Babak Sadeghiyan, Josef Pieprzyk
ePrint Report
Algebraic analysis of block ciphers aims at finding the secret key by solving a collection of polynomial equations that describe the internal structure of a cipher for chosen observations of plaintext/ciphertext pairs. Although algebraic attacks are already accepted as a standard test for block and stream ciphers, there is a lack of understanding of the impact of the algebraic representation of a cipher on the efficiency of solving the resulting collection of equations. This work investigates different S-box representations and their effect on the complexity of algebraic attacks. In particular, we observe that an S-box representation defined in this work as \textit{Forward-Backward} (FWBW) leads to a collection of equations that can be solved efficiently. We show that the $SR(10,2,1,4)$ cipher can be broken using the standard algebra software \textsc{Singular} and FGb. This is the best result achieved so far. We also investigate the effect of S-box descriptions for some lightweight block ciphers. Our study and experiments confirm the counter-intuitive conclusion that algebraic attacks work best for the FWBW S-box representation. This contradicts a common belief that algebraic attacks are more efficient for quadratic S-box representations.
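To make "algebraic representation" concrete, here is a small self-contained sketch (our illustration, not the paper's FWBW encoding): deriving the GF(2) algebraic normal form (ANF) of one S-box output bit, the kind of polynomial description such attacks feed to a solver.

```python
# Derive the ANF of an S-box output bit via the Moebius transform.
SBOX = [0x6, 0x5, 0xC, 0xA, 0x1, 0xE, 0x7, 0x9,
        0xB, 0x0, 0x3, 0xD, 0x8, 0xF, 0x4, 0x2]   # arbitrary 4-bit S-box

def moebius(truth: list[int]) -> list[int]:
    # Butterfly over GF(2): coeffs[m] becomes the coefficient of the
    # monomial whose variable set is the bit mask m.
    coeffs = list(truth)
    step = 1
    while step < len(coeffs):
        for i in range(0, len(coeffs), 2 * step):
            for j in range(i, i + step):
                coeffs[j + step] ^= coeffs[j]
        step *= 2
    return coeffs

bit0 = [SBOX[x] & 1 for x in range(16)]      # truth table of output bit 0
coeffs = moebius(bit0)

def evaluate(coeffs: list[int], x: int) -> int:
    # f(x) = XOR of coefficients of all monomials m contained in x.
    return sum(coeffs[m] for m in range(len(coeffs)) if m & x == m) % 2

# The recovered polynomial reproduces the truth table exactly:
assert all(evaluate(coeffs, x) == bit0[x] for x in range(16))
```

How such equations are written down, e.g. explicit ANF versus implicit quadratic relations versus the FWBW form studied above, strongly affects solver performance.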