International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

22 March 2018

Jens Groth, Markulf Kohlweiss, Mary Maller, Sarah Meiklejohn, Ian Miers
ePrint Report
By design, existing (pre-processing) zk-SNARKs embed a secret trapdoor in a relation-dependent common reference string (CRS). The trapdoor is exploited by a (hypothetical) simulator to prove the scheme is zero knowledge, and the secret-dependent structure facilitates a linear-size CRS and linear-time prover computation. If known by a real party, however, the trapdoor can be used to subvert the security of the system. The structured CRS that makes zk-SNARKs practical also makes deploying zk-SNARKs problematic, as it is difficult to argue why the trapdoor would not be available to the entity responsible for generating the CRS. Moreover, for pre-processing zk-SNARKs a new trusted CRS needs to be computed every time the relation is changed.

In this paper, we address both issues by proposing a model where a number of users can update a universal CRS. The updatable CRS model guarantees security if at least one of the users updating the CRS is honest. We provide both a negative result, showing that zk-SNARKs with private secret-dependent polynomials in the CRS cannot be updatable, and a positive result, constructing a zk-SNARK based on a CRS consisting only of secret-dependent monomials. The CRS is of quadratic size, is updatable, and is universal in the sense that it can be specialized into one or more relation-dependent CRSs of linear size with linear-time prover computation.
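To give intuition for the positive result, here is a toy sketch (illustrative only, not the paper's construction) of why a CRS made of monomials $g^{\tau^i}$ is updatable: a participant with a fresh secret $\alpha$ raises the $i$-th element to $\alpha^i$, obtaining $g^{(\alpha\tau)^i}$, so the combined trapdoor $\alpha\tau$ is known to nobody as long as at least one contributor discards their share.

    # Toy illustration of a monomial CRS update; the group and parameters
    # are illustrative only, NOT a cryptographically sound choice.
    import secrets

    P = 2**127 - 1   # a Mersenne prime, used here as a toy modulus
    G = 3            # toy generator of the multiplicative group mod P

    def setup(degree):
        """Initial CRS: [G^(tau^0), ..., G^(tau^degree)] for a secret tau."""
        tau = secrets.randbelow(P - 2) + 1              # the toxic trapdoor
        return [pow(G, pow(tau, i, P - 1), P) for i in range(degree + 1)]

    def update(crs):
        """Rerandomize coordinate-wise: (G^(tau^i))^(alpha^i) = G^((alpha*tau)^i)."""
        alpha = secrets.randbelow(P - 2) + 1            # updater's fresh secret
        return [pow(c, pow(alpha, i, P - 1), P) for i, c in enumerate(crs)]

    crs = update(update(setup(degree=4)))  # trapdoor-free if any updater is honest

A private polynomial term in the CRS, by contrast, admits no such coordinate-wise re-exponentiation, which is roughly the obstruction behind the negative result.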
Zvika Brakerski, Vadim Lyubashevsky, Vinod Vaikuntanathan, Daniel Wichs
ePrint Report
We present a worst-case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst-case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst-case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires an extremely high noise rate of $1/2-1/poly(n)$. Thus we can only show that ``very hard'' LPN is harder than some ``very mildly hard'' worst-case problem.

Specifically, we consider the $(n,m,w)$-nearest codeword problem ($(n,m,w)$-NCP), which takes as input a generating matrix for a binary linear code in $m$ dimensions and rank $n$, and a target vector which is very close to the code (Hamming distance at most $w$), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $w/m \approx {\log^2 n}/{n}$, $(n,m,w)$-NCP can be solved given oracle access to an LPN distinguisher with noise ratio $1/2-1/poly(n)$.
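Spelled out (with notation chosen here for illustration), the search problem reads:

    \[ \text{$(n,m,w)$-NCP: given } G \in \mathbb{F}_2^{m \times n} \text{ of rank } n \text{ and } t \in \mathbb{F}_2^{m} \text{ with } \min_{x} \mathrm{wt}(t \oplus Gx) \le w, \text{ output } \arg\min_{x \in \mathbb{F}_2^{n}} \mathrm{wt}(t \oplus Gx), \]

where $\mathrm{wt}(\cdot)$ denotes Hamming weight, so that $w/m$ is the relative error mentioned above.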

Our proof relies on a smoothing lemma for codes which we show to have further implications: We show that $(n,m,w)$-NCP with the aforementioned parameters lies in the complexity class $\text{Search-BPP}^{\text{SZK}}$ (i.e., it is reducible to a problem that has a statistical zero knowledge protocol), implying that it is unlikely to be $\mathcal{NP}$-hard. We then show that LPN with very low noise rate $\log^2(n)/n$ implies the existence of collision resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in $\text{BPP}^{\text{SZK}}$).
Rémi Géraud, David Naccache
ePrint Report
In this work we explore a combinatorial optimization problem stemming from the Naccache-Stern cryptosystem. We show that solving this problem results in bandwidth improvements, and suggest a polynomial-time approximation algorithm to find an optimal solution.

Our work suggests that using optimal radix encoding results in an asymptotic 50% increase in bandwidth.
Sebastian Meiser
ePrint Report
This technical report discusses three subtleties related to the widely used notion of differential privacy (DP). First, we discuss how the choice of a distinguisher influences the privacy notion and why we should always have a distinguisher if we consider approximate DP. Second, we draw a line between the very intuitive probabilistic differential privacy (with probability $1-\delta$ we have $\varepsilon$-DP) and the commonly used approximate differential privacy ($(\varepsilon,\delta)$-DP). Finally, we show that, and why, probabilistic differential privacy (and similar notions) is not closed under post-processing, which has significant implications for notions used in the literature.
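For reference, the two notions being contrasted admit the following standard formulations (stated for discrete outputs; $M$ denotes the mechanism and $D \sim D'$ neighboring databases). Approximate DP requires

    \[ \Pr[M(D) \in S] \le e^{\varepsilon}\,\Pr[M(D') \in S] + \delta \quad \text{for all sets } S, \]

while probabilistic DP requires that the privacy loss exceed $\varepsilon$ only with small probability over the mechanism's own randomness:

    \[ \Pr_{y \leftarrow M(D)}\left[\, \ln\frac{\Pr[M(D)=y]}{\Pr[M(D')=y]} > \varepsilon \,\right] \le \delta . \]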
Mark Zhandry
ePrint Report
The quantum random oracle model (QROM) has become the standard model in which to prove the post-quantum security of random-oracle-based constructions. Unfortunately, none of the known proof techniques allow the reduction to record information about the adversary's queries, a crucial feature of many classical ROM proofs, including all proofs of indifferentiability for hash function domain extension.

In this work, we give a new QROM proof technique that overcomes this ``recording barrier''. Our central observation is that when viewing the adversary's query and the oracle itself in the Fourier domain, an oracle query switches from writing to the adversary's space to writing to the oracle itself. This allows a reduction to simulate the oracle by simply recording information about the adversary's query in the Fourier domain.
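A two-qubit analogue of this basis-change observation: conjugating by Hadamards reverses the direction of a CNOT,

    \[ (H \otimes H)\,\mathrm{CNOT}_{1 \to 2}\,(H \otimes H) = \mathrm{CNOT}_{2 \to 1}, \]

and an XOR query $|x,y\rangle \mapsto |x, y \oplus H(x)\rangle$ is built from exactly such gates acting between the adversary's response register and the (purified) oracle register, so in the Fourier basis the query writes into the oracle's side.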

We then use this new technique to show the indifferentiability of the Merkle-Damgård domain extender for hash functions. Given the threat posed by quantum computers and the push toward quantum-resistant cryptosystems, our work represents an important tool for building efficient post-quantum cryptosystems.
Rosario Gennaro, Michele Minelli, Anca Nitulescu, Michele Orrù
ePrint Report
Zero-knowledge SNARKs (zk-SNARKs) are non-interactive proof systems with short (i.e., independent of the size of the witness) and efficiently verifiable proofs. They elegantly resolve the tension between individual privacy and public trust by providing an efficient way of demonstrating knowledge of secret information without actually revealing it. To this day, zk-SNARKs are widely deployed all over the planet and are used to keep alive a system worth billions of euros, namely the cryptocurrency Zcash. However, all current SNARK implementations rely on so-called pre-quantum assumptions and, for this reason, are not expected to withstand cryptanalytic efforts over the next few decades.

In this work, we introduce a new zk-SNARK that can be instantiated from lattice-based assumptions, and which is thus believed to be post-quantum secure. We provide a generalization, in the spirit of Gennaro et al. (Eurocrypt’13), of the SSP-based SNARK of Danezis et al. (Asiacrypt’14), relying on weaker computational assumptions. We focus on designated-verifier proofs and propose a protocol in which a proof consists of just 5 LWE encodings. We provide a concrete choice of parameters, showing that our construction is practically instantiable.
Rachid El Bansarkhani, Rafael Misoczki
ePrint Report
Hash-based signature schemes are the most promising cryptosystem candidates in a post-quantum world, but offer little structure to enable more sophisticated constructions such as group signatures. Group signatures allow a group member to anonymously sign messages on behalf of the whole group (as needed for anonymous remote attestation). In this work, we introduce G-Merkle, the first (stateful) hash-based group signature scheme. Our proposal relies on minimal assumptions, namely the existence of one-way functions, and offers performance equivalent to the Merkle single-signer setting. The public key size (as small as in the single-signer setting) outperforms all other post-quantum group signatures. Moreover, for $N$ group members issuing at most $B$ signatures each, a hash-based group signature is just as large as a Merkle signature with a tree composed of $N\cdot B$ leaf nodes. This directly translates into fast signing and verification engines. Unlike its lattice-based counterparts, our construction does not require any random oracle. Note that due to the randomized structure of our Merkle tree, the signature authentication paths have to be pre-stored or deduced from a public tree, a requirement that seems hard to circumvent. To conclude, we present implementation results to demonstrate the practicality of our proposal.
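For scale, the plain Merkle tree over $N \cdot B$ leaves that the comparison above refers to can be sketched as follows (a minimal single-signer illustration, not the G-Merkle construction itself):

    # Minimal Merkle tree over N*B leaves (illustrative sketch, not G-Merkle).
    import hashlib

    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def build_tree(leaves):
        """All levels of the tree; the number of leaves must be a power of two."""
        levels = [leaves]
        while len(levels[-1]) > 1:
            prev = levels[-1]
            levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
        return levels  # levels[-1][0] is the root, i.e. the (tiny) public key

    def auth_path(levels, index):
        """Sibling hashes authenticating leaf `index` against the root."""
        path = []
        for level in levels[:-1]:
            path.append(level[index ^ 1])   # sibling at the current level
            index //= 2
        return path

    def verify(root, leaf, index, path):
        node = leaf
        for sibling in path:
            node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
            index //= 2
        return node == root

    N, B = 4, 2                              # 4 members, 2 signatures each
    leaves = [H(f"leaf-{i}".encode()) for i in range(N * B)]
    levels = build_tree(leaves)
    assert verify(levels[-1][0], leaves[5], 5, auth_path(levels, 5))

A Merkle signature then consists of a one-time signature on the message plus such an authentication path of $\log_2(N \cdot B)$ hashes.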
Prabhanjan Ananth, Xiong Fan
ePrint Report
Attribute-based encryption (ABE) is an advanced encryption system with a built-in mechanism to generate keys associated with functions, which in turn provide restricted access to encrypted data. Most of the known candidates of attribute-based encryption model the functions as circuits. This results in significant efficiency bottlenecks, especially in the setting when the function associated with the ABE key admits a RAM program whose runtime is sublinear in the length of the attribute. In this work we study the notion of attribute-based encryption for random access machines (RAMs), introduced in the work of Goldwasser, Kalai, Popa, Vaikuntanathan and Zeldovich (Crypto 2013). We present a construction of attribute-based encryption for RAMs satisfying sublinear decryption complexity assuming learning with errors. This improves upon the work of Goldwasser et al., who achieved this result based on SNARKs and extractable witness encryption.

En route to constructing this primitive, we introduce the notion of controlled homomorphic recoding (CHR) schemes. We present a generic transformation from controlled homomorphic recoding schemes to attribute-based encryption for RAMs and then we show how to instantiate controlled homomorphic recoding schemes based on learning with errors.
Sam Kim, David J. Wu
ePrint Report
Non-interactive zero-knowledge (NIZK) proofs are fundamental to modern cryptography. Numerous NIZK constructions are known in both the random oracle and the common reference string (CRS) models. In the CRS model, there exist constructions from several classes of cryptographic assumptions such as trapdoor permutations, pairings, and indistinguishability obfuscation. Notably absent from this list, however, are constructions from standard lattice assumptions. While there has been partial progress in realizing NIZKs from lattices for specific languages, constructing NIZK proofs (and arguments) for all of NP from standard lattice assumptions remains open.

In this work, we make progress on this problem by giving the first construction of a multi-theorem NIZK argument for NP from standard lattice assumptions in the preprocessing model. In the preprocessing model, a (trusted) setup algorithm generates proving and verification keys. The proving key is needed to construct proofs and the verification key is needed to check proofs. In the multi-theorem setting, the proving and verification keys should be reusable for an unbounded number of theorems without compromising soundness or zero-knowledge. Existing constructions of NIZKs in the preprocessing model (or even the designated-verifier model) that rely on weaker assumptions like one-way functions or oblivious transfer are only secure in a single-theorem setting. Thus, constructing multi-theorem NIZKs in the preprocessing model does not seem to be inherently easier than constructing them in the CRS model.

We begin by constructing a multi-theorem preprocessing NIZK directly from context-hiding homomorphic signatures. Then, we show how to efficiently implement the preprocessing step using a new cryptographic primitive called blind homomorphic signatures. This primitive may be of independent interest. Finally, we show how to leverage our new lattice-based preprocessing NIZKs to obtain new malicious-secure MPC protocols purely from standard lattice assumptions.

21 March 2018

San Jose, USA, 9 January - 11 January 2019
Real World Crypto
Event date: 9 January to 11 January 2019
Be'er Sheva, Israel, 21 June - 22 June 2018
Event Calendar
Event date: 21 June to 22 June 2018
Submission deadline: 25 March 2018
Notification: 8 April 2018

15 March 2018

Hong Kong, China, 27 August - 29 August 2018
Event Calendar
Event date: 27 August to 29 August 2018
Submission deadline: 15 April 2018
Notification: 1 June 2018

14 March 2018

Borching Su
ePrint Report
A public blockchain is proposed in an attempt to enable coin holders to participate in verifying mathematical theorems for public access. Incentives are designed to encourage any party to contribute their knowledge by buying tokens of mathematical propositions that they believe are true. The proposed blockchain is a platform for people to exchange their beliefs about mathematical propositions. An implementation of this blockchain proposal, once established, will provide the general public easy and instant access to reliable knowledge without having to read difficult proofs or blindly trust a small number of experts. Conversely, experts from various fields may find it much easier to have their work appreciated by more people, leading to greater impact. Thanks to the incentives inherent in the blockchain, they can even earn significant rewards if they prove theorems that were not previously known to the blockchain. Foundations interested in the validity of a particular proposition not yet explicitly recorded on the blockchain can donate funds, which will be distributed to experts who contribute positive efforts toward solving the specified problems. Only those who erroneously create or buy tokens of a proposition that is eventually proven false will lose money. A reference design of the proposed blockchain that attempts to achieve the above-mentioned goals is described and reasoned about.
Douglas R. Stinson
ePrint Report
The purpose of this paper is to describe and analyze the Cayley-Purser algorithm, which is a public-key cryptosystem proposed by Flannery in 1999. I will present two attacks on it, one of which is apparently new. I will also examine a variant of the Cayley-Purser algorithm that was patented by Slavin in 2008, and show that it is also insecure.

13 March 2018

Derek Leung, Adam Suhl, Yossi Gilad, Nickolai Zeldovich
ePrint Report
Decentralized cryptocurrencies rely on participants to keep track of the state of the system in order to verify new transactions. As the number of users and transactions grows, this requirement places a significant burden on the users, as they need to download, verify, and store a large amount of data in order to participate.

Vault is a new cryptocurrency designed to minimize these storage and bootstrapping costs for participants. Vault builds on Algorand’s proof-of-stake consensus protocol and uses several techniques to achieve its goals. First, Vault decouples the storage of recent transactions from the storage of account balances, which enables Vault to delete old account state. Second, Vault allows sharding state across participants in a way that preserves strong security guarantees. Finally, Vault introduces the notion of stamping certificates that allow a new client to catch up securely and efficiently in a proof-of-stake system without having to verify every single block.

Experiments with a prototype implementation of Vault’s data structures show that Vault reduces the bandwidth cost of joining the network as a full client by 99.7% compared to Bitcoin and 90.5% compared to Ethereum when downloading a ledger containing 500 million transactions.
Michael Raskin, Mark Simkin
ePrint Report
In this work, we present a new approach to constructing Oblivious RAM (ORAM). Somewhat surprisingly, and despite the large amount of research interest that ORAM has received, all existing ORAM constructions are based on a handful of conceptually different approaches. We believe it is of theoretical and practical interest to explore new ways to construct this primitive.

Our first construction is conceptually extremely simple and has a worst-case bandwidth overhead of $\mathcal{O}\left(\sqrt{N} \frac{\log{N}}{\log{\log{N}}}\right)$, where $N$ is the number of data blocks.

Our main construction, Lookahead ORAM, has a worst-case bandwidth overhead of $\mathcal{O}\left(\sqrt{N}\right)$ and an additive storage overhead of $\sqrt{2N}$, which is the smallest concrete storage overhead among all existing ORAM constructions with sublinear worst-case bandwidth overhead. A small storage overhead is particularly beneficial in outsourced storage settings, where data owners have to pay their storage provider for the amount of storage they consume.
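To make the storage figure concrete (example numbers chosen here): for $N = 2^{20}$ data blocks, the additive overhead is $\sqrt{2N} = \sqrt{2^{21}} \approx 1448$ blocks, i.e. roughly $0.14\%$ of the outsourced blocks, independent of the block size.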

In addition to having a small storage overhead, Lookahead ORAM has perfect correctness, i.e., every operation on the ORAM data structure succeeds with probability 1, and, assuming the client stores a small stash of data locally, an online bandwidth overhead of $\mathcal{O}\left(1\right)$ without server-side computation. In comparison with prior work that has sublinear worst-case bandwidth overhead, Lookahead ORAM is asymptotically the most efficient ORAM with perfect correctness.
Patrick Longa
ePrint Report
We discuss several post-quantum authenticated key exchange protocols based on the supersingular isogeny problem. Leveraging the design of the well-studied schemes by Krawczyk (2003), Boyd et al. (2008), Fujioka et al. (2013), Krawczyk and Wee (2015), and others, we show how to use the Supersingular Isogeny Diffie-Hellman (SIDH) and Supersingular Isogeny Key Encapsulation (SIKE) protocols as basic building blocks to construct efficient and flexible authenticated key exchange schemes featuring different functionalities and levels of security.

This note is also intended to be a ``gentle'' introduction to supersingular isogeny based cryptography, and its most relevant constructions, for protocol designers and cryptographers.
Steven D. Galbraith
ePrint Report
We survey authenticated key exchange (AKE) in the context of supersingular isogeny Diffie-Hellman key exchange (SIDH). We discuss different approaches to achieve authenticated key exchange, and survey the literature. We explain some challenges that arise in the SIDH setting if one wants to do a ``Diffie-Hellman-like'' AKE, and present several candidate authenticated key exchange protocols suitable for SIDH. We also discuss some open problems.
Ayesha Khalid, James Howe, Ciara Rafferty, Francesco Regazzoni, Maire O'Neill
ePrint Report
Lattice-based cryptography, one of the leading candidates for post-quantum security, relies heavily on discrete Gaussian samplers to provide the uncertainty needed to obfuscate computations on secret information. For reconfigurable hardware, the cumulative distribution table (CDT) scheme has previously been shown to achieve the highest throughput and the smallest resource utilisation, easily outperforming other existing samplers. However, the CDT sampler does not scale well. In fact, for large parameters, the lookup tables required are far too large to be practically implemented. This research proposes a hierarchy of multiple smaller samplers, extending the Gaussian convolution lemma to compute optimal parameters, where the individual samplers require much smaller lookup tables. A large range of parameter sets, covering encryption, signatures, and key exchange, are evaluated. Hardware-optimised parameters are formulated and a practical implementation on a Xilinx Artix-7 FPGA device is realised. The proposed sampling designs demonstrate promising performance on reconfigurable hardware, even for large parameters that were otherwise thought infeasible.
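For background, the basic CDT method that the proposed hierarchy builds on is inverse-transform sampling from a precomputed cumulative table; a minimal floating-point sketch follows (illustrative only: real hardware designs use fixed-point tables, careful tail cuts, and constant-time lookups):

    # Illustrative CDT sampler for a centered discrete Gaussian; parameters
    # and floating-point arithmetic are for exposition, not deployment.
    import bisect
    import math
    import random

    def build_cdt(sigma, tail=12):
        """Cumulative table over x = 0..ceil(tail*sigma); a random sign is applied
        at sampling time, so the weight of x = 0 is halved to avoid double-counting."""
        bound = int(math.ceil(tail * sigma))
        weights = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(bound + 1)]
        weights[0] /= 2
        total = sum(weights)
        cdt, acc = [], 0.0
        for w in weights:
            acc += w
            cdt.append(acc / total)
        return cdt

    def sample(cdt):
        """Inverse transform: smallest x with u <= CDT[x], then a random sign."""
        u = random.random()
        x = min(bisect.bisect_left(cdt, u), len(cdt) - 1)
        return x if random.getrandbits(1) else -x

    cdt = build_cdt(sigma=3.2)   # illustrative standard deviation
    xs = [sample(cdt) for _ in range(100000)]

The paper's hierarchy then replaces one large table with several small ones, combining their outputs along the lines of the Gaussian convolution lemma (roughly, $x_1 + k x_2$ for a suitable constant $k$ behaves like a sample with a much larger standard deviation), so each lookup table stays small.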
Vienna, Austria, 20 August - 24 August 2018
Event Calendar
Event date: 20 August to 24 August 2018
Submission deadline: 10 May 2018
Notification: 20 May 2018