International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

11 September 2019

Rahim Toluee, Taraneh Eghlidos
ePrint Report
Multi-proxy multi-signature schemes are useful in distributed networks, where a group of users can cooperatively delegate their administrative rights to the users of another group, who are then authorized to generate proxy signatures cooperatively on behalf of the original signers. In this paper, we propose an ID-based lattice-based multi-proxy multi-signature (ILMPMS) scheme, which enjoys security against quantum computers and efficiency thanks to its ID-based framework, linear operations, and the possibility of parallel computations based on lattices. For this purpose, we first propose an ID-based lattice-based multi-signature scheme, used as the underlying signature in our ILMPMS scheme. We prove existential unforgeability of both schemes against adaptive chosen-message attacks in the random oracle model, based on the hardness of the learning with errors problem over standard lattices.
Aayush Jain, Huijia Lin, Christian Matt, Amit Sahai
ePrint Report
In this work, we introduce and construct $D$-restricted Functional Encryption (FE) for any constant $D \ge 3$, based only on the SXDH assumption over bilinear groups. This generalizes the notion of $3$-restricted FE recently introduced and constructed by Ananth et al. (ePrint 2018) in the generic bilinear group model.

A $D=(d+2)$-restricted FE scheme is a secret key FE scheme that allows an encryptor to efficiently encrypt a message of the form $M=(\vec{x},\vec{y},\vec{z})$. Here, $\vec{x}\in F_{p}^{d\times n}$ and $\vec{y},\vec{z}\in F_{p}^n$. Function keys can be issued for a function $f=\sum_{\vec{I}=(i_1,\ldots,i_d,j,k)} c_{\vec{I}}\cdot \vec{x}[1,i_1] \cdots \vec{x}[d,i_d] \cdot \vec{y}[j]\cdot \vec{z}[k]$ where the coefficients $c_{\vec{I}}\in F_{p}$. Knowing the function key and the ciphertext, one can learn $f(\vec{x},\vec{y},\vec{z})$, if this value is bounded in absolute value by some polynomial in the security parameter and $n$. The security requirement is that the ciphertext hides $\vec{y}$ and $\vec{z}$, although it is not required to hide $\vec{x}$. Thus $\vec{x}$ can be seen as a public attribute.
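To make the supported function class concrete, the following toy Python sketch (our illustration, not part of the paper) evaluates such a degree-$(d+2)$ polynomial over $F_p$ from a sparse list of coefficients $c_{\vec{I}}$; in the actual FE scheme the decryptor only recovers this value when it is small.

# Toy evaluation of the function class supported by D-restricted FE (D = d + 2).
# This only illustrates the functionality f(x, y, z); it is not an FE scheme.

def eval_restricted_poly(coeffs, x, y, z, p):
    """coeffs maps an index tuple I = (i_1, ..., i_d, j, k) to c_I in F_p;
    x is a d x n matrix, y and z are length-n vectors, all over F_p
    (rows and columns are 0-indexed here, unlike the 1-indexed notation above)."""
    total = 0
    for (*i_vec, j, k), c in coeffs.items():
        term = c
        for row, col in enumerate(i_vec):          # x[1, i_1] * ... * x[d, i_d]
            term = (term * x[row][col]) % p
        term = (term * y[j] * z[k]) % p            # * y[j] * z[k]
        total = (total + term) % p
    return total

# Small example with d = 2 (so D = 4), n = 3, p = 101, and two monomials.
p = 101
x = [[1, 2, 3], [4, 5, 6]]
y = [7, 8, 9]
z = [10, 11, 12]
coeffs = {(0, 1, 2, 0): 3, (2, 2, 0, 1): 5}
print(eval_restricted_poly(coeffs, x, y, z, p))    # 99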

$D$-restricted FE allows for useful evaluation of constant-degree polynomials, while only requiring the SXDH assumption over bilinear groups. As such, it is a powerful tool for leveraging hardness that exists in constant-degree expanding families of polynomials over $\mathbb{R}$. In particular, we build upon the work of Ananth et al. to show how to build indistinguishability obfuscation (iO) assuming only SXDH over bilinear groups, LWE, and assumptions relating to weak pseudorandom properties of constant-degree expanding polynomials over $\mathbb{R}$.
Yilei Chen, Nicholas Genise, Pratyay Mukherjee
ePrint Report
We study a relaxed notion of lattice trapdoor called an approximate trapdoor, which is defined to invert Ajtai's one-way function approximately instead of exactly. The primary motivation of our study is to improve the efficiency of cryptosystems built from lattice trapdoors, including hash-and-sign signatures.
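To illustrate the relaxed notion (with our own toy parameters, far from cryptographic sizes), the sketch below contrasts an exact preimage of Ajtai's function $f_A(x) = Ax \bmod q$ with an approximate preimage that is only required to hit the target up to a small error.

# Exact vs. approximate preimages of Ajtai's function f_A(x) = A x mod q.
# Toy parameters for illustration only.
import numpy as np

def centered_mod(v, q):
    """Map residues to the interval around zero."""
    return ((v + q // 2) % q) - q // 2

def is_preimage(A, x, u, q, beta, approx_bound=0):
    """x is a valid (approximate) preimage of u if x is short (infinity norm <= beta)
    and A x - u mod q is small (infinity norm <= approx_bound; 0 means exact)."""
    err = centered_mod(A @ x - u, q)
    return np.max(np.abs(x)) <= beta and np.max(np.abs(err)) <= approx_bound

rng = np.random.default_rng(0)
n, m, q, beta = 4, 16, 97, 3
A = rng.integers(0, q, size=(n, m))
x = rng.integers(-beta, beta + 1, size=m)               # a short vector
u_exact = A @ x % q                                      # its exact image
u_near = (u_exact + rng.integers(-2, 3, size=n)) % q     # a slightly perturbed target

print(is_preimage(A, x, u_exact, q, beta))                  # True: exact preimage
print(is_preimage(A, x, u_near, q, beta, approx_bound=2))   # True: approximate preimage
print(is_preimage(A, x, u_near, q, beta, approx_bound=0))   # False unless the perturbation was zero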

Our main contribution is to construct an approximate trapdoor by modifying the gadget trapdoor proposed by Micciancio and Peikert. In particular, we show how to use the approximate gadget trapdoor to sample short preimages from a distribution that is simulatable without knowing the trapdoor. The analysis of the distribution uses a theorem (implicitly used in past works) regarding linear transformations of discrete Gaussians on lattices.

Our approximate gadget trapdoor can be used together with the existing optimization techniques to improve the concrete performance of the hash-and-sign signature in the random oracle model under (Ring-)LWE and (Ring-)SIS assumptions. Our implementation shows that the sizes of the public-key and signature can be reduced by half from those in schemes built from exact trapdoors.
Divesh Aggarwal, Bogdan Ursu, Serge Vaudenay
ePrint Report
There is a large gap between theory and practice in the complexities of sieving algorithms for solving the shortest vector problem in an arbitrary Euclidean lattice. In this paper, we work towards reducing this gap by providing theoretical refinements of the time and space complexity bounds in the context of the approximate shortest vector problem. This is achieved by relaxing the requirements on the AKS algorithm, rather than on the ListSieve, resulting in exponentially smaller bounds starting from $\mu\approx 2$, for constant values of $\mu$. We also explain why these improvements carry over to give the fastest quantum algorithms for the approximate shortest vector problem.
Marcel Tiepelt, Alan Szepieniec
ePrint Report
In this work we analyze the impact of translating the well-known LLL algorithm for lattice reduction into the quantum setting. We present the first (to the best of our knowledge) quantum circuit representation of a lattice reduction algorithm in the form of explicit quantum circuits implementing the textbook LLL algorithm. Our analysis identifies a set of challenges arising from constructing reversible lattice reduction as well as solutions to these challenges. We give a detailed resource estimate with the Toffoli gate count and the number of logical qubits as complexity metrics.

As an application, we attack Mersenne number cryptosystems by Groverizing an attack due to Beunardeau et al. that uses LLL as a subprocedure. While Grover's quantum algorithm promises a quadratic speedup over exhaustive search given access to an oracle that distinguishes solutions from non-solutions, we show that in our case, realizing the oracle comes at the cost of a large number of qubits. When an adversary translates the attack by Beunardeau et al. into the quantum setting, the overhead of the quantum LLL circuit may be as large as $2^{52}$ qubits for the textbook implementation and $2^{33}$ for a floating-point variant.
Mojtaba Khalili, Daniel Slamanig
ePrint Report
We show how to construct structure-preserving signatures (SPS) and unbounded quasi-adaptive non-interactive zero-knowledge (USS QA-NIZK) proofs with a tight security reduction to simple assumptions, being the first with a security loss of $\mathcal{O}(1)$. Specifically, we present an SPS scheme which is more efficient than existing tightly secure SPS schemes and, from an efficiency point of view, is even comparable with other non-tight SPS schemes. In contrast to existing work, however, we achieve a security loss of only $\mathcal{O}(1)$, resolving an open problem posed by Abe et al. (CRYPTO 2017). In particular, our tightly secure SPS scheme under the SXDH assumption requires 11 group elements. Moreover, we present the first tightly secure USS QA-NIZK proofs with a security loss of $\mathcal{O}(1)$ which simultaneously have a compact common reference string and constant-size proofs (5 elements under the SXDH assumption, which is only one element more than the best non-tight USS QA-NIZK).

From a technical perspective, we present a novel randomization technique, inspired by the Naor-Yung paradigm and adaptive partitioning, to obtain a randomized pseudorandom function (PRF). In particular, our PRF uses two copies under different keys but with shared randomness. Then we adopt ideas of Kiltz, Pan and Wee (CRYPTO 2015), who base their SPS on a randomized PRF, but in contrast to their non-tight reduction, our approach allows us to achieve tight security. Similarly, we construct the first compact USS QA-NIZK proofs by adopting techniques from Kiltz and Wee (EUROCRYPT 2015). We believe that the techniques introduced in this paper to obtain tight security with a loss of $\mathcal{O}(1)$ will have value beyond our proposed constructions.
Gilad Asharov, Naomi Ephraim, Ilan Komargodski, Rafael Pass
ePrint Report
We give a method to transform any indistinguishability obfuscator that suffers from correctness errors into an indistinguishability obfuscator that is $\textit{perfectly}$ correct, assuming hardness of Learning With Errors (LWE). The transformation requires sub-exponential hardness of the obfuscator and of LWE. Our technique also applies to eliminating correctness errors in general-purpose functional encryption schemes, but here it is sufficient to rely on the polynomial hardness of the given scheme and of LWE. Both of our results can be based $\textit{generically}$ on any perfectly correct, single-key, succinct functional encryption scheme (that is, a scheme supporting Boolean circuits where encryption time is a fixed polynomial in the security parameter and the message size), in place of LWE.

Previously, Bitansky and Vaikuntanathan (EUROCRYPT ’17) showed how to achieve the same task using a derandomization-type assumption (concretely, the existence of a function with deterministic time complexity $2^{O(n)}$ and non-deterministic circuit complexity $2^{\Omega(n)}$) which is non-game-based and non-falsifiable.
Dor Bitan, Shlomi Dolev
ePrint Report
We present preprocessing-MPC schemes of arithmetic functions with optimal round complexity, function-independent correlated randomness, and communication and space complexities that grow linearly with the size of the function. We extend our results to the client-server model and present a scheme which enables a user to outsource the storage of confidential data to $N$ distrusted servers and have the servers perform computations over the encrypted data in a single round of communication. We further extend our results to handle Boolean circuits. All our schemes have perfect passive security against coalitions of up to $N-1$ parties. Our schemes are based on a novel secret sharing scheme, Distributed Random Matrix (DRM), which we present here. The DRM secret sharing scheme supports homomorphic multiplications, and, after a single round of communication, supports homomorphic additions.

Our approach deviates from standard conventions of MPC. First, we consider a representation of the function $f$ as a multivariate polynomial (rather than an arithmetic circuit). Second, we divide the problem into two cases. We begin by solving the Non-Vanishing case, in which the inputs are non-zero elements of $F_p$. In this case, our schemes have space complexity $O(nkN)$ and communication complexity $O(nkN^2)$, where $n$ is the size of the input and $k$ is the number of monomials of the function. Then, we present several solutions for the general case, in which some of the secrets can be zero. In these solutions, the space and communication complexities are either $O(nkN^2 2^n)$ and $O(nkN^3 2^n)$, or $O(nKN)$ and $O(nKN^2)$, respectively, where $K$ is the size of a modified version of $f$; $K$ is bounded by the square of the maximal possible size of $k$.
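Since $f$ is handled as a multivariate polynomial rather than a circuit, the parameter $k$ is simply the number of monomials in the representation sketched below (ours, not code from the paper).

# Monomial-list representation of a multivariate polynomial over F_p;
# k is just the number of entries. Illustration only.

def eval_poly(monomials, inputs, p):
    """Evaluate f(x_1, ..., x_n) = sum_e c_e * prod_i x_i^{e_i} over F_p."""
    total = 0
    for exponents, coeff in monomials.items():
        term = coeff
        for x_i, e_i in zip(inputs, exponents):
            term = (term * pow(x_i, e_i, p)) % p
        total = (total + term) % p
    return total

p = 101
f = {(1, 2, 0): 3, (0, 0, 1): 5}        # f(x1, x2, x3) = 3*x1*x2^2 + 5*x3, so k = 2
print(eval_poly(f, [2, 3, 4], p))       # 3*2*9 + 5*4 = 74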
Dor Bitan, Shlomi Dolev
ePrint Report
Homomorphic encryption (HE) schemes enable processing of encrypted data and may be used by a user to outsource storage and computations to an untrusted server. A plethora of HE schemes has been suggested in the past four decades, based on various assumptions and achieving different attributes. In this work, we assume that the user and server are quantum computers and look for HE schemes for classical data. We set a high bar of requirements and ask what can be achieved under these requirements. Namely, we look for HE schemes which are efficient, information-theoretically (IT) secure, perfectly correct, and which support homomorphic operations in a fully-compact and non-interactive way. Fully-compact means that decryption costs O(1) time and space. To the best of our knowledge, there is no known scheme which fulfills all the above requirements. We suggest an encryption scheme based on random bases and discuss the homomorphic properties of that scheme. We demonstrate the usefulness of random bases in an efficient and secure QKD protocol and other applications. In particular, our QKD scheme offers improved security in the face of weak measurements.
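As background on the role of random bases, the basis-sifting step of a generic BB84-style QKD protocol can be simulated classically in a few lines; this is a standard textbook illustration, not the specific protocol proposed in the paper.

# Classical simulation of BB84-style basis sifting: only positions where Alice's and
# Bob's randomly chosen bases coincide contribute to the shared key (ideal channel,
# no eavesdropper). Generic illustration, not the paper's protocol.
import numpy as np

rng = np.random.default_rng(0)
n = 32

alice_bits = rng.integers(0, 2, size=n)     # raw key bits
alice_bases = rng.integers(0, 2, size=n)    # 0 = rectilinear, 1 = diagonal
bob_bases = rng.integers(0, 2, size=n)      # Bob measures in independently random bases

match = alice_bases == bob_bases            # basis choices are compared publicly afterwards
sifted_key = alice_bits[match]              # on matching positions Bob's results agree

print(f"kept {match.sum()} of {n} positions")
print(sifted_key)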
Jintai Ding, Joshua Deaton, Zheng Zhang, Kurt Schmidt, Vishakha
ePrint Report
In 1998, Jeffrey Hoffstein, Jill Pipher, and Joseph H. Silverman introduced the famous Ntru cryptosystem and called it "a ring-based public key cryptosystem". It actually turns out to be a lattice-based cryptosystem that is resistant to Shor's algorithm. There are several modifications of the original Ntru, and two of them have been selected as round 2 candidates of the NIST post-quantum public key scheme standardization.

In this paper, we present a simple attack on the original Ntru scheme. The idea comes from Ding et al.'s key mismatch attack. Essentially, an adversary can find information on the private key of a KEM by not encrypting a message as intended, but in a manner which will cause a failure in decryption if the private key is of a certain form. In Ntru, the encrypter generates a random polynomial with "small" coefficients; we will instead have the coefficients be "large". After this, some further work will create an equivalent key.
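To see why an enlarged random polynomial is dangerous, recall that textbook Ntru decryption computes $a = p\cdot r*g + f*m \pmod q$ and recovers $m$ only if every coefficient of $p\cdot r*g + f*m$, taken over the integers, lies in $(-q/2, q/2)$. The toy check below (our own illustration with tiny, non-cryptographic parameters, not the paper's attack code) shows how "large" coefficients in $r$ push this quantity out of range and trigger decryption failures whose pattern depends on the secret $f$ and $g$.

# Toy illustration of the Ntru decryption-correctness condition: decryption of
# e = p*r*h + m succeeds iff every coefficient of p*r*g + f*m (over the integers)
# lies in (-q/2, q/2). Tiny parameters, illustration only.
import numpy as np

N, p, q = 11, 3, 128
rng = np.random.default_rng(1)

def convolve(a, b):
    """Cyclic convolution in Z[x]/(x^N - 1)."""
    c = np.zeros(N, dtype=np.int64)
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] += a[i] * b[j]
    return c

def decryption_ok(f, g, r, m):
    a = p * convolve(r, g) + convolve(f, m)
    return int(np.max(np.abs(a))) < q // 2

def ternary(num_ones, num_neg):
    v = np.array([1] * num_ones + [-1] * num_neg + [0] * (N - num_ones - num_neg))
    return rng.permutation(v)

f, g, m = ternary(4, 3), ternary(4, 3), ternary(3, 3)
r_small = ternary(3, 3)                       # honest "small" random polynomial
r_large = rng.integers(-20, 21, size=N)       # adversarially enlarged coefficients

print(decryption_ok(f, g, r_small, m))   # True: honest ciphertexts always decrypt here
print(decryption_ok(f, g, r_large, m))   # typically False: failures depend on f and g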
Sean Bowe, Jack Grigg, Daira Hopwood
ePrint Report
Non-interactive proofs of knowledge allow us to publicly demonstrate the faithful execution of arbitrary computations. SNARKs have the additional property of succinctness, meaning that the proofs are short and fast to verify even when the computations involved are large. This property raises the prospect of recursive proof composition: proofs that verify other proofs. All previously known realizations of recursive proof composition have required a trusted setup and cycles of expensive pairing-friendly elliptic curves.

We obtain the first practical example of recursive proof composition without a trusted setup, using only ordinary cycles of elliptic curves. Our primary contribution is a novel technique for amortizing away expensive verification procedures from within the proof verification cycle so that we could obtain recursion using a composition of existing protocols and techniques. We devise a technique for amortizing the cost of verifying multiple inner product arguments which may be of independent interest.
Alexander Vlasov, Konstantin Panarin
ePrint Report
We introduce a novel efficient and transparent construction of a polynomial commitment scheme. A polynomial commitment scheme allows one side (the prover) to commit to a polynomial of predefined degree $d$ with a string that can later be used by another side (the verifier) to confirm claimed evaluations of the committed polynomial at specific points. Efficiency means that the communication cost of the interaction between prover and verifier during the protocol is very small compared to sending the whole committed polynomial itself, and is polylogarithmic in our case. Transparency means that our scheme does not require any preliminary trusted setup ceremony. We explicitly state that our polynomial commitment scheme is not hiding, although zero knowledge can be achieved at the application level in most cases.
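To fix the syntax of the primitive, here is a deliberately naive baseline in Python (ours, not the paper's construction): it is binding under hash collision resistance and non-hiding, but the opening ships the entire polynomial, which is exactly the inefficiency the paper's transparent, polylogarithmic scheme avoids.

# Naive polynomial commitment baseline (commit / open / verify). Binding under
# collision resistance of the hash, non-hiding, and NOT succinct: the "proof"
# is the whole coefficient vector. Illustration of the interface only.
import hashlib

P = 2**61 - 1                                    # prime field for evaluations (illustrative)

def _hash_coeffs(coeffs):
    return hashlib.sha256(b"|".join(str(c).encode() for c in coeffs)).hexdigest()

def commit(coeffs):
    return _hash_coeffs(coeffs)                  # short commitment string

def open_at(coeffs, x):
    value = 0
    for c in reversed(coeffs):                   # Horner evaluation mod P
        value = (value * x + c) % P
    return value, list(coeffs)                   # claimed value + (non-succinct) proof

def verify(com, x, value, proof):
    if _hash_coeffs(proof) != com:               # the proof must match the commitment
        return False
    v = 0
    for c in reversed(proof):
        v = (v * x + c) % P
    return v == value

coeffs = [3, 0, 2, 5]                            # f(X) = 3 + 2X^2 + 5X^3
com = commit(coeffs)
val, proof = open_at(coeffs, 10)
print(verify(com, 10, val, proof))               # True
print(verify(com, 10, (val + 1) % P, proof))     # False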
Yongha Son, Jung Hee Cheon
ePrint Report
In the practical use of Learning With Errors (LWE) based cryptosystems, it is quite common to choose the secret to be extremely small: one popular choice is a ternary ($\pm 1, 0$) coefficient vector, and some schemes further use a ternary vector with only a small number of nonzero coefficients, which is called a sparse ternary vector. This use of small secrets also benefits attack algorithms against LWE, and currently LWE-based cryptosystems, including homomorphic encryption (HE) schemes, set parameters based on the complexity of those improved attacks.
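For concreteness, the secret distributions discussed here are typically sampled as in the following sketch (parameter names are ours).

# Sampling ternary and sparse ternary LWE secrets (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def ternary_secret(n):
    """Uniform ternary secret: each coefficient in {-1, 0, 1}."""
    return rng.integers(-1, 2, size=n)

def sparse_ternary_secret(n, h):
    """Ternary secret with exactly h nonzero (+1/-1) coefficients, i.e. Hamming weight h."""
    s = np.zeros(n, dtype=np.int64)
    positions = rng.choice(n, size=h, replace=False)
    s[positions] = rng.choice([-1, 1], size=h)
    return s

s = sparse_ternary_secret(65536, 64)     # the HE-style parameters discussed below
print(np.count_nonzero(s))               # 64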

In this work, we revisit the well-known Howgrave-Graham hybrid attack, which was originally designed to solve the NTRU problem, with respect to the sparse and ternary secret LWE case, and we also refine the previous analysis of the hybrid attack in line with the LWE setting. Moreover, building upon our analysis, we estimate the attack complexity of the hybrid attack for several LWE parameters. As a result, we argue that the currently used HE parameters should be raised to maintain the same security level when the hybrid attack is taken into account; for example, the parameter set $(n, \log q, \sigma) = (65536, 1240, 3.2)$ with Hamming weight of secret key $h = 64$, which was estimated to satisfy $\ge 128$ bit-security by the previously considered attacks, is newly estimated to provide only $113$ bit-security under the hybrid attack.

10 September 2019

Asiacrypt
ASIACRYPT 2019, the 25th Annual International Conference on the Theory and Application of Cryptology and Information Security, will take place at Kobe Portopia Hotel in the city of Kobe, Japan, December 8-12, 2019.

The conference webpage: https://asiacrypt.iacr.org/2019/.

Registration
ASIACRYPT 2019 registrations are now open at https://asiacrypt.iacr.org/2019/registration.html. The early registration deadline is November 8, 2019.

Technical Program
We are pleased to announce that Krzysztof Pietrzak (IST Austria) and Elaine Shi (Cornell University) will give us special lectures. Please visit https://asiacrypt.iacr.org/2019/invitedtalks.html for the invited speakers and talks. A list of accepted papers will appear on the conference site shortly. Stay tuned.

Venue and Travel
All technical sessions and social events take place at Kobe Portopia Hotel. The venue is easily reachable from three nearby airports (KIX, UKB and ITM) by public transportation. Consult https://asiacrypt.iacr.org/2019/travel.html for additional and detailed guidelines.

Accommodations
There are many hotels in the Kobe downtown area in a variety of price ranges. A limited number of rooms at the Kobe Portopia Hotel have been reserved for participants of the conference. Find more information at https://asiacrypt.iacr.org/2019/accommodations.html. Early booking is highly recommended.

Student Stipends
All student presenters of an accepted paper will have their registration fees waived. In addition, a limited number of stipends are available to those unable to obtain funding to join the conference. If such assistance is needed, please read https://asiacrypt.iacr.org/2019/stipends.html and contact the general chair.

Julia Kastner, Jiaxin Pan
ePrint Report
The Generic Group Model (GGM) is one of the most important tools for analyzing the hardness of a cryptographic problem. Although a proof in the GGM provides a certain degree of confidence in the problem's hardness, it is a rather strong and limited model, since it does not allow an algorithm to exploit any property of the group structure. To bridge the gap between the GGM and the Standard Model, Fuchsbauer, Kiltz, and Loss proposed a model, called the Algebraic Group Model (AGM, CRYPTO 2018). In the AGM, an adversary can take advantage of the group structure, but it needs to provide a representation of its group element outputs, which seems weaker than the GGM but stronger than the Standard Model. Due to this additional information we learn about the adversary, the AGM allows us to derive simple but meaningful security proofs.
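Concretely, whenever an algebraic adversary outputs a group element $Z$, it must also output exponents $(z_1,\dots,z_t)$ such that $Z=\prod_i g_i^{z_i}$, where the $g_i$ are the group elements it has received so far. A toy consistency check in a prime-order subgroup of $\mathbb{Z}_p^*$ (our own illustration, with tiny parameters) looks as follows.

# Checking an algebraic adversary's representation Z = prod_i g_i^{z_i} in a
# prime-order subgroup of Z_p^*. Toy parameters, illustration only.
p = 1019                       # prime; the subgroup of squares has prime order q = 509
q = (p - 1) // 2
g = 4                          # generator of the order-q subgroup

def check_representation(Z, basis, exponents, p):
    """Accept iff Z equals the claimed product of powers of the received elements."""
    acc = 1
    for g_i, z_i in zip(basis, exponents):
        acc = (acc * pow(g_i, z_i, p)) % p
    return acc == Z

basis = [g, pow(g, 7, p), pow(g, 13, p)]              # elements the adversary has seen
Z = pow(g, 3 + 7 * 5 + 13 * 2, p)                     # it "computed" basis[0]^3 * basis[1]^5 * basis[2]^2
print(check_representation(Z, basis, [3, 5, 2], p))   # True: representation is consistent
print(check_representation(Z, basis, [3, 5, 1], p))   # False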

In this paper, we take the first step to bridge the gap between the AGM and the Standard Model. We instantiate the AGM under Standard Assumptions. More precisely, we construct two algebraic groups under the Knowledge of Exponent Assumption (KEA). In addition to the KEA, our first construction requires symmetric pairings, and our second construction needs an additively homomorphic Non-Interactive Zero-Knowledge (NIZK) argument system, which can be implemented by a standard variant of the Diffie-Hellman Assumption in the asymmetric pairing setting. Furthermore, we show that both of our constructions provide cryptographic hardness which can be used to construct secure cryptosystems. We note that the KEA provably holds in the GGM. Our results show that, instead of instantiating the seemingly complex AGM directly, one can try to instantiate the GKEA under falsifiable assumptions in the Standard Model. Thus, our results can serve as a stepping stone towards instantiating the AGM under falsifiable assumptions.
Mihir Bellare, Wei Dai, Lucy Li
ePrint Report
We bypass impossibility results for the deterministic encryption of public-key-dependent messages, showing that, in this setting, the classical Encrypt-with-Hash scheme provides message-recovery security, across a broad range of message distributions. The proof relies on a new variant of the forking lemma in which the random oracle is reprogrammed on just a single fork point rather than on all points past the fork.
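For reference, Encrypt-with-Hash derandomises a randomised public-key scheme by deriving the encryption coins from a hash of the public key and the message. The sketch below instantiates the transform over toy exponent-ElGamal (our own illustration with tiny parameters, not the paper's formal model).

# Encrypt-with-Hash over a toy ElGamal-style PKE: the encryption coins are H(pk || m),
# so encryption becomes deterministic. Tiny parameters, illustration only.
import hashlib

p = 1019                           # small prime; the subgroup of squares has prime order q = 509
q = (p - 1) // 2
g = 4                              # generator of the order-q subgroup

def keygen(sk):
    return pow(g, sk, p)           # pk = g^sk

def hash_to_coins(pk, m):
    h = hashlib.sha256(f"{pk}||{m}".encode()).digest()
    return int.from_bytes(h, "big") % q            # coins r derived from (pk, m)

def ewh_encrypt(pk, m):
    """Deterministic encryption; the message is encoded in the exponent."""
    r = hash_to_coins(pk, m)
    return pow(g, r, p), (pow(g, m, p) * pow(pk, r, p)) % p

pk = keygen(123)
print(ewh_encrypt(pk, 42) == ewh_encrypt(pk, 42))   # True: same message, same ciphertext
print(ewh_encrypt(pk, 42) == ewh_encrypt(pk, 43))   # False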
Elena Kirshanova, Erik Mårtensson, Eamonn W. Postlethwaite, Subhayan Roy Moulik
ePrint Report
The Shortest Vector Problem (SVP) is one of the mathematical foundations of lattice based cryptography. Lattice sieve algorithms are amongst the foremost methods of solving SVP. The asymptotically fastest known classical and quantum sieves solve SVP in a \(d\)-dimensional lattice in \(2^{cd + o(d)}\) time steps with \(2^{c'd + o(d)}\) memory for constants \(c, c'\). In this work, we give various quantum sieving algorithms that trade computational steps for memory.

We first give a quantum analogue of the classical \(k\)-Sieve algorithm [Herold--Kirshanova--Laarhoven, PKC'18] in the Quantum Random Access Memory (QRAM) model, achieving an algorithm that heuristically solves SVP in \(2^{0.2989d + o(d)}\) time steps using \(2^{0.1395d + o(d)}\) memory. This should be compared to the state-of-the-art algorithm [Laarhoven, Ph.D Thesis, 2015] which, in the same model, solves SVP in \(2^{0.2653d + o(d)}\) time steps and memory. In the QRAM model these algorithms can be implemented using \(poly(d)\) width quantum circuits.

Secondly, we frame the \(k\)-Sieve as the problem of \(k\)-clique listing in a graph and apply quantum \(k\)-clique finding techniques to the \(k\)-Sieve.

Finally, we explore the large quantum memory regime by adapting parallel quantum search [Beals et al., Proc. Roy. Soc. A'13] to the \(2\)-Sieve and giving an analysis in the quantum circuit model. We show how to heuristically solve SVP in \(2^{0.1037d + o(d)}\) time steps using \(2^{0.2075d + o(d)}\) quantum memory.
Eleftherios Kokoris-Kogias, Alexander Spiegelman, Dahlia Malkhi, Ittai Abraham
ePrint Report
In this paper, we present the first fully asynchronous distributed key generation (ADKG) algorithm, as well as the first distributed key generation algorithm that can create keys with a dual $(f,2f+1)$-threshold that are necessary for scalable consensus (which so far needs a trusted dealer assumption).

In order to create a DKG with a dual $(f,2f+1)$-threshold, we first answer in the affirmative the open question posed by Cachin et al. of how to create an AVSS protocol with recovery thresholds $f+1 < k \le 2f+1$, which is of independent interest. Our High-threshold-AVSS (\textit{HAVSS}) uses an asymmetric bi-variate polynomial, where the shared secret is hidden from any set of $k$ nodes, but an honest node that did not participate in the sharing phase can still recover its share with only $n-2f$ shares and hence be able to contribute to the secret reconstruction.
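For orientation, the threshold behaviour generalises ordinary polynomial secret sharing, where a degree-$t$ polynomial hides the secret from any $t$ nodes and any $t+1$ shares recover it; a minimal Shamir-style recovery sketch over a prime field is shown below. HAVSS itself replaces this with an asymmetric bi-variate polynomial to obtain the dual thresholds.

# Plain Shamir-style sharing and recovery over F_P (for orientation only; HAVSS uses
# an asymmetric bivariate polynomial to obtain the dual (f, 2f+1) thresholds).
import random

P = 2**31 - 1                      # prime modulus (illustrative)

def share(secret, t, n):
    """Degree-t polynomial with constant term `secret`; share i is its value at i."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at 0 from any t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=1234, t=2, n=5)
print(recover(shares[:3]))         # 1234: any t+1 = 3 shares suffice
print(recover(shares[1:4]))        # 1234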

Another building block for ADKG is a novel \textit{Eventually Perfect} Common Coin (EPCC) abstraction and protocol that enables the participants to create a common coin that might fail to agree at most $f+1$ times (even if invoked a polynomial number of times). Using \textit{EPCC} we implement an Eventually Efficient Asynchronous Binary Agreement (EEABA) in which each instance takes $O(n^2)$ bits and $O(1)$ rounds in expectation, except for at most $f+1$ instances which may take $O(n^4)$ bits and $O(n)$ rounds in total.

Using EEABA we construct the first fully Asynchronous Distributed Key Generation (ADKG) which has the same overhead and expected runtime as the best partially-synchronous DKG ($O(n^4)$ words, $O(n)$ rounds). As a corollary of our ADKG, we can also create the first Validated Asynchronous Byzantine Agreement (VABA) in the authenticated setting that does not need a trusted dealer to set up threshold signatures of degree $n-f$. Our VABA has an overhead of expected $O(n^2)$ words and $O(1)$ time per instance, after an initial $O(n^4)$ words and $O(n)$ time bootstrap via ADKG.
Estuardo Alpirez Bock, Chris Brzuska, Marc Fischlin, Christian Janson, Wil Michiels
ePrint Report
The goal of white-box cryptography is to provide security even when the cryptographic implementation is executed in adversarially controlled environments. White-box implementations nowadays appear in commercial products such as mobile payment applications, e.g., those certified by Mastercard. Interestingly, there, white-box cryptography is championed as a tool for secure storage of payment tokens, and importantly, the white-boxed storage functionality is bound to a hardware functionality to prevent code-lifting attacks.

In this paper, we show that the approach of using hardware binding and obfuscation for secure storage is conceptually sound. Following security specifications by Mastercard, we first define security for a white-box key derivation function (WKDF) that is bound to a hardware functionality. WKDFs with hardware binding model a secure storage functionality, as the WKDFs in turn can be used to derive encryption keys for secure storage. We then provide a proof-of-concept construction of WKDFs based on pseudorandom functions (PRF) and obfuscation. To show that our use of cryptographic primitives is sound, we perform a cryptographic analysis and reduce the security of our WKDF to the cryptographic assumptions of indistinguishability obfuscation and PRF-security. The hardware functionality that our WKDF is bound to is a PRF-like functionality. Obfuscation helps us to hide the secret key used for the verification, essentially emulating a signature functionality as is provided by the Android key store.
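The functional shape of such a hardware-bound key derivation can be sketched as follows (our own HMAC-based schematic; the paper's WKDF is built from a PRF and obfuscation and comes with a security proof, none of which this toy conveys).

# Schematic of hardware-bound key derivation: a useful key only comes out on a device
# that can answer the hardware challenge. Illustration only, not the paper's WKDF.
import hmac, hashlib, os

K_HW = os.urandom(32)         # secret held by the hardware functionality
K_WB = os.urandom(32)         # key embedded in the (white-box) software

def hardware_prf(challenge: bytes) -> bytes:
    """Stands in for the device-bound PRF-like functionality."""
    return hmac.new(K_HW, challenge, hashlib.sha256).digest()

def wkdf(context: bytes, hw_response: bytes) -> bytes:
    """Derive a storage/encryption key from the software key and the hardware response,
    so code-lifting the software alone yields a different, useless key."""
    return hmac.new(K_WB, hw_response + b"||" + context, hashlib.sha256).digest()

challenge = b"session-001"
key_on_device = wkdf(b"payment-token", hardware_prf(challenge))
key_lifted = wkdf(b"payment-token", b"\x00" * 32)    # software lifted to another device
print(key_on_device != key_lifted)                   # True: the derived keys differ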

We rigorously define the required security properties of a hardware-bound white-box payment application (WPAY) for generating and encrypting valid payment requests. We construct a WPAY, which uses a WKDF as a secure building block. We thereby show that a WKDF can be securely combined with any secure symmetric encryption scheme, including those based on standard ciphers such as AES.
Carolyn Whitnall, Elisabeth Oswald
ePrint Report
The ISO standardisation of `Testing methods for the mitigation of non-invasive attack classes against cryptographic modules' (ISO/IEC 17825:2016) specifies the use of the Test Vector Leakage Assessment (TVLA) framework as the sole measure to assess whether or not an implementation of (symmetric) cryptography is vulnerable to differential side-channel attacks. It is the only publicly available standard of this kind, and the first side-channel assessment regime to exclusively rely on a TVLA instantiation.

TVLA essentially specifies statistical leakage detection tests with the aim of removing the burden of having to test against an ever increasing number of attack vectors. It offers the tantalising prospect of `conformance testing': if a device passes TVLA, then, one is led to hope, the device would be secure against all (first-order) differential side-channel attacks.
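For readers unfamiliar with TVLA, its statistical core is a fixed-versus-random Welch's $t$-test on measured traces, with leakage flagged when $|t|$ exceeds a threshold (commonly 4.5). A minimal sketch on simulated single-point traces is given below (ours, for illustration; the paper's concern is precisely the limits of relying on such a test alone).

# Fixed-vs-random Welch's t-test, the statistical core of TVLA-style leakage detection.
# Simulated single-point "traces"; |t| > 4.5 is the conventional detection threshold.
import numpy as np

def welch_t(a, b):
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rng = np.random.default_rng(0)
n, noise = 10_000, 2.0

# Leaky device: the measurement mean depends slightly on the input class.
fixed_leaky = rng.normal(loc=0.20, scale=noise, size=n)
random_leaky = rng.normal(loc=0.00, scale=noise, size=n)

# Non-leaky device: both classes have identical distributions.
fixed_clean = rng.normal(loc=0.00, scale=noise, size=n)
random_clean = rng.normal(loc=0.00, scale=noise, size=n)

print(abs(welch_t(fixed_leaky, random_leaky)) > 4.5)   # True: leakage detected
print(abs(welch_t(fixed_clean, random_clean)) > 4.5)   # False (with overwhelming probability)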

In this paper we provide a statistical assessment of the specific instantiation of TVLA in this standard. This task leads us to inquire whether (or not) it is possible to assess the side-channel security of a device via leakage detection (TVLA) only. We find a number of grave issues in the standard and its adaptation of the original TVLA guidelines. We propose some innovations on existing methodologies and finish by giving recommendations for best practice and the responsible reporting of outcomes.