International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

28 February 2019

Andrea Coladangelo, Alex B. Grilo, Stacey Jeffery, Thomas Vidick
ePrint Report
The problem of reliably certifying the outcome of a computation performed by a quantum device is rapidly gaining relevance. We present two protocols for a classical verifier to verifiably delegate a quantum computation to two non-communicating but entangled quantum provers. Our protocols have near-optimal complexity in terms of the total resources employed by the verifier and the honest provers, with the total number of operations of each party, including the number of entangled pairs of qubits required of the honest provers, scaling as O(g log g) for delegating a circuit of size g. This is in contrast to previous protocols, whose overhead in terms of resources employed, while polynomial, is far beyond what is feasible in practice. Our first protocol requires a number of rounds that is linear in the depth of the circuit being delegated, and is blind, meaning neither prover can learn the circuit or its input. The second protocol is not blind, but requires only a constant number of rounds of interaction.

Our main technical innovation is an efficient rigidity theorem which allows a verifier to test that two entangled provers perform measurements specified by an arbitrary m-qubit tensor product of single-qubit Clifford observables on their respective halves of m shared EPR pairs, with a robustness that is independent of m. Our two-prover classical-verifier delegation protocols are obtained by combining this rigidity theorem with a single-prover quantum-verifier protocol for the verifiable delegation of a quantum computation, introduced by Broadbent.

Serge Fehr, Chen Yuan
ePrint Report
Robust secret sharing enables the reconstruction of a secret-shared message in the presence of up to $t$ (out of $n$) {\em incorrect} shares. The most challenging case is when $n = 2t+1$, which is the largest $t$ for which the task is still possible, but only up to a small error probability $2^{- \kappa}$ and with some overhead in the share size.

Recently, Bishop, Pastro, Rajaraman and Wichs proposed a scheme with an (almost) optimal overhead of $\widetilde{O}(\kappa)$. This seems to answer the open question posed by Cevallos et al., who proposed a scheme with overhead of $\widetilde{O}(n+\kappa)$ and asked whether the linear dependency on $n$ was necessary or not. However, a subtle issue with Bishop et al.'s solution is that it (implicitly) assumes a {\em non-rushing} adversary, and thus it satisfies a {\em weaker} notion of security compared to the scheme by Cevallos et al. or to the classical scheme by Rabin and BenOr.

In this work, we almost close this gap. We propose a new robust secret sharing scheme that offers full security against a rushing adversary, and that has an overhead of $O(\kappa n^\varepsilon)$, where $\varepsilon > 0$ is arbitrary but fixed. This $n^\varepsilon$-factor is obviously worse than the $\mathrm{polylog}(n)$-factor hidden in the $\widetilde{O}$ notation of the scheme of Bishop et al., but it greatly improves on the linear dependency on $n$ of the best known scheme that features security against a rushing adversary.

A small variation of our scheme has the same $\widetilde{O}(\kappa)$ overhead as the scheme of Bishop et al.\ {\em and} achieves security against a rushing adversary, but suffers from a (slightly) superpolynomial reconstruction complexity.
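
For context, the sketch below shows plain (non-robust) Shamir secret sharing over a prime field, the baseline that robust schemes in the Rabin-BenOr tradition augment with authentication data. The field size and parameters are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of (non-robust) Shamir secret sharing over a prime
# field. Robust schemes must additionally tolerate up to t adversarially
# modified shares at reconstruction; the prime here is illustrative only.
import random

P = 2**61 - 1  # an illustrative prime field

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0; assumes all given shares are correct."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

t, n = 2, 5  # n = 2t + 1, the hardest setting discussed above
shares = share(1234, t, n)
assert reconstruct(shares[: t + 1]) == 1234
```

The robustness overhead discussed above is exactly the extra per-share data needed so that reconstruction still succeeds when up to $t$ of the collected shares are incorrect.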

Albert Cheu, Adam Smith, Jonathan Ullman, David Zeber, Maxim Zhilyaev
ePrint Report
We consider the problem of designing scalable, robust protocols for computing statistics about sensitive data. Specifically, we look at how best to design differentially private protocols in a distributed setting, where each user holds a private datum. The literature has mostly considered two models: the "central" model, in which a trusted server collects users' data in the clear, which allows greater accuracy; and the "local" model, in which users individually randomize their data, and need not trust the server, but accuracy is limited. Attempts to achieve the accuracy of the central model without a trusted server have so far focused on variants of cryptographic multiparty computation (MPC), which limits scalability. In this paper, we initiate the analytic study of a shuffled model for distributed differentially private algorithms, which lies between the local and central models. This simple-to-implement model, a special case of the ESA framework of Bittau et al. (SOSP 2017), augments the local model with an anonymous channel that randomly permutes a set of user-supplied messages.

For sum queries, we show that this model provides the power of the central model while avoiding the need to trust a central server and the complexity of cryptographic secure function evaluation. More generally, we give evidence that the power of the shuffled model lies strictly between those of the central and local models: for a natural restriction of the model, we show that shuffled protocols for a widely studied selection problem require exponentially higher sample complexity than do central-model protocols.
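
A hedged sketch of the shuffled model for a binary sum query, under illustrative parameters (this is not the paper's protocol): each user applies randomized response locally, the anonymous channel shuffles the reports, and the analyzer debiases the aggregate.

```python
# Shuffled-model sketch for a binary sum: local randomizer + shuffler +
# analyzer. The flip probability p is an illustrative parameter.
import random

def local_randomizer(bit, p):
    """Report a uniform bit with probability p, else the true bit."""
    return random.randint(0, 1) if random.random() < p else bit

def shuffler(messages):
    """Anonymous channel: strips origins by randomly permuting reports."""
    random.shuffle(messages)
    return messages

def analyze(messages, p):
    """Unbiased estimate of the true sum: E[report] = (1-p)*bit + p/2."""
    n = len(messages)
    return (sum(messages) - n * p / 2) / (1 - p)

bits = [random.randint(0, 1) for _ in range(10000)]
reports = shuffler([local_randomizer(b, p=0.1) for b in bits])
print(sum(bits), round(analyze(reports, p=0.1)))
```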

Viet Tung Hoang, David Miller, Ni Trieu
ePrint Report
We improve the attack of Durak and Vaudenay (CRYPTO'17) on the NIST Format-Preserving Encryption standard FF3, reducing the running time from $O(N^5)$ to $O(N^{17/6})$ for domain $\mathbb{Z}_N \times \mathbb{Z}_N$. Concretely, DV's attack needs about $2^{50}$ operations to recover encrypted 6-digit PINs, whereas ours only spends about $2^{30}$ operations. In realizing this goal, we provide a pedagogical example of how to use distinguishing attacks to speed up slide attacks. In addition, we improve the running time of DV's known-plaintext attack on 4-round Feistel of domain $\mathbb{Z}_N \times \mathbb{Z}_N$ from $O(N^3)$ to just $O(N^{5/3})$. We also generalize our attacks to a general domain $\mathbb{Z}_M \times \mathbb{Z}_N$, allowing one to recover encrypted SSNs using about $2^{50}$ operations. Finally, we provide some proof-of-concept implementations to empirically validate our results.
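
To fix notation, the toy sketch below implements an FF3-style balanced Feistel network on the domain $\mathbb{Z}_N \times \mathbb{Z}_N$ that these attacks target. The keyed-hash round function is a stand-in chosen for illustration; FF3 itself derives its round function from AES.

```python
# Toy balanced Feistel on Z_N x Z_N with modular addition, as in FF3.
# The round function (truncated keyed SHA-256) is illustrative only.
import hashlib

def round_func(key, rnd, x, N):
    data = key + bytes([rnd]) + x.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def encrypt(key, L, R, N, rounds=8):
    for i in range(rounds):
        L, R = R, (L + round_func(key, i, R, N)) % N
    return L, R

def decrypt(key, L, R, N, rounds=8):
    for i in reversed(range(rounds)):
        L, R = (R - round_func(key, i, L, N)) % N, L
    return L, R

N = 1000  # two 3-digit halves, e.g. a 6-digit PIN
ct = encrypt(b"demo-key", 123, 456, N)
assert decrypt(b"demo-key", *ct, N) == (123, 456)
```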

Akinori Hosoyamada, Tetsu Iwata
ePrint Report
The Luby-Rackoff construction, or the Feistel construction, is one of the most important approaches to constructing secure block ciphers from secure pseudorandom functions. The 3-round and 4-round Luby-Rackoff constructions are proven to be secure against chosen-plaintext attacks (CPAs) and chosen-ciphertext attacks (CCAs), respectively, in the classical setting. However, Kuwakado and Morii showed that a quantum superposed chosen-plaintext attack (qCPA) can distinguish the 3-round Luby-Rackoff construction from a random permutation in polynomial time. In addition, a recent work by Ito et al. showed a quantum superposed chosen-ciphertext attack (qCCA) that distinguishes the 4-round Luby-Rackoff construction. Since the result of Kuwakado and Morii, it has been a question of much interest how many rounds are sufficient to achieve provable security against quantum query attacks. This paper answers this fundamental question by showing that 4 rounds suffice against qCPAs. Concretely, we prove that the 4-round Luby-Rackoff construction is secure up to $O(2^{n/6})$ quantum queries. We also show that our bound is tight by giving a distinguishing qCPA with $O(2^{n/6})$ quantum queries. Our result is the first to show security of a typical block-cipher construction against quantum query attacks, without any algebraic assumptions. To give security proofs, we introduce a proof technique that is a modification of Zhandry's compressed oracle technique.
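
For concreteness, here is a minimal sketch of the r-round Luby-Rackoff construction over bit strings, with HMAC-SHA256 standing in for the round pseudorandom functions. The security statements above concern idealized PRFs, not this illustrative instantiation.

```python
# r-round Luby-Rackoff: each round maps (L, R) -> (R, L XOR F_k(R)).
# HMAC-SHA256 is an illustrative stand-in for the round PRFs.
import hmac, hashlib

def f(key, x):
    return hmac.new(key, x, hashlib.sha256).digest()[:len(x)]

def xor(a, b):
    return bytes(u ^ v for u, v in zip(a, b))

def luby_rackoff(keys, L, R):
    for k in keys:  # one independent PRF key per round
        L, R = R, xor(L, f(k, R))
    return L, R

def luby_rackoff_inv(keys, L, R):
    for k in reversed(keys):
        L, R = xor(R, f(k, L)), L
    return L, R

keys = [b"k1", b"k2", b"k3", b"k4"]  # 4 rounds, the setting of the bound above
ct = luby_rackoff(keys, b"left-half", b"righthalf")
assert luby_rackoff_inv(keys, *ct) == (b"left-half", b"righthalf")
```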

Alex Lombardi, Willy Quach, Ron D. Rothblum, Daniel Wichs, David J. Wu
ePrint Report
Non-interactive zero-knowledge arguments (NIZKs) for NP are an important cryptographic primitive, but we currently only have instantiations under a few specific assumptions. Notably, we are missing constructions from the plain learning with errors (LWE) assumption or from the computational or decisional Diffie-Hellman (CDH/DDH) assumptions.

In this paper, we study a relaxation of NIZKs to the designated-verifier setting (DV-NIZK), where a trusted setup generates a common reference string together with a secret key for the verifier. We want reusable schemes, which allow the verifier to reuse the secret key to verify many different proofs, and soundness should hold even if the malicious prover learns whether various proofs are accepted or rejected. Such reusable DV-NIZKs were recently constructed under the CDH assumption, but it was open whether they can also be constructed under LWE. In this work, we give a new construction using generic primitives that can be instantiated under CDH or LWE.

We also consider an extension of reusable DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK). In this setting, the only trusted setup consists of a common random string. However, there is also an additional untrusted setup in which the verifier chooses a public/secret key needed to generate/verify proofs, respectively. We require that zero-knowledge holds even if the public key is chosen maliciously by the verifier. Such reusable MDV-NIZKs were recently constructed under the "one-more CDH" assumption. In this work, we give a new construction using generic primitives that can be instantiated under DDH or LWE.

Benny Pinkas, Thomas Schneider, Oleksandr Tkachenko, Avishay Yanai
ePrint Report
We present a new protocol for computing a circuit which implements the private set intersection functionality (PSI). Using circuits for this task is advantageous over the usage of specific protocols for PSI, since many applications of PSI do not need to compute the intersection itself but rather functions based on the items in the intersection. Our protocol is the first circuit-based PSI protocol to achieve linear communication complexity. It is also concretely more efficient than all previous circuit-based PSI protocols. For example, for sets of size 2^20 it improves the communication of the recent work of Pinkas et al. (EUROCRYPT'18) by more than 10 times, and improves the run time by a factor of 2.8x in the LAN setting and by a factor of 5.8x in the WAN setting. Our protocol is based on the usage of a protocol for computing oblivious programmable pseudo-random functions (OPPRF), and more specifically on our technique to amortize the cost of batching together multiple invocations of OPPRF.
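
The "programmable" part of an OPPRF can be sketched as follows: a public polynomial hint shifts a PRF so that it opens to chosen target values on programmed points while remaining random-looking elsewhere. The field, PRF, and interpolation below are illustrative assumptions, not the paper's instantiation (and this sketch ignores the oblivious evaluation).

```python
# Programmable PRF sketch: opprf(x) = PRF(x) + Hint(x) mod P, where Hint
# interpolates (x_i, target_i - PRF(x_i)). Illustrative field and PRF.
import hashlib

P = 2**61 - 1

def prf(key, x):
    h = hashlib.sha256(key + x.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") % P

def hint_eval(points, x):
    """Evaluate the Lagrange interpolant through `points` at x (mod P)."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def program(key, targets):
    """Public hint forcing opprf(key, hint, x) == targets[x] on programmed x."""
    return [(x, (t - prf(key, x)) % P) for x, t in targets.items()]

def opprf(key, hint, x):
    return (prf(key, x) + hint_eval(hint, x)) % P

key = b"demo-key"
hint = program(key, {5: 111, 9: 222})
assert opprf(key, hint, 5) == 111 and opprf(key, hint, 9) == 222
```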

Vipul Goyal, Yifan Song
ePrint Report
In this paper, we consider the setting where a party uses correlated random tapes across multiple executions of a cryptographic algorithm. We ask whether the security properties can still be preserved in such a setting. As examples, we introduce the notions of correlated-tape zero knowledge and correlated-tape multi-party computation, where the zero-knowledge property and the ideal/real model security, respectively, must still be preserved even if a party uses correlated random tapes in multiple executions. Our constructions are based on a new type of randomness extractor which we call correlated-source extractors. Correlated-source extractors can be seen as a dual of non-malleable extractors, and allow an adversary to choose several tampering functions which are applied to the randomness source. Correlated-source extractors guarantee that even given the output of the extractor on the tampered sources, the output on the original source is still uniformly random. Given (seeded) correlated-source extractors and resettably-secure computation protocols, we show how to directly obtain a positive result for both correlated-tape zero-knowledge and correlated-tape multi-party computation in the CRS model. This is tight considering the known impossibility results on cryptography with imperfect randomness. Our main technical contribution is an explicit construction of a correlated-source extractor where the length of the seed is independent of the number of tamperings. Additionally, we also provide a (non-explicit) existential result for correlated-source extractors with almost optimal parameters.

Adam Groce, Peter Rindal, Mike Rosulek
ePrint Report
In this work we demonstrate that allowing differentially private leakage can significantly improve the concrete performance of secure 2-party computation (2PC) protocols. Specifically, we focus on the private set intersection (PSI) protocol of Rindal and Rosulek (CCS 2017), which is the fastest PSI protocol with security against malicious participants. We show that if differentially private leakage is allowed, the cost of the protocol can be reduced by up to 63%, depending on the desired level of differential privacy. On the technical side, we introduce a security model for differentially-private leakage in malicious-secure 2PC. We also introduce two new and improved mechanisms for "differentially private histogram overestimates," the main technical challenge for differentially-private PSI.
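
As a hedged illustration of a "differentially private histogram overestimate" (a standard Laplace-plus-shift construction, not necessarily the paper's improved mechanisms): add Laplace noise to every bin, then shift all bins upward by a tail bound so that, with high probability, no noisy count falls below the true count.

```python
# Laplace noise plus an upward shift: with probability >= 1 - fail_prob,
# every returned count is an overestimate of the true count.
import math, random

def laplace(scale):
    # the difference of two i.i.d. exponentials is Laplace-distributed
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_overestimate(hist, eps, fail_prob):
    scale = 1.0 / eps                                # sensitivity-1 counts
    shift = scale * math.log(len(hist) / fail_prob)  # Laplace tail + union bound
    return [max(0, round(c + laplace(scale) + shift)) for c in hist]

print(dp_overestimate([10, 0, 3, 7], eps=1.0, fail_prob=1e-6))
```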

Rémi Géraud, David Naccache, Răzvan Roşie
ePrint Report
Robustness is a notion often tacitly assumed while working with encrypted data. Roughly speaking, it states that a ciphertext cannot be decrypted under different keys. Initially formalized in a public-key context, it has been further extended to key-encapsulation mechanisms, and more recently to pseudorandom functions, message authentication codes and authenticated encryption. In this work, we motivate the importance of establishing similar guarantees for functional encryption schemes, even under adversarially generated keys. Our main security notion is intended to capture the scenario where a ciphertext obtained under a master key (corresponding to Authority 1) is decrypted by functional keys issued under a different master key (Authority 2). Furthermore, we show there exist simple functional encryption schemes where robustness under adversarial key-generation is not achieved. As a secondary and independent result, we formalize robustness for digital signatures – a signature should not verify under multiple keys – and point out that certain signature schemes are not robust when the keys are adversarially generated. We present simple, generic transforms that turn a scheme into a robust one, while maintaining the original scheme’s security. For the case of public-key functional encryption, we look into ciphertext anonymity and provide a transform achieving it.

Zahra Jafargholi, Kasper Green Larsen, Mark Simkin
ePrint Report
In this work, we present the first asymptotically optimal oblivious priority queue, which matches the lower bound of Jacob, Larsen, and Nielsen (SODA'19). Our construction is conceptually simple, statistically secure, and has small hidden constants. We illustrate the power of our optimal oblivious priority queue by presenting a conceptually equally simple construction of asymptotically optimal offline ORAM. Our result provides the first matching upper bound to the 23-year-old lower bound of Goldreich and Ostrovsky (JACM'96).

Geoffroy Couteau, Dennis Hofheinz
ePrint Report
We provide a generic construction of non-interactive zero-knowledge (NIZK) schemes. Our construction is a refinement of Dwork and Naor’s (FOCS 2000) implementation of the hidden bits model using verifiable pseudorandom generators (VPRGs). Our refinement simplifies their construction and relaxes the necessary assumptions considerably.

As a result of this conceptual improvement, we obtain interesting new instantiations:

– A designated-verifier NIZK (with unbounded soundness) based on the computational Diffie-Hellman (CDH) problem. If a pairing is available, this NIZK becomes publicly verifiable. This constitutes the first fully secure CDH-based designated-verifier NIZK (and more generally, the first fully secure designated-verifier NIZK from a non-generic assumption which does not already imply publicly verifiable NIZKs), and it answers an open problem recently raised by Kim and Wu (CRYPTO 2018).

– A NIZK based on the learning with errors (LWE) assumption, and assuming a non-interactive witness-indistinguishable (NIWI) proof system for bounded distance decoding (BDD). This simplifies and improves upon a recent NIZK from LWE that assumes a NIZK for BDD (Rothblum et al., PKC 2019).

Willy Quach, Ron D. Rothblum, Daniel Wichs
ePrint Report
Non-interactive zero-knowledge proofs (NIZKs) are a fundamental cryptographic primitive. Despite a long history of research, we only know how to construct NIZKs under a few select assumptions, such as the hardness of factoring or the use of bilinear maps. Notably, there are no known constructions based on either the computational or decisional Diffie-Hellman (CDH/DDH) assumption without relying on a bilinear map.

In this paper, we study a relaxation of NIZKs in the designated verifier setting (DV-NIZK), in which the public common-reference string is generated together with a secret key that is given to the verifier in order to verify proofs. In this setting, we distinguish between one-time and reusable schemes, depending on whether they can be used to prove only a single statement or arbitrarily many statements. For reusable schemes, the main difficulty is to ensure that soundness continues to hold even when the malicious prover learns whether various proofs are accepted or rejected by the verifier. One-time DV-NIZKs are known to exist for general NP statements assuming only public-key encryption. However, prior to this work, we did not have any construction of reusable DV-NIZKs for general NP statements from any assumption not already known to imply standard NIZKs.

In this work, we construct reusable DV-NIZKs for general NP statements under the CDH assumption, without requiring a bilinear map. Our construction is based on the hidden-bits paradigm, which was previously used to construct standard NIZKs. We define a cryptographic primitive called a hidden-bits generator (HBG), along with a designated-verifier variant (DV-HBG), which modularly abstract out how to use this paradigm to get both standard NIZKs and reusable DV-NIZKs. We construct a DV-HBG scheme under the CDH assumption by relying on techniques from the Cramer-Shoup hash-proof system, and this yields our reusable DV-NIZK for general NP statements under CDH.

We also consider a strengthening of DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK) where the setup consists of an honestly generated common random string and the verifier then gets to choose his own (potentially malicious) public/secret key pair to generate/verify proofs. We construct MDV-NIZKs under the "one-more CDH" assumption without relying on bilinear maps.

Léo Ducas, Maxime Plançon, Benjamin Wesolowski
ePrint Report
The hardness of finding short vectors in ideals of cyclotomic number fields (hereafter, Ideal-SVP) can serve as a worst-case assumption for numerous efficient cryptosystems, via the average-case problems Ring-SIS and Ring-LWE. For a while, it was plausible to assume that Ideal-SVP is as hard as the analogous problem for general lattices (SVP), even when considering quantum algorithms.

But in the last few years, a series of works has led to a quantum algorithm for Ideal-SVP that outperforms what can be done for general SVP in certain regimes. More precisely, it was demonstrated (under certain hypotheses) that one can find in quantum polynomial time a vector longer by a factor of at most $\alpha = \exp({\tilde O(n^{1/2})})$ than the shortest non-zero vector in a cyclotomic ideal lattice, where $n$ is the dimension.

In this work, we explore the constants hidden behind this asymptotic claim. While these algorithms have quantum steps, the steps that impact the approximation factor $\alpha$ are entirely classical, which allows us to estimate it experimentally using only classical computing. Moreover, we design heuristic improvements for those steps that significantly decrease the hidden factors in practice. Finally, we derive new provable effective lower bounds based on volumetric arguments.

This study allows us to predict the crossover point with classical lattice reduction algorithms, and thereby to determine the relevance of this quantum algorithm in any cryptanalytic context. For example, we predict that this quantum algorithm provides shorter vectors than BKZ-300 (roughly the weakest security level of NIST lattice-based candidates) for cyclotomic rings of rank larger than about $24000$.

Nuttapong Attrapadung
ePrint Report
We present several transformations that combine a set of attribute-based encryption (ABE) schemes for simpler predicates into a new ABE scheme for more expressive composed predicates. Previous proposals for predicate compositions of this kind, the most recent one being that of Ambrona et al. at CRYPTO'17, can be considered static (or partially dynamic), meaning that the policy (or its structure) that specifies a composition must be fixed at the setup. Contrastingly, our transformations are dynamic and unbounded: they allow a user to specify an arbitrary and unbounded-size composition policy right into his/her own key or ciphertext. We propose transformations for three classes of composition policies, namely, the classes of any monotone span programs, any branching programs, and any deterministic finite automata. These generalized policies are defined over arbitrary predicates, hence admitting modular compositions. One application of modularity is a new kind of ABE for which policies can be "nested" over ciphertext and key policies. As another application, we achieve the first fully secure completely unbounded key-policy ABE for non-monotone span programs, in a modular and clean manner, under the q-ratio assumption. Our transformations work inside a generic framework for ABE called symbolic pair encoding, proposed by Agrawal and Chase at EUROCRYPT'17. At the core of our transformations, we observe and exploit an unbounded nature of the symbolic property so as to achieve unbounded-size policy compositions.

Dorit Aharonov, Zvika Brakerski, Kai-Min Chung, Ayal Green, Ching-Yi Lai, Or Sattath
ePrint Report
In (single-server) Private Information Retrieval (PIR), a server holds a large database $DB$ of size $n$, and a client holds an index $i \in [n]$ and wishes to retrieve $DB[i]$ without revealing $i$ to the server. It is well known that information-theoretic privacy, even against an "honest but curious" server, requires $\Omega(n)$ communication complexity. This is true even if quantum communication is allowed, and is due to the ability of such an adversarial server to execute the protocol on a superposition of databases instead of on a specific database (an "input purification attack"). Nevertheless, there have been some proposals of protocols that achieve sub-linear communication and appear to provide some notion of privacy. Most notable are a protocol due to Le Gall (ToC 2012) with communication complexity $O(\sqrt{n})$, and a protocol by Kerenidis et al. (QIC 2016) with communication complexity $O(\log(n))$ and $O(n)$ shared entanglement.

We show that, in a sense, input purification is the only potent adversarial strategy, and protocols such as the two protocols above are secure in a restricted variant of the quantum honest-but-curious (a.k.a. specious) model. More explicitly, we propose a restricted privacy notion called \emph{anchored privacy}, where the adversary is forced to execute on a classical database (i.e. the execution is anchored to a classical database). We show that for measurement-free protocols, anchored security against honest adversarial servers implies anchored privacy even against specious adversaries.

Finally, we prove that even with (unlimited) pre-shared entanglement it is impossible to achieve security in the standard specious model with sub-linear communication, thus further substantiating the necessity of our relaxation. This lower bound may be of independent interest (in particular recalling that PIR is a special case of Fully Homomorphic Encryption).

Benny Applebaum, Amos Beimel, Oriol Farràs, Oded Nir, Naty Peter
ePrint Report
A secret-sharing scheme allows some authorized sets of parties to reconstruct a secret; the collection of authorized sets is called the access structure. For over 30 years, it was known that any (monotone) collection of authorized sets can be realized by a secret-sharing scheme whose shares are of size $2^{n-o(n)}$ and until recently no better scheme was known. In a recent breakthrough, Liu and Vaikuntanathan (STOC 2018) have reduced the share size to $O(2^{0.994n})$. Our first contribution is improving the exponent of secret sharing down to $0.892$. For the special case of linear secret-sharing schemes, we get an exponent of $0.942$ (compared to $0.999$ of Liu and Vaikuntanathan).

Motivated by the construction of Liu and Vaikuntanathan, we study secret-sharing schemes for uniform access structures. An access structure is $k$-uniform if all sets of size larger than $k$ are authorized, all sets of size smaller than $k$ are unauthorized, and each set of size $k$ can be either authorized or unauthorized. The construction of Liu and Vaikuntanathan starts from protocols for conditional disclosure of secrets, constructs secret-sharing schemes for uniform access structures from them, and combines these schemes in order to obtain secret-sharing schemes for general access structures. Our second contribution in this paper is constructions of secret-sharing schemes for uniform access structures. We achieve the following results: (a) A secret-sharing scheme for $k$-uniform access structures for large secrets in which the share size is $O(k^2)$ times the size of the secret. (b) A linear secret-sharing scheme for $k$-uniform access structures for a binary secret in which the share size is $\tilde{O}(2^{h(k/n)n/2})$ (where $h$ is the binary entropy function). By counting arguments, this construction is optimal (up to polynomial factors). (c) A secret-sharing scheme for $k$-uniform access structures for a binary secret in which the share size is $2^{\tilde{O}(\sqrt{k \log n})}$.

Our third contribution is a construction of ad-hoc PSM protocols, i.e., PSM protocols in which only a subset of the parties will compute a function on their inputs. This result is based on ideas we used in the construction of secret-sharing schemes for $k$-uniform access structures for a binary secret.

Christos Andrikos, Lejla Batina, Lukasz Chmielewski, Liran Lerman, Vasilios Mavroudis, Kostas Papagiannopoulos, Guilherme Perin, Giorgos Rassias, Alberto Sonnino
ePrint Report
Near-field microprobes have the capability to isolate small regions of a chip surface and enable precise measurements with high spatial resolution. Being able to distinguish the activity of small regions has given rise to location-based side-channel attacks, which exploit the spatial dependencies of cryptographic algorithms in order to recover the secret key. Given the fairly uncharted nature of such leakages, this work revisits the location side-channel to broaden our modeling and exploitation capabilities. Our contribution is threefold. First, we provide a simple spatial model that partially captures the effect of location-based leakages. We use the newly established model to simulate the leakage of different scenarios/countermeasures and follow an information-theoretic approach to evaluate the security level achieved in every case. Second, we perform the first successful location-based attack on the SRAM of a modern ARM Cortex-M4 chip, using standard techniques such as difference of means and multivariate template attacks. Third, we put forward neural networks as classifiers that exploit the location side-channel and showcase their effectiveness on the ARM Cortex-M4, especially in the context of single-shot attacks and small memory regions. Template attacks and neural network classifiers are able to reach high spatial accuracy, distinguishing between 2 SRAM regions of 128 bytes each with 100% success rate and distinguishing even between 256 SRAM byte-regions with 32% success rate. Such improved exploitation capabilities revitalize the interest in location vulnerabilities on various implementations, ranging from RSA/ECC with large memory footprint to lookup-table-based AES with smaller memory usage.
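
As a small illustration of the difference-of-means technique mentioned above, the sketch below picks the trace samples where two location classes differ most, then classifies a fresh trace by its distance to the class means. The data is synthetic, and every shape and parameter is an illustrative assumption.

```python
# Difference-of-means location classifier on synthetic traces.
import numpy as np

def profile(traces_a, traces_b, top=5):
    """Points of interest: samples where the class means differ the most."""
    diff = np.abs(traces_a.mean(axis=0) - traces_b.mean(axis=0))
    return np.argsort(diff)[-top:]

def classify(trace, traces_a, traces_b, poi):
    """Assign the trace to the class whose mean is closer on the POIs."""
    da = np.linalg.norm(trace[poi] - traces_a.mean(axis=0)[poi])
    db = np.linalg.norm(trace[poi] - traces_b.mean(axis=0)[poi])
    return "A" if da < db else "B"

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, (200, 1000)); a[:, 100] += 2.0  # region-A leakage
b = rng.normal(0.0, 1.0, (200, 1000)); b[:, 400] += 2.0  # region-B leakage
poi = profile(a, b)
print(classify(a[0], a, b, poi))  # expected: "A"
```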

Lukas Kölsch
ePrint Report
XOR-metrics measure the efficiency of certain arithmetic operations in binary finite fields. We prove some new results about two different XOR-metrics that have been used in the past. In particular, we disprove an existing conjecture about those XOR-metrics. We consider implementations of multiplication with one fixed element in a binary finite field. Here we achieve a complete characterization of all elements whose multiplication matrix can be implemented using exactly 2 XOR-operations. Further, we provide new results and examples in more general cases, showing that significant improvements in implementations are possible.
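
To make one such metric concrete, the sketch below computes the naive XOR-count of multiplication by a fixed element of GF(2^n): build the element's multiplication matrix over GF(2) and charge, for each output bit, one XOR per input beyond the first. (The metrics studied in the paper also account for reusing intermediate XOR results, which is what makes counts as low as 2 possible.)

```python
# Naive XOR-count of the multiplication matrix of `elem` in GF(2^n).
def gf2n_mul(a, b, poly, n):
    """Carry-less multiply modulo the irreducible polynomial `poly`."""
    r = 0
    for i in range(n):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * n - 2, n - 1, -1):  # reduce the high bits
        if (r >> i) & 1:
            r ^= poly << (i - n)
    return r

def naive_xor_count(elem, poly, n):
    cols = [gf2n_mul(elem, 1 << j, poly, n) for j in range(n)]  # matrix columns
    count = 0
    for row in range(n):
        ones = sum((c >> row) & 1 for c in cols)
        count += max(ones - 1, 0)  # XORing k inputs costs k - 1 gates
    return count

# GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1; element 0x02
print(naive_xor_count(0x02, 0x11B, 8))  # prints 3
```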

Nimrod Aviram, Kai Gellert, Tibor Jager
ePrint Report
The TLS 1.3 0-RTT mode enables a client reconnecting to a server to send encrypted application-layer data in "0-RTT" ("zero round-trip time"), without the need for a prior interactive handshake. This fundamentally requires the server to reconstruct the previous session's encryption secrets upon receipt of the client's first message. The standard techniques to achieve this are Session Caches or, alternatively, Session Tickets. The former provides forward security and resistance against replay attacks, but requires a large amount of server-side storage. The latter requires negligible storage, but provides no forward security and is known to be vulnerable to replay attacks.

In this paper, we first formally define session resumption protocols as an abstract perspective on mechanisms like Session Caches and Session Tickets. We give a new generic construction that provably provides forward security and replay resilience, based on puncturable pseudorandom functions (PPRFs). This construction can immediately be used in TLS 1.3 0-RTT and deployed unilaterally by servers, without requiring any changes to clients or the protocol.

We then describe two new constructions of PPRFs, which are particularly suitable for use in forward-secure and replay-resilient session resumption in TLS 1.3. The first construction is based on the strong RSA assumption. Compared to standard Session Caches, for "128-bit security" it reduces the required server storage by a factor of almost 20, when instantiated in a way such that key derivation and puncturing together are cheaper on average than one full exponentiation in an RSA group. Hence, a 1 GB Session Cache can be replaced with only about 51 MB of storage, which significantly reduces the amount of secure memory required. For larger security parameters or in exchange for more expensive computations, even larger storage reductions are achieved. The second construction combines a standard binary tree PPRF with a new "domain extension" technique. For a reasonable choice of parameters, this reduces the required storage by a factor of up to 5 compared to a standard Session Cache. It employs only symmetric cryptography, is suitable for high-traffic scenarios, and can serve thousands of tickets per second.
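
For intuition, here is a hedged sketch of the standard GGM binary-tree puncturable PRF underlying the second construction: evaluation walks a hash tree from the root, and puncturing an input keeps only the siblings along its path (the co-path), after which that one input can no longer be re-derived. SHA-256 stands in for the length-doubling PRG; all names and parameters are illustrative.

```python
# GGM tree PPRF: evaluate by walking the tree; puncture by publishing
# the co-path, which lets one evaluate everywhere except the punctured input.
import hashlib

def prg(seed, bit):
    return hashlib.sha256(seed + bytes([bit])).digest()

def evaluate(root, x, depth):
    node = root
    for i in reversed(range(depth)):
        node = prg(node, (x >> i) & 1)
    return node

def puncture(root, x, depth):
    """Keep, for each level, the sibling off the path to leaf x."""
    copath, node = [], root
    for i in reversed(range(depth)):
        bit = (x >> i) & 1
        copath.append((i, 1 - bit, prg(node, 1 - bit)))
        node = prg(node, bit)
    return copath

def eval_punctured(copath, x, depth):
    for level, bit, node in copath:
        if (x >> level) & 1 == bit:  # x diverges from the punctured path here
            for i in reversed(range(level)):
                node = prg(node, (x >> i) & 1)
            return node
    raise ValueError("input was punctured")

root, depth = b"\x00" * 32, 16
copath = puncture(root, 12345, depth)
assert eval_punctured(copath, 999, depth) == evaluate(root, 999, depth)
```

In a session-resumption deployment of this kind, the server keeps only the current punctured key material instead of one cache entry per outstanding ticket, which is the source of the storage savings quantified above.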