IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

  • via RSS feed
  • via Twitter
  • via Weibo
  • via Facebook

14 June 2021

University of Stuttgart, Institute of Information Security
Job Posting
The Institute of Information Security at the University of Stuttgart offers a fully-funded Postdoc position.

The successful candidate is expected to work on tool-supported formal analysis of cryptographic protocols and web applications, building, among others, on our work published at EuroS&P 2021, S&P 2019, CSF 2017, CCS 2016, CCS 2015, ESORICS 2015, and S&P 2014. One goal is to provide tool-supported security analysis based on DY* for our web infrastructure model (WIM).

The position is available immediately with an internationally competitive salary (German public salary scale TV-L E13 or TV-L E14, depending on the candidate's qualification).

The Institute of Information Security offers a creative international environment for top-level international research in Germany's high-tech region.

The successful candidate should have a Ph.D. (or should be very close to completion thereof) in Computer Science, Mathematics, Information Security, or a related field. We value strong analytical skills. Knowledge in one or more of the following fields is an asset:

  • Formal Methods (Verification, Theorem Proving, F*, Type Checking, etc.)
  • Cryptographic Protocol Analysis
  • Web Security
Knowledge of German is not required.

The deadline for applications is July 4th, 2021.

Late applications will be considered until the position is filled. See the official job offering for details of how to apply.

Closing date for applications: July 4th, 2021

Contact: Prof. Dr. Ralf Küsters
Institute of Information Security
University of Stuttgart
Germany
https://sec.uni-stuttgart.de

More information: https://www.sec.uni-stuttgart.de/institute/job-openings/

Adi Akavia, Margarita Vald
ePrint Report
Li and Micciancio (Eurocrypt 2021) shattered a widespread misconception regarding the security of protocols based on CPA-secure homomorphic encryption (HE). They showed an attack breaking the security of HE-based protocols provided that the protocol employs an HE scheme for approximate numbers, like CKKS, and the adversary sees decrypted ciphertexts. However, their attack fails when an exact HE scheme, like BGV, is employed, or when access to decrypted data is denied.

We show that the Li-Micciancio attack is only the tip of the iceberg: 1) We exhibit an input-recovery attack completely breaking the privacy of a wide and natural family of HE-based protocols, including protocols using only exact HE schemes and with an adversary exposed solely to encrypted data. This proves that CPA-security is insufficient to ensure privacy in a much broader context than previously known. 2) To address the threat exhibited by our attack we introduce sufficient conditions, on either the encryption scheme or the protocol, that do guarantee privacy: (a) Every HE scheme with a sanitization algorithm (e.g., BGV and FHEW) can be transformed into a "sanitized" scheme so that protocols instantiated with it preserve privacy against malicious adversaries. (b) Moreover, we characterize a natural sub-family of these protocols for which CPA-security does suffice to guarantee privacy, albeit against semi-honest adversaries.

To prove (2a) we define a notion of circuit-privacy+ that lies between semi-honest and malicious circuit-privacy and realize it from existing schemes; this may be of independent interest.
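
To see the shape of the problem, consider a minimal, dimension-one toy (our illustration; this is not CKKS and not the paper's new attack): when decryption of an approximate scheme returns the message together with its noise, an adversary who knows the message recovers the noise, and ciphertext plus noise together reveal the secret key.

```python
# Toy, dimension-1 illustration of why decrypted approximate results are
# dangerous: decryption returns m + e rather than m, and (ciphertext, e)
# together determine the secret key. All parameters here are arbitrary.
import random

q = 2**31 - 1                 # toy prime modulus (assumption, chosen for pow(..., -1, q))
s = random.randrange(q)       # secret key

def encrypt(m):
    c1 = random.randrange(1, q)
    e = random.randrange(-50, 51)          # small noise
    c0 = (m + e - c1 * s) % q              # arranged so that c0 + c1*s = m + e (mod q)
    return c0, c1

def decrypt_approx(c0, c1):
    return (c0 + c1 * s) % q               # returns m + e, not m

m = 42
c0, c1 = encrypt(m)
y = decrypt_approx(c0, c1)                 # adversary observes y = m + e
# c0 + c1*s = y (mod q)  =>  s = (y - c0) * c1^{-1} (mod q)
s_recovered = ((y - c0) * pow(c1, -1, q)) % q
assert s_recovered == s
```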
Divesh Aggarwal, Eldon Chung, Maciej Obremski, Joao Ribeiro
ePrint Report
Secret sharing is one of the most basic and oldest primitives in cryptography, introduced by Shamir and Blakley in the 1970s. It makes it possible to strike a meaningful balance between availability and confidentiality of secret information, and it has a host of applications, most notably in threshold cryptography and multi-party computation. All known constructions of secret sharing (with the exception of those with a pathological choice of parameters) require access to uniform randomness. In practice, however, it is extremely challenging to generate a source of uniform randomness. This has led to a large body of research devoted to designing randomized algorithms and cryptographic primitives from imperfect sources of randomness.

Motivated by this, 15 years ago, Bosley and Dodis asked whether it is even possible to build 2-out-of-2 secret sharing without access to uniform randomness. In this work, we make progress towards resolving this question.

We answer this question for secret sharing schemes with important additional properties, namely leakage-resilience or non-malleability. We prove that, unfortunately, for secrets that are not too small, it is impossible to construct either 2-out-of-2 leakage-resilient secret sharing or 2-out-of-2 non-malleable secret sharing without access to uniform randomness.

Given that the question of whether 2-out-of-2 secret sharing requires uniform randomness has been open for a long time, it is reasonable to consider intermediate problems towards resolving it. In a spirit similar to NP-completeness, we study how the existence of a t-out-of-n secret sharing scheme without access to uniform randomness relates to the existence of a t'-out-of-n' secret sharing scheme without access to uniform randomness for a different choice of the parameters t, n, t', n'.
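
For intuition, here is a minimal sketch of 2-out-of-2 XOR sharing (a toy of ours, not from the paper): reconstruction works regardless of the randomness used, but privacy collapses as soon as the mask is biased, which is the tension behind the Bosley-Dodis question.

```python
# Minimal 2-out-of-2 XOR secret sharing. Privacy of each individual share
# relies on the mask bits being uniform; with a biased source, one share
# alone becomes correlated with the secret.
import random

def share(secret_bits, rand_bit=lambda: random.getrandbits(1)):
    r = [rand_bit() for _ in secret_bits]              # the only randomness used
    return r, [b ^ ri for b, ri in zip(secret_bits, r)]

def reconstruct(s1, s2):
    return [a ^ b for a, b in zip(s1, s2)]

secret = [1, 0, 1, 1]
s1, s2 = share(secret)
assert reconstruct(s1, s2) == secret                   # correctness needs no uniformity

# With a heavily biased source (Pr[r=1] = 0.9), the second share is, with
# high probability, close to the bitwise complement of the secret: it leaks.
biased = lambda: 1 if random.random() < 0.9 else 0
_, s2_biased = share(secret, biased)
print(s2_biased)
```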
Mohammad Hassan Ameri, Alexander R. Block, Jeremiah Blocki
ePrint Report
We introduce and construct memory-hard puzzles. Intuitively, a puzzle is memory-hard if it cannot be solved by any parallel random access machine (PRAM) algorithm with small (amortized) area-time complexity $t^{2-\varepsilon}$. The puzzles should also be easy to generate and should be solvable by a sequential PRAM algorithm running in total time $t$ and area-time complexity at most $t^2$. We construct memory-hard puzzles in the standard model using succinct randomized encodings (Bitansky et al., STOC 2015) under the minimal assumption that a memory-hard language exists. Intuitively, a language is memory-hard if it is undecidable by any PRAM algorithm with small (amortized) area-time complexity $t^{2 - \varepsilon}$ while the language can be decided in time $t$ and with area-time complexity at most $t^2$.

We give two applications which highlight the utility of memory-hard puzzles. For our first application, we give a construction of a (one-time) memory-hard function (MHF) in the standard model, using our memory-hard puzzles and additionally assuming indistinguishability obfuscation. Our construction is the first provably secure MHF in the standard model; prior MHF constructions require idealized models (e.g., the random oracle model) to prove security. For our second application, we show that any cryptographic puzzle (e.g., memory-hard, time-lock) can be used to construct resource-bounded locally decodable codes (LDCs) in the standard model, answering an open question of Blocki, Kulkarni, and Zhou (ITC 2020). Prior constructions required idealized primitives like random oracles and assumed that the encoding algorithm, unlike the channel, was not resource-constrained. In particular, assuming time-lock puzzles or memory-hard puzzles, we construct resource-bounded LDCs which are secure against low-depth channels or against channels with small (amortized) area-time complexity, respectively.
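
For a feel of what memory-hardness demands of an evaluator, here is a classic scrypt/ROMix-style candidate, offered purely as an illustrative sketch: it is not the paper's standard-model construction (which relies on succinct randomized encodings), and its memory-hardness is heuristic.

```python
# ROMix-style sketch: honest evaluation fills a large table sequentially and
# then performs data-dependent reads into it, so a solver that discards the
# table pays for each read with recomputation (high area-time complexity).
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def romix_like(seed: bytes, n: int) -> bytes:
    table = [seed]
    for _ in range(n - 1):                  # sequential fill: ~n hashes, ~n memory
        table.append(H(table[-1]))
    x = H(table[-1])
    for _ in range(n):                      # n data-dependent table lookups
        j = int.from_bytes(x[:4], "big") % n
        x = H(x + table[j])
    return x

print(romix_like(b"puzzle-instance", 1 << 12).hex())
```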
Leemon Baird, Pratyay Mukherjee, Rohit Sinha
ePrint Report
Time-locked encryption can encrypt a message to a future time such that it can only be decrypted after that time. Potential applications include sealed bid auctions, scheduled confidential transactions, and digital time capsules.

Prior practical schemes for time-locked encryption rely on a clock-equipped trusted server, which periodically publishes a time-specific decryption key based on a long-term secret. Their main idea is to model time periods as identities in an identity-based encryption scheme. While such schemes allow encryption to future time periods, they offer limited support for decryption of past ciphertexts. In particular, they force a client to be online when the key is published, or to interact with the server to re-generate the key.

This paper proposes a new notion of time-locked encryption where an aggregated decryption key can be used to decrypt any ciphertext locked to a prior time. Furthermore, we decentralize the trust amongst a number of servers, such that the scheme can tolerate up to a threshold number of (malicious) corruptions. We call our notion threshold aggregated time-locked encryption (TATLE). We propose a practical construction that supports compact decryption keys as well as compact ciphertexts (both logarithmic in the total lifetime). Our construction is based on bilinear pairings and adapts ideas from Canetti et al.'s binary tree encryption [Eurocrypt 2003] and Naor et al.'s distributed pseudorandom functions [Eurocrypt 1999].
Martin Albrecht, Léo Ducas
ePrint Report
Since its invention in 1982, the LLL lattice reduction algorithm (Lenstra, Lenstra, Lovász, 1982) has found countless applications. In cryptanalysis, the two most prominent applications of LLL and its generalisations (e.g., Slide, BKZ, and SD-BKZ) are factoring RSA keys with extra information on the secret key via Coppersmith's method, and the cryptanalysis of lattice-based schemes.

After almost 40 years of cryptanalytic applications, predicting and optimising lattice reduction algorithms remains an active area of research. While we do have theorems bounding the worst-case performance of these algorithms, those bounds are asymptotic and not necessarily tight when applied to practical or even cryptographic instances. Reasoning about the behaviour of those algorithms relies on heuristics and approximations, some of which are known to fail for relevant corner cases.

Decades after Lenstra, Lenstra, and Lovász gave birth to this fascinating and lively research area, this state of affairs became a more pressing issue recently. Motivated by post-quantum security, standardisation bodies, governments and industry started to move towards deploying lattice-based cryptographic algorithms. This spurred the refinement of those heuristics and approximations, leading to a better understanding of the behaviour of these algorithms over the last few years.

Lattice reduction algorithms, such as LLL and BKZ, proceed by repeated local improvements to the lattice basis, and each such local improvement means solving the short(est) vector problem in a lattice of smaller dimension. Therefore, two questions arise: how costly is it to find those local improvements, and what is the global behaviour as those improvements are applied?

While those two questions may not be perfectly independent, we will, in this survey, focus on the second one, namely the global behaviour of such algorithms given oracle access for finding local improvements. Our focus on the global behaviour is motivated by our intent to draw more of the community's attention to this aspect. We will take a particular interest in the behaviour of such algorithms on a specific class of lattices underlying the most popular lattice problems used to build cryptographic primitives, namely the LWE problem and the NTRU problem. We will emphasise the approximations that have been made and their progressive refinements, and highlight open problems to be addressed.
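
The local-improvement structure is easiest to see in dimension two, where LLL degenerates into Lagrange-Gauss reduction. The runnable toy below (our illustration, not taken from the survey) repeats exactly two local steps, swap and size-reduce, until no improvement remains; LLL and BKZ iterate analogous steps blockwise, with BKZ delegating each block to an SVP oracle.

```python
# Lagrange-Gauss reduction: the dimension-2 ancestor of LLL. Each loop
# iteration is one "local improvement": put the shorter vector first, then
# subtract the best integer multiple of it from the other vector.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gauss_reduce(b1, b2):
    while True:
        if dot(b1, b1) > dot(b2, b2):
            b1, b2 = b2, b1                        # swap: shorter vector first
        mu = round(dot(b1, b2) / dot(b1, b1))      # size-reduction coefficient
        if mu == 0:
            return b1, b2                          # reduced basis
        b2 = [x - mu * y for x, y in zip(b2, b1)]  # local improvement on b2

print(gauss_reduce([31, 59], [37, 71]))            # -> ([1, -1], [9, 9])
```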
Pierre Civit, Maria Potop-Butucaru
ePrint Report
We present probabilistic dynamic I/O automata, a framework to model dynamic probabilistic systems. Our work extends the dynamic I/O automata formalism [1] to the probabilistic setting. The original formalism includes operators for parallel composition, action hiding, action renaming, automaton creation, and behavioral sub-typing by means of trace inclusion. It can model mobility by using signature modification, and it is hierarchical: a dynamically changing system of interacting automata is itself modeled as a single automaton. Our work extends all of these features to the probabilistic setting. Furthermore, we prove necessary and sufficient conditions to obtain implementation monotonicity with respect to automaton creation and destruction. Our work lays down the premises for extending composable secure-emulation [3] to dynamic settings, an important tool towards the formal verification of protocols combining probabilistic distributed systems and cryptography in dynamic settings (e.g., blockchains, secure distributed computation, cybersecure distributed protocols, etc.).
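
As a rough, non-probabilistic illustration of the base formalism (a much-simplified sketch of ours, not the paper's definitions): components synchronize on shared actions, and the composed system steps every component whose signature contains the action.

```python
# Minimal I/O-automaton flavour: an automaton is (state, action signature,
# transition function); parallel composition lets every component whose
# signature contains an action take a step on it simultaneously.
def make_automaton(state, signature, step):
    return {"state": state, "sig": signature, "step": step}

def run(components, actions):
    for act in actions:
        for c in components:
            if act in c["sig"]:                     # synchronize on shared actions
                c["state"] = c["step"](c["state"], act)
    return [c["state"] for c in components]

sender = make_automaton(0, {"send"}, lambda s, a: s + 1)
receiver = make_automaton(0, {"send", "ack"}, lambda s, a: s + (a == "send"))
print(run([sender, receiver], ["send", "send", "ack"]))   # -> [2, 2]
```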
Tim Heldmann, Thomas Schneider, Oleksandr Tkachenko, Christian Weinert, Hossein Yalame
ePrint Report
Multi-party computation (MPC) allows two or more parties to jointly and securely compute functions over private inputs. Cryptographic protocols that realize MPC require functions to be expressed as Boolean or arithmetic circuits. Deriving such circuits is either done manually or with hardware synthesis tools and specialized MPC compilers. Unfortunately, such existing tools compile only from a single front-end language and neglect decades of research on optimizing regular compilers.

In this paper, we make MPC practical for developers by automating circuit compilation based on the compiler toolchain LLVM. For this, we develop an LLVM optimizer suite consisting of multiple transform passes that operate on the LLVM intermediate representation (IR) and gradually lower functions to circuit level. Our approach supports various front-end languages (currently C, C++, and Fortran) and takes advantage of powerful source code optimizations built into LLVM. We furthermore make sure to produce circuits that are optimized for MPC, and even offer fully automated post-processing for efficient post-quantum MPC.
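
To make "lowering functions to circuit level" concrete, here is a toy emitter (a hypothetical sketch of ours, unrelated to the actual LLVM passes) that flattens an n-bit adder into the kind of flat gate list an MPC backend consumes; minimizing AND gates in such lists is a typical target of MPC-specific optimization, since AND gates dominate the cost of most MPC protocols.

```python
# Emit a ripple-carry adder as a flat list of Boolean gates
# (op, input_wire_a, input_wire_b, output_wire) -- a miniature of what an
# MPC circuit backend consumes. "zero" names a constant-0 wire.
def full_adder(gates, a, b, cin, fresh):
    t0, t1, t2, s, cout = (fresh() for _ in range(5))
    gates += [("XOR", a, b, t0), ("XOR", t0, cin, s),   # sum bit
              ("AND", a, b, t1), ("AND", t0, cin, t2),
              ("XOR", t1, t2, cout)]                    # carry-out
    return s, cout

def adder_circuit(nbits):
    gates, counter = [], iter(range(1_000_000))
    fresh = lambda: f"w{next(counter)}"
    carry, outs = "zero", []
    for i in range(nbits):
        s, carry = full_adder(gates, f"a{i}", f"b{i}", carry, fresh)
        outs.append(s)
    return gates, outs + [carry]

gates, outs = adder_circuit(4)
ands = sum(1 for g in gates if g[0] == "AND")
print(f"{len(gates)} gates ({ands} AND), outputs: {outs}")
```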

We empirically measure the quality of our compilation results and compare them to the state-of-the-art specialized MPC compiler HyCC (Büscher et al., CCS 2018). For all benchmarked HyCC example applications (e.g., biomatch and linear equation solving), our highly generalizable approach achieves similar quality in terms of gate count and composition.
Karim Eldefrawy, Julian Loss, Ben Terner
ePrint Report
Consensus protocols enable $n$ parties, each holding some input string, to agree on a common output even in the presence of corrupted parties. While the problem is well understood in the classic byzantine setting, recent work has pushed to understand the problem when realistic types of failures are considered and a majority of parties may be corrupt. Garay and Perry consider a model with both byzantine and crash faults and develop a corruption-optimal protocol with perfect security tolerating $t_c$ crash faults and $t_b$ byzantine faults for $n>t_c+3t_b$. Follow-up work by Zikas, Hauser, and Maurer extends the model by considering receive-corrupt parties that may not receive messages sent to them, and send-corrupt parties whose sent messages may be dropped. Otherwise, receive-corrupt and send-corrupt parties behave honestly and their inputs and outputs are considered by the security definitions. They gave a perfectly secure, linear-round protocol for $n > t_r+t_s+3t_b$, where $t_r$ and $t_s$ represent thresholds on the number of parties that are receive- or send-corrupted.

In this paper we ask ``what are optimal thresholds in the cryptographic setting that can be tolerated with such mixes of corruptions and faults?" We develop an expected-constant round protocol tolerating $n > t_r+2t_s+2t_b$. We are unable to prove optimality of our protocol's corruption budget in the general case; however, when we constrain the adversary to either drop all or none of a sender's messages in a round, we prove our protocol achieves an optimal threshold of $n > t_r+t_s+2t_b$. We denote this weakening of a send corruption a \emph{spotty send corruption}.

In light of this difference in corruption tolerance due to our weakening of a send corruption, we ask ``how close (with respect to corruption thresholds) to a byzantine corruption is a send corruption?" We provide a treatment of the difficulty of dealing with send corruptions in protocols with sublinear rounds. As an illustrative and surprising example (even though not in sublinear rounds), we show that the classical Dolev-Strong broadcast protocol degrades from $n > t_b$ corruptions in the byzantine-only model to $n > 2t_s+2t_b$ when send-corrupt parties' outputs must be consistent with honest parties; we also show why other recent dishonest-majority broadcast protocols degrade similarly. We leave open the question of optimal corruption tolerance for both send- and byzantine corruptions.
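
For reference, here is a compact simulation of the classical Dolev-Strong broadcast with idealized signatures (a sketch of ours under a strong simplification: corrupt parties merely stay silent instead of forging or equivocating). This is the baseline protocol whose tolerance the paper shows degrading from $n > t_b$ to $n > 2t_s+2t_b$ once send-corrupt parties' outputs must be consistent with honest ones.

```python
# Dolev-Strong broadcast, idealized: a "signature chain" is a tuple of
# distinct signer ids starting with the sender (party 0). A party accepts a
# value the first time it sees a valid chain of length r in round r, then
# relays it with its own "signature" appended. Runs for t+1 rounds.
def dolev_strong(n, t, sender_value, byzantine=frozenset()):
    accepted = {i: set() for i in range(n)}
    msgs = {i: [] for i in range(n)}
    if 0 not in byzantine:
        for i in range(n):
            msgs[i].append((sender_value, (0,)))   # round 1: sender's signed value
    for r in range(1, t + 2):                      # rounds 1 .. t+1
        nxt = {i: [] for i in range(n)}
        for i in range(n):
            if i in byzantine:
                continue                           # simplification: corrupt = silent
            for v, chain in msgs[i]:
                valid = (len(chain) == r and chain[0] == 0
                         and len(set(chain)) == len(chain))
                if valid and v not in accepted[i]:
                    accepted[i].add(v)
                    for j in range(n):             # relay with own signature appended
                        nxt[j].append((v, chain + (i,)))
        msgs = nxt
    # output the unique accepted value, or a default (None) otherwise
    return {i: (next(iter(accepted[i])) if len(accepted[i]) == 1 else None)
            for i in range(n) if i not in byzantine}

print(dolev_strong(n=4, t=1, sender_value="x"))                 # all agree on 'x'
print(dolev_strong(n=4, t=1, sender_value="x", byzantine={2}))  # still agree on 'x'
```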
Wei Jiang
ePrint Report
Secure comparison (SC) is an essential primitive in secure multiparty computation (SMC) and a fundamental building block in privacy-preserving data analytics. Although secure comparison has been studied since the introduction of SMC in the early 80s and many protocols have been proposed, there is still room for improvement, especially in providing security against malicious adversaries who form a majority among the participating parties. It is not hard to develop an SC protocol secure against a malicious majority based on the current state-of-the-art SPDZ framework. However, SPDZ is designed to work for arbitrary polynomially-bounded functionalities, and it may not provide the most efficient SMC implementation for a specific task such as SC. In this paper, we propose a novel compiler that is specifically designed to convert most existing SC protocols with semi-honest security into ones secure against a malicious majority. This compiler provides a flexible and efficient way to achieve both covert and active security for passively secure SC protocols.
Si Gao, Elisabeth Oswald, Dan Page
ePrint Report
Micro-architectural leakage is a reality even on low- to mid-range commercial processors. Dealing with it is expensive, because micro-architectural leakage is often only discovered after implementation choices have been made (i.e., when evaluating the concrete implementation). We demonstrate that it is feasible, using a recent leakage modelling technique, to reverse engineer significant elements of the micro-architectural leakage of a mid-range commercial processor in a “grey-box” setting. Our approach first recovers the micro-architectural features of each stage in the pipeline and the leakage of elements known to produce glitches. To put our reverse-engineered micro-architectural leakage in context, we compare the leakage that we would anticipate given our analysis with the predictions of the most sophisticated leakage simulators to date (e.g., ELMO and MAPS) on a relevant piece of masking code. Our research demonstrates that reverse engineering of micro-architectural components (and their leakage) is clearly feasible using available side-channel leakage, and consequently, it should be possible to build more accurate leakage simulators.
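
A toy of the kind of effect at stake (our illustration, not the paper's reverse-engineered model): a value-based simulator that only leaks each operand's Hamming weight deems a masked pair of shares safe, while a hidden pipeline register leaking the Hamming distance between successive operands recombines the shares.

```python
# Two leakage models for the same instruction stream: per-operand Hamming
# weight (what a naive simulator predicts) versus Hamming distance on a
# hidden pipeline register (a typical micro-architectural effect).
def hw(x):
    return bin(x).count("1")

def simulate_trace(ops, model):
    trace, reg = [], 0
    for x in ops:
        if model == "value":
            trace.append(hw(x))            # leak HW of the operand only
        else:
            trace.append(hw(x ^ reg))      # leak HW of the register transition
            reg = x                        # hidden state: the pipeline register
    return trace

share0, share1 = 0b1011, 0b0110            # Boolean masking: each share alone is uniform
print(simulate_trace([share0, share1], "value"))       # looks fine per operand
print(simulate_trace([share0, share1], "transition"))  # leaks HW(share0 ^ share1)!
```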
Nils Fleischhacker, Kasper Green Larsen, Mark Simkin
ePrint Report
Property-preserving hash functions allow for compressing long inputs $x_0$ and $x_1$ into short hashes $h(x_0)$ and $h(x_1)$ in a manner that allows for computing a predicate $P(x_0, x_1)$ given only the two hash values without having access to the original data. Such hash functions are said to be adversarially robust if an adversary that gets to pick $x_0$ and $x_1$ after the hash function has been sampled, cannot find inputs for which the predicate evaluated on the hash values outputs the incorrect result.

In this work we construct robust property-preserving hash functions for the Hamming-distance predicate, which distinguishes inputs with Hamming distance at least some threshold $t$ from those with distance less than $t$. The security of the construction is based on standard lattice hardness assumptions.
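
As a point of contrast, the naive non-robust approach is easy to state (a toy sketch of ours, not the paper's construction): subsample coordinates and estimate the distance from the hashes alone. This preserves the predicate for random inputs, but an adversary who knows the sampled positions can defeat it; robustness against such adversaries is exactly what the lattice-based construction provides.

```python
# Naive, NON-robust property-preserving hash for the Hamming-distance
# threshold predicate: keep a random subsample of coordinates and compare
# only those. Works on random inputs, fails against adversarial ones.
import random

def sample_hash(ell, m, seed):
    rng = random.Random(seed)
    idx = [rng.randrange(ell) for _ in range(m)]   # positions are effectively public
    return lambda x: [x[i] for i in idx]

ell, t, m = 1024, 128, 64
h = sample_hash(ell, m, seed=42)
x0 = [random.getrandbits(1) for _ in range(ell)]
x1 = list(x0)
for i in random.sample(range(ell), 256):           # flip 256 >= t positions
    x1[i] ^= 1

# evaluate the predicate from the two short hashes alone
d_est = sum(a != b for a, b in zip(h(x0), h(x1))) * ell / m
print(d_est >= t)   # True with high probability -- for non-adversarial inputs
```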

Our construction has several advantages over the best known previous construction by Fleischhacker and Simkin (Eurocrypt 2021). Our construction relies on a single well-studied hardness assumption from lattice cryptography, whereas the previous work relied on a newly introduced family of computational hardness assumptions. In terms of computational effort, our construction only requires a small number of modular additions per input bit, whereas the work of Fleischhacker and Simkin required several exponentiations per bit as well as the interpolation and evaluation of high-degree polynomials over large fields. An additional benefit of our construction is that the description of the hash function can be compressed to $\lambda$ bits assuming a random oracle. Previous work has descriptions of length $O(\ell \lambda)$ bits for input bit-length $\ell$, which have a secret structure and thus cannot be compressed.

We prove a lower bound on the output size of any property-preserving hash function for the hamming distance predicate. The bound shows that the size of our hash value is not far from optimal.
Madhurima Mukhopadhyay, Palash Sarkar
ePrint Report
We introduce a technique to obtain a practical speed-up for relation collection in class group computations. The idea is to perform a pseudo-random walk over the ideals. The ideals visited by the walk are used exactly as in the previous algorithm due to Gélin (2018). Under the heuristic assumption that the ideals visited by the walk behave like the ideals randomly generated in Gélin's algorithm, the asymptotic complexity of the new algorithm remains the same as that of Gélin's algorithm. The main advantage of the new method over Gélin's method is that the pseudo-random walk requires a single ideal multiplication to generate the next ideal in the walk, whereas Gélin's algorithm requires a number of ideal multiplications to generate each ideal to be tested. We have made Magma implementations of both the new algorithm and Gélin's algorithm. Timing results confirm that there is indeed a substantial practical speed-up in relation collection by the new algorithm over Gélin's algorithm.
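
Schematically, the saving is one multiplication per step (a toy sketch of ours in a multiplicative group mod p standing in for the class group; all parameters are arbitrary): a Pollard-style walk advances by multiplying with one of a few precomputed elements selected by the current point, instead of re-randomizing from scratch for every candidate.

```python
# Pollard-style pseudo-random walk: one group multiplication per step.
# Each visited element stands in for an ideal to be tested for a relation.
p = 10007                                           # toy prime modulus
multipliers = [pow(3, 2**i, p) for i in range(8)]   # fixed, precomputed

def walk(start, steps):
    x, visited = start, []
    for _ in range(steps):
        x = (x * multipliers[x % 8]) % p   # single multiplication per step
        visited.append(x)                  # candidate for relation testing
    return visited

print(walk(5, 10))
```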
Akashdeep Saha, Urbi Chatterjee, Debdeep Mukhopadhyay, Rajat Subhra Chakraborty
ePrint Report
CAS-Lock (proposed at CHES 2020) is an advanced logic locking technique that harnesses the concept of a single-point function to provide SAT-attack resiliency. It is claimed to be powerful and efficient enough to mitigate state-of-the-art attacks against logic locking techniques. Despite the security robustness of CAS-Lock as claimed by its authors, we expose a serious vulnerability and devise a novel attack algorithm exploiting it. The proposed attack can reveal the correct key by extracting the Distinguishing Input Patterns (DIPs) pertaining to a carefully chosen key simulation of the locked design. The correct key is obtained from the combination of elements from the set of extracted DIPs. Our attack is successful against various AND/OR cascaded-chain configurations of CAS-Lock and achieves a 100% success rate in recovering the correct key.
Amund Askeland, Sondre Rønjom
ePrint Report
We take a look at the current implementation of NTRU submitted to the NIST post-quantum standardization project, and identify two strong sources of leakage in the unpacking of the secret key. The strength of the leakage is due to the target processor handling data with very different Hamming weights depending on parts of the secret key. Focusing on these strong leakages alone, we present a single-trace side-channel attack that reliably recovers a large portion of the secret key, and we use lattice reduction techniques to find the remaining parts. Further, we show how small changes to the implementation greatly reduce the leakage without any overhead.
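
The flavour of such leakage, in a toy simulation of ours (not the actual NTRU unpacking code; all parameters are invented): if ternary secret coefficients are unpacked into machine words whose Hamming weights are 0, 1, or 16, a single noisy sample per word already separates the classes.

```python
# Single-trace intuition: one noisy Hamming-weight sample per unpacked
# secret word suffices when the weight classes are far apart.
import random

def hw(x):
    return bin(x).count("1")

secret = [random.choice([-1, 0, 1]) for _ in range(16)]
words = [c & 0xFFFF for c in secret]       # two's-complement 16-bit encoding
# HW(0x0000) = 0, HW(0x0001) = 1, HW(0xFFFF) = 16: easily distinguishable
trace = [hw(w) + random.gauss(0, 0.1) for w in words]   # one noisy sample each

recovered = [-1 if t > 8 else (1 if t > 0.5 else 0) for t in trace]
print(recovered == secret)                 # True with high probability
```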
Jongkil Kim, Seyit Camtepe, Joonsang Baek, Willy Susilo, Josef Pieprzyk, Surya Nepal
ePrint Report
The amount of encrypted Internet traffic almost doubles every year thanks to the wide adoption of end-to-end traffic encryption solutions such as IPsec, TLS, and SSH. Despite all the user-privacy benefits that end-to-end encryption provides, encrypted Internet traffic blinds intrusion detection systems (IDS) and makes detecting malicious traffic hugely difficult. The resulting conflict between user privacy and security has demanded solutions for deep packet inspection (DPI) over encrypted traffic. The solutions proposed to date remain restricted in that they require intensive computations during connection setup or detection. For example, BlindBox, introduced by Sherry et al. (SIGCOMM 2015), enables inspection over TLS-encrypted traffic without compromising users' privacy, but its usage is limited due to a significant delay in establishing an inspected channel. PrivDPI, proposed more recently by Ning et al. (ACM CCS 2019), improves the overall efficiency of BlindBox and makes the inspection scenario more viable.

Despite the improvement, we show in this paper that the user privacy of Ning et al.'s PrivDPI can be compromised entirely by the rule generator, without involving any other parties, including the middlebox. Having observed the difficulties of realizing efficiency and security in previous work, we propose a new DPI system for encrypted traffic, named "Practical and Privacy-Preserving Deep Packet Inspection (P2DPI)". P2DPI enjoys the same level of security and privacy that BlindBox provides. At the same time, P2DPI offers fast setup and encryption and outperforms PrivDPI. Our results are supported by formal security analysis. We implemented P2DPI and a comparable PrivDPI and performed extensive experiments for performance analysis and comparison.
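
The pattern shared by BlindBox, PrivDPI, and P2DPI can be sketched as follows (a toy of ours using HMAC as the keyed function; the real protocols differ precisely in how this key material is negotiated and reused, which is where the setup costs and the privacy pitfalls live): traffic is split into tokens, tokens pass through a keyed function, and the middlebox matches them against images of the rules.

```python
# Encrypted-token matching in miniature: the middlebox holds PRF-images of
# the rules and flags matches without seeing plaintext traffic.
import hmac, hashlib

def prf(key: bytes, token: bytes) -> bytes:
    return hmac.new(key, token, hashlib.sha256).digest()

def tokenize(payload: bytes, w: int = 8):
    return [payload[i:i + w] for i in range(len(payload) - w + 1)]  # sliding window

session_key = b"toy per-session key"
rules = [b"evil.exe", b"attack!!"]
encrypted_rules = {prf(session_key, r) for r in rules}   # given to the middlebox

payload = b"GET /files/evil.exe HTTP/1.1"
hits = [t for t in tokenize(payload) if prf(session_key, t) in encrypted_rules]
print(hits)    # [b'evil.exe'] -- matched without decrypting the stream
```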
Yael Tauman Kalai, Vinod Vaikuntanathan, Rachel Yun Zhang
ePrint Report
We introduce the notion of a somewhere statistically sound (SSS) interactive argument, which is a hybrid between a statistically sound proof and a computationally sound proof (a.k.a. an argument).

- First, we show that Kilian's protocol, instantiated with a computationally non-signaling PCP (Brakerski, Holmgren, and Kalai, STOC 2017) and a somewhere statistically binding hash family (Hubacek and Wichs, ITCS 2015), is an SSS argument.

- Second, we show that the soundness of SSS arguments can be proved in a straight-line manner, implying that they are also post-quantum sound if the underlying assumption is post-quantum secure. This provides a straightforward proof that Kilian's protocol, instantiated this way, is post-quantum sound under the post-quantum hardness of LWE (though we emphasize that a computationally non-signaling PCP exists only for deterministic languages, and more generally, for specific subclasses of non-deterministic languages such as $\mathsf{NTISP}$, but not for all of $\mathsf{NP}$).

- We put forward a natural conjecture that constant-round SSS arguments can be soundly converted into non-interactive arguments via the Fiat-Shamir transformation. We argue that SSS arguments evade the current Fiat-Shamir counterexamples, including the one for Kilian's protocol (Bartusek, Bronfman, Holmgren, Ma and Rothblum, TCC 2019) by requiring additional properties from both the hash family and the PCP.

As an additional result, we show that by using a computationally non-signaling PCP and a somewhere statistically binding hash family, one can efficiently convert any succinct non-interactive argument (SNARG) for $\mathsf{BatchNP}$ into a SNARG for $\mathsf{P}$.
Sven Heiberg, Kristjan Krips, Jan Willemson
ePrint Report
This paper studies the challenges of creating a mobile-device-based voting client. We discuss the issues related to standalone and mobile-browser-based voting applications. In both cases, we discuss the problems of vote privacy, integrity, and voting channel availability. We conclude that neither option can currently achieve the level of security that PC-based voting clients can provide, with the attack surface being larger in the case of the mobile-browser-based voting application.
Yongjun Zhao, Huaxiong Wang, Kwok-Yan Lam
ePrint Report
Volumetric leakage in encrypted databases had been overlooked by the community for a long time until Kellaris et al. (CCS ’16) proposed the first database reconstruction attack leveraging communication volume. Their attack was soon improved and several query recovery attacks were discovered recently. In response to the advancements of volumetric leakage attacks, volume-hiding searchable symmetric encryption (SSE) schemes have been proposed (Kamara and Moataz, Eurocrypt ’19 & Patel et al., CCS ’19). In these schemes, the database is padded in a clever way so that the volume (i.e., the number of responses) for any search query is the same or computationally indistinguishable while keeping the storage complexity and search complexity as small as possible.

Unfortunately, existing volume-hiding SSE schemes do not support atomic updates (i.e., addition/deletion of an arbitrary keyword-document pair), which is the most common update operation considered in the SSE literature. Meanwhile, recent volumetric attacks (Wang et al., EuroS&P ’20 & Blackstone et al., NDSS ’20) indeed target dynamic databases.

We initiate a formal study of volume-hiding dynamic SSE. We extend the existing definition of the volume-hiding leakage function to the dynamic setting and present efficient constructions VH-DSSE and VH-DSSE$^k$. VH-DSSE suffers from a non-negligible correctness error. To remedy this disadvantage, we propose the multi-copy construction VH-DSSE$^k$, which amplifies correctness by parallel repetition. As a side contribution, both VH-DSSE and VH-DSSE$^k$ satisfy the strongest notion of backward privacy, which, to the best of our knowledge, makes them the first such constructions in the literature.
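
For intuition, volume-hiding in its most naive static form (a toy sketch of ours, far from the paper's constructions, which avoid this worst-case storage blow-up and, crucially, must keep the guarantee under updates): pad every keyword's posting list to the maximum volume, so every response has the same length.

```python
# Naive static volume-hiding: pad all posting lists to the maximum volume
# with dummies, so the response volume of any query is identical.
import os

def build_index(db: dict, max_vol: int) -> dict:
    index = {}
    for kw, ids in db.items():
        dummies = [f"dummy-{os.urandom(4).hex()}" for _ in range(max_vol - len(ids))]
        index[kw] = list(ids) + dummies      # every list now has length max_vol
    return index

db = {"alice": [1, 7], "bob": [2], "carol": [3, 4, 5]}
index = build_index(db, max_vol=max(len(v) for v in db.values()))
print({kw: len(lst) for kw, lst in index.items()})   # uniform volumes
```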
Elena Kirshanova, Thijs Laarhoven
ePrint Report
In two of the main areas of post-quantum cryptography, based on lattices and codes, nearest neighbor techniques have been used to speed up state-of-the-art cryptanalytic algorithms, and to obtain the lowest asymptotic cost estimates to date [May-Ozerov, Eurocrypt 2015; Becker-Ducas-Gama-Laarhoven, SODA 2016]. These upper bounds are useful for assessing the security of cryptosystems against known attacks, but to guarantee long-term security one would like to have closely matching lower bounds, showing that improvements on the algorithmic side will not drastically reduce the security in the future. As existing lower bounds from the nearest neighbor literature do not apply to the nearest neighbor problems appearing in this context, one might wonder whether further speedups to these cryptanalytic algorithms can still be found by only improving the nearest neighbor subroutines.

We derive new lower bounds on the costs of solving the nearest neighbor search problems appearing in these cryptanalytic settings. For the Euclidean metric we show that for random data sets on the sphere, the locality-sensitive filtering approach of [Becker-Ducas-Gama-Laarhoven, SODA 2016] using spherical caps is optimal, and hence, within a broad class of lattice sieving algorithms covering almost all approaches to date, their asymptotic time complexity of $2^{0.292d + o(d)}$ is optimal. Similar conditional optimality results apply to lattice sieving variants, such as the $2^{0.265d + o(d)}$ complexity for quantum sieving [Laarhoven, PhD thesis 2016] and previously derived complexity estimates for tuple sieving [Herold-Kirshanova-Laarhoven, PKC 2018].

For the Hamming metric we derive new lower bounds for nearest neighbor searching which almost match the best upper bounds from the literature [May-Ozerov, Eurocrypt 2015]. As a consequence we derive conditional lower bounds on decoding attacks, showing that here too one should search for improvements elsewhere to significantly undermine security estimates from the literature.
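
The technique whose optimality is established here, in toy form (an illustrative sketch of ours with arbitrary parameters): a spherical-cap filter keeps a vector iff its inner product with a random direction exceeds a threshold, so near neighbors share many surviving filters while independent points share few.

```python
# Locality-sensitive filtering with spherical caps on the unit sphere.
import math, random

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def rand_unit(d):
    return normalize([random.gauss(0, 1) for _ in range(d)])

def surviving(v, filters, alpha):
    # indices of filters whose cap {u : <u, v> >= alpha} contains v
    return {i for i, u in enumerate(filters)
            if sum(a * b for a, b in zip(u, v)) >= alpha}

d, m, alpha = 32, 500, 0.3
filters = [rand_unit(d) for _ in range(m)]
p = rand_unit(d)
near = normalize([x + 0.1 * random.gauss(0, 1) for x in p])  # small perturbation of p
far = rand_unit(d)                                           # independent point

print(len(surviving(p, filters, alpha) & surviving(near, filters, alpha)),   # many
      len(surviving(p, filters, alpha) & surviving(far, filters, alpha)))    # few
```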