IACR News
Here you can see all recent updates to the IACR webpage.
22 April 2019
Nico Döttling, Sanjam Garg, Mohammad Hajiabadi, Daniel Masny, Daniel Wichs
ePrint Report
We show a new general approach for constructing maliciously secure two-round oblivious transfer (OT). Specifically, we provide a generic sequence of transformations to upgrade a very basic notion of two-round OT, which we call elementary OT, to UC-secure OT. We then give simple constructions of elementary OT under the Computational Diffie-Hellman (CDH) assumption or the Learning Parity with Noise (LPN) assumption, yielding the first constructions of malicious (UC-secure) two-round OT under these assumptions. Since two-round OT is complete for two-round 2-party and multi-party computation in the malicious setting, we also achieve the first constructions of the latter under these assumptions.
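As background, the classic Diffie-Hellman-style two-round OT pattern can be sketched as follows. This is a toy, semi-honest sketch with illustrative group parameters, not the paper's maliciously secure construction:

```python
import hashlib

# Toy two-round OT sketch (semi-honest, "simplest OT" style) over a small
# prime-order subgroup. All parameters are toy values for illustration.
p, q, g = 467, 233, 4          # p = 2q + 1; g generates the order-q subgroup

def H(x: int) -> bytes:
    return hashlib.sha256(str(x).encode()).digest()

# Round 1 (sender): publish A = g^a.
a = 57
A = pow(g, a, p)

# Round 2 (receiver): choice bit c, randomness b.
c, b = 1, 99
B = pow(g, b, p) if c == 0 else (A * pow(g, b, p)) % p

# Sender derives both keys; receiver can derive only k_c.
k0 = H(pow(B, a, p))
k1 = H(pow(B * pow(A, -1, p) % p, a, p))
k_receiver = H(pow(A, b, p))

assert k_receiver == (k1 if c else k0)   # receiver learns its chosen key
assert k_receiver != (k0 if c else k1)   # but not the other one
```

The receiver's message B hides the choice bit c, while the sender's two derived keys encrypt the two OT inputs.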
Itai Dinur
ePrint Report
An adversary with $S$ bits of
memory obtains a stream of $Q$ elements that are uniformly drawn from the set $\{1,2,\ldots,N\}$, either with or without replacement. This corresponds to sampling $Q$ elements using either a random function or a random permutation. The adversary's goal is to distinguish between these two cases.
This problem was first considered by Jaeger and Tessaro (EUROCRYPT 2019), who proved that the adversary's advantage is upper bounded by $\sqrt{Q \cdot S/N}$. Jaeger and Tessaro used this bound as a streaming switching lemma which allowed them to prove that known time-memory tradeoff attacks on several modes of operation (such as counter-mode) are optimal up to a factor of $O(\log N)$ if $Q \cdot S \approx N$. However, if $Q \cdot S \ll N$ there is a gap between the upper bound of $\sqrt{Q \cdot S/N}$ and the $Q \cdot S/N$ advantage obtained by known attacks. Moreover, the bound's proof assumed an unproven combinatorial conjecture.
In this paper, we prove a tight upper bound (up to poly-logarithmic factors) of $O(\log Q \cdot Q \cdot S/N)$ on the adversary's advantage in the streaming distinguishing problem. The proof does not require a conjecture and is based on a reduction from communication complexity to streaming.
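The roughly $Q \cdot S/N$-advantage attack referenced above can be sketched empirically: a memory-bounded distinguisher stores a few element identities and waits for a repeat, which a permutation can never produce. The parameters below are illustrative toy values:

```python
import random

# Empirical sketch of the memory-bounded distinguisher: store s element
# identities, then look for a repeat among the later stream elements.
random.seed(1)
N, s, Q, trials = 100_000, 30, 200, 2_000

def collides(stream):
    stored = set(stream[:s])
    return any(x in stored for x in stream[s:])

# Random function: sample with replacement; random permutation: without.
hits_f = sum(collides(random.choices(range(N), k=Q)) for _ in range(trials))
hits_p = sum(collides(random.sample(range(N), Q)) for _ in range(trials))

advantage = (hits_f - hits_p) / trials
assert hits_p == 0            # a permutation never repeats an element
assert advantage > 0.01       # roughly s * (Q - s) / N ≈ 0.05 here
```

The empirical advantage scales like $s(Q-s)/N$, matching the $Q \cdot S/N$ behavior of known attacks rather than the square-root bound.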
Eliane Koussa, Gilles Macario-Rat, Jacques Patarin
ePrint Report
In this document, we investigate the complexity of a long-standing combinatorial
problem - namely the Permuted Kernel Problem (PKP) - for which no
new breakthroughs have been reported in a while. PKP is an NP-complete algebraic
problem that consists of finding a kernel vector with particular entries for a
publicly known matrix. It is simple and needs only basic linear algebra. Hence,
this problem was used to develop the first Identification Scheme (IDS) with
an efficient implementation on low-cost smart cards.
The Permuted Kernel Problem has been extensively studied. We
present a summary of previously known algorithms devoted to solving this problem
and give an update on their complexity. Contrary to what is shown in Joux and Jaulmes's article,
and after a thorough analysis of the state-of-the-art attacks on PKP, we claim that
the most efficient algorithm for solving PKP is still the one introduced by J.
Patarin and P. Chauvaud. We have been able to specify a theoretical
bound on the complexity of PKP which allows us to identify hard instances
of the problem.
Despite the research effort, all known attacks on PKP remain exponential.
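For concreteness, a tiny hypothetical PKP instance can be solved by brute force over permutations; real parameters are of course far too large for this, which is exactly what makes the problem hard:

```python
import itertools

# Tiny PKP instance: find a permutation of the entries of v that lies in
# the kernel of A over GF(p). A, v, and p are illustrative toy values.
p = 11
A = [[1, 5, 0, 0],
     [0, 0, 4, 8]]
v = [4, 1, 2, 3]                      # public vector (entries to permute)

def in_kernel(w):
    return all(sum(a * x for a, x in zip(row, w)) % p == 0 for row in A)

solutions = [w for w in itertools.permutations(v) if in_kernel(w)]
assert solutions == [(1, 2, 3, 4)]    # exhaustive search finds the kernel vector
```

Verification needs only a matrix-vector product mod p, which is why PKP-based identification schemes run on low-cost smart cards.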
Tong Cao, Jiangshan Yu, Jérémie Decouchant, Xiapu Luo, Paulo Verissimo
ePrint Report
As of 12th January 2019, Monero is ranked as the
first privacy-preserving cryptocurrency by market capitalization,
and the 14th among all cryptocurrencies.
This paper aims at improving the understanding of the Monero
peer-to-peer network. We develop a tool set to explore the Monero
peer-to-peer network, including its network size, distribution, and
connectivity. In addition, we set up four Monero nodes that run
our tools: two in the U.S., one in Asia, and one in Europe. We
show that with only one week of data collection, our
tool is already able to discover 68.7% more daily active peers (on
average) than a Monero mining pool, called Monerohash,
that also provides data on the Monero node distribution.
Overall, we discovered 21,678 peer IP addresses in the Monero
network. Our results show that only 4,329 (about 20%) peers
are active. We also show that the nodes in the Monero network
follow a power-law distribution in the number of
connections they maintain. In particular, only 0.7% of the active
nodes maintain more than 250 outgoing connections simultaneously,
and a large proportion (86.8%) of the nodes maintain fewer than
8 outgoing connections. These 86.8% of nodes maintain only 17.14%
of the overall connections in the network, whereas the remaining
13.2% nodes maintain 82.86% of the overall connections. We
also show that our tool is able to dynamically infer the complete
outgoing connections of 99.3% of the nodes in the network, and infer
250 outgoing connections of the remaining 0.7% of nodes.
Kai Samelin, Daniel Slamanig
ePrint Report
Sanitizable signatures allow a single, signer-defined sanitizer to modify signed messages in a controlled way without invalidating the respective signature. They have turned out to be a fascinating primitive, as evidenced by the different variants and extensions, e.g., allowing multiple sanitizers or adding
new sanitizers one-by-one. Still, existing constructions are very limited regarding their flexibility in specifying potential sanitizers. In this paper, we propose a different and more powerful approach: instead of using the sanitizers' public keys directly,
we assign attributes to them. Sanitizing is then based on policies, i.e., access structures defined over attributes.
A sanitizer can sanitize if, and only if, it holds a secret key for attributes satisfying the policy associated with a signature,
while the scheme offers full-scale accountability.
Houda Ferradi, Keita Xagawa
ePrint Report
This paper presents a novel yet efficient secret-key authentication protocol and MAC with a post-quantum security promise, whose security reduces to the conjectured quantum-safe hardness of the Mersenne Low Hamming Combination (MERS) assumption recently introduced by Aggarwal, Joux, Prakash, and Santha (CRYPTO 2018). Our protocols are well suited to weak devices such as smart cards and RFID tags.
Mustafa Khairallah
ePrint Report
This document includes a collision/forgery attack against SNEIKEN128/192/256,
where every message with more than 128 bytes of associated data can be converted
into another message with different associated data and the same ciphertext/tag.
The attack is a direct application of the probability 1 differential of the SNEIK
permutation found by Léo Perrin in [Per19]. We verify the attack using the reference
implementation of SNEIKEN128 provided by the designers, providing an example of
such collisions.
Binanda Sengupta, Yingjiu Li, Kai Bu, Robert H. Deng
ePrint Report
The end-users communicating over a network path currently have no control over the path. For a better quality of service, the source node often opts for a superior (or premium) network path in order to send packets to the destination node. However, the current Internet architecture provides no assurance that the packets indeed follow the designated path. Network path validation schemes address this issue and enable each node present on a network path to validate whether each packet has followed the specific path so far. In this work, we introduce two notions of privacy -- path privacy and index privacy -- in the context of network path validation. We show that, in case a network path validation scheme does not satisfy these two properties, the scheme is vulnerable to certain practical attacks (that affect the reliability, neutrality and quality of service offered by the underlying network). To the best of our knowledge, ours is the first work that addresses privacy issues related to network path validation. We design PrivNPV, a privacy-preserving network path validation protocol, that satisfies both path privacy and index privacy. We discuss several attacks related to network path validation and how PrivNPV defends against these attacks. Finally, we discuss the practicality of PrivNPV based on relevant parameters.
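To illustrate the general mechanism (a generic nested-MAC sketch, not PrivNPV itself), assume the source shares a symmetric key with every on-path node; each hop's proof is then bound to the previous one, so a node can check the packet's history so far:

```python
import hmac, hashlib

# Generic nested-MAC path validation sketch. Node ids and keys are
# hypothetical toy values; real schemes derive per-packet keys.
path = ["n1", "n2", "n3"]
keys = {n: hashlib.sha256(n.encode()).digest() for n in path}
payload = b"packet-payload"

def mac(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

# Source precomputes the proof chain for the chosen path.
proofs, prev = {}, b""
for n in path:
    prev = mac(keys[n], payload + prev)
    proofs[n] = prev

# Each node recomputes its proof from the previous hop's proof.
prev = b""
for n in path:
    expected = mac(keys[n], payload + prev)
    assert hmac.compare_digest(expected, proofs[n])   # hop validated
    prev = expected
```

Note that in this naive sketch each proof reveals which nodes precede it on the path; PrivNPV's path and index privacy goals are precisely about hiding that information.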
David Derler, Kai Samelin, Daniel Slamanig, Christoph Striecks
ePrint Report
Blockchain technologies recently received a considerable amount
of attention. While the initial focus was mainly on the use of
blockchains in the context of cryptocurrencies such as Bitcoin, application
scenarios now go far beyond this. Most blockchains have the property
that once some object, e.g., a block or a transaction, has been registered
to be included into the blockchain, it is persisted and there are
no means to modify it again. While this is an essential feature of most
blockchain scenarios, it is still often desirable - at times it may be even
legally required - to allow for breaking this immutability in a controlled
way.
Only recently, Ateniese et al. (EuroS&P 2017) proposed an elegant
solution to this problem on the block level. Thereby, the authors replace
standard hash functions with so-called chameleon-hashes (Krawczyk and
Rabin, NDSS 2000). While their work seems to offer a suitable solution to
the problem of controlled re-writing of blockchains, their approach is too
coarse-grained in that it only offers an all-or-nothing solution. We revisit
this idea and introduce the novel concept of policy-based chameleon-hashes
(PCH). PCHs generalize the notion of chameleon-hashes by giving
the party computing a hash the ability to associate access policies to the
generated hashes. Anyone who possesses enough privileges to satisfy the
policy can then find arbitrary collisions for a given hash. We then apply
this concept to transaction-level rewriting within blockchains, and thus
support fine-grained and controlled modifiability of blockchain objects.
Besides modeling PCHs, we present a generic construction of PCHs (using
a strengthened version of chameleon-hashes with ephemeral trapdoors
which we also introduce), rigorously prove its security, and instantiate it
with efficient building blocks. We report first implementation results.
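For intuition, the classic discrete-log chameleon-hash that PCHs build on can be sketched with toy parameters; the trapdoor holder can find a collision for any new message, which is the capability PCHs gate behind an access policy:

```python
# Discrete-log chameleon hash sketch (Krawczyk-Rabin style). All
# parameters are toy values for illustration only.
p, q, g = 467, 233, 4             # p = 2q + 1, g generates the order-q subgroup
x = 57                            # trapdoor
y = pow(g, x, p)                  # public key

def ch(m, r):
    return pow(g, m, p) * pow(y, r, p) % p

m, r = 100, 31
h = ch(m, r)

# Collision: solve m + x*r = m' + x*r' (mod q) for r'.
m2 = 42
r2 = (r + (m - m2) * pow(x, -1, q)) % q
assert ch(m2, r2) == h            # same hash, different message
```

Without the trapdoor x, finding such a collision requires solving a discrete logarithm; a PCH hands out trapdoor capability only to parties whose attributes satisfy the hash's policy.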
Jo Vliegen, Md Masoom Rabbani, Mauro Conti, Nele Mentens
ePrint Report
Field-Programmable Gate Arrays or FPGAs are popular platforms for hardware-based attestation. They offer protection
against physical and remote attacks by verifying whether an embedded processor is running the intended application code. However, since FPGAs are configurable after deployment (and thus not tamper-resistant), they are susceptible to attacks, just like microprocessors. Therefore, attesting an electronic system that uses an FPGA should be done by verifying the status of both the software and the hardware, without the availability of a dedicated tamper-resistant hardware module.
Inspired by the work of Perito and Tsudik, this paper proposes a partially reconfigurable FPGA architecture and attestation protocol that enable the self-attestation of the FPGA. Through the use of our solution, the FPGA can be used as a trusted hardware module to perform hardware-based attestation of a processor. This way, an entire hardware/software system can be protected against malicious code updates.
Kazuhiko Minematsu
ePrint Report
A message authentication code, MAC for short, is a symmetric-key cryptographic function for authenticity. A standard MAC verification only tells whether a message is valid or invalid, so we cannot identify which part is corrupted when a message is invalid.
In this paper we study a class of MAC functions that enables identifying the corrupted part, which we call a group testing MAC (GTM). This can be seen as an application of classical (non-adaptive) combinatorial group testing to MACs.
Although the basic concept of GTM (or its keyless variant) has been proposed in various application areas, such as data forensics and computer virus testing, these works treat the underlying MAC function as a black box, and the exact computation cost of GTM seems to have been overlooked.
In this paper, we study the computational aspect of GTM, and show that a simple yet non-trivial extension of the parallelizable MAC (PMAC) enables $O(m+t)$ computation for $m$ data items and $t$ tests, irrespective of the underlying test matrix we use, under a natural security model. This greatly improves efficiency over naively applying a black-box MAC for each test, which requires $O(mt)$ time. Based on existing group testing methods, we also present experimental results for our proposal and observe that it runs as fast as computing a single MAC tag, with a speed-up over the conventional method by a factor of around 8 to 15 for $m=10^4$ to $10^5$ items.
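A black-box version of the group-testing idea (the naive $O(mt)$ baseline that the paper improves on, not its PMAC-based scheme) can be sketched as follows, using a hypothetical test matrix that isolates a single corrupted item:

```python
import hmac, hashlib

# Non-adaptive group-testing MAC sketch: one "all items" test plus one
# test per bit position locates a single corrupted item among m = 8.
key = b"toy-key"                 # hypothetical shared key
m, bits = 8, 3
tests = [list(range(m))] + [[i for i in range(m) if (i >> b) & 1]
                            for b in range(bits)]

def tag(items, test):
    data = b"|".join(items[i] for i in test)
    return hmac.new(key, data, hashlib.sha256).digest()

items = [b"item%d" % i for i in range(m)]
tags = [tag(items, t) for t in tests]          # authentic tags

items[5] = b"corrupted"                        # tamper with item 5
bad = [j for j, t in enumerate(tests) if tag(items, t) != tags[j]]

assert 0 in bad                                # global test fails
corrupted = sum(1 << b for b in range(bits) if (b + 1) in bad)
assert corrupted == 5                          # bits 0 and 2 failed -> index 5
```

Here every failed test recomputes a MAC over all its items, which is the $O(mt)$ cost the paper's PMAC extension avoids.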
Riad S. Wahby, Dan Boneh
ePrint Report
Pairing-friendly elliptic curves in the Barreto-Lynn-Scott family have
experienced a resurgence in popularity due to their use in a number of
real-world projects. One particular Barreto-Lynn-Scott curve, called
BLS12-381, is the locus of significant development and deployment effort,
especially in blockchain applications. This effort has sparked interest
in using BLS12-381 for BLS signatures, and in particular for aggregatable
signatures, which requires hashing to one of the groups of the bilinear
pairing defined by the BLS12-381 elliptic curve.
While there is a substantial body of literature on the problem of hashing to elliptic curves, much of this work does not apply to Barreto-Lynn-Scott curves. Moreover, the work that does apply has the unfortunate property that fast implementations are complex, while simple implementations are slow.
In this work, we address these issues. First, we show a straightforward way of adapting the "simplified SWU" map of Brier et al. to BLS12-381. Second, we describe optimizations to the SWU map that both simplify its implementation and improve its performance; these optimizations may be of interest in other contexts. Third, we implement and evaluate. We find that our work yields constant-time hash functions that are simple to implement, yet perform within 9% of the fastest, non--constant-time alternatives, which require much more complex implementations.
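To illustrate the underlying map, here is a minimal simplified-SWU sketch on a toy curve with $A, B \neq 0$; note it is not BLS12-381 itself, whose curve has $A = 0$, which is exactly why the paper's adaptation is needed:

```python
# Toy simplified-SWU map onto y^2 = x^3 + Ax + B over GF(p), p ≡ 3 (mod 4),
# using the non-square Z = -1 (so s = -u^2). Illustrative parameters only.
p, A, B = 10007, 3, 5                      # toy curve, nonsingular, A*B != 0

def is_square(n):
    return pow(n % p, (p - 1) // 2, p) in (0, 1)

def sqrt_mod(n):                           # valid because p ≡ 3 (mod 4)
    return pow(n % p, (p + 1) // 4, p)

def g(x):
    return (x * x * x + A * x + B) % p

def swu(u):
    s = (-u * u) % p                       # Z * u^2 with Z = -1
    x1 = (-B * pow(A, -1, p) * (1 + pow((s * s + s) % p, -1, p))) % p
    x2 = s * x1 % p                        # g(x2) = s^3 * g(x1), s^3 non-square
    x = x1 if is_square(g(x1)) else x2     # exactly one branch has a root
    return x, sqrt_mod(g(x))

for u in range(2, 50):                     # small u avoid s^2 + s = 0
    x, y = swu(u)
    assert (y * y - g(x)) % p == 0         # output is always on the curve
```

The key identity is $g(x_2) = s^3 \cdot g(x_1)$: since $s^3 = -u^6$ is a non-square when $p \equiv 3 \pmod 4$, exactly one of the two candidate $x$-coordinates yields a square, so the map never fails.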
Kevin Liao, Matthew A. Hammer, Andrew Miller
ePrint Report
The universal composability (UC) framework is the established standard for
analyzing cryptographic protocols in a modular way, such that security is
preserved under concurrent composition with arbitrary other protocols.
However, although UC is widely used for on-paper proofs, prior attempts at
systemizing it have fallen short, either by using a symbolic model (thereby
ruling out computational reduction proofs), or by limiting its expressiveness.
In this paper, we lay the groundwork for building a concrete, executable implementation of the UC framework. Our main contribution is a process calculus, dubbed the Interactive Lambda Calculus (ILC). ILC faithfully captures the computational model underlying UC---interactive Turing machines (ITMs)---by adapting ITMs to a subset of the pi-calculus through an affine typing discipline. In other words, well-typed ILC programs are expressible as ITMs. In turn, ILC's strong confluence property enables reasoning about cryptographic security reductions. We use ILC to develop a simplified implementation of UC called SaUCy.
Manuel San Pedro, Victor Servant, Charles Guillemet
ePrint Report
Side-channel attacks rely on the fact that the physical behavior of a device depends on the data it manipulates. We show in this paper how to use this class of attacks to break the security of some cryptocurrencies hardware wallets when the attacker is given physical access to them. We mounted two profiled side-channel attacks: the first one extracts the user PIN used through the verification function, and the second one extracts the private signing key from the ECDSA scalar multiplication using a single signature. The results of our study were responsibly disclosed to the manufacturer who patched the PIN vulnerability through a firmware upgrade.
18 April 2019
Akira Takahashi, Mehdi Tibouchi
ePrint Report
In this paper, we describe several practically exploitable fault attacks against OpenSSL's implementation of elliptic curve cryptography, related to the singular curve point decompression attacks of Blömer and Günther (FDTC2015) and the degenerate curve attacks of Neves and Tibouchi (PKC 2016).
In particular, we show that OpenSSL makes it possible to construct EC key files containing explicit curve parameters with a compressed base point. A simple single fault injection upon loading such a file yields a full key recovery attack when the key file is used for signing with ECDSA, and a complete recovery of the plaintext when the file is used for encryption using an algorithm like ECIES. The attack is especially devastating against curves with $j$-invariant equal to 0 such as the Bitcoin curve secp256k1, for which key recovery reduces to a single division in the base field.
Additionally, we apply the present fault attack technique to OpenSSL's implementation of ECDH, by combining it with Neves and Tibouchi's degenerate curve attack. This version of the attack applies to usual named curve parameters with nonzero $j$-invariant, such as P192 and P256. Although it is typically more computationally expensive than the one against signatures and encryption, and requires multiple faulty outputs from the server, it can recover the entire static secret key of the server even in the presence of point validation.
These various attacks can be mounted with only a single instruction-skipping fault, and therefore can be easily injected using low-cost voltage glitches on embedded devices. We validated them in practice using concrete fault injection experiments on a Raspberry Pi single-board computer running the up-to-date OpenSSL command line tools---a setting where the threat of fault attacks is quite significant.
Divesh Aggarwal, Maciej Obremski
ePrint Report
Non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs in ICS 2010, have emerged in the last few years as a fundamental object at the intersection of cryptography and coding theory. Non-malleable codes provide a useful message integrity guarantee in situations where traditional error-correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely ``unrelated value''. Although such codes do not exist if the family of ``tampering functions'' $\mathcal{F}$ allowed to modify the original codeword is completely unrestricted, they are known to exist for many broad tampering families $\mathcal{F}$.
The family which has received the most attention is the family of tampering functions in the so-called (2-part) {\em split-state} model: here the message $x$ is encoded into two shares $L$ and $R$, and the attacker is allowed to arbitrarily tamper with each of $L$ and $R$ individually.
Dodis, Kazana, and the authors in STOC 2015 developed a generalization of non-malleable codes, the concept of a non-malleable reduction, where a non-malleable code for a tampering family $\mathcal{F}$ can be seen as a non-malleable reduction from $\mathcal{F}$ to a family NM of functions comprising the identity function and constant functions. They also gave a constant-rate reduction from the split-state tampering family to a tampering family $\mathcal{G}$ containing so-called $2$-lookahead functions and forgetful functions.
In this work, we give a constant-rate non-malleable reduction from the family $\mathcal{G}$ to NM, thereby giving the first {\em constant-rate non-malleable code in the split-state model}.
Central to our work is a technique called inception coding, introduced by Aggarwal, Kazana and Obremski in TCC 2017, where a string that detects tampering on a part of the codeword is concatenated to the message being encoded.
Daniel Apon, Dana Dachman-Soled, Huijing Gong, Jonathan Katz
ePrint Report
Group key-exchange protocols allow a set of N parties to agree on a shared, secret key by communicating over a public network. A number of solutions to this problem have been proposed over the years, mostly based on variants of Diffie-Hellman (two-party) key exchange. There has been relatively little work, however, looking at candidate post-quantum group key-exchange protocols.
Here, we propose a constant-round protocol for unauthenticated group key exchange (i.e., with security against a passive eavesdropper) based on the hardness of the Ring-LWE problem. By applying the Katz-Yung compiler using any post-quantum signature scheme, we obtain a (scalable) protocol for authenticated group key exchange with post-quantum security. Our protocol is constructed by generalizing the Burmester-Desmedt protocol to the Ring-LWE setting, which requires addressing several technical challenges.
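For intuition, the classic Burmester-Desmedt protocol over a toy Diffie-Hellman group (the pattern the paper ports to the Ring-LWE setting) can be sketched as:

```python
import random

# Classic Burmester-Desmedt group key exchange over a toy DH group. Two
# broadcast rounds, n parties, shared key g^(r1*r2 + r2*r3 + ... + rn*r1).
random.seed(7)
p, g, n = 1019, 2, 5                       # toy safe prime p = 2*509 + 1

r = [random.randrange(1, p - 1) for _ in range(n)]
z = [pow(g, ri, p) for ri in r]            # round 1: z_i = g^{r_i}

def X(i):                                  # round 2: X_i = (z_{i+1}/z_{i-1})^{r_i}
    ratio = z[(i + 1) % n] * pow(z[(i - 1) % n], -1, p) % p
    return pow(ratio, r[i], p)

Xs = [X(i) for i in range(n)]

def key(i):                                # K_i = z_{i-1}^{n r_i} * X_i^{n-1} * ...
    k = pow(z[(i - 1) % n], n * r[i], p)
    for j in range(n - 1):
        k = k * pow(Xs[(i + j) % n], n - 1 - j, p) % p
    return k

keys = {key(i) for i in range(n)}
assert len(keys) == 1                      # all parties derive the same key
```

The two broadcast rounds are what make the protocol constant-round regardless of the number of parties; replacing the exponentiations with Ring-LWE operations is where the paper's technical challenges arise.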
Martin R. Albrecht, Lorenzo Grassi, Léo Perrin, Sebastian Ramacher, Christian Rechberger, Dragos Rotaru, Arnab Roy, Markus Schofnegger
ePrint Report
We study approaches to generalized Feistel constructions with low-degree round functions with a focus on x → x^3. Besides known constructions, we also provide a new balanced Feistel construction with improved diffusion properties. This then allows us to propose more efficient generalizations of the MiMC design (Asiacrypt16), which we in turn evaluate in three application areas. Whereas MiMC was not competitive at all in a recently proposed new class of PQ-secure signature schemes, our new construction leads to about 30 times smaller signatures than MiMC. In MPC use cases, where MiMC outperforms all other competitors, we observe improvements in throughput by a factor of more than 7 and simultaneously a 16-fold reduction of preprocessing effort, albeit at the cost of a higher latency. Another use case where MiMC already outperforms other designs, in the area of SNARKs, sees modest improvements. Additionally, this use case benefits from the flexibility to use smaller fields.
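A minimal sketch of a MiMC-style balanced Feistel with round function x → x^3 over GF(p) follows; the prime, key, and round constants are illustrative toy values, not the paper's new construction:

```python
# Toy MiMC-style balanced Feistel over GF(p). The Feistel structure is
# invertible even though cubing need not be a permutation of GF(p).
p = 2**31 - 1                              # toy prime field
rounds = 12
consts = [pow(3, i, p) for i in range(rounds)]   # hypothetical round constants
key = 0x1234567

def enc(l, r):
    for c in consts:
        l, r = r, (l + pow((r + key + c) % p, 3, p)) % p
    return l, r

def dec(l, r):
    for c in reversed(consts):
        l, r = (r - pow((l + key + c) % p, 3, p)) % p, l
    return l, r

msg = (123456789, 987654321)
assert dec(*enc(*msg)) == msg              # round-trip sanity check
```

Each round costs a single cubing, which is exactly the metric (multiplications over a large field) that matters in the MPC, SNARK, and signature use cases evaluated in the paper.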
Evangelia Anna Markatou, Roberto Tamassia
ePrint Report
In recent years, a number of attacks have been developed that can reconstruct encrypted one-dimensional databases that support range queries under the persistent passive adversary model. These attacks allow an (honest but curious) adversary (such as the cloud provider) to find the order of the elements in the database and, in some cases, to even reconstruct the database itself.
In this paper we present two mitigation techniques to make it harder for the adversary to reconstruct the database. The first technique makes it impossible for an adversary to reconstruct the values stored in the database with an error smaller than $k/2$, for $k$ chosen by the client. By fine-tuning $k$, the user can increase the adversary's error at will.
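One plausible reading of the first technique, sketched with toy values (the paper's exact mechanism may differ): the client snaps each value to the center of a width-$k$ bucket before encryption, so even a perfect reconstruction is off by up to $k/2$:

```python
# Bucketing sketch: values are stored as width-k bucket representatives,
# capping any adversarial reconstruction at error k/2. Toy values only.
k = 10

def bucketize(v):
    return (v // k) * k + k // 2           # bucket center

db = [3, 17, 42, 58, 99]                   # hypothetical plaintext values
stored = [bucketize(v) for v in db]        # what range queries operate on

# The best an adversary can recover is `stored`, never `db` itself.
assert all(abs(v - s) <= k // 2 for v, s in zip(db, stored))
```

Increasing k trades query precision for a larger guaranteed reconstruction error, matching the tunable guarantee described above.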
The second technique is targeted towards adversaries who have managed to learn the distribution of the queries issued. Such adversaries may be able to reconstruct most of the database after seeing a very small (i.e., poly-logarithmic) number of queries. To neutralize such adversaries, our technique turns the database into a circular buffer. All known techniques that exploit knowledge of the query distribution fail, and no technique can determine which record is first (or last) based on access pattern leakage.
Evangelia Anna Markatou, Roberto Tamassia
ePrint Report
The widespread use of cloud computing has enabled several database
providers to store their data on servers in the cloud and answer
queries from those servers. In order to protect the confidentiality
of the data stored in the cloud, a database can be stored in an
encrypted form and all queries can be executed on top of the
encrypted database. Recent research results suggest that a curious cloud provider may be able to decrypt some of the items in the database after seeing a large number of queries and their (encrypted) results.
In this paper, we focus on one-dimensional databases that support range queries and develop an attack that can achieve full database reconstruction, inferring the exact value of every element in the database. Previous work on full database reconstruction depends on a client issuing queries uniformly at random.
Let $N$ be the number of elements in the database. Our attack succeeds after the attacker has seen each of the possible query results at least once, independent of their distribution. For the sake of query complexity analysis and comparison with relevant work, if we assume that the client issues queries uniformly at random, we can decrypt the entire database after observing $O(N^2 \log N)$ queries with high probability, an improvement upon Kellaris et al.'s $O(N^4 \log N)$.
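The core ordering step can be sketched on toy data (hypothetical records, and not necessarily the paper's exact algorithm): once every possible result set has been observed, the results containing an endpoint record form a nested chain that reveals the order of all records up to reflection:

```python
import itertools

# Order reconstruction from access-pattern leakage, on a toy database of
# 5 records with distinct values in {1..5}.
values = {"a": 3, "b": 1, "c": 5, "d": 2, "e": 4}   # hidden plaintexts
N = len(values)
results = {frozenset(i for i, v in values.items() if lo <= v <= hi)
           for lo, hi in itertools.combinations_with_replacement(range(1, N + 1), 2)}

# An endpoint record appears in the fewest results (exactly N of them).
counts = {i: sum(1 for res in results if i in res) for i in values}
endpoint = min(counts, key=counts.get)

# Results containing the endpoint are nested; peel them off by size.
order = []
for res in sorted((res for res in results if endpoint in res), key=len):
    (nxt,) = set(res) - set(order)
    order.append(nxt)

assert order in (["b", "d", "a", "e", "c"], ["c", "e", "a", "d", "b"])
```

Note the attack only needs each possible result to appear at least once, not any particular query distribution, mirroring the distribution-independence claimed above.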