## CryptoDB

### Roberto Tamassia

#### Publications

- 2016, ASIACRYPT
- 2015, EPRINT
- 2015, EPRINT
- 2014, EPRINT
- 2013, TCC
- 2013, EUROCRYPT
- 2011, CRYPTO
- 2010, EPRINT
We study the problem of authenticating a \emph{dynamic table} with $n$ entries in the authenticated data structures model, which is related to memory checking. We present the first dynamic authenticated table that is \emph{update-optimal}, using a \emph{lattice-based} construction. In particular, the update time is $O(1)$, improving in this way the "a priori" $O(\log n)$ update bounds for previous constructions, such as the Merkle tree. Moreover, the space used by our data structure is $O(n)$ and logarithmic bounds hold for the other complexity measures, such as \emph{proof size}. To achieve this result, we exploit the \emph{linearity} of lattice-based hash functions and show how the security of lattice-based digests can be guaranteed under updates. This is the first construction achieving constant update bounds without causing other time complexities to increase beyond logarithmic. All previous solutions enjoying constant update complexity have $\Omega(n^\epsilon)$ proof or query bounds. As an application of our lattice-based authenticated table, we provide the first construction of an authenticated Bloom filter, an update-intensive data structure that falls into our model.
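The $O(\log n)$ baseline this abstract improves on is easy to see concretely: in a Merkle tree, changing one table entry forces a rehash of every node on the leaf-to-root path. The sketch below (a toy, assuming SHA-256 and power-of-two table sizes; all names are illustrative, not the paper's construction) counts those rehashes.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    # levels[0] holds the leaf hashes; levels[-1] is the single Merkle root.
    levels = [list(leaves)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def update_leaf(levels, idx, new_leaf):
    # Updating one entry rehashes every node on the leaf-to-root path:
    # O(log n) work, which is exactly the bound the lattice-based table removes.
    levels[0][idx] = new_leaf
    rehashed = 0
    for lvl in range(1, len(levels)):
        idx //= 2
        left, right = levels[lvl - 1][2 * idx], levels[lvl - 1][2 * idx + 1]
        levels[lvl][idx] = h(left + right)
        rehashed += 1
    return rehashed

n = 1024  # power of two, for simplicity of this toy version
levels = build_tree(h(bytes([i % 256])) for i in range(n))
print(update_leaf(levels, 37, h(b"new value")))  # 10 == log2(1024)
```

The paper's lattice-based digest avoids this path recomputation by exploiting the linearity of the hash: an update can be folded into the digest directly, in $O(1)$ time, while proofs stay logarithmic.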
- 2010, EPRINT
We study the problem of authenticating outsourced set operations performed by an untrusted server over a dynamic collection of sets that are owned by a trusted source. We present efficient methods for authenticating fundamental set operations, such as \emph{union} and \emph{intersection}, so that the client can verify the correctness of the received answer. Based on a novel extension of the security properties of \emph{bilinear-map accumulators}, our authentication scheme is the first to achieve \emph{optimality} in several critical performance measures: (1) \emph{the verification overhead at the client is optimal}, that is, the client can verify an answer in time proportional to the size of the query parameters and answer; (2) \emph{the update overhead at the source is constant}; (3) \emph{the bandwidth consumption is optimal}, namely constant between the source and the server and operation-sensitive between the client and the server (i.e., proportional only to the size of the query parameters and the answer); and (4) \emph{the storage usage is optimal}, namely constant at the client and linear at the source and the server. Updates and queries are also efficient at the server. In contrast, existing schemes entail high bandwidth and verification costs or high storage usage since they recompute the query over authentic data or precompute answers to all possible queries. We also show applications of our techniques to the authentication of \emph{keyword searches} on outsourced document collections (e.g., inverted-index queries) and of queries in outsourced \emph{databases} (e.g., equi-join queries). Since set intersection is heavily used in these applications, we obtain new authentication schemes that compare favorably to existing approaches.
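To make the client's verification step concrete, the sketch below shows the simpler half of the problem under a naive stand-in scheme: the source publishes one Merkle digest per set, and the server attaches a membership proof for each element of a claimed intersection. This is emphatically not the paper's bilinear-accumulator construction — it proves only that each claimed element lies in both sets (soundness), not that no element was omitted (completeness), and its verification cost carries an extra $\log n$ factor rather than the paper's optimal bound. All names are illustrative.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_levels(leaves):
    # Leaf count must be a power of two in this toy version.
    levels = [leaves]
    while len(levels[-1]) > 1:
        p = levels[-1]
        levels.append([h(p[i] + p[i + 1]) for i in range(0, len(p), 2)])
    return levels

def membership_proof(levels, idx):
    proof = []
    for lvl in levels[:-1]:
        proof.append((lvl[idx ^ 1], idx % 2))  # (sibling hash, am-I-the-right-child?)
        idx //= 2
    return proof

def verify_member(root, leaf, proof):
    cur = leaf
    for sib, i_am_right in proof:
        cur = h(sib + cur) if i_am_right else h(cur + sib)
    return cur == root

# Source publishes one short digest (here: a Merkle root) per set.
A = sorted(b"acegikmo")  # sets encoded as sorted byte values
B = sorted(b"abcdefgh")
lvA = merkle_levels([h(bytes([x])) for x in A])
lvB = merkle_levels([h(bytes([x])) for x in B])
rootA, rootB = lvA[-1][0], lvB[-1][0]

# Server answers A ∩ B; client checks each element against both digests.
answer = [x for x in A if x in B]
ok = all(
    verify_member(rootA, h(bytes([x])), membership_proof(lvA, A.index(x))) and
    verify_member(rootB, h(bytes([x])), membership_proof(lvB, B.index(x)))
    for x in answer
)
print(ok)  # True: every claimed element provably lies in both sets
```

The accumulator-based scheme in the paper replaces these per-element hash paths with constant-size algebraic witnesses and additionally proves completeness, which is what yields verification time proportional only to the query and answer sizes.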
- 2008, EPRINT
As online storage-outsourcing services (e.g., Amazon's S3) and resource-sharing networks (e.g., peer-to-peer and grid networks) have become popular, the problem of efficiently proving the integrity of data stored at untrusted servers has received increased attention. Ateniese et al. \cite{pdp} formalized this problem with a model called provable data possession (PDP). In this model, data is preprocessed by the client and then sent to an untrusted server for storage. The client can then challenge the server to prove that the data has not been tampered with or deleted (without sending the actual data). However, their PDP scheme applies only to static (or append-only) files. In reality, many outsourced storage applications (including network file systems and outsourced databases) need to handle dynamic data. This paper presents a definitional framework and an efficient construction for dynamic provable data possession (DPDP), which extends the PDP model to support provable updates to stored data (modifications to a block or insertion/deletion of a block). To achieve efficient DPDP, we use a new version of authenticated dictionaries based on rank information. The price of dynamic updates is a performance change from $O(1)$ to $O(\log{n})$, for a file consisting of $n$ blocks, while maintaining the same probability of detection. Yet, our experiments show that this price is very low in practice, and hence our system is applicable to real scenarios. Our contributions can be summarized as defining the DPDP framework formally, and constructing the first fully dynamic PDP solution, which performs verification without downloading actual data and is very efficient. We also show how our DPDP scheme can be extended to construct complete file systems and version control systems (e.g., CVS) at untrusted servers, so that it can be used in complex outsourcing applications.
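The "rank information" idea is what makes block insertion and deletion cheap: each node stores its subtree's block count, so block $i$ is found by ranks rather than by stored indices, and an insertion implicitly shifts all later positions while rehashing only the search path. The toy below (an unbalanced binary tree, so worst-case cost is not the paper's $O(\log n)$ — the actual construction uses a balanced authenticated structure; all names are illustrative) demonstrates the mechanism.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

class Node:
    # One file block per node; `size` is the subtree block count (the "rank"
    # information), letting the tree locate block i without explicit indices.
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right
        self.refresh()

    def refresh(self):
        lsz = self.left.size if self.left else 0
        rsz = self.right.size if self.right else 0
        self.size = lsz + rsz + 1
        lh = self.left.hash if self.left else b""
        rh = self.right.hash if self.right else b""
        # The node's hash binds the block data, both children, and the rank.
        self.hash = h(lh + h(self.data) + rh + self.size.to_bytes(8, "big"))

def get(node, i):
    # In-order position lookup driven entirely by subtree sizes.
    lsz = node.left.size if node.left else 0
    if i < lsz:
        return get(node.left, i)
    if i == lsz:
        return node.data
    return get(node.right, i - lsz - 1)

def insert(node, i, data):
    # Insert a block at position i; only nodes on the search path are
    # rehashed, and later blocks shift implicitly via the updated ranks.
    if node is None:
        return Node(data)
    lsz = node.left.size if node.left else 0
    if i <= lsz:
        node.left = insert(node.left, i, data)
    else:
        node.right = insert(node.right, i - lsz - 1, data)
    node.refresh()
    return node

root = None
for k in range(8):
    root = insert(root, k, b"block%d" % k)
root = insert(root, 3, b"new")      # shifts blocks 3..7 one position right
print(get(root, 3), get(root, 4))   # b'new' b'block3'
```

A client holding only the root hash can then verify a challenged block against the search-path hashes, which is how DPDP keeps verification free of downloading the actual data.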
- 2004, CRYPTO