International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

20 May 2019

Anasuya Acharya, Manoj Prabhakaran, Akash Trehan
ePrint Report
We present CellTree, a new architecture for distributed data repositories. The repository allows data to be stored in largely independent and highly programmable cells, which are “assimilated” into a tree structure. The data in the cells are allowed to change over time, subject to each cell’s own policies; a cell’s policies also govern how the policies themselves can evolve. A design goal of the architecture is to let a CellTree evolve organically over time and adapt itself to multiple applications. Different parts of the tree may be maintained by different sets of parties interested in the respective parts, and the core mechanisms used for maintaining the tree can also vary across the tree and over time.

We present provable guarantees of liveness, correctness and consistency (the last being a generalization of the typical blockchain guarantee of “persistence” to the setting of dynamic data), when the CellTree architecture is instantiated using a simple set of modules. These properties can be guaranteed for individual cells that satisfy requisite trust assumptions, even if these trust assumptions do not hold for other cells in the tree.

We also discuss several features of a CellTree that can be exploited by applications. We leave it for future work to develop full-fledged applications on top of this powerful architecture.

Jakub Breier, Mustafa Khairallah, Xiaolu Hou, Yang Liu
ePrint Report
Current state-of-the-art countermeasures against Fault Injection Attacks (FIA) provide good protection against analysis methods that require the faulty ciphertext to derive the secret information, such as Differential Fault Analysis (DFA) or collision fault analysis. However, recent progress in Ineffective Fault Analysis (IFA) and Statistical IFA (SIFA) constitutes a real threat to cryptographic implementations; moreover, these attacks cannot be thwarted by standard FIA countermeasures, which focus on detecting changes in the intermediate data.

In this paper, we present a novel method based on error-correcting codes that protects implementations against SIFA. We design a set of universal error-correcting gates that can be used for implementing block ciphers. We analyze a hardware implementation of protected GIFT-64 and show that our method provides 100% protection against SIFA.
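
The abstract does not detail the universal error-correcting gates, but the underlying principle is to compute on redundantly encoded values and correct faults before they can bias an output. A toy sketch of that principle, using a 3-repetition code with majority correction (an illustration only, not the paper's construction):

    # Toy illustration only: gates over a 3-repetition code with majority
    # correction. The paper designs universal error-correcting gates for
    # block ciphers; the concrete code and gates there are different.

    def encode(bit):
        """Encode one bit as a 3-repetition codeword."""
        return [bit, bit, bit]

    def correct(word):
        """Majority-correct a codeword, absorbing any single-share fault."""
        b = 1 if sum(word) >= 2 else 0
        return [b, b, b]

    def and_gate(x, y):
        """AND on encoded bits: operate share-wise, then correct.
        Correcting faults inside the gate prevents an attacker from
        learning whether an injected fault was ineffective, which is
        exactly the signal SIFA exploits."""
        return correct([a & b for a, b in zip(x, y)])

    def xor_gate(x, y):
        return correct([a ^ b for a, b in zip(x, y)])

    # A single-bit fault is absorbed and the output is unchanged.
    x, y = encode(1), encode(1)
    x[0] ^= 1
    assert and_gate(x, y) == encode(1)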

Manu Drijvers, Sergey Gorbunov, Gregory Neven, Hoeteck Wee
ePrint Report
Multi-signatures enable a group of signers to jointly generate a short and efficiently verifiable signature on a common message. They are commonly used in proof-of-stake and permissioned blockchains, where reaching consensus usually involves a committee of nodes signing the next block. Adaptive corruptions, however, pose a common threat to such designs, because the adversary can corrupt committee members after they certified a block (and possibly after they sold their stake) and use their signing keys to fork the chain by certifying a different block, thereby undermining the main security goal of a blockchain. Forward-secure signatures protect against such attacks by letting signers evolve their keys over time, while keeping the verification key constant. We present Pixel, a pairing-based forward-secure multi-signature scheme optimized for use in blockchains, which achieves substantial savings in bandwidth, storage requirements, and verification effort. Pixel signatures consist of two group elements, regardless of the number of signers, and can be verified using three pairings and one exponentiation; they also support non-interactive aggregation of individual signatures into a multi-signature. We prove our scheme secure in the random-oracle model under a suitable variant of the bilinear Diffie-Hellman inversion problem.

Khoa Nguyen, Hanh Tang, Huaxiong Wang, Neng Zeng
ePrint Report
Code-based cryptography has a long history but has suffered from periods of slow development. The field has recently attracted a lot of attention as one of the major branches of post-quantum cryptography. However, its subfield of privacy-preserving cryptographic constructions is still rather underdeveloped: important building blocks such as zero-knowledge range proofs and set membership proofs, and even proofs of knowledge of a hash preimage, have not been known under code-based assumptions, and almost no substantial technical development has been introduced in the last several years.

This work introduces several new code-based privacy-preserving cryptographic constructions that considerably advance the state of the art in code-based cryptography. Specifically, we present three major contributions, each of which potentially yields various other applications. Our first contribution is a code-based statistically hiding and computationally binding commitment scheme with a companion zero-knowledge (ZK) argument of knowledge of a valid opening, which can be easily extended to prove that the committed bits satisfy other relations. Our second contribution is the first code-based zero-knowledge range argument for committed values, with communication cost logarithmic in the size of the range. A special feature of our range argument is that, while previous works on range proofs/arguments (in all branches of cryptography) only address ranges of non-negative integers, our protocol can handle signed fractional numbers, and hence can potentially find a larger scope of applications. Our third contribution is the first code-based Merkle-tree accumulator supported by a ZK argument of membership, a primitive known to enable various interesting applications. In particular, it allows us to obtain the first code-based ring signatures and group signatures with logarithmic signature sizes.
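
The third contribution builds on the familiar hash-tree pattern; the novelty is the code-based hash and the ZK membership argument. For orientation, here is a minimal (non-ZK) Merkle accumulator sketch, with SHA-256 standing in for the paper's code-based hash function and a power-of-two number of leaves assumed:

    import hashlib

    def H(data):
        # SHA-256 stands in for the paper's code-based hash function.
        return hashlib.sha256(data).digest()

    def build_tree(leaves):
        """Hash the leaves, then hash pairs upward; returns all levels."""
        levels = [[H(x) for x in leaves]]
        while len(levels[-1]) > 1:
            lvl = levels[-1]
            levels.append([H(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
        return levels

    def prove(levels, index):
        """Authentication path (the witness a ZK argument would hide)."""
        path = []
        for lvl in levels[:-1]:
            path.append(lvl[index ^ 1])  # sibling at this level
            index //= 2
        return path

    def verify(root, leaf, index, path):
        node = H(leaf)
        for sib in path:
            node = H(node + sib) if index % 2 == 0 else H(sib + node)
            index //= 2
        return node == root

    leaves = [b"pk0", b"pk1", b"pk2", b"pk3"]
    levels = build_tree(leaves)
    assert verify(levels[-1][0], b"pk2", 2, prove(levels, 2))

In a ring or group signature built this way, the signer proves in zero knowledge that it knows such a path from its public key to the public root, without revealing the leaf or the path, which is what yields logarithmic signature sizes.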

Shuai Han, Shengli Liu, Lin Lyu, Dawu Gu
ePrint Report
We propose the concept of quasi-adaptive hash proof system (QAHPS), where the projection key is allowed to depend on the specific language for which hash values are computed. We formalize leakage-resilient (LR) ardency for QAHPS by defining two statistical properties, namely LR-$\langle L_0, L_1 \rangle$-universality and LR-$\langle L_0, L_1 \rangle$-key-switching.

We provide a generic approach to tightly leakage-resilient CCA (LR-CCA) secure public-key encryption (PKE) from LR-ardent QAHPS. Our approach is reminiscent of the seminal work of Cramer and Shoup (Eurocrypt'02), and employs three QAHPS schemes: one for generating a uniform string to hide the plaintext, and the other two for proving the well-formedness of the ciphertext. The LR-ardency of QAHPS is what makes the tight LR-CCA security possible. We give instantiations based on the standard k-Linear (k-LIN) assumptions over asymmetric and symmetric pairing groups, respectively, and obtain fully compact PKE with tight LR-CCA security. The security loss is $O(\log Q_e)$, where $Q_e$ denotes the number of encryption queries. Specifically, our tightly LR-CCA secure PKE instantiation from SXDH has only 4 group elements in the public key and 7 group elements in the ciphertext, and is thus the most efficient one to date.

Gilles Barthe, Sonia Belaïd, Thomas Espitau, Pierre-Alain Fouque, Mélissa Rossi, Mehdi Tibouchi
ePrint Report
In this paper, we propose a constant-time implementation of the BLISS lattice-based signature scheme. BLISS is possibly the most efficient lattice-based signature scheme proposed so far, with a level of performance on par with widely used pre-quantum primitives like ECDSA. It is also one of the few post-quantum signature schemes to have seen real-world deployment, as part of the strongSwan VPN software suite.

The outstanding performance of the BLISS signature scheme stems in large part from its reliance on discrete Gaussian distributions, which allow for better parameters and security reductions. However, that advantage has also proved to be its Achilles’ heel, as discrete Gaussians pose serious challenges in terms of secure implementations. Implementations of BLISS so far have included secret-dependent branches and memory accesses, both as part of the discrete Gaussian sampling and of the essential rejection sampling step in signature generation. These defects have led to multiple devastating timing attacks, and were a key reason why BLISS was not submitted to the NIST post-quantum standardization effort. In fact, almost all of the current candidates chose to stay away from Gaussians despite their efficiency advantage, due to the serious concerns surrounding implementation security.

Moreover, naive countermeasures will often not cut it: we show that a reasonable-looking countermeasure suggested in previous work to protect the BLISS rejection sampling can again be defeated using novel timing attacks, in which the timing information is fed to a phase-retrieval machine learning algorithm in order to achieve a full key recovery.

Fortunately, we also present careful implementation techniques that allow us to describe an implementation of BLISS with complete timing attack protection, achieving the same level of efficiency as the original unprotected code, without resorting to floating-point arithmetic or platform-specific optimizations like AVX intrinsics. These techniques, including a new approach to the polynomial approximation of transcendental functions, can also be applied to the masking of the BLISS signature scheme, and will hopefully make more efficient and secure implementations of lattice-based cryptography possible going forward.

Jiang Zhang, Yu Yu, Shuqin Fan, Zhenfeng Zhang, Kang Yang
ePrint Report
Lattice-based cryptosystems are less efficient than their number-theoretic counterparts (based on RSA, discrete logarithm, etc.) in terms of key and ciphertext (signature) sizes. For adequate security, the former typically needs thousands of bytes, while the latter requires at most hundreds of bytes. This significant difference has become one of the main concerns in replacing currently deployed public-key cryptosystems with lattice-based ones. Observing the inherent asymmetries in existing lattice-based cryptosystems, we propose asymmetric variants of the (module-)LWE and (module-)SIS assumptions, which yield KEM and signature schemes with smaller sizes than those built from the standard assumptions.
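
For reference, a (module-)LWE sample over $R_q = \mathbb{Z}_q[x]/(x^n+1)$ has the form
\[
(\mathbf{A},\ \mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e}) \in R_q^{k \times \ell} \times R_q^{k}, \qquad \mathbf{s} \leftarrow \chi_s^{\ell},\ \mathbf{e} \leftarrow \chi_e^{k},
\]
where the standard assumption takes the secret and error distributions $\chi_s, \chi_e$ with matched parameters. One natural reading of the asymmetric variants, consistent with this abstract (the paper has the precise definitions), is to allow the distributions, and hence the sizes, of the different secret/error components to be tuned independently, which is what enables the shorter keys and ciphertexts below.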

Following the framework of Lindner and Peikert (CT-RSA 2011) and the Crystals-Kyber proposal (EuroS&P 2018), we propose an IND-CCA secure KEM scheme from the hardness of the asymmetric module-LWE (AMLWE) problem, whose asymmetry is fully exploited to obtain shorter public keys and ciphertexts. Targeting 128-bit security, the public key (resp., ciphertext) of our KEM has only 896 bytes (resp., 992 bytes), an improvement of 192 bytes (resp., 160 bytes) over Kyber.

Our signature scheme bears most resemblance to, and improves upon, the Crystals-Dilithium scheme (ToCHES 2018). By making full use of the underlying asymmetric module-LWE and module-SIS assumptions and carefully selecting the parameters, we obtain a better compromise between computational costs, storage overheads and security, and thereby construct an SUF-CMA secure signature scheme with shorter public keys and signatures. For 128-bit security, the public key (resp., signature) of our signature scheme has only 1312 bytes (resp., 2445 bytes), an improvement of 160 bytes (resp., 256 bytes) over Dilithium.

We adapt the best known attacks and their variants to our AMLWE and AMSIS problems and conduct a comprehensive and thorough analysis of several parameter choices (aiming at different security strengths) and their impacts on the sizes, security and error probability of lattice-based cryptosystems. Our analysis demonstrates that AMLWE and AMSIS problems admit more flexible and size-efficient choices of parameters than the respective standard versions. Furthermore, implementations of our proposed schemes appear to be (slightly) more computationally efficient than their counterparts.

Orr Dunkelman, Nathan Keller, Noam Lasry, Adi Shamir
ePrint Report
The slide attack is a powerful cryptanalytic tool which has the unusual property that it can break iterated block ciphers with a complexity that does not depend on their number of rounds. However, it requires complete self-similarity in the sense that all the rounds must be identical. While this can be the case in Feistel structures, it rarely happens in SP networks, since the last round must end with an additional post-whitening subkey. In addition, in many SP networks the final round has further asymmetries; for example, in AES the last round omits the MixColumns operation. Such asymmetry in the last round can make it difficult to utilize most of the advanced tools which were developed for slide attacks, such as deriving additional slid pairs from one slid pair by repeatedly re-encrypting their ciphertexts.

In this paper we overcome this "last round problem" by developing four new types of slide attacks. We demonstrate their power by applying them to many types of AES-like structures (with and without linear mixing in the last round, with known or secret S-boxes, with subkey periods of 1, 2 and 3, etc.). In most of these cases, the time complexity of our attack is close to $2^{n/2}$, which is the smallest possible complexity for slide attacks. Our new slide attacks have several unique properties: the first attack uses slid sets in which each plaintext from the first set forms a slid pair with some plaintext from the second set, but without knowing the exact correspondence. The second attack makes it possible to create from several slid pairs an exponential number of new slid pairs which form a hypercube spanned by the given pairs. The third attack has the unusual property that it is always successful, and the fourth attack can use known messages instead of chosen messages, with only slightly higher time complexity.
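
For background, all of these attacks refine the classical slide attack, which looks for slid pairs: known pairs $(P, C)$ and $(P', C')$ with $P' = F(P)$, and hence $C' = F(C)$, where $F$ is the repeated round function. A toy sketch on an 8-bit cipher with fully identical rounds (illustrative only; real attacks use a $2^{n/2}$ birthday search rather than exhausting the key):

    import random

    random.seed(1)
    S = list(range(256))
    random.shuffle(S)                     # fixed public 8-bit S-box

    def F(x, k):
        """One round: key addition followed by the S-box."""
        return S[x ^ k]

    def encrypt(p, k, rounds=16):
        for _ in range(rounds):
            p = F(p, k)
        return p

    k_secret = 0x5A
    book = {p: encrypt(p, k_secret) for p in range(256)}  # known plaintexts

    # Slide property: if P' = F(P, k) then C' = F(C, k), independently of
    # the number of rounds. A key guess consistent on all pairs is correct.
    for k in range(256):
        if all(book[F(p, k)] == F(c, k) for p, c in book.items()):
            print("recovered subkey:", hex(k))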

Tsz Hon Yuen, Shi-feng Sun, Joseph K. Liu, Man Ho Au, Muhammed F. Esgin, Qingzhao Zhang, Dawu Gu
ePrint Report
In this paper, we propose the most competent blockchain ring confidential transaction protocol (RingCT3.0) for protecting the privacy of the sender's identity, the recipient's identity and the confidentiality of the transaction amount. For a typical 2-input transaction with a ring size of 1024, the ring signature size of our RingCT3.0 protocol is 97% less than that of the original RingCT1.0 protocol used in Monero. Taking advantage of our compact RingCT3.0 transcript size, privacy-preserving cryptocurrencies can enjoy a much lower transaction fee, which will have a significant impact on the crypto-economy. Our implementation results show that our protocol outperforms existing solutions in terms of efficiency and security.

In addition to the significant improvement in efficiency, our scheme is proven secure in a stronger security model. We remove the trusted setup assumption used in RingCT2.0. Our scheme is anonymous against ring insiders (non-signing users who are included in the ring), and we show that RingCT1.0 is not secure in this strong model.

Our RingCT3.0 protocol relies on a newly designed ring signature scheme as an underlying primitive, which we believe to be the most efficient ring signature scheme to date (in terms of signature size) without trusted setup. Our ring signature scheme is derived from our novel design of an efficient set membership proof for $n$ public keys, with proof size $O(\log n)$. It is the first set membership proof without trusted setup for public keys in the base group, instead of in the exponent. These two primitives are of independent interest.

Jiaxin Guan, Mark Zhandry
ePrint Report
The bounded storage model promises unconditional security proofs against computationally unbounded adversaries, so long as the adversary’s space is bounded. In this work, we develop simple new constructions of two-party key agreement, bit commitment, and oblivious transfer in this model. In addition to simplicity, our constructions have several advantages over prior work, including an improved number of rounds and enhanced correctness. Our schemes are based on Raz’s lower bound for learning parities.

Erik-Oliver Blass, Guevara Noubir
ePrint Report
Logging is a key mechanism in the security of computer systems. Beyond supporting important forward security properties, it is critical that logging withstands both failures and intentional tampering, to prevent subtle attacks that leave the system in an inconsistent state with inconclusive evidence. We propose new techniques combining forward integrity with crash recovery for secure data storage. Our main contribution is a new coding scheme resolving unique design constraints, namely forward integrity and, most importantly, a single pass and a constant number of operations per encoding. Our idea is to add a new log item by XORing it into $k$ forward-securely selected cells of a table. If up to a certain threshold of cells is modified by the adversary, or lost due to a crash, we still guarantee the recovery of all stored log items. We instantiate our scheme in an abstract data structure which allows one to either detect adversarial modifications to log items or treat modifications like data loss in a system crash. The data structure can recover lost log items, thereby effectively reverting adversarial modifications. The key advantage of this setup is its efficiency: we use spectral graph theory techniques to prove that $k$ is constant in the number $n$ of all log items ever stored, and small in practice, e.g., $k=5$. Moreover, we prove that to cope with up to $\sqrt{n}$ lost log items, storage expansion is asymptotically constant in $n$ and small in practice. For $k=5$, the total size of the table is only $12\%$ more than the simple concatenation of all $n$ items.
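
A minimal sketch of the coding step (illustrative parameters; HMAC-SHA256 stands in for the paper's forward-secure cell selection, and recovery, which peels the resulting sparse linear system over GF(2), is omitted):

    import hashlib
    import hmac

    K, TABLE_SIZE = 5, 64        # k cells per item; sizes illustrative only

    def cells(key, i):
        """Select k distinct table cells for log item i (PRF stand-in)."""
        out, ctr = [], 0
        while len(out) < K:
            d = hmac.new(key, f"{i}:{ctr}".encode(), hashlib.sha256).digest()
            c = int.from_bytes(d[:4], "big") % TABLE_SIZE
            if c not in out:
                out.append(c)
            ctr += 1
        return out

    def append_item(table, key, i, item):
        """Single pass, exactly k XORs per log item."""
        for c in cells(key, i):
            table[c] ^= item

    table = [0] * TABLE_SIZE
    key = b"epoch-key"           # would be evolved forward-securely over time
    for i, item in enumerate([0x1111, 0x2222, 0x3333]):
        append_item(table, key, i, item)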

Felix Wegener, Thorben Moos, Amir Moradi
ePrint Report
In recent years, deep learning has become an attractive ingredient of side-channel analysis (SCA) due to its potential to improve the success probability or enhance the performance of certain frequently executed tasks. One task that is commonly assisted by machine learning techniques is the profiling of a device's leakage behavior in order to carry out a template attack. Very recently, at CHES 2019, deep learning was also applied to non-profiled scenarios, extending its reach within SCA beyond template attacks for the first time. The proposed method, called DDLA, has some tempting advantages over traditional SCA due to merits inherited from (convolutional) neural networks. Most notably, it greatly reduces the need for pre-processing steps when the SCA traces are misaligned or when the leakage is of a multivariate nature. However, similar to traditional attack scenarios, the success of this approach highly depends on the correct choice of a leakage model and the intermediate value to target.

In this work we explore whether deep learning can similarly be used as an instrument to advance another crucial (non-profiled) discipline of SCA which is inherently independent of leakage models and targeted intermediates, namely leakage assessment. In fact, given the simple classification-based nature of common leakage assessment techniques, in particular distinguishing two groups of measurements (fixed-vs-random or fixed-vs-fixed), it comes as a surprise that machine learning has not yet been brought into this context. Our contribution is the development of a full leakage assessment methodology based on deep learning which frees the evaluator from worrying about the location, alignment and statistical order of the leakages, and which easily covers multivariate and horizontal patterns as well. We test our approach against a number of case studies based on FPGA measurements of the PRESENT block cipher, equipped with state-of-the-art hardware-based countermeasures. Our results clearly show that the proposed methodology and network structure (which remains unchanged between the experiments) outperform the classical detection approaches ($t$-test and $\chi^2$-test) in all considered scenarios.
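
A minimal sketch of the fixed-vs-random classification idea on synthetic data (not the authors' network or setup): if a classifier generalizes to held-out traces with accuracy significantly above 1/2, the two groups are distinguishable and the device leaks:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n, length = 2000, 100                  # traces and samples per trace

    # Synthetic traces: noise everywhere, plus a small data-dependent offset
    # at one sample point for the 'fixed' class (stand-in for measurements).
    X = rng.normal(size=(n, length))
    y = rng.integers(0, 2, size=n)         # 1 = fixed input, 0 = random input
    X[y == 1, 37] += 1.0

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(Xtr, ytr)
    print(f"held-out accuracy: {clf.score(Xte, yte):.3f}")  # >> 0.5 => leakage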

Bernardo Magri, Christian Matt, Jesper Buus Nielsen, Daniel Tschudi
ePrint Report
Most existing blockchains either rely on Nakamoto-style consensus, where the chain can fork and produce rollbacks, or on committee-based Byzantine fault tolerant (CBFT) consensus, where no rollbacks are possible. While the latter offers better consistency, the former can be more efficient, tolerate more corruptions, and offer better availability during bad network conditions. To achieve the best of both worlds, we initiate the formal study of finality layers. Such a finality layer can be combined with a Nakamoto-style blockchain and periodically declare blocks as final, preventing rollbacks beyond final blocks.

As conceptual contributions, we identify the following properties to be crucial for a finality layer: finalized blocks form a chain (chain-forming), all parties agree on the finalized blocks (agreement), the last finalized block does not fall too far behind the last block in the underlying blockchain (updated), and all finalized blocks at some point have been on the chain adopted by at least $k$ honest parties ($k$-support). We also put forward an argument why finality layers should be asynchronous or semi-synchronous.

As technical contributions, we propose two variants of a finality layer protocol. We prove both of them secure in the setting with $t < n/3$ Byzantine parties and a semi-synchronous network. The first variant satisfies all of the aforementioned requirements (with $k = 1$) when combined with an arbitrary blockchain that satisfies the usual common-prefix, chain-growth, and chain-quality properties. The other one needs an additional, mild assumption on the underlying blockchain, but is more efficient and satisfies $k = n/3$-support. We finally show that $t < n/3$ is optimal for semi-synchronous finality layers.

Guilhem Castagnos, Dario Catalano, Fabien Laguillaumie, Federico Savasta, Ida Tucker
ePrint Report
ECDSA is a widely adopted digital signature standard. Unfortunately, efficient distributed variants of this primitive are notoriously hard to achieve, and known solutions often require expensive zero-knowledge proofs to deal with malicious adversaries. For the two-party case, Lindell [Lin17] recently managed to get an efficient solution which, to achieve simulation-based security, relies on an interactive, non-standard assumption on Paillier's cryptosystem.

In this paper we generalize Lindell's solution using hash proof systems. The main advantage of our generic method is that it results in a simulation-based security proof without resorting to non-standard interactive assumptions.

Moving to concrete constructions, we show how to instantiate our framework using class groups of imaginary quadratic fields. Our implementations show that the practical impact of dropping such interactive assumptions is minimal. Indeed, while for 128-bit security our scheme is marginally slower than Lindell's, for 256-bit security it turns out to be better both in key generation and signing time. Moreover, in terms of communication cost, our implementation significantly reduces both the number of rounds and the transmitted bits without exception.

Shi Bai, Shaun Miller, Weiqiang Wen
ePrint Report
The learning with errors (LWE) problem (STOC'05) introduced by Regev is one of the fundamental problems in lattice-based cryptography. One standard strategy to solve the LWE problem is to reduce it to a unique SVP (uSVP) problem via Kannan's embedding and then apply lattice reduction to solve the uSVP problem. There are two methods for estimating the cost of solving LWE via this strategy: the first considers the size of the gap in the uSVP problem (Gama-Nguyen, Eurocrypt'08), and the second (Alkim et al., USENIX'16) considers the shortness of the projection of the shortest vector onto the Gram-Schmidt vectors. These two estimates were investigated by Albrecht et al. (Asiacrypt'16), who present a sound analysis and show that lattice reduction experiments fit more consistently with the second estimate. They also observe that in some cases lattice reduction behaves even better than the second estimate, perhaps due to a second intersection of the projected vector with the Gram-Schmidt vectors.

In this work, we revisit the work of Alkim et al. and Albrecht et al. We first report further experiments providing more comparisons and suggest that the second estimate leads to a more accurate prediction in practice. We also present empirical evidence confirming the assumptions used in the second estimate. Furthermore, we examine the gaps in uSVP derived from the embedded lattice and explain why it is preferable to use embedding height equal to 1 for the embedded lattice; this shows there is a coherent relation between the second estimate and the gaps in uSVP. Finally, it has been conjectured by Albrecht et al. that the second intersection will not happen for large parameters. We show that this is indeed the case: there is no second intersection as the block size goes to infinity.
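
Concretely, Kannan's embedding maps an LWE instance $(\mathbf{A}, \mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e} \bmod q)$ to the lattice generated by the columns of
\[
\mathbf{B}' = \begin{pmatrix} \mathbf{B} & \mathbf{b} \\ \mathbf{0} & t \end{pmatrix},
\]
where $\mathbf{B}$ is a basis of the $q$-ary lattice $\{\mathbf{x} \in \mathbb{Z}^m : \mathbf{x} \equiv \mathbf{A}\mathbf{z} \bmod q \text{ for some } \mathbf{z}\}$ and $t$ is the embedding height. The vector $(\mathbf{e} \mid t)$ is then an unusually short vector of this lattice, turning LWE into a uSVP instance; the analysis in this work supports taking $t = 1$.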

María Naya-Plasencia, André Schrottenloher
ePrint Report
The k-xor problem, or Generalized Birthday Problem, asks, given k lists of bit-strings, for a k-tuple among them XORing to 0. If the lists are unbounded, the best classical (exponential) time complexity has stood since Wagner's CRYPTO 2002 paper, while many subsequent works have improved its memory complexity, given better time-memory tradeoffs, and obtained logarithmic improvements.
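
For reference, Wagner's classical merge for $k = 4$ (the pattern that the paper's "merging trees" generalize): pairs of lists are merged on their low bits, and the partial xors are then matched on the remaining bits. A toy sketch with illustrative sizes:

    import os
    from collections import defaultdict

    LOW = 16                               # 32-bit words; toy parameters
    SIZE = 1 << LOW

    def rand_list():
        return [int.from_bytes(os.urandom(4), "big") for _ in range(SIZE)]

    def merge(A, B, mask):
        """All pairs (a, b) with (a ^ b) & mask == 0 (Wagner's join)."""
        idx = defaultdict(list)
        for b in B:
            idx[b & mask].append(b)
        return [(a, b) for a in A for b in idx[a & mask]]

    L1, L2, L3, L4 = (rand_list() for _ in range(4))
    mask = (1 << LOW) - 1
    AB = merge(L1, L2, mask)               # xors vanish on the low bits
    CD = merge(L3, L4, mask)

    # Match the partial xors on the remaining high bits.
    idx = defaultdict(list)
    for c, d in CD:
        idx[c ^ d].append((c, d))
    for a, b in AB:
        if idx[a ^ b]:
            c, d = idx[a ^ b][0]
            assert a ^ b ^ c ^ d == 0
            print("4-xor:", hex(a), hex(b), hex(c), hex(d))
            break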

In this paper, we study quantum algorithms for several variants of the k-xor problem. When quantum oracle access is allowed, we improve over previous results of Grassi et al. for almost all values of k. We define a set of "merging trees" which represent strategies for quantum and classical merging in k-xor algorithms, and prove that our method is optimal among these. We provide, for the first time, quantum speedups when the lists can be queried only classically.

We also extend our study to lists of limited size, up to the case where a single solution exists. We give quantum dissection algorithms that outperform the best known for many k, and apply to the multiple-encryption problem. Our complexities are confirmed by a Mixed Integer Linear Program that computes the best strategy for a given k-xor problem. All our algorithms apply when considering modular additions instead of bitwise XORs.

Jean-Claude Bajard, Julien Eynard, Paulo Martins, Leonel Sousa, Vincent Zucca
ePrint Report
State-of-the-art implementations of homomorphic encryption exploit the Fan and Vercauteren (FV) scheme and the Residue Number System (RNS). While the RNS breaks down large integer arithmetic into smaller independent channels, its non-positional nature makes operations such as division and rounding hard to implement, and makes the representation of small values inefficient. In this work, we propose the application of the Hybrid Position-Residues Number System representation to the FV scheme. This is a positional representation of large radix where the digits are represented in RNS. It inherits the benefits of RNS and makes it possible to accelerate the critical division and rounding operations, while also making the representation of smaller values more compact. This directly benefits the decryption and the homomorphic multiplication procedures, reducing their asymptotic complexity, in dimension $n$, from $\mathcal{O} (n^2 \log n)$ to $\mathcal{O} (n \log n)$ and from $\mathcal{O}(n^3 \log n)$ to $\mathcal{O} (n^{3})$, respectively. This has also resulted in noticeable speedups when experimentally compared to related RNS implementations.
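
A minimal sketch of the representation itself (parameters illustrative): the integer is split into positional digits of radix $2^r$, and each digit is stored by its residues modulo small coprime moduli, so digit-level arithmetic runs in independent RNS channels while division and rounding by the radix stay cheap at the positional level:

    MODULI = (251, 241, 239)   # pairwise-coprime RNS channels (illustrative)
    RADIX_BITS = 20            # positional radix 2^20 < 251*241*239

    def crt(residues, moduli):
        """Chinese Remainder Theorem reconstruction of one digit."""
        M = 1
        for m in moduli:
            M *= m
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)
        return x % M

    def to_hpr(x):
        """Positional digits of radix 2^RADIX_BITS, each digit in RNS."""
        digits = []
        while x:
            d = x & ((1 << RADIX_BITS) - 1)
            digits.append(tuple(d % m for m in MODULI))
            x >>= RADIX_BITS
        return digits

    def from_hpr(digits):
        return sum(crt(res, MODULI) << (RADIX_BITS * i)
                   for i, res in enumerate(digits))

    n = 123456789012345678901234567890
    assert from_hpr(to_hpr(n)) == n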

Michael Naehrig, Joost Renes
ePrint Report
The isogeny-based protocols SIDH and SIKE have received much attention for being post-quantum key agreement candidates that retain relatively small keys. A recent line of work has proposed and further improved compression of public keys, leading to the inclusion of public-key compression in the SIKE proposal for Round 2 of the NIST Post-Quantum Cryptography Standardization effort. We show how to employ the dual isogeny to significantly increase performance of compression techniques, reducing their overhead from 160--182% to 77--86% for Alice's key generation and from 98--104% to 59--61% for Bob's across different SIDH parameter sets. For SIKE, we reduce the overhead of (1) key generation from 140--153% to 61--74%, (2) key encapsulation from 67--90% to 38--57%, and (3) decapsulation from 59--65% to 34--39%. This is mostly achieved by speeding up the pairing computations, which has until now been the main bottleneck, but we also improve (deterministic) basis generation.

Ward Beullens, Thorsten Kleinjung, Frederik Vercauteren
ePrint Report
In this paper we report on a new record class group computation of an imaginary quadratic field having a 154-digit discriminant, surpassing the previous record of 130 digits. This class group is central to the CSIDH-512 isogeny-based cryptosystem, and knowing the class group structure and relation lattice implies efficient uniform sampling and a canonical representation of its elements. Both operations were impossible before, and they allow us to instantiate an isogeny-based signature scheme first sketched by Stolbunov, which we further optimize using multiple public keys and Merkle trees. We also show that including quadratic twists allows us to cut the public key size in half for free. Optimizing for signature size, our implementation takes 390ms to sign/verify and results in signatures of $263$ bytes, at the expense of a large public key. This is 300 times faster and over 3 times smaller than an optimized version of SeaSign for the same parameter set. Optimizing for public key and signature size combined results in a total size of 1468 bytes, which is smaller than any other post-quantum signature scheme at the 128-bit security level.

Jiafan Wang, Sherman S. M. Chow
ePrint Report
Dynamic searchable symmetric encryption (DSSE) allows a client to search or update an outsourced encrypted database. Range queries are commonly needed (AsiaCrypt'18), but the order-preserving encryption approach is vulnerable to reconstruction attacks (S&P'17). Previous range-searchable schemes (SIGMOD'16, ESORICS'18) require an ad-hoc instance of the encrypted database to store the updates and/or suffer from other shortcomings, some brought by the usage of asymmetric primitives.

In this paper, with our encrypted index which enables queries for a sequence of contiguous keywords, we propose a generic upgrade of any DSSE to support range queries (a.k.a. range DSSE), and a concrete construction which provides a new trade-off: reducing the client storage to "reclaim" the benefits of outsourcing.
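
The abstract does not detail the encrypted index; purely as background on how range queries commonly reduce to keyword queries (not necessarily the authors' construction), one can index every value under the dyadic intervals containing it, so that any range is covered by a logarithmic number of keywords:

    def dyadic_keywords(value, bits=16):
        """All dyadic intervals containing `value`, as (level, prefix) labels;
        a record indexed under these O(bits) keywords matches any range
        query whose cover contains one of them."""
        return [(lvl, value >> lvl) for lvl in range(bits + 1)]

    def cover(lo, hi, bits=16):
        """Minimal set of dyadic intervals covering [lo, hi]."""
        out = []
        while lo <= hi:
            lvl = 0
            while (lvl < bits and lo % (1 << (lvl + 1)) == 0
                   and lo + (1 << (lvl + 1)) - 1 <= hi):
                lvl += 1
            out.append((lvl, lo >> lvl))
            lo += 1 << lvl
        return out

    # A range query becomes one keyword query per covering interval.
    print(cover(5, 12))        # [(0, 5), (1, 3), (2, 2), (0, 12)]
    assert any(kw in dyadic_keywords(7) for kw in cover(5, 12))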

Our schemes achieve forward security, an important property which mitigates file injection attacks. We identify a variant of the file injection attack against a recent solution (ESORICS'18). We also extend the definition of backward security to range DSSE and show that our schemes are compatible with a generic transformation for achieving backward security (CCS'17).

We comprehensively analyze the computation and communication overheads, including some parts that were ignored in previous schemes, e.g., index-related operations on the client side. Our experiments demonstrate the high efficiency of our schemes.