International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed
via Twitter
via Weibo
via Facebook
There is currently a problem with the jobs channel, and new job listings are not appearing here. Please see the jobs page.

21 October 2019

Jung Hee Cheon, Dongwoo Kim, Duhyeong Kim
ePrint Report
Comparison of two numbers is one of the most frequently used operations, but it has been challenging to compute the comparison function efficiently in homomorphic encryption (HE), which natively supports only addition and multiplication. Recently, Cheon et al. (Asiacrypt 2019) introduced a new approximate representation of the comparison function as a rational function, and showed that this rational function can be evaluated by an iterative algorithm. Thanks to this iterative structure, their method achieves logarithmic computational complexity compared to previous polynomial approximation methods; however, the complexity is still not optimal, and the algorithm is quite slow for large-bit inputs in HE implementations.

In this work, we propose new comparison methods with optimal asymptotic complexity based on composite polynomial approximation. The main idea is to systematically design a constant-degree polynomial $f$, by identifying the core properties that make the composite polynomial $f\circ f \circ \cdots \circ f$ converge to the sign function (equivalent to the comparison function) as the number of compositions increases. Using the devised polynomial $f$, our new comparison algorithms only require $\Theta(\log(1/\epsilon)) + \Theta(\log\alpha)$ computational complexity to obtain an approximate comparison result of $a,b\in[0,1]$ satisfying $|a-b|\ge \epsilon$ within $2^{-\alpha}$ error. This asymptotic optimality yields substantial performance enhancement: our comparison algorithm on encrypted $20$-bit integers for $\alpha = 20$ takes $1.43$ milliseconds in amortized running time, which is $30$ times faster than the previous work.
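The composition idea can be illustrated outside HE with a small plaintext sketch; the specific polynomial $f(x) = (3x - x^3)/2$ below is an illustrative choice with the required properties, not necessarily the polynomial devised in the paper.

```python
# Plaintext sketch of composite polynomial comparison (no encryption):
# repeatedly composing the odd polynomial f(x) = (3x - x^3)/2 pushes any
# x in [-1, 1] toward sign(x); comparison follows from the sign of a - b.

def f(x):
    return 1.5 * x - 0.5 * x ** 3

def approx_sign(x, depth):
    for _ in range(depth):
        x = f(x)
    return x

def approx_compare(a, b, depth=12):
    # comp(a, b) ~ (sign(a - b) + 1) / 2 for a, b in [0, 1]
    return (approx_sign(a - b, depth) + 1) / 2
```

Near convergence, each extra composition roughly squares the distance to $\pm 1$, which is the source of the logarithmic dependence on the target precision.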
Koji Nuida, Satsuya Ohata, Shigeo Mitsunari, Nuttapong Attrapadung
ePrint Report
Homomorphic encryption (HE) is one of the main tools in secure multiparty computation (MPC), and the (elliptic-curve) lifted-ElGamal cryptosystem is certainly the most efficient among the existing HE schemes. However, the combination of MPC with this most efficient HE has rarely appeared in the literature. This is mainly because the major known techniques for (additively) HE-based MPC are not available for this scheme, owing to its characteristic restriction that only plaintexts in a small range can be decrypted efficiently.

In this paper, we resolve this problem. With our technique, a Server holding a lifted-ElGamal ciphertext $[[m]]$ with unknown small plaintext $m$ can obtain a ciphertext $[[ \varphi(m) ]]$ for an arbitrary function $\varphi$ with just one round of communication with a semi-honest Client (and two rounds with a malicious Client) holding a decryption key, where $m$ is kept secret from both parties. This property greatly enlarges the variety of MPC protocols based on the most efficient lifted-ElGamal cryptosystem. As an application, we implemented MPC for the exact edit distance between two encrypted strings; our experiment on strings of length $1024$ shows that the protocol takes only $45$ seconds in LAN environments and about $3$ minutes even in WAN environments. Moreover, our technique also applies to other "lifted-ElGamal type" HE schemes and admits different keys/schemes for the original and the resulting ciphertexts. For example, we can securely convert a level-2 (i.e., after multiplication) ciphertext of some two-level HE schemes into a level-1 (i.e., before multiplication) ciphertext, and securely apply arbitrary functions $\varphi(m)$ to encrypted plaintexts of some attribute-based HE schemes. This is the first result (even using communication) realizing these two functionalities.
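The small-range decryption restriction mentioned above is easy to see in code. Below is a minimal, insecure sketch of lifted ElGamal over a multiplicative group modulo a prime (the actual scheme is instantiated on elliptic curves); all parameter choices here are illustrative only.

```python
import random

# Lifted ElGamal: Enc(m) = (g^r, g^m * h^r). Ciphertexts are additively
# homomorphic in m, but decryption must solve a discrete log for g^m,
# so only plaintexts in a small range are practical to decrypt.

p = 2147483647       # Mersenne prime 2^31 - 1 (toy modulus, insecure)
g = 7                # primitive root mod p
sk = random.randrange(1, p - 1)
h = pow(g, sk, p)    # public key

def enc(m):
    r = random.randrange(1, p - 1)
    return (pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p)

def add(c1, c2):     # homomorphic addition of the underlying plaintexts
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def dec(c, bound=1000):
    s = pow(c[0], sk, p)
    gm = c[1] * pow(s, p - 2, p) % p   # g^m = c2 / c1^sk
    for m in range(bound):             # brute-force DLog: small range only
        if pow(g, m, p) == gm:
            return m
    raise ValueError("plaintext out of range")
```

Decryption has to recover $m$ from $g^m$, so it only terminates quickly when $m$ lies in a small known range; this is exactly the restriction the paper's conversion technique works around.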
Meenakshi Kansal, Ratna Dutta, Sourav Mukhopadhyay
ePrint Report
Nominative signature is a cryptographic primitive in which two parties collude to produce a signature. It is a user certification system with applications in a variety of sectors where the nominee cannot rely heavily on the nominator to validate the nominee's certificate and only targeted entities are allowed to verify signatures on sensitive data. We provide a new construction for nominative signatures from standard lattice assumptions. Our construction relies on a collision-resistant preimage sampleable function and symmetric-key primitives such as a collision-resistant pseudorandom function, together with the zero-knowledge proof system ZKB++ for Boolean circuits. We provide a detailed security analysis and show that our construction achieves unforgeability, invisibility, security against impersonation, and non-repudiation in the existing model. Furthermore, our construction exhibits non-transferability. Security under non-repudiation is achieved in the quantum random oracle model using the Unruh transform of ZKB++.
Zhao Chunhuan, Zheng Zhongxiang, Wang Xiaoyun, Xu Guangwu
ePrint Report
As fundamental tools in lattice-based cryptosystems, discrete Gaussian samplers play an important role in both the efficiency and the security of lattice-based schemes. The approximate discrete rounded Gaussian sampler, the central binomial sampler, and the bounded uniform sampler are three types of error samplers commonly used in the design of various schemes. However, known cryptanalytic results about error samplers concentrate on their standard deviations, and no analysis of the distinct structures of the distributions has been proposed. In this paper, we address this problem by considering the dual attack on LWE instances and investigating the Fourier transforms of these distributions. We introduce the concept of local width, which enables a more detailed look at these distributions and the resulting distinguishing advantages. We analyze the dual attack for the different distributions and provide a novel measure model to describe the differences. Within this refined framework, we also propose a novel type of error sampler that achieves high efficiency and security as well as flexibility.
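For concreteness, the three error-sampler families named above can be sketched as follows (statistical behaviour only; production samplers additionally need constant-time, high-precision implementations):

```python
import random

def rounded_gaussian(sigma):
    # approximate discrete rounded Gaussian: round a continuous sample
    return round(random.gauss(0.0, sigma))

def centered_binomial(k):
    # central binomial: sum of k coin differences, variance k/2
    return sum(random.getrandbits(1) - random.getrandbits(1) for _ in range(k))

def bounded_uniform(b):
    # bounded uniform on [-b, b]
    return random.randint(-b, b)
```

All three can be tuned to the same standard deviation, which is why an analysis that only compares standard deviations, as criticized above, cannot separate them; their overall shapes, and hence their Fourier transforms, differ.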
Eli Ben-Sasson, Alessandro Chiesa, Lior Goldberg, Tom Gur, Michael Riabzev, Nicholas Spooner
ePrint Report
We study the problem of delegating computations via interactive proofs that can be probabilistically checked. Known as *interactive oracle proofs* (IOPs), these proofs extend probabilistically checkable proofs (PCPs) to multi-round protocols, and have received much attention due to their application to constructing cryptographic proofs (such as succinct non-interactive arguments).

We prove that a rich class of NEXP-complete problems, which includes machine computations over large fields and succinctly-described arithmetic circuits, has constant-query IOPs with O(T)-size proofs and polylog(T)-time verification for T-size computations. This is the first construction that simultaneously achieves linear-size proofs and fast verification, regardless of query complexity.

An important metric when using IOPs to delegate computations is the cost of producing the proof. The highly-optimized proof length in our construction enables a very efficient prover, with arithmetic complexity O(T log T). Hence this construction is also the first to simultaneously achieve prover complexity O(T log T) and verifier complexity polylog(T).
Benedikt Bünz, Ben Fisch, Alan Szepieniec
ePrint Report
We construct a new polynomial commitment scheme for univariate and multivariate polynomials over finite fields, with public-coin evaluation proofs whose communication and verification cost is logarithmic in the number of coefficients of the polynomial. The underlying technique is a Diophantine Argument of Knowledge (DARK), leveraging integer representations of polynomials and groups of unknown order. Security is based on the strong RSA and adaptive root assumptions. Moreover, the scheme does not require a trusted setup if instantiated with class groups. We apply this new cryptographic compiler to a restricted class of algebraic linear IOPs in order to obtain doubly-efficient public-coin IPs with succinct communication and witness-extended emulation for any NP relation. Allowing for linear preprocessing, the online verifier's work is logarithmic in the circuit complexity of the relation.

In particular, we obtain quasi-linear prover time when compiling the IOP employed in Sonic (MBKM, CCS 19). Applying the Fiat-Shamir transform in the random oracle model results in a SNARK system with quasi-linear preprocessing, quasi-linear (online) prover time, logarithmic proof size, and logarithmic (online) verification time for arbitrary circuits. The SNARK is also concretely efficient with $7.8$KB proofs ($70\times$ reduction over state of the art) and $75$ms verification time for circuits with 1 million gates. Most importantly, this SNARK is transparent: it does not require a trusted setup. We also obtain zk-SNARKs by applying a variant of our polynomial commitment scheme that is hiding and offers zero-knowledge evaluation proofs. This construction is the first transparent zk-SNARK that has both a practical prover time as well as asymptotically logarithmic proof size and verification time. We call this system *Supersonic*.
Lorenz Panny
ePrint Report
We (once again) refute recurring claims about a public-key encryption scheme that allegedly provides unconditional security. This is approached from two angles: We give an information-theoretic proof of impossibility, as well as a concrete attack breaking the proposed scheme in essentially no time.
Ioannis Demertzis, Javad Ghareh Chamani, Dimitrios Papadopoulos, Charalampos Papamanthou
ePrint Report
We study the problem of dynamic searchable encryption (DSE) with forward-and-backward privacy. Many DSE schemes have been proposed recently but the most efficient ones have one limitation: they require maintaining an operation counter for each unique keyword, either stored locally at the client or accessed obliviously (e.g., with an oblivious map) at the server, during every operation. We propose three new schemes that overcome the above limitation and achieve constant permanent client storage with improved performance, both asymptotically and experimentally, compared to prior state-of-the-art works. In particular, our first two schemes adopt a “static-to-dynamic” transformation which eliminates the need for oblivious accesses during searches. Due to this, they are the first practical schemes with minimal client storage and non-interactive search. Our third scheme is the first quasi-optimal forward-and-backward DSE scheme with only a logarithmic overhead for retrieving the query result (independently of previous deletions). While it does require an oblivious access during search in order to keep permanent client storage minimal, its practical performance is up to four orders of magnitude better than the best existing scheme with quasi-optimal search.
Jingchun Yang, Meicheng Liu, Dongdai Lin
ePrint Report
The cube attack is one of the most powerful techniques in the cryptanalysis of symmetric cryptographic primitives. The basic idea of the cube attack is to determine the value of a polynomial in the key bits by summing over a cube (a subset of public variables, e.g., plaintext bits or IV bits). If the degree of the polynomial is relatively low, then we obtain a low-degree equation in the key bits, which may help reduce the complexity of key recovery. In this paper, we use cube cryptanalysis to analyze the authenticated stream cipher ACORN (one of the 6 algorithms in the final portfolio of the CAESAR competition), and give some new results on both distinguishing attacks and key recovery attacks. Firstly, we give a new method of finding cube testers, based on the greedy algorithm for finding cubes and on the numeric mapping method for estimating the algebraic degree of NFSR-based cryptosystems. We apply it to ACORN and obtain the best practical distinguishing attacks for its 690-round variant, using a cube of size 38, and for its 706-round variant, using a cube of size 46. Then we theoretically analyze the security bound of ACORN via the division-property-based cube attack. By exploiting the embedded property, we find some new distinguishers for ACORN, so that the zero-sum property of the output of its 775-round variant can be observed with a complexity of $2^{127}$. Finally, we propose a key recovery attack on ACORN reduced to 772 rounds. The time complexity to recover the linear superpoly of the 123-dimensional cube is $2^{127.46}$. As far as we know, this is the best key recovery attack on round-reduced ACORN. It is also worth noting that this work does not threaten the security of ACORN.
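The cube-summing principle described above can be seen on a toy polynomial (a hypothetical example, far simpler than ACORN's output function):

```python
from itertools import product

# f = v1*v2*k1 XOR v2*k2 XOR v3 over GF(2). Summing f over the cube
# {v1, v2} (with v3 fixed to 0) cancels every term not divisible by v1*v2
# and leaves the superpoly k1, a linear equation in the key bits.

def f(v1, v2, v3, k1, k2):
    return (v1 & v2 & k1) ^ (v2 & k2) ^ v3

def cube_sum(k1, k2):
    acc = 0
    for v1, v2 in product((0, 1), repeat=2):   # all assignments of the cube
        acc ^= f(v1, v2, 0, k1, k2)
    return acc
```

If the superpoly were constant for every key, the cube would instead act as a distinguisher (a cube tester), which is the other use of cubes made in the abstract above.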
Jianyu Niu, Chen Feng, Hoang Dau, Yu-Chih Huang, Jingge Zhu
ePrint Report
In the Bitcoin white paper, Nakamoto proposed a very simple Byzantine fault tolerant consensus algorithm, also known as Nakamoto consensus. Despite its simplicity, some existing analyses of Nakamoto consensus appear long and involved. In this technical report, we aim to make such analysis simple and transparent so that we can teach it to senior undergraduate and graduate students at our institutions. This report is largely based on a 3-hour tutorial given by one of the authors in June 2019.
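A typical starting point for such an analysis is the attacker catch-up probability computed in the Bitcoin white paper (Section 11); a direct transcription:

```python
import math

# Probability that an attacker controlling fraction q of the hash power ever
# overtakes an honest chain that is z blocks ahead: with p = 1 - q and
# lam = z*q/p,
#   P = 1 - sum_{k=0}^{z} (lam^k e^{-lam} / k!) * (1 - (q/p)^(z-k)).

def catch_up_probability(q, z):
    p = 1.0 - q
    lam = z * q / p
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        s -= poisson * (1 - (q / p) ** (z - k))
    return s
```

The probability decays exponentially in z whenever q < 1/2, which is the quantitative content of the common six-confirmation rule.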
Stephanie Wang, Rishabh Poddar, Jianan Lu, Raluca Ada Popa
ePrint Report
In recent years, there has been increased interest in strong security primitives, such as oblivious protocols, which hide which data records a query touches in a database and reveal only the volume of results. However, recent work has shown that volume is a significant leakage that can enable reconstructing the entire database. Yet such attacks make two limiting assumptions: they require the user to issue a large number of queries, and they assume certain distributions on the queries (e.g., uniformly random) that are not realistic in practice.

In this work, we present new attacks for recovering the content of individual user queries, assuming no leakage from the system except the number of results, and avoiding the limiting assumptions above. Unlike prior attacks, our attacks require only a {\em single} query to be issued by the user for recovering the keyword. Furthermore, our attacks make no assumptions about the distribution of issued queries or the underlying data. Our key insight is to exploit the real behavior of specific applications.

We start by surveying 11 applications to identify two key characteristics that can be exploited by attackers---(i) file injection, and (ii) automatic query replay. We present attacks that leverage these two properties in concert with volume leakage, independent of the details of any encrypted database system. Subsequently, we perform an end-to-end attack on the Gmail web client by simulating a server-side adversary. Our attack on Gmail completes within a matter of minutes, demonstrating the feasibility of our techniques. We also present three ancillary attacks for situations when certain mitigation strategies are employed.
Sanaz Taheri Boshrooyeh, Alptekin Küpçü, Öznur Özkasap
ePrint Report
In current designs of Online Social Networks (OSNs) with a central service provider, such as Facebook and Twitter, users' read and write requests over shared data (e.g., a Facebook wall or a group page) are handled by the central OSN provider. However, such centralization comes with view-consistency issues: a corrupted provider may serve users with different views of the shared data, e.g., by adding, dropping, or reordering posts. Integrita provides a data-sharing platform that enforces view consistency relying on N federated servers, of which N-1 can be malicious and colluding. Users are guaranteed that the servers cannot show divergent views of the shared data (e.g., posts of a group page) to the users (e.g., group members) without being detected. Unlike the state of the art, Integrita enables the detection of inconsistency without resorting to storage-inefficient data replication and without requiring users to exchange their views out of band. Every user, without relying on the presence of other users, can verify any server-side equivocation regarding her own operations. We introduce and achieve a new level of view consistency called q-detectable consistency, in which any inconsistency between users' views cannot remain undetected for more than q posts. Compared with its centralized counterparts, Integrita improves view consistency, and compared with its distributed counterparts, it reduces storage overhead by a factor of 1/N, where N is the number of servers. Nevertheless, concerning per-server storage overhead and cross-server communication, Integrita's overhead is the minimum among all its counterparts.
M. Sadegh Riazi, Beidi Chen, Anshumali Shrivastava, Dan Wallach, Farinaz Koushanfar
ePrint Report
In Near-Neighbor Search (NNS), a client queries a database (held by a server) for the most similar data (near-neighbors) under a certain similarity metric. The Privacy-Preserving variant (PP-NNS) requires that neither the server nor the client learn information about the other party's data beyond what can be inferred from the outcome of NNS. The overwhelming growth in the size of current datasets and the lack of a truly secure server in the online world render the existing solutions impractical, either due to their high computational requirements or due to unrealistic assumptions that potentially compromise privacy. PP-NNS with query time sub-linear in the size of the database was suggested as an open research direction by Li et al. (CCSW'15). In this paper, we provide the first such algorithm, called Privacy-Preserving Locality Sensitive Indexing (SLSI), which has sub-linear query time and the ability to handle honest-but-curious parties. At the heart of our proposal lies a secure binary embedding scheme generated from a novel probabilistic transformation over a locality sensitive hashing family. We provide information-theoretic bounds for the privacy guarantees and support our theoretical claims with substantial empirical evidence on real-world datasets.
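The locality-sensitive core of such a binary embedding can be sketched with the classical sign-random-projection family (this shows only the hashing idea, not the paper's secure transformation; dimensions are illustrative):

```python
import random

# Each output bit is the sign of a dot product with a random Gaussian
# hyperplane: vectors at a small angle agree on most bits, so the Hamming
# distance between codes tracks similarity.

def embed(vec, planes):
    return [1 if sum(p_i * v_i for p_i, v_i in zip(p, vec)) >= 0 else 0
            for p in planes]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

random.seed(1)
dim, bits = 32, 64
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]
```

Because each bit depends only on the sign of a projection, the embedding is invariant under positive scaling of the input, and candidate near-neighbors can be retrieved by indexing on code prefixes rather than scanning the whole database.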
David Clayton, Christopher Patton, Thomas Shrimpton
ePrint Report
Probabilistic data structures use space-efficient representations of data in order to (approximately) respond to queries about the data. Traditionally, these structures are accompanied by probabilistic bounds on query-response errors. These bounds implicitly assume benign attack models, in which the data and the queries are chosen non-adaptively and independently of the randomness used to construct the representation. Yet probabilistic data structures are increasingly used in settings where these assumptions may be violated. This work provides a provable-security treatment of probabilistic data structures in adversarial environments. We give a syntax that captures a wide variety of in-use structures, and our security notions support the derivation of error bounds in the presence of powerful attacks. We use our formalisms to analyze Bloom filters, counting (Bloom) filters, and count-min sketch data structures. For the traditional versions of these, our security findings are largely negative; however, we show that simple embellishments (e.g., using salts or secret keys) yield structures that provide provable security with little overhead.
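One embellishment of the kind mentioned, keying the hash functions, can be sketched as a Bloom filter whose bit positions are derived from an HMAC under a secret key, so an adversary without the key cannot craft inputs that collide in the filter. Parameters below are illustrative.

```python
import hashlib
import hmac

class KeyedBloom:
    def __init__(self, key, m=1024, k=4):
        self.key, self.m, self.k = key, m, k
        self.bits = [0] * m

    def _indices(self, item):
        # derive k bit positions from HMAC-SHA256 under the secret key
        for i in range(self.k):
            d = hmac.new(self.key, item + bytes([i]), hashlib.sha256).digest()
            yield int.from_bytes(d[:4], "big") % self.m

    def add(self, item):
        for j in self._indices(item):
            self.bits[j] = 1

    def query(self, item):
        # no false negatives; false positives occur with small probability
        return all(self.bits[j] for j in self._indices(item))
```

The filter's error bounds then hold even against adaptively chosen inputs, because the adversary cannot predict which bits an element sets without the key.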
Thomas Roche, Laurent Imbert, Victor Lomné
ePrint Report
In a series of recent articles (from 2011 to 2017), Schindler et al. show that exponent/scalar blinding is not as effective a countermeasure as expected against side-channel attacks targeting RSA modular exponentiation and ECC scalar multiplication. Precisely, these works demonstrate that if an attacker is able to retrieve many randomizations of the same secret, this secret can be fully recovered even when a significant proportion of the blinded secret bits are erroneous. With a focus on ECC, this paper improves on the best results of Schindler et al. in both the generic case of random-order elliptic curves and the specific case of structured-order elliptic curves. Our results show that larger blinding material and higher error rates can be successfully handled by an attacker in practice. This study also opens new directions in this line of work through the proposal of a three-step attack process that isolates the attack's critical path (in terms of complexity and success rate) and hence eases the development of future solutions.
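For reference, scalar blinding itself is a one-line randomization; a toy sketch in a multiplicative group standing in for the elliptic curve (parameters illustrative, not secure):

```python
import random

# Scalar blinding: replace the secret scalar k by k' = k + r*n, where n is
# the group order and r is fresh per execution. Every run processes
# different exponent bits, yet the result is unchanged because g^n = 1.

p = 2147483647           # Mersenne prime 2^31 - 1 (toy group, not an ECC group)
n = p - 1                # order of the multiplicative group mod p
g = 7                    # primitive root mod p
k = random.randrange(1, n)

def blinded_exp(k, r):
    return pow(g, k + r * n, p)
```

The attacks summarized above recover k from many noisy side-channel observations of the randomized exponents k + r*n, which is why larger blinding factors r were previously believed to restore security.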
Nugier Cyrius, Adelin Remi, Migliore Vincent, Alata Eric
ePrint Report
Attribute-Based Encryption, proposed by Sahai and Waters in 2005, is a set of promising cryptographic schemes that enable various forms of fine-grained access control on encrypted data. With a single encryption key, a user is able to encrypt data for a very specific group of recipients that matches a set of attributes contained in their decryption keys. In the current scenario, where personal devices share an increasing volume of private data on the web, such encryption algorithms are more than ever a strong alternative to standard encryption algorithms.

In this paper, we propose two major improvements to ABE, namely the Perfect Argument Order Optimization and Multi-Locking. Multi-Locking ABE is an extension of ABE that enables sharing the access-control policy among an arbitrary number of entities. We also go a step further in speeding up ABE by providing the ``Perfect Argument Order Optimization'', a generalization of the ``Fixed Argument Optimization'' of Scott et al. to a much wider range of ABE constructions (and in particular to our Multi-Locking ABE). Based on these two improvements, we propose a construction of the first privacy-preserving Cloud service based on ABE, allowing ephemeral access to the data. The Multi-Locking ABE and the Perfect Argument Order Optimization have been successfully integrated into the OpenABE library, providing a speed-up for a variety of ABE constructions.

17 October 2019

Election
The 2019 Election for Board positions is now open. You may vote as often as you wish now through November 15th 23:00 UTC using the Helios cryptographically verifiable election system, but only your last vote will be counted.
Abdur Rehman Raza, Khawir Mahmood, Muhammad Faisal Amjad, Haider Abbas, Mehreen Afzal
ePrint Report
Lightweight block ciphers are primarily designed for resource-constrained devices. However, due to the service requirements of large-scale IoT networks and systems, the need for efficient software implementations cannot be ruled out. A number of studies have compared software implementations of different lightweight block ciphers on a specific platform, but to the best of our knowledge, this is the first attempt to benchmark software implementations of a single lightweight block cipher across different programming languages and platforms in a cloud architecture. In this paper, we define six lookup-table-based software implementation types for lightweight block ciphers, with characteristics ranging from memory-optimized to throughput-optimized variants. We carry out a thorough analysis of the two costs associated with each implementation (memory and operations) and discuss possible trade-offs in detail. We code all six implementation types for the three key settings (64, 80, and 128 bits) of LED (a lightweight block cipher) in four programming languages (Java, C#, C++, Python). We highlight the impact of the choices of implementation type, programming language, and platform by benchmarking the resulting seventy-two implementations for throughput and software efficiency on 32- and 64-bit platforms for two major operating systems (Windows and Linux) on the Amazon Web Services cloud. The results show that these choices can affect the efficiency of a cryptographic primitive by a factor as high as 400.
Ashutosh Dhar Dwivedi
ePrint Report
The internet has the key advantage of transparency and sharing, but it has the disadvantage that digital content is not protected. In the online environment, it is not easy to build a well-protected digital rights management system. Digital content that is freely allowed to spread online has essentially zero value. The content provider only receives a one-time profit when uploading their work to a platform and transferring the rights to the production to that platform, which is then assumed to hold the rights. But because the content is available online, anyone can download it and make copies; after this, the value of the digital content drops to zero, because its value is determined by the difficulty of access. There is no way to track the leakage of, or enforce the copyright on, the spreading digital material, and anyone can use it for their own purposes. In this paper, we propose a distributed media transaction framework for a digital rights management (DRM) scheme based on digital watermarking and a scalable blockchain network model. The first generation of blockchain technology suffers from high latency, low throughput, high transaction cost, high energy and computational power consumption, as well as centralization due to mining pools. We focus on removing or mitigating all of these issues in the original blockchain system to make it suitable for our digital rights management model. Our model allows only authorized users to use online content and provides original multimedia content. The DRM also manages digital content and keeps track of content modifications, copyright transfers, and other transaction trails related to the multimedia data. We use digital watermarking to reclaim the uniqueness and copyright ownership of offline content once it is leaked.
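The watermarking component can be illustrated with a minimal blind LSB embedding (a simplistic stand-in: real DRM watermarks must survive compression and format conversion, which LSB embedding does not):

```python
# Hide watermark bits in the least-significant bit of each content byte and
# recover them without access to the original content (blind extraction).

def embed(content: bytes, wm_bits):
    out = bytearray(content)
    for i, bit in enumerate(wm_bits):
        out[i] = (out[i] & 0xFE) | bit   # clear LSB, then set it to the bit
    return bytes(out)

def extract(content: bytes, n_bits):
    return [b & 1 for b in content[:n_bits]]
```

In a DRM setting, the embedded bits would encode an owner or transaction identifier recorded on the ledger, so a leaked copy can be traced back to a transaction.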
Wei-Zhu Yeoh, Je Sen Teh, Jiageng Chen
ePrint Report
Differential cryptanalysis of block ciphers requires the identification of differential characteristics with high probability. For block ciphers with a large block size and a large number of rounds, identifying these differential trails is a computationally intensive task. Matsui first proposed a branch-and-bound algorithm to search for differential trails in 1994. There have been numerous improvements to the branch-and-bound algorithm since then, such as bounding the number of active S-boxes, incorporating a meet-in-the-middle approach, and adapting it to different block cipher architectures such as ARX. Although the mixed-integer linear programming (MILP) technique has been widely used recently to evaluate the differential resistance of block ciphers, MILP is still inefficient for clustering differential trails (also known as the differential effect); the branch-and-bound method remains better suited to trail clustering. However, it still requires enhancements before being feasible for block ciphers with large block sizes, especially over a large number of rounds. Motivated by the need for a more efficient branch-and-bound algorithm for searching for block cipher differential clusters, we propose a GPU-accelerated branch-and-bound approach, which substantially increases the performance of the differential cluster search. We derive a branch enumeration and evaluation kernel that is 5.95 times faster than its CPU counterpart. To showcase the practicality of the proposed approach, we apply it to TRIFLE-BC, a 128-bit block cipher. By utilizing the proposed GPU kernel together with a meet-in-the-middle approach, we improve the performance of the algorithm to approximately 60 times that of the original recursive algorithm on 20-round TRIFLE-BC.
We also construct differential clusters with sizes of approximately 2 million trails for 43 rounds, leading to slight improvements in the overall differential probabilities. This result demonstrates the practicality of the proposed GPU framework in constructing clusters consisting of millions of differential trails, which could be used to improve cryptanalytic results against other block ciphers in the future.
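The CPU baseline being accelerated is a Matsui-style recursive search; a toy version over a single 4-bit S-box (the PRESENT S-box, used here purely as an example, with no linear layer between rounds) shows the branch-and-bound structure:

```python
from itertools import product

PRESENT_SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

def ddt(sbox):
    # difference distribution table: t[dx][dy] counts x with S(x)^S(x^dx) == dy
    t = [[0] * 16 for _ in range(16)]
    for x, dx in product(range(16), repeat=2):
        t[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return t

def best_trail(rounds):
    # best single-trail probability over the given number of rounds
    table = ddt(PRESENT_SBOX)
    best = [0.0]

    def search(dx, rnd, prob):
        if rnd == rounds:
            best[0] = max(best[0], prob)
            return
        for dy in range(1, 16):
            p = prob * table[dx][dy] / 16
            # prune: probabilities only shrink with more rounds, so a prefix
            # that cannot already beat the best complete trail is abandoned
            if p > best[0]:
                search(dy, rnd + 1, p)

    for dx in range(1, 16):
        search(dx, 0, 1.0)
    return best[0]
```

A cluster search would instead accumulate all trails sharing the same input and output difference rather than keeping only the maximum; GPU parallelism helps because the branches at each level are independent.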