International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

17 May 2022

Mark Blunk, Paul Bunn, Samuel Dittmer, Steve Lu, Rafail Ostrovsky
ePrint Report
Secure merge considers the problem of combining two sorted lists (which are either held separately by two parties, or held by two parties in some privacy-preserving manner, e.g. via secret-sharing), and outputting a single merged (sorted) list in a privacy-preserving manner (typically the final list is encrypted or secret-shared amongst the original two parties). Just as algorithms for \textit{insecure} merge are faster than comparison-based sorting ($\Theta(n)$ versus $\Theta(n \log n)$ for lists of size $n$), we explore protocols for performing a \textit{secure} merge that are more performant than simply invoking a secure sort protocol. Namely, we construct a semi-honest protocol that requires $O(n)$ communication and computation and $O(\log \log n)$ rounds of communication. This matches the metrics of the insecure merge for communication and computation, although it does not match the $O(1)$ round-complexity of insecure merge. Our protocol relies only on black-box use of basic secure primitives, like secure comparison and shuffle.
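For context, the insecure baseline whose $O(n)$ cost the protocol matches is just the textbook two-pointer merge; the sketch below (plain Python, the cleartext baseline only, not the secure protocol) shows it.

    # Standard (insecure) two-pointer merge of two sorted lists in O(n) time.
    # This is only the cleartext baseline whose O(n) cost the secure protocol
    # aims to match; it is not the secure merge protocol itself.
    def merge(a, b):
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        out.extend(a[i:])
        out.extend(b[j:])
        return out

    assert merge([1, 4, 7], [2, 3, 9]) == [1, 2, 3, 4, 7, 9]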

Our protocol improves on previous work of [FNO22], which gave a protocol with $O(n)$ communication and $O(n)$ round complexity, and on other ``naive'' approaches, such as the shuffle-sort paradigm, which has $O(n \log n)$ communication and $O(\log n)$ round complexity. It is also more efficient for most practical applications than either a garbled circuit or fully homomorphic encryption (FHE) approach, each of which requires $O(n \log n)$ communication or computation and has $O(1)$ round complexity.

Several applications stand to benefit from our result. One is secure sort: when two or more parties each hold their own list of data, secure sort reduces to secure merge, since the parties can first sort their data locally; this in turn has implications for more efficient private set intersection (PSI) protocols. Another is secure mutable database storage and search, where secure merge can be used to insert new rows into an existing database.

In building our secure merge protocol, we develop several subprotocols that may be of independent interest. For example, we develop a protocol for secure asymmetric merge (where one list is much larger than the other), which matches theoretical lower bounds for all three metrics (assuming the ratio of list sizes is small enough).
Simin Ghesmati, Andreas Kern, Aljosha Judmayer, Nicholas Stifter and
ePrint Report
Over the years, several privacy attacks targeting UTXO-based cryptocurrencies such as Bitcoin have been proposed. This has led to an arms race between increasingly sophisticated analysis approaches and a continuous stream of proposals that seek to counter such attacks on users' privacy. Recently, PayJoin was presented as a new technique for mitigating one of the most prominent heuristics, namely \emph{common input ownership}. This heuristic assumes that the inputs of a transaction, and thus the associated addresses, belong to the same entity. However, a problem with PayJoin is that implementations can accidentally reveal such transactions if the corresponding inputs from the involved parties are not chosen carefully. Specifically, if a transaction is formed in a way that it contains seemingly unnecessary inputs, it can be identified through the so-called unnecessary input heuristic (UIH). What is not yet clear is the impact of naive coin selection algorithms within PayJoin implementations, which may cause such transactions to be flagged as PayJoin. This paper investigates the resemblance of PayJoin transactions to ordinary payment transactions by examining the significance of the unnecessary input heuristic in transactions with more than one input and exactly two outputs, which is the common template of recent PayJoin transactions.
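For illustration, the sketch below shows one simplified reading of such an unnecessary-input check for two-output transactions; the precise UIH variants studied in the paper may be defined differently, and the amounts are made up.

    # Simplified, illustrative reading of an "unnecessary input" check for a
    # two-output transaction; the exact UIH variants analyzed in the paper may
    # be defined differently. A transaction looks suspicious if some input could
    # be dropped while the remaining inputs still cover the smaller candidate payment.
    def has_seemingly_unnecessary_input(input_values, output_values):
        assert len(output_values) == 2
        payment = min(output_values)                    # candidate payment output
        total = sum(input_values)
        return any(total - v >= payment for v in input_values)

    # A PayJoin-style transaction, in which the receiver contributes an input,
    # can easily end up containing an input an ordinary wallet would not need:
    print(has_seemingly_unnecessary_input([5_000, 80_000], [60_000, 24_000]))  # True
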
Daniel Kales, Greg Zaverucha
ePrint Report
MPC-in-the-head based zero-knowledge proofs allow one to prove knowledge of a preimage for a circuit defined over a finite field F. In recent proofs the soundness depends on the size of F, and small fields require more parallel repetitions, and therefore produce larger proofs. In this paper we develop and systematically apply lifting strategies to such proof protocols in order to increase soundness and reduce proof size. The strategies are (i) lifting parts of the protocol to extension fields of F, (ii) using reverse-multiplication-friendly embeddings to pack elements of F into a larger field, and (iii) using an alternative circuit representation. Using a combination of these strategies at different points in the protocol, we design two new proof systems well suited to small circuits defined over small fields.
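As a rough illustration of why field size drives proof size (a simplistic model, not the paper's exact soundness analysis): if each parallel repetition contributes soundness error about $1/|F|$, then a $2^{-128}$ target needs roughly $128/\log_2 |F|$ repetitions, so lifting to an extension field cuts the repetition count.

    import math

    # Illustrative only: assume each repetition has soundness error ~ 1/|F|
    # (ignoring additive terms), so ~2^-128 soundness needs
    # ceil(128 / log2(|F|)) repetitions.
    def repetitions(field_size, target_bits=128):
        return math.ceil(target_bits / math.log2(field_size))

    for bits in (2, 8, 32, 64):
        print(f"|F| = 2^{bits}: {repetitions(2**bits)} repetitions")
    # |F| = 2^8 needs 16 repetitions, while lifting to a degree-8 extension
    # (|F'| = 2^64) would need only 2 under this simplistic model.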

As a case study we consider efficient constructions of post-quantum signatures, where a signature is a proof of knowledge of a one-way function preimage, and two commonly used one-way functions are defined over small fields (AES and LowMC). We find that carefully applying these lifting strategies gives shorter signatures than the state-of-the-art: our AES-based signatures are 1.3x shorter than Banquet (PKC 2021) and our LowMC-based signatures are almost 2x shorter than the NIST-candidate algorithm Picnic3. We implement our schemes and provide benchmarks. Finally, we also give other optimizations: some generally applicable to this class of proofs, and some specific to the circuits we focused on.
Eduardo Soria-Vazquez
ePrint Report
We introduce the first proof system for layered arithmetic circuits over an arbitrary ring $R$ that is (possibly) non-commutative and (possibly) infinite, while only requiring black-box access to its arithmetic and a subset $A \subseteq R$. Our construction only requires limited commutativity and regularity properties from $A$, similar to recent work on efficient information theoretic multi-party computation over non-commutative rings by Escudero and Soria-Vazquez (CRYPTO 2021), but furthermore covering infinite rings.

We achieve our results through a generalization of GKR-style interactive proofs (Goldwasser, Kalai and Rothblum, Journal of the ACM, 2015). When $A$ is a subset of the center of $R$, generalizations of the sum-check protocol and other building blocks are not too problematic. The case when the elements of $A$ only commute with each other, on the other hand, introduces a series of challenges. In order to overcome those, we need to introduce a new definition of polynomial ring over a non-commutative ring, the notion of left (and right) multi-linear extensions, modify the layer consistency equation and adapt the sum-check protocol.
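For readers who want the commutative baseline in front of them, here is a minimal sum-check sketch over a prime field for the sum of a multilinear polynomial given by its evaluations on the Boolean hypercube; it does not capture the paper's non-commutative generalization (left/right multilinear extensions, modified layer consistency), only the classical starting point.

    import random

    # Minimal sum-check sketch over a prime field: the classical, commutative
    # setting that GKR-style proofs build on. The toy modulus is an arbitrary choice.
    P = 2**61 - 1

    def mle_eval(table, rs):
        """Evaluate the multilinear extension of `table` (values on {0,1}^n) at rs."""
        vals = [v % P for v in table]
        for r in rs:
            half = len(vals) // 2
            vals = [((1 - r) * vals[i] + r * vals[i + half]) % P for i in range(half)]
        return vals[0]

    def sumcheck_demo(table):
        """Prover and verifier for sum_{b in {0,1}^n} f(b), run in one process."""
        n = len(table).bit_length() - 1
        assert len(table) == 1 << n
        cur = [v % P for v in table]
        claim = sum(cur) % P                       # the value being proven
        rs = []
        for _ in range(n):
            half = len(cur) // 2
            g0, g1 = sum(cur[:half]) % P, sum(cur[half:]) % P  # prover's round message
            assert (g0 + g1) % P == claim          # verifier's consistency check
            r = random.randrange(P)                # verifier's random challenge
            rs.append(r)
            claim = (g0 + r * (g1 - g0)) % P       # g is linear, so this is g(r)
            cur = [((1 - r) * cur[i] + r * cur[i + half]) % P for i in range(half)]
        assert claim == mle_eval(table, rs)        # final check against the MLE oracle
        return True

    print(sumcheck_demo([3, 1, 4, 1, 5, 9, 2, 6]))  # True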

Despite these changes, our results are compatible with recent developments such as linear-time provers. Moreover, for certain rings our construction achieves provers that run in sublinear time in the circuit size. We obtain such results both for known cases, such as matrix and polynomial rings, and for new ones, such as some rings resulting from Clifford algebras. Besides efficiency improvements in computation and/or round complexity for several instantiations, the core conclusion of our results is that state-of-the-art doubly efficient interactive proofs do not require much algebraic structure. This enables exact rather than approximate computation over infinite rings, as well as agile proof systems, where the black-box choice of the underlying ring can easily be switched through the software life cycle.
Diego F. Aranha, Youssef El Housni, Aurore Guillevic
ePrint Report
Elliptic curves have become key ingredients for instantiating zero-knowledge proofs and more generally proof systems. Recently, there have been many tailored constructions of these curves that aim at efficiently implementing different kinds of proof systems. In this survey we provide the reader with a comprehensive view on existing work and revisit the contributions in terms of efficiency and security. We present an overview at three stages of the process: curves to instantiate a SNARK, curves to instantiate a recursive SNARK, and also curves to express an elliptic-curve related statement. We provide new constructions of curves for SNARKs and generalize the state-of-the-art constructions for recursive SNARKs. We also exhaustively document the existing work and open-source implementations.
Yi Liu, Qi Wang, Siu-Ming Yiu
ePrint Report
Time-lock puzzle schemes allow one to encrypt messages for the future. More concretely, one can efficiently generate a time-lock puzzle for a secret/solution $s$, such that $s$ remains hidden until a specified time $T$ has elapsed, even against parallel adversaries. However, since computation on secrets within multiple puzzles can be performed only when \emph{all} of these puzzles are solved, the usage of classical time-lock puzzles is greatly limited. Homomorphic time-lock puzzle (HTLP) schemes were thus proposed to allow evaluating functions over puzzles directly, without solving them.
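As background, the sketch below is a toy version of the classic Rivest-Shamir-Wagner time-lock puzzle, the kind of object that HTLPs generalize; the parameters are deliberately tiny and insecure, and the homomorphic evaluation that HTLP schemes add is not shown.

    import math
    import random

    # Toy RSW-style time-lock puzzle with insecure, tiny parameters.
    # The generator knows phi(N) and can "fast-forward" the 2^T squarings;
    # a solver must actually perform T sequential squarings.
    p, q = 10007, 10009               # toy primes; real deployments use ~1024-bit+ primes
    N, phi = p * q, (p - 1) * (q - 1)
    T = 10_000                        # number of sequential squarings ("time")

    def generate(secret):
        a = random.randrange(2, N)
        while math.gcd(a, N) != 1:
            a = random.randrange(2, N)
        e = pow(2, T, phi)            # shortcut available only with the trapdoor phi
        mask = pow(a, e, N)           # equals a^(2^T) mod N
        return a, (secret + mask) % N

    def solve(a, c):
        b = a
        for _ in range(T):            # inherently sequential: T repeated squarings
            b = b * b % N
        return (c - b) % N

    a, c = generate(secret=42)
    assert solve(a, c) == 42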

However, although efficient HTLP schemes exist, more improvements are still needed for practicality. In this paper, we improve HTLP schemes to broaden their application scenarios in terms of \emph{applicability} and \emph{verifiability}. In terms of applicability, we design the \emph{first} multiplicatively homomorphic TLP scheme with solution space $\mathbb{Z}_n^*$, which is more expressive than the original one, e.g., for representing integers. Then, to fit HTLP into scenarios requiring verifiability, which is missing in existing schemes, we propose three \emph{simple} and \emph{fast} protocols, for both the additively homomorphic scheme and our multiplicatively homomorphic scheme. The first two protocols allow a puzzle solver to convince others of the correctness of the solution or the invalidity of the puzzle, so that others do not need to solve the puzzle themselves. The third protocol allows a puzzle generator to prove the validity of his puzzles. We show that a puzzle in our scheme is only $1.25$ KB, and one multiplication on puzzles takes only $0.01$ ms. Meanwhile, the overhead of each protocol is less than $0.6$ KB in communication and $40$ ms in computation. Hence, HTLP still demonstrates excellent efficiency in both communication and computation with these versatile properties.
Lior Rotem
ePrint Report
We prove strong security guarantees for a wide array of computational and decisional problems, both in hidden-order groups and in bilinear groups, within the algebraic group model (AGM) of Fuchsbauer, Kiltz and Loss (CRYPTO '18). As our first contribution, we put forth a new fine-grained variant of the Uber family of assumptions in hidden-order groups. This family includes in particular the repeated squaring function of Rivest, Shamir and Wagner, which underlies their time-lock puzzle as well as the main known candidates for verifiable delay functions; and a computational variant of the generalized BBS problem, which underlies the timed commitments of Boneh and Naor (CRYPTO '00). We then provide two results within a variant of the AGM, which show that the hardness of solving problems in this family in a less-than-trivial number of steps is implied by well-studied assumptions. The first reduction may be applied in any group (and in particular, class groups), and is to the RSA assumption; and our second reduction is in RSA groups with a modulus which is the product of two safe primes, and is to the factoring assumption.

Additionally, we prove that the hardness of any computational problem in the Uber family of problems in bilinear groups is implied by the hardness of the $q$-discrete logarithm problem. The parameter $q$ in our reduction is the maximal degree in which a variable appears in the polynomials which define the specific problem within the Uber family. This improves upon a recent result of Bauer, Fuchsbauer and Loss (CRYPTO '20), who obtained a similar implication but for a parameter $q$ which is lower bounded by the maximal total degree of one of the above polynomials. We discuss the implications of this improvement to prominent group key-exchange protocols.
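To make the gap concrete with an illustrative polynomial (a hypothetical example, not one taken from the paper): if a problem in the Uber family is defined by the polynomial $X_1^3 X_2^2 + X_1 X_2$, then the maximal degree in which any single variable appears is $3$, while the maximal total degree is $5$; the new reduction therefore relies on the $3$-discrete logarithm problem, whereas the earlier bound would only guarantee a reduction from (at least) the $5$-discrete logarithm problem.
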
Lior Rotem, Gil Segev
ePrint Report
Identifying the concrete hardness of the discrete logarithm problem is crucial for instantiating a vast range of cryptographic schemes. Towards this goal, Corrigan-Gibbs and Kogan (EUROCRYPT '18) extended the generic-group model for capturing "preprocessing" algorithms, offering a tradeoff between the space $S$ required for storing their preprocessing information, the time $T$ required for their online phase, and their success probability. Corrigan-Gibbs and Kogan proved an upper bound of $\widetilde{O}(S T^2/N)$ on the success probability of any such algorithm, where $N$ is the prime order of the group, matching the known preprocessing algorithms.
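To unpack the tradeoff with a back-of-the-envelope calculation (illustrative only, ignoring the hidden polylogarithmic factors): constant success probability requires $S T^2 \approx N$, so in a group of order $N \approx 2^{256}$ an attacker balancing space and time ($S = T$) needs $S = T \approx N^{1/3} \approx 2^{85}$, while restricting the space to $S \approx 2^{64}$ pushes the online time up to $T \approx \sqrt{N/S} \approx 2^{96}$.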

However, the known algorithms assume the availability of truly random hash functions, without taking into account the space required for storing them as part of the preprocessing information, and the time required for evaluating them in essentially each and every step of the online phase. This led Corrigan-Gibbs and Kogan to pose the open problem of designing a discrete-logarithm preprocessing algorithm that is fully constructive in the sense that it relies on explicit hash functions whose description lengths and evaluation times are taken into account in the algorithm's space-time tradeoff.

We present a fully constructive discrete-logarithm preprocessing algorithm with an asymptotically optimal space-time tradeoff (i.e., with success probability $\widetilde{\Omega}(S T^2/N)$). In addition, we obtain an algorithm that settles the corresponding tradeoff for the computational Diffie-Hellman problem. Our approach is based on derandomization techniques that provide rather weak independence guarantees. On the one hand, we show that such guarantees can be realized in our setting with only a minor efficiency overhead. On the other hand, exploiting such weak guarantees requires a more subtle and in-depth analysis of the underlying combinatorial structure compared to that of the known preprocessing algorithms and their analyses.
Hao Chung, Elisaweta Masserova, Elaine Shi, Sri Aravinda Krishnan Thyagarajan
ePrint Report
Fair exchange is a fundamental primitive for blockchains, and is widely adopted in applications such as atomic swaps, payment channels, and DeFi. Most existing designs of blockchain-based fair exchange protocols consider only the users as strategic players, and assume honest miners. However, recent works revealed that the fairness of commonly deployed fair exchange protocols can be completely broken in the presence of user-miner collusion. In particular, a user can bribe the miners to help it cheat — a phenomenon also referred to as Miner Extractable Value (MEV).

We provide the first formal treatment of side-contract-resilient fair exchange. We propose a new fair exchange protocol called Ponyta, and we prove that the protocol is incentive compatible in the presence of user-miner collusion. In particular, we show that Ponyta satisfies a coalition-resistant Nash equilibrium. Further, we show how to use Ponyta to realize a cross-chain coin swap application, and prove that our coin swap protocol also satisfies coalition-resistant Nash equilibrium. Our work helps to lay the theoretical groundwork for studying side-contract-resilient fair exchange. Finally, we present practical instantiations of Ponyta in Bitcoin and Ethereum with minimal overhead in terms of costs for the users involved in the fair exchange, thus showcasing instantiability of Ponyta with a wide range of cryptocurrencies.

16 May 2022

Lidong Han, Guangwu Xu, Qi Xie, Xiao Tan, Chengliang Tian
ePrint Report
In this letter, we analyze the security of an identity-based provable data possession protocol with compressed cloud storage (published in IEEE TIFS, doi:10.1109/TIFS.2022.3159152). An adversary can recover the ephemeral secret keys from only two encrypted blocks and, using a basic result of Dirichlet, obtain the original data with high probability. Moreover, the adversary can impersonate a data owner to outsource any file to the cloud in an unwanted way.
Samuel Sousa, Roman Kern
ePrint Report
Deep learning (DL) models for natural language processing (NLP) tasks often handle private data, demanding protection against breaches and disclosures. Data protection laws, such as the European Union's General Data Protection Regulation (GDPR), thereby enforce the need for privacy. Although many privacy-preserving NLP methods have been proposed in recent years, no categories to organize them have been introduced yet, making it hard to follow the progress of the literature. To close this gap, this article systematically reviews over sixty DL methods for privacy-preserving NLP published between 2016 and 2020, covering theoretical foundations, privacy-enhancing technologies, and analysis of their suitability for real-world scenarios. First, we introduce a novel taxonomy for classifying the existing methods into three categories: data safeguarding methods, trusted methods, and verification methods. Second, we present an extensive summary of privacy threats, datasets for applications, and metrics for privacy evaluation. Third, throughout the review, we describe privacy issues in the NLP pipeline in a holistic view. Further, we discuss open challenges in privacy-preserving NLP regarding data traceability, computation overhead, dataset size, the prevalence of human biases in embeddings, and the privacy-utility tradeoff. Finally, this review presents future research directions to guide successive research and development of privacy-preserving NLP models.
Zhichuang Liang, Boyue Fang, Jieyu Zheng, Yunlei Zhao
ePrint Report
Post-quantum cryptography (PQC) is critical to the next generation of network security. The NTRU lattice is a promising candidate for constructing practical cryptosystems resistant to quantum computing attacks, and in particular plays a leading role in the ongoing NIST post-quantum cryptography standardization. On the one hand, it benefits from a strong security guarantee, having remained essentially unbroken for over 24 years. On the other hand, all the known patent threats against NTRU have expired, which is deemed a critical factor when deploying PQC algorithms in practice. Nevertheless, there are still some obstacles to the computational efficiency and bandwidth complexity of NTRU-based key encapsulation mechanisms (KEMs). To address these issues, we propose a compact and efficient KEM based on the NTRU lattice, called CTRU, by introducing a scalable ciphertext compression technique. It demonstrates a new approach to decrypting NTRU ciphertext, where the plaintext message is recovered with the aid of our decoding algorithm in the scalable $\text{E}_8$ lattice (instead of eliminating the extra terms modulo $p$ as in traditional NTRU-based KEM schemes). The instantiation of CTRU is over the NTT-friendly rings of the form $\mathbb{Z}_q[x]/(x^{n}-x^{n/2}+1)$. We remark that the scalable ciphertext compression technique can also be applied to NTRU-based KEM schemes over other polynomial rings. To deal with the inconvenience that different $n$'s would require different NTT algorithms, we present a unified NTT methodology over $\mathbb{Z}_q[x]/(x^n-x^{n/2}+1)$, $n\in \{512,768,1024\}$, such that only one type of NTT computation is required for all these $n$'s, which might be of independent interest. To our knowledge, CTRU is the most bandwidth-efficient KEM based on the NTRU lattice to date. In addition, roughly speaking, compared to other NTRU-based KEM schemes, CTRU has stronger security against known attacks, enjoys a more robust CCA security reduction (starting from IND-CPA rather than OW-CPA), and its encapsulation and decapsulation processes are also among the most efficient. For example, compared to the NIST Round 3 finalist NTRU-HRSS, our CTRU-768 has a $15\%$ smaller ciphertext size and its security is strengthened by $(45,40)$ bits for classical and quantum security, respectively. Compared to the NIST Round 3 finalist Kyber, which is based on the Module-LWE (MLWE) assumption, CTRU has both smaller bandwidth and lower error probabilities at about the same security level.
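As background intuition for ciphertext compression, the sketch below shows generic coefficient rounding in the style of several lattice KEMs; the modulus is a placeholder, and CTRU's scalable compression with $\text{E}_8$ decoding is a different, more refined technique.

    # Generic "round each mod-q coefficient to d bits" compression (illustrative
    # only; not CTRU's technique, and q below is a placeholder modulus).
    q = 3457

    def compress(x, d):
        return round(x * 2**d / q) % 2**d

    def decompress(y, d):
        return round(y * q / 2**d) % q

    for d in (4, 8, 10):
        x = 1234
        x2 = decompress(compress(x, d), d)
        err = min((x - x2) % q, (x2 - x) % q)
        # the round-trip error is bounded by roughly q / 2^(d+1)
        print(f"d = {d:2d}: {x} -> {compress(x, d)} -> {x2} (error {err})")
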
Alexandre Adomnicai, Kazuhiko Minematsu, Maki Shigeri
ePrint Report
This paper reports new software implementation results for the Skinny-128 tweakable block ciphers on various SIMD architectures. More precisely, we introduce a decomposition of the 8-bit S-box into four 4-bit S-boxes in order to take advantage of vector permute instructions, leading to significant performance improvements over previous constant-time implementations. Since our approach is of particular interest when Skinny-128 is used in sequential modes of operation, we also report how it benefits the Romulus authenticated encryption scheme, a finalist of the NIST LWC standardization process.
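To see why the 4-bit decomposition pays off: a 16-entry table fits exactly into one 128-bit register, so a byte-shuffle instruction (SSSE3 pshufb, NEON TBL) evaluates a 4-bit S-box on 16 lanes at once, which a 256-entry 8-bit table cannot do. The sketch below models this with an arbitrary 4-bit permutation, not the actual decomposition derived in the paper.

    import numpy as np

    # Model of a byte-shuffle instruction: out[i] = table[lane[i] & 0x0F] for
    # 16 lanes in parallel, i.e. one shuffle = 16 S-box lookups of a 4-bit table.
    def pshufb_model(table16, lanes):
        table16 = np.asarray(table16, dtype=np.uint8)
        return table16[np.asarray(lanes, dtype=np.uint8) & 0x0F]

    # An arbitrary 4-bit permutation standing in for one of the decomposed S-boxes.
    sbox4 = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
             0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

    nibbles = np.arange(16, dtype=np.uint8)
    print(pshufb_model(sbox4, nibbles))   # sixteen S-box lookups from one "shuffle"
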
Xuting Zhou, Tianshuo Cong
ePrint Report
Maximum Distance Separable (MDS) matrices are commonly used as diffusion layers in cryptographic designs. The main advantage of involutory MDS matrices is that encryption and decryption share the same matrix-vector product. In this paper, we present a new type of MDS matrices called generalized-involutory MDS matrices, whose inverse matrix-vector product (used in decryption) can be implemented as the encryption matrix-vector product plus a few extra XOR gates. To verify the existence of such matrices, we found a 4 × 4 Hadamard generalized-involutory MDS matrix over GF($2^4$) consuming as few as 38 XOR gates, with 4 additional XOR gates for the inverse matrix, while the best previous single-clock implementation in IWSEC 2019 needs 46 XOR gates, with 51 XOR gates for the inverse matrix. For GF($2^8$), our results also beat the best previous records in ToSC 2017.
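The properties in question can be verified mechanically. The sketch below checks whether a candidate 4 × 4 matrix over GF($2^4$) is involutory and MDS; the reduction polynomial ($x^4 + x + 1$) and the sample Hadamard entries are placeholder choices for illustration, not the matrix reported in the paper.

    from itertools import combinations

    def gmul(a, b):                      # multiplication in GF(2^4) mod x^4 + x + 1
        r = 0
        for _ in range(4):
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a & 0x10:
                a ^= 0x13
        return r

    def mat_mul(A, B):
        n = len(A)
        C = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                acc = 0
                for k in range(n):
                    acc ^= gmul(A[i][k], B[k][j])
                C[i][j] = acc
        return C

    def det(M):                          # Laplace expansion; signs vanish in characteristic 2
        if len(M) == 1:
            return M[0][0]
        acc = 0
        for j in range(len(M)):
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            acc ^= gmul(M[0][j], det(minor))
        return acc

    def is_involutory(M):
        n = len(M)
        return mat_mul(M, M) == [[1 if i == j else 0 for j in range(n)] for i in range(n)]

    def is_mds(M):                       # every square submatrix must be non-singular
        n = len(M)
        return all(det([[M[i][j] for j in cols] for i in rows]) != 0
                   for k in range(1, n + 1)
                   for rows in combinations(range(n), k)
                   for cols in combinations(range(n), k))

    a, b, c, d = 0x1, 0x2, 0x4, 0x6      # placeholder Hadamard entries
    M = [[a, b, c, d], [b, a, d, c], [c, d, a, b], [d, c, b, a]]
    print("involutory:", is_involutory(M), "MDS:", is_mds(M))
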
Akira Ito, Rei Ueno, Naofumi Homma
ePrint Report
This study derives information-theoretical bounds on the success rate (SR) of side-channel attacks on masked implementations. We first develop a communication channel model representing side-channel attacks on masked implementations. We then derive two SR bounds based on the conditional probability distribution and mutual information of shares. The basic idea is to evaluate the upper bound of the mutual information between the non-masked secret value and the side-channel trace by the conditional probability distribution of shares given their leakage, with the help of the Walsh–Hadamard transform. With the derived theorems, we also prove the security of masking schemes: the SR decreases exponentially with an increase in the number of masking shares, under a much more relaxed condition than in the previous proof. To validate and utilize our theorems in practice, we propose a deep-learning-based profiling method for estimating the conditional probability distribution of shares in order to estimate the SR bound and the number of traces required for attacking a given device. We experimentally confirm that our bounds are much tighter than the conventional bounds on masked implementations, which validates the relevance of our theorems to practice.
Charles Gouert, Rishi Khan, Nektarios Georgios Tsoutsos
ePrint Report
Homomorphic encryption is a powerful privacy-preserving technology that is notoriously difficult to configure, even for experts. In this article, we outline methodologies for determining optimal cryptographic parameters for any arbitrary application. We provide guidelines for both leveled and fully homomorphic encryption, and demonstrate the presented strategies with the BGV cryptosystem.
Gowri R Chandran, Carmit Hazay, Robin Hundt, Thomas Schneider
ePrint Report
With the large amount of data generated nowadays, analyzing this data has become increasingly important. Since a vast amount of this data is private, it is also important that the analysis is done in a secure manner. Comparison-based functions are commonly used in data analysis; these functions use the comparison operation as their basic building block. Secure computation of such functions has been discussed for the median by Aggarwal et al. (EUROCRYPT'04) and for the convex hull by Shelat and Venkitasubramaniam (ASIACRYPT'15).

In this paper, we present a generic protocol for the secure computation of comparison-based functions. In order to scale to a large number of participants, we propose this protocol in a star topology with the aim of reducing the communication complexity. We also present a protocol for one specific comparison-based function, the $k^{th}$ ranked element. The construction of one of our protocols leaks some intermediate values, but does not reveal information about any individual party's inputs. We provide an implementation and demonstrate that our protocol offers better performance than the protocol for the $k^{th}$ ranked element by Tueno et al. (FC'20).
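As a cleartext illustration of the comparison-based structure such protocols exploit (the insecure analogue, in the spirit of the classic binary-search-over-the-domain approach, not the secure protocol itself): the $k^{th}$ ranked element of the union can be found by binary searching the value domain, with each party contributing only a count per round.

    # Insecure sketch: find the k-th smallest element (1-indexed) of the union
    # of the parties' lists, where in each round every party reveals only a count.
    def kth_ranked(party_lists, k, domain_max):
        lo, hi = 0, domain_max
        while lo < hi:
            mid = (lo + hi) // 2
            # each party reports how many of its elements are <= mid
            count = sum(sum(1 for x in lst if x <= mid) for lst in party_lists)
            if count >= k:
                hi = mid
            else:
                lo = mid + 1
        return lo

    parties = [[3, 9, 27], [1, 4, 8, 100], [5, 6]]
    assert kth_ranked(parties, k=5, domain_max=128) == 6
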
Tyler Beauregard, Janabel Xia, Mike Rosulek
ePrint Report
Private set intersection (PSI) allows two parties, who each hold a set of items, to learn which items they have in common, without revealing anything about their other items. Some applications of PSI would be better served by revealing only one common item, rather than the entire set of all common items. In this work we develop special-purpose protocols for privately finding one common item (FOCI) from the intersection of two sets. The protocols differ in how that item is chosen --- e.g., uniformly at random from the intersection; the ``best'' item in the intersection according to one party's ranking; or the ``best'' item in the intersection according to the sum of both parties' scores.
Ilia Iliashenko, Malika Izabachène, Axel Mertens, Hilder V. L. Pereira
ePrint Report
We propose homomorphic algorithms for privacy-preserving applications where we are given an encrypted dataset and want to compute the number of elements that share a common property. We consider a two-party scenario between a client and a server, where the storage and computation are outsourced to the server. We present two new efficient methods to solve this problem by homomorphically evaluating a selection function encoding the desired property and counting the number of elements which evaluate to the same value. Our first method programs the homomorphic computation in the style of the functional bootstrapping of TFHE and can be instantiated with essentially any homomorphic encryption scheme that operates on polynomials, like FV or BGV. Our second method relies on new homomorphic operations and ciphertext formats, and is more suitable for applications where the number of possible inputs is much larger than the number of possible values for the property. We illustrate the feasibility of our methods with a publicly available proof-of-concept implementation in C++, which we use to evaluate a heatmap function over encrypted geographic points.
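In the clear, the outsourced functionality is simply a histogram of a selection function over the dataset. The sketch below shows that plaintext analogue with a made-up grid-cell selector in the spirit of the heatmap example; the paper's contribution is evaluating this under homomorphic encryption, which the sketch does not capture.

    from collections import Counter

    # Plaintext analogue of the outsourced computation: apply a selection
    # function to every record and count how many records share each value.
    def property_histogram(records, selector):
        return Counter(selector(r) for r in records)

    # Hypothetical heatmap-style selector: map a geographic point to a grid cell.
    def grid_cell(point, cell_size=1.0):
        lat, lon = point
        return (int(lat // cell_size), int(lon // cell_size))

    points = [(48.2, 16.3), (48.9, 16.7), (48.4, 16.1), (50.1, 14.4)]
    print(property_histogram(points, grid_cell))
    # Counter({(48, 16): 3, (50, 14): 1})
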
Boris Ryabko
ePrint Report
In 2002, Russell and Wang proposed a definition of entropic security, which was developed within the framework of secret-key cryptography. An entropically secure system is unconditionally secure, that is, unbreakable regardless of the adversary’s computing power. The notion of an entropically secure symmetric encryption scheme is important for cryptography because one can construct entropically secure symmetric encryption schemes with keys much shorter than the length of the input, thus circumventing Shannon’s famous lower bound on key length. In this report we suggest an entropically secure scheme for the case where the encrypted message is generated by a Markov chain with unknown statistics. The length of the required secret key is proportional to the logarithm of the message length (as opposed to the length of the message itself for the one-time pad).

Keywords: information theory, entropic security, indistinguishability, symmetric encryption scheme, unconditional security, Markov chain, unknown statistics.