International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

05 April 2023

Agnese Gini, Pierrick Méaux
ePrint Report
In this article we study the Algebraic Immunity (AI) of Weightwise Perfectly Balanced (WPB) functions. After showing a lower bound on the AI of two classes of WPB functions from the previous literature, we prove that the minimal AI of an $n$-variable WPB function is constant, equal to $2$ for $n\ge 4$. Then, we compute the distribution of the AI of WPB functions in $4$ variables, and estimate it in $8$ and $16$ variables. For these values of $n$ we observe that a large majority of WPB functions have optimal AI, and that we could not obtain an AI-$2$ WPB function by sampling at random. Finally, we address the problem of constructing WPB functions with bounded algebraic immunity, exploiting a construction from 2022 by Gini and Méaux. In particular, we present a method to generate multiple WPB functions with minimal AI, and we prove that the WPB functions with high nonlinearity exhibited by Gini and Méaux also have minimal AI. We conclude with a construction giving WPB functions with lower-bounded AI, and give as an example a family in which every element has AI at least $n/2-\log(n)+1$.
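As an illustration of the weightwise perfect balance property itself (not of the paper's constructions or AI computations), the following Python sketch builds a toy WPB function on $n=4$ variables by balancing each Hamming-weight slice by hand, then checks the property; the helper names are our own.

```python
from itertools import product

def wpb_example(n=4):
    """Build a toy weightwise perfectly balanced function on n variables:
    on each Hamming-weight slice 1..n-1, map exactly half the inputs to 1;
    set f(0..0)=0 and f(1..1)=1 (a common convention for WPB functions)."""
    f = {}
    points = list(product([0, 1], repeat=n))
    for k in range(n + 1):
        slice_k = sorted(p for p in points if sum(p) == k)
        if k == 0:
            f[slice_k[0]] = 0
        elif k == n:
            f[slice_k[0]] = 1
        else:
            for i, p in enumerate(slice_k):
                f[p] = 1 if i < len(slice_k) // 2 else 0
    return f

def is_wpb(f, n):
    """Check weightwise perfect balance: every slice of weight 1..n-1
    contains equally many 0s and 1s under f."""
    for k in range(1, n):
        vals = [f[p] for p in f if sum(p) == k]
        if sum(vals) * 2 != len(vals):
            return False
    return True

f = wpb_example(4)
print(is_wpb(f, 4))  # True
```

Note that this only works when every middle binomial coefficient is even, which is why WPB functions are usually studied for $n$ a power of two, as in the $4$-, $8$- and $16$-variable cases above.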
Quang Dao, Paul Grubbs
ePrint Report
Increasing deployment of advanced zero-knowledge proof systems, especially zkSNARKs, has raised critical questions about their security against real-world attacks. Two classes of attacks of concern in practice are adaptive soundness attacks, where an attacker can prove false statements by choosing its public input after generating a proof, and malleability attacks, where an attacker can use a valid proof to create another valid proof it could not have created itself. Prior work has shown that simulation-extractability (SIM-EXT), a strong notion of security for proof systems, rules out these attacks. In this paper, we prove that two transparent, discrete-log-based zkSNARKs, Spartan and Bulletproofs, are simulation-extractable (SIM-EXT) in the random oracle model if the discrete logarithm assumption holds in the underlying group. Since these assumptions are required to prove standard security properties for Spartan and Bulletproofs, our results show that SIM-EXT is, surprisingly, "for free" with these schemes. Our result is the first SIM-EXT proof for Spartan and encompasses both linear- and sublinear-verifier variants. Our result for Bulletproofs encompasses both the aggregate range proof and arithmetic circuit variants, and is the first to not rely on the algebraic group model (AGM), resolving an open question posed by Ganesh et al. (EUROCRYPT '22). As part of our analysis, we develop a generalization of the tree-builder extraction theorem of Attema et al. (TCC '22), which may be of independent interest.
Yufan Jiang, Yong Li
ePrint Report
Tremendous efforts have been made to improve the efficiency of secure Multi-Party Computation (MPC), which allows n ≥ 2 parties to jointly evaluate a target function without leaking their own private inputs. Previous research has confirmed that 3-Party Computation (3PC) and outsourcing computations to GPUs can lead to huge performance improvements for MPC in computationally intensive tasks such as Privacy-Preserving Machine Learning (PPML). A natural question to ask is whether super-linear performance gain is possible for a linear increase in resources. In this paper, we give an affirmative answer to this question. We propose Force, an extremely efficient 4PC system for PPML. To the best of our knowledge, each party in Force enjoys the fewest local computations and lowest data exchanges between parties. This is achieved by introducing a new sharing type, X-share, along with MPC protocols for privacy-preserving training and inference that are semi-honest secure with an honest majority. Our contribution does not stop at theory. We also propose engineering optimizations and verify the high performance of the protocols with implementation and experiments. By comparing the results with state-of-the-art systems such as Cheetah, Piranha, CryptGPU and CrypTen, we showcase that Force is sound and extremely efficient, as it can improve the PPML performance by a factor of 2 to 1200 compared with other recent 2PC, 3PC and 4PC systems.
Carlos Aguilar-Melchor, Martin R. Albrecht, Thomas Bailleux, Nina Bindel, James Howe, Andreas Hülsing, David Joseph, Marc Manzano
ePrint Report
We revisit batch signatures (previously considered in a draft RFC, and used in multiple recent works), where a single, potentially expensive, "inner" digital signature authenticates a Merkle tree constructed from many messages. We formalise a construction and prove its unforgeability and privacy properties.

We also show that batch signing allows us to scale slow signing algorithms, such as those recently selected for standardisation as part of NIST's post-quantum project, to high throughput, with a mild increase in latency. For the example of Falcon-512 in TLS, we can increase the number of connections per second by a factor of 3.2, at the cost of a ~14% increase in signature size and a ~25% increase in median latency, both measured on the same 30-core server.

We also discuss applications where batch signatures allow us to increase throughput and to save bandwidth. For example, again for Falcon-512, once one batch signature is available, the additional bandwidth for each of the remaining N-1 is only 82 bytes.
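The Merkle-tree batching idea can be sketched as follows. This is an illustrative Python toy (SHA-256 for hashing, a power-of-two batch size, and the expensive "inner" signature, e.g. Falcon-512, abstracted away as a single signature over the root), not the construction formalised in the paper.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_tree(leaves):
    """Build a Merkle tree over leaf hashes; returns all levels
    (level 0 = leaves, last level = [root])."""
    assert leaves and len(leaves) & (len(leaves) - 1) == 0  # power of two, for simplicity
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes from leaf to root for the leaf at `index`."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify(root, leaf, index, proof):
    """Recompute the root from a leaf and its sibling path."""
    node = leaf
    for sib in proof:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root

msgs = [f"msg-{i}".encode() for i in range(8)]
leaves = [h(m) for m in msgs]
levels = merkle_tree(leaves)
root = levels[-1][0]   # the one "inner" signature would be computed over this root
proof = inclusion_proof(levels, 5)
print(verify(root, leaves[5], 5, proof))  # True
```

Each message then ships with the (shared) root signature plus its own sibling path of log N hashes, which is where the small per-message bandwidth overhead reported above comes from.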
Samuel Bedassa Alemu, Julia Kastner
ePrint Report
Blind signatures were originally introduced by Chaum (CRYPTO ’82) in the context of privacy-preserving electronic payment systems. Nowadays, the cryptographic primitive has also found applications in anonymous credentials and voting systems. However, many practical blind signature schemes have only been analysed in the game-based setting where a single signer is present. This is somewhat unsatisfactory as blind signatures are intended to be deployed in a setting with many signers. We address this in the following ways:
- We formalise two variants of one-more-unforgeability of blind signatures in the Multi-Signer Setting.
- We show that one-more-unforgeability in the Single-Signer Setting translates straightforwardly to the Multi-Signer Setting, with a reduction loss proportional to the number of signers.
- We identify a class of blind signature schemes, which we call Key-Convertible, for which this reduction loss can be traded for an increased number of signing sessions in the Single-Signer Setting, and show that many practical blind signature schemes, such as blind BLS, blind Schnorr and blind Okamoto-Schnorr, as well as two pairing-free, ROS-immune schemes by Tessaro and Zhu (Eurocrypt ’22), fulfil this property.
- We further describe how the notion of key substitution attacks (Menezes and Smart, DCC ’04) can be translated to blind signatures and provide a generic transformation showing how they can be avoided.
Fuyuki Kitagawa, Tomoyuki Morimae, Ryo Nishimaki, Takashi Yamakawa
ePrint Report
We construct quantum public-key encryption from one-way functions. In our construction, public keys are quantum, but ciphertexts are classical. Quantum public-key encryption from one-way functions (or weaker primitives such as pseudorandom function-like states) has also been proposed in some recent works [Morimae-Yamakawa, eprint:2022/1336; Coladangelo, eprint:2023/282; Grilo-Sattath-Vu, eprint:2023/345; Barooti-Malavolta-Walter, eprint:2023/306]. However, these have a huge drawback: they are secure only when quantum public keys can be transmitted to the sender (who runs the encryption algorithm) without being tampered with by the adversary, which seems to require unsatisfactory physical setup assumptions such as secure quantum channels. Our construction is free from such a drawback: it guarantees the secrecy of the encrypted messages even if we assume only unauthenticated quantum channels. Thus, the encryption is done with adversarially tampered quantum public keys. Our construction, based only on one-way functions, is the first quantum public-key encryption that achieves the goal of classical public-key encryption, namely, establishing secure communication over insecure channels.
Eric Sageloli, Pierre Pébereau, Pierrick Méaux, Céline Chevalier
ePrint Report
We provide identity-based signature (IBS) schemes with tight security against adaptive adversaries, in the (classical or quantum) random oracle model (ROM or QROM), in both unstructured and structured lattices, based on the SIS or RSIS assumption. These signatures are short (of size independent of the message length). Our schemes build upon a work from Pan and Wagner (PQCrypto’21) and improve on it in several ways. First, we prove their transformation from non-adaptive to adaptive IBS in the QROM. Then, we simplify the parameters used and give concrete values. Finally, we simplify the signature scheme by using a non-homogeneous relation, which helps us reduce the size of the signature and get rid of one costly trapdoor delegation. On the whole, we get better security bounds, shorter signatures and faster algorithms.
Sagnik Saha, Nikolaj Ignatieff Schwartzbach, Prashant Nalini Vasudevan
ePrint Report
In the average-case $k$-SUM problem, given $r$ integers chosen uniformly at random from $\{0,\ldots,M-1\}$, the objective is to find a set of $k$ numbers that sum to 0 modulo $M$ (this set is called a "solution"). In the related $k$-XOR problem, given $r$ uniformly random Boolean vectors of length $\log M$, the objective is to find a set of $k$ of them whose bitwise XOR is the all-zero vector. Both of these problems have widespread applications in the study of fine-grained complexity and cryptanalysis.

The feasibility and complexity of these problems depends on the relative values of $k$, $r$, and $M$. The dense regime of $M \leq r^k$, where solutions exist with high probability, is quite well-understood and we have several non-trivial algorithms and hardness conjectures here. Much less is known about the sparse regime of $M\gg r^k$, where solutions are unlikely to exist. The best answers we have for many fundamental questions here are limited to whatever carries over from the dense or worst-case settings.

We study the planted $k$-SUM and $k$-XOR problems in the sparse regime. In these problems, a random solution is planted in a randomly generated instance and has to be recovered. As $M$ increases past $r^k$, these planted solutions tend to be the only solutions with increasing probability, potentially becoming easier to find. We show several results about the complexity and applications of these problems.
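A minimal Python sketch of the planted $k$-SUM setting just described, with a naive $r^k$-time solver; parameters and helper names are illustrative only.

```python
import random
from itertools import combinations

def plant_ksum(r, k, M, rng):
    """Sample r uniform integers mod M, then overwrite one entry so that
    a chosen k-subset sums to 0 mod M (the planted solution)."""
    inst = [rng.randrange(M) for _ in range(r)]
    idx = rng.sample(range(r), k)
    inst[idx[-1]] = (-sum(inst[i] for i in idx[:-1])) % M
    return inst, tuple(sorted(idx))

def ksum_solve(inst, k, M):
    """Naive search over all k-subsets; r^k time."""
    for idx in combinations(range(len(inst)), k):
        if sum(inst[i] for i in idx) % M == 0:
            return idx
    return None

rng = random.Random(0)
r, k = 12, 3
M = r ** (2 * k)  # sparse regime: the planted solution is, whp, the only one
inst, planted = plant_ksum(r, k, M, rng)
sol = ksum_solve(inst, k, M)
print(sum(inst[i] for i in sol) % M == 0)  # True
```

Meet-in-the-middle techniques bring the worst-case exponent down to roughly $\lceil k/2 \rceil$, which is the baseline the algorithmic results below are measured against.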

Conditional Lower Bounds. Assuming established conjectures about the hardness of average-case (non-planted) $k$-SUM when $M = r^k$, we show non-trivial lower bounds on the running time of algorithms for planted $k$-SUM when $r^k\leq M\leq r^{2k}$. We show the same for $k$-XOR as well.

Search-to-Decision Reduction. For any $M>r^k$, suppose there is an algorithm running in time $T$ that can distinguish between a random $k$-SUM instance and a random instance with a planted solution, with success probability $(1-o(1))$. Then, for the same $M$, there is an algorithm running in time $\tilde{O}(T)$ that solves planted $k$-SUM with constant probability. The same holds for $k$-XOR as well.

Hardness Amplification. For any $M \geq r^k$, if an algorithm running in time $T$ solves planted $k$-XOR with success probability $\Omega(1/\text{polylog}(r))$, then there is an algorithm running in time $\tilde O(T)$ that solves it with probability $(1-o(1))$. We show this by constructing a rapidly mixing random walk over $k$-XOR instances that preserves the planted solution.

Cryptography. For some $M \leq 2^{\text{polylog}(r)}$, the hardness of the $k$-XOR problem can be used to construct Public-Key Encryption (PKE) assuming that the Learning Parity with Noise (LPN) problem with constant noise rate is hard for $2^{n^{0.01}}$-time algorithms. Previous constructions of PKE from LPN needed either a noise rate of $O(1/\sqrt{n})$, or hardness for $2^{n^{0.5}}$-time algorithms.

Algorithms. For any $M \geq 2^{r^2}$, there is a constant $c$ (independent of $k$) and an algorithm running in time $r^c$ that, for any $k$, solves planted $k$-SUM with success probability $\Omega(1/8^k)$. We get this by showing an average-case reduction from planted $k$-SUM to the Subset Sum problem. For $r^k \leq M \ll 2^{r^2}$, the best known algorithms are still the worst-case $k$-SUM algorithms running in time $r^{\lceil{k/2}\rceil-o(1)}$.
Nouri Alnahawi, Nicolai Schmitt, Alexander Wiesmaier, Andreas Heinemann, Tobias Grasmeyer
ePrint Report
The demand for crypto-agility, although dating back more than two decades, has recently increased in light of the expected post-quantum cryptography (PQC) migration, and the field has started to evolve into a science of its own. It is therefore important to establish a unified definition of the notion, as well as its related aspects, scope, and practical applications. This paper presents a literature survey on crypto-agility and discusses respective development efforts categorized into different areas, including requirements, characteristics, and possible challenges. We explore the need for crypto-agility beyond PQC algorithms and security protocols, and shed some light on current solutions, existing automation mechanisms, and best practices in this field. We evaluate the state of readiness for crypto-agility and offer a discussion of the identified open issues. The results of our survey indicate a need for a comprehensive understanding of the notion. Further, more agile design paradigms are required in developing new IT systems, and in refactoring existing ones, in order to realize crypto-agility on a broad scale.
Yiping Ma, Jess Woods, Sebastian Angel, Antigoni Polychroniadou, Tal Rabin
ePrint Report
This paper introduces Flamingo, a system for secure aggregation of data across a large set of clients that fits the stringent needs of federated learning settings. In secure aggregation, a server sums up the private inputs of clients and obtains the result without learning anything about the individual inputs beyond what is implied by the final sum. Flamingo focuses on the multi-round setting found in federated learning in which many consecutive summations (averages) of model weights are performed to derive a good model. Prior works, designed for a single round, when adapted to the multi-round setting, require all clients to establish pairwise secrets per round, which is onerous when the number of clients is large and clients have varying network conditions. Flamingo introduces a novel protocol for reusing pairwise secrets to reduce the overall communication-round complexity and a new lightweight dropout resilience protocol to ensure that if clients leave in the middle of a sum the server can still obtain a meaningful result. These techniques help Flamingo reduce the end-to-end runtime and the number of interactions between clients and the server for a full training session over prior work. We implement and evaluate Flamingo and show that it can effectively be used to securely train a neural network on the (Extended) MNIST and CIFAR-100 datasets, and the model converges without a loss in accuracy, compared to a non-private federated learning system.
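The pairwise-masking idea underlying secure-aggregation protocols of this kind can be sketched in a few lines of Python. This toy uses a seeded `random.Random` as a stand-in PRG and omits Flamingo's actual secret-reuse and dropout-resilience machinery; it only shows why the server learns the sum and nothing else.

```python
import random

M = 2 ** 16  # arithmetic modulus for the shares

def masked_inputs(xs, seeds):
    """Each client adds pairwise masks that cancel in the global sum:
    for the pair (i, j) with i < j, client i adds PRG(s_ij) and
    client j subtracts the same value, so the server's sum is unchanged."""
    n = len(xs)
    out = []
    for i in range(n):
        y = xs[i]
        for j in range(n):
            if i == j:
                continue
            mask = random.Random(seeds[(min(i, j), max(i, j))]).randrange(M)
            y = (y + mask) % M if i < j else (y - mask) % M
        out.append(y)
    return out

rng = random.Random(7)
xs = [rng.randrange(100) for _ in range(5)]
seeds = {(i, j): rng.randrange(2 ** 32) for i in range(5) for j in range(i + 1, 5)}
ys = masked_inputs(xs, seeds)
print(sum(ys) % M == sum(xs) % M)  # True: the masks cancel
```

The cost that Flamingo attacks is visible even here: every pair of clients needs a shared seed, which naively must be re-established every round.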
Martin R. Albrecht, Sofía Celi, Benjamin Dowling, Daniel Jones
ePrint Report
We report several practically-exploitable cryptographic vulnerabilities in the Matrix standard for federated real-time communication and its flagship client and prototype implementation, Element. These, together, invalidate the confidentiality and authentication guarantees claimed by Matrix against a malicious server. This is despite Matrix's cryptographic routines being constructed from well-known and well-studied cryptographic building blocks. The vulnerabilities we exploit differ in their nature (insecure by design, protocol confusion, lack of domain separation, implementation bugs) and are distributed broadly across the different subprotocols and libraries that make up the cryptographic core of Matrix and Element. Together, these vulnerabilities highlight the need for a systematic and formal analysis of the cryptography in the Matrix standard.
Kamyar Mohajerani, Luke Beckwith, Abubakr Abdulgadir, Eduardo Ferrufino, Jens-Peter Kaps, Kris Gaj
ePrint Report
Side-channel resistance is one of the primary criteria identified by NIST for use in evaluating candidates in the Lightweight Cryptography (LWC) Standardization process. In Rounds 1 and 2 of this process, when the number of candidates was still substantial (56 and 32, respectively), evaluating this feature was close to impossible. With ten finalists remaining, side-channel resistance and its effect on the performance and cost of practical implementations became of utmost importance. In this paper, we describe a general framework for evaluating the side-channel resistance of LWC candidates using resources, experience, and general practices of the cryptographic engineering community developed over the last two decades. The primary features of our approach are a) self-identification and self-characterization of side-channel security evaluation labs, b) distributed development of protected hardware and software implementations, matching certain high-level requirements and deliverable formats, and c) dynamic and transparent matching of evaluators with implementers in order to achieve the most meaningful and fair evaluation report. After the classes of hardware implementations with similar resistance to side-channel attacks are established, these implementations are comprehensively benchmarked using Xilinx Artix-7 FPGAs. All implementations belonging to the same class are then ranked according to several performance and cost metrics. Four candidates - Ascon, Xoodyak, TinyJAMBU, and ISAP - are selected as offering unique advantages over other finalists in terms of the throughput, area, throughput-to-area ratio, or randomness requirements of their protected hardware implementations.
Uddipana Dowerah, Subhranil Dutta, Aikaterini Mitrokotsa, Sayantan Mukherjee, Tapas Pal
ePrint Report
Predicate inner product functional encryption (P-IPFE) is essentially attribute-based IPFE (AB-IPFE) which additionally hides the attributes associated to ciphertexts. In a P-IPFE, a message x is encrypted under an attribute w and a secret key is generated for a pair (y, v) such that recovery of ⟨x, y⟩ requires the vectors w, v to satisfy a linear relation. We call a P-IPFE unbounded if it can encrypt attribute and message vectors of unbounded length.
- Zero-predicate IPFE. We construct the first unbounded zero-predicate IPFE (UZP-IPFE), which recovers ⟨x,y⟩ if ⟨w,v⟩ = 0. This construction is inspired by the unbounded IPFE of Tomida and Takashima (ASIACRYPT 2018) and the unbounded zero inner product encryption of Okamoto and Takashima (ASIACRYPT 2012). The UZP-IPFE stands secure against general attackers capable of decrypting the challenge ciphertext. Concretely, it provides full attribute-hiding security in the indistinguishability-based semi-adaptive model under the standard symmetric external Diffie-Hellman assumption.
- Non-zero-predicate IPFE. We present the first unbounded non-zero-predicate IPFE (UNP-IPFE), which successfully recovers ⟨x, y⟩ if ⟨w, v⟩ ≠ 0. We generically transform an unbounded quadratic FE (UQFE) scheme to weak attribute-hiding UNP-IPFE in both public and secret key settings. Interestingly, our secret key simulation secure UNP-IPFE has succinct secret keys and is constructed from a novel succinct UQFE that we build in the random oracle model. We leave the problem of constructing a succinct public key UNP-IPFE or UQFE in the standard model as an important open problem.
Buvana Ganesh, Apurva Vangujar, Alia Umrani, Paolo Palmieri
ePrint Report
Group signature (GS) schemes are an important primitive in cryptography that provide anonymity and traceability for a group of users. In this paper, we propose a new approach to constructing GS schemes using the homomorphic trapdoor function (HTDF). We focus on constructing an identity-based homomorphic signature (IBHS) scheme using the trapdoor, yielding a simpler scheme that requires no zero-knowledge proofs. Our scheme allows packing more data into the signatures by elevating the existing homomorphic trapdoor from the SIS assumption to the MSIS assumption to enable packing techniques. Compared to existing group signature schemes, we provide a straightforward alternative construction that is efficient and secure in the standard model. Overall, our proposed scheme provides an efficient and secure solution for GS schemes using HTDF.
Johannes Ernst, Aikaterini Mitrokotsa
ePrint Report
Despite its popularity, password-based authentication is susceptible to various kinds of attacks, such as online or offline dictionary attacks. Employing biometric credentials in the authentication process can strengthen the provided security guarantees, but raises significant privacy concerns. This is mainly due to the inherent variability of biometric readings, which prevents us from simply applying a standard hash function to them. In this paper we first propose an ideal functionality for modeling secure, privacy-preserving biometric-based two-factor authentication in the framework of universal composability (UC). The functionality is of independent interest and can be used to analyze other two-factor authentication protocols. We then present a generic protocol for biometric-based two-factor authentication and prove its security (relative to our proposed functionality) in the UC framework. The first factor in our protocol is the possession of a device that stores the required secret keys, and the second factor is the user's biometric template. Our construction can be instantiated with function-hiding functional encryption, which computes, for example, the distance between the encrypted templates or the predicate indicating whether the templates are close enough. Our contribution can be split into three parts:

- We model privacy-preserving biometric-based two-factor authentication as an ideal functionality in the UC framework. To the best of our knowledge, this is the first description of an ideal functionality for biometric-based two-factor authentication in the UC framework.
- We propose a general protocol that uses functional encryption and prove that it UC-realizes our ideal functionality.
- We show how to instantiate our framework with efficient, state-of-the-art inner-product functional encryption. This allows the computation of the Euclidean distance, Hamming distance or cosine similarity between encrypted biometric templates. To show its practicality, we implemented our protocol and evaluated its performance.
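Why inner-product functional encryption suffices for these distance metrics can be seen with plain arithmetic: each metric reduces to one or a few inner products. The Python sketch below shows only these identities; it involves no encryption, and the helper names are ours.

```python
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def sq_euclidean_from_ip(x, y):
    """||x - y||^2 recovered from three inner products only."""
    return inner(x, x) + inner(y, y) - 2 * inner(x, y)

def hamming_from_ip(a, b):
    """Hamming distance of bit vectors from one inner product,
    via the +/-1 encoding 0 -> +1, 1 -> -1."""
    s = lambda v: [1 - 2 * t for t in v]
    return (len(a) - inner(s(a), s(b))) // 2

x, y = [1, 2, 3], [3, 1, 2]
print(sq_euclidean_from_ip(x, y))   # 6
a, b = [0, 1, 1, 0], [1, 1, 0, 0]
print(hamming_from_ip(a, b))        # 2
```

In the protocol itself these inner products are evaluated under function-hiding IPFE, so only the final distance (or a closeness predicate) is revealed, never the templates.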
Adda-Akram Bendoukha, Oana Stan, Renaud Sirdey, Nicolas Quero, Luciano Freitas
ePrint Report
Fully homomorphic encryption (FHE) is a powerful cryptographic technique that allows computation directly over encrypted data. Motivated by the overhead induced by homomorphic ciphertexts during encryption and transmission, the transciphering technique, which consists in switching from symmetric encryption to FHE-encrypted data, has been investigated in several papers. Different stream and block ciphers have been evaluated in terms of their "FHE-friendliness", meaning practical implementation cost at sufficient security levels. In this work, we present a first evaluation of hash functions in the homomorphic domain, based on well-chosen block ciphers. More precisely, we investigate the cost of transforming PRINCE, SIMON, SPECK, and LowMC, a set of lightweight block ciphers, into secure hash primitives using well-established block-cipher-based hash function constructions, and provide evaluations under bootstrappable FHE schemes. We also motivate the necessity of practical homomorphic evaluation of hash functions by providing several use cases in which the integrity of private data is also required. In particular, our hash constructions can be of significant use in a threshold-homomorphic protocol for the single secret leader election problem occurring in blockchains with proof-of-stake consensus. Our experiments show that using a TFHE implementation of a hash function, we are able to achieve practical runtimes at appropriate security levels (e.g., for PRINCE it takes 1.28 minutes to obtain a 128-bit hash).
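One classical block-cipher-to-hash pattern of the kind alluded to above is the Davies-Meyer compression function inside a Merkle-Damgård iteration. The Python sketch below instantiates it with XTEA purely for illustration: XTEA is not among the ciphers the paper evaluates, the IV and padding are toy choices of ours, and everything runs in the clear rather than homomorphically.

```python
def xtea_encrypt(v, key, rounds=32):
    """Standard XTEA encryption: 64-bit block (two 32-bit words),
    128-bit key (four 32-bit words)."""
    v0, v1 = v
    mask, delta, s = 0xFFFFFFFF, 0x9E3779B9, 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & mask
        s = (s + delta) & mask
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & mask
    return v0, v1

def davies_meyer_hash(msg: bytes) -> bytes:
    """Davies-Meyer over XTEA: H_i = E_{m_i}(H_{i-1}) xor H_{i-1}.
    16-byte message blocks serve as cipher keys; 8-byte chaining value.
    Toy 10* padding and a fixed arbitrary IV; no length strengthening."""
    msg = msg + b"\x80"
    msg += b"\x00" * (-len(msg) % 16)
    h0, h1 = 0x01234567, 0x89ABCDEF
    for i in range(0, len(msg), 16):
        key = [int.from_bytes(msg[i + 4 * j:i + 4 * j + 4], "big") for j in range(4)]
        c0, c1 = xtea_encrypt((h0, h1), key)
        h0, h1 = h0 ^ c0, h1 ^ c1
    return h0.to_bytes(4, "big") + h1.to_bytes(4, "big")

print(davies_meyer_hash(b"hello").hex())
```

A production construction would add Merkle-Damgård length strengthening and a wider state; the point here is only the shape of the iteration whose homomorphic cost the paper measures.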
Hiroki Okada, Kazuhide Fukushima, Shinsaku Kiyomoto, Tsuyoshi Takagi
ePrint Report
Agrawal et al. (Asiacrypt 2013) proved the discrete Gaussian leftover hash lemma, which states that the linear transformation of the discrete spherical Gaussian is statistically close to the discrete ellipsoid Gaussian. Showing that it is statistically close to the discrete spherical Gaussian, which we call the discrete spherical Gaussian leftover hash lemma (SGLHL), is an open problem posed by Agrawal et al. In this paper, we solve the problem in a weak sense: we show that the distribution of the linear transformation of the discrete spherical Gaussian and the discrete spherical Gaussian are close with respect to the Rényi divergence (RD), which we call the weak SGLHL (wSGLHL). As an application of wSGLHL, we construct a sharper self-reduction of the learning with errors (LWE) problem. Applebaum et al. (CRYPTO 2009) showed that linear sums of LWE samples are statistically close to (plain) LWE samples with some unknown error parameter. In contrast, we show that linear sums of LWE samples and (plain) LWE samples with a known error parameter are close with respect to RD. As another application, we weaken the independence heuristic required for the fully homomorphic encryption scheme TFHE.
Hyeonbum Lee, Jae Hong Seo
ePrint Report
We propose a new inner product argument (IPA), called TENET, which features sublogarithmic proof size and sublinear verifier without a trusted setup. IPA is a core primitive for various advanced proof systems including range proofs, circuit satisfiability, and polynomial commitment, particularly where a trusted setup is hard to apply. At ASIACRYPT 2022, Kim, Lee, and Seo showed that pairings can be utilized to exceed the complexity barrier of the previous discrete logarithm-based IPA without a trusted setup. More precisely, they proposed two pairing-based IPAs, one with sublogarithmic proof size and the other one with sublinear verifier cost, but they left achieving both complexities simultaneously as an open problem. We investigate the obstacles for this open problem and then provide our solution TENET, which achieves both sublogarithmic proof size and sublinear verifier. We prove the soundness of TENET under the discrete logarithm assumption and double pairing assumption.
Yodai Watanabe
ePrint Report
Non-malleability is one of the basic security goals for encryption schemes which ensures the resistance of the scheme against ciphertext modifications in the sense that any adversary, given a ciphertext of a plaintext, cannot generate another ciphertext whose underlying plaintext is meaningfully related to the initial one. There are multiple formulations of non-malleable encryption schemes, depending on whether they are based on simulation or comparison, or whether they impose the valid ciphertext condition, in which an adversary is required to generate only valid ciphertexts, or not. In addition to the simulation-based and comparison-based formulations (SNM and CNM), an indistinguishability-based characterization of non-malleability (IND), called ciphertext indistinguishability against parallel chosen-ciphertext attacks, has been proposed. These three formulations, SNM, CNM and IND, have been shown equivalent if the valid ciphertext condition is not imposed; however, if that condition is imposed, then they have been shown equivalent only against the strongest type of attack models, and the relations among them against the weaker types of attack models remain open. This work answers this open question by showing the separations SNM*$\not\rightarrow$CNM* and IND*$\not\rightarrow$SNM* against the weaker types of attack models, where the asterisk attached to the short-hand notations represents that the valid ciphertext condition is imposed. Moreover, motivated by the proof of the latter separation, this paper introduces simulation-based and comparison-based formulations of semantic security (SSS* and CSS*) against parallel chosen-ciphertext attacks, and shows the equivalences SSS*$\leftrightarrow$SNM* and CSS*$\leftrightarrow$CNM* against all types of attack models.
It thus follows that IND*$\not\rightarrow$SSS*, that is, semantic security and ciphertext indistinguishability, which have been shown equivalent in various settings, separate against the weaker parallel chosen-ciphertext attacks under the valid ciphertext condition.
Muhammad Imran
ePrint Report
Private set intersection (PSI) is a cryptographic primitive that allows two or more parties to learn the intersection of their input sets and nothing else. In this paper, we present a private set intersection protocol based on a new secure multi-party quantum protocol for the greatest common divisor (GCD). The protocol is mainly inspired by the recent quantum private set union protocol based on the least common multiple by Liu, Yang, and Li. Performance analysis guarantees correctness, and also shows that the proposed protocols are completely secure in the semi-honest model. Moreover, the complexity is proven to be efficient (polylogarithmic) in the size of the input sets.
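The classical intuition for why a GCD computation yields a set intersection can be shown in plain Python: encode each set as a product of element-specific primes, and the gcd of the two products factors exactly over the common elements. This sketch covers only the plaintext arithmetic; the paper's contribution is computing the GCD securely via a multi-party quantum protocol, which is not modeled here.

```python
from math import gcd

def primes(n):
    """First n primes by trial division (enough for a toy universe)."""
    out, cand = [], 2
    while len(out) < n:
        if all(cand % p for p in out):
            out.append(cand)
        cand += 1
    return out

def psi_via_gcd(universe, A, B):
    """Encode each set as a product of element-specific primes;
    the gcd of the two products is divisible by exactly the primes
    of the common elements."""
    pmap = dict(zip(universe, primes(len(universe))))
    prod_a = prod_b = 1
    for a in A:
        prod_a *= pmap[a]
    for b in B:
        prod_b *= pmap[b]
    g = gcd(prod_a, prod_b)
    return {e for e in universe if g % pmap[e] == 0}

U = list(range(10))
print(psi_via_gcd(U, {1, 3, 5, 7}, {3, 4, 5, 9}))  # {3, 5}
```

The encodings grow linearly in set size, which is why the secure protocol's claimed polylogarithmic complexity refers to the quantum GCD subroutine rather than to this naive encoding.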