IACR News
Here you can see all recent updates to the IACR webpage.
09 August 2024
Yusuke Naito, Yu Sasaki, Takeshi Sugawara
ePrint Report
We propose a new wide encryption (WE) mode of operation that satisfies robust authenticated encryption (RAE) and committing security with minimum ciphertext expansion. WE has attracted much attention in the last few years; its advantages include RAE security, which provides robustness against a wide range of misuses when combined with the encode-then-encipher (EtE) construction. Unfortunately, WE-based EtE does not provide good committing security, as shown by a recent constant-time CMT-4 attack (Chen et al., ToSC 2023(4)). Improving CMT-4 security requires considerable ciphertext expansion: the state-of-the-art scheme expands the ciphertext by s_rae + 2 s_cmt bits over the original message to achieve s_rae-bit RAE and s_cmt-bit CMT-4 security. Our new WE mode FFF addresses this issue, achieving s_rae-bit RAE and s_cmt-bit CMT-4 security with only max{s_cmt, s_rae} bits of ciphertext expansion. Our design builds on the committing concealer proposed by Bellare et al.; its extension to WE (cf. tag-based AE) while preserving RAE security is the main technical innovation.
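For concreteness, with s_rae = s_cmt = 128, the state-of-the-art scheme expands the ciphertext by 128 + 2·128 = 384 bits, whereas FFF needs only max{128, 128} = 128 bits of expansion.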
Theo Fanuela Prabowo, Chik How Tan
ePrint Report
Lyubashevsky’s signature can be viewed as a lattice-based adaptation of the Schnorr signature, with the core difference being the use of aborts during the signature generation process. Since the proposal of Lyubashevsky’s signature, a number of other variants of Schnorr-type signatures with aborts have been proposed, in both the lattice-based and the code-based setting. In this paper, we examine the security of Schnorr-type signature schemes with aborts. We give a detailed analysis of when the expected value of the signature is correlated to the secret key, and when it is not. Our analysis shows that even when an abort condition is employed, it is crucial to set the parameters carefully in order to defend against statistical attacks. In particular, we recommend setting δ ≥ β (where δ, β are public parameters), as in this case we prove that the signature does not reveal any information about the secret key. On the other hand, if this condition is not satisfied, then some information about the secret key is leaked, making the scheme susceptible to statistical attacks. For completeness, we also analyze the security of Schnorr-type signatures without aborts. In particular, we present a detailed key recovery attack via statistical methods on the EagleSign signature, one of the submissions to the NIST call for additional PQC signatures. Moreover, we give a formula for determining the number of signatures required to successfully launch the statistical attack.
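To illustrate the abort mechanism at the heart of these schemes, here is a minimal Python sketch of rejection sampling over toy integers; DELTA and BETA stand in for the public parameters δ and β above, and real schemes of course work coefficient-wise over lattices or codes rather than over a single integer.

```python
# Minimal sketch of the abort mechanism in Schnorr-type signatures with
# aborts, using toy integer parameters (not a real scheme).
import random

DELTA = 100   # masking range: y is uniform in [-DELTA, DELTA]
BETA = 20     # public bound on |c * s| per coefficient

def sign_coefficient(cs):
    """Produce one signature coefficient z = y + c*s, aborting (returning
    None) whenever z would leak where c*s sits inside the range."""
    assert abs(cs) <= BETA
    y = random.randint(-DELTA, DELTA)
    z = y + cs
    # Rejection condition: accept only if z lies in the "safe" interval
    # [-(DELTA - BETA), DELTA - BETA], which is independent of c*s.
    if abs(z) > DELTA - BETA:
        return None  # abort and retry with fresh randomness
    return z

def sign_with_aborts(cs):
    while True:
        z = sign_coefficient(cs)
        if z is not None:
            return z

# An accepted z is uniform on [-(DELTA-BETA), DELTA-BETA] regardless of
# c*s, so its expected value carries no information about the secret.
# This requires DELTA >= BETA, mirroring the delta >= beta recommendation.
samples = [sign_with_aborts(cs=15) for _ in range(10000)]
print(sum(samples) / len(samples))  # close to 0 for any cs
```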
Jinhao Zhu, Liana Patel, Matei Zaharia, Raluca Ada Popa
ePrint Report
We introduce Compass, a semantic search system over encrypted data that offers high accuracy, comparable to state-of-the-art plaintext search algorithms, while protecting data, queries, and search results from a fully compromised server. Compass also enables privacy-preserving RAG, where both the RAG database and the query are protected. Compass's search index contributes a novel way to traverse the search graph of Hierarchical Navigable Small Worlds (HNSW), a top-performing vector nearest-neighbor search, using Oblivious RAM, a cryptographic primitive with strong security guarantees. Our techniques, Directional Neighbor Filtering, Speculative Greedy Search, and HNSW-tailored Path ORAM, ensure that Compass achieves user-perceived latencies of a few seconds and is orders of magnitude faster than a baseline for encrypted embeddings search.
Quang Dao, Aayush Jain, Zhengzhong Jin
ePrint Report
We give the first construction of non-interactive zero-knowledge (NIZK) arguments from post-quantum assumptions other than Learning with Errors. In particular, we achieve NIZK under the polynomial hardness of the Learning Parity with Noise (LPN) assumption, and the exponential hardness of solving random under-determined multivariate quadratic equations (MQ). We also construct NIZK satisfying statistical zero-knowledge assuming a new variant of LPN, Dense-Sparse LPN, introduced by Dao and Jain (CRYPTO 2024), together with exponentially-hard MQ.
The main technical ingredient of our construction is an extremely natural (but only in hindsight!) construction of correlation-intractable (CI) hash functions from MQ, for a NIZK-friendly sub-class of constant-degree polynomials that we call concatenated constant-degree polynomials. Under exponential security, this hash function also satisfies the stronger notion of approximate CI for concatenated constant-degree polynomials. The NIZK construction then follows from a prior blueprint of Brakerski-Koppula-Mour (CRYPTO 2020). In addition, we show how to construct (approximate) CI hashing for degree-$d$ functions from the (exponential) hardness of solving random degree-$d$ equations, a natural generalization of MQ. To realize NIZK with statistical zero-knowledge, we design a lossy public-key encryption scheme with approximate linear decryption and inverse-polynomial decryption error from Dense-Sparse LPN. These constructions may be of independent interest.
Our work therefore gives a new way to leverage MQ with uniformly random equations, which has found few cryptographic applications to date. Indeed, most applications in the context of encryption and signature schemes make use of structured variants of MQ, where the polynomials are not truly random but possess a hidden planted structure. We believe that the MQ assumption may plausibly find future use in designing other advanced proof systems.
Sam Coulon, Tianyou Bao, Jiafeng Xie
ePrint Report
The Extended Greatest Common Divisor (XGCD) computation is a critical component in various cryptographic applications and algorithms, including both pre- and post-quantum cryptosystems. In addition to computing the greatest common divisor (GCD) of two integers, the XGCD also produces Bezout coefficients $b_a$ and $b_b$ that satisfy $\mathrm{GCD}(a,b) = a\times b_a + b\times b_b$. In particular, computing the XGCD for large integers is of significant interest. Most recently, XGCD computation between 6,479-bit integers is required for solving $N$-th degree Truncated polynomial Ring Unit (NTRU) trapdoors in Falcon, a National Institute of Standards and Technology (NIST)-selected post-quantum digital signature scheme. To date, existing literature has primarily focused on software-based implementations of XGCD. The few existing high-performance hardware architectures require significant hardware resources and may not be desirable for practical usage, while the lightweight architectures suffer from poor performance. To fill this research gap, this work proposes a novel FPGA-based scalablE and Lightweight accelerator for large Integer XGCD (FELIX). First, a new algorithm suitable for scalable and lightweight computation of XGCD is proposed. Next, a hardware accelerator (FELIX) is presented, including both constant- and variable-time versions. Finally, a thorough evaluation is carried out to showcase the efficiency of the proposed FELIX. In certain configurations, FELIX achieves an 81% lower equivalent area-time product (eATP) than the state-of-the-art design for 1,024-bit integers, and a 95% reduction in latency over software for 6,479-bit integers (the Falcon parameter set) with reasonable resource usage. Overall, the proposed FELIX is highly efficient, scalable, lightweight, and suitable for very large integer computation, making it, to the best of our knowledge, the first such XGCD accelerator in the literature.
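For reference, the Bezout relation that FELIX computes can be expressed with the classical extended Euclidean algorithm in a few lines of Python; this is purely illustrative, as FELIX itself uses a new hardware-friendly algorithm for very large integers.

```python
# Classical extended Euclidean algorithm: given a, b, return (g, b_a, b_b)
# such that g = gcd(a, b) = a*b_a + b*b_b.
def xgcd(a, b):
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t  # gcd, b_a, b_b

g, ba, bb = xgcd(240, 46)
assert g == 240 * ba + 46 * bb  # Bezout identity: 2 = 240*(-9) + 46*47
```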
Henry Corrigan-Gibbs, David J. Wu
ePrint Report
The Legendre sequence of an integer $x$ modulo a prime $p$ with respect to offsets $\vec a = (a_1, \dots, a_\ell)$ is the string of Legendre symbols $(\frac{x+a_1}{p}), \dots, (\frac{x+a_\ell}{p})$. Under the quadratic-residuosity assumption, we show that the function that maps the pair $(x,p)$ to the Legendre sequence of $x$ modulo $p$, with respect to public random offsets $\vec a$, is a pseudorandom generator. This answers an open question of Damgård (CRYPTO 1988), up to the choice of the offsets $\vec a$.
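A small Python sketch of the map in question, computing each symbol via Euler's criterion; the prime and offsets below are toy values, not cryptographically sized parameters.

```python
# Sketch of the Legendre-sequence map: (x, p) with public offsets a_i is
# mapped to the string of symbols ((x + a_i) / p).
def legendre(v, p):
    """Return the Legendre symbol (v/p) in {-1, 0, 1} via Euler's
    criterion: v^((p-1)/2) mod p."""
    ls = pow(v % p, (p - 1) // 2, p)
    return -1 if ls == p - 1 else ls  # 0 only if p divides v

def legendre_sequence(x, p, offsets):
    """Map the seed (x, p) to the string of symbols ((x + a_i)/p)."""
    return [legendre(x + a, p) for a in offsets]

p = 1000003                  # toy prime; real use needs a large prime
offsets = [3, 17, 40, 65]    # public random offsets (vec a)
print(legendre_sequence(x=123456, p=p, offsets=offsets))
```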
Daniel Dobkin, Edut Katz, David Popovtzer, Itamar Levi
ePrint Report
Considering side-channel analysis (SCA) security for cryptographic devices, the mitigation of electromagnetic leakage and electromagnetic interference (EMI) between modules poses significant challenges. This paper presents a comprehensive review and deep analysis of the use of EMI shielding materials, originally devised for reliability purposes and standards such as EMI/EMC, as a countermeasure to enhance EM-SCA security. We survey the current landscape of EMI-shielding materials, including conductive polymers, metal foams, carbon-based materials, and metamaterials, evaluating their effectiveness in attenuating emissions and preventing information leakage; to our knowledge, this is the first such evaluation to use security-centric metrics for these materials. Through a systematic examination of existing literature, experimental studies, and the construction of a fully simulatable EM environment in an ANSYS solver, we identify key factors influencing the security impact of EMI-shielding materials, such as shielding effectiveness (SE), bandwidth, thickness, and material properties.
We devise a connection between SE and cryptographic SNR, and we demonstrate from real hardware measurements how and under what conditions such materials can provide very high security levels. By synthesizing insights from multidisciplinary research domains, this paper aims to provide valuable two-way guidance for researchers, engineers, and practitioners in the design and deployment of robust side-channel security measures leveraging EMI shields that are already deployed to meet reliability standards.
07 August 2024
Zhenyu Guan, Ran Mao, Qianyun Zhang, Zhou Zhang, Zian Zhao, Song Bian
ePrint Report
Recently, an emerging branch of research in the field of fully homomorphic encryption (FHE) has attracted growing attention, in which optimizations are carried out to develop fast and efficient homomorphic logic circuits. While existing works have pointed out that compound homomorphic gates can be constructed without incurring significant computational overheads, the exact theory and mechanism of homomorphic gate design have not yet been explored. In this work, we propose AutoHoG, an automated procedure for the generation of compound gates over FHE. We show that by formalizing the gate generation procedure, we can adopt a match-and-replace strategy to significantly improve the evaluation speed of logic circuits over FHE. In the experiment, we first show the effectiveness of AutoHoG through a set of benchmark gates. We then apply AutoHoG to optimize common Boolean tasks, including adders, multipliers, the ISCAS’85 benchmark circuits, and the ISCAS’89 benchmark circuits. We show that for various circuit benchmarks, we can achieve up to a 5.7x reduction in computational latency compared to state-of-the-art implementations of logic circuits using conventional gates.
Parisa Amiri Eliasi, Yanis Belkheyar, Joan Daemen, Santosh Ghosh, Daniël Kuijsters, Alireza Mehrdad, Silvia Mella, Shahram Rasoolzadeh, Gilles Van Assche
ePrint Report
This paper introduces the Koala PRF, which maps a variable-length sequence of $64$-bit input blocks to a single $257$-bit output block.
Its design focuses on achieving low latency in ASIC implementations.
To construct Koala, we instantiate the recently introduced Kirby construction with the Koala-P permutation and add an input encoding layer.
The Koala-P permutation is obtained as the $8$-fold iteration of a simple round function inspired by that of Subterranean. Based on careful preliminary cryptanalysis, we designed a variant of the Subterranean permutation by reordering and modifying its steps in a way that introduces no implementation overhead and enhances the cryptographic resistance of the resulting PRF. Indeed, we demonstrate that Koala exhibits high resistance against integral, cube, division property, and higher-order differential attacks.
Additionally, we compare the lowest-latency hardware implementation of Koala with the state-of-the-art low-latency PRFs Orthros and Gleeok and the block cipher Prince in the same ASIC synthesis setup. Our results show that Koala outperforms these primitives not only in terms of latency but also with respect to various other performance measures.
Morgane Guerreau, Mélissa Rossi
ePrint Report
HAWK is a lattice-based signature scheme candidate in the fourth call of NIST's post-quantum standardization campaign. Considered a cousin of Falcon (one of the future NIST post-quantum standards), one may wonder whether HAWK shares the same drawbacks as Falcon in terms of side-channel attacks. Indeed, Falcon's signature algorithm, and particularly its Gaussian sampler, has been shown to be highly vulnerable to power-analysis attacks. Moreover, efficiently protecting Falcon's signature algorithm against these attacks appears to be a very challenging task.
This work presents the first power-analysis leakage review of the HAWK signature scheme: it extensively assesses the vulnerabilities of a central and sensitive component of the scheme, the discrete Gaussian sampler. Knowing the output x of the sampler for a given signature yields linear information about the private key of the scheme.
This paper includes several demonstrations of simple power-analysis attacks targeting this sample x with various attacker strengths, all performed on the reference implementation on a ChipWhisperer Lite with an STM32F3 target (ARM Cortex-M4). We report being able to perform key recoveries with very low (to no) offline resources. As this reference implementation of HAWK is not claimed to be protected against side-channel attacks, the existence of such attacks is not surprising, but they concretely warn against the use of this unprotected signature scheme on physical devices.
To go further, our study proposes a generic way of assessing the performance of a side-channel attack on x even when less information is recovered, in a setting where some protections are implemented or where the attacker has fewer measurement possibilities. While it is easy to see that x is a sensitive value, quantifying the residual complexity of key recovery given partial knowledge about x (such as the parity or the sign of some coefficients) is not straightforward, as the underlying hardness assumption is the newly introduced Module-LIP problem. We propose to adapt the existing methodology of leaky LWE estimation tools (Dachman-Soled et al., Crypto 2020) to exploit the retrieved information and lower the residual key-recovery complexity.
Finally, we propose an ad-hoc technique to reduce the leakage at the identified vulnerability points. These modifications prevent our attacks on our platform and come at essentially no cost in terms of performance. They should be seen as a temporary solution, and we encourage further analysis of provably secure side-channel protections for HAWK, such as masking.
George Teseleanu
ePrint Report
In our paper, we explore the consequences of replacing the commutative group operation used in Lai-Massey structures with a quasigroup operation.
We introduce four quasigroup versions of the Lai-Massey structure, and prove that for quasigroups isotopic with a group $\mathbb{G}$, the complexity of launching a differential attack against these variants of the Lai-Massey structure is equivalent to attacking an alternative structure based on $\mathbb{G}$.
Then we provide the conditions needed for correct decryption, and further refine the resulting structure. The emerging structure is both intriguing and novel, and we hope that it will form the basis for future secure block ciphers based on non-commutative groups. In the case of commutative groups, we show that the resulting structure reduces to the classical Lai-Massey structure.
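For orientation, the sketch below shows one classical Lai-Massey round over the commutative group $(\mathbb{Z}_{2^{32}}, +)$, i.e., the group operation that the paper replaces with a quasigroup operation; the round function F and the mixing map sigma are toy placeholders, not a secure instantiation.

```python
# One classical Lai-Massey round over (Z_{2^32}, +). The paper studies
# what happens when this group operation is replaced by a quasigroup
# operation; F and sigma below are toy placeholders only.
MASK = 2**32 - 1

def F(x, k):
    """Toy round function (placeholder for a real keyed function)."""
    return ((x ^ k) * 2654435761) & MASK

def sigma(x):
    """Toy mixing map standing in for the orthomorphism sigma."""
    return ((x << 1) | (x >> 31)) & MASK

def round_enc(L, R, k):
    # Both halves are translated by T = F(L - R); the translation
    # preserves the difference (L+T) - (R+T) = L - R, so decryption
    # can recompute T after inverting sigma on the left half.
    T = F((L - R) & MASK, k)
    return sigma((L + T) & MASK), (R + T) & MASK

L, R = round_enc(0x01234567, 0x89abcdef, k=0xdeadbeef)
print(hex(L), hex(R))
```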
Pengcheng Qiu, Guiming Wu, Tingqiang Chu, Changzheng Wei, Runzhou Luo, Ying Yan, Wei Wang, Hui Zhang
ePrint Report
Multi-scalar multiplication (MSM) is the most computation-intensive part in proof generation of Zero-knowledge proof (ZKP). In this paper, we propose MSMAC, an FPGA accelerator for large-scale MSM. MSMAC adopts a specially designed Instruction Set Architecture (ISA) for MSM and optimizes pipelined Point Addition Unit (PAU) with hybrid Karatsuba multiplier. Moreover, a runtime system is proposed to split MSM tasks with the optimal sub-task size and orchestrate execution of Processing Elements (PEs). Experimental results show that MSMAC achieves up to 328× and 1.96× speedups compared to the state-of-the-art implementation on CPU (one core) and GPU, respectively, outperforming the state-of-the-art ASIC accelerator by 1.79×. On 4 FPGAs, MSMAC performs 1,261× faster than a single CPU core.
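To make the accelerated workload concrete, the snippet below illustrates MSM together with the standard Pippenger-style bucketing used by most large-scale MSM implementations; whether MSMAC uses this exact decomposition is an assumption here, and for readability the "points" live in an additive group of integers rather than on an elliptic curve.

```python
# Toy illustration of multi-scalar multiplication (MSM): given scalars
# e_i and points P_i, compute sum_i e_i * P_i. Here the "points" live in
# the additive group Z_q; real MSM works over an elliptic-curve group.
import random

q = 2**61 - 1  # toy group order

def msm_naive(scalars, points):
    return sum(e * P for e, P in zip(scalars, points)) % q

def msm_bucketed(scalars, points, c=4):
    """Pippenger-style bucketing over c-bit windows."""
    nbits = max(e.bit_length() for e in scalars)
    acc = 0
    for w in reversed(range(0, nbits, c)):
        acc = (acc * pow(2, c, q)) % q  # shift accumulator by one window
        buckets = [0] * (1 << c)
        for e, P in zip(scalars, points):
            idx = (e >> w) & ((1 << c) - 1)
            buckets[idx] = (buckets[idx] + P) % q
        # running-sum trick: compute sum_j j * buckets[j] with additions only
        running, total = 0, 0
        for b in reversed(buckets[1:]):
            running = (running + b) % q
            total = (total + running) % q
        acc = (acc + total) % q
    return acc

scalars = [random.randrange(1, 1 << 16) for _ in range(32)]
points = [random.randrange(q) for _ in range(32)]
assert msm_naive(scalars, points) == msm_bucketed(scalars, points)
```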
Michel Dellepere, Pratyush Mishra, Alireza Shirzad
ePrint Report
SNARKs are powerful cryptographic primitives that allow a prover to produce a succinct proof of a computation. Two key goals of SNARK research are to minimize the size of the proof and to minimize the time required to generate the proof. In this work, we present new SNARK constructions that push the frontier on both of these goals.
Our first construction, Pari, is a SNARK that achieves the smallest proof size amongst *all* known SNARKs. Specifically, Pari achieves a proof size of just two group elements and two field elements, which, when instantiated with the BLS12-381 curve, totals just 160 bytes, smaller than that of Groth16 [Groth, EUROCRYPT '16] and Polymath [Lipmaa, CRYPTO '24].
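(For reference, this figure is consistent with standard BLS12-381 serialization: assuming two compressed $\mathbb{G}_1$ elements of 48 bytes each and two scalar-field elements of 32 bytes each, the proof occupies 2·48 + 2·32 = 160 bytes.)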
Our second construction, Garuda, is a SNARK that reduces proof generation time by supporting, for the first time, arbitrary "custom" gates and *free* linear gates. To demonstrate Garuda's performance, we implement and evaluate it, and show that it provides significant prover-time savings compared to both of the state-of-the-art SNARKs (Groth16 and HyperPlonk [EUROCRYPT '22]).
Both constructions rely on a new cryptographic primitive: "equifficient" polynomial commitment schemes that enforce that committed polynomials have the same representation in particular bases. We provide both rigorous security definitions for this primitive as well as efficient constructions for univariate and multilinear polynomials.
Zhengjun Cao, Lihua Liu
ePrint Report
We show that the key agreement scheme [IEEE Trans. Serv. Comput. 16(4): 3000-3013, 2023] fails to provide user anonymity, contrary to its claims. The scheme simply equates user anonymity with preventing the user's identity from being recovered. But true anonymity means that the adversary cannot attribute different sessions to target users: it concerns entity distinguishability, not just identity recovery. To the best of our knowledge, this is the first work to clarify the precise meaning of user anonymity.
Thales B. Paiva, Marcos A. Simplicio Jr, Syed Mahbub Hafiz, Bahattin Yildiz, Eduardo L. Cominetti
ePrint Report
Kyber is a post-quantum lattice-based key encapsulation mechanism (KEM) selected by NIST for standardization as ML-KEM. The scheme is designed to ensure that the unintentional errors accumulated during decryption do not prevent the receiver from correctly recovering the encapsulated key. This is done by using a simple error-correction code applied independently to each bit of the message, for which it is possible to show that the decryption failure rate (DFR) is negligible. Although there have been other proposals of more complex error-correction codes for Kyber, these have important limitations. Some proposals rely on independence assumptions on the noise distribution that do not hold. Others require significant changes to Kyber's core parameters, which makes them impractical. In this work, we propose a family of 2-dimensional codes that can, in principle, be applied to any lattice-based scheme. Even though our 2D codes have a rather simple construction, they can be tailored to the specific noise distribution observed for different Kyber parameters, and reduce Kyber's DFR by factors of $2^{4.8}$, $2^{5.4}$, and $2^{9.9}$ for security levels 1, 3, and 5, respectively, without requiring independence assumptions. Alternatively, the proposed codes allow for up to $6\%$ ciphertext compression in Kyber Level 5 while keeping the DFR below $2^{-160}$, the target value defined in Kyber's specification. Furthermore, we provide an efficient isochronous implementation of the encoding and decoding procedures for our 2D codes. Compared with Kyber's reference implementation, the performance impact of the 2D codes on decapsulation time is negligible (namely, between $0.08\%$ and $0.18\%$, depending on the security level).
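As background, the simple per-bit code that Kyber uses (and that the proposed 2D codes generalize to two dimensions) can be sketched in a few lines of Python; this simplification ignores Kyber's ciphertext compression and is only meant to show why the DFR is governed by the noise exceeding q/4.

```python
# Kyber's per-bit error-correction code: each message bit is encoded as
# a point of Z_q maximally far from the decision boundary and decoded by
# rounding. Simplified sketch; real Kyber also compresses ciphertexts.
q = 3329  # Kyber's modulus

def encode_bit(b):
    return (b * (q + 1) // 2) % q   # 0 -> 0, 1 -> ceil(q/2) = 1665

def decode_bit(v):
    # decode to 1 iff v is closer to q/2 than to 0 (mod q)
    d = min(v % q, q - (v % q))     # distance to 0 on the circle Z_q
    return 1 if d > q // 4 else 0

# Decoding succeeds as long as the accumulated noise e satisfies
# |e| < q/4 (roughly 832), which is why the DFR is set by the noise tail.
for b in (0, 1):
    noisy = (encode_bit(b) + 500) % q   # toy noise well below q/4
    assert decode_bit(noisy) == b
```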
David Wong, Denis Kogelov, Ivan Mikushin
ePrint Report
This paper presents a collection of lessons learned from analyzing the real-world security of various Byzantine Fault Tolerant (BFT) consensus protocol implementations. Drawing upon our experience as a team of security experts who have both developed and audited BFT systems, including BA★, HotStuff variants, Paxos variants, and DAG-based algorithms like Narwhal and Bullshark, we identify and analyze a variety of security vulnerabilities discovered in the translation of theoretical protocols into real-world code. Our analysis covers a range of issues, including subtle logic errors, concurrency bugs, cryptographic vulnerabilities, and mismatches between the theoretical model and the implementation. We provide detailed case studies illustrating these vulnerabilities, discuss their potential impact, and propose mitigation strategies. This work aims to provide valuable insights for both designers and implementers of BFT consensus protocols, ultimately contributing to the development of more secure and reliable distributed systems.
Kushal Babel, Nerla Jean-Louis, Yan Ji, Ujval Misra, Mahimna Kelkar, Kosala Yapa Mudiyanselage, Andrew Miller, Ari Juels
ePrint Report
Users of decentralized finance (DeFi) applications face significant risks from adversarial actions that manipulate the order of transactions to extract value from users. Such actions---an adversarial form of what is called maximal-extractable value (MEV)---impact both individual outcomes and the stability of the DeFi ecosystem. MEV exploitation, moreover, is being institutionalized through an architectural paradigm known as Proposer-Builder Separation (PBS).
This work introduces a system called PROF (PRotected Order Flow) that is designed to limit harmful forms of MEV in existing PBS systems. PROF aims at this goal using two ideas. First, PROF imposes an ordering on a set ("bundle") of privately input transactions and enforces that ordering all the way through to block production, preventing transaction-order manipulation. Second, PROF creates bundles whose inclusion is profitable to block producers, thereby ensuring that bundles see timely inclusion in blocks.
PROF is backward-compatible, meaning that it works with existing and future PBS designs. PROF is also compatible with any desired algorithm for ordering transactions within a PROF bundle (e.g., first-come, first-served, fee-based, etc.). It executes efficiently, i.e., with low latency, and requires no additional trust assumptions among PBS entities. We quantitatively and qualitatively analyze PROF's incentive structure and its utility to users compared with existing solutions. We also report on the inclusion likelihood of PROF transactions, and concrete latency numbers through our end-to-end implementation.
Patricia Greene, Mark Motley, Bryan Weeks
ePrint Report
In this paper, we describe a low-latency block cipher (ARADI) and authenticated encryption mode (LLAMA) intended to support memory encryption applications.
Xinyu Peng, Yufei Wang, Weiran Liu, Liqiang Peng, Feng Han, Zhen Gu, Jianling Sun, Yuan Hong
ePrint Report
Private Set Intersection (PSI) enables a sender and a receiver to jointly compute the intersection of their sets without disclosing other information about items not in the intersection. However, in many cases of joint data analysis, it is not just the items outside the intersection that are sensitive but also the items within it. To protect such sensitive information, prior work presents a Differentially Private version of PSI (DPSI) based on a circuit-PSI using Fully Homomorphic Encryption. However, its concrete protocol is somewhat inefficient compared with the state-of-the-art (SOTA) circuit-PSI.
In this paper, we revisit the DPSI definition and formalize its ideal functionality. We identify the key desiderata required by PSI-related tools to construct DPSI and propose two frameworks to construct efficient DPSI protocols. The first one generalizes the idea of existing DPSI, showing that any circuit-PSI can be used to construct DPSI. We obtain a more efficient DPSI protocol by plugging the SOTA circuit-PSI protocol into the framework. The second one yields an even more efficient DPSI protocol based on the multi-query Reverse Private Membership Test (mqRPMT) that was previously used to construct Private Set Operation (PSO). However, mqRPMT additionally leaks the intersection size to the sender. We bound this leakage using differential privacy by padding the input sets with random dummy items. We implement numerous constructions based on our frameworks. Experiments show that our protocols significantly outperform the existing DPSI construction, being 2.5-22.6$\times$ more communication-efficient and 110.5-151.8$\times$ faster. Our work also shows a new use case for mqRPMT beyond obtaining PSO.
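To sketch the padding step, the snippet below draws the dummy count from a shifted two-sided geometric distribution, a standard discrete analogue of Laplace noise for differentially private counts; the paper's concrete mechanism, truncation, and parameters may well differ.

```python
# Sketch of DP dummy padding: the number of dummy items appended to the
# input set is randomized so the leaked intersection size is noisy.
# Illustrative parameters only; not the paper's concrete mechanism.
import math
import random

def two_sided_geometric(eps):
    """Discrete analogue of Laplace noise with privacy parameter eps,
    sampled as the difference of two geometric variables."""
    p = 1.0 - math.exp(-eps)
    def geom():
        # inverse-CDF sampling of Geometric(p) on {0, 1, 2, ...}
        return int(math.log(1.0 - random.random()) / math.log(1.0 - p))
    return geom() - geom()

def pad_with_dummies(items, eps, shift=20):
    """Append a DP-randomized number of dummies. The shift keeps the
    count nonnegative with high probability; a deployable mechanism
    must handle the truncation at 0 carefully to preserve the DP bound."""
    n_dummies = max(0, shift + two_sided_geometric(eps))
    dummies = [("dummy", random.getrandbits(64)) for _ in range(n_dummies)]
    return list(items) + dummies

padded = pad_with_dummies(["alice", "bob", "carol"], eps=0.5)
print(len(padded))  # the true size 3 is hidden behind the noisy count
```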
Rachit Garg, Rishab Goyal, George Lu
ePrint Report
Functional Encryption (FE) is a powerful notion of encryption which enables computations and partial message recovery of encrypted data. In FE, each decryption key is associated with a function $f$ such that decryption recovers the function evaluation $f(m)$ from an encryption of $m$. Informally, security states that a user with access to function keys $\mathsf{sk}_{f_1}, \mathsf{sk}_{f_2}, \ldots$ (and so on) can only learn $f_1(m), f_2(m), \ldots$ (and so on) but nothing more about the message. The system is said to be $q$-bounded collusion resistant if the security holds as long as an adversary gets access to at most $q = q(\lambda)$ decryption keys. In the last decade, numerous works have proposed many FE constructions from a wide array of algebraic and general cryptographic assumptions, and proved their security in the bounded collusion model.
However, until very recently, all these works studied bounded collusion resistance in a "static model", where the collusion bound $q$ was a global system parameter. While the static collusion model led to great research progress in the community, it has many major drawbacks. Very recently, Agrawal et al. (Crypto 2021) and Garg et al. (Eurocrypt 2022) independently introduced the "dynamic model" for bounded collusion resistance, where the collusion bound $q$ was a fluid parameter that was not globally set but only chosen by each encryptor. The dynamic collusion model enabled harnessing the many virtues of the static collusion model, while avoiding its various drawbacks.
In this work, we give a simple and generic approach to upgrade any scheme from the static collusion model to the dynamic collusion model. Our result captures all existing results in the dynamic model in the form of a single unified framework, and also gives new results as simple corollaries with a lot more potential in the future. An interesting artifact of our result is that it gives a generic way to match existing lower bounds in functional encryption.