International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

18 October 2024

Guofeng Tang, Shuai Han, Li Lin, Changzheng Wei, Ying Yan
ePrint Report
Driven by demand from cryptocurrencies, threshold ECDSA has recently regained popularity. So far, several methods have been proposed to construct threshold ECDSA, including the use of OT and of homomorphic encryption (HE). Due to the mismatch between the plaintext space and the signature space, HE-based threshold ECDSA, such as constructions based on Paillier and Joye-Libert (JL) encryption, always requires zero-knowledge range proofs. However, the overhead of these range proofs constitutes a major portion of the total cost.

In this paper, we propose efficient batch range proofs to improve the efficiency of threshold ECDSA. At the heart of our efficiency improvement is a new technical tool called the Multi-Dimension Forking Lemma, a generalization of the well-known general forking lemma [Bellare and Neven, CCS 2006]. Based on our new tool, we construct efficient batch range proofs for Paillier and JL encryptions, and use them to give batch multiplication-to-addition (MtA) protocols, which are crucial to most threshold ECDSA schemes. Our constructions improve the prior Paillier-based MtA by a factor of 2 and the prior JL-based MtA by a factor of 3, in both computation and bandwidth, in an amortized sense. Our batch MtA can be used to improve the efficiency of most Paillier- and JL-based threshold ECDSA. As three typical examples, our benchmarking results show:

-- We improve the Paillier-based CGGMP20 [Canetti et al., CCS 2020] in bandwidth by a factor of 2.1 to 2.4, and in computation by a factor of 1.5 to 1.7.

-- Implementing threshold ECDSA with the batch JL MtA of XAL+23 [Xue et al., CCS 2023] and with our batch JL MtA, respectively, our batch construction improves theirs in bandwidth by a factor of 2.0 to 2.29, and in computation by a factor of 1.88 to 2.09.

-- When replacing the OT-based MtA in DKLs24 [Doerner et al., S&P 2024] with our Paillier-based batch MtA, we improve bandwidth efficiency by $7.8\times$ at the cost of $5.7\times$ slower computation.
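To fix ideas, here is a minimal Python sketch of the plain (non-batch) Paillier-based MtA flow that such range proofs protect. The parameters are toy-sized and insecure, and the group order is an illustrative stand-in; the sketch shows why a bound on the parties' inputs matters, not the paper's batch protocol.

import math, random

# Toy Paillier parameters -- far too small to be secure, only to show the MtA flow.
p, q = 104729, 104723
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1

def enc(m):
    r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    u = (pow(c, lam, n2) - 1) // n        # equals m * lam mod n, since g = n + 1
    return u * pow(lam, -1, n) % n

# MtA: Alice holds a, Bob holds b; they end up with additive shares alpha + beta = a*b.
order = 99991                             # illustrative stand-in for the ECDSA group order
a, b = random.randrange(order), random.randrange(order)
r = random.randrange(order)               # Bob's blinding value

c_a = enc(a)                              # Alice -> Bob: Enc(a)
c_resp = pow(c_a, b, n2) * enc(r) % n2    # Bob -> Alice: Enc(a*b + r), computed homomorphically
alpha = dec(c_resp) % order               # Alice's additive share
beta = (-r) % order                       # Bob's additive share
assert (alpha + beta) % order == (a * b) % order

# Without a zero-knowledge range proof bounding the encrypted values, a malicious
# party could choose inputs so large that a*b + r wraps around modulo n and
# correctness breaks; proving such bounds is exactly the overhead that the batch
# range proofs above aim to reduce.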
Mahimna Kelkar, Yunqi Li, Nerla Jean-Louis, Carolina Ortega Pérez, Kushal Babel, Andrew Miller, Ari Juels
ePrint Report
We introduce superclass accountability, a new notion of accountability for security protocols. Classical notions of accountability typically aim to identify specific adversarial players whose violation of adversarial assumptions has caused a security failure. Superclass accountability describes a different goal: to prove the existence of adversaries capable of violating security assumptions.

We develop a protocol design approach for realizing superclass accountability called the sting framework (SF). Unlike classical accountability, SF can be used for a broad range of applications without making protocol modifications and even when security failures aren’t attributable to particular players.

SF generates proofs of existence for superclass adversaries that are publicly verifiable, making SF a promising springboard for reporting by whistleblowers, high-trust bug-bounty programs, and so forth.

We describe how to use SF to prove the existence of adversaries capable of breaching the confidentiality of practical applications that include Tor, block-building infrastructure in web3, ad auctions, and private contact discovery---as well as the integrity of fair-transaction-ordering systems. We report on two end-to-end SF systems we have constructed---for Tor and block-building---and on experiments with those systems.
Lars Wolfgang Folkerts, Nektarios Georgios Tsoutsos
ePrint Report
Large language models (LLMs) have recently transformed many industries, enhancing content generation, customer service agents, data analysis, and even software generation. These applications are often hosted on remote servers to protect the neural-network model IP; however, this raises concerns about the privacy of input queries. Fully Homomorphic Encryption (FHE), an encryption technique that allows computations on private data, has been proposed as a solution to this challenge. Nevertheless, due to the large size of LLMs and the computational overheads of FHE, today's practical FHE LLMs are implemented using a split-model approach: the user sends their FHE-encrypted data to the server, which runs an encrypted attention-head layer and returns the result for the user to run the rest of the model locally. With this method, the server retains part of its model IP, and the user still gets to perform private LLM inference. In this work, we evaluate the neural-network model IP protections of single-layer split-model LLMs, and demonstrate a novel attack vector that makes it easy for a user to extract the neural-network model IP from the server, bypassing the claimed protections of encrypted computation. In our analysis, we demonstrate the feasibility of this attack and discuss potential mitigations.
Alexandra Boldyreva, Virendra Kumar, Jiahao Sun
ePrint Report
This paper provides the first provable-security analysis of the Butterfly Key Mechanism (BKM) protocol from the IEEE 1609.2.1 standard. The BKM protocol specifies a novel approach for efficiently requesting multiple certificates for use in vehicle-to-everything (V2X) communication. We define the main security goals of BKM, such as vehicle privacy and communication authenticity. We prove that the BKM protocol, with small modifications, meets those security goals. We also propose a way to significantly improve the protocol's efficiency without sacrificing security.
Lars Wolfgang Folkerts, Nektarios Georgios Tsoutsos
ePrint Report
Fully Homomorphic Encryption (FHE) is a powerful technology that allows a cloud server to perform computations directly on ciphertexts. To overcome the overhead of sending and storing large FHE ciphertexts, the concept of FHE transciphering was introduced, allowing symmetric key encrypted ciphertexts to be transformed into FHE ciphertexts by deploying symmetric key decryption homomorphically. However, existing FHE transciphering schemes remain unauthenticated and malleable, allowing attackers to manipulate data and remain undetected. This work introduces Proteus, a new methodology for authenticated transciphering, which enables oblivious access control, preventing users from downloading unauthenticated or malicious data. Our protocol implementation adopts ASCON, NIST's new standard for lightweight cryptography, to enable homomorphic hashing and authenticated transciphering. Our ASCON transcipher is paired with the TFHE encryption scheme, which is well suited to perform encrypted rotation and bitwise operations. We evaluate our approach with a variety of real-life privacy-preserving applications, including URL phishing detection, private content moderation of hate speech, and biometric authentication.
Hongbo Li, Dengfa Liu, Guangsheng Ma
ePrint Report
Bootstrapping is the core task in fully homomorphic encryption. It is designed to self-clean encrypted data in order to support an unlimited number of levels of homomorphic computation. The FHEW/TFHE cryptosystem provides the fastest bootstrapping machinery in addition to its unique homomorphic evaluation functionality. In 2021, the problem of large-precision bootstrapping was investigated in the literature, with fast algorithms proposed and implemented. A strategy common to all these algorithms is to homomorphically decompose the plaintext into blocks from the tail up, bootstrapping the blocks sequentially.

This paper proposes two new strategies to improve the efficiency of large-precision plaintext bootstrapping. Both strategies are based on a new design of continuous nega-cyclic functions with varying resolution, enabling accurate computation through blockwise approximate computing. To minimize the approximation error in each block, optimizations are proposed based on rigorous error estimation, and are illustrated by error bounds in power-of-two binomial representation.

The first strategy is to perform homomorphic approximate decomposition of the input plaintext from the head on. Compared with the tail-up approach, the head-on approach reduces the number of blocks by at most half asymptotically, while reducing the final refreshed error by at most $1-1/\sqrt{2}\approx 29.3\%$. The second strategy extends the head-on approach from large-precision plaintext bootstrapping to large error reduction. It can be applied directly to the input ciphertext for the purpose of plaintext bootstrapping; it can also be applied after plaintext bootstrapping to further reduce the refreshed error.

Two algorithms based on the above strategies are proposed, together with some variants combining the tail-up approach. The tail-up approach is completely re-developed for optimal block-size control based on careful error analysis, and a corresponding algorithm is proposed. All the algorithms are implemented in PALISADE, and experiments on real data show that the new strategies improve the speed of large-precision plaintext bootstrapping by up to 7 times.
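As a plaintext-level illustration only, the following Python sketch contrasts the tail-up and head-on decomposition orders discussed above; in the homomorphic setting each extracted block corresponds to one bootstrap, and the approximate, varying-resolution extraction that lets the head-on strategy use fewer blocks is not modeled here.

B = 1 << 4        # block size: 4 plaintext bits per block
K = 4             # number of blocks, i.e. 16 bits of overall precision

def tail_up_blocks(m):
    """Extract blocks from the least-significant end; each extraction is one bootstrap."""
    blocks = []
    for _ in range(K):
        blocks.append(m % B)
        m //= B
    return blocks                      # m == sum(blocks[i] * B**i)

def head_on_blocks(m):
    """Extract blocks from the most-significant end, as in the head-on strategy."""
    blocks = []
    for i in reversed(range(K)):
        top = m // (B ** i)            # homomorphically: an (approximate) bootstrap
        blocks.append(top)
        m -= top * (B ** i)
    return blocks                      # [m_{K-1}, ..., m_0]

m = 0xBEEF
assert sum(b * B**i for i, b in enumerate(tail_up_blocks(m))) == m
assert sum(b * B**i for b, i in zip(head_on_blocks(m), reversed(range(K)))) == m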
Muhammed Ali Bingol
ePrint Report
This document provides a specification guide for the Multi-party Computation (MPC) setup ceremony for the Tokamak zk-SNARK scheme. It begins by revisiting the MMORPG protocol proposed in BGM17 for Groth16 setup generation, which leverages a random beacon to ensure public randomness. It then explores the alternative design approach presented in the "Snarky Ceremonies" paper KMSV21, which removes the need for a random beacon. The document includes detailed pseudocode and a workflow for each stage of parameter generation in the Tokamak zk-SNARK protocol.

Tokamak zk-SNARK employs a universal setup through sub-circuits, which allows for CRS reuse across multiple circuits. This approach reduces the need for repeated trusted setups and emphasizes efficiency in verifier preprocessing. The document also introduces pseudocode for the various types of parameter generation during the MPC setup, including parameters such as the Powers of $\tau$, circuit-specific parameters, and different types of mappings, for both the random-beacon and non-random-beacon approaches. These pseudocodes ensure clarity in the protocol's step-by-step process, from the computation of shared parameters to verifying correctness.

Finally, the document presents a sketch of the security analysis of both protocols, relying on the Algebraic Group Model (AGM) and the Random Oracle Model (ROM) to prove knowledge soundness and security of the generated CRS. The analysis considers potential attacks and demonstrates that, even without a random beacon, the setup remains secure under the assumptions of these models.
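A toy Python sketch of the sequential Powers-of-$\tau$ contribution step referenced above: it uses a multiplicative group modulo a Mersenne prime as a stand-in for the pairing-friendly curve groups used in real ceremonies, and it omits the per-contribution proofs of knowledge and any random-beacon finalization, so it is illustrative only.

import random

P = 2**127 - 1        # Mersenne prime; the toy group is the multiplicative group mod P
G = 5                 # fixed base element standing in for the curve generator
DEGREE = 8            # number of powers of tau in the toy CRS

def initial_crs():
    # The ceremony starts from tau = 1, i.e. every element is G^1.
    return [G for _ in range(DEGREE + 1)]

def contribute(crs):
    """One contribution: pick a secret s and raise the i-th element to s^i,
    turning G^(tau^i) into G^((tau*s)^i). The contributor must then erase s."""
    s = random.randrange(1, P - 1)
    new_crs = [pow(elem, pow(s, i, P - 1), P) for i, elem in enumerate(crs)]
    return new_crs, s   # s is returned here only so the demo below can check the result

crs = initial_crs()
secrets = []
for _ in range(3):      # three participants contribute in sequence
    crs, s = contribute(crs)
    secrets.append(s)

# The accumulated secret tau is the product of all contributions; as long as at
# least one participant erases their s, nobody learns tau.
tau = 1
for s in secrets:
    tau = tau * s % (P - 1)
assert crs == [pow(G, pow(tau, i, P - 1), P) for i in range(DEGREE + 1)]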
Giovanni Deligios, Anders Konring, Chen-Da Liu-Zhang, Varun Narayanan
ePrint Report
The seminal work of Rabin and Ben-Or (STOC'89) showed that the problem of secure $n$-party computation can be solved for $t < n/2$ corruptions with guaranteed output delivery and statistical security.
The need to better capture the dynamics of large-scale and long-lived computations, where compromised parties may recover and the set of parties can change over time, has sparked renewed interest in the proactive security model of Ostrovsky and Yung (PODC'91). This abstraction, where the adversary may periodically uncorrupt parties and corrupt a new set of parties, is taken even a step further in the more recent YOSO and Fluid MPC models (CRYPTO'21), which in addition allow disjoint sets of parties to participate in each round. Previous solutions with guaranteed output delivery and statistical security in these settings only tolerate $t < n/3$ corruptions, leaving open whether the honest-majority bound of Rabin and Ben-Or can be matched.
In this work, we settle this question considering the unifying Layered MPC abstraction recently introduced by David et al. (CRYPTO'23). In this model, the interaction pattern is defined by a layered acyclic graph, where each party sends secret messages and broadcast messages only to parties in the very next layer. We complete the feasibility landscape of layered MPC, by extending the Rabin and Ben-Or result to this setting. Our results imply maximally-proactive MPC with statistical security in the honest-majority setting.
Samed Düzlü, Patrick Struck
ePrint Report
In the present work, we establish a new relationship among the Beyond UnForgeability Features (BUFF) introduced by Cremers et al. (SP'21). There, the BUFF notions were shown to be independent of one another. On the other hand, the analysis by Aulbach et al. (PQCrypto'24) reveals that one of the BUFF notions, message-bound signatures (MBS), is achieved by most schemes. To achieve BUFF security, one can apply the generic BUFF transform, which achieves all the beyond-unforgeability features. The BUFF transform works by signing a hash of the public key and the message (rather than just the message), and appending this hash value to the signature. The need for appending the hash comes from the intuitive notion of weak keys that verify all message-signature pairs. We explain that MBS security effectively rules out the possibility of weak keys. This opens the possibility of a more efficient transform to achieve BUFF. We show that this transform, first introduced by Pornin and Stern (ACNS'05), indeed suffices to achieve BUFF security if the original signature scheme satisfies MBS. An exception is the malicious setting of exclusive ownership, where we present an attack on UOV even after applying the PS-3 transform.
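The two transforms discussed above admit a compact generic description. The Python sketch below contrasts the full BUFF transform, which signs and appends the hash of the public key and message, with the lighter Pornin-Stern-style (PS-3) transform, which only replaces the message by that hash; the toy MAC-based "scheme" at the end is a placeholder purely so the example runs, not a real signature scheme.

import hashlib, hmac

def _bind(pk_bytes, msg):
    # H(pk, m): the key-message binding at the heart of both transforms
    return hashlib.sha3_256(pk_bytes + msg).digest()

def buff_sign(sign, sk, pk_bytes, msg):
    """Full BUFF transform: sign h = H(pk, m) and append h to the signature."""
    h = _bind(pk_bytes, msg)
    return sign(sk, h), h

def buff_verify(verify, pk, pk_bytes, msg, sig):
    inner, h = sig
    return h == _bind(pk_bytes, msg) and verify(pk, h, inner)

def ps3_sign(sign, sk, pk_bytes, msg):
    """Pornin-Stern-style transform: sign H(pk, m) but do NOT append the hash."""
    return sign(sk, _bind(pk_bytes, msg))

def ps3_verify(verify, pk, pk_bytes, msg, sig):
    return verify(pk, _bind(pk_bytes, msg), sig)

# Placeholder "scheme" so the wrappers can be exercised end to end.
toy_sign = lambda sk, m: hmac.new(sk, m, hashlib.sha3_256).digest()
toy_verify = lambda pk, m, s: hmac.compare_digest(toy_sign(pk, m), s)
sk = pk = b"toy-shared-key"
sig = ps3_sign(toy_sign, sk, pk, b"hello")
assert ps3_verify(toy_verify, pk, pk, b"hello", sig)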
Slim Bettaieb, Loïc Bidoux, Philippe Gaborit, Mukul Kulkarni
ePrint Report
The Multi-Party Computation in the Head (MPCitH) paradigm has proven to be a versatile tool for designing proofs of knowledge (PoK) based on a variety of computationally hard problems. For instance, many post-quantum signatures have been designed from MPC-based proofs combined with the Fiat-Shamir transformation. Over the years, MPCitH has evolved significantly, with developments based on techniques such as threshold computing and other optimizations. Recently, Vector Oblivious Linear Evaluation (VOLE) and the VOLE-in-the-Head framework have spurred further advances.

In this work, we introduce three VOLE-friendly modelings for generic and communication-efficient PoK proving knowledge of a secret witness in the form of an elementary vector, a vector of Hamming weight at most $\omega$, or a permutation matrix. Remarkably, these modelings scale logarithmically with respect to the original witness sizes. Specifically, our modeling for elementary vectors of size $n$ transforms the witness size to $\mathcal{O}(\log_2(n))$; for vectors of size $n$ and Hamming weight at most $\omega$, the reduced witness is of size $\mathcal{O}\left(\omega \log_2(n)\right)$; and our modeling for permutation matrices of size $n \times n$ results in an (equivalent) witness of size $\mathcal{O}(n\log_2(n))$, which leads to small proofs in practice. To achieve this, we consider modelings with higher multiplicative depth $d > 2$. Even though this increases the proof size, we show that our approach compares favorably with existing proofs. Such design choices were mostly overlooked in previous comparable works, perhaps because, prior to the VOLEitH framework, multiplications were often emulated with Beaver triples, causing the proof size to scale poorly with $d$.

We also provide several applications of our modelings, namely i) post-quantum signature schemes based on the $\mathsf{SD}$ (Syndrome Decoding) problem and $\mathsf{PKP}$ (Permuted Kernel Problem), ii) PoK of the secret key of code-based key encapsulation mechanisms (KEMs), and iii) ring signatures from $\mathsf{SD}$ and $\mathsf{PKP}$. Our signatures based on $\mathsf{SD}$ over $\mathbb{F}_2$ and on $\mathsf{PKP}$ feature sizes of $3.9$ kB and $3.6$ kB at the NIST-I security level, which is respectively $26\%$ and $38\%$ shorter than state-of-the-art alternatives. Our PoK of the secret keys of BIKE and HQC are half the size of similar PoK for Kyber. In addition, we obtain the smallest ring signature based on $\mathsf{SD}$ and the first ring signature based on $\mathsf{PKP}$.
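One standard way an elementary (unit) vector of length $n = 2^k$ admits a witness of only $k = \log_2 n$ bits is to commit to the bits of its index and reconstruct the vector as a product of per-bit selectors; the reconstruction has multiplicative depth $k$, illustrating the depth-versus-witness-size tradeoff mentioned above. The Python sketch below shows only this scaling idea, not the paper's exact modeling.

def bits_of(index, k):
    """Little-endian bits of index, k of them: the entire logarithmic-size witness."""
    return [(index >> i) & 1 for i in range(k)]

def expand_unit_vector(bits):
    """Rebuild the elementary vector e_index of length 2^k from its k witness bits.
    Entry x is the product over i of (bits[i] if bit i of x is 1 else 1 - bits[i])."""
    k = len(bits)
    vec = []
    for x in range(1 << k):
        entry = 1
        for i in range(k):
            entry *= bits[i] if (x >> i) & 1 else (1 - bits[i])
        vec.append(entry)
    return vec

k, index = 4, 11                 # n = 16 positions need only k = 4 committed bits
vec = expand_unit_vector(bits_of(index, k))
assert vec[index] == 1 and sum(vec) == 1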
Deokhwa Hong, Youngjin Choi, Yongwoo Lee, Young-Sik Kim
ePrint Report
Homomorphic Encryption (HE) enables operations on encrypted data without requiring decryption, thus allowing for secure handling of confidential data within smart contracts. Among the known HE schemes, FHEW and TFHE are particularly notable for use in smart contracts due to their lightweight nature and support for arbitrary logical gates. In contrast, other HE schemes often require several gigabytes of keys and are limited to supporting only addition and multiplication. As a result, there has been significant work implementing smart contract functionalities over HE, broadening the potential applications of blockchain technology. However, a significant drawback of the FHEW/TFHE schemes is the need for bootstrapping after the execution of each binary gate. While bootstrapping reduces noise in the ciphertext, it also becomes a performance bottleneck due to its computational complexity.

In this work, we propose an efficient new bootstrapping method for FHEW/TFHE that takes advantage of the flexible scaling factors of encrypted data. The proposed method is particularly beneficial in circuits with consecutive XOR gates. Moreover, we implement Keccak using FHEW/TFHE, as it is one of the most important functions in smart contracts. Our experimental results demonstrate that the proposed method reduces the runtime of Keccak over HE by 42%. Additionally, the proposed method does not require additional keys or parameter sets from the key-generating party and can be adopted by the computing party without the need for any extra information.
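The reason consecutive XOR gates are special: with the usual scaling factor $Q/2$, adding LWE ciphertexts already computes XOR of the underlying bits, so only noise growth, not the gate itself, eventually forces a bootstrap. The toy (insecure) Python sketch below illustrates that standard encoding trick; it is not the paper's new bootstrapping method.

import random

N_LWE = 32
Q = 1 << 15
DELTA = Q // 2                               # scaling factor: bit b is encoded as b * Q/2
SECRET = [random.randrange(2) for _ in range(N_LWE)]

def enc_bit(b):
    a = [random.randrange(Q) for _ in range(N_LWE)]
    e = random.randrange(-4, 5)              # small fresh noise
    body = (sum(x * s for x, s in zip(a, SECRET)) + e + b * DELTA) % Q
    return a, body

def add_ct(c1, c2):
    """Ciphertext addition; with the Q/2 encoding this is exactly XOR of the bits."""
    return [(x + y) % Q for x, y in zip(c1[0], c2[0])], (c1[1] + c2[1]) % Q

def dec_bit(c):
    a, body = c
    phase = (body - sum(x * s for x, s in zip(a, SECRET))) % Q
    return round(phase / DELTA) % 2          # round to the nearest multiple of Q/2

c = enc_bit(1)
for b in [1, 0, 1, 1]:                       # a run of consecutive XOR gates, no bootstrap
    c = add_ct(c, enc_bit(b))
assert dec_bit(c) == (1 ^ 1 ^ 0 ^ 1 ^ 1)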
Akhil Bandarupalli, Xiaoyu Ji, Aniket Kate, Chen-Da Liu-Zhang, Yifan Song
ePrint Report
We consider the setting of asynchronous multi-party computation (AMPC) with optimal resilience $n=3t+1$ and linear communication complexity, employing only "lightweight" cryptographic primitives, such as random-oracle hashes.

In this model, we introduce two concretely efficient AMPC protocols for a circuit with $|C|$ multiplication gates: a protocol achieving fairness with $\mathcal{O}(|C|\cdot n + n^3)$ field elements of communication, and a protocol achieving guaranteed output delivery with $\mathcal{O}(|C|\cdot n + n^5)$ field elements. These protocols significantly improve upon the best prior AMPC protocol in this regime, which communicates $\mathcal{O}(|C|\cdot n + n^{14})$ elements. To achieve this, we introduce novel variants of asynchronous complete secret sharing (ACSS) protocols with communication linear in the number of sharings, providing different abort properties.
Alexander Bienstock, Ujjwal Kumar, Antigoni Polychroniadou
ePrint Report
Federated Learning (FL) has recently gained significant traction, both in industry and academia. In FL, a machine learning model is trained using data from various end-users arranged in committees across several rounds. Since such data can often be sensitive, a primary challenge in FL is providing privacy while still retaining the utility of the model. Differential Privacy (DP) has become the main measure of privacy in the FL setting. DP comes in two flavors: central and local. In the former, a centralized server is trusted to receive the users' raw gradients from a training step and then perturb their aggregation with some noise before releasing the next version of the model. In the latter (more private) setting, noise is applied on users' local devices, and only the aggregation of users' noisy gradients is revealed, even to the server. Great strides have been made in improving the privacy-utility trade-off in the central DP setting by utilizing the so-called \emph{matrix mechanism}. However, progress has mostly stalled in the local DP setting. In this work, we introduce the \emph{distributed} matrix mechanism to achieve the best of both worlds: local DP and the better privacy-utility trade-off of the matrix mechanism. We accomplish this by proposing a cryptographic protocol that securely transfers sensitive values across rounds and makes use of \emph{packed secret sharing}. This protocol accommodates the dynamic participation of users per training round required by FL, including those that may drop out from the computation. We provide experiments showing that our mechanism indeed significantly improves the privacy-utility trade-off of FL models compared to previous local DP mechanisms, with little added overhead.
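A toy Python sketch of packed (Shamir) secret sharing, the building block named above: k values are hidden in a single polynomial, and because the scheme is linear, servers can add the shares of many users locally and open only the aggregate. This illustrates the packing and the additive homomorphism only; the paper's protocol additionally handles dropouts, dynamic committees, and the distributed matrix mechanism itself.

import random

P = 2**61 - 1      # prime field modulus for the toy example

def _interp_eval(xs, ys, x):
    """Evaluate at x the unique polynomial through the points (xs[i], ys[i]) mod P."""
    total = 0
    for i in range(len(xs)):
        num = den = 1
        for j in range(len(xs)):
            if j != i:
                num = num * ((x - xs[j]) % P) % P
                den = den * ((xs[i] - xs[j]) % P) % P
        total = (total + ys[i] * num * pow(den, P - 2, P)) % P
    return total

def packed_share(secrets, n, t):
    """Hide k secrets in one polynomial of degree t+k-1; party i receives f(i)."""
    k = len(secrets)
    xs = [(-(i + 1)) % P for i in range(k)]          # secrets sit at -1, ..., -k
    xs += [(-(k + 1 + i)) % P for i in range(t)]     # t random points give privacy against t parties
    ys = list(secrets) + [random.randrange(P) for _ in range(t)]
    return [_interp_eval(xs, ys, i) for i in range(1, n + 1)]

def packed_open(idxs, shares, k):
    """Any t+k shares determine the polynomial, hence all k packed secrets."""
    return [_interp_eval(list(idxs), list(shares), (-(i + 1)) % P) for i in range(k)]

# Pack three noisy-gradient values per user; servers aggregate shares locally.
sh1 = packed_share([11, 22, 33], n=7, t=2)
sh2 = packed_share([100, 200, 300], n=7, t=2)
agg = [(a + b) % P for a, b in zip(sh1, sh2)]
assert packed_open(range(1, 6), agg[:5], k=3) == [111, 222, 333]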
Jehyuk Jang
ePrint Report
This paper addresses verifiable consensus on pre-processed circuit polynomials for succinct non-interactive arguments of knowledge (SNARKs). More specifically, we focus on parts of circuits, referred to as wire maps, which may change based on program inputs or the statements being argued. Preparing commitments to wire maps in advance is essential for certain SNARK protocols to maintain their succinctness, but it can be costly. SNARK verifiers can alternatively consider receiving wire maps from untrusted parties.

We propose a consensus protocol that reaches consensus on wire maps using a majority rule. The protocol can operate on a distributed, irreversible, and transparent server, such as a blockchain. Our analysis shows that while the protocol requires over 50\% honest participants to remain robust against collusive attacks, it enables consensus on wire maps with a low and fixed verification complexity per communication, even in adversarial settings. The protocol guarantees consensus completion within a time frame ranging from a few hours to several days, depending on the wire map degree and the honest participant proportion.

Technically, our protocol leverages a directed acyclic graph (DAG) structure to represent conflicting wire maps among the untrusted deliverers. Wire maps are decomposed into low-degree polynomials, forming vertices and edges of this DAG. The consensus participants, or deliverers, collaboratively manage this DAG by submitting edges to branches they support. The protocol then returns a commitment to the wire map that is written in the first fully grown branch. The protocol's computational efficiency is derived from an interactive commit-prove-verify scheme that enables efficient validation of submitted edges.

Our analysis implies that the protocol provides a practical solution for achieving secure consensus on SNARK wire maps in environments with a dynamic proportion of honest participants. Additionally, we introduce a tunable parameter $N$ that allows the protocol to minimize the cost and time to consensus while maintaining a desired level of security.
Eli Bradley, George Lu, Shafik Nassar, Brent Waters, David J. Wu
ePrint Report
We give a new approach for constructing statistical ZAP arguments (two-message public-coin statistically witness-indistinguishable arguments) from quasi-polynomial hardness of the learning with errors (LWE) assumption with a polynomial modulus-to-noise ratio. Previously, all ZAP arguments from lattice-based assumptions relied on correlation-intractable hash functions. In this work, we present the first construction of a ZAP from LWE via the classic hidden-bits paradigm. Our construction matches previous lattice-based schemes by being public-coin and satisfying statistical witness indistinguishability. Moreover, our construction is the first lattice-based ZAP that is fully black-box in its use of cryptography. Previous lattice-based ZAPs based on correlation-intractable hash functions all made non-black-box use of cryptography.
Jiahui Liu, Mark Zhandry
ePrint Report
Software watermarking allows for embedding a mark into a piece of code, such that any attempt to remove the mark will render the code useless. Provably secure watermarking schemes currently seem limited to programs computing various cryptographic operations, such as evaluating pseudorandom functions (PRFs), signing messages, or decrypting ciphertexts (the latter often going by the name "traitor tracing"). Moreover, each of these watermarking schemes has an ad-hoc construction of its own.

We observe, however, that many cryptographic objects are used as building blocks in larger protocols. We ask: just as we can compose building blocks to obtain larger protocols, can we compose watermarking schemes for the building blocks to obtain watermarking schemes for the larger protocols? We give an affirmative answer to this question, by precisely formulating a set of requirements that allow for composing watermarking schemes. We use our formulation to derive a number of applications.
Aram Jivanyan, Gohar Hovhannisyan, Hayk Hovhannisyan, Nerses Asaturyan
ePrint Report
This paper introduces zkFFT, a novel zero-knowledge argument designed to efficiently generate proofs for FFT (Fast Fourier Transform) relations. Our approach enables verification that one committed vector is the FFT of another, addressing an efficiency need in general-purpose non-interactive zero-knowledge proof systems where the proof relation takes vector commitments as inputs.

We present a concrete enhancement to the Halo2 proving system, demonstrating how zkFFT optimizes proofs in scenarios where the proof relation includes one or more vector commitments. Specifically, zkFFT incorporates streamlined logic within Halo2 and similar systems, increasing proof and verification complexity by only $O(\log N)$, where $N$ is the vector size. This represents a substantial improvement over the conventional approach, which often necessitates specific circuit extensions to validate the integrity of vector commitments and their corresponding private values in the arithmetic framework of the proof relation. The proposed zkFFT method supports multiple vector commitments with only a logarithmic increase in extension costs, making it highly scalable. This capability is pivotal for practical applications involving multiple pre-committed values within proof statements.

Apart from Halo2, our technique can be adapted to any other zero-knowledge proof system that relies on an arithmetization in which each column is treated as the evaluation of a polynomial over a specified domain, the polynomial is computed via FFT, and the result is subsequently committed to using a polynomial commitment scheme based on inner-product arguments. Along with efficient lookup and permutation arguments, zkFFT will streamline and significantly optimize the generation of zero-knowledge proofs for arbitrary relations.

Beyond the applications in augmenting zero-knowledge proof systems, we believe that the formalized zkFFT argument can be of independent interest.
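At its core, the relation being proven is that one committed vector is the discrete Fourier transform of another over a finite field. The toy Python check below, a Freivalds-style random linear combination over a small prime field, illustrates that relation and the standard way to test it probabilistically; zkFFT performs the analogous check inside vector commitments and inner-product arguments, which this sketch does not model.

import random

P = 257                            # toy prime field
N = 8
OMEGA = pow(3, (P - 1) // N, P)    # an N-th root of unity mod P (3 is a primitive root mod 257)

def dft(x):
    """Naive DFT over F_P: y_k = sum_j x_j * OMEGA^(j*k)."""
    return [sum(x[j] * pow(OMEGA, j * k, P) for j in range(N)) % P for k in range(N)]

def freivalds_check(x, y, reps=40):
    """Probabilistically check y == DFT(x): for random r, compare r.y with (r.F).x,
    where F is the DFT matrix. A wrong y survives one round with probability 1/P."""
    for _ in range(reps):
        r = [random.randrange(P) for _ in range(N)]
        lhs = sum(r[k] * y[k] for k in range(N)) % P
        rF = [sum(r[k] * pow(OMEGA, j * k, P) for k in range(N)) % P for j in range(N)]
        rhs = sum(rF[j] * x[j] for j in range(N)) % P
        if lhs != rhs:
            return False
    return True

x = [5, 1, 4, 1, 5, 9, 2, 6]
y = dft(x)
assert freivalds_check(x, y)
bad = list(y); bad[3] = (bad[3] + 1) % P
assert not freivalds_check(x, bad)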
Amit Berman, Ariel Doubchak, Noam Livne
ePrint Report
In the Dilithium digital signature scheme, there is an inherent tradeoff between the length of the public key and the length of the signature. The coefficients of the main part of the public key, the vector $\mathbf{t}$, are compressed (in a lossy manner), or "quantized", during the key-generation procedure in order to save on the public-key size. That is, the coefficients are divided by some fixed denominator, and only the quotients are published. However, this results in some "skew" during the verification process, and to fix it, a special signature-dependent "hint" is computed during the signing process. Roughly speaking, stronger compression of $\mathbf{t}$ results in the hint carrying more information, consequently increasing the signature length. Prior to the hint computation, a test is performed to check whether a proper hint can indeed be composed to fix this skew; if the test fails, the signing process is rerun with a different seed for the (pseudo-)randomness. In this short report, we observe that this test is not performed optimally: it checks a sufficient condition for the hint to work, but not a necessary one. We suggest a new, refined test that results in a lower probability for the signing iteration to fail. The new test exhibits some improvement (in terms of expected running time) in certain configurations characterized by a shorter public key at the expense of a slightly longer signature. We note that the change does not imply any change in the security of the algorithm.
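A minimal Python sketch of the compression step described above (Power2Round in the Dilithium specification), with the standard modulus and the round-3 choice of dropped bits quoted for concreteness; the hint computation itself and the report's refined test are not modeled here.

Q = 8380417          # Dilithium's modulus
D = 13               # number of dropped low-order bits in the round-3 parameter sets

def power2round(t):
    """Split a coefficient as t = t1 * 2^D + t0, with t0 centered in (-2^(D-1), 2^(D-1)]."""
    t = t % Q
    t0 = t % (1 << D)
    if t0 > (1 << (D - 1)):
        t0 -= (1 << D)
    return (t - t0) >> D, t0     # (published high part t1, dropped low part t0)

t1, t0 = power2round(3456789)
assert (t1 << D) + t0 == 3456789

# Only t1 is published, so a verifier reconstructing c*t from t1 * 2^D is off by the
# unknown c*t0 term; the signature's hint records whether that skew changes the
# high-order bits being rounded to. A larger D means a smaller public key but a hint
# that must carry more information -- the tradeoff the report studies.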
Gal Arnon, Shany Ben-David, Eylon Yogev
ePrint Report
Collision-resistant hashing (CRH) is a cornerstone of cryptographic protocols. However, despite decades of research, no construction of a CRH based solely on one-way functions has been found. Moreover, there are black-box limitations that separate these two primitives.

Harnik and Naor [HN10] overcame this black-box barrier by introducing the notion of instance compression. Instance compression reduces large NP instances to a size that depends on their witness size while preserving the "correctness" of the instance relative to the language. Shortly thereafter, Fortnow and Santhanam showed that efficient instance compression algorithms are unlikely to exist (as the polynomial hierarchy would collapse). Bronfman and Rothblum defined a computational analog of instance compression, which they called computational instance compression (CIC), and gave a construction of CIC under standard assumptions. Unfortunately, this notion is not strong enough to replace instance compression in Harnik and Naor's CRH construction.

In this work, we revisit the notion of computational instance compression and ask what the "correct" notion for CIC is, in the sense that it is sufficiently strong to achieve useful cryptographic primitives while remaining consistent with common assumptions. First, we give a natural strengthening of the CIC definition that serves as a direct substitute for the instance compression scheme in the Harnik--Naor construction. However, we show that even this notion is unlikely to exist.

We then identify a notion of CIC that gives new hope for constructing CRH from one-way functions via instance compression. We observe that this notion is achievable under standard assumptions and, by revisiting the Harnik--Naor proof, demonstrate that it is sufficiently strong to achieve CRH. In fact, we show that our CIC notion is existentially equivalent to CRH.

Beyond Minicrypt, Harnik and Naor showed that a strengthening of instance compression can be used to construct OT and public-key encryption. We rule out the computational analog of this stronger notion by showing that it contradicts the existence of incompressible public-key encryption, which was recently constructed under standard assumptions.
Guy Zyskind, Avishay Yanai, Alex "Sandy" Pentland
ePrint Report
Distributed point functions (DPFs) are increasingly becoming a foundational tool with applications to both application-specific and general secure computation. While two-party DPF constructions with satisfactory performance are readily available for those applications, three-party constructions lag behind in both security and efficiency. In this paper we close this gap and propose the first three-party DPF construction that matches the state-of-the-art two-party DPF on all metrics. Namely, it is secure against a malicious adversary corrupting both the dealer and one out of the three evaluators, its function shares are of the same size, and its evaluation takes the same time as in the best two-party DPF. Compared to the state-of-the-art three-party DPF, our construction enjoys a $40$-$120\times$ smaller function share size and shorter evaluation time, for function domains of size $2^{16}$-$2^{40}$, respectively.

Apart from DPFs as a stand-alone tool, our construction finds immediate applications to private information retrieval (PIR), private information writing (PIW), and oblivious RAM (ORAM). To further showcase its applicability, we design and implement an ORAM with access policy, an extension of ORAM in which a policy is checked before the underlying database is accessed. The policy we plug in is one suitable for account-based digital currencies, and in particular for central bank digital currencies (CBDCs). Our protocol offers the first design and implementation of a large-scale privacy-preserving account-based digital currency. While previous works supported anonymity sets of 64-256 clients and fewer than 10 transactions per second (tps), our protocol supports anonymity sets in the millions, performing $\{500,200,58\}$ tps for anonymity sets of $\{2^{16},2^{18},2^{20}\}$, respectively.

Toward that application, we introduce a new primitive called updatable DPF, which enables the direct computation of a dot product between a DPF and a vector; we believe that updatable DPF and the new dot-product protocol will be of interest in other applications as well.
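To illustrate the functionality only (not any of the constructions above), here is a toy Python DPF with linear-size keys; real DPFs compress the keys to roughly logarithmic size, and the paper's contributions are the three-party and updatable variants. The dot product at the end is the operation underlying PIR-style reads.

import random

MOD = 2**61 - 1    # modulus for the additive shares

def naive_dpf_gen(alpha, beta, domain_size):
    """Toy DPF with linear-size keys: k0[x] + k1[x] = beta if x == alpha, else 0."""
    k0 = [random.randrange(MOD) for _ in range(domain_size)]
    k1 = [(-v) % MOD for v in k0]
    k1[alpha] = (k1[alpha] + beta) % MOD
    return k0, k1

def dpf_dot(key, vec):
    """Each server locally dots its key with a public vector; the two results are
    additive shares of beta * vec[alpha]."""
    return sum(k * v for k, v in zip(key, vec)) % MOD

# Example: read slot 5 of an 8-entry database without revealing the index.
db = [10, 20, 30, 40, 50, 60, 70, 80]
k0, k1 = naive_dpf_gen(alpha=5, beta=1, domain_size=len(db))
assert (dpf_dot(k0, db) + dpf_dot(k1, db)) % MOD == db[5]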