International Association for Cryptologic Research


CryptoDB

Papers from TCC 2024

2024
TCC
Adaptive Security, Erasures, and Network Assumptions in Communication-Local MPC
The problem of reliable/secure all-to-all communication over low-degree networks has been essential for communication-local (CL) n-party MPC (i.e., MPC protocols where every party directly communicates only with a few, typically polylogarithmic in n, parties) and more recently for communication over ad hoc networks, which are used in blockchain protocols. However, a limited number of adaptively secure solutions exist, and they all make relatively strong assumptions on the ability of parties to act in some specific manner before the adversary can corrupt them. Two such assumptions were made in the work of Chandran et al. [ITCS '15]---parties can (a) multisend messages to several receivers simultaneously; and (b) securely erase the message and the identities of the receivers, before the adversary gets a chance to corrupt the sender (even if a receiver is corrupted). A natural question to ask is: Are these assumptions necessary for adaptively secure CL MPC? In this paper, we characterize the feasibility landscape for all-to-all reliable message transmission (RMT) under these two assumptions, and use this characterization to obtain (asymptotically) tight feasibility results for CL MPC.
– First, we prove a strong impossibility result for a broad class of RMT protocols, termed here store-and-forward protocols, which includes all known communication protocols for CL MPC from standard cryptographic assumptions. Concretely, we show that no such protocol with a certain expansion rate can tolerate a constant fraction of parties being corrupted.
– Next, under the assumption of only a PKI, we show that assuming secure erasures, we can obtain an RMT protocol between all pairs of parties with polylogarithmic locality (even without assuming multisend) for the honest majority setting. We complement this result by showing a negative result for the setting of dishonest majority.
– Finally, and somewhat surprisingly, under stronger assumptions (i.e., trapdoor permutations with a reverse domain sampler, and compact and malicious circuit-private FHE), we construct a polylogarithmic-locality all-to-one RMT protocol, which is adaptively secure and tolerates any constant fraction of corruptions, without assuming either secure erasures or multisend.
This last result uses a novel combination of adaptively secure (e.g., non-committing) encryption and (static) FHE to bypass the impossibility of compact adaptively secure FHE by Katz et al. [PKC '13], which we believe may be of independent interest. Intriguingly, even such assumptions do not allow reducing all-to-all RMT to all-to-one RMT (a reduction which is trivial in the non-CL setting). Still, we can implement what we call sublinear output-set RMT (SOS-RMT for short). We show how SOS-RMT can be used for SOS-MPC under the known bounds for feasibility of MPC in the standard (i.e., non-CL) setting assuming, in addition to SOS-RMT, an anonymous PKI.
2024
TCC
Adaptively Secure Attribute-Based Encryption from Witness Encryption
Attribute-based encryption (ABE) enables fine-grained control over which ciphertexts various users can decrypt. A master authority can create secret keys $\mathsf{sk}_f$ with different functions (circuits) $f$ for different users. Anybody can encrypt a message under some attribute $x$ so that only recipients with a key $\mathsf{sk}_f$ for a function such that $f(x)=1$ will be able to decrypt. There are a number of different approaches toward achieving selectively secure ABE, where the adversary has to decide on the challenge attribute $x$ ahead of time before seeing any keys, including constructions via bilinear maps (for NC1 circuits), learning with errors, or witness encryption. However, when it comes to adaptively secure ABE, the problem seems to be much more challenging and we only know of two potential approaches: via the ``dual systems'' methodology from bilinear maps, or via indistinguishability obfuscation. In this work, we give a new approach that constructs adaptively secure ABE from witness encryption (along with statistically sound NIZKs and one-way functions). While witness encryption is a strong assumption, it appears to be fundamentally weaker than indistinguishability obfuscation. Moreover, we have candidate constructions of witness encryption from some assumptions (e.g., evasive LWE) from which we do not know how to construct indistinguishability obfuscation, giving us adaptive ABE from these assumptions as a corollary of our work.
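The ABE syntax described in this abstract can be pinned down with a minimal interface sketch; the Python class and method names below are our own illustrative placeholders, not the paper's construction.

```python
from typing import Callable

# Syntax-only sketch of key-policy ABE as described above. Names are
# illustrative placeholders; this is not a secure construction.
class KeyPolicyABE:
    def setup(self) -> tuple:
        """Return (mpk, msk): public parameters and the master secret key."""
        raise NotImplementedError

    def keygen(self, msk, f: Callable[[str], bool]):
        """The master authority derives a key sk_f for a policy circuit f."""
        raise NotImplementedError

    def encrypt(self, mpk, x: str, message: bytes):
        """Anyone can encrypt a message under an attribute x."""
        raise NotImplementedError

    def decrypt(self, sk_f, ciphertext) -> bytes:
        """Recovers the message iff f(x) = 1 for the ciphertext's attribute x."""
        raise NotImplementedError
```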
2024
TCC
An Improvement Upon the Bounds for the Local Leakage Resilience of Shamir's Secret Sharing Scheme
Shamir's Secret Sharing Scheme allows for the distribution of information amongst n parties so that any t of them can combine their information to recover the secret. By design, it is secure against the total corruption of (t-1) parties, but open questions remain around its security against side-channel attacks, where an adversary may obtain a small amount of information about each of the n parties' shares. An initial result by Benhamouda, Degwekar, Ishai and Rabin showed that if n is sufficiently large and $t \geq 0.907n$, then the scheme is secure under one bit of local leakage. These bounds have since been improved in subsequent works, and most recently Klein and Komargodski introduced a proof using a new analytical proxy that showed leakage resilience for $t \geq 0.69n$. In this paper we use the analytic proxy of Klein and Komargodski to show leakage resilience for $t \geq 0.668n$. We do this by introducing two new bounds on the proxy. The first uses a result from additive combinatorics to improve their original bound on the proxy. The second is an averaging argument that exploits the rarity of worst-case bounds occurring.
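To make the scheme under discussion concrete, here is a minimal sketch of Shamir sharing and reconstruction over a prime field; the modulus and parameters are illustrative only.

```python
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus (illustrative)

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it,
    while any t-1 reveal nothing (information-theoretically)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Party i's share is the degree-(t-1) polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at x = 0 from t shares."""
    secret = 0
    for idx, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for jdx, (xj, _) in enumerate(shares):
            if idx != jdx:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```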
2024
TCC
Asynchronous Agreement on a Core Set in Constant Expected Time and More Efficient Asynchronous VSS and MPC
A major challenge of any asynchronous MPC protocol is the need to reach an agreement on the set of private inputs to be used as input for the MPC functionality. Ben-Or, Canetti and Goldreich [STOC 93] call this problem Agreement on a Core Set (ACS) and solve it by running n parallel instances of asynchronous binary Byzantine agreements. To the best of our knowledge, all results in the perfect and statistical security setting used this same paradigm for solving ACS. Using all known asynchronous binary Byzantine agreement protocols, this type of ACS has Omega(log n) expected round complexity, which results in such a bound on the round complexity of MPC protocols as well (even for constant depth circuits). We provide a new solution for Agreement on a Core Set that runs in expected O(1) rounds. Our perfectly secure variant is optimally resilient (t<n/4) and requires just O(n^4 log n) expected communication complexity. We show a similar result with statistical security for t<n/3. Our ACS is based on a new notion of Asynchronously Validated Asynchronous Byzantine Agreement (AVABA) and new information-theoretic analogs to techniques used in the authenticated model. Along the way, we also construct a new perfectly secure packed asynchronous verifiable secret sharing (AVSS) protocol with just O(n^3 log n) communication complexity, improving the state of the art by a factor of O(n). This leads to a more efficient asynchronous MPC that matches the state-of-the-art synchronous MPC.
2024
TCC
Batch Arguments to NIZKs from One-Way Functions
Succinctness and zero-knowledge are two fundamental properties in the study of cryptographic proof systems. Several recent works have formalized the connections between these two notions by showing how to realize non-interactive zero-knowledge (NIZK) arguments from succinct non-interactive arguments. Specifically, Champion and Wu (CRYPTO 2023) as well as Bitansky, Kamath, Paneth, Rothblum, and Vasudevan (ePrint 2023) recently showed how to construct a NIZK argument for NP from a (somewhere-sound) non-interactive batch argument (BARG) and a dual-mode commitment scheme (and in the case of the Champion-Wu construction, a local pseudorandom generator). The main open question is whether a BARG suffices for a NIZK (just assuming one-way functions). In this work, we first show that an adaptively-sound BARG for NP together with a one-way function implies a computational NIZK argument for NP. We then show that existing BARGs from standard algebraic assumptions, which only achieve the weaker notion of somewhere soundness, are also adaptively sound if we assume sub-exponential security. This transformation may also be of independent interest. Taken together, we obtain a NIZK argument for NP from one-way functions and a sub-exponentially-secure somewhere-sound BARG for NP. If we instead assume plain public-key encryption, we show that a standard polynomially-secure somewhere-sound batch argument for NP suffices for the same implication. As a corollary, this means a somewhere-sound BARG can be used to generically upgrade any semantically-secure public-key encryption scheme into one secure against chosen-ciphertext attacks. More broadly, our results demonstrate that constructing non-interactive batch arguments for NP is essentially no easier than constructing NIZK arguments for NP.
2024
TCC
Batching Adaptively-Sound SNARGs for NP
A succinct non-interactive argument (SNARG) for NP allows a prover to convince a verifier that an NP statement $x$ is true with a proof whose size is sublinear in the length of the traditional NP witness. Moreover, a SNARG is adaptively sound if the adversary can choose the statement it wants to prove after seeing the scheme parameters. Very recently, Waters and Wu (STOC 2024) showed how to construct adaptively-sound SNARGs for NP in the plain model from falsifiable assumptions (specifically, sub-exponentially secure indistinguishability obfuscation, sub-exponentially secure one-way functions, and polynomial hardness of discrete log). We consider the batch setting where the prover wants to prove a collection of $T$ statements $x_1, \ldots, x_T$ and its goal is to construct a proof whose size is sublinear in both the size of a single witness and the number of instances $T$. In this setting, existing constructions either require the size of the public parameters to scale linearly with $T$ (and thus, can only support an a priori bounded number of instances), or only provide non-adaptive soundness, or have proof size that scales linearly with the size of a single NP witness. In this work, we give two approaches for batching adaptively-sound SNARGs for NP, and in particular, show that under the same set of assumptions as those underlying the Waters-Wu adaptively-sound SNARG, we can obtain an adaptively-sound SNARG for batch NP where the size of the proof is $\mathsf{poly}(\lambda)$ and the size of the CRS is $\mathsf{poly}(\lambda + |C|)$, where $\lambda$ is a security parameter and $|C|$ is the size of the circuit that computes the associated NP relation. Our first approach builds directly on top of the Waters-Wu construction and relies on indistinguishability obfuscation and a homomorphic re-randomizable one-way function. Our second approach shows how to combine ideas from the Waters-Wu SNARG with the chaining-based approach by Garg, Sheridan, Waters, and Wu (TCC 2022) to obtain a SNARG for batch NP.
2024
TCC
Bit Security: optimal adversaries, equivalence results, and a toolbox for computational/statistical security analysis
We investigate the notion of bit-security for decisional cryptographic properties, as originally proposed in (Micciancio & Walter, Eurocrypt 2018), and its main variants and extensions, with the goal of clarifying the relation between different definitions and facilitating their use. Specific contributions of this paper include: (1) identifying the optimal adversaries achieving the highest possible MW advantage, showing that they are deterministic and have a very simple threshold structure; (2) giving a simple proof that a competing definition proposed by (Watanabe & Yasunaga, Asiacrypt 2021) is actually equivalent to the original MW definition; and (3) developing tools for the use of the extended notion of computational-statistical bit-security introduced in (Li, Micciancio, Schultz & Sorrell, Crypto 2022), showing that it fully supports common cryptographic proof techniques like hybrid arguments and probability replacement theorems. On the technical side, our results are obtained by introducing a new notion of "fuzzy" distinguisher (which we prove equivalent to the "aborting" distinguishers of Micciancio and Walter), and a tight connection between the MW advantage and the Le Cam metric, a standard quantity used in statistics.
2024
TCC
Bit-Security Preserving Hardness Amplification
Hardness amplification is one of the important reduction techniques in cryptography, and it has been extensively studied in the literature. The standard XOR lemma known in the literature evaluates the hardness in terms of the probability of correct prediction; the hardness is amplified from mildly hard (close to $1$) to very hard ($1/2 + \varepsilon$) at the cost of an $\varepsilon^2$ multiplicative decrease in the circuit size. Translated into the bit-security framework introduced by Micciancio-Walter (EUROCRYPT 2018) and Watanabe-Yasunaga (ASIACRYPT 2021), such a statement may cause a bit-security loss of $\log(1/\varepsilon)$. To resolve this issue, we derive a new variant of the XOR lemma in terms of the R\'enyi advantage, which directly characterizes the bit security. In the course of proving this result, we prove a new variant of the hardcore lemma in terms of the conditional squared advantage; our proof uses a boosting algorithm that may output the $\bot$ symbol in addition to $0$ and $1$, which may be of independent interest.
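For context, a common informal form of the XOR lemma that the abstract contrasts with; exact constants and the precise size loss vary across proofs in the literature.

```latex
% Yao's XOR lemma, informal form (parameters vary across proofs).
% Suppose every circuit of size s computes f correctly with probability
% at most 1 - \delta. Writing
%   f^{\oplus k}(x_1,\dots,x_k) = f(x_1) \oplus \cdots \oplus f(x_k),
% every circuit C of size s' = s \cdot \mathrm{poly}(\varepsilon) satisfies
\Pr_{x_1,\dots,x_k}\left[ C(x_1,\dots,x_k) = f^{\oplus k}(x_1,\dots,x_k) \right]
  \;\le\; \frac{1}{2} + (1-\delta)^k + \varepsilon .
% The abstract's point: the \varepsilon^2-type multiplicative size loss
% translates into a \log(1/\varepsilon) loss of bit security.
```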
2024
TCC
Black-Box Timed Commitments from Time-Lock Puzzles
A Timed Commitment (TC) with time parameter $t$ is hiding for time at most $t$, that is, commitments can be force-opened by any third party within time $t$. In addition to various cryptographic assumptions, the security of all known TC schemes relies on the sequentiality assumption of repeated squarings in hidden-order groups. The repeated squaring assumption is therefore a security bottleneck. In this work, we give a black-box construction of TCs from any time-lock puzzle (TLP) by additionally relying on one-way permutations and collision-resistant hashing. Currently, TLPs are known from (a) the specific repeated squaring assumption, (b) the general (necessary) assumption on the \emph{existence of worst-case non-parallelizing languages} and indistinguishability obfuscation, and (c) any iteratively sequential function and the hardness of the circular small-secret LWE problem. The latter admits a plausibly post-quantum secure instantiation. Hence, thanks to the generality of our transform, we get i) the first TC whose \emph{timed} security is based on the existence of non-parallelizing languages and ii) the first TC that is plausibly post-quantum secure. We first define \emph{quasi publicly-verifiable} TLPs (QPV-TLPs) and construct them from any standard TLP in a black-box manner without relying on any additional assumptions. Then, we devise a black-box commit-and-prove system to transform any QPV-TLPs into a TC.
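As context for assumption (a), a minimal sketch of the classic Rivest-Shamir-Wagner repeated-squaring time-lock puzzle; the modulus below is toy-sized and far too small to be secure.

```python
import random

# Toy RSW-style time-lock puzzle from repeated squaring. The Mersenne
# primes below are for illustration only; a real instantiation uses a
# large RSA modulus whose factorization is kept secret (or discarded).
p, q = 2**31 - 1, 2**61 - 1
N, phi = p * q, (p - 1) * (q - 1)

def generate(secret, t):
    """Mask the secret with x^(2^t) mod N. Knowing phi(N), the generator
    shortcuts the t squarings by reducing the exponent mod phi(N)."""
    x = random.randrange(2, N)  # assume gcd(x, N) = 1 (overwhelmingly likely)
    mask = pow(x, pow(2, t, phi), N)
    return x, (secret + mask) % N

def solve(x, puzzle, t):
    """Force-open with t sequential squarings, believed non-parallelizable."""
    y = x % N
    for _ in range(t):
        y = y * y % N
    return (puzzle - y) % N

x, c = generate(secret=42, t=1000)
assert solve(x, c, t=1000) == 42
```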
2024
TCC
Bruisable Onions: Anonymous Communication in the Asynchronous Model
In onion routing, a message travels through the network via a series of intermediaries, wrapped in layers of encryption to make it difficult to trace. Onion routing is an attractive approach to realizing anonymous channels because it is simple and fault tolerant. So far, however, onion routing protocols that provably achieve anonymity in realistic adversary models have been known only for the synchronous model of communication. In this paper, we give the first onion routing protocol that achieves anonymity in the asynchronous model of communication. The key tool that our protocol relies on is a novel cryptographic object that we call bruisable onion encryption. The idea of bruisable onion encryption is that even though neither the onion's path nor its message content can be altered in transit, an intermediate router on the onion's path that observes that the onion is delayed can nevertheless slightly damage, or bruise, it. An onion that is chronically delayed will have been bruised by many intermediaries on its path and become undeliverable. This prevents timing attacks and, as we show, yields a provably secure onion routing protocol in the asynchronous setting.
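The bruising semantics described above can be summarized with a toy, non-cryptographic model; everything here, including the threshold, is our illustration of the mechanics, not the paper's construction.

```python
# Toy model of bruising: each hop observes whether the onion arrived late
# and, if so, bruises it; past a threshold the onion is dropped. Real
# bruisable onion encryption enforces this behavior cryptographically.
MAX_BRUISES = 2  # illustrative threshold

def route(path, delayed_at):
    """Forward an onion along `path`; routers at positions in `delayed_at`
    observe a delay and bruise the onion."""
    bruises = 0
    for hop in range(len(path)):
        if hop in delayed_at:
            bruises += 1
        if bruises > MAX_BRUISES:
            return None  # chronically delayed onions become undeliverable
    return "delivered"

assert route(["r1", "r2", "r3", "r4"], delayed_at={1}) == "delivered"
assert route(["r1", "r2", "r3", "r4"], delayed_at={0, 1, 2}) is None
```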
2024
TCC
Compact Key Storage in the Standard Model
In recent work [Crypto'24], Dodis, Jost, and Marcedone introduced Compact Key Storage (CKS) as a modern approach to backup for end-to-end (E2E) secure applications. As most E2E-secure applications rely on a sequence of secrets (s_1,...,s_n) from which, together with the ciphertexts sent over the network, all content can be restored, Dodis et al. introduced CKS as a primitive for backing up (s_1,...,s_n). The authors provided definitions as well as two practically efficient schemes (with different functionality-efficiency trade-offs). Both their security definitions and their schemes, however, relied on the random oracle model (ROM). In this paper, we first show that this reliance is inherent. More concretely, we argue that in the standard model, one cannot have a general CKS instantiation that is applicable to all "CKS-compatible games", as defined by Dodis et al., and realized by their ROM construction. Therefore, one must restrict the notion of CKS-compatible games to allow for standard-model CKS instantiations. We then introduce an alternative standard-model CKS definition that makes concessions in terms of functionality (thereby circumventing the impossibility). More precisely, we specify a CKS notion which does not recover the original secret s_i but a derived key k_i, and then observe that this still suffices for many real-world applications. We instantiate this new notion based on minimal assumptions. For passive security, we provide an instantiation based on one-way functions only. For stronger notions, we additionally need collision-resistant hash functions and dual-PRFs, which we argue to be minimal. Finally, we provide a modularization of the CKS protocols of Dodis et al. In particular, we present a unified protocol (and proof) for standard-model equivalents of both protocols introduced in the original work.
2024
TCC
Composability in Watermarking Schemes
Software watermarking allows for embedding a mark into a piece of code, such that any attempt to remove the mark will render the code useless. Provably secure watermarking schemes currently seem limited to programs computing various cryptographic operations, such as evaluating pseudorandom functions (PRFs), signing messages, or decrypting ciphertexts (the latter often going by the name ``traitor tracing''). Moreover, each of these watermarking schemes has an ad-hoc construction of its own. We observe, however, that many cryptographic objects are used as building blocks in larger protocols. We ask: just as we can compose building blocks to obtain larger protocols, can we compose watermarking schemes for the building blocks to obtain watermarking schemes for the larger protocols? We give an affirmative answer to this question, by precisely formulating a set of requirements that allow for composing watermarking schemes. We use our formulation to derive a number of applications.
2024
TCC
Consensus in the Presence of Overlapping Faults and Total Omission
Understanding the fault tolerance of Byzantine Agreement protocols is an important question in distributed computing. While the setting of Byzantine faults has been thoroughly explored in the literature, the (arguably more realistic) omission fault setting is far less studied. In this paper, we revisit the recent work of Loss and Stern, who gave the first protocol in the mixed fault model tolerating t Byzantine faults, s send faults, and r receive faults, when 2t+r+s<n and omission faults do not overlap. We observe that their protocol makes no guarantees when omission faults can overlap, i.e., when parties can simultaneously have send and receive faults. We give the first protocol that overcomes this limitation and tolerates the same number of potentially overlapping faults. We then study, for the first time, the total omission setting where all parties can become omission faulty. This setting is motivated by real-world scenarios where every party may experience connectivity issues from time to time, yet agreement should still hold for the parties who manage to output values. We show the first agreement protocol in this setting with parameters s<n and s+r=n. On the other hand, we prove that there is no consensus protocol for the total omission setting which tolerates even a single overlapping omission fault, i.e., where s+r=n+1 and s>2, or a broadcast protocol for s+r=n and s>1 even without overlapping faults.
2024
TCC
Cryptography in the Common Haar State Model: Feasibility Results and Separations
The common random string model is a popular model in classical cryptography. We study a quantum analogue of this model called the common Haar state (CHS) model. In this model, every party participating in the cryptographic system receives many copies of one or more i.i.d. Haar random states. We study the feasibility and limitations of cryptographic primitives in this model and its variants: 1) We present a construction of pseudorandom function-like states, that is optimal in terms of its query bound, with statistical security. As a consequence, by suitably instantiating the CHS model, we obtain a new approach to construct pseudorandom function-like states in the plain model. 2) We present new separations between pseudorandom function-like states (with super logarithmic length) and quantum cryptographic primitives, such as interactive key agreement and bit commitment, with classical communication. To show these separations, we show the indistinguishability of identical versus independent Haar states against LOCC (local operations, classical communication) adversaries.
2024
TCC
Depth-Reduction Algorithms for Directed Acyclic Graphs and Applications to Secure Multiparty Computation
We consider the graph-theoretic problem of removing (few) nodes from a directed acyclic graph in order to reduce its depth. While this problem is intractable in the general case, we provide a variety of algorithms in the case where the graph is that of a circuit of fan-in (at most) two, and explore applications of these algorithms to secure multiparty computation with low communication. Over the past few years, a paradigm for low-communication secure multiparty computation has found success based on decomposing a circuit into low-depth ``chunks''. This approach was however previously limited to circuits with a ``layered'' structure. Our graph-theoretic approach extends this paradigm to all circuits. In particular, we obtain the following contributions:
1) Fractionally linear-communication MPC in the correlated randomness model: We provide an $N$-party protocol for computing an $n$-input, $m$-output $\mathbb{F}$-arithmetic circuit with $s$ internal gates (over any basis of binary gates) with communication complexity $(\frac{2}{3}s + n + m)\cdot N\cdot\log |\mathbb{F}|$, which can be improved to $((1+\epsilon)\cdot\frac{2}{5}s+n+m)\cdot N\cdot\log |\mathbb{F}|$ (at the cost of increasing the computational overhead from a small constant factor to a large one). Previously, comparable protocols either used more than $s\cdot N\cdot \log |\mathbb{F}|$ bits of communication, required super-polynomial computation, were restricted to layered circuits, or tolerated a sub-optimal corruption threshold.
2) Sublinear-communication MPC: Assuming the existence of $N$-party Homomorphic Secret Sharing for logarithmic depth circuits (respectively doubly logarithmic depth circuits), we show there exists sublinear-communication secure $N$-party computation for \emph{all} $\log^{1+o(1)}$-depth (resp.~$(\log\log)^{1+o(1)}$-depth) circuits. Previously, this result was limited to $\mathcal{O}(\log)$-depth (resp.~$\mathcal{O}(\log\log)$-depth) circuits, or to circuits with a specific structure (e.g. layered).
3) The 1-out-of-$M$-OT complexity of MPC: We introduce the ``1-out-of-$M$-OT complexity of MPC'' of a function $f$, denoted $C_M(f)$, as the number of oracle calls required to securely compute $f$ in the 1-out-of-$M$-OT hybrid model. We establish the following upper bound: for every $M\geq 2$, $C_M(f) \leq (1+g(M))\cdot \frac{2 |f|}{5}$, where $g(M)$ is an explicit vanishing function.
We also obtain additional contributions to reducing the amount of bootstrapping for fully homomorphic encryption, and to other types of sublinear-communication MPC protocols such as those based on correlated symmetric private information retrieval.
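To make the chunking paradigm concrete, here is a minimal sketch of the easy layered case that prior work was restricted to; the function and the toy circuit are our own illustration.

```python
# In a layered circuit, wires only connect adjacent layers, so cutting at
# every d-th layer yields chunks of depth at most d, each of which can be
# handled by a low-depth sub-protocol. The paper's depth-reduction
# algorithms extend this decomposition to arbitrary fan-in-two circuits,
# where such a clean layer cut does not exist.
def chunk_layered(layers, d):
    """layers: list of layers (lists of gate ids); returns depth-<=d chunks."""
    return [layers[i:i + d] for i in range(0, len(layers), d)]

toy_circuit = [[0], [1, 2], [3, 4], [5], [6, 7], [8]]  # 6 layers of gate ids
print(chunk_layered(toy_circuit, d=2))
# [[[0], [1, 2]], [[3, 4], [5]], [[6, 7], [8]]]
```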
2024
TCC
Distributed Broadcast Encryption from Lattices
A broadcast encryption scheme allows a user to encrypt a message to $N$ recipients with a ciphertext whose size scales sublinearly with $N$. While broadcast encryption enables succinct encrypted broadcasts, it also introduces a strong trust assumption and a single point of failure; namely, there is a central authority who generates the decryption keys for all users in the system. Distributed broadcast encryption offers an appealing alternative where there is a one-time (trusted) setup process that generates a set of public parameters. Thereafter, users can independently generate their own public keys and post them to a public-key directory. Moreover, anyone can broadcast an encrypted message to any subset of user public keys with a ciphertext whose size scales sublinearly with the size of the broadcast set. Unlike traditional broadcast encryption, there are no long-term secrets in distributed broadcast encryption and users can join the system at any time (by posting their public key to the public-key directory). Previously, distributed broadcast encryption schemes were known from standard pairing-based assumptions or from powerful tools like indistinguishability obfuscation or witness encryption. In this work, we provide the first distributed broadcast encryption scheme from a falsifiable lattice assumption. Specifically, we rely on the $\ell$-succinct learning with errors (LWE) assumption introduced by Wee (CRYPTO 2024). Previously, the only lattice-based candidate for distributed broadcast encryption goes through general-purpose witness encryption, which in turn is only known from the private-coin evasive LWE assumption, a strong and non-falsifiable lattice assumption. Along the way, we also describe a more direct construction of broadcast encryption from lattices.
2024
TCC
Distributing Keys and Random Secrets with Constant Complexity
In the Distributed Secret Sharing Generation (DSG) problem n parties wish to obliviously sample a secret-sharing of a random value s taken from some finite field, without letting any of the parties learn s. Distributed Key Generation (DKG) is a closely related variant of the problem in which, in addition to their private shares, the parties also generate a public "commitment" g^s to the secret. Both DSG and DKG are central primitives in the domain of secure multiparty computation and threshold cryptography. In this paper, we study the communication complexity of DSG and DKG. Motivated by large-scale cryptocurrency and blockchain applications, we ask whether it is possible to obtain protocols in which the communication per party is a constant that does not grow with the number of parties. We answer this question in the affirmative in a model where broadcast communication is implemented via a public bulletin board (e.g., a ledger). Specifically, we present a constant-round DSG/DKG protocol in which the number of bits that each party sends/receives from the public bulletin board is a constant that depends only on the security parameter and the field size but does not grow with the number of parties n. In contrast, in all existing solutions at least some of the parties send $\Omega(n)$ bits. Our protocol works in the near-threshold setting. Given arbitrary privacy/correctness parameters $0<\tau_p<\tau_c<1$, the protocol tolerates up to $\tau_p n$ actively corrupted parties and delivers shares of a random secret according to some $\tau_p n$-private $\tau_c n$-correct secret sharing scheme, such that the adversary cannot bias the secret or learn anything about it. The protocol is based on non-interactive zero-knowledge proofs, non-interactive commitments, and a novel secret-sharing scheme with special robustness properties that is based on Low-Density Parity-Check codes. As a secondary contribution, we extend the formal MPC-based treatment of DKG/DSG, and study new aspects of Affine Secret Sharing Schemes.
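For contrast with the constant-communication result above, a sketch of the folklore baseline in which every party deals a sharing of a fresh random value and the shares are summed; here each party sends Theta(n) field elements, exactly the overhead the paper's protocol avoids. The field and parameters are illustrative.

```python
import random

P = 2**61 - 1  # prime field modulus (illustrative)

def deal(t, n):
    """One party's dealing: Shamir shares of a fresh random value."""
    coeffs = [random.randrange(P) for _ in range(t)]  # coeffs[0] is the value
    return [sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def dsg(t, n):
    """Party i's output is the sum of the i-th shares of all n dealings,
    i.e., a degree-(t-1) sharing of the (unknown) sum of the dealt values."""
    dealings = [deal(t, n) for _ in range(n)]
    return [sum(d[i] for d in dealings) % P for i in range(n)]

shares = dsg(t=3, n=7)  # a 3-out-of-7 sharing of a random secret
```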
2024
TCC
Doubly-Efficient Batch Verification in Statistical Zero-Knowledge
A sequence of recent works, concluding with Mu et al. (Eurocrypt, 2024), has shown that every problem $\Pi$ admitting a non-interactive statistical zero-knowledge proof (NISZK) has an efficient zero-knowledge \emph{batch verification} protocol. Namely, an NISZK protocol for proving that $x_1, \dots, x_k \in \Pi$ with communication that only scales poly-logarithmically with $k$. A caveat of this line of work is that the prover runs in exponential-time, whereas for NP problems it is natural to hope to obtain a \emph{doubly-efficient proof} -- that is, a prover that runs in polynomial-time given the $k$ NP witnesses. In this work we show that every problem in NISZK $\cap$ UP has a \emph{doubly-efficient} interactive statistical zero-knowledge proof with communication $\mathsf{poly}(n, \log(k))$ and $\mathsf{poly}(\log(k), \log(n))$ rounds. The prover runs in time $\mathsf{poly}(n, k)$ given access to the $k$ UP witnesses. Here $n$ denotes the length of each individual input, and UP is the subclass of NP relations in which YES instances have unique witnesses. This result yields doubly-efficient statistical zero-knowledge batch verification protocols for a variety of concrete and central cryptographic problems from the literature.
2024
TCC
Efficient Secure Communication Over Dynamic Incomplete Networks With Minimal Connectivity
We study the problem of implementing unconditionally secure reliable and private communication (and hence secure computation) in dynamic incomplete networks. Our model assumes that the network is always k-connected, for some k, but the concrete connection graph is adversarially chosen in each round of interaction. We show that, with n players and t malicious corruptions, perfectly secure communication is possible if and only if k > 2t. This disproves a conjecture from earlier work, that k > 3t is necessary. Our new protocols are much more efficient than previous work; in particular, we improve the round and communication complexity by an exponential factor (in n) in both the semi-honest and the malicious corruption setting, leading to protocols with polynomial complexity.
2024
TCC
From One-Time to Two-Round Reusable Multi-Signatures without Nested Forking
Multi-signature schemes are gaining significant interest due to their blockchain applications. Of particular interest are two-round schemes in the plain public-key model that offer key aggregation, and whose security is based on the hardness of the DLOG problem. Unfortunately, despite substantial recent progress, the security proofs of the proposed schemes provide rather insufficient concrete guarantees (especially for 256-bit groups). This frustrating situation has so far been approached either by relying on the security of seemingly-stronger assumptions or by considering restricted classes of attackers (e.g., algebraic attackers, which are assumed to provide an algebraic justification of each group element that they produce). We present a complementing approach by constructing multi-signature schemes that satisfy two relaxed notions of security, whose applicability nevertheless ranges from serving as drop-in replacements to enabling expressive smart contract validation procedures. Our first notion, one-time unforgeability, extends the analogous single-signer notion by considering attackers that obtain a single signature for some message and set of signers of their choice. We construct a non-interactive one-time scheme based on any ring-homomorphic one-way function, admitting efficient instantiations based on the DLOG and RSA assumptions. Aggregated verification keys and signatures consist of two group elements and a single group element, respectively, and our security proof consists of a single application of the forking lemma (thus avoiding the substantial security loss exhibited by the proposed two-round schemes). Additionally, we demonstrate that our scheme naturally extends to a $t$-time scheme, where aggregated verification keys consist of $t+1$ group elements, while aggregated signatures still consist of a single group element. Our second notion, single-set unforgeability, considers attackers that obtain any polynomial number of signatures but are restricted to a single set of signers of their choice. We transform any non-interactive one-time scheme into a two-round single-set scheme via a novel forking-free construction that extends the seminal Naor-Yung tree-based approach to the multi-signer setting. Aggregated verification keys are essentially identical to those of the underlying one-time scheme, and the length of aggregated signatures is determined by that of the underlying scheme while scaling linearly with the length of messages (noting that long messages can always be hashed using a collision-resistant function). Instantiated with our one-time scheme, we obtain aggregated verification keys and signatures whose lengths are completely independent of the number of signers.
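As a concrete reference point, here is a toy DLOG-style one-time scheme with additive aggregation that matches the parameters quoted above (two group elements per aggregated verification key, a single element per aggregated signature). It is our illustrative reconstruction under those constraints, not necessarily the paper's exact scheme, and the group is far too small to be secure.

```python
import hashlib, random

# Toy Schnorr-group parameters (far too small to be secure).
p, q, g = 2039, 1019, 4          # g generates the order-q subgroup of Z_p*

def H(m):                         # hash a message into Z_q
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % q

def keygen():
    x, y = random.randrange(q), random.randrange(q)
    return (x, y), (pow(g, x, p), pow(g, y, p))   # sk, pk = (X, Y)

def sign(sk, m):                  # ONE-TIME: two signatures leak sk
    x, y = sk
    return (x + H(m) * y) % q     # a single Z_q element

def agg_verify(pks, sigma, m):
    """Aggregate key: coordinate-wise product; aggregate sig: sum mod q.
    Checks g^sigma == (prod X_i) * (prod Y_i)^H(m)."""
    X = Y = 1
    for (Xi, Yi) in pks:
        X, Y = X * Xi % p, Y * Yi % p
    return pow(g, sigma, p) == X * pow(Y, H(m), p) % p

keys = [keygen() for _ in range(3)]
m = b"block header"
sigma = sum(sign(sk, m) for sk, _ in keys) % q
assert agg_verify([pk for _, pk in keys], sigma, m)
```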
2024
TCC
General Adversary Structures in Byzantine Agreement and Multi-Party Computation with Active and Omission Corruption
Typical results in multi-party computation (in short, MPC) capture faulty parties by assuming a threshold adversary corrupting parties actively and/or by fail-corrupting them. These corruption types are, however, inadequate for capturing correct parties that might suffer temporary network failures and/or localized faults--these are particularly relevant for MPC over large, global-scale networks. Omission faults and general adversary structures have been proposed as more suitable alternatives. However, to date, there is no characterization of the feasibility landscape combining the above ramifications of fault types and patterns. In this work we provide a tight characterization of the feasibility of MPC in the presence of general adversaries--characterized by an adversary structure--that combine omission and active corruption. To this end, we first provide a tight characterization of feasibility for Byzantine agreement (BA), a key tool in MPC protocols--this BA result can be of its own separate significance. Subsequently, we demonstrate that the common techniques employed in the threshold MPC literature to deal with omission corruptions do not work in the general adversary setting, not even for proving bounds that would appear straightforward, e.g., sufficiency of the well-known $Q^3$ condition on omission-only general adversaries. Nevertheless, we provide a new protocol that implements general adversary MPC under a surprisingly complex, yet (as we prove) tight, bound. All our results are for the classical synchronous model of computation. As a contribution of independent interest, our work puts forth, for the first time, a formal treatment of general-adversary MPC with (active and) omission corruptions in Canetti's universal composition framework.
2024
TCC
Hamming Weight Proofs of Proximity with One-Sided Error
We provide a broad and systematic study of proximity proofs with one-sided error for the Hamming weight problem Ham_alpha (the language of bit vectors with Hamming weight at least alpha), surpassing previously known results for this problem. We demonstrate the usefulness of the one-sided error property in applications: no malicious party can frame an honest prover as cheating by presenting verifier randomness that leads to a rejection. We show proofs of proximity for Ham_alpha with one-sided error and sublinear proof length in three models (MA, PCP, IOP), where stronger models allow for smaller query complexity. For n-bit input vectors, highlighting input query complexity, our MA proof system has O(log n) query complexity, the PCP makes O(loglog n) queries, and the IOP makes a single input query. The prover in all of our constructions runs in expected quasi-linear time. Additionally, we show that any perfectly complete IP of proximity for Ham_alpha with input query complexity n^{1-epsilon} has proof length Omega(log n). Furthermore, we study PCPs of proximity where the verifier is restricted to making a single input query (SIQ). We show that any SIQ-PCP for Ham_alpha must have a linear proof length, and complement this by presenting a SIQ-PCP with proof length n+o(n). As an application, we provide new methods that transform PCPs (and IOPs) for arbitrary languages with nonzero completeness error into PCPs (and IOPs) that exhibit perfect completeness. These transformations achieve parameters previously unattained.
2024
TCC
Hide-and-Seek and the Non-Resignability of the BUFF Transform
The BUFF transform, due to Cremers et al. (S&P'21), is a generic transformation for digital signature schemes, with the purpose of obtaining additional security guarantees beyond unforgeability: exclusive ownership, message-bound signatures, and non-resignability. Non-resignability (which essentially challenges an adversary to re-sign an unknown message for which it only obtains the signature) turned out to be a delicate matter, as recently Don et al. (CRYPTO'24) showed that the initial definition is essentially unachievable; in particular, it is not achieved by the BUFF transform. This led to the introduction of new, weakened versions of non-resignability, which are (potentially) achievable. In particular, it was shown that a salted variant of the BUFF transform does achieve some weakened version of non-resignability. However, the salting requires additional randomness and leads to slightly larger signatures. Whether the original BUFF transform also achieves some meaningful notion of non-resignability remained a natural open question. In this work, we answer this question in the affirmative. We show that the BUFF transform satisfies the (almost) strongest notions of non-resignability one can hope for, facing the known impossibility results. Our results cover both the statistical and the computational case, and both the classical and the quantum setting. At the core of our analysis lies a new security game for random oracles that we call Hide-and-Seek. While seemingly innocent at first glance, it turns out to be surprisingly challenging to rigorously analyze.
2024
TCC
Homomorphic Secret Sharing with Verifiable Evaluation
A homomorphic secret sharing (HSS) scheme allows a client to delegate a computation to a group of untrusted servers while achieving input privacy as long as at least one server is honest. In recent years, many HSS schemes have been constructed that have, in turn, found numerous applications to cryptography. Prior work on HSS focuses on the setting where the servers are semi-honest. In this work we lift HSS to the setting of malicious evaluators. We propose the notion of *HSS with verifiable evaluation* (ve-HSS) that guarantees correctness of output *even when all the servers are corrupted*. ve-HSS retains all the attractive features of HSS and adds the new feature of succinct (public) verification of output. We present *black-box* constructions of ve-HSS by devising generic transformations for semi-honest HSS schemes (with negligible error). This provides a new non-interactive method for verifiable and private outsourcing of computation.
2024
TCC
Indistinguishability Obfuscation from Bilinear Maps and LPN Variants
We construct an indistinguishability obfuscation (IO) scheme from the sub-exponential hardness of the decisional linear problem on bilinear maps together with two variants of the learning parity with noise (LPN) problem, namely large-field LPN and (binary-field) sparse LPN. This removes the need to assume the existence of polynomial-stretch PRGs in $\mathsf{NC}^0$ from the state-of-the-art construction of IO (Jain, Lin, and Sahai, EUROCRYPT 2022). As an intermediate step in our construction, we abstract away a notion of structured-seed polynomial-stretch PRGs in $\mathsf{NC}^0$ which is implied by both sparse LPN and the existence of polynomial-stretch PRGs in $\mathsf{NC}^0$. As immediate applications, from the sub-exponential hardness of the decisional linear assumption on bilinear groups, large-field LPN, and sparse LPN, we get alternative constructions of (a) FHE without lattices or circular security assumptions (Canetti, Lin, Tessaro, and Vaikuntanathan, TCC 2015), and (b) perfect zero-knowledge adaptively-sound Succinct Non-interactive Arguments (SNARGs) for NP (Waters and Wu, STOC 2024).
2024
TCC
Information-Theoretic Multi-Server Private Information Retrieval with Client Preprocessing
A private information retrieval (PIR) protocol allows a client to fetch any entry from one or more servers holding a public database (of size $n$) while ensuring no server learns any information about the client's query. Initial works on PIR focused on reducing the communication complexity of PIR schemes. However, standard PIR protocols are often impractical to use in applications involving large databases, due to their inherent large server-side computation complexity, which is at least linear in the database size. Hence, a line of research has focused on alternative PIR models that can achieve improved server complexity. The model of private information retrieval with client preprocessing has received a lot of interest beginning with the work of Corrigan-Gibbs and Kogan (Eurocrypt 2020). In this model, the client interacts with two servers in an offline phase and stores a local state, which it uses in the online phase to perform PIR queries. Constructions in this model achieve online client/server computation and bandwidth that is sublinear in the database size, at the cost of a one-time expensive offline phase. To date, all known constructions in this model are based on symmetric-key primitives or on stronger public-key assumptions like Decisional Diffie-Hellman (DDH) and Learning with Errors (LWE). This work initiates the study of unconditional PIR with client preprocessing, where we avoid using any cryptographic assumptions. We present a new PIR protocol for $2t$ servers (where $t \in [2,\log_2 n/2]$) with threshold 1, where client and server online computation is $\widetilde{O}(\sqrt{n})$ (the $\widetilde{O}(\cdot)$ notation hides polylogarithmic factors), matching the computation costs of other works based on cryptographic assumptions. The client storage and online communication complexity are $\widetilde{O}(n^{0.5+1/2t})$ and $\widetilde{O}(n^{1/2})$ respectively. Compared to previous works, our PIR with client preprocessing protocol also has a very concretely efficient client/server online computation phase, dominated by xor operations rather than cryptographic operations that are orders of magnitude slower. As a building block for our construction, we introduce a new information-theoretic primitive called a privately multi-puncturable random set, which might be of independent interest. This new primitive can be viewed as a generalization of the privately puncturable pseudo-random set, which is the key cryptographic building block used in previous works on PIR with client preprocessing.
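For background, a sketch of the classic information-theoretic two-server PIR underlying this line of work, in which each server XORs a random-looking subset of the database; client-preprocessing schemes replace the fresh random sets with preprocessed hints (such as the puncturable sets mentioned above) so that the online phase becomes sublinear. A toy version:

```python
import random

# Classic 2-server IT PIR (no preprocessing): the client sends a random
# set S to one server and S with the target index toggled to the other;
# each server's view is a uniformly random set, revealing nothing about i.
def query(n, i):
    S = {j for j in range(n) if random.random() < 0.5}
    return S, S ^ {i}              # symmetric difference toggles index i

def answer(db, S):
    out = 0
    for j in S:
        out ^= db[j]               # server XORs the selected entries
    return out

db = [random.randrange(2) for _ in range(16)]
S0, S1 = query(16, i=5)
assert answer(db, S0) ^ answer(db, S1) == db[5]
```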
2024
TCC
Instance-Hiding Interactive Proofs
In an Instance-Hiding Interactive Proof (IHIP) [Beaver et al. CRYPTO 90], an efficient verifier with a _private_ input x interacts with an unbounded prover to determine whether x is contained in a language L. In addition to completeness and soundness, the instance-hiding property requires that the prover should not learn anything about x in the course of the interaction. Such proof systems capture natural privacy properties, and may be seen as a generalization of the influential concept of Randomized Encodings [Ishai et al. FOCS 00, Applebaum et al. FOCS 04, Agrawal et al. ICALP 15], and as a counterpart to Zero-Knowledge proofs [Goldwasser et al. STOC 89]. We investigate the properties and power of such instance-hiding proofs, and show the following: 1. Any language with an IHIP is contained in NP/poly and coNP/poly. 2. If an average-case hard language has an IHIP, then One-Way Functions exist. 3. There is an oracle with respect to which there is a language that has an IHIP but not an SZK proof. 4. IHIPs are closed under composition with any efficiently computable function. We further study a stronger version of IHIP (that we call Simulatable IHIP) where the view of the honest prover can be efficiently simulated. For these, we obtain stronger versions of some of the above: 5. Any language with a Simulatable IHIP is contained in AM and coAM. 6. If a _worst-case_ hard language has a Simulatable IHIP, then One-Way Functions exist.
2024
TCC
Key-Homomorphic and Aggregate Verifiable Random Functions
A verifiable random function (VRF) allows one to compute a random-looking image, while at the same time providing a unique proof that the function was evaluated correctly. VRFs are a cornerstone of modern cryptography and, among other applications, are at the heart of recently proposed proof-of-stake consensus protocols. In this work we initiate the formal study of \emph{aggregate VRFs}, i.e., VRFs that allow for the aggregation of proofs/images into a small digest, whose size is \emph{independent} of the number of input proofs/images, yet still enables sound verification. We formalize this notion along with its security properties and we propose two constructions: The first scheme is conceptually simple, concretely efficient, and uses (asymmetric) bilinear groups of prime order. Pseudorandomness holds in the random oracle model and aggregate pseudorandomness is proven in the algebraic group model. The second scheme is in the standard model and is proven secure under the learning with errors (LWE) assumption. As a cryptographic building block of independent interest, we introduce the notion of \emph{key homomorphic VRFs}, where the verification keys and the proofs are endowed with a group structure. We conclude by discussing several applications of key-homomorphic and aggregate VRFs, such as distributed VRFs and aggregate proof-of-stake protocols.
2024
TCC
Limits on Adaptive Security for Attribute-Based Encryption
This work addresses the long quest for proving full (adaptive) security for attribute-based encryption (ABE). We show that in order to prove full security in a black-box manner, the scheme must be "irregular" in the sense that it is impossible to "validate" secret keys to ascertain consistent decryption of ciphertexts. This extends a result of Lewko and Waters (Eurocrypt 2014) that was only applicable to straight-line proofs (without rewinding). Our work, therefore, establishes that it is impossible to circumvent the irregularity property using creative proof techniques, so long as the adversary is used in a black-box manner. As a consequence, our work provides an explanation as to why some lattice-based ABE schemes cannot be proven fully secure, even though no known adaptive attacks exist.
2024
TCC
Low-degree Security of the Planted Random Subgraph Problem
The planted random subgraph detection conjecture of Abram et al. (TCC 2023) asserts the pseudorandomness of a pair of graphs $(H, G)$, where $G$ is an Erdős–Rényi random graph on $n$ vertices, and $H$ is a random induced subgraph of $G$ on $k$ vertices. Assuming the hardness of distinguishing these two distributions (with two leaked vertices), Abram et al. construct communication-efficient, computationally secure (1) 2-party private simultaneous messages (PSM) and (2) secret sharing for forbidden graph structures. We prove low-degree hardness of detecting planted random subgraphs all the way up to $k\leq n^{1 - \Omega(1)}$. This improves over Abram et al.'s analysis for $k \leq n^{1/2 - \Omega(1)}$. The hardness extends to $r$-uniform hypergraphs for constant $r$. Our analysis is tight in the distinguisher's degree, its advantage, and in the number of leaked vertices. Extending the constructions of Abram et al., we apply the conjecture towards (1) communication-optimal multiparty PSM protocols that are secure even against multiple random evaluations and (2) bit secret sharing with share size $(1 + \epsilon)\log n$ for any $\epsilon > 0$ in which arbitrary coalitions of up to $r$ parties can reconstruct and secrecy holds against all unqualified subsets of up to $\ell = o(\epsilon \log n)^{1/(r-1)}$ parties.
2024
TCC
Lower Bounds for Levin–Kolmogorov Complexity
The hardness of Kolmogorov complexity is intricately connected to the existence of one-way functions and derandomization. An important and elegant notion is Levin's version of Kolmogorov complexity, Kt, and its decisional variant, MKtP. The question whether MKtP can be computed in polynomial time is particularly interesting because it is not subject to known technical barriers such as algebrization or natural proofs that would explain the lack of a proof for MKtP ∉ P. We take a major step towards proving MKtP ∉ P by developing a novel yet simple diagonalization technique to show unconditionally that MKtP ∉ DTIME[O(n)], i.e., no deterministic linear-time algorithm can solve MKtP correctly on every instance. This allows us to affirm a conjecture by Ren and Santhanam [STACS 22] about a non-halting variant of Kt complexity. Additionally, we give conditional lower bounds for MKtP that tolerate either more runtime or one-sided error.
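For reference, the standard definition of Levin's Kt measure underlying MKtP, with U a fixed universal Turing machine; MKtP is the problem of deciding, given a string x and a threshold s, whether Kt(x) <= s.

```latex
\mathrm{Kt}(x) \;=\; \min_{p,\,t}\,\bigl\{\, |p| + \lceil \log t \rceil \;:\; U(p) \text{ outputs } x \text{ within } t \text{ steps} \,\bigr\}
```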
2024
TCC
Monotone Policy BARGs from BARGs and Additively Homomorphic Encryption
A monotone policy batch $\mathsf{NP}$ language $\mathcal{L}_{\mathcal{R}, P}$ is parameterized by a monotone policy $P \colon \{0,1\}^k \to \{0,1\}$ and an $\mathsf{NP}$ relation $\mathcal{R}$. A statement $(x_1, \ldots, x_k)$ is a \textsc{yes} instance if there exists $w_1, \ldots, w_k$ where $P(\mathcal{R}(x_1, w_1), \ldots, \mathcal{R}(x_k, w_k)) = 1$. For example, we might say that an instance $(x_1, \ldots, x_k)$ is a \textsc{yes} instance if a majority of the statements are true. A monotone policy batch argument (BARG) for $\mathsf{NP}$ allows a prover to prove that $(x_1, \ldots, x_k) \in \mathcal{L}_{\mathcal{R}, P}$ with a proof of size $\mathsf{poly}(\lambda, |\mathcal{R}|, \log k)$, where $\lambda$ is the security parameter, $|\mathcal{R}|$ is the size of the Boolean circuit that computes $\mathcal{R}$, and $k$ is the number of instances. Recently, Brakerski, Brodsky, Kalai, Lombardi, and Paneth (CRYPTO 2023) gave the first monotone policy BARG for $\mathsf{NP}$ from the learning with errors (LWE) assumption. In this work, we describe a generic approach for constructing monotone policy BARGs from any BARG for $\mathsf{NP}$ together with an additively homomorphic encryption scheme. This yields the first constructions of monotone policy BARGs from the $k$-$\mathsf{Lin}$ assumption in prime-order pairing groups as well as the (subexponential) DDH assumption in pairing-free groups. Central to our construction is a notion of a zero-fixing hash function, which is a relaxed version of a predicate-extractable hash function from the work of Brakerski et al. Our relaxation enables a direct realization of zero-fixing hash functions from BARGs for $\mathsf{NP}$ and additively homomorphic encryption, whereas the previous notion relied on leveled homomorphic encryption, and by extension, the LWE assumption. As an application, we also show how to combine a monotone policy BARG with a puncturable signature scheme to obtain a monotone policy aggregate signature scheme. Our work yields the first (statically-secure) monotone policy aggregate signatures that support general monotone Boolean circuits from standard pairing-based assumptions. Previously, this was only known from LWE.
2024
TCC
More Efficient Functional Bootstrapping for General Functions in Polynomial Modulus
Functional bootstrapping seamlessly integrates the benefits of homomorphic computation using a look-up table and the noise reduction capabilities of bootstrapping. Its wide-ranging applications in privacy-preserving protocols underscore its broad impact and significance. In this work, our objective is to craft more efficient and less restricted functional bootstrapping methods for general functions within a polynomial modulus. We introduce a series of novel techniques, proving that functional bootstrapping for general functions can be essentially as efficient as regular FHEW/TFHE bootstrapping. Our new algorithms operate within the realm of prime-power and odd composite cyclotomic rings, offering versatility without any additional requirements on input noise and message space beyond correct decryption.
2024
TCC
Multi-Authority Functional Encryption with Bounded Collusions from Standard Assumptions
Multi-Authority Functional Encryption (MAFE) [Chase, TCC '07; Lewko-Waters, Eurocrypt '11; Brakerski et al., ITCS '17] is a popular generalization of functional encryption (FE) with the central goal of decentralizing the trust assumption from a single central trusted key authority to a group of multiple, \emph{independent and non-interacting}, key authorities. Over the last several decades, we have seen tremendous advances in new designs and constructions for FE supporting different function classes, from a variety of assumptions and with varying levels of security. Unfortunately, the same has not been replicated in the multi-authority setting. The current scope of MAFE designs is rather limited, with positive results only known for certain attribute-based functionalities or from general-purpose code obfuscation. This state of the art in MAFE could be explained in part by the implication provided by Brakerski et al. (ITCS '17). It was shown that a general-purpose obfuscation scheme can be designed from any MAFE scheme for circuits, even if the MAFE scheme is secure only in a bounded-collusion model, where at most \emph{two} keys per authority get corrupted. In this work, we revisit the problem of MAFE and show that the existing implication from MAFE to obfuscation is not tight. We provide new methods to design MAFE for circuits from simple and minimal cryptographic assumptions. Our main contributions are summarized below:
(1) We design a $\mathsf{poly}(\lambda)$-authority MAFE for circuits in the bounded-collusion model. Under the existence of public-key encryption, we prove it to be statically simulation-secure. Further, if we assume sub-exponential security of public-key encryption, then we prove it to be adaptively simulation-secure in the Random Oracle Model.
(2) We design a $O(1)$-authority MAFE for circuits in the bounded-collusion model. Under the existence of 2-party or 3-party non-interactive key exchange and public-key encryption, we prove it to be adaptively simulation-secure.
(3) We provide a new generic bootstrapping compiler for MAFE for general circuits to design a simulation-secure $(n_1 + n_2)$-authority MAFE from any two $n_1$-authority and $n_2$-authority MAFE.
2024
TCC
Multi-Client Attribute-Based and Predicate Encryption from Standard Assumptions
Multi-input Attribute-Based Encryption (ABE) is a generalization of key-policy ABE where attributes can be independently encrypted across several ciphertexts, and a joint decryption of these ciphertexts is possible if and only if the combination of attributes satisfies the policy of the decryption key. We extend this model by introducing a new primitive that we call Multi-Client ABE (MC-ABE), which provides the usual enhancements of multi-client functional encryption over multi-input functional encryption. Specifically, we separate the secret keys that are used by the different encryptors and consider the case that some of them may be corrupted by the adversary. Furthermore, we tie each ciphertext to a label and enable a joint decryption of ciphertexts only if all ciphertexts share the same label. We provide constructions of MC-ABE for various policy classes based on SXDH. Notably, we can deal with policies that are not a conjunction of local policies, which has been a limitation of previous constructions from standard assumptions. Subsequently, we introduce the notion of Multi-Client Predicate Encryption (MC-PE) which, in contrast to MC-ABE, guarantees not only message-hiding but also attribute-hiding. We present a new compiler that turns any constant-arity MC-ABE into an MC-PE for the same arity and policy class. Security is proven under the LWE assumption.
2024
TCC
New Upper Bounds for Evolving Secret Sharing via Infinite Branching Programs
Evolving secret-sharing schemes, defined by Komargodski, Naor, and Yogev [TCC 2016B], are secret-sharing schemes in which there is no a priori bound on the number of parties. In such schemes, parties arrive one by one; when a party arrives, the dealer gives it a share and cannot update this share in later stages. The requirement is that some predefined sets (called authorized sets) should be able to reconstruct the secret, while other sets should learn no information on the secret. The collection of authorized sets that can reconstruct the secret is called an evolving access structure. The dealer's challenge is to give short shares to the current parties without knowing how many parties will arrive in the future. The requirement that the dealer cannot update shares is designed to prevent expensive updates. Komargodski et al. constructed an evolving secret-sharing scheme for every monotone evolving access structure; the share size of the $t$-th party in this scheme is $2^{t-1}$. Recently, Mazor [ITC 2023] proved that evolving secret-sharing schemes require exponentially long shares for some evolving access structures, namely shares of size $2^{t-o(t)}$. In light of these results, our goal is to construct evolving secret-sharing schemes with non-trivial share size for wide classes of evolving access structures, e.g., schemes with share size $2^{ct}$ for $c<1$ or even polynomial size. We provide several results achieving this goal: (1) We define layered infinite branching programs representing evolving access structures, show how to transform them into generalized infinite decision trees, and show how to construct evolving secret-sharing schemes for generalized infinite decision trees. Combining these steps, we get a secret-sharing scheme realizing the evolving access structure. As an application of this framework, we construct an evolving secret-sharing scheme with non-trivial share size for access structures that can be represented by layered infinite branching programs with width at layer $t$ of at most $2^{0.15t}$. If the width is polynomial, then we get an evolving secret-sharing scheme with quasi-polynomial share size. (2) We construct efficient evolving secret-sharing schemes for dynamic-threshold access structures with high dynamic threshold and for infinite 2-slice and 3-slice access structures. (3) We prove lower bounds on the share size of evolving secret-sharing schemes for infinite $k$-hypergraph access structures and for infinite directed st-connectivity access structures. As a by-product of the lower bounds, we provide the first non-trivial lower bound for \emph{finite} directed st-connectivity access structures for general secret-sharing schemes.
2024
TCC
On black-box separations of quantum digital signatures from pseudorandom states
It is well-known that digital signatures can be constructed from one-way functions in a black-box way. While one-way functions are essentially the minimal assumption in classical cryptography, this is not the case in the quantum setting. A variety of qualitatively weaker and inherently quantum assumptions (e.g., EFI pairs, one-way state generators, and pseudorandom states) are known to be sufficient for non-trivial quantum cryptography. While it is known that commitments, zero-knowledge proofs, and even multiparty computation can be constructed from these assumptions, it has remained an open question whether the same is true for quantum digital signature schemes (QDS). In this work, we show that there does not exist a black-box construction of a QDS scheme with classical signatures from pseudorandom states with linear, or greater, output length. Our result complements that of Morimae and Yamakawa (2022), who described a one-time-secure QDS scheme with classical signatures but left open the question of constructing a standard multi-time-secure one.
2024
TCC
On Bounded Storage Key Agreement and One-Way Functions
We study key agreement in the bounded-storage model, where the participants and the adversary can use an a priori fixed bounded amount of space, and receive a large stream of data. While key agreement is known to exist unconditionally in this model (Cachin and Maurer, Crypto'97), there are strong lower bounds on the space complexity of the participants, round complexity, and communication complexity that unconditional protocols can achieve. In this work, we explore how a minimal use of cryptographic assumptions can help circumvent these lower bounds. We obtain several contributions: - Assuming one-way functions, we construct a one-round key agreement in the bounded-storage model, with arbitrary polynomial space gap between the participants and the adversary, and communication slightly larger than the adversarial storage. Additionally, our protocol can achieve everlasting security using a second streaming round. - In the other direction, we show that one-way functions are \emph{necessary} for key agreement in the bounded-storage model with large space gaps. We further extend our results to the setting of \emph{fully-streaming} adversaries, and to the setting of key agreement with multiple streaming rounds. Our results rely on a combination of information-theoretic arguments and technical ingredients such as pseudorandom generators for space-bounded computation, and a tight characterization of the space efficiency of known reductions between standard Minicrypt primitives (from distributional one-way functions to pseudorandom functions), which might be of independent interest.
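To make the bounded-storage setting concrete, here is a minimal sketch of the classic unconditional approach in the spirit of Cachin and Maurer: both parties store the public random stream at a few random positions and keep the bits where their position sets intersect. This only illustrates the model; the paper's OWF-based protocol is different and achieves much stronger parameters (e.g., an arbitrary polynomial space gap in one round). All concrete numbers below are made up for illustration.
\begin{verbatim}
import secrets

def sample_positions(stream_len, storage):
    # Each party independently stores the bits at `storage` random positions.
    return set(secrets.randbelow(stream_len) for _ in range(storage))

def run_bsm_key_agreement(stream_len=100_000, storage=1_000):
    # The broadcast stream: in the model it is a huge public random string
    # that no bounded-storage party (or adversary) can store in full.
    stream = [secrets.randbelow(2) for _ in range(stream_len)]

    alice_pos = sample_positions(stream_len, storage)
    bob_pos = sample_positions(stream_len, storage)

    # After the stream has passed, the parties publicly exchange their
    # position sets; a bounded-storage adversary is unlikely to have
    # stored the bits at the intersection. Privacy amplification omitted.
    common = sorted(alice_pos & bob_pos)
    key = [stream[i] for i in common]
    return common, key

positions, key = run_bsm_key_agreement()
print(f"{len(key)} raw shared key bits at positions {positions[:5]}...")
\end{verbatim}
With these toy parameters the expected intersection has about $1000^2/100{,}000 = 10$ positions, which is why unconditional protocols need communication and storage far larger than the final key.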
2024
TCC
On One-way Functions and the Worst-case Hardness of Time-Bounded Kolmogorov Complexity, and Computational Depth
Whether one-way functions (OWF) exist is arguably the most important problem in cryptography, and beyond. While many candidate constructions of one-way functions are known, and recently also problems whose average-case hardness characterizes the existence of OWFs have been demonstrated, the question of whether there exists some \emph{worst-case hard problem} that characterizes the existence of one-way functions has remained open since their introduction in 1976. In this work, we present the first ``OWF-complete'' promise problem---a promise problem whose worst-case hardness w.r.t. $\BPP$ (resp. $\Ppoly$) is \emph{equivalent} to the existence of OWFs secure against $\PPT$ (resp. $\nuPPT$) algorithms. The problem is a variant of the Minimum Time-bounded Kolmogorov Complexity problem ($\mktp[s]$ with a threshold $s$), where we condition on instances having small ``computational depth''. We furthermore show that, depending on the choice of the threshold $s$, this problem characterizes either ``standard'' (polynomially-hard) OWFs, or quasi-polynomially- or sub-exponentially-hard OWFs. Additionally, when the threshold is sufficiently small (e.g., $2^{O(\sqrt{n})}$ or $\poly\log n$), \emph{sublinear} hardness of this problem suffices to characterize quasi-polynomial/sub-exponential OWFs. While our constructions are black-box, our analysis is \emph{non-black-box}; we additionally demonstrate that fully black-box constructions of OWF from the worst-case hardness of this problem are impossible. We finally show that, under Rudich's conjecture and standard derandomization assumptions, our problem is not inside $\coAM$; as such, it yields the first candidate problem believed to be outside of $\AM \cap \coAM$, or even ${\bf SZK}$, whose worst-case hardness implies the existence of OWFs.
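As background for the notions above, the standard definitions read as follows (stated informally here; the paper's precise variants may differ in low-order details). The $t$-time-bounded Kolmogorov complexity of a string $x$ is
\[ K^{t}(x) = \min\{\, |p| : U(p) \text{ outputs } x \text{ within } t(|x|) \text{ steps} \,\} \]
for a fixed universal machine $U$, and the (time-bounded) computational depth of $x$ is the gap $\mathrm{cd}^{t}(x) = K^{t}(x) - K(x)$, where $K$ denotes plain Kolmogorov complexity. Strings of small depth are those that can be compressed nearly optimally by descriptions that also decompress quickly, which is the sense in which the promise problem conditions on "shallow" instances.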
2024
TCC
On the (Im)possibility of Game-Theoretically Fair Leader Election Protocols
We consider the problem of electing a leader among $n$ parties with the guarantee that each (honest) party has a reasonable probability of being elected, even in the presence of a coalition that controls a subset of parties and tries to bias the output. This notion is called ``game-theoretic fairness'' because such protocols ensure that following the honest behavior is an equilibrium and also the best response for every party and coalition. In the two-party case, Blum's commit-and-reveal protocol (where if one party aborts, then the other is declared the leader) satisfies this notion, and it is also known that one-way functions are necessary. Recent works study this problem in the multi-party setting. They show that composing Blum's 2-party protocol for $\log n$ rounds in a tournament-tree-style manner results in \emph{perfect game-theoretic fairness}: each honest party has probability $\ge 1/n$ of being elected as leader, no matter how large the coalition is. Logarithmic round complexity is also shown to be necessary if we require perfect fairness against a coalition of size $n-1$. Relaxing both of the above requirements, i.e., settling for approximate game-theoretic fairness and guaranteeing fairness only against coalitions of constant fraction size, it is known that there are $O(\log^* n)$-round protocols. This leaves many open problems, in particular whether one can go below logarithmic round complexity by relaxing only one of the two strong requirements above. We resolve this problem for commit-and-reveal style protocols, showing that \begin{itemize} \item $\Omega(\log n/\log\log n)$ rounds are necessary if we settle for approximate fairness against very large (more than constant fraction) coalitions; \item $\Omega(\log n)$ rounds are necessary if we settle for perfect fairness against $n^\epsilon$-size coalitions (for any constant $\epsilon>0$). \end{itemize} These show that both relaxations made in prior works are necessary to go below logarithmic round complexity. Lastly, we provide several additional upper and lower bounds for the case of single-round commit-and-reveal style protocols.
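To make the tournament-tree composition concrete, here is a minimal sketch (a toy model, not the papers' protocol: a hash-based commitment stands in for a real commitment scheme, and messaging is idealized). A party that aborts forfeits its match, which is exactly what makes honest play a best response.
\begin{verbatim}
import hashlib, secrets

def commit(bit):
    # Toy hash-based commitment (stand-in for a real commitment scheme).
    r = secrets.token_bytes(32)
    return hashlib.sha256(bytes([bit]) + r).digest(), (bit, r)

def blum_match(a, b, aborts):
    # Blum's 2-party coin flip: a commits, b responds, a opens; XOR decides.
    com, opening = commit(secrets.randbelow(2))
    if b in aborts:
        return a                      # b never sends its bit: a wins
    bit_b = secrets.randbelow(2)
    if a in aborts:
        return b                      # a never opens its commitment: b wins
    bit_a, r = opening
    assert hashlib.sha256(bytes([bit_a]) + r).digest() == com
    return a if (bit_a ^ bit_b) == 0 else b

def tournament_leader(parties, aborts=frozenset()):
    # log(n) rounds of pairwise matches; winners advance.
    while len(parties) > 1:
        nxt = [blum_match(parties[i], parties[i + 1], aborts)
               for i in range(0, len(parties) - 1, 2)]
        if len(parties) % 2:          # odd party out gets a bye this round
            nxt.append(parties[-1])
        parties = nxt
    return parties[0]

print(tournament_leader([f"P{i}" for i in range(8)]))
\end{verbatim}
In this idealized sketch each honest party survives each round with probability at least 1/2, hence wins the whole bracket with probability at least $1/n$, matching the perfect-fairness guarantee quoted above.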
2024
TCC
On the Black-Box Complexity of Private-Key Inner-Product Functional Encryption
We initiate the study of the black-box complexity of private-key functional encryption (FE). Of central importance in the private-key setting is the inner-product functionality, which is currently only known from assumptions that imply public-key encryption, such as Decisional Diffie-Hellman or Learning-with-Errors. As our main result, we rule out black-box constructions of private-key inner-product FE from random oracles. This implies a black-box separation of private-key inner-product FE from all symmetric-key primitives implied by random oracles (e.g., symmetric-key encryption and collision-resistant hash functions). Proving lower bounds for private-key functional encryption schemes introduces challenges that were absent in prior works. In particular, the combinatorial techniques developed by prior works for proving black-box lower bounds are only useful in the public-key and predicate encryption settings, and they all fail in the private-key FE case. Our work develops novel combinatorial techniques based on Fourier analysis to overcome these barriers. We expect these techniques to be widely useful in future research in this area.
2024
TCC
One-Way Functions and pKt Complexity
We introduce pKt complexity, a new notion of time-bounded Kolmogorov complexity that can be seen as a probabilistic analogue of Levin's Kt complexity. Using pKt complexity, we upgrade two recent frameworks that characterize one-way functions (OWF) via symmetry of information and meta-complexity, respectively. Among other contributions, we establish the following results: (i) OWF can be based on the worst-case assumption that BPEXP is not contained infinitely often in P/poly if the failure of symmetry of information for pKt in the worst-case implies its failure on average. (ii) (Infinitely-often) OWF exist if and only if the average-case easiness of approximating pKt with two-sided error implies its (mild) average-case easiness with one-sided error. Previously, in a celebrated result, Liu and Pass (CRYPTO 2021 and CACM 2023) proved that one can base (infinitely often) OWF on the assumption that EXP is not contained in BPP if and only if there is a reduction from computing Kt on average with zero error to computing Kt on average with two-sided error. In contrast, our second result shows that closing the gap between two-sided error and one-sided error average-case algorithms for approximating pKt is both necessary and sufficient to unconditionally establish the existence of OWF.
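As background, Levin's Kt complexity, of which pKt is the paper's probabilistic analogue, charges a program for both its length and the logarithm of its running time:
\[ \mathrm{Kt}(x) = \min_{p,\,t}\{\, |p| + \lceil \log t \rceil : U(p) \text{ outputs } x \text{ within } t \text{ steps} \,\} \]
for a fixed universal machine $U$. Roughly, pKt additionally grants the program access to a uniformly random string and asks the bound to hold with good probability over that string; see the paper for the precise definition.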
2024
TCC
Perfectly-Secure MPC with Constant Online Communication Complexity
In this work, we study the communication complexity of perfectly secure MPC protocols with guaranteed output delivery against $t=(n-1)/3$ corruptions. The previous best-known result in this setting is due to Goyal, Liu, and Song (CRYPTO, 2019), which achieves $O(n)$ communication per gate, where $n$ is the number of parties. On the other hand, in the honest-majority setting, a recent trend in designing efficient MPC protocols is to rely on packed Shamir sharings to speed up the online phase. In particular, the work by Escudero et al. (CCS 2022) gives the first semi-honest protocol that achieves a constant communication overhead per gate across all parties in the online phase while maintaining overall $O(n)$ communication per gate. We thus ask the following question: ``Is it possible to construct a perfectly secure MPC protocol with GOD such that the online communication per gate is $O(1)$ while maintaining overall $O(n)$ communication per gate?'' In this work, we give an affirmative answer by providing an MPC protocol with communication complexity $O(|C|+\mathsf{Depth}\cdot n+n^5 \cdot \log n)$ elements for the online phase, and $O(|C|\cdot n+\mathsf{Depth}\cdot n^2 + n^5 \cdot \log n)$ elements for the preprocessing phase, where $|C|$ is the circuit size and $\mathsf{Depth}$ is the circuit depth.
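To see why this answers the question affirmatively, note that the stated online cost amortizes to a constant number of elements per gate as soon as the circuit size dominates the additive terms:
\[ \frac{O(|C| + \mathsf{Depth}\cdot n + n^{5}\log n)}{|C|} = O(1) \qquad \text{whenever } |C| = \Omega(\mathsf{Depth}\cdot n + n^{5}\log n), \]
and in the same regime the preprocessing cost stays at $O(n)$ elements per gate.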
2024
TCC
Quantum Key-Revocable Dual-Regev Encryption, Revisited
Quantum information can be used to achieve novel cryptographic primitives that are impossible to achieve classically. A recent work by Ananth, Poremba, and Vaikuntanathan (TCC 2023) focuses on equipping the dual-Regev encryption scheme, introduced by Gentry, Peikert, and Vaikuntanathan (STOC 2008), with key-revocation capabilities using quantum information. They further showed that the key-revocable dual-Regev scheme implies the existence of fully homomorphic encryption and pseudorandom functions, with both of them also equipped with key-revocation capabilities. Unfortunately, they were only able to prove the security of their schemes based on new conjectures, and left open the problem of basing the security of key-revocable dual-Regev encryption on well-studied assumptions. In this work, we resolve this open problem. Assuming polynomial hardness of learning with errors (over a sub-exponential modulus), we show that key-revocable dual-Regev encryption is secure. As a consequence, for the first time, we achieve the following results: - Key-revocable public-key encryption and key-revocable fully homomorphic encryption satisfying classical revocation security and based on the polynomial hardness of learning with errors. Prior works either did not achieve classical revocation or were based on sub-exponential hardness of learning with errors. - Key-revocable pseudorandom functions satisfying classical revocation from the polynomial hardness of learning with errors. Prior works relied upon unproven conjectures.
2024
TCC
Quantum Pseudorandom Scramblers
Quantum pseudorandom state generators (PRSGs) have stimulated exciting developments in recent years. A PRSG, on a fixed initial (e.g., all-zero) state, produces an output state that is computationally indistinguishable from a Haar-random state. However, pseudorandomness of the output state is not guaranteed on other initial states; in fact, known PRSG constructions provably fail on some initial states. In this work, we propose and construct quantum Pseudorandom State Scramblers (PRSSs), which can produce a pseudorandom state on an arbitrary initial state. In the information-theoretic setting, we obtain a scrambler that maps an arbitrary initial state to a distribution of quantum states that is close to Haar random in total variation distance. As a result, our scrambler exhibits a dispersing property: loosely, it can span an $\epsilon$-net of the state space. This significantly strengthens what standard PRSGs can achieve, as they may concentrate on a small region of the state space as long as the average output state approximates a Haar-random state. Our PRSS construction develops a parallel extension of the famous Kac's walk, and we show that it mixes exponentially faster than the standard Kac's walk. This constitutes the core of our proof. We also describe a few applications of PRSSs. While our PRSS construction assumes a post-quantum one-way function, PRSSs are potentially a weaker primitive and can be separated from one-way functions in a relativized world, similar to standard PRSGs.
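For intuition about the underlying mixing process, the following sketches the standard (classical) Kac's walk on the unit sphere: repeatedly pick a random coordinate pair and rotate it by a random angle. The paper's contribution is a parallel, quantum extension with exponentially faster mixing, which this toy version does not capture.
\begin{verbatim}
import math, random

def kac_walk_step(v):
    # One step of Kac's walk: rotate a random coordinate pair by a
    # uniformly random angle; the walk mixes toward the uniform
    # measure on the sphere while preserving the norm.
    i, j = random.sample(range(len(v)), 2)
    theta = random.uniform(0, 2 * math.pi)
    vi, vj = v[i], v[j]
    v[i] = math.cos(theta) * vi + math.sin(theta) * vj
    v[j] = -math.sin(theta) * vi + math.cos(theta) * vj
    return v

# Start from a fixed basis vector and take a few thousand steps.
n = 16
v = [1.0] + [0.0] * (n - 1)
for _ in range(5000):
    kac_walk_step(v)
print(sum(x * x for x in v))  # norm is preserved: prints ~1.0
\end{verbatim}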
2024
TCC
Rate-1 Arithmetic Garbling From Homomorphic Secret Sharing
We present a new approach to garbling arithmetic circuits using techniques from homomorphic secret sharing, obtaining constructions with high rate that support free addition gates. In particular, we build upon non-interactive protocols for computing distributed discrete logarithms in groups with an easy discrete-log subgroup, further demonstrating the versatility of tools from homomorphic secret sharing. Relying on distributed discrete log for the Damgård-Jurik cryptosystem (Roy and Singh, Crypto'21), whose security follows from the decisional composite residuosity assumption (DCR), we get the following main results: 1) [Two ciphertexts per multiplication, from IND-CPA security of Damgård-Jurik.] Assuming the Damgård-Jurik cryptosystem is semantically secure (which follows from DCR), there is a garbling scheme for circuits with B-bounded integer arithmetic using only two ciphertexts per multiplication. The total bit-size of the resulting garbled circuit is: $(n + 2s_\times+2D_\times)\cdot (\zeta + 1) \cdot \log N$, where n is the number of inputs, $s_\times$ is the number of multiplications, $D_\times$ is the multiplicative depth of the circuit, N is an RSA modulus and $N^{\zeta-1}$ is a rough bound on the magnitude of wire values in the computation. 2) [One ciphertext per multiplication, from KDM security of Damgård-Jurik.] Assuming the Damgård-Jurik encryption scheme remains secure given encryption of the key and its inverse, the construction achieves rate-1. The total bit-size of the resulting garbled circuit is: $(n + s_\times + 1) \cdot (\zeta + 1) \cdot \log N$, where the parameters are as above, except $N^{\zeta-2}$ is the magnitude bound.
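The two size formulas above can be compared directly. The following plugs illustrative (made-up) parameters into them, showing how the KDM-based variant roughly halves the cost per multiplication; the parameter values are arbitrary and only the formulas come from the result statements.
\begin{verbatim}
def size_indcpa(n, s_mul, depth, zeta, log_N):
    # (n + 2*s_x + 2*D_x) * (zeta + 1) * log N  -- two ciphertexts per mult.
    return (n + 2 * s_mul + 2 * depth) * (zeta + 1) * log_N

def size_kdm(n, s_mul, zeta, log_N):
    # (n + s_x + 1) * (zeta + 1) * log N        -- one ciphertext per mult.
    return (n + s_mul + 1) * (zeta + 1) * log_N

# Hypothetical circuit: 100 inputs, 10^4 multiplications, depth 50,
# a 3072-bit RSA modulus, and magnitude bound parameter zeta = 4.
n, s_mul, depth, zeta, log_N = 100, 10_000, 50, 4, 3072
print(size_indcpa(n, s_mul, depth, zeta, log_N))  # bits, IND-CPA variant
print(size_kdm(n, s_mul, zeta, log_N))            # bits, KDM variant (~half)
\end{verbatim}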
2024
TCC
Rate-1 Zero-Knowledge Proofs from One-Way Functions
We show that every NP relation that can be verified by a bounded-depth polynomial-sized circuit, or a bounded-space polynomial-time algorithm, has a computational zero-knowledge proof (with statistical soundness) with communication that is only additively larger than the witness length. Our construction relies only on the minimal assumption that one-way functions exist. In more detail, assuming one-way functions, we show that every NP relation that can be verified in NC has a zero-knowledge proof with communication $|w|+poly(\lambda,\log(|x|))$, and relations that can be verified in SC have a zero-knowledge proof with communication $|w|+|x|^\epsilon \cdot poly(\lambda)$. Here $\epsilon>0$ is an arbitrarily small constant and $\lambda$ denotes the security parameter. As an immediate corollary, we also get that any NP relation with a size-$S$ verification circuit (using unbounded fan-in XOR, AND, and OR gates) has a zero-knowledge proof with communication $S+poly(\lambda,\log(S))$. Our result improves on a recent result of Nassar and Rothblum (Crypto, 2022), which achieves length $(1+\epsilon) \cdot |w|+|x|^\epsilon \cdot poly(\lambda)$ for bounded-space computations, and is also considerably simpler. Building on a work of Hazay et al. (TCC 2023), we also give a more complicated version of our result in which the parties make only black-box use of the one-way function, though in this case we achieve only an inverse-polynomial soundness error.
2024
TCC
Real-Valued Somewhat-Pseudorandom Unitaries
We explore a very simple distribution of unitaries: random (binary) phase -- Hadamard -- random (binary) phase -- random computational-basis permutation. We show that this distribution is statistically indistinguishable from Haar-random unitaries for any polynomial set of orthogonal input states (in any basis) with polynomial multiplicity. This shows that even though real-valued unitaries cannot be completely pseudorandom (Haug, Bharti, Koh, arXiv:2306.11677), we can still obtain some pseudorandom properties without giving up on the simplicity of a real-valued unitary. Our analysis shows that an even simpler construction, a random (binary) phase followed by a random computational-basis permutation, would suffice, assuming that the input is orthogonal and \emph{flat} (that is, has high min-entropy when measured in the computational basis). Using quantum-secure one-way functions (which imply quantum-secure pseudorandom functions and permutations), we obtain an efficient cryptographic instantiation of the above.
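The sampled matrix is simple enough to write down directly. Below is a small numpy sketch for $n$ qubits (illustrative only and exponential-size in $n$; the paper's cryptographic instantiation replaces the truly random phases and permutation with pseudorandom ones):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def random_binary_phase(dim):
    # Diagonal matrix of uniformly random +-1 phases.
    return np.diag(rng.choice([-1.0, 1.0], size=dim))

def hadamard_tensor(n):
    h = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    out = np.array([[1.0]])
    for _ in range(n):
        out = np.kron(out, h)
    return out

def sample_unitary(n):
    # phase -- Hadamard -- phase -- computational-basis permutation,
    # composed as matrices (rightmost factor is applied first).
    dim = 2 ** n
    perm = np.eye(dim)[rng.permutation(dim)]
    return (perm @ random_binary_phase(dim)
                 @ hadamard_tensor(n) @ random_binary_phase(dim))

U = sample_unitary(3)
print(np.allclose(U @ U.T, np.eye(8)))  # real orthogonal: prints True
\end{verbatim}
Since every factor is a real orthogonal matrix, so is the product, which is exactly why the construction stays real-valued.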
2024
TCC
Reducing the Share Size of Weighted Threshold Secret Sharing Schemes via Chow Parameters Approximation
A secret-sharing scheme is a cryptographic primitive that allows a dealer to share a secret among a set of parties, so that only authorized subsets of them can recover it. The access structure of the scheme is the family of authorized subsets. In a weighted threshold access structure, each party is assigned a weight according to its importance, and the authorized subsets are those in which the sum of the weights is at least the threshold value. For these access structures, the share size of the best known secret-sharing schemes is either linear in the weights or quasipolynomial in the number of parties, which leads to long shares in general. In certain settings, a way to circumvent this efficiency problem is to approximate the access structure by another one that admits more efficient schemes. This work is dedicated to the open problem posed by this strategy: finding secret-sharing schemes with a good tradeoff between the efficiency and the accuracy of the approximation. We present a method to approximate weighted threshold access structures by others that admit schemes with small shares. This method is based on the techniques for the approximation of the Chow parameters developed by De et al. [Journal of the ACM, 2014]. Our method provides secret-sharing schemes with share size $n^{1+o(1)}$, where $n$ is the number of parties, and whose access structure is \emph{close} to the original one. Namely, in this approximation the condition of being authorized or not is preserved for almost all subsets of parties. In addition, applying the recent results on computational secret-sharing schemes by Applebaum et al. [STOC, 2023], we show that there exist computational secret-sharing schemes whose security is based on the RSA assumption and whose share size is polylogarithmic in the number of parties.
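To fix the definition in code, a weighted threshold access structure is fully described by a weight map and a threshold; a subset is authorized exactly when its total weight reaches the threshold (a minimal sketch with made-up weights):
\begin{verbatim}
def is_authorized(weights, threshold, subset):
    # Authorized iff the members' weights sum to at least the threshold.
    return sum(weights[p] for p in subset) >= threshold

weights = {"A": 5, "B": 3, "C": 3, "D": 1}
print(is_authorized(weights, 6, {"A", "D"}))       # True  (5 + 1 >= 6)
print(is_authorized(weights, 6, {"B", "C", "D"}))  # True  (3 + 3 + 1 >= 6)
print(is_authorized(weights, 6, {"B", "D"}))       # False (3 + 1 < 6)
\end{verbatim}
The approximation strategy above replaces the weights and threshold with nearby values whose access structure admits shares of size $n^{1+o(1)}$, flipping the authorized status of only a small fraction of subsets.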
2024
TCC
Robust Combiners and Universal Constructions for Quantum Cryptography
A robust combiner combines many candidates for a cryptographic primitive and generates a new candidate for the same primitive. Its correctness and security hold as long as at least one of the original candidates satisfies correctness and security. A universal construction is a notion closely related to a robust combiner: a universal construction for a primitive is an explicit construction of the primitive that is correct and secure as long as the primitive exists at all. It is known that a universal construction for a primitive can be obtained from a robust combiner for that primitive in many cases. Although robust combiners and universal constructions for classical cryptography are widely studied, their quantum analogues have not been explored so far. In this work, we define robust combiners and universal constructions for several quantum cryptographic primitives, including one-way state generators, public-key quantum money, quantum bit commitments, and unclonable encryption, and provide constructions of them. Separately, it was an open problem how to expand the plaintext length of unclonable encryption. In one of our universal constructions for unclonable encryption, we can expand the plaintext length, which resolves this open problem.
2024
TCC
Secret-Sharing Schemes for High Slices
In a secret-sharing scheme, a secret is shared among $n$ parties such that the secret can be recovered by authorized coalitions, while it should be kept hidden from unauthorized coalitions. In this work we study secret sharing for $k$-slice access structures, in which coalitions of size $k$ are either authorized or not, all larger coalitions are authorized, and all smaller ones are unauthorized. Known schemes for these access structures had smaller shares for small $k$'s than for large ones; hence our focus is on ``high'' $(n-k)$-slices where $k$ is small. Our work is inspired by several motivations: 1) Obtaining efficient schemes (with perfect or computational security) for natural families of access structures; 2) Making progress in the search for better schemes for general access structures, which are often based on schemes for slice access structures; 3) Proving or disproving the conjecture by Csirmaz (J. Math. Cryptol., 2020) that an access structure and its dual can be realized by secret-sharing schemes with the same share size. The main results of this work are: 1) Perfect schemes for high slices. We present a scheme for $(n-k)$-slices with information-theoretic security and share size $kn\cdot 2^{\tilde{O}(\sqrt{k \log n})}$. Using a different scheme with slightly larger shares, we prove that the ratio between the optimal share size of $k$-slices and that of their dual $(n-k)$-slices is bounded by $n$. 2) Computational schemes for high slices. We present a scheme for $(n-k)$-slices with computational security and share size $O(k^2 \lambda \log n)$ based on the existence of one-way functions. Our scheme makes use of a non-standard viewpoint on Shamir secret sharing that allows sharing many secrets with different thresholds at low cost. 3) Multislice access structures. \emph{$(a:b)$-multislices} are access structures that behave similarly to slices, but are unconstrained on coalitions in a wider range of cardinalities between $a$ and $b$. We use our new schemes for high slices to realize multislices with the same share sizes currently known for their duals. This solves an open question raised by Applebaum and Nir (Crypto, 2021), and allows realizing hypergraph access structures that are chosen uniformly at random under a natural set of distributions with share size $2^{0.491n+o(n)}$, compared to the previous result of $2^{0.5n+o(n)}$.
2024
TCC
Secure Computation with Parallel Calls to 2-ary Functions
Reductions are the workhorses of cryptography. They allow constructions of complex cryptographic primitives from simple building blocks. A prominent example is the non-interactive reduction from securely computing a ``complex'' function $f$ to securely computing a ``simple'' function $g$ via randomized encodings. Prior work equated simplicity with functions of small degree. In this work, we consider a different notion of simplicity where we require $g$ to only take inputs from a small number of parties. In other words, we want the arity of $g$ to be as small as possible. In more detail, we consider the problem of reducing secure computation of arbitrary functions to secure computation of functions with arity two (two is the minimal arity required to compute non-trivial functions). Specifically, we want to compute a function $f$ via a protocol that makes parallel calls to a 2-ary function $g$. We want this protocol to be secure against malicious adversaries that could corrupt an arbitrary number of parties. We obtain the following results: - Negative Result: We show that there exists a degree-2 polynomial $p$ such that no protocol that makes parallel calls to 2-ary functions can compute $p$ with statistical security with abort. - Positive Results: We give two ways to bypass the above impossibility result. 1. Weakening the Security Notion. We show that every degree-2 polynomial can be computed with statistical privacy with knowledge of outputs (PwKO) by making parallel calls to a 2-ary function. Privacy with knowledge of outputs is weaker than security with abort. 2. Computational Security. We prove that for every function $f$, there exists a protocol for computing $f$ that makes parallel calls to a 2-ary function and achieves security with abort against computationally-bounded adversaries. The security of this protocol relies on the existence of semi-honest secure oblivious transfer. - Applications: We give connections between this problem and the task of reducing the encoding complexity of Multiparty Randomized Encodings (MPRE) (Applebaum, Brakerski, and Tsabary, TCC 2018). Specifically, we show that under standard computational assumptions, there exists an MPRE where the encoder can be implemented by an $\mathsf{NC}^0$ circuit with constant fan-out. - Extensions: We explore this problem in the honest-majority setting and give similar results assuming one-way functions. We also show that if the parties have access to 3-ary functions, then we can construct a computationally secure protocol in the dishonest-majority setting assuming one-way functions.
2024
TCC
Security Bounds for Proof-Carrying Data from Straightline Extractors
Proof-carrying data (PCD) is a powerful cryptographic primitive that allows mutually distrustful parties to perform distributed computation in an efficiently verifiable manner. Real-world deployments of PCD have sparked keen interest within the applied community and industry. Known constructions of PCD are obtained by recursively composing SNARKs or related primitives. Unfortunately, known security analyses incur expensive blowups, which practitioners have disregarded as the analyses would lead to setting parameters that are prohibitively expensive. In this work we study the concrete security of recursive composition, with the goal of better understanding how to reasonably set parameters for certain PCD constructions of practical interest. Our main result is that PCD obtained from SNARKs with \emph{straightline knowledge soundness} has essentially the same security as the underlying SNARK (i.e., recursive composition incurs essentially no security loss). We describe how straightline knowledge soundness is achieved by SNARKs in several oracle models, which results in a highly efficient security analysis of PCD that makes black-box use of the SNARK's oracle (there is no need to instantiate the oracle to carry out the security reduction). As a notable application, our work offers an idealized model that provides new, albeit heuristic, insights for the concrete security of \emph{recursive STARKs} used in blockchain systems. Our work could be viewed as partial evidence justifying the parameter choices for recursive STARKs made by practitioners.
2024
TCC
Space-Lock Puzzles and Verifiable Space-Hard Functions from Root-Finding in Sparse Polynomials
Timed cryptography has initiated a paradigm shift in the design of cryptographic protocols: using timed cryptography, we can realize tasks \emph{fairly}, which is provably out of reach of standard cryptographic concepts. To a certain degree, the success of timed cryptography is rooted in the existence of efficient protocols based on the \emph{sequential squaring assumption}. In this work, we consider space analogues of timed cryptographic primitives, which we refer to as \emph{space-hard} primitives. Roughly speaking, these notions require honest protocol parties to invest a certain amount of space and provide security against space-constrained adversaries. While inefficient generic constructions of timed primitives from strong assumptions such as indistinguishability obfuscation can be adapted to the space-hard setting, we currently lack concrete and versatile assumptions for space-hard cryptography. In this work, we initiate the study of space-hard primitives from concrete algebraic assumptions relating to the problem of root-finding for sparse polynomials. Our motivation to study this problem is a candidate construction of VDFs by Boneh et al. (CRYPTO 2018), which is based on the hardness of inverting permutation polynomials. Somewhat anticlimactically, our first contribution is a full break of this candidate. However, we then revise this hardness assumption by dropping the permutation requirement and considering arbitrary sparse polynomials of high degree. We argue that this type of assumption is much better suited for space-hardness than for timed cryptography. We then proceed to construct both space-lock puzzles and verifiable space-hard functions from this assumption.
2024
TCC
Sparse Linear Regression and Lattice Problems
Sparse linear regression (SLR) is a well-studied problem in statistics where one is given a design matrix $\mathbf{X} \in \mathbb{R}^{m \times n}$ and a response vector $\mathbf{y} = \mathbf{X} \boldsymbol{\theta}^* + \mathbf{w}$ for a $k$-sparse vector $\boldsymbol{\theta}^*$ (that is, $\|\boldsymbol{\theta}^*\|_0 \leq k$) and small, arbitrary noise $\mathbf{w}$, and the goal is to find a $k$-sparse $\widehat{\boldsymbol{\theta}} \in \mathbb{R}^{n}$ that minimizes the mean squared prediction error $\frac{1}{m} \|\mathbf{X} \widehat{\boldsymbol{\theta}} - \mathbf{X} \boldsymbol{\theta}^*\|^2_2$. While $\ell_1$-relaxation methods such as basis pursuit, Lasso, and the Dantzig selector solve SLR when the design matrix is well-conditioned, no general algorithm is known, nor is there any formal evidence of hardness in an average-case setting with respect to all efficient algorithms. We give evidence of average-case hardness of SLR w.r.t. all efficient algorithms assuming the worst-case hardness of lattice problems. Specifically, we give an instance-by-instance reduction from a variant of the bounded distance decoding (BDD) problem on lattices to SLR, where the condition number of the lattice basis that defines the BDD instance is directly related to the restricted eigenvalue condition of the design matrix, which characterizes some of the classical statistical-computational gaps for sparse linear regression. Also, by appealing to worst-case to average-case reductions from the world of lattices, this shows hardness for a distribution of SLR instances; while the design matrices are ill-conditioned, the resulting SLR instances are in the identifiable regime. Furthermore, for well-conditioned (essentially) isotropic Gaussian design matrices, where Lasso is known to behave well in the identifiable regime, we show hardness of outputting any good solution in the unidentifiable regime where there are many solutions, assuming the worst-case hardness of standard and well-studied lattice problems.
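For concreteness, an SLR instance together with the Lasso baseline mentioned above takes only a few lines to set up (a sketch assuming numpy and scikit-learn, with made-up dimensions; with this well-conditioned Gaussian design, Lasso typically recovers the support, which is precisely the regime the hardness results step outside of):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
m, n, k = 200, 500, 5

X = rng.standard_normal((m, n))              # well-conditioned Gaussian design
theta = np.zeros(n)
theta[rng.choice(n, size=k, replace=False)] = 1.0  # k-sparse signal
y = X @ theta + 0.01 * rng.standard_normal(m)      # small noise

theta_hat = Lasso(alpha=0.05).fit(X, y).coef_      # l1-relaxation baseline
mse = np.mean((X @ theta_hat - X @ theta) ** 2)    # mean squared prediction error
print(np.nonzero(theta_hat)[0], mse)
\end{verbatim}
The hardness results concern ill-conditioned designs (arising from BDD lattice bases with large condition number), where no such efficient baseline is known to succeed.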
2024
TCC
Split-State Non-Malleable Codes and Secret Sharing Schemes for Quantum Messages
Non-malleable codes are fundamental objects at the intersection of cryptography and coding theory. These codes provide security guarantees even in settings where error correction and detection are impossible, and have found applications to several other cryptographic tasks. One of the strongest and most well-studied adversarial tampering models is 2-split-state tampering. Here, a codeword is split into two parts which are stored in physically distant servers, and the adversary can then independently tamper with each part using arbitrary functions. This model can be naturally extended to the secret sharing setting with several parties by having the adversary independently tamper with each share. Previous works on non-malleable coding and secret sharing in the split-state tampering model only considered the encoding of classical messages. Furthermore, until recent work by Aggarwal, Boddu, and Jain (IEEE Trans. Inf. Theory 2024 & arXiv 2022), adversaries with quantum capabilities and shared entanglement had not been considered, and it is a priori not clear whether previous schemes remain secure in this model. In this work, we introduce the notions of split-state non-malleable codes and secret sharing schemes for quantum messages secure against quantum adversaries with shared entanglement. Then, we present explicit constructions of such schemes that achieve low-error non-malleability. More precisely, for some constant $c>0$, we construct efficiently encodable and decodable split-state non-malleable codes and secret sharing schemes for quantum messages preserving entanglement with external systems and achieving security against quantum adversaries having shared entanglement with codeword length $n$, any message length at most $n^{c}$, and error $\epsilon=2^{-n^{c}}$. In the easier setting of average-case non-malleability, we achieve efficient non-malleable coding with rate close to $1/11$.
2024
TCC
Statistical Layered MPC
The seminal work of Rabin and Ben-Or (STOC '89) showed that the problem of secure $n$-party computation can be solved for $t<n/2$ corruptions with guaranteed output delivery and statistical security. This holds in the traditional static model where the set of parties is fixed throughout the entire protocol execution. The need to better capture the dynamics of large scale and long-lived computations, where compromised parties may recover and the set of parties can change over time, has sparked renewed interest in the proactive security model by Ostrovsky and Yung (PODC '91). This abstraction, where the adversary may periodically uncorrupt and corrupt a new set of parties, is taken even a step further in the more recent YOSO and Fluid MPC models (CRYPTO '21) which allow, in addition, disjoint sets of parties participating in each round. Previous solutions with guaranteed output delivery and statistical security only tolerate $t<n/3$ corruptions, or assume a random corruption pattern plus non-standard communication models. Matching the Rabin and Ben-Or bound in these settings remains an open problem. In this work, we settle this question considering the unifying Layered MPC abstraction recently introduced by David et al. (CRYPTO '23). In this model, the interaction pattern is defined by a layered acyclic graph, where each party sends secret messages and broadcast messages only to parties in the very next layer. We complete the feasibility landscape of layered MPC, by extending the Rabin and Ben-Or result to this setting. Our results imply maximally-proactive MPC with statistical security in the honest-majority setting.
2024
TCC
The Brave New World of Global Generic Groups and UC-Secure Zero-Overhead SNARKs
The universal composability (UC) model provides strong security guarantees for protocols used in arbitrary contexts. While these guarantees are highly desirable, in practice, schemes with a standalone proof of security, such as the Groth16 proof system, are preferred. This is because UC security typically comes with undesirable overhead, sometimes making UC-secure schemes significantly less efficient than their standalone counterparts. We establish the UC security of Groth16 without any significant overhead. In the spirit of global random oracles, we design a global (restricted) observable generic group functionality that models a natural notion of observability: computations that trace back to group elements derived from generators of other sessions are observable. This notion turns out to be surprisingly subtle to formalize. We provide a general framework for proving protocols secure in the presence of global generic groups, which we then apply to Groth16.
2024
TCC
The Cost of Maintaining Keys in Dynamic Groups with Applications to Multicast Encryption and Group Messaging
In this work we prove lower bounds on the (communication) cost of maintaining a shared key among a dynamic group of users. Being ``dynamic'' means one can add and remove users from the group. This captures important protocols like multicast encryption (ME) and continuous group-key agreement (CGKA), which is the primitive underlying many group messaging applications. We prove our bounds in a combinatorial setting where the state of the protocol progresses in rounds. The state of the protocol in each round is captured by a set system, with each of its elements specifying a set of users who share a secret key. We show this combinatorial model implies bounds in symbolic models for ME and CGKA that capture, as building blocks, PRGs, PRFs, dual PRFs, secret sharing, and symmetric encryption in the setting of ME, and PRGs, PRFs, dual PRFs, secret sharing, public-key encryption, and key-updatable public-key encryption in the setting of CGKA. The models are related to the ones used by Micciancio and Panjwani (Eurocrypt'04) and Bienstock et al. (TCC'20) to analyze ME and CGKA, respectively. We prove, using Bollobás's set-pairs inequality, that the cost (number of uploaded ciphertexts) for replacing a set of $d$ users in a group of size $n$ is $\Omega(d \ln(n/d))$. Our lower bound is asymptotically tight; it both improves on a bound of $\Omega(d)$ by Bienstock et al. (TCC'20) and generalizes a result by Micciancio and Panjwani (Eurocrypt'04), who proved a lower bound of $\Omega(\log n)$ for $d=1$.
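For reference, the combinatorial tool invoked above is the classical set-pairs inequality of Bollobás: if $A_1,\dots,A_m$ and $B_1,\dots,B_m$ are finite sets such that $A_i \cap B_i = \emptyset$ for every $i$ and $A_i \cap B_j \neq \emptyset$ whenever $i \neq j$, then
\[ \sum_{i=1}^{m} \binom{|A_i|+|B_i|}{|A_i|}^{-1} \le 1. \]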
2024
TCC
The Power of NAPs: Compressing OR-Proofs via Collision-Resistant Hashing
Proofs of partial knowledge, first considered by Cramer, Dam\-gård and Schoenmakers (CRYPTO'94) and De Santis et al. (FOCS'94), allow for proving the validity of $k$ out of $n$ different statements without revealing which ones those are. In this work, we present a new approach for transforming certain proof systems into new ones that allow for proving partial knowledge. The communication complexity of the resulting proof system depends only logarithmically on the total number of statements $n$, and its security relies only on the existence of collision-resistant hash functions. As an example, we show that our transformation is applicable to the proof systems of Goldreich, Micali, and Wigderson (FOCS'86) for the graph isomorphism and the graph 3-coloring problems. Our main technical tool, which we believe to be of independent interest, is a new cryptographic primitive called non-adaptively programmable functions (NAPs). These functions can be seen as pseudorandom functions that allow for re-programming the output at an input point, which must be fixed during key generation. Even when given the re-programmed key, it remains infeasible to find out where the re-programming happened. Finally, as an additional technical tool, we also build explainable samplers for any distribution that can be sampled efficiently via rejection sampling, and we use them to construct NAPs for various output distributions.
2024
TCC
Tighter Adaptive IBEs and VRFs: Revisiting Waters’ Artificial Abort
One of the most popular techniques for proving adaptive security of identity-based encryption (IBE) and verifiable random functions (VRF) is the \emph{partitioning technique}. Currently, there are only two methods to relate the adversary's advantage and runtime $(\epsilon, T)$ to those of the reduction $(\epsilon_{proof}, T_{proof})$ using this technique: The first originates with Waters (Eurocrypt 2005), who introduced the famous \emph{artificial abort} step to prove his IBE, achieving $(\epsilon_{proof}, T_{proof}) = (O(\epsilon/Q), T+O(Q^2/\epsilon^2))$, where $Q$ is the number of key queries. Bellare and Ristenpart (Eurocrypt 2009) provide an alternative analysis for the same scheme, removing the artificial abort step and obtaining $(\epsilon_{proof}, T_{proof}) = (O(\epsilon^2/Q), T+O(Q))$. Importantly, the known reductions all lose quadratically in $\epsilon$. In this paper, we revisit this two-decade-old problem and analyze proofs based on the partitioning technique through a new lens. For instance, the Waters IBE can now be proven secure with $(\epsilon_{proof}, T_{proof}) = (O(\epsilon^{3/2}/Q), T+O(Q))$, breaking the quadratic dependence on $\epsilon$. At the core of our improvement is a finer estimation of the failure probability of the reduction in Waters' original proof relying on artificial abort. We use Bonferroni's inequality, a tunable inequality obtained by cutting off higher-order terms from the equality derived from the inclusion-exclusion principle. Our analysis not only improves the reduction of known constructions but also opens the door for new constructions. While a similar improvement to the Waters IBE is possible for the lattice-based IBE by Agrawal, Boneh, and Boyen (Eurocrypt 2010), we can slightly tweak the so-called partitioning function in their construction, achieving $(\epsilon_{proof}, T_{proof}) = (O(\epsilon/Q), T+O(Q))$. This is a much better reduction than the previously known $(O(\epsilon^3/Q^2), T+O(Q))$. We also propose the first VRF with proof and verification key sizes sublinear in the security parameter under the standard $d$-LIN assumption, while simultaneously improving the reduction cost compared to all prior constructions.
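The reduction parameters quoted in this abstract for the Waters IBE can be summarized side by side ($Q$ is the number of key queries):
\begin{tabular}{lll}
Analysis & $\epsilon_{proof}$ & $T_{proof}$ \\
Waters (Eurocrypt 2005) & $O(\epsilon/Q)$ & $T + O(Q^2/\epsilon^2)$ \\
Bellare--Ristenpart (Eurocrypt 2009) & $O(\epsilon^2/Q)$ & $T + O(Q)$ \\
This work & $O(\epsilon^{3/2}/Q)$ & $T + O(Q)$ \\
\end{tabular}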
2024
TCC
Towards general-purpose program obfuscation via local mixing
We explore the possibility of obtaining general-purpose obfuscation for all circuits by way of making only simple, local, functionality-preserving random perturbations in the circuit structure. Towards this goal, we use the additional structure provided by reversible circuits, but no additional algebraic structure. Specifically: * We formulate a new (and relatively weak) obfuscation task regarding the ability to obfuscate random circuits of bounded length. We call such obfuscators random input & output (RIO) obfuscators. * We construct indistinguishability obfuscators for all (unbounded length) circuits given only an RIO obfuscator. We prove security of this construction under a new assumption regarding the pseudorandomness of sufficiently-long random reversible circuits with known functionality. This assumption builds on a conjecture made by Gowers (Comb. Prob. Comp. '96) regarding the pseudorandomness of bounded-size random reversible circuits and appears to be of independent interest. * We give candidate constructions of RIO obfuscators using only local, functionality-preserving perturbations of the circuit structure. Our approach is rooted in statistical mechanics and can be thought of as locally ``thermalizing'' a circuit while preserving its functionality. We also provide arguments for security of the constructions and point to connections with the geometry of non-Abelian infinite groups. Given the power of program obfuscation, viability of the proposed approach would provide an alternative route to realizing almost all cryptographic tasks using the computational hardness of problems that are very different from standard ones. Furthermore, our specific candidate obfuscators are very simple and relatively efficient: the obfuscated version of an $n$-wire, $m$-gate (reversible) circuit with security parameter $k$ has $n$ wires and $\mathrm{poly}(n,k)\cdot m$ gates. We hope that our initial exploration will motivate further study of this alternative path to program obfuscation.
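The candidate RIO obfuscators themselves are beyond the scope of a short snippet, but the flavor of a local, functionality-preserving perturbation on a reversible circuit can be seen in the simplest such move: inserting a gate adjacent to its own inverse (a toy illustration only; on its own this carries no security):
\begin{verbatim}
import random

# A reversible circuit as a list of Toffoli gates (c1, c2, target):
# each gate flips wire `target` iff wires c1 and c2 are both 1.
# A Toffoli gate is its own inverse, so inserting the same gate twice
# at adjacent positions preserves the circuit's functionality.

def apply(circuit, bits):
    bits = list(bits)
    for c1, c2, t in circuit:
        if bits[c1] and bits[c2]:
            bits[t] ^= 1
    return bits

def insert_identity_pair(circuit, wires):
    pos = random.randrange(len(circuit) + 1)
    c1, c2, t = random.sample(range(wires), 3)
    return circuit[:pos] + [(c1, c2, t), (c1, c2, t)] + circuit[pos:]

wires = 5
circuit = [(0, 1, 2), (2, 3, 4), (0, 4, 1)]
perturbed = insert_identity_pair(circuit, wires)
x = [random.randrange(2) for _ in range(wires)]
print(apply(circuit, x) == apply(perturbed, x))  # True: same function
\end{verbatim}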
2024
TCC
Unbounded Leakage-Resilience and Intrusion-Detection in a Quantum World
Can an adversary hack into our computer and steal sensitive data such as cryptographic keys? This question is almost as old as the Internet, and significant effort has been spent on designing mechanisms to prevent and detect hacking attacks. Once quantum computers arrive, will the situation remain the same, or can we hope to live in a better world? We first consider ubiquitous side-channel attacks, which aim to leak side information on secret system components and are studied in the leakage-resilient cryptography literature. Classical leakage-resilient cryptography must necessarily impose restrictions on the type of leakage one aims to protect against. As a notable example, the most well-studied leakage model is that of bounded leakage, where it is assumed that an adversary learns at most $\ell$ bits of leakage on secret components, for some leakage bound $\ell$. Although this leakage bound is necessary, many real-world side-channel attacks cannot be captured by bounded leakage. In this work, we design cryptographic schemes that provide guarantees against arbitrary side-channel attacks: • Using techniques from unclonable quantum cryptography, we design several basic leakage-resilient primitives, such as public- and private-key encryption, (weak) pseudorandom functions, digital signatures, and quantum money schemes which remain secure under (polynomially) unbounded classical leakage. In particular, this leakage can be much longer than the (quantum) secret being leaked upon. In our view, leakage is the result of observations of quantities such as power consumption and hence is most naturally viewed as classical information. Notably, the leakage-resilience of our schemes holds even in the stronger “LOCC leakage” model where the adversary can obtain adaptive leakage for a (polynomially) unbounded number of rounds. • What if the adversary simply breaks into our system to steal our secret keys, rather than mounting only a side-channel attack? What if the adversary can even tamper with the data arbitrarily, for example to cover its tracks? We initiate the study of intrusion detection in the quantum setting, where one would like to detect whether security has been compromised even in the face of an arbitrary intruder attack which can leak and tamper with classical as well as quantum data. We design cryptographic schemes supporting intrusion detection for a host of primitives such as public- and private-key encryption, digital signatures, functional encryption, program obfuscation, and software protection. Our schemes are based on techniques from cryptography with secure key leasing and certified deletion.
2024
TCC
Unclonable Commitments and Proofs
Non-malleable cryptography, proposed by Dolev, Dwork, and Naor (SICOMP '00), has numerous applications in protocol composition. In the context of proofs, it guarantees that an adversary who receives a proof cannot maul it into another valid proof. However, non-malleable cryptography (particularly in the non-interactive setting) suffers from an important limitation: An attacker can always copy the proof and resubmit it to another verifier (or even multiple verifiers). In this work, we prevent even the possibility of copying the proof as it is, by relying on quantum information. We call the resulting primitive unclonable proofs, making progress on a question posed by Aaronson. We also consider the related notion of unclonable commitments. We introduce formal definitions of these primitives that model security in various settings of interest. We also provide a near tight characterization of the conditions under which these primitives are possible, including a rough equivalence between unclonable proofs and public-key quantum money.
2024
TCC
Unclonable Cryptography with Unbounded Collusions and Impossibility of Hyperefficient Shadow Tomography
The quantum no-cloning theorem gives rise to the intriguing possibility of quantum copy protection, where we encode a program or functionality in a quantum state such that a user in possession of $k$ copies cannot create $k+1$ copies, for any $k$. Introduced by Aaronson (CCC'09) over a decade ago, copy protection has proven to be notoriously hard to achieve. Previous work has been able to achieve copy-protection for various functionalities only in restricted models: (i) in the bounded-collusion setting, where $k \to k+1$ security is achieved for an a priori fixed collusion bound $k$ (in the plain model with the same computational assumptions as ours, by Liu, Liu, Qian, Zhandry [TCC'22]), or (ii) where only $k \to 2k$ security is achieved (relative to a structured quantum oracle, by Aaronson [CCC'09]). In this work, we give the first unbounded collusion-resistant (i.e., multiple-copy secure) copy-protection schemes, answering the long-standing open question of constructing such schemes, raised by multiple previous works starting with Aaronson (CCC'09). More specifically, we obtain the following results. - We construct (i) public-key encryption, (ii) public-key functional encryption, (iii) signature, and (iv) pseudorandom function schemes whose keys are copy-protected against unbounded collusions in the plain model (i.e., without any idealized oracles), assuming (post-quantum) subexponentially secure iO and LWE. - We show that any unlearnable functionality can be copy-protected against unbounded collusions, relative to a classical oracle. - As a corollary of our results, we rule out the existence of hyperefficient quantum shadow tomography and hence answer an open question by Aaronson (STOC'18). We obtain our results through a novel technique which uses identity-based encryption to construct multiple-copy secure copy-protection schemes from $1$-copy $\to$ $2$-copy secure schemes. We believe our technique is of independent interest. Along the way, we also obtain the following results. - We define and prove the security of new collusion-resistant monogamy-of-entanglement games for coset states. - We construct a classical puncturable functional encryption scheme whose master secret key can be punctured at all functions $f$ such that $f(m_0) \neq f(m_1)$. This might also be of independent interest.
2024
TCC
Untangling the Security of Kilian's Protocol: Upper and Lower Bounds
Sigma protocols are elegant cryptographic proofs that have become a cornerstone of modern cryptography. A notable example is Schnorr's protocol, a zero-knowledge proof-of-knowledge of a discrete logarithm. Despite extensive research, the security of Schnorr's protocol in the standard model is not fully understood. In this paper we study \emph{Kilian's protocol}, an influential public-coin interactive protocol that, while not a sigma protocol, shares striking similarities with sigma protocols. The first example of a succinct argument, Kilian's protocol is proved secure via \emph{rewinding}, the same idea used to prove sigma protocols secure. In this paper we show how, similar to Schnorr's protocol, a precise understanding of the security of Kilian's protocol remains elusive. We contribute new insights via upper bounds and lower bounds. \begin{itemize} \item \emph{Upper bounds.} We establish the tightest known bounds on the security of Kilian's protocol in the standard model, via strict-time reductions and via expected-time reductions. Prior analyses are strict-time reductions that incur large overheads or assume restrictive properties of the PCP underlying Kilian's protocol. \item \emph{Lower bounds.} We prove that significantly improving on the bounds that we establish for Kilian's protocol would imply improving the security analysis of Schnorr's protocol beyond the current state-of-the-art (an open problem). This partly explains the difficulties in obtaining tight bounds for Kilian's protocol. \end{itemize}
2024
TCC
Worst-Case to Average-Case Hardness of LWE: An Alternative Perspective
In this work, we study the worst-case to average-case hardness of the Learning with Errors problem (LWE) under an alternative measure of hardness: the maximum success probability achievable by a probabilistic polynomial-time (PPT) algorithm. Previous works by Regev (STOC 2005), Peikert (STOC 2009), and Brakerski, Langlois, Peikert, Regev, and Stehlé (STOC 2013) give worst-case to average-case reductions from lattice problems to LWE, specifically from the approximate decision variant of the Shortest Vector Problem (GapSVP) and the Bounded Distance Decoding (BDD) problem. These reductions, however, are lossy in the sense that even the strongest assumption on the worst-case hardness of GapSVP or BDD implies only mild hardness of LWE. Our alternative perspective gives a much tighter reduction and strongly relates the hardness of LWE to that of BDD. In particular, we show that under a reasonable assumption about the success probability of solving BDD via a PPT algorithm, we obtain a nearly tight lower bound on the highest possible success probability for solving LWE via a PPT algorithm. Furthermore, we show a tight relationship between the best achievable success probability of any PPT algorithm for decision-LWE and that for search-LWE. Our results not only refine our understanding of the computational complexity of LWE, but also provide a useful framework for analyzing the practical security implications.
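As background, the two problems related above can be stated as follows (standard formulations, with parameters as in the cited works). Search-LWE: given $(\mathbf{A}, \mathbf{b})$ with $\mathbf{A} \in \mathbb{Z}_q^{m \times n}$ uniform and
\[ \mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e} \bmod q \]
for a secret $\mathbf{s}$ and a short error vector $\mathbf{e}$, recover $\mathbf{s}$; decision-LWE instead asks to distinguish $(\mathbf{A}, \mathbf{b})$ from uniform. BDD: given a basis of a lattice $\mathcal{L}$ and a target $\mathbf{t}$ promised to lie within distance $\alpha \cdot \lambda_1(\mathcal{L})$ of $\mathcal{L}$ (typically $\alpha < 1/2$, ensuring a unique answer, where $\lambda_1$ is the length of a shortest nonzero lattice vector), find the lattice point closest to $\mathbf{t}$.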
2024
TCC
zkSNARKs in the ROM with Unconditional UC-Security
The universal composability (UC) framework is a “gold standard” for security in cryptography. UC-secure protocols achieve strong security guarantees against powerful adaptive adversaries, and retain these guarantees when used as part of larger protocols. Zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) are a popular cryptographic primitive that is often used within larger protocols deployed in dynamic environments, and so UC-security is a highly desirable, if not necessary, goal. In this paper we prove that there exist zkSNARKs in the random oracle model (ROM) that unconditionally achieve UC-security. Here, “unconditionally” means that security holds against adversaries that make a bounded number of queries to the random oracle, but are otherwise computationally unbounded. Prior work studying UC-security for zkSNARKs obtains transformations that rely on computational assumptions and, in many cases, lose most of the succinctness property of the zkSNARK. Moreover, these transformations make the resulting zkSNARK more expensive and complicated. In contrast, we prove that widely used zkSNARKs in the ROM are UC-secure without modifications. We prove that the Micali construction, which is the canonical construction of a zkSNARK, is UC-secure. Moreover, we prove that the BCS construction, which many zkSNARKs deployed in practice are based on, is UC-secure. Our results confirm the intuition that these natural zkSNARKs do not need to be augmented to achieve UC-security, and give confidence that their use in larger real-world systems is secure.