International Association for Cryptologic Research

CryptoDB

Papers from CRYPTO 2023

2023
CRYPTO
A Detailed Analysis of Fiat-Shamir with Aborts
Lyubashevsky’s signatures are based on the Fiat-Shamir with Aborts paradigm. It transforms an interactive identification protocol that has a non-negligible probability of aborting into a signature scheme by repeating executions until a loop iteration does not trigger an abort. Interaction is removed by replacing the challenge of the verifier with the evaluation of a hash function, modeled as a random oracle in the analysis. The access to the random oracle is classical (ROM), resp. quantum (QROM), if one is interested in security against classical, resp. quantum, adversaries. Most analyses in the literature consider a setting with a bounded number of aborts (i.e., signing fails if no signature is output within a prescribed number of loop iterations), while practical instantiations (e.g., Dilithium) run until a signature is output (i.e., loop iterations are unbounded). In this work, we emphasize that combining random oracles with loop iterations induces numerous technicalities for analyzing correctness, runtime, and security of the resulting schemes, both in the bounded and unbounded case. As a first contribution, we shed light on errors in all existing analyses. We then provide two detailed analyses in the QROM for the bounded case, adapted from Kiltz et al. [EUROCRYPT’18] and Grilo et al. [ASIACRYPT’21]. In the process, we prove the underlying Σ-protocol to achieve a stronger zero-knowledge property than usually considered for Σ-protocols with aborts, which enables a corrected analysis. A further contribution is a detailed analysis in the case of unbounded aborts, the latter inducing several additional subtleties.
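To make the signing loop described above concrete, here is a minimal, schematic sketch of Fiat-Shamir with Aborts in its unbounded variant (loop until no abort). The function names, hash instantiation, and rejection rule are illustrative placeholders, not the paper's notation or any standardized scheme.

```python
import hashlib

def fs_with_aborts_sign(sk, message, commit, respond, should_abort):
    """Repeat commit/challenge/response until no abort is triggered."""
    while True:
        y, w = commit(sk)                              # fresh masking randomness y, commitment w (bytes)
        c = hashlib.shake_256(w + message).digest(32)  # challenge = hash(commitment, message), replacing the verifier
        z = respond(sk, y, c)                          # candidate response
        if not should_abort(z):                        # rejection sampling keeps z independent of sk
            return (c, z)
        # otherwise: abort this iteration and retry with fresh randomness
```

The bounded variant analyzed in most prior work would replace the unbounded loop with a fixed number of iterations and declare failure if none of them succeeds.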
2023
CRYPTO
A Framework for Practical Anonymous Credentials from Lattices
We present a framework for building practical anonymous credential schemes based on the hardness of lattice problems. The running time of the prover and verifier is independent of the number of users and linear in the number of attributes. The scheme is also compact in practice, with the proofs being as small as a few dozen kilobytes for an arbitrarily large number of users (say, up to $2^{128}$), with each user having several attributes. The security of our scheme is based on a new family of lattice assumptions which roughly states that given short pre-images of random elements in some set $S$, it is hard to create a pre-image for a fresh element in such a set. We show that if the set admits efficient zero-knowledge proofs of knowledge of a commitment to a set element and its pre-image, then this yields practically efficient privacy-preserving primitives such as blind signatures, anonymous credentials, and group signatures. We propose a candidate instantiation of a function from this family which allows for such proofs and thus yields practical lattice-based primitives.
2023
CRYPTO
A Framework for Statistically Sender Private OT with Optimal Rate
Statistical sender privacy (SSP) is the strongest achievable security notion for two-message oblivious transfer (OT) in the standard model, providing statistical security against malicious receivers and computational security against semi-honest senders. In this work we provide a novel construction of SSP OT from the Decisional Diffie-Hellman (DDH) and the Learning Parity with Noise (LPN) assumptions achieving (asymptotically) optimal amortized communication complexity, i.e., it achieves rate 1. Concretely, the total communication complexity for $k$ OT instances is $2k(1+o(1))$, which (asymptotically) approaches the information-theoretic lower bound. Previously, it was only known how to realize this primitive using heavy rate-1 FHE techniques [Brakerski et al., Gentry and Halevi TCC'19]. At the heart of our construction is a primitive called statistical co-PIR, essentially a public-key encryption scheme which statistically erases bits of the message in a few hidden locations. Our scheme achieves nearly optimal ciphertext size and provides statistical security against malicious receivers. Computational security against semi-honest senders holds under the DDH assumption.
2023
CRYPTO
A Lower Bound for Proving Hardness of Learning with Rounding with Polynomial Modulus
Regev's Learning with Errors (LWE) problem (STOC 2005) is a fundamental hardness assumption for modern cryptography. The Learning with Rounding (LWR) Problem was put forth by Banerjee, Peikert and Rosen (Eurocrypt 2012) as an alternative to LWE, for use in cryptographic situations which require determinism. The only method we currently have for proving hardness of LWR is the so-called ``rounding reduction'' which is a specific reduction from an analogous LWE problem. This reduction works whenever the LWE error is small relative to the noise introduced by rounding, but it fails otherwise. For this reason, all prior work on establishing hardness of LWR forces the LWE error to be small, either by setting other parameters extremely large (which hurts performance), or by limiting the number of LWR samples seen by the adversary (which rules out certain applications). Hardness of LWR is poorly understood when the LWE modulus ($q$) is polynomial and when the number of LWE samples ($m$) seen by the adversary is an unbounded polynomial. This range of parameters is the most relevant for practical implementations, so the lack of a hardness proof in this situation is not ideal. In this work, we identify an obstacle for proving the hardness of LWR from LWE in the above framework when $q$ is polynomial and $m$ is an unbounded polynomial. Specifically, we show that any ``pointwise'' reduction from LWE to LWR (\emph{i.e.}, any reduction which maps LWE samples independently to LWR samples) admits an efficient algorithm which directly solves LWE (without the use of an LWR solver). Consequently, LWE cannot be reduced to LWR in our pointwise reduction model with our setting of $q$ and $m$, unless LWE is easy. Our model of a pointwise reduction from LWE to LWR captures all prior reductions from LWE to LWR except the rejection sampling reduction of Bogdanov \emph{et al.} (TCC 2016); while their reduction still operates in a pointwise manner, it can reject an LWE sample instead of mapping it to an LWR sample. However we conjecture that our result still holds in this setting. Our argument proceeds roughly as follows. First, we show that any pointwise reduction from LWE to LWR must have good agreement with some affine map. Then, we use the affine agreement of a pointwise reduction together with a type of Goldreich-Levin ``prediction-implies-inversion'' argument to extract the LWE secret from LWE input samples. Both components may be of independent interest.
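For context (standard background rather than a contribution of the paper), the two problems and the rounding reduction can be written as follows: for a secret $s \in \mathbb{Z}_q^n$, an LWE sample and an LWR sample are, respectively, $$ (a,\ \langle a, s\rangle + e \bmod q) \qquad \text{and} \qquad (a,\ \lfloor \langle a, s\rangle \rceil_p), \quad \text{where } \lfloor x \rceil_p = \lfloor (p/q)\,x \rceil \bmod p. $$ The rounding reduction simply rounds the second component of an LWE sample; this produces a correctly distributed LWR sample whenever the error $e$ is small enough that $\lfloor \langle a,s\rangle + e \rceil_p = \lfloor \langle a,s\rangle \rceil_p$ with high probability, which is exactly the small-error regime described above.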
2023
CRYPTO
A Note on Non-Interactive Zero-Knowledge from CDH
We build non-interactive zero-knowledge (NIZK) and ZAP arguments for all NP where soundness holds for infinitely-many security parameters, and against uniform adversaries, assuming the subexponential hardness of the Computational Diffie-Hellman (CDH) assumption. We additionally prove the existence of NIZK arguments with these same properties assuming the polynomial hardness of both CDH and the Learning Parity with Noise (LPN) assumption. In both cases, the CDH assumption does not require a group equipped with a pairing. Infinitely-often uniform security is a standard byproduct of commonly used non-black-box techniques that build on disjunction arguments on the (in)security of some primitive. In the course of proving our results, we develop a new variant of this non-black-box technique that yields improved guarantees: we obtain explicit constructions (previous works generally only obtained existential results) where security holds for a relatively dense set of security parameters (as opposed to an arbitrary infinite set of security parameters). We demonstrate that our techniques can have applications beyond our main results.
2023
CRYPTO
Accelerating HE Operations from Key Decomposition Technique
Lattice-based homomorphic encryption (HE) schemes are based on the noisy encryption technique, where plaintexts are masked with some random noise for security. Recent advanced HE schemes rely on a decomposition technique to manage the growth of noise, which involves a conversion of a ciphertext entry into a short vector followed by multiplication with an evaluation key. Prior to this work, the decomposition procedure was the most time-consuming part, as it requires discrete Fourier transforms (DFTs) over the base ring for efficient polynomial arithmetic. In this paper, an expensive decomposition operation over a large modulus is replaced with relatively cheap operations over a ring of integers with a small bound. Notably, the cost of DFTs is reduced from quadratic to linear in the level of a ciphertext without any extra noise growth. We demonstrate the implication of our approach by applying it to the key-switching procedure. Our experiments show that the new key-switching method achieves a speedup of 1.2--2.3 or 2.1--3.3 times over the previous method, when the dimension of a base ring is $2^{15}$ or $2^{16}$, respectively.
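For readers unfamiliar with the decomposition step referred to above, the sketch below shows the classical digit (gadget) decomposition of a coefficient modulo $q$ into short base-$B$ digits; this is the baseline operation whose cost the paper reduces, and the base, bounds, and function name are illustrative rather than the paper's algorithm.

```python
def gadget_decompose(a, q, B):
    """Classical base-B gadget decomposition of a coefficient a in Z_q.

    Returns digits d_0, ..., d_{l-1} in [0, B) such that
    sum(d_i * B**i) == a (mod q); in key switching each digit is
    multiplied with a component of the evaluation key.
    """
    a %= q
    digits = []
    while B ** len(digits) < q:
        digits.append(a % B)
        a //= B
    return digits

# Example: recomposition recovers the original coefficient
q, B, a = 2**40, 2**8, 123456789
d = gadget_decompose(a, q, B)
assert sum(di * B**i for i, di in enumerate(d)) % q == a % q
```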
2023
CRYPTO
Additive Randomized Encodings and Their Applications
Addition of $n$ inputs is often the easiest nontrivial function to compute securely. Motivated by several open questions, we ask what can be computed securely given only an oracle that computes the sum. Namely, what functions can be computed in a model where parties can only encode their input locally, then sum up the encodings over some Abelian group $\mathbb{G}$, and decode the result to get the function output. An {\em additive randomized encoding} (ARE) of a function $f(x_1,\ldots,x_n)$ maps every input $x_i$ independently into a randomized encoding $\hat x_i$, such that $\sum_{i=1}^n \hat x_i$ reveals $f(x_1,\ldots,x_n)$ and nothing else about the inputs. In a {\em robust} ARE, the sum of {\em any subset} of the $\hat x_i$ only reveals the {\em residual function} obtained by restricting the corresponding inputs. We obtain positive and negative results on ARE. In particular: - {\em Information-theoretic ARE.} We fully characterize the 2-party functions $f:X_1\times X_2\to\{0,1\}$ admitting a perfectly secure ARE. For $n\ge 3$ parties, we show a useful ``capped sum'' function that separates statistical security from perfect security. - {\em Computational ARE.} We present a general feasibility result, showing that \emph{all functions} can be computed in this model, under a standard hardness assumption in bilinear groups. We also describe a heuristic lattice-based construction. - {\em Robust ARE.} We present a similar feasibility result for {\em robust} computational ARE based on ideal obfuscation along with standard cryptographic assumptions. We then describe several applications of ARE and the above results. - Under a standard cryptographic assumption, our computational ARE schemes imply the feasibility of general non-interactive secure computation in the {\em shuffle model}, where messages from different parties are shuffled. This implies a general utility-preserving compiler from differential privacy in the central model to computational differential privacy in the (non-robust) shuffle model. - The existence of information-theoretic {\em robust} ARE implies ``best-possible'' information-theoretic MPC protocols (Halevi et al., TCC 2018) and degree-2 multiparty randomized encodings (Applebaum et al., TCC 2018). This yields new positive results for specific functions in the former model, as well as a simple unifying barrier for obtaining negative results in both models.
2023
CRYPTO
Algebraic Reductions of Knowledge
We introduce reductions of knowledge, a generalization of arguments of knowledge, which reduce checking knowledge of a witness in one relation to checking knowledge of a witness in another (simpler) relation. Reductions of knowledge unify a growing class of modern techniques as well as provide a compositional framework to modularly reason about individual steps in complex arguments of knowledge. As a demonstration, we simplify and unify recursive arguments over linear algebraic statements by decomposing them as a sequence of reductions of knowledge. To do so, we develop the tensor reduction of knowledge, which generalizes the central reductive step common to many recursive arguments. Underlying the tensor reduction of knowledge is a new information-theoretic reduction, which, for any modules $U$, $U_1$, and $U_2$ such that $U \cong U_1 \otimes U_2$, reduces the task of evaluating a homomorphism in $U$ to evaluating a homomorphism in $U_1$ and evaluating a homomorphism in $U_2$.
2023
CRYPTO
Almost Tight Multi-User Security under Adaptive Corruptions from LWE in the Standard Model
In this work, we construct the {\it first} digital signature (SIG) and public-key encryption (PKE) schemes with almost tight multi-user security under adaptive corruptions based on the learning-with-errors (LWE) assumption in the standard model. Our PKE scheme achieves almost tight IND-CCA security and our SIG scheme achieves almost tight strong EUF-CMA security, both in the multi-user setting with adaptive corruptions. The security loss is quadratic in the security parameter, and independent of the number of users, signatures or ciphertexts. Previously, such schemes were only known to exist under number-theoretic assumptions or in the classical random oracle model, and are thus vulnerable to quantum adversaries. To obtain our schemes from LWE, we propose new frameworks for constructing SIG and PKE with a core technical tool named {\it probabilistic} quasi-adaptive hash proof system (pr-QA-HPS). As a new variant of HPS, our pr-QA-HPS provides {\it probabilistic} public and private evaluation modes that may toss coins. This is in stark contrast to the traditional HPS [Cramer and Shoup, Eurocrypt 2002] and existing variants like approximate HPS [Katz and Vaikuntanathan, Asiacrypt 2009], whose public and private evaluations are deterministic in their inputs. Moreover, we formalize a new property called evaluation indistinguishability by requiring statistical indistinguishability of the two probabilistic evaluation modes, even in the presence of the secret key. The evaluation indistinguishability, as well as other nice properties resulting from the probabilistic features of pr-QA-HPS, are crucial for the multi-user security proof of our frameworks under adaptive corruptions. As for instantiations, we construct pr-QA-HPS from the LWE assumption and prove its properties with almost tight reductions, which yield almost tightly secure LWE-based SIG and PKE schemes under our frameworks. Along the way, we also provide new almost-tight reductions from LWE to multi-secret LWE, which may be of independent interest.
2023
CRYPTO
Analysis of the security of the PSSI problem and cryptanalysis of the Durandal signature scheme
We present a new attack against the PSSI problem, one of the three problems at the root of the security of Durandal, an efficient rank-metric code-based signature scheme with a public key size of 15 kB and a signature size of 4 kB, presented at EUROCRYPT'19. Our attack recovers the private key using a leakage of information coming from several signatures produced with the same key. Our approach is to combine pairs of signatures and apply Cramer-like formulas in order to build subspaces containing a secret element. We break all existing parameters of Durandal: the two published sets of parameters claiming a security of 128 bits are broken in respectively $2^{66}$ and $2^{73}$ elementary bit operations, and the number of signatures required to finalize the attack is 1,792 and 4,096 respectively. We implemented our attack and ran experiments that demonstrated its success with smaller parameters.
2023
CRYPTO
Anamorphic Signatures: Secrecy From a Dictator Who Only Permits Authentication!
The goal of this research is to raise technical doubts regarding the usefulness of the repeated attempts by governments to curb Cryptography (aka the ``Crypto Wars''), and argue that they, in fact, cause more damage than adding effective control. The notion of \emph{Anamorphic Encryption} was presented in Eurocrypt'22 for a similar aim. There, despite the presence of a Dictator who possesses all keys and knows all messages, parties can embed a hidden ``{\em anamorphic}'' message in ciphertexts that are indistinguishable from regular ciphertexts (w.r.t. the Dictator). In this work, we postulate a stronger cryptographic control setting where encryption does not exist (or is neutralized) since all communication is passed through the Dictator in, essentially, cleartext mode (or otherwise, when secure channels to and from the Dictator are the only confidentiality mechanism). Messages are only authenticated to assure recipients of the identity of the sender. We ask whether security against the Dictator still exists, even under such a strict regime which allows only authentication (i.e., authenticated/signed messages) to pass end-to-end, and where received messages are determined by / known to the Dictator, and the Dictator also eventually gets all keys to verify compliance of past signing. To frustrate the Dictator, this authenticated message setting gives rise to the possible notion of anamorphic channels inside signature and authentication schemes, where parties attempt to send undetectable secure messages (or other values) using authentication/signature tags which are indistinguishable from regular tags. We define and present implementations of schemes for anamorphic signature and authentication; these are applicable to existing and standardized signature and authentication schemes which were designed independently of the notion of anamorphic messages. Further, some cornerstone constructions of the foundations of signatures, in fact, introduce anamorphism.
2023
CRYPTO
Anonymous Tokens with Stronger Metadata Bit Hiding from Algebraic MACs
On the one hand, the web needs to be secured from malicious activities such as bots or DoS attacks; on the other hand, such needs ideally should not justify services tracking people's activities on the web. Anonymous tokens provide a nice tradeoff between allowing an issuer to ensure that a user has been vetted and protecting the users' privacy. However, in some cases, whether or not a token is issued reveals a lot of information to an adversary about the strategies used to distinguish honest users from bots or attackers. In this work, we focus on designing an anonymous token protocol between a client and an issuer (also a verifier) that enables the issuer to support its fraud detection mechanisms while preserving users' privacy. This is done by allowing the issuer to embed a hidden (from the client) metadata bit into the tokens. We first study an existing protocol from CRYPTO 2020 which is an extension of Privacy Pass from PoPETs 2018; that protocol aimed to provide support for a hidden metadata bit, but provided a somewhat restricted security notion. We demonstrate a new attack, showing that this is a weakness of the protocol, not just the definition. In particular, the metadata bit hiding is weak in the setting where the attacker can redeem some tokens and get feedback on whether the bit extraction succeeded. We then revisit the formalism of anonymous tokens with private metadata bit, consider the more natural notion, and design a scheme which achieves it. In order to design this new secure protocol, we base our construction on algebraic MACs instead of PRFs. Our security definitions capture a realistic threat model where adversaries could, through direct feedback or side channels, learn the embedded bit when the token is redeemed. Finally, we compare our protocol with one of the CRYPTO 2020 protocols: ours is 20% more efficient.
2023
CRYPTO
Arithmetic Sketching
This paper introduces arithmetic sketching, an abstraction of a primitive that several previous works use to achieve lightweight, low-communication zero-knowledge verification of secret-shared vectors. An arithmetic sketching scheme for a language L ⊆ F^n consists of (1) a randomized linear function compressing a long input x to a short “sketch,” and (2) a small arithmetic circuit that accepts the sketch if and only if x ∈ L, up to some small error. If the language L has an arithmetic sketching scheme with short sketches, then it is possible to test membership in L using an arithmetic circuit with few multiplication gates. Since multiplications are the dominant cost in protocols for computation on secret-shared, encrypted, and committed data, arithmetic sketching schemes give rise to lightweight protocols in each of these settings. In addition to the formalization of arithmetic sketching, our contributions are: – A general framework for constructing arithmetic sketching schemes from algebraic varieties. This framework unifies schemes from prior work and gives rise to schemes for useful new languages and with improved soundness error. – The first arithmetic sketching schemes for languages of sparse vectors: vectors with bounded Hamming weight, bounded L1 norm, and vectors whose few non-zero values satisfy a given predicate. – A method for “compiling” any arithmetic sketching scheme for a language L into a low-communication malicious-secure multi-server protocol for securely testing that a client-provided secret-shared vector is in L. We also prove the first nontrivial lower bounds showing limits on the sketch size for certain languages (e.g., vectors of Hamming-weight one) and proving the non-existence of arithmetic sketching schemes for others (e.g., the language of all vectors that contain a specific value).
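As a concrete illustration (run in the clear rather than over secret shares), the following is a minimal sketch of a classical arithmetic sketching scheme of the kind this framework generalizes: it tests that a vector $x \in F^n$ is a 0/1 vector with at most one nonzero entry, using a randomized linear sketch of length two and a decision circuit with a single multiplication. The field size and variable names are illustrative.

```python
import random

p = 2**61 - 1  # an illustrative prime field; any sufficiently large field works

def sketch_and_verify(x):
    r = [random.randrange(p) for _ in x]                    # randomized linear sketch
    z1 = sum(ri * xi for ri, xi in zip(r, x)) % p           # <r, x>
    z2 = sum(ri * ri * xi for ri, xi in zip(r, x)) % p      # <r*r, x>
    # Decision circuit with one multiplication: if x = e_j (or x = 0) then
    # z1^2 - z2 = 0; otherwise z1^2 - z2 is a nonzero degree-2 polynomial in r,
    # which vanishes with probability at most 2/p (Schwartz-Zippel).
    return (z1 * z1 - z2) % p == 0

print(sketch_and_verify([0, 1, 0, 0]))  # accepted: 0/1 vector of weight one
print(sketch_and_verify([0, 1, 1, 0]))  # rejected with overwhelming probability
```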
2023
CRYPTO
Attribute-Based Multi-Input FE (and more) for Attribute-Weighted Sums
Recently, Abdalla, Gong and Wee (Crypto 2020) provided the first functional encryption scheme for attribute-weighted sums (AWS), where encryption takes as input $N$ (unbounded) attribute-value pairs $\{(x_i, z_i)\}_{i \in [N]}$ where $x_i$ is public and $z_i$ is private, the secret key is associated with an arithmetic branching program $f$, and decryption returns the weighted sum $\sum_{i \in [N]} f(x_i)^\top z_i$, leaking no additional information about the $z_i$'s. We extend FE for AWS to the significantly more challenging multi-party setting and provide the first construction for {\it attribute-based} multi-input FE (MIFE) supporting AWS. For $i \in [n]$, encryptor $i$ can choose an attribute $y_i$ together with AWS input $\{(x_{i,j}, z_{i,j})\}$ where $j \in [N_i]$ and $N_i$ is unbounded, the key generator can choose an access control policy $g_i$ along with its AWS function $h_i$ for each $i \in [n]$, and the decryptor can compute $$\sum_{i \in [n]} \sum_{j \in [N_{i}]}h_{i}(x_{i,j})^\top z_{i,j} \text{ iff } g_{i}(y_{i}) =0 \text{ for all } i \in [n]$$ Previously, the only known attribute-based MIFE was for the inner product functionality (Abdalla et al., Asiacrypt 2020), where additionally, $y_i$ had to be fixed during setup and must remain the same for all ciphertexts in a given slot. Our attribute-based MIFE implies the notion of multi-input attribute-based encryption (MIABE) recently studied by Agrawal, Yadav and Yamada (Crypto 2022) and Francati, Friolo, Malavolta and Venturi (Eurocrypt 2023), for a conjunction of predicates represented as arithmetic branching programs (ABP). Along the way, we also provide the first constructions of multi-client FE (MCFE) and dynamic decentralized FE (DDFE) for the AWS functionality. Previously, the best known MCFE and DDFE schemes were for inner products (Chotard et al., ePrint 2018; Abdalla, Benhamouda and Gay, Asiacrypt 2019; and Chotard et al., Crypto 2020).
2023
CRYPTO
Best of Both Worlds: Revisiting the Spymasters Double Agent Problem
This work introduces a new notion of secure multiparty computation: MPC with fall-back security. Fall-back security for an $n$-party protocol is defined with respect to an adversary structure $\mathcal{Z}$ wherein security is guaranteed in the presence of both a computationally unbounded adversary with adversary structure $\mathcal{Z}$, and a computationally bounded adversary corrupting an arbitrarily large subset of the parties. This notion was considered in the work of Chaum (Crypto '89) via the Spymaster's double agent problem, where he showed a semi-honest secure protocol for the honest majority adversary structure. Our first main result is a compiler that can transform any $n$-party protocol that is semi-honestly secure with statistical security tolerating an adversary structure $\mathcal{Z}$ to one that (additionally) provides semi-honest fall-back security w.r.t. $\mathcal{Z}$. The resulting protocol has optimal round complexity, up to a constant factor, and is optimal in assumptions and the adversary structure. Our second result fully characterizes when malicious fall-back security is feasible. More precisely, we show that a malicious fall-back secure protocol w.r.t. $\mathcal{Z}$ exists if and only if $\mathcal{Z}$ admits unconditional MPC against a semi-honest adversary (namely, iff $\mathcal{Z} \in \mathcal{Q}^2$).
2023
CRYPTO
Bingo: Adaptivity and Asynchrony in Verifiable Secret Sharing and Distributed Key Generation
We present Bingo, an adaptively secure and optimally resilient packed asynchronous verifiable secret sharing (PAVSS) protocol that allows a dealer to share f+1 secrets with a total communication complexity of O(λn^2) words, where λ is the security parameter and n is the number of parties. Using Bingo, we obtain an adaptively secure validated asynchronous Byzantine agreement (VABA) protocol that uses O(λn^3) expected words and constant expected time, which we in turn use to construct an adaptively secure high-threshold asynchronous distributed key generation (ADKG) protocol that uses O(λn^3) expected words and constant expected time. To the best of our knowledge, our ADKG is the first to allow for an adaptive adversary while matching the asymptotic complexity of the best known static ADKGs.
2023
CRYPTO
Black-Hole Radiation Decoding is Quantum Cryptography
We propose to study equivalence relations between phenomena in high-energy physics and the existence of standard cryptographic primitives, and show the first example where such an equivalence holds. A small number of prior works showed that high-energy phenomena can be explained by cryptographic hardness. Examples include using the existence of one-way functions to explain the hardness of decoding black-hole Hawking radiation (Harlow and Hayden 2013, Aaronson 2016), and using pseudorandom quantum states to explain the hardness of computing the AdS/CFT dictionary (Bouland, Fefferman and Vazirani, 2020). In this work we show, for the former example of black-hole radiation decoding, that it also implies the existence of secure quantum cryptography. In fact, we show an existential equivalence between the hardness of black-hole radiation decoding and a variety of cryptographic primitives, including bit-commitment schemes and oblivious transfer protocols (using quantum communication). This can be viewed (with proper disclaimers, as we discuss) as providing a physical justification for the existence of secure cryptography. We conjecture that such connections may be found in other high-energy physics phenomena.
2023
CRYPTO
Brakedown: Linear-time and field-agnostic SNARKs for R1CS
This paper introduces a SNARK called Brakedown. Brakedown targets R1CS, a popular NP-complete problem that generalizes circuit-satisfiability. It is the first built system that provides a linear-time prover, meaning the prover incurs O(N) finite field operations to prove the satisfiability of an N-sized R1CS instance. Brakedown’s prover is faster, both concretely and asymptotically, than prior SNARK implementations. It does not require a trusted setup and may be post-quantum secure. Furthermore, it is compatible with arbitrary finite fields of sufficient size; this property is new among built proof systems with sublinear proof sizes. To design Brakedown, we observe that recent work of Bootle, Chiesa, and Groth (BCG, TCC 2020) provides a polynomial commitment scheme that, when combined with the linear-time interactive proof system of Spartan (CRYPTO 2020), yields linear-time IOPs and SNARKs for R1CS (a similar theoretical result was previously established by BCG, but our approach is conceptually simpler, and crucial for achieving high-speed SNARKs). A core ingredient in the polynomial commitment scheme that we distill from BCG is a linear-time encodable code. Existing constructions of such codes are believed to be impractical. Nonetheless, we design and engineer a new one that is practical in our context. We also implement a variant of Brakedown that uses Reed-Solomon codes instead of our linear-time encodable codes; we refer to this variant as Shockwave. Shockwave is not a linear-time SNARK, but it provides shorter proofs and lower verification times than Brakedown, and also provides a faster prover than prior plausibly post-quantum SNARKs.
2023
CRYPTO
Cloning Games: A General Framework for Unclonable Primitives
The powerful no-cloning principle of quantum mechanics can be leveraged to achieve interesting primitives, referred to as unclonable primitives, that are impossible to achieve classically. In the past few years, we have witnessed a surge of new unclonable primitives. While prior works have mainly focused on establishing feasibility results, another equally important direction, that of understanding the relationship between different unclonable primitives, is still in its nascent stages. Moving forward, we need a more systematic study of unclonable primitives. To this end, we introduce a new framework called {\em cloning games}. This framework captures many fundamental unclonable primitives such as quantum money, copy-protection, unclonable encryption, single-decryptor encryption, and many more. By reasoning about different types of cloning games, we obtain many interesting implications to unclonable cryptography, including the following: 1. We obtain the first construction of information-theoretically secure single-decryptor encryption in the one-time setting. 2. We construct unclonable encryption in the quantum random oracle model based on BB84 states, improving upon the previous work, which used coset states. Our work also provides a simpler security proof for the previous work. 3. We construct copy-protection for single-bit point functions in the quantum random oracle model based on BB84 states, improving upon the previous work, which used coset states, and additionally, providing a simpler proof. 4. We establish a relationship between different challenge distributions of copy-protection schemes and single-decryptor encryption schemes. 5. Finally, we present a new construction of one-time encryption with certified deletion.
2023
CRYPTO
Coefficient Grouping for Complex Affine Layers
Designing symmetric-key primitives for applications in Fully Homomorphic Encryption (FHE) has become important to address the issue of the ciphertext expansion. In such a context, cryptographic primitives with a low-AND-depth decryption circuit are desired. Consequently, quadratic nonlinear functions are commonly used in these primitives, including the well-known $\chi$ function over $\mathbb{F}_2^n$ and the power map over a large finite field $\mathbb{F}_{p^n}$. In this work, we study the growth of the algebraic degree for an SPN cipher over $\mathbb{F}_{2^n}^{\mathrm{width}}$, whose S-box is defined as the combination of a power map $x\mapsto x^{2^d+1}$ and an $\mathbb{F}_2$-linearized affine polynomial $x\mapsto c_0+\sum_{i=1}^{w}c_ix^{2^{h_i}}$ where $c_1,\ldots,c_w\neq0$. Specifically, motivated by the fact that the original coefficient grouping technique published at EUROCRYPT 2023 becomes less efficient for $w>1$, we develop a variant technique that can efficiently work for arbitrary $w$. With this new technique to study the upper bound of the algebraic degree, we answer the following questions from a theoretic perspective: (1) can the algebraic degree increase exponentially when $w=1$? (2) what is the influence of $w$, $d$ and $(h_1,\ldots,h_w)$ on the growth of the algebraic degree? Based on this, we show (i) how to efficiently find $(h_1,\ldots,h_w)$ to achieve the exponential growth of the algebraic degree and (ii) how to efficiently compute the upper bound of the algebraic degree for arbitrary $(h_1,\ldots,h_w)$. Therefore, we expect that these results can further advance the understanding of the design and analysis of such primitives.
2023
CRYPTO
Combined Fault and Leakage Resilience: Composability, Constructions and Compiler
Real-world cryptographic implementations nowadays are not only attacked via classical cryptanalysis but also via implementation attacks. Roughly, these attacks can be divided into passive attacks, where the adversary observes information about the internals of the computation; and active attacks, where an adversary attempts to induce faults. While there is a rich literature on countermeasures targeting either of these attacks, preventing \emph{combined} attacks only recently received wider attention by the research community. In order to protect against passive side-channel attacks the standard technique is to use masking. Here, all sensitive information is secret shared such that leakage from individual shares does not reveal relevant information. To further lift the masking countermeasure to protect against active attacks, two different approaches have been considered in the literature. First, we may run $\epsilon$ copies of the masked computation and verify the outputs in order to detect faulty computation in one of the copies. This approach, however, has the following shortcomings. Firstly, we either require a huge amount of randomness ($O(\epsilon)$ more than a single masked circuit consumes), or we re-use the randomness among all $\epsilon$ copies, which makes the computation highly vulnerable to so-called horizontal attacks. Secondly, the number of shares is quadratic, resulting in quadratic complexity even for affine computations. An alternative approach is to use polynomial masking, where instead of using additive masking, we use a sharing based on Reed-Solomon codes. This has the advantage that the encoding itself already provides some resilience against faults, which is not the case for the simple additive encoding. Unfortunately, however, current state-of-the-art schemes either lead to an overhead of $O(n^5)$ for non-linear gates (here $n$ is the number of masks), or only work against very restricted faults. In this work, we present a compiler based on polynomial masking that uses only $n=d+\epsilon+1$ shares and achieves linear computational complexity for affine computation (as previous polynomial approaches) and cubic complexity for non-linear gates (as previous approaches using the duplication method). Hence, our compiler has the best-known asymptotic efficiency among all known approaches. Furthermore, our compiler provides security against much stronger attackers that use region probes and adaptive faults and is thus secure against horizontal attacks. To achieve our construction, we introduce the notion of fault-invariance that allows us to lift probing-secure gadgets to also be secure against combined attacks without considering all possible fault combinations. This technique improves upon previous approaches, which verify probing security for all possible fault combinations, and allows for much simpler constructions.
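To make the contrast between additive and polynomial masking concrete, here is a minimal sketch (over an illustrative prime field, in plain Python) of polynomial (Shamir/Reed-Solomon style) masking: the secret is the constant term of a random degree-$d$ polynomial and the shares are its evaluations, so the encoding carries the error-detecting redundancy of a Reed-Solomon code. The field size, degree, and evaluation points are illustrative, not the paper's parameters.

```python
import random

p = 2**31 - 1  # illustrative prime field

def poly_mask(secret, d, n):
    """Share `secret` as evaluations of a random degree-d polynomial f with f(0) = secret."""
    coeffs = [secret] + [random.randrange(p) for _ in range(d)]
    return [sum(c * pow(i, j, p) for j, c in enumerate(coeffs)) % p
            for i in range(1, n + 1)]            # share i is f(i)

def unmask(shares_with_points):
    """Lagrange interpolation at 0 recovers the secret from any d+1 shares."""
    s = 0
    for i, (xi, yi) in enumerate(shares_with_points):
        num = den = 1
        for j, (xj, _) in enumerate(shares_with_points):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        s = (s + yi * num * pow(den, -1, p)) % p
    return s

shares = poly_mask(1234, d=2, n=5)
print(unmask(list(zip(range(1, 4), shares[:3]))))  # 1234
```

With additive masking, by contrast, the shares would simply be random field elements summing to the secret, with no built-in redundancy against faults.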
2023
CRYPTO
Compact Lattice Gadget and Its Applications to Hash-and-Sign Signatures
Lattice gadgets and the associated algorithms are the essential building blocks of lattice-based cryptography. In the past decade, they have been applied to build versatile and powerful cryptosystems. However, the practical optimizations and designs of gadget-based schemes generally lag behind their theoretical constructions. For example, gadget-based signatures have an elegant design and the capability of extending to more advanced primitives, but they are far less efficient than other lattice-based signatures. This work aims to improve the practicality of gadget-based cryptosystems, with a focus on hash-and-sign signatures. To this end, we develop a compact gadget framework in which the used gadget is a \emph{square} matrix instead of the short and fat one used in previous constructions. To work with this compact gadget, we devise a specialized gadget sampler, called the \emph{semi-random sampler}, to compute the approximate preimage. It first \emph{deterministically} computes the error and then randomly samples the preimage. We show that for uniformly random targets, the preimage and error distributions are simulatable without knowing the trapdoor. This ensures the security of the signature applications. Compared to the Gaussian-distributed errors in previous algorithms, the deterministic errors have a smaller size, which leads to a substantial gain in security and enables a practically working instantiation. As applications, we present two practically efficient gadget-based signature schemes based on NTRU and Ring-LWE respectively. The NTRU-based scheme offers comparable efficiency to Falcon and Mitaka and a simple implementation without the need to generate the NTRU trapdoor. The LWE-based scheme also achieves a desirable overall performance. It not only greatly outperforms the state-of-the-art LWE-based hash-and-sign signatures, but also has an even smaller size than the LWE-based Fiat-Shamir signature scheme Dilithium. These results fill the long-term gap in practical gadget-based signatures.
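For context (standard background rather than the paper's new construction), the ``short and fat'' gadget used in previous constructions is typically the matrix $$ \mathbf{G} = \mathbf{I}_n \otimes \mathbf{g}^\top \in \mathbb{Z}_q^{\,n \times nk}, \qquad \mathbf{g}^\top = (1, b, b^2, \ldots, b^{k-1}), \qquad k = \lceil \log_b q \rceil, $$ for which preimage sampling essentially amounts to (randomized) base-$b$ digit decomposition; the square gadget and semi-random sampler described above replace this wide matrix and its sampler.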
2023
CRYPTO
Completeness Theorems for Adaptively Secure Broadcast
The advent of blockchain protocols has reignited the interest in adaptively secure broadcast; it is by now well understood that broadcasting over a diffusion network allows an adaptive adversary to corrupt the sender depending on the message it attempts to send and change it. Hirt and Zikas [Eurocrypt '10] proved that this is an inherent limitation of broadcast in the simulation-based setting---i.e., this task is impossible against an adaptive adversary corrupting a majority of the parties (a task that is achievable against a static adversary). The contributions of this paper are two-fold. First, we show that, contrary to previous perception, the above limitation of adaptively secure broadcast is not an artifact of simulation-based security, but rather an inherent issue of adaptive security. In particular, we show that: (1) it also applies to the property-based broadcast definition adapted for adaptive adversaries, and (2) unlike other impossibilities in adaptive security, this impossibility cannot be circumvented by adding a programmable random oracle, in either setting, property-based or simulation-based. Second, we turn to the resource-restricted cryptography (RRC) paradigm [Garay et al., Eurocrypt '20], which has proven useful in circumventing impossibility results, and ask whether it also affects the above negative result. We answer this question in the affirmative, by showing that time-lock puzzles (TLPs)---which can be viewed as an instance of RRC---indeed allow for achieving the property-based definition and circumvent the impossibility of adaptively secure broadcast. The natural question is then, do TLPs also allow for simulation-based adaptively secure broadcast against corrupted majorities? We answer this question in the negative. However, we show that a positive result can be achieved via a non-committing analogue of TLPs in the programmable random-oracle model. Importantly, and as a contribution of independent interest, we also present the first (limited) composition theorem in the resource-restricted setting, which is needed for the complexity-based, non-idealized treatment of TLPs in the context of other protocols.
2023
CRYPTO
Computational Wiretap Coding from Indistinguishability Obfuscation
A wiretap coding scheme for a pair of noisy channels $(\mathsf{ChB},\mathsf{ChE})$ enables Alice to reliably communicate a message to Bob by sending its encoding over $\mathsf{ChB}$, while hiding the message from an adversary Eve who obtains the same encoding over $\mathsf{ChE}$. A necessary condition for the feasibility of wiretap coding is that $\mathsf{ChB}$ is not a {\em degradation} of $\mathsf{ChE}$, namely Eve cannot simulate Bob’s view. While insufficient in the information-theoretic setting, a recent work of Ishai, Korb, Lou, and Sahai (Crypto 2022) showed that the non-degradation condition {\em is} sufficient in the computational setting, assuming idealized flavors of obfuscation. The question of basing a similar feasibility result on standard cryptographic assumptions was left open, even in simple special cases. In this work, we settle the question for all discrete memoryless channels where the (common) input alphabet of $\mathsf{ChB}$ and $\mathsf{ChE}$ is {\em binary}, and with arbitrary finite output alphabets, under the standard assumptions that indistinguishability obfuscation and injective PRGs exist. In particular, this establishes the feasibility of computational wiretap coding when $\mathsf{ChB}$ is a binary symmetric channel with crossover probability $p$ and $\mathsf{ChE}$ is a binary erasure channel with erasure probability $e$, where $e>2p$. On the information-theoretic side, our result builds on a new polytope characterization of channel degradation for pairs of binary-input channels, which may be of independent interest.
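A quick sanity check of the threshold $e > 2p$ (standard channel-degradation reasoning, not taken from the abstract): Eve can turn her erasure channel into a binary symmetric channel by guessing each erased bit uniformly at random and flipping each non-erased bit with probability $\delta$, giving overall error probability $$ (1-e)\,\delta + \tfrac{e}{2} = p \quad\Longleftrightarrow\quad \delta = \frac{p - e/2}{1-e}, $$ which has a valid solution $\delta \in [0,1]$ exactly when $e \le 2p$ (for $p \le 1/2$). Hence for $e > 2p$ Eve cannot simulate Bob's channel, matching the non-degradation condition under which the feasibility result applies.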
2023
CRYPTO
Constant Input Attribute Based (and Predicate) Encryption from Evasive and Tensor LWE
Constructing advanced cryptographic primitives such as obfuscation or broadcast encryption from standard hardness assumptions in the post-quantum regime is an important area of research, which has met with limited success despite significant effort. It is therefore extremely important to find new, simple to state assumptions in this regime which can be used to fill this gap. An important step was taken recently by Wee (Eurocrypt '22) who identified two new assumptions from lattices, namely evasive LWE and tensor LWE, and used these to construct broadcast encryption and ciphertext-policy attribute based encryption for P with optimal parameters. Independently, Tsabary formulated a similar assumption and used it to construct witness encryption (Crypto '22). Following Wee's work, Vaikuntanathan, Wee and Wichs independently provided a construction of witness encryption (Asiacrypt '22). In this work, we advance this line of research by providing the first construction of multi-input attribute based encryption (miABE) for the function class NC_1 for any constant arity from evasive LWE. Our construction can be extended to support the function class P by using evasive LWE and a suitable strengthening of tensor LWE. In more detail, our construction supports k encryptors, for any constant k, where each encryptor uses the master secret key msk to encode its input (x_i, m_i), the key generator computes a key sk_f for a function f \in NC_1 and the decryptor can recover (m_1,...,m_k) if and only if f(x_1,...,x_k)=1. The only known construction of miABE for NC_1, by Agrawal, Yadav and Yamada (Crypto '22), supports arity 2 and relies on pairings in the generic group model (or with a non-standard knowledge assumption) in addition to LWE. Furthermore, it is completely unclear how to go beyond arity 2 using this approach due to the reliance on pairings. Using a compiler from Agrawal, Yadav and Yamada (Crypto '22), our miABE can be upgraded to multi-input predicate encryption for the same arity and function class. Thus, we obtain the first constructions of constant-arity predicate and attribute based encryption for a generalized class such as NC_1 or P from simple assumptions that may be conjectured post-quantum secure. Along the way, we show that the tensor LWE assumption can be reduced to standard LWE in an important special case which was not known before. This adds confidence to the plausibility of the assumption and may be of wider interest.
2023
CRYPTO
Correlated Pseudorandomness from the Hardness of Quasi-Abelian Decoding
Secure computation often benefits from the use of correlated randomness to achieve fast, non-cryptographic online protocols. A recent paradigm put forth by Boyle et al. (CCS 2018, Crypto 2019) showed how pseudorandom correlation generators (PCG) can be used to generate large amounts of useful forms of correlated (pseudo)randomness, using minimal interactions followed solely by local computation, yielding silent secure two-party computation protocols (protocols where the preprocessing phase requires almost no communication). Furthermore, programmable PCGs can be used similarly to generate multiparty correlated randomness to be used in silent secure N-party protocols. Previous works constructed very efficient (non-programmable) PCGs for correlations such as random oblivious transfer. However, the situation is less satisfying for the case of random oblivious linear evaluation (OLE), which generalizes oblivious transfer over large fields and is a core resource for secure computation of arithmetic circuits. The state-of-the-art work of Boyle et al. (Crypto 2020) constructed programmable PCGs for OLE, but their work suffers from two important downsides: (1) it only generates OLEs over large fields, and (2) it relies on a relatively new ``splittable'' ring-LPN assumption, which lacks strong security foundations. In this work, we construct new programmable PCGs for the OLE correlation that overcome both limitations. To this end, we introduce the Quasi-Abelian Syndrome Decoding problem (QASD), a family of assumptions which generalizes the well-established Quasi-Cyclic Syndrome Decoding assumption. Building upon QASD, we construct new programmable PCGs for OLEs over any field Fq with q > 2. Furthermore, we provide strong security foundations for QASD, showing that it resists all attacks from the linear test framework (Couteau et al., Crypto 2021) and admits a search-to-decision reduction. In particular, our analysis also sheds light on the security of the ring-LPN assumption used in Boyle et al. (Crypto 2020). Using our new PCGs, we obtain the first efficient N-party silent secure computation protocols for computing general arithmetic circuits over Fq for any q > 2.
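For readers unfamiliar with the OLE correlation mentioned above, a standard way to state it (background, not this paper's notation): one party receives a random pair $(a, b) \in \mathbb{F}_q^2$ and the other receives a random $x \in \mathbb{F}_q$ together with $$ y = a \cdot x + b \in \mathbb{F}_q. $$ Over $\mathbb{F}_2$ this is equivalent to a random oblivious transfer (the receiver's bit $x$ selects one of the two sender values $b$ and $a+b$), which is the sense in which OLE generalizes OT to large fields.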
2023
CRYPTO
Correlation Intractability and SNARGs from Sub-exponential DDH
We provide the first constructions of SNARGs for Batch-NP and P based solely on the sub-exponential Decisional Diffie-Hellman (DDH) assumption. Our schemes achieve poly-logarithmic proof sizes. We obtain our results by following the correlation-intractability framework for secure instantiation of the Fiat-Shamir paradigm. The centerpiece of our results, and of independent interest, is a new construction of correlation-intractable hash functions for ``small input'' product relations verifiable in TC0, based on sub-exponential DDH.
2023
CRYPTO
Cryptanalysis of Symmetric Primitives over Rings and a Key Recovery Attack on Rubato
Symmetric primitives are a cornerstone of cryptography, and have traditionally been defined over fields, where cryptanalysis is now well understood. However, a few symmetric primitives defined over rings Z_q for a composite number q have recently been proposed, a setting where security is much less studied. In this paper we focus on studying established algebraic attacks typically defined over fields and the extent of their applicability to symmetric primitives defined over the ring of integers modulo a composite q. Based on our analysis, we present an attack on full Rubato, a family of symmetric ciphers proposed by Ha et al. at Eurocrypt 2022 designed to be used in a transciphering framework for approximate fully homomorphic encryption. We show that at least 25% of the possible choices for q satisfy certain conditions that lead to a successful key recovery attack with complexity significantly lower than the claimed security level for five of the six ciphers in the Rubato family.
2023
CRYPTO
Cryptography with Certified Deletion
We propose a new, unifying framework that yields an array of cryptographic primitives with certified deletion. These primitives enable a party in possession of a quantum ciphertext to generate a classical certificate that the encrypted plaintext has been information-theoretically deleted, and cannot be recovered even given unbounded computational resources. - For $X \in \{\mathsf{public}\text{-}\mathsf{key},\mathsf{attribute\text{-}based},\mathsf{fully\text{-}homomorphic},\mathsf{witness},\mathsf{timed}\text{-}\mathsf{release}\}$, our compiler converts any (post-quantum) $X$ encryption to $X$ encryption with certified deletion. In addition, we compile statistically-binding commitments to statistically-binding commitments with certified everlasting hiding. As a corollary, we also obtain statistically-sound zero-knowledge proofs for QMA with certified everlasting zero-knowledge assuming statistically-binding commitments. - We also obtain a strong form of everlasting security for two-party and multi-party computation in the dishonest majority setting. While simultaneously achieving everlasting security against \emph{all} parties in this setting is known to be impossible, we introduce {\em everlasting security transfer (EST)}. This enables \emph{any one} party (or a subset of parties) to dynamically and certifiably information-theoretically delete other participants' data after protocol execution. We construct general-purpose secure computation with EST assuming statistically-binding commitments, which can be based on one-way functions or pseudorandom quantum states. We obtain our results by developing a novel proof technique to argue that a bit $b$ has been {\em information-theoretically deleted} from an adversary's view once they output a valid deletion certificate, despite having been previously {\em information-theoretically determined} by the ciphertext they held in their view. This technique may be of independent interest.
2023
CRYPTO
Cryptography with Weights: MPC, Encryption and Signatures
The security of many powerful cryptographic systems such as secure multiparty computation, threshold encryption, and threshold signatures rests on trust assumptions about the parties. The de-facto model treats all parties equally and requires that a certain fraction of the parties are honest. While this paradigm of one-person-one-vote has been very successful over the years, current and emerging practical use cases suggest that it is outdated. In this work, we consider {\em weighted} cryptosystems where every party is assigned a certain weight and the trust assumption is that a certain fraction of the total weight is honest. This setting can be translated to the standard setting (where each party has a unit weight) via virtualization. However, this method is quite expensive, incurring a multiplicative overhead in the weight. We present new weighted cryptosystems with significantly better efficiency: our proposed schemes incur only an {\em additive} overhead in weights. - We first present a weighted ramp secret-sharing scheme (WRSS) where the size of a secret share is $O(w)$ (where $w$ corresponds to the weight). In comparison, Shamir's secret sharing with virtualization requires secret shares of size $w\cdot\lambda$, where $\lambda=\log |\mathbb{F}|$ is the security parameter. - Next, we use our WRSS to construct weighted versions of (semi-honest) secure multiparty computation (MPC), threshold encryption, and threshold signatures. All these schemes inherit the efficiency of our WRSS and incur only an additive overhead in weights. Our WRSS is based on the Chinese remainder theorem-based secret-sharing scheme. Interestingly, this secret-sharing scheme is {\em non-linear} and only achieves statistical privacy. These distinct features introduce several technical hurdles in applications to MPC and threshold cryptosystems. We resolve these challenges by developing several new ideas.
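For background on the last point, here is a minimal sketch (in plain Python, with illustrative moduli and no weighting) of Chinese-remainder-theorem-based secret sharing in the Asmuth-Bloom style; the abstract's WRSS builds on CRT-based sharing, and the weighted variant, which assigns moduli according to party weights, is not captured by this sketch.

```python
import secrets
from functools import reduce

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise-coprime moduli."""
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def share(secret, m0, moduli, t):
    """Asmuth-Bloom style sharing: any t shares recover the secret, fewer reveal
    (almost) nothing, assuming the moduli satisfy the usual threshold condition
    and secret < m0 << product of the t smallest moduli."""
    M_t = reduce(lambda a, b: a * b, sorted(moduli)[:t])
    y = secret + secrets.randbelow((M_t - secret) // m0) * m0  # lift the secret below M_t
    return [y % m for m in moduli]

def reconstruct(shares, moduli, m0):
    return crt(shares, moduli) % m0

# Example: 2-out-of-3 sharing of 42 with secret space Z_97
m0, mods = 97, [10007, 10009, 10037]
sh = share(42, m0, mods, t=2)
print(reconstruct(sh[:2], mods[:2], m0))  # 42
```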
2023
CRYPTO
CSI-Otter: Isogeny-based (Partially) Blind Signatures from the Class Group Action with a Twist
In this paper, we construct the first provably-secure isogeny-based (partially) blind signature scheme. While at a high level the scheme resembles the Schnorr blind signature, our work does not directly follow from that construction, since isogenies do not offer as rich an algebraic structure. Specifically, our protocol does not fit into the linear identification protocol abstraction introduced by Hauck, Kiltz, and Loss (EUROCRYPT’19), which was used to generically construct Schnorr-like blind signatures based on modules such as classical groups and lattices. Consequently, our scheme does not seem susceptible to the recent efficient ROS attack exploiting the linear nature of the underlying mathematical tool. In more detail, our blind signature exploits the quadratic twist of an elliptic curve in an essential way to endow isogenies with a strictly richer structure than abstract group actions (but still more restrictive than modules). The basic scheme has public key size 128 B and signature size 8 KB under the CSIDH-512 parameter set—these are the smallest among all provably secure post-quantum blind signatures. Relying on a new ring variant of the group action inverse problem (rGAIP), we can halve the signature size to 4 KB while increasing the public key size to 512 B. We provide preliminary cryptanalysis of rGAIP and show that for certain parameter settings, it is essentially as secure as the standard GAIP. Finally, we show a novel way to turn our blind signature into a partially blind signature, where we deviate from prior methods since they require hashing into the set of public keys while hiding the corresponding secret key—constructing such a hash function in the isogeny setting remains an open problem.
2023
CRYPTO
Cuckoo Hashing in Cryptography: Optimal Parameters, Robustness and Applications
Cuckoo hashing is a powerful primitive that enables storing items using small space with efficient querying. At a high level, cuckoo hashing maps $n$ items into $b$ entries storing at most $\ell$ items such that each item is placed into one of $k$ randomly chosen entries. Additionally, there is an overflow stash that can store at most $s$ items. Many cryptographic primitives rely upon cuckoo hashing to privately embed and query data. It is integral to ensure small failure probability for constructing cuckoo hashing tables as it directly relates to the privacy. As our main result, we present a more query-efficient cuckoo hashing construction using more hash functions. For construction failure probability $\epsilon$, the query overhead of our scheme is $O(1 + \sqrt{\log(1/\epsilon)/\log n})$. Our scheme has quadratically smaller query overhead than prior works for any target failure probability $\epsilon$. We also prove lower bounds matching our construction. Our improvements come from a new understanding of the locality of cuckoo hashing failures for small sets of items. We also initiate the study of robust cuckoo hashing where the input set may be chosen with knowledge of the hash functions. We present a cuckoo hashing scheme using more hash functions with query overhead $\tilde{O}(\log \lambda)$ that is robust against $\mathrm{poly}(\lambda)$ adversaries. Furthermore, we present lower bounds showing that this construction is tight and that extending previous approaches of large stashes or entries cannot obtain robustness except with $\Omega(n)$ query overhead. As applications of our results, we obtain improved constructions for batch codes and PIR. In particular, we present the most efficient explicit batch code and blackbox reduction from single-query PIR to batch PIR.
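As a quick illustration of the data structure described above (not the paper's improved construction), here is a minimal random-walk cuckoo hashing sketch in which each item may live in one of $k$ candidate single-item entries and overflowing items go to a stash; the hash functions, eviction policy, and bounds are illustrative.

```python
import random

def build_cuckoo_table(items, b, k, stash_size, max_evictions=500):
    """Place items into b entries, k candidate entries per item, plus a stash."""
    hashes = [lambda x, s=random.random(): hash((s, x)) % b for _ in range(k)]
    table = [None] * b
    stash = []
    for item in items:
        cur = item
        for _ in range(max_evictions):
            slot = random.choice([h(cur) for h in hashes])  # one of k candidate entries
            cur, table[slot] = table[slot], cur             # place, possibly evicting the occupant
            if cur is None:
                break                                       # found an empty entry
        else:
            stash.append(cur)                               # eviction chain too long: overflow stash
            if len(stash) > stash_size:
                raise RuntimeError("construction failure")
    return table, stash, hashes
```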
2023
CRYPTO
Differential Meet-In-The-Middle Cryptanalysis
In this paper we introduce the differential meet-in-the-middle framework, a new cryptanalysis technique for symmetric primitives. Our new cryptanalysis method combines techniques from both meet-in-the-middle and differential cryptanalysis. As such, the introduced technique can be seen as a way of extending meet-in-the-middle attacks and their variants, but also as a new way to perform the key recovery part in differential attacks. We apply our approach to SKINNY-128-384 in the single-key model and to AES-256 in the related-key model. Our attack on SKINNY-128-384 breaks 25 out of the 56 rounds of this variant and improves on the previous best known attacks by two rounds. For AES-256 we attack 12 rounds by considering two related keys, thus outperforming the previous best related-key attack on AES-256 with only two related keys by 2 rounds.
2023
CRYPTO
Does the Dual-Sieve Attack on Learning with Errors even Work?
Guo and Johansson (ASIACRYPT 2021), and MATZOV (tech. report 2022) have independently claimed improved attacks against various NIST lattice candidates by adding a Fast Fourier Transform (FFT) trick on top of the so-called Dual-Sieve attack. Recently, there has been further follow-up work in this line, adding new practical improvements. However, from a theoretical perspective, all of these works are painfully specific to Learning with Errors, while the principle of the Dual-Sieve attack is more general (Laarhoven and Walter, CT-RSA 2021). More critically, all of these works are based on heuristics that have received very little theoretical and experimental attention. This work attempts to rectify the above deficiencies of the literature. We first propose a generalization of the FFT trick by Guo and Johansson to arbitrary Bounded Distance Decoding instances. This generalization offers a new improvement to the attack. We then theoretically explore the underlying heuristics and show that these are in contradiction with formal, unconditional theorems in some regimes, and with well-tested heuristics in other regimes. The specific instantiations of the recent literature fall into this second regime. We confirm these contradictions with experiments, documenting several phenomena that are not predicted by the analysis, including a "waterfall-floor" phenomenon, reminiscent of Low-Density Parity-Check decoding failures. We conclude that the success probability of the recent Dual-Sieve-FFT attacks is presumably significantly overestimated. We further discuss the adequate way forward towards fixing the attack and its analysis.
2023
CRYPTO
DualMS: Efficient Lattice-Based Two-Round Multi-Signature with Trapdoor-Free Simulation
A multi-signature scheme allows multiple signers to jointly sign a common message. In recent years, two lattice-based two-round multi-signature schemes based on Dilithium-G were proposed: DOTT by Damgård, Orlandi, Takahashi, and Tibouchi (PKC'21) and Musig-L by Boschini, Takahashi, and Tibouchi (CRYPTO'22). In this work, we propose a new lattice-based two-round multi-signature scheme called DualMS. Compared to DOTT, DualMS is likely to significantly reduce signature size, since it replaces an opening to a homomorphic trapdoor commitment with a Dilithium-G response in the signature. Compared to Musig-L, concrete parameters show that DualMS has smaller public keys, signatures, and lower communication, while the first round cannot be preprocessed offline as in Musig-L. The main reason behind such improvements is a trapdoor-free ``dual signing simulation'' of our scheme. Signature simulation of DualMS is virtually the same as the normal signing procedure and does not use lattice trapdoors like DOTT and Musig-L.
2023
CRYPTO
Efficient Hybrid Exact/Relaxed Lattice Proofs and Applications to Rounding and VRFs
In this work, we study hybrid exact/relaxed zero-knowledge proofs from lattices, where the proved relation is exact in one part and relaxed in the other. Such proofs arise in important real-life applications such as those requiring verifiable PRF evaluation and have so far not received significant attention as a standalone problem. We first introduce a general framework, LANES+, for realizing such hybrid proofs efficiently by combining standard relaxed proofs of knowledge RPoK and the LANES framework (due to a series of works in Crypto'20, Asiacrypt'20, ACM CCS'20). The latter framework is a powerful lattice-based proof system that can prove exact linear and multiplicative relations. The advantage of LANES+ is its ability to realize hybrid proofs more efficiently by exploiting RPoK for the high-dimensional part of the secret witness while leaving a low-dimensional secret witness part for the exact proof that is proven at a significantly lower cost via LANES. Thanks to the flexibility of LANES+, other exact proof systems can also be supported. We apply our LANES+ framework to construct substantially shorter proofs of rounding, which is a central tool for verifiable deterministic lattice-based cryptography. Based on our rounding proof, we then design an efficient long-term verifiable random function (VRF), named LaV. LaV leads to the shortest VRF outputs among the proposals of standard (i.e., long-term and stateless) VRFs based on quantum-safe assumptions. Of independent interest, we also present generalized results for challenge difference invertibility, a fundamental soundness security requirement for many proof systems.
2023
CRYPTO
Error Correction and Ciphertext Quantization in Lattice Cryptography
Recent work in the design of rate $1 - o(1)$ lattice-based cryptosystems has used two distinct design paradigms, namely replacing the noise-tolerant encoding $m \mapsto (q/2)m$ present in many lattice-based cryptosystems with a more efficient encoding, and post-processing traditional lattice-based ciphertexts with a lossy compression algorithm, using a technique very similar to ``vector quantization'' from coding theory. We introduce a framework for the design of lattice-based encryption that captures both of these paradigms, and prove information-theoretic rate bounds within this framework. These bounds separate the settings of trivial and non-trivial quantization, and show the impossibility of rate $1 - o(1)$ encryption using both trivial quantization and polynomial modulus. They furthermore put strong limits on the rate of constructions that utilize lattices built by tensoring a lattice of small dimension with $\mathbb{Z}^k$, which is ubiquitous in the literature. We additionally introduce a new cryptosystem that matches the rate of the highest-rate currently known scheme, while encoding messages with a ``gadget'', which may be useful for constructions of Fully Homomorphic Encryption.
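For context, the sketch below illustrates the classical noise-tolerant encoding $m \mapsto (q/2)m$ mentioned above and its decoding by rounding; the noise is synthetic rather than an actual lattice ciphertext, and the modulus is an arbitrary example value.

```python
# Toy illustration of the noise-tolerant bit encoding m -> (q/2)*m and decoding by
# rounding; the "noise" is synthetic, not an actual lattice ciphertext.
import random

q = 3329                       # example modulus, chosen only for illustration
rng = random.Random(1)

def encode(m):                 # m in {0, 1}
    return (m * (q // 2)) % q

def decode(c):                 # correct whenever the accumulated noise has magnitude < q/4
    return 1 if q // 4 <= c < 3 * q // 4 else 0

for m in (0, 1):
    for _ in range(1000):
        e = rng.randrange(-(q // 4) + 1, q // 4)   # noise e with |e| < q/4
        assert decode((encode(m) + e) % q) == m
```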
2023
CRYPTO
Expand-Convolute Codes for Pseudorandom Correlation Generators from LPN
The recent development of pseudorandom correlation generators (PCG) holds tremendous promise for highly efficient MPC protocols. Among other correlations, PCGs allow for the efficient generation of oblivious transfer (OT) and vector oblivious linear evaluations (VOLE) with sublinear communication and concretely good computational overhead. This type of PCG makes use of a so-called LPN-friendly error-correcting code. That is, for large dimensions the code should have very efficient encoding and have high minimum distance. We investigate existing LPN-friendly codes and find that several candidates are less secure than was believed. Beginning with the recent expand-accumulate codes, we find that for their aggressive parameters, aimed at good concrete efficiency, they achieve a smaller minimum distance than conjectured. This decreases the resulting security parameter of the PCG but it remains unclear by how much. We additionally show that the recently proposed and extremely efficient silver codes achieve only very small minimum distance and result in concretely efficient attacks on the resulting PCG protocol. As such, silver codes should not be used. We introduce a new LPN-friendly code which we call expand-convolute. These codes have provably high minimum distance and faster encoding time than suitable alternatives, e.g. expand-accumulate. The main contribution of these codes is the introduction of a convolution step that dramatically increases the minimum distance. This in turn allows for a more efficient parameter selection which results in improved concrete performance. In particular, we observe a 2 times improvement in running time.
2023
CRYPTO
Exploring Decryption Failures of BIKE: New Class of Weak Keys and Key Recovery Attacks
Code-based cryptography has received a lot of attention recently because it is considered secure against quantum computers. Among code-based schemes, those based on QC-MDPC codes are among the most promising due to their excellent performance. QC-MDPC based schemes are usually subject to a small rate of decryption failure, which can leak information about the secret key. This raises two crucial problems: how to accurately estimate the decryption failure rate and how to use the failure information to recover the secret key. However, the two problems are challenging due to the difficulty of geometrically characterizing the bit-flipping decoder employed in QC-MDPC, such as using a decoding radius. In this work, we introduce the gathering property and show it is strongly connected with the decryption failure rate of QC-MDPC. Based on this property, we present two results for QC-MDPC based schemes. The first is a new construction of weak keys obtained by extending the keys that have the gathering property via ring isomorphism. For the set of weak keys, we present a rigorous analysis of the probability, as well as experimental simulation of the decryption failure rates. Considering BIKE's parameter set targeting $128$-bit security, our result eventually indicates that the average decryption failure rate is lower bounded by $\mathrm{DFR}_{\text{avg}} \ge 2^{-116.61}$. The second entails two key recovery attacks against CCA secure QC-MDPC schemes using decryption failures in a multi-target setting. The two attacks consider, respectively, whether or not ciphertexts may be reused. In both cases, we show the decryption failures can be used to identify whether a target's secret key satisfies the gathering property. Then, using the gathering property as extra information, we present a modified information set decoding algorithm that efficiently retrieves the target's secret key. For BIKE's parameter set targeting $128$-bit security, we show a key recovery attack with complexity $2^{116.61}$ can be mounted if ciphertext reuse is not permitted, and the complexity can be reduced to $2^{98.77}$ when ciphertext reuse is permitted.
2023
CRYPTO
Extractors: Low Entropy Requirements Colliding With Non-Malleability
Two-source extractors are deterministic functions that, given two independent weak sources of randomness, output a (close to) uniformly random string of bits. Cheraghchi and Guruswami (TCC 2015) introduced two-source non-malleable extractors that combine the properties of randomness extraction with tamper resilience. Two-source non-malleable extractors have since then attracted a lot of attention, and have very quickly become fundamental objects in cryptosystems involving communication channels that cannot be fully trusted. Various applications of two-source non-malleable extractors include in particular non-malleable codes, non-malleable commitments, non-malleable secret sharing, network extraction, and privacy amplification with tamperable memory. The best known constructions of two-source non-malleable extractors are due to Chattopadhyay, Goyal, and Li (STOC 2016), Li (STOC 2017), and Li (CCC 2019). All of these constructions require both sources to have min-entropy at least 0.99n, where n is the bit-length of each source. In this work, we introduce collision-resistant randomness extractors. This allows us to design a compiler that, given a two-source non-malleable extractor and a collision-resistant extractor, outputs a two-source non-malleable extractor that inherits the non-malleability property from the non-malleable extractor and the entropy requirement from the collision-resistant extractor. Nested application of this compiler leads to a dramatic improvement of the state-of-the-art mentioned above. We obtain a construction of a two-source non-malleable extractor where one source is required to have min-entropy greater than 0.8n, and the other source is required to have only polylog(n) min-entropy. Moreover, the other parameters of our construction, i.e., the output length and the error, remain comparable to prior constructions.
2023
CRYPTO
Fast Blind Rotation for Bootstrapping FHEs
Blind rotation is one of the key techniques to construct fully homomorphic encryptions with the best known bootstrapping algorithms running in less than one second. Currently, the two main approaches for realizing blind rotation, namely AP and GINX, were first introduced by Alperin-Sheriff and Peikert (CRYPTO 2014) and Gama, Izabachène, Nguyen and Xie (EUROCRYPT 2016), respectively. In this paper, we propose a new blind rotation algorithm based on a GSW-like encryption from the NTRU assumption. Our algorithm has performance asymptotically independent of the key distributions, and outperforms AP and GINX in both the evaluation key size and the computational efficiency (especially for large key distributions). By using our blind rotation algorithm as a building block, we present new bootstrapping algorithms for both LWE and RLWE ciphertexts. We implement our bootstrapping algorithm for LWE ciphertexts, and compare the actual performance with two bootstrapping algorithms, namely FHEW/AP by Ducas and Micciancio (EUROCRYPT 2015) and TFHE/GINX by Chillotti, Gama, Georgieva and Izabachène (Journal of Cryptology 2020), that were implemented in the OpenFHE library. For parameters with ternary key distribution at 128-bit security, our bootstrapping only needs to store an evaluation key of size 18.65 MB for blind rotation, which is about 89.8 times smaller than FHEW/AP and 2.9 times smaller than TFHE/GINX. Moreover, our bootstrapping can be done in 112 ms on a laptop, which is about 3.2 times faster than FHEW/AP and 2.1 times faster than TFHE/GINX. More improvements are available for large key distributions such as Gaussian distributions.
2023
CRYPTO
Fast Practical Lattice Reduction through Iterated Compression
Best Paper Award
We introduce a new lattice basis reduction algorithm with approximation guarantees analogous to the LLL algorithm and practical performance that far exceeds the current state of the art. We achieve these results by iteratively applying precision management techniques within a recursive algorithm structure and show the stability of this approach. We analyze the asymptotic behavior of our algorithm, and show that the heuristic running time is $O(n^{\omega}(C+n)^{1+\varepsilon})$ for lattices of dimension $n$, $\omega\in (2,3]$ bounding the cost of size reduction, matrix multiplication, and QR factorization, and $C$ bounding the log of the condition number of the input basis $B$. This yields a running time of $O\left(n^\omega (p + n)^{1 + \varepsilon}\right)$ for precision $p = O(\log \|B\|_{max})$ in common applications. Our algorithm is fully practical, and we have published our implementation. We experimentally validate our heuristic, give extensive benchmarks against numerous classes of cryptographic lattices, and show that our algorithm significantly outperforms existing implementations.
2023
CRYPTO
Finding short integer solutions when the modulus is small
We present cryptanalysis of the inhomogeneous short integer solution (ISIS) problem for anomalously small moduli $q$ by exploiting the geometry of BKZ reduced bases of $q$-ary lattices. We apply this cryptanalysis to examples from the literature where taking such small moduli has been suggested. A recent work [Espitau--Tibouchi--Wallet--Yu, CRYPTO 2022] suggests small $q$ versions of the lattice signature scheme Falcon and its variant Mitaka. For one small $q$ parametrisation of Falcon we reduce the estimated security against signature forgery by approximately 26 bits. For one small $q$ parametrisation of Mitaka we successfully forge a signature in 15 seconds.
2023
CRYPTO
Fixing and Mechanizing the Security Proof of Fiat-Shamir with Aborts and Dilithium
We extend and consolidate the security justification for the Dilithium signature scheme. In particular, we identify a subtle but crucial gap that appears in several ROM and QROM security proofs for signature schemes that are based on the Fiat-Shamir with aborts paradigm, including Dilithium. The gap lies in the CMA-to-NMA reduction and was uncovered when trying to formalize a variant of the QROM security proof by Kiltz, Lyubashevsky, and Schaffner (Eurocrypt 2018). The gap was confirmed by the authors, and there seems to be no simple patch for it. We provide new, fixed proofs for the affected CMA-to-NMA reduction, both for the ROM and the QROM, and we perform a concrete security analysis for the case of Dilithium to show that the claimed security level is still valid after addressing the gap. Furthermore, we offer a fully mechanized ROM proof for the CMA-security of Dilithium in the EasyCrypt proof assistant. Our formalization includes several new tools and techniques of independent interest for future formal verification results.
2023
CRYPTO
Fork-Resilient Continuous Group Key Agreement
Continuous Group Key Agreement (CGKA) lets an evolving group of clients agree on a sequence of group keys. An important application of CGKA is scalable asynchronous end-to-end (E2E) encrypted group messaging. A major problem preventing the use of CGKA over unreliable infrastructure is so-called forks. A fork occurs when group members have diverging views of the group's history (and thus its current state), e.g. due to network or server failures. Once communication channels are restored, members resolve a fork by agreeing on the state of the group again. Today's CGKA protocols make fork resolution challenging, as natural resolution strategies seem to conflict with the way the protocols enforce group state agreement and forward secrecy. Meanwhile, secure group messaging protocols which do support fork resolution do not scale nearly as well as CGKA does. In this work, we pave the way to practical scalable E2E messaging over unreliable infrastructure. To that end, we generalize CGKA to Fork-Resilient CGKA (FR-CGKA), which allows clients to process significantly more types of out-of-order network traffic. This is important for many natural fork resolution procedures as they are based, in part, on replaying missed traffic. Next, we give two FR-CGKA constructions: a practical one based on the CGKA underlying the MLS messaging standard and an optimally secure one (albeit with only theoretical efficiency). To further assist with fork resolution, we introduce a simple new abstraction to describe a client's local protocol state. The abstraction describes all and only the information relevant to natural fork resolution, making it easier for higher-level fork resolution procedures to work with and reason about. We define a black-box extension of an FR-CGKA which maintains such a description of a client's internal state. Finally, as a proof of concept, we give a basic fork resolution protocol.
2023
CRYPTO
Fully Adaptive Schnorr Threshold Signatures
Best Early Career Paper
We prove adaptive security of a simple three-round threshold Schnorr signature scheme, which we call Sparkle. The standard notion of security for threshold signatures considers a static adversary - one who must declare which parties are corrupt at the beginning of the protocol. The stronger adaptive adversary can at any time corrupt parties and learn their state. This notion is natural and practical, yet not proven to be met by most schemes in the literature. In this paper, we demonstrate that Sparkle achieves several levels of security based on different corruption models and assumptions. To begin with, Sparkle is statically secure under minimal assumptions: the discrete logarithm assumption (DL) and the random oracle model (ROM). If an adaptive adversary corrupts fewer than t/2 out of a threshold of t+1 signers, then Sparkle is adaptively secure under a weaker variant of the one-more discrete logarithm assumption (AOMDL) in the ROM. Finally, we prove that Sparkle achieves full adaptive security, with a corruption threshold of t, under AOMDL in the algebraic group model (AGM) with random oracles. Importantly, we show adaptive security without requiring secure erasures. Ours is the first proof achieving full adaptive security without exponential tightness loss for any threshold Schnorr signature scheme; moreover, the reduction is tight.
2023
CRYPTO
Graph-Theoretic Algorithms for the Alternating Trilinear Form Equivalence Problem
At Eurocrypt'22, Tang, Duong, Joux, Plantard, Qiao, and Susilo proposed a digital signature algorithm based on the hardness of the isomorphism problem of alternating trilinear forms. They propose three concrete parameter sets in dimensions 9, 10, and 11, respectively. We give new heuristic algorithms that solve this problem more efficiently. With our new algorithms, the first parameter set can be broken in less than a day on a laptop. For the second parameter set, we show there is a $2^{-17}$ fraction of the public keys that can also be broken in less than a day. We do not break the third parameter set in practice, but we claim it falls short of the target security level of 128 bits.
2023
CRYPTO
HERMES: Efficient Ring Packing using MLWE Ciphertexts and Application to Transciphering
Most current fully homomorphic encryption (FHE) schemes store plaintexts based either on the learning-with-errors (LWE) problem or on its ring variant (RLWE). During the homomorphic computation of FHE schemes, RLWE formats provide high throughput when considering several messages, and LWE formats provide low latency when there are only a few messages. Efficient conversion can bridge the advantages of each format. However, converting LWE formats into the RLWE format, which is called \textit{ring packing}, has been a challenging problem. We propose an efficient solution for ring packing for FHE. The main improvement of this work is twofold. First, we accelerate the existing ring packing methods by using bootstrapping and ring switching techniques, achieving practical runtimes. Second, we propose a new method for efficient ring packing, \textsc{HERMES}, by using ciphertexts in Module-LWE (MLWE) formats, to also reduce the memory. To this end, we generalize the tools of LWE and RLWE formats for MLWE formats. On a single-thread implementation, \textsc{HERMES} consumes $10.2$s for the ring packing of $2^{15}$ LWE-format ciphertexts into an RLWE-format ciphertext. This gives $41$x higher throughput compared to the state-of-the-art ring packing for FHE, \textsc{PEGASUS} [S\&P'21], which takes $51.7$s for packing $2^{12}$ LWE ciphertexts with similar homomorphic capacity. We also illustrate the efficiency of \textsc{HERMES} by using it for transciphering from LWE symmetric encryption to CKKS fully homomorphic encryption, significantly outperforming the recent proposals \textsc{HERA} [Asiacrypt'21] and \textsc{Rubato} [Eurocrypt'22].
2023
CRYPTO
Horst Meets Fluid-SPN: Griffin for Zero-Knowledge Applications
Zero-knowledge (ZK) applications form a large group of use cases in modern cryptography, and recently gained in popularity due to novel proof systems. For many of these applications, cryptographic hash functions are used as the main building blocks, and they often dominate the overall performance and cost of these approaches. Therefore, in recent years several new hash functions were built in order to reduce the cost in these scenarios, including Poseidon and Rescue among others. These hash functions often look very different from more classical designs such as AES or SHA-2. For example, they work natively over prime fields rather than binary ones. At the same time, for example Poseidon and Rescue share some common features, such as being SPN schemes and instantiating the nonlinear layer with invertible power maps. While this allows the designers to provide simple and strong arguments for establishing their security, it also introduces crucial limitations in the design, which may affect the performance in the target applications. In this paper, we propose the Horst construction, in which the addition in a Feistel scheme $(x, y) \mapsto (y + F(x), x)$ is extended via a multiplication, i.e., $(x, y) \mapsto (y \cdot G(x) + F(x), x)$. By carefully analyzing the performance metrics in SNARK and STARK protocols, we show how to combine an expanding Horst scheme with a Rescue-like SPN scheme in order to provide security and better efficiency in the target applications. We provide an extensive security analysis for our new design Griffin and a comparison with all current competitors.
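To make the Horst step concrete, here is a toy Python sketch of $(x, y) \mapsto (y \cdot G(x) + F(x), x)$ over a prime field, together with its inverse; F and G are placeholder maps (not the ones used in Griffin), and invertibility requires $G(x) \neq 0$ at the evaluated point.

```python
# Toy illustration of the Horst step (x, y) -> (y*G(x) + F(x), x) over a prime field and
# its inverse. F and G are placeholder maps, not the ones used in Griffin; the step is
# invertible as long as G(x) != 0 at the evaluated point.
p = 2**31 - 1          # a small prime, for illustration only

def F(x):
    return (pow(x, 3, p) + 5) % p

def G(x):
    return (x * x + x + 1) % p

def horst_forward(x, y):
    return ((y * G(x) + F(x)) % p, x)

def horst_inverse(u, v):
    # The second output coordinate is x itself, so x = v and y = (u - F(v)) / G(v) mod p.
    return (v, ((u - F(v)) * pow(G(v), -1, p)) % p)

x, y = 123456789, 987654321
assert horst_inverse(*horst_forward(x, y)) == (x, y)
```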
2023
CRYPTO
How to Recover a Secret with O(n) Additions
Motivated by applications in threshold cryptography, we initiate the study of secret-sharing schemes that distribute a secret from a large field $F_p$ among $n$ parties such that the recovery algorithm makes a minimal number of \emph{additions}. Existing schemes achieve either $O(n\log p)$ additions (e.g., Shamir, Comm. of ACM, 1979) or at least $\Omega(n^2)$ operations independently of the field size (e.g., Cramer-Xing, EUROCRYPT, 2020). This leaves open the existence of a secret-sharing scheme whose recovery algorithm can be computed by performing only $O(n)$ additions. We resolve the question in the affirmative and present such a near-threshold secret sharing scheme that provides privacy against unauthorized sets of density at most $\tau_p$, and correctness for authorized sets of density at least $\tau_c$, for any given arbitrarily close constants $\tau_p<\tau_c$. Reconstruction can be computed by making at most $O(n)$ additions and, in addition, (1) the share size is constant, (2) the sharing also makes $O(n)$ additions, and (3) the scheme is a blackbox secret-sharing scheme, i.e., the sharing and reconstruction algorithms work universally for all finite abelian groups $\mathbb{G}$. Prior to our work, no such scheme was known even without features (1)--(3) and even for the ramp setting where $\tau_p$ and $\tau_c$ are far apart. As a by-product we derive the first blackbox near-threshold secret-sharing scheme with linear-time sharing. We also present several concrete instantiations of our approach that seem practically efficient (e.g., for threshold discrete-log based signatures). Our constructions are combinatorial in nature. We combine graph-based erasure codes that support ``peeling-based'' decoding with a new randomness extraction for low-dimensional subspaces that is based on an inner product with a small-integer vector. Based on these tools, we derive an efficient secret-sharing scheme via the blueprint of Cramer et al. (EUROCRYPT 2015) with far-apart thresholds. We then introduce a general concatenation-like transform for secret-sharing schemes that allows us to arbitrarily shrink the privacy-correctness gap with a minor overhead. Our techniques enrich the secret-sharing toolbox and, in the context of blackbox secret sharing, provide a new alternative to existing number-theoretic approaches. We believe that our tools are likely to lead to other applications.
2023
CRYPTO
How to Use (Plain) Witness Encryption: Registered ABE, Flexible Broadcast, and More
Witness encryption (WE) is a generalization of public-key encryption where the public key can be any NP statement x and the associated decryption key is any witness w for x. While early constructions of WE essentially relied on indistinguishability obfuscation (iO), recent works have shown new pathways for direct constructions of WE that are significantly more efficient than iO (and also seem unlikely to yield iO). Motivated by this progress, we revisit the possibility of using WE to realize advanced cryptographic primitives previously only known to exist in ``obfustopia.'' In this work, we give new constructions of trustless encryption schemes from plain witness encryption (and the learning-with-errors assumption) that were previously only known from iO: (1) a distributed broadcast encryption scheme (a broadcast encryption scheme where users choose their own secret keys); and (2) a registered attribute-based encryption scheme (a system where users choose their own keys and then register their public key together with their attributes with a deterministic and transparent key curator). We also show how to use our techniques to obtain an optimal broadcast encryption scheme in the random oracle model. Underlying our constructions is a novel technique for using witness encryption based on a new primitive which we call function-binding hash functions. Whereas a somewhere statistically binding hash function binds a digest to a few bits of the input, a function-binding hash function binds a digest to the output of a function of the inputs. As we demonstrate in this work, function-binding hash functions provide us new ways to leverage the power of plain witness encryption and use it as the foundation of advanced cryptographic primitives. Finally, we show how to build function-binding hash functions for the class of disjunctions of block functions from leveled homomorphic encryption; this, in combination with witness encryption, yields our main results.
2023
CRYPTO
Improved Multi-User Security Using the Squared-Ratio Method
Proving security bounds in contexts with a large number of users is one of the central problems in symmetric-key cryptography today. This paper introduces a new method for information-theoretic multi-user security proofs, called ``the Squared-Ratio method''. At its core, the method requires bounding the expectation of the square of the ratio of the probabilities of observing the so-called good transcripts (from Patarin's H-coefficient technique) in the real and the ideal world. Central to the method is the observation that for information-theoretic adversaries, the KL-divergence for the multi-user security bound can be written as a summation of the KL-divergence of every single user. We showcase the Squared-Ratio method on three examples: the Xor of two Permutations by Bellare et al. (EUROCRYPT '98) and Hall et al. (CRYPTO '98), the Encrypted Davies-Meyer construction by Cogliati and Seurin (CRYPTO '16), and the two-permutation variant of the nEHtM MAC algorithm by Dutta et al. (EUROCRYPT '19). With this new tool, we provide improved bounds for the multi-user security of these constructions. Our approach is modular in the sense that the multi-user security can be obtained directly from single-user results.
2023
CRYPTO
Individual Cryptography
We initiate a formal study of \emph{individual cryptography}. Informally speaking, an algorithm $\mathsf{Alg}$ is \emph{individual} if, in every implementation of $\mathsf{Alg}$, there always exists an individual user with full knowledge of the cryptographic data $S$ used by $\mathsf{Alg}$. In particular, it should be infeasible to design implementations of this algorithm that would hide $S$ by distributing it between a group of parties using an MPC protocol or outsourcing it to a trusted execution environment. We define and construct two primitives in this model. The first one, called \emph{proofs of individual knowledge}, is a tool for proving that a given message is fully known to a single (``individual'') machine on the Internet, i.e., it cannot be shared between a group of parties. The second one, dubbed \emph{individual secret sharing}, is a scheme for sharing a secret $S$ between a group of parties so that the parties have no knowledge of $S$ as long as they do not reconstruct it. The reconstruction ensures that if the shareholders attempt to collude, one of them will learn the secret entirely. Individual secret sharing has applications for preventing collusion in secret sharing. A central technique for constructing individual cryptographic primitives is the concept of MPC hardness. MPC hardness precludes an adversary from completing a cryptographic task in a distributed fashion within a specific time frame.
2023
CRYPTO
LaBRADOR: Compact Proofs for R1CS from Module-SIS
The most compact quantum-safe proof systems for large circuits are PCP-type systems such as Ligero, Aurora, and Shockwave, which only use weak cryptographic assumptions, namely hash functions modeled as random oracles. One would expect that by allowing for stronger assumptions, such as the hardness of Module-SIS, it should be possible to design more compact proof systems. But alas, despite considerable progress in lattice-based proofs, no such proof system was known so far. We rectify this situation by introducing a Lattice-Based Recursively Amortized Demonstration Of R1CS (LaBRADOR), with more compact proof sizes than known hash-based proof systems. At the 128-bit security level, LaBRADOR proves knowledge of a solution for an R1CS instance modulo $2^{64}+1$ with $2^{20}$ constraints, with a proof size of only 58 KB, an order of magnitude more compact than previous quantum-safe proofs.
2023
CRYPTO
Lattice Signature with Efficient Protocols, Application to Anonymous Credentials
Digital signatures are an essential primitive in cryptography: they can be used as the digital analogue of handwritten signatures, but also as a building block for more complex systems. In the latter case, signatures with specific features are needed, so as to smoothly interact with the other components of the systems, such as zero-knowledge proofs. This has given rise to so-called signatures with efficient protocols, a versatile tool that has been used in countless applications. Designing such signatures is however quite difficult, in particular if one wishes to withstand quantum computing. We are indeed aware of only one post-quantum construction, proposed by Libert et al. at Asiacrypt'16, yielding very large signatures and proofs. In this paper, we propose a new construction that can be instantiated in both standard lattices and structured ones, resulting in each case in dramatic performance improvements. In particular, the size of a proof of message-signature possession, which is one of the main metrics for such schemes, can be brought down to less than 650 KB. As our construction retains all the features expected from signatures with efficient protocols, it can be used as a drop-in replacement in all systems using them, which mechanically improves their own performance, and thus has a direct impact on many applications. It can also be used to easily design new privacy-preserving mechanisms. As an example, we provide the first lattice-based anonymous credentials system.
2023
CRYPTO
Lattice-based Authenticated Key Exchange with Tight Security
We construct the first tightly secure authenticated key exchange (AKE) protocol from lattices. Known tight constructions are all based on Diffie-Hellman-like assumptions. Thus, our protocol is the first construction with tight security from a post-quantum assumption. Our AKE protocol is constructed tightly from a new security notion for key encapsulation mechanisms (KEMs), called one-way security against checkable chosen-ciphertext attacks (OW-ChCCA). We show how an OW-ChCCA secure KEM can be tightly constructed based on the Learning With Errors assumption, leading to the desired AKE protocol. To show the usefulness of OW-ChCCA security beyond AKE, we use it to construct the first tightly bilateral selective-opening (BiSO) secure PKE. BiSO security is a stronger selective-opening notion proposed by Lai et al. (ASIACRYPT 2021).
2023
CRYPTO
Lattice-Based Succinct Arguments for NP with Polylogarithmic-Time Verification
The ideal argument system should offer small proof sizes in practice, succinct verification, and post-quantum security based on standard assumptions. However, so far, all known constructions fall short. Succinct argument systems which rely on the Merkle-hashing paradigm introduced by Kilian (STOC 92) suffer from large proof sizes in practice due to the use of generic cryptographic primitives. Popular alternatives, which obtain smaller proof sizes by exploiting the structure of homomorphic commitment schemes, either rely on quantum-insecure assumptions, or fail to provide succinct verification. In this paper, we construct the first lattice-based succinct interactive argument system for NP statements with succinct verification that departs from the Merkle-hashing paradigm, and exploits the homomorphic properties of lattice-based commitments. For an arithmetic circuit with $N$ gates, our construction achieves $\mathrm{polylog}(N)$ communication and $\mathrm{polylog}(N)$ verification time based on the hardness of the Ring Short-Integer-Solution (RSIS) problem. The core technique in our construction is a delegation protocol built from commitment schemes based on leveled bilinear modules, a new notion that we deem of independent interest. We show that leveled bilinear modules can be realized from both pre-quantum and post-quantum cryptographic assumptions.
2023
CRYPTO
Lattice-based Succinct Arguments from Vanishing Polynomials
Succinct arguments allow a prover to convince a verifier of the validity of any statement in a language, with minimal communication and verifier's work. Among other approaches, lattice-based protocols offer solid theoretical foundations, post-quantum security, and a rich algebraic structure. In this work, we present some new approaches to constructing efficient lattice-based succinct arguments. Our main technical ingredient is a new commitment scheme based on \emph{vanishing polynomials}, a notion borrowed from algebraic geometry. We analyse the security of such a commitment scheme, and show how to take advantage of the additional algebraic structure to build new lattice-based succinct arguments. A few highlights amongst our results are: (1) the first recursive folding (i.e. Bulletproofs-like) protocol for linear relations with \emph{polylogarithmic} verifier runtime; traditionally, the verifier runtime has been the efficiency bottleneck for such protocols (regardless of the underlying assumptions); (2) the first verifiable delay function (VDF) based on lattices, building on a recently introduced sequential relation; and (3) the first lattice-based \emph{linear-time prover} succinct argument for NP, in the preprocessing model, whose soundness is based on the (knowledge)-k-R-ISIS assumption [Albrecht et al., CRYPTO'22].
2023
CRYPTO
Lattice-Based Timed Cryptography
Timed cryptography studies primitives that retain their security only for a pre-determined amount of time, such as proofs of sequential work and time-lock puzzles. This feature has proven to be useful in a large number of practical applications, e.g., randomness generation, sealed-bid auctions, or fair multi-party computation. However, the current state of affairs in timed cryptography is unsatisfactory: Virtually all efficient constructions rely on a single sequentiality assumption, namely that repeated squaring in unknown order groups cannot be parallelized. This is a single point of failure in the classical setting and is even false against quantum adversaries. In this work we put forward a new sequentiality assumption, which essentially says that a repeated application of the standard lattice-based hash function cannot be parallelized. We provide concrete evidence of the validity of this assumption and, to substantiate its usefulness, we show how it enables a new proof of sequential work, with a stronger sequentiality guarantee than prior hash-based schemes.
2023
CRYPTO
Layout Graphs, Random Walks and the $t$-wise Independence of SPN Block Ciphers
We continue the study of $t$-wise independence of substitution-permutation networks (SPNs) initiated by the recent work of Liu, Tessaro, and Vaikuntanathan (CRYPTO 2021). Our key technical result shows that when the S-boxes are {\em randomly and independently chosen}, as well as secret, an $r$-round SPN with input length $n = b \cdot k$ is $2^{-\Theta(n)}$-close to $t$-wise independent within $r = O(\min\{k, \log t\})$ rounds for any $t$ almost as large as $2^{b/2}$. Here, $b$ is the input length of the S-box and the result assumes that the underlying mixing achieves maximum branch number. We also analyze the special case of AES parameters (with random S-boxes), and show it is $2^{-128}$-close to pairwise independent in $7$ rounds. Central to our result is the analysis of a random walk on what we call the {\em layout graph}, a combinatorial abstraction that captures equality and inequality constraints among multiple SPN evaluations. We use our technical result to show concrete security bounds for SPNs with actual block cipher parameters and {\em small-input $S$-boxes}. (This is in contrast to the large body of results on ideal-model analyses of SPNs.) For example, for the censored-AES block cipher, namely AES with most of the mixing layers removed, we show that 192 rounds suffice to attain $2^{-128}$-closeness to pairwise independence. The prior such result for AES (Liu, Tessaro and Vaikuntanathan, CRYPTO 2021) required more than 9000 rounds.
2023
CRYPTO
Learning With Physical Rounding for Linear and Quadratic Leakage Functions
Fresh re-keying is a countermeasure against side-channel analysis where an ephemeral key is derived from a long-term key using a public random value. Popular instances of such schemes rely on key-homomorphic primitives, so that the re-keying process is easy to mask and the rest of the (e.g., block cipher) computations can run with cheaper countermeasures. The main requirement for these schemes to be secure is that the leakages of the ephemeral keys do not allow recovering the long-term key. The Learning with Physical Rounding (LWPR) problem formalizes this security in a practically-relevant model where the adversary can observe noise-free leakages. It can be viewed as a physical version of the Learning With Rounding (LWR) problem, where the rounding is performed by a leakage function and therefore does not have to be computed explicitly. In this paper, we first consolidate the intuition that LWPR cannot be secure in a serial implementation context without additional countermeasures (like shuffling), due to attacks exploiting worst-case leakages that can be mounted with practical data complexity. We then extend the understanding of LWPR in a parallel implementation setting. On the one hand, we generalize its robustness against cryptanalysis taking advantage of any (i.e., not only worst-case) leakage. A previous work claimed security in the specific context of a Hamming weight leakage function. We clarify necessary conditions to maintain this guarantee, based on the degree of the leakage function and the accuracy of its coefficients. On the other hand, we show that parallelism inherently provides good security against attacks exploiting worst-case leakages. We finally confirm the practical relevance of these findings by validating our assumptions experimentally for an exemplary implementation.
2023
CRYPTO
Limits of Breach-Resistant and Snapshot-Oblivious RAMs
Oblivious RAMs (ORAMs) are an important cryptographic primitive that enable outsourcing data to a potentially untrusted server while hiding patterns of access to the data. ORAMs provide strong guarantees even in the face of a {\em persistent adversary} that views the transcripts of all operations and resulting memory contents. Unfortunately, the strong guarantees against persistent adversaries come at the cost of efficiency, as ORAMs are known to require $\Omega(\log n)$ overhead. In an attempt to obtain faster constructions, prior works considered security against {\em snapshot adversaries} that only have limited access to operational transcripts and memory. We consider $(s,\ell)$-snapshot adversaries that perform $s$ data breaches and view the transcripts of $\ell$ total queries. Promisingly, Du, Genkin and Grubbs [Crypto'22] presented an ORAM construction with $O(\log \ell)$ overhead protecting against $(1,\ell)$-snapshot adversaries with the transcript of $\ell$ consecutive operations from a single breach. For small values of $\ell$, this outperforms standard ORAMs. In this work, we tackle whether it is possible to further push this construction beyond a single breach. Unfortunately, we show that protecting against even slightly stronger snapshot adversaries becomes difficult. As our main result, we present an $\Omega(\log n)$ lower bound for any ORAM protecting against a $(3,1)$-snapshot adversary that performs three breaches and sees the transcript of only one query. In other words, our lower bound holds even if an adversary observes only memory contents during two breaches while managing to view the transcript of only one query in the other breach. Therefore, we surprisingly show that protecting against a snapshot adversary with three data breaches is as difficult as protecting against a persistent adversary.
2023
CRYPTO
List Oblivious Transfer and Applications to Round-Optimal Black-Box Multiparty Coin Tossing
In this work we study the problem of minimizing the round complexity for securely evaluating multiparty functionalities while making black-box use of polynomial-time assumptions. In Eurocrypt 2016, Garg et al. showed that, assuming all parties have access to a broadcast channel, at least four rounds of communication are required to securely realize non-trivial functionalities in the plain model. A sequence of follow-up results matched this lower bound under a variety of assumptions. Unfortunately, none of these works make black-box use of the underlying cryptographic primitives. In Crypto 2021, Ishai, Khurana, Sahai, and Srinivasan came closer to matching the four-round lower bound, obtaining a five-round protocol that makes black-box use of oblivious transfer and PKE with pseudorandom public keys. In this work, we show how to realize any input-less functionality (e.g., coin-tossing, generation of key-pairs, and so on) in four rounds while making black-box use of two-round oblivious transfer. As an additional result, we construct the first four-round MPC protocol for generic functionalities that makes black-box use of the underlying primitives, achieving security against non-aborting adversaries. Our protocols are based on a new primitive called list two-party computation. This primitive offers relaxed security compared to the standard notion of secure two-party computation. Despite this relaxation, we argue that this tool suffices for our applications. List two-party computation is of independent interest, as we argue it can also be used for the generation of setups, like oblivious transfer correlated randomness, in three rounds. Prior to our work, generating such a setup required at least four rounds of interaction or a trusted third party.
2023
CRYPTO
Machine-Checked Security for XMSS as in RFC 8391 and SPHINCS+
This work presents a novel machine-checked tight security proof for XMSS --- a stateful hash-based signature scheme that is (1) standardized in RFC 8391 and NIST SP 800-208, and (2) employed as a primary building block of SPHINCS+, one of the signature schemes recently selected for standardization as a result of NIST's post-quantum competition. In 2020, Kudinov, Kiktenko, and Fedorov pointed out a flaw affecting the tight security proofs of SPHINCS+ and XMSS. For the case of SPHINCS+, this flaw was fixed in a subsequent tight security proof by Hülsing and Kudinov. Unfortunately, employing the fix from this proof to construct an analogous tight security proof for XMSS would merely demonstrate security with respect to an insufficient notion. At the cost of modeling the message-hashing function as a random oracle, we complete the tight security proof for XMSS and formally verify it using the EasyCrypt proof assistant. (Note that this merely extends the use of the random oracle model, as this model is already required in other parts of the security analysis to justify the currently standardized parameter values.) As part of this endeavor, we formally verify the crucial step common to the security proofs of SPHINCS+ and XMSS that was found to be flawed before, thereby confirming that the core of the aforementioned security proof by Hülsing and Kudinov is correct. As this is the first work to formally verify proofs for hash-based signature schemes in EasyCrypt, we develop several novel libraries for the fundamental cryptographic concepts underlying such schemes --- e.g., hash functions and digital signature schemes --- establishing a common starting point for future formal verification efforts. These libraries will be particularly helpful in formally verifying proofs of other hash-based signature schemes such as LMS or SPHINCS+.
2023
CRYPTO
MacORAMa: Optimal Oblivious RAM with Integrity
Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (J. ACM '96), is a primitive that allows a client to perform RAM computations on an external database without revealing any information through the access pattern. For a database of size $N$, well-known lower bounds show that a multiplicative overhead of $\Omega(\log N)$ in the number of RAM queries is necessary assuming $O(1)$ client storage. A long sequence of works culminated in the asymptotically optimal construction of Asharov, Komargodski, Lin, and Shi (CRYPTO '21) with $O(\log N)$ worst-case overhead and $O(1)$ client storage. However, this optimal ORAM is known to be secure only in the \emph{honest-but-curious} setting, where an adversary is allowed to observe the access patterns but not modify the contents of the database. In the \emph{malicious} setting, where an adversary is additionally allowed to tamper with the database, this construction and many others in fact become insecure. In this work, we construct the first maliciously secure ORAM with worst-case $O(\log N)$ overhead and $O(1)$ client storage assuming one-way functions, which are also necessary. By the $\Omega(\log N)$ lower bound, our construction is asymptotically optimal. To attain this overhead, we develop techniques to intricately interleave online and offline memory checking for malicious security. Furthermore, we complement our positive result by showing the impossibility of a \emph{generic} overhead-preserving compiler from honest-but-curious to malicious security, barring a breakthrough in memory checking.
2023
CRYPTO
Malicious Secure, Structure-Aware Private Set Intersection
Structure-Aware PSI (saPSI) is a variant of PSI where Alice's input set $A$ has some publicly known structure, Bob's input $B$ is an unstructured set of points, and Alice wants to learn the intersection $A \cap B$. It was recently introduced by Garimella et al. (Crypto 2022); they present a semi-honest saPSI protocol with communication that scales with the description size of Alice's set, instead of its cardinality. In this paper, we present the first saPSI protocol secure against malicious adversaries. We use a cut-and-choose approach to ensure that Alice uses valid FSS sharings of the same underlying object. In order to handle a technical issue that arises, we introduce a new variant of function secret sharing, called derandomizable FSS (dFSS). We show how to extend prior FSS constructions for the union of geometric balls to meet the requirements of dFSS. Additionally, we improve FSS constructions, resulting in asymptotic improvements to the prior semi-honest structure-aware PSI protocol.
2023
CRYPTO
Moving a Step of ChaCha in Syncopated Rhythm
The stream cipher ChaCha is one of the most widely used ciphers in the real world, deployed for instance in TLS and SSH. In this paper, we study the security of ChaCha via differential cryptanalysis based on probabilistic neutrality bits (PNBs). We introduce the \textit{syncopation} technique for the PNB-based approximation in the backward direction, which significantly amplifies its correlation by utilizing the property of the ARX structure. By virtue of this technique, we present a new and efficient method for finding a good set of PNBs. A refined framework of key-recovery attacks is then formalized for round-reduced ChaCha. The new techniques allow us to break 7.5 rounds of ChaCha without the last XOR and rotation, as well as to mount faster attacks on 6 and 7 rounds of ChaCha.
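For readers unfamiliar with the cipher, the sketch below shows the standard ChaCha quarter-round, the ARX building block whose structure the syncopation technique exploits, checked against the test vector from RFC 8439; the PNB-based attack itself is not sketched here.

```python
# The standard ChaCha quarter-round (32-bit add/rotate/xor), shown only to illustrate
# the ARX structure the PNB analysis works over.
MASK = 0xFFFFFFFF

def rotl32(x, n):
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    a = (a + b) & MASK; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK; b = rotl32(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 8439, Section 2.1.1:
assert quarter_round(0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567) == \
       (0xea2a92f4, 0xcb1cf8ce, 0x4581472e, 0x5881c4bb)
```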
2023
CRYPTO
Multi-Party Homomorphic Secret Sharing and Sublinear MPC from Sparse LPN
Over the past few years, we have seen the powerful emergence of homomorphic secret sharing (HSS) as a compelling alternative to fully homomorphic encryption (FHE), due to its efficiency benefits and its feasibility from an array of standard assumptions. However, all previously known HSS schemes, with the exception of schemes built from FHE or indistinguishability obfuscation (iO), can only support two parties. In this work, we give the first construction of a \emph{multi-party} HSS scheme for a non-trivial function class, from an assumption not known to imply FHE. In particular, we construct an HSS scheme for an \emph{arbitrary} number of parties with an \emph{arbitrary} corruption threshold, supporting evaluations of $\log / \log \log$-degree polynomials, containing a polynomial number of monomials, over arbitrary finite fields. As a consequence, we obtain an MPC protocol for any number of parties, with (slightly) \emph{sub-linear} communication per party of roughly $O(S / \log \log S)$ bits when evaluating a layered Boolean circuit of size $S$. Our HSS scheme relies on the \emph{sparse} Learning Parity with Noise (LPN) assumption, a standard variant of LPN with a sparse public matrix that has been studied and used in prior works. Thanks to this assumption, our construction enjoys several unique benefits. In particular, it can be built on top of \emph{any} linear secret sharing scheme, producing noisy output shares that can be error-corrected by the decoder. This yields HSS for low-degree polynomials with optimal download rate. Unlike prior works, our scheme also has a low computation overhead in that the per-party computation of a constant degree polynomial takes $O(M)$ work, where $M$ is the number of monomials.
2023
CRYPTO
Network-Agnostic Security Comes (Almost) for Free in DKG and MPC
Distributed key generation (DKG) protocols are an essential building block for threshold cryptosystems. Many DKG protocols tolerate up to $t_s < n/2$ corruptions assuming a well-behaved synchronous network, but become insecure as soon as the network delay becomes unstable. On the other hand, solutions in the asynchronous model operate under arbitrary network conditions, but only tolerate $t_a < n/3$ corruptions, even when the network is well-behaved. In this work, we ask whether one can design a protocol that achieves security guarantees in either scenario. We show a complete characterization of _network-agnostic_ DKG protocols, showing that the tight bound is $t_a + 2t_s < n$. As a second contribution, we provide an optimized version of the network-agnostic MPC protocol by Blum, Liu-Zhang and Loss [CRYPTO'20] which improves over the communication complexity of their protocol by a linear factor. Moreover, using our DKG protocol, we can instantiate our MPC protocol in the _plain PKI model_, i.e., without the need to assume an expensive trusted setup. Our protocols incur comparable communication complexity as state-of-the-art DKG and MPC protocols with optimal resilience in their respective purely synchronous and asynchronous settings, thereby showing that network-agnostic security comes (almost) _for free_.
2023
CRYPTO
New Bounds on the Local Leakage Resilience of Shamir's Secret Sharing Scheme
We study the local leakage resilience of Shamir's secret sharing scheme. In Shamir's scheme, a random polynomial $f$ of degree $t$ is sampled over a field of size $p>n$, conditioned on $f(0)=s$ for a secret $s$. Any $t$ points can be used to fully recover $f$ and thereby $f(0)$. But, any $t-1$ evaluations of $f$ at non-zero coordinates are completely independent of $f(0)$. Recent works ask whether the secret remains hidden even if, say, only 1 bit of information is leaked from \emph{each} share, independently. This question is well motivated due to the wide range of applications of Shamir's scheme. For instance, it is known that if Shamir's scheme is leakage resilient in some range of parameters, then known secure computation protocols are secure in a local leakage model. Over characteristic-2 fields, the answer is known to be negative (e.g., Guruswami and Wootters, STOC '16). Benhamouda, Degwekar, Ishai, and Rabin (CRYPTO '18) were the first to give a positive answer assuming computation is done over prime-order fields. They showed that if $t \ge 0.907n$, then Shamir's scheme is leakage resilient. Since then, there have been extensive efforts to improve the above threshold and, after a series of works, the current record shows leakage resilience for $t\ge 0.78n$ (Maji et al., ISIT '22). All existing analyses of Shamir's leakage resilience for general leakage functions follow a single framework for which there is a known barrier for any $t \le 0.5 n$. In this work, we develop a new analytical framework that allows us to significantly improve upon the previous record and obtain additional new results. Specifically, we show: (1) Shamir's scheme is leakage resilient for any $t \ge 0.69n$. (2) If the leakage functions are guaranteed to be ``balanced'' (i.e., splitting the domain of possible shares into 2 roughly equal-size parts), then Shamir's scheme is leakage resilient for any $t \ge 0.58n$. (3) If the leakage functions are guaranteed to be ``unbalanced'' (i.e., splitting the domain of possible shares into 2 parts of very different sizes), then Shamir's scheme is leakage resilient as long as $t \ge 0.01 n$. Such a result is \emph{provably} impossible to obtain using the previously known technique. All of the above apply more generally to any MDS codes-based secret sharing scheme. Confirming leakage resilience is most important in the range $t \leq n/2$, as in many applications, Shamir's scheme is used with thresholds $t\leq n/2$. As opposed to the previous approach, ours does not seem to have a barrier at $t=n/2$, as demonstrated by our third contribution.
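For concreteness, the following sketch implements Shamir's scheme exactly as described above (a random degree-$t$ polynomial with $f(0)=s$, share $f(i)$ for party $i$, Lagrange reconstruction at $0$) and shows a 1-bit local leakage from each share; parameters are tiny and illustrative, and nothing here reflects the leakage-resilience bounds proven in the paper.

```python
# Minimal sketch of Shamir's secret sharing over a prime field.
import random

p = 2**61 - 1                      # prime field size (p > n), illustrative
rng = random.Random(0)

def share(s, n, t):
    # f(X) = s + c_1*X + ... + c_t*X^t with uniformly random coefficients c_1..c_t.
    coeffs = [s] + [rng.randrange(p) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(points):
    # Lagrange interpolation at X = 0 recovers f(0) = s from any t+1 shares.
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

secret = 1234567
shares = share(secret, n=10, t=4)
assert reconstruct(shares[:5]) == secret          # any t+1 = 5 shares recover the secret
one_bit_leakage = [y & 1 for _, y in shares]      # a local, 1-bit leakage from each share
```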
2023
CRYPTO
New Design Techniques for Efficient Arithmetization-Oriented Hash Functions: Anemoi Permutations and Jive Compression Mode
Advanced cryptographic protocols such as zero-knowledge (ZK) proofs of knowledge, widely used in cryptocurrency applications such as Zcash, Monero, Filecoin, Tezos, and Topos, demand new cryptographic hash functions that are efficient not only over the binary field $\mathbb{F}_2$, but also over large fields of prime characteristic $\mathbb{F}_p$. This need has been acknowledged by the wider community and new so-called Arithmetization-Oriented (AO) hash functions have been proposed, e.g. MiMC-Hash, Rescue, Poseidon, ReinforcedConcrete and Griffin to name a few. In this paper we propose Anemoi: a new family of ZK-friendly permutations, that can be used to construct efficient hash functions and compression functions. The main features of these algorithms are that 1) they are designed to be efficient within multiple proof systems (e.g. Groth16, Plonk, etc.), 2) they contain dedicated functions optimised for specific applications (namely Merkle tree hashing and general purpose hashing), 3) they have highly competitive performance, e.g. about a factor of 2 improvement over Poseidon and Rescue in terms of R1CS constraints, a 21%-35% Plonk constraint reduction over a highly optimized Poseidon implementation, as well as competitive native performance, running between two and three times faster than Rescue, depending on the field size. On the theoretical side, Anemoi pushes further the frontier in understanding the design principles that are truly entailed by arithmetization-orientation. In particular, we identify and exploit a previously unknown relationship between CCZ-equivalence and arithmetization-orientation. In addition, we propose two new standalone components that can be easily reused in new designs. One is a new S-box called Flystel, based on the well-studied butterfly structure, and the second is Jive -- a new mode of operation, inspired by the ``Latin dance'' symmetric algorithms (Salsa, ChaCha and derivatives). Our design is a conservative one: it uses a very classical Substitution-Permutation Network structure, and our detailed analysis of algebraic attacks can be of independent interest.
2023
CRYPTO
Non-interactive Universal Arguments
In 2002, Barak and Goldreich introduced the notion of a {\em universal argument} and constructed an interactive universal argument for non-deterministic computations based on polynomially hard collision-resistant hash functions. Since then, and especially in recent years, there have been tremendous developments in the construction of {\em non-interactive} succinct arguments for deterministic computations under standard hardness assumptions. However, the constructed succinct arguments can be proven universal only under {\em sub-exponential} assumptions. Assuming {\em polynomially hard} fully homomorphic encryption and a widely believed worst-case complexity assumption, we prove a general lifting theorem showing that all existing non-interactive succinct arguments can be made universal. The required complexity assumption is that non-uniformity does not allow arbitrary polynomial speedup. In the setting of uniform adversaries, this extra assumption is not needed.
2023
CRYPTO
Non-Interactive Zero-Knowledge from Non-Interactive Batch Arguments
Zero-knowledge and succinctness are two important properties that arise in the study of non-interactive arguments. Previously, Kitagawa et al. (TCC 2020) showed how to obtain a non-interactive zero-knowledge (NIZK) argument for NP from a succinct non-interactive argument (SNARG) for NP. In particular, their work demonstrates how to leverage the succinctness property from an argument system and transform it into a zero-knowledge property. In this work, we study a similar question of leveraging succinctness for zero-knowledge. Our starting point is a batch argument for NP, a primitive that allows a prover to convince a verifier of $T$ NP statements $x_1, \ldots, x_T$ with a proof whose size scales sublinearly with $T$. Unlike SNARGs for NP, batch arguments for NP can be built from group-based assumptions in both pairing and pairing-free groups and from lattice-based assumptions. The challenge with batch arguments is that the proof size is only amortized over the number of instances, but can still encode full information about the witness to a small number of instances. We show how to combine a batch argument for NP with a local pseudorandom generator (i.e., a pseudorandom generator where each output bit only depends on a small number of input bits) and a dual-mode commitment scheme to obtain a NIZK for NP. Our work provides a new generic approach of realizing zero-knowledge from succinctness and highlights a new connection between succinctness and zero-knowledge.
2023
CRYPTO
On Active Attack Detection in Messaging with Immediate Decryption
The widely used Signal protocol provides protection against state exposure attacks through forward security (protecting past messages) and post-compromise security (for restoring security). It supports immediate decryption, allowing messages to be re-ordered or dropped at the protocol level without affecting correctness. In this work, we consider strong active attack detection for secure messaging with immediate decryption, where parties are able to immediately detect active attacks under certain conditions. We first consider in-band active attack detection, where participants who have been actively compromised but are still able to send a single message to their partner can detect the compromise. We propose two complementary notions to capture security, and present a compiler that provides security with respect to both notions. Our notions generalise existing work (RECOVER security) which only supported in-order messaging. We also study the related out-of-band attack detection problem by considering communication over out-of-band, authenticated channels and propose analogous security notions. We prove that one of our two notions in each setting imposes a linear communication overhead in the number of sent messages and security parameter using an information-theoretic argument. This implies that each message must information-theoretically contain all previous messages and that our construction, that essentially attaches the entire message history to every new message, is asymptotically optimal. We then explore ways to bypass this lower bound and highlight the feasibility of practical active attack detection compatible with immediate decryption.
2023
CRYPTO
On Concurrent Multi-Party Quantum Computation
Recently, significant progress has been made toward quantumly secure multi-party computation (MPC) in the stand-alone setting. In sharp contrast, the picture of concurrently secure MPC (or even 2PC), for both classical and quantum functionalities, still remains unclear. Quantum information behaves in a fundamentally different way, making the job of the adversary harder and easier at the same time. Thus, it is unclear if the positive or negative results from the classical setting still apply. This work initiates a systematic study of concurrent secure computation in the quantum setting. We obtain a mix of positive and negative results. We first show that, assuming the existence of post-quantum one-way functions (PQ-OWFs), concurrently secure 2PC (and thus MPC) for quantum functionalities is impossible. Next, we focus on the bounded-concurrent setting, where we obtain simulation-sound zero-knowledge arguments for both NP and QMA, assuming PQ-OWFs. This is obtained by a new design of simulation-sound gadget, relying on the recent post-quantum non-malleable commitments by Liang, Pandey, and Yamakawa [arXiv:2207.05861], and the quantum rewinding strategy recently developed by Ananth, Chung, and La Placa [CRYPTO'21] for bounded-concurrent post-quantum ZK. Moreover, we show that our technique is general enough---it also leads to quantum-secure bounded-concurrent coin-flipping protocols, and eventually general-purpose 2PC and MPC, for both classical and quantum functionalities. All these constructions can be based on the quantum hardness of Learning with Errors.
2023
CRYPTO
On Linear Communication Complexity for (Maximally) Fluid MPC
Secure multiparty computation protocols with dynamic parties, which assume that honest parties do not need to be online throughout the whole execution of the protocol, have recently gained a lot of traction for computations of large scale distributed protocols, such as blockchains. More specifically, in Fluid MPC, introduced by Choudhuri et al. (CRYPTO 2021), parties can dynamically join and leave the computation from round to round. The best known Fluid MPC protocol in the honest majority setting communicates O(n^2) elements per gate where n is the number of parties online at a time. While Le Mans (Rachuri and Scholl, CRYPTO 2022) extends Fluid MPC to the dishonest majority setting with preprocessing, it still communicates O(n^2) elements per gate. In this work we present alternative Fluid MPC solutions that require O(n) communication per gate for both the information-theoretic honest majority setting and the information-theoretic dishonest majority setting with preprocessing. Our solutions also achieve maximal fluidity where parties only need to be online for a single communication round. Additionally, we show that a protocol in the information-theoretic dishonest majority setting with sub-quadratic o(n^2) overhead per gate requires, for each of the N parties who may ever participate in the (later) execution phase, \Omega(N) preprocessed data per gate.
2023
CRYPTO
On Optimal Tightness for Key Exchange with Full Forward Secrecy via Key Confirmation
A standard paradigm for building key exchange protocols with full forward secrecy (and explicit authentication) is to add key confirmation messages to an underlying protocol having only weak forward secrecy (and implicit authentication). Somewhat surprisingly, we show through an impossibility result that this simple trick must nevertheless incur a linear tightness loss in the number of parties for many natural protocols. This includes Krawczyk’s HMQV protocol (CRYPTO 2005) and the protocol of Cohn-Gordon et al. (CRYPTO 2019). Cohn-Gordon et al. gave a very efficient underlying protocol with weak forward secrecy having a linear security loss, and showed that this is optimal for certain reductions. However, they also claimed that full forward secrecy can be achieved via key confirmation without any additional loss. Our impossibility result disproves this claim, showing that their approach, in fact, has an overall loss which is quadratic. Motivated by this predicament we seek to restore the original linear loss claim of Cohn-Gordon et al. by using a different proof strategy. Specifically, we start by lowering the goal for the underlying protocol with weak forward secrecy, to a selective security notion where the adversary must commit to a long-term key it cannot reveal. This allows a tight reduction rather than a linear loss reduction. Next, we show that the protocol can be upgraded to full forward secrecy using key confirmation messages with a linear tightness loss, even when starting from the weaker selective security notion. Thus, our approach yields an overall tightness loss for the fully forward-secret protocol that is only linear, as originally claimed. Finally, we confirm that the underlying protocol of Cohn-Gordon et al. can indeed be proven selectively secure, tightly.
2023
CRYPTO
On Perfect Linear Approximations and Differentials over Two-Round SPNs
Recent constructions of (tweakable) block ciphers with an embedded cryptographic backdoor relied on the existence of probability-one differentials or perfect (non-)linear approximations over a reduced-round version of the primitive. In this work, we study how the existence of probability-one differentials or perfect linear approximations over two rounds of a substitution-permutation network can be avoided by design. More precisely, we develop criteria on the S-box and the linear layer that guarantee the absence of probability-one differentials for all keys. We further present an algorithm that allows one to efficiently exclude the existence of keys for which there exists a perfect linear approximation.
2023
CRYPTO
On the Impossibility of Algebraic NIZK In Pairing-Free Groups
Non-Interactive Zero-Knowledge proofs (NIZK) allow a prover to convince a verifier that a statement is true by sending only one message and without conveying any other information. In the CRS model, many instantiations have been proposed from group-theoretic assumptions. On the one hand, some of these constructions use the group structure in a black-box way but rely on pairings, an example being the celebrated Groth-Sahai proof system. On the other hand, a recent line of research realized NIZKs from sub-exponential DDH in pairing-free groups using Correlation Intractable Hash functions, but at the price of making non-black-box usage of the group. As of today no construction is known to \textit{simultaneously} reduce its security to pairing-free group problems and to use the underlying group in a black-box way. This is indeed not a coincidence: in this paper, we prove that for a large class of NIZKs, either a pairing-free group is used in a non-black-box way by relying on element representation, or security reduces to external hardness assumptions. More specifically, our impossibility applies to two incomparable cases: The first one covers Arguments of Knowledge (AoK), which prove that a preimage under a given one-way function is known. The second one covers NIZKs (not necessarily AoK) for hard subset problems, which capture relations such as DDH, Decision-Linear and Matrix-DDH.
2023
CRYPTO
On the Security of Keyed Hashing Based on Public Permutations
Doubly-extendable cryptographic keyed functions (deck) generalize the concept of message authentication codes (MAC) and stream ciphers in that they support variable-length strings as input and return variable-length strings as output. A prominent example of building deck functions is Farfalle, which consists of a set of public permutations and rolling functions that are used in its compression and expansion layers. By generalizing the compression layer of Farfalle, we prove its universality in terms of the probability of differentials over the public permutation used in it. As the compression layer of Farfalle is inherently parallel, we compare it to a generalization of a serial compression function inspired by Pelican-MAC. The same public permutation may result in different universalities depending on whether the compression is done in parallel or serial. The parallel construction consistently performs better than the serial one, sometimes by a big factor. We demonstrate this effect using Xoodoo[3], which is a round-reduced variant of the public permutation used in the deck function Xoofff.
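To picture the two compression styles being compared, the following Python toy contrasts a Farfalle-style parallel compression, which XORs permutation outputs of key-masked blocks, with a Pelican-MAC-style serial chain. The permutation and the key-rolling function are made-up stand-ins chosen only to be invertible, not Xoodoo or any real Farfalle instance.

MASK = (1 << 64) - 1

def toy_perm(x):
    # Stand-in public permutation on 64-bit blocks (each step is invertible).
    for r in range(4):
        x = (x * 0x9E3779B97F4A7C15 + r) & MASK
        x ^= x >> 29
        x = ((x << 7) | (x >> 57)) & MASK
    return x

def parallel_compress(key, blocks):
    # Farfalle-style: XOR of P(m_i XOR k_i), with k_i derived by "rolling" the key.
    acc, k = 0, key
    for m in blocks:
        acc ^= toy_perm(m ^ k)
        k = toy_perm(k)          # toy rolling function (stand-in)
    return acc

def serial_compress(key, blocks):
    # Pelican-MAC-style: chain the permutation through a running state.
    state = key
    for m in blocks:
        state = toy_perm(state ^ m)
    return state

msg = [0x1111, 0x2222, 0x3333]
print(hex(parallel_compress(0xABCDEF, msg)), hex(serial_compress(0xABCDEF, msg)))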
2023
CRYPTO
One-Message Secure Reductions: On the Cost of Converting Correlations
Correlated secret randomness is a useful resource for secure computation protocols, often enabling dramatic speedups compared to protocols in the plain model. This has motivated a line of work on identifying and securely generating useful correlations. Different kinds of correlations can vary greatly in terms of usefulness and ease of generation. While there has been major progress on efficiently generating oblivious transfer (OT) correlations, other useful kinds of correlations are much more costly to generate. Thus, it is highly desirable to develop efficient techniques for securely converting copies of a given source correlation into copies of a given target correlation, especially when the former are cheaper to generate than the latter. In this work, we initiate a systematic study of such conversions that only involve a single uni-directional message. We refer to such a conversion as a one-message secure reduction (OMSR). Recent works (Agarwal et al., Eurocrypt 2022; Khorasgani et al., Eurocrypt 2022) studied a similar problem when no communication was allowed; this setting is quite restrictive, however, with few non-trivial conversions being feasible. The OMSR setting substantially expands the scope of feasible results, allowing for direct applications to existing MPC protocols. We obtain the following positive and negative results. - (OMSR constructions). We present a general rejection-sampling based technique for OMSR with OT source correlations. We apply it to substantially improve the communication complexity of optimized protocols for distributed symmetric cryptography (Dinur et al., Crypto 2021). - (OMSR lower bounds). We develop general techniques for proving lower bounds on the communication complexity of OMSR, matching our positive results up to small constant factors.
2023
CRYPTO
One-way Functions and the Hardness of (Probabilistic) Time-Bounded Kolmogorov Complexity w.r.t. Samplable Distributions
Consider the recently introduced notion of \emph{probabilistic time-bounded Kolmogorov Complexity}, $pK^t$ (Goldberg et al, CCC'22), and let $\mathsf{MpK^tP}$ denote the language of pairs $(x,t)$ such that $pK^t(x) \leq t$. We show the equivalence of the following: \begin{itemize} \item $\mathsf{MpK^{poly}P}$ is (mildly) hard-on-average w.r.t. \emph{any} samplable distribution $\mathcal{D}$; \item $\mathsf{MpK^{poly}P}$ is (mildly) hard-on-average w.r.t. the \emph{uniform} distribution; \item existence of one-way functions. \end{itemize} As far as we know, this yields the first natural class of problems where hardness with respect to any samplable distribution is equivalent to hardness with respect to the uniform distribution. Under standard derandomization assumptions, we can show the same result also w.r.t. the standard notion of time-bounded Kolmogorov complexity, $K^t$.
2023
CRYPTO
Orbweaver: Succinct Linear Functional Commitments from Lattices
We present Orbweaver, the first plausibly post-quantum functional commitment to achieve quasilinear prover time together with O(log(n)) proof size and O(log(n)loglog(n)) verifier time. Orbweaver enables evaluation of linear maps on committed vectors over cyclotomic rings or the integers. It is extractable, preprocessing, non-interactive, structure-preserving, amenable to recursive composition, and supports logarithmic public proof aggregation. The security of our scheme is based on the k-R-ISIS assumption (and its knowledge counterpart), whereby we require a trusted setup to generate a universal structured reference string. We additionally use Orbweaver to construct a succinct polynomial commitment for integer polynomials.
2023
CRYPTO
PAC Privacy: Automatic Privacy Measurement and Control of Data Processing
We propose and study a new privacy definition, termed Probably Approximately Correct (PAC) Privacy. PAC Privacy characterizes the information-theoretic hardness of recovering sensitive data given arbitrary information disclosure/leakage during/after any processing. Unlike the classic cryptographic definition and Differential Privacy (DP), which consider the adversarial input-independent worst case, PAC Privacy is a simulatable metric that quantifies the instance-based impossibility of inference. A fully automatic analysis and proof generation framework is proposed: security parameters can be produced with arbitrarily high confidence via Monte-Carlo simulation for any black-box data processing oracle. This appealing automation property enables analysis of complicated data processing, where the worst-case proof in the classic privacy regime could be loose or even intractable. Moreover, we show that the produced PAC Privacy guarantees enjoy simple composition bounds and the automatic analysis framework can be implemented in an online fashion to analyze the composite PAC Privacy loss even under correlated randomness. On the utility side, the magnitude of (necessary) perturbation required in PAC Privacy is not lower bounded by $\Theta(\sqrt{d})$ for a $d$-dimensional release but could be $O(1)$ for many practical data processing tasks, which is in contrast to the input-independent worst-case information-theoretic lower bound. Example applications of PAC Privacy are included with comparisons to existing works.
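The Monte-Carlo flavour of such automatic analysis can be pictured with the following toy Python sketch. It is my own illustration, not the paper's actual estimator or bound: sample inputs from the data prior, run the black-box mechanism, estimate the spread of its outputs, and calibrate additive Gaussian noise to that empirical spread.

import numpy as np

def calibrate_noise(mechanism, sample_input, trials=1000, noise_factor=1.0):
    # Run the black-box mechanism on freshly sampled inputs and estimate
    # the per-coordinate standard deviation of its outputs.
    outputs = np.array([mechanism(sample_input()) for _ in range(trials)])
    spread = outputs.std(axis=0)
    # Toy calibration: add Gaussian noise proportional to the observed spread.
    def private_mechanism(x):
        y = np.asarray(mechanism(x), dtype=float)
        return y + np.random.normal(0.0, noise_factor * (spread + 1e-12))
    return private_mechanism

# Example: releasing the mean of a random 100-record dataset of 5 features.
sample_input = lambda: np.random.rand(100, 5)
mechanism = lambda data: data.mean(axis=0)
private_mean = calibrate_noise(mechanism, sample_input)
print(private_mean(sample_input()))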
2023
CRYPTO
Perfect MPC over Layered Graphs
The classical “BGW protocol” (Ben-Or, Goldwasser and Wigderson, STOC 1988) shows that secure multiparty computation (MPC) among n parties can be realized with perfect full security if t < n/3 parties are corrupted. This holds against malicious adversaries in the “standard” model for MPC, where a fixed set of n parties is involved in the full execution of the protocol. However, the picture is less clear in the mobile adversary setting of Ostrovsky and Yung (PODC 1991), where the adversary may periodically “move” by uncorrupting parties and corrupting a new set of t parties. In this setting, it is unclear if full security can be achieved against an adversary that is maximally mobile, i.e., moves after every round. The question is further motivated by the “You Only Speak Once” (YOSO) setting of Gentry et al. (Crypto 2021), where not only the adversary is mobile but also each round is executed by a disjoint set of parties. Previous positive results in this model do not achieve perfect security, and either assume probabilistic corruption and a nonstandard communication model, or only realize the weaker goal of security-with-abort. The question of matching the BGW result in these settings remained open. In this work, we tackle the above two challenges simultaneously. We consider a layered MPC model, a simplified variant of the fluid MPC model of Choudhuri et al. (Crypto 2021). Layered MPC is an instance of standard MPC where the interaction pattern is defined by a layered graph of width n, allowing each party to send secret messages and broadcast messages only to parties in the next layer. We require perfect security against a malicious adversary who may corrupt at most t parties in each layer. Our main result is a perfect, fully secure layered MPC protocol with an optimal corruption threshold of t < n/3, thus extending the BGW feasibility result to the layered setting. This implies perfectly secure MPC protocols against a maximally mobile adversary.
2023
CRYPTO
Practical Schnorr Threshold Signatures Without the Algebraic Group Model
Threshold signatures are digital signature schemes in which a set of n signers specify a threshold t such that any subset of size t is authorized to produce signatures on behalf of the group. There has recently been a renewed interest in this primitive, largely driven by the need to secure highly valuable signing keys, e.g., DNSSEC keys or keys protecting digital wallets in the cryptocurrency ecosystem. Of special interest is FROST, a practical Schnorr threshold signature scheme, which is currently undergoing standardization in the IETF and whose security was recently analyzed at CRYPTO'22. We continue this line of research by focusing on FROST's unforgeability combined with a practical distributed key generation (DKG) algorithm. Existing proofs of this setup either use non-standard heuristics, idealized group models like the AGM, or idealized key generation. Moreover, existing proofs do not consider all practically relevant optimizations that have been proposed. We close this gap between theory and practice by presenting the Schnorr threshold signature scheme Olaf, which combines the most efficient known FROST variant FROST3 with a variant of Pedersen's DKG protocol (as commonly used for FROST), and prove its unforgeability. Our proof relies on the AOMDL assumption (a weaker and falsifiable variant of the OMDL assumption) and, like proofs of regular Schnorr signatures, on the random oracle model.
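For orientation, a threshold Schnorr scheme of this kind produces signatures that verify exactly like ordinary Schnorr signatures. The toy Python sketch below (a schoolbook multiplicative-group instantiation with made-up, insecure parameters, not FROST or Olaf) shows the single verification equation the combined signature (R, z) must satisfy.

import hashlib, random

p, q, g = 2039, 1019, 4            # toy group parameters (NOT secure sizes)

def H(*parts):
    h = hashlib.sha256(("|".join(map(str, parts))).encode()).digest()
    return int.from_bytes(h, "big") % q

def keygen():
    x = random.randrange(1, q)     # in a threshold scheme, x would be secret-shared
    return x, pow(g, x, p)

def sign(x, msg):
    r = random.randrange(1, q)     # in a threshold run, R is aggregated from per-signer nonces
    R = pow(g, r, p)
    c = H(R, msg)
    z = (r + c * x) % q
    return R, z

def verify(X, msg, sig):
    R, z = sig
    c = H(R, msg)
    return pow(g, z, p) == (R * pow(X, c, p)) % p   # g^z == R * X^c

x, X = keygen()
assert verify(X, "hello", sign(x, "hello"))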
2023
CRYPTO
Practical Settlement Bounds for Longest-Chain Consensus
Nakamoto's longest-chain consensus paradigm now powers the bulk of the world's cryptocurrencies and distributed finance infrastructure. An emblematic property of longest-chain consensus is that it provides probabilistic settlement guarantees that strengthen over time. This makes the exact relationship between settlement error and settlement latency a critical aspect of the protocol that both users and system designers must understand to make informed decisions. A recent line of work has finally provided a satisfactory rigorous accounting of this relationship for proof-of-work longest-chain protocols, but those techniques do not appear to carry over to the proof-of-stake setting. This article develops a new analytic approach for establishing such settlement guarantees that yields explicit, rigorous settlement bounds for proof-of-stake longest-chain protocols, placing them on equal footing with their proof-of-work counterparts. Our techniques apply with some adaptations to the proof-of-work setting where they provide improvements to the state-of-the-art settlement bounds for proof-of-work protocols.
2023
CRYPTO
Practical-Time Related-Key Attack on GOST with Secret S-boxes
The block cipher GOST 28147-89 was the Russian Federation encryption standard for over 20 years, and is still one of its two standard block ciphers. GOST is a 32-round Feistel construction, whose security benefits from the fact that the S-boxes used in the design are kept secret. In the last 10 years, several attacks on the full 32-round GOST were presented. However, they all assume that the S-boxes are known. When the S-boxes are secret, all published attacks either target a small number of rounds, or apply for small sets of weak keys. In this paper we present the first practical-time attack on GOST with secret S-boxes. The attack works in the related-key model and is faster than all previous attacks in this model which assume that the S-boxes are known. The complexity of the attack is less than $2^{27}$ encryptions. It was fully verified, and runs in a few seconds on a PC. The attack is based on a novel type of related-key differentials of GOST, inspired by local collisions. Our new technique may be applicable to certain GOST-based hash functions as well. To demonstrate this, we show how to find a collision on a Davies-Meyer construction based on GOST with an arbitrary initial value, in less than $2^{10}$ hash function evaluations.
2023
CRYPTO
Prouff & Rivain’s Formal Security Proof of Masking, Revisited: Tight Bounds in the Noisy Leakage Model
Masking is a counter-measure that can be incorporated into software and hardware implementations of block ciphers to provably secure them against side-channel attacks. The security of masking can be proven in different types of threat models. In this paper, we are interested in directly proving the security in the most realistic threat model, the so-called noisy leakage adversary, that captures well how real-world side-channel adversaries operate. Direct proofs in this leakage model have been established by Prouff & Rivain at Eurocrypt 2013, Dziembowski et al. at Eurocrypt 2015, and Prest et al. at Crypto 2019. These proofs are complementary to each other, in the sense that the weaknesses of one proof are fixed in at least one of the others, and conversely. These weaknesses concerned in particular the strong requirements on the noise level and the security parameter to get meaningful security bounds, and some requirements on the type of adversary covered by the proof — i.e., chosen or random plaintexts. This suggested that the drawbacks of each security bound could actually be proof artifacts. In this paper, we solve these issues, by revisiting Prouff & Rivain’s approach.
2023
CRYPTO
Publicly Verifiable Zero-Knowledge and Post-Quantum Signatures From VOLE-in-the-Head
We present a new method for transforming zero-knowledge protocols in the designated verifier setting into public-coin protocols, which can be made non-interactive and publicly verifiable. Our transformation applies to a large class of ZK protocols based on oblivious transfer. In particular, we show that it can be applied to recent, fast protocols based on vector oblivious linear evaluation (VOLE), with a technique we call VOLE-in-the-head, upgrading these protocols to support public verifiability. Our resulting ZK protocols have linear proof size, and are simpler, smaller and faster than related approaches based on MPC-in-the-head. To build VOLE-in-the-head while supporting both binary circuits and large finite fields, we develop several new technical tools. One of these is a new proof of security for the SoftSpokenOT protocol (Crypto 2022), which generalizes it to produce certain types of VOLE correlations over large fields. Secondly, we present a new ZK protocol that is tailored to take advantage of this form of VOLE, which leads to a publicly verifiable VOLE-in-the-head protocol with only 2x more communication than the best, designated-verifier VOLE-based protocols. We analyze the soundness of our approach when made non-interactive using the Fiat-Shamir transform, using round-by-round soundness. As an application of the resulting NIZK, we present FAEST, a post-quantum signature scheme based on AES. FAEST is the first AES-based signature scheme to be smaller than SPHINCS+, with signature sizes between 5.6 and 6.6kB at the 128-bit security level. Compared with the smallest version of SPHINCS+ (7.9kB), FAEST verification is slower, but the signing times are between 8x and 40x faster.
2023
CRYPTO
Publicly-Verifiable Deletion via Target-Collapsing Functions
This work re-examines the goal of building cryptosystems with publicly-verifiable deletion. We introduce target-collapsing as a weakening of collapsing, analogous to how second preimage resistance weakens collision resistance. That is, target-collapsing requires indistinguishability between superpositions and mixtures of preimages of an honestly sampled image. We show that target-collapsing hashes enable publicly-verifiable deletion, proving conjectures from [Poremba, ITCS'23] and demonstrating that the Dual-Regev encryption (and corresponding FHE) schemes support $\mathsf{PVD}$ under the learning with errors assumption. We build on this framework to obtain a variety of primitives supporting publicly-verifiable deletion ($\mathsf{PVD}$) from weak cryptographic assumptions, including: - Commitments with $\mathsf{PVD}$ assuming the existence of injective one-way functions, or more generally, {\em almost-regular} one-way functions. Along the way, we demonstrate that (partial) target-collapsing hashes can be built from almost-regular one-way functions. - Public-key encryption with $\mathsf{PVD}$ assuming trapdoored variants of injective (or almost-regular) one-way functions. - Public-key encryption with $\mathsf{PVD}$ assuming pseudorandom group actions, by demonstrating that the scheme of [Hhan, Morimae, and Yamakawa, Eurocrypt'23] has $\mathsf{PVD}$. - $X$ with $\mathsf{PVD}$ for $X \in \{$attribute-based encryption, quantum fully-homomorphic encryption, witness encryption, time-revocable encryption$\}$, assuming $X$ and trapdoored variants of injective (or almost-regular) one-way functions.
2023
CRYPTO
Quantum Linear Key-recovery Attacks Using the QFT
The Quantum Fourier Transform is a fundamental tool in quantum cryptanalysis. In symmetric cryptanalysis, hidden shift algorithms such as Simon’s, which rely on the QFT, have been used to obtain structural attacks on some very specific block ciphers. The Fourier Transform is also used in classical cryptanalysis, for example in FFT-based linear key-recovery attacks introduced by Collard et al. (ICISC 2007). Whether such techniques can be adapted to the quantum setting has remained so far an open question. In this paper, we introduce a new framework for quantum linear key-recovery attacks using the QFT. These attacks loosely follow the classical method of Collard et al., in that they rely on the fast computation of a correlation state in which experimental correlations, rather than being directly accessible, are encoded in the amplitudes of a quantum state. The experimental correlation is a statistic that is expected to be higher for the good key, and on some conditions, the increased amplitude creates a speedup with respect to an exhaustive search of the key. The same method also yields a new family of structural attacks, and new examples of quantum speedups beyond quadratic using classical known-plaintext queries.
2023
CRYPTO
Random Oracle Combiners: Breaking the Concatenation Barrier for Collision-Resistance
Suppose we have two hash functions h1 and h2, but we trust the security of only one of them. To mitigate this worry, we wish to build a hash combiner C^{h1,h2} which is secure so long as one of the underlying hash functions is. This question has been well-studied in the regime of collision resistance. In this case, concatenating the two hash function outputs clearly works. Unfortunately for practice, a long series of works (Boneh and Boyen, CRYPTO’06; Pietrzak, Eurocrypt’07; Pietrzak, Crypto’08) showed no (noticeably) better combiner for collision resistance is possible. In this work, we revisit this pessimistic state of affairs, motivated by the observation that collision-resistance is insufficient for many interesting applications of cryptographic hash functions anyway. Thus, we believe (and argue) the right formulation of the “hash combiner” is to build what we call random oracle (RO) combiners, utilizing stronger assumptions for stronger constructions. Indeed, we circumvent the previous lower bounds for collision resistance by constructing a simple length-preserving RO combiner C^{h1,h2}_{Z1,Z2}(M) = h1(M, Z1) ⊕ h2(M, Z2), where Z1, Z2 are random salts of appropriate length. We show that this extra randomness is necessary for RO combiners, and indeed our construction is somewhat tight with this lower bound. On the negative side, we show that one cannot generically apply the composition theorem to further replace “monolithic” hash functions h1 and h2 by some simpler construction indifferentiable (such as the Merkle-Damgård transformation) from smaller components, such as fixed-length compression functions. Finally, despite this issue, we directly prove collision resistance of the Merkle-Damgård variant of our combiner, where h1 and h2 are replaced by iterative Merkle-Damgård hashes applied to a fixed-length compression function. Thus, we can still subvert the concatenation barrier for collision-resistance combiners while utilizing practically small fixed-length components underneath.
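As a concrete reading of the combiner formula above, here is a minimal Python sketch using SHA-256 and SHA3-256 as stand-ins for the two "monolithic" hashes h1 and h2; the salt lengths and the choice of hash functions are illustrative assumptions, not the paper's parameters.

import hashlib, os

def ro_combiner(m: bytes, z1: bytes, z2: bytes) -> bytes:
    # C^{h1,h2}_{Z1,Z2}(M) = h1(M, Z1) XOR h2(M, Z2)
    d1 = hashlib.sha256(m + z1).digest()      # stand-in for h1
    d2 = hashlib.sha3_256(m + z2).digest()    # stand-in for h2
    return bytes(a ^ b for a, b in zip(d1, d2))

z1, z2 = os.urandom(32), os.urandom(32)       # public random salts, fixed once
print(ro_combiner(b"message", z1, z2).hex())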
2023
CRYPTO
Reductions from module lattices to free module lattices, and application to dequantizing module-LLL
In this article, we give evidence that free modules (i.e., modules which admit a basis) are no weaker than arbitrary modules, when it comes to solving cryptographic algorithmic problems (and when the rank of the module is at least 2). More precisely, we show that for three algorithmic problems used in cryptography, namely the shortest vector problem, the Hermite shortest vector problem and a variant of the closest vector problem, there is a reduction from solving the problem in any module of rank n ≥ 2 to solving the problem in any free module of the same rank n. As an application, we show that this can be used to dequantize the LLL algorithm for module lattices presented by Lee et al. (Asiacrypt 2019).
2023
CRYPTO
Reusable Secure Computation in the Plain Model
Consider the standard setting of two-party computation where the sender has a secret function $f$ and the receiver has a secret input $x$ and the output $f(x)$ is delivered to the receiver at the end of the protocol. Let us consider the unidirectional message model where only one party speaks in each round. In this setting, Katz and Ostrovsky (Crypto 2004) showed that at least four rounds of interaction between the parties are needed in the plain model (i.e., no trusted setup) if the simulator uses the adversary in a black-box way (a.k.a. black-box simulation). Suppose the sender and the receiver would like to run multiple sequential iterations of the secure computation protocol on possibly different inputs. For each of these iterations, do the parties need to start the protocol from scratch and exchange four messages? In this work, we explore the possibility of \textit{amortizing} the round complexity or in other words, \textit{reusing} a certain number of rounds of the secure computation protocol in the plain model. We obtain the following results. \begin{itemize} \item Under standard cryptographic assumptions, we construct a four-round two-party computation protocol where (i) the first three rounds of the protocol could be reused an unbounded number of times if the receiver input remains the same and only the sender input changes, and (ii) the first two rounds of the protocol could be reused an unbounded number of times if the receiver input needs to change as well. In other words, the sender sends a single additional message if only its input changes, and in the other case, we need one message each from the receiver and the sender. The number of additional messages needed in each of the above two modes is optimal and, additionally, our protocol allows arbitrary interleaving of these two modes. \item We also extend these results to the multiparty setting (in the simultaneous message exchange model) and give round-optimal protocols such that (i) the first two rounds could be reused an unbounded number of times if the inputs of the parties need to change and (ii) the first three rounds could be reused an unbounded number of times if the inputs remain the same but the functionality to be computed changes. As in the two-party setting, we allow arbitrary interleaving of the above two modes of operation. \end{itemize}
2023
CRYPTO
Revisiting cycles of pairing-friendly elliptic curves
A recent area of interest in cryptography is recursive composition of proof systems. One of the approaches to make recursive composition efficient involves cycles of pairing-friendly elliptic curves of prime order. However, known constructions have very low embedding degrees. This entails large parameter sizes, which makes the overall system inefficient. In this paper, we explore 2-cycles composed of curves from families parameterized by polynomials, and show that such cycles do not exist unless a strong condition holds. As a consequence, we prove that no 2-cycles can arise from the known families, except for those cycles already known. Additionally, we show some general properties about cycles, and provide a detailed computation on the density of pairing-friendly cycles among all cycles.
2023
CRYPTO
Revisiting Security Estimation for LWE with Hints from a Geometric Perspective
The Distorted Bounded Distance Decoding Problem (DBDD) was introduced by Dachman-Soled et al. [Crypto ’20] as an intermediate problem between LWE and unique-SVP (uSVP). They presented an approach that reduces an LWE instance to a DBDD instance, integrates side information (or “hints”) into the DBDD instance, and finally reduces it to a uSVP instance, which can be solved via lattice reduction. They showed that this principled approach can lead to algorithms for side-channel attacks that perform better than ad-hoc algorithms that do not rely on lattice reduction. The current work focuses on new methods for integrating hints into a DBDD instance. We view hints from a geometric perspective, as opposed to the distributional perspective from the prior work. Our approach provides the rigorous promise that, as hints are integrated into the DBDD instance, the correct solution remains a lattice point contained in the specified ellipsoid. We instantiate our approach with two new types of hints: (1) Inequality hints, corresponding to the region of intersection of an ellipsoid and a halfspace; (2) Combined hints, corresponding to the region of intersection of two ellipsoids. Since the regions in (1) and (2) are not necessarily ellipsoids, we replace them with ellipsoidal approximations that circumscribe the region of intersection. Perfect hints are reconsidered as the region of intersection of an ellipsoid and a hyperplane, which is itself an ellipsoid. The compatibility of “approximate,” “modular,” and “short vector” hints from the prior work is examined. We apply our techniques to the decryption failure and side-channel attack settings. We show that “inequality hints” can be used to model decryption failures, and that our new approach yields a geometric analogue of the “failure boosting” technique of D’anvers et al. [ePrint, ’18]. We also show that “combined hints” can be used to fuse information from a decryption failure and a side-channel attack, and provide rigorous guarantees despite the data being non-Gaussian. We provide experimental data for both applications. The code that we have developed to implement the integration of hints and hardness estimates extends the Toolkit from prior work and has been released publicly.
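To illustrate what replacing an intersection with a circumscribing ellipsoid can look like, the Python sketch below uses the classical Löwner-John central-cut update from the ellipsoid method (my own stand-in, not the formulas from this paper) to compute an ellipsoid containing the intersection of the ellipsoid {x : (x-c)^T P^{-1} (x-c) <= 1} with the halfspace {x : a^T x <= a^T c}.

import numpy as np

def central_cut(c, P, a):
    # Smallest ellipsoid containing { (x-c)^T P^{-1} (x-c) <= 1 } ∩ { a^T x <= a^T c }.
    n = len(c)
    Pa = P @ a
    b = Pa / np.sqrt(a @ Pa)
    c_new = c - b / (n + 1)
    P_new = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(b, b))
    return c_new, P_new

c = np.zeros(3)
P = np.eye(3)                     # the unit ball
a = np.array([1.0, 0.0, 0.0])     # keep only the half with x_0 <= 0
print(central_cut(c, P, a))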
2023
CRYPTO
Revisiting the Constant-sum Winternitz One-time Signature with Applications to SPHINCS+ and XMSS
Hash-based signatures offer a conservative alternative to post-quantum signatures with arguably better-understood security than other post-quantum candidates. As a core building block of hash-based signatures, the efficiency of one-time signature (OTS) largely dominates that of hash-based signatures. The WOTS$^{+}$ signature scheme (Africacrypt 2013) is the current state-of-the-art OTS adopted by the signature schemes standardized by NIST---XMSS, LMS, and SPHINCS$^+$. A natural question is whether there is (and how much) room left for improving one-time signatures (and thus standard hash-based signatures). In this paper, we show that WOTS$^{+}$ one-time signature, when adopting the constant-sum encoding scheme (Bos and Chaum, Crypto 1992), is size-optimal not only under Winternitz's OTS framework, but also among all tree-based OTS designs. Moreover, we point out a flaw in the DAG-based OTS design previously shown to be size-optimal at Asiacrypt 1996, which makes the constant-sum WOTS$^{+}$ the most size-efficient OTS to the best of our knowledge. Finally, we evaluate the performance of constant-sum WOTS$^{+}$ integrated into the SPHINCS$^+$ (CCS 2019) and XMSS (PQC 2011) signature schemes which exhibit certain degrees of improvement in both signing time and signature size.
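To give a feel for the constant-sum encoding idea credited to Bos and Chaum, the Python sketch below (an illustrative toy, not the paper's parameter choices or the full WOTS$^{+}$ chaining with masks) maps a message index to the lexicographically m-th length-n tuple with entries in {0,...,w} summing to a fixed value s, and then signs by advancing hash chains according to that tuple.

import hashlib
from functools import lru_cache

n, w, s = 8, 15, 60                     # toy parameters: 8 chains, entries 0..15, fixed sum 60

@lru_cache(maxsize=None)
def count(length, total):
    # number of length-`length` tuples with entries in [0, w] summing to `total`
    if length == 0:
        return 1 if total == 0 else 0
    return sum(count(length - 1, total - v) for v in range(min(w, total) + 1))

def encode(m):
    # lexicographically m-th tuple with entries in [0, w] and sum exactly s
    assert 0 <= m < count(n, s)
    tup, remaining = [], s
    for i in range(n):
        for v in range(min(w, remaining) + 1):
            c = count(n - 1 - i, remaining - v)
            if m < c:
                tup.append(v)
                remaining -= v
                break
            m -= c
    return tup

def chain(x, steps):
    for _ in range(steps):
        x = hashlib.sha256(x).digest()
    return x

def sign(secret_chains, m):
    # reveal each chain advanced by the encoded amount; the fixed sum s plays the
    # role of the checksum in plain Winternitz
    return [chain(sk, v) for sk, v in zip(secret_chains, encode(m))]

secret_chains = [hashlib.sha256(bytes([i])).digest() for i in range(n)]
print(encode(12345), len(sign(secret_chains, 12345)))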
2023
CRYPTO
Revisiting the Indifferentiability of the Sum of Permutations
The sum of two $n$-bit pseudorandom permutations is known to behave like a pseudorandom function with $n$ bits of security. A recent line of research has investigated the security of the sum of two public $n$-bit permutations and its degree of indifferentiability. Mandal et al. (INDOCRYPT 2010) proved $2n/3$-bit security; Mennink and Preneel (ACNS 2015) pointed out a non-trivial flaw in their analysis and re-proved $2n/3$-bit security. Bhattacharya and Nandi (EUROCRYPT 2018) eventually improved the result to $n$-bit security. Recently, Gunsing at CRYPTO 2022 observed that a proof technique used in this line of research only holds for sequential indifferentiability. We revisit the line of research in detail, and observe that the strongest bound of $n$-bit security has two other serious issues in the reasoning: the first one is actually the same non-trivial flaw that was present in the work of Mandal et al., while the second one discards biases in the randomness influenced by the distinguisher. More concretely, we introduce two attacks that show limited potential of different approaches. We (i) show that the latter issue that discards biases only holds up to $2^{3n/4}$ queries, and (ii) perform a differentiability attack against their simulator in $2^{5n/6}$ queries. On the upside, we revive the result of Mennink and Preneel and show $2n/3$-bit regular indifferentiability security of the sum of public permutations.
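The object under study, the sum (XOR) of two permutations used as a pseudorandom function, can be written down in a few lines of Python; the sketch below uses two independent random permutations of a tiny domain purely to illustrate the construction, not its security.

import random

N = 256                      # toy domain size (think 2^n with n = 8)
rng = random.Random(2023)
P1 = rng.sample(range(N), N) # two independent permutations of the domain
P2 = rng.sample(range(N), N)

def F(x):
    # Sum-of-permutations PRF candidate: F(x) = P1(x) XOR P2(x)
    return P1[x] ^ P2[x]

print([F(x) for x in range(8)])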
2023
CRYPTO
Revisiting Time-Space Tradeoffs for Function Inversion
We study the black-box function inversion problem, which is the problem of finding $x \in [N]$ such that $f(x) = y$, given as input some challenge point $y$ in the image of a function $f : [N] \to [N]$, using $T$ oracle queries to $f$ \emph{and} preprocessed advice $\sigma \in \{0,1\}^S$ depending on $f$. We prove a number of new results about this problem, as follows. 1. We show an algorithm that works for any $T$ and $S$ satisfying $T S^2 \cdot \max\{S,T\} = \widetilde{\Theta}(N^3)$. In the important setting when $S < T$, this improves on the celebrated algorithm of Fiat and Naor [STOC, 1991], which requires $T S^3 \gtrsim N^3$. E.g., Fiat and Naor's algorithm is only non-trivial for $S \gg N^{2/3}$, while our algorithm gives a non-trivial tradeoff for any $S \gg N^{1/2}$. (Our algorithm and analysis are quite simple. As a consequence of this, we also give a self-contained and simple proof of Fiat and Naor's original result, with certain optimizations left out for simplicity.) 2. We observe that there is a very simple \emph{non-adaptive} algorithm (i.e., an algorithm whose $i$-th query $x_i$ is chosen based entirely on $\sigma$ and $y$, and not on $f(x_1),\ldots, f(x_{i-1})$) that improves slightly on the trivial algorithm. It works for any $T$ and $S$ satisfying $S = \Theta(N \log(N/T))$, for example, $T = N/\mathrm{polylog}(N)$, $S = \Theta(N/\log \log N)$. This answers a question due to Corrigan-Gibbs and Kogan [TCC, 2019], who asked whether non-trivial non-adaptive algorithms exist; namely, algorithms that work with parameters $T$ and $S$ satisfying $T + S/\log N = o(N)$. We also observe that our non-adaptive algorithm is what we call a \emph{guess-and-check} algorithm, that is, it is non-adaptive \emph{and} its final output is always one of the oracle queries $x_1,\ldots, x_T$. For guess-and-check algorithms, we prove a matching lower bound, therefore completely characterizing the achievable parameters $(S,T)$ for this natural class of algorithms. (Corrigan-Gibbs and Kogan showed that any such lower bound for \emph{arbitrary} non-adaptive algorithms would imply new circuit lower bounds.) 3. We show equivalence between function inversion and a natural decision version of the problem in both the worst case and the average case, and similarly for functions $f : [N] \to [M]$ with different ranges. Some of these equivalence results are deferred to the full version [ECCC, 2022]. All of the above results are most naturally described in a model with \emph{shared randomness} (i.e., random coins shared between the preprocessing algorithm and the online algorithm). However, as an additional contribution, we show (using a technique from communication complexity due to Newman [IPL, 1991]) how to generically convert any algorithm that uses shared randomness into one that does not.
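To fix the setting, the following toy Python sketch (my own naive baseline, not the paper's algorithm) shows the shape of a preprocessing/online inverter: the advice stores the function table on S points, and the online phase is a guess-and-check procedure whose output is always one of its own oracle queries.

import random

def preprocess(f, domain, S):
    # Advice: f restricted to S fixed points (stored as image -> preimage).
    stored = random.sample(domain, S)
    return {f(x): x for x in stored}

def invert(f, advice, y, domain, T):
    queries = random.sample(domain, T)       # non-adaptive choice of queries
    if y in advice:
        queries[0] = advice[y]               # use the advice hit as one of the queries
    for x in queries:                        # guess-and-check: the output is a query
        if f(x) == y:
            return x
    return None                              # inversion failed

N = 10_000
domain = list(range(N))
f = lambda x: pow(x, 3, N)                   # toy function on [N]
adv = preprocess(f, domain, S=1000)
print(invert(f, adv, f(4242), domain, T=200))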
2023
CRYPTO
Round-Optimal Black-Box MPC in the Plain Model
We give the first construction of a (fully) black-box round-optimal secure multiparty computation protocol in the plain model. Our protocol makes black-box use of a sub-exponentially secure two-message statistical sender private oblivious transfer (SSP-OT), which in turn can be based on (sub-exponential variants of) most of the standard cryptographic assumptions known to imply public-key cryptography.
2023
CRYPTO
Secure Computation with Shared EPR Pairs (Or: How to Teleport in Zero-Knowledge)
Can a sender non-interactively transmit one of two strings to a receiver without knowing which was received? Does there exist minimally-interactive secure computation without the use of public-key primitives? We provide affirmative answers to these questions in a model where parties have access to shared EPR pairs, thus demonstrating the cryptographic power of this resource. - First, we construct one-shot string oblivious transfer (OT) in the shared EPR pair model, assuming the sub-exponential hardness of LWE. Building on this, we show that {\em secure teleportation} is possible: Given the description of any quantum operation $Q$, a sender with input $\rho$ can send a single classical message that securely transmits $Q(\rho)$ to a receiver. That is, we realize an ideal channel that takes input $\rho$ from the sender and provably delivers $Q(\rho)$ to the receiver without revealing any other information, achieving one-shot secure computation of unidirectional quantum functionalities in the shared EPR pair model. This immediately gives a number of applications in the shared EPR pair model: (1) We obtain one-shot secure computation of unidirectional \emph{classical} functionalities, (2) We obtain the first construction of NIZK for QMA from (sub-exponential) standard assumptions, and (3) We define a notion of \emph{zero-knowledge} state synthesis, and show that it can be achieved with one message. - Next, we investigate general-purpose secure \emph{multiparty} computation in the shared EPR pair model. Here, we obtain a (round-optimal) two-round protocol for secure computation of classical functionalities that is \emph{unconditionally-secure} in the (quantum-accessible) random oracle model. We also obtain a two-round protocol without random oracles assuming (non-interactive) extractable commitments and correlation-intractability for efficient functions. At the heart of our approach are novel techniques for making use of entangling operations to generate multibit OT correlations, and for instantiating the Fiat-Shamir transform using correlation-intractability in the quantum setting.
2023
CRYPTO
Secure Multiparty Computation from Threshold Encryption based on Class Groups
We construct the first actively-secure threshold version of the cryptosystem based on class groups from the so-called CL framework (Castagnos and Laguillaumie, 2015). We show how to use our threshold scheme to achieve general universally composable (UC) secure multiparty computation (MPC) with only transparent set-up, i.e., with no secret trapdoors involved. On the way to our goal, we design new zero-knowledge (ZK) protocols with constant communication complexity for proving multiplicative relations between encrypted values. This allows us to use the ZK proofs to achieve MPC with active security with only a constant factor overhead. Finally, we adapt our protocol for the so-called "You-Only-Speak-Once" (YOSO) setting, which is a very promising recent approach for performing MPC over a blockchain. This is possible because our key generation protocol is simpler and requires significantly less interaction compared to previous approaches: in particular, our new key generation protocol allows the adversary to bias the public key, but we show that this has no impact on the security of the resulting cryptosystem.
2023
CRYPTO
Security Analysis of the WhatsApp End-to-End Encrypted Backup Protocol
WhatsApp is an end-to-end encrypted (E2EE) messaging service used by billions of people. In late 2021, WhatsApp rolled out a new protocol for backing up chat histories. The E2EE WhatsApp backup protocol (WBP) allows users to recover their chat history from passwords, leaving WhatsApp oblivious of the actual encryption keys. The WBP builds upon the OPAQUE framework for password-based key exchange, which is currently undergoing standardization. While considerable efforts have gone into the design and auditing of the WBP, the complexity of the protocol’s design and shortcomings in the existing security analyses of its building blocks make it hard to understand the actual security guarantees that the WBP provides. In this work, we provide the first formal security analysis of the WBP. Our analysis in the universal composability (UC) framework confirms that the WBP provides strong protection of users’ chat history and passwords. It also shows that a corrupted server can under certain conditions make more password guesses than what previous analysis suggests.
2023
CRYPTO
Security-Preserving Distributed Samplers: How to Generate any CRS in One Round without Random Oracles
A distributed sampler is a way for several mutually distrusting parties to non-interactively generate a common reference string (CRS) that all parties trust. Previous work constructs distributed samplers in the random oracle model, or in the standard model with very limited security guarantees. This is no accident, as standard model distributed samplers with full security were shown impossible. In this work, we provide new definitions for distributed samplers which we show achieve meaningful security guarantees in the standard model. In particular, our notion implies that the hardness of a wide range of security games is preserved when the CRS is replaced with a distributed sampler. We also show how to realize our notion of distributed samplers. A core technical tool enabling our construction is a new notion of single-message zero knowledge.
2023
CRYPTO
Simple Tests of Quantumness Also Certify Qubits
A test of quantumness is a protocol that allows a classical verifier to certify (only) that a prover is not classical. We show that tests of quantumness that follow a certain template, which captures recent proposals such as [KCVY21, KLVY22], in fact can do much more. Namely, the same protocols can be used for certifying a qubit, a building-block that stands at the heart of applications such as certifiable randomness and classical delegation of quantum computation. Certifying qubits was previously only known to be possible based on the hardness of the Learning with Errors problem and the use of adaptive hardcore bits [BCM+21]. Our framework allows certification of qubits based only on the existence of post-quantum trapdoor claw-free functions, or on quantum fully homomorphic encryption. These can be instantiated, for example, from Ring Learning with Errors. This has the potential to improve the efficiency of qubit certification and derived functionalities. On the technical side, we show that the quantum soundness of any such protocol can be reduced to proving a bound on a simple algorithmic task: informally, answering “two challenges simultaneously” in the protocol. Our reduction formalizes the intuition that these protocols demonstrate quantumness by leveraging the impossibility of rewinding a general quantum prover. This allows us to prove tight bounds on the quantum soundness of [KCVY21] and [KLVY22], showing that no quantum polynomial-time prover can succeed with probability larger than $\cos^2(\pi/8) \approx 0.853$. Previously, only an upper bound on the success probability of classical provers, and a lower bound on the success probability of quantum provers, were known. We then extend this proof of quantum soundness to show that provers that approach the quantum soundness bound must perform almost anti-commuting measurements. This certifies that the prover holds a qubit.
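For reference, the soundness constant quoted above is the same value as the optimal quantum winning probability of the CHSH game, and it evaluates as follows:

\[
\cos^2\!\left(\frac{\pi}{8}\right) \;=\; \frac{1 + \cos(\pi/4)}{2} \;=\; \frac{2 + \sqrt{2}}{4} \;\approx\; 0.8536.
\]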
2023
CRYPTO
SNARGs for Monotone Policy Batch NP
We construct a succinct non-interactive argument ($\mathsf{SNARG}$) for the class of monotone policy batch $\mathsf{NP}$ languages, under the Learning with Errors ($\mathsf{LWE}$) assumption. This class is a subclass of $\mathsf{NP}$ that is associated with a monotone function~$f:\{0,1\}^k\rightarrow\{0,1\}$ and an $\mathsf{NP}$ language $\mathcal{L}$, and contains instances $(x_1,\ldots,x_k)$ such that $f(b_1,\ldots,b_k)=1$ where $b_j=1$ if and only if $x_j\in \mathcal{L}$. Our $\mathsf{SNARG}$s are arguments of knowledge in the non-adaptive setting, and satisfy a new notion of somewhere extractability against adaptive adversaries. This is the first $\mathsf{SNARG}$ under standard hardness assumptions for a sub-class of $\mathsf{NP}$ that is not known to have a (computational) non-signaling $\mathsf{PCP}$ with parameters compatible with the standard framework for constructing $\mathsf{SNARG}$s dating back to [Kalai-Raz-Rothblum, STOC '13]. Indeed, our approach necessarily departs from this framework. Our construction combines existing quasi-arguments for $\mathsf{NP}$ (based on batch arguments for $\mathsf{NP}$) with a new type of cryptographic encoding of the instance and a new analysis going from local to global soundness. The main novel ingredient used in our encoding is a {\em predicate-extractable hash} ($\mathsf{PEH}$) family, which is a primitive that generalizes the notion of a somewhere extractable hash. Whereas a somewhere extractable hash allows one to extract a single input coordinate, our $\mathsf{PEH}$ extracts a {\em global} property of the input. We view this primitive to be of independent interest, and believe that it will find other applications.
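The language class defined above is easy to picture in code; the toy Python snippet below (an illustration of the definition only, with a made-up monotone policy and NP relation) checks an instance against a threshold policy f, deciding membership in L with an explicit witness per coordinate.

# Toy NP relation: x is in L iff some witness w satisfies w*w == x (x is a square).
def in_L(x, w):
    return w * w == x

# Monotone policy f: at least `threshold` of the k coordinate-instances are in L.
def monotone_policy(bits, threshold=2):
    return sum(bits) >= threshold

def batch_instance_holds(instances, witnesses, threshold=2):
    bits = [1 if in_L(x, w) else 0 for x, w in zip(instances, witnesses)]
    return monotone_policy(bits, threshold)

# (x_1, x_2, x_3) with valid witnesses for the first two coordinates only.
print(batch_instance_holds([9, 16, 7], [3, 4, 0]))   # True: 2 of 3 coordinates are in L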
2023
CRYPTO
Snowblind: A Threshold Blind Signature in Pairing-Free Groups
Both threshold and blind signatures have, individually, received a considerable amount of attention. However, little is known about their combination, i.e., a threshold signature which is also blind, in that no coalition of signers learns anything about the message being signed or the signature being produced. Several applications of blind signatures (e.g., anonymous tokens) would benefit from distributed signing as a means to increase trust in the service and hence reduce the risks of key compromise. This paper builds the first blind threshold signatures in pairing-free groups. Our main contribution is a construction that transforms an underlying blind non-threshold signature scheme with a suitable structure into a threshold scheme, preserving its blindness. The resulting signing protocol proceeds in three rounds, and produces signatures consisting of one group element and two scalars. The underlying non-threshold blind signature schemes are of independent interest, and improve upon the current state of the art (Tessaro and Zhu, EUROCRYPT ’22) with shorter signatures (three elements, instead of four) and simpler proofs of security. All of our schemes are proved secure in the Random Oracle and Algebraic Group Models, assuming the hardness of the discrete logarithm problem.
2023
CRYPTO
Streaming Functional Encryption
We initiate the study of streaming functional encryption (sFE), which is designed for scenarios in which data arrives in a streaming manner and is computed on in an iterative manner as the stream arrives. Unlike in a standard functional encryption (FE) scheme, in an sFE scheme, we (1) do not require the entire data set to be known at encryption time and (2) allow for partial decryption given only a prefix of the input. More specifically, in an sFE scheme, we can sequentially encrypt each data point x_i in a stream of data x = x_1 ... x_n as it arrives, without needing to wait for all n values. We can then generate function keys for streaming functions which are stateful functions that take as input a message x_i and a state st_i and output a value y_i and the next state st_{i+1}. For any k ≤ n, a user with a function key for a streaming function f can learn the first k output values y_1 ... y_k where (y_i, st_{i+1}) = f(x_i, st_i) and st_1 = ⊥ given only ciphertexts for the first k elements x_1 ... x_k. In this work, we introduce the notion of sFE and show how to construct it from FE. In particular, we show how to achieve a secure sFE scheme for P/Poly from a compact, secure FE scheme for P/Poly, where our security notion for sFE is similar to standard FE security except that we require all function queries to be made before the challenge ciphertext query. Furthermore, by combining our result with the FE construction of Jain, Lin, and Sahai (STOC, 2022), we show how to achieve a secure sFE scheme for P/Poly from the polynomial hardness of well-studied assumptions.
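A minimal sketch (hypothetical names, not the paper's construction) of the functionality an sFE function key reveals: evaluating a streaming function on a prefix x_1 ... x_k, with st_1 = ⊥ modeled as None.

```python
# Evaluate a streaming function f on a prefix of the stream.
def eval_prefix(f, stream_prefix):
    st = None                      # st_1 = ⊥
    outputs = []
    for x in stream_prefix:
        y, st = f(x, st)           # (y_i, st_{i+1}) = f(x_i, st_i)
        outputs.append(y)
    return outputs                 # y_1 ... y_k

# Toy streaming function: running sum of the stream seen so far.
f = lambda x, st: (x + (st or 0), x + (st or 0))
print(eval_prefix(f, [1, 2, 3]))   # [1, 3, 6]
```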
2023
CRYPTO
Succinct Arguments for RAM Programs via Projection Codes
Motivated by the goal of proving statements that involve small subsets of a big database, we introduce and study the notion of projection codes. A standard error-correcting code allows one to encode a message x into a codeword X, such that even if a constant fraction of X is corrupted, the message x can still be recovered. A projection code extends this guarantee to any subset of the bits of x. Concretely, for every projection of x to a subset s of its coordinates, there is a subset S of comparable size such that the projected encoding X|_S forms a robust encoding of the projected message x|_s. Our first main result is a construction of projection codes with a near-optimal increase in the length of x and size of s. We then apply this to obtain our second main result: succinct arguments for the computation of a RAM program on a (big) committed database, where the communication and the run-time of both the prover and the verifier are close to optimal even when the RAM program run-time is much smaller than the database size. Our solution makes only a black-box use of a collision-resistant hash function, providing the first black-box alternative to previous non-black-box constructions with similar asymptotic efficiency.
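As a toy illustration of the projection property (not the paper's near-optimal construction, which avoids this blow-up in length), a bitwise repetition code is trivially a projection code: projecting the codeword to the blocks indexed by s yields a repetition encoding of x|_s.

```python
R = 5  # repetition factor (toy choice; the paper achieves near-optimal length)

def encode(x):
    # Repetition code: each bit of x is repeated R times.
    return [b for b in x for _ in range(R)]

def project_codeword(X, s):
    # S = the codeword positions of the blocks indexed by s.
    return [X[i * R + j] for i in s for j in range(R)]

def decode(Xs):
    # Majority decoding, tolerating a constant fraction of corruptions.
    blocks = [Xs[i:i + R] for i in range(0, len(Xs), R)]
    return [1 if sum(blk) > R // 2 else 0 for blk in blocks]

x = [1, 0, 1, 1, 0, 0]
X = encode(x)
s = [0, 2, 5]
print(decode(project_codeword(X, s)) == [x[i] for i in s])  # True
```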
2023
CRYPTO
The Power of Undirected Rewindings for Adaptive Security
Existing proofs of adaptive security (e.g., in settings in which decryption keys are adaptively revealed) often rely on guessing arguments. Such guessing arguments can be simple (and, e.g., just involve guessing which keys are revealed), or more complex ``partitioning'' arguments. Since guessing directly increases the loss of the corresponding security reduction, this leads to black-box lower bounds for a number of cryptographic scenarios that involve adaptive security. In this work, we provide an alternative to such guessing arguments: instead of guessing in a security reduction which adaptive choices an adversary A makes, we rewind A many times until we can successfully embed a given computational challenge. The main benefit of using rewindings is that these rewindings can be arranged sequentially, and the corresponding reduction loss only accumulates additively (instead of multiplicatively, as with guessing). The main technical challenge is to show that A's success is not negatively affected after (potentially many) rewindings. To this end, we develop machinery for ``undirected'' rewindings that preserve A's success across (potentially many) rewindings. We use this strategy to show - security of the ``Logical Key Hierarchy'' protocol underlying the popular TreeKEM key management protocol, and - security of the Goldreich-Goldwasser-Micali (GGM) pseudorandom function (PRF) as a prefix-constrained PRF. In both cases, we provide the first polynomial reductions to standard assumptions (i.e., to IND-CPA and PRG security, respectively), and in the case of the GGM PRF, we also circumvent an existing lower bound.
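For reference, a sketch of the standard GGM PRF and its prefix-constrained keys (the PRG is instantiated with SHA-256 purely for illustration; this is not the paper's reduction, only the object it analyzes):

```python
import hashlib

def prg(seed: bytes):
    # Length-doubling PRG: seed -> (left half, right half), modeled here with SHA-256.
    return hashlib.sha256(seed + b"0").digest(), hashlib.sha256(seed + b"1").digest()

def ggm_prf(key: bytes, x: str) -> bytes:
    # Walk the GGM tree: take the left/right PRG output according to each input bit.
    seed = key
    for bit in x:                       # x is a bit string such as "0110"
        left, right = prg(seed)
        seed = left if bit == "0" else right
    return seed

def constrain(key: bytes, prefix: str) -> bytes:
    # The constrained key for all inputs starting with `prefix` is simply
    # the GGM seed reached after walking the prefix.
    return ggm_prf(key, prefix)

k = b"\x00" * 32
assert ggm_prf(k, "0110") == ggm_prf(constrain(k, "01"), "10")
```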
2023
CRYPTO
The Pseudorandom Oracle Model and Ideal Obfuscation
We introduce a new idealized model of hash functions, which we refer to as the *pseudorandom oracle* (PrO) model. Intuitively, it allows us to model cryptosystems that use the code of an ideal hash function in a non-black-box way. Formally, we model hash functions via a combination of a pseudorandom function (PRF) family and an ideal oracle. A user can initialize the hash function by choosing a PRF key $k$ and mapping it to a public handle $h$ using the oracle. Given the handle $h$ and some input $x$, the oracle can also be called to evaluate the PRF at $x$ with the corresponding key $k$. A user who chooses the PRF key $k$ therefore has a complete description of the hash function and can use its code in non-black-box constructions, while an adversary, who just gets the handle $h$, only has black-box access to the hash function via the oracle. As our main result, we show how to construct ideal obfuscation in the PrO model, starting from functional encryption (FE), which in turn can be based on well-studied polynomial hardness assumptions. In contrast, we know that ideal obfuscation cannot be instantiated in the basic random oracle model under any assumptions. We believe our result provides heuristic justification for the following: (1) most natural security goals implied by ideal obfuscation can be achieved in the real world; (2) obfuscation can be constructed from FE at polynomial security loss. We also discuss how to interpret our result in the PrO model as a construction of ideal obfuscation using simple hardware tokens or as a way to bootstrap ideal obfuscation for PRFs to that for all functions.
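A minimal toy sketch of the PrO interface described above (a hypothetical implementation for intuition, with HMAC-SHA256 standing in for the PRF family): the user who chooses k knows the full hash code, while an adversary holding only the handle h gets black-box oracle access.

```python
import os, hmac, hashlib

class PrO:
    def __init__(self):
        self._table = {}                       # handle -> PRF key

    def init(self, k: bytes) -> bytes:
        h = os.urandom(16)                     # public handle for key k
        self._table[h] = k
        return h

    def eval(self, h: bytes, x: bytes) -> bytes:
        k = self._table[h]                     # oracle evaluates PRF(k, x)
        return hmac.new(k, x, hashlib.sha256).digest()

oracle = PrO()
k = os.urandom(32)          # the user who picks k can use the hash code non-black-box
h = oracle.init(k)          # the adversary only sees h and queries the oracle
assert oracle.eval(h, b"x") == hmac.new(k, b"x", hashlib.sha256).digest()
```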
2023
CRYPTO
The Query-Complexity of Preprocessing Attacks
A large number of works prove lower bounds on space-time trade-offs in preprocessing attacks, i.e., trade-offs between the size of the advice and the time needed to break a scheme given such advice. We contend that the question of how much {\em time} is needed to produce this advice is equally important, and often highly non-trivial. However, it has received significantly less attention. In this paper, we present lower bounds on the complexity of preprocessing attacks that depend on both offline and online time. As in the case of space-time trade-offs, we focus in particular on settings with ideal primitives, where both the offline and online time complexities are approximated by the number of queries to the given primitive. We give generic results that highlight the benefits of salting to generically increase the offline costs of preprocessing attacks. The bulk of our paper presents several results focusing on {\em salted} hash functions. In particular, we provide a fairly involved analysis of the pre-image- and collision-resistance security of the (two-block) Merkle-Damg\aard construction in our model.
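For concreteness, a sketch of a salted two-block Merkle-Damgård evaluation (the exact salt placement here is an assumption for illustration; the paper's model treats the compression function as an ideal primitive):

```python
import hashlib

def compress(chaining: bytes, block: bytes) -> bytes:
    # Stand-in compression function; the paper models this as an ideal primitive.
    return hashlib.sha256(chaining + block).digest()

def md2_salted(salt: bytes, m1: bytes, m2: bytes, iv: bytes = b"\x00" * 32) -> bytes:
    # Absorb the salt before the two message blocks, so advice computed offline
    # without knowing the salt is of limited use online.
    h = compress(iv, salt)
    h = compress(h, m1)
    return compress(h, m2)

print(md2_salted(b"salt", b"block-1", b"block-2").hex()[:16])
```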
2023
CRYPTO
Tighter QCCA-Secure Key Encapsulation Mechanism with Explicit Rejection in the Quantum Random Oracle Model
Hofheinz et al. (TCC 2017) proposed several key encapsulation mechanism (KEM) variants of Fujisaki-Okamoto (\textsf{FO}) transformation, including $\textsf{FO}^{\slashed{\bot}},\textsf{FO}_m^{\slashed{\bot}}, \textsf{QFO}_m^{\slashed{\bot}},\textsf{FO}^{\bot},\textsf{FO}_m^\bot$ and $\textsf{QFO}_m^\bot$, and they are widely used in the post-quantum cryptography standardization launched by NIST. These transformations are divided into two types, the implicit and explicit rejection type, including $\{\textsf{FO}^{\slashed{\bot}}$, $\textsf{FO}_m^{\slashed{\bot}}$, $\textsf{QFO}_m^{\slashed{\bot}}\}$ and $\{\textsf{FO}^{\bot}$, $\textsf{FO}_m^\bot$, $\textsf{QFO}_m^\bot\}$, respectively. The decapsulation algorithm of the implicit (resp. explicit) rejection type returns a pseudorandom value (resp. an abort symbol $\bot$) for an invalid ciphertext. For the implicit rejection type, the \textsf{IND-CCA} security reduction of $\textsf{FO}^\slashed{\bot}$ in the quantum random oracle model (QROM) can avoid the quadratic security loss, as shown by Kuchta et al. (EUROCRYPT 2020). However, for the explicit rejection type, the best known \textsf{IND-CCA} security reduction in the QROM presented by H{\"o}velmanns et al. (ASIACRYPT 2022) for $\textsf{FO}_m^\bot$ still suffers from a quadratic security loss. Moreover, until now it was not clear whether the implicit rejection type is more secure than the explicit rejection type. In this paper, a QROM security reduction of $\textsf{FO}_m^\bot$ without incurring a quadratic security loss is provided. Furthermore, our reduction achieves \textsf{IND-qCCA} security, which is stronger than \textsf{IND-CCA} security. To achieve our result, two steps are taken: The first step is to prove that the \textsf{IND-qCCA} security of $\textsf{FO}_m^\bot$ can be tightly reduced to the \textsf{IND-CPA} security of $\textsf{FO}_m^\bot$ by using the online extraction technique proposed by Don et al. (EUROCRYPT 2022). The second step is to prove that the \textsf{IND-CPA} security of $\textsf{FO}_m^\bot$ can be reduced to the \textsf{IND-CPA} security of the underlying public key encryption (PKE) scheme without incurring quadratic security loss by using the Measure-Rewind-Measure One-Way to Hiding Lemma (EUROCRYPT 2020). In addition, we prove that (at least from a theoretical point of view), security is independent of whether the rejection type is explicit ($\textsf{FO}_m^\bot$) or implicit ($\textsf{FO}_m^{\slashed{\bot}}$) if the underlying PKE scheme is weakly $\gamma$-spread.
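For orientation, a schematic sketch of explicit-rejection decapsulation in the FO style discussed above (Enc, Dec, G, H are placeholders for the underlying PKE and the random oracles; this is an illustrative outline, not the paper's exact specification):

```python
def decaps_explicit(sk, pk, c, Dec, Enc, G, H):
    m = Dec(sk, c)
    # Re-encrypt with derandomized coins G(m) and check consistency with c.
    if m is None or Enc(pk, m, G(m)) != c:
        return None            # explicit rejection: output the abort symbol ⊥
    return H(m)                # otherwise output the session key K = H(m)
```

An implicit-rejection variant would instead return a pseudorandom value derived from a secret seed and c in the rejection branch, which is exactly the distinction the abstract draws between the two types.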
2023
CRYPTO
Toward Practical Lattice-based Proof of Knowledge from Hint-MLWE
In the last decade, zero-knowledge proof of knowledge protocols have been extensively studied to achieve active security of various cryptographic protocols. However, the existing solutions simply seek zero-knowledge for both the message and the randomness, which is overkill in many applications since protocols may remain secure even if some information about the randomness is leaked to the adversary. We develop this idea to improve the state-of-the-art proof of knowledge protocols for RLWE-based public-key encryption and BDLOP commitment schemes. In a nutshell, we present new proof of knowledge protocols without noise flooding or rejection sampling that are provably secure under a computational hardness assumption called Hint-MLWE. We also show an efficient reduction from Hint-MLWE to the standard MLWE assumption. Our approach enjoys the best of two worlds because it has no computational overhead from repetition (abort) and achieves a polynomial overhead between the honest and proven languages. We prove this claim by demonstrating concrete parameters and comparing them with previous results. Finally, we explain how our idea can be further applied to other proof of knowledge protocols providing advanced functionality.
2023
CRYPTO
Tracing Quantum State Distinguishers via Backtracking
We show the following results: - The post-quantum equivalence of indistinguishability obfuscation and differing inputs obfuscation in the restricted setting where the outputs differ on at most a polynomial number of points. Our result handles the case where the auxiliary input may contain a \emph{quantum state}; previous results could only handle classical auxiliary input. - Bounded collusion traitor tracing from general public key encryption, where the decoder is allowed to contain a quantum state. The parameters of the scheme grow polynomially in the collusion bound. - Collusion-resistant traitor tracing with constant-size ciphertexts from general public key encryption, again for quantum state decoders. The public key and secret keys grow polynomially in the number of users. - Traitor tracing with embedded identities, again for quantum state decoders, under a variety of different assumptions with different parameter size trade-offs. Traitor tracing and differing inputs obfuscation with quantum decoders / auxiliary input arise naturally when considering the post-quantum security of these primitives. We obtain our results by abstracting out a core algorithmic model, which we call the Back One Step (BOS) model. We prove a general theorem, reducing many quantum results including ours to designing \emph{classical} algorithms in the BOS model. We then provide simple algorithms for the particular instances studied in this work.
2023
CRYPTO
TreePIR: Sublinear-Time and Polylog-Bandwidth Private Information Retrieval from DDH
In Private Information Retrieval (PIR), a client wishes to retrieve the value of an index $i$ from a public database of $N$ values without leaking any information about $i$. In their recent seminal work, Corrigan-Gibbs and Kogan (EUROCRYPT 2020) introduced the first two-server PIR protocol with sublinear amortized server time and sublinear $O(\sqrt{N} \log N)$ bandwidth. In a followup work, Shi et al. (CRYPTO 2021) reduced the bandwidth to polylogarithmic by proposing a construction based on privately puncturable pseudorandom functions, a primitive whose only construction known to date is based on heavy cryptographic primitives such as LWE. Partly because of this, their PIR protocol does not achieve concrete efficiency. In this paper we propose TreePIR, a two-server PIR protocol with sublinear amortized server time and polylogarithmic bandwidth whose security can be based on just the DDH assumption. TreePIR can be partitioned into two phases that are both sublinear: The first phase is remarkably simple and only requires pseudorandom generators. The second phase is a single-server PIR protocol on \emph{only} $\sqrt{N}$ indices, for which we can use the protocol by D\"ottling et al. (CRYPTO 2019) based on DDH, or, for practical purposes, the most concretely efficient single-server PIR protocol. Not only does TreePIR achieve better asymptotics than previous approaches while resting on weaker cryptographic assumptions, it also outperforms existing two-server PIR protocols in practice. The crux of our protocol is a new cryptographic primitive that we call weak privately puncturable pseudorandom functions, which we believe can have further applications.
2023
CRYPTO
Tri-State Circuits: A Better Model of Computation for Garbling
We introduce tri-state circuits (TSCs). TSCs form a natural model of computation that, to our knowledge, has not been considered by theorists. The model captures a surprising combination of simplicity and power. TSCs are simple in that they allow only three wire values (0, 1, and undefined -- Z) and three types of fan-in two gates; they are powerful in that their statically placed gates fire (execute) eagerly as their inputs become defined, implying orders of execution that depend on input. This behavior is sufficient to efficiently evaluate RAM programs. We construct a TSC that emulates T steps of any RAM program and that has only $O(T \log^3 T \log\log T)$ gates. Contrast this with the reduction from RAM to Boolean circuits, where the best approach scans all of memory on each access, incurring quadratic cost. We connect TSCs with Garbled Circuits (GC). TSCs capture the power of garbling far better than Boolean Circuits, offering a more expressive model of computation that leaves per-gate cost essentially unchanged. As an important application, we construct authenticated Garbled RAM (GRAM), enabling constant-round maliciously-secure 2PC of RAM programs. Let $\lambda$ denote the security parameter. We extend authenticated garbling to TSCs; by simply plugging in our TSC-based RAM, we obtain authenticated GRAM running at cost $O(T \log^3 T \log\log T \cdot \lambda)$, outperforming all prior work, including prior semi-honest GRAM. We also give semi-honest garbling of TSCs from a one-way function (OWF). This yields OWF-based GRAM at cost $O(T \log^3 T \log\log T \cdot \lambda)$, outperforming the best prior OWF-based GRAM by more than a factor of $\lambda$.
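A toy evaluator illustrating the tri-state behavior described above (illustrative only: the gate below is just an example, not one of the paper's three gate types). Wires take values in {0, 1, Z}, and a gate fires as soon as its output becomes determined by the defined inputs.

```python
Z = None   # the undefined wire value

def tsc_eval(gates, wires):
    # gates: list of (out, f, in1, in2); wires: dict wire_name -> 0/1/Z.
    # Repeatedly let gates fire until no wire changes (input-dependent order).
    changed = True
    while changed:
        changed = False
        for out, f, a, b in gates:
            if wires[out] is Z:
                v = f(wires[a], wires[b])
                if v is not Z:
                    wires[out] = v      # gate fires once its output is defined
                    changed = True
    return wires

# Example gate: a buffer controlled by c that copies x once c becomes 1.
buf = lambda x, c: x if c == 1 else Z
wires = {"x": 1, "c": Z, "y": Z}
tsc_eval([("y", buf, "x", "c")], wires); print(wires["y"])  # None, i.e. Z: c undefined
wires["c"] = 1
tsc_eval([("y", buf, "x", "c")], wires); print(wires["y"])  # 1
```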
2023
CRYPTO
Twin Column Parity Mixers and Gaston
We introduce a new type of mixing layer for the round function of cryptographic permutations, called the circulant twin column parity mixer (CPM), which is a generalization of the mixing layers in KECCAK-f and XOODOO. While these mixing layers have a bitwise differential branch number of 4 and a computational cost of 2 (bitwise) additions per bit, the circulant twin CPMs we build have a bitwise differential branch number of 12 at the expense of an increase in computational cost: depending on the dimension this ranges between 3 and 3.34 XORs per bit. Our circulant twin CPMs operate on a state in the form of a rectangular array and can serve as the mixing layer in a round function that has as non-linear step a layer of S-boxes operating in parallel on the columns. When sandwiched between two ShiftRow-like mappings, such a mixing layer attains a columnwise branch number of 12, which guarantees 12 active S-boxes per two rounds in differential trails. Remarkably, the linear branch numbers (bitwise and columnwise alike) of these mappings are only 4. However, we define the transpose of a circulant twin CPM that has a linear branch number of 12 and a differential branch number of 4. We give a concrete instantiation of a permutation using such a mixing layer, named Gaston. It operates on a state of 5*64 bits and uses the chi mapping operating on columns for its non-linear layer. Most notably, the Gaston round function is lightweight in that it takes as few bitwise operations as the round function of the NIST lightweight standard ASCON. We show that the best 3-round differential and linear trails of Gaston have much higher weights than those of ASCON. Permutations like Gaston can be very competitive in applications that rely for their security exclusively on good differential properties, such as keyed hashing as in the compression phase of Farfalle.
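For reference, a sketch of the standard chi map on a 5-bit column, the kind of columnwise non-linear step mentioned above (this is the familiar KECCAK-style chi; Gaston's precise specification is given in the paper):

```python
def chi_column(a):
    # a_i <- a_i XOR ((NOT a_{i+1}) AND a_{i+2}), indices mod the column height.
    n = len(a)   # n = 5 for a 5*64-bit state read column by column
    return [a[i] ^ ((a[(i + 1) % n] ^ 1) & a[(i + 2) % n]) for i in range(n)]

print(chi_column([1, 0, 1, 1, 0]))
```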
2023
CRYPTO
Two-Round Stateless Deterministic Two-Party Schnorr Signatures From Pseudorandom Correlation Functions
Schnorr signatures are a popular choice due to their simplicity, provable security, and linear structure that enables relatively easy threshold signing protocols. The deterministic variant of Schnorr (where the nonce is derived in a stateless manner using a PRF from the message and a long-term secret) is more popular in practice since it mitigates the threats of a faulty or poor randomness generator (which in Schnorr leads to catastrophic breaches of security). Unfortunately, threshold protocols for the deterministic variant of Schnorr have so far been quite inefficient, as they make non-black-box use of the PRF involved in the nonce generation. In this paper, we present the first two-party threshold protocol for the deterministic variant of Schnorr signatures, which only makes black-box use of the underlying cryptographic algorithms. We present a protocol from general assumptions which achieves covert security and a protocol that achieves full active security under factoring-like assumptions. Our protocols make crucial use of recent advances within the field of pseudorandom correlation functions (PCFs). As an additional benefit, only two rounds are needed to perform distributed signing in our protocol, connecting our work to a recent line of research on the trade-offs between round complexity and computational assumptions for threshold Schnorr signatures.
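For context, a toy sketch of single-signer deterministic Schnorr as described above (not the paper's two-party protocol): the nonce is a PRF of the message under a long-term secret, so signing is stateless and needs no fresh randomness. The group parameters (p=23, q=11, g=2) and the use of HMAC as the PRF are illustrative choices only.

```python
import hmac, hashlib

p, q, g = 23, 11, 2                      # toy group: g has order q in Z_p^*

def H(R, msg):                           # challenge hash, reduced mod q
    return int.from_bytes(hashlib.sha256(str(R).encode() + msg).digest(), "big") % q

def det_sign(sk, nonce_key, msg):
    # Deterministic nonce: r = PRF(nonce_key, msg) mod q.
    r = int.from_bytes(hmac.new(nonce_key, msg, hashlib.sha256).digest(), "big") % q
    R = pow(g, r, p)                     # commitment
    c = H(R, msg)                        # challenge
    z = (r + c * sk) % q                 # response
    return R, z

def verify(pk, msg, sig):
    R, z = sig
    return pow(g, z, p) == (R * pow(pk, H(R, msg), p)) % p

sk, nonce_key = 7, b"long-term PRF key"
pk = pow(g, sk, p)
print(verify(pk, b"hello", det_sign(sk, nonce_key, b"hello")))   # True
```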
2023
CRYPTO
Unifying Freedom and Separation for Tight Probing-Secure Composition
The masking countermeasure is often analyzed in the probing model. Proving the probing security of large circuits at high masking orders is achieved by composing gadgets that satisfy security definitions such as non-interference (NI), strong non-interference (SNI), or free SNI. The region probing model is a variant of the probing model, where the probing capabilities of the adversary scale with the number of regions in a masked circuit. This model is of interest as it allows better reductions to the more realistic noisy leakage model. The efficiency of composable region probing secure masking has been recently improved with the introduction of the input-output separation (IOS) definition. In this paper, we first establish equivalences between the non-interference framework and the IOS formalism. We also generalize the security definitions to multiple-input gadgets and systematically show implications and separations between these notions. Then, we study which gadgets from the literature satisfy these notions. We give new security proofs for some well-known arbitrary-order gadgets, and also some automated proofs for fixed-order, special-case gadgets. To this end, we introduce a new automated formal verification algorithm that solves the open problem of verifying free SNI, which is not a purely simulation-based definition. Using the relationships between the security notions, we adapt this algorithm to further verify IOS. Finally, we look at composition theorems. In the probing model, we use the link between free SNI and the IOS formalism to generalize and improve the efficiency of the tight private circuit (Asiacrypt 2018) construction, also fixing a flaw in the original proof. In the region probing model, we relax the assumptions for IOS composition (TCHES 2021), which saves many refresh gadgets and hence improves efficiency.
2023
CRYPTO
Universal Amplification of KDM Security: From 1-Key Circular to Multi-Key KDM
An encryption scheme is Key Dependent Message (KDM) secure if it is safe to encrypt messages that can arbitrarily depend on the secret keys themselves. In this work, we show how to upgrade essentially the weakest form of KDM security into the strongest one. In particular, we assume the existence of a symmetric-key bit-encryption scheme that is circular-secure in the 1-key setting, meaning that it maintains security even if one can encrypt individual bits of a single secret key under itself. We also rely on a standard CPA-secure public-key encryption scheme. We construct a public-key encryption scheme that is KDM secure for general functions (of a-priori bounded circuit size) in the multi-key setting, meaning that it maintains security even if one can encrypt arbitrary functions of arbitrarily many secret keys under each of the public keys. As a special case, the latter guarantees security in the presence of arbitrary length key cycles. Prior work already showed how to amplify n-key circular to n-key KDM security for general functions. Therefore, the main novelty of our work is to upgrade from 1-key to n-key security for arbitrary n. As an independently interesting feature of our result, our construction does not need to know the actual specification of the underlying 1-key circular secure scheme, and we only rely on the existence of some such scheme in the proof of security. In particular, we present a universal construction of a multi-key KDM-secure encryption that is secure as long as some 1-key circular-secure scheme exists. While this feature is similar in spirit to Levin's universal construction of one-way functions, the way we achieve it is quite different technically, and does not come with the same galactic inefficiency.
2023
CRYPTO
Weak instances of class group action based cryptography via self-pairings
In this paper we study non-trivial self-pairings with cyclic domains that are compatible with isogenies between elliptic curves oriented by an imaginary quadratic order $\mathcal{O}$. We prove that the order $m$ of such a self-pairing necessarily satisfies $m \mid \Delta_{\mathcal{O}}$ (and even $2m \mid \Delta_{\mathcal{O}} $ if $4 \mid \Delta_{\mathcal{O}}$ and $4m \mid \Delta_{\mathcal{O}}$ if $8 \mid \Delta_{\mathcal{O}}$) and is not a multiple of the field characteristic. Conversely, for each $m$ satisfying these necessary conditions, we construct a family of non-trivial cyclic self-pairings of order $m$ that are compatible with oriented isogenies, based on generalized Weil and Tate pairings. As an application, we identify weak instances of class group actions on elliptic curves assuming the degree of the secret isogeny is known. In more detail, we show that if $m^2 \mid \Delta_{\mathcal{O}}$ for some prime power $m$ then given two primitively $\mathcal{O}$-oriented elliptic curves $(E, \iota)$ and $(E',\iota') = [\mathfrak{a}](E,\iota)$ connected by an unknown invertible ideal $\mathfrak{a} \subseteq \mathcal{O}$, we can recover $\mathfrak{a}$ essentially at the cost of a discrete logarithm computation in a group of order $m^2$, assuming the norm of $\mathfrak{a}$ is given and is smaller than $m^2$. We give concrete instances, involving ordinary elliptic curves over finite fields, where this turns into a polynomial time attack. Finally, we show that these self-pairings simplify known results on the decisional Diffie-Hellman problem for class group actions on oriented elliptic curves.
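A small helper (illustrative only) that checks the necessary divisibility conditions on the self-pairing order $m$ stated above, for a given discriminant and field characteristic:

```python
def admissible_order(m, Delta, char_p):
    # Necessary conditions from the statement above: m | Delta, 2m | Delta if
    # 4 | Delta, 4m | Delta if 8 | Delta, and m is not a multiple of char_p.
    # (Delta is negative for an imaginary quadratic order; Python's % still
    # tests divisibility correctly for negative integers.)
    if Delta % m != 0 or m % char_p == 0:
        return False
    if Delta % 4 == 0 and Delta % (2 * m) != 0:
        return False
    if Delta % 8 == 0 and Delta % (4 * m) != 0:
        return False
    return True

print(admissible_order(3, -2700, 7))   # True: 3 | 2700, 6 | 2700, and 7 does not divide 3
```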
2023
CRYPTO
When Messages are Keys: Is HMAC a dual-PRF?
In Internet security protocols including TLS 1.3, KEMTLS, MLS, and Noise, HMAC is assumed to be a dual-PRF, meaning a PRF not only when keyed conventionally (through its first input), but also when "swapped" and keyed (unconventionally) through its second (message) input. We give the first in-depth analysis of the dual-PRF assumption on HMAC. For the swap case, we note that security does not hold in general, but completely characterize when it does; we show that HMAC is swap-PRF secure if and only if keys are restricted to sets satisfying a condition, called feasibility, that we give and that holds in applications. The sufficiency is shown by proof and the necessity by attacks. For the conventional PRF case, we fill a gap in the literature by proving PRF security of HMAC for keys of arbitrary length. Our proofs are in the standard model, make assumptions only on the compression function underlying the hash function, and give good bounds in the multi-user setting. The positive results are strengthened through achieving a new notion of variable key-length PRF security that guarantees security even if different users use keys of different lengths, as happens in practice.
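To illustrate the two keyings discussed above, using Python's standard hmac module: conventional keying passes the key through the first input, while "swapped" keying feeds the key through the message input. The abstract's question is precisely when the latter still behaves as a PRF.

```python
import hmac, hashlib

key, msg = b"K" * 32, b"some message"
conventional = hmac.new(key, msg, hashlib.sha256).digest()   # key in the key slot
swapped      = hmac.new(msg, key, hashlib.sha256).digest()   # key in the message slot
print(conventional.hex()[:16], swapped.hex()[:16])
```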