IACR News
Here you can see all recent updates to the IACR webpage.
23 October 2020
Pratyay Mukherjee
ePrint Report
In a threshold symmetric-key encryption (TSE) scheme, encryption/decryption is performed by interacting with any threshold number of parties who hold parts of the secret keys. Security holds as long as the number of corrupt (possibly colluding) parties stays below the threshold. Recently, Agrawal et al. [CCS 2018] (whose construction is called DiSE) initiated the study of TSE. They proposed a generic TSE construction based on any distributed pseudorandom function (DPRF). Instantiating it with the DPRF constructions of Naor, Pinkas and Reingold [Eurocrypt 1999] (NPR), they obtained several efficient TSE schemes with various merits. However, their security models and corresponding analyses consider only static (and malicious) corruption, in that the adversary fixes the set of corrupt parties at the beginning of the execution, before acquiring any information (except the public parameters), and is not allowed to change it later.
In this work we augment the DiSE TSE definitions to the fully adaptive (and malicious) setting, in which the adversary is allowed to corrupt parties dynamically at any time during the execution. The adversary may choose to corrupt a party depending on the information acquired thus far, as long as the total number of corrupt parties stays below the threshold. We also augment DiSE's DPRF definitions to support adaptive corruption. We show that their generic TSE construction, when plugged in with an adaptive DPRF (satisfying our definition), meets our adaptive TSE definitions.
We provide an efficient instantiation of the adaptive DPRF, proven secure under the decisional Diffie-Hellman (DDH) assumption in the random oracle model. Our construction borrows ideas from Naor, Pinkas and Reingold's [Eurocrypt 1999] statically secure DDH-based DPRF (used in DiSE) and Libert, Joye and Yung's [PODC 2014] adaptively secure threshold signature. Similar to DiSE, we also give an extension satisfying a strengthened adaptive DPRF definition, which in turn yields a stronger adaptive TSE scheme. For that, we construct a simple and efficient adaptive NIZK protocol for proving a specific commit-and-prove style statement in the random oracle model assuming DDH.
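For intuition, the NPR DDH-based DPRF underlying DiSE (recalled here from the literature, not spelled out in this abstract) evaluates $f_{sk}(x) = H(x)^{sk}$ for a hash-to-group function $H$ modelled as a random oracle, with the key $sk$ Shamir-shared as $sk_1, \dots, sk_n$. Each contacted party returns the partial evaluation $h_i = H(x)^{sk_i}$, and any threshold set $S$ recombines them in the exponent using Lagrange coefficients $\lambda_{i,S}$:
$$ \prod_{i \in S} h_i^{\lambda_{i,S}} \;=\; H(x)^{\sum_{i \in S} \lambda_{i,S}\, sk_i} \;=\; H(x)^{sk}. $$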
Zichen Gui, Kenneth G. Paterson, Sikhar Patranabis, Bogdan Warinschi
ePrint Report
This paper initiates a new direction of research for searchable symmetric encryption (SSE). We provide comprehensive security models and notions for SSE in the simulation tradition that encompass leakage from the whole SSE system, including accesses to encrypted indices and the encrypted database documents themselves. We provide static and dynamic SSE constructions targeting our new notions. Our constructions involve a combination of novel techniques: bucketization to hide volumes of responses to queries; delayed, pseudorandom write-backs to disrupt access patterns; and indistinguishable search and update operations. The oblivious operations make it easy to establish strong versions of forward and backward security for our dynamic SSE scheme and rule out file-injection attacks. Our implementation of the dynamic SSE scheme demonstrates that it offers very strong security against general classes of leakage-abuse attacks with moderate overhead. Our schemes scale smoothly to databases containing hundreds of thousands of documents and millions of keyword-document pairs.
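To illustrate the bucketization idea named above, here is a minimal sketch (our own generic illustration with assumed bucket sizes, not the paper's construction): each response set is padded with dummy identifiers up to the next bucket boundary, so the server learns only which bucket a query's volume falls into, not the true volume.

    import secrets

    BUCKET_SIZES = [4, 16, 64, 256]  # hypothetical bucket boundaries
    DUMMY = "dummy"

    def bucketize(doc_ids):
        """Pad a result set to the smallest bucket size that fits it."""
        target = next(b for b in BUCKET_SIZES if b >= len(doc_ids))
        padding = [f"{DUMMY}-{secrets.token_hex(8)}"
                   for _ in range(target - len(doc_ids))]
        return doc_ids + padding

    print(len(bucketize(["doc1", "doc2", "doc3"])))           # -> 4
    print(len(bucketize(["doc%d" % i for i in range(20)])))   # -> 64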
Joël Alwen, Daniel Jost, Marta Mularczyk
ePrint Report
The Messaging Layer Security (MLS) protocol is a new complex open standard for end-to-end (E2E) secure group messaging being developed by the IETF. Its primary security goal is to provide E2E privacy and authenticity for messages in long-lived sessions whenever possible, despite the participation (at times) of malicious insiders that can interact with the PKI at will, actively deviate from the protocol, leak honest parties' states, and fully control the network.
The cryptographic core of the MLS protocol (from which it inherits essentially all of its efficiency and security properties) is a Continuous Group Key Agreement (CGKA) protocol. CGKA protocols provide asynchronous E2E secure group management by allowing group members to agree on a fresh independent symmetric key after every change to the group's state (e.g. when someone joins/leaves the group).
In this work, we make progress towards a precise understanding of the insider security of MLS in the form of three contributions. On the theory side, we overcome several subtleties to formulate the first notion of insider security for a CGKA (or group messaging) protocol. Next, we isolate the core components of MLS to obtain a CGKA protocol we dub Insider Secure TreeKEM (ITK). Finally, we give a rigorous proof that ITK provides (adaptive) insider security. In particular, this work also initiates the study of insider secure CGKA protocols, a primitive of interest in its own right.
Chris Brzuska, Geoffroy Couteau
ePrint Report
Constructing one-way functions from average-case hardness is a long-standing open problem. A positive result would exclude Pessiland (Impagliazzo '95) and establish a highly desirable win-win situation: either (symmetric) cryptography exists unconditionally, enabling many of the important primitives which are used to secure our communications, or all NP problems can be solved efficiently on average, which would be revolutionary for algorithm designers and practitioners. Motivated by the strong interest in establishing such win-win results and the lack of progress on this seemingly very hard question, we initiate the investigation of weaker yet meaningful candidate win-win results. Specifically, we study the following type of win-win results: either there are fine-grained one-way functions (FGOWFs), which relax the standard notion of a one-way function by requiring only a fixed polynomial gap (as opposed to superpolynomial) between the running time of the function and the running time of an inverter, or non-trivial speedups can be obtained for all NP problems on average. We obtain three main results:
We introduce the Random Language Model (RLM), which captures idealized average-case hard languages, analogous to how the random oracle model captures idealized one-way functions. In the RLM, we rule out an idealized version of Pessiland, where ideally hard languages would exist yet even weak forms of cryptography would fail. Namely, we provide a construction of an FGOWF (with quadratic hardness gap) and prove its security in the RLM.
On the negative side, we prove a strong oracle separation: we show that there is no black-box proof that either FGOWFs exist, or non-trivial speedups can be obtained for all NP languages on average (i.e., that there are no exponentially average-case hard NP languages).
We provide a second strong negative result for an even weaker candidate win-win result: there is no black-box proof that either FGOWFs exist, or non-trivial speedups can be obtained for all NP languages on average when amortizing over many instances (i.e., that there are no exponentially average-case hard NP languages whose hardness amplifies optimally through parallel repetition). This separation forms the core technical contribution of our work.
Our results lay the foundations for a program towards building fine-grained one-way functions from strong forms of average-case hardness, following the template of constructions in the Random Language Model. We provide a preliminary investigation of this program, showing black-box barriers toward instantiating our idealized constructions from natural hardness properties.
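As a rough illustration of the notion (our phrasing, based on the abstract's description rather than the paper's formal definition), an FGOWF with quadratic hardness gap is a function $f$ computable in time $t(n)$ such that every inverter $A$ running in time $o(t(n)^{2})$ has only small inversion probability:
$$ \Pr_{x \leftarrow \{0,1\}^{n}}\left[ A(f(x)) \in f^{-1}(f(x)) \right] \;\le\; \epsilon(n). $$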
Adrián Ranea, Bart Preneel
ePrint Report
All academic methods to secure software implementations of block ciphers against adversaries with full control of the device have been broken. Despite the huge progress in the cryptanalysis of these white-box implementations, no recent advances have been made on the design side. Most white-box designs follow the CEJO framework, where each round is encoded by composing it with small random permutations. While several generic attacks have been proposed on the CEJO framework, no generic analysis has been performed on self-equivalence encodings, a different design where only the affine layer of each round is encoded with random self-equivalences of the S-box layer, that is, affine permutations commuting with the non-linear layer.
In this work, we analyse the security of white-box implementations based on self-equivalence encodings for a broad class of SPN ciphers. First, we characterize the self-equivalence groups of S-box layers, and we prove that all the self-equivalences of a cryptographically strong S-box layer have a diagonal shape. Then, we propose the first generic attack on self-equivalence encodings. Our attack, based on affine equivalence problems, identifies the connection between the security of self-equivalence encodings and the self-equivalence structure of the cipher components. While we show that traditional SPN ciphers with cryptographically strong S-box layers cannot be secured with self-equivalence encodings, our analysis shows that self-equivalence encodings resist the generic attack if the cipher components satisfy several conditions, revealing the potential of self-equivalence encodings to secure other types of ciphers.
Aniruddha Biswas, Palash Sarkar
ePrint Report
We show (almost) separation between certain important classes of Boolean functions. The technique that we use is to show that the total influence of functions in one class is less than the total influence of functions in the other class. In particular, we show (almost) separation of several classes of Boolean functions which have been studied in coding theory and cryptography from classes which have been studied in combinatorics and complexity theory.
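For reference, the total influence invoked here is the standard notion from the analysis of Boolean functions (a textbook definition, not spelled out in the abstract): for $f\colon \{0,1\}^{n} \to \{0,1\}$,
$$ I[f] \;=\; \sum_{i=1}^{n} \Pr_{x \leftarrow \{0,1\}^{n}}\left[\, f(x) \neq f(x \oplus e_{i}) \,\right], $$
where $x \oplus e_{i}$ flips the $i$-th coordinate of $x$.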
Ward Beullens, Lucas Disson, Robi Pedersen, Frederik Vercauteren
ePrint Report
We present an honest-majority Distributed Key Generation protocol (DKG) based on Shamir's $(k,n)$-threshold secret sharing in the setting of Very Hard Homogeneous Spaces (VHHS). DKGs in the DLOG setting use Pedersen commitments, for which there is no known analogue in the VHHS setting. As a replacement, we introduce a new primitive called piecewise verifiable proofs, which allow a prover to prove that a list of NP-statements is valid with respect to a common witness, and such that the different statements can be verified individually. Our protocol is robust and actively secure in the Quantum Random Oracle Model. For $n$ participants, the total runtime of our protocol is $2+\lambda+n(1+4\lambda)$ group action evaluations, where $\lambda$ is the underlying security parameter; it is thus independent of the threshold $k$. When instantiated with CSIDH-512, this amounts to approximately $4.5+18n$ seconds.
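As a quick sanity check of the stated costs, the following sketch evaluates the two formulas quoted above (straightforward arithmetic on the abstract's formulas; the choice $\lambda = 128$ below is only an example value):

    # Evaluate the cost formulas quoted in the abstract: the total number of
    # group action evaluations, and the authors' CSIDH-512 time estimate.
    def group_actions(n: int, lam: int) -> int:
        return 2 + lam + n * (1 + 4 * lam)

    def csidh512_seconds(n: int) -> float:
        return 4.5 + 18 * n

    for n in (3, 10, 50):
        print(n, group_actions(n, lam=128), csidh512_seconds(n))
    # e.g. n = 10: 5260 group action evaluations, roughly 184.5 seconds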
Sebastian Paul, Patrik Scheible
ePrint Report
The threat of a cryptographically relevant quantum computer contributes to an increasing interest in the field of post-quantum cryptography (PQC). Compared to existing research efforts regarding the integration of PQC into the Transport Layer Security (TLS) protocol, industrial communication protocols have so far been neglected. Since industrial cyber-physical systems (CPS) are typically deployed for decades, protection against such long-term threats is needed. In this work, we propose two novel solutions for the integration of post-quantum (PQ) primitives (digital signatures and key establishment) into the industrial protocol Open Platform Communications Unified Architecture (OPC UA): a hybrid solution combining conventional cryptography with PQC, and a solution based solely on PQC. Both approaches provide mutual authentication between client and server and are realized with certificates fully compliant with the X.509 standard. Moreover, we implement the two solutions and measure and evaluate their performance across three different security levels. All selected algorithms (Kyber, Dilithium, and Falcon) are candidates for standardization by the National Institute of Standards and Technology (NIST). We show that Falcon is a suitable option, especially when using the floating-point hardware provided by our ARM-based evaluation platform. Our proposed hybrid solution provides PQ security for early adopters but comes with additional performance and communication requirements. Our solution based solely on PQC shows superior performance across all evaluated security levels in terms of handshake duration compared to conventional OPC UA, but comes at the cost of increased handshake message sizes.
Akinori Hosoyamada, Tetsu Iwata
ePrint Report
Recent results on quantum cryptanalysis show that some symmetric key schemes can be broken in polynomial time even if they are proven to be secure in the classical setting.
Liskov, Rivest, and Wagner showed that secure tweakable block ciphers can be constructed from secure block ciphers in the classical setting.
However, Kaplan et al. showed that their scheme can be broken by polynomial-time quantum superposition attacks, even if the underlying block ciphers are quantum-secure.
Since then, it has remained open whether there exists a mode of operation for block ciphers that yields quantum-secure tweakable block ciphers.
This paper settles the problem in the reduction-based provable security paradigm.
We show the first design of quantum-secure tweakable block ciphers based on quantum-secure block ciphers, and present a provable security bound.
Our construction is simple, and when instantiated with a quantum-secure $n$-bit block cipher, it is secure against attacks that query arbitrary quantum superpositions of plaintexts and tweaks up to $O(2^{n/6})$ quantum queries.
Our security proofs use the compressed oracle technique introduced by Zhandry.
More precisely, we use an alternative formalization of the technique introduced by Hosoyamada and Iwata.
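For context, one of the two classical Liskov-Rivest-Wagner constructions mentioned above (recalled from the literature, not from this abstract) turns a block cipher $E$ and an XOR-universal hash $h$ into a tweakable block cipher via
$$ \widetilde{E}_{K,h}(T, M) \;=\; E_{K}\big(M \oplus h(T)\big) \oplus h(T), $$
and it is this style of construction that Kaplan et al. broke with quantum period-finding; the present paper shows how to avoid such attacks while keeping a reduction-based proof.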
Subhadeep Banik, Zhenzhen Bao, Takanori Isobe, Hiroyasu Kubo, Fukang Liu, Kazuhiko Minematsu, Kosei Sakamoto, Nao Shibata, Maki Shigeri
ePrint Report
In this article, we present WARP, a lightweight 128-bit block cipher with a 128-bit key. It aims at a small-footprint circuit in the field of 128-bit block ciphers, possibly with a unified encryption and decryption functionality. The overall structure of WARP is a variant of the 32-nibble Type-2 Generalized Feistel Network (GFN), with a permutation over nibbles designed to optimize security and efficiency. We conduct a thorough security analysis and report comprehensive hardware and software implementation results. Our hardware results show that WARP is the smallest 128-bit block cipher for most typical hardware implementation strategies. A serialized circuit of WARP achieves around 800 Gate Equivalents (GEs), which is much smaller than previous state-of-the-art implementations of lightweight 128-bit ciphers (they need more than $1,000$ GEs). While our primary metric is hardware size, WARP also enjoys several other features, most notably low energy consumption. This is somewhat surprising, since a GFN generally needs more rounds than a substitution-permutation network (SPN), and thus GFNs have been considered less advantageous in this regard. We show that a multi-round implementation of WARP has quite low energy consumption. Moreover, WARP also performs well in software: our SIMD implementation is quite competitive with known hardware-oriented 128-bit lightweight ciphers for long inputs, and even much better for small inputs due to the small number of parallel blocks. On 8-bit microcontrollers, the results of our assembly implementations show that WARP is flexible enough to achieve various performance characteristics.
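To make the overall structure concrete, the following is a minimal sketch of one round of a 32-branch Type-2 GFN over nibbles; the S-box, round key, and branch permutation here are placeholders for illustration only, not WARP's actual components:

    # One round of a 32-branch (16-pair) Type-2 Generalized Feistel Network
    # over 4-bit nibbles. SBOX and PERM are toys, NOT WARP's real components.
    SBOX = [(7 * x + 3) % 16 for x in range(16)]   # toy bijective 4-bit S-box
    PERM = list(range(1, 32)) + [0]                # toy branch rotation

    def gfn2_round(state, round_key):
        """state: 32 nibbles; round_key: 16 nibbles, one per Feistel pair."""
        out = state[:]
        for i in range(16):
            # Each even branch is fed through the S-box (plus a key nibble)
            # and XORed into its odd neighbour.
            out[2 * i + 1] ^= SBOX[state[2 * i]] ^ round_key[i]
        # Shuffle the 32 branches; WARP uses a nibble permutation optimized
        # for security and efficiency instead of this simple rotation.
        return [out[PERM[j]] for j in range(32)]

    state = [i % 16 for i in range(32)]
    state = gfn2_round(state, round_key=[0xA] * 16)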
Ohad Barta, Yuval Ishai, Rafail Ostrovsky, David J. Wu
ePrint Report
Succinct non-interactive arguments (SNARGs) enable proofs of NP statements with very low communication. Recently, there has been significant work in both theory and practice on constructing SNARGs with very short proofs. Currently, the state-of-the-art in succinctness is due to Groth (Eurocrypt 2016) who constructed a SNARG from bilinear maps where the proof consists of just 3 group elements.
In this work, we first construct a concretely-efficient designated-verifier (preprocessing) SNARG with inverse polynomial soundness, where the proof consists of just 2 group elements in a standard (generic) group. This leads to a 50% reduction in concrete proof size compared to Groth's construction. We follow the approach of Bitansky et al. (TCC 2013) who describe a compiler from linear PCPs to SNARGs in the preprocessing model. Our improvement is based on a new linear PCP packing technique that allows us to construct 1-query linear PCPs which can then be compiled into a SNARG (using ElGamal encryption over a generic group). An appealing feature of our new SNARG is that the verifier can precompute a statement-independent lookup table in an offline phase; verifying proofs then only requires 2 exponentiations and a single table lookup. This makes our new designated-verifier SNARG appealing in settings that demand fast verification and minimal communication.
We then turn to the question of constructing arguments where the proof consists of a single group element. Here, we first show that any (possibly interactive) argument for a language L where the verification algorithm is "generic" (i.e., only performs generic group operations) and the proof consists of a single group element, implies a witness encryption scheme for L. We then show that under a yet-unproven, but highly plausible, hypothesis on the hardness of approximating the minimal distance of linear codes, we can construct a 2-message laconic argument for NP where the proof consists of a single group element. Under the same hypothesis, we obtain a witness encryption scheme for NP in the generic group model. Along the way, we show that under a conceptually-similar but proven hardness of approximation result, there is a 2-message laconic argument for NP with negligible soundness error where the prover's message consists of just 2 group elements. In both settings, we obtain laconic arguments (and linear PCPs) with linear decision procedures. Our constructions circumvent a previous lower bound by Groth on such argument systems with linear decision procedures by relying on imperfect completeness. Namely, our constructions have vanishing but not negligible completeness error, while the lower bound of Groth implicitly assumes negligible completeness error of the underlying argument. Our techniques thus highlight new avenues for designing linear PCPs, succinct arguments, and witness encryption schemes.
Abhiram Kothapalli, Elisaweta Masserova, Bryan Parno
ePrint Report
We present the first direct construction of a zero-knowledge argument system for general computation that features a linear-time prover and a constant-time verifier (after a single linear-time public setup) in terms of the number of field and group operations. Our scheme utilizes a universal linear-size structured reference string (SRS) that allows a single trusted setup to be used across all computation instances of a bounded size. Concretely, for computations of size $n$, our prover's cost is dominated by $35$ multi-exponentiations of size $n$ and our verifier's cost is dominated by $34$ pairings. To achieve the stated asymptotics, we first construct a nearly-optimal zkSNARK with a logarithmic verifier in the random oracle model. We then show how to achieve a constant-time verifier using proof composition. Along the way we design (1) a new polynomial commitment scheme for evaluation-based representations of polynomials, (2) an asymptotically optimal inner-product argument system, (3) an asymptotically optimal multi-Hadamard-product argument system, and (4) a new constraint system for NP that is particularly well-suited for our bundle of techniques.
Hosein Hadipour, Nasour Bagheri, Ling Song
ePrint Report
The boomerang and rectangle attacks are adaptations of differential cryptanalysis in which the attacker divides a block cipher $E$ into two sub-ciphers, i.e., $E = E_{1}\circ E_{0}$, to construct a distinguisher for $E$ with probability $p^{2}q^{2}$ by concatenating two short differential trails for $E_{0}$ and $E_{1}$ with probability $p$ and $q$ respectively. According to previous research, the dependency between these two differential characteristics has a great impact on the probability of boomerang and rectangle distinguishers. Dunkelman et al. proposed the sandwich attack to formalise this dependency, regarding $E$ as three parts, i.e., $E = \widetilde{E}_{1}\circ E_{m}\circ \widetilde{E}_{0}$, where $E_{m}$ contains the dependency between the two differential trails and satisfies some differential propagation with probability $r$. Accordingly, the entire probability is $p^{2}q^{2}r$. Recently, Song et al. proposed a general framework to identify the actual boundaries of $E_{m}$ and systematically evaluate the probability of $E_{m}$ with any number of rounds, and applied their method to improve the best boomerang distinguishers of SKINNY. In this paper, using a more advanced method to search for boomerang distinguishers, we show that the best previous boomerang distinguishers for SKINNY can be significantly improved. Given that SKINNY is a very important lightweight tweakable block cipher which is a basic module of many candidates in the NIST Lightweight Cryptography (LWC) standardization project, and that the rectangle attack is one of the most efficient attacks on reduced-round versions of this cipher, we use our boomerang distinguishers to improve the related-tweakey rectangle attack on SKINNY and investigate the security of this cipher more accurately. CRAFT is another lightweight tweakable block cipher for which we provide the security analysis against rectangle attacks for the first time.
Following previous research on evaluating the switching effect over multiple rounds of boomerang distinguishers, we also introduce new tools called the \textit{Double Boomerang Connectivity Table} (DBCT), $BDT^{\star}$ and $DBT^{\star}$ to evaluate the boomerang switch through multiple rounds more accurately. Using these new tools we provide theoretical proofs for our boomerang distinguishers for CRAFT and SKINNY.
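As a worked numeric illustration of the sandwich formula above (with hypothetical probabilities, not the paper's actual distinguisher parameters): taking
$$ p = 2^{-6}, \qquad q = 2^{-8}, \qquad r = 2^{-4}, $$
the distinguisher succeeds with probability
$$ p^{2} q^{2} r \;=\; 2^{-12} \cdot 2^{-16} \cdot 2^{-4} \;=\; 2^{-32}, $$
so on the order of $2^{32}$ chosen plaintext/ciphertext quartets are needed to observe a returning boomerang.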
Juliane Krämer, Patrick Struck
ePrint Report
Ciphertext indistinguishability under chosen plaintext attacks is a standard security notion for public key encryption. It crucially relies on the usage of good randomness and is trivially unachievable if the randomness is known by the adversary. Yilek (CT-RSA'10) defined security against resetting attacks, where randomness might be reused but remains unknown to the adversary. Furthermore, Yilek claimed that security against adversaries making a single query to the challenge oracle implies security against adversaries making multiple queries to the challenge oracle. This is a typical simplification for indistinguishability security notions proven via a standard hybrid argument. The given proof, however, was pointed out to be flawed by Paterson, Schuldt, and Sibborn (PKC'14). Prior to this work, it has been unclear whether this simplification of the security notion also holds in case of resetting attacks.
We remedy this state of affairs as follows. First, we demonstrate the strength of resetting attacks by showing that many public key encryption schemes are susceptible to them. As our main contribution, we show that the simplification to adversaries making only one query to the challenge oracle also holds in light of resetting attacks. More precisely, we show that the existing proof cannot be fixed and give a different proof for the claim. Finally, we define real-or-random security against resetting attacks and prove it equivalent to Yilek's notion, which is of the left-or-right form.
Steven D. Galbraith, Robert Granger, Simon-Philipp Merz, Christophe Petit
ePrint Report
In this paper we further the study of index calculus methods for solving the elliptic curve discrete logarithm problem (ECDLP). We focus on the index calculus for subfield curves, also called Koblitz curves, defined over $\mathbb{F}_q$ with ECDLP in $\mathbb{F}_{q^n}$. Instead of accelerating the solution of polynomial systems during index calculus as was predominantly done in previous work, we define factor bases that are invariant under the $q$-power Frobenius automorphism of the field $\mathbb{F}_{q^n}$, reducing the number of polynomial systems that need to be solved. A reduction by a factor of $1/n$ is the best one could hope for. We show how to choose factor bases to achieve this, while simultaneously accelerating the linear algebra step of the index calculus method for Koblitz curves by a factor $n^2$.
Furthermore, we show how to use the Frobenius endomorphism to improve symmetry breaking for Koblitz curves.
We provide constructions of factor bases with the desired properties, and we study their impact on the polynomial system solving costs experimentally.
This work answers the question, raised in the literature, of how the Frobenius endomorphism can be used to speed up index calculus on subfield curves.
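To restate the key idea in symbols (our informal paraphrase of the abstract): let $\pi\colon (x, y) \mapsto (x^{q}, y^{q})$ denote the $q$-power Frobenius endomorphism. If the factor base $\mathcal{F}$ is chosen to be invariant under $\pi$, i.e., $\pi(\mathcal{F}) = \mathcal{F}$, then each relation $R = \sum_{j} F_{j}$ with $F_{j} \in \mathcal{F}$, found by solving one polynomial system, yields $n$ relations
$$ \pi^{k}(R) \;=\; \sum_{j} \pi^{k}(F_{j}), \qquad k = 0, \dots, n-1, $$
for the price of one, which is the source of the factor-$1/n$ saving in the number of polynomial systems to be solved.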
Fuyuki Kitagawa, Ryo Nishimaki, Takashi Yamakawa
ePrint Report
Secure software leasing (SSL) is a quantum cryptographic primitive that enables users to execute software only while the software is leased. It prevents users from executing leased software after they return it to its owner. SSL can make software distribution more flexible and controllable.
Although SSL is an attractive cryptographic primitive, the existing SSL scheme is based on public key quantum money, which so far has not been instantiated from standard cryptographic assumptions. Moreover, the existing SSL scheme supports only a subclass of evasive functions.
In this work, we present SSL schemes based on the learning with errors (LWE) assumption. Specifically, our contributions consist of the following.
- We construct an SSL scheme for pseudorandom functions from the LWE assumption against quantum adversaries.
- We construct an SSL scheme for a subclass of evasive functions from the LWE assumption against sub-exponential quantum adversaries.
- We construct SSL schemes for the functionalities above with classical communication from the LWE assumption against (sub-exponential) quantum adversaries.
SSL with classical communication means that entities exchange only classical information, even though they run quantum computations locally.
Our crucial tool is two-tier quantum lightning, which is introduced in this work and is a relaxed version of quantum lightning. In two-tier quantum lightning schemes, we have a public verification algorithm called semi-verification and a private verification algorithm called full-verification. An adversary cannot generate two possibly entangled quantum states with the same serial number such that one passes the semi-verification and the other passes the full-verification. We show that we can construct a two-tier quantum lightning scheme from the LWE assumption.
Maxim Jourenko, Mario Larangeira, Keisuke Tanaka
ePrint Report
The security of blockchain-based decentralized ledgers relies on consensus protocols executed between mutually distrustful parties. Such protocols incur delays which severely limit the throughput of these ledgers. Payment and state channels enable the execution of off-chain protocols that allow interaction between parties without involving the consensus protocol. Protocols such as Hashed Timelock Contracts (HTLC) and Sprites (FC'19) connect channels into Payment Channel Networks (PCNs), allowing payments across a path of payment channels. Such a payment requires each party to lock away funds for an amount of time. The product of funds and locktime is the collateral of the party, i.e., their opportunity cost of forwarding a payment. In the case of HTLC, the locktime is linear in the length of the path, making the total collateral invested across the path quadratic in its length. Sprites improved on this by reducing the locktime to a constant, utilizing smart contracts. Atomic Multi-Channel Updates (AMCU), published at CCS'19, introduced constant-collateral payments without smart contracts. In this work we present the Channel Closure attack on AMCU, which allows a malicious adversary to make honest parties lose funds. Furthermore, we propose the Payment Trees protocol, which allows payments across a PCN with linear total collateral without the aid of smart contracts, achieving performance competitive with Sprites while remaining compatible with Bitcoin.
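The collateral arithmetic described above can be made concrete with a small sketch (our own illustration; the amount, base locktime, and path lengths are hypothetical):

    # Collateral of one hop = locked funds * locktime. With HTLC the i-th of
    # L hops locks for roughly (L - i) * delta blocks, so the path total is
    # quadratic in L; with a constant locktime (as in Sprites) it is linear.
    def htlc_total_collateral(amount, path_len, delta):
        return sum(amount * (path_len - i) * delta for i in range(path_len))

    def constant_total_collateral(amount, path_len, delta):
        return amount * delta * path_len

    for L in (2, 5, 10):
        print(L, htlc_total_collateral(1, L, 1), constant_total_collateral(1, L, 1))
    # HTLC totals: 3, 15, 55 (quadratic growth); constant locktime: 2, 5, 10.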
20 October 2020
Yi Deng
ePrint Report
We develop an individual simulation technique that explicitly makes use of particular properties/structures of a given adversary's functionality. Using this simulation technique, we obtain the following results.
1. We construct the first protocols that break previous black-box barriers of [Xiao, TCC'11 and Alwen et al., Crypto'05] under the standard hardness of factoring, both of which are polynomial-time simulatable against all a-priori bounded polynomial-size distinguishers: -- a two-round selective opening secure commitment scheme; -- a three-round concurrent zero knowledge and concurrent witness hiding argument for NP in the bare public-key model.
2. We present a simpler two-round weak zero knowledge and witness hiding argument for NP in the plain model under the sub-exponential hardness of factoring. Our technique also yields a significantly simpler proof that existing distinguisher-dependent simulatable zero knowledge protocols are polynomial-time simulatable against all distinguishers of a-priori bounded polynomial size.
The core conceptual idea underlying our individual simulation technique is an observation of the existence of nearly optimal extractors for all hard distributions: For any NP-instance(s) sampling algorithm, there exists a polynomial-size witness extractor (depending on the sampler's functionality) that almost outperforms any circuit of a-priori bounded polynomial size in terms of the success probability.
Orr Dunkelman, Abhishek Kumar, Eran Lambooij, Somitra Kumar Sanadhya
ePrint Report
Format-Preserving Encryption (FPE) is a method to encrypt non-standard domains, allowing one to securely encrypt not only binary strings but also special domains, e.g., social security numbers into social security numbers. This need has resulted in a few standardized constructions such as the NIST-standardized FF1 and FF3-1 and the Korean standards FEA-1 and FEA-2. Moreover, there are currently efforts both in ANSI and in ISO to include such block ciphers in standards (e.g., ANSI X9.124, which discusses encryption for financial services).
Most of the proposed FPE schemes, such as the NIST-standardized FF1 and FF3-1 and the Korean standards FEA-1 and FEA-2, are based on a Feistel construction with pseudo-random round functions. Moreover, to mitigate enumeration attacks against the possibly small domains, they all employ tweaks, which enrich the actual domain sizes.
In this paper we present distinguishing attacks against Feistel-based FPEs. We show a distinguishing attack against the full FF1 with data complexity of $2^{60}$ 20-bit plaintexts, and against the full FF3-1 with data complexity of $2^{40}$ 20-bit plaintexts. For FEA-1 with 128-bit, 192-bit and 256-bit keys, the data complexity of the distinguishing attack is $2^{32}$, $2^{40}$, and $2^{48}$ 8-bit plaintexts, respectively. The data complexity of the distinguishing attack against the full FEA-2 with 128-bit, 192-bit and 256-bit keys is $2^{56}$, $2^{68}$, and $2^{80}$ 8-bit plaintexts, respectively. Moreover, we show how to extend the distinguishing attacks on FEA-1 and FEA-2 with 192-bit and 256-bit keys into key recovery attacks with time complexity $2^{136}$ (for both attacks).
Karim Belabas, Thorsten Kleinjung, Antonio Sanso, Benjamin Wesolowski
ePrint Report
In this short note we analyze the low order assumption in imaginary quadratic number fields. We show how this assumption is broken for Mersenne primes. We also describe a possible attack on this assumption for other classes of prime numbers, leveraging new mathematical tools coming from higher (cubic) number fields.