International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

20 February 2021

Dimitris Karakostas, Nikos Karayannidis, Aggelos Kiayias
ePrint Report
Distributed ledgers implement a storage layer, on top of which a shared state is maintained in a decentralized manner. In UTxO-based ledgers, like Bitcoin, the shared state is the set of all unspent outputs (UTxOs), which serve as inputs to future transactions. The continuously increasing size of this shared state will gradually render its maintenance unaffordable. Our work investigates techniques that minimize the shared state of the distributed ledger, i.e., the in-memory UTxO set. To this end, we follow two directions: a) we propose novel transaction optimization techniques to be followed by wallets, so as to create transactions that reduce the shared state cost, and b) we propose a novel fee scheme that incentivizes the creation of "state-friendly" transactions. We devise an abstract ledger model, expressed via a series of algebraic operators, and define the transaction optimization problem of minimizing the shared state; we also propose a multi-layered algorithm that approximates the optimal solution to this problem. Finally, we define the necessary conditions such that a ledger’s fee scheme incentivizes proper state management and propose a state-efficient fee function for Bitcoin.
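To make the wallet-side direction concrete, the sketch below shows a hypothetical coin-selection heuristic with the same goal of keeping the UTxO set from growing; it illustrates the optimization target only, not the paper's multi-layered algorithm.

```python
# Hypothetical coin-selection heuristic: prefer spending at least as many
# UTxOs as the transaction creates, so the global UTxO set does not grow.
# Illustration of the optimization goal, not the paper's algorithm.

def select_inputs(utxos, amount, n_outputs):
    """utxos: list of (txid, value) pairs. Greedily spends small UTxOs first
    and keeps adding inputs until the transaction consumes at least as many
    UTxOs as it creates (when enough UTxOs are available)."""
    chosen, total = [], 0
    for utxo in sorted(utxos, key=lambda u: u[1]):     # smallest-value first
        chosen.append(utxo)
        total += utxo[1]
        if total >= amount and len(chosen) >= n_outputs:
            break
    if total < amount:
        raise ValueError("insufficient funds")
    return chosen, total - amount                       # inputs and change value

inputs, change = select_inputs([("a", 1), ("b", 2), ("c", 5)], amount=3, n_outputs=2)
```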
István András Seres, Máté Horváth, Péter Burcsi
ePrint Report
Sequences of consecutive Legendre and Jacobi symbols were proposed as pseudorandom bit generators for cryptographic use in 1988. Since then, they were mostly forgotten in applications. Recently, however, interest in pseudorandom functions (PRFs) based on the Legendre and power residue symbols has revived, due to their extreme efficiency in the multi-party setting and their conjectured post-quantum security. The lack of provable security results hinders the deployment of PRFs based on quadratic and power residue symbols. On the other hand, the security of the Legendre PRF and its variants does not seem to be related to standard cryptographic assumptions, e.g., discrete logarithm or factoring.

Therefore, in this work, we show that key-recovery attacks against the Legendre PRF are equivalent to solving a specific family of multivariate quadratic (MQ) equation systems over a finite prime field. This new perspective sheds some light on the complexity of key-recovery attacks against the Legendre PRF and allows us to take the first steps in settling the provable security of the Legendre PRF and other variants. We do this by conducting extensive algebraic cryptanalysis on the resulting MQ instance. We show how the currently best-known techniques and attacks fall short in solving these sparse quadratic equation systems. Another benefit of viewing the Legendre PRF as an MQ instance is that it facilitates new applications of the Legendre PRF, such as verifiable random functions or oblivious (programmable) pseudorandom functions. These new applications can be used in cryptographic protocols, such as state-of-the-art proof-of-stake consensus algorithms or private set intersection protocols.
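For reference, the Legendre PRF discussed above simply outputs the Legendre symbol of $k+x$ modulo a prime $p$, interpreted as a bit. A minimal sketch of this well-known definition follows; the prime and key are toy values.

```python
# Minimal sketch of the Legendre PRF: F_k(x) = 0 if (k + x) is a quadratic
# residue mod p, 1 otherwise. The prime and key below are toy parameters.
p = 2**61 - 1  # a Mersenne prime used here only for illustration

def legendre_bit(k: int, x: int) -> int:
    s = pow((k + x) % p, (p - 1) // 2, p)  # Euler's criterion
    # s == 1 for quadratic residues; the rare case k + x = 0 (mod p) maps to 1
    return 0 if s == 1 else 1

key = 123456789
stream = [legendre_bit(key, x) for x in range(16)]  # consecutive PRF outputs
```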
Jesus Diaz, Anja Lehmann
ePrint Report
Group signatures allow users to create signatures on behalf of a group while remaining anonymous. Such signatures are a powerful tool to realize privacy-preserving data collections, where, e.g., sensors, wearables or vehicles can upload authenticated measurements into a data lake. The anonymity protects the user’s privacy, yet enables basic data processing of the uploaded unlinkable information. For many applications, however, full anonymity is neither desired nor useful, and selected parts of the data must eventually be correlated after being uploaded. Current group signature solutions do not provide such functionality in a satisfactory way: they either rely on a trusted party to perform opening or linking of signatures, which clearly conflicts with the core privacy goal of group signatures, or require the user to decide upon the linkability of signatures before they are generated.

In this paper we propose a new variant of group signatures that provides linkability in a flexible and user-centric manner. Users – and only they – can decide before and after signature creation whether their signatures should remain unlinkable or be correlated. To prevent attacks where a user omits certain signatures when a sequence of events in a certain section (e.g., a time frame) should be linked, we further extend this new primitive to allow for sequential link proofs. Such proofs guarantee that the provided sequence of data not only originates from the same signer, but also occurred in that exact order and contains all of the user’s signatures within the time frame. We formally define the desired security and privacy properties, propose a provably secure construction based on DL-related assumptions, and report on a prototypical implementation of our scheme.
Adithya Bhat, Akhil Bandarupalli, Saurabh Bagchi, Aniket Kate, Michael Reiter
ePrint Report
Existing Byzantine fault-tolerant (BFT) state-machine replication (SMR) protocols in the standard (bounded) synchrony and weak synchrony models rely on equivocation detection to ensure safety. To perform a commit (or output a transaction block), this detection inherently requires $O(n^2)$ communication overhead among $n$ nodes and waiting for $O(\Delta)$ time, where $\Delta$ is the worst-case network delay. The quadratic communication overhead limits scalability. Moreover, as the typical network latency $\delta$ tends to be much smaller than $\Delta$, $51\%$ honest-majority (and hence synchronous or weakly synchronous) solutions become slow compared to $67\%$ honest-majority asynchronous protocols working at network speed.

The observation that SMR commits do not have to be treated separately motivates this work. We propose the UCC (Unique Chain Commit) rule, a novel yet simple rule for hash chains where extending a block by including its hash is treated as a vote for the block and all its direct and indirect parents. When a block obtains $f+1$ such votes, where $f$ is the maximum number of faulty nodes in the system, we commit the block and its parents. We use this UCC rule to design Apollo, an SMR protocol with rotating leaders and linear communication complexity. Apollo proposes and commits a block every $\delta$ time units, with every block having a latency of $(f+1)\delta$. Compared to existing works that use equivocation detection, we improve the optimistic commit latency when $(f+1)\delta<2\Delta$. We prove the security of our protocol in both the standard and weak synchrony models. We also implement and compare Apollo with the state-of-the-art protocol, and demonstrate that Apollo has commit latencies independent of and less than the $\Delta$ parameter, as well as significant gains in response rates with increasing $n$. For instance, when $n=3$, Apollo can commit blocks of size $2000$ in as fast as $3$ ms. For $n=65$, Apollo produces $3$x more committed transactions per second than the state-of-the-art Sync HotStuff protocol.
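The counting behind the UCC rule can be sketched in a few lines; this is a simplification of the rule quoted above, while the actual protocol adds signatures, leader rotation, and equivocation handling.

```python
# Illustrative UCC (Unique Chain Commit) counting on a single hash chain:
# each block that extends the chain counts as one vote for every ancestor,
# and a block is committed once it has f + 1 such votes.

def committed_height(chain_len: int, f: int) -> int:
    """Block i (0-indexed) has been extended by blocks i+1 .. chain_len-1,
    i.e. it has chain_len - 1 - i votes; it is committed once that count
    reaches f + 1. Returns the highest committed index, or -1 if none."""
    return max(-1, chain_len - 1 - (f + 1))

assert committed_height(chain_len=5, f=1) == 2   # blocks 0..2 have >= 2 votes
assert committed_height(chain_len=2, f=1) == -1  # nothing committed yet
```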
An Wang, Yuan Li, Yaoling Ding, Liehuang Zhu, Yongjuan Wang
ePrint Report
Various Artificial Intelligence (AI) techniques are combined with classic side-channel methods to improve the efficiency of attacks. Among them, Genetic Algorithm based Correlation Power Analysis (GA-CPA) has been proposed to launch attacks on hardware cryptosystems and extract the secret key efficiently. However, the convergence rate is unsatisfactory due to two problems: individuals of the initial population generally have low fitnesses, and the mutation operation struggles to generate high-quality components. In this paper, we give an analysis framework to solve them. Firstly, we employ lists of sorted candidate key bytes obtained with CPA to initialize the population with high-quality candidates. Secondly, we guide the mutation operation with lists of candidate keys sorted according to fitnesses, which are obtained by exhausting the values of a certain key byte and calculating the corresponding correlation coefficients with the whole key. Thirdly, key enumeration algorithms are utilized to deal with ranked candidates obtained by the last generation of GA-CPA to improve the success rate further. Simulation results show that our method reduces the number of traces by 33.3\% and 43.9\% compared to CPA with key enumeration and to GA-CPA, respectively, when the success rate is fixed at 90\%. Real experiments performed on SAKURA-G confirm that the number of traces required by our method is much smaller than the numbers required by CPA and GA-CPA. Besides, we adapt our method to the DPA contest v1 dataset and achieve a better result of 40.76 traces than the winning proposal's 42.42 traces. The computation cost of our proposal is nearly 16.7\% of the winner's.
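The CPA ranking step that seeds the initial population can be sketched generically as follows: a standard Hamming-weight correlation attack on one key byte. The function name and S-box placeholder are illustrative, and the GA and key-enumeration stages are not shown.

```python
import numpy as np

# Generic CPA step used to rank candidates for one key byte: correlate a
# Hamming-weight leakage model against measured traces. The paper plugs such
# rankings into the GA's initial population and mutation guidance.
SBOX = np.arange(256, dtype=np.uint8)  # placeholder; use the real AES S-box
HW = np.array([bin(v).count("1") for v in range(256)])

def rank_key_byte(plaintexts, traces):
    """plaintexts: (N,) uint8 values of one plaintext byte; traces: (N, samples)."""
    scores = np.zeros(256)
    for guess in range(256):
        model = HW[SBOX[plaintexts ^ guess]].astype(float)  # predicted leakage
        model -= model.mean()
        t = traces - traces.mean(axis=0)
        corr = (model @ t) / (np.linalg.norm(model) * np.linalg.norm(t, axis=0))
        scores[guess] = np.abs(corr).max()                  # best sample point
    return np.argsort(-scores)                              # candidates, best first
```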
Tapas Pal, Ratna Dutta
ePrint Report
The notion of functional encryption (FE) was proposed as a generalization of plain public-key encryption to enable a much more fine-grained handling of encrypted data, with advanced applications such as cloud computing, multi-party computation, or obfuscating circuits and Turing machines. While FE for general circuits or Turing machines gives a natural instantiation of many cryptographic primitives, existing FE schemes are based on indistinguishability obfuscation or multilinear maps, which either rely on new computational hardness assumptions or are only heuristically claimed to be secure. In this work, we present new techniques directly yielding FE for the inner product functionality, where secret keys provide access control via polynomial-size bounded-depth circuits. More specifically, we encrypt messages with respect to attributes and embed policy circuits into secret keys so that a restricted class of receivers is able to learn a certain property of the messages. Recently, many inner product FE schemes have been proposed; however, none of them uses a general circuit as an access structure. Our main contribution is the first construction of an attribute-based FE scheme in the key-policy setting for inner products from the well-studied Learning With Errors (LWE) assumption. Our construction takes inspiration from the attribute-based encryption of Boneh et al. from Eurocrypt 2014 and the inner product functional encryption of Agrawal et al. from Crypto 2016. The scheme is proved secure in a stronger setting where the adversary is allowed to ask for secret keys that can decrypt the challenge ciphertext. Doing so requires a careful setting of parameters for handling the noise in ciphertexts to enable correct decryption. Another main advantage of our scheme is that the size of ciphertexts and secret keys depends on the depth of the circuits rather than their size. Additionally, we extend our construction to a much-desired multi-input variant where secret keys are associated with multiple policies subject to different encryption slots. This enhances the applicability of the scheme with finer access control.
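For orientation, the functionality realized by the scheme can be modeled in a few lines: a secret key for a policy circuit and a vector $y$ reveals exactly the inner product $\langle x, y \rangle$ of the encrypted vector $x$, and only when the policy accepts the ciphertext attribute. The sketch below models this interface only, not the LWE-based construction.

```python
# Functionality-only model of key-policy attribute-based inner-product FE:
# decryption with a key for (circuit C, vector y) on a ciphertext encrypting
# (attribute a, vector x) yields <x, y> iff C(a) accepts, and nothing else.

def decrypt_functionality(circuit, y, attribute, x):
    if not circuit(attribute):
        return None                                   # policy rejects: learn nothing
    return sum(xi * yi for xi, yi in zip(x, y))       # learn only the inner product

policy = lambda attr: attr.get("clearance", 0) >= 2   # toy policy circuit
assert decrypt_functionality(policy, [1, 2, 3], {"clearance": 3}, [4, 5, 6]) == 32
```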
Miguel Ambrona
ePrint Report
Attribute-based encryption (ABE) is a cryptographic primitive which supports fine-grained access control on encrypted data, making it an appealing building block for many applications. Pair encodings (Attrapadung, EUROCRYPT 2014) are simple primitives that can be used for constructing fully secure ABE schemes associated to a predicate relative to the encoding. We propose a generic transformation that takes any pair encoding scheme (PES) for a predicate $P$ and produces a PES for its negated predicate $\bar{P}$. This construction finally solves a problem that had been open since 2015. Our techniques bring new insight into the expressivity and generality of PES and can be of independent interest. We also provide, to the best of our knowledge, the first pair encoding scheme for negated doubly spatial encryption (obtained with our transformation) and explore several other consequences of our results.
Lukas Aumayr, Pedro Moreno-Sanchez, Aniket Kate, Matteo Maffei
ePrint Report
Payment-channel networks (PCNs) are the most prominent approach to tackling the scalability issues of current permissionless blockchains. A PCN reduces the load on-chain by allowing arbitrarily many off-chain multi-hop payments (MHPs) between any two users connected through a path of payment channels. Unfortunately, current MHP protocols are far from satisfactory. One-round MHPs (e.g., Interledger) are insecure, as a malicious intermediary can steal the payment funds. Two-round MHPs (e.g., the Lightning Network (LN)) follow the 2-phase-commit paradigm from databases to overcome this issue. However, when tied with economic incentives, 2-phase-commit brings other security threats (i.e., wormhole attacks), staggered collateral (i.e., funds are locked for a time proportional to the payment path length) and dependency on specific scripting language functionality (e.g., hash time lock contracts) that hinders a wider deployment in practice.

We present Blitz, a novel MHP protocol that demonstrates for the first time that we can achieve the best of both worlds: a single-round MHP where no malicious intermediary can steal coins. Moreover, Blitz provides privacy for sender and receiver, is not prone to the wormhole attack, and requires only constant collateral. Additionally, we construct MHPs using only digital signatures and a timelock functionality, both available at the core of virtually every cryptocurrency today. We provide the cryptographic details of Blitz and formally prove its security. Furthermore, our experimental evaluation on an LN snapshot shows that the LN collateral results in between 4x and 33x more unsuccessful payments than the Blitz collateral (both compared to a baseline), that Blitz reduces the size of the payment contract by 26%, and that it prevents up to 0.3 BTC (3397 USD in October 2020) in fees from being stolen over a three-day period, since it avoids wormhole attacks by design.
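The collateral difference can be made concrete with a small calculation, under a simplified model in which staggered locks grow with the remaining path length while Blitz-style locks are constant; the hop count and dispute window below are illustrative.

```python
# Simplified comparison of collateral lock times along an n-hop payment path.
# Staggered (LN-style) locks grow with the remaining path length; constant
# (Blitz-style) locks do not. delta is an illustrative dispute window in hours.
def staggered_lock_times(n_hops, delta):
    return [(n_hops - i) * delta for i in range(n_hops)]

def constant_lock_times(n_hops, delta):
    return [delta] * n_hops

print(staggered_lock_times(5, delta=24))  # [120, 96, 72, 48, 24]
print(constant_lock_times(5, delta=24))   # [24, 24, 24, 24, 24]
```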
Siwei Chen, Zejun Xiang, Xiangyong Zeng, Shasha Zhang
ePrint Report
In this paper, we compare several non-tight degree evaluation methods, i.e., Boura and Canteaut's formula, Carlet's formula, Liu's numeric mapping, and the division property proposed by Todo, and aim to find the best of these methods for practical applications. Specifically, for substitution-permutation-network (SPN) ciphers, we first explore in depth the relationships between the division property of an Sbox and its algebraic properties (e.g., the algebraic degree of its inverse). Based on these findings, we prove theoretically that the division property is never worse than Boura and Canteaut's and Carlet's formulas, and we also verify experimentally that the division property can indeed give a better bound than the latter two methods. In addition, for nonlinear feedback shift register (NFSR) based ciphers, based on the propagation of the division property and the core idea of numeric mapping, we give a strict proof that the degree estimated using the division property is never greater than that of numeric mapping. Moreover, our experimental results on Trivium and Kreyvium indicate that the division property actually derives a much better bound than the numeric mapping. To the best of our knowledge, this is the first formal discussion of the relationships between the division property and other degree evaluation methods, and we present the first theoretical proof, together with experimental verification, showing that the division property is the most accurate of these methods in terms of upper bounds on the algebraic degree.
Alptekin Küpçü, Reihaneh Safavi-Naini
ePrint Report
Outsourcing computation allows resource-limited clients to access computing on demand. Various types of clusters, grids, and clouds, such as Microsoft’s Azure and Amazon’s EC2, form today’s outsourced computing infrastructure. A basic requirement of outsourcing is providing a guarantee that the computation result is correct. We consider an automated and efficient way of achieving this assurance, where the computation is replicated and outsourced to two contractors by a smart contract that decides on the correctness of the result by comparing the two received responses. We show that all previous incentivized outsourcing protocols with proven correctness fail when automated by a smart contract, because of a copy attack in which a contractor simply copies the submitted response of the other contractor. We then design an incentive mechanism that uses two lightweight response-checking protocols and employs monetary reward, fine, and bounty to incentivize correct computation. We use game theory to model and analyze our mechanism, and prove that it has a single Nash equilibrium, corresponding to the contractors’ strategy of correctly computing the result. Our work provides a foundation for incentivized computation in the smart contract setting and opens new research directions.
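The copy attack is easy to see with a toy payoff calculation; the reward and cost values below are illustrative, the contract is a naive equality check, and none of this reproduces the paper's mechanism.

```python
# Toy illustration of the copy attack on naive two-contractor replication:
# if the contract pays whenever the two submitted results match, a contractor
# who simply copies the other's submission earns the reward at zero cost.
# Values are illustrative; the paper's mechanism adds checks, fines and bounties.
reward, cost = 10.0, 4.0

def payoff(strategy_a, strategy_b):
    # strategies: "compute" (pay cost, submit correct result) or "copy"
    pay_a = reward - (cost if strategy_a == "compute" else 0.0)
    pay_b = reward - (cost if strategy_b == "compute" else 0.0)
    return pay_a, pay_b  # results always match, so the naive contract pays both

assert payoff("compute", "copy") == (6.0, 10.0)  # copying dominates in this toy game
```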
Wai-Kong Lee, Hwajeong Seo, Zhenfei Zhang, Seongoun Hwang
ePrint Report
Tensor cores are specially designed hardware units included in new NVIDIA GPU chips, aimed at accelerating deep learning applications. With the introduction of tensor cores, matrix multiplication at low precision can be computed much faster than with the conventional integer and floating point units in NVIDIA GPUs. In the past, applications of tensor cores were mainly restricted to machine learning and mixed precision scientific computing. In this paper, we show for the first time that tensor cores can be used to accelerate state-of-the-art lattice-based cryptosystems. In particular, we employ tensor cores to accelerate NTRU, one of the finalists in the NIST post-quantum standardization process. Towards this aim, several parallel algorithms are proposed to allow the tensor cores to handle flexible matrix sizes and ephemeral key pairs. Experimental results show that the polynomial convolution using tensor cores is 2.79× (ntruhps2048509) and 2.72× (ntruhps2048677) faster than the version implemented with the conventional integer units of an NVIDIA GPU. The proposed tensor-core-based polynomial convolution technique was applied to the NTRU public key scheme (TensorTRU). It achieved 1.94×/1.95× (encryption) and 1.97×/2.02× (decryption) better performance for the two parameter sets, compared to the conventional integer-based implementations on GPU. TensorTRU is also more than 20× faster than the reference implementation on CPU and 2× faster than the AVX2 implementation, for both encryption and decryption. To demonstrate the flexibility of the proposed technique, we have extended the implementation to other lattice-based cryptosystems that have a small modulus (LAC and two variant parameter sets of FrodoKEM). Experimental results show that the tensor-core-based polynomial convolution is flexible and useful in accelerating lattice-based cryptosystems that cannot utilize the number theoretic transform for polynomial multiplication.
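The underlying idea of mapping polynomial convolution onto matrix-multiplication hardware can be sketched as follows; plain NumPy stands in for tensor-core matrix multiplies, and the ring parameters are toy values rather than the NTRU parameter sets.

```python
import numpy as np

# Sketch of expressing polynomial convolution in Z_q[x]/(x^n - 1) as a
# matrix-vector product, the shape that maps onto matmul hardware such as
# tensor cores. Toy parameters; NTRU uses n in {509, 677, ...}.
n, q = 8, 2048

def circulant(a):
    # Column j of the matrix holds a(x) * x^j reduced mod x^n - 1.
    return np.stack([np.roll(a, j) for j in range(n)], axis=1)

def convolve(a, b):
    return (circulant(a) @ b) % q   # one matrix-vector product per convolution

a = np.random.randint(0, q, n)
b = np.random.randint(0, 3, n) - 1  # small ternary polynomial, as in NTRU
ref = np.zeros(n, dtype=np.int64)   # schoolbook convolution for comparison
for i in range(n):
    for j in range(n):
        ref[(i + j) % n] = (ref[(i + j) % n] + a[i] * b[j]) % q
assert np.array_equal(convolve(a, b), ref)
```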

17 February 2021

CRYPTO
The IACR is soliciting affiliated events to be held in conjunction with Crypto 2021 on Saturday, August 14 and/or Sunday, August 15.

Each such event is expected to provide a forum for discussing a specific topic of the broad cryptographic world (theory, practice, implementation, standardization, etc.). The format of the event (e.g. workshop, tutorial, etc.) is up to the organizers.

Proposals for events should be submitted by email to the Crypto 2021 workshop chair at crypto2021workshopchair@gmail.com by March 15, 2021 (23:59 UTC).

For more details visit https://crypto.iacr.org/2021/affiliated.php
Nishanth Chandran, Nishka Dasgupta, Divya Gupta, Sai Lakshmi Bhavana Obbattu, Sruthi Sekar, Akash Shah
ePrint Report
Multiparty Private Set Intersection (mPSI) enables $n$ parties, each holding a private set of size $m$, to compute the intersection of these sets without revealing any other information to each other. While several protocols for this task are known, the only concretely efficient one is due to Kolesnikov et al. (KMPRT, CCS 2017), who gave a semi-honest secure protocol with communication complexity $\mathcal{O}(nmt\lambda)$, where $t<n$ is the number of corrupt parties and $\lambda$ is the security parameter. In this work, we make the following contributions: $-$ First, for the natural adversarial setting of a semi-honest honest majority (i.e. $t<n/2$), we asymptotically improve upon the above result and provide a concretely efficient protocol with total communication $\mathcal{O}(nm\lambda)$. $-$ Second, concretely, our protocol requires $6(t+2)/5$ times less communication than KMPRT and is up to $5\times$ and $6.2\times$ faster than KMPRT in the LAN and WAN settings, even for 15 parties. $-$ Finally, we introduce and consider two important variants of mPSI: circuit PSI (which allows the parties to compute a function over the intersection set without revealing the intersection itself) and quorum PSI (which allows $P_1$ to learn all the elements in his/her set that are present in at least $k$ other sets), and we provide concretely efficient protocols for these variants.
Wei Yu, Guangwu Xu
ePrint Report
Let $E_a/ \mathbb{F}_{2}: y^2+xy=x^3+ax^2+1$ be a Koblitz curve. The window $\tau$-adic non-adjacent form (window $\tau$NAF) is currently the standard representation system for performing scalar multiplications on $E_a/ \mathbb{F}_{2^m}$ utilizing the Frobenius map $\tau$. This work focuses on the pre-computation part of scalar multiplication. We first introduce $\mu\bar{\tau}$-operations, where $\mu=(-1)^{1-a}$ and $\bar{\tau}$ is the complex conjugate of $\tau$. Efficient formulas for $\mu\bar{\tau}$-operations are then derived and used in a novel pre-computation scheme. Our pre-computation scheme requires $6\mathbf{M}+6\mathbf{S}$, $18\mathbf{M}+17\mathbf{S}$, $44\mathbf{M}+32\mathbf{S}$, and $88\mathbf{M}+62\mathbf{S}$ for $a=0$, and $6\mathbf{M}+6\mathbf{S}$, $19\mathbf{M}+17\mathbf{S}$, $46\mathbf{M}+32\mathbf{S}$, and $90\mathbf{M}+62\mathbf{S}$ for $a=1$, for window $\tau$NAF with widths from $4$ to $7$ respectively. It is about two times faster than the state-of-the-art pre-computation techniques in the literature. The impact of our new efficient pre-computation is also reflected in a significant improvement of scalar multiplication. Traditionally, window $\tau$NAF with width at most $6$ is used to achieve the best scalar multiplication. Because of the dramatic cost reduction of the proposed pre-computation, we are able to increase the window $\tau$NAF width to $7$ for a better scalar multiplication. This indicates that the pre-computation part has become more important in performing scalar multiplication. With our efficient pre-computation and the new window width, our scalar multiplication runs in at least 85.2\% the time of Kohel's work (Eurocrypt'2017) combined with the best previous pre-computation. Our results push the scalar multiplication of Koblitz curves, a very well-studied and long-standing research area, to a significant new stage.
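For context, the width-2 $\tau$NAF recoding underlying the window method is the standard Solinas algorithm, sketched below; the paper's contribution, the $\mu\bar{\tau}$-based pre-computation of the window digits, is not shown.

```python
# Standard tau-adic NAF recoding (Solinas) of an element r0 + r1*tau, where
# mu = (-1)^(1-a). This is the width-2 baseline; the paper speeds up the
# pre-computation of the odd multiples needed for window widths 4 to 7.
def tnaf(r0: int, r1: int, mu: int):
    digits = []
    while r0 != 0 or r1 != 0:
        if r0 & 1:
            u = 2 - ((r0 - 2 * r1) % 4)   # u in {-1, +1}
            r0 -= u
        else:
            u = 0
        digits.append(u)
        r0, r1 = r1 + mu * (r0 // 2), -(r0 // 2)
    return digits                          # least-significant digit first

print(tnaf(9, 0, mu=1))  # recode the scalar 9 on the a = 1 curve (mu = 1)
```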
Ai Kitagawa, Yusuke Sakai, Keita Emura, Goichiro Hanaoka, Keisuke Tanaka
ePrint Report
Group signature with verifier-local revocation (VLR-GS) is a special type of revocable group signature which enables a user to sign messages without referring to information regarding revoked users. Although there have been several proposals of VLR-GS schemes since the first scheme proposed by Boneh and Shacham [CCS 2004], all of these schemes only achieve a security notion called selfless anonymity, which is strictly weaker than the de facto standard security notion of full anonymity. Thus, for more than a decade, it has been an open problem whether a fully anonymous VLR-GS scheme can be achieved. In this paper, we give an affirmative answer to this problem. Concretely, we show a construction of a fully anonymous VLR-GS scheme from a digital signature scheme, a key-private public key encryption scheme, and a non-interactive zero-knowledge proof system. We also show that backward unlinkability, which ensures that even after a user is revoked, signatures produced by the user before the revocation remain anonymous, can be realized without additional building blocks. Finally, although the sizes of the group public key and signing key depend on the number of time periods, we show that these key sizes can be reduced by employing an identity-based encryption scheme.
Yasuhiko Ikematsu, Shuhei Nakamura, Bagus Santoso, Takanori Yasuda
ePrint Report
The isomorphism of polynomials with two secrets (IP2S) problem was proposed by Patarin et al. at Eurocrypt 1996; the problem is to find two secret linear maps filling the gap between two polynomial maps over a finite field. At PQC 2020, Santoso proposed a problem derived from IP2S, called the block isomorphism of polynomials with circulant matrices (BIPC) problem. The BIPC problem is obtained by linearizing IP2S and restricting the secret linear maps to those represented by circulant matrices. Using the commutativity of products of circulant matrices, Santoso also proposed an El-Gamal-like encryption scheme based on the BIPC problem. In this paper, we give a new security analysis of the El-Gamal-like encryption scheme. In particular, we introduce a new attack (called the linear stack attack) which finds an equivalent key of the El-Gamal-like encryption scheme by using the linearity of the BIPC problem. We show that the attack is a polynomial-time algorithm and can break some 128-bit proposed parameters of the El-Gamal-like encryption scheme within 10 hours on a standard PC.
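The commutativity that the El-Gamal-like scheme relies on is easy to check numerically; the sketch below verifies the generic fact that circulant matrices commute and reproduces neither the BIPC scheme nor the linear stack attack.

```python
import numpy as np

# Circulant matrices commute under multiplication (each is a polynomial in the
# cyclic-shift matrix), the property exploited by the El-Gamal-like BIPC scheme.
def circulant(first_row):
    n = len(first_row)
    return np.stack([np.roll(first_row, i) for i in range(n)])

rng = np.random.default_rng(0)
A = circulant(rng.integers(0, 7, 6))
B = circulant(rng.integers(0, 7, 6))
assert np.array_equal(A @ B, B @ A)   # products of circulants commute
```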
Xiaohan Zhang, Chi Cheng, Yue Qin , Ruoyu Ding
ePrint Report
NTRU is regarded as an appealing finalist due to its long history of withstanding all known attacks and its relatively high efficiency. In the third round of the NIST competition, the submitted NTRU cryptosystem is the merger of NTRU-HPS and NTRU-HRSS. In 2019, Ding et al. analyzed the case where the public key is reused for the original NTRU scheme. However, NTRU-HRSS selects coefficients in an arbitrary way, instead of the fixed-weight sample spaces of the original NTRU and NTRU-HPS. Therefore, their method cannot be applied to NTRU-HRSS. To address this problem, we propose a full key mismatch attack on NTRU-HRSS. Firstly, we find a longest chain, which helps us recover the subsequent coefficients. Next, the most influential interference factors are eliminated by increasing the weight of the targeted coefficients. In this step, we adaptively select the weights according to the feedback of the oracle to avoid errors. Finally, experiments show that we succeed in recovering all coefficients of the secret key in NTRU-HRSS with a success rate of $93.6\%$. Furthermore, we illustrate the trade-off among the success rate, the average number of queries, and the average time. In particular, we show that when the success rate is 93.6\%, the attack simultaneously attains the minimum number of queries.
Seetal Potluri, Aydin Aysu
ePrint Report
Stealing trained machine learning (ML) models is a new and growing concern due to the models' development cost. Existing work on ML model extraction either applies a mathematical attack or exploits hardware vulnerabilities such as side-channel leakage. This paper shows, for the first time, a new style of attack on ML models running on embedded devices that abuses the scan-chain infrastructure. We illustrate that having coarse-grained scan-chain access to non-linear layer outputs is sufficient to steal ML models. To that end, we propose a novel attack inspired by small-signal analysis that applies small perturbations to the input signals, identifies the quiescent operating points, and selectively activates certain neurons. We then couple this with a linear constraint satisfaction based approach to efficiently extract model parameters such as weights and biases. We conduct our attack on neural network inference topologies defined in earlier works, and we automate our attack. The results show that our attack outperforms the mathematical model extraction proposed in CRYPTO 2020, USENIX 2020, and ICML 2020 by an increase in accuracy of 2^20.7x, 2^50.7x, and 2^33.9x, respectively, and a reduction in queries by 2^6.5x, 2^4.6x, and 2^14.2x, respectively.
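The parameter-recovery step can be illustrated in a simplified setting: if scan-chain access exposes a layer's pre-activation outputs for chosen inputs, the weights and biases follow from solving a linear system. The sketch below shows only this generic step, not the authors' small-signal technique for selecting operating points and activating neurons.

```python
import numpy as np

# Simplified illustration of the linear-constraint step: with access to a
# layer's pre-activation outputs y = W x + b for chosen inputs x (the kind of
# visibility scan-chain access provides), W and b are recovered by solving a
# linear system.
rng = np.random.default_rng(1)
W_true, b_true = rng.normal(size=(4, 8)), rng.normal(size=4)

def observe(x):                     # stands in for reading scan-chain outputs
    return W_true @ x + b_true

X = rng.normal(size=(32, 8))        # chosen query inputs
Y = np.array([observe(x) for x in X])
A = np.hstack([X, np.ones((32, 1))])            # augment with a bias column
theta, *_ = np.linalg.lstsq(A, Y, rcond=None)   # solves A @ theta ~ Y
W_rec, b_rec = theta[:-1].T, theta[-1]
assert np.allclose(W_rec, W_true) and np.allclose(b_rec, b_true)
```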
Matthias Lohr, Benjamin Schlosser, Jan Jürjens, Steffen Staab
ePrint Report
Blockchains can guarantee fairness during the exchange of digital goods such that in a two-party exchange no one is defrauded by a malicious opponent. While several notions of fairness have been discussed in the literature, they all ignore that damage can be incurred not only by the malicious failure of the exchange, but also by an unfair allocation of transaction costs. To address this issue we: 1. define the novel concept of cost fairness, which 2. builds on the notion of maximum cost matrices that formalize transaction costs under different combinations of benevolent and malicious behavior. 3. We show how limited notions of cost fairness can be achieved by modifying an existing exchange protocol or using a container protocol. In particular, we also provide 4. a tool that lets us predict the maximum cost matrix for a specific protocol execution and thus gives exchange parties the possibility to weigh not only the value of the exchanged goods but also the associated transaction costs.
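A maximum cost matrix can be pictured with a tiny example; the fee values are hypothetical, and only the structure, one cost entry per combination of benevolent and malicious behavior, follows the abstract.

```python
# Toy maximum cost matrix: worst-case transaction costs each party may bear for
# every combination of benevolent ("b") and malicious ("m") behavior. The fee
# values are hypothetical; only the structure mirrors the paper's notion.
max_cost = {                       # (seller_behavior, buyer_behavior) -> costs
    ("b", "b"): {"seller": 0.02, "buyer": 0.03},
    ("b", "m"): {"seller": 0.09, "buyer": 0.01},
    ("m", "b"): {"seller": 0.01, "buyer": 0.08},
    ("m", "m"): {"seller": 0.09, "buyer": 0.08},
}

# Cost fairness asks how much an honest party can be forced to pay: here the
# honest seller may bear up to 0.09 when facing a malicious buyer.
worst_for_honest_seller = max(c["seller"] for (s, b), c in max_cost.items() if s == "b")
assert worst_for_honest_seller == 0.09
```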
Thomas Kerber, Aggelos Kiayias, Markulf Kohlweiss
ePrint Report
Zero-knowledge succinct non-interactive arguments (zk-SNARKs) rely on knowledge assumptions for their security. Meanwhile, as the complexity and scale of cryptographic systems continue to grow, the composition of secure protocols is of vital importance. The current gold standards of composable security, the Universal Composability and Constructive Cryptography frameworks, cannot capture knowledge assumptions, as their core proofs of composition prohibit white-box extraction. In this paper, we present a formal model allowing the composition of knowledge assumptions. Despite showing impossibility for the general case, we demonstrate the model’s usefulness when limiting knowledge assumptions to a few instances of protocols at a time. We finish by providing the first instance of a simultaneously succinct and composable zk-SNARK, using existing results within our framework.