IACR News
Here you can see all recent updates to the IACR webpage.
17 May 2022
Léonard Lys, Maria Potop-Butucaru
ePrint Report
Blockchain oracles are systems that connect blockchains with
the outside world by interfacing with external data providers. They provide
decentralized applications with the external information needed for
smart contract execution. In this paper, we focus on decentralized price
oracles, which are distributed systems that provide exchange rates of
digital assets to smart contracts. They are the cornerstone of the safety
of some decentralized finance applications such as stable coins or lending
protocols. They consist of a network of nodes called oracles that gather
information from off-chain sources such as an exchange market’s API and
feed it to smart contracts. Among the desired properties of a price oracle
system are low latency, availability, and low operating cost. Moreover,
such systems must overcome obstacles such as maintaining diverse data sources
(known as the freeloading problem) and tolerating Byzantine failures.
In this paper, we define the distributed price oracle problem and present
PoWacle, the first asynchronous decentralized oracle protocol that copes
with Byzantine behavior.
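The abstract leaves the aggregation rule unspecified, but the basic safety idea behind Byzantine-tolerant price feeds can be illustrated with a median-based aggregator. The threshold and code below are illustrative, not PoWacle's actual protocol:

```python
def robust_price(reports, f):
    """Aggregate oracle price reports by taking the median. With at most f
    Byzantine reports out of n >= 2f + 1, the median is bracketed by honest
    values, so no coalition of f oracles can push the output outside the
    honest range. Illustrative only; PoWacle's actual aggregation rule is
    defined in the paper."""
    n = len(reports)
    assert n >= 2 * f + 1, "need a majority of honest reports"
    return sorted(reports)[n // 2]

# Four honest oracles near 100, two Byzantine ones reporting extremes:
print(robust_price([99.8, 100.1, 100.0, 99.9, 0.0, 10 ** 6], f=2))   # → 100.0
```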
Clément Fanjas, Clément Gaine, Driss Aboulkassimi, Simon Pontié, Olivier Potin
ePrint Report
The success rate of Fault Injection (FI) and Side-Channel Analysis (SCA) depends on the quality of the synchronization available in the target.
As modern SoCs implement complex hardware architectures able to run at high clock frequencies, synchronizing hardware security characterization becomes a real challenge.
Triggering on the target's I/Os is the usual approach; however, when I/Os are unavailable or unreachable, or when the synchronization quality they offer is insufficient, other triggering methodologies must be investigated.
This paper proposes a new synchronization approach named Synchronization by Frequency Detection (SFD), which does not use the target I/Os.
This approach consists in identifying a vulnerability preceded by specific code whose execution activates a characteristic frequency that can be detected in the EM field measured from the target.
The EM field is analyzed in real time in order to trigger the injection upon detection of this characteristic frequency.
To validate the proof-of-concept of this new triggering methodology, this paper presents an exploitation of the SFD concept against the Android Secure-Boot of a smartphone-grade SoC.
By triggering the attack upon the activation of a frequency at 124.5 MHz during an RSA signature computation, we were able to synchronize an electromagnetic fault injection to skip a vulnerable instruction in the Linux kernel authentication.
We successfully bypassed this security feature, effectively running the Android OS with a compromised Linux kernel, with one success every 15 minutes.
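The paper's trigger hardware is not described in this abstract; as a hedged illustration of what "real-time analysis of the EM field for one characteristic frequency" can look like, the single-bin Goertzel algorithm is the textbook tool. All parameters below are illustrative:

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Signal power at `target_freq` via the Goertzel algorithm: a cheap
    single-bin DFT, a textbook choice for real-time trigger logic."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# Toy EM trace: a 124.5 MHz tone sampled at 1 GS/s vs. a quiet window.
fs, f0, n = 1e9, 124.5e6, 1000
tone = [math.sin(2 * math.pi * f0 * i / fs) for i in range(n)]
quiet = [0.01 * math.sin(2 * math.pi * 10e6 * i / fs) for i in range(n)]
print("trigger" if goertzel_power(tone, fs, f0) > 1000 else "idle")
```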
Lucianna Kiffer, Rajmohan Rajaraman, abhi shelat
ePrint Report
The celebrated Nakamoto consensus protocol ushered in several new consensus applications including cryptocurrencies. A
few recent works have analyzed important properties of blockchains, including, most significantly, consistency, which is a
guarantee that all honest parties output the same sequence of blocks throughout the execution of the protocol.
To establish consistency, the prior analysis of Pass, Seeman and shelat required a careful counting of certain combinatorial events that was difficult to apply to variations of Nakamoto. The work of Garay, Kiayias, and Leonardos provides another method of analyzing the blockchain under both synchronous and partially synchronous settings.
The contribution of this paper is the development of a simple Markov-chain based method for analyzing consistency properties of blockchain protocols. The method includes a formal way of stating strong concentration bounds as well as easy ways to concretely compute the bounds. We use our new method to answer a number of basic questions about consistency of blockchains:
• Our new analysis provides a tighter guarantee on the consistency property of Nakamoto’s protocol, including for parameter regimes which previous work could not consider;
• We analyze a family of delaying attacks and extend them to other protocols;
• We analyze how long a participant should wait before considering a high-value transaction “confirmed”;
• We analyze the consistency of CliqueChain, a variation of the Chainweb system;
• We provide the first rigorous consistency analysis of GHOST under the partially synchronous setting and also analyze a folklore "balancing"-attack.
In each case, we use our framework to experimentally analyze the consensus bounds for various network delay parameters and adversarial computing percentages.
We hope our techniques enable authors of future blockchain proposals to provide a more rigorous analysis of their schemes.
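The paper's Markov-chain framework is far more general, but its flavor can be sketched with the classical gambler's-ruin view of the block race. This is a simplified model for intuition, not the authors' analysis:

```python
def catch_up_probability(q, k):
    """Probability that an attacker controlling fraction q of the hash power
    ever overtakes an honest lead of k blocks, modelling the race as a
    biased random walk (the classical gambler's-ruin / Markov-chain view)."""
    p = 1 - q
    if q >= p:
        return 1.0
    return (q / p) ** k

def confirmations_needed(q, eps):
    """Smallest honest lead k whose catch-up probability drops below eps."""
    k = 0
    while catch_up_probability(q, k) > eps:
        k += 1
    return k

# With 10% adversarial hash power, 4 confirmations push the risk below 0.1%.
print(confirmations_needed(q=0.10, eps=1e-3))   # → 4
```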
Loïc Masure, Olivier Rioul, François-Xavier Standaert
ePrint Report
We prove a bound that approaches Duc et al.'s conjecture from Eurocrypt 2015 for the side-channel security of masked implementations. Let \(Y\) be a sensitive intermediate variable of a cryptographic primitive taking its values in a set \(\mathcal{Y}\). If \(Y\) is protected by masking (a.k.a. secret sharing) at order \(d\) (i.e., with \(d+1\) shares), then the complexity of any non-adaptive side-channel analysis --- measured by the number of queries to the target implementation required to guess the secret key with sufficient confidence --- is lower bounded by a quantity inversely proportional to the product of mutual informations between each share of \(Y\) and their respective leakage. Our new bound is nearly tight in the sense that each factor in the product has an exponent of \(-1\) as conjectured, and its multiplicative constant is \(\mathcal{O}\left(\log |\mathcal{Y}| \cdot |\mathcal{Y}|^{-1} \cdot C^{-d}\right)\), where \(C = 2 \log(2) \approx 1.38\). It drastically improves upon previous proven bounds, where the exponent was \(-1/2\) and the multiplicative constant was \(\mathcal{O}\left(|\mathcal{Y}|^{-d}\right)\). As a consequence for side-channel security evaluators, it is possible to provably and efficiently infer the security level of a masked implementation by simply analyzing each individual share, under the necessary condition that the leakages of these shares are independent.
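For evaluators, the shape of the bound can be sketched as follows; the constant mirrors the stated \(\mathcal{O}\)-term with its hidden factor set to 1, so the absolute numbers are illustrative only:

```python
import math

def query_lower_bound(share_mis, y_size):
    """Shape of the paper's bound: query complexity is inversely proportional
    to the product of per-share mutual informations, with multiplicative
    constant ~ log|Y| / (|Y| * C^d). The hidden O(.) factor is set to 1
    here, so the absolute numbers are illustrative, not certified."""
    d = len(share_mis) - 1                 # masking order: d + 1 shares
    C = 2 * math.log(2)                    # ~ 1.386, as in the bound
    constant = math.log(y_size) / (y_size * C ** d)
    return constant / math.prod(share_mis)

# Reducing each share's leakage by 10x multiplies the bound by ~1000 (d = 2):
print(query_lower_bound([0.01] * 3, 256) / query_lower_bound([0.1] * 3, 256))
```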
Lionel Beltrando, Maria Potop-Butucaru, Jose Alfaro
ePrint Report
Blockchain and distributed ledger technologies have emerged as one of the most revolutionary distributed systems, with the goal of eliminating centralised intermediaries and installing distributed trusted services. They facilitate trustworthy trades and exchanges over the Internet, power cryptocurrencies, ensure transparency for documents, and much more.
Committee-based blockchains are considered today a viable alternative to the original proof-of-work paradigm, since they offer strong consistency and are energy efficient. One of the most popular committee-based blockchains is Tendermint, used at the core of several popular blockchains such as Tezos, Binance Smart Chain, and Cosmos. Interestingly, Tendermint, like many other committee-based blockchains, is designed to tolerate one third of Byzantine nodes.
In this paper we propose TenderTee, an enhanced version of Tendermint able to tolerate one half of Byzantine nodes. The resilience improvement is due to the use of a trusted abstraction, a light version of attested append-only memory, which makes the protocol immune to equivocation (i.e., the behavior of a faulty node that sends different, conflicting messages to different nodes). Furthermore, we prove the correctness of TenderTee for both one-shot and repeated consensus specifications.
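The attested append-only memory abstraction is easiest to see in miniature: a trusted component that will attest to at most one message per consensus slot. The toy model below is not TenderTee's actual interface, but it shows why equivocation becomes impossible for the node that owns it:

```python
class AttestedLog:
    """Toy model of a light attested append-only memory: a trusted component
    that attests at most one message per (height, round) slot, so the node
    owning it cannot equivocate. Illustrative only; TenderTee's actual
    trusted interface is defined in the paper."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.slots = {}

    def attest(self, height, round_, message):
        key = (height, round_)
        if key in self.slots and self.slots[key] != message:
            raise ValueError("equivocation attempt blocked")
        self.slots[key] = message
        return (self.node_id, key, message)   # stands in for a TEE signature

log = AttestedLog("node-1")
log.attest(1, 0, "block A")
try:
    log.attest(1, 0, "block B")   # conflicting message for the same slot
except ValueError as err:
    print(err)                     # → equivocation attempt blocked
```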
Laltu Sardar, Sushmita Ruj
ePrint Report
In a dynamic searchable encryption (DSE) scheme, a cloud server can search on encrypted data that the client stores and updates from time to time. Due to information leakage during the search and update phase, DSE schemes are prone to file injection attacks. If during document addition, a DSE scheme does not leak any information about the previous search results, the scheme is said to be forward private. A DSE scheme that supports conjunctive keyword search should be forward private. There has been a fair deal of work on designing forward private DSE schemes in the presence of an honest-but-curious cloud server. However, a malicious cloud server might not run the protocol correctly and still want to be undetected. In a verifiable DSE, the cloud server not only returns the result of a search query but also provides proof that the result is computed correctly.
We design a forward private DSE scheme that supports conjunctive keyword search. At the heart of the construction is our proposed data structure called the dynamic interval accumulation tree (DIA tree). It is an accumulator-based authentication tree that efficiently returns both membership and non-membership proofs. Using the DIA tree, we can convert any single keyword forward private DSE scheme to a verifiable forward private DSE scheme that can support conjunctive queries as well. Our proposed scheme has the same storage as the base DSE scheme and low computational overhead on the client-side. We have shown the efficiency of our design by comparing it with existing conjunctive DSE schemes. The comparison also shows that our scheme is suitable for practical use.
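The DIA tree itself is accumulator-based; as rough intuition for how committing to sorted intervals yields non-membership proofs, here is a much-simplified Merkle-tree stand-in (illustrative only, not the paper's construction):

```python
import hashlib

def H(*parts):
    """Hash a tuple of strings into a hex digest."""
    return hashlib.sha256(b"|".join(p.encode() for p in parts)).hexdigest()

def interval_leaves(keys):
    """Merkle leaves over adjacent pairs of the sorted key set: showing that
    a queried key falls strictly inside one committed interval (a, b) is a
    non-membership proof. A much-simplified stand-in for the paper's
    accumulator-based DIA tree, for intuition only."""
    s = ["-inf"] + sorted(keys) + ["+inf"]
    return [H(a, b) for a, b in zip(s, s[1:])]

def merkle_root(leaves):
    while len(leaves) > 1:
        if len(leaves) % 2:
            leaves = leaves + [leaves[-1]]   # duplicate last leaf
        leaves = [H(l, r) for l, r in zip(leaves[::2], leaves[1::2])]
    return leaves[0]

# The leaf for the interval ("cat", "dog") proves that "cow" is not stored.
root = merkle_root(interval_leaves(["cat", "dog", "fox"]))
print(len(root))   # → 64  (hex digest of the root)
```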
Sisi Duan, Haibin Zhang
ePrint Report
This paper studies dynamic BFT, where replicas can join and leave the system dynamically, a primitive that is nowadays increasingly needed. We provide a formal treatment for dynamic BFT protocols, endowing them with a flexible syntax and various security definitions.
We demonstrate the challenges of extending static BFT to dynamic BFT. Then we design and implement Dyno, a highly efficient dynamic BFT protocol under the partial synchrony model. We show that Dyno can seamlessly handle membership changes without incurring performance degradation.
Liam Eagen
ePrint Report
Zero-knowledge proofs of Elliptic Curve Inner Products (ECIPs) and elliptic curve operations more generally are an increasingly important part of zero-knowledge protocols and a significant bottleneck in recursive proof composition over amicable cycles of elliptic curves. To prove ECIPs more efficiently, I represent a collection of points that sum to zero using a polynomial element of the function field and evaluate this function at a random principal divisor. By Weil reciprocity, this is equal to the function interpolating the random divisor evaluated at the original points. Taking the logarithmic derivative of both expressions allows the prover to use a technique similar to the Bulletproofs++ permutation argument: take linear combinations of logarithmic derivatives of divisor witnesses and collect terms for the same basis point by adding the multiplicities. The linear combination can be random or can be structured to cancel intermediate points in computing the sum. Since the multiplicities are field elements, this system can prove ECIP relations in zero knowledge with respect to the linear combination, the curve points, or both. Compared to existing techniques, the witness size is reduced by up to a factor of 10 and the number of multiplications by a factor of about 100, with significantly more flexibility in the organization of the protocol. The specific improvement will depend on the instantiating proof system, the number of curve points, and which information is zero knowledge. This technique also works, with small modification, for proving multiexponentiations in the multiplicative group of the field.
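The algebraic identity at the heart of the construction is Weil reciprocity: for two functions \(f, g\) on the curve whose divisors have disjoint support,
\[ f(\operatorname{div} g) = g(\operatorname{div} f), \qquad \text{where } f\Big(\sum_i n_i P_i\Big) := \prod_i f(P_i)^{n_i}. \]
Taking logarithmic derivatives turns these products into sums, \(\operatorname{dlog}\big(\prod_j f_j^{a_j}\big) = \sum_j a_j \operatorname{dlog} f_j\), which is why the prover can take linear combinations of divisor witnesses and merge terms for a common basis point by adding multiplicities as field elements.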
Theo von Arx, Kenneth G. Paterson
ePrint Report
Telegram is a popular messenger with more than 550 million monthly active users and a large ecosystem of different clients. Telegram has its own bespoke transport layer security protocol, MTProto 2.0. This protocol was recently subjected to a detailed study by Albrecht et al. (IEEE S&P 2022). They gave attacks on the protocol and its implementations, along with a security proof for a modified version of the protocol. We complement that study by analysing a range of third-party client implementations of MTProto 2.0. We report practical replay attacks for the Pyrogram, Telethon and GramJS clients, and a more theoretical timing attack against the MadelineProto client. We show how vulnerable third-party clients can affect the security of the entire ecosystem, including official clients. Our analysis reveals that many third-party clients fail to securely implement MTProto 2.0. We discuss the reasons for these failures, focussing on complications in the design of MTProto 2.0 that lead developers to omit security-critical features or to implement the protocol in an insecure manner. We also discuss changes that could be made to MTProto 2.0 to remedy this situation. Overall, our work highlights the cryptographic fragility of the Telegram ecosystem.
Maria Ferrara, Antonio Tortora
ePrint Report
Homomorphic encryption allows one to operate on encrypted data, making such computation less vulnerable to attack. The implementation of a fully homomorphic cryptosystem was long impracticable; a breakthrough was achieved only in 2009, thanks to Gentry and his innovative idea of bootstrapping. TFHE is a torus-based fully homomorphic cryptosystem using the bootstrapping technique. This paper aims to present TFHE from an algebraic point of view, starting from the CONCRETE library, which implements TFHE.
Yupu Hu, Shanshan Zhang, Baocang Wang, Siyue Dong
ePrint Report
At CRYPTO 2021, Nishanth Chandran, Bhavana Kanukurthi, Sai Lakshmi Bhavana Obattu, and Sruthi Sekar presented a novel secret sharing scheme, called the CKO+21 scheme. This scheme makes use of Shamir secret sharing schemes and randomness extractors as its basic components to generate a multi-layer encapsulation structure. The authors claimed that the CKO+21 scheme satisfies “leakage resilience”, that is, privacy still holds under both “not enough revealing” and “appropriate leakage”. More importantly, the authors presented a bulky proof for the security of the CKO+21 scheme.
In this paper we only consider the simple case of \((n,t)\) threshold secret sharing. We find the following 5 facts about the CKO+21 scheme, which are the basic reasons we negate its security proof. (1) In the expression of a share of the CKO+21 scheme, some bottom Shamir share is simply included, rather than encapsulated. (2) The leakage of the share is not a random leakage, but rather related to the inquiry of the attacker, that is, a chosen leakage. (3) The permitted leakage length of each share is proportional to the share length. (4) The bottom Shamir scheme has the following special feature: when the length of the share $l^{*}$ is kept unchanged, it can make the number of shares $n$, the threshold value $t$, and the difference value $n-t+1$ arbitrarily large, as long as $t
In this paper we point out that the CKO+21 scheme did not successfully prove its security. As long as the bottom Shamir secret sharing scheme satisfies both “leakage recoverability” and “contaminated leakage irrecoverability”, the security proof of the CKO+21 scheme is wrong. It needs to be pointed out that “leakage recoverability” and “contaminated leakage irrecoverability” cannot be naturally negated by the “privacy” of the Shamir scheme, and up to now there is no proof that the Shamir scheme does not satisfy “leakage recoverability” or “contaminated leakage irrecoverability”.
The detailed contribution of this paper is as follows. The CKO+21 scheme designed several leakage models: \(\mathsf{Leak}{\mathsf{B}_0}\), \(\mathsf{Leak}{\mathsf{A}_1}\), \(\mathsf{Leak}{\mathsf{B}_1}\), \(\mathsf{Leak}{\mathsf{A}_2}\), \(\mathsf{Leak}{\mathsf{B}_2}\), $\cdots$, \(\mathsf{Leak}{\mathsf{A}_h}\), \(\mathsf{Leak}{\mathsf{B}_h}\), \(\mathsf{Leak}{\mathsf{C}}\), where \(\mathsf{Leak}{\mathsf{B}_0}\) is the practical leakage model and \(\mathsf{Leak}{\mathsf{C}}\) is a leakage model independent of the secret message. The CKO+21 scheme claimed that an attacker cannot distinguish two adjacent leakage models, so the scheme is “leakage resilient”. We point out that, if the bottom Shamir scheme satisfies both “leakage recoverability” and “contaminated leakage irrecoverability”, the attacker can distinguish \(\mathsf{Leak}{\mathsf{B}_0}\) and \(\mathsf{Leak}{\mathsf{A}_1}\) with non-negligible probability.
Besides, if the bottom Shamir scheme does not satisfy “leakage recoverability”, the Shamir scheme itself has some ability to resist leakage, and the bulky structure of the CKO+21 scheme is not necessary.
Tomer Ashur, Mohammad Mahzoun, Dilara Toprakhisar
ePrint Report
Recent progress in practical applications of secure computation protocols has also attracted attention to the symmetric-key primitives underlying them. Whereas traditional ciphers have evolved to be efficient with respect to certain performance metrics, advanced cryptographic protocols call for a different focus. The so-called arithmetic complexity is viewed through the number and layout of non-linear operations in the circuit implemented by the protocol. Symmetric-key algorithms that are optimized with respect to this metric are said to be algebraic ciphers. Previous work targeting ZK and MPC protocols delivered great improvements in the performance of these applications, both in the lab and in practical use. Interestingly, despite their apparent benefits to privacy-aware cloud computing, algebraic ciphers targeting FHE have not attracted similar attention.
In this paper we present Chaghri, an FHE-friendly block cipher enabling efficient transciphering in BGV-like schemes. A complete Chaghri circuit can be implemented using only 16 multiplications, 32 Frobenius automorphisms and 32 rotations, all arranged in a depth-32 circuit. Our HElib implementation achieves a throughput of 0.26 seconds-per-bit, which is 65% faster than AES in the same setting.
Ryota Hira, Tomoaki Kitahara, Daiki Miyahara, Yuko Hara-Azumi, Yang Li, Kazuo Sakiyama
ePrint Report
Lightweight cryptography algorithms are increasing in value because they can enhance security under limited resources. The National Institute of Standards and Technology (NIST) is working on standardising lightweight authenticated encryption with associated data. Thirty-two candidates are included in the second round of the NIST selection process, and their specifications differ with respect to various points. Therefore, for each algorithm, the differences in specifications are expected to affect the algorithm's performance. This study aims to facilitate the selection and design of those algorithms according to the usage scenarios. For this purpose, we investigate and compare the 32 lightweight cryptography algorithm candidates using their specifications and software implementations. The results indicate that latency and memory usage depend on parameters and nonlinear operations. In terms of memory usage, a difference exists in ROM usage but not in RAM usage in our experiments on an ARM platform.
We also discovered that the data size to be processed efficiently differs according to the padding scheme, mode of operation, and block size.
Mark Blunk, Paul Bunn, Samuel Dittmer, Steve Lu, Rafail Ostrovsky
ePrint Report
Secure merge considers the problem of combining two sorted lists (which are either held separately by two parties, or held by two parties in some privacy-preserving manner, e.g. via secret-sharing), and outputting a single merged (sorted) list in a privacy-preserving manner (typically the final list is encrypted or secret-shared amongst the original two parties).
Just as algorithms for \textit{insecure} merge are faster than comparison-based sorting ($\Theta(n)$ versus $\Theta(n \log n)$ for lists of size $n$), we explore protocols for performing a \textit{secure} merge that are more performant than simply invoking a secure sort protocol. Namely, we construct a semi-honest protocol that requires $O(n)$ communication and computation and $O(\log \log n)$ rounds of communication. This matches the metrics of the insecure merge for communication and computation, although it does not match the $O(1)$ round-complexity of insecure merge. Our protocol relies only on black-box use of basic secure primitives, like secure comparison and shuffle.
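For reference, the insecure baseline being matched is the textbook two-pointer merge; note that the branch taken at each step depends on the data, which is precisely the access pattern a secure merge must hide:

```python
def merge(a, b):
    """Textbook two-pointer merge of two sorted lists in Theta(n) time, the
    insecure baseline whose cost the secure protocol aims to match. The
    branch taken at each step depends on the data: exactly the access
    pattern a *secure* merge has to hide."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

print(merge([1, 4, 7], [2, 3, 9]))   # → [1, 2, 3, 4, 7, 9]
```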
Our protocol improves on previous work of [FNO22], which gave a $O(n)$ communication and $O(n)$ round complexity protocol, and other ``naive'' approaches, such as the shuffle-sort paradigm, which has $O(n \log n)$ communication and $O(\log n)$ round complexity. It is also more efficient for most practical applications than either a garbled circuit or fully-homomorphic encryption (FHE) approach, which each require $O(n \log n)$ communication or computation and have $O(1)$ round complexity.
There are several applications that stand to benefit from our result, including secure sort (in cases where two or more parties have access to their own list of data, secure sort reduces to secure merge since the parties can first sort their own data locally), which in-turn has implications for more efficient private set intersection (PSI) protocols; as well as secure mutable database storage and search, whereby secure merge can be used to insert new rows into an existing database.
In building our secure merge protocol, we develop several subprotocols that may be of independent interest. For example, we develop a protocol for secure asymmetric merge (where one list is much larger than the other), which matches theoretic lower-bounds for all three metrics (assuming the ratio of list sizes is small enough).
Simin Ghesmati, Andreas Kern, Aljosha Judmayer, Nicholas Stifter and
ePrint Report
Over the years, several privacy attacks targeted at UTXO-based cryptocurrencies such as Bitcoin have been proposed. This has led to an arms race between increasingly sophisticated analysis approaches and a continuous stream of proposals that seek to counter such attacks against users' privacy. Recently, PayJoin was presented as a new technique for mitigating one of the most prominent heuristics, namely \emph{common input ownership}.
This heuristic assumes that the inputs of a transaction, and thus the associated addresses, belong to the same entity.
However, a problem with PayJoin is that implementations can accidentally reveal such transactions if the corresponding inputs from the involved parties are not chosen carefully. Specifically, if a transaction is formed in such a way that it contains seemingly unnecessary inputs, it can be identified through the so-called unnecessary input heuristic (UIH).
What is not yet clear is the impact of naive coin selection algorithms within PayJoin implementations that may flag such transactions as PayJoin. This paper investigates the resemblance of PayJoin transactions to ordinary payment transactions by examining the significance of the unnecessary input heuristic in transactions with more than one input and exactly two outputs, which is the common template of recent PayJoin transactions.
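Several variants of the UIH exist in the literature; one simple formulation for the two-output template can be sketched as follows (an illustrative simplification, not the exact heuristic evaluated in the paper):

```python
def flags_unnecessary_input(input_values, output_values):
    """One common formulation of the unnecessary input heuristic (UIH) for
    two-output transactions: flag the transaction if any single input could
    be dropped while the remaining inputs still cover the larger output.
    Real chain analyses use several UIH variants; this is an illustrative
    simplification, not the exact heuristic evaluated in the paper."""
    assert len(output_values) == 2
    larger = max(output_values)
    total = sum(input_values)
    return any(total - v >= larger for v in input_values)

# A wallet would not normally add a coin it does not need to pay 0.5:
print(flags_unnecessary_input([0.3, 0.4, 0.2], [0.5, 0.35]))   # → True
```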
Daniel Kales, Greg Zaverucha
ePrint Report
MPC-in-the-head based zero-knowledge proofs allow one to prove
knowledge of a preimage for a circuit defined over a finite field F. In recent
proofs the soundness depends on the size of F, and small fields require more
parallel repetitions and therefore produce larger proofs. In this paper we
develop and systematically apply lifting strategies to such proof protocols
in order to increase soundness and reduce proof size. The strategies are
(i) lifting parts of the protocol to extension fields of F, (ii) using reverse-
multiplication friendly embeddings to pack elements of F into a larger field
and (iii) to use an alternative circuit representation. Using a combination
of these strategies at different points in the protocol, we design two new
proof systems well suited to small circuits defined over small fields.
As a case study we consider efficient constructions of post-quantum signatures, where a signature is a proof of knowledge of a one-way function preimage, and two commonly used one-way functions are defined over small fields (AES and LowMC). We find that carefully applying these lifting strategies gives shorter signatures than the state-of-the-art: our AES-based signatures are 1.3x shorter than Banquet (PKC 2021) and our LowMC-based signatures are almost 2x shorter than the NIST-candidate algorithm Picnic3. We implement our schemes and provide benchmarks. Finally, we also give other optimizations: some generally applicable to this class of proofs, and some specific to the circuits we focused on.
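The size penalty of small fields is easy to quantify in a stylized model where each parallel repetition contributes soundness error about \(1/|F|\); real protocols such as Banquet and Picnic3 have more refined soundness formulas:

```python
import math

def repetitions(field_size, sec_bits=128):
    """Parallel repetitions needed when one repetition has soundness error
    roughly 1/field_size: an illustrative model (not any protocol's exact
    formula) of why small fields blow up MPC-in-the-head proof sizes."""
    return math.ceil(sec_bits / math.log2(field_size))

# Lifting from F_{2^8} to F_{2^64} cuts the repetition count by 8x:
print(repetitions(2 ** 8), repetitions(2 ** 64))   # → 16 2
```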
Eduardo Soria-Vazquez
ePrint Report
We introduce the first proof system for layered arithmetic circuits over an arbitrary ring $R$ that is (possibly) non-commutative and (possibly) infinite, while only requiring black-box access to its arithmetic and a subset $A \subseteq R$. Our construction only requires limited commutativity and regularity properties from $A$, similar to recent work on efficient information theoretic multi-party computation over non-commutative rings by Escudero and Soria-Vazquez (CRYPTO 2021), but furthermore covering infinite rings.
We achieve our results through a generalization of GKR-style interactive proofs (Goldwasser, Kalai and Rothblum, Journal of the ACM, 2015). When $A$ is a subset of the center of $R$, generalizations of the sum-check protocol and other building blocks are not too problematic. The case when the elements of $A$ only commute with each other, on the other hand, introduces a series of challenges. In order to overcome those, we need to introduce a new definition of polynomial ring over a non-commutative ring, the notion of left (and right) multi-linear extensions, modify the layer consistency equation and adapt the sum-check protocol.
Despite these changes, our results are compatible with recent developments such as linear-time provers. Moreover, for certain rings our construction achieves provers that run in time sublinear in the circuit size. We obtain such results both for known cases, such as matrix and polynomial rings, and for new ones, such as some rings arising from Clifford algebras. Beyond efficiency improvements in computation and/or round complexity for several instantiations, the core conclusion of our results is that state-of-the-art doubly efficient interactive proofs do not require much algebraic structure. This enables exact rather than approximate computation over infinite rings, as well as agile proof systems, where the black-box choice of the underlying ring can be easily switched throughout the software life cycle.
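The construction generalizes the sum-check protocol, the workhorse of GKR-style proofs. For reference, a minimal sketch of the standard *commutative* version (over a prime field, for a multilinear polynomial) is below; the paper's contribution is extending this kind of interaction to non-commutative and infinite rings, which this toy code does not attempt:

```python
import itertools, random

P = 2**61 - 1  # a prime field used as a stand-in; not from the paper

def sumcheck(g, n, p=P):
    """Sum-check for a multilinear g on {0,1}^n with an honest prover.
    Returns (claimed_sum, verifier_accepts)."""
    claim = sum(g(x) for x in itertools.product((0, 1), repeat=n)) % p
    prefix, current = [], claim
    for i in range(n):
        # Prover: restrict variable i; the restriction is linear in it,
        # so sending its values at 0 and 1 determines the whole line.
        rest = n - i - 1
        g0 = sum(g(tuple(prefix) + (0,) + x)
                 for x in itertools.product((0, 1), repeat=rest)) % p
        g1 = sum(g(tuple(prefix) + (1,) + x)
                 for x in itertools.product((0, 1), repeat=rest)) % p
        if (g0 + g1) % p != current:
            return claim, False              # verifier rejects
        r = random.randrange(p)              # verifier's random challenge
        current = (g0 + r * (g1 - g0)) % p   # evaluate the line at r
        prefix.append(r)
    # Final check: a single oracle query to g at the random point.
    return claim, g(tuple(prefix)) % p == current

# Example: g(x1, x2, x3) = x1*x2 + 3*x3, which is multilinear.
g = lambda x: (x[0] * x[1] + 3 * x[2]) % P
print(sumcheck(g, 3))  # (14, True)
```

In the non-commutative setting the paper describes, even defining the analogue of this univariate restriction requires the new notion of left and right multilinear extensions.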
Diego F. Aranha, Youssef El Housni, Aurore Guillevic
ePrint Report
Elliptic curves have become key ingredients for instantiating zero-knowledge proofs and more generally proof systems. Recently, there have been many tailored constructions of these curves that aim at efficiently implementing different kinds of proof systems. In this survey we provide the reader with a comprehensive view on existing work and revisit the contributions in terms of efficiency and security. We present an overview at three stages of the process: curves to instantiate a SNARK, curves to instantiate a recursive SNARK, and also curves to express an elliptic-curve related statement. We provide new constructions of curves for SNARKs and generalize the state-of-the-art constructions for recursive SNARKs. We also exhaustively document the existing work and open-source implementations.
Yi Liu, Qi Wang, Siu-Ming Yiu
ePrint Report
Time-lock puzzle schemes allow one to encrypt messages for the future. More concretely, one can efficiently generate a time-lock puzzle for a secret/solution $s$ such that $s$ remains hidden until a specified time $T$ has elapsed, even against parallel adversaries. However, since computation on secrets within multiple puzzles can be performed only when all of these puzzles are solved, the usage of classical time-lock puzzles is greatly limited. Homomorphic time-lock puzzle (HTLP) schemes were thus proposed to allow evaluating functions over puzzles directly, without solving them.
However, although efficient HTLP schemes exist, further improvements are needed for practicality. In this paper, we improve HTLP schemes to broaden their application scenarios along two axes: applicability and verifiability. In terms of applicability, we design the first multiplicatively homomorphic TLP scheme with solution space $\mathbb{Z}_n^*$, which is more expressive than the original one, e.g., for representing integers. Then, to fit HTLP into scenarios requiring verifiability, which is missing in existing schemes, we propose three simple and fast protocols, for both the additively HTLP scheme and our multiplicatively HTLP scheme. The first two protocols allow a puzzle solver to convince others of the correctness of the solution or the invalidity of the puzzle, so that others need not solve the puzzle themselves. The third protocol allows a puzzle generator to prove the validity of his puzzles. A puzzle in our scheme is only $1.25$ KB, and one multiplication on puzzles takes only $0.01$ ms. Meanwhile, the overhead of each protocol is less than $0.6$ KB in communication and $40$ ms in computation. Hence, HTLP retains excellent efficiency in both communication and computation while gaining these versatile properties.
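To make the multiplicative homomorphism concrete, here is a toy sketch in the spirit of the classical RSW repeated-squaring puzzle (a hypothetical illustration with tiny parameters, not the paper's scheme or its security): a puzzle for $s$ is $(x, s \cdot x^{2^T} \bmod n)$, and multiplying two puzzles component-wise yields a puzzle for the product of the secrets.

```python
import random
from math import gcd

p, q = 1000003, 1000033           # tiny primes, for illustration only
n, phi = p * q, (p - 1) * (q - 1)
T = 1000                          # hardness: T sequential squarings

def generate(s):
    """Puzzle for secret s in Z_n^*: (x, s * x^(2^T) mod n)."""
    x = random.randrange(2, n)
    while gcd(x, n) != 1:
        x = random.randrange(2, n)
    e = pow(2, T, phi)            # generator's shortcut via phi(n)
    return x, (s * pow(x, e, n)) % n

def solve(puz):
    """Recover s by T sequential squarings (no shortcut for the solver)."""
    x, z = puz
    y = x
    for _ in range(T):
        y = y * y % n
    return z * pow(y, -1, n) % n

def mul(p1, p2):
    # Component-wise product: z1*z2 = s1*s2 * (x1*x2)^(2^T) mod n,
    # i.e. a valid puzzle for the product of the two secrets.
    return (p1[0] * p2[0] % n, p1[1] * p2[1] % n)

a, b = generate(7), generate(11)
print(solve(mul(a, b)))           # 77, without solving a and b separately
```

The point of the homomorphism is visible in `mul`: one solves a single combined puzzle rather than every input puzzle.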
Lior Rotem
ePrint Report
We prove strong security guarantees for a wide array of computational and decisional problems, both in hidden-order groups and in bilinear groups, within the algebraic group model (AGM) of Fuchsbauer, Kiltz and Loss (CRYPTO '18). As our first contribution, we put forth a new fine-grained variant of the Uber family of assumptions in hidden-order groups. This family includes in particular the repeated squaring function of Rivest, Shamir and Wagner, which underlies their time-lock puzzle as well as the main known candidates for verifiable delay functions; and a computational variant of the generalized BBS problem, which underlies the timed commitments of Boneh and Naor (CRYPTO '00). We then provide two results within a variant of the AGM, which show that the hardness of solving problems in this family in a less-than-trivial number of steps is implied by well-studied assumptions. The first reduction may be applied in any group (and in particular, class groups), and is to the RSA assumption; and our second reduction is in RSA groups with a modulus which is the product of two safe primes, and is to the factoring assumption.
Additionally, we prove that the hardness of any computational problem in the Uber family of problems in bilinear groups is implied by the hardness of the $q$-discrete logarithm problem. The parameter $q$ in our reduction is the maximal degree in which a variable appears in the polynomials which define the specific problem within the Uber family. This improves upon a recent result of Bauer, Fuchsbauer and Loss (CRYPTO '20), who obtained a similar implication but for a parameter $q$ which is lower bounded by the maximal total degree of one of the above polynomials. We discuss the implications of this improvement to prominent group key-exchange protocols.
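The gap between the two degree measures is easy to see on a small example. The sketch below (hypothetical names, monomials as exponent dictionaries) computes the maximal degree of any single variable, which parametrizes this reduction, against the maximal total degree, which lower-bounds the earlier one:

```python
# A polynomial is a list of monomials; a monomial maps variable -> exponent.
def max_individual_degree(polys):
    """Largest exponent any single variable attains in any monomial."""
    return max((e for poly in polys for mono in poly for e in mono.values()),
               default=0)

def max_total_degree(polys):
    """Largest total degree of any monomial."""
    return max((sum(mono.values()) for poly in polys for mono in poly),
               default=0)

# e.g. the polynomial X^2*Y^3 + X*Z (a made-up Uber instance)
poly = [{'X': 2, 'Y': 3}, {'X': 1, 'Z': 1}]
print(max_individual_degree([poly]))  # 3: Y appears with degree 3
print(max_total_degree([poly]))       # 5: the monomial X^2*Y^3
```

For this polynomial the new reduction needs only the $3$-discrete logarithm assumption, where the earlier bound grows with the total degree $5$.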