IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
24 October 2021
Hyesun Kwak, Dongwon Lee, Yongsoo Song, Sameer Wagh
Homomorphic Encryption (HE), first demonstrated in 2009, is a class of encryption schemes that enables computation over encrypted data. Recent advances in the design of better protocols have led to the development of two different lines of HE schemes -- Multi-Party Homomorphic Encryption (MPHE) and Multi-Key Homomorphic Encryption (MKHE). These primitives cater to different applications as each approach has its own pros and cons.
At a high level, MPHE schemes tend to be much more efficient but require the set of computing parties to be fixed throughout the entire operation, which is frequently a limiting assumption. On the other hand, MKHE schemes tend to scale poorly (quadratically) with the number of parties but allow new parties to be added to the joint computation at any time, since they support computation between ciphertexts under different keys.
In this work, we formalize a new variant of HE called Multi-Group Homomorphic Encryption (MGHE). Stated informally, an MGHE scheme provides a seamless integration between MPHE and MKHE, thereby enjoying the best of both worlds. In this framework, a group of parties jointly generates a public key, which results in compact ciphertexts and efficient homomorphic operations, as in MPHE. However, unlike MPHE, it also supports computation on data encrypted under different keys, as in MKHE.
We provide the first construction of such an MGHE scheme from BFV and demonstrate experimental results. More importantly, the joint public key generation procedure of our scheme is fully non-interactive so that the set of computing parties does not have to be determined and no information about other parties is needed in advance of individual key generation. At the heart of our construction is a novel re-factoring of the relinearization key.
Long Meng, Liqun Chen
Time-stamping services produce time-stamp tokens as evidence to prove that digital data existed at given points in time. Time-stamp tokens contain verifiable cryptographic bindings between data and time, which are produced using cryptographic algorithms. In the ANSI, ISO/IEC and IETF standards for time-stamping services, cryptographic algorithms are addressed in two aspects: (i) client-side hash functions used to hash data into digests for nondisclosure, and (ii) server-side algorithms used to bind the time and digests of data. These algorithms have limited lifespans due to their operational life cycles and the increasing computational power of attackers. Once an algorithm is compromised, time-stamp tokens that use it are no longer trusted. The ANSI and ISO/IEC standards provide renewal mechanisms for time-stamp tokens. However, the renewal mechanisms for client-side hash functions are specified ambiguously, which may lead to failed implementations. Moreover, existing security analyses of long-term time-stamping schemes only cover server-side renewal; client-side renewal is missing. In this paper, we analyse the necessity of client-side renewal and propose a comprehensive long-term time-stamping scheme that addresses both client-side and server-side renewal mechanisms. We then formally analyse and evaluate the client-side security of our proposed scheme.
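To make the two roles concrete, here is a minimal, runnable sketch (ours, not the paper's scheme): the client hashes its data for nondisclosure, a toy server binds the digest to a time using an HMAC as a stand-in for a real time-stamping signature, and client-side renewal re-hashes the data together with the old token under a stronger hash function before the old one is compromised.

# Toy illustration only; HMAC replaces the TSA's real signing algorithm.
import hashlib, hmac, json, time

SERVER_KEY = b"demo-only-server-key"  # stand-in for the TSA's signing key

def issue_token(digest: bytes) -> dict:
    """Server side: bind the client's digest to the current time."""
    t = int(time.time())
    binding = hmac.new(SERVER_KEY, digest + t.to_bytes(8, "big"), "sha256").hexdigest()
    return {"digest": digest.hex(), "time": t, "binding": binding}

def client_timestamp(data: bytes, hash_name: str = "sha256") -> dict:
    """Client side: hash the data (for nondisclosure) and request a token."""
    digest = hashlib.new(hash_name, data).digest()
    token = issue_token(digest)
    token["hash"] = hash_name
    return token

def client_renew(data: bytes, old_token: dict, new_hash: str) -> dict:
    """Client-side renewal: re-hash data together with the old token under a
    stronger hash function and obtain a fresh token over the new digest."""
    material = data + json.dumps(old_token, sort_keys=True).encode()
    digest = hashlib.new(new_hash, material).digest()
    token = issue_token(digest)
    token["hash"] = new_hash
    token["previous"] = old_token
    return token

doc = b"contract v1"
t1 = client_timestamp(doc, "sha256")
t2 = client_renew(doc, t1, "sha3_256")   # renew before SHA-256 (hypothetically) weakens
print(t2["hash"], t2["previous"]["hash"])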
Bhaskar Roberts, Mark Zhandry
The construction of public key quantum money based on standard cryptographic assumptions is a longstanding open question. Here we introduce franchised quantum money, an alternative form of quantum money that is easier to construct. Franchised quantum money retains the features of a useful quantum money scheme, namely unforgeability and local verification: anyone can verify banknotes without communicating with the bank. In franchised quantum money, every user gets a unique secret verification key, and the scheme is secure against counterfeiting and sabotage, a new security notion that appears in the franchised model. Finally, we construct franchised quantum money and prove security assuming one-way functions.
Ashrujit Ghoshal, Riddhi Ghosal, Joseph Jaeger, Stefano Tessaro
This paper continues the study of memory-tight reductions (Auerbach et al., CRYPTO '17). These are reductions that only incur minimal memory costs over those of the original adversary, allowing precise security statements for memory-bounded adversaries (under appropriate assumptions expressed in terms of adversary time and memory usage). Despite its importance, only a few techniques to achieve memory-tightness are known, and impossibility results in prior works show that even basic, textbook reductions cannot be made memory-tight.
This paper introduces a new class of memory-tight reductions which leverage random strings in the interaction with the adversary to hide state information, thus shifting the memory costs to the adversary.
We exhibit this technique with several examples. We give memory-tight proofs for digital signatures allowing many forgery attempts when considering randomized message distributions or probabilistic RSA-FDH signatures specifically. We prove security of the authenticated encryption scheme Encrypt-then-PRF with a memory-tight reduction to the underlying encryption scheme. By considering specific schemes or restricted definitions we avoid generic impossibility results of Auerbach et al. (CRYPTO '17) and Ghoshal et al. (CRYPTO '20).
As a further case study, we consider the textbook equivalence of CCA-security for public-key encryption for one or multiple encryption queries. We show two qualitatively different memory-tight versions of this result, depending on the considered notion of CCA security.
Maikel Kerkhof, Lichao Wu, Guilherme Perin, Stjepan Picek
Deep learning-based side-channel analysis represents one of the most powerful side-channel attack approaches. Thanks to its capability to deal with raw features and countermeasures, it has become the de facto standard evaluation method for evaluation labs and certification schemes. To reach this performance level, recent works have significantly improved deep learning-based attacks from various perspectives, such as hyperparameter tuning, design guidelines, or custom neural network architecture elements. Still, limited attention has been given to the core of the learning process: the loss function.
This paper analyzes the limitations of existing loss functions and then proposes a novel side-channel analysis-optimized loss function, the Focal Loss Ratio (FLR), to cope with the identified drawbacks of other loss functions. To validate our design, we 1) conduct a thorough experimental study considering various scenarios (datasets, leakage models, neural network architectures) and 2) compare with other loss functions commonly used in deep learning-based side-channel analysis (both ``traditional'' ones and those designed for side-channel analysis). Our results show that the FLR loss outperforms other loss functions in various conditions while incurring no computational overhead compared to common loss functions like categorical cross-entropy.
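For context only, here is a minimal numpy sketch (ours) of two baseline losses that such comparisons typically include, categorical cross-entropy and the focal loss; the FLR loss itself is defined in the paper and is not reproduced here.

import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """y_true: one-hot labels, y_pred: softmax probabilities, both (batch, classes)."""
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

def focal_loss(y_true, y_pred, gamma=2.0, eps=1e-12):
    """Focal loss (Lin et al.): down-weights well-classified examples via (1 - p_t)^gamma."""
    y_pred = np.clip(y_pred, eps, 1.0)
    p_t = np.sum(y_true * y_pred, axis=1)            # probability of the true class
    return -np.mean((1.0 - p_t) ** gamma * np.log(p_t))

y_true = np.eye(4)[[0, 2, 3]]                        # 3 one-hot labels over 4 classes
y_pred = np.array([[0.7, 0.1, 0.1, 0.1],
                   [0.2, 0.2, 0.5, 0.1],
                   [0.1, 0.1, 0.2, 0.6]])
print(categorical_cross_entropy(y_true, y_pred), focal_loss(y_true, y_pred))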
Keitaro Hashimoto, Shuichi Katsumata, Eamonn Postlethwaite, Thomas Prest, Bas Westerbaan
Continuous group key agreements (CGKAs) are a class of protocols that can provide strong security guarantees to secure group messaging protocols such as Signal and MLS. Protection against device compromise is provided by commit messages: at a regular rate, each group member may refresh their key material by uploading a commit message, which is then downloaded and processed by all the other members. In practice, propagating commit messages dominates the bandwidth consumption of existing CGKAs.
We propose Chained CmPKE, a CGKA with an asymmetric bandwidth cost: in a group of $N$ members, a commit message costs $O(N)$ to upload and $O(1)$ to download, for a total bandwidth cost of $O(N)$. In contrast, TreeKEM [19, 24, 76] costs $\Omega(\log N)$ in both directions, for a total cost $\Omega(N\log N)$. Our protocol relies on generic primitives, and is therefore readily post-quantum.
We go one step further and propose post-quantum primitives that are tailored to Chained CmPKE, which allows us to cut the growth rate of uploaded commit messages by two or three orders of magnitude compared to naive instantiations. Finally, we realize a software implementation of Chained CmPKE. Our experiments show that even for groups with a size as large as $N = 2^{10}$, commit messages can be computed and processed in less than 100 ms.
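A back-of-the-envelope comparison of the per-commit totals stated above, with all hidden constants set to 1 and sizes in arbitrary units (our simplification; the real constants depend on the chosen primitives):

import math

def chained_cmpke_total(n):
    # O(N) upload + N receivers x O(1) download = O(N) total
    return n + n * 1

def treekem_total(n):
    # Omega(log N) upload + N receivers x Omega(log N) download = Omega(N log N) total
    return math.log2(n) + n * math.log2(n)

for n in (2**6, 2**8, 2**10):
    print(n, round(chained_cmpke_total(n)), round(treekem_total(n)))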
Veronika Kuchta, Joseph K. Liu
In this paper, we formally prove the non-slanderability property of the first linkable ring signature scheme, published at ACISP 2004 (where the notion was called linkable spontaneous anonymous group signature, LSAG). This rigorous security analysis gives confidence to any future construction of a Ring Confidential Transaction (RingCT) protocol for blockchain systems that uses this signature scheme as its basis.
Tianyu Zheng, Shang Gao, Bin Xiao, Yubo Song
In this paper, we propose any-out-of-many proofs, a logarithmic zero-knowledge scheme for proving knowledge of arbitrarily many secrets out of a public list. Unlike existing $k$-out-of-$N$ proofs [S\&P'21, CRYPTO'21], our approach also hides the exact number of secrets $k$, which can be used to achieve a higher anonymity level. Furthermore, we enhance the efficiency of our scheme through a transformation that adopts the improved inner product argument from Bulletproofs [S\&P'18]: only $2 \cdot \lceil \log_2(N) \rceil + 13$ elements need to be sent in a non-interactive proof.
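The stated non-interactive proof size, $2 \cdot \lceil \log_2(N) \rceil + 13$ elements, evaluated for a few list sizes $N$ (a direct evaluation of the formula above):

import math

def proof_elements(N: int) -> int:
    # 2 * ceil(log2(N)) + 13 group elements, as stated in the abstract
    return 2 * math.ceil(math.log2(N)) + 13

for N in (16, 64, 1024, 2**16):
    print(N, proof_elements(N))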
We further use our proof scheme to implement both multiple ring signature schemes and RingCT protocols. For multiple ring signatures, we need to add a boundary constraint on the number $k$ to avoid proofs over an empty secret set. Thus, an improved version called the bounded any-out-of-many proof is presented, which preserves all the nice features of the original protocol, such as high anonymity and logarithmic size. As for RingCT, both the original and bounded proofs can be used safely. Our performance evaluation indicates that our RingCT protocol is more efficient and secure than existing ones. We also believe our techniques are applicable in other privacy-preserving settings.
23 October 2021
Dakshita Khurana
We introduce non-interactive distributionally indistinguishable arguments (NIDI) to address a significant weakness of NIWI proofs: namely, the lack of meaningful secrecy when proving statements about $\mathsf{NP}$ languages with unique witnesses.
NIDI arguments allow a prover P to send a single message to verifier V, given which V obtains a sample d from a (secret) distribution D, together with a proof of membership of d in an NP language L. The soundness guarantee is that if the sample d obtained by the verifier V is not in L, then V outputs $\bot$. The privacy guarantee is that secrets about the distribution remain hidden: for every pair of distributions $D_0$ and $D_1$ of instance-witness pairs in L such that instances sampled according to $D_0$ or $D_1$ are (sufficiently) hard-to-distinguish, a NIDI that outputs instances according to $D_0$ with proofs of membership in L is indistinguishable from one that outputs instances according to $D_1$ with proofs of membership in L.
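One way to write the privacy guarantee as a formula (notation ours, not necessarily the paper's): for every PPT distinguisher $\mathcal{A}$ and every pair of instance-witness distributions $D_0, D_1$ over L whose instances are (sufficiently) hard to distinguish,

$$\Big|\Pr\big[\mathcal{A}\big(\mathsf{NIDI.P}(1^\lambda, D_0)\big)=1\big] - \Pr\big[\mathcal{A}\big(\mathsf{NIDI.P}(1^\lambda, D_1)\big)=1\big]\Big| \leq \mathrm{negl}(\lambda).$$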
- We build NIDI arguments for sufficiently hard-to-distinguish distributions assuming sub-exponential indistinguishability obfuscation and sub-exponential one-way functions.
- We demonstrate preliminary applications of NIDI and of our techniques to obtaining the first (relaxed) non-interactive constructions in the plain model, from well-founded assumptions, of:
1. Commit-and-prove that provably hides the committed message
2. CCA-secure commitments against non-uniform adversaries.
The commit phase of our commitment schemes consists of a single message from the committer to the receiver, followed by a randomized output by the receiver (that need not necessarily be returned to the committer).
Amey Bhangale, Chen-Da Liu-Zhang, Julian Loss, Kartik Nayak
We investigate the communication complexity of Byzantine agreement protocols for long messages against an adaptive adversary. In this setting, prior results achieved communication complexities of either $O(nl\cdot\mathrm{poly}(\kappa))$ or $O(nl + n^2 \cdot \mathrm{poly}(\kappa))$ for $l$-bit messages. We improve the state of the art by presenting protocols with communication complexity $O(nl + n \cdot \mathrm{poly}(\kappa))$ in both the synchronous and asynchronous communication models. The synchronous protocol tolerates $t \le (1-\epsilon) \frac{n}{2}$ corruptions and assumes a VRF setup, while the asynchronous protocol tolerates $t \le (1-\epsilon) \frac{n}{3}$ corruptions under further cryptographic assumptions. Our protocols are very simple and combine subcommittee election with the recent approach of Nayak et al. (DISC '20). Surprisingly, the analysis of our protocols is \emph{all but simple} and involves an interesting new application of McDiarmid's inequality to obtain {\em optimal} corruption thresholds.
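A rough numeric comparison of the three bounds quoted above, with all hidden constants set to 1 and $\mathrm{poly}(\kappa)$ instantiated as $\kappa^2$ purely for illustration (our choice, not the paper's):

def prior_a(n, l, kappa):
    return n * l * kappa**2            # O(n l poly(kappa))

def prior_b(n, l, kappa):
    return n * l + n**2 * kappa**2     # O(n l + n^2 poly(kappa))

def this_work(n, l, kappa):
    return n * l + n * kappa**2        # O(n l + n poly(kappa))

n, l, kappa = 1000, 10**6, 256
print(prior_a(n, l, kappa), prior_b(n, l, kappa), this_work(n, l, kappa))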
Marc Joye
First posed as a challenge in 1978 by Rivest et al., fully homomorphic encryption—the ability to evaluate any function over encrypted data—was only solved in 2009 in a breakthrough result by Gentry. After a decade of intense research, practical solutions have emerged and are being pushed for standardization.
This guide is intended for practitioners. It explains the inner workings of TFHE, a torus-based fully homomorphic encryption scheme. More precisely, it describes its implementation on a discretized version of the torus. It also explains the programmable bootstrapping technique in detail.
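A minimal sketch of the discretization idea (our toy parameters, not TFHE's): torus elements are stored as integers modulo $2^{32}$, so an integer $x$ represents the torus point $x/2^{32}$; a message in $\mathbb{Z}_p$ is placed on one of $p$ evenly spaced torus points, and decoding rounds away small noise.

import random

Q = 2**32      # discretization modulus
P = 8          # plaintext space Z_8

def encode(m: int) -> int:
    return (m * Q) // P            # m/P on the torus, scaled to integers mod Q

def add_noise(x: int, stddev: float = 2**20) -> int:
    return (x + round(random.gauss(0, stddev))) % Q   # small Gaussian error

def decode(x: int) -> int:
    return round(x * P / Q) % P    # round to the nearest of the P slots

m = 5
c = add_noise(encode(m))
assert decode(c) == m
print(m, c, decode(c))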
Zeta Avarikioti, Krzysztof Pietrzak, Iosif Salem, Stefan Schmid, Samarth Tiwari, Michelle Yeo
Payment channels effectively move the transaction load off-chain, thereby successfully addressing the inherent scalability problem most cryptocurrencies face. A major drawback of payment channels is the need to ``top up'' funds on-chain when a channel is depleted. Rebalancing was proposed to alleviate this issue, where parties with depleting channels move their funds along a cycle to replenish their channels off-chain. Existing rebalancing protocols either provide only local solutions or compromise privacy.
In this work, we present an opt-in rebalancing protocol that is both private and globally optimal, meaning our protocol maximizes the total amount of rebalanced funds. We study rebalancing from the framework of linear programming. To obtain full privacy guarantees, we leverage multi-party computation in solving the linear program, which is executed by selected participants to maintain efficiency. Finally, we efficiently decompose the rebalancing solution into incentive-compatible cycles which conserve user balances when executed atomically.
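A toy version of the underlying optimization (ours): maximize the total flow routed through the network subject to per-channel capacities and flow conservation at every node, so that the optimum decomposes into rebalancing cycles that conserve user balances. The example network and capacities below are made up; the paper additionally solves this program under multi-party computation for privacy.

import numpy as np
from scipy.optimize import linprog

nodes = ["A", "B", "C", "D"]
# (from, to, capacity): how much 'from' can push towards 'to' in this direction.
edges = [("A", "B", 5), ("B", "C", 4), ("C", "A", 6), ("C", "D", 3), ("D", "A", 2)]

n, m = len(nodes), len(edges)
idx = {v: i for i, v in enumerate(nodes)}

# Flow conservation: for each node, outgoing flow minus incoming flow equals 0.
A_eq = np.zeros((n, m))
for j, (u, v, _) in enumerate(edges):
    A_eq[idx[u], j] += 1.0
    A_eq[idx[v], j] -= 1.0
b_eq = np.zeros(n)

c = -np.ones(m)                          # maximize total flow => minimize -sum(f)
bounds = [(0, cap) for (_, _, cap) in edges]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
for (u, v, cap), f in zip(edges, res.x):
    print(f"{u}->{v}: {f:.1f} / {cap}")
print("total rebalanced:", -res.fun)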
Anubhab Baksi, Vishnu Asutosh Dasu, Banashri Karmakar, Anupam Chattopadhyay, Takanori Isobe
The linear layer, which is basically a binary non-singular matrix, is an integral part of cipher construction in many private key ciphers. As a result, optimising the linear layer for device implementation has been an important research direction for about two decades. The Boyar-Peralta algorithm (SEA'10) is one such common algorithm, which offers significant improvement compared to the straightforward implementation. This algorithm only returns implementations with XOR2 gates and is deterministic. Over the last couple of years, several improvements over this algorithm have been proposed, adding support for XOR3 gates as well as making it randomised. In this work, we take an existing improvement (Tan and Peyrin, TCHES'20) that allows randomised execution and extend it to support three-input XOR gates. This complements the other work done in this direction (Banik et al., IWSEC'19), which also supports XOR3 gates with randomised execution. Further, following another work (Maximov, ePrint'19), we include one additional tie-breaker condition in the original Boyar-Peralta algorithm. Our work thus collates and extends the state of the art, while offering a simpler interface. We show several results that improve upon the previously best-known results.
Jiaxin Guan, Mark Zhandry
Let $p$ be a polynomial, and let $p^{(i)}(x)$ be the result of iterating the polynomial $i$ times, starting at an input $x$. The case where $p(x)$ is the homogeneous polynomial $x^2$ has been extensively studied in cryptography. Due to its associated group structure, iterating this polynomial gives rise to a number of interesting cryptographic applications such as time-lock puzzles and verifiable delay functions. On the other hand, the associated group structure leads to quantum attacks on the applications.
In this work, we consider whether inhomogeneous polynomials, such as $2x^2+3x+1$, can have useful cryptographic applications. We focus on the case of polynomials mod $2^n$, due to some useful mathematical properties. The natural group structure no longer exists, so neither the quantum attacks nor the applications immediately carry over. We nevertheless show classical polynomial-time attacks on analogs of hard problems from the homogeneous setting. We conclude by proposing new computational assumptions relating to these inhomogeneous polynomials, with cryptographic applications.
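A minimal illustration of the object being iterated, using the example polynomial from the abstract reduced mod $2^n$:

def p(x: int, n: int) -> int:
    # the inhomogeneous example polynomial p(x) = 2x^2 + 3x + 1 modulo 2^n
    return (2 * x * x + 3 * x + 1) % (1 << n)

def iterate(x: int, i: int, n: int) -> int:
    # p^(i)(x): i-fold iteration, as in the abstract; naive iteration takes i steps
    for _ in range(i):
        x = p(x, n)
    return x

n = 64
print(iterate(3, 10, n))       # p^(10)(3) mod 2^64
print(iterate(3, 1000, n))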
Nishanth Chandran, Pouyan Forghani, Juan Garay, Rafail Ostrovsky, Rutvik Patel, Vassilis Zikas
Most existing work on secure multi-party computation (MPC) ignores a key idiosyncrasy of modern communication networks: there are a limited number of communication paths between any two nodes, many of which might even be corrupted. The work by Garay and Ostrovsky [EUROCRYPT'08] on almost-everywhere MPC (AE-MPC) introduced “best-possible security” properties for MPC over such incomplete networks, where necessarily some of the honest parties may be excluded from the computation—we call such parties “doomed.”
In this work we provide a universally composable definition of almost-everywhere security, which allows us to automatically and accurately capture the guarantees of AE-MPC (as well as AE-communication, the analogous “best-possible security” version of secure communication) in the Universal Composability (UC) framework of Canetti. Our result offers the first simulation-based treatment of this important but under-investigated problem, along with the first simulation-based proof of AE-MPC.
Craig Gentry, Shai Halevi, Vadim Lyubashevsky
Non-interactive publicly verifiable secret sharing (PVSS) schemes allow parties to re-share a secret in a decentralized setting in the presence of malicious parties. A recently proposed application of PVSS schemes is to enable permissionless proof-of-stake blockchains to ``keep a secret'' via a sequence of committees that share that secret. Such committees can use the secret to produce signatures on the blockchain's behalf, or to disclose hidden data conditioned on consensus that some event has occurred. Such a setting may involve thousands of parties, so the PVSS scheme that it uses must be very efficient, both in computation and communication. Yet, previous PVSS schemes have large proofs and/or require many exponentiations over large groups.
We present a non-interactive PVSS scheme in which the underlying encryption scheme is based on the learning with errors (LWE) problem. While lattice-based encryption schemes are very fast, they have issues with bandwidth (long ciphertexts and public keys). We deal with the bandwidth issue in two ways. First, we adapt the Peikert-Vaikuntanathan-Waters (PVW) encryption scheme to the multi-receiver setting so that the bulk of the parties' keys is a common random string, and so that we get good amortized communication: $\Omega(1)$ plaintext/ciphertext rate (rate $\approx 1/60$ for 100 parties, $\approx 1/8$ for 1000 parties, approaching 1/2 as the number of parties grows). Second, we use bulletproofs over a DL-group of order about 256 bits to get compact proofs of correct encryption of shares. Switching from the lattice setting to the DL setting is relatively painless, as we equate the LWE modulus with the order of the group, and apply dimension reduction to vectors before the switch to minimize the number of exponentiations in the bulletproof. An implementation of our PVSS for 1000 parties showed that it is quite practical, and should remain so even with an increase of up to two orders of magnitude in the group size.
Jonathan Bradbury, Nir Drucker, Marius Hillenbrand
Software implementations of the number-theoretic transform (NTT) method often leverage Harvey’s butterfly to gain speedups. This is the case in cryptographic libraries such as IBM’s HElib, Microsoft’s SEAL, and Intel’s HEXL, which provide optimized implementations of fully homomorphic encryption schemes or their primitives.
We extend the Harvey butterfly to the radix-4 case for primes in the range [2^31, 2^52). This enables us to use the vector multiply sum logical (VMSL) instruction, which is available on recent IBM Z® platforms. On an IBM z14 system, our implementation performs more than 2.5x faster than the scalar implementation of SEAL we converted to native C. In addition, we provide a mixed-radix implementation that uses AVX512-IFMA on Intel’s Ice Lake processor, which happens to be ~1.1 times faster than the super-optimized implementation of Intel’s HEXL. Finally, we compare the performance of some of our implementations using GCC versus Clang compilers and discuss the results.
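For readers who want a correctness reference, here is a naive $O(n^2)$ number-theoretic transform over a small NTT-friendly prime (our sketch; it is not the radix-4 Harvey butterfly or any of the vectorized implementations discussed above, and is useful only for testing):

P = 998244353            # 119 * 2^23 + 1, a common NTT-friendly prime
G = 3                    # primitive root modulo P

def ntt(a):
    n = len(a)
    assert (P - 1) % n == 0
    w = pow(G, (P - 1) // n, P)          # primitive n-th root of unity mod P
    return [sum(a[j] * pow(w, i * j, P) for j in range(n)) % P for i in range(n)]

def intt(A):
    n = len(A)
    w_inv = pow(pow(G, (P - 1) // n, P), P - 2, P)
    n_inv = pow(n, P - 2, P)
    return [n_inv * sum(A[j] * pow(w_inv, i * j, P) for j in range(n)) % P for i in range(n)]

a = [1, 2, 3, 4, 0, 0, 0, 0]
assert intt(ntt(a)) == a                 # round-trip check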
Reo Eriguchi, Koji Nuida
Homomorphic secret sharing (HSS) for a function $f$ allows input parties to distribute shares for their private inputs and then locally compute output shares from which the value of $f$ is recovered. HSS can be directly used to obtain a two-round multiparty computation (MPC) protocol for possibly non-threshold adversary structures whose communication complexity is independent of the size of $f$. In this paper, we propose two constructions of HSS schemes supporting parallel evaluation of a single low-degree polynomial and tolerating multipartite and general adversary structures. Our multipartite scheme tolerates a wider class of adversary structures than the previous multipartite one in the particular case of a single evaluation and has exponentially smaller share size than the general construction. While restricting the range of tolerable adversary structures (but still applicable to non-threshold ones), our schemes perform $\ell$ parallel evaluations with communication complexity approximately $\ell/\log\ell$ times smaller than simply using $\ell$ independent instances. We also formalize two classes of adversary structures taking into account real-world situations to which the previous threshold schemes are inapplicable. Our schemes then perform $O(m)$ parallel evaluations with almost the same communication cost as a single evaluation, where $m$ is the number of parties.
15 October 2021
Vidal Attias, Luigi Vigneri, Vassil Dimitrov
The importance of efficient multi-exponentiation algorithms in a large spectrum of cryptographic applications continues to grow. Many of the algorithms proposed in the past focus exclusively on minimizing the number of modular multiplications. However, a small reduction in multiplicative complexity can be easily overshadowed by other figures of merit. In this article we demonstrate a large number of practical results aimed at concrete cryptographic tasks requiring multi-exponentiations and provide recommendations on the best possible algorithmic strategies for different selections of security parameters.
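As a baseline for the strategies being compared, here is the classic simultaneous (Shamir's trick) two-base multi-exponentiation, $g_1^{e_1} g_2^{e_2} \bmod m$, which saves multiplications by scanning both exponents together (our illustration, not one of the paper's optimized variants):

def multi_exp(g1, e1, g2, e2, m):
    g12 = (g1 * g2) % m                      # precompute the product of the bases
    result = 1
    for bit in range(max(e1.bit_length(), e2.bit_length()) - 1, -1, -1):
        result = (result * result) % m       # one squaring per bit
        b1, b2 = (e1 >> bit) & 1, (e2 >> bit) & 1
        if b1 and b2:
            result = (result * g12) % m      # single multiplication covers both bases
        elif b1:
            result = (result * g1) % m
        elif b2:
            result = (result * g2) % m
    return result

m = (1 << 127) - 1
g1, e1, g2, e2 = 3, 123456789, 5, 987654321
assert multi_exp(g1, e1, g2, e2, m) == (pow(g1, e1, m) * pow(g2, e2, m)) % m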
Chaya Ganesh, Claudio Orlandi, Mahak Pancholi, Akira Takahashi, Daniel Tschudi
Bulletproofs (Bünz et al. IEEE S&P 2018) are a celebrated ZK proof system that allows for short and efficient proofs, and have been implemented and deployed in several real-world systems. In practice, they are most often implemented in their non-interactive version obtained using the Fiat-Shamir transform, despite the lack of a formal proof of security for this setting.
Prior to this work, there was no evidence that malleability attacks were not possible against Fiat-Shamir Bulletproofs. Malleability attacks can lead to very severe vulnerabilities, as they allow an adversary to forge proofs re-using or modifying parts of the proofs provided by the honest parties. In this paper, we show for the first time that Bulletproofs (or any other similar multi-round proof system satisfying some form of weak unique response property) achieve simulation-extractability in the algebraic group model.
This implies that Fiat-Shamir Bulletproofs are non-malleable.