IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
23 September 2019
Kai-Min Chung, Luowen Qian
We construct the first adaptively secure garbling scheme based on standard public-key assumptions for garbling a circuit $C: \{0, 1\}^n \mapsto \{0, 1\}^m$ that simultaneously achieves a near-optimal online complexity $n + m + \textrm{poly}(\lambda, \log |C|)$ (where $\lambda$ is the security parameter) and \emph{preserves the parallel efficiency} for evaluating the garbled circuit; namely, if the depth of $C$ is $d$, then the garbled circuit can be evaluated in parallel time $d \cdot \textrm{poly}(\log|C|, \lambda)$. In particular, our construction improves over the recent seminal work of Garg et al. (Eurocrypt 2018), which constructs the first adaptively secure garbling scheme with a near-optimal online complexity under the same assumptions, but the garbled circuit can only be evaluated gate by gate in a sequential manner. Our construction combines their novel idea of linearization with several new ideas to achieve parallel efficiency without compromising online complexity.
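As a rough, scheme-agnostic illustration of the parallel-efficiency target (this sketch is plain circuit evaluation, not the garbling construction itself; the circuit below is hypothetical): gates on the same topological level have no data dependencies, so evaluation takes one round per level, i.e. time proportional to the depth $d$ rather than the size $|C|$.

```python
# Toy level-by-level circuit evaluation: sequential over levels,
# parallel within a level. Not the paper's scheme; illustration only.
from concurrent.futures import ThreadPoolExecutor

AND = lambda a, b: a & b
XOR = lambda a, b: a ^ b
# Hypothetical circuit: (output wire, op, input wire 1, input wire 2, level).
gates = [
    ("w2", AND, "w0", "w1", 0),
    ("w3", XOR, "w0", "w1", 0),
    ("w4", XOR, "w2", "w3", 1),
]

def evaluate(inputs):
    wires = dict(inputs)
    depth = 1 + max(level for *_, level in gates)
    with ThreadPoolExecutor() as pool:
        for lvl in range(depth):  # d sequential rounds ...
            layer = [g for g in gates if g[4] == lvl]
            done = pool.map(lambda g: (g[0], g[1](wires[g[2]], wires[g[3]])), layer)
            wires.update(done)    # ... each evaluated in parallel
    return wires

print(evaluate({"w0": 1, "w1": 0}))  # w4 = (w0 AND w1) XOR (w0 XOR w1) = 1
```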
We take this one step further by constructing the first adaptively secure garbling scheme for parallel RAM (PRAM) programs under standard assumptions that preserves parallel efficiency. The only previous such constructions we are aware of rely on strong assumptions such as indistinguishability obfuscation (Ananth et al., TCC 2016). Our construction is based on the work of Garg et al. (Crypto 2018) on adaptively secure garbled RAM, but again introduces several new ideas to handle parallel RAM computation, which may be of independent interest. As an application, this yields the first constant-round secure computation protocol for persistent PRAM programs in the malicious setting from standard assumptions.
Alessandro Chiesa, Dev Ojha, Nicholas Spooner
We present a new methodology to efficiently realize recursive composition of succinct non-interactive arguments of knowledge (SNARKs). Prior to this work, the only known methodology relied on pairing-based SNARKs instantiated on cycles of pairing-friendly elliptic curves, an expensive algebraic object. Our methodology does not rely on any special algebraic objects and, moreover, achieves new desirable properties: it is *post-quantum* and it is *transparent* (the setup is public coin).
We exploit the fact that recursive composition is simpler for SNARKs with *preprocessing*, and the core of our work is obtaining a preprocessing zkSNARK for rank-1 constraint satisfiability (R1CS) that is post-quantum and transparent. We obtain the latter by establishing a connection between holography and preprocessing in the random oracle model, and then constructing a holographic proof for R1CS.
We experimentally validate our methodology, demonstrating feasibility in practice.
Henry Corrigan-Gibbs, Dmitry Kogan
We present the first protocols for private information retrieval that allow fast (sublinear-time) database lookups without increasing the server-side storage requirements. To achieve these efficiency goals, our protocols work in an offline/online model. In an offline phase, which takes place before the client has decided which database bit it wants to read, the client fetches a short string from the servers. In a subsequent online phase, the client can privately retrieve its desired bit of the database by making a second query to the servers. By pushing the bulk of the server-side computation into the offline phase (which is independent of the client's query), our protocols allow the online phase to complete very quickly, in time sublinear in the size of the database. Our protocols can provide statistical security in the two-server setting and computational security in the single-server setting. Finally, we prove that our protocols are optimal in terms of the trade-off they achieve between communication and running time.
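For intuition, here is a toy sketch of the two-server offline/online idea under statistical security (privacy-critical details, such as resampling the punctured set so the online server learns nothing, are deliberately omitted):

```python
# Toy offline/online PIR sketch: the client buys parities ("hints") of
# random sets offline, then answers a query by puncturing one set.
import random

DB = [random.randrange(2) for _ in range(64)]  # hypothetical bit database
N = len(DB)

def offline_phase(num_hints=200, set_size=8):
    """Server 1 streams the DB once; the client keeps only (set, parity)."""
    hints = []
    for _ in range(num_hints):
        S = set(random.sample(range(N), set_size))
        hints.append((S, sum(DB[j] for j in S) % 2))
    return hints

def online_phase(hints, i):
    """To read bit i: find a hint set containing i, ask server 2 for the
    parity of S minus {i}, and XOR the two parities."""
    for S, parity in hints:
        if i in S:
            answer = sum(DB[j] for j in S - {i}) % 2  # server 2's work
            return parity ^ answer
    return None  # no covering set; a real scheme handles this case

hints = offline_phase()
assert online_phase(hints, 13) in (DB[13], None)
```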
Dirk Thatmann
This work shows all calculations necessary to extend the ``Practical Attribute Based Encryption: Traitor Tracing, Revocation, and Large Universe'' scheme of Liu and Wong with non-monotonic access structures. We ensure that the black-box traceability property is preserved.
Jan Camenisch, Stephan Krenn, Ralf Kuesters, Daniel Rausch
Proving the security of complex protocols is a crucial and very challenging task. A widely used approach for reasoning about such protocols in a modular way is universal composability. A perfect model for universal composability should provide a sound basis for formal proofs and be very flexible in order to allow for modeling a multitude of different protocols. It should also be easy to use, including useful design conventions for repetitive modeling aspects, such as corruption, parties, sessions, and subroutine relationships, such that protocol designers can focus on the core logic of their protocols.
While many models for universal composability exist, including the UC, GNUC, and IITM models, none of them has achieved this ideal goal yet. As a result, protocols cannot be modeled faithfully and/or using these models is a burden rather than a help, often even leading to underspecified protocols and formally incorrect proofs.
Given this dire state of affairs, the goal of this work is to provide a framework for universal composability which combines soundness, flexibility, and usability in an unmatched way. Developing such a security framework is a very difficult and delicate task, as the long history of frameworks for universal composability shows.
We build our framework, called iUC, on top of the IITM model, which already provides soundness and flexibility while lacking sufficient usability. At the core of iUC is a single simple template for specifying essentially arbitrary protocols in a convenient, formally precise, and flexible way. We illustrate the main features of our framework with example functionalities and realizations.
Nico Döttling, Sanjam Garg, Mohammad Hajiabadi, Kevin Liu, Giulio Malavolta
Trapdoor functions (TDFs) are one of the fundamental building blocks in cryptography. Studying the underlying assumptions and the efficiency of the resulting instantiations is therefore of both theoretical and practical interest.
In this work we improve the input-to-image rate of TDFs based on the Diffie-Hellman problem. Specifically, we present:
\begin{enumerate}
\item A rate-1 TDF from the computational Diffie-Hellman (CDH) assumption, improving the result of Garg, Gay, and Hajiabadi [EUROCRYPT 2019], which achieved linear-size outputs but with large constants. Our techniques combine non-binary alphabets and high-rate error-correcting codes over large fields.
\item A rate-1 deterministic public-key encryption satisfying block-source security from the decisional Diffie-Hellman (DDH) assumption. While this question was recently settled by Döttling et al. [CRYPTO 2019], our scheme is conceptually simpler and concretely more efficient. We demonstrate this fact by implementing our construction.
\end{enumerate}
Martin Brisfors, Sebastian Forsmark
Research on Side Channel Analysis (SCA) is very active and progressing at a fast pace. The idea of using Machine Learning (ML), and more recently Deep Learning (DL), to analyze SCA data has been explored extensively. One issue facing security researchers interested in contributing to this cause is the difficulty of getting started. While replicating previous works with open source code is not difficult, taking the next steps from there can be daunting. The presented open-source DLSCA tool is created to aid research on DL-based SCA and to help newcomers to DL get started. We hope it will contribute to investigating the strengths and limitations of ML-based SCA.
Keywords - Machine Learning, Side Channel Attack, Software Tool
Robi Pedersen, Osmanbey Uzunkol
We address the problem of speeding up isogeny computation for supersingular elliptic curves over finite fields using untrusted computational resources like third-party servers or cloud service providers (CSPs). We first propose new, efficient and secure delegation schemes. This especially enables resource-constrained devices (e.g. smart cards, RFID tags, tiny sensor nodes) to effectively deploy post-quantum isogeny-based cryptographic protocols. To the best of our knowledge, these new schemes are the first attempt to generalize the classical secure delegation schemes for group exponentiations and pairing computation to an isogeny-based post-quantum setting. Then, we apply these secure delegation subroutines to improve the performance of supersingular isogeny-based zero-knowledge proofs of identity. Our experimental results show that, at the 128-bit quantum-security level, the proving party only needs about 3% of the original protocol cost, while the verifying party's effort is fully reduced to comparison operations. Lastly, we also apply our delegation schemes to decrease the computational cost of the decryption step for the NIST post-quantum standardization candidate SIKE.
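For intuition about the classical delegation schemes for group exponentiation that the abstract generalizes (this sketch is for exponentiation, not isogenies, and assumes two non-colluding servers; all parameters are toy values):

```python
# Two-server blinded exponentiation: the secret exponent is split
# additively, so neither server sees it; the client only multiplies.
import random

p = 2**127 - 1                    # a Mersenne prime; the group is Z_p^*
g = 3
a = random.randrange(1, p - 1)    # client's secret exponent

a1 = random.randrange(1, p - 1)   # random share for server 1
a2 = (a - a1) % (p - 1)           # complementary share for server 2

r1 = pow(g, a1, p)                # heavy work, done by server 1
r2 = pow(g, a2, p)                # heavy work, done by server 2

assert (r1 * r2) % p == pow(g, a, p)   # client: one multiplication
```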
Yoshiki Abe, Mitsugu Iwamoto, Kazuo Ohta
A private PEZ protocol is a variant of secure multi-party computation performed using a (long) PEZ dispenser. The original paper by Balogh et al. presented a private PEZ protocol for computing an arbitrary function with $n$ inputs. This result is interesting, but to the best of our knowledge no follow-up work has been presented since then. We show herein that it is possible to drastically shorten the initial string (the sequence of candies filled in a PEZ dispenser) and the number of moves (a player pops out a specified number of candies in each move) if the function is symmetric. Concretely, the length of the initial string is reduced from $O((2^n)!)$ for general functions in Balogh et al.'s results to $O(n \cdot n!)$ for symmetric functions, and the $2^n$ moves for general functions are reduced to $n^2$ moves for symmetric functions. Our main idea is to utilize the recursive structure of symmetric functions to construct the protocol recursively. This idea originates from a new initial string we found for a private PEZ protocol for the three-input majority function, which differs from the string of the same length given by Balogh et al., who did not describe how they derived theirs.
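The gap is dramatic even for small $n$; the numbers below follow directly from the stated bounds:

```python
# Initial-string length and move count, general vs. symmetric functions.
from math import factorial

for n in (3, 4, 5):
    print(f"n={n}: (2^n)! = {factorial(2**n)}  vs  n*n! = {n * factorial(n)},"
          f"  moves: 2^n = {2**n} vs n^2 = {n * n}")
```

For $n = 3$ this is already $40320$ candies versus $18$, and the general bound grows astronomically from there.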
Joey Green, Tilo Burghardt, Elisabeth Oswald
Neural Networks have become a much studied approach in the recent literature on profiled side channel attacks: many articles examine their use and performance in profiled single-target DPA style attacks. This paper contributes to this ongoing discourse by taking a slightly different angle: we train networks for many intermediates of a typical AES implementation on an ARM Cortex-M0 processor, and compare their performance with classical profiling methods. Because the cost of finding good hyperparameters for networks is high, we demonstrate how to configure a network with a set of hyperparameters for a specific intermediate (SubBytes) that can also be used for learning the leakage of other intermediates. This is interesting because although we can't beat the no free lunch theorem (i.e. we find that different profiling methods excel on different intermediates), we can still get ``good value for money'' (i.e. reasonable classification results across many intermediates with reasonable profiling effort). To put the trained classifiers into side channel practice we integrate them not only into a standard profiled single-target attack on SubBytes, but also use them as part of a (multi-target) belief-propagation attack.
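As a sketch of such a profiling classifier (architecture and hyperparameters here are placeholders, not the paper's tuned configuration; trace length and data are synthetic), a network maps a power trace to a distribution over the 256 values of an intermediate such as a SubBytes output:

```python
# Minimal profiled-SCA classifier sketch in PyTorch (synthetic data).
import torch
import torch.nn as nn

TRACE_LEN = 700  # hypothetical samples per trace

model = nn.Sequential(
    nn.Linear(TRACE_LEN, 200), nn.ReLU(),
    nn.Linear(200, 200), nn.ReLU(),
    nn.Linear(200, 256),            # one logit per intermediate value
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Profiling phase: traces with known intermediate values (random here).
traces = torch.randn(1024, TRACE_LEN)
labels = torch.randint(0, 256, (1024,))
for _ in range(10):
    opt.zero_grad()
    loss_fn(model(traces), labels).backward()
    opt.step()

# Attack phase: per-trace softmax scores are the likelihoods that a
# single-target attack ranks, or that belief propagation combines.
probs = torch.softmax(model(traces[:1]), dim=1)
```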
Alex Lombardi, Vinod Vaikuntanathan, Thuy Duong Vuong
Middle-product learning with errors (MP-LWE) was recently introduced by Rosca, Sakzad, Steinfeld and Stehlé (CRYPTO 2017) as a way to combine the efficiency of Ring-LWE with the more robust security guarantees of plain LWE. While Ring-LWE is at the heart of efficient lattice-based cryptosystems, it involves the choice of an underlying ring which is essentially arbitrary. In other words, the effect of this choice on the security of Ring-LWE is poorly understood. On the other hand, Rosca et al. showed that a new LWE variant, called MP-LWE, is as secure as Polynomial-LWE (another variant of Ring-LWE) over any of a broad class of number fields. They also demonstrated the usefulness of MP-LWE by constructing an MP-LWE based public-key encryption scheme whose efficiency is comparable to Ring-LWE based public-key encryption.
In this work, we take this line of research further by showing how to construct Identity-Based Encryption (IBE) schemes that are secure under a variant of the MP-LWE assumption. Our IBE schemes match the efficiency of Ring-LWE based IBE, including a scheme in the random oracle model with keys and ciphertexts of size $\tilde{O}(n)$ (for $n$-bit identities).
We construct our IBE scheme following the lattice trapdoors paradigm of [Gentry, Peikert, and Vaikuntanathan, STOC'08]; our main technical contributions are introducing a new leftover hash lemma and instantiating a new variant of lattice trapdoors compatible with MP-LWE.
This work demonstrates that the efficiency/security tradeoff gains of MP-LWE can be extended beyond public-key encryption to more complex lattice-based primitives.
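For readers unfamiliar with the operation at the heart of MP-LWE, the middle product keeps only the $d$ central coefficients of a polynomial product (a minimal reference computation with toy parameters):

```python
# Middle product: of the da + db - 1 coefficients of a*b, keep the d
# in the middle (indices k .. k+d-1, where da + db - 1 = d + 2k).
def poly_mul(a, b, q):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

def middle_product(a, b, d, q):
    full = poly_mul(a, b, q)
    k = (len(full) - d) // 2
    assert len(full) == d + 2 * k    # degree bounds must match up
    return full[k:k + d]

q = 97
a = [3, 1, 4, 1]                     # da = 4
b = [2, 7, 1, 8, 2, 8]               # db = 6, so a*b has 9 coefficients
print(middle_product(a, b, 3, q))    # the 3 middle coefficients
```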
M. Sadegh Riazi, Kim Laine, Blake Pelton, Wei Dai
With the rapid increase in cloud computing, concerns surrounding data privacy, security, and confidentiality have also increased significantly. Not only are cloud providers susceptible to internal and external hacks, but in some scenarios data owners cannot outsource computation at all due to privacy laws such as GDPR, HIPAA, or CCPA. Fully Homomorphic Encryption (FHE) is a groundbreaking invention in cryptography that, unlike traditional cryptosystems, enables computation on encrypted data without ever decrypting it. However, the most critical obstacle to deploying FHE at large scale is its enormous computation overhead.
In this paper, we present HEAX, a novel hardware architecture for FHE that achieves unprecedented performance improvement. HEAX leverages multiple levels of parallelism, ranging from ciphertext-level to fine-grained modular arithmetic level. Our first contribution is a new highly parallelizable architecture for the number-theoretic transform (NTT), which can be of independent interest as the NTT is frequently used in many lattice-based cryptosystems. Building on top of the NTT engine, we design a novel architecture for computation on homomorphically encrypted data. We also introduce several techniques to enable an end-to-end, fully pipelined design as well as to reduce on-chip memory consumption. Our implementation on reconfigurable hardware demonstrates a 164-268× performance improvement for a wide range of FHE parameters.
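For context, here is a compact software reference for the NTT kernel that HEAX accelerates (toy parameters; this is the textbook radix-2 algorithm, not HEAX's pipeline). The butterflies within each recursion level are mutually independent, which is exactly the parallelism a hardware engine exploits:

```python
# Recursive radix-2 NTT over Z_q and its inverse (toy parameters).
def ntt(a, omega, q):
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % q, q)
    odd = ntt(a[1::2], omega * omega % q, q)
    out, w = [0] * n, 1
    for i in range(n // 2):              # independent butterflies
        t = w * odd[i] % q
        out[i] = (even[i] + t) % q
        out[i + n // 2] = (even[i] - t) % q
        w = w * omega % q
    return out

def intt(a, omega, q):
    inv_n = pow(len(a), -1, q)
    return [x * inv_n % q for x in ntt(a, pow(omega, -1, q), q)]

q, omega = 17, 2                         # 2 has multiplicative order 8 mod 17
a = [1, 2, 3, 4, 5, 6, 7, 8]
assert intt(ntt(a, omega, q), omega, q) == a
```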
21 September 2019
Karim Baghery
A commitment scheme allows a committer to create a commitment to a secret value, and later open and reveal the secret value in a verifiable manner. In the common reference string model, commitment schemes require a setup phase which is supposed to be done by a trusted third party or a distributed authority. In the last few years, various reports have surfaced about subversion of the $\textit{trusted}$ setup phase in mass-surveillance activities; concerning commitment schemes specifically, it was recently discovered that the SwissPost-Scytl mix-net uses a trapdoor commitment scheme, which allows anyone who knows the trapdoor to undetectably alter votes [Hae19, LPT19]. Motivated by such news and by recent studies on the subversion-resistance of various cryptographic primitives, this work studies the security of commitment schemes in the presence of a maliciously chosen public commitment key. To attain a clear understanding of achievable security, we present a variation of the current definitions, called subversion hiding, subversion equivocality, and subversion binding. We then provide both negative and positive results on constructing subversion-resistant commitment schemes: we show that some combinations of notions are incompatible, while presenting subversion-resistant constructions achieving other combinations.
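To see why a maliciously chosen commitment key is dangerous, consider a Pedersen-style commitment (toy parameters below; this illustrates the general trapdoor issue, not the SwissPost-Scytl code): whoever generated the key $(g, h = g^t)$ and kept $t$ can open any commitment to any value.

```python
# Pedersen-style equivocation with a subverted key (toy group).
p, q = 1019, 509       # q-order subgroup of Z_p^*, both prime
g = 4                  # a square mod p, hence of order q
t = 123                # trapdoor retained by the subverted setup
h = pow(g, t, p)       # (g, h) looks like an honest commitment key

def commit(m, r):
    return pow(g, m, p) * pow(h, r, p) % p

m, r = 42, 77
c = commit(m, r)

m2 = 13                                   # open c to a different message
r2 = (r + (m - m2) * pow(t, -1, q)) % q
assert commit(m2, r2) == c                # the forged opening verifies
```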
Julia Hesse
Password-Authenticated Key Exchange (PAKE) is a method to establish cryptographic keys between two users sharing a low-entropy password. In its asymmetric version, one of the users acts as a server and only stores some function of the password, e.g., a hash H(pw). Upon server compromise, the adversary learns H(pw). Depending on the strength of the password, the attacker now has to invest more or less work to reconstruct pw from H(pw). Intuitively, asymmetric PAKE seems more challenging than standard (symmetric) PAKE since the latter is not supposed to protect the password upon compromise.
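A minimal sketch of what server compromise buys the attacker: recovering pw from H(pw) is an offline dictionary search whose cost depends only on password strength (the hash choice and dictionary below are illustrative):

```python
# Offline dictionary attack against a stolen password hash.
import hashlib

H = lambda pw: hashlib.sha256(pw.encode()).hexdigest()

stolen = H("hunter2")  # what a server compromise reveals

dictionary = ["password", "123456", "letmein", "hunter2"]
recovered = next((pw for pw in dictionary if H(pw) == stolen), None)
print(recovered)       # 'hunter2'; no further interaction needed
```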
In this paper, we provide three contributions:
* Separating standard and asymmetric PAKE. We prove that a strong assumption like a programmable random oracle is necessary to achieve security of asymmetric PAKE in the Universal Composability (UC) framework. For standard PAKE, programmability is not required. Our results thus give the first formal evidence that, in the UC model, asymmetric PAKE is indeed harder to achieve than standard PAKE.
* Revising the security definition. We identify and close a gap in the UC security definition of 2-party asymmetric PAKE given by Gentry, MacKenzie and Ramzan (Crypto 2006). For this, we specify a natural corruption model for server compromise attacks. We further remove an undesirable weakness that lets parties wrongly believe in security of compromised session keys. We demonstrate usefulness by proving that the $\Omega$-protocol proposed by Gentry et al. satisfies our new security notion for aPAKE.
* Composable multi-party aPAKE. We demonstrate that reliance on a programmable random oracle hinders the construction of multi-party aPAKE protocols from 2-party protocols via UC composition. Namely, the resulting protocols offer such strong security guarantees that they become impractical in any application. We provide guidance on how to relax composable security notions for multi-party aPAKE to obtain useful protocols.
Behzad Abdolmaleki, Hamidreza Khoshakhlagh, Daniel Slamanig
Hash proof systems or smooth projective hash functions (SPHFs) were proposed by Cramer and Shoup (Eurocrypt'02) and can be seen as a special type of zero-knowledge proof system for a language. While initially used to build efficient chosen-ciphertext secure public-key encryption, they have found numerous applications in several other contexts. In this paper, we revisit the notion of SPHFs and introduce a new feature (a third mode of hashing) that allows computing the hash value of an SPHF without access to either the witness or the hashing key, given only some additional auxiliary information. We call this new type publicly computable SPHFs (PC-SPHFs) and present a formal framework along with concrete instantiations from a large class of SPHFs.
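As a reminder of the two classical hashing modes that PC-SPHFs extend with a third, public one, here is a minimal SPHF for the DDH language $\{(g^r, h^r)\}$ (toy parameters; in particular $h$ should be sampled independently in a real instantiation):

```python
# Two-mode SPHF sketch: hash via the hashing key, or via the witness
# plus the projection key; on words in the language they agree.
import random

p, q = 1019, 509                  # toy prime-order subgroup of Z_p^*
g, h = 4, 16                      # two subgroup elements (toy choice)

alpha, beta = random.randrange(q), random.randrange(q)   # hashing key
gamma = pow(g, alpha, p) * pow(h, beta, p) % p           # projection key

r = random.randrange(q)                                  # witness
u, v = pow(g, r, p), pow(h, r, p)                        # word (g^r, h^r)

via_hashing_key = pow(u, alpha, p) * pow(v, beta, p) % p
via_witness = pow(gamma, r, p)
assert via_hashing_key == via_witness
```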
We then show that this new tool generically leads to commitment schemes that are secure against adaptive adversaries (assuming erasures) in the Universal Composability (UC) framework, yielding the first UC-secure commitments built from a single SPHF instance. Instantiating our PC-SPHF with an SPHF for labeled Cramer-Shoup encryption gives the currently most efficient non-interactive UC-secure commitment. Finally, we also discuss additional applications to information retrieval based on anonymous credentials that is UC-secure against adaptive adversaries.
Noga Ron-Zewi, Ron D. Rothblum
Interactive oracle proofs (IOPs) are a hybrid between interactive proofs and PCPs. In an IOP the prover is allowed to interact with a verifier (like in an interactive proof) by sending relatively long messages to the verifier, who in turn is only allowed to query a few of the bits that were sent (like in a PCP).
In this work we construct, for a large class of NP relations, IOPs in which the communication complexity approaches the witness length. More precisely, for any NP relation for which membership can be decided in polynomial-time and bounded polynomial space (e.g., SAT, Hamiltonicity, Clique, Vertex-Cover, etc.) and for any constant $\gamma>0$, we construct an IOP with communication complexity $(1+\gamma) \cdot n$, where $n$ is the original witness length. The number of rounds as well as the number of queries made by the IOP verifier are constant.
This result improves over prior works on short IOPs/PCPs in two ways. First, the communication complexity in these short IOPs is proportional to the complexity of verifying the NP witness, which can be polynomially larger than the witness size. Second, even ignoring the difference between witness length and non-deterministic verification time, prior works incur (at the very least) a large constant multiplicative overhead to the communication complexity.
In particular, as a special case, we also obtain an IOP for Circuit-SAT with rate approaching 1: the communication complexity is $(1+\gamma) \cdot t$, for circuits of size $t$ and any constant $\gamma>0$. This improves upon the prior state-of-the-art work of Ben-Sasson et al. (ICALP 2017), who construct an IOP for Circuit-SAT with rate that is a small (unspecified) constant bounded away from 0.
Our proof leverages recent constructions of high-rate locally testable tensor codes. In particular, we bypass the barrier imposed by the low rate of multiplication codes (e.g., Reed-Solomon, Reed-Muller or AG codes) - a core component in all known short PCP/IOP constructions.
Ulrich Haböck, Stephan Krenn
In an attribute-based credential (ABC) system, users obtain a digital certificate on their personal attributes, and can later prove possession of such a certificate in an unlinkable way, thereby selectively disclosing chosen attributes to the service provider. Recently, the concept of encrypted ABCs (EABCs) was introduced by Krenn et al. at CANS 2017, where virtually all computation is outsourced to a semi-trusted cloud provider called a wallet, thereby overcoming existing efficiency limitations on the user's side, and for the first time enabling privacy-preserving identity management as a service.
While their approach is highly relevant for bringing ABCs into the real world, we present a simple attack fully breaking the privacy of their construction if the wallet colludes with other users, a scenario which is not excluded in their analysis and needs to be considered in any realistic modeling. We then revise the construction of Krenn et al. in various ways, such that the above attack is no longer possible. Furthermore, we also remove existing non-collusion assumptions between the wallet and the service provider or issuer from their construction. Our protocols are still highly efficient in the sense that the computational effort on the end user's side consists of a single exponentiation only, and otherwise efficiency is comparable to the original work of Krenn et al.
Barcelona, Spain, 10 June - 12 June 2020
Event date: 10 June to 12 June 2020
Submission deadline: 10 February 2020
Notification: 10 April 2020
18 September 2019
The 2019 TCC Test-of-Time Award goes to Paul Valiant, for his TCC 2008 paper "Incrementally Verifiable Computation or Proofs of Knowledge Imply Time/Space Efficiency".
The award committee recognizes this paper "for demonstrating the power of recursive composition of proofs of knowledge and enabling the development of efficiently verifiable proofs of correctness for complex computations."
The TCC Test of Time Award recognizes outstanding papers, published in TCC at least eight years ago, that have made a significant contribution to the theory of cryptography, preferably with influence in other areas of cryptography, theory, and beyond. The inaugural TCC Test of Time Award was given at TCC 2015 for papers published no later than TCC 2007.
Daniele Cozzo, Nigel P. Smart
We examine all of the signature submissions to Round 2 of the NIST PQC ``competition'' in the context of whether one can transform them into threshold signature schemes in a relatively straightforward manner. We conclude that all schemes, except the ones in the MQ family, have significant issues when one wishes to convert them using relatively generic MPC techniques. The lattice-based schemes are hampered by requiring a mix of operations suited to both linear secret sharing scheme (LSSS)-based and garbled circuit (GC)-based MPC techniques (thus requiring costly transfers between the two paradigms). The Picnic and SPHINCS+ algorithms are hampered by the need to compute a large number of hash function queries on secret data. Of the nine submissions, the two which appear most suitable for use in a threshold manner are Rainbow and LUOV, with LUOV requiring fewer rounds and less data storage.
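For illustration, the LSSS ingredient mentioned above is typified by Shamir secret sharing, under which linear operations on a shared signing key are cheap; a toy 2-of-3 sketch (field size and threshold are arbitrary choices, not tied to any submission):

```python
# Shamir secret sharing over a prime field: share and reconstruct.
import random

q = 2**31 - 1  # toy prime field

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(q) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q)
            for x in range(1, n + 1)]

def reconstruct(shares):
    total = 0
    for x_i, y_i in shares:
        lam = 1                              # Lagrange coefficient at 0
        for x_j, _ in shares:
            if x_j != x_i:
                lam = lam * x_j * pow(x_j - x_i, -1, q) % q
        total = (total + y_i * lam) % q
    return total

shares = share(12345, t=2, n=3)
assert reconstruct(shares[:2]) == 12345     # any 2 of 3 suffice
```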