IACR News
Here you can see all recent updates to the IACR webpage.
12 July 2020
Yu Yu, Jiang Zhang
ePrint Report
Learning parity with noise (LPN) is a notorious (average-case) hard problem that has been well studied in learning theory, coding theory and cryptography since the early 1990s. It further inspired the Learning with Errors (LWE) problem [Regev, STOC 2005], which has become one of the central building blocks for post-quantum cryptography and for advanced cryptographic primitives such as fully homomorphic encryption [Gentry, STOC 2009]. Unlike LWE, whose hardness can be reduced from worst-case lattice problems, no corresponding worst-case hardness results were known for LPN until very recently. At Eurocrypt 2019, Brakerski et al. [BLVW19] established the first feasibility result: the worst-case hardness of the nearest codeword problem (NCP) on balanced linear codes at the extremely low noise rate $\frac{\log^2 n}{n}$ implies the quasi-polynomial hardness of LPN at the extremely high noise rate $1/2 - 1/\mathrm{poly}(n)$. It remained open whether a worst-case to average-case reduction can be established for standard (constant-noise) LPN, ideally with sub-exponential hardness.
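For context (a standard formulation summarized here, not the paper's formal statement), search LPN with secret length $n$ and noise rate $\mu$ asks to recover the secret $\mathbf{s} \in \{0,1\}^n$ from $q$ noisy inner products:
$$ \big(\mathbf{a}_i,\; \langle \mathbf{a}_i, \mathbf{s}\rangle \oplus e_i\big), \qquad \mathbf{a}_i \leftarrow \{0,1\}^n \text{ uniform}, \quad e_i \leftarrow \mathrm{Ber}_\mu, \quad i = 1, \ldots, q. $$
Constant-noise LPN fixes $\mu$ to a constant (e.g., $\mu = 1/8$), while the low-noise regime takes $\mu = o(1)$.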
In this paper, we carry on the worst-case to average-case reduction for LPN of [BLVW19]. We first expand the underlying binary linear codes (of the worst-case NCP) beyond the balanced codes considered in [BLVW19] to another code family (in some sense dual to balanced codes). At the core of our reduction is a new variant of the smoothing lemma (for both families of binary codes) that circumvents the barriers (inherent in the underlying worst-case randomness extraction) and admits tradeoffs for a wider spectrum of parameter choices. In addition to the worst-case hardness result obtained in [BLVW19], we show that for any constant $0<c<1$ the constant-noise LPN problem is $(T=2^{\Omega(n^{1-c})}, \epsilon=2^{-\Omega(n^{\min(c,1-c)})}, q=2^{\Omega(n^{\min(c,1-c)})})$-hard, assuming that the NCP (on either code family) at the low noise rate $\tau=n^{-c}$ is $(T'={2^{\Omega(\tau n)}}$, $\epsilon'={2^{-\Omega(\tau n)}}$, $m={2^{\Omega(\tau n)}})$-hard in the worst case, where $T$, $\epsilon$, $q$ and $m$ are time complexity, success rate, sample complexity, and codeword length, respectively. Moreover, refuting the worst-case hardness assumption would imply arbitrary polynomial speedups over the current state-of-the-art algorithms for solving the NCP (and LPN), which is a win-win result. Unfortunately, public-key encryption and collision-resistant hash functions would need constant-noise LPN with $(T={2^{\omega(\sqrt{n})}}$, $\epsilon={2^{-\omega(\sqrt{n})}}$, $q={2^{\sqrt{n}}})$-hardness (Yu et al., CRYPTO 2016 \& ASIACRYPT 2019), which is almost (up to an arbitrary $\omega(1)$ factor in the exponent) what is reducible from the worst-case NCP when $c = 0.5$. We leave it as an open problem whether this gap can be closed or whether an inherent separation exists.
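To make the comparison concrete, instantiating the reduction at $c = 0.5$ (so that $\min(c, 1-c) = 0.5$) yields constant-noise LPN that is
$$ \big(T = 2^{\Omega(\sqrt{n})},\; \epsilon = 2^{-\Omega(\sqrt{n})},\; q = 2^{\Omega(\sqrt{n})}\big)\text{-hard}, $$
assuming the worst-case NCP at noise rate $\tau = n^{-0.5}$ is $2^{\Omega(\sqrt{n})}$-hard; the $\omega(1)$-factor gap in the exponent relative to the $2^{\omega(\sqrt{n})}$-hardness needed for public-key encryption is exactly the gap left open above.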
Thomas Debris-Alazard, Léo Ducas, Wessel P.J. van Woerden
ePrint Report
In this article, we propose an adaptation of the algorithmic reduction theory of lattices to binary codes. This includes the celebrated LLL algorithm (Lenstra, Lenstra, Lovász, 1982), as well as adaptations of associated algorithms such as the Nearest Plane Algorithm of Babai (1986). Interestingly, the adaptation of LLL to binary codes can be interpreted as an algorithmic version of the bound of Griesmer (1960) on the minimum distance of a code.
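For reference, the Griesmer bound states that every binary linear $[n, k, d]$ code must satisfy
$$ n \;\ge\; \sum_{i=0}^{k-1} \left\lceil \frac{d}{2^{i}} \right\rceil, $$
which is the bound that the code analogue of LLL realizes algorithmically.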
Using these algorithms, we demonstrate, both with a heuristic analysis and in practice, a small polynomial speed-up over the Information-Set Decoding algorithm of Lee and Brickell (1988) for random binary codes. This appears to be the first such speed-up that is not based on a time-memory trade-off.
The above speed-up should be read as a very preliminary example of the potential of a reduction theory for codes, for instance in cryptanalysis. In constructive cryptography, this algorithmic reduction theory could also be helpful for designing trapdoor functions from codes.
Kostis Karantias
ePrint Report
The primary function of a cryptocurrency is money transfer between individuals. The wallet is the software that facilitates such transfers. Wallets are nowadays ubiquitous in the cryptocurrency space and a cryptocurrency is usually supported by many wallets. Despite that, the functionality of wallets has never been formally defined. Additionally, the mechanisms employed by the many wallets in the wild remain hidden in their respective codebases.
In this work we provide the first definition of a cryptocurrency wallet, which we model as a client to a server, or set of servers. We classify wallets into various categories, based on whether they work for transparent or private cryptocurrencies, what trust assumptions they require, their performance, and their communication overhead. For each type of wallet we provide a description of its client and server protocols. Additionally, we explore superlight wallets and describe how they differ from the superlight clients that have appeared in recent literature. We demonstrate how new wallet protocols can be produced by combining concepts from existing protocols. Finally, we evaluate the performance and security characteristics of all wallet protocols and compare them.
Ping Wang, Ping Chen, Zhimin Luo, Gaofeng Dong, Mengce Zheng, Nenghai Yu, Honggang Hu
ePrint Report
Recently, many profiling side-channel attacks based on machine learning and deep learning have been proposed. Most of them focus on reducing the number of traces required for a successful attack by optimizing the modeling algorithms. Previous work assumes that sufficiently many traces are available for training a model. However, in a practical profiling phase, it is difficult or impossible to collect sufficient traces due to various resource constraints. In this case, profiling attacks perform poorly even when suitable modeling algorithms are used.
In this paper, the main problem we consider is how to conduct more efficient profiling attacks when sufficient profiling traces cannot be obtained. To deal with this problem, we first introduce the Conditional Generative Adversarial Network (CGAN) in the context of side-channel attacks. We show that a CGAN can generate new traces to enlarge the profiling set, which improves the performance of profiling attacks. For both unprotected and protected cryptographic algorithms, we find that the CGAN can effectively learn the leakage of traces collected from their implementations. We also apply it to different modeling algorithms. In our experiments, the model trained on the augmented profiling set reduces the number of required attack traces by more than half, which indicates that the generated traces provide information comparable to that of real traces.
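As a rough illustration of the augmentation step (a minimal, hypothetical PyTorch sketch; the architecture, the trace length of 700 samples, and the 256 key-byte classes are our own illustrative assumptions, not the paper's actual CGAN):

import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    # Toy conditional generator: (noise, key-byte label) -> synthetic trace.
    def __init__(self, noise_dim=64, num_classes=256, trace_len=700):
        super().__init__()
        self.embed = nn.Embedding(num_classes, 32)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + 32, 256), nn.ReLU(),
            nn.Linear(256, trace_len),
        )

    def forward(self, z, labels):
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

# After adversarial training against a matching conditional discriminator
# (omitted here), the generator is used to enlarge the profiling set:
gen = ConditionalGenerator()
z = torch.randn(1000, 64)
labels = torch.randint(0, 256, (1000,))
synthetic_traces = gen(z, labels)  # shape: (1000, 700)
# profiling_set = real_traces + synthetic_traces, then train the profiling model as usual.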
Markku-Juhani O. Saarinen, G. Richard Newell, Ben Marshall
ePrint Report
The currently proposed RISC-V True Random Number Generator (TRNG) architecture breaks with previous ISA TRNG practice by splitting the Entropy Source (ES) component away from cryptographic PRNGs into a separate interface, and by its use of polling. We describe the interface, its use in cryptography, and offer additional discussion, background, and rationale for various aspects of it. This design is informed by lessons learned from earlier mainstream ISAs, recently introduced SP 800-90B and FIPS 140-3 entropy audit requirements, AIS 31 and Common Criteria, current and emerging cryptographic needs such as post-quantum cryptography, and the goal of supporting a wide variety of RISC-V implementations and applications. Many of the architectural choices are a result of quantitative observations about random number generators in secure microcontrollers, the Linux kernel, and cryptographic libraries (OpenSSL). We further compare the architecture to some contemporary random number generators and describe a minimalistic TRNG reference implementation that uses the Entropy Source together with RISC-V AES instructions.
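As a rough illustration of the polling model (a toy Python simulation only; the status names, field widths, and encoding below are hypothetical stand-ins, not the interface defined by the RISC-V proposal):

import secrets

# Hypothetical status codes for a polled entropy-source register (illustrative only).
BIST, WAIT, ES16, DEAD = range(4)

def poll_entropy_source():
    # Stand-in for a single poll of the entropy-source interface; here we simply
    # simulate a healthy source returning 16 bits of raw noise per poll.
    return ES16, secrets.randbits(16)

def gather_raw_entropy(bits_needed=256):
    # Accumulate raw entropy by polling. A real driver would feed this pool into
    # a cryptographic conditioner / DRBG rather than using it directly.
    pool, collected = bytearray(), 0
    while collected < bits_needed:
        status, data = poll_entropy_source()
        if status == ES16:
            pool += data.to_bytes(2, "big")
            collected += 16
        elif status == DEAD:
            raise RuntimeError("entropy source reports permanent failure")
        # On BIST or WAIT the caller simply retries (possibly with back-off).
    return bytes(pool)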
Vlasis Koutsos, Dimitrios Papadopoulos, Dimitris Chatzopoulos, Sasu Tarkoma, Pan Hui
ePrint Report
We propose Agora, the first blockchain-based data marketplace that enables multiple privacy-concerned parties to get compensated for contributing and exchanging data, without relying on a trusted third party during the exchange. Agora achieves data privacy, output verifiability, and atomicity of payments by leveraging cryptographic techniques, and is designed as a decentralized application via smart contracts. Particularly, data generators provide encrypted data to data brokers who use a functional secret key to learn nothing but the output of a specific, agreed upon, function over the raw data. Data consumers can purchase decrypted outputs from the brokers, accompanied by corresponding proofs of correctness. We implement a working prototype of Agora on Ethereum and experimentally evaluate its performance and deployment costs. As a core building block of Agora, we propose a new functional encryption scheme with additional public parameters that operate as a trust anchor for verifying decrypted results.
Ferhat Karakoç, Alptekin Küpçü
ePrint Report
In this paper, we propose a new private set intersection (PSI) protocol that computes the following functionality. The two parties ($P_1$ and $P_2$) input two sets of items ($X$ and $Y$, respectively) and one of the parties outputs a function of the intersection ($f(X \cap Y)$). This functionality is generally required when the PSI protocol is used as part of a larger secure two-party computation. Pinkas et al. presented a PSI protocol for this functionality at Eurocrypt 2019, whose complexity is linear in communication only. While there are PSI protocols with linear computation and communication complexities in the classical PSI setting, where the intersection itself is revealed to one party, to the best of our knowledge there is no PSI protocol that outputs a function of the intersection and achieves linear complexity in both communication and computation. We present the first PSI protocol that outputs only a function of the intersection with linear communication and computation complexities. As a side contribution, we construct a one-time oblivious programmable pseudo-random function based on garbled Bloom filters. We also implemented our protocol and provide performance results.
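The one-time oblivious programmable PRF is built on garbled Bloom filters; the sketch below shows only the underlying (non-oblivious) garbled Bloom filter data structure with our own illustrative parameters, not the paper's protocol:

import hashlib, secrets

LAMBDA = 16  # length of each table entry in bytes (illustrative)

def positions(item: bytes, k: int, m: int):
    # k index-hash functions derived from SHA-256; duplicates are dropped so
    # that XOR-ing an item's positions is well defined in this toy version.
    idxs = []
    for i in range(k):
        idx = int.from_bytes(hashlib.sha256(bytes([i]) + item).digest(), "big") % m
        if idx not in idxs:
            idxs.append(idx)
    return idxs

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def build_gbf(mapping, m=200, k=3):
    # mapping: {item (bytes): programmed value (LAMBDA bytes)}.
    table = [None] * m
    for item, value in mapping.items():
        idxs = positions(item, k, m)
        free = [i for i in idxs if table[i] is None]
        if not free:
            raise RuntimeError("insertion failed; enlarge m")
        target = free.pop()          # this slot absorbs the remaining XOR share
        acc = value
        for i in idxs:
            if i == target:
                continue
            if table[i] is None:
                table[i] = secrets.token_bytes(LAMBDA)
            acc = xor(acc, table[i])
        table[target] = acc
    # Fill unused slots with random strings so the table reveals nothing extra.
    return [s if s is not None else secrets.token_bytes(LAMBDA) for s in table]

def query_gbf(table, item, k=3):
    acc = bytes(LAMBDA)
    for i in positions(item, k, len(table)):
        acc = xor(acc, table[i])
    return acc  # equals the programmed value iff item was inserted (w.h.p.)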
Ran Canetti, Yael Tauman Kalai, Anna Lysyanskaya, Ronald L. Rivest, Adi Shamir, Emily Shen, Ari Trachtenberg, Mayank Varia, Daniel J. Weitzner
ePrint Report
Contact tracing is an essential component of public health efforts to slow the spread of COVID-19 and other infectious diseases. Automating parts of the contact tracing process has the potential to significantly increase its scalability and efficacy, but also raises an array of privacy concerns, including the risk of unwanted identification of infected individuals and clandestine collection of privacy-invasive data about the population at large.
In this paper, we focus on automating the exposure notification part of contact tracing, which notifies people who have been in close proximity to infected people of their potential exposure to the virus. This work is among the first to focus on the privacy aspects of automated exposure notification. We introduce two privacy-preserving exposure notification schemes based on proximity detection. Both systems are decentralized -- no central entity has access to sensitive data. The first scheme is simple and highly efficient, and provides strong privacy for non-diagnosed individuals and some privacy for diagnosed individuals. The second scheme provides enhanced privacy guarantees for diagnosed individuals, at some cost to efficiency. We provide formal definitions for automated exposure notification and its security, and we prove the security of our constructions with respect to these definitions.
Sarah Scheffler, Mayank Varia
ePrint Report
The information security community has devoted substantial effort to the design, development, and universal deployment of strong encryption schemes that withstand search and seizure by computationally-powerful nation-state adversaries. In response, governments are increasingly turning to a different tactic: issuing subpoenas that compel people to decrypt devices themselves, under the penalty of contempt of court if they do not comply. Compelled decryption subpoenas sidestep questions around government search powers that have dominated the Crypto Wars and instead touch upon a different (and still unsettled) area of the law: how encryption relates to a person's right to silence and against self-incrimination.
In this work, we provide a rigorous, composable definition of a critical piece of the law that determines whether cryptosystems are vulnerable to government compelled disclosure in the United States. We justify our definition by showing that it is consistent with prior court cases. We prove that decryption is often not compellable by the government under our definition. Conversely, we show that many techniques that bolster security overall can leave one more vulnerable to compelled disclosure.
As a result, we initiate the study of protecting cryptographic protocols against the threat of future compelled disclosure. We find that secure multi-party computation is particularly vulnerable to this threat, and we design and implement new schemes that are provably resilient in the face of government compelled disclosure. We believe this work should influence the design of future cryptographic primitives and contribute toward the legal debates over the constitutionality of compelled decryption.
Pedro Geraldo M. R. Alves, Jheyne N. Ortiz, Diego F. Aranha
ePrint Report
Privacy guarantees are still insufficient for outsourced data processing in the cloud. While encryption is feasible for data at rest or in transit, applying it during computation entails a considerable performance slowdown. Thus, handling data in plaintext during processing is still required, which creates vulnerabilities that can be exploited by malicious entities. Homomorphic encryption (HE) schemes are natural candidates for computation in the cloud, since they enable processing of ciphertexts without any knowledge about the related plaintexts or the decryption key. This work focuses on the challenge of developing an efficient implementation of the BFV HE scheme on CUDA. This is done by combining and adapting different approaches from the literature, namely the double-CRT representation and the Discrete Galois Transform (DGT). Moreover, we propose and implement an improved formulation of the DGT inspired by classical algorithms, which computes the transform up to $2.6$ times faster than the state of the art. By using these approaches, we obtain up to $3.6$ times faster homomorphic multiplication.
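As background for the double-CRT (residue number system) representation mentioned above (standard material, not specific to this paper), choosing the ciphertext modulus as a product of pairwise-coprime small moduli $q = \prod_{i=1}^{k} q_i$ lets every coefficient be stored by its residues,
$$ x \bmod q \;\longmapsto\; \big(x \bmod q_1,\; \ldots,\; x \bmod q_k\big), $$
so additions and multiplications act independently modulo each $q_i$, which maps naturally onto GPU parallelism.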
Yael Tauman Kalai, Rachel Zhang
ePrint Report
We construct a succinct non-interactive publicly-verifiable delegation scheme for any log-space uniform circuit under the sub-exponential $\mathsf{LWE}$ assumption, a standard assumption that is believed to be post-quantum secure. For a circuit of size $S$ and depth $D$, the prover runs in time poly$(S)$, and the verifier runs in time $(D + n) \cdot S^{o(1)}$, where $n$ is the input size. We obtain this result by slightly modifying the $\mathsf{GKR}$ protocol and proving that the Fiat-Shamir heuristic is sound when applied to this modified protocol. We build on the recent works of Canetti et al. (STOC 2019) and Peikert and Shiehian (Crypto 2020), which prove the soundness of the Fiat-Shamir heuristic when applied to a specific (non-succinct) zero-knowledge protocol.
As a corollary, by the work of Choudhuri et al. (STOC 2019), this implies that the complexity class $\mathsf{PPAD}$ is hard (on average) under the sub-exponential $\mathsf{LWE}$ assumption, assuming that $\mathsf{\#SAT}$ with $o(\log n \cdot \log\log n)$ variables is hard (on average).
Balthazar Bauer, Georg Fuchsbauer, Julian Loss
ePrint Report
We give a taxonomy of computational assumptions in the algebraic group model (AGM). We first analyze Boyen's Uber assumption family for bilinear groups and then extend it in several ways to cover assumptions as diverse as Gap Diffie-Hellman and LRSW. We show that in the AGM every member of these families is implied by the $q$-discrete logarithm (DL) assumption, for some $q$ that depends on the degrees of the polynomials defining the Uber assumption.
Using the meta-reduction technique, we then separate $(q+1)$-DL from $q$-DL, which yields a classification of all members of the extended Uber-assumption families. We finally show that there are strong assumptions, such as one-more DL, that provably fall outside our classification, by proving that they cannot be reduced from $q$-DL even in the AGM.
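For reference, the $q$-discrete-logarithm problem referred to above asks, in a cyclic group $\mathbb{G} = \langle g \rangle$ of prime order $p$ with $x \leftarrow \mathbb{Z}_p$:
$$ \text{given } \big(g,\; g^{x},\; g^{x^{2}},\; \ldots,\; g^{x^{q}}\big), \quad \text{compute } x. $$
For $q = 1$ this is the standard discrete logarithm problem.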
Gareth T. Davies, Christian Janson, Daniel P. Martin
ePrint Report
Oblivious Parallel RAM (OPRAM) enables multiple clients to synchronously make read and write accesses to shared memory (more generally, any data store) whilst hiding the access patterns from the owner/provider of that shared memory. Prior work is best suited to the setting of multiple processors (or cores) within a single client device, and consequently there are shortcomings when applying that work to the multi-client setting, where distinct client devices may not trust each other, or may simply wish, for legal or other reasons, to minimise the volume of data that is leaked to other client devices. In prior constructions, obliviousness from the storage provider is achieved by passing accesses between the clients in one or more sorting networks, both before and after the logical access is made to the shared memory: this process inherently leaks the contents of the accesses to those other clients.
In this paper we address this issue by introducing the notion of client obliviousness for OPRAM, which asks that clients should only learn as much as is necessary for the scheme to function correctly. We provide an instantiation using established tools, with careful analysis to show that our new notion and regular OPRAM security are met. This introduces several subtleties which were not previously apparent, and we further discuss the implications of using the OPRAM model in the context of outsourced storage.
Ivan Oleynikov, Elena Pagnin, Andrei Sabelfeld
ePrint Report
Location based services (LBS) extensively utilize proximity testing to help people discover nearby friends, devices, and services. Current practices rely on full trust in the service providers: users share their locations with the providers, who perform proximity testing on behalf of the users. Unfortunately, location data has often been breached by LBS providers, raising privacy concerns over the current practices. To address these concerns, previous research has suggested cryptographic protocols for privacy-preserving location proximity testing. Yet general and precise location proximity testing has remained out of reach for current research. A major roadblock has been the requirement, in much of the previous work, that both Alice and Bob be online for proximity testing between them. This requirement is not problematic for one-to-one proximity testing, but it does not generalize to one-to-many testing. Indeed, in settings like ridesharing, it is desirable to match against the ride preferences of all users, not only those who are currently online.
This paper proposes a novel privacy-preserving proximity testing protocol where, after providing some data about its location, one party can go offline (nap) during the proximity testing execution, without undermining user privacy. We thus break away from the limitation of much of the previous work, where the parties must be online and interact directly with each other to retain user privacy. Our basic protocol achieves privacy against semi-honest parties and can be upgraded to full security (against malicious parties) in a straightforward way using advanced cryptographic tools. Finally, we reduce the responding client's overhead from quadratic (in the proximity radius parameter) to constant, compared to previous research. Analysis and performance experiments with an implementation confirm our findings.
Olivier Sanders
ePrint Report
Group signatures are a major cryptographic tool allowing anonymous access to a service. However, in practice, access to a service is usually granted for some periods of time, which implies that signing rights must be deactivated the rest of the time. This requirement thus calls for complex forms of revocation, reminiscent of the concept of time-bound keys. However, schemes satisfying this concept are rare and only allow revocation with limited granularity: signing keys are associated with an expiry time and become definitively useless once that time has passed.
In this paper, we revisit the notion of group signatures with time-bound keys with several contributions. Firstly, we extend this notion to allow high-granularity revocation: a member's signing key can in particular be deactivated at some moments and then be automatically reinstated. Secondly, we show that this complex property is actually simple to achieve using redactable signatures. In particular, we consider in this context a recent redactable signature scheme from PKC 2020, which we improve by dramatically reducing the size of the public key. The resulting construction is of independent interest.
Vladimir Sedlacek, Jan Jancar, Petr Svenda
ePrint Report
We analyse whether the smartcards of the JavaCard platform correctly validate primality of domain parameters. The work is inspired by the paper "Prime and Prejudice: Primality Testing Under Adversarial Conditions", where the authors analysed many open-source libraries and constructed pseudoprimes fooling the primality testing functions. However, in the case of smartcards, there is often no way to invoke the primality test directly, so we trigger it by replacing (EC)DSA and (EC)DH prime domain parameters with adversarial composites. Such a replacement results in vulnerability to Pohlig-Hellman style attacks, leading to private key recovery.
Out of the nine smartcards (produced by five major manufacturers) we tested, all but one perform no primality test during parameter validation. As the JavaCard platform provides no public primality testing API, the problem cannot be fixed by an additional check before the parameters are passed to the existing (EC)DSA and (EC)DH functions, making it difficult to mitigate in already deployed smartcards.
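To illustrate why composite "prime" parameters are dangerous, here is a toy Python sketch of a Pohlig-Hellman style discrete-log recovery for a group whose order is smooth (a simplified textbook variant with brute-force subgroup search, not the attack code used in the paper):

from functools import reduce

def factorize(n):
    # Trial-division factorization: returns {prime: exponent}.
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def dlog_small(g, h, p, order):
    # Solve g^x = h (mod p) for 0 <= x < order by exhaustive search.
    cur = 1
    for x in range(order):
        if cur == h:
            return x
        cur = cur * g % p
    raise ValueError("no solution")

def crt(residues, moduli):
    # Combine x = r_i (mod m_i) into x mod prod(m_i).
    M = reduce(lambda a, b: a * b, moduli)
    return sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(residues, moduli)) % M

def pohlig_hellman(g, h, p, n):
    # n is the multiplicative order of g mod p, assumed smooth (small prime factors).
    residues, moduli = [], []
    for q, e in factorize(n).items():
        qe = q ** e
        gi, hi = pow(g, n // qe, p), pow(h, n // qe, p)  # project into order-qe subgroup
        residues.append(dlog_small(gi, hi, p, qe))
        moduli.append(qe)
    return crt(residues, moduli)

# Example: p = 101 is prime, g = 2 generates the full group of order 100 = 2^2 * 5^2.
assert pohlig_hellman(2, pow(2, 37, 101), 101, 100) == 37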
Angèle Bossuat, Xavier Bultel, Pierre-Alain Fouque, Cristina Onete, Thyla van der Merwe
ePrint Report
Reverse Firewalls (RFs) were introduced by Mironov and Stephens-Davidowitz to address algorithm-substitution attacks (ASAs), in which an adversary subverts the implementation of a provably-secure cryptographic primitive to make it insecure. This concept was applied by Dodis et al. in the context of secure key exchange (handshake phase), where the adversary wants to exfiltrate sensitive information by using a subverted client implementation. RFs are used as a means of "sanitizing" the client-side protocol in order to prevent this exfiltration. In this paper, we propose a new security model for both the handshake and record layers, a.k.a. the secure channel. We present a signed, Diffie-Hellman based secure channel protocol, and show how to design a provably-secure reverse firewall for it. Our model is stronger since the adversary has a larger attack surface, which makes the construction challenging. Our construction uses classical and off-the-shelf cryptography.
Marco Holz, Ágnes Kiss, Deevashwer Rathee, Thomas Schneider
ePrint Report
Private function evaluation (PFE) allows one to obliviously evaluate a private function on private inputs. PFE has several applications, such as privacy-preserving credit checking and user-specific insurance tariffs. Recently, PFE protocols based on universal circuits (UCs), which have an inherent superlinear overhead, have been investigated thoroughly. Specialized public-key-based protocols with linear complexity were believed to be less efficient than UC-based approaches.
In this paper, we take another look at the linear-complexity PFE protocol by Katz and Malka (ASIACRYPT'11): we propose several optimizations and split the protocol into phases that depend on the function and on the inputs, respectively. We show that HE-based PFE is practical when instantiated with state-of-the-art ECC and RLWE-based homomorphic encryption. Our most efficient implementation outperforms the most recent UC-based PFE implementation of Alhassan et al. (JoC'20) in communication for all circuit sizes, and in computation already for circuits of a few thousand gates.
Chelsea Komlo, Ian Goldberg
ePrint Report
Unlike signatures in a single-party setting, threshold signatures require cooperation among a threshold number of signers each holding a share of a common private key. Consequently, generating signatures in a threshold setting imposes overhead due to network rounds among signers, proving costly when secret shares are stored on network-limited devices or when coordination occurs over unreliable networks. In this work, we present FROST, a Flexible Round-Optimized Schnorr Threshold signature scheme that reduces network overhead during signing operations while employing a novel technique to protect against forgery attacks applicable to similar schemes in the literature. FROST improves upon the state of the art in Schnorr threshold signature protocols, as it can be safely used without limiting concurrency of signing operations yet allows for true threshold signing, as only a threshold number of participants are required for signing operations. FROST can be used as either a two-round protocol where signers send and receive two messages in total, or optimized to a single-round signing protocol with a pre-processing stage. FROST achieves its efficiency improvements in part by allowing the protocol to abort in the presence of a misbehaving participant (who is then identified and excluded from future operations)---a reasonable model for practical deployment scenarios. We present proofs of security demonstrating that FROST is secure against chosen-message attacks assuming the discrete logarithm problem is hard and the adversary controls fewer participants than the threshold.
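As background for the threshold property (a toy sketch of Shamir-style share reconstruction via Lagrange coefficients, the mechanism that lets any $t$ of $n$ signers act for the group; this is generic material, not FROST's actual signing algorithm, and the modulus below is an illustrative stand-in for the group order):

import secrets

# Toy Shamir secret sharing over a prime field.
P = 2**127 - 1  # illustrative prime modulus (not FROST's group order)

def lagrange_at_zero(i, ids, p=P):
    # lambda_i = prod_{j != i} j / (j - i)  (mod p), evaluated at x = 0.
    num, den = 1, 1
    for j in ids:
        if j != i:
            num = num * j % p
            den = den * (j - i) % p
    return num * pow(den, -1, p) % p

def share(secret, t, n, p=P):
    # Degree-(t-1) polynomial with constant term = secret; share i is f(i).
    coeffs = [secret] + [secrets.randbelow(p) for _ in range(t - 1)]
    return {i: sum(c * pow(i, k, p) for k, c in enumerate(coeffs)) % p
            for i in range(1, n + 1)}

def reconstruct(shares, p=P):
    # Any t shares recover the secret as f(0).
    ids = list(shares)
    return sum(lagrange_at_zero(i, ids, p) * shares[i] for i in ids) % p

sk = 123456789
shares = share(sk, t=2, n=3)
assert reconstruct({1: shares[1], 3: shares[3]}) == sk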
Erica Blum, Jonathan Katz, Chen-Da Liu-Zhang, Julian Loss
ePrint Report
Understanding the communication complexity of Byzantine agreement (BA) is a fundamental problem in distributed computing. In particular, as protocols are run with a large number of parties (as, e.g., in the context of blockchain protocols), it is important to understand the dependence of the communication on the number of parties $n$. Although adaptively secure BA protocols with $o(n^2)$ communication are known in the synchronous and partially synchronous settings, no such protocols are known in the fully asynchronous case.
We show here an asynchronous BA protocol with subquadratic communication tolerating an adaptive adversary who can corrupt $f<(1-\epsilon)n/3$ of the parties (for any $\epsilon>0$). One variant of our protocol assumes initial setup done by a trusted dealer, after which an unbounded number of BA executions can be run; alternately, we can achieve subquadratic \emph{amortized} communication with no prior setup. We also show that some form of setup is needed for (non-amortized) subquadratic BA tolerating $\Theta(n)$ corrupted parties.
As a contribution of independent interest, we show a secure-computation protocol in the same threat model that has $o(n^2)$ communication when computing no-input functionalities with short output (e.g., coin tossing).