International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

24 March 2023

Benny Applebaum, Eliran Kachlon, Arpita Patra
ePrint Report
In STOC 1989, Rabin and Ben-Or (RB) established an important milestone in the fields of cryptography and distributed computing by showing that every functionality can be computed with statistical (information-theoretic) security in the presence of an active (aka Byzantine) rushing adversary that controls up to half of the parties. We study the round complexity of general secure multiparty computation and several related tasks in the RB model.

Our main result shows that every functionality can be realized in only four rounds of interaction, which is known to be optimal. This completely settles the round complexity of statistical actively-secure optimally-resilient MPC, resolving a long line of research.

Along the way, we construct the first round-optimal statistically-secure verifiable secret sharing protocol (Chor, Goldwasser, Micali, and Awerbuch; FOCS 1985), show that every single-input functionality (e.g., multi-verifier zero-knowledge) can be realized in 3 rounds, and prove that the latter bound is optimal. The complexity of all our protocols is exponential in the number of parties, and the question of deriving polynomially-efficient protocols is left for future research.

Our main technical contribution is a construction of a new type of statistically-secure signature scheme whose existence was open even for smaller resiliency thresholds. We also describe a new statistical compiler that lifts passively-secure protocols to actively-secure protocols in a round-efficient way with the aid of protocols for single-input functionalities. This compiler can be viewed as a statistical variant of the GMW compiler (Goldreich, Micali, Wigderson; STOC 1987) that originally employed zero-knowledge proofs and public-key encryption.
Isaac A. Canales-Martínez, Igor Semaev
ePrint Report
Cryptanalysis of modern symmetric ciphers may be done by using linear equation systems with multiple right-hand sides, which describe the encryption process. The tool was introduced by Raddum and Semaev, who developed several solving methods. In this work, probabilities are ascribed to the right-hand sides and a statistical attack is then applied. The new approach is a multivariate generalisation of Siegenthaler's correlation attack. A fast version of the attack is also provided. It may be viewed as an extension of the fast correlation attack by Meier and Staffelbach, which exploits so-called parity-checks for linear recurrences. Parity-checks are a particular case of the relations introduced in the present work; the notion of a relation does not depend on linear recurrences. We show how to apply the method to some LFSR-based stream ciphers, including those from the Grain family. The new method generally requires fewer keystream bits to recover the initial state than other techniques reported in the literature.
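For a concrete point of reference, below is a minimal Python sketch of the classical Siegenthaler-style correlation attack that this work generalises: every candidate LFSR initial state is scored by its agreement with the observed noisy keystream. All parameters (register size, taps, noise rate) are toy values of our choosing, not from the paper.

```python
# Minimal sketch of a classical correlation attack (Siegenthaler-style),
# the single-variate special case of the multivariate attack described above.
# Toy parameters are hypothetical and chosen only for illustration.
import itertools
import random

TAPS = [0, 2]        # feedback taps of a toy 5-bit LFSR (hypothetical)
N = 5                # register length
KEYSTREAM_LEN = 200
NOISE = 0.3          # probability a keystream bit differs from the LFSR bit

def lfsr_output(state, length):
    """Generate `length` output bits from a Fibonacci LFSR with taps TAPS."""
    state = list(state)
    out = []
    for _ in range(length):
        out.append(state[0])
        fb = 0
        for t in TAPS:
            fb ^= state[t]
        state = state[1:] + [fb]
    return out

# Simulate a noisy keystream from a secret initial state.
random.seed(1)
secret = [1, 0, 1, 1, 0]
keystream = [b ^ (random.random() < NOISE) for b in lfsr_output(secret, KEYSTREAM_LEN)]

# Correlation attack: score every candidate initial state by agreement count.
def score(state):
    return sum(a == b for a, b in zip(lfsr_output(state, KEYSTREAM_LEN), keystream))

best = max((s for s in itertools.product([0, 1], repeat=N) if any(s)), key=score)
print("recovered:", list(best), "secret:", secret)
```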
Asaf Cohen, Paweł Cyprys, Shlomi Dolev
ePrint Report
Self-masking allows the masking of success criteria: the part of a problem instance (such as the sum in a subset-sum instance) that restricts the number of solutions. Self-masking is used to prevent the leakage of helpful information to attackers, while keeping the original solution valid and, at the same time, not increasing the number of unplanned solutions. Self-masking can be achieved by xoring the sums of two (or more) independent subset-sum instances [DD20, CDM22], thereby eliminating all known attacks that use the value of the sum of the subset to find the subset fast, namely in polynomial time, much faster than the naive exponential exhaustive search. We demonstrate that the concept of self-masking can be applied to a single instance of the subset sum and a single instance of the permuted secret-sharing polynomials. We further introduce the benefit of permuting the bits of the success criteria, avoiding leakage of information on the value of the $i$'th bit of the success criteria in the case of a single instance, or the parity of the $i$'th bit of the success criteria in the case of several instances. In the case of several instances, we permute the success criteria bits of each instance prior to xoring them with each other. One basic permutation and its nested versions (e.g., $\pi^i$) are used, keeping the solution space small and, at the same time, attempting to create an ``all or nothing'' effect, where the result of wrong $\pi$ trials does not reveal much.
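To make the xor-based self-masking concrete, here is a minimal Python sketch under our own toy parameters (all values hypothetical, not from the paper): the published success criterion is the bit-permuted XOR of the target sums of two independent subset-sum instances, rather than either sum itself.

```python
# Minimal sketch of self-masking for subset sum: publish the bit-permuted
# XOR of two instance sums instead of the sums themselves (toy values only).
import random

random.seed(0)
BITS = 32

def make_instance(n=16):
    """One subset-sum instance: weights, a secret solution, and its sum."""
    weights = [random.getrandbits(BITS) for _ in range(n)]
    solution = [random.randint(0, 1) for _ in range(n)]
    total = sum(w for w, s in zip(weights, solution) if s) % (1 << BITS)
    return weights, solution, total

def permute_bits(x, pi):
    """Apply permutation pi to the bit positions of x."""
    return sum(((x >> i) & 1) << pi[i] for i in range(BITS))

w1, sol1, t1 = make_instance()
w2, sol2, t2 = make_instance()

pi = list(range(BITS))
random.shuffle(pi)                             # the basic permutation

masked_criterion = permute_bits(t1 ^ t2, pi)   # published instead of t1, t2

# A verifier holding candidate solutions recomputes both sums and checks them
# only against the masked criterion; neither sum is ever revealed on its own.
s1 = sum(w for w, s in zip(w1, sol1) if s) % (1 << BITS)
s2 = sum(w for w, s in zip(w2, sol2) if s) % (1 << BITS)
assert permute_bits(s1 ^ s2, pi) == masked_criterion
print("masked criterion verified")
```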
Giovanni Deligios, Aarushi Goel, Chen-Da Liu-Zhang
ePrint Report
To overcome the limitations of traditional secure multi-party computation (MPC) protocols that consider a static set of participants, in a recent work, Choudhuri et al. [CRYPTO 2021] introduced a new model called Fluid MPC, which supports {\em dynamic} participants. Protocols in this model allow parties to join and leave the computation as they wish. Unfortunately, known fluid MPC protocols (even with a strong honest majority) either achieve only security with abort, or require strong computational and trusted-setup assumptions.

In this work, we consider the "hardest" setting --- called the maximally-fluid model --- where each party can leave the computation after participating in a single round. We study the problem of designing information-theoretic maximally-fluid MPC protocols that achieve security with guaranteed output delivery (without relying on trusted setup), and obtain the following main results:

(1) We design a perfectly secure maximally-fluid MPC protocol that achieves guaranteed output delivery against unbounded adversaries who are allowed to corrupt less than a third of the parties in every round/committee.

(2) We show that the corruption threshold in the above protocol is optimal. In particular, we prove that in fluid MPC, when the adversary can corrupt a third (or more) of the parties in any round, it is impossible to achieve information-theoretic security and guaranteed output delivery simultaneously --- even assuming a common random string (CRS) setup.

Additionally, for the case where the adversary is allowed to corrupt up to half of the parties in each committee, we present a new computationally secure maximally-fluid MPC protocol with guaranteed output delivery. Unlike prior works that require correlated setup and NIZKs, our construction only uses a common random string setup and is based on linearly-homomorphic equivocal commitments.
Guru-Vamsi Policharla, Bas Westerbaan, Armando Faz-Hernández, Christopher A Wood
ePrint Report
It is known that one can generically construct a post-quantum anonymous credential scheme, supporting the showing of arbitrary predicates on its attributes using general-purpose zero-knowledge proofs secure against quantum adversaries [Fischlin, CRYPTO 2006]. Traditionally, such a generic instantiation is thought to come with impractical sizes and performance. We show that with careful choices and optimizations, such a scheme can perform surprisingly well. In fact, it performs competitively against state-of-the-art post-quantum blind signatures, for the simpler problem of post-quantum unlinkable tokens, required for a post-quantum version of Privacy Pass.

To wit, a post-quantum Privacy Pass constructed in this way using zkDilithium, our proposal for a STARK-friendly variation on Dilithium2, allows for a trade-off between token size (85–175KB) and generation time (0.3–5s) with a proof security level of 115 bits. Verification of these tokens can be done in 20–30ms. We argue that these tokens are reasonably practical, adding less than a second upload time over traditional tokens, supported by a measurement study.

Finally, we point out a clear advantage of our approach: the flexibility afforded by the general-purpose zero-knowledge proofs. We demonstrate this by showing how we can construct a rate-limited variant of Privacy Pass that does not rely on non-collusion for privacy.
Miran Kim, Dongwon Lee, Jinyeong Seo, Yongsoo Song
ePrint Report
Lattice-based homomorphic encryption (HE) schemes are based on the noisy encryption technique, where plaintexts are masked with some random noise for security. Recent advanced HE schemes rely on a decomposition technique to manage the growth of noise, which involves a conversion of a ciphertext entry into a short vector followed by multiplication with an evaluation key. Prior to this work, the decomposition procedure was the most time-consuming part, as it requires discrete Fourier transforms (DFTs) over the base ring for efficient polynomial arithmetic. In this paper, the expensive decomposition operation over a large modulus is replaced with relatively cheap operations over a ring of integers with a small bound. Notably, the cost of DFTs is reduced from quadratic to linear in the level of a ciphertext without any extra noise growth. We demonstrate the implication of our approach by applying it to the key-switching procedure. Our experiments show that the new key-switching method achieves a speedup of 1.2--2.3 or 2.1--3.3 times over the previous method when the dimension of the base ring is $2^{15}$ or $2^{16}$, respectively.
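For intuition, the decomposition in question is a gadget-style base-$B$ digit decomposition: a large ciphertext entry is split into short digits so that multiplication with the evaluation key involves only small values. Below is a minimal Python sketch with toy parameters of our choosing; real schemes decompose ring elements (polynomials), and this is not the authors' optimized procedure.

```python
# Minimal sketch of base-B (gadget) decomposition used in HE key switching:
# a large coefficient a (mod q) is written as short digits d_i with
# a = sum_i d_i * B^i, so the evaluation key is multiplied by small entries.
# Toy parameters; real schemes decompose polynomials, not single integers.

B = 1 << 8      # decomposition base (hypothetical)
q = 1 << 32     # ciphertext modulus (hypothetical)
LEVELS = 4      # number of digits: B**LEVELS >= q

def gadget_decompose(a):
    """Return the base-B digits of a, least significant first."""
    digits = []
    for _ in range(LEVELS):
        digits.append(a % B)
        a //= B
    return digits

def gadget_recompose(digits):
    return sum(d * B**i for i, d in enumerate(digits)) % q

a = 0xDEADBEEF
digits = gadget_decompose(a)
assert gadget_recompose(digits) == a
assert all(0 <= d < B for d in digits)   # every digit is short
print("digits:", [hex(d) for d in digits])
```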
Keita Emura
ePrint Report
Forward security is a fundamental requirement in searchable encryption, where a newly generated ciphertext is not allowed to be searched by previously generated trapdoors. However, forward security is somewhat overlooked in the public key encryption with keyword search (PEKS) context and there are few proposals, whereas forward security has been stated as a default security notion in the (dynamic) symmetric searchable encryption (SSE) context. In the PEKS context, forward secure PEKS (FS-PEKS) is essentially the same as public key encryption with temporary keyword search (PETKS) proposed by Abdalla et al. (JoC 2016), which can be constructed generically from hierarchical identity-based encryption (HIBE) with level-1 anonymity. Alternatively, Zeng et al. (IEEE Transactions on Cloud Computing 2022) proposed a generic construction of FS-PEKS from attribute-based searchable encryption supporting OR gates. In the public key authenticated encryption with keyword search (PAEKS) context, a concrete forward secure PAEKS (FS-PAEKS) construction has been proposed by Jiang et al. (The Computer Journal 2022), but no generic construction has been proposed to date. In this paper, we propose a generic construction of FS-PAEKS from PAEKS. In addition to PAEKS, we employ the 0/1 encodings proposed by Lin et al. (ACNS 2005). We also show that the Jiang et al. FS-PAEKS scheme does not provide forward security, and thus our generic construction yields the first secure FS-PAEKS schemes. Our generic construction is quite simple, and it can also be applied to construct FS-PEKS. It yields a comparably efficient FS-PEKS scheme compared to the previous schemes and, moreover, eliminates the hierarchical structure or attribute-based feature of the previous generic constructions, which is meaningful from a feasibility perspective.
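For readers unfamiliar with the second ingredient: the 0/1 encodings of Lin et al. (ACNS 2005) reduce a greater-than comparison to a set-intersection test, since the 1-encoding of $x$ and the 0-encoding of $y$ intersect exactly when $x > y$. A minimal Python sketch of the encodings themselves (the bit length below is our choice for illustration):

```python
# Minimal sketch of the 0/1 encodings of Lin et al. (ACNS 2005):
# x > y  iff  the 1-encoding of x intersects the 0-encoding of y.

def bits(x, n):
    return [(x >> (n - 1 - i)) & 1 for i in range(n)]

def one_encoding(x, n):
    """Prefixes of x (as bit strings) that end in a 1 bit."""
    b = bits(x, n)
    return {tuple(b[:i + 1]) for i in range(n) if b[i] == 1}

def zero_encoding(y, n):
    """Prefixes of y with the final 0 bit flipped to 1."""
    b = bits(y, n)
    return {tuple(b[:i] + [1]) for i in range(n) if b[i] == 0}

n = 8
for x in range(40, 44):
    for y in range(40, 44):
        intersects = bool(one_encoding(x, n) & zero_encoding(y, n))
        assert intersects == (x > y)
print("0/1 encoding comparison matches x > y on all tested pairs")
```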
Vikas Srivastava, Anubhab Baksi, Sumit Kumar Debnath
ePrint Report
Digital signatures are one of the most basic cryptographic building blocks, providing security features such as authenticity, unforgeability, and undeniability. The security of existing state-of-the-art digital signatures rests on number-theoretic hardness assumptions such as the discrete logarithm and integer factorization problems. However, these problems are under threat in the quantum world: quantum algorithms such as Shor's algorithm can solve them in polynomial time. As an alternative, a new direction of research called post-quantum cryptography (PQC) is expected to provide a new generation of quantum-resistant digital signatures. Hash-based signatures are one such candidate. Hash-based signature schemes are digital signature schemes that use hash functions as their central building block. They are efficient, flexible, and can be used in a variety of applications. In this document, we provide an overview of hash-based signatures. Our presentation covers a wide range of aspects and is not only comprehensible for readers without expertise in the subject matter, but also serves as a valuable reference for experts.
Alper Cakan, Vipul Goyal, Chen-Da Liu-Zhang, João Ribeiro
ePrint Report
Side-channel attacks, which aim to leak side information on secret system components, are ubiquitous. Even simple attacks, such as measuring time elapsed or radiation emitted during encryption and decryption procedures, completely break textbook versions of many cryptographic schemes. This has prompted the study of leakage-resilient cryptography, which remains secure in the presence of side-channel attacks.

Classical leakage-resilient cryptography must necessarily impose restrictions on the type of leakage one aims to protect against. As a notable example, the most well-studied leakage model is that of bounded leakage, where it is assumed that an adversary learns at most $\ell$ bits of leakage on secret components, for some leakage bound $\ell$. Although this leakage bound is necessary, it is unclear if such a bound is realistic in practice since many practical side-channel attacks cannot be captured by bounded leakage.

In this work, we investigate the possibility of designing cryptographic schemes that provide guarantees against arbitrary side-channel attacks:

- Using techniques from uncloneable quantum cryptography, we design several basic leakage-resilient primitives, such as secret sharing, (weak) pseudorandom functions, digital signatures, and public- and private-key encryption, which remain secure under (polynomially) unbounded classical leakage. In particular, this leakage can be much longer than the (quantum) secret being leaked upon. In our view, leakage is the result of observations of quantities such as power consumption and hence is most naturally viewed as classical information.

- In the even stronger adversarial setting where the adversary is allowed to obtain unbounded quantum leakage (and thus leakage-resilience is impossible), we design schemes for many cryptographic tasks which support leakage-detection. This means that we can efficiently check whether the security of such a scheme has been compromised by a side-channel attack. These schemes are based on techniques from cryptography with certified deletion.

- We also initiate a study of classical cryptographic schemes with (bounded) post-quantum leakage-resilience. These schemes resist side-channel attacks performed by adversaries with quantum capabilities which may even share arbitrary entangled quantum states. That is, even if such adversaries are non-communicating, they can still have "spooky" communication via entangled states.
Jiaxin Guan, Daniel Wichs, Mark Zhandry
ePrint Report
Consider a state-level adversary who observes and stores large amounts of encrypted data from all users on the Internet, but does not have the capacity to store it all. Later, it may target certain "persons of interest" in order to obtain their decryption keys. We would like to guarantee that, if the adversary's storage capacity is only (say) $1\%$ of the total encrypted data size, then even if it can later obtain the decryption keys of arbitrary users, it can only learn something about the contents of (roughly) $1\%$ of the ciphertexts, while the rest will maintain full security. This can be seen as an extension of incompressible cryptography (Dziembowski CRYPTO '06, Guan, Wichs and Zhandry EUROCRYPT '22) to the multi-user setting. We provide solutions in both the symmetric key and public key setting with various trade-offs in terms of computational assumptions and efficiency. As the core technical tool, we study an information-theoretic problem which we refer to as "somewhere randomness extraction". Suppose $X_1, \ldots, X_t$ are correlated random variables whose total joint min-entropy rate is $\alpha$, but we know nothing else about their individual entropies. We choose $t$ random and independent seeds $S_1, \ldots, S_t$ and attempt to individually extract some small amount of randomness $Y_i = \mathsf{Ext}(X_i;S_i)$ from each $X_i$. We'd like to say that roughly an $\alpha$-fraction of the extracted outputs $Y_i$ should be indistinguishable from uniform even given all the remaining extracted outputs and all the seeds. We show that this indeed holds for specific extractors based on Hadamard and Reed-Muller codes.
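For intuition on the final sentence, the simplest Hadamard-code-based extractor is the inner-product-mod-2 extractor: each seed $S_i$ selects a random parity of the bits of $X_i$. A minimal Python sketch with toy sizes of our choosing, extracting one bit per source (the correlated-source model below is our own illustration, not the paper's):

```python
# Minimal sketch of a Hadamard-code-based extractor: Ext(x; s) = <x, s> mod 2.
# Each of the t sources gets an independent random seed; toy parameters only.
import random

random.seed(2)
NBITS = 64     # length of each source X_i (hypothetical)
t = 8          # number of correlated sources

def ext(x_bits, seed_bits):
    """Inner product of x and the seed over GF(2): one extracted bit."""
    return sum(a & b for a, b in zip(x_bits, seed_bits)) % 2

# Correlated sources: here, noisy copies of one hidden string (illustration).
hidden = [random.randint(0, 1) for _ in range(NBITS)]
sources = [[b ^ (random.random() < 0.2) for b in hidden] for _ in range(t)]

# Independent seeds S_1, ..., S_t, one extracted bit Y_i per source X_i.
seeds = [[random.randint(0, 1) for _ in range(NBITS)] for _ in range(t)]
outputs = [ext(x, s) for x, s in zip(sources, seeds)]
print("extracted bits:", outputs)
```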
Manuel Barbosa, François Dupressoir, Benjamin Grégoire, Andreas Hülsing, Matthias Meijers, Pierre-Yves Strub
ePrint Report
This work presents a novel machine-checked tight security proof for $\mathrm{XMSS}$ — a stateful hash-based signature scheme that is (1) standardized in RFC 8391 and NIST SP 800-208, and (2) employed as a primary building block of $\mathrm{SPHINCS}^{+}$, one of the signature schemes recently selected for standardization as a result of NIST’s post-quantum competition. In 2020, Kudinov, Kiktenko, and Fedorov pointed out a flaw affecting the tight security proofs of $\mathrm{SPHINCS}^{+}$ and $\mathrm{XMSS}$. For the case of $\mathrm{SPHINCS}^{+}$, this flaw was fixed in a subsequent tight security proof by Hülsing and Kudinov. Unfortunately, employing the fix from this proof to construct an analogous tight security proof for $\mathrm{XMSS}$ would merely demonstrate security with respect to an insufficient notion. At the cost of modeling the message-hashing function as a random oracle, we complete the tight security proof for $\mathrm{XMSS}$ and formally verify it using the EasyCrypt proof assistant. As part of this endeavor, we formally verify the crucial step common to (the security proofs of) $\mathrm{SPHINCS}^{+}$ and $\mathrm{XMSS}$ that was found to be flawed before, thereby confirming that the core of the aforementioned security proof by Hülsing and Kudinov is correct. As this is the first work to formally verify proofs for hash-based signature schemes in EasyCrypt, we develop several novel libraries for the fundamental cryptographic concepts underlying such schemes — e.g., hash functions and digital signature schemes — establishing a common starting point for future formal verification efforts. These libraries will be particularly helpful in formally verifying proofs of other hash-based signature schemes such as $\mathrm{LMS}$ or $\mathrm{SPHINCS}^{+}$.
Simone Galimberti, Maria Potop-Butucaru
ePrint Report
We study the rational behaviors of participants in $DAG$-based distributed ledgers. We analyze generic algorithms that encapsulate the main actions of participants in a $DAG$-based distributed ledger: voting for a block, and checking its validity. Knowing that those actions have costs, and that validating a block gives rewards to users who participated in the validation procedure, we use game theory to study how strategic participants behave while trying to maximize their gains. We consider scenarios with different types of participants and investigate whether there exist equilibria where the properties of the protocols are guaranteed. The analysis is focused on the study of equilibria with trembling participants (i.e., rational participants that can take unintended actions with low probability). We find that in the presence of trembling participants, there exist equilibria where protocol properties may be violated.
Claude Carlet, Abderrahman Daif, Sylvain Guilley, Cédric Tavernier
ePrint Report
The implementation of cryptographic algorithms must be protected against physical attacks. Side-channel and fault injection analyses are two prominent such implementation-level attacks. Protections against either do exist; they are characterized by security orders: the higher the order, the more difficult the attack. In this paper, we leverage the fast discrete Fourier transform to reduce the complexity of high-order masking, and extend it to allow for fault detection and/or correction. The security paradigm is that of code-based masking. Coding theory is amenable both to mixing the information and masking material at a prescribed order, and to detecting and/or correcting errors purposely injected by an attacker. For the first time, we show that quasi-linear masking (pioneered by Goudarzi, Joux and Rivain at ASIACRYPT 2018) can be achieved alongside cost amortisation. This technique consists in masking several symbols/bytes with the same masking material, thereby improving the efficiency of the masking. Similarly, it allows optimizing the detection capability of codes, since linear codes are all the more efficient as the information to protect is longer. Namely, we prove mathematically that our scheme features side-channel security order of $d+1-t$, detects $d$ faults and corrects $\lfloor(d-1)/2\rfloor$ faults, where $2d+1$ is the encoding length and $t$ is the information size ($t\geq1$). Applied to AES, one can get side-channel protection of order $d=7$ when masking one column/line ($t=4$ bytes) at once. In addition to the theory, which makes use of the Frobenius Additive Fast Fourier Transform, we show performance results, both in software and hardware.
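To illustrate the stated trade-off, the following sketch simply evaluates the formulas quoted above: with encoding length $2d+1$ and information size $t$, the scheme has side-channel order $d+1-t$, detects $d$ faults, and corrects $\lfloor(d-1)/2\rfloor$ faults. The parameter choices below are ours, for illustration only.

```python
# Evaluating the security/fault-tolerance formulas stated above:
# encoding length n = 2d + 1, information size t (t >= 1),
# side-channel order d + 1 - t, detects d faults, corrects (d - 1) // 2.

def code_masking_params(d, t):
    assert t >= 1 and d + 1 - t >= 0
    return {
        "encoding length": 2 * d + 1,
        "side-channel order": d + 1 - t,
        "faults detected": d,
        "faults corrected": (d - 1) // 2,
    }

for d, t in [(7, 4), (10, 4), (7, 1)]:   # illustrative choices, not the paper's
    print(f"d={d}, t={t}:", code_masking_params(d, t))
```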
Carsten Baum, Bernardo David, Elena Pagnin, Akira Takahashi
ePrint Report
Time-based cryptographic primitives such as Time-Lock Puzzles (TLPs) and Verifiable Delay Functions (VDFs) have recently found many applications to the efficient design of secure protocols such as randomness beacons or multiparty computation with partial fairness. However, current TLP and VDF candidate constructions rely on the average hardness of sequential computational problems. Unfortunately, obtaining concrete parameters for these is notoriously hard, as there cannot be a large gap between the honest parties’ and the adversary’s runtime when solving the same problem. Moreover, even a constant improvement in algorithms for solving these problems can render parameter choices, and thus deployed systems, insecure, unless very conservative and therefore highly inefficient parameters are chosen.

In this work, we investigate how to construct time-based cryptographic primitives from communication delay, which has a known lower bound given the physical distance between devices: the speed of light. In order to obtain high delays, we explore the sequential communication delay that arises when sending a message through a constellation of satellites. This has the advantage that distances between protocol participants are guaranteed as positions of satellites are observable, so delay lower bounds can be easily computed. At the same time, building cryptographic primitives for this setting is challenging due to the constrained resources of satellites and possible corruptions of parties within the constellation.
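As a back-of-the-envelope illustration of the physical bound invoked here (a sketch with hypothetical distances, not the paper's protocol): no message can traverse a satellite path faster than light, so the summed hop distances divided by $c$ already lower-bound the accrued delay.

```python
# Lower bound on sequential communication delay along a satellite path:
# a message cannot traverse the path faster than light, so the total
# hop distance divided by c bounds the accrued delay from below.

C = 299_792_458.0  # speed of light in vacuum, m/s

def min_delay_seconds(hop_distances_m):
    return sum(hop_distances_m) / C

# Hypothetical constellation path: five inter-satellite hops of ~2,000 km each.
hops = [2_000_000.0] * 5
print(f"delay lower bound: {min_delay_seconds(hops) * 1000:.2f} ms")
```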

We address these challenges by constructing efficient proofs of sequential communication delay to convince a verifier that a message has accrued delay by traversing a path among satellites. As part of this construction, we propose the first ordered multisignature scheme with security under a version of the discrete logarithm assumption, which enjoys constant-size signatures and, modulo preprocessing, computational complexity independent of the number of signers. Building on our proofs of sequential communication delay, we show new constructions of Publicly Verifiable TLPs and VDFs whose delay guarantees are rooted in physical communication delay lower bounds. Our protocols as well as the ordered multisignature are analysed in the Universal Composability framework using novel models for sequential communication delays and (ordered) multisignatures. A direct application of our results is a randomness beacon that only accesses expensive communication resources in case of cheating.
Nico Döttling, Dimitris Kolonelos, Russell W. F. Lai, Chuanwei Lin, Giulio Malavolta, Ahmadreza Rahimi
ePrint Report
Laconic cryptography is an emerging paradigm that enables cryptographic primitives with sublinear communication complexity in just two messages. In particular, a two-message protocol between Alice and Bob is called laconic if its communication and computation complexity are essentially independent of the size of Alice's input. This can be thought of as a dual notion of fully-homomorphic encryption, as it enables "Bob-optimized" protocols. This paradigm has led to tremendous progress in recent years. However, all existing constructions of laconic primitives are considered only of theoretical interest: They all rely on non-black-box cryptographic techniques, which are highly impractical.

This work shows that non-black-box techniques are not necessary for basic laconic cryptography primitives. We propose a completely algebraic construction of laconic encryption, a notion that we introduce in this work, which serves as the cornerstone of our framework. We prove that the scheme is secure under the standard Learning With Errors assumption (with polynomial modulus-to-noise ratio). We provide the first proof-of-concept implementations of laconic primitives, demonstrating that the construction is indeed practical: for a database size of $2^{50}$, encryption and decryption take on the order of single-digit milliseconds.

Laconic encryption can be used as a black box to construct other laconic primitives. Specifically, we show how to construct:

- Laconic oblivious transfer
- Registration-based encryption scheme
- Laconic private-set intersection protocol

All of the above have essentially optimal parameters and similar practical efficiency. Furthermore, our laconic encryption can be preprocessed such that the online encryption step is entirely combinatorial and therefore much more efficient. Using similar techniques, we also obtain identity-based encryption with an unbounded identity space and tight security proof (in the standard model).
Daniel Collins, Simone Colombo, Loïs Huguenin-Dumittan
ePrint Report
This work discusses real world deniability in messaging. We highlight how the different models for cryptographic deniability do not ensure practical deniability. To overcome this situation, we propose a model for real world deniability that takes into account the entire messaging system. We then discuss how deniability is (not) used in practice and the challenges arising from the design of a deniable system. We propose a simple, yet powerful solution for deniability: applications should enable direct modification of local messages; we discuss the impacts of this strong deniability property.
Kang Hoon Lee, Ji Won Yoon
ePrint Report
In the recent history of fully homomorphic encryption, bootstrapping has been actively studied across many HE schemes. As bootstrapping is the essential process that transforms somewhat homomorphic encryption schemes into fully homomorphic ones, enhancing its performance is one of the key factors in improving the utility of homomorphic encryption.

In this paper, we propose an extended bootstrapping for TFHE, which we name EBS. One of the main drawbacks of TFHE bootstrapping is that its precision is mainly decided by the polynomial dimension $N$. Thus, to bootstrap with high precision, one must enlarge $N$ or take an alternative approach. Our EBS enables the use of a small $N$ for parameter selection while bootstrapping in a higher dimension to keep high precision. Moreover, it can be easily parallelized for faster computation. The EBS can also be easily adapted to other known variants of TFHE bootstrapping based on the original bootstrapping algorithm.

We implement our EBS along with the known full-domain bootstrapping methods ($\mathsf{FDFB}$, $\mathsf{TOTA}$, $\mathsf{Comp}$) and show how much EBS can improve their precision. We provide experimental results and thorough analysis, and show that EBS is capable of bootstrapping with high precision even with small $N$, and thus with smaller keys and lower complexity than choosing a large $N$ from the outset.
Keita Emura
ePrint Report
As a multi-receiver variant of public key authenticated encryption with keyword search (PAEKS), broadcast authenticated encryption with keyword search (BAEKS) was proposed by Liu et al. (ACISP 2021). BAEKS focuses on receiver anonymity, where no information about the receiver is leaked from ciphertexts, which is reminiscent of anonymous broadcast encryption. However, there is room for improving their security definitions; e.g., the two challenge sets of receivers are selected before the setup phase, and an adversary is not allowed to corrupt any receiver. In this paper, we propose a generic construction of BAEKS from PAEKS that provides ciphertext anonymity and consistency in a multi-receiver setting. The proposed construction extends the generic construction proposed by Libert et al. (PKC 2012) for anonymous broadcast encryption and supports adaptive corruptions. We also demonstrate that the Qin et al. PAEKS scheme (ProvSec 2021) provides ciphertext anonymity and consistency in a multi-receiver setting.
Antigoni Polychroniadou, Gilad Asharov, Benjamin Diamond, Tucker Balch, Hans Buehler, Richard Hua, Suwen Gu, Greg Gimler, Manuela Veloso
ePrint Report
Inventory matching is a standard mechanism for trading financial stocks by which buyers and sellers can be paired. In the financial world, banks often undertake the task of finding such matches between their clients. The related stocks can be traded without adversely impacting the market price for either client. If matches between clients are found, the bank can offer the trade at advantageous rates. If no match is found, the parties have to buy or sell the stock in the public market, which introduces additional costs.

A problem with the process as it is presently conducted is that the involved parties must share their order to buy or sell a particular stock, along with the intended quantity (number of shares), with the bank. Clients worry that if this information were to "leak" somehow, other market participants would become aware of their intentions and thus cause the price to move adversely against them before their transaction finalizes.

We provide a solution, Prime Match, that enables clients to match their orders efficiently with reduced market impact while maintaining privacy. In the case where there are no matches, no information is revealed. Our main cryptographic innovation is a two-round secure linear comparison protocol for computing the minimum between two quantities without preprocessing and with malicious security, which can be of independent interest. We report benchmarks of our Prime Match system, which runs in production and is adopted by a large bank in the US -- J.P. Morgan. The system is designed utilizing a star topology network, which provides clients with a centralized node (the bank) as an alternative to the idealized assumption of point-to-point connections, which would be impractical and undesired for the clients to implement in reality.

Prime Match is the first secure multiparty computation solution running live in the traditional financial world.
Wai-Kong Lee, Raymond K. Zhao, Ron Steinfeld, Amin Sakzad, Seong Oun Hwang
ePrint Report
The US National Institute of Standards and Technology initiated a standardization process for post-quantum cryptography in 2017, with the aim of selecting key encapsulation mechanisms and signature schemes that can withstand the threat from emerging quantum computers. In 2022, Falcon was selected as one of the standard signature schemes, eventually attracting efforts to optimize the implementation of Falcon on various hardware architectures for practical applications. Recently, Mitaka was proposed as an alternative to Falcon, allowing parallel execution of most of its operations. These recent advancements motivate us to develop high-throughput implementations of Falcon and Mitaka signature schemes on Graphics Processing Units (GPUs), a massively parallel architecture widely available on cloud service platforms. In this paper, we propose the first parallel implementation of Falcon on various GPUs. An iterative version of the sampling process in Falcon, which is also the most time-consuming Falcon operation, was developed. This allows us to implement Falcon signature generation without relying on expensive recursive function calls on GPUs. In addition, we propose a parallel random samples generation approach to accelerate the performance of Mitaka on GPUs. We evaluate our implementation techniques on state-of-the-art GPU architectures (RTX 3080, A100, T4 and V100). Experimental results show that our Falcon-512 implementation achieves 58,595 signatures/second and 2,721,562 verifications/second on an A100 GPU, which is 20.03× and 29.51× faster than the highly optimized AVX2 implementation on CPU. Our Mitaka implementation achieves 161,985 signatures/second and 1,421,046 verifications/second on the same GPU. Due to the adoption of a parallelizable sampling process, Mitaka signature generation enjoys ≈2–20× higher throughput than Falcon on various GPUs. The high-throughput signature generation and verification achieved by this work can be very useful in various emerging applications, including the Internet of Things.