International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

27 June 2022

Jian Guo, Ling Song, Haoyang Wang
ePrint Report
This paper introduces structures on the key in the related-key attack setting. While structures have long been used in key-recovery attacks against block ciphers to exploit the birthday effect, the idea had not been applied to key material, since a structure on the key results in uncontrolled key differences and hence affects the validity or probabilities of the differential trails. We apply this simple idea to improve the related-key boomerang attack against AES-256 by Biryukov and Khovratovich from 2009. Surprisingly, it turns out to be effective: both data and time complexities are reduced by a factor of about 2^8, to 2^92 and 2^91 respectively, at the cost of increasing the number of required keys from 4 to 2^19. There exist tradeoffs between the data/time complexity and the number of keys. To the best of our knowledge, this is the first substantial improvement of the attack against the full AES-256 since 2009. It will be interesting to see whether the structure technique can be applied to other AES-like block ciphers, and to tweaks rather than keys of tweakable block ciphers, so that the number of required keys of the attack is not affected.
Yong-Jin Kim, Dok-Jun An, Kum-Sok Sin, Son-Gyong Kim
ePrint Report
In this paper, we point out several vulnerabilities of a recent pairing-based certificateless authenticated key agreement protocol for blockchain-based wireless body area networks (WBAN). According to our analysis, this protocol is insecure against the key offset attack (KOA), basic impersonation attack (BIA), and man-in-the-middle attack (MMA) mounted by malicious key generation center (KGC) administrators. We also point out some errors in the description of the protocol.
Martin R. Albrecht, Jianwei Li
ePrint Report
Primal attacks against the Learning With Errors (LWE) problem rely on reducing \(q\)-ary lattices. These reduced bases have been observed to exhibit a so-called ``Z-shape'' on their Gram--Schmidt vectors. We propose an efficient simulator to accurately predict this Z-shape behaviour, which we back up with extensive simulations and experiments. We also formalise (under standard heuristics) the intuition that the presence of a Z-shape makes enumeration-based primal lattice attacks faster. Furthermore, we upgrade the LWE or lattice estimator with our simulator to assess and then rule out the impact of the \(q\)-ary Z-shape on solving LWE instances derived from parameter sets for NIST PQC candidates. We consider this improved estimator to be of independent interest.
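
To make the setting concrete, here is a minimal numpy sketch (not the paper's simulator) that builds the canonical basis of a q-ary lattice arising in primal attacks and prints its Gram--Schmidt log-profile; the parameters n, m, q below are illustrative. Before reduction the profile is a step; running LLL/BKZ on such a basis is what produces the Z-shape that the paper's simulator predicts.

```python
# Illustrative sketch only: canonical basis of {x in Z^m : A x = 0 mod q} for
# A = [I_n | A'] in systematic form, and the log2 Gram-Schmidt profile of its rows.
# Unreduced, the profile is a step (log2 q on the first n vectors, 0 afterwards);
# the Z-shape appears once the basis is partially reduced.
import numpy as np

def qary_basis(Aprime, q):
    """Row basis of the q-ary lattice for A = [I_n | A'], with A' an n x (m-n) matrix."""
    n, k = Aprime.shape
    top = np.hstack([q * np.eye(n, dtype=np.int64), np.zeros((n, k), dtype=np.int64)])
    bot = np.hstack([-Aprime.T % q, np.eye(k, dtype=np.int64)])
    return np.vstack([top, bot])

def gso_log_norms(B):
    """log2 of the Gram-Schmidt norms of the rows of B (via a QR decomposition)."""
    Q, R = np.linalg.qr(B.astype(float).T)
    return np.log2(np.abs(np.diag(R)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, q = 30, 80, 3329
    B = qary_basis(rng.integers(0, q, size=(n, m - n)), q)
    print(gso_log_norms(B))   # ~log2(q) on the first n entries, ~0 on the rest
```
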
Justin Holmgren, Minghao Liu, LaKyah Tyner, Daniel Wichs
ePrint Report
Property-preserving hashing (PPH) consists of a family of compressing hash functions $h$ such that, for any two inputs $x,y$, we can correctly identify whether some property $P(x,y)$ holds given only the digests $h(x),h(y)$. In a basic PPH, correctness should hold with overwhelming probability over the choice of $h$ when $x,y$ are worst-case values chosen a priori and independently of $h$. In an adversarially robust PPH (RPPH), correctness must hold even when $x,y$ are chosen adversarially and adaptively depending on $h$. Here, we study (R)PPH for the property that the Hamming distance between $x$ and $y$ is at most $t$.

The notion of (R)PPH was introduced by Boyle, LaVigne and Vaikuntanathan (ITCS '19), and further studied by Fleischhacker, Simkin (Eurocrypt '21) and Fleischhacker, Larsen, Simkin (Eurocrypt '22). In this work, we obtain improved constructions that are conceptually simpler, have nearly optimal parameters, and rely on more general assumptions than prior works. Our results are:

* We construct information-theoretic non-robust PPH for Hamming distance via syndrome list-decoding of linear error-correcting codes (a toy sketch of the syndrome idea follows this list). We provide a lower bound showing that this construction is essentially optimal.

* We make the above construction robust with little additional overhead, by relying on homomorphic collision-resistant hash functions, which can be constructed from either the discrete-logarithm or the short-integer-solution assumptions. The resulting RPPH achieves improved compression compared to prior constructions, and is nearly optimal.

* We also show an alternate construction of RPPH for Hamming distance under the minimal assumption that standard collision-resistant hash functions exist. The compression is slightly worse than our optimized construction using homomorphic collision-resistance, but essentially matches the prior state-of-the-art constructions from specific algebraic assumptions.

* Lastly, we study a new notion of randomized robust PPH (R2P2H) for Hamming distance, which relaxes RPPH by allowing the hashing algorithm itself to be randomized. We give an information-theoretic construction with optimal parameters.
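
As mentioned in the first bullet, the toy Python sketch below illustrates the syndrome idea behind the non-robust construction, instantiated with the [8,4] extended Hamming code and threshold t = 1. Because this fixed code is tiny it errs on some worst-case pairs; the paper instead samples a random linear code and uses syndrome list-decoding, so that any pair fixed independently of the hash is answered correctly with overwhelming probability.

```python
# Toy (non-robust) PPH for Hamming distance with t = 1: the digest of x is the
# syndrome H x over F_2.  Since H x + H y = H (x + y), the syndrome of the
# difference tells us whether dist(x, y) <= 1.  Illustration only, not the paper's
# list-decoding-based construction.
import numpy as np

# Parity-check matrix of the [8,4] extended Hamming code (minimum distance 4).
H = np.array([[0,0,0,1,1,1,1,0],
              [0,1,1,0,0,1,1,0],
              [1,0,1,0,1,0,1,0],
              [1,1,1,1,1,1,1,1]], dtype=np.int64)

def digest(x):                                   # x: length-8 0/1 vector
    return tuple(H @ x % 2)

def decide_close(dx, dy):
    """Toy decision of 'dist(x, y) <= 1' given only the two digests."""
    s = np.array(dx) ^ np.array(dy)              # equals H (x + y), by linearity
    if not s.any():
        return True                              # x = y (or x + y is a codeword of weight >= 4)
    return any((s == H[:, j]).all() for j in range(H.shape[1]))  # weight-1 error explains s

x = np.array([1,0,1,1,0,0,1,0])
y = x.copy(); y[4] ^= 1                          # distance 1
z = x.copy(); z[1] ^= 1; z[6] ^= 1               # distance 2
print(decide_close(digest(x), digest(y)), decide_close(digest(x), digest(z)))  # True False
```
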
Viet Tung Hoang, Cong Wu, Xin Yuan
ePrint Report
System logs are crucial for forensic analysis, but to be useful, they need to be tamper-proof. To protect the logs, a number of secure logging systems have been proposed from both academia and industry. Unfortunately, except for the recent KennyLoggings construction, all other logging systems are broken by an attack of Paccagnella et al. (CCS 2020). In this work, we build a secure logging system that improves KennyLoggings on several fronts: adoptability, security, and performance. Our key insight for performance gain is to use AES on a fixed, known key. While this trick is widely used in secure distributed computing, this is the first time it has found an application in the area of symmetric-key cryptography.
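
The performance trick mentioned above is general: with AES-NI hardware, AES under a fixed, publicly known key is an extremely fast public permutation. The sketch below is illustrative only (it is not the paper's logging scheme, carries no security claim, and its key and state values are made up): it shows the typical usage pattern of feeding the input forward into the permutation output, Davies-Meyer style, to evolve a log-sealing key.

```python
# Minimal sketch of "AES on a fixed, known key" as a public permutation (not the
# paper's construction): one AES call plus a feed-forward XOR per key-evolution step.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

FIXED_KEY = bytes(16)                          # all-zero key: fixed and public by design

def pi(block16: bytes) -> bytes:
    """AES-128 under the fixed public key, used as a public 128-bit permutation."""
    enc = Cipher(algorithms.AES(FIXED_KEY), modes.ECB()).encryptor()
    return enc.update(block16) + enc.finalize()

def feed_forward(x: bytes) -> bytes:
    """pi(x) XOR x: a Davies-Meyer-style step built from the fixed-key permutation."""
    return bytes(a ^ b for a, b in zip(x, pi(x)))

state = b"log-key-epoch-00"                    # 16 illustrative bytes of key state
state = feed_forward(state)                    # evolve the state with one AES call
print(state.hex())
```
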
Clémence Bouvier, Pierre Briaud, Pyrros Chaidos, Léo Perrin, Vesselin Velichkov
ePrint Report
Advanced cryptographic protocols such as Zero-knowledge (ZK) proofs of knowledge, widely used in cryptocurrency applications such as Bitcoin, Ethereum and Zcash, demand new cryptographic hash functions that are efficient not only over the binary field $\mathbb{F}_2$, but also over large fields of prime characteristic $\mathbb{F}_p$. This need has been acknowledged by the wider community and new so-called Arithmetization-Oriented (AO) hash functions have been proposed in response, e.g. MiMC-Hash, Rescue and Poseidon to name a few. In this paper we propose Anemoi: a new family of ZK-friendly AO hash functions. The main features that set Anemoi apart from other such families are that 1) it is designed to be efficient within multiple proof systems (e.g. Groth16, Plonk, etc.), 2) it contains dedicated functions optimised for specific applications (namely Merkle tree hashing and general purpose hashing), and 3) it has competitive performance, e.g. about a factor of 2 improvement over Poseidon and Rescue in terms of R1CS constraints, and a 10%-28% improvement over a highly optimized Poseidon implementation in Plonk constraints. On the theoretical side, Anemoi pushes the frontier further in understanding the design principles that are truly entailed by arithmetization-orientation. In particular, we identify and exploit a previously unknown relationship between CCZ-equivalence and arithmetization-orientation. In addition, we propose two new standalone components that can be easily reused in new designs. One is a new S-box called Flystel, based on the well-studied butterfly structure, and the second is Jive -- a new mode of operation inspired by the "Latin dance" symmetric algorithms (Salsa, ChaCha and derivatives).
Mahdi Sedaghat, Daniel Slamanig, Markulf Kohlweiss, Bart Preneel
ePrint Report
The by now broadly accepted reliance of society on online services has led to a push for decentralization to mitigate the societal and technical risks caused by single points of failure (PoF). One such PoF is cryptographic keys. Thus there is renewed interest in threshold cryptography to distribute the generation and use of such keys. Structure-preserving signatures (SPS) are an important building block for privacy-preserving cryptographic protocols such as electronic cash and (delegatable) anonymous credentials. However, to date, no structure-preserving threshold signatures (SPTS) are available. This is unfortunate, as another PoF is centralized identity management, which could be mitigated by anonymous credentials.

In this work we aim to close this gap by introducing a notion and constructions of (non-) interactive SPTS. While it is relatively easy to devise interactive SPTS supporting static corruptions, e.g., based on the SPS of Ghadafi (CT-RSA'16), constructing non-interactive SPTS is a much more delicate task. Due to their structural properties, starting from existing SPS does not yield secure schemes. Thus, we take a different path and first introduce the notion of message-indexed SPS, a variant of SPS that is parameterized by a message indexing function. Inspired by Pointcheval-Sanders (PS) signatures (CT-RSA'16) and the SPS of Ghadafi, we then present a message-indexed SPS, which is non-interactive threshold-friendly. We prove its security in the random oracle model based on a variant of the generalized PS assumption. Based on our message-indexed SPS we then propose the first non-interactive message-indexed SPTS, which we prove to be secure under adaptive corruption. Finally, we discuss applications of SPTS to privacy-preserving primitives.
Francesca Falzon, Kenneth G. Paterson
ePrint Report
Ghosh, Kamara and Tamassia (ASIA CCS 2021) presented a Graph Encryption Scheme supporting shortest path queries. We show how to perform a query recovery attack against this GKT scheme when the adversary is given the original graph together with the leakage of certain subsets of queries. Our attack falls within the security model used by Ghosh et al., and is the first targeting schemes supporting shortest path queries. Our attack uses classical graph algorithms to compute the canonical names of the single-destination shortest path spanning trees of the underlying graph and uses these canonical names to pre-compute the set of candidate queries that match each response. Then, when all shortest path queries to a single node have been observed, the canonical names for the corresponding query tree are computed and the responses are matched to the candidate queries from the offline phase. The output is guaranteed to contain the correct query. For a graph on $n$ vertices, our attack runs in time $O(n^3)$ and matches the time complexity of the GKT scheme's setup. We evaluate the attack's performance using the real-world datasets used in the original paper and on random graphs, and show that for the real-world datasets as many as 21.9% of the queries can be uniquely recovered and as many as 50% of the queries result in sets of only three candidates.
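
The sketch below illustrates only the offline phase's graph-theoretic core under simplifying assumptions: it builds a single-destination shortest-path (SDSP) tree for every vertex with networkx and assigns each rooted tree an AHU-style canonical name, so that structurally identical trees receive the same name. Function names and the example graph are illustrative, and ties between equal-length shortest paths are broken arbitrarily here.

```python
# Illustrative sketch (not the full query-recovery attack): SDSP trees and
# canonical names for rooted trees via the classical AHU encoding.
import networkx as nx

def sdsp_tree(G, dest):
    """Shortest-path tree into `dest`: each vertex maps to its parent toward dest."""
    paths = nx.single_source_shortest_path(G, dest)   # paths from dest to every vertex
    parent = {}
    for v, path in paths.items():
        if v != dest:
            parent[v] = path[-2]                       # predecessor of v on that path
    return parent

def canonical_name(parent, root):
    """AHU canonical form of the rooted tree given by the parent map."""
    children = {}
    for v, p in parent.items():
        children.setdefault(p, []).append(v)
    def name(v):
        return "(" + "".join(sorted(name(c) for c in children.get(v, []))) + ")"
    return name(root)

G = nx.cycle_graph(5)
names = {v: canonical_name(sdsp_tree(G, v), v) for v in G}
print(names)   # all five SDSP trees are isomorphic, so all names coincide
```
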
Tim Beyne, Vincent Rijmen
ePrint Report
A systematic approach to the fixed-key analysis of differential probabilities is proposed. It is based on the propagation of 'quasidifferential trails', which keep track of probabilistic linear relations on the values satisfying a differential characteristic in a theoretically sound way. It is shown that the fixed-key probability of a differential can be expressed as the sum of the correlations of its quasidifferential trails.

The theoretical foundations of the method are based on an extension of the difference-distribution table, which we call the quasidifferential transition matrix. The role of these matrices is analogous to that of correlation matrices in linear cryptanalysis. This puts the theory of differential and linear cryptanalysis on an equal footing.
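
To make the analogy concrete, the toy snippet below computes the two classical objects being unified -- the difference-distribution table and the correlation matrix -- for an example 4-bit S-box (the values are those of the PRESENT S-box); it does not compute the paper's quasidifferential transition matrix itself.

```python
# Reference computation of the classical DDT and correlation matrix for a 4-bit S-box.
S = [0xC,5,6,0xB,9,0,0xA,0xD,3,0xE,0xF,8,4,7,1,2]   # example 4-bit S-box (PRESENT)
N = 16

def ddt(S):
    T = [[0]*N for _ in range(N)]
    for a in range(N):
        for x in range(N):
            T[a][S[x] ^ S[x ^ a]] += 1
    return T                                          # T[a][b] = #{x : S(x)+S(x+a) = b}

def correlation_matrix(S):
    def parity(z): return bin(z).count("1") & 1
    C = [[0.0]*N for _ in range(N)]
    for u in range(N):            # output mask
        for v in range(N):        # input mask
            s = sum((-1) ** (parity(u & S[x]) ^ parity(v & x)) for x in range(N))
            C[u][v] = s / N       # correlation of the approximation u.S(x) + v.x
    return C

print(ddt(S)[1][3], correlation_matrix(S)[1][1])
```
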

The practical applicability of the proposed methodology is demonstrated by analyzing several differentials for RECTANGLE, KNOT, Speck and Simon. The analysis is automated and applicable to other SPN and ARX designs. Several attacks are shown to be invalid; most others turn out to work only for some keys, but can be improved for weak keys.
Samuel Dittmer, Yuval Ishai, Steve Lu, Rafail Ostrovsky
ePrint Report
We revisit the problem of constant-round malicious secure two-party computation by considering the use of simple correlations, namely sources of correlated randomness that can be securely generated with sublinear communication complexity and good concrete efficiency.

The current state-of-the-art protocol of Katz et al. (Crypto 2018) achieves malicious security by realizing a variant of the authenticated garbling functionality of Wang et al. (CCS 2017). Given oblivious transfer correlations, the communication cost of this protocol (with 40 bits of statistical security) is comparable to roughly $10$ garbled circuits (GCs). This protocol inherently requires more than 2 rounds of interaction.

In this work, we use other kinds of simple correlations to realize the authenticated garbling functionality with better efficiency. Concretely, we get the following reduced costs in the random oracle model:

- Using variants of both vector oblivious linear evaluation (VOLE) and multiplication triples (MT), we reduce the cost to $1.31$ GCs.
- Using only variants of VOLE, we reduce the cost to $2.25$ GCs.
- Using only variants of MT, we obtain a non-interactive (i.e., 2-message) protocol with cost comparable to $8$ GCs.

Finally, we show that by using recent constructions of pseudorandom correlation generators (Boyle et al., CCS 2018, Crypto 2019, 2020), the simple correlations consumed by our protocols can be securely realized without forming an efficiency bottleneck.
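
For readers unfamiliar with the correlations consumed above, the toy snippet below shows what a multiplication-triple (MT) correlation buys in the simplest, semi-honest two-party setting: one secure multiplication per triple via Beaver's trick. The authenticated-garbling protocol itself is far more involved; the field size and values here are illustrative, and the "opening" of d and e is simulated centrally.

```python
# Toy semi-honest Beaver multiplication from an MT correlation (illustration only).
import random
P = 2**61 - 1                     # a prime field

def share(v):                     # additive 2-out-of-2 sharing of v
    r = random.randrange(P)
    return r, (v - r) % P

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    # Each party locally masks its shares; d = x - a and e = y - b are opened.
    d = (x_sh[0] - a_sh[0] + x_sh[1] - a_sh[1]) % P
    e = (y_sh[0] - b_sh[0] + y_sh[1] - b_sh[1]) % P
    z = []
    for i in (0, 1):
        zi = (c_sh[i] + d * b_sh[i] + e * a_sh[i]) % P
        if i == 0:
            zi = (zi + d * e) % P          # the public d*e term is added by one party
        z.append(zi)
    return z                               # additive shares of x*y

a, b = random.randrange(P), random.randrange(P)
x, y = 12345, 67890
z_sh = beaver_mul(share(x), share(y), share(a), share(b), share(a * b % P))
assert sum(z_sh) % P == x * y % P
```
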
Rajendra Kumar, Khoa Nguyen
ePrint Report
Introduced by von Ahn et al. (STOC’05), covert two-party computation is an appealing cryptographic primitive that allows Alice and Bob to securely evaluate a function on their secret inputs in a steganographic manner, i.e., even the existence of a computation is oblivious to each party - unless the output of the function is favourable to both. A prominent form of covert computation is covert authentication, where Alice and Bob want to authenticate each other based on their credentials, in a way such that the party who does not hold the appropriate credentials cannot pass the authentication and is even unable to distinguish a protocol instance from random noise. Jarecki (PKC’14) put forward a blueprint for designing covert authentication protocols, which relies on a covert conditional key-encapsulation mechanism, an identity escrow scheme, a covert commitment scheme and a Σ-protocol satisfying several specific properties. He also proposed an instantiation based on the Strong RSA, the Decisional Quadratic Residuosity and the Decisional Diffie-Hellman assumptions. Despite being very efficient, Jarecki’s construction is vulnerable against quantum adversaries. In fact, designing covert authentication protocols from post-quantum assumptions remains an open problem. In this work, we present several contributions to the study of covert authentication protocols. First, we identify several technical obstacles in realizing Jarecki’s blueprint under lattice assumptions. To remedy this, we then provide a new generic construction of a covert Mutual Authentication (MA) protocol that departs from the given blueprint and requires somewhat weaker properties regarding the employed cryptographic ingredients. Next, we instantiate our generic construction based on commonly used lattice assumptions. The protocol is proven secure in the random oracle model, assuming the hardness of the Module Learning With Errors (M-LWE), Module Short Integer Solution (M-SIS), and NTRU problems, and hence is potentially quantum-safe. In the process, we also develop an approximate smooth projective hashing function associated with a covert commitment, based on the M-LWE assumption. We then demonstrate that this new ingredient can be smoothly combined with existing lattice-based techniques to yield a secure covert MA scheme.
Rafael del Pino, Shuichi Katsumata
ePrint Report
Blind signatures, proposed by Chaum (CRYPTO'82), are interactive protocols between a signer and a user, where a user can obtain a signature without revealing the message to be signed. Recently, Hauck et al. (EUROCRYPT'20) observed that all efficient lattice-based blind signatures following the blueprint of the original blind signature by Rückert (ASIACRYPT'10) have a flawed security proof. This puts us in a situation where all known lattice-based blind signatures have at least two of the following drawbacks: heuristic security; a signature size of 1 MB or more; supporting only a bounded polynomial number of signatures; or relying on non-standard assumptions.

In this work, we construct the first round-optimal (i.e., two-round) lattice-based blind signature with a signature size of roughly 100 KB that supports an unbounded polynomial number of signatures and is provably secure under standard assumptions. Even if we allow non-standard assumptions and more rounds, ours provides the shortest signature size while simultaneously supporting an unbounded polynomial number of signatures. The main idea of our work is revisiting the generic blind signature construction by Fischlin (CRYPTO'06) and optimizing the commit-then-open proof using techniques tailored to lattices. Our blind signature is also the first to have a formal security proof in the quantum random oracle model. Finally, our blind signature extends naturally to partially blind signatures, where the user and signer can include an agreed-upon public string in the message.
Mihir Bellare, Stefano Tessaro, Chenzhi Zhu
ePrint Report
We give a unified syntax, and a hierarchy of definitions of security of increasing strength, for non-interactive threshold signature schemes. They cover both fully non-interactive schemes (these are ones that have a single-round signing protocol, the canonical example being threshold-BLS) and ones, like FROST, that have a prior round of message-independent pre-processing. The definitions in the upper echelon of our hierarchy ask for security that is well beyond any currently defined, let alone proven to be met by the just-mentioned schemes, yet natural, and important for modern applications like securing digital wallets. We prove that BLS and FROST are better than advertised, meeting some of these stronger definitions. Yet, they fall short of meeting our strongest definition, a gap we fill for FROST via a simple enhancement to the scheme. We also surface subtle differences in the security achieved by variants of FROST.
Jeremiah Blocki, Blake Holman
ePrint Report
Memory-hard functions (MHFs) are a useful cryptographic primitive which can be used to design egalitarian proof-of-work puzzles and to protect low-entropy secrets like passwords against brute-force attackers. Intuitively, a memory-hard function is a function whose evaluation costs are dominated by memory costs even if the attacker uses specialized hardware (FPGAs/ASICs), and several cost metrics have been proposed to quantify this intuition. For example, space-time cost looks at the product of running time and the maximum space usage over the entire execution of an algorithm. Alwen and Serbinenko (STOC 2015) observed that the space-time cost of evaluating a function multiple times may not scale linearly in the number of instances being evaluated and introduced the stricter requirement that a memory-hard function has high cumulative memory complexity (CMC) to ensure that an attacker's amortized space-time costs remain large even if the attacker evaluates the function on multiple different inputs in parallel. Alwen et al. (EUROCRYPT 2018) observed that the notion of CMC still gives the attacker undesirable flexibility in selecting space-time tradeoffs, e.g., while the MHF scrypt has maximal CMC $\Omega(N^2)$, an attacker could evaluate the function with constant $O(1)$ memory in time $O(N^2)$. Alwen et al. introduced an even stricter notion of Sustained Space complexity and designed an MHF with sustained space complexity $s=\Omega(N/\log N)$ for $t=\Omega(N)$ steps, i.e., any algorithm evaluating the function in the parallel random oracle model must have at least $t=\Omega(N)$ steps where the memory usage is at least $\Omega(N/\log N)$.

In this work, we use dynamic pebbling games and dynamic graphs to explore tradeoffs between sustained space complexity and cumulative memory complexity for data-dependent memory-hard functions such as Argon2id and scrypt. We design our own dynamic graph (dMHF) with the property that {\em any} dynamic pebbling strategy either (1) has $\Omega(N)$ rounds with $\Omega(N)$ space, or (2) has CMC $\Omega(N^{3-\epsilon})$ --- substantially larger than $N^2$. For Argon2id we show that {\em any} dynamic pebbling strategy either (1) has $\Omega(N)$ rounds with $\Omega(N^{1-\epsilon})$ space, or (2) has CMC $\omega(N^2)$. We also present a dynamic version of DRSample (Alwen et al. 2017) for which {\em any} dynamic pebbling strategy either (1) has $\Omega(N)$ rounds with $\Omega(N/\log N)$ space, or (2) has CMC $\Omega(N^3/\log N)$.
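
The cost metrics discussed above are easy to state on a concrete pebbling. The helper below is illustrative only (it is not the paper's dynamic pebbling analysis): it takes a pebbling given as a sequence of pebble sets and reports its space-time cost, CMC, and s-sustained space, on two ways of pebbling a small path graph.

```python
# Cost metrics of a (parallel) pebbling P_1, ..., P_T, given as a list of sets.
def pebbling_costs(pebbling, s):
    T = len(pebbling)
    peak = max(len(P) for P in pebbling)
    return {
        "space_time": T * peak,                          # running time x peak memory
        "cmc": sum(len(P) for P in pebbling),            # cumulative memory complexity
        "sustained": sum(len(P) >= s for P in pebbling), # rounds with at least s pebbles
    }

# Two ways to pebble a directed path on 8 nodes: keep every pebble vs. only the frontier.
greedy = [set(range(i + 1)) for i in range(8)]           # space grows up to 8
frugal = [{i} for i in range(8)]                         # constant space 1
print(pebbling_costs(greedy, s=4))   # {'space_time': 64, 'cmc': 36, 'sustained': 5}
print(pebbling_costs(frugal, s=4))   # {'space_time': 8,  'cmc': 8,  'sustained': 0}
```
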
Vipul Goyal, Antigoni Polychroniadou, Yifan Song
ePrint Report
In the last few years, the efficiency of secure multi-party computation (MPC) in the dishonest majority setting has increased by several orders of magnitude, starting with the SPDZ protocol family which offers a speedy information-theoretic online phase in the preprocessing model. However, state-of-the-art $n$-party MPC protocols in the dishonest majority setting incur online communication complexity per multiplication gate which is linear in the number of parties, i.e. $O(n)$, per gate across all parties. In this work, we construct the first MPC protocols in the preprocessing model for dishonest majority with sub-linear communication complexity per gate in the number of parties $n$. To achieve our results, we extend the use of packed secret sharing to the dishonest majority setting. For a constant fraction of corrupted parties (e.g., even if 99 percent of the parties are corrupt), we can achieve a communication complexity of $O(1)$ field elements per multiplication gate across all parties.

At the crux of our techniques lies a new technique called sharing transformation. The sharing transformation technique allows us to transform shares under one type of linear secret sharing scheme into another, and even perform arbitrary linear maps on the secrets of (packed) secret sharing schemes with optimal communication complexity. This technique can be of independent interest since transferring shares from one type of scheme into another (e.g., for degree reduction) is ubiquitous in MPC. Furthermore, we introduce what we call sparsely packed Shamir sharing, which allows us to address the issue of network routing efficiently, and packed Beaver triples, which are an extension of the widely used technique of Beaver triples for packed secret sharing (for dishonest majority).
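
A minimal sketch of the packing idea itself, assuming a plain prime field and illustrative parameters: k secrets are embedded as the values of one random degree-d polynomial at k reserved evaluation points, and each party's share is one evaluation at its own point, so any d+1 shares determine all k secrets at once. This shows only the encoding; the paper's sharing transformation and sparsely packed variants are not reflected here.

```python
# Toy packed Shamir secret sharing over a prime field (illustration only).
import random
P = 2**31 - 1                                        # prime field F_P

def interpolate(points, x):
    """Lagrange-evaluate at x the unique polynomial through `points` = [(xi, yi)]."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def packed_share(secrets, n, d):
    """Share len(secrets) values among parties 1..n with one degree-d polynomial."""
    k = len(secrets)
    pts = [(-(i + 1) % P, s) for i, s in enumerate(secrets)]                    # secrets at -1..-k
    pts += [(-(k + j + 1) % P, random.randrange(P)) for j in range(d + 1 - k)]  # randomness
    return [interpolate(pts, party) for party in range(1, n + 1)]               # share of party i

def packed_reconstruct(shares, party_ids, k):
    pts = list(zip(party_ids, shares))               # needs d+1 shares
    return [interpolate(pts, -(i + 1) % P) for i in range(k)]

secrets = [7, 11, 13]                                # k = 3 secrets in one sharing
shares = packed_share(secrets, n=8, d=5)             # degree 5 => any 6 shares reconstruct
assert packed_reconstruct(shares[:6], list(range(1, 7)), k=3) == secrets
```
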
Arthur Lazzaretti, Charalampos Papamanthou
ePrint Report
In Private Information Retrieval (PIR), a client wishes to access an index $i$ from a public $n$-bit database without revealing any information about this index. Recently, a series of works starting with the seminal paper of Corrigan-Gibbs et al. (Eurocrypt 2020) have introduced offline-online PIR schemes with $\tilde{O}(\sqrt{n})$ (amortized) server time, $\tilde{O}(\sqrt{n})$ (amortized) bandwidth and no additional storage at the server, in both the single-server and two-server models. As a followup to this work, Shi et al. (CRYPTO 2021) further decreased the bandwidth to polylogarithmic, but only in the two-server model. In this paper we fill this gap by constructing the first single-server PIR with $\tilde{O}(\sqrt{n})$ amortized server time and polylogarithmic bandwidth. Central to our approach is a new cryptographic primitive that we call an extended puncturable pseudorandom set: with an extended puncturable pseudorandom set, one can represent a random set succinctly (e.g., with a fixed-size key) and can, at the same time, both add and remove elements from the set by manipulating the key. This extension improves previously-proposed constructions that supported only removal, and could have further applications. We acknowledge our work has limitations; more work is required to bring our ideas closer to practice, due to the use of cryptographic primitives such as FHE (only in the offline phase) and LWE-based privately-puncturable PRFs. However, our protocol yields the best asymptotic complexities in single-server PIR to date and we believe it is an important step towards eventually building a practical PIR scheme.
Jonathan Takeshita, Zachariah Carmichael, Ryan Karl, Taeho Jung
ePrint Report
The massive scale and performance demands of privacy-preserving data aggregation make integration of security and privacy difficult. Traditional tools in private computing are not well-suited to handle these challenges, especially for more limited client devices. Efficient primitives and protocols for secure and private data aggregation are a promising approach for private data analytics with resource-constrained devices. However, even such efficient primitives may be much slower than computation with plain data (i.e., without security/privacy guarantees). In this paper, we present TERSE, a new Private Stream Aggregation (PSA) protocol for quantum-secure time-series additive data aggregation. Due to its simplicity, low latency, and low communication overhead, TERSE is uniquely well-suited for real-world deployment. In our implementation, TERSE shows very low latency for both clients and servers, achieving encryption latency on a smartphone of 0.0003 ms and aggregation latency of 0.006 ms for 1000 users. TERSE also shows significant improvements in latency over other state-of-the-art quantum-secure PSA, achieving improvements of 1796x to 12406x for encryption at the client's end and 848x to 5433x for aggregation and decryption at the server's end.
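
The snippet below illustrates only the additive private stream aggregation functionality that a PSA protocol realizes: clients mask their values with keys that sum to zero per time step, so the aggregator learns nothing beyond the sum. It uses a toy trusted dealer for the keys, and carries none of TERSE's quantum-secure construction or its performance characteristics.

```python
# Toy additive PSA functionality with a trusted dealer (illustration only).
import random
M = 2**32                                  # values and ciphertexts live in Z_M

def dealer_keys(n, t_max):
    """For each time step, client keys are random and the aggregator's key is
    minus their sum, so all keys per step add up to 0 mod M."""
    client_keys, agg_keys = [], []
    for _ in range(t_max):
        ks = [random.randrange(M) for _ in range(n)]
        client_keys.append(ks)
        agg_keys.append((-sum(ks)) % M)
    return client_keys, agg_keys

def encrypt(value, key):                   # client side: one addition mod M
    return (value + key) % M

def aggregate(ciphertexts, agg_key):       # server side: learns only the sum
    return (sum(ciphertexts) + agg_key) % M

client_keys, agg_keys = dealer_keys(n=1000, t_max=1)
values = [random.randrange(100) for _ in range(1000)]
cts = [encrypt(v, k) for v, k in zip(values, client_keys[0])]
assert aggregate(cts, agg_keys[0]) == sum(values) % M
```
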
Kevin Yeo
ePrint Report
In this paper, we study batch private information retrieval with private preprocessing. Private information retrieval (PIR) is the problem where one or more servers hold a database of $n$ bits and a client wishes to retrieve the $i$-th bit in the database from the server(s). In PIR with private preprocessing (also known as offline-online PIR), the client is able to compute a private $r$-bit hint in an offline stage that may be leveraged to perform retrievals in online time $t$. For privacy, the client wishes to hide index $i$ from an adversary that has compromised some of the servers. We will focus on the batch PIR setting where the client performs queries to retrieve the contents of multiple entries simultaneously.

We present a tight characterization for the trade-offs between hint size and online query time. For any $\ell = O(1)$ and $\ell$-server PIR scheme that enables clients to perform batch retrievals of $k$ entries, we prove a lower bound of $tr = \Omega(nk)$ when $r \ge k$. When $r < k$, we prove that $t = \Omega(n)$. Our lower bounds hold when the scheme errs with probability at most $1/15$ and against PPT adversaries that only compromise one server. Our results also improve the best known lower bounds for the single query setting by a logarithmic factor. On the positive side, we show there exists a construction with a single-round query algorithm such that $tr = \tilde{O}(nk)$ that matches our lower bound up to logarithmic factors.
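
As an illustrative corollary of these bounds (not an additional claim from the paper): taking hint size $r = \sqrt{nk}$ with $k \le n$ (so that $r \ge k$), the lower bound $tr = \Omega(nk)$ forces online time $t = \Omega(nk/r) = \Omega(\sqrt{nk})$, and the construction achieving $tr = \tilde{O}(nk)$ matches this product up to logarithmic factors.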
Dimitris Mouris, Charles Gouert, Nektarios Georgios Tsoutsos
ePrint Report
The rapid growth of the globalized integrated circuit (IC) supply chain has drawn the attention of numerous malicious actors that try to exploit it for profit. One of the most prominent targets of such parties is the third-party intellectual property (3PIP) vendors and their circuit designs. With the increasing number of transactions between vendors and system integrators, the threat of IP reuse and piracy has become a significant consideration for the IC industry. What is more, the correctness of 3PIP designs should be verified before integration, imposing another challenge for 3PIP vendors since they have to prove the functionality of their designs to system integrators while protecting the privacy of the circuit implementations. To eliminate this deadlock, we utilize the cryptographic technique of 'zero-knowledge proofs' to enable 3PIP vendors to convince system integrators about various functional properties of a circuit (e.g., area, power, frequency) without disclosing its netlist (i.e., in zero-knowledge). Our approach comprises a circuit compiler that transforms arbitrary netlists into a zero-knowledge-friendly format and a library of modules that provide cryptographic guarantees for various properties of the netlist while hiding the actual gates. We evaluate our method using combinational and sequential circuits from the ISCAS and ITC benchmark suites.
Sameer Wagh
ePrint Report
Machine learning algorithms crucially depend on non-linear mathematical functions such as division (for normalization), exponentiation (for softmax and sigmoid), tanh (as an activation function), logarithm (for cross-entropy loss), and square root (for back-propagation of normalization layers). However, when machine learning is performed over secure computation, these protocols incur a large communication overhead and high round complexity. In this work, we propose new multi-party computation (MPC) protocols for such functions. Our protocols achieve constant round complexity (3 for semi-honest, 4 for malicious), an order of magnitude lower communication (54-121x lower than prior art), and high concrete efficiency (2-1163x faster runtime). We rely on recent advances in function secret sharing (FSS) to construct these protocols. Our contributions can be summarized as follows:

(1) A constant round protocol to securely evaluate non-linear functions such as division, exponentiation, logarithm, and tanh (in comparison to prior art which uses round complexity proportional to the rounds of iterative methods/required precision) with high accuracy. This construction largely follows prior work in look-up style secure computation.

(2) Our main contribution is the extension of the above protocol to be secure in the presence of malicious adversaries in the honest majority setting. We provide a malicious sketching protocol for FSS schemes that works over rings, and in order to prove its security, we extend (and prove) a corresponding form of the Schwartz-Zippel lemma over rings. This is the first such extension of the lemma and it can be of independent interest in other domains of secure computation.

(3) We implement our protocol and showcase order-of-magnitude improvements in runtime and communication. Given the low round complexity and substantially lower communication, our protocols achieve even better performance over network-constrained environments such as WAN. Finally, we showcase how such functions can lead to scalability in machine learning.

Note that the techniques presented are applicable beyond machine learning, as the protocols effectively provide an efficient 1-out-of-N oblivious transfer or an efficient private information retrieval protocol.
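
To illustrate how function secret sharing enables look-up style secure computation (and hence 1-out-of-N OT or PIR), the toy below secret-shares a point function by additively splitting its truth table between two servers. Real FSS, as used in the protocols above, achieves logarithmically sized keys rather than the linear-size keys of this sketch.

```python
# Toy information-theoretic FSS for a point function f_{a,b}(x) = b if x == a else 0.
import random
P = 2**31 - 1

def fss_share_point(a, b, N):
    """Return two keys; k0[x] + k1[x] = b if x == a, else 0 (mod P)."""
    k0 = [random.randrange(P) for _ in range(N)]
    k1 = [(-v) % P for v in k0]
    k1[a] = (k1[a] + b) % P
    return k0, k1

def fss_eval(key, x):
    return key[x]

N, a, b = 16, 5, 42
k0, k1 = fss_share_point(a, b, N)
for x in range(N):
    assert (fss_eval(k0, x) + fss_eval(k1, x)) % P == (b if x == a else 0)
```
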