IACR News item: 20 February 2023
Neha Jawalkar, Kanav Gupta, Arkaprava Basu, Nishanth Chandran, Divya Gupta, Rahul Sharma
Secure Two-party Computation (2PC) allows two parties to compute any function on their private inputs without revealing those inputs to each other. Since 2PC is known to have notoriously high overheads, one of the most popular computation models is 2PC with a trusted dealer, who provides correlated randomness (independent of any input) to both parties during a preprocessing phase. Recent works construct efficient 2PC protocols in this model based on the cryptographic technique of function secret sharing (FSS).
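To make the dealer model concrete, here is a minimal Python sketch of FSS with a trusted dealer: the dealer splits a function f into two keys whose local evaluations are individually uniformly random but sum to f(x). This toy construction shares the full truth table, so its keys grow with the domain size; the compressed DPF/DCF-style schemes used in practice achieve keys of roughly logarithmic size. The ring $\mathbb{Z}_{2^{32}}$, the 4-bit domain, and all function names below are illustrative assumptions, not the paper's construction.

```python
import secrets

MOD = 2**32  # ring Z_{2^32}; a common choice in FSS-based 2PC (assumption)

def dealer_share_function(f, domain_size):
    """Trusted dealer: additively share the truth table of f over Z_MOD.
    The two keys are correlated randomness, independent of any party's input.
    Real FSS schemes (DPF/DCF) compress these keys to ~log(domain) size;
    this toy version stores the full table and is only functionally correct."""
    table = [f(x) % MOD for x in range(domain_size)]
    key0 = [secrets.randbelow(MOD) for _ in range(domain_size)]
    key1 = [(t - r) % MOD for t, r in zip(table, key0)]
    return key0, key1

def eval_share(key, x):
    # Each party evaluates locally; a single share is a uniformly random
    # ring element and reveals nothing about f(x) on its own.
    return key[x]

# Example: a comparison-style gate f(x) = 1 iff x >= 8 on a 4-bit domain.
f = lambda x: 1 if x >= 8 else 0
key0, key1 = dealer_share_function(f, 16)
x = 11  # public (e.g., masked) evaluation point
assert (eval_share(key0, x) + eval_share(key1, x)) % MOD == f(x)
```

Note that the keys are generated entirely in a preprocessing phase, before any input is known; this is exactly the correlated randomness, and hence the storage cost, that the abstract refers to.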
We build an end-to-end system, Orca, to accelerate the computation of FSS-based 2PC protocols with GPUs. Next, we observe that the main performance bottleneck in such accelerated protocols is storage (due to the large amount of correlated randomness), and we design new FSS-based 2PC cryptographic protocols for several key functionalities in ML that reduce storage by up to $5\times$. Compared to the prior state of the art on secure training accelerated with GPUs in the same computation model (Piranha, USENIX Security 2022), Orca achieves $4\%$ higher accuracy, $123\times$ less communication, and a $19\times$ speedup on CIFAR-10.