International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

20 June 2024

Shravani Patil, Arpita Patra
ePrint Report
We study network-agnostic secure multiparty computation (MPC) with perfect security. Traditionally, MPC is studied assuming the underlying network is either synchronous or asynchronous. In the network-agnostic setting, the parties do not know whether the underlying network is synchronous or asynchronous.

The feasibility of perfectly-secure MPC in synchronous and asynchronous networks was settled long ago. The landmark work of [Ben-Or, Goldwasser, and Wigderson, STOC'88] shows that $n > 3t_s$ is necessary and sufficient for any $n$-party MPC protocol over a synchronous network tolerating $t_s$ active corruptions. In yet another foundational work, [Ben-Or, Canetti, and Goldreich, STOC'93] show that the bound for an asynchronous network is $n > 4t_a$, where $t_a$ denotes the number of active corruptions. However, the same question has remained unresolved for the network-agnostic setting. In this work, we resolve this long-standing question.

We show that perfectly-secure network-agnostic $n$-party MPC tolerating $t_s$ active corruptions when the network is synchronous and $t_a$ active corruptions when the network is asynchronous is possible if and only if $n > 2 \max(t_s,t_a) + \max(2t_a,t_s)$.

When $t_a \geq t_s$, our bound reduces to $n > 4t_a$, whose tightness follows from the known feasibility results for asynchronous MPC. When $t_s > t_a$, our result gives rise to a new bound of $n > 2t_s + \max(2t_a,t_s)$. Notably, the previous network-agnostic MPC protocol in this setting [Appan, Chandramouli, and Choudhury, PODC'22] only shows sufficiency of the looser bound $n > 3t_s + t_a$. When $t_s > 2t_a$, our result shows tightness of $n > 3t_s$, whereas the existing work only shows sufficiency for $n > 3t_s + t_a$.
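As a quick sanity check of the bound (an illustrative sketch, not taken from the paper), the following snippet evaluates $n > 2 \max(t_s,t_a) + \max(2t_a,t_s)$ and confirms the two special cases discussed above.

# Illustrative check: evaluate the corruption bound
# n > 2*max(t_s, t_a) + max(2*t_a, t_s) and its special regimes.

def min_parties(t_s: int, t_a: int) -> int:
    """Smallest n satisfying n > 2*max(t_s, t_a) + max(2*t_a, t_s)."""
    return 2 * max(t_s, t_a) + max(2 * t_a, t_s) + 1

# Regime t_a >= t_s: the bound collapses to n > 4*t_a.
assert min_parties(t_s=2, t_a=3) == 4 * 3 + 1

# Regime t_s > 2*t_a: the bound collapses to n > 3*t_s.
assert min_parties(t_s=5, t_a=2) == 3 * 5 + 1

# Intermediate regime 2*t_a >= t_s > t_a: n > 2*t_s + 2*t_a.
assert min_parties(t_s=4, t_a=3) == 2 * 4 + 2 * 3 + 1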
Matilda Backendal, Hannah Davis, Felix Günther, Miro Haller, Kenneth G. Paterson
ePrint Report
Users increasingly store their data in the cloud, thereby benefiting from easy access, sharing, and redundancy. To additionally guarantee security of the outsourced data even against a server compromise, some service providers have started to offer end-to-end encrypted (E2EE) cloud storage. With this cryptographic protection, only legitimate owners can read or modify the data. However, recent attacks on the largest E2EE providers have highlighted the lack of solid foundations for this emerging type of service.

In this paper, we address this shortcoming by initiating the formal study of E2EE cloud storage. We give a formal syntax to capture the core functionality of a cloud storage system, capturing the real-world complexity of such a system's constituent interactive protocols. We then define game-based security notions for confidentiality and integrity of a cloud storage system against a fully malicious server. We treat both selective and fully adaptive client compromises. Our notions are informed by recent attacks on E2EE cloud storage providers. In particular we show that our syntax is rich enough to capture the core functionality of MEGA and that recent attacks on it arise as violations of our security notions. Finally, we present an E2EE cloud storage system that provides all core functionalities and that is both efficient and provably secure with respect to our selective security notions. Along the way, we discuss challenges on the path towards bringing the security of cloud storage up to par with other end-to-end primitives, such as secure messaging and TLS.
Benjamin Ostrovsky
ePrint Report
Given a graph $G(V,E)$ represented as a secret-sharing of an adjacency list, we show how to obliviously convert it into an alternative, MPC-friendly secret-shared representation, the so-called $d$-normalized replicated adjacency list (which we abbreviate to $d$-normalized), where our new data structure is only 4x larger than the original (secret-shared adjacency list) representation of $G$. Yet, this new data structure enables us to execute oblivious graph algorithms that simultaneously improve the round, computation, and communication complexity of the underlying graph algorithms. Our $d$-normalization proceeds in two steps:

First, we show that for any graph $G$, given a secret-shared adjacency list whose vertices are arbitrary alphanumeric strings that fit into a single RAM memory word, we can efficiently and securely rename the vertices to integers from $1$ to $V$ that appear in increasing order in the resulting secret-shared adjacency list. The secure renaming takes $O(\log V)$ rounds and $O((V+E)\log V)$ secure operations. Our algorithm also outputs, in secret-shared form, two arrays: a mapping from integers to alphanumeric names and its sorted inverse. Of course, if the adjacency list is already in such an integer format, this step can be skipped.

Second, given a secret-shared adjacency list for any graph $G$, where vertices are integers from $1$ to $V$ and are sorted, there exists a $d$-normalization algorithm that takes $O(1)$ rounds and $O(V+E)$ secure operations.

We believe that both conversions are of independent interest. We demonstrate the power of our data structures by designing a privacy-preserving Dijkstra single-source shortest-path algorithm that simultaneously achieves $O\left((V+E) \cdot \log V\right)$ secure operations and $O(V \cdot \log V \cdot \log \log\log V)$ rounds. Our secure Dijkstra algorithm works for any adjacency list representation as long as all vertex labels and weights individually fit into a RAM memory word. Our algorithms work for two (or any constant number of) servers in the honest-but-curious setting. The limitation of our result to only a constant number of servers is due to our reliance on a linear-work, constant-round secure shuffle.
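For intuition, here is a plaintext (non-secure) analogue of the first step: renaming arbitrary vertex labels to sorted integer IDs and emitting the forward and inverse maps. This only illustrates the target representation; the paper performs the transformation obliviously over secret shares.

# Plaintext analogue of the renaming step (illustration only; the paper does
# this obliviously over secret-shared data in O(log V) rounds).

def rename_vertices(adj):
    """adj: dict mapping alphanumeric vertex labels to lists of neighbor labels.
    Returns (int_adj, id_to_name, name_to_id) with integer IDs 1..V assigned
    in sorted label order and neighbor lists sorted by the new IDs."""
    names = sorted(adj)                               # labels in increasing order
    name_to_id = {v: i + 1 for i, v in enumerate(names)}
    id_to_name = {i + 1: v for i, v in enumerate(names)}
    int_adj = {name_to_id[v]: sorted(name_to_id[u] for u in adj[v]) for v in names}
    return int_adj, id_to_name, name_to_id

g = {"srv-a": ["db1", "cache"], "db1": ["srv-a"], "cache": ["srv-a"]}
int_g, id_to_name, name_to_id = rename_vertices(g)
# int_g == {1: [3], 2: [3], 3: [1, 2]}  with 1='cache', 2='db1', 3='srv-a'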
Zhenhua Zou, Zhuotao Liu, Jinyong Shan, Qi Li, Ke Xu, Mingwei Xu
ePrint Report
Collaborative graph learning represents a learning paradigm where multiple parties jointly train a graph neural network (GNN) using their own proprietary graph data. To honor the data privacy of all parties, existing solutions for collaborative graph learning are either based on federated learning (FL) or secure machine learning (SML). Although promising in terms of efficiency and scalability due to their distributed training scheme, FL-based approaches fall short in providing provable security guarantees and achieving good model performance. Conversely, SML-based solutions, while offering provable privacy guarantees, are hindered by their high computational and communication overhead, as well as poor scalability as more parties participate.

To address the above problem, we propose CoGNN, a novel framework that simultaneously reaps the benefits of both FL-based and SML-based approaches. At a high level, CoGNN is enabled by (i) a novel message passing mechanism that can obliviously and efficiently express the vertex data propagation/aggregation required in GNN training and inference, and (ii) a two-stage Dispatch-Collect execution scheme to securely decompose and distribute the GNN computation workload for concurrent and scalable execution. We further instantiate the CoGNN framework, together with customized optimizations, to train Graph Convolutional Network (GCN) models. Extensive evaluations on three graph datasets demonstrate that, compared with the state-of-the-art (SOTA) SML-based approach, CoGNN reduces running time by up to $123\times$ and per-party communication cost by up to $522\times$. Meanwhile, the GCN models trained using CoGNN have accuracy nearly identical to plaintext global-graph training, yielding up to $11.06\%$ accuracy improvement over GCN models trained via federated learning.
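For reference, the plaintext computation that such a message passing mechanism must express is the standard GCN layer shown below (a generic illustration of vertex propagation/aggregation, not CoGNN's actual protocol or code).

# Plaintext view of the propagation/aggregation performed by one GCN layer
# (standard GCN formulation; CoGNN evaluates this obliviously across parties).
import numpy as np

def gcn_layer(A, X, W):
    """A: (V,V) adjacency matrix, X: (V,d) vertex features, W: (d,d') weights.
    Returns ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(axis=1)                        # degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W  # aggregate neighbors, transform
    return np.maximum(H, 0.0)                    # ReLU

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(3, 4))
W = np.random.default_rng(1).normal(size=(4, 2))
H = gcn_layer(A, X, W)   # (3, 2) hidden vertex features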
Long Meng, Liqun Chen, Yangguang Tian, Mark Manulis
ePrint Report
Attribute-Based Encryption (ABE) provides fine-grained access control over encrypted data and finds applications in various domains. The practicality of ABE schemes hinges on the balance between security and efficiency. The state-of-the-art ABE scheme proven adaptively secure under standard assumptions (FAME, CCS'17) is less efficient than the fastest one (FABEO, CCS'22), which is only proven secure in the Generic Group Model (GGM). These traditional ABE schemes focus solely on message privacy. To address scenarios where attribute value information is also sensitive, Anonymous ABE (${\rm A}^{\rm 2}$BE) ensures the privacy of both the message and the attributes. However, most ${\rm A}^{\rm 2}$BE schemes suffer from intricate designs with low efficiency, and the security of the fastest key-policy ${\rm A}^{\rm 2}$BE (proposed in FEASE, USENIX'24) relies on the GGM.

In this paper, we propose novel fast key-policy and ciphertext-policy ABE schemes that (1) support both AND and OR gates for access policies, (2) have no restriction on the size and type of policies or attributes, (3) achieve adaptive security under the standard DLIN assumption, and (4) only need 4 pairings for decryption. As our ABE constructions automatically provide ciphertext anonymity, we easily transform our ABE schemes to ${\rm A}^{\rm 2}$BE schemes while maintaining the same features and high-level efficiency.

The implementation results show that all our schemes achieve the best efficiency compared to other schemes with adaptive security proven under standard assumptions. Specifically, our ABE schemes perform better than FAME and are close to FABEO. Our key-policy ${\rm A}^{\rm 2}$BE scheme performs close to the one in FEASE, and our ciphertext-policy ${\rm A}^{\rm 2}$BE outperforms the state-of-the-art (Cui et al., ProvSec'16).
Xinyu Zhang, Ron Steinfeld, Joseph K. Liu, Muhammed F. Esgin, Dongxi Liu, Sushmita Ruj
ePrint Report
Ring signatures are one of the crucial cryptographic primitives used in the design of privacy-preserving systems. Such a signature scheme allows a signer to anonymously sign a message on behalf of a spontaneously formed group. It not only ensures the authenticity of the message but also conceals the true signer within the group. An important extension of ring signatures is linkable ring signatures, which prevent a signer from signing twice without being detected (under some constraints). Linkable ring signatures offer advantages in applications where full anonymity might jeopardise the intended purpose, such as privacy-oriented cryptocurrencies like Monero.

In this work, we introduce a post-quantum ring signature (DualRing-PRF) and a linkable ring signature (DualRingL-PRF) scheme whose security relies solely on symmetric-key primitives (namely, the Legendre PRF and power residue PRFs). Our construction of the ring signature departs from previous approaches with similar security assumptions, offering the most competitive signature sizes for small and medium-sized rings. In particular, for a ring size of 16, DualRing-PRF has a communication overhead 1.4 times smaller than the state-of-the-art scheme proposed by Goel et al. (PETS'21). Furthermore, we demonstrate how to extend DualRing-PRF to incorporate linkability and non-slanderability. Compared to the existing one-time traceable ring signature (a variant of linkable ring signatures) by Scafuro and Zhang (ESORICS'21), our construction supports many-time signing and achieves significantly smaller signature sizes when the ring size exceeds 16. This advantage becomes more pronounced as the ring size increases.
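As background, the Legendre PRF mentioned above derives one output bit per input from a quadratic-residuosity test; a minimal sketch follows (toy prime and key chosen for illustration, not the parameters used in the paper).

# Minimal sketch of the Legendre PRF F_k(x): one bit from the Legendre symbol
# of (k + x) mod p. Toy prime; real instantiations fix a suitable large prime.

p = 2**61 - 1  # a Mersenne prime, used here purely as an example modulus

def legendre_prf_bit(k: int, x: int) -> int:
    s = pow((k + x) % p, (p - 1) // 2, p)   # Euler's criterion: 0, 1, or p-1
    return 0 if s <= 1 else 1               # map {0, 1} -> 0 and p-1 -> 1

key = 123456789
stream = [legendre_prf_bit(key, x) for x in range(8)]  # 8 keystream bits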
Aneesh Kandi, Anubhab Baksi, Peizhou Gan, Sylvain Guilley, Tomáš Gerlich, Jakub Breier, Anupam Chattopadhyay, Ritu Ranjan Shrivastwa, Zdeněk Martinásek, Shivam Bhasin
ePrint Report
In this work, we present various hardware implementations of the lightweight cipher ASCON, which was recently selected as the winner of the NIST-organized Lightweight Cryptography (LWC) competition. We cover encryption + tag generation and decryption + tag verification for the ASCON hash function and ASCON AEAD. On top of the usual (unprotected) implementation, we present side-channel protection (a threshold countermeasure) and triplication/majority-based fault protection. To the best of our knowledge, this is the first protected hardware implementation of ASCON with respect to both side-channel and fault injection protection. The side-channel and fault protections are orthogonal to each other (i.e., either one can be turned on/off without affecting the other). We present ASIC and FPGA benchmarks for all our implementations (hash and AEAD), with and without countermeasures, for varying input sizes.
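To illustrate the triplication/majority idea in software terms (a generic sketch, not the paper's hardware design): the computation runs on three redundant copies and the results are combined by a bitwise 2-out-of-3 vote, so a fault injected into a single copy is out-voted.

# Software analogue of triplication with majority voting (the hardware
# countermeasure instantiates three copies of the datapath and votes per bit).

def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-out-of-3 majority of three redundant results."""
    return (a & b) | (a & c) | (b & c)

def protected_eval(f, x):
    """Run f three times and vote; a single faulted copy is out-voted."""
    return majority_vote(f(x), f(x), f(x))

def round_fn(x):                       # stand-in for a cipher round function
    return (x ^ (x >> 3)) & 0xFFFF

assert protected_eval(round_fn, 0xBEEF) == round_fn(0xBEEF)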
Shams Tarek, Dipayan Saha, Sujan Kumar Saha, Mark Tehranipoor, Farimah Farahmandi
ePrint Report
Contemporary methods for hardware security verification struggle with adaptability, scalability, and availability due to the increasing complexity of modern system-on-chips (SoCs). Large language models (LLMs) have emerged as a viable approach to address these shortcomings in security verification because of their natural language understanding, advanced reasoning, and knowledge transfer capabilities. However, their application to large designs is limited by inherent token limitations and memorization constraints. In this paper, we introduce SoCureLLM, an LLM-based framework that excels at identifying security vulnerabilities within SoC designs and creating a comprehensive security policy database. Our framework is adaptable and adept at processing varied, large-scale designs, overcoming the abovementioned limitations of LLMs. In evaluations, SoCureLLM detected 76.47% of security bugs across three vulnerable RISC-V SoCs, outperforming state-of-the-art security verification methods. Furthermore, assessing three additional large-scale RISC-V SoC designs against various threat models led to the formulation of 84 novel security policies, enriching the security policy database. Previously requiring extensive manual effort to craft, these newly generated security policies can be used as guidelines for developing secure SoC designs.
Daniel Benarroch, Bryan Gillespie, Ying Tong Lai, Andrew Miller
ePrint Report
This Systematization of Knowledge surveys contemporary distributed blockchain protocols, with the aim of identifying cryptographic and design techniques which practically enable both expressive programmability and user data confidentiality. To facilitate a framing which supports the comparison of concretely very different protocols, we define an epoch-based computational model in the form of a flexible UC-style ideal functionality which divides the operation of privacy-preserving networks into three phases: Independent, Mediated, and Global computation. Our analysis of protocols focuses in particular on features of the Mediated computation phase, which provides the facility to execute non-trivial program logic on private inputs from multiple users. Specifically, we compare implementations in different protocols of private limit order auctions, which we find to be a representative application that is common and relatively simple, yet exhibits adversarial dynamics that demonstrate the capabilities of a non-trivial Mediated computation mechanism. In our analysis, we identify four protocols representative of different high-level approaches used to implement Mediated computations. We compare protocols according to the degree and flexibility of programmability, the privacy properties achieved, and the security assumptions required for correct operation. We conclude by offering recommendations and best practices for future programmable privacy designs.
Kyeongtae Lee, Donghwan Oh, Hankyung Ko, Jihye Kim, Hyunok Oh
ePrint Report
This paper introduces transparent and efficient arguments for Hadamard products between committed vectors from two source groups. For vectors of length $n$, the proofs consist of $\mathcal{O}(\log n)$ target group elements and $\mathcal{O}(1)$ additional elements. The verifier's workload is dominated by $\mathcal{O}(\log n)$ multi-exponentiations in the target group and $\mathcal{O}(1)$ pairings. We prove security under the standard SXDH assumption. Additionally, we propose an aggregator for Groth16 pairing-based zk-SNARKs and a proof aggregation technique for the general case of the KZG pairing-based polynomial commitment scheme using our Hadamard product arguments. Both applications support logarithmic-sized aggregated proofs without requiring an additional trusted setup, significantly reducing the verifier's pairing operations to $\mathcal{O}(1)$.
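For context, the relation being proven is simply the entry-wise (Hadamard) product of two vectors; the plaintext check below (no commitments or pairings, purely illustrative) is what the argument establishes for committed vectors.

# The relation proved by the argument, checked in the clear (no commitments or
# pairings here): c is the Hadamard (entry-wise) product of a and b mod q.

q = 2**61 - 1  # toy modulus standing in for the group order

def is_hadamard_product(a, b, c):
    return len(a) == len(b) == len(c) and all(
        (ai * bi - ci) % q == 0 for ai, bi, ci in zip(a, b, c))

a = [3, 5, 7, 11]
b = [2, 4, 6, 8]
c = [(x * y) % q for x, y in zip(a, b)]
assert is_hadamard_product(a, b, c)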
Mohammad Hashemi, Dev Mehta, Kyle Mitard, Shahin Tajik, Fatemeh Ganji
ePrint Report
The success of deep learning across a variety of applications, including inference on edge devices, has led to increased concerns about the privacy of users' data and of deep learning models. Secure multiparty computation allows parties to remedy this concern, resulting in a growing number of such proposals and improvements in their efficiency. The majority of secure inference protocols relying on multiparty computation assume that the client does not deviate from the protocol and only passively attempts to extract information. Yet clients, driven by different incentives, can act maliciously to actively deviate from the protocol and disclose the deep learning model owner's private information. Interestingly, while faults are well understood in the multiparty computation literature, fault attacks have not been explored. Our paper introduces the very first fault attack against secure inference implementations relying on garbled circuits, a prime example of multiparty computation schemes. In this regard, laser fault injection coupled with a model-extraction attack is successfully mounted against existing solutions that had been assumed secure against active attacks. Notably, the number of queries required for the attack is equal to that of the best model extraction attack mounted against secure inference engines in the semi-honest scenario.

18 June 2024

University of Isfahan, Department of Computer Engineering, Isfahan, Iran
Job Posting
We are seeking an exceptional candidate for a 1-year postdoctoral position in Post-quantum Key Exchange Protocols, offering a competitive salary at least equal to that of a newly appointed Associate Professor under the Ministry of Science, Research and Technology in Iran. The University of Isfahan (UI) is one of the top (A-ranked) universities in Iran. It is one of the pioneering universities in Iran, having admitted MSc and PhD students in the cybersecurity and secure computation discipline for the last two decades. If you are interested in the position, please send an email including your CV and transcripts to h.mala@eng.ui.ac.ir. As the position should be filled as soon as possible, your application will be considered promptly.

Closing date for applications:

Contact: Dr. Hamid Mala (h.mala@eng.ui.ac.ir)

More information: https://rao.ui.ac.ir/page-ResearchOffic/fa/59/form/pId27273

LIRMM, Montpellier, France
Job Posting
We have a full-time, two-year postdoc position available. The recruited person will have the opportunity to do research in cryptography. More precisely, we are looking for someone interested in cryptography based on lattice problems and/or class group problems. Moreover, we would like to further investigate threshold solutions for both classes of problems.

The recruited person will be part of the very welcoming ECO research group. Given the topic, the postdoc will most likely be collaborating with Katharina Boudgoust and Fabien Laguillaumie, but other interesting research fields are also represented in the team: side-channel analysis, coding theory, computer algebra, and information-theoretic cryptography.

We are preferably looking for a candidate with experience in designing public-key primitives. Prior knowledge of lattice-based or class-group-based cryptography is not mandatory; the postdoc could be your opportunity to learn about these exciting topics! It goes without saying (but we still write it, to be sure) that we welcome everyone, independent of their gender, sexual orientation, race, ethnicity, national origin, health status, or disability. We promote a friendly, safe, and supportive work environment.

The deadline for applying is August 15th 2024. The starting date is flexible between October 2024 and January 2025.

Please send an e-mail to Katharina and Fabien, including your CV, at least one paragraph on why you want to work with us, and the name of one person whom we can contact for an informal recommendation.

Closing date for applications:

Contact: Katharina Boudgoust (katharina.boudgoust@lirmm.fr) Fabien Laguillaumie (fabien.laguillaumie@lirmm.fr)

More information: https://katinkabou.github.io/Documents/202406_postdoc.pdf

Alex Ozdemir, Evan Laufer, Dan Boneh
ePrint Report
In verifiable outsourcing, an untrusted server runs an expensive computation and produces a succinct proof (called a SNARK) of the results. In many scenarios, the computation accesses a RAM that the server maintains a commitment to (persistent RAM) or that is initially zero (volatile RAM). But SNARKs for such scenarios are limited by the high overheads associated with existing techniques for RAM checking. We develop new proofs about volatile, persistent, and sparse persistent RAM that reduce SNARK proving times. Our results include both asymptotic and concrete improvements, including a proving time reduction of up to 51.3$\times$ for persistent RAM. Along the way, we apply two tools that may be of independent interest. First, we generalize an existing construction to convert any algebraic interactive proof (AIP) into a SNARK. An AIP is a public-coin, non-succinct, interactive proof whose verifier is an arithmetic circuit. Second, we apply Bézout's identity for polynomials to construct new AIPs for uniqueness and disjointness. These are useful for showing the independence of accesses to different addresses.
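To give intuition for the last point (an illustrative sketch, not the paper's construction): if two address sets are encoded as the root sets of polynomials $f$ and $g$, disjointness means $\gcd(f,g) = 1$, and Bézout's identity then yields witnesses $s, t$ with $s \cdot f + t \cdot g = 1$, a relation a verifier can check by evaluation.

# Sketch of the Bezout-identity idea for disjointness (illustration only):
# encode two address sets as root sets of polynomials f and g; if the sets are
# disjoint, gcd(f, g) = 1 and there exist s, t with s*f + t*g = 1.
from sympy import symbols, gcdex, expand, Mul

x = symbols('x')
addrs_f = [1, 2, 3]          # addresses touched by the first access pattern
addrs_g = [5, 7]             # addresses touched by the second access pattern

f = expand(Mul(*[x - a for a in addrs_f]))
g = expand(Mul(*[x - a for a in addrs_g]))

s, t, h = gcdex(f, g, x)     # extended Euclid: s*f + t*g = h = gcd(f, g)
assert h == 1                # disjoint root sets => gcd is 1
assert expand(s * f + t * g) == 1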
Elkana Tovey, Jonathan Weiss, Yossi Gilad
ePrint Report
This paper presents a new architecture for metadata-private messaging that counters scalability challenges by offloading most computations to the clients. At the core of our design is a distributed private information retrieval (PIR) protocol, where the responder delegates its work to alleviate PIR's computational bottleneck and catches misbehaving delegates by efficiently verifying their results. We introduce DPIR, a messaging system that uses distributed PIR to let a server storing messages delegate the work to the system's clients, such that each client contributes proportional processing to the number of messages it reads. The server removes clients returning invalid results, which DPIR leverages to integrate an incentive mechanism for honest client behavior by conditioning messaging through DPIR on correctly processing PIR requests from other users. The result is a metadata-private messaging system that asymptotically improves scalability over prior work with the same threat model. We show through experiments on a prototype implementation that DPIR concretely improves performance by $3.25\times$ and $4.31\times$ over prior work and that the performance gap grows with the user base size.
Augustin Bariant, Orr Dunkelman, Nathan Keller, Gaëtan Leurent, Victor Mollimard
ePrint Report
The boomerang attack is a cryptanalytic technique which allows combining two short high-probability differentials into a distinguisher for a large number of rounds. Since its introduction by Wagner in 1999, it has been applied to many ciphers. One of the best-studied targets is a 6-round variant of AES, on which the boomerang attack is outperformed only by the dedicated Square attack. Recently, two new variants of the boomerang attack were presented: retracing boomerang (Eurocrypt'20) and truncated boomerang (Eurocrypt'23). These variants seem incompatible: the former achieves lower memory complexity by throwing away most of the data in order to force dependencies, while the latter achieves lower time complexity by using large structures, which inevitably leads to a large memory complexity.

In this paper we show that elements of the two techniques can be combined to get 'the best of both worlds': the practical memory complexity of the retracing attack and the lower time complexity of the truncated attack. We obtain an attack with data complexity of $2^{57}$ (compared to $2^{59}$ and $2^{55}$ for the truncated and retracing boomerang, respectively), memory complexity of $2^{33}$ (compared to $2^{59}$ and $2^{31}$), and time complexity of $2^{61}$ (compared to $2^{61}$ and $2^{80}$). This is the second-best attack on 6-round AES, after the Square attack.
Yuval Ishai, Elaine Shi, Daniel Wichs
ePrint Report
It is well known that classical Private Information Retrieval (PIR) schemes without preprocessing must suffer from linear server computation per query. Moreover, any such single-server PIR with sublinear bandwidth must rely on public-key cryptography. Several recent works showed that these barriers pertaining to classical PIR can be overcome by introducing a preprocessing phase where each client downloads a small hint that helps it make subsequent queries. Notably, the Piano PIR scheme (and subsequent improvements) showed that with such client-side preprocessing, not only can we have PIR with sublinear computation and bandwidth per query, but somewhat surprisingly, we can also get it using only symmetric-key cryptography (i.e., one-way functions).

In this paper, we take the question of minimizing cryptographic assumptions to an extreme. In particular, we are the first to explore the landscape of information-theoretic single-server preprocessing PIR. We make contributions on both the upper- and lower-bound fronts. First, we show new information-theoretic constructions with various non-trivial performance tradeoffs between server computation, client space, and bandwidth. Second, we prove a (nearly) tight lower bound on the tradeoff between client space and bandwidth in information-theoretic constructions. Finally, we prove that any computational scheme that overcomes the information-theoretic lower bound and satisfies a natural syntactic requirement (which is met by all known constructions) would imply a hard problem in the complexity class SZK. In particular, this shows that Piano achieves a (nearly) optimal client space and bandwidth tradeoff subject to making black-box use of a one-way function. Some of the techniques we use for the above results may be of independent interest.
Wonseok Choi, Seongha Hwang, Byeonghak Lee, Jooyoung Lee
ePrint Report
Online authenticated encryption has been considered of practical relevance in lightweight environments due to its low latency and constant memory usage. In this paper, we propose a new tweakable block cipher-based online authenticated encryption scheme, dubbed ZLR, and its domain separation variant, dubbed DS-ZLR. ZLR and DS-ZLR follow the Encrypt-Mix-Encrypt paradigm. However, in contrast to existing schemes using the same paradigm, such as ELmE and CoLM, ZLR and DS-ZLR enjoy n-bit security by using larger internal states with an efficient ZHash-like hashing algorithm. In this way, 2n-bit blocks are processed with only a single primitive call for hashing and two primitive calls for encryption and decryption, when they are based on an n-bit tweakable block cipher using n-bit (resp. 2n-bit) tweaks for ZLR (resp. DS-ZLR). Furthermore, they support pipelined computation as well as online nonce-misuse resistance. To the best of our knowledge, ZLR and DS-ZLR are the first pipelineable tweakable block cipher-based online authenticated encryption schemes of rate 2/3 that provide n-bit security with online nonce-misuse resistance.

17 June 2024

Daniel Collins, Sisi Duan, Julian Loss, Charalampos Papamanthou, Giorgos Tsimos, Haochen Wang
ePrint Report
The parallel broadcast (PBC) problem generalises the classic Byzantine broadcast problem to the setting where all $n$ nodes broadcast a message and deliver $O(n)$ messages. PBC arises naturally in many settings including multi-party computation. Recently, Tsimos, Loss, and Papamanthou (CRYPTO 2022) showed PBC protocols with improved communication, against an adaptive adversary who can corrupt all but a constant fraction $\epsilon$ of nodes (i.e., $f < (1 - \epsilon)n$). However, their study is limited to single-bit messages, and their protocols have large polynomial overhead in the security parameter $\kappa$: their TrustedPBC protocol achieves $\tilde{O}(n^2 \kappa^4)$ communication and $O(\kappa\log n)$ rounds. Since these factors of $\kappa$ are in practice often close (or at least polynomially related) to $n$, they add a significant overhead. In this work, we propose three parallel broadcast protocols for $L$-bit messages, for any size $L$, that significantly improve the communication efficiency of the state-of-the-art.

We first propose a new extension protocol that uses a $\kappa$-bit PBC as a black box and achieves i) communication complexity of $O(L n^2 + \mathcal{P}(\kappa))$, where $\mathcal{P}(\kappa)$ is the communication complexity of the $\kappa$-bit PBC, and ii) the same round complexity as the $\kappa$-bit PBC. By comparison, the state-of-the-art extension protocol for regular broadcast (Nayak et al., DISC 2020) incurs $O(n)$ additional rounds of communication. Next, we propose a protocol that is secure against a static adversary, for $\kappa$-bit messages, with $O(n^2 \kappa^{1+K} + n\kappa^3 + \kappa^4)$ communication and $O(\kappa)$ round complexity, where $K > 0$ is an arbitrarily small constant.
Karthik Inbasekar, Yuval Shekel, Michael Asa
ePrint Report
Polynomials play a central role in cryptography. In the context of Zero Knowledge Proofs (ZKPs), protocols can be exclusively expressed using polynomials, making them a powerful abstraction tool, as demonstrated in most ZK research papers. Our first contribution is a high-level framework that enables practitioners to implement ZKPs in a more natural way, based solely on polynomial primitives.

ZK provers are computationally intensive algorithms with a high degree of parallelism. These algorithms benefit significantly from hardware acceleration, and deployed ZK systems typically include specialized hardware to optimize the performance of the prover code. Our second contribution is leveraging our polynomial API to abstract away low-level hardware primitives and automate their memory management. This device-agnostic design allows ZK engineers to prototype and build solutions while taking advantage of the performance gains offered by specialized hardware, such as GPUs and FPGAs, without needing to understand the hardware implementation details.

Finally, our polynomial API is integrated into version 2 of the ICICLE library and is running in production. This paper also serves as comprehensive documentation for the ICICLE v2 polynomial API.
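As a rough illustration of what a device-agnostic polynomial abstraction can look like (the class and method names below are invented for this sketch and are not the ICICLE v2 API; consult the library documentation for the real interface), the following toy wrapper keeps coefficients behind a small object whose arithmetic could be dispatched to a CPU or accelerator backend.

# Hypothetical, backend-agnostic polynomial wrapper (illustration only; NOT the
# ICICLE v2 API). Toy integer coefficients stand in for field elements; a real
# library would run these operations on a device backend and manage its memory.
import numpy as np

class Polynomial:
    def __init__(self, coeffs, backend="cpu"):
        self.coeffs = np.asarray(coeffs, dtype=np.int64)  # coefficient form
        self.backend = backend                            # where work would run

    def __add__(self, other):
        n = max(len(self.coeffs), len(other.coeffs))
        a = np.pad(self.coeffs, (0, n - len(self.coeffs)))
        b = np.pad(other.coeffs, (0, n - len(other.coeffs)))
        return Polynomial(a + b, self.backend)

    def __mul__(self, other):
        # schoolbook product via convolution; a device backend would use NTTs
        return Polynomial(np.convolve(self.coeffs, other.coeffs), self.backend)

    def evaluate(self, x):
        return sum(int(c) * x**i for i, c in enumerate(self.coeffs))

p = Polynomial([1, 2, 3])        # 1 + 2x + 3x^2
q = Polynomial([0, 1])           # x
r = p * q + p
assert r.evaluate(2) == p.evaluate(2) * 2 + p.evaluate(2)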