IACR News
Here you can see all recent updates to the IACR webpage.
04 October 2024
Matteo Campanelli, Mathias Hall-Andersen
ePrint Report
Succinct arguments of knowledge allow an untrusted prover to establish that they know a witness for an NP relation. Many recent efficient constructions of such schemes work over arithmetic computations expressed in finite fields.
Several common settings, however, have an extremely simple representation when expressed over the integers (e.g., RSA signatures/accumulators, range checks for committed values, computations over rational numbers). Efficient arguments of knowledge working natively over $\mathbb{Z}$ could be applied to such computations without the overhead from emulating integer arithmetic over a finite field.
We propose the first native construction of SNARKs over the integers that is fully succinct, thus resolving an open problem from Towa and Vergnaud (Asiacrypt 2020). By fully succinct, we mean that \textit{both} the proof size and the verifier's running time should be sublinear in both $|\vec w|$—the size of the witness as a vector of integers—and $\log_2 \lVert \vec w \rVert_\infty$—the size in bits of the largest integer in the witness vector (in absolute value).
As a stepping stone for our results we provide a general theoretical framework for building succinct arguments over the integers.
Its most attractive feature is that it allows existing SNARK constructions to be reused in a modular way, and it can serve as a starting point for constructions that follow up on our work. We build these systematic foundations by leveraging a common technique in theoretical computer science—fingerprinting—and applying it to a new setting. Our framework consists of two main ingredients: idealized protocols and polynomial commitments such that an object ``committed over the integers'' can nonetheless be ``queried modulo $q$'', for a randomly sampled prime $q$.
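To illustrate the fingerprinting idea in isolation (a minimal sketch of the general technique, not the paper's commitment scheme), the following Python snippet tests equality of integer vectors by reducing them modulo a random prime: a false positive would require the random prime to divide a fixed nonzero integer, which happens with negligible probability for large enough primes.

    # Minimal sketch of integer fingerprinting (illustrative only).
    from sympy import randprime

    def fingerprint_equal(vec_a, vec_b, bits=64):
        q = randprime(2**(bits - 1), 2**bits)  # randomly sampled prime modulus
        return [c % q for c in vec_a] == [c % q for c in vec_b]

    a = [10**40 + 7, -3, 12345678901234567890]
    assert fingerprint_equal(a, list(a))        # equal vectors agree mod any q
    # unequal vectors disagree, except with negligible probability over q
    assert not fingerprint_equal(a, [0, -3, a[2]])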
We obtain our final construction, $\mathbb{Z}$aratan, by lifting the $\mathsf{Spartan}$ construction (Setty, CRYPTO 2020) to the integers and applying a form of polynomial commitment based on the techniques from DARK (Bünz et al., Eurocrypt 2020). $\mathbb{Z}$aratan has a transparent setup, is proven secure in the generic group model for groups of unknown order and can be heuristically made non-interactive in the ROM via the Fiat-Shamir transform.
Cezary Pilaszewicz, Marian Margraf
ePrint Report
We use the HHL algorithm to retrieve a quantum state holding the algebraic normal form (ANF) of a Boolean function. Unlike standard HHL applications, we do not describe the cipher as an exponentially big system of equations. Rather, we perform a set of small matrix inversions which together correspond to the Boolean Möbius transform. This creates a superposition holding information about the ANF in the form: $\ket{\mathcal{A}_{f}} =\frac{1}{C} \sum_{I=0}^{2^n-1} c_I \ket{I}$, where $c_I$ is a coefficient of the ANF and $C$ is a scaling factor. The procedure has a time complexity of $\mathcal{O}(n)$ for a Boolean function with an $n$-bit input. We also propose two approaches for extracting some information about the ANF from such a state.
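For reference, the classical Boolean Möbius transform that the quantum procedure realizes in superposition can be sketched as follows (our illustration; on a full truth table it runs in time $\mathcal{O}(n2^n)$, the cost the quantum approach avoids):

    # Classical Boolean Moebius transform: truth table -> ANF coefficients.
    # truth_table[x] = f(x) for x in 0..2^n - 1; the output c satisfies
    # f(x) = XOR of c[I] over all monomials I that are submasks of x.
    def moebius_transform(truth_table):
        c = list(truth_table)
        n = len(c).bit_length() - 1
        for i in range(n):                    # one butterfly pass per variable
            for x in range(len(c)):
                if x & (1 << i):
                    c[x] ^= c[x ^ (1 << i)]
        return c

    # f(x1, x0) = x0 AND x1 has ANF x0*x1: only the coefficient at 0b11 is 1.
    assert moebius_transform([0, 0, 0, 1]) == [0, 0, 0, 1]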
John Gaspoz, Siemen Dhooghe
ePrint Report
Masking is a sound countermeasure to protect against differential power analysis. Since the work by Balasch et al. in ASIACRYPT 2012, inner product masking has been explored as an alternative to the well-known Boolean masking. In CARDIS 2017, Poussier et al. showed that inner product masking achieves a higher security order than Boolean masking, for the same share size, in the bit-probing model. Wang et al. in TCHES 2020 verified inner product masking's security order amplification in practice and proposed new gadgets for inner product masking. Finally, Wu et al. in TCHES 2022 showed that this security amplification comes from the bit-probing model, but that Wang et al.'s gadgets are not higher-order bit-probing secure, reducing the computation's practical security. The authors concluded their work with the open question of providing an inner product multiplication gadget which maintains the masking's bit-probing security, and conjectured that such a gadget maintains the practical security order amplification of the masking during its computation.
In this paper, we answer Wu et al.'s open problems in the positive. We are the first to present a multiplication gadget for inner product masking which is proven secure in the bit-level probing model using the t-Strong Non-Interference (SNI) property. Moreover, we provide practical evidence that the gadget indeed maintains the security amplification of its masking. This is done via an evaluation of an assembly implementation of the gadget on an ARM Cortex-M4 core. We used this implementation to take leakage measurements and show that no leakage occurs at orders below the gadget's bit-probing security level, in both univariate and multivariate analyses.
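As background on the encoding itself (a toy sketch over a prime field; the gadgets above work over binary fields, and all names here are ours):

    # Toy inner product masking: the secret is the inner product <L, x>
    # of a public vector L with the share vector x.
    import secrets

    P = 2**61 - 1                        # prime modulus for the toy field
    L = [1, 19088743, 144433502]         # public vector, L[0] = 1 by convention

    def ip_share(secret):
        shares = [secrets.randbelow(P) for _ in L]
        rest = sum(l * x for l, x in zip(L[1:], shares[1:])) % P
        shares[0] = (secret - rest) % P  # fix x[0] so that <L, x> = secret
        return shares

    def ip_unshare(shares):
        return sum(l * x for l, x in zip(L, shares)) % P

    assert ip_unshare(ip_share(42)) == 42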
Daniele Micciancio
ePrint Report
The traditional definition of fully homomorphic encryption (FHE) is not composable, i.e., it does not guarantee that evaluating two (or more) homomorphic computations in a sequence produces correct results. We formally define and investigate a stronger notion of homomorphic encryption which we call "fully composable homomorphic encryption", or "composable FHE". The definition is both simple and powerful: it does not directly involve the evaluation of multiple functions, and yet it supports the arbitrary composition of homomorphic evaluations. On the technical side, we compare the new definition with other definitions proposed in the past, proving both implications and separations, and show how the "bootstrapping" technique of (Gentry, STOC 2009) can be formalized as a method to transform a (non-composable, circular secure) homomorphic encryption scheme into a fully composable one. We use this formalization of bootstrapping to formulate a number of conjectures and open problems.
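To see why composition is delicate, consider a toy noisy scheme of our own devising (a one-dimensional LWE-style toy, not a real FHE scheme): each homomorphic evaluation grows the noise, and once the accumulated noise crosses a threshold, decryption fails, which is exactly what bootstrapping is meant to reset.

    import secrets

    Q, T, NOISE = 2**20, 2, 4       # modulus, plaintext space, fresh-noise bound
    s = secrets.randbelow(Q)        # secret key

    def enc(m):
        a = secrets.randbelow(Q)
        e = secrets.randbelow(2 * NOISE + 1) - NOISE
        return (a, (a * s + T * e + m) % Q)

    def dec(ct):
        a, b = ct
        v = (b - a * s) % Q
        if v > Q // 2:              # center the representative
            v -= Q
        return v % T                # correct only while |noise| < Q / (2T)

    def add(c1, c2):                # homomorphic XOR; the noise terms add up
        return ((c1[0] + c2[0]) % Q, (c1[1] + c2[1]) % Q)

    assert dec(add(enc(1), enc(1))) == 0
    # Chaining many add() calls keeps growing the noise term; without
    # bootstrapping, a long enough sequence of evaluations breaks dec().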
Zonglun Li, Shuhao Zheng, Junliang Luo, Ziyue Xin, Dun Yuan, Shang Gao, Sichao Yang, Bin Xiao, Xue Liu
ePrint Report
Decentralized storage networks, including IPFS and Filecoin, have created a marketplace where individuals exchange storage space for profit. These networks employ protocols that reliably ensure data storage providers accurately store data without alterations, safeguarding the interests of storage purchasers. However, these protocols lack an effective and equitable payment mechanism for data retrieval, particularly when multiple data queriers are involved. This necessitates a protocol that ensures both data integrity and fair compensation for data providers.
In decentralized storage, data is fragmented into small blocks and stored across multiple nodes, a process known as data swarming. Due to this property, traditional data exchange protocols are inadequate in terms of communication and economic efficiency.
We propose the Proof of Unified Data Retrieval protocol (PoUDR). PoUDR incorporates ZK-SNARK to facilitate a fair data exchange protocol. PoUDR reduces the number of blockchain transactions for both single block and data swarming retrieval. The protocol requires only a single key-revealing transaction submitted by the provider to the blockchain for each data block. This architecture allows for further optimization of transaction numbers through a batched proof technique on the provider's side. This approach necessitates only $N_P$ transactions within a specific time frame when data consisting of $N_D$ blocks, provided by $N_P$ providers, is queried by $N_Q$ queriers.
This work provides a comprehensive definition for Secure Swarming Data Exchange (SSDE), including security assumptions. It also offers a detailed game-based security analysis of the PoUDR protocol. Moreover, the PoUDR protocol has been fully integrated into the Bitswap protocol (IPFS). Within this integration, our proposed Relaxed Groth16 algorithm addresses the significant technical challenge of generating zero-knowledge proofs, leading to substantial cost reductions for the overall feasibility of secure data retrieval in decentralized storage networks.
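The per-block key reveal can be pictured with the following toy flow (our simplification; PoUDR's ZK-SNARK proofs and batching are omitted):

    import hashlib, secrets

    def keystream(key):             # toy cipher for blocks of <= 32 bytes
        return hashlib.sha256(b"enc|" + key).digest()

    def encrypt(key, block):
        return bytes(x ^ s for x, s in zip(block, keystream(key)))

    # Off-chain: the provider serves the ciphertext and a key commitment.
    key = secrets.token_bytes(32)
    block = b"a data block of <= 32 bytes"
    ciphertext = encrypt(key, block)
    commitment = hashlib.sha256(b"com|" + key).hexdigest()

    # On-chain: one key-revealing transaction per data block. Anyone can
    # check the revealed key against the commitment; the querier decrypts.
    assert hashlib.sha256(b"com|" + key).hexdigest() == commitment
    assert encrypt(key, ciphertext) == block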
Ali Şah Özcan, Erkay Savaş
ePrint Report
HEonGPU is a high-performance library designed to optimize Fully Homomorphic Encryption (FHE) operations on Graphics Processing Units (GPUs). By leveraging the parallel processing capacity of GPUs, HEonGPU significantly reduces the computational overhead typically associated with FHE by executing complex operations concurrently. This allows for faster execution of homomorphic computations on encrypted data, enabling real-time applications in privacy-preserving machine learning and secure data processing. A key advantage of HEonGPU lies in its multi-stream architecture, which not only allows parallel processing of tasks to improve throughput but also eliminates the overhead of data transfers between the host device (i.e., CPU) and the GPU. By efficiently managing data within the GPU using multi-streams, HEonGPU minimizes the need for repeated memory transfers, further enhancing performance. HEonGPU's GPU-optimized design makes it ideal for large-scale encrypted computations, providing users with reduced latency and higher performance across various FHE schemes.
Viet Tung Hoang, Sanketh Menda
ePrint Report
There has been recent interest in developing and standardizing Robust Authenticated Encryption (Robust AE) schemes. NIST, for example, is considering an Accordion mode (a wideblock tweakable blockcipher), with Robust AE as a primary application. On the other hand, recent attacks and applications suggest that encryption needs to be committing. Indeed, committing security is also a design consideration in the Accordion mode. Yet it is unclear how to build a Robust AE scheme with committing security.
In this work, we give a modular solution to this problem. We first show how to transform any wideblock tweakable blockcipher TE into a Robust AE scheme SE that commits just the key. The overhead is cheap: just a few finite-field multiplications and blockcipher calls. If one wants to commit the entire encryption context, one can simply hash the context to derive a 256-bit subkey and use SE with that subkey. Using a 256-bit key for SE only means that it has to rely on AES-256; it doesn't require TE to have a 256-bit key.
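The context-hashing step admits a very small sketch (our rendering of the idea, not the paper's exact construction):

    import hashlib, struct

    def encode(*parts):
        # Length-prefix each field so the encoding is injective.
        return b"".join(struct.pack(">I", len(p)) + p for p in parts)

    def derive_subkey(key, nonce, associated_data):
        # 256-bit subkey committing to the whole encryption context.
        return hashlib.sha256(encode(key, nonce, associated_data)).digest()

    # Encrypting under derive_subkey(K, N, A) with the key-committing
    # scheme SE then binds the ciphertext to (K, N, A), by collision
    # resistance of SHA-256 on the injective encoding.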
Our approach frees the Accordion designs from considerations of committing security. Moreover, it yields significant savings for several key-committing applications that don't want to pay the inherent hashing cost of full commitment.
Théophile Brézot, Chloé Hébant
ePrint Report
State-of-the-art database implementations offer a wide range of functionalities and impressive performance while supporting highly concurrent loads. However, they all rely on the server knowing the content of the database, which raises issues when sensitive information is stored on a server that cannot be trusted. Encrypting documents before sending them to a remote server solves the confidentiality issue at the cost of losing the keyword search functionality. Cryptographic primitives such as Symmetric Searchable Encryption (SSE) schemes have been proposed to recover this functionality. However, no SSE construction properly defines correctness and successfully guarantees security in a concurrent setting.
This paper takes a first step in this direction by recommending linearizability as the standard notion of correctness for concurrent SSE. We study the impact of concurrency on security and stress the need for finer-grained security models. Hence, we propose the log-security model, which guarantees security against an adversary with access to persistency-related logs, fixing a blind spot in the snapshot model while capturing security in a concurrent setting.
We also build the first SSE solution proven correct and secure in a concurrent setting that can be implemented on top of any database. Our scheme is fast thanks to optimal wait-free search operations and sequentially optimal, lock-free modifications, both executing in under one microsecond per binding, while incurring only a 13.3% storage expansion.
Pedram Hosseyni, Ralf Küsters, Tim Würtele
ePrint Report
FAPI 2.0 is a suite of Web protocols developed by the OpenID Foundation's FAPI Working Group (FAPI WG) for third-party data sharing and digital identity in high-risk environments. Even though the specifications are not completely finished, several important entities have started to adopt the FAPI 2.0 protocols, including Norway's national HelseID, Australia's Consumer Data Standards, as well as private companies like Authlete and Australia-based connectID; the predecessor FAPI 1.0 is in widespread use with millions of users.
The FAPI WG asked us to accompany the standardization of the FAPI 2.0 protocols with a formal security analysis to proactively identify vulnerabilities before widespread deployment and to provide formal security guarantees for the standards. In this paper, we report on our analysis and findings.
Our analysis is based on a detailed model of the Web infrastructure, the so-called Web Infrastructure Model (WIM), which we extend to be able to carry out our analysis of the FAPI 2.0 protocols including important extensions like FAPI-CIBA. Based on the (extended) WIM and formalizations of the security goals and attacker model laid out in the FAPI 2.0 specifications, we provide a formal model of the protocols and carry out a formal security analysis, revealing several attacks. We have worked with the FAPI WG to fix the protocols, resulting in several amendments to the specifications. With these changes in place, we have adjusted our protocol model and formally proved that the security properties hold true under the strong attacker model defined by the FAPI WG.
Taiga Hiroka, Tomoyuki Morimae
ePrint Report
In classical cryptography, one-way functions (OWFs) are the minimal assumption, while recent active studies have demonstrated that OWFs are not necessarily the minimum assumption in quantum cryptography. Several new primitives have been introduced, such as pseudorandom unitaries (PRUs), pseudorandom function-like state generators (PRFSGs), pseudorandom state generators (PRSGs), one-way state generators (OWSGs), one-way puzzles (OWPuzzs), and EFI pairs. They are believed to be weaker than OWFs, but they still imply many useful applications, such as private-key quantum money schemes, secret-key encryption, message authentication codes, digital signatures, commitments, and multiparty computations.

Now that the possibility of quantum cryptography without OWFs has opened up, the most important goal in the field is to provide them with concrete instantiations. For example, in classical cryptography, there are many instantiations of OWFs based on concrete hardness assumptions, such as the hardness of discrete logarithms or learning with errors. The study of generic primitives is justified by the existence of concrete instantiations. On the other hand, in quantum cryptography, all known constructions of those primitives are only from OWFs. We therefore have the following important open problem: Do they have instantiations based on some concrete hardness assumptions that will not imply OWFs? Ideally, the assumptions should be ones studied in contexts other than cryptography.

In this paper, we give a candidate answer to this question by showing that the quantum average-hardness of the GapK problem implies the existence of OWPuzzs. The GapK problem is a promise problem asking whether a given bit string has small Kolmogorov complexity. Its quantum average-hardness means that an instance is sampled from a quantum-polynomial-time-samplable distribution, and no quantum-polynomial-time algorithm can solve the problem with high probability. As far as we know, this is the first time that a ``Microcrypt'' primitive has been constructed based on concrete hardness assumptions that do not seem to imply OWFs. Moreover, the assumptions are studied in contexts other than cryptography, especially in the field of meta-complexity. (Note: During the preparation of this manuscript, Khurana and Tomer \cite{cryptoeprint:2024/1490} uploaded a concurrent work.)
Maggie Yongqi Guan, Yaman Yu, Tanusree Sharma, Molly Zhuangtong Huang, Kaihua Qin, Yang Wang, Kanye Ye Wang
ePrint Report
Stablecoins, a type of cryptocurrency pegged to another asset to maintain a stable price, have become an important part of the cryptocurrency ecosystem. Prior studies have primarily focused on examining the security of stablecoins from technical and theoretical perspectives, with limited investigation into users' risk perceptions and security behaviors in stablecoin practices. To address this research gap, we conducted a mixed-method study that included constructing a stablecoin interaction framework from the literature (which informed the design of our interview protocol), semi-structured interviews (n=21), and an analysis of Reddit data (9,326 posts). We found that participants see stable value and regulatory compliance as key security advantages of stablecoins over other cryptocurrencies. However, participants also raised concerns about centralization risks in fiat-backed stablecoins, perceived challenges in crypto-backed stablecoins due to limited reliance on fully automated execution, and confusion regarding the complex mechanisms of algorithmic stablecoins. We proposed improving user education and optimizing mechanisms to address these concerns and promote the safer use of stablecoins.
Janik Huth, Antoine Joux
ePrint Report
In this paper, we introduce a new method to construct a signature scheme based on the subfield bilinear collision problem published at Crypto 2024. We use techniques based on vector oblivious linear evaluation (VOLE) to significantly improve the running time and signature size of the scheme compared to the MPC-in-the-head version.
Tomoyuki Morimae, Yuki Shirakawa, Takashi Yamakawa
ePrint Report
Quantum computational advantage refers to the existence of computational tasks that are easy for quantum computers but hard for classical ones. Unconditionally showing quantum advantage is beyond our current understanding of complexity theory, and therefore some computational assumptions are needed. Which complexity assumption is necessary and sufficient for quantum advantage? In this paper, we show that inefficient-verifier proofs of quantumness (IV-PoQ) exist if and only if classically-secure one-way puzzles (OWPuzzs) exist. As far as we know, this is the first time that a complete cryptographic characterization of quantum advantage has been obtained.

IV-PoQ capture various types of quantum advantage previously studied, such as sampling advantage and searching advantage. Previous work [Morimae and Yamakawa 2024] showed that IV-PoQ can be constructed from OWFs, but a construction of IV-PoQ from weaker assumptions was left open. Our result solves this open problem. OWPuzzs are one of the most fundamental quantum cryptographic primitives, implied by many quantum cryptographic primitives weaker than one-way functions (OWFs). The equivalence between IV-PoQ and classically-secure OWPuzzs therefore highlights that if there is no quantum advantage, then these fundamental primitives do not exist. The equivalence also means that quantum advantage is an example of an application of OWPuzzs. Except for commitments, no application of OWPuzzs was known before. Our result shows that quantum advantage is another application of OWPuzzs. Moreover, it is the first quantum computation classical communication (QCCC) application of OWPuzzs.
Xingye Lu, Jingjing Fan, Man Ho AU
ePrint Report
In this paper, we introduce the notion of relaxed lattice-based programmable hash function (RPHF), which is a novel variant of lattice-based programmable hash functions (PHFs). Lattice-based PHFs, together with preimage trapdoor functions (TDFs), have been widely utilized (implicitly or explicitly) in the construction of adaptively secure identity-based encryption (IBE) schemes. The preimage length and the output length of the underlying PHF and TDF together determine the user secret key and ciphertext lengths of the IBE schemes.
However, the current lattice-based PHF definition imposes the restriction that the preimage length of TDF in the IBE schemes cannot be too short, hindering the utilization of size-efficient NTRU TDF. To overcome this hurdle, RPHF relaxes the hash key distribution requirement in the definition of PHF from statistical indistinguishability to computational indistinguishability. This relaxation eliminates limitations on the preimage length of underlying TDFs in IBE, enabling the construction of IBEs from NTRU TDFs.
We introduce two instantiations of RPHF: the first produces a hash output length of $2$ ring elements, with a hash key size linear in the input length, and the second yields an output length of $14$ ring elements, with a hash key size proportional to the square root of the input length.
Building upon these RPHF instantiations, we propose two adaptively secure lattice-based IBE schemes with ciphertext lengths of $5$ and $17$ ring elements and user secret key lengths of $4$ and $16$ ring elements, respectively.
The length of the IBE master public key is roughly equivalent to the size of the hash key of the underlying RPHF.
In comparison to existing IBE constructions, our proposed schemes achieve a significant reduction (over an order of magnitude) in ciphertext and secret key sizes. Notably, state-of-the-art constructions from ideal lattices exhibit secret key and ciphertext sizes of over 100 ring elements, making our proposed schemes highly efficient. Moreover, the master public key sizes of our IBEs remain comparable to those of existing constructions.
Leo de Castro, Duhyeong Kim, Miran Kim, Keewoo Lee, Seonhong Min, Yongsoo Song
ePrint Report
We present a new and efficient method to obtain circuit privacy for lattice-based linearly homomorphic encryption (LHE). In particular, our method does not involve noise-flooding with exponentially large errors or iterative bootstrapping. As a direct result, we obtain a semi-honest oblivious linear evaluation (OLE) protocol with the same efficiency, reducing the communication cost of the prior state of the art by 50%.
Consequently, the amortized time of our protocol improves on the prior work by 33% in a 100 Mbps network setting. Our semi-honest OLE is the first to achieve both concrete efficiency and asymptotic quasi-optimality. Together with an extension of the recent zero-knowledge proof of plaintext knowledge, our LHE yields actively-secure OLE with 2.7x lower communication than the prior work. When applied to Overdrive (Eurocrypt '18), an MPC preprocessing protocol, our method provides a 1.4x improvement in communication over the state of the art.
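The generic blueprint behind LHE-based OLE can be sketched with textbook Paillier as a stand-in for the paper's lattice-based LHE (toy parameters, semi-honest parties, and no circuit privacy here): the receiver sends Enc(x), and the sender homomorphically returns Enc(a*x + b).

    from math import gcd
    import secrets

    p, q = 1009, 1013                     # toy primes; real keys are far larger
    n, n2 = p * q, (p * q) ** 2
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # Carmichael lambda(n)
    mu = pow(lam, -1, n)                  # decryption constant (uses g = n + 1)

    def enc(m):
        while True:
            r = secrets.randbelow(n - 2) + 2
            if gcd(r, n) == 1:            # randomizer must be coprime to n
                break
        return pow(n + 1, m, n2) * pow(r, n, n2) % n2

    def dec(c):
        return (pow(c, lam, n2) - 1) // n * mu % n

    # Receiver holds x; sender holds (a, b). One message in each direction:
    x = 123
    ct = enc(x)                           # receiver -> sender: Enc(x)
    a, b = 45, 678
    ct_out = pow(ct, a, n2) * enc(b) % n2 # sender -> receiver: Enc(a*x + b)
    assert dec(ct_out) == (a * x + b) % n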
Jan Bormet, Sebastian Faust, Hussien Othman, Ziyan Qu
ePrint Report
In decentralized finance (DeFi), the public availability of pending transactions presents significant privacy concerns, enabling market manipulation through miner extractable value (MEV). MEV occurs when block proposers exploit the ability to reorder, omit, or include transactions, causing financial loss to users from frontrunning. Recent research has focused on encrypting pending transactions, hiding transaction data until block finalization. To this end, Choudhuri et al. (USENIX '24) introduce an elegant new primitive called Batched Threshold Encryption (BTE) where a batch of encrypted transactions is selected by a committee and only decrypted after block finalization. Crucially, BTE achieves low communication complexity during decryption and guarantees that all encrypted transactions outside the batch remain private. An important shortcoming of their construction is, however, that it progresses in epochs and requires a costly setup in MPC for each batch decryption.
In this work, we introduce a novel BTE scheme addressing the limitations by eliminating the need for an expensive epoch setup while achieving practical encryption and decryption times. Additionally, we explore the problem of how users can coordinate their transactions, which is crucial for the functionality of the system. Along the way, we present several optimizations and trade-offs between communication and computational complexity that allow us to achieve practical performance on standard hardware ($<2$ ms for encryption and $<440$ ms for decrypting $512$ transactions). Finally, we prove our constructions secure in a model that captures practical attacks on MEV-prevention mechanisms.
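As generic background on the committee aspect (standard Shamir secret sharing, not the paper's BTE construction), a t-of-n threshold lets decryption happen only once a quorum cooperates, e.g., after block finalization:

    import secrets

    P = 2**61 - 1                          # prime field for the shares

    def share(secret, t, n):
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at 0 recovers the shared secret.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    key = secrets.randbelow(P)
    shares = share(key, t=3, n=5)
    assert reconstruct(shares[:3]) == key  # any 3 of 5 committee members suffice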
Fei Xu, Honggang Hu, Changhong Xu
ePrint Report
At Eurocrypt 2015, Zahur, Rosulek, and Evans proposed the model of Linear Garbling Schemes, in which they proved a $2\kappa$-bit lower bound on ciphertext size for a broad class of garbling schemes. Since then, several methods have been developed that bypass this lower bound, albeit with a notable limitation: their reliance on specifically correlated input wire labels means they do not apply to most gates. At Crypto 2021, Rosulek and Roy presented the innovative "three-halves" garbling scheme, in which AND gates cost $1.5\kappa+5$ bits and XOR gates are free. A noteworthy aspect of their scheme is the slicing-and-dicing technique, which is applicable universally to all AND gates when garbling a Boolean circuit. Following this work, Rosulek and Roy posed several open problems. Our research primarily addresses one of them: ``Is $1.5\kappa$ bits optimal for garbled AND gates in a more inclusive model than Linear Garbling Schemes?''
In this paper, we propose Bitwise Garbling Schemes, a model that seamlessly incorporates the slicing-and-dicing technique. Our key result is that $1.5\kappa$ bits is indeed optimal for arbitrary garbled AND gates in our model. Since Rosulek and Roy also posed another problem questioning the necessity of free-XOR, we explore constructions without free-XOR and prove a $2\kappa$-bit lower bound. Therefore, sacrificing compatibility with free-XOR does not lead to a more efficient scheme.
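For orientation, a classical point-and-permute garbled AND gate with four ciphertexts is sketched below (our toy baseline; both the $2\kappa$-bit bound and the $1.5\kappa$-bit three-halves scheme improve on this table size):

    import hashlib, secrets

    KAPPA = 16  # toy label length in bytes

    def H(a, b):                        # hash two labels into a one-time pad
        return hashlib.sha256(a + b).digest()[:KAPPA + 1]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def fresh_wire():
        # label[v] encodes value v; the last byte is a random "color" bit
        # ordering the table rows without revealing the value
        r = secrets.randbits(1)
        return {0: secrets.token_bytes(KAPPA) + bytes([r]),
                1: secrets.token_bytes(KAPPA) + bytes([1 - r])}

    A, B, C = fresh_wire(), fresh_wire(), fresh_wire()

    table = [None] * 4                  # row (i, j) <- input labels with colors (i, j)
    for a in (0, 1):
        for b in (0, 1):
            table[2 * A[a][-1] + B[b][-1]] = xor(H(A[a], B[b]), C[a & b])

    def evaluate(la, lb):               # the evaluator sees labels, not values
        return xor(table[2 * la[-1] + lb[-1]], H(la, lb))

    assert evaluate(A[1], B[1]) == C[1]
    assert evaluate(A[0], B[1]) == C[0]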
Albert Garreta, Ignacio Manzur
ePrint Report
We introduce two folding schemes for lookup instances: FLI and FLI+SOS. Both use a PIOP to check that a matrix has elementary basis vectors as rows, with FLI+SOS adding a twist based on Lasso’s SOS-decomposability.
FLI takes two lookup instances $\{\mathbf{a}_1\}, \{\mathbf{a}_2\}\subseteq\mathbf{t}$, and expresses them as matrix equations $M_i\cdot\mathbf{t}^\mathsf{T}=\mathbf{a}_i^\mathsf{T}$ for $i=1,2$, where each matrix $M_i\in\mathbb{F}^{m\times N}$ has rows which are elementary basis vectors in $\mathbb{F}^N$. Matrices that satisfy this condition are said to be in $R_{\mathsf{elem}}$. Then, a folding scheme for $R_{\mathsf{elem}}$ into a relaxed relation is used, which combines the matrices $M_1, M_2$ as $M_1+\alpha M_2$ for a random $\alpha\in\mathbb{F}$. Finally, the lookup equations are combined as $(M_1+\alpha M_2)\cdot \mathbf{t}^{\mathsf{T}} = (\mathbf{a}_1+\alpha \mathbf{a}_2)^\mathsf{T}$. In FLI, only the property that a matrix is in $R_{\mathsf{elem}}$ is folded, and this makes the FLI folding step the cheapest among existing solutions. The price to pay is in the cost for proving accumulated instances.
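The matrix encoding and the folding step can be replayed concretely over a toy prime field (our rendering of the equations above):

    import numpy as np

    P = 2**31 - 1                       # toy field F_P
    t = np.array([7, 11, 13, 17])       # the table, N = 4 entries

    def lookup_matrix(a, t):
        # Rows of M are elementary basis vectors selecting each a_i from t.
        M = np.zeros((len(a), len(t)), dtype=np.int64)
        table = list(t)
        for row, value in enumerate(a):
            M[row, table.index(value)] = 1
        return M

    a1, a2 = np.array([11, 7, 11]), np.array([13, 13, 17])
    M1, M2 = lookup_matrix(a1, t), lookup_matrix(a2, t)
    assert (M1 @ t % P == a1).all() and (M2 @ t % P == a2).all()

    alpha = 123456789                   # verifier's random challenge
    M_folded = (M1 + alpha * M2) % P    # no longer in R_elem: a "relaxed" instance
    a_folded = (a1 + alpha * a2) % P
    assert (M_folded @ t % P == a_folded).all()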
FLI+SOS builds upon FLI to enable folding of large SOS-decomposable tables. This is achieved through a variation of Lasso's approach to SOS-decomposability, which fits FLI naturally. For comparison, we describe (for the first time to our knowledge) straightforward variations of Protostar and Proofs for Deep Thought that also benefit from SOS-decomposability. We see that for many reasonable parameter choices, and especially those arising from lookup-based zkVMs, FLI+SOS can concretely be the cheapest folding solution.
Joan Boyar, Simon Erfurth
ePrint Report
Folding schemes are an exciting new primitive, transforming the task of performing multiple zero-knowledge proofs of knowledge for a relation into performing just one zero-knowledge proof, for the same relation, and a number of cheap inclusion-proofs. Recently, folding schemes have been used to amortize the cost associated with proving different statements to multiple distinct verifiers, which has various applications. We observe that for these uses, leaking information about the statements folded together can be problematic, yet this happens with previous constructions. Towards resolving this issue, we give a natural definition of privacy-preserving folding schemes, and of the security they should offer.

To construct privacy-preserving folding schemes, we first define statement hiders, a primitive which might be of independent interest. In a nutshell, a statement hider hides an instance of a relation as a new instance in the same relation. The new instance is in the relation if and only if the initial instance is. With this building block, we can utilize existing folding schemes to construct a privacy-preserving folding scheme by first hiding each of the statements. Folding schemes allow verifying that a statement was folded into another statement, while statement hiders allow verifying that a statement was hidden as another statement.
Karim Eldefrawy, Benjamin Terner, Moti Yung
ePrint Report
Time-lock puzzles are unique cryptographic primitives that use computational complexity to keep information secret for some period of time, after which security expires. This topic, while over 25 years old, is still in a state where foundations are not well understood: for example, current analysis techniques for time-lock primitives provide no sound mechanism to build composed multi-party cryptographic protocols which use expiring security as a building block. Further, some analyses employ idealizations and simulators of unrealistic computational power, which cannot constitute an acceptable, sound security argument. Our goal with this short paper is to advocate for understanding which approaches may lead to sound modeling beyond idealization, and which approaches may, in fact, be hopeless at this task of sound modeling.
We explain in this paper how existing attempts at this subtle problem lack either composability, a fully consistent analysis, or functionality. The subtle flaws in the existing frameworks trace back to an impossibility result by Mahmoody et al., who showed that time-lock puzzles with super-polynomial gaps (between committer and solver) cannot be constructed from random oracles alone (or from any repetitive computation where the next state is completely random given the prior state); yet the analyses of algebraic puzzles today still treat the solving process as if each step is a generic or random oracle. We point out that if the generation process relies on a trapdoor function that cannot be treated as a random oracle (to allow efficient generation while avoiding this impossibility result), then, to be consistent, the analysis of the solving process should also not treat such a trapdoor function (and its intermediate states) as a random oracle.
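The asymmetry in question is concrete in the classic RSA time-lock puzzle of Rivest, Shamir, and Wagner, where generation uses the trapdoor $\varphi(n)$ while solving requires $t$ sequential squarings (a toy sketch):

    from math import gcd
    import secrets

    p, q = 1009, 1013                 # toy primes; real puzzles use RSA-size moduli
    n, phi = p * q, (p - 1) * (q - 1)
    t = 10_000                        # intended number of sequential squarings

    while True:
        x = secrets.randbelow(n - 2) + 2
        if gcd(x, n) == 1:            # Euler's theorem needs gcd(x, n) = 1
            break

    # Generator (fast, uses the trapdoor): reduce the exponent 2^t mod phi(n).
    y_fast = pow(x, pow(2, t, phi), n)

    # Solver (slow, no trapdoor): t sequential modular squarings.
    y_slow = x
    for _ in range(t):
        y_slow = y_slow * y_slow % n

    assert y_fast == y_slow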
We also delineate additional issues with the proof techniques used for time-lock puzzles. Specifically, when a time-lock puzzle must retain privacy for some amount of time, the reduction should bound the running time of the simulator. A simulator that can ``simulate'' only when given an amount of time that, if given to an adversary, would allow that adversary to solve the puzzle is not a valid security argument. We survey the adherence of various attempts to this principle, as well as the properties that different attempts achieve toward composition.