IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
13 November 2023
Jakob Feldtkeller, Tim Güneysu, Patrick Schaumont
Active fault injection is a credible threat to real-world digital systems computing on sensitive data. Arguing about security in the presence of faults is non-trivial, and state-of-the-art criteria are overly conservative and do not allow fine-grained comparison. However, comparing two alternative implementations for their security is required to find a satisfying compromise between security and performance. In addition, the comparison of alternative fault scenarios can help optimize the implementation of effective countermeasures.
In this work, we use quantitative information flow analysis to establish a vulnerability metric for hardware circuits under fault injection that measures the severity of an attack in terms of information leakage. Potential use cases range from comparing implementations with respect to their vulnerability to specific fault scenarios to optimizing countermeasures. We automate the computation of our metric by integrating it into a state-of-the-art evaluation tool for physical attacks and provide new insights into the security under an active fault attacker.
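To make the flavor of such a metric concrete, here is a minimal, self-contained sketch (not the authors' tool): a toy 3-bit circuit with a made-up S-box, a stuck-at-0 fault on one intermediate bit, and leakage measured as the mutual information between the secret key and whether the fault was effective. All names and parameters are hypothetical.

```python
from math import log2

SBOX = [6, 5, 0, 3, 7, 2, 4, 1]    # toy 3-bit permutation (made up)
X = 0b101                          # fixed public input

def entropy(dist):
    return -sum(p * log2(p) for p in dist if p > 0)

def fault_effective(key):
    # Stuck-at-0 fault on bit 0 of the intermediate value u = SBOX[X ^ key];
    # the attacker observes whether the output changed (SIFA-style observable).
    u = SBOX[X ^ key]
    return (u & ~1) != u

# Distribution of the observable under a uniform 3-bit secret key.
p_obs = {}
for k in range(8):
    b = fault_effective(k)
    p_obs[b] = p_obs.get(b, 0) + 1 / 8

# The circuit is deterministic given the key, so H(B|K) = 0 and the
# leakage is I(K;B) = H(B).
print(f"leakage I(K;B) = {entropy(p_obs.values()):.2f} bits")  # 1.00 here
```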
Fuxin Zhang, Zhenyu Huang
We propose a new method to encode the problems of optimizing S-box implementations into SAT problems. By considering the inputs and outputs of gates as Boolean functions, the fundamental idea of our method is to represent the relationships between these inputs and outputs according to their algebraic normal forms. Based on this method, we present several encoding schemes for optimizing S-box implementations according to various criteria, such as multiplicative complexity, bitslice gate complexity, gate complexity, and circuit depth complexity. The experimental results on these optimization problems show that, compared to the encoding method proposed at FSE 2016, which represents the relationships between Boolean functions by their truth tables, our new encoding method can significantly accelerate the subsequent solving process, by 2-100 times for the majority of instances. To further improve the solving efficiency, we propose several strategies to eliminate the redundancy of the derived equation system and to break the symmetry of the solution space. We apply our method to the optimization of the S-boxes used in Ascon, ICEPOLE, PRIMATEs, Keccak/Ketje/Keyak, Joltik/Piccolo, LAC, Minalpher, Prøst, and RECTANGLE. We obtain new improved implementations and narrow the range of the optimal values for different optimization criteria of these S-boxes.
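As a small illustration of the paper's starting point, the snippet below computes the algebraic normal form of one output bit of a toy S-box from its truth table via the fast Moebius transform; the S-box itself is made up for the example. Relations between such ANF representations, rather than full truth tables, are what the new encoding feeds to the SAT solver.

```python
def anf_from_truth_table(tt):
    """Fast Moebius transform over GF(2): truth table -> ANF coefficients."""
    n = len(tt).bit_length() - 1
    c = list(tt)
    for i in range(n):
        step = 1 << i
        for j in range(len(c)):
            if j & step:
                c[j] ^= c[j ^ step]
    return c

sbox = [6, 5, 0, 3, 7, 2, 4, 1]            # toy 3-bit S-box (hypothetical)
tt = [(y >> 0) & 1 for y in sbox]          # truth table of output bit 0
coeffs = anf_from_truth_table(tt)
# Each index encodes a monomial: e.g., index 0b101 stands for x0*x2.
print([m for m, c in enumerate(coeffs) if c])
```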
Samuel Bouaziz--Ermann, Alex B. Grilo, Damien Vergnaud, Quoc-Huy Vu
There has been a recent interest in proposing quantum protocols whose security relies on weaker computational assumptions than their classical counterparts. Importantly for our work, it has been recently shown that public-key encryption (PKE) from one-way functions (OWF) is possible if we consider quantum public keys. Notice that we do not expect classical PKE from OWF given the impossibility results of Impagliazzo and Rudich (STOC'89).
However, the distribution of quantum public keys is a challenging task. Therefore, the main question that motivates our work is whether quantum PKE from OWF is possible with classical public keys. Such protocols are impossible if ciphertexts are also classical, given the impossibility result of Austrin et al. (CRYPTO'22) on quantum-enhanced key agreement (KA) with classical communication.
In this paper, we focus on a black-box separation for PKE with classical public keys and quantum ciphertexts from OWF under the polynomial compatibility conjecture, first introduced by Austrin et al. More precisely, we show the separation when the decryption algorithm of the PKE does not query the OWF. We prove our result by extending the techniques of Austrin et al., and we show an attack on KA in an extended classical communication model where the last message in the protocol can be a quantum state.
Ryad Benadjila, Thibauld Feneuil, Matthieu Rivain
This paper presents MQ on my Mind (MQOM), a digital signature scheme based on the difficulty of solving multivariate systems of quadratic equations (the MQ problem). MQOM has been submitted to the NIST call for additional post-quantum signature schemes. MQOM relies on the MPC-in-the-Head (MPCitH) paradigm to build a zero-knowledge proof of knowledge (ZK-PoK) for MQ, which is then turned into a signature scheme through the Fiat-Shamir heuristic. The underlying MQ problem is non-structured in the sense that the system of quadratic equations defining an instance is drawn uniformly at random. This is one of the hardest and most studied problems from multivariate cryptography, which hence constitutes a conservative choice to build candidate post-quantum cryptosystems. For the efficient application of the MPCitH paradigm, we design a specific MPC protocol to verify the solution of an MQ instance. Compared to other multivariate signature schemes based on non-structured MQ instances, MQOM achieves the shortest signatures (6.3-7.8 KB) while keeping very short public keys (a few dozen bytes). Other multivariate signature schemes are based on structured MQ problems (less conservative), which either have large public keys (e.g. UOV) or use recently proposed variants of these MQ problems (e.g. MAYO).
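For readers unfamiliar with the underlying problem, here is a minimal sketch of a non-structured MQ instance over GF(2): the system is drawn uniformly at random and the secret is a satisfying assignment. Parameters are toy-sized and hypothetical; MQOM itself works over larger fields and builds an MPCitH proof of knowledge of the witness rather than revealing it.

```python
import secrets

n, m = 8, 8   # toy dimensions (real parameters are much larger)

def random_quadratic():
    # Quadratic part (used upper-triangularly), linear part, constant, over GF(2).
    A = [[secrets.randbelow(2) for _ in range(n)] for _ in range(n)]
    b = [secrets.randbelow(2) for _ in range(n)]
    return A, b, secrets.randbelow(2)

def evaluate(poly, x):
    A, b, c = poly
    acc = c
    for i in range(n):
        for j in range(i, n):
            acc ^= A[i][j] & x[i] & x[j]
        acc ^= b[i] & x[i]
    return acc

x = [secrets.randbelow(2) for _ in range(n)]   # secret key (the witness)
F = [random_quadratic() for _ in range(m)]     # public random MQ system
y = [evaluate(f, x) for f in F]                # public target y = F(x)
assert all(evaluate(f, x) == yi for f, yi in zip(F, y))
```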
Yimeng Sun, Jiamin Cui, Meiqin Wang
The LowMC family of SPN block ciphers, proposed by Albrecht et al., was designed specifically for MPC-/FHE-/ZKP-friendly use cases. Notably, it is used as the underlying block cipher of PICNIC, one of the alternate third-round candidate digital signature algorithms in the NIST post-quantum cryptography standardization process. The security of PICNIC is closely related to the difficulty of recovering the secret key of LowMC from a given plaintext/ciphertext pair, which raises new challenges for security evaluation under extremely low data complexity.
In this paper, we improve the attacks on LowMC under low data complexity, i.e. 1 or 2 chosen plaintext/ciphertext pairs. For the difference enumeration attack with 2 chosen plaintexts, we propose new algebraic methods to better exploit the nonlinear relation inside the introduced variables based on the attack framework proposed by Liu et al. at ASIACRYPT 2022. With this technique, we significantly extend the number of attack rounds for LowMC with partial nonlinear layers and improve the success probability from around 0.5 to over 0.9. The security margin of some instances can be reduced to only 3/4 rounds. For the key-recovery attack using a single plaintext, we adopt a different linearization strategy to reduce the huge memory consumption caused by the polynomial methods for solving multivariate equation systems. The memory complexity reduces drastically for all 5-/6-round LowMC instances with full nonlinear layers at the sacrifice of a small factor of time complexity. For 5-round LowMC instances with a block size of 129, the memory complexity decreases from $2^{86.46}$ bits to $2^{48.18}$ bits while the time complexity even slightly reduces. Our results indicate that the security for different instances of LowMC under extremely low data complexity still needs further exploration.
Elli Androulaki, Marcus Brandenburger, Angelo De Caro, Kaoutar Elkhiyaoui, Liran Funaro, Alexandros Filios, Yacov Manevich, Senthilnathan Natarajan, Manish Sethi
Central Bank Digital Currencies refer to the digitization of the lifecycle of central bank money in a way that meets first-of-a-kind requirements for transparency in transaction processing, interoperability with legacy and new systems, and resilience that goes beyond the traditional crash fault tolerant model. These come in addition to legacy system requirements for privacy and regulatory compliance, which may differ from central bank to central bank.
This paper introduces a novel framework for Central Bank Digital Currency settlement that outputs a system of record---acting as a trusted source of truth serving interoperation and dispute-resolution/fraud-detection needs---and combines resilience in the event of parts of the system being compromised with throughput comparable to crash-fault-tolerant systems. Our system is further agnostic to the exact cryptographic protocol adopted for meeting privacy, compliance, and transparency objectives, while ensuring compatibility with the existing protocols in the literature. For the latter, performance is architecturally guaranteed to scale horizontally. We evaluated our system's performance using an enhanced version of Hyperledger Fabric, showing how a throughput of >100K TPS can be supported even when computation-heavy privacy-preserving protocols are in place.
Yao-Ching Hsieh, Huijia Lin, Ji Luo
Although we have known about fully homomorphic encryption (FHE) from circular security assumptions for over a decade [Gentry, STOC '09; Brakerski–Vaikuntanathan, FOCS '11], there is still a significant gap in understanding related homomorphic primitives supporting all *unrestricted* polynomial-size computations. One prominent example is attribute-based encryption (ABE). The state-of-the-art constructions, relying on the hardness of learning with errors (LWE) [Gorbunov–Vaikuntanathan–Wee, STOC '13; Boneh et al., Eurocrypt '14], only accommodate circuits up to a *predetermined* depth, akin to leveled homomorphic encryption. In addition, their components (master public key, secret keys, and ciphertexts) have sizes polynomial in the maximum circuit depth. Even in the simpler setting where a single key is published (or a single circuit is involved), the depth dependency persists, showing up in constructions of 1-key ABE and related primitives, including laconic function evaluation (LFE), 1-key functional encryption (FE), and reusable garbling schemes. So far, the only approach of eliminating depth dependency relies on indistinguishability obfuscation. An interesting question that has remained open for over a decade is whether the circular security assumptions enabling FHE can similarly benefit ABE.
In this work, we introduce new lattice-based techniques to overcome the depth-dependency limitations:
- Relying on a circular security assumption, we construct LFE, 1-key FE, 1-key ABE, and reusable garbling schemes capable of evaluating circuits of unbounded depth and size.
- Based on the *evasive circular* LWE assumption, a stronger variant of the recently proposed *evasive* LWE assumption [Wee, Eurocrypt '22; Tsabary, Crypto '22], we construct a full-fledged ABE scheme for circuits of unbounded depth and size.
Our LFE, 1-key FE, and reusable garbling schemes achieve optimal succinctness (up to polynomial factors in the security parameter). Their ciphertexts and input encodings have sizes linear in the input length, while function digest, secret keys, and garbled circuits have constant sizes independent of circuit parameters (for Boolean outputs). In fact, this gives the first constant-size garbled circuits without relying on indistinguishability obfuscation. Our ABE schemes offer short components, with master public key and ciphertext sizes linear in the attribute length and secret key being constant-size.
10 November 2023
Shiyuan Xu, Yibo Cao, Xue Chen, Yuer Yang, Siu-Ming Yiu
Public key encryption with keyword search (PEKS), formalized by Boneh et al. [EUROCRYPT '04], enables secure searching for specific keywords in the ciphertext. Nevertheless, in certain scenarios, varying user tiers are granted disparate data searching privileges, and administrators need to restrict the searchability of ciphertexts to select users exclusively. To address this concern, Jiang et al. [ACISP '16] devised a variant of PEKS, namely public key encryption with authorized keyword search (PEAKS), wherein solely authorized users possess the ability to conduct targeted keyword searches. Nonetheless, it is vulnerable to quantum computing attacks. As a result, research on authorizing users to search for keywords while achieving quantum security is of great significance.
In this work, we present a novel construction, namely lattice-based PEAKS (L-PEAKS), which is the first mechanism to permit the authority to authorize users to search different keyword sets while ensuring quantum-safe properties. Specifically, the keyword is encrypted with a public key, and each authorized user needs to obtain a search privilege from an authority. The authority distributes an authorized token to a user within a time period, and the user generates a trapdoor for any authorized keywords. Technically, we utilize several lattice sampling and basis extension algorithms to fight against attacks from quantum adversaries. Moreover, we leverage identity-based encryption (IBE) to alleviate the bottleneck of public key management. Furthermore, we conduct parameter analysis, rigorous security reduction, and theoretical complexity comparison of our scheme, and perform comprehensive evaluations on a commodity machine for completeness. Our L-PEAKS satisfies IND-sID-CKA and T-EUF security and is efficient in terms of space and computation complexity compared to other existing primitives. Finally, we provide two potential applications to show its versatility.
06 November 2023
Alessandro Chiesa, Ziyi Guan, Burcu Yıldız
Parallel repetition refers to a set of valuable techniques used to reduce soundness error of probabilistic proofs while saving on certain efficiency measures. Parallel repetition has been studied for interactive proofs (IPs) and multi-prover interactive proofs (MIPs). In this paper we initiate the study of parallel repetition for probabilistically checkable proofs (PCPs).
We show that, perhaps surprisingly, parallel repetition of a PCP can increase soundness error, in fact bringing the soundness error to one as the number of repetitions tends to infinity. This "failure" of parallel repetition is common: we find that it occurs for a wide class of natural PCPs for NP-complete languages. We explain this unexpected phenomenon by providing a characterization result: the parallel repetition of a PCP brings the soundness error to zero if and only if a certain "MIP projection" of the PCP has soundness error strictly less than one. We show that our characterization is tight via a suitable example. Moreover, for those cases where parallel repetition of a PCP does bring the soundness error to zero, the aforementioned connection to MIPs offers preliminary results on the rate of decay of the soundness error.
Finally, we propose a simple variant of parallel repetition, called consistent parallel repetition (CPR), which has the same randomness complexity and query complexity as the plain variant of parallel repetition. We show that CPR brings the soundness error to zero for every PCP (with non-trivial soundness error). In fact, we show that CPR decreases the soundness error at an exponential rate in the repetition parameter.
Santiago Arranz Olmos, Gilles Barthe, Ruben Gonzalez, Benjamin Grégoire, Vincent Laporte, Jean-Christophe Lechenet, Tiago Oliveira, Peter Schwabe
In this paper we revisit the problem of erasing sensitive data from memory and registers during return from a cryptographic routine. While the problem and the related attacker model are fairly easy to phrase, it turns out to be surprisingly hard to guarantee security in this model when implementing cryptography in common languages such as C/C++ or Rust. We revisit the issues surrounding zeroization and then present a principled solution, in the sense that it guarantees that sensitive data is erased and it clearly defines when this happens. We implement our solution as an extension to the formally verified Jasmin compiler and extend the correctness proof of the compiler to cover zeroization. We show that the approach seamlessly integrates with state-of-the-art protections against microarchitectural attacks by integrating zeroization into Libjade, a cryptographic library written in Jasmin with systematic protections against timing and Spectre-v1 attacks. We present benchmarks showing that in many cases the overhead of zeroization is barely measurable, and that it stays below 2% except for highly optimized symmetric crypto routines on short inputs.
Feng Li, Jianfeng Ma, Yinbin Miao, Pengfei Wu, Xiangfu Song
Boolean Searchable Symmetric Encryption (BSSE) enables users to perform retrieval operations on the encrypted data while supporting complex query capabilities. This paper focuses on addressing the storage overhead and privacy concerns associated with existing BSSE schemes. While Patel et al. (ASIACRYPT'21) and Bag et al. (PETS'23) introduced BSSE schemes that conceal the number of single keyword results, both of them suffer from quadratic storage overhead and neglect the privacy of search and access patterns. Consequently, an open question arises: Can we design a storage-efficient Boolean query scheme that effectively suppresses leakage, covering not only the volume pattern for singleton keywords, but also search and access patterns?
In light of the limitations of existing schemes in terms of storage overhead and privacy protection, this work presents a novel solution called SESAME. It realizes efficient storage and privacy preservation based on Bloom filters and functional encryption. Moreover, we propose an enhanced version, SESAME+, which offers improved search performance. By rigorous security analysis of the leakage functions of our schemes, we provide a formal security proof. Finally, we implement our schemes and demonstrate that SESAME+ achieves superior search efficiency and reduced storage overhead.
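As background for the building block named above, here is a minimal Bloom filter, the storage-efficient set structure that SESAME-style schemes build on; this toy version has no encryption and all parameters are illustrative.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: m bits, k hash positions per item (toy sizes)."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)

    def _positions(self, item):
        for i in range(self.k):
            d = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(d[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        # No false negatives; false positives with tunable probability.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
bf.add("keyword1")
print(bf.might_contain("keyword1"), bf.might_contain("keyword2"))  # True False
```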
Keegan Ryan, Kaiwen He, George Arnold Sullivan, Nadia Heninger
We demonstrate that a passive network attacker can opportunistically obtain private RSA host keys from an SSH server that experiences a naturally arising fault during signature computation. In prior work, this was not believed to be possible for the SSH protocol because the signature included information like the shared Diffie-Hellman secret that would not be available to a passive network observer. We show that for the signature parameters commonly in use for SSH, there is an efficient lattice attack to recover the private key in case of a signature fault. We provide a security analysis of the SSH, IKEv1, and IKEv2 protocols in this scenario, and use our attack to discover hundreds of compromised keys in the wild from several independently vulnerable implementations.
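The paper's contribution is a lattice attack tailored to the signature parameters used by SSH, but the underlying risk is easiest to see in the classic Bellcore observation: a single faulty RSA-CRT signature, together with the signed value, factors the modulus via one gcd. Below is a toy demonstration of that classic attack (insecure key sizes, made-up values), not the paper's lattice attack.

```python
from math import gcd

# Toy RSA-CRT parameters (far too small for real use).
p, q = 999983, 1000003
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))
m = 123456789 % N

def sign_crt(m, fault=False):
    sp = pow(m, d % (p - 1), p)
    sq = pow(m, d % (q - 1), q)
    if fault:
        sq ^= 1                        # a single bit-flip in the mod-q half
    h = (pow(q, -1, p) * (sp - sq)) % p
    return sq + q * h                  # CRT recombination (Garner)

s_faulty = sign_crt(m, fault=True)
# Bellcore attack: s^e - m is divisible by p but not by q, so gcd factors N.
print(gcd(pow(s_faulty, e, N) - m, N) == p)   # True: the modulus is factored
```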
Mingjie Chen, Yi-Fu Lai, Abel Laval, Laurane Marco, Christophe Petit
Zero-knowledge proofs for NP statements are an essential tool for building various cryptographic primitives and have been extensively studied in recent years. By a seminal result of Goldreich, Micali and Wigderson (JACM'91), zero-knowledge proofs for NP statements can be built from any one-way function, but this construction leads to very inefficient proofs. To yield practical constructions, one often uses the additional structure provided by homomorphic commitments.
In this paper, we introduce a relaxed notion of homomorphic commitments, called malleable commitments, which requires less structure to be instantiated. We provide a malleable commitment construction from the ElGamal-type isogeny-based group action (Eurocrypt'22). We show how malleable commitments with a group structure in the malleability can be used to build zero-knowledge proofs for NP statements, improving on the naive construction from one-way functions. We consider three representations: arithmetic circuits, rank-1 constraint systems and branching programs.
This work gives the first attempt at constructing a post-quantum generic proof system from isogeny assumptions (the group action DDH problem). Though the resulting proofs are linear in the circuit size, they possess interesting features such as non-interactivity, statistical zero-knowledge, and online-extractability.
Zhiwei Li, Jun Xu, Lei Hu
In 2012, Ding, Xie and Lin designed a key exchange protocol based on the Ring-LWE problem, called the DXL key exchange protocol, which can be seen as an extended version of the Diffie-Hellman key exchange. In this protocol, Ding et al. achieve key exchange between the communicating parties via the associativity of matrix multiplication, that is, $(x^T\cdot A)\cdot y = x^T\cdot (A\cdot y)$, where $x,y$ are column vectors and $A$ is a square matrix. However, the DXL key exchange protocol cannot resist key reuse attacks. At ESORICS 2022, Qin et al. proposed a method by which an adversary can recover the reused private key after forging public keys several times. Nevertheless, Qin et al.'s method involves many redundant operations. In this paper, we generalize Qin et al.'s method and propose an effective approach that combines signal leakage attacks with depth-first search. Compared with the state-of-the-art result from ESORICS 2022, the number of key reuses required has been decreased from 29 to 10. In other words, if the number of reuses exceeds 10, the private key can be recovered. Moreover, we validate the effectiveness of our results through experiments.
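As a sanity check of the associativity identity the protocol rests on, here is a toy, noise-free sketch in which both parties derive the same scalar from $x^T(Ay) = (x^TA)y$. The real DXL protocol works over polynomial rings with small error terms and a reconciliation (signal) step, all omitted here; the dimension and modulus are hypothetical.

```python
import secrets

Q, N = 12289, 8   # toy modulus and dimension (illustration only)

def rand_vec():
    return [secrets.randbelow(Q) for _ in range(N)]

A = [rand_vec() for _ in range(N)]        # public square matrix
x, y = rand_vec(), rand_vec()             # Alice's and Bob's secrets

def vec_mat(v, M):                        # v^T * M
    return [sum(v[i] * M[i][j] for i in range(N)) % Q for j in range(N)]

def mat_vec(M, v):                        # M * v
    return [sum(M[i][j] * v[j] for j in range(N)) % Q for i in range(N)]

pa = vec_mat(x, A)                        # Alice -> Bob:  x^T A
pb = mat_vec(A, y)                        # Bob -> Alice:  A y
k_alice = sum(x[i] * pb[i] for i in range(N)) % Q   # x^T (A y)
k_bob   = sum(pa[j] * y[j] for j in range(N)) % Q   # (x^T A) y
assert k_alice == k_bob                   # shared value, by associativity
```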
Jan Schoone, Joan Daemen
The Boolean map $\chi_n \colon \mathbb{F}_2^n \to \mathbb{F}_2^n,\ x \mapsto y$ defined by $y_i = x_i + (x_{i+1}+1)x_{i+2}$ (where $i\in \mathbb{Z}/n\mathbb{Z}$) is used in various permutations that are part of cryptographic schemes, e.g., Keccak-f (the SHA-3 permutation), ASCON (the winner of the NIST lightweight cryptography competition), Xoodoo, Rasta and Subterranean (2.0).
In this paper, we study various algebraic properties of this map.
We consider $\chi_n$ (through vectorial isomorphism) as a univariate polynomial.
We show that it is a power function if and only if $n=1,3$.
We furthermore compute bounds on the sparsity and degree of these univariate polynomials, and the number of different univariate representations.
Secondly, we compute the number of monomials of given degree in the inverse of $\chi_n$ (if it exists); these numbers coincide with binomial coefficients.
Lastly, we consider $\chi_n$ as a polynomial map, to study whether the same rule ($y_i = x_i + (x_{i+1}+1)x_{i+2}$) gives a bijection on field extensions of $\mathbb{F}_2$.
We show that this is not the case for extensions whose degree is divisible by two or three.
Based on these results, we conjecture that this rule does not give a bijection on any extension field of $\mathbb{F}_2$.
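A short script makes the base-field behavior tangible: it applies the rule $y_i = x_i + (x_{i+1}+1)x_{i+2}$ to all of $\mathbb{F}_2^n$ and checks bijectivity for small $n$ (it is a permutation exactly for odd $n$, a classical fact going back to Daemen's thesis).

```python
def chi(x, n):
    """Apply y_i = x_i + (x_{i+1}+1)*x_{i+2} over GF(2), indices mod n."""
    bit = lambda v, i: (v >> (i % n)) & 1
    y = 0
    for i in range(n):
        y |= (bit(x, i) ^ ((bit(x, i + 1) ^ 1) & bit(x, i + 2))) << i
    return y

for n in range(1, 9):
    image = {chi(x, n) for x in range(1 << n)}
    print(n, "bijective" if len(image) == 1 << n else "not bijective")
# chi_n is a permutation of F_2^n exactly when n is odd.
```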
Ivan Buchinskiy, Matvei Kotov, Alexander Treier
Several key exchange protocols based on tropical circulant matrices were proposed in the last two years. In this paper, we show that protocols offered by M. Durcheva [M. I. Durcheva. TrES: Tropical Encryption Scheme Based on Double Key Exchange. In: Eur. J. Inf. Tech. Comp. Sci. 2.4 (2022), pp. 11–17], by B. Amutha and R. Perumal [B. Amutha and R. Perumal. Public key exchange protocols based on tropical lower circulant and anti-circulant matrices. In: AIMS Math. 8.7 (2023), pp. 17307–17334.], and by H. Huang, C. Li, and L. Deng [H. Huang, C. Li, and L. Deng. Public-Key Cryptography Based on Tropical Circular Matrices. In: Appl. Sci. 12.15 (2022), p. 7401] are insecure.
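For context, the snippet below shows the objects these protocols work with: tropical (min-plus) matrix products and circulant matrices, whose tropical products commute, the property the key exchanges rely on. It illustrates the setting only, not the attacks; the matrix entries are arbitrary.

```python
def trop_mul(A, B):
    """Tropical (min-plus) product: (A*B)[i][j] = min_k (A[i][k] + B[k][j])."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def circulant(first_row):
    n = len(first_row)
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

A = circulant([0, 3, 7, 1])
B = circulant([2, 5, 0, 4])
# Circulants are tropical polynomials in the cyclic shift, so they commute.
print(trop_mul(A, B) == trop_mul(B, A))   # True
```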
Yang Tan, Bo Lv
Private Set Intersection Cardinality (PSI-CA) is a type of secure two-party computation. It enables two parties, each holding a private set, to jointly compute the cardinality of their intersection without revealing any other private information about their respective sets.
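To fix ideas, here is a toy hash-and-exponentiate PSI-CA sketch in the generic DDH style, heavily simplified (a sorted list stands in for a random shuffle, the modulus is an arbitrary prime, and no security is claimed); it is not the exact protocol attacked in the paper.

```python
import hashlib, secrets

P = 2**127 - 1                       # toy prime modulus (illustration only)

def H(item):
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

SA, SB = {"apple", "pear", "plum"}, {"pear", "plum", "fig"}
a = secrets.randbelow(P - 2) + 1     # Alice's secret exponent
b = secrets.randbelow(P - 2) + 1     # Bob's secret exponent

# Round 1: Alice sends her blinded set (sorted = toy stand-in for a shuffle).
msg1 = sorted(pow(H(x), a, P) for x in SA)
# Round 2: Bob re-blinds Alice's values and sends his own blinded set.
msg2 = {pow(t, b, P) for t in msg1}                 # {H(x)^(ab)}
msg3 = sorted(pow(H(y), b, P) for y in SB)          # {H(y)^b}
# Alice finishes: exponentiation commutes, so matches reveal only the count.
cardinality = len(msg2 & {pow(t, a, P) for t in msg3})
print(cardinality)                                   # 2 (pear, plum)
```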
In this paper, we manage to break two PSI-CA protocols by recovering the specific intersection items in polynomial time. Among them, the PSI-CA protocol proposed by De Cristofaro et al. in 2012 is the most popular PSI-CA protocol according to Google Scholar search results, and it is still deemed one of the most efficient PSI-CA protocols.
In this paper, we also propose several solutions to these protocols' security problems.
Hadas Zeilberger, Binyi Chen, Ben Fisch
Interactive Oracle Proofs of Proximity (IOPPs) are a powerful tool for constructing succinct non-interactive arguments of knowledge (SNARKs) in the random oracle model, which are fast and plausibly post-quantum secure. The Fast Reed-Solomon IOPP (FRI) is the most widely used in practice, while tensor-code IOPPs (such as Brakedown) achieve significantly faster prover times at the cost of much larger proofs. IOPPs are used to construct polynomial commitment schemes (PCS), which are not only an important building block for SNARKs but also have a wide range of independent applications.
This work introduces Basefold, a generalization of the FRI IOPP to a broad class of linear codes beyond Reed-Solomon, which we call $\textit{foldable linear codes}$. We construct a new family of foldable linear codes, which are a special type of randomly punctured Reed-Muller code, and prove tight bounds on their minimum distance. Finally, we introduce a new construction of a multilinear PCS from any foldable linear code, which is based on interleaving Basefold with the classical sumcheck protocol for multilinear polynomial evaluation. As a special case, this gives a new multilinear PCS from FRI.
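The heart of FRI, which Basefold generalizes from Reed-Solomon to arbitrary foldable codes, is a fold that halves a polynomial's degree using a verifier challenge. Below is a toy sketch of that fold over a small prime field; all values are illustrative.

```python
P = 97   # toy prime field (illustration only)

def fold(coeffs, r):
    """One FRI-style fold: f(X) = f_e(X^2) + X*f_o(X^2)  ->  g = f_e + r*f_o."""
    even, odd = coeffs[0::2], coeffs[1::2]
    return [(e + r * o) % P for e, o in zip(even, odd)]

def evaluate(coeffs, x):
    acc = 0
    for a in reversed(coeffs):            # Horner evaluation mod P
        acc = (acc * x + a) % P
    return acc

f = [3, 1, 4, 1, 5, 9, 2, 6]   # degree-7 polynomial, arbitrary coefficients
r = 11                          # verifier challenge
g = fold(f, r)                  # degree-3 polynomial

# Local consistency check the verifier performs, at a sample point x:
x = 5
fe = (evaluate(f, x) + evaluate(f, (-x) % P)) * pow(2, -1, P)      # f_e(x^2)
fo = (evaluate(f, x) - evaluate(f, (-x) % P)) * pow(2 * x, -1, P)  # f_o(x^2)
assert (fe + r * fo) % P == evaluate(g, pow(x, 2, P))
```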
In addition to these theoretical contributions, the Basefold PCS instantiated with our new foldable linear codes offers a more reasonable tradeoff between prover time, proof size, and verifier time than prior constructions. For instance, for polynomials over a $64$-bit field with $12$ variables, the Basefold prover is faster than both Brakedown and FRI-PCS ($2$ times faster than Brakedown and $3$ times faster than FRI-PCS), and its proof is $4$ times smaller than Brakedown's. On the other hand, for polynomials with $25$ variables, Basefold's prover is $6.5$ times faster than FRI-PCS, its proof is $2.5$ times smaller than Brakedown's, and its verifier is $7.5$ times faster. Using Basefold to compile the Hyperplonk PIOP [CBBZ23] results in an extremely fast implementation of Hyperplonk, which in addition to having competitive performance on general circuits, is particularly fast for circuits with high-degree custom gates (e.g., signature verification and table lookups). Hyperplonk with Basefold approximately matches the speed of Hyperplonk with Brakedown, but with a proof size that is more than $5$ times smaller. Finally, Basefold maintains performance across a wider variety of field choices than FRI, which requires FFT-friendly fields. Thus, Basefold can have an extremely fast prover compared to SNARKs from FRI for special applications. Benchmarking a circom ECDSA verification circuit with curve secp256k1, Hyperplonk with Basefold has a prover time that is more than $200\times$ faster than with FRI, and its proof size is $5.8$ times smaller than Hyperplonk with Brakedown.
03 November 2023
Wonseok Choi, Minki Hhan, Yu Wei, Vassilis Zikas
Security proofs of symmetric-key primitives typically consider an idealized world with access to a (uniformly) random function. The starting point of our work is the observation that such an ideal world leads to underestimating the actual security of certain primitives. As an illustrative example, $\mathsf{XoP2}$, which relies on two independent random permutations, is proven to exhibit far superior concrete security compared to $\mathsf{XoP}$, which employs a single permutation with domain separation. But the main reason for this is an artifact of the idealized model used in the proof: in the random-function-ideal world, $\mathsf{XoP}$ might hit a trivially bad event (outputting 0) which cannot occur in the real/domain-separated world.
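The "trivially bad event" is easy to see in code: because the underlying permutation is injective and the two domain-separated inputs differ, $\mathsf{XoP}$ can never output 0, whereas an ideal random function outputs 0 with positive probability on every query. A toy sketch (parameters made up; $\mathsf{XoP}$ here outputs $n+1$ bits for simplicity):

```python
import random

n = 8
random.seed(1)   # deterministic toy run

def random_perm(bits):
    table = list(range(1 << bits))
    random.shuffle(table)
    return table

P  = random_perm(n + 1)                   # one (n+1)-bit permutation
P1, P2 = random_perm(n), random_perm(n)   # two independent n-bit permutations

def xop(x):    # single permutation, domain separated on the low input bit
    return P[(x << 1) | 0] ^ P[(x << 1) | 1]

def xop2(x):   # XOR of two independent permutations
    return P1[x] ^ P2[x]

# XoP never outputs 0: P is injective and its two inputs are distinct.
print(any(xop(x) == 0 for x in range(1 << n)))    # always False
print(any(xop2(x) == 0 for x in range(1 << n)))   # may well be True
```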
Motivated by this, we put forth the analysis of such primitives in an updated ideal world, which we call the {\em fine-tuned} setting, where the above artifact is eliminated. We provide fine-tuned (and enhanced) security analyses for $\mathsf{XoP}$ and $\mathsf{XoP}$-based MACs: $\mathsf{nEHtM}$ and $\mathsf{DbHtS}$. Our analyses demonstrate that the security of $\mathsf{XoP}$-based and $\mathsf{XoP2}$-based constructions are, in fact, far more similar than what was previously proven.
Concretely, for the number of users $u$ and the maximum number of queries per user $q_m$, we show that the multi-user ``fine-tuned'' security bound of $\mathsf{XoP}$ can be proven as $O\left({u^{0.5}{q_m}^{2}}/{2^{2n}}\right)$ via the squared-ratio method proposed by Chen et al. [CRYPTO'23], matching the security bound of $\mathsf{XoP2}$ proven there. We also show the compatibility of the fine-tuned model with the chi-squared method proposed by Dai et al. [CRYPTO'17], and show that $\mathsf{XoP}$ and $\mathsf{XoP2}$ enjoy the same security bound in the fine-tuned setting regardless of the proving tool.
Finally, we turn to the security analysis of MACs in the multi-user setting, where the effect of transitioning the proofs to the fine-tuned setting is even greater. Concretely, we are able to prove unexpected improvements in the security bounds for both $\mathsf{nEHtM}$ and $\mathsf{DbHtS}$. Our security proofs rely on a fine-tuned and extended version of Mirror theory for both lower and upper bounds, which yields more versatile and improved security proofs.
Of independent interest, this extension allows us to prove the multi-user MAC security of $\mathsf{nEHtM}$ in the nonce-misuse model, while the previous analysis only applied to the multi-user PRF security in the nonce-respecting model. As a side note, we also point out (and fix) a flaw in the original analysis of Chen et al.
Surya Mathialagan
When outsourcing a database to an untrusted remote server, one might want to verify the integrity of its contents while accessing it. To solve this, Blum et al. [FOCS '91] propose the notion of memory checking. Memory checking allows a user to run a RAM program on a remote server while verifying the integrity of the remote storage using only small local storage.
In this work, we define and initiate the formal study of memory checking for Parallel RAMs (PRAMs). The parallel RAM model is very expressive and captures many modern architectures such as multi-core architectures and cloud clusters. When multiple clients run a PRAM algorithm on a shared remote server, concurrency issues may cause inconsistencies. Therefore, integrity verification is an even more desirable property in this setting.
Assuming only the existence of one-way functions, we construct an online memory checker (one that reports faults as soon as they occur) for PRAMs with $O(\log N)$ simulation overhead in both work and depth. In addition, we construct an offline memory checker (one that reports faults only after a long sequence of operations) with amortized $O(1)$ simulation overhead in both work and depth. Our constructions match the best known simulation overhead of the memory checkers in the standard single-user RAM setting. As an application of our parallel memory checking constructions, we additionally construct the first maliciously secure oblivious parallel RAM (OPRAM) with polylogarithmic overhead.
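For intuition, here is the classic Merkle-tree construction behind online memory checkers in the plain sequential setting: the client keeps only the root, and every access is verified or re-authenticated along an $O(\log N)$ path, matching the logarithmic overhead quoted above. This is a textbook sketch under those assumptions, not the paper's parallel construction.

```python
import hashlib

def H(left, right):
    return hashlib.sha256(left + right).digest()

class MerkleMemory:
    """Toy online memory checker: the client keeps only the 32-byte root
    (O(1) trusted state); the untrusted server keeps the tree. Every read is
    verified, and every write re-authenticated, along one O(log N) path."""
    def __init__(self, n):
        self.n = n
        self.tree = [b"\x00" * 32] * (2 * n)   # untrusted server storage
        for i in range(n - 1, 0, -1):
            self.tree[i] = H(self.tree[2 * i], self.tree[2 * i + 1])
        self.root = self.tree[1]               # trusted client state

    def read(self, addr):
        i = self.n + addr
        value = digest = self.tree[i]
        while i > 1:                           # recompute the path to the root
            sib = self.tree[i ^ 1]
            digest = H(digest, sib) if i % 2 == 0 else H(sib, digest)
            i //= 2
        assert digest == self.root, "server cheated!"
        return value

    def write(self, addr, value):
        self.read(addr)                        # authenticate the old path first
        i = self.n + addr
        self.tree[i] = value
        while i > 1:
            i //= 2
            self.tree[i] = H(self.tree[2 * i], self.tree[2 * i + 1])
        self.root = self.tree[1]               # update the trusted root

mem = MerkleMemory(8)
mem.write(3, b"\x01" * 32)
assert mem.read(3) == b"\x01" * 32
mem.tree[8 + 3] = b"\x02" * 32                 # the server tampers with a leaf
try:
    mem.read(3)
except AssertionError as e:
    print(e)                                    # tampering is detected
```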