International Association for Cryptologic Research


IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

30 September 2024

Gavin Cho, Georg Fuchsbauer, Adam O'Neill
ePrint Report
We show that the Schnorr signature scheme meets existential unforgeability under chosen-message attack (EUF-CMA) in the random oracle model (ROM) if the circular discrete-logarithm (CDL) assumption, a new, non-interactive variant of DL we introduce, holds in the underlying group. Our reduction is completely tight, meaning the constructed adversary against CDL has both essentially the same running time and success probability as the assumed forger. To our knowledge, we are the first to exhibit such a reduction. Previously, Bellare and Dai (INDOCRYPT 2020) showed the scheme is EUF-CMA in the ROM if their multi-base DL assumption holds in the underlying group. However, multi-base DL is interactive; moreover, their reduction, while tighter than the initial result of Pointcheval and Stern (EUROCRYPT 1996), still incurs a security loss that is linear in the number of the adversary’s RO queries. We justify CDL by showing it holds in two carefully chosen idealized models, which idealize different aspects of our assumption. Our quantitative bounds in these models are essentially the same as for DL, giving strong evidence that CDL is as hard as DL in appropriate elliptic-curve groups.
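
As background for readers less familiar with the scheme, the following minimal Python sketch of textbook Schnorr signing and verification over a toy prime-order subgroup shows where a discrete-logarithm-type assumption enters: a forger must produce s = k + c·x without knowing the secret key x. The group parameters and hash choice are illustrative assumptions, not the paper's setting.

    # Minimal (toy) Schnorr signature sketch in Python. Parameters are deliberately
    # tiny and illustrative; they provide no real security.
    import hashlib, secrets

    p, q, g = 2039, 1019, 4          # p = 2q + 1, g generates the order-q subgroup

    def H(*parts) -> int:
        data = b"|".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def keygen():
        x = secrets.randbelow(q - 1) + 1     # secret key
        return x, pow(g, x, p)               # (sk, pk)

    def sign(x, msg):
        k = secrets.randbelow(q - 1) + 1     # per-signature nonce
        R = pow(g, k, p)
        c = H(R, msg)                        # Fiat-Shamir challenge (random oracle)
        s = (k + c * x) % q
        return R, s

    def verify(X, msg, sig):
        R, s = sig
        c = H(R, msg)
        return pow(g, s, p) == (R * pow(X, c, p)) % p

    x, X = keygen()
    sig = sign(x, "hello")
    assert verify(X, "hello", sig)
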
Peng Wang, Shuping Mao, Ruozhou Xu, Jiwu Jing, Yuewu Wang
ePrint Report
XCB, a tweakable enciphering mode, is part of IEEE Std. 1619.2 for shared storage media. We show that no version of XCB is secure by presenting three plaintext recovery attacks. A key observation is that XCB behaves like an LRW1-type tweakable block cipher for single-block messages, which lacks CCA security. The first attack targets one-block XCB, using three queries to recover the plaintext. The second requires four queries to recover all but one block of the plaintext. The last requires seven queries to recover the full plaintext. The first attack applies to any scheme that follows the XCB structure, whereas the latter two attacks work on all versions of XCB, exploiting the separable property of the underlying universal hash function. We also discuss the impact of these vulnerabilities on XCB-based applications, such as disk encryption, nonce-based encryption, deterministic authenticated encryption and robust authenticated encryption, highlighting the risks due to XCB's failure to achieve STPRP security. To address these flaws, we propose the XCB* structure, an improved version of XCB that adds only two XOR operations. We prove that XCB* is STPRP-secure when using AXU hash functions, SPRPs, and a secure IV-based stream cipher.
Brandon "Cryptskii" Ramsay
ePrint Report
Overpass Channels presents a groundbreaking approach to blockchain scalability, offering a horizontally scalable, privacy-enhanced payment network with independent verification, fluid liquidity, and robust censorship resistance. This paper introduces a novel architecture that leverages zero-knowledge proofs, specifically zk-SNARKs, to ensure transaction validity and privacy while enabling unprecedented throughput and efficiency. By eliminating the need for traditional consensus mechanisms and miners, Overpass Channels achieves remarkable cost-effectiveness and energy efficiency. The system's design focuses on unilateral payment channels and off-chain transaction processing, allowing for high-speed, low-latency operations without compromising security or decentralization. This paper provides a comprehensive analysis of the Overpass Channels system, including its cryptographic foundations, scalability metrics, integration, and potential applications across various domains, from global payments to confidential voting systems and secure health record management.
Patrick Ehrler, Abdelkarim Kati, Thomas Schneider, Amos Treiber
ePrint Report
Encrypted Search Algorithms (ESAs) are a technique to encrypt data while the user can still search over it. ESAs can protect privacy and ensure security of sensitive data stored on remote storage. Originally, ESAs were used in the context of documents that consist of keywords. The user encrypts the documents, sends them to a remote server and is still able to search for keywords, without exposing information about the plaintext. The idea of ESAs has also been applied to relational databases, where queries (similar to SQL statements) can be privately executed on an encrypted database. Yet, just like traditional Keyword-ESAs, Relational-ESAs have the drawback of exposing some information, called leakage. Leakage attacks have been proposed in the literature that use this information together with auxiliary information to learn details about the plaintext. However, these leakage attacks have overwhelmingly been designed for and applied to Keyword-ESAs and not Relational-ESAs. In this work, we review the suitability of major leakage attacks against ESAs in the relational setting by adapting them accordingly. We perform extensive re-evaluations of the attacks on various relational datasets with different properties. Our evaluations show that major attacks can work against Relational-ESAs in the known-data setting. However, the attack performance differs between datasets, exploited patterns, and attacks.
Zhenjian Lu, Noam Mazor, Igor C. Oliveira, Rafael Pass
ePrint Report
We consider indistinguishability obfuscation (iO) for multi-output circuits $C:\{0,1\}^n\to\{0,1\}^n$ of size $s$, where $s$ is the number of AND/OR/NOT gates in $C$. Under the worst-case assumption that NP $\nsubseteq$ BPP, we establish that there is no efficient indistinguishability obfuscation scheme that outputs circuits of size $s + o(s/\log s)$. In other words, to be secure, an efficient iO scheme must incur an $\Omega(s/\log s)$ additive overhead in the size of the obfuscated circuit. The hardness assumption under which this negative result holds is minimal, since an optimal iO scheme with no circuit size overhead exists if NP $\subseteq$ BPP.

Expanding on this result, we also rule out iO for single-output database-aided circuits with an arbitrary polynomial overhead in circuit size. This strengthens an impossibility result by Goldwasser and Rothblum [GR07], which considered circuits with access to an exponential-length database that the obfuscator has oracle access to; in contrast, our impossibility result holds even w.r.t. polynomial-size databases and even w.r.t. obfuscators that may run in time polynomial in the size of the database (and thus may read the whole database).

The proof of our main result builds on a connection between obfuscation and meta-complexity put forward by Mazor and Pass [MP24], and on the NP-hardness of circuit minimization for multi-output circuits established by Ilango, Loff, and Oliveira [ILO20], together with other techniques from cryptography and complexity theory.
Nikhil Vanjani, Pratik Soni, Sri AravindaKrishnan Thyagarajan
ePrint Report
In scenarios where a seller holds sensitive data $x$, like employee / patient records or ecological data, and a buyer seeks to obtain an evaluation of a specific function $f$ on this data, solutions in trustless digital environments like blockchain-based Web3 systems typically fall into two categories: (1) smart contract-powered solutions and (2) cryptographic solutions leveraging tools such as adaptor signatures. The former approach offers atomic transactions where the buyer learns the function evaluation $f(x)$ (and not $x$ entirely) upon payment. However, this approach is often inefficient, costly, lacks privacy for the seller's data, and is incompatible with systems that do not support smart contracts with the required functionalities. In contrast, the adaptor signature-based approach addresses all of the above issues but comes with an "all-or-nothing" guarantee: the buyer fully extracts $x$, and functional extraction of the sensitive data is not supported. In this work, we aim to bridge the gap between these approaches, developing a solution that enables fair functional sales of information while offering improved efficiency, privacy, and compatibility similar to adaptor signatures.

Towards this, we propose functional adaptor signatures (FAS), a novel cryptographic primitive that achieves all the desired properties listed above. Using FAS, the seller can publish an advertisement committing to $x$. The buyer can pre-sign the payment transaction w.r.t. a function $f$, and send it along with the transaction to the seller. The seller adapts the pre-signature into a valid (buyer's) signature and posts the payment and the adapted signature on the blockchain to get paid. Finally, using the pre-signature and the posted signature, the buyer efficiently extracts $f(x)$, and completes the sale. We formalize the security properties of FAS, among which is a new notion called witness privacy, capturing the seller's privacy, which ensures the buyer does not learn anything beyond $f(x)$. We present multiple variants of witness privacy, namely witness hiding, witness indistinguishability, and zero-knowledge, to capture varying levels of leakage about $x$ beyond $f(x)$ to a malicious buyer.
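
To make the flow concrete, the toy Python sketch below shows the plain Schnorr adaptor-signature sale that FAS generalizes: the buyer pre-signs with respect to a statement T = g^t, the seller adapts and publishes a valid signature, and the buyer extracts the full witness t (the "all-or-nothing" baseline). All names and parameters are illustrative assumptions; the FAS construction itself replaces this extraction with functional extraction of f(x).

    # Toy Schnorr adaptor-signature flow (illustrative only): the buyer pre-signs,
    # the seller adapts using a secret witness t, and the buyer then extracts t.
    # This is the plain "all-or-nothing" flow that FAS generalizes to f(x).
    import hashlib, secrets

    p, q, g = 2039, 1019, 4                      # toy group: p = 2q + 1, g of order q

    def H(*parts) -> int:
        data = b"|".join(str(v).encode() for v in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    x = secrets.randbelow(q - 1) + 1             # buyer's signing key
    X = pow(g, x, p)
    t = secrets.randbelow(q - 1) + 1             # seller's secret witness
    T = pow(g, t, p)                             # public statement advertised by seller
    msg = "payment-tx"

    # Buyer: pre-signature with respect to the statement T.
    k = secrets.randbelow(q - 1) + 1
    R = pow(g, k, p)
    c = H((R * T) % p, X, msg)
    s_pre = (k + c * x) % q                      # pre-signature (R, s_pre)

    # Seller: adapts the pre-signature into a valid Schnorr signature and publishes it.
    s = (s_pre + t) % q
    R_full = (R * T) % p
    assert pow(g, s, p) == (R_full * pow(X, c, p)) % p   # verifies as an ordinary signature

    # Buyer: extracts the full witness t from the published signature.
    t_extracted = (s - s_pre) % q
    assert t_extracted == t
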

We introduce two efficient constructions of FAS supporting linear functions (like statistics/aggregates, kernels in machine learning, etc.), that satisfy the strongest notion of witness privacy. One construction is based on prime-order groups and compatible with Schnorr signatures for payments, and the other is based on lattices and compatible with a variant of Lyubashevsky's signature scheme. A central conceptual contribution of our work lies in revealing a surprising connection between functional encryption, a well-explored concept over the past decade, and adaptor signatures, a relatively new primitive in the cryptographic landscape. On a technical level, we avoid heavy cryptographic machinery and achieve improved efficiency, by making black-box use of building blocks like inner product functional encryption (IPFE) while relying on certain security-enhancing techniques for the IPFE in a non-black-box manner. We implement our FAS construction for Schnorr signatures and show that for reasonably sized seller witnesses, the different operations are quite efficient even for commodity hardware.
Yuxuan Wang, Jintong Yu, Shipei Qu, Xiaolin Zhang, Xiaowei Li, Chi Zhang, Dawu Gu
ePrint Report
Keccak acts as the hash algorithm and eXtendable-Output Function (XOF) specified in the NIST standard drafts for Kyber and Dilithium. The Keccak output is highly correlated with sensitive information. In RSA and ECDSA, by contrast, hash-like components are only used to process public information, such as the message. The importance and sensitivity of hash-like components like Keccak are therefore much higher in Kyber and Dilithium than in traditional public-key cryptography. However, few works study Keccak regarding the physical security of Kyber and Dilithium. In this paper, we propose a practical fault attack scheme on Keccak to compromise Kyber and Dilithium on ARM Cortex-M devices. Specifically, by injecting loop-abort faults in the iterative assignments or updates of Keccak, we propose six attacks that can set the Keccak output to a known value. These attacks can be exploited to disrupt the random number expansion or other critical processes in Kyber and Dilithium, thereby recovering sensitive information derived from the Keccak output. In this way, we propose eight attack strategies on Kyber and seven on Dilithium, achieving key recovery, signature forgery, and verification bypass. To validate the practicality of the proposed attack strategies, we perform fault characterization on five real-world devices belonging to four different series (ARM Cortex-M0+, M3, M4, and M33). The success rate is up to 89.5%, demonstrating the feasibility of loop-abort faults. This paper also provides a guide for reliably inducing loop-abort faults on ARM Cortex-M devices using electromagnetic fault injection. We further validate our complete attacks on Kyber and Dilithium based on the official implementations, achieving a success rate of up to 55.1%. The results demonstrate that the excessive use of Keccak in generating and computing secret information leads to severe vulnerabilities. Our work can potentially be migrated to other post-quantum cryptographic algorithms that use Keccak, such as Falcon, BIKE, and HQC.
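
The effect of a loop-abort fault is easiest to see on a toy model: if the absorb loop of a sponge is aborted before any secret-dependent block is processed, the squeezed output is fixed by public constants. The miniature sponge below is purely illustrative and is not Keccak or the paper's attack code.

    # Toy sponge illustrating a loop-abort fault: skipping the absorb loop leaves the
    # state at its (public) initial value, so the "random" output becomes predictable.
    import hashlib

    def toy_permutation(state: bytes) -> bytes:
        return hashlib.sha256(state).digest()          # stand-in for the Keccak-f permutation

    def toy_sponge(seed: bytes, rounds_absorbed: int) -> bytes:
        state = bytes(32)                              # all-zero initial state
        blocks = [seed[i:i + 8] for i in range(0, len(seed), 8)]
        for i, block in enumerate(blocks):
            if i >= rounds_absorbed:                   # models a loop-abort fault after
                break                                  # `rounds_absorbed` iterations
            padded = block.ljust(8, b"\0") + bytes(24)
            state = toy_permutation(bytes(a ^ b for a, b in zip(state, padded)))
        return toy_permutation(state)                  # squeeze one output block

    secret_seed = b"this-would-be-a-secret-seed-1234"
    normal  = toy_sponge(secret_seed, rounds_absorbed=len(secret_seed) // 8)
    faulted = toy_sponge(secret_seed, rounds_absorbed=0)      # loop aborted immediately
    known   = toy_sponge(b"anything-else-entirely-0000000!!", rounds_absorbed=0)

    assert faulted == known          # faulted output no longer depends on the secret seed
    assert normal != faulted
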
Gaëtan Cassiers, Charles Momin
ePrint Report
Datasets of side-channel leakage measurements are widely used in research to develop and benchmark side-channel attack and evaluation methodologies. Compared to using custom and/or one-off datasets, widely-used and publicly available datasets improve research reproducibility and comparability. Further, performing high-quality measurements requires specific equipment and skills, while also taking a significant amount of time. Therefore, using publicly available datasets lowers the barriers to entry into side-channel research. This paper introduces the SMAesH dataset. SMAesH is an optimized masked hardware implementation of the AES with a provably secure arbitrary-order masking scheme. The SMAesH dataset contains power traces of the first-order SMAesH on two FPGAs of different generations, along with the corresponding keys, plaintexts and masking randomness. One part of the dataset uses uniformly random keys and plaintexts to enable leakage profiling, while another part uses a fixed key (still with uniformly random plaintexts) to enable attack validation or leakage assessment in a fixed-versus-random setting. We document the experimental setup used to acquire the dataset. It is built from components that are widely available. We also discuss particular methods employed to maximize the information content in the leakage traces, such as power supply selection, fine-grained trace alignment and resolution optimization.
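
As a usage illustration, a fixed-versus-random leakage assessment on such a dataset typically reduces to a Welch t-test per time sample; the Python sketch below follows that standard recipe. The file names, array shapes and threshold are hypothetical placeholders, not the SMAesH dataset's actual layout.

    # Hedged sketch of a fixed-vs-random leakage assessment (Welch's t-test per sample).
    # File names and array shapes are hypothetical placeholders, not the SMAesH layout.
    import numpy as np

    fixed_traces  = np.load("traces_fixed_key.npy")    # shape: (n_fixed,  n_samples)
    random_traces = np.load("traces_random_key.npy")   # shape: (n_random, n_samples)

    def welch_t(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        """Per-sample Welch t-statistic between two groups of traces."""
        ma, mb = a.mean(axis=0), b.mean(axis=0)
        va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
        return (ma - mb) / np.sqrt(va / len(a) + vb / len(b))

    t_stat = welch_t(fixed_traces, random_traces)
    leaky_samples = np.flatnonzero(np.abs(t_stat) > 4.5)   # common TVLA threshold
    print(f"{leaky_samples.size} samples exceed |t| > 4.5")
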
Antonio Sanso
ePrint Report
In this paper, we investigate the rough order assumption (\(RO_C\)) introduced by Braun, Damgård, and Orlandi at CRYPTO 23, which posits that class groups of imaginary quadratic fields with no small prime factors in their order are computationally indistinguishable from general class groups. We present a novel attack that challenges the validity of this assumption by leveraging properties of Mordell curves over the rational numbers. Specifically, we demonstrate that if the rank of the Mordell curve \( E_{-16D} \) is at least 2, it contradicts the rough order assumption. Our attack deterministically breaks the \(RO_C\) assumption for discriminants of a special form, assuming the parity conjecture holds and certain conditions are met. Additionally, for both special and generic cases, our results suggest that the presence of nontrivial 3-torsion elements in class groups can undermine the \(RO_C\) assumption. Although our findings are concrete for specific cases, the generic scenario relies on heuristic arguments related to the Birch and Swinnerton-Dyer (BSD) conjecture, a significant and widely believed conjecture in number theory. Attacks against 2-torsion elements in class groups are already well known, but our work introduces a distinct approach targeting 3-torsion elements. These attacks are fundamentally different in nature, and both have relatively straightforward countermeasures, though they do not generalize to higher torsions. While these results do not entirely invalidate the \(RO_C\) assumption, they highlight the need for further exploration of its underlying assumptions, especially in the context of specific torsion structures within class groups.
Ryo Yoshizumi, Hiroshi Onuki, Ryo Ohashi, Momonari Kudo, Koji Nuida
ePrint Report
Isogeny-based cryptography is one of the candidates for post-quantum cryptography. Recently, many isogeny-based cryptosystems using isogenies between Kummer surfaces were proposed. Most of those cryptosystems use $(2,2)$-isogenies. However, to enhance the possibility of cryptosystems, higher-degree isogenies, say $(\ell,\ell)$-isogenies for an odd $\ell$, are also crucial. For an odd $\ell$, Lubicz and Robert gave a formula to compute $(\ell)^g$-isogenies in general dimension $g$. In this paper, we propose explicit and efficient algorithms to compute $(\ell,\ell)$-isogenies between Kummer surfaces, based on the Lubicz-Robert formula. In particular, we propose two algorithms for computing the codomain of the isogeny and two algorithms for evaluating the image of a point under the isogeny. Then, we count the number of arithmetic operations required for each of our proposed algorithms, and determine the most efficient algorithm in terms of the number of arithmetic operations from each of the two types of algorithms for each $\ell$. As an application, using the most efficient one, we implemented the SIDH attack on B-SIDH in SageMath. In the setting that originally claimed 128-bit security, our implementation was able to recover the secret key in about 11 hours.
Paul Lou, Nathan Manohar, Amit Sahai
ePrint Report
To date, the strongest notions of security achievable for two-round publicly-verifiable cryptographic proofs for $\mathsf{NP}$ are witness indistinguishability (Dwork-Naor 2000, Groth-Ostrovsky-Sahai 2006), witness hiding (Bitansky-Khurana-Paneth 2019, Kuykendall-Zhandry 2020), and super-polynomial simulation (Pass 2003, Khurana-Sahai 2017). On the other hand, zero-knowledge and even weak zero-knowledge (Dwork-Naor-Reingold-Stockmeyer 1999) are impossible in the two-round publicly-verifiable setting (Goldreich-Oren 1994). This leaves an enormous gap in our theoretical understanding of known achievable security and the impossibility results for two-round publicly-verifiable cryptographic proofs for $\mathsf{NP}$.

Towards filling this gap, we propose a new and natural notion of security, called witness semantic security, that captures the natural and strong notion that an adversary should not be able to learn any partial information about the prover's witness beyond what it could learn given only the statement $x$. Not only does our notion of witness semantic security subsume both witness indistinguishability and witness hiding, but it also has an easily appreciable interpretation.

Moreover, we show that assuming the subexponential hardness of LWE, there exists a two-round public-coin publicly-verifiable witness semantic secure argument. To our knowledge, this is the strongest form of security known for this setting.

As a key application of our work, we show that non-interactive zero-knowledge (NIZK) arguments in the common reference string (CRS) model can additionally maintain witness semantic security even when the CRS is maliciously generated. Our work gives the first construction from (subexponential) standard assumptions that achieves a notion stronger than witness-indistinguishability against a malicious CRS authority.

In order to achieve our results, we give the first construction of a ZAP from subexponential LWE that is adaptively sound. Additionally, we propose a notion of simulation using non-uniform advice about a malicious CRS, which we also believe will be of independent interest.
Lih-Chung Wang, Chun-Yen Chou, Jintai Ding, Yen-Liang Kuan, Jan Adriaan Leegwater, Ming-Siou Li, Bo-Shu Tseng, Po-En Tseng, Chia-Chun Wang
ePrint Report
SNOVA is one of the Round 1 submissions to the NIST call for additional post-quantum signatures. SNOVA is a UOV variant that uses the noncommutative-ring technique to reduce the size of the public key. SNOVA's public key size and signature size are well-balanced and have good performance. Recently, Beullens proposed a forgery attack against SNOVA, pointing out that the parameters of SNOVA can be attacked. Beullens also argued that with some slight adjustments his attacks can be prevented. In this note, we explain Beullens' forgery attack and show that the attack can be invalidated by two different approaches. Finally, we show that these two approaches do not increase the sizes of the public keys or signatures and that the current parameters satisfy the security requirements of NIST.
Arka Rai Choudhuri, Sanjam Garg, Guru-Vamsi Policharla, Mingyuan Wang
ePrint Report
An important consideration with the growth of the DeFi ecosystem is the protection of clients who submit transactions to the system. As it currently stands, the public visibility of these transactions in the memory pool (mempool) makes them susceptible to market manipulations such as frontrunning and backrunning. More broadly, for various reasons—ranging from avoiding market manipulation to including time-sensitive information in their transactions—clients may want the contents of their transactions to remain private until they are executed, i.e. they have *pending transaction privacy*. Therefore, *mempool privacy* is becoming an increasingly important feature as DeFi applications continue to spread.

We construct the first *practical* mempool privacy scheme that uses a *one-time* DKG setup for $n$ decryption servers. Our scheme ensures the strong privacy requirement by not only hiding the transactions until they are decrypted but also guaranteeing privacy for transactions that were not selected in the epoch (*pending transaction privacy*). For each epoch (or block), clients can encrypt their transactions so that, once $B$ (encrypted) transactions are selected for the epoch, they can be decrypted by each decryption server while communicating only $O(1)$ information.
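
For orientation, the sketch below shows a generic (n, k)-threshold ElGamal decryption with a one-time sharing of the key and constant-size partial decryptions per server; it is a simplified stand-in for the kind of committee decryption described here, not the paper's batched construction, and all parameters are toy values.

    # Generic (n, k)-threshold ElGamal decryption sketch in a toy group: a one-time
    # Shamir sharing of the decryption key, then constant-size partial decryptions.
    # This is an illustrative stand-in, not the paper's batched construction.
    import secrets

    p, q, g = 2039, 1019, 4                       # toy group: p = 2q + 1, g of order q
    n, k = 5, 3                                   # 5 decryption servers, threshold 3

    # One-time setup: Shamir-share the secret key x among the n servers.
    x = secrets.randbelow(q - 1) + 1
    X = pow(g, x, p)                              # public encryption key
    coeffs = [x] + [secrets.randbelow(q) for _ in range(k - 1)]
    shares = {i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
              for i in range(1, n + 1)}

    # A client encrypts a (group-encoded) transaction under the committee key.
    tx = pow(g, 42, p)                            # toy "transaction" as a group element
    r = secrets.randbelow(q - 1) + 1
    c1, c2 = pow(g, r, p), (tx * pow(X, r, p)) % p

    # Each of k servers publishes one constant-size partial decryption c1^{x_i}.
    subset = [1, 3, 5]
    partials = {i: pow(c1, shares[i], p) for i in subset}

    def lagrange_at_zero(i, subset):
        num = den = 1
        for j in subset:
            if j != i:
                num = (num * (-j)) % q
                den = (den * (i - j)) % q
        return (num * pow(den, -1, q)) % q

    c1_x = 1
    for i in subset:
        c1_x = (c1_x * pow(partials[i], lagrange_at_zero(i, subset), p)) % p

    recovered = (c2 * pow(c1_x, -1, p)) % p
    assert recovered == tx
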

Our result improves upon the best-known prior works, which either: (i) require an expensive initial setup involving a (special purpose) multiparty computation protocol executed by the $n$ decryption servers, along with an additional *per-epoch* setup; (ii) require each decryption server to communicate $O(B)$ information; or (iii) do not guarantee pending transaction privacy.

We implement our scheme and find that transactions can be encrypted in approximately 8.5 ms, independent of committee size, and the communication required to decrypt an entire batch of transactions is 48 bytes per party, independent of the number of transactions. If deployed on Ethereum, which processes close to 500 transactions per block, it takes close to 3.2 s for each committee member to compute a partial decryption and 3.0 s to decrypt all transactions for a block in single-threaded mode. Compared to prior work, which had an expensive setup phase per epoch, we incur $<2\times$ overhead in the worst case. On some metrics such as partial decryptions size, we actually fare better.
Jipeng Zhang, Yuxing Yan, Junhao Huang, Çetin Kaya Koç
ePrint Report
With the standardization of NIST post-quantum cryptographic (PQC) schemes, optimizing these PQC schemes across various platforms presents significant research value. While most existing software implementation efforts have concentrated on ARM platforms, research on PQC implementations utilizing various RISC-V instruction set architectures (ISAs) remains limited. In light of this gap, this paper proposes comprehensive and efficient optimizations of Keccak, Kyber, and Dilithium on RV{32,64}IM{B}{V}. We thoroughly optimize these implementations for dual-issue CPUs, believing that our work on various RISC-V ISAs will provide valuable insights for future PQC deployments.

Specifically, for Keccak, we revisit a range of optimization techniques, including bit interleaving, lane complementing, in-place processing, and hybrid vector/scalar implementations. We construct an optimal combination of methods aimed at achieving peak performance on dual-issue CPUs for various RISC-V ISAs. For the NTT implementations of Kyber and Dilithium, we deliver optimized solutions based on Plantard and Montgomery arithmetic for diverse RISC-V ISAs, incorporating extensive dual-issue enhancements. Additionally, we improve the signed Plantard multiplication algorithm proposed by Akoi et al. Ultimately, our testing demonstrates that our implementations of Keccak and NTT across various ISAs achieve new performance records. More importantly, they significantly enrich the PQC software ecosystem for RISC-V.
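
For context, the coefficient-multiplication kernels referred to above follow standard Montgomery-style reduction; the Python reference model below (written for Kyber's modulus q = 3329, chosen here for illustration) shows the arithmetic that the scalar and vector NTT routines must implement efficiently. It is a functional model only, not the paper's RISC-V code.

    # Reference model of 16-bit Montgomery reduction as used in Kyber-style NTTs
    # (Python model for clarity; the paper's contribution is the RISC-V assembly).
    import random

    Q = 3329                      # Kyber modulus
    R = 1 << 16                   # Montgomery radix
    QINV = pow(-Q, -1, R)         # -Q^{-1} mod R

    def montgomery_reduce(a: int) -> int:
        """Return a * R^{-1} mod Q for |a| < Q * R, result in (-Q, Q)."""
        m = ((a % R) * QINV) % R          # low half times -Q^{-1} mod R
        t = (a + m * Q) // R              # exact division by R
        return t - Q if t >= Q else t

    def mont_mul(a: int, b: int) -> int:
        """Multiply two coefficients, keeping one factor in Montgomery form."""
        return montgomery_reduce(a * b)

    # Sanity check against schoolbook modular arithmetic.
    for _ in range(1000):
        a, b = random.randrange(Q), random.randrange(Q)
        b_mont = (b * R) % Q                      # lift b into Montgomery form
        assert mont_mul(a, b_mont) % Q == (a * b) % Q
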
Pedro Branco, Arka Rai Choudhuri, Nico Döttling, Abhishek Jain, Giulio Malavolta, Akshayaram Srinivasan
ePrint Report
We present a new approach for constructing non-interactive zero-knowledge (NIZK) proof systems from vector trapdoor hashing (VTDH) -- a generalization of trapdoor hashing [Döttling et al., Crypto'19]. Unlike prior applications of trapdoor hash to NIZKs, we use VTDH to realize the hidden bits model [Feige-Lapidot-Shamir, FOCS'90], leading to black-box constructions of NIZKs. This approach gives us the following new results:

- A statistically-sound NIZK proof system based on the hardness of decisional Diffie-Hellman (DDH) and learning parity with noise (LPN) over finite fields with inverse polynomial noise rate. This gives the first statistically sound NIZK proof system that is not based on LWE, bilinear maps, or factoring.

- A dual-mode NIZK satisfying statistical zero-knowledge in the common random string mode and statistical soundness in the common reference string mode, assuming the hardness of learning with errors (LWE) with polynomial modulus-to-noise ratio. This gives the first black-box construction of such a dual-mode NIZK under LWE. This improves the recent work of Waters (STOC'24), which relied on LWE with super-polynomial modulus-to-noise ratio and required a setup phase with private coins.

The above constructions are black-box and satisfy the single-theorem zero-knowledge property. Building on the works of Feige et al. (FOCS'90) and Fischlin and Rohrbach (PKC'21), we upgrade these constructions (under the same assumptions) to satisfy the multi-theorem zero-knowledge property at the expense of making non-black-box use of cryptography.
Oskar Goldhahn, Kristian Gjøsteen
ePrint Report
Homomorphic encryption has long been used to build voting schemes. Additively homomorphic encryption only allows simple counting functions. Lattice-based fully (or somewhat) homomorphic encryption allows more general counting functions, but the required parameters quickly become impractical if used naively. It is safe to leak information during the counting function evaluation, as long as the information could be derived from the public result. To exploit this observation, we design a flexible framework for using somewhat homomorphic encryption for voting that incorporates random input and allows controlled leakage of information. We instantiate the framework using novel circuits with low but significant multiplicative depth exploiting the fact that, in the context of voting, leakage of certain information during homomorphic evaluation can be permitted. We also instantiate the framework with a circuit that uses random input to shuffle without the use of mixnets.
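
As background for the remark that additively homomorphic encryption only supports simple counting, the toy exponential-ElGamal tally below adds votes by multiplying ciphertexts and can only decrypt small counts by brute force; the parameters are illustrative and this is not the paper's lattice-based construction.

    # Toy exponential-ElGamal tally: ciphertexts multiply, plaintexts (votes) add.
    # Illustrative parameters only; this is background for the abstract, not its scheme.
    import secrets

    p, q, g = 2039, 1019, 4                  # toy group: p = 2q + 1, g of order q
    x = secrets.randbelow(q - 1) + 1         # election secret key (held by the tallier)
    h = pow(g, x, p)                         # public key

    def encrypt(vote: int):
        r = secrets.randbelow(q - 1) + 1
        return pow(g, r, p), (pow(g, vote, p) * pow(h, r, p)) % p   # Enc(v) = (g^r, g^v h^r)

    def add(ct1, ct2):
        return (ct1[0] * ct2[0]) % p, (ct1[1] * ct2[1]) % p          # homomorphic addition

    def decrypt_count(ct, max_count: int) -> int:
        c1, c2 = ct
        gm = (c2 * pow(pow(c1, x, p), -1, p)) % p                    # g^(sum of votes)
        for m in range(max_count + 1):                               # brute-force small dlog
            if pow(g, m, p) == gm:
                return m
        raise ValueError("count out of range")

    ballots = [1, 0, 1, 1, 0]                                        # yes/no votes
    tally = encrypt(ballots[0])
    for b in ballots[1:]:
        tally = add(tally, encrypt(b))
    assert decrypt_count(tally, len(ballots)) == sum(ballots)
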
Yiwen Gao, Haibin Kan, Yuan Li
ePrint Report
We enhance the provable soundness of FRI, an interactive oracle proof of proximity (IOPP) for Reed-Solomon codes introduced by Ben-Sasson et al. in ICALP 2018. More precisely, we prove the soundness error of FRI is less than $\max\left\{O\left(\frac{1}{\eta}\cdot \frac{n}{|\mathbb{F}_q|}\right), (1-\delta)^{t}\right\}$, where $\delta\le 1-\sqrt{\rho}-\eta$ is within the Johnson bound and $\mathbb{F}_q$ is a finite field with characteristic greater than $2$. Previously, the best-known provable soundness error for FRI was $\max\left\{O\left(\frac{\rho^2}{\eta^7}\cdot \frac{n^2}{|\mathbb{F}_q|}\right), (1-\delta)^{t}\right\}$, as established by Ben-Sasson et al. in FOCS 2020.

We prove that the number of \emph{bad} folding points in FRI is linear in the length $n$ of the codeword when it is $\delta$-far from the Reed-Solomon code. This implies linear proximity gaps for Reed-Solomon codes and improves the provable soundness of batched FRI. Our results indicate that the FRI protocol can be implemented over a smaller field, thereby enhancing its efficiency. Furthermore, for a fixed finite field $\mathbb{F}_q$, we prove that FRI can achieve improved security.
Ruchi Telang Gode
ePrint Report
It is well known that estimating a sharp lower bound on the second-order nonlinearity of a general class of cubic Boolean functions is a difficult task. In this paper, for a given integer $n \geq 4$, some values of $s$ and $t$ are determined for which cubic monomial Boolean functions of the form $h_{\mu}(x)=Tr( \mu x^{2^s+2^t+1})$ $(n>s>t \geq 1)$ possess good lower bounds on their second-order nonlinearity. The obtained functions are worth considering for securing symmetric cryptosystems against various quadratic approximation attacks and fast algebraic attacks.
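
To make the function family concrete, the sketch below evaluates $h_{\mu}(x)=Tr(\mu x^{2^s+2^t+1})$ over a small field $GF(2^4)$ and computes its first-order nonlinearity via the Walsh-Hadamard transform; the second-order nonlinearity bounded in the paper is much harder to compute and is not attempted here. The field, $\mu$, $s$ and $t$ are illustrative choices.

    # Evaluate the cubic monomial Boolean function h_mu(x) = Tr(mu * x^(2^s + 2^t + 1))
    # over a toy field GF(2^4) and compute its first-order nonlinearity via the
    # Walsh-Hadamard transform.  Parameters (n, POLY, mu, s, t) are illustrative.
    n = 4
    POLY = 0b10011                       # x^4 + x + 1, irreducible over GF(2)

    def gf_mul(a: int, b: int) -> int:
        """Carry-less multiplication in GF(2^n) modulo POLY."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a >> n:
                a ^= POLY
        return r

    def gf_pow(a: int, e: int) -> int:
        r = 1
        while e:
            if e & 1:
                r = gf_mul(r, a)
            a = gf_mul(a, a)
            e >>= 1
        return r

    def trace(a: int) -> int:
        t = 0
        for _ in range(n):
            t ^= a
            a = gf_mul(a, a)
        return t & 1                      # trace from GF(2^n) down to GF(2) is 0 or 1

    def h(mu: int, s: int, t: int):
        exp = (1 << s) + (1 << t) + 1
        return [trace(gf_mul(mu, gf_pow(x, exp))) for x in range(1 << n)]

    def nonlinearity(f) -> int:
        walsh_max = 0
        for a in range(1 << n):
            w = sum((-1) ** (f[x] ^ bin(a & x).count("1") % 2) for x in range(1 << n))
            walsh_max = max(walsh_max, abs(w))
        return (1 << (n - 1)) - walsh_max // 2

    tt = h(mu=3, s=2, t=1)               # h_3(x) = Tr(3 * x^(2^2 + 2^1 + 1)) = Tr(3 * x^7)
    print("truth table:", tt)
    print("first-order nonlinearity:", nonlinearity(tt))
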
Giuseppe D'Alconzo, Alessio Meneghetti, Edoardo Signorini
ePrint Report
Cryptographic group actions have gained significant attention in recent years for their applications to post-quantum Sigma protocols and digital signatures. In NIST's recent additional call for post-quantum signatures, three relevant proposals are based on group actions: LESS, MEDS, and ALTEQ. This work explores signature optimisations leveraging a group's factorisation. We show that if the group admits a factorisation as a semidirect product of subgroups, the group action can be restricted to a quotient space under the equivalence relation induced by the factorisation. If the relation is efficiently decidable, we show that it is possible to construct an equivalent Sigma protocol for a relation that depends only on one of the subgroups. Moreover, if a special class of representatives of the quotient space is efficiently computable via a canonical form, the restricted action is effective and does not incur a security loss. Finally, we apply these techniques to the group actions underlying LESS and MEDS, showing how they affect the length of signatures and public keys.
Semin Han, Geonho Yoon, Hyunok Oh, Jihye Kim
ePrint Report
Lookup arguments enable a prover to convince a verifier that a committed vector of lookup elements $\vec{f} \in \mathbb{F}^m$ is contained within a predefined table $T \in \mathbb{F}^N$. These arguments are particularly beneficial for enhancing the performance of SNARKs in handling non-arithmetic operations, such as batched range checks or bitwise operations. While existing works have achieved efficient and succinct lookup arguments, challenges remain, particularly when dealing with large vectors of lookup elements in privacy-sensitive applications.

In this paper, we introduce Duplex, a scalable zero-knowledge lookup argument scheme that offers significant improvements over previous approaches. Notably, we present the first lookup argument designed to operate over the RSA group. Our core technique allows for the transformation of elements into prime numbers to ensure compatibility with the RSA group, all without imposing substantial computational costs on the prover. Given $m$ lookup elements, Duplex achieves an asymptotic proving time of $O(m \log m)$, with constant-sized proofs, constant-time verification, and a public parameter size independent of the table size $N$. Additionally, Duplex ensures the privacy of lookup elements and is robust against dynamic table updates, making it highly suitable for scalable verifiable computation in real-world applications.
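
The transformation of lookup elements into primes is described only at a high level here; one common generic way to realize such a mapping is hash-to-prime with an incrementing counter, sketched below purely as an assumption for illustration rather than the paper's actual encoding.

    # Generic hash-to-prime mapping: deterministically map a field element to a prime,
    # as is commonly done to make data compatible with RSA-group accumulators.
    # This is an illustrative assumption, not necessarily the paper's encoding.
    import hashlib

    def is_probable_prime(n: int, rounds: int = 32) -> bool:
        """Miller-Rabin primality test with deterministically derived bases."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for i in range(rounds):
            a = 2 + int.from_bytes(hashlib.sha256(str((n, i)).encode()).digest(), "big") % (n - 3)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    def hash_to_prime(element: int, bits: int = 128) -> int:
        """Map a lookup element to a prime by hashing with an incrementing counter."""
        counter = 0
        while True:
            digest = hashlib.sha256(f"{element}|{counter}".encode()).digest()
            candidate = (int.from_bytes(digest, "big") % (1 << bits)) | (1 << (bits - 1)) | 1
            if is_probable_prime(candidate):
                return candidate
            counter += 1

    print(hash_to_prime(42))
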

We implemented and empirically evaluated Duplex, comparing it with the state-of-the-art zero-knowledge lookup argument Caulk [CCS'22]. Our experimental results demonstrate that Duplex significantly outperforms Caulk in proving time for both single and batched lookup arguments, while maintaining practical proof size and verification time.