IACR News
Here you can see all recent updates to the IACR webpage.
04 October 2024
Fei Xu, Honggang Hu, Changhong Xu
ePrint Report
At Eurocrypt 2015, Zahur, Rosulek, and Evans proposed the model of Linear Garbling Schemes and, within this model, proved a $2\kappa$-bit lower bound on ciphertexts for a broad class of garbling schemes. Since then, several methods have been developed that bypass this lower bound, albeit with a notable limitation: their reliance on specifically correlated input wire labels restricts their applicability across most gates. At Crypto 2021, Rosulek and Roy presented the innovative "three-halves" garbling scheme in which AND gates cost $1.5\kappa+5$ bits and XOR gates are free. A noteworthy aspect of their scheme is the slicing-and-dicing technique, which is applicable universally to all AND gates when garbling a boolean circuit. Following this work, Rosulek and Roy posed several open problems. Our research primarily addresses one of them: ``Is $1.5\kappa$ bits optimal for garbled AND gates in a more inclusive model than Linear Garbling Schemes?''
In this paper, we propose Bitwise Garbling Schemes, a model that seamlessly incorporates the slicing-and-dicing technique. Our key result is that $1.5\kappa$ bits is indeed optimal for arbitrary garbled AND gates in our model. Since Rosulek and Roy also posed another open problem questioning the necessity of free-XOR, we explore constructions without free-XOR and prove a $2\kappa$-bit lower bound. Therefore, sacrificing compatibility with free-XOR does not lead to a more efficient scheme.
Albert Garreta, Ignacio Manzur
ePrint Report
We introduce two folding schemes for lookup instances: FLI and FLI+SOS. Both use a PIOP to check that a matrix has elementary basis vectors as rows, with FLI+SOS adding a twist based on Lasso’s SOS-decomposability.
FLI takes two lookup instances $\{\mathbf{a}_1\}, \{\mathbf{a}_2\}\subseteq\mathbf{t}$, and expresses them as matrix equations $M_i\cdot\mathbf{t}^\mathsf{T}=\mathbf{a}_i^\mathsf{T}$ for $i=1,2$, where each matrix $M_i\in\mathbb{F}^{m\times N}$ has rows which are elementary basis vectors in $\mathbb{F}^N$. Matrices that satisfy this condition are said to be in $R_{\mathsf{elem}}$. Then, a folding scheme for $R_{\mathsf{elem}}$ into a relaxed relation is used, which combines the matrices $M_1, M_2$ as $M_1+\alpha M_2$ for a random $\alpha\in\mathbb{F}$. Finally, the lookup equations are combined as $(M_1+\alpha M_2)\cdot \mathbf{t}^{\mathsf{T}} = (\mathbf{a}_1+\alpha \mathbf{a}_2)^\mathsf{T}$. In FLI, only the property that a matrix is in $R_{\mathsf{elem}}$ is folded, and this makes the FLI folding step the cheapest among existing solutions. The price to pay is in the cost for proving accumulated instances.
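To make the folding step concrete, here is a minimal numerical sketch (our own toy illustration, not code from the paper): the field is replaced by arithmetic modulo a small prime, the dimensions are tiny, and all names are illustrative.

import numpy as np

p = 2**19 - 1                             # small Mersenne prime standing in for the field F
N = 8                                     # toy table size
rng = np.random.default_rng(0)
t = rng.integers(0, p, size=N)            # lookup table t

def elem_matrix(indices, N):
    # A matrix in R_elem: every row is an elementary basis vector of F^N.
    M = np.zeros((len(indices), N), dtype=np.int64)
    M[np.arange(len(indices)), indices] = 1
    return M

# Two lookup instances a_1, a_2 (entries of t selected by M_1, M_2).
M1, M2 = elem_matrix([0, 5, 2], N), elem_matrix([7, 5, 1], N)
a1, a2 = (M1 @ t) % p, (M2 @ t) % p       # M_i * t^T = a_i^T

# Folding: combine with a random alpha. The folded matrix is in general no
# longer in R_elem, which is why a relaxed relation is folded into.
alpha = int(rng.integers(1, p))
M_folded, a_folded = (M1 + alpha * M2) % p, (a1 + alpha * a2) % p

# The combined lookup equation (M1 + alpha*M2) * t^T = (a1 + alpha*a2)^T holds.
assert np.array_equal((M_folded @ t) % p, a_folded)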
FLI+SOS builds upon FLI to enable folding of large SOS-decomposable tables. This is achieved through a variation of Lasso's approach to SOS-decomposability, which fits FLI naturally. For comparison, we describe (for the first time to our knowledge) straightforward variations of Protostar and Proofs for Deep Thought that also benefit from SOS-decomposability. We see that for many reasonable parameter choices, and especially those arising from lookup-based zkVMs, FLI+SOS can concretely be the cheapest folding solution.
Joan Boyar, Simon Erfurth
ePrint Report
Folding schemes are an exciting new primitive, transforming the task of performing multiple zero-knowledge proofs of knowledge for a relation into performing just one zero-knowledge proof, for the same relation, and a number of cheap inclusion proofs. Recently, folding schemes have been used to amortize the cost associated with proving different statements to multiple distinct verifiers, which has various applications. We observe that for these uses, leaking information about the statements folded together can be problematic, yet this happens with previous constructions. Towards resolving this issue, we give a natural definition of privacy-preserving folding schemes and of the security they should offer. To construct privacy-preserving folding schemes, we first define statement hiders, a primitive which might be of independent interest. In a nutshell, a statement hider hides an instance of a relation as a new instance in the same relation. The new instance is in the relation if and only if the initial instance is. With this building block, we can utilize existing folding schemes to construct a privacy-preserving folding scheme, by first hiding each of the statements. Folding schemes allow verifying that a statement was folded into another statement, while statement hiders allow verifying that a statement was hidden as another statement.
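As a toy illustration of the statement-hider idea (this instantiation is our own and is not taken from the paper): for the relation "y is a quadratic residue modulo N", multiplying y by a random square re-randomizes the instance while preserving membership, so the hidden instance is in the relation if and only if the original one is.

import random
from math import gcd

def hide(y, N):
    # Re-randomize by a uniformly random unit square r^2 mod N.
    r = random.randrange(1, N)
    while gcd(r, N) != 1:
        r = random.randrange(1, N)
    return (y * pow(r, 2, N)) % N

def is_qr(y, N):
    # Brute-force membership test, fine for a toy modulus.
    return any(pow(x, 2, N) == y % N for x in range(N))

N = 77                       # toy modulus (7 * 11); in practice a large RSA modulus
y_yes, y_no = 9, 5           # 9 = 3^2 is a residue; 5 is not (it is a non-residue mod 7)

# Membership is preserved by hiding, yet the hidden instances are re-randomized.
assert is_qr(hide(y_yes, N), N) and not is_qr(hide(y_no, N), N)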
Karim Eldefrawy, Benjamin Terner, Moti Yung
ePrint Report
Time-lock puzzles are unique cryptographic primitives that use computational complexity to keep information secret for some period of time, after which security expires. This topic, while over 25 years old, is still in a state where foundations are not well understood: for example, current analysis techniques of time-lock primitives provide no sound mechanism to build composed multi-party cryptographic protocols which use expiring security as a building block. Further, there are analyses that employ idealizations and simulators of unrealistic computational power and present them as acceptable, sound security arguments. Our goal with this short paper is to advocate for understanding which approaches may lead to sound modeling beyond idealization, and which approaches may, in fact, be hopeless at this task of sound modeling.
We explain in this paper how existing attempts at this subtle problem lack either composability, a fully consistent analysis, or functionality. The subtle flaws in the existing frameworks reduce to an impossibility result by Mahmoody et al., who showed that time-lock puzzles with super-polynomial gaps (between committer and solver) cannot be constructed from random oracles alone (or any repetitive computation where the next state is completely random given the prior state); yet still the analyses of algebraic puzzles today treat the solving process as if each step is a generic or random oracle. We point out that if the generation process relies on a trapdoor function that cannot be treated as a random oracle (to allow efficient generation while avoiding this impossibility result), then, to be consistent, the analysis of the solving process should also not treat such a trapdoor function (and its intermediate states) as a random oracle.
We also delineate additional issues with the proof techniques used for time-lock puzzles. Specifically, when a time-lock puzzle must retain privacy for some amount of time, the reduction should bound the running time of the simulator. A simulator that can ``simulate'' only when given an amount of time that, if given to an adversary, would allow that adversary to solve the puzzle is not a valid security argument. We survey the adherence of various attempts to this principle, as well as the properties that different attempts achieve toward composition.
01 October 2024
Research Institute CODE, Universität der Bundeswehr Munich, Germany
Job Posting
We are looking for 1-2 bright researchers with strong interest and suitable experience in any of the following research areas:
- Distributed cryptography: DKG, decentralised credentials with privacy properties
- Advanced encryption: algorithmic techniques for FHE and SNARKs, updatable encryption
- Secure computation: MPC techniques and protocol design, PSI
- PQC techniques for any of the aforementioned areas
They will work closely with members of the Privacy and Applied Cryptography (PACY) lab, led by Prof. Mark Manulis, and the Quantum-Safe and Advanced Cryptography (QuSAC) lab, led by Prof. Daniel Slamanig. Candidates will benefit from our modern infrastructure and the availability of funds to support their own research. Also, Munich is among the best places to live in Germany.
Positions are available for an immediate start (~58k to 74k EUR p.a. depending on qualifications and experience). Initial contracts are for 1.5 - 2 years. (Due to the nature of the funding, restrictions on eligibility may apply.)
Requirements:
- Master's degree (or equivalent) or PhD in Mathematics, Cryptography, or Computer Science with excellent grades
- Solid knowledge and demonstrable experience in any of the aforementioned research areas
- Post-doc candidates must have a strong track record (ideally with publications at IACR conferences and/or the top 4 security conferences) and good academic writing and presentation skills
- Experience with cryptographic implementations (desirable)
- Proficiency in English (essential) and German (desirable but not essential)
Closing date for applications:
Contact: Mark Manulis (mark.manulis AT unibw.de)
More information: https://www.unibw.de/pacy-en/vacancies
30 September 2024
CryptoLux team, University of Luxembourg
Job Posting
The University of Luxembourg (UL) is seeking to hire a Post-doctoral fellow within its CryptoLux group, led by Prof. Alex Biryukov. The position is initially for two years, extendable up to five years. The yearly gross salary for a Postdoctoral Researcher at the UL is EUR 83K.
The main application domains are:
- Cryptanalysis and Design of Symmetric Primitives (for ex. Lightweight/Ultra-lightweight)
- Algebraic and Arithmetization-friendly cryptography for ZK and MPC applications
- Post-quantum cryptography
- Privacy enhancing technologies
- Lead high-impact research in the fields of Cryptography and Privacy
- Perform teaching activities in fields of expertise at bachelor, master and PhD levels (max 90 hours/year, less in practice)
- Help with daily supervision of Ph.D. and M.Sc. students
- Participate in the writing of research grants
- PhD degree in Computer Science or related fields (e.g., applied mathematics, etc.)
- Proven research track record (publications in A* venues in Cryptography or Information Security)
- Great communication skills
- Commitment, team working, interpersonal skills and a critical mind
- Good written and oral skills in English
Closing date for applications:
Contact: http://emea3.mrted.ly/3qt46
More information: https://cryptolux.org/index.php/Vacancies
CryptoLux team, University of Luxembourg
Job Posting
The successful candidate will join the CryptoLux team led by Prof. Alex Biryukov. He or she will contribute to the research project "Advanced Cryptography for Finance and Privacy (CryptoFin)", which is funded by the Luxembourgish Fonds National de la Recherche (FNR) through the CORE program. The position is initially for one year, extendable up to five years. The yearly gross salary for a Postdoctoral Researcher at the UL is EUR 83K. Candidates with research interests in one or more of the following areas are particularly encouraged to apply:
- Applied or symmetric cryptography
- Blockchain cryptography, cryptoeconomics, DeFi
- Anonymity and privacy, ZK, PETs
- Conduct, publish and present research results at the top conferences/journals
- Collaborate with the two Ph.D. students of the project
- Attract funding in cooperation with academic and industrial partners
Closing date for applications:
Contact: http://emea3.mrted.ly/3p6l5
More information: https://cryptolux.org/index.php/Vacancies
Gavin Cho, Georg Fuchsbauer, Adam O'Neill
ePrint Report
We show that the Schnorr signature scheme meets existential unforgeability under chosen-message attack (EUF-CMA) in the random oracle model (ROM) if the circular discrete-logarithm (CDL) assumption, a new, non-interactive variant of DL we introduce, holds in the underlying group. Our reduction is completely tight, meaning the constructed adversary against CDL has both essentially the same running time and success probability as the assumed forger. To our knowledge, we are the first to exhibit such a reduction. Previously, Bellare and Dai (INDOCRYPT 2020) showed the scheme is EUF-CMA in the ROM if their multi-base DL assumption holds in the underlying group. However, multi-base DL is interactive; moreover, their reduction, while tighter than the initial result of Pointcheval and Stern (EUROCRYPT 1996), still incurs a security loss that is linear in the number of the adversary’s RO queries. We justify CDL by showing it holds in two carefully chosen idealized models, which idealize different aspects of our assumption. Our quantitative bounds in these models are essentially the same as for DL, giving strong evidence that CDL is as hard as DL in appropriate elliptic-curve groups.
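For readers less familiar with the scheme whose reduction is being tightened, here is a minimal sketch of Schnorr signing and verification over a toy subgroup (the parameters below are illustrative and far too small to be secure).

import hashlib
import secrets

p, q, g = 23, 11, 2          # toy group: g = 2 has prime order q = 11 modulo p = 23

def H(R, m):
    # Hash to Z_q; modeled as a random oracle in the security proof.
    return int.from_bytes(hashlib.sha256(str(R).encode() + b"|" + m).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1      # secret key
    return x, pow(g, x, p)                # public key X = g^x

def sign(x, m):
    r = secrets.randbelow(q - 1) + 1
    R = pow(g, r, p)                      # commitment
    c = H(R, m)                           # challenge
    return R, (r + c * x) % q             # response s = r + c*x

def verify(X, m, sig):
    R, s = sig
    return pow(g, s, p) == (R * pow(X, H(R, m), p)) % p

x, X = keygen()
assert verify(X, b"hello", sign(x, b"hello"))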
Peng Wang, Shuping Mao, Ruozhou Xu, Jiwu Jing, Yuewu Wang
ePrint Report
XCB, a tweakable enciphering mode, is part of IEEE Std. 1619.2 for shared storage media. We show, via three plaintext recovery attacks, that no version of XCB is secure. A key observation is that XCB behaves like an LRW1-type tweakable block cipher for single-block messages, which lacks CCA security. The first attack targets one-block XCB, using three queries to recover the plaintext. The second requires four queries to recover all but one block of the plaintext. The last requires seven queries to recover the full plaintext. The first attack applies to any scheme that follows the XCB structure, whereas the latter two attacks work on all versions of XCB, exploiting the separable property of the underlying universal hash function. We also discuss the impact of these vulnerabilities on XCB-based applications, such as disk encryption, nonce-based encryption, deterministic authenticated encryption and robust authenticated encryption, highlighting the risks due to XCB's failure to achieve STPRP security. To address these flaws, we propose the XCB* structure, an improved version of XCB that adds only two XOR operations. We prove that XCB* is STPRP-secure when using AXU hash functions, SPRPs, and a secure IV-based stream cipher.
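For background on the "LRW1-type" observation (supplied by us as illustration; it is not the paper's attack on XCB): LRW1 builds a tweakable block cipher from a block cipher $E_K$ as
$$\widetilde{E}_K(T, M) = E_K\bigl(T \oplus E_K(M)\bigr),$$
and it is CPA-secure but not CCA-secure. From an encryption query $(T_1, M)\mapsto C$ and a decryption query $(T_2, C)\mapsto M'$ one gets $E_K(M') = E_K(M)\oplus T_1\oplus T_2$, and hence $\widetilde{E}_K(T_3, M') = \widetilde{E}_K(T_1\oplus T_2\oplus T_3, M)$ for every tweak $T_3$, a relation that a CCA adversary can confirm with one further query but which holds only with negligible probability for an ideal tweakable cipher.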
Brandon "Cryptskii" Ramsay
ePrint Report
Overpass Channels presents a groundbreaking approach to blockchain scalability, offering a horizontally scalable, privacy-enhanced payment network with independent verification, fluid liquidity, and robust censorship resistance. This paper introduces a novel architecture that leverages zero-knowledge proofs, specifically zk-SNARKs, to ensure transaction validity and privacy while enabling unprecedented throughput and efficiency.
By eliminating the need for traditional consensus mechanisms and miners, Overpass Channels achieves remarkable cost-effectiveness and energy efficiency. The system's design focuses on unilateral payment channels and off-chain transaction processing, allowing for high-speed, low-latency operations without compromising security or decentralization. This paper provides a comprehensive analysis of the Overpass Channels system, including its cryptographic foundations, scalability metrics, integration, and potential applications across various domains, from global payments to confidential voting systems and secure health record management.
Patrick Ehrler, Abdelkarim Kati, Thomas Schneider, Amos Treiber
ePrint Report
Encrypted Search Algorithms (ESAs) are a technique to encrypt data while the user can still search over it. ESAs can protect privacy and ensure security of sensitive data stored on remote storage. Originally, ESAs were used in the context of documents that consist of keywords. The user encrypts the documents, sends them to a remote server and is still able to search for keywords, without exposing information about the plaintext. The idea of ESAs has also been applied to relational databases, where queries (similar to SQL statements) can be privately executed on an encrypted database. But just like traditional Keyword-ESAs, Relational-ESAs have the drawback of exposing some information, called leakage. Leakage attacks have been proposed in the literature that use this information together with auxiliary information to learn details about the plaintext. However, these leakage attacks have overwhelmingly been designed for and applied to Keyword-ESAs and not Relational-ESAs.
In this work, we review the suitability of major leakage attacks against ESAs in the relational setting by adapting them accordingly. We perform extensive re-evaluations of the attacks on various relational datasets with different properties.
Our evaluations show that major attacks can work against Relational-ESAs in the known-data setting. However, the attack performance differs between datasets, exploited patterns, and attacks.
Zhenjian Lu, Noam Mazor, Igor C. Oliveira, Rafael Pass
ePrint Report
We consider indistinguishability obfuscation (iO) for multi-output circuits $C:\{0,1\}^n\to\{0,1\}^n$ of size $s$, where $s$ is the number of AND/OR/NOT gates in $C$. Under the worst-case assumption that NP $\nsubseteq$ BPP, we establish that there is no efficient indistinguishability obfuscation scheme that outputs circuits of size $s + o(s/\log s)$. In other words, to be secure, an efficient iO scheme must incur an $\Omega(s/\log s)$ additive overhead in the size of the obfuscated circuit. The hardness assumption under which this negative result holds is minimal, since an optimal iO scheme with no circuit size overhead exists if NP $\subseteq$ BPP.
Expanding on this result, we also rule out iO for single-output database-aided circuits with an arbitrary polynomial overhead in circuit size. This strengthens an impossibility result by Goldwasser and Rothblum [GR07], which considered circuits with access to an exponential-length database that the obfuscator has oracle access to; in contrast, our impossibility result holds even w.r.t. polynomial-size databases and even w.r.t. obfuscators that may run in time polynomial in the size of the database (and thus may read the whole database).
The proof of our main result builds on a connection between obfuscation and meta-complexity put forward by Mazor and Pass [MP24], and on the NP-hardness of circuit minimization for multi-output circuits established by Loff, Ilango, and Oliveira [ILO20], together with other techniques from cryptography and complexity theory.
Nikhil Vanjani, Pratik Soni, Sri AravindaKrishnan Thyagarajan
ePrint Report
In scenarios where a seller holds sensitive data $x$, like employee / patient records or ecological data, and a buyer seeks to obtain an evaluation of a specific function $f$ on this data, solutions in trustless digital environments like blockchain-based Web3 systems typically fall into two categories: (1) smart contract-powered solutions and (2) cryptographic solutions leveraging tools such as adaptor signatures. The former approach offers atomic transactions where the buyer learns the function evaluation $f(x)$ (and not $x$ entirely) upon payment. However, this approach is often inefficient, costly, lacks privacy for the seller's data, and is incompatible with systems that do not support smart contracts with the required functionalities. In contrast, the adaptor signature-based approach addresses all of the above issues but comes with an "all-or-nothing" guarantee, where the buyer fully extracts $x$, and it does not support functional extraction of the sensitive data. In this work, we aim to bridge the gap between these approaches, developing a solution that enables fair functional sales of information while offering improved efficiency, privacy, and compatibility similar to adaptor signatures.
Towards this, we propose functional adaptor signatures (FAS), a novel cryptographic primitive that achieves all the desired properties as listed above. Using FAS, the seller can publish an advertisement committing to $x$. The buyer can pre-sign the payment transaction w.r.t. a function $f$, and send it along with the transaction to the seller. The seller adapts the pre-signature into a valid (buyer's) signature and posts the payment and the adapted signature on the blockchain to get paid. Finally, using the pre-signature and the posted signature, the buyer efficiently extracts $f(x)$, and completes the sale. We formalize the security properties of FAS, among which is a new notion called witness privacy to capture the seller's privacy, which ensures the buyer does not learn anything beyond $f(x)$. We present multiple variants of witness privacy, namely, witness hiding, witness indistinguishability, and zero-knowledge, to capture varying levels of leakage about $x$ beyond $f(x)$ to a malicious buyer.
We introduce two efficient constructions of FAS supporting linear functions (like statistics/aggregates, kernels in machine learning, etc.), that satisfy the strongest notion of witness privacy. One construction is based on prime-order groups and compatible with Schnorr signatures for payments, and the other is based on lattices and compatible with a variant of Lyubashevsky's signature scheme. A central conceptual contribution of our work lies in revealing a surprising connection between functional encryption, a well-explored concept over the past decade, and adaptor signatures, a relatively new primitive in the cryptographic landscape. On a technical level, we avoid heavy cryptographic machinery and achieve improved efficiency, by making black-box use of building blocks like inner product functional encryption (IPFE) while relying on certain security-enhancing techniques for the IPFE in a non-black-box manner. We implement our FAS construction for Schnorr signatures and show that for reasonably sized seller witnesses, the different operations are quite efficient even for commodity hardware.
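To illustrate the pre-sign / adapt / extract mechanism that FAS builds on, here is a minimal sketch of a plain Schnorr-based adaptor signature over a toy group (illustrative parameters and names of our own; in FAS the buyer would extract only $f(x)$ rather than the full witness, using IPFE-style techniques).

import hashlib
import secrets

p, q, g = 23, 11, 2                      # toy group: g has prime order q modulo p

def H(R, m):
    return int.from_bytes(hashlib.sha256(str(R).encode() + m).digest(), "big") % q

# Seller's statement/witness pair Y = g^y (in FAS, the advertisement commits to x).
y = secrets.randbelow(q - 1) + 1
Y = pow(g, y, p)

# Buyer's key pair and pre-signature on the payment transaction m, w.r.t. Y.
sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)
m = b"pay 1 coin to the seller"
r = secrets.randbelow(q - 1) + 1
R = (pow(g, r, p) * Y) % p               # commitment shifted by the statement Y
c = H(R, m)
s_pre = (r + c * sk) % q                 # pre-signature: not yet a valid signature

# Seller adapts the pre-signature with the witness y and posts (R, s) on chain.
s = (s_pre + y) % q
assert pow(g, s, p) == (R * pow(pk, c, p)) % p    # (R, s) verifies as a Schnorr signature

# Buyer extracts the witness from the posted signature and the pre-signature.
assert (s - s_pre) % q == y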
Yuxuan Wang, Jintong Yu, Shipei Qu, Xiaolin Zhang, Xiaowei Li, Chi Zhang, Dawu Gu
ePrint Report
Keccak acts as the hash algorithm and eXtendable-Output Function (XOF) specified in the NIST standard drafts for Kyber and Dilithium. The Keccak output is highly correlated with sensitive information, whereas in RSA and ECDSA, hash-like components are only used to process public information, such as the message. The importance and sensitivity of hash-like components like Keccak are therefore much higher in Kyber and Dilithium than in traditional public-key cryptography. However, few works study Keccak regarding the physical security of Kyber and Dilithium. In this paper, we propose a practical fault attack scheme on Keccak to compromise Kyber and Dilithium on ARM Cortex-M devices. Specifically, by injecting loop-abort faults in the iterative assignments or updates of Keccak, we propose six attacks that can set the Keccak output to a known value. These attacks can be exploited to disrupt the random number expansion or other critical processes in Kyber and Dilithium, thereby recovering sensitive information derived from the Keccak output. In this way, we propose eight attack strategies on Kyber and seven on Dilithium, achieving key recovery, signature forgery, and verification bypass. To validate the practicality of the proposed attack strategies, we perform fault characterization on five real-world devices belonging to four different series (ARM Cortex-M0+, M3, M4, and M33). The success rate is up to 89.5%, demonstrating the feasibility of loop-abort faults. This paper also provides a guide for reliably inducing loop-abort faults on ARM Cortex-M devices using electromagnetic fault injection. We further validate our complete attacks on Kyber and Dilithium based on the official implementations, achieving a success rate of up to 55.1%. The results demonstrate that the excessive use of Keccak in generating and computing secret information leads to severe vulnerabilities. Our work can potentially be migrated to other post-quantum cryptographic algorithms that use Keccak, such as Falcon, BIKE, and HQC.
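As a toy illustration of the underlying principle (our own sketch with a stand-in permutation, not Keccak itself or the paper's fault model): if the absorb loop of a sponge is aborted before any input block is processed, the output depends only on the fixed initial state and is therefore known to the attacker.

import hashlib

RATE = 16                                 # bytes absorbed per iteration (toy value)

def perm(state):
    # Stand-in for the Keccak-f permutation (illustration only).
    return hashlib.sha256(state).digest()

def toy_sponge(message, absorbed_blocks=None):
    # absorbed_blocks models a loop-abort fault: the absorb loop stops early.
    state = bytes(32)                     # fixed all-zero initial state
    blocks = [message[i:i + RATE] for i in range(0, len(message), RATE)]
    if absorbed_blocks is not None:
        blocks = blocks[:absorbed_blocks]
    for block in blocks:
        padded = block.ljust(RATE, b"\x00") + bytes(32 - RATE)
        state = perm(bytes(a ^ b for a, b in zip(state, padded)))
    return perm(state)[:RATE]             # squeeze one block

# With the loop aborted at iteration 0, the output no longer depends on the
# secret input, so downstream values derived from it become predictable.
assert toy_sponge(b"secret seed", absorbed_blocks=0) == toy_sponge(b"other input", absorbed_blocks=0)
assert toy_sponge(b"secret seed") != toy_sponge(b"other input")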
Gaëtan Cassiers, Charles Momin
ePrint Report
Datasets of side-channel leakage measurements are widely used in research to develop and benchmark side-channel attack and evaluation methodologies. Compared to using custom and/or one-off datasets, widely-used and publicly available datasets improve research reproducibility and comparability. Further, performing high-quality measurements requires specific equipment and skills, while also taking a significant amount of time. Therefore, using publicly available datasets lowers the barriers to entry into side-channel research. This paper introduces the SMAesH dataset. SMAesH is an optimized masked hardware implementation of the AES with a provably secure arbitrary-order masking scheme. The SMAesH dataset contains power traces of the first-order SMAesH on two FPGAs of different generations, along with the key, plaintext and masking randomness. One part of the dataset uses uniformly random keys and plaintexts to enable leakage profiling, while another part uses a fixed key (still with uniformly random plaintexts) to enable attack validation or leakage assessment in a fixed-versus-random setting. We document the experimental setup used to acquire the dataset. It is built from components that are widely available. We also discuss particular methods employed to maximize the information content in the leakage traces, such as power supply selection, fine-grained trace alignment and resolution optimization.
Antonio Sanso
ePrint Report
In this paper, we investigate the rough order assumption (\(RO_C\)) introduced by Braun, Damgård, and Orlandi at CRYPTO 23, which posits that class groups of imaginary quadratic fields with no small prime factors in their order are computationally indistinguishable from general class groups. We present a novel attack that challenges the validity of this assumption by leveraging properties of Mordell curves over the rational numbers. Specifically, we demonstrate that if the rank of the Mordell curve \( E_{-16D} \) is at least 2, it contradicts the rough order assumption. Our attack deterministically breaks the \(RO_C\) assumption for discriminants of a special form, assuming the parity conjecture holds and certain conditions are met. Additionally, for both special and generic cases, our results suggest that the presence of nontrivial 3-torsion elements in class groups can undermine the \(RO_C\) assumption. Although our findings are concrete for specific cases, the generic scenario relies on heuristic arguments related to the Birch and Swinnerton-Dyer (BSD) conjecture, a significant and widely believed conjecture in number theory. Attacks against 2-torsion elements in class groups are already well known, but our work introduces a distinct approach targeting 3-torsion elements. These attacks are fundamentally different in nature, and both have relatively straightforward countermeasures, though they do not generalize to higher torsions. While these results do not entirely invalidate the \(RO_C\) assumption, they highlight the need for further exploration of its underlying assumptions, especially in the context of specific torsion structures within class groups.
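A minimal sketch of the two objects the attack relates, meant to be run inside SageMath (we assume Sage's EllipticCurve and QuadraticField interfaces; the discriminant below is only an illustrative choice): the rank of the Mordell curve $E_{-16D}: y^2 = x^3 - 16D$ and the 3-torsion of the class group of $\mathbb{Q}(\sqrt{-D})$.

# SageMath sketch (run inside Sage); illustrative only.
D = 23                                   # Q(sqrt(-23)) has class number 3, so 3-torsion is present

E = EllipticCurve(QQ, [0, -16 * D])      # Mordell curve E_{-16D}: y^2 = x^3 - 16*D
print("rank of E_{-16D}:", E.rank())     # rank >= 2 is the condition exploited in the attack

K = QuadraticField(-D)                   # imaginary quadratic field Q(sqrt(-D))
inv = K.class_group().invariants()
print("class group invariants:", inv)
print("3-torsion present:", any(n % 3 == 0 for n in inv))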
Ryo Yoshizumi, Hiroshi Onuki, Ryo Ohashi, Momonari Kudo, Koji Nuida
ePrint Report
Isogeny-based cryptography is one of the candidates for post-quantum cryptography. Recently, many isogeny-based cryptosystems using isogenies between Kummer surfaces were proposed. Most of those cryptosystems use $(2,2)$-isogenies. However, to broaden the possibilities of such cryptosystems, higher-degree isogenies, say $(\ell,\ell)$-isogenies for an odd $\ell$, are also crucial. For an odd $\ell$, Lubicz and Robert gave a formula to compute $(\ell)^g$-isogenies in general dimension $g$. In this paper, we propose explicit and efficient algorithms to compute $(\ell,\ell)$-isogenies between Kummer surfaces, based on the Lubicz-Robert formula. In particular, we propose two algorithms for computing the codomain of the isogeny and two algorithms for evaluating the image of a point under the isogeny. Then, we count the number of arithmetic operations required for each of our proposed algorithms, and determine the most efficient algorithm in terms of the number of arithmetic operations from each of the two types of algorithms for each $\ell$. As an application, using the most efficient one, we implemented the SIDH attack on B-SIDH in SageMath. In a setting that originally claimed 128-bit security, our implementation was able to recover the secret key in about 11 hours.
Paul Lou, Nathan Manohar, Amit Sahai
ePrint Report
To date, the strongest notions of security achievable for two-round publicly-verifiable cryptographic proofs for $\mathsf{NP}$ are witness indistinguishability (Dwork-Naor 2000, Groth-Ostrovsky-Sahai 2006), witness hiding (Bitansky-Khurana-Paneth 2019, Kuykendall-Zhandry 2020), and super-polynomial simulation (Pass 2003, Khurana-Sahai 2017). On the other hand, zero-knowledge and even weak zero-knowledge (Dwork-Naor-Reingold-Stockmeyer 1999) are impossible in the two-round publicly-verifiable setting (Goldreich-Oren 1994). This leaves an enormous gap in our theoretical understanding of known achievable security and the impossibility results for two-round publicly-verifiable cryptographic proofs for $\mathsf{NP}$.
Towards filling this gap, we propose a new and natural notion of security, called witness semantic security, that captures the strong requirement that an adversary should not be able to learn any partial information about the prover's witness beyond what it could learn given only the statement $x$. Not only does our notion of witness semantic security subsume both witness indistinguishability and witness hiding, but it also has an easily appreciable interpretation.
Moreover, we show that assuming the subexponential hardness of LWE, there exists a two-round public-coin publicly-verifiable witness semantic secure argument. To our knowledge, this is the strongest form of security known for this setting.
As a key application of our work, we show that non-interactive zero-knowledge (NIZK) arguments in the common reference string (CRS) model can additionally maintain witness semantic security even when the CRS is maliciously generated. Our work gives the first construction from (subexponential) standard assumptions that achieves a notion stronger than witness-indistinguishability against a malicious CRS authority.
In order to achieve our results, we give the first construction of a ZAP from subexponential LWE that is adaptively sound. Additionally, we propose a notion of simulation using non-uniform advice about a malicious CRS, which we also believe will be of independent interest.
Lih-Chung Wang, Chun-Yen Chou, Jintai Ding, Yen-Liang Kuan, Jan Adriaan Leegwater, Ming-Siou Li, Bo-Shu Tseng, Po-En Tseng, Chia-Chun Wang
ePrint Report
SNOVA is one of the Round 1 Additional Signature submissions in the NIST Post-Quantum Signature Competition. SNOVA is a UOV variant that uses the noncommutative-ring technique to reduce the size of the public key. SNOVA's public key size and signature size are well-balanced and have good performance. Recently, Beullens proposed a forgery attack against SNOVA, pointing out that the parameters of SNOVA can be attacked. Beullens also argued that with some slight adjustments his attacks can be prevented. In this note, we explain Beullens' forgery attack and show that it can be invalidated by two different approaches. Finally, we show that these two approaches do not increase the sizes of the public keys or signatures and that the current parameters satisfy the security requirement of NIST.
Arka Rai Choudhuri, Sanjam Garg, Guru-Vamsi Policharla, Mingyuan Wang
ePrint Report
An important consideration with the growth of the DeFi ecosystem is the protection of clients who submit transactions to the system. As it currently stands, the public visibility of these transactions in the memory pool (mempool) makes them susceptible to market manipulations such as frontrunning and backrunning. More broadly, for various reasons—ranging from avoiding market manipulation to including time-sensitive information in their transactions—clients may want the contents of their transactions to remain private until they are executed, i.e. they have *pending transaction privacy*. Therefore, *mempool privacy* is becoming an increasingly important feature as DeFi applications continue to spread.
We construct the first *practical* mempool privacy scheme that uses a *one-time* DKG setup for $n$ decryption servers. Our scheme ensures the strong privacy requirement by not only hiding the transactions until they are decrypted but also guaranteeing privacy for transactions that were not selected in the epoch (*pending transaction privacy*). For each epoch (or block), clients can encrypt their transactions so that, once $B$ (encrypted) transactions are selected for the epoch, they can be decrypted by each decryption server while communicating only $O(1)$ information.
Our result improves upon the best-known prior works, which either: (i) require an expensive initial setup involving a (special purpose) multiparty computation protocol executed by the $n$ decryption servers, along with an additional *per-epoch* setup; (ii) require each decryption server to communicate $O(B)$ information; or (iii) do not guarantee pending transaction privacy.
We implement our scheme and find that transactions can be encrypted in approximately 8.5 ms, independent of committee size, and the communication required to decrypt an entire batch of transactions is 48 bytes per party, independent of the number of transactions. If deployed on Ethereum, which processes close to 500 transactions per block, it takes close to 3.2 s for each committee member to compute a partial decryption and 3.0 s to decrypt all transactions for a block in single-threaded mode. Compared to prior work, which had an expensive setup phase per epoch, we incur $<2\times$ overhead in the worst case. On some metrics such as partial decryptions size, we actually fare better.
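To illustrate only the communication pattern claimed above (this is a toy simplification of our own, not the paper's construction, and it provides none of its pending-transaction privacy): if all $B$ transactions of a block are encrypted under one batch secret and only the single ElGamal ciphertext hiding that secret is threshold-decrypted, then each decryption server sends one group element per epoch, independent of $B$.

import hashlib
import secrets

p, q, g = 23, 11, 2                      # toy group of prime order q (illustrative only)
n, t = 3, 2                              # n decryption servers, threshold t

# One-time setup: Shamir-share a decryption key s; public key h = g^s.
s = secrets.randbelow(q)
coeffs = [s] + [secrets.randbelow(q) for _ in range(t - 1)]
shares = {i: sum(c * pow(i, k, q) for k, c in enumerate(coeffs)) % q for i in range(1, n + 1)}
h = pow(g, s, p)

def stream(key, i, length):
    # Toy stream cipher keyed by the batch secret (illustration only).
    return hashlib.sha256(f"{key}|{i}".encode()).digest()[:length]

# Client side: choose a batch secret g^k, ElGamal-encrypt it, encrypt B transactions under it.
k = secrets.randbelow(q - 1) + 1
r = secrets.randbelow(q - 1) + 1
c1, c2 = pow(g, r, p), (pow(g, k, p) * pow(h, r, p)) % p
txs = [b"tx-0", b"tx-1", b"tx-2"]        # B = 3 transactions
cts = [bytes(a ^ b for a, b in zip(tx, stream(pow(g, k, p), i, len(tx)))) for i, tx in enumerate(txs)]

# Each server sends ONE partial decryption of c1 -- a constant amount, independent of B.
partials = {i: pow(c1, shares[i], p) for i in (1, 2)}         # any t servers suffice

def lagrange_at_zero(i, ids):
    num = den = 1
    for j in ids:
        if j != i:
            num, den = (num * (-j)) % q, (den * (i - j)) % q
    return (num * pow(den, -1, q)) % q

c1_s = 1
for i in partials:
    c1_s = (c1_s * pow(partials[i], lagrange_at_zero(i, partials), p)) % p
gk = (c2 * pow(c1_s, -1, p)) % p                              # recover the batch secret g^k
assert [bytes(a ^ b for a, b in zip(ct, stream(gk, i, len(ct)))) for i, ct in enumerate(cts)] == txs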