IACR News
Here you can see all recent updates to the IACR webpage.
17 January 2025
Yibo Cao, Shiyuan Xu, Gang Xu, Xiu-Bo Chen, Tao Shang, Yuling Chen, Zongpeng Li
ePrint Report
Searchable encryption (SE) has been widely studied for cloud storage systems, allowing encrypted data search and retrieval. However, existing SE schemes cannot support fine-grained searchability revocation, making them impractical for real applications. Puncturable encryption (PE) [Oakland'15] can revoke the decryption ability of a data receiver for a specific message, which can potentially alleviate this issue. Moreover, the threat of quantum computing remains an important and realistic concern, potentially leading to data privacy leakage in cloud storage systems. Consequently, a post-quantum puncturable encrypted search scheme has yet to be designed. In this paper, we propose PunSearch, the first puncturable encrypted search scheme over lattices for privacy-preserving outsourced data in cloud storage systems. PunSearch provides fine-grained searchability revocation while enjoying quantum safety. Different from existing PE schemes, we construct a novel trapdoor generation mechanism through evaluation algorithms and a lattice pre-image sampling technique. We then design a search permission verification method to revoke the searchability of specific keywords. Furthermore, we formalize a new IND-Pun-CKA security model and utilize it to analyze the security of PunSearch. Comprehensive performance evaluation indicates that the computational overheads of the Encrypt, Trapdoor, Search, and Puncture algorithms in PunSearch are just 0.06, 0.005, 0.05, and 0.31 times those of prior arts, respectively, in the best case. These results demonstrate that PunSearch is effective and secure for cloud storage systems.
16 January 2025
Kathrin Hövelmanns, Mikhail Kudinov
ePrint Report
We revisit a basic building block in the endeavor to migrate to post-quantum secure cryptography, Key Encapsulation Mechanisms (KEMs). KEMs enable the establishment of a shared secret key, using only public communication. When targeting chosen-ciphertext security against quantum attackers, the go-to method is to design a Public-Key Encryption (PKE) scheme and then apply a variant of the PKE-to-KEM conversion known as the Fujisaki-Okamoto (FO) transform, which we revisit in this work. Intuitively, FO ensures chosen-ciphertext security by rejecting dishonestly formed ciphertexts. This comes in two flavors -- the KEM could reject by returning an 'explicit' failure symbol $\bot$ or instead by returning a pseudo-random key ('implicit' reject). During the NIST post-quantum standardization process, designers chose implicit rejection, likely due to the availability of security proofs against quantum attackers. On the other hand, implicit rejection introduces complexity and can easily deteriorate into explicit rejection in practice. While it was proven that implicit rejection is not less secure than explicit rejection, the other direction was less clear. This is relevant because the available security proofs against quantum attackers still leave things to be desired. When envisioning future improvements, due to, e.g., advancements in quantum proof techniques, having to treat both variants separately creates unnecessary overhead.
In this work, we thus re-evaluate the relationship between the two design approaches and address the so-far unexplored direction: in the classical random oracle model, we show that explicit rejection is not less secure than implicit rejection, up to a rare edge case. This, however, uses the observability of random oracle queries. To lift the proof into the quantum world, we make use of the extractable QROM (eQROM). As an alternative that works without the eQROM, we give an indirect proof that involves a new necessity statement on the involved PKE scheme.
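To make the two rejection flavors concrete, here is a minimal Python sketch of FO-style decapsulation. This is illustrative only: pke_decrypt, pke_encrypt, and the hash H are generic placeholders rather than any standardized scheme, and real transforms differ in details such as key derivation.

import hashlib

def H(*parts):
    # Generic hash used to derive coins and keys (placeholder).
    h = hashlib.sha3_256()
    for p in parts:
        h.update(p)
    return h.digest()

def decaps_explicit(sk, pk, ct, pke_decrypt, pke_encrypt):
    # 'Explicit' rejection: an invalid ciphertext yields a visible failure.
    m = pke_decrypt(sk, ct)
    # Re-encrypt deterministically and compare; this rejects dishonest ciphertexts.
    if m is None or pke_encrypt(pk, m, coins=H(b"coins", m)) != ct:
        return None  # the failure symbol bot
    return H(b"key", m, ct)

def decaps_implicit(sk, pk, ct, s, pke_decrypt, pke_encrypt):
    # 'Implicit' rejection: an invalid ciphertext yields a pseudo-random key
    # derived from a secret seed s, so failure is invisible to outsiders.
    m = pke_decrypt(sk, ct)
    if m is None or pke_encrypt(pk, m, coins=H(b"coins", m)) != ct:
        return H(b"reject", s, ct)
    return H(b"key", m, ct)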
Thibauld Feneuil, Matthieu Rivain
ePrint Report
In this paper, we present CAPSS, a general framework for constructing SNARK-friendly post-quantum signature schemes based on minimal assumptions, specifically the security of an arithmetization-oriented family of permutations. The term "SNARK-friendly" here refers to the efficiency of the signature verification process in terms of SNARK constraints, such as R1CS or AIR constraints used in STARKs. Within the CAPSS framework, signature schemes are designed as proofs of knowledge of a secret preimage of a one-way function, where the one-way function is derived from the chosen permutation family. To obtain compact signatures with SNARK-friendly verification, our primary goal is to achieve a hash-based proof system that is efficient in both proof size and arithmetization of the verification process.
To this end, we introduce SmallWood, a hash-based polynomial commitment and zero-knowledge argument scheme tailored for statements arising in this context. The SmallWood construction leverages techniques from Ligero, Brakedown, and Threshold-Computation-in-the-Head (TCitH) to achieve proof sizes that outperform the state of the art of hash-based zero-knowledge proof systems for witness sizes ranging from $2^5$ to $2^{16}$.
From the SmallWood proof system and further optimizations for SNARK-friendliness, the CAPSS framework offers a generic transformation of any arithmetization-oriented permutation family into a SNARK-friendly post-quantum signature scheme. We provide concrete instances built on permutations such as Rescue-Prime, Poseidon, Griffin, and Anemoi. For the Anemoi family, achieving 128-bit security, our approach produces signatures of sizes ranging from 9 to 13.3 KB, with R1CS constraints between 19K and 29K. This represents a 4-6$\times$ reduction in signature size and a 5-8$\times$ reduction in R1CS constraints compared to Loquat (CRYPTO 2024), a SNARK-friendly post-quantum signature scheme based on the Legendre PRF.
Shuang Sun, Eleftheria Makri
ePrint Report
Multiparty computation (MPC) allows a set of mutually distrusting parties to compute a function over their inputs, while keeping those inputs private. Most recent MPC protocols that are ready for real-world applications are based on the so-called preprocessing model, where the MPC is split into two phases: a preprocessing phase, where raw material, independent of the inputs, is produced; and an online phase, which can be computed efficiently, consuming this preprocessed material, once the inputs become available. However, the sheer number of protocols following this paradigm makes it difficult to navigate the literature. Our work aims at systematizing the existing literature, (1) to make it easier for protocol developers to choose the most suitable preprocessing protocol for their application scenario; and (2) to identify research gaps, so as to give pointers for future work. We devise two main categories for the preprocessing model, which we term traditional and special preprocessing, where the former refers to preprocessing for general-purpose functions, and the latter refers to preprocessing for specific functions. We further systematize the protocols based on the underlying cryptographic primitive they use, the mathematical structure they are based on, and, for special preprocessing protocols, their target function. For each of the 41 presented protocols, we give the intuition behind their main technical contribution, and we analyze their security properties and relative performance.
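As a canonical example of such input-independent raw material, consider Beaver multiplication triples. The following is an illustrative two-party, semi-honest sketch over a prime field, not tied to any particular protocol in the systematization:

import random

P = 2**61 - 1  # prime field modulus (illustrative)

def share(x):
    # Additive secret sharing of x between two parties.
    r = random.randrange(P)
    return r, (x - r) % P

# Preprocessing: a random triple (a, b, c) with c = a*b, secret-shared
# before the inputs are known.
a, b = random.randrange(P), random.randrange(P)
c = a * b % P
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

# Online: to multiply shared inputs x and y, open d = x - a and e = y - b
# (these reveal nothing, since a and b are uniform) and combine locally.
x0, x1 = share(12345); y0, y1 = share(67890)
d = (x0 - a0 + x1 - a1) % P
e = (y0 - b0 + y1 - b1) % P
z0 = (c0 + d * b0 + e * a0 + d * e) % P   # party 0 adds the public d*e term
z1 = (c1 + d * b1 + e * a1) % P
assert (z0 + z1) % P == 12345 * 67890 % P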
Hossein Hafezi, Aditi Partap, Sourav Das, Joseph Bonneau
ePrint Report
We introduce the concept of Fair Signature Exchange (FSE). FSE enables a client to obtain signatures on multiple messages in a fair manner: the client receives all signatures if and only if the signer receives an agreed-upon payment. We formalize security definitions for FSE and present a practical construction based on the Schnorr signature scheme, avoiding computationally expensive cryptographic primitives such as SNARKs. Our scheme imposes minimal overhead on the Schnorr signer and verifier, leaving the signature verification process unchanged from standard Schnorr signatures. Fairness is enforced using a blockchain as a trusted third party, while exchanging only a constant amount of information on-chain regardless of the number of signatures exchanged. We demonstrate how to construct a batch adaptor signature scheme using FSE, and our FSE construction based on Schnorr results in an efficient implementation of a batch Schnorr adaptor signature scheme for the discrete logarithm problem. We implemented our scheme to show that it has negligible overhead compared to standard Schnorr signatures. For instance, exchanging $2^{10}$ signatures on the Vesta curve takes approximately $80$ms for the signer and $300$ms for the verifier, with almost no overhead for the signer and $2\times$ overhead for the verifier compared to the original Schnorr protocol. Additionally, we propose an extension to blind signature exchange, where the signer does not learn the messages being signed. This is achieved through a natural adaptation of blinded Schnorr signatures.
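For background, a single (non-batch) Schnorr adaptor signature works roughly as follows; this is the folklore construction, shown only to fix ideas, not the paper's batch FSE protocol. For a public key $X = xG$, message $m$, and adaptor point $T = tG$, the signer publishes a pre-signature
$$R = rG, \qquad c = H(R + T \,\|\, X \,\|\, m), \qquad \hat{s} = r + c \cdot x \bmod q,$$
which verifies as $\hat{s}G = R + cX$. Releasing the secret $t$ completes it to a standard Schnorr signature $s = \hat{s} + t$ on $m$ (since $sG = (R + T) + cX$), and conversely anyone seeing both $s$ and $\hat{s}$ extracts $t = s - \hat{s}$; posting $s$ on-chain is what ties the signature release to the payment.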
Clémence Bouvier, Lorenzo Grassi, Dmitry Khovratovich, Katharina Koschatko, Christian Rechberger, Fabian Schmid, Markus Schofnegger
ePrint Report
Arithmetic hash functions defined over prime fields have been actively developed and used in verifiable computation (VC) protocols. Among those, elliptic-curve-based SNARKs require large (\(256\)-bit and higher) primes. Such hash functions are notably slow, losing a factor of up to \(1000\) compared to regular constructions like SHA-2/3.
In this paper, we present the hash function $\textsf{Skyscraper}$, which is aimed at large prime fields and provides major improvements compared to $\texttt{Reinforced Concrete}$ and $\texttt{Monolith}$. First, the design is exactly the same for all large primes, which simplifies analysis and deployment. Secondly, it achieves a performance comparable to cryptographic hash standards by using low-degree non-invertible transformations and minimizing modulo reductions. Concretely, it hashes two \(256\)-bit prime field (BLS12-381 curve scalar field) elements in \(135\) nanoseconds, whereas SHA-256 needs \(42\) nanoseconds on the same machine.
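To put this benchmark in perspective: $135/42 \approx 3.2$, i.e., $\textsf{Skyscraper}$ is within a factor of roughly $3.2$ of SHA-256 on that machine, compared to the up-to-$1000\times$ slowdown of earlier arithmetic hashes over large prime fields.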
The low circuit complexity of $\textsf{Skyscraper}$, together with its high native speed, should allow a substantial reduction in many VC scenarios, particularly in recursive proofs.
15 January 2025
Ghent, Belgium, 11 August - 14 August 2025
Event Calendar
Event date: 11 August to 14 August 2025
Submission deadline: 28 February 2025
Notification: 2 May 2025
Department of Computer Science at Aarhus University, Denmark
Job Posting
Applications are invited for a PhD fellowship/scholarship at the Cryptography and Security Group. The position is available from 1 May 2025 or later, with an application deadline of 1 February 2025.
The research scope of the position will be Cryptographic Protocols such as (Password-Authenticated) Key Exchange and MPC (Multi-Party Computation), with a focus on applications to the retail industry. In particular, we aim to improve the state of the art in cryptographic protocols for stronger authentication and privacy-preserving data analytics that can be applied to strengthen the security posture of real-world deployed systems.
The responsibilities of the PhD student are:
- Collaborating with faculty members and fellow researchers to develop novel cryptographic protocols.
- Publishing research findings in top-tier conferences and journals in computer science and related fields.
- Participating in academic activities such as seminars, workshops, and conferences to stay informed of the latest developments in the field.
- Supporting teaching activities in the department by serving as TA.
Qualifications
- A previous background in cryptography and an MSc in Computer Science, Engineering, Mathematics, or a related discipline are desirable (but not required).
- Excellent communication and interpersonal skills with the ability to work effectively in a collaborative research environment.
- Strong organizational and time-management skills, with the ability to prioritize tasks between research, coursework and teaching duties.
- Analytical and critical thinking skills, and fluency in technical English.
Closing date for applications:
Contact: Diego F. Aranha (dfaranha [at] cs.au.dk) or Sophia Yakoubov (sophia.yakoubov [at] cs.au.dk)
More information: https://phd.nat.au.dk/for-applicants
Brandenburg University of Technology, Chair of IT Security
Job Posting
The Young Investigator Group “COSYS - Control Systems and Cyber Security Lab” at the Chair of IT Security at the Brandenburg University of Technology Cottbus-Senftenberg has an open PhD/Postdoc position in the following areas:
AI-based Network Attack Detection and Simulation.
AI-enabled Penetration Testing.
Privacy-Enhancing Technologies in Cyber-Physical Systems.
The available position is funded at the 100% TV-L E13 tariff in Germany and limited until 31.07.2026, with the possibility of extension. Candidates must hold a Master’s degree (a PhD degree for postdocs) or equivalent in Computer Science or a related discipline, or be close to completing it. If you are interested, please send your CV, a transcript of records from your Master’s studies, and an electronic version of your Master’s thesis (if possible), as a single PDF file. Applications will be reviewed until the position is filled.
Closing date for applications:
Contact: Ivan Pryvalov (ivan.pryvalov@b-tu.de)
14 January 2025
Bhargav Nagaraja Bhatt, Fatemeh Shirazi, Alistair Stewart
ePrint Report
The increasing number of blockchain projects introduced annually has led to a pressing need for secure and efficient interoperability solutions. Currently, the lack of such solutions forces end-users to rely on centralized intermediaries, contradicting the core principle of decentralization and trust minimization in blockchain technology. In this paper, we propose a decentralized and efficient interoperability solution (aka Bridge Protocol) that operates without additional trust assumptions, relying solely on the Byzantine Fault Tolerance (BFT) of the two chains being connected. In particular, relayers (actors that exchange messages between networks) are permissionless and decentralized, hence eliminating any single point of failure. We introduce Random Sampling, a novel technique for on-chain light clients to efficiently follow the history of PoS blockchains by reducing the number of signature verifications required. Here, the randomness is drawn on-chain, for example, using Ethereum's RANDAO. We analyze the security of the bridge from a crypto-economic perspective and provide a framework to derive the security parameters. This includes handling subtle concurrency issues and randomness bias in strawman designs. While the protocol is applicable to various PoS chains, we demonstrate its feasibility by instantiating a bridge between Polkadot and Ethereum (currently deployed), and discuss some practical security challenges. We also evaluate the efficiency (gas costs) of an on-chain light-client verifier implemented as a smart contract on Ethereum against SNARK-based approaches. Even for large validator set sizes (up to $10^6$), the signature verification gas costs of our light-client verifier are an order of magnitude lower.
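To illustrate the idea of Random Sampling, here is a minimal sketch; the function names and the sampling method are our own placeholders, and the paper derives the actual sample size and bias handling from its crypto-economic analysis. The light client draws a few validator indices from on-chain randomness and verifies only those signatures.

import hashlib

def sample_validators(seed: bytes, n_validators: int, k: int) -> list[int]:
    # Derive k distinct pseudo-random validator indices from an on-chain
    # randomness beacon output (e.g., an Ethereum RANDAO value).
    out, ctr = [], 0
    while len(out) < k:
        digest = hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()
        idx = int.from_bytes(digest, "big") % n_validators
        if idx not in out:
            out.append(idx)
        ctr += 1
    return out

# The on-chain light client then checks a block against only the k sampled
# validators' signatures instead of all n_validators signatures.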
Xichao Hu, Lin Jiao
ePrint Report
The impossible boomerang attack is a very powerful attack, and existing results show that it is more effective than the impossible differential attack in the related-key scenario. However, in the current key recovery process, the details of a block cipher are ignored, only fixed keys are pre-guessed, and the time complexity of the early abort technique is only roughly estimated. These limitations are obstacles to the broader application of the impossible boomerang attack. In this paper, we propose the pre-sieving technique, the partial pre-guess key technique, and the precise complexity evaluation technique. For the pre-sieving technique, we capitalize on the specific features of both the linear layer and the nonlinear layer to expeditiously filter out impossible quartets at the earliest possible stage. With the partial pre-guess key technique, we can selectively determine the keys that require guessing according to our requirements. Moreover, the precise complexity evaluation technique empowers us to explicitly compute the complexity associated with each step of the attack. We integrate these techniques and utilize them to launch an attack on ARADI, a low-latency block cipher proposed by the NSA (National Security Agency) in 2024 for memory encryption. Eventually, we achieve the first full-round attack with a data complexity of $2^{130}$, a time complexity of $2^{254.81}$, and a memory complexity of $2^{252.14}$. None of the previous key recovery methods has been able to attain such an outcome, thereby demonstrating the high efficacy of our new techniques.
Justin Drake, Dmitry Khovratovich, Mikhail Kudinov, Benedikt Wagner
ePrint Report
With the threat posed by quantum computers on the horizon, systems like Ethereum must transition to cryptographic primitives resistant to quantum attacks. One of the most critical of these primitives is the non-interactive multi-signature scheme used in Ethereum's proof-of-stake consensus, currently implemented with BLS signatures. This primitive enables validators to independently sign blocks, with their signatures then publicly aggregated into a compact aggregate signature.
In this work, we introduce a family of hash-based signature schemes as post-quantum alternatives to BLS. We consider the folklore method of aggregating signatures via (hash-based) succinct arguments, and our work is focused on instantiating the underlying signature scheme. The proposed schemes are variants of the XMSS signature scheme, analyzed within a novel and unified framework. While being generic, this framework is designed to minimize security loss, facilitating efficient parameter selection. A key feature of our work is the avoidance of random oracles in the security proof. Instead, we define explicit standard model requirements for the underlying hash functions. This eliminates the paradox of simultaneously treating hash functions as random oracles and as explicit circuits for aggregation. Furthermore, this provides cryptanalysts with clearly defined targets for evaluating the security of hash functions. Finally, we provide recommendations for practical instantiations of hash functions and concrete parameter settings, supported by known and novel heuristic bounds on the standard model properties.
Hyunjung Son, Seunghun Paik, Yunki Kim, Sunpill Kim, Heewon Chung, Jae Hong Seo
ePrint Report
Fuzzy private set intersection (fuzzy PSI) is a cryptographic protocol for privacy-preserving similarity matching, one of the essential operations in various real-world applications such as facial authentication, information retrieval, and recommendation systems. Despite recent advancements in fuzzy PSI protocols, a huge barrier still remains to deploying them in these applications. The main obstacle is the high dimensionality, e.g., from 128 to 512, of data: many existing methods, such as those of Garimella et al. (CRYPTO’23, CRYPTO’24) or van Baarsen et al. (EUROCRYPT’24), suffer from exponential overhead in communication and/or computation cost. In addition, the dominant similarity metric in these applications is cosine similarity, which disables several optimization tricks based on assumptions about the distribution of data, e.g., techniques by Gao et al. (ASIACRYPT’24). In this paper, we propose a novel fuzzy PSI protocol for cosine similarity, called FPHE, that overcomes these limitations at the same time. FPHE features linear complexity in both computation and communication with respect to the dimension of set elements, requiring only a much weaker assumption than prior works. Our basic strategy is to homomorphically compute cosine similarity and run an approximated comparison function, with a clever packing method for efficiency. In addition, we introduce a novel proof technique to harmonize the approximation error from the sign function with the noise flooding, proving the security of FPHE under the semi-honest model. Moreover, we show that our construction can be extended to support various functionalities, such as labeled or circuit fuzzy PSI. Through experiments, we show that FPHE can perform fuzzy PSI over 512-dimensional data in a few minutes, which was computationally infeasible for all previous proposals under the same assumption as ours.
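In plaintext, the arithmetic that such a protocol evaluates homomorphically looks roughly as follows. This is our illustrative sketch: the iterated polynomial below is a standard low-degree sign approximation, not necessarily the one used in the paper, and the real protocol additionally packs coordinates and floods noise.

import numpy as np

def cosine_similarity(x, y):
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def approx_sign(z, iters=3):
    # f(z) = 1.5z - 0.5z^3 pushes values in (-1, 1) toward +/-1; iterating
    # it is a common low-degree, FHE-friendly approximation of sign.
    for _ in range(iters):
        z = 1.5 * z - 0.5 * z ** 3
    return z

def fuzzy_match(x, y, threshold):
    # Scale the gap into [-1, 1] (cosine and threshold both lie in [-1, 1]),
    # then compare via the polynomial instead of a non-arithmetic branch.
    return approx_sign(0.5 * (cosine_similarity(x, y) - threshold)) > 0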
Gefei Tan, Adrià Gascón, Sarah Meiklejohn, Mariana Raykova, Xiao Wang, Ning Luo
ePrint Report
Zero-knowledge proofs of training (zkPoT) allow a party to prove that a model is trained correctly on a committed dataset without revealing any additional information about the model or the dataset. Existing zkPoT protocols prove the entire training process in zero knowledge; i.e., they prove that the final model was obtained in an iterative fashion starting from the training data and a random seed (and potentially other parameters) and applying the correct algorithm at each iteration. This approach inherently requires the prover to perform work linear in the number of iterations.
In this paper, we take a different approach to proving the correctness of model training. Our approach is motivated by efficiency but also more urgently by the observation that the prover's ability to pick the random seed used for training introduces the potential for it to bias the model. In other words, if the input to the training algorithm is biased, the resulting model will be biased even if the prover correctly ran the training algorithm. Rather than prove the correctness of the training process, we thus directly prove the correctness of the trained model using a notion we call optimum vicinity, which bounds the distance between the trained model and the mathematically optimal model for models that can be viewed as the solution to a convex optimization problem. We show both theoretically and experimentally that this ensures the trained model behaves similarly to the optimal model, and show this is not true for existing approaches. We also demonstrate significant performance improvements as compared to the existing zkPoT paradigm: the statement proven in ZK in our protocol has a size independent of the number of training iterations, and our Boolean (respectively arithmetic) circuit size is up to $246\times$ (respectively $5\times$) smaller than that of a baseline zkPoT protocol that verifies the whole training process.
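To see why a bound of this kind is meaningful, consider the standard strong-convexity argument (an illustration of the flavor of guarantee, not the paper's exact definition of optimum vicinity): if the training loss $f$ is $\mu$-strongly convex with minimizer $w^*$, then
$$f(w) - f(w^*) \le \varepsilon \quad \Longrightarrow \quad \lVert w - w^* \rVert \le \sqrt{2\varepsilon/\mu},$$
so certifying near-optimal loss pins the committed model to a small neighborhood of the unique optimum, independently of the seed or algorithm that produced it.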
Yashvanth Kondi, Divya Ravi
ePrint Report
Secure Multiparty Computation (MPC) protocols that achieve Identifiable Abort (IA) guarantee honest parties that, in the event that they are denied output, they will be notified of the identity of at least one corrupt party responsible for the abort. Cheater identification provides recourse in the event of a protocol failure, and in some cases can even be desired over Guaranteed Output Delivery. However, protocols in the literature typically make use of broadcast as a necessary tool in identifying cheaters. In many deployments, the broadcast channel itself may be the most expensive component.
In this work, we investigate if it is inherent that MPC with IA should bear the full complexity of broadcast. As the implication of broadcast from IA has been established in previous work, we relax our target to circumvent this connection: we allow honest parties to differ in which cheaters they identify, nonetheless retaining the ability to prove claims of cheating to an auditor.
We show that in the honest majority setting, our notion of Provable Identifiable Selective Abort (PISA) can be achieved without a traditional broadcast channel. Indeed, broadcast in this setting---which we term Broadcast with Selective Identifiable Abort (BC-IA)---is achievable in only two point-to-point rounds with a simple echoing technique. On the negative side, we also prove that BC-IA is impossible to achieve in the dishonest majority setting.
As a general result, we show that any MPC protocol that achieves IA with $r$ broadcasts can be compiled to one that achieves PISA with $2(r+1)$ point-to-point rounds. As a practical application of this methodology, we design, implement, and benchmark a six-round honest-majority threshold ECDSA protocol that achieves PISA, and can be deployed in any environment with point-to-point communication alone.
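The echo pattern behind BC-IA can be sketched as follows. This is our illustrative reading of the two-round technique; message formats, signatures, and session handling are placeholders rather than the paper's exact protocol.

from dataclasses import dataclass

@dataclass(frozen=True)
class SignedMsg:
    sender: int     # claimed originator
    value: bytes    # broadcast payload
    sig: bytes      # signature on (session_id, value) under sender's key

def receive_round_two(echoes, sender_id, verify):
    # Round 1: the sender sent its signed value to every party.
    # Round 2: every party echoed what it received; 'echoes' collects these.
    seen = {}
    for m in echoes:
        if verify(m) and m.sender == sender_id:
            seen[m.value] = m
    if len(seen) == 1:
        return ("deliver", next(iter(seen)))
    if len(seen) >= 2:
        # Two validly signed, conflicting messages are transferable
        # evidence of the sender's cheating, checkable by any auditor.
        return ("abort-and-identify", sender_id, list(seen.values())[:2])
    return ("abort", None)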
Ziqi Zhu, Kai Zhang, Zhili Chen, Junqing Gong, Haifeng Qian
ePrint Report
This paper presents the first black-box registered ABE (Reg-ABE) for circuits from lattices. Its selective security is based on the evasive LWE assumption [EUROCRYPT'22, CRYPTO'22]. The only prior Reg-ABE scheme from lattices is derived from a non-black-box construction based on function-binding hash functions and witness encryption [CRYPTO'23]. Technically, we first extend the black-box registration-based encryption from standard LWE [CRYPTO'23] so that we can register a public key with a function; this yields an LWE-based Reg-ABE with ciphertexts of size $L \cdot \mathsf{polylog}(L)$, where $L$ is the number of users. We then make use of the special structure of its ciphertext to reduce its size to $\mathsf{polylog}(L)$ via an algebraic obfuscator based on evasive LWE [CRYPTO'24].
Elodie Ngoie Mutombo, Mike Wa Nkongolo, Mahmut Tokmak
ePrint Report
Cryptojacking, the unauthorised use of computing resources to mine cryptocurrency, has emerged as a critical threat in today’s digital landscape. These attacks not only compromise system integrity but also result in increased costs, reduced hardware lifespan, and heightened network security risks. Early and accurate detection is essential to mitigate the adverse effects of cryptojacking. This study focuses on developing a semi-supervised machine learning (ML) approach that leverages an autoencoder for feature extraction and a random forest (RF) model for classification. The objective is to enhance cryptojacking detection while maintaining a balance between accuracy and interpretability. The proposed methodology is further enhanced with explainable artificial intelligence (XAI) techniques such as local interpretable model-agnostic explanations (LIME) to offer insights into model predictions. Results from datasets such as UGRansome and BitcoinHeist indicate that the semi-supervised approach achieves accuracy rates ranging from 70% to 99%. The study demonstrates that the proposed model provides an efficient, interpretable, and scalable solution for real-time cryptojacking detection across various scenarios.
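A minimal scikit-learn sketch of the described pipeline follows. It illustrates the approach only: the architecture, hyperparameters, and the random placeholder data below are ours, not the paper's, and the LIME explanation step is omitted.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import RandomForestClassifier

def fit_autoencoder(X, code_dim=8, seed=0):
    # Single-bottleneck autoencoder: trained to reconstruct its input,
    # so no labels are needed (the semi-supervised part).
    ae = MLPRegressor(hidden_layer_sizes=(code_dim,), activation="relu",
                      max_iter=2000, random_state=seed)
    ae.fit(X, X)
    return ae

def encode(ae, X):
    # Forward pass to the bottleneck layer: the extracted features.
    return np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])

# Placeholder data standing in for network-traffic records.
rng = np.random.default_rng(0)
X_all = rng.random((500, 20))                        # all records (unlabelled)
X_lab, y_lab = X_all[:100], rng.integers(0, 2, 100)  # small labelled subset

ae = fit_autoencoder(X_all)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(encode(ae, X_lab), y_lab)
print(clf.predict(encode(ae, X_all[:5])))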
Theophilus Agama
ePrint Report
In this paper, we study the distribution of the \textit{gap} between terms in an addition chain. In particular, we show that if $1,2,\ldots,s_{\delta(n)}=n$ is an addition chain of length $\delta(n)$ leading to $n$, then $$\underset{1\leq l\leq \delta(n)}{\mathrm{sup}}(s_{l+k}-s_l)\gg k\frac{n}{\delta(n)}$$ and $$\underset{1\leq l\leq \delta(n)}{\mathrm{inf}}(s_{l+k}-s_l)\ll k\frac{n}{\delta(n)}$$ for fixed $k\geq 1$.
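For a concrete instance, take the addition chain $1,2,3,6,12,15$ of length $5$ leading to $n=15$: the consecutive gaps are $1,1,3,6,3$, so for $k=1$ the largest gap $6$ sits above $n/\delta(n)=3$ while the smallest gap $1$ sits below it, matching the shape of the two bounds.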
13 January 2025
Jianqiao Cambridge Mo, Brandon Reagen
ePrint Report
Privacy and security have become critical priorities in many scenarios. Privacy-preserving computation (PPC) is a powerful solution that allows functions to be computed directly on encrypted data. Garbled circuit (GC) is a key PPC technology that enables secure, confidential computing. GC comes in two forms: Boolean GC supports all operations by expressing functions as logic circuits; arithmetic GC is a newer technique to efficiently compute a set of arithmetic operations like addition and multiplication. Mixed GC combines both Boolean and arithmetic GC, in an attempt to optimize performance by computing each function in its most natural form. However, this introduces additional costly conversions between the two forms. It remains unclear if and when the efficiency gains of arithmetic GC outweigh these conversion costs.
In this paper, we present Arithmetic Boolean Logic Exchange for Garbled Circuit (ABLE), the first real implementation of mixed GC. ABLE profiles the performance of Boolean and arithmetic GC operations along with their conversions. We assess not only communication but also computation latency, a crucial factor in evaluating the end-to-end runtime of GC. Based on these insights, we propose a method to determine whether it is more efficient to use general Boolean GC or mixed GC for a given application. Rather than implementing both approaches to identify the better solution for each unique case, our method enables users to select the most suitable GC realization early in the design process. This method evaluates whether the benefits of transitioning operations from Boolean to arithmetic GC offset the associated conversion costs. We apply this approach to a neural network task as a case study.
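A toy version of such a decision rule is sketched below; the names and cost tables are hypothetical, whereas ABLE's actual method works from profiled communication and compute-latency numbers.

def prefer_mixed_gc(ops, cost_bool, cost_arith, convert_cost):
    # ops: list of (op_name, needs_conversion) pairs for the circuit.
    # Costs may be communication bytes or latency; the comparison is the same.
    boolean_total = sum(cost_bool[op] for op, _ in ops)
    mixed_total = sum(
        min(cost_bool[op], cost_arith.get(op, float("inf")))
        + (convert_cost if needs_conv else 0)
        for op, needs_conv in ops
    )
    return mixed_total < boolean_total  # True: mixed GC pays off

# Example: one multiplication (cheap arithmetically, needs a conversion)
# followed by a comparison (Boolean-only).
ops = [("mul", True), ("cmp", False)]
print(prefer_mixed_gc(ops, {"mul": 100, "cmp": 10},
                      {"mul": 5}, convert_cost=20))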
Implementing ABLE reveals opportunities for further GC optimization. We propose a heuristic to reduce the number of primes in arithmetic GC, cutting communication by 14.1% and compute latency by 15.7% in a 16-bit system. Additionally, we optimize mixed GC conversions with the row reduction technique, achieving a 48.6% reduction in garbled table size for bit-decomposition and a 50% reduction for the bit-composition operation. These improvements reduce communication overhead in stream GC and free up storage in the GC-with-preprocessing approach. We open-source our code for community use.
Shweta Agrawal, Giulio Malavolta, Tianwei Zhang
ePrint Report
Time-lock puzzles (TLPs) are a cryptographic tool that allows one to encrypt a message into the future, for a predetermined amount of time $T$. At present, we have only two constructions with provable security: one based on the repeated squaring assumption and the other based on obfuscation. Basing TLPs on any other assumption is a long-standing question, further motivated by the fact that known constructions are broken by quantum algorithms.
In this work, we propose a new approach to construct time-lock puzzles based on lattices, and therefore with plausible post-quantum security. We obtain the following main results:
* In the preprocessing model, where a one-time public-coin preprocessing is allowed, we obtain a time-lock puzzle with encryption time $\log(T)$.
* In the plain model, where the encrypter does all the computation, we obtain a time-lock puzzle with encryption time $\sqrt{T}$.
Both constructions assume the existence of any sequential function $f$, and the hardness of the circular small-secret learning with errors (LWE) problem. At the heart of our results is a new construction of succinct randomized encodings (SRE) for $T$-folded repeated circuits, where the complexity of the encoding is $\sqrt{T}$. This is the first construction of SRE where the overall complexity of the encoding algorithm is sublinear in the runtime $T$, and which is not based on obfuscation. As a direct corollary, we obtain a non-interactive RAM delegation scheme with sublinear complexity (in the number of steps $T$).
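For concreteness, with a delay of $T = 2^{40}$ sequential steps, the preprocessing-model encrypter performs on the order of $\log T = 40$ operations, and the plain-model encrypter on the order of $\sqrt{T} = 2^{20} \approx 10^6$, both far below the $T$ sequential steps required to solve the puzzle.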