IACR News
Here you can see all recent updates to the IACR webpage.
29 November 2024
Puja Mondal, Suparna Kundu, Supriya Adhikary, Angshuman Karmakar
ePrint Report
CROSS is a code-based post-quantum digital signature scheme based on a zero-knowledge (ZK) framework. It is a second-round candidate in the National Institute of Standards and Technology's additional call for standardizing post-quantum digital signatures. The memory footprint of this scheme is prohibitively large, especially for small embedded devices. In this work, we propose various techniques to reduce the memory footprint of key generation, signature generation, and verification by as much as 50%, 52%, and 74%, respectively, on an ARM Cortex-M4 device. Moreover, our memory-optimized implementations incorporate the countermeasure against the recently proposed (ASIACRYPT 2024) fault attacks on ZK-based signature schemes.
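As a concrete illustration of one standard memory-reduction idiom in this space (a sketch under assumed parameters, not the paper's exact technique), large seed-expanded data can be regenerated on the fly instead of being held in RAM, trading recomputation for footprint:

```python
import hashlib

SEED = b"\x00" * 16            # hypothetical 128-bit seed
ROW_BYTES = 32                 # hypothetical row width of seed-expanded data
N_ROWS = 64                    # hypothetical number of rows

def expand_row(seed: bytes, row: int) -> bytes:
    """Derive one row on demand with SHAKE-128, domain-separated by index."""
    shake = hashlib.shake_128()
    shake.update(seed + row.to_bytes(4, "little"))
    return shake.digest(ROW_BYTES)

# Memory-hungry: materialise everything (N_ROWS * ROW_BYTES bytes live).
full = [expand_row(SEED, i) for i in range(N_ROWS)]
ref = bytes(ROW_BYTES)
for r in full:
    ref = bytes(a ^ b for a, b in zip(ref, r))

# Memory-lean: stream rows one at a time, keeping only ROW_BYTES live.
acc = bytes(ROW_BYTES)
for i in range(N_ROWS):
    row = expand_row(SEED, i)                     # recomputed, never stored
    acc = bytes(a ^ b for a, b in zip(acc, row))  # example row-wise fold

assert acc == ref                                 # same result, smaller footprint
```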
Akiko Inoue, Ashwin Jha, Bart Mennink, Kazuhiko Minematsu
ePrint Report
Authenticated encryption schemes guarantee that parties who share a secret key can communicate confidentially and authentically. One of the most popular and widely used authenticated encryption schemes is GCM by McGrew and Viega (INDOCRYPT 2004). However, despite its simplicity and efficiency, GCM also comes with deficiencies, most notably its devastating insecurity against nonce misuse and imperfect security for short tags.
Very recently, Campagna, Maximov, and Mattsson presented GCM-SST (IETF Internet draft 2024), a variant of GCM that uses a slightly more involved universal hash function composition, and claimed that this construction achieves stronger security in case of tag truncation. GCM-SST has already received considerable interest from industry (e.g., Amazon and Ericsson) and international organizations (e.g., IETF and 3GPP), but it has not received any generic security analysis to date.
In this work, we fill this gap and perform a detailed security analysis of GCM-SST. In particular, we prove that GCM-SST achieves security in the nonce-misuse resilience model of Ashur et al.~(CRYPTO 2017), roughly guaranteeing that even if nonces are reused, evaluations of GCM-SST for new nonces are secure. Our security bound also verifies the designers' (informal) claim on tag truncation. Additionally, we investigate and describe possibilities to optimize the hashing in GCM-SST further, and we describe a universal forgery attack with a complexity of around $2^{33.6}$ when the tag is 32 bits, improving over an earlier attack of complexity $2^{40}$ by Lindell.
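For readers unfamiliar with the GCM family, the forgery-relevant core is a polynomial universal hash over GF($2^{128}$). The sketch below shows a single-key GHASH-style evaluation under a simplified, non-bit-reflected field convention; GCM-SST's actual composition uses more hash keys and differs in detail:

```python
# Minimal GHASH-style polynomial hash over GF(2^128), simplified convention.
R = (1 << 128) | 0x87  # x^128 + x^7 + x^2 + x + 1 as an integer polynomial

def gf128_mul(a: int, b: int) -> int:
    """Carry-less multiplication of two 128-bit polynomials, reduced mod R."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a >> 128:               # reduce as soon as degree reaches 128
            a ^= R
    return res

def poly_hash(h: int, blocks: list[int]) -> int:
    """Horner evaluation: H(m) = m_1*h^n xor m_2*h^(n-1) xor ... xor m_n*h."""
    acc = 0
    for m in blocks:
        acc = gf128_mul(acc ^ m, h)
    return acc

# Forgery resistance rests on this being an (almost-)XOR-universal hash:
# two distinct n-block messages collide with probability at most n/2^128.
h = 0x0123456789ABCDEF0123456789ABCDEF   # hypothetical hash key
print(hex(poly_hash(h, [1, 2, 3])))
```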
Anup Kumar Kundu, Shibam Ghosh, Aikata Aikata, Dhiman Saha
ePrint Report
In this work, we introduce ToFA, the first fault attack (FA) strategy that leverages the classically well-known idea of impossible differential cryptanalysis to mount practically verifiable attacks on bit-oriented ciphers like GIFT and BAKSHEESH. The idea stems from the fact that truncated differential paths induced by fault injection in certain intermediate rounds of the ciphers lead to active S-boxes in subsequent rounds whose inputs admit specific truncated differences. This leads to a (multi-round) impossible differential distinguisher, which can be incrementally leveraged for key-guess elimination via partial decryption. The key-space reduction further exploits the multi-round impossibility, capitalizing on the relations due to the quotient-remainder (QR) groups of the GIFT and BAKSHEESH linear layers, which increases the filtering capability of the distinguisher. Moreover, the primary observations made in this work are independent of the actual S-box. Clock-glitch-based fault attacks were mounted on 8-bit implementations of GIFT-64/GIFT-128 using a ChipWhisperer Lite board on an 8-bit ATXmega128D4-AU microcontroller. Unique key recovery was achieved for GIFT-128 with 3 random byte faults, while for GIFT-64 the key space was reduced to $2^{32}$ (the highest achievable for GIFT-64, due to its key schedule) with a single-level fault. This work also reports the highest fault-injection penetration for any variant of GIFT and BAKSHEESH. Finally, this work reiterates the role of classical cryptanalysis strategies in fault vulnerability assessment while leading to the most efficient fault attacks on GIFT.
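The key-elimination logic can be illustrated with a toy sketch: guesses of a last-round key nibble survive only if partial decryption of a correct/faulty ciphertext pair yields a difference the fault could actually have caused. Only the S-box below is GIFT's; the difference set and ciphertext values are hypothetical:

```python
# Toy impossible-difference filter for one key nibble (illustrative only).
SBOX = [0x1, 0xA, 0x4, 0xC, 0x6, 0xF, 0x3, 0x9,
        0x2, 0xD, 0xB, 0x7, 0x5, 0x0, 0x8, 0xE]   # the GIFT S-box
INV_SBOX = [SBOX.index(i) for i in range(16)]

# Hypothetical truncated-difference set: the S-box input differences a fault
# at the assumed position could propagate to (anything else is impossible).
POSSIBLE_DIFFS = {0x2, 0x8, 0xA}

def surviving_keys(ct_correct: int, ct_faulty: int) -> list[int]:
    """Return the key-nibble guesses consistent with the fault model."""
    survivors = []
    for k in range(16):
        d_in = INV_SBOX[ct_correct ^ k] ^ INV_SBOX[ct_faulty ^ k]
        if d_in in POSSIBLE_DIFFS:     # impossible differences eliminate k
            survivors.append(k)
    return survivors

# Each additional (correct, faulty) pair intersects the surviving sets,
# shrinking the key space toward the unique right nibble.
print(surviving_keys(0x6, 0xB))        # hypothetical ciphertext nibbles
```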
Shengyuan Xu, Siwei Chen, Xiutao Feng, Zejun Xiang, Xiangyong Zeng
ePrint Report
BAKSHEESH is a lightweight block cipher following up on the well-known cipher GIFT-128; it uses a 4-bit S-box that has a non-trivial Linear Structure (LS). The S-box also requires a low number of AND gates, which makes BAKSHEESH more resistant to side-channel attacks than GIFT-128. In this paper, we give the first third-party security analysis of BAKSHEESH from the perspective of traditional attacks: integral, differential and linear attacks. Firstly, we propose a framework for integral attacks based on the properties of BAKSHEESH's S-box and its inverse. By this, we achieve 9- and 10-round practical key-recovery attacks, and give a 15-round theoretical attack. Secondly, we re-evaluate the security bound against differential cryptanalysis, correcting two errors from the original paper and presenting a key-recovery attack for 19 rounds. Finally, for linear cryptanalysis, we develop an automated model for key-recovery attacks and then demonstrate a key-recovery attack for 21 rounds. We stress that our attacks cannot threaten the full-round BAKSHEESH, but they give a deeper understanding of its security.
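A minimal illustration of the property integral attacks start from (a toy one-round SPN with a random bit permutation, not BAKSHEESH itself): a set in which one nibble takes all 16 values stays XOR-balanced through an S-box-plus-permutation round:

```python
# Toy integral-distinguisher check: "ALL" in one nibble stays "BALANCED".
import random

SBOX = [0x1, 0xA, 0x4, 0xC, 0x6, 0xF, 0x3, 0x9,
        0x2, 0xD, 0xB, 0x7, 0x5, 0x0, 0x8, 0xE]
PERM = random.Random(0).sample(range(16), 16)   # hypothetical bit permutation

def round_fn(state: int, key: int) -> int:
    s = 0
    for i in range(4):                          # S-box layer on four nibbles
        s |= SBOX[(state >> (4 * i)) & 0xF] << (4 * i)
    p = 0
    for i in range(16):                         # bit-permutation layer
        p |= ((s >> i) & 1) << PERM[i]
    return p ^ key

key = 0xBEEF                                    # hypothetical round key
states = [0x37A0 | x for x in range(16)]        # low nibble = ALL, rest constant
xor_sum = 0
for st in states:
    xor_sum ^= round_fn(st, key)
assert xor_sum == 0                             # every output bit is balanced
# Key-recovery attacks guess round keys and test whether this balance
# reappears after partial decryption of deeper rounds.
```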
Simon Judd
ePrint Report
We present EndGame, a novel blockchain architecture that achieves succinctness through Reed-Solomon accumulation schemes. Our construction enables constant-time verification of blockchain state while maintaining strong security properties. We demonstrate how to efficiently encode blockchain state transitions using Reed-Solomon codes and accumulate proofs of state validity using the ARC framework. Our protocol achieves optimal light client verification costs and supports efficient state management without trusted setup.
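For orientation, Reed-Solomon encoding itself is just polynomial evaluation over a field; the sketch below uses a toy prime field and illustrative rate, not the parameters of EndGame or ARC:

```python
# Minimal Reed-Solomon encoding over GF(257) (toy parameters).
P = 257                       # small prime field, illustrative only

def rs_encode(message: list[int], n: int) -> list[int]:
    """Encode k message symbols into an n-symbol Reed-Solomon codeword."""
    assert len(message) <= n <= P
    def eval_poly(x: int) -> int:
        acc = 0
        for c in reversed(message):   # Horner's rule
            acc = (acc * x + c) % P
        return acc
    return [eval_poly(x) for x in range(n)]

# Any k of the n symbols determine the degree-(k-1) polynomial, which is what
# gives the erasure tolerance and proximity tests that accumulators rely on.
msg = [5, 17, 42, 99]              # k = 4
codeword = rs_encode(msg, 12)      # n = 12, rate 1/3
print(codeword)
```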
Giulia Gaggero, Elisa Gorla
ePrint Report
In this paper, we discuss what it means for a polynomial system to be random and how hard it is to solve a random polynomial system. We propose an algebraic definition of randomness, that we call algebraic randomness. Using a conjecture from commutative algebra, we produce a sharp upper bound for the degree of regularity, hence the complexity of solving an algebraically random polynomial system by Groebner bases methods. As a proof of concept, we apply our result to Rainbow and GeMSS and show that these systems are far from being algebraically random.
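For comparison, the classical benchmark such bounds are measured against is the semi-regular case (in the sense of Bardet, Faugère and Salvy), where the degree of regularity is read off a power series:

```latex
\[
  S(z) = \frac{\prod_{i=1}^{m}\bigl(1 - z^{d_i}\bigr)}{(1-z)^{n}},
  \qquad
  d_{\mathrm{reg}} = \min\bigl\{ d \ge 0 : [z^d]\, S(z) \le 0 \bigr\}
\]
% for m equations of degrees d_1, ..., d_m in n variables. Groebner-basis
% solving then costs roughly
\[
  O\!\left( \binom{n + d_{\mathrm{reg}}}{d_{\mathrm{reg}}}^{\omega} \right)
\]
% field operations, where omega is the exponent of linear algebra.
```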
Jianjun HU
ePrint Report
In 2016, Petit et al. first studied the implementation of the index calculus method on elliptic curves over prime finite fields, and in 2018, Momonari and Kudo improved the algorithm of Petit et al. This paper analyzes the research results of Petit, Momonari and Kudo, and points out the existing problems of their algorithms. Therefore, with the help of summation polynomials and index calculus, a pseudo-index-calculus algorithm for the elliptic curve discrete logarithm problem over prime finite fields is proposed, and its correctness is analyzed and verified. We point out that, at least in the current state of research, there is no subexponential-time method for solving discrete logarithms on elliptic curves over prime finite fields.
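For context, the summation polynomials underlying this line of work, for a curve $E: y^2 = x^3 + Ax + B$, are defined so that $S_n$ vanishes on $x$-coordinates $x_1, \dots, x_n$ exactly when points $P_i$ with $x(P_i) = x_i$ sum to the identity. The second and third are explicit, and higher ones follow by resultants:

```latex
\[
  S_2(x_1, x_2) = x_1 - x_2, \qquad
  S_3(x_1, x_2, x_3) = (x_1 - x_2)^2 x_3^2
    - 2\bigl((x_1 + x_2)(x_1 x_2 + A) + 2B\bigr) x_3
    + (x_1 x_2 - A)^2 - 4B(x_1 + x_2),
\]
% and, for n >= 4,
\[
  S_n(x_1, \dots, x_n)
    = \mathrm{Res}_X\bigl( S_{n-1}(x_1, \dots, x_{n-2}, X),\;
                           S_3(x_{n-1}, x_n, X) \bigr).
\]
```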
Brandon "Cryptskii" Ramsay
ePrint Report
We present a formal analysis of the Overpass protocol's hierarchical state channel architecture, focusing on its unique approach to state synchronization and tamper detection through cryptographic primitives. The protocol achieves global state consistency without traditional consensus mechanisms by leveraging Sparse Merkle Trees (SMTs), zero-knowledge proofs, and a deterministic hierarchical structure. We provide mathematical proofs of security properties and analyze the protocol's efficiency in terms of computational and communication complexity.
Antonina Bondarchuk, Olive Chakraborty, Geoffroy Couteau, Renaud Sirdey
ePrint Report
This paper focuses on reducing the bandwidth requirement for FHE ciphertext transmission. While this issue has been extensively studied from the uplink viewpoint (transmission of encrypted inputs towards an FHE calculation), where several approaches exist to essentially cancel FHE ciphertext expansion, the downlink case (transmission of encrypted results towards an end-user) has received much less attention. In this paper, we address this latter issue with a particular focus on the TFHE scheme, for which we investigate a number of methods, including several approaches for switching to more compact linearly homomorphic schemes, reducing the precision of T(R)LWE coefficients (while maintaining acceptable probabilities of decryption error), and others. We also investigate how to use these methods in combination, depending on the number of FHE results to transmit. We further perform extensive experiments demonstrating that the downlink FHE ciphertext expansion factor can be practically reduced to values below 10, depending on the setup, with little additional computational burden.
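One of the coefficient-precision ideas mentioned above can be sketched as plain modulus switching, rounding each coefficient to a smaller modulus at the cost of bounded extra noise (toy parameters, not the paper's):

```python
# Coefficient rounding from Z_Q to Z_QP and back (toy moduli).
Q  = 2**32          # original ciphertext modulus
QP = 2**12          # reduced modulus after switching (12-bit coefficients)

def mod_switch(coeffs: list[int]) -> list[int]:
    """Round each coefficient from Z_Q to Z_QP (nearest-integer scaling)."""
    return [((c * QP + Q // 2) // Q) % QP for c in coeffs]

def lift(coeffs: list[int]) -> list[int]:
    """Map the compact coefficients back up to Z_Q before decryption."""
    return [((c * Q + QP // 2) // QP) % Q for c in coeffs]

ct = [1234567891, 87654321, 4026531840]   # toy LWE coefficients mod Q
small = mod_switch(ct)                    # 12 bits each instead of 32
recovered = lift(small)
# Per-coefficient error is at most about Q/(2*QP); decryption still succeeds
# as long as this stays below the scheme's remaining noise budget.
print([abs(a - b) for a, b in zip(ct, recovered)])
```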
Cas Cremers, Aleksi Peltonen, Mang Zhao
ePrint Report
Despite decades of work on threshold signature schemes, there is still limited agreement on their desired properties and threat models. In this work we significantly extend and repair previous work to give a unified syntax for threshold signature schemes and a new hierarchy of security notions for them. Moreover, our new hierarchy allows us to develop an automated analysis approach for protocols that use threshold signatures, which can discover attacks on protocols that exploit the details of the security notion offered by the used scheme, which can help choose the correct security notion (and scheme that fulfills it) that is required for a specific protocol.
Unlike prior work, our syntax for threshold signatures covers both non-interactive and interactive signature schemes with any number of key generation and signing rounds, and our hierarchy of security notions additionally includes elements such as various types of corruption and malicious key generation. We show the applicability of our hierarchy by selecting representative threshold signature schemes from the literature, extracting their core security features, and categorizing them according to our hierarchy. As a side effect of our work, we show through a counterexample that a previous attempt at building a unified hierarchy of unforgeability notions does not meet its claimed ordering, and show how to repair it without further restricting the scope of the definitions.
Based on our syntax and hierarchy, we develop the first systematic, automated analysis method for higher-level protocols that use threshold signatures. We use a symbolic analysis framework to abstractly model threshold signature schemes that meet security notions in our hierarchy, and implement this in the Tamarin prover. Given a higher-level protocol that uses threshold signatures, and a security notion from our hierarchy, our automated approach can find attacks on such protocols that exploit the subtle differences between elements of our hierarchy. Our approach can be used to formally analyze the security implications of implementing different threshold signature schemes in higher-level protocols.
Aikata Aikata, Daniel Sanz Sobrino, Sujoy Sinha Roy
ePrint Report
Fully Homomorphic Encryption (FHE) enables privacy-preserving computation but imposes significant computational and communication overhead on the client for the public-key encryption. To alleviate this burden, previous works have introduced the Hybrid Homomorphic Encryption (HHE) paradigm, which combines symmetric encryption with homomorphic decryption to enhance performance for the FHE client. While early HHE schemes focused on binary data, modern versions now support integer prime fields, improving their efficiency for practical applications such as secure machine learning.
Despite several HHE schemes having been proposed in the literature, there has been no comprehensive study evaluating their performance or area advantages over FHE for encryption tasks. This paper addresses this gap by presenting the first implementation of an HHE scheme, PASTA, a symmetric encryption scheme over integers designed to facilitate fast client encryption and homomorphic symmetric decryption on the server. We provide performance results for both FPGA and ASIC platforms, including a RISC-V System-on-Chip (SoC) implementation on a low-end 130 nm ASIC technology, which achieves a 43–171$\times$ speedup compared to a CPU. Additionally, on high-end 7 nm and 28 nm ASIC platforms, our design demonstrates a 97$\times$ speedup over prior public-key client accelerators for FHE. We have made our design public and benchmarked an application to support future research.
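The HHE round trip itself is easy to sketch. Below, a toy Paillier instance stands in for the homomorphic layer; PASTA targets lattice-based FHE over prime fields, and a linearly homomorphic stand-in cannot evaluate a real keystream homomorphically, so the encrypted keystream is supplied directly. The point is the division of labor: cheap additive masking on the client, homomorphic unmasking on the server:

```python
import hashlib
import math
import random

# --- toy Paillier (linearly homomorphic); insecure demo parameters ---
p, q = 2003, 2011                       # toy primes, far too small for security
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                    # valid because g = n + 1 below

def enc(m: int) -> int:
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def dec(c: int) -> int:
    return (pow(c, lam, n2) - 1) // n * mu % n

# --- client: cheap symmetric encryption with a PRF keystream ---
key = b"client-secret"                  # hypothetical symmetric key
def keystream(i: int) -> int:
    return int.from_bytes(hashlib.sha256(key + bytes([i])).digest(), "big") % n

msgs = [42, 1337, 7]
sym_ct = [(m + keystream(i)) % n for i, m in enumerate(msgs)]  # tiny upload
enc_ks = [enc(keystream(i)) for i in range(len(msgs))]         # encrypted keystream

# --- server: homomorphic symmetric decryption, never sees the plaintext ---
# Plaintext addition is ciphertext multiplication, and raising to n - 1
# negates the plaintext mod n, so Enc(c) * E(k)^(n-1) = E(c - k) = E(m).
homo = [enc(c) * pow(ek, n - 1, n2) % n2 for c, ek in zip(sym_ct, enc_ks)]
assert [dec(c) for c in homo] == msgs
```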
Florian Hirner, Florian Krieger, Constantin Piber, Sujoy Sinha Roy
ePrint Report
Zero-knowledge proofs (ZKPs) are cryptographic protocols that enable one party to prove the validity of a statement without revealing the underlying data. Such proofs have applications in privacy-preserving technologies and verifiable computations. However, slow proof generation poses a significant challenge to the wide-scale adoption of ZKPs. Orion is a recent ZKP scheme with linear prover time. It leverages coding theory, expander graphs, and Merkle hash trees to improve computational efficiency. However, the polynomial commitment phase in Orion remains a primary performance bottleneck due to the memory-intensive nature of expander-graph-based encoding and the data-heavy hashing required for Merkle tree generation.
This work introduces several algorithmic and hardware-level optimizations aimed at accelerating Orion's commitment phase. We replace the recursive encoding construction with an iterative approach and propose novel expander graph strategies optimized for hardware to enable more parallelism and reduce off-chip memory access. Additionally, we implement an on-the-fly expander graph generation technique, reducing memory usage by gigabytes. Further optimizations in Merkle tree generation reduce the cost of SHA3 hashing, resulting in significant speedups of the polynomial commitment phase. Our FPGA implementation heavily optimizes access to the off-chip high-bandwidth memory (HBM) utilizing memory-efficient computational strategies. The accelerator demonstrates speedups of up to 381$\times$ for linear encoding and up to 2,390$\times$ for the hashing operations over a software implementation on a high-end CPU. In the context of real-world applications, such as zero-knowledge proof-of-training of deep neural networks (DNNs), our techniques show up to a 241$\times$ speedup for the polynomial commitment.
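The Merkle commitment step that dominates the hashing cost can be sketched in a few lines (leaf layout and padding below are illustrative, not Orion's):

```python
# Minimal SHA3-256 Merkle root over a set of encoded columns.
import hashlib

def sha3(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [sha3(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [sha3(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Each column of the RS-encoded matrix becomes one leaf; the root is the
# polynomial commitment sent to the verifier.
columns = [bytes([i] * 32) for i in range(8)]   # toy column data
print(merkle_root(columns).hex())
```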
Gurgen Arakelov, Sergey Gomenyuk, Hovsep Papoyan
ePrint Report
The concept of a decentralized computer is a powerful and transformative idea that has proven its significance in enabling trustless, distributed computations. However, its application has been severely constrained by an inability to handle private data due to the inherent transparency of blockchain systems. This limitation restricts the scope of use cases, particularly in domains where confidentiality is critical.
In this work, we introduce a model for a Fully Homomorphic Encryption (FHE) decentralized computer. Our approach leverages recent advancements in FHE technology to enable secure computations on encrypted data while preserving privacy. By integrating this model into the decentralized ecosystem, we address the long-standing challenge of privacy in public blockchain environments. The proposed FHE computer supports a wide range of use cases, is scalable, and offers a robust framework for incentivizing developer contributions.
Seunghwan Lee, Dohyuk Kim, Dong-Joon Shin
ePrint Report
This paper proposes a fast, compact-key-size, and hardware-friendly bootstrapping using only 16-bit integer arithmetic, together with a fully homomorphic encryption scheme, FHE16, which enables gate operations on ciphertexts using only 16-bit integer arithmetic. The proposed bootstrapping consists of unit operations on ciphertexts, such as (incomplete) number theoretic transform (NTT), inverse NTT, polynomial multiplication, gadget decomposition, and automorphism, under a composite modulus constructed from 16-bit primes. Since our bootstrapping does not use any floating-point operations, no extra floating-point errors occur, so FHE16 can pack more message bits into a single ciphertext than TFHE-rs, which utilizes floating-point operations. Furthermore, we propose a multiple-instruction multiple-ciphertext (MIMC) method to accelerate the simultaneous execution of different homomorphic operations across multiple ciphertexts. Finally, experimental results show that the bootstrapping operation completes in 2.89 milliseconds for a ciphertext dimension of 512.
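The flavor of such float-free, 16-bit-friendly arithmetic can be sketched with a plain size-512 NTT over the 14-bit prime 12289 (FHE16's composite-modulus, incomplete-NTT layout differs; this is only illustrative):

```python
# Exact NTT round trip mod a 16-bit-friendly prime; no floating point anywhere.
Q = 12289                       # prime with Q - 1 = 2^12 * 3, so 512 | Q - 1
N = 512

def find_generator() -> int:
    # g generates Z_Q^* iff g^((Q-1)/p) != 1 for every prime p | Q - 1 (2, 3).
    for g in range(2, Q):
        if pow(g, (Q - 1) // 2, Q) != 1 and pow(g, (Q - 1) // 3, Q) != 1:
            return g

W = pow(find_generator(), (Q - 1) // N, Q)   # primitive N-th root of unity

def ntt(a: list[int], w: int) -> list[int]:
    """Radix-2 decimation-in-time NTT; all residues fit in 16 bits (Q < 2^14)."""
    if len(a) == 1:
        return a[:]
    even = ntt(a[0::2], w * w % Q)
    odd = ntt(a[1::2], w * w % Q)
    half, out, t = len(a) // 2, [0] * len(a), 1
    for i in range(half):
        u, v = even[i], t * odd[i] % Q
        out[i], out[i + half] = (u + v) % Q, (u - v) % Q
        t = t * w % Q
    return out

def intt(a: list[int]) -> list[int]:
    res = ntt(a, pow(W, Q - 2, Q))           # evaluate at the inverse root
    inv_n = pow(N, -1, Q)
    return [x * inv_n % Q for x in res]

poly = [i % Q for i in range(N)]
assert intt(ntt(poly, W)) == poly            # exact round trip, no rounding error
```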
Thomas Prévost, Olivier Alibart, Anne Marin, Marc Kaplan
ePrint Report
We introduce MULTISS, a new distributed storage protocol over multiple remote Quantum Key Distribution (QKD) networks that ensures long-term data confidentiality. Our protocol extends LINCOS, a secure storage protocol that uses Shamir secret sharing to distribute data in a single QKD network. Instead, MULTISS uses a hierarchical secret sharing scheme that makes certain shares mandatory for the reconstruction of the original secret. We prove that MULTISS ensures that the stored data remain secure even if an eavesdropper (1) gets full access to all storage servers of some of the QKD networks or (2) stores and later breaks all the classical communication between the QKD networks. We demonstrate that this is strictly more secure than LINCOS, which is broken as soon as one QKD network is compromised.
Our protocol, like LINCOS, has a procedure to update the shares stored in each QKD network without reconstructing the original data. In addition, we provide a procedure to recover from a full compromise of one of the QKD networks. In particular, we introduce a version of the protocol that can only be implemented over restricted network topologies, but minimizes the communication required in the recovery procedure.
In practice, the MULTISS protocol is designed for the case of several QKD networks at the metropolitan scale connected to each other through channels secured by classical cryptography. Hence, MULTISS offers a secure distributed storage solution in a scenario that is compatible with the current deployment of quantum networks.
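The sharing primitive LINCOS builds on is standard Shamir secret sharing, sketched below. A hierarchical variant in the spirit of MULTISS can make a share mandatory by, for example, first splitting the secret into a mandatory part and a residue and only Shamir-sharing the residue; that construction is illustrative, not the paper's exact scheme:

```python
# Shamir (t, n) secret sharing over a prime field (toy field size).
import random

P = 2**127 - 1                 # Mersenne prime field, illustrative

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Hide the secret in the constant term of a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

s = 123456789
shards = share(s, t=3, n=5)
assert reconstruct(shards[:3]) == s        # any 3 of 5 shares suffice
assert reconstruct(shards[2:5]) == s
```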
Zhuo Wu, Shi Qi, Xinxuan Zhang, Yi Deng
ePrint Report
This work introduces a novel technique to enhance the efficiency of proving composite statements. We present the \textit{Hash-and-Prove} framework to construct zkSNARKs for proving the satisfiability of arithmetic circuits with additional \textit{Algebraic Gates}. These algebraic gates serve as building blocks for forming more generalized relations in algebra. Unlike Pedersen-committed \textit{Commit-and-Prove} SNARKs, which suffer from increased proof size and verification overhead when proving composite statements, our solution significantly improves both proof size and verification time while maintaining competitive and practical prover efficiency.
In the application of proof of solvency, where we need to prove knowledge of $x$ such that $\mathrm{SHA256}(g^x)=y$, our approach achieves a 100$\times$ reduction in proof size and a 500$\times$ reduction in verification time, along with a 2$\times$ speedup in proving time compared to the work of Agrawal et al. (CRYPTO 2018). For proving ECDSA signature verification, we achieve a proving time of 2.1 seconds, which is a 70$\times$ speedup compared to using Groth16, and a proof size of 4.81 kb, which is a 160$\times$ reduction compared to Field Agnostic SNARKs (Block et al., CRYPTO 2024).
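For contrast, the Pedersen commitments used by Commit-and-Prove pipelines are sketched below (toy Schnorr-group parameters, purely illustrative); their homomorphic glue is exactly the overhead the Hash-and-Prove framework is designed to avoid:

```python
# Pedersen commitment in a toy order-q subgroup of Z_p^*.
import random

p, q = 467, 233               # safe prime p = 2q + 1 (toy sizes)
g = 4                         # 2^2: a generator of the order-q subgroup
h = 9                         # 3^2: second generator; its discrete log base g
                              # is assumed unknown to the committer

def commit(m: int, r: int) -> int:
    """C = g^m * h^r mod p (perfectly hiding, computationally binding)."""
    return pow(g, m % q, p) * pow(h, r % q, p) % p

m = 42
r = random.randrange(q)
C = commit(m, r)
# Homomorphic: commitments to m1 and m2 multiply to a commitment to m1 + m2,
# which is how Commit-and-Prove systems link sub-proofs together.
assert commit(1, 5) * commit(2, 7) % p == commit(3, 12)
```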
Yipeng Shi
ePrint Report
White-box cryptography is a software implementation technique based on lookup tables, with resistance against key-extraction and code-lifting attacks being a primary focus of its research. Space hardness is a widely used property for evaluating the resistance of white-box ciphers against code-lifting attacks. However, none of the existing ciphers can provide strong space hardness under an adaptively chosen-space attack model.
We propose a new scheme based on a lookup table pool and a key guidance implementation as a more efficient approach to utilizing lookup tables, providing better security and practicality. Specifically, we introduce a new white-box cipher, RubikStone, which offers a range of variants from tens of kilobytes to infinite size. For the first time, we prove that all variants of RubikStone can provide strong space hardness under an adaptively chosen-space attack model. Additionally, we present a specific key guidance application for cloud-based DRM scenarios. Based on our proposed RubikStone variants, the key guidance applications can achieve at least overall $(0.950T, 128)$-space hardness.
Furthermore, we introduce a novel property, the table consumption rate, for evaluating the durability of a specific white-box cryptographic implementation. In our evaluation, all instantiations of RubikStone exhibit the lowest table consumption rate among algorithms with equally sized lookup tables. Besides, we conduct a comprehensive statistical analysis of the operations in all existing white-box ciphers. Our findings indicate that RubikStone remains highly competitive in terms of computational efficiency despite offering unprecedented levels of security.
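For reference, the $(M, Z)$-space hardness notion that the $(0.950T, 128)$ figure above instantiates (stated here in the style of Bogdanov and Isobe, with $T$ the total table size; the exact formalization in the paper may differ) says that an adversary holding at most $M$ bits of the tables predicts encryptions with probability at most $2^{-Z}$:

```latex
\[
  \Pr\Bigl[\mathcal{A}\bigl(\mathsf{tables}_{\le M},\, P\bigr) = E_K(P)\Bigr]
  \;\le\; 2^{-Z},
  \qquad P \xleftarrow{\$} \{0,1\}^{n},
\]
% where tables_{<=M} denotes any M bits of the lookup tables chosen by the
% adversary, E_K is the white-box cipher, and n is its block length.
```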
Shaoqiang Wu, Ding Wang
ePrint Report
A password-hardening service (PH) is a crypto service that armors canonical password authentication with an external key against offline password guessing in case the password file is somehow compromised/leaked. The game-based formal treatment of PH was introduced by Everspaugh et al. at USENIX Security'15. Their work was followed by the efficiency-enhancing PO-COM (CCS'16), the security-patching Phoenix (USENIX Security'17), and the functionality-refining PW-Hero (SRDS'22). However, the issue of single points of failure (SPF) inherently impairs the availability of these PH schemes. More specifically, the failure of the single PH server responsible for crypto computation services suspends password authentication for all users.
We propose the notion of reliable PH, which improves the availability of PH by eliminating SPF. We present a modular PH construction, TF-PH, essentially a generic compiler that can transform any PH protocol into a reliable one without SPF by introducing threshold failover. In particular, we propose a concrete reliable PH protocol, called TF-RePhoenix, a simple and efficient construction with RePhoenix (which improves over Phoenix at USENIX Security'17) as the PH module. Security is proven within the universally composable (UC) security framework and the random oracle model (ROM), where we, for the first time, formalize the ideal UC functionalities of PH and reliable PH. We comparatively evaluate the efficiency of our TF-PH against the canonical threshold method (taking as an example the threshold solution introduced by Brost et al. at CCS'20 in a PH-derived domain -- password-hardened encryption). Results show that our threshold-failover-based solution to SPF provides optimal performance and achieves failover in a millisecond.
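The core PH idea, independent of any specific scheme above, is that the web server alone cannot verify a password: a separately keyed crypto service must finish the computation. A minimal sketch follows (real PH protocols use oblivious PRFs and rate limiting so the service learns nothing; this plain HMAC version is only illustrative):

```python
# Minimal password-hardening flow with a separately keyed service.
import hashlib
import hmac
import os

# --- crypto service: holds the external key, runs on a separate machine ---
SERVICE_KEY = os.urandom(32)
def ph_service(salted_hash: bytes) -> bytes:
    return hmac.new(SERVICE_KEY, salted_hash, hashlib.sha256).digest()

# --- web server: stores salt + hardened verifier, never the external key ---
def enroll(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    inner = hashlib.sha256(salt + password.encode()).digest()
    return salt, ph_service(inner)

def verify(password: str, salt: bytes, verifier: bytes) -> bool:
    inner = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(ph_service(inner), verifier)

salt, ver = enroll("correct horse battery staple")
assert verify("correct horse battery staple", salt, ver)
assert not verify("guess1", salt, ver)
# The SPF problem the paper addresses: if ph_service is down, every login
# fails; TF-PH replaces this single service with a threshold of servers.
```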
25 November 2024
Mia Filić, Keran Kocher, Ella Kummer, Anupama Unnikrishnan
ePrint Report
Probabilistic data structures (PDS) are compact representations of high-volume data that provide approximate answers to queries about the data. They are commonplace in today's computing systems, finding use in databases, networking and more. While PDS are designed to perform well under benign inputs, they are frequently used in applications where inputs may be adversarially chosen. This may lead to a violation of their expected behaviour, for example, an increase in the false-positive rate.
In this work, we focus on PDS that handle approximate membership queries (AMQ). We consider adversarial users with the capability of making adaptive insertions, deletions and membership queries to AMQ-PDS, and analyse the performance of AMQ-PDS under such adversarial inputs.
We argue that deletions significantly empower adversaries, presenting a challenge to enforcing honest behaviour when compared to insertion-only AMQ-PDS. To address this, we introduce a new concept of an honest setting for AMQ-PDS with deletions. By leveraging simulation-based security definitions, we then quantify how much harm can be caused by adversarial users to the functionality of AMQ-PDS. Our resulting bounds only require calculating the maximal false positive probability and insertion failure probability achievable in our novel honest setting.
We apply our results to Cuckoo filters and Counting filters. We show how to protect these AMQ-PDS at low cost, by replacing or composing the hash functions with keyed pseudorandom functions in their construction. This strategy involves establishing practical bounds for the probabilities mentioned above. Using our new techniques, we demonstrate that achieving security against adversarial users making both insertions and deletions remains practical.
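The protection strategy can be sketched on a tiny counting Bloom filter: replacing the public hash functions with a keyed PRF (HMAC here; sizes and the key are toy values) denies adversarial users the ability to predict which counters an operation touches:

```python
# Counting filter with PRF-derived indices instead of public hashes.
import hashlib
import hmac

M, K = 1024, 4                       # counter count and hash count (toy)
KEY = b"secret-prf-key"              # assumed hidden from adversarial users
counters = [0] * M

def indices(item: bytes) -> list[int]:
    out = []
    for i in range(K):               # domain-separate the K index functions
        tag = hmac.new(KEY, bytes([i]) + item, hashlib.sha256).digest()
        out.append(int.from_bytes(tag[:4], "big") % M)
    return out

def insert(item: bytes) -> None:
    for ix in indices(item):
        counters[ix] += 1

def delete(item: bytes) -> None:
    for ix in indices(item):         # only meaningful for inserted items
        counters[ix] -= 1

def query(item: bytes) -> bool:
    return all(counters[ix] > 0 for ix in indices(item))

insert(b"alice")
assert query(b"alice")
delete(b"alice")
assert not query(b"alice")
```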
Gal Cohen, Itamar Levy
ePrint Report
Interconnected devices enhance daily life but introduce security vulnerabilities, as new technologies enable malicious activities such as information theft. This article combines radio frequency (RF) side-channel attacks with software Trojans to create a hard-to-detect, stealthy method for extracting kilobytes of secret information per millisecond over record distances with a single measurement in the RF spectrum. The technique exploits Trojan-induced electrical disturbances in RF components originating from peripherals, buses, memories and CPUs to achieve high-SNR data leakage schemes. Experimental results show negligible acquisition time and stealth. The research introduces optimized modulation and demodulation schemes and specialized synchronization symbols to minimize error rates and maximize data rates. It highlights the need for advanced detection and defense mechanisms to ensure the security and privacy of interconnected devices.