International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

22 November 2021

Liang Zhao, Ze Chen, Liqun Chen, Xinyi Huang
ePrint Report
In this paper we present an optimized variant of Gentry, Halevi and Vaikuntanathan (GHV)'s Homomorphic Encryption (HE) scheme (EUROCRYPT'10). Our scheme is appreciably more efficient than the original GHV scheme while retaining its merits, namely the (multi-key) homomorphic property and the matrix encryption property. We first measure the density of the trapdoor pairs created by Alwen and Peikert's and by Micciancio and Peikert's trapdoor generation algorithms, respectively, and use these measurements to precisely analyze the time and space complexity of the corresponding GHV instantiations. We then propose a generic GHV-type construction with several optimizations that improve the time and space efficiency over the original GHV scheme. In particular, our new scheme achieves asymptotically optimal time complexity and avoids generating and storing the inverse of the used trapdoor. Finally, we present an instantiation that, by using a new set of (lower) bound parameters, has smaller key and ciphertext sizes than the original GHV scheme.
Lorenzo Grassi, Dmitry Khovratovich, Sondre Rønjom, Markus Schofnegger
ePrint Report
Motivated by modern cryptographic use cases such as multi-party computation (MPC), homomorphic encryption (HE), and zero-knowledge (ZK) protocols, several symmetric schemes that are efficient in these scenarios have recently been proposed in the literature. Some of these schemes are instantiated with low-degree nonlinear functions, for example low-degree power maps (e.g., MiMC, HadesMiMC, Poseidon) or the Toffoli gate (e.g., Ciminion). Others (e.g., Rescue, Vision, Grendel) are instead instantiated via high-degree functions which are easy to evaluate in the target application. A recent example of the latter is the hash function Grendel, whose nonlinear layer is constructed using the Legendre symbol.

In this paper, we analyze high-degree functions such as the Legendre symbol or the modulo-2 operation as building blocks for the nonlinear layer of a cryptographic scheme over $(\mathbb{F}_p)^n$. Our focus is on security analysis rather than on efficiency in the mentioned use cases. For this purpose, we present several new invertible functions that make use of the Legendre symbol or of the modulo-2 operation.

Even though these functions often provide strong statistical properties and ensure a high degree after a few rounds, the main problem is their small number of possible outputs: only three for the Legendre symbol and only two for the modulo-2 operation. By guessing these outputs, it is possible to reduce the overall degree of the function significantly. We exploit this behavior by describing the first preimage attack on full Grendel, and we verify it in practice.
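To make the small output set concrete, here is a minimal Python sketch (ours, not the paper's) of the Legendre symbol computed via Euler's criterion; the prime p = 101 is an arbitrary example.

```python
# Euler's criterion: the Legendre symbol (a/p) equals a^((p-1)/2) mod p and
# takes only the values {-1, 0, 1} -- the small output set exploited above.

def legendre(a: int, p: int) -> int:
    """Legendre symbol of a modulo an odd prime p."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

p = 101
print({legendre(a, p) for a in range(p)})  # {0, 1, -1}: just three outputs
```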
Ingerid Fosli, Yuval Ishai, Victor I. Kolobov, Mary Wootters
ePrint Report
A homomorphic secret sharing (HSS) scheme is a secret sharing scheme that supports evaluating functions on shared secrets by means of a local mapping from input shares to output shares. We initiate the study of the download rate of HSS, namely, the achievable ratio between the length of the output shares and the output length when amortized over $\ell$ function evaluations. We obtain the following results.

* In the case of linear information-theoretic HSS schemes for degree-$d$ multivariate polynomials, we characterize the optimal download rate in terms of the optimal minimal distance of a linear code with related parameters. We further show that for sufficiently large $\ell$ (polynomial in all problem parameters), the optimal rate can be realized using Shamir's scheme, even with secrets over $\mathbb{F}_2$.

* We present a general rate-amplification technique for HSS that improves the download rate at the cost of requiring more shares. As a corollary, we get high-rate variants of computationally secure HSS schemes and efficient private information retrieval protocols from the literature.

* We show that, in some cases, one can beat the best download rate of linear HSS by allowing nonlinear output reconstruction and $2^{-\Omega(\ell)}$ error probability.
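For intuition, the local-evaluation property at the core of these results can be sketched with degree-1 Shamir sharing over a toy prime field: each party applies a local mapping (here, adding its two input shares), and the sum of the secrets is reconstructed from the output shares alone. This is a minimal illustration, not the paper's construction; the modulus and parameters are arbitrary.

```python
import random

P = 2**61 - 1  # a Mersenne prime serving as a toy field modulus

def share(secret, t, n):
    """Degree-t Shamir sharing of `secret` among parties 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

a, b, t, n = 12345, 67890, 1, 3
sa, sb = share(a, t, n), share(b, t, n)
out = [(x, (ya + yb) % P) for (x, ya), (_, yb) in zip(sa, sb)]  # local mapping
assert reconstruct(out[:t + 1]) == (a + b) % P                  # t+1 output shares
```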
Jonathan Prokos, Tushar M. Jois, Neil Fendley, Roei Schuster, Matthew Green, Eran Tromer, Yinzhi Cao
ePrint Report
Many online communications systems use perceptual hash matching systems to detect illicit files in user content. These systems employ specialized perceptual hash functions such as Microsoft's PhotoDNA or Facebook's PDQ to produce a compact digest of an image file that can be approximately compared to a database of known illicit-content digests. Recently, several proposals have suggested that hash-based matching systems be incorporated into client-side and end-to-end encrypted (E2EE) systems: in these designs, files that register as illicit content will be reported to the provider, while the remaining content will be sent confidentially. By using perceptual hashing to determine confidentiality guarantees, this new setting significantly changes the function of existing perceptual hashing -- thus motivating the need to evaluate these functions from an adversarial perspective, using their perceptual capabilities against them. For example, an attacker may attempt to trigger a match on innocuous, but politically-charged, content in an attempt to stifle speech.

In this work we develop threat models for perceptual hashing algorithms in an adversarial setting, and present attacks against the two most widely deployed algorithms: PhotoDNA and PDQ. Our results show that it is possible to efficiently generate targeted second-preimage attacks in which an attacker creates a variant of some source image that matches some target digest. As a complement to this main result, we further investigate the production of images that facilitate detection-avoidance attacks, continuing a recent investigation of Jain et al. Our work shows that existing perceptual hash functions are likely insufficiently robust to survive attacks in this new setting.
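As background, the matching step such systems rely on can be sketched as Hamming-distance comparison against a digest database under a fixed threshold; the digests and threshold below are illustrative stand-ins, not PhotoDNA or PDQ values.

```python
def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def matches(digest: int, database: list[int], threshold: int = 31) -> bool:
    """Flag `digest` if it lies within `threshold` bits of any known digest."""
    return any(hamming(digest, d) <= threshold for d in database)

# A targeted second preimage in this model: an attacker-crafted digest a few
# bit flips away from a database entry still registers as a match.
db = [0xDEADBEEFCAFEBABE]
crafted = db[0] ^ 0b1011  # 3 flipped bits
print(matches(crafted, db))  # True
```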
Alex Ozdemir, Dan Boneh
ePrint Report
A zk-SNARK is a powerful cryptographic primitive that provides a succinct and efficiently checkable argument that the prover has a witness to a public NP statement, without revealing the witness. However, in their native form, zk-SNARKs only apply to a secret witness held by a single party. In practice, a collection of parties often need to prove a statement where the secret witness is distributed or shared among them.

We implement and experiment with *collaborative zk-SNARKs*: proofs over the secrets of multiple, mutually distrusting parties. We construct these by lifting conventional zk-SNARKs into secure protocols among $N$ provers to jointly produce a single proof over the distributed witness. We optimize the proof generation algorithm in pairing-based zk-SNARKs so that algebraic techniques for multiparty computation (MPC) yield efficient proof generation protocols. For some zk-SNARKs, optimization is more challenging. This suggests MPC "friendliness" as an additional criterion for evaluating zk-SNARKs.

We implement 3 collaborative proofs and evaluate the concrete cost of proof generation. We find that over a good network, security against a malicious minority of provers can be achieved with *approximately the same runtime* as a single prover. Security against $N-1$ malicious provers requires only a $2\times$ slowdown. This efficiency is unusual: most computations slow down by several orders of magnitude when securely distributed. It is also significant: most server-side applications that can tolerate the cost of a single-prover proof should also be able to tolerate the cost of a collaborative proof.
Hosein Hadipour, Maria Eichlseder
ePrint Report
The guess-and-determine technique is one of the most widely used techniques in cryptanalysis to recover unknown variables in a given system of relations. In such attacks, a subset of the unknown variables is guessed such that the remaining unknowns can be deduced using the information from the guessed variables and the given relations. This idea can be applied in various areas of cryptanalysis, such as finding the internal state of stream ciphers when a sufficient amount of output data is available, or recovering the internal state and the secret key of a block cipher from very few known plaintexts. Another important application is the key-bridging technique in key-recovery attacks on block ciphers, where the attacker aims to find the minimum number of required sub-key guesses to deduce all involved sub-keys via the key schedule. Since the complexity of the guess-and-determine technique directly depends on the number of guessed variables, it is essential to find the smallest possible guess basis, i.e., the subset of guessed variables from which the remaining variables can be deduced. In this paper, we present Autoguess, an easy-to-use general tool to search for a minimal guess basis. We propose several new modeling techniques to harness SAT/SMT, MILP, and Gröbner basis solvers. We demonstrate their usefulness in guess-and-determine attacks on stream ciphers and block ciphers, as well as in finding key-bridges in key-recovery attacks on block ciphers. Moreover, by integrating our CP models for the key-bridging technique into previous CP-based frameworks for finding distinguishers, we propose a unified and general CP model to search for key-recovery-friendly distinguishers that supports both linear and nonlinear key schedules.
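For intuition, the deduction step underlying guess-and-determine can be sketched as fixed-point propagation over toy relations: once all but one variable of a relation are known, the last is determined. Autoguess searches for a smallest guess basis; the sketch below (our illustration, with made-up relations) only checks that a candidate basis suffices.

```python
def closure(known: set[str], relations: list[set[str]]) -> set[str]:
    """Propagate deductions until a fixed point is reached."""
    known = set(known)
    changed = True
    while changed:
        changed = False
        for rel in relations:
            unknown = rel - known
            if len(unknown) == 1:   # all but one variable known: deduce it
                known |= unknown
                changed = True
    return known

relations = [{"a", "b", "c"}, {"c", "d"}, {"b", "d", "e"}]
basis = {"a", "b"}
print(closure(basis, relations) == {"a", "b", "c", "d", "e"})  # True
```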
Kaizhan Lin, Weize Wang, Lin Wang, Chang-An Zhao
ePrint Report
Currently, public-key compression of supersingular isogeny Diffie-Hellman (SIDH) and its variant, supersingular isogeny key encapsulation (SIKE), involves pairing computation and discrete logarithm computation. For efficiency, discrete logarithm computation requires relatively large storage of precomputed values. In this paper, we propose novel algorithms to compute discrete logarithms that allow us to trade memory for efficiency. Our implementation shows that the efficiency of our algorithms is close to that of the previous work, and that our algorithms perform better in some special cases.
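As background, the discrete logarithms arising in SIDH/SIKE compression live in groups of smooth order, where Pohlig-Hellman recovers the exponent digit by digit and precomputed tables drive the memory/efficiency trade-off discussed above. A minimal sketch for a toy group of order 2^8 (our illustration, not the paper's algorithm):

```python
p, g, e = 257, 3, 8   # 3 generates the full multiplicative group mod 257

def dlog2e(h: int) -> int:
    """Solve g^x = h mod p for g of order 2^e, one bit at a time."""
    pows_inv = [pow(g, -(1 << i), p) for i in range(e)]  # precomputed table
    x, c = 0, h
    for i in range(e):
        # c = g^(still-unrecovered part of x); test whether bit i is set
        if pow(c, 1 << (e - 1 - i), p) != 1:
            x |= 1 << i
            c = c * pows_inv[i] % p
    return x

x = 173
assert dlog2e(pow(g, x, p)) == x
```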
Kemal Derya, Ahmet Can Mert, Erdinç Öztürk, Erkay Savaş
ePrint Report
In this paper, we introduce a configurable hardware architecture that can be used to generate unified and parametric NTT-based polynomial multipliers supporting a wide range of parameters of lattice-based cryptographic schemes proposed for post-quantum cryptography. Both NTT and inverse NTT operations can be performed using the unified butterfly unit of our architecture, which constitutes the core building block in NTT operations. The number of these units plays an essential role in achieving the performance goals of a specific application area or platform. To this end, the architecture takes the number of butterfly units as input and generates an efficient NTT-based polynomial multiplier to achieve the desired throughput and area requirements. More specifically, the proposed hardware architecture provides run-time configurability for the scheme parameters and compile-time configurability for throughput and area requirements. To the best of our knowledge, this work presents the first architecture with both run-time and compile-time configurability for NTT-based polynomial multiplication. The implementation results indicate that the advanced configurability has a negligible impact on the time and area of the proposed architecture, and that its performance is on par with the state-of-the-art implementations in the literature, if not better. The proposed architecture comprises various sub-blocks such as the modular multiplier and butterfly units, each of which can be of interest on its own for accelerating lattice-based cryptography. Thus, we provide the design rationale of each sub-block and compare it with those in the literature, including our earlier works, in terms of configurability and performance.
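For reference, the butterfly at the core of any NTT-based multiplier can be sketched in software over a toy field (n = 8, q = 17, where omega = 2 has order 8); a hardware architecture like the one above instantiates many such butterflies in parallel. This is a generic radix-2 sketch, not the paper's design.

```python
q, n, omega = 17, 8, 2

def ntt(a: list[int], w: int = omega) -> list[int]:
    """Recursive radix-2 NTT; each loop iteration is one butterfly."""
    if len(a) == 1:
        return a[:]
    even, odd = ntt(a[0::2], w * w % q), ntt(a[1::2], w * w % q)
    half, t, out = len(a) // 2, 1, [0] * len(a)
    for i in range(half):
        out[i] = (even[i] + t * odd[i]) % q         # butterfly, upper output
        out[i + half] = (even[i] - t * odd[i]) % q  # butterfly, lower output
        t = t * w % q
    return out

def intt(a: list[int]) -> list[int]:
    """Inverse NTT: run the NTT with omega^-1 and scale by n^-1."""
    n_inv = pow(n, -1, q)
    return [x * n_inv % q for x in ntt(a, pow(omega, -1, q))]

# polynomial product mod x^8 - 1 via pointwise multiplication in NTT domain
f = [1, 2, 3, 4, 0, 0, 0, 0]
g = [5, 6, 7, 0, 0, 0, 0, 0]
print(intt([x * y % q for x, y in zip(ntt(f), ntt(g))]))  # [5, 16, 0, 1, 11, 11, 0, 0]
```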
Arush Chhatrapati, Susan Hohenberger, James Trombo, Satyanarayana Vusirikala
ePrint Report
In a broadcast encryption system, a sender can encrypt a message for any subset of users who are listening on a broadcast channel. The goal of broadcast encryption is to leverage the broadcasting structure to achieve better efficiency than individually encrypting to each user; in particular, reducing the bandwidth (i.e., ciphertext size) required to transmit securely, although other factors such as public and private key size and the time to execute setup, encryption and decryption are also important. In this work, we conduct a detailed performance evaluation of eleven public-key, pairing-based broadcast encryption schemes offering different features and security guarantees, including public-key, identity-based, traitor-tracing, private linear and augmented systems. We implemented each system using the MCL Java pairings library, reworking some of the constructions to achieve better efficiency. We tested their performance on a variety of parameter choices, resulting in hundreds of data points to compare, with some interesting results from the classic Boneh-Gentry-Waters scheme (CRYPTO 2005) to Zhandry's recent generalized scheme (CRYPTO 2020), and more. We combine this performance data and knowledge of the systems' features with data we collected on practical usage scenarios to determine which schemes are likely to perform best for certain applications, such as video streaming services, online gaming, live sports betting and smartphone streaming. This work can inform both practitioners and future cryptographic designs in this area.
Melissa Chase, Sanjam Garg, Mohammad Hajiabadi, Jialin Li, Peihan Miao
ePrint Report
Recent new constructions of rate-1 OT [Döttling, Garg, Ishai, Malavolta, Mour, and Ostrovsky, CRYPTO 2019] have brought this primitive into the spotlight, and the techniques have led to new feasibility results for private information retrieval and homomorphic encryption for branching programs. The receiver communication of this construction consists of a quadratic (in the sender's input size) number of group elements for a single instance of rate-1 OT. Recently, [Garg, Hajiabadi, Ostrovsky, TCC 2020] improved the receiver communication to a linear number of group elements for a single string-OT. However, most applications of rate-1 OT require executing it multiple times, resulting in large communication costs for the receiver.

In this work, we introduce a new technique for amortizing the cost of multiple rate-1 OTs. Specifically, based on standard pairing assumptions, we obtain a two-message rate-1 OT protocol for which the amortized cost per string-OT is asymptotically reduced to only four group elements. Our results lead to significant communication improvements in PSI and PIR, special cases of SFE for branching programs.

- PIR: We obtain a rate-1 PIR scheme with client communication cost of $O(\lambda\cdot\log N)$ group elements for security parameter $\lambda$ and database size $N$. Notably, after a one-time setup (or one PIR instance), any following PIR instance requires communicating only $O(\log N)$ group elements.

- PSI with unbalanced inputs: We apply our techniques to private set intersection with unbalanced set sizes (where the receiver has a smaller set) and achieve receiver communication of $O((m+\lambda) \log N)$ group elements, where $m, N$ are the sizes of the receiver and sender sets, respectively. Similarly, after a one-time setup (or one PSI instance), any following PSI instance requires communicating only $O(m \cdot \log N)$ group elements. All previous sublinear-communication non-FHE-based PSI protocols for the above unbalanced setting were also based on rate-1 OT, but incurred communication of at least $O(\lambda^2 m \log N)$ group elements.
Esra Günsay, Cansu Betin Onur, Murat Cenk
ePrint Report
Zero-knowledge protocols (ZKPs) allow a party to prove the validity of secret information to another party without revealing anything about the secret itself. Appropriate, effective, and efficient use of cryptographic ZKPs contributes to many novel advances in real-world privacy-preserving frameworks. One of the most important types of cryptographic ZKPs is the zero-knowledge range proof (ZKRP). Such proofs have a wide range of applications, such as anonymous credentials, cryptocurrencies, and e-cash schemes. In many ZKRPs the secret is represented in binary and then committed to via a suitable commitment scheme. Though different base approaches exist for bilinear pairing-based and RSA-like constructions, to our knowledge there is no study investigating discrete logarithm-based constructions. In this study, we focus on a range proof construction produced by Mao in 1998. This protocol contains a bit commitment scheme with an OR-construction. We investigate the effect of different base approaches on Mao's range proof and compare their efficiency. To this end, we have extended Mao's range proof to base 3 with a modified OR-proof. We derive the number of computations in modular exponentiations and the cost of the number of integers exchanged between parties. Then, we generalize these costs to a base-u construction. We mainly show that, compared with other base approaches, the base-3 approach consistently provides approximately 12% savings in computation cost and 10% savings in communication cost. We implemented the base-3 protocol and demonstrated that the results are consistent with our theoretical computations.
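A rough digits-times-branches count already hints at why base 3 wins, as sketched below; the unit costs are placeholders, not the paper's exact exponentiation and communication counts.

```python
import math

def relative_cost(k: int, u: int) -> int:
    """Toy cost of proving a value in [0, 2^k): digits x OR-proof branches."""
    digits = math.ceil(k / math.log2(u))  # base-u digits covering the range
    return digits * u                     # one u-way OR-proof per digit

for u in (2, 3, 4, 5):
    print(f"base {u}: {relative_cost(64, u)}")
# base 2: 128, base 3: 123, base 4: 128, base 5: 140 -> base 3 is cheapest
```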
Andrea Caforio, Subhadeep Banik, Yosuke Todo, Willi Meier, Takanori Isobe, Fukang Liu, Bin Zhang
ePrint Report
Energy efficiency is critical in battery-driven devices, and designing energy-optimal symmetric-key ciphers is one of the goals for the use of ciphers in such environments. In the paper by Banik et al. (IACR ToSC 2018), stream ciphers were identified as ideal candidates for low-energy solutions. One of the main conclusions of this paper was that Trivium, when implemented in an unrolled fashion, was by far the most energy-efficient way of encrypting larger quantities of data. In fact, it was shown that as soon as the number of data bits to be encrypted exceeded 320, Trivium consumed the least amount of energy on STM 90 nm ASIC circuits and outperformed the Midori family of block ciphers even in the least energy-hungry ECB mode (Midori was designed specifically for energy efficiency).

In this work, we devise the first heuristic energy model in the realm of stream ciphers that links the underlying algebraic topology of the state update function to the consumptive behaviour. The model is then used to derive a metric that exhibits a heavy negative correlation with the energy consumption of a broad range of stream cipher architectures, i.e., the families of Trivium-like, Grain-like and Subterranean-like constructions. We demonstrate that this correlation is especially pronounced for Trivium-like ciphers, which leads us to establish a link between energy consumption and security guarantees that makes it possible to find several alternative energy-optimal versions of Trivium that meet the requirements but consume less energy. We present two such designs, Trivium-LE(F) and Trivium-LE(S), which consume around 15% and 25% less energy, respectively, making them the most energy-efficient encryption primitives to date. They inherit the same security level as Trivium, i.e., 80-bit security. We further present Triad-LE as an energy-efficient variant satisfying a higher security level. The simplicity and wide applicability of our model have direct consequences for the design of future hardware-targeted stream ciphers, as for the first time it is possible to optimize for energy during the design phase. Moreover, we extend the reach of our model beyond plain encryption primitives and propose a novel energy-efficient message authentication code, Trivium-LE-MAC.
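For reference, the standard Trivium round (per the eSTREAM specification, rewritten here with 0-based indices) shows the sparse algebraic structure that the energy model ties to consumption: three XOR taps and a single AND gate per register. Key/IV loading and the warm-up rounds are omitted in this sketch.

```python
def trivium_round(s: list[int]) -> tuple[list[int], int]:
    """One round on the 288-bit state; returns (new_state, keystream_bit)."""
    t1 = s[65] ^ s[92]
    t2 = s[161] ^ s[176]
    t3 = s[242] ^ s[287]
    z = t1 ^ t2 ^ t3                  # keystream bit
    t1 ^= (s[90] & s[91]) ^ s[170]    # the only nonlinear gates:
    t2 ^= (s[174] & s[175]) ^ s[263]  # one AND per register
    t3 ^= (s[285] & s[286]) ^ s[68]
    s = [t3] + s[:92] + [t1] + s[93:176] + [t2] + s[177:287]
    return s, z

state = [0] * 285 + [1, 1, 1]    # spec constants only; no key/IV loaded here
state, z = trivium_round(state)
```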
Ittay Eyal
ePrint Report
The security of cryptocurrency and decentralized blockchain-maintained assets relies on their owners safeguarding secrets, typically cryptographic keys. This applies equally to individuals keeping daily-spending amounts and to large asset management companies. Loss of keys and attackers gaining control of keys have resulted in numerous losses of funds.

The security of individual keys has been widely studied, with practical solutions available, from mnemonic phrases to dedicated hardware. There are also techniques for securing funds by requiring combinations of multiple keys. However, to the best of our knowledge, a crucial question was never addressed: How is wallet security affected by the number of keys, their types, and how they are combined? This is the focus of this work.

We present a model where each key has certain probabilities for being safe, lost, leaked, or stolen (available only to an attacker). The number of possible wallets for a given number of keys is the Dedekind number, prohibiting an exhaustive search with many keys. Nonetheless, we bound optimal-wallet failure probabilities with an evolutionary algorithm.

We evaluate the security (complement of failure probability) of wallets based on the number and types of keys used. Our analysis covers a wide range of settings and reveals several surprises. The general trend is that failure probability drops exponentially with the number of keys, but it depends strongly on their parity. In many cases, but not always, heterogeneous keys (not all with the same fault probabilities) allow for better wallets than homogeneous keys. Nonetheless, in the case of 3 keys, the common practice of requiring any pair is optimal in many settings.

Our formulation of the problem and initial results reveal several open questions, from user studies of key fault probabilities to finding optimal wallets with very large numbers of keys. But they also have an immediate practical outcome, informing cryptocurrency users on optimal wallet design.
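Under one simplified reading of the model, a wallet's failure probability can be evaluated by direct enumeration, as in the sketch below for the common 2-of-3 wallet; the per-key probabilities are illustrative, not taken from the paper.

```python
from itertools import product

# per-key outcome probabilities: safe, lost, leaked, stolen (illustrative)
P_KEY = {"safe": 0.90, "lost": 0.04, "leaked": 0.03, "stolen": 0.03}
OWNER_HAS = {"safe", "leaked"}       # key still usable by the owner
ATTACKER_HAS = {"leaked", "stolen"}  # key usable by the attacker

def two_of_three(keys: set[int]) -> bool:
    return len(keys) >= 2

def failure_probability(policy, n: int = 3) -> float:
    """Wallet fails if the owner cannot spend or the attacker can."""
    fail = 0.0
    for outcome in product(P_KEY, repeat=n):
        pr = 1.0
        for state in outcome:
            pr *= P_KEY[state]
        owner = {i for i, st in enumerate(outcome) if st in OWNER_HAS}
        attacker = {i for i, st in enumerate(outcome) if st in ATTACKER_HAS}
        if not policy(owner) or policy(attacker):
            fail += pr
    return fail

print(f"2-of-3 failure probability: {failure_probability(two_of_three):.4f}")
```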
Nicolas Bruneau, Charles Christen, Jean-Luc Danger, Adrien Facon, Sylvain Guilley
ePrint Report
The masking countermeasure is implemented to thwart side-channel attacks. The maturity of high-order masking schemes has reached the level where the concepts are sound and proven. For instance, Rivain and Prouff proposed a full-fledged AES at CHES 2010. Some non-trivial fixes regarding refresh functions were needed though. Now, industry is adopting such solutions, and for the sake of both quality and certification requirements, masked cryptographic code shall be checked for correctness using the same model as that of the theoretical protection rationale (for instance, the probing leakage model).

Seminal work was initiated by Barthe et al. at EUROCRYPT 2015 for automated verification at higher orders on concrete implementations. In this paper, we build on this work to actually perform verification from within a compiler, so as to enable timely feedback to the developer. Precisely, our methodology first provides the actual security order of the code at the intermediate representation (IR) level, thereby identifying possible flaws (owing either to source code errors or to compiler optimizations). Second, it allows for an exploitability analysis of the analyzed IR code. In this respect, we formally handle all the symbolic expressions in the static single assignment (SSA) representation to build the optimal distinguisher function. This enables evaluating the most powerful attack, which is a function not only of the masking order $d$, but also of the number of leaking samples and of the expressions involved (e.g., linear vs. non-linear leakages).

This scheme makes it possible to evaluate both the correctness of masked cryptographic code and its actual security in terms of the number of traces in a given deployment context (characterized by a leakage model of the target CPU and the signal-to-noise ratio of the platform).
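For intuition, first-order verification in the probing model can be sketched as checking, for every probed intermediate, that its distribution over the random masks is identical for every value of the secret. The gadget below is a toy example of ours, not the paper's analyzed IR.

```python
from itertools import product
from collections import Counter

def first_order_secure(intermediate, n_masks: int) -> bool:
    """intermediate(secret, masks) -> value; check independence from secret."""
    dists = []
    for secret in (0, 1):
        dists.append(Counter(intermediate(secret, masks)
                             for masks in product((0, 1), repeat=n_masks)))
    return dists[0] == dists[1]

# Boolean masking of bit x with one fresh mask m: shares (x ^ m, m)
print(first_order_secure(lambda x, m: x ^ m[0], 1))           # True: share is uniform
print(first_order_secure(lambda x, m: m[0], 1))               # True: mask is uniform
print(first_order_secure(lambda x, m: (x ^ m[0]) ^ m[0], 1))  # False: recombination leaks x
```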
Jean-Pierre Thibault, Colin O’Flynn, Alex Dewar
ePrint Report
Power analysis attacks on ECC have been presented since almost the very beginning of DPA itself, even before the standardization of AES. Given that power analysis attacks against AES are well known and have a large body of practical artifacts to demonstrate attacks on both software and hardware implementations, it is surprising that these artifacts are generally lacking for ECC. In this work we begin to remedy this by providing a complete open-source ECDSA attack artifact, based on a high-quality hardware ECDSA core from the CrypTech project. We demonstrate an effective power analysis attack against an FPGA implementation of this core.

As many recent secure boot solutions are using ECDSA, efforts into building open-source artifacts to evaluate attacks on ECDSA are highly relevant to ongoing academic and industrial research programs. To demonstrate the value of this evaluation platform, we implement several countermeasures and show that evaluating leakage on hardware is critical to understand the effectiveness of a countermeasure.
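As background on why leaking a nonce is fatal, the ECDSA signing equation alone shows how one signature with a known nonce reveals the private key; the sketch below exercises only that algebra and skips the curve arithmetic (r would normally be the x-coordinate of kG reduced mod the group order).

```python
import secrets

# group order of secp256k1
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

d = secrets.randbelow(N - 1) + 1     # private key
k = secrets.randbelow(N - 1) + 1     # nonce (here: assumed recovered via SCA)
h = secrets.randbelow(N)             # message hash
r = secrets.randbelow(N - 1) + 1     # stand-in for x(kG) mod N

s = pow(k, -1, N) * (h + r * d) % N       # ECDSA signing equation
d_rec = (s * k - h) * pow(r, -1, N) % N   # key recovery from the known nonce
assert d_rec == d
```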

20 November 2021

David Heath, Vladimir Kolesnikov, Rafail Ostrovsky
ePrint Report
Garbled RAM (GRAM) is a powerful technique introduced by Lu and Ostrovsky that equips Garbled Circuit (GC) with a sublinear-cost RAM without adding rounds of interaction. While multiple GRAM constructions are known, none are suitable for practice, due to costs that have high constants and poor scaling.

We present the first GRAM suitable for practice. For computational security parameter $\kappa$ and for a size-$n$ RAM that stores blocks of size $w = \Omega(\log^2 n)$ bits, our GRAM incurs amortized $O(w \cdot \log^2 n \cdot \kappa)$ communication and computation per access. We evaluate the concrete cost of our GRAM; our approach outperforms trivial linear-scan-based RAM for as few as $512$ $128$-bit elements.
Valence Cristiani, Maxime Lecomte, Philippe Maurine
ePrint Report
Recent works showed how Mutual Information Neural Estimation (MINE) could be applied to side-channel analysis in order to evaluate the amount of leakage of an electronic device. One of the main advantages of MINE over classical estimation techniques is that it enables computing mutual information between high-dimensional traces and a secret, which is relevant for leakage assessment. However, optimally exploiting this information in an attack context in order to retrieve a secret remains a non-trivial task, especially when a profiling phase of the target is not allowed. Within this context, the purpose of this paper is to address this problem based on a simple idea: there are multiple leakage sources in side-channel traces, and optimal attacks should necessarily exploit most/all of them. To this aim, a new mathematical framework, designed to bridge classical Mutual Information Analysis (MIA) and the multidimensional aspect of neural-based estimators, is proposed. One of the goals is to provide rigorous proofs consolidating the mathematical basis behind MIA, thus alleviating inconsistencies found in the state of the art. This framework allows us to derive a new attack called Neural Estimated Mutual Information Analysis (NEMIA). To the best of our knowledge, it is the first unsupervised attack able to benefit from both the power of deep learning techniques and the valuable theoretical properties of MI. Simulations and experiments show that NEMIA outperforms classical side-channel attacks, especially in low-information contexts.
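For contrast with the neural estimator, classical MIA can be sketched in a few lines: score each key guess by a histogram-based MI estimate between predicted and observed leakage, and take the maximizer. The 4-bit PRESENT S-box and noiseless Hamming-weight model below are toy stand-ins for real traces.

```python
import math
from collections import Counter

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT
hw = lambda v: bin(v).count("1")

key = 9
plaintexts = list(range(16))
observed = [hw(SBOX[p ^ key]) for p in plaintexts]  # noiseless HW leakage

def mutual_information(xs, ys):
    """Plug-in MI estimate from joint and marginal histograms."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

scores = {g: mutual_information([hw(SBOX[p ^ g]) for p in plaintexts], observed)
          for g in range(16)}
print("best key guess:", max(scores, key=scores.get))  # 9
```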
Weikeng Chen, Katerina Sotiraki, Ian Chang, Murat Kantarcioglu, Raluca Ada Popa
ePrint Report
Though maliciously secure multiparty computation (SMPC) ensures confidentiality and integrity of the computation against malicious parties, such parties can still provide malformed inputs. As a result, when using SMPC for collaborative computation, inputs can be manipulated to perform biasing and poisoning attacks. Parties may defend against many of these attacks by performing statistical tests over one another's input before the actual computation.

We present HOLMES, a platform for expressing and performing statistical tests securely and efficiently. Using HOLMES, parties can perform well-known statistical tests or define new tests. For efficiency, instead of performing such tests naively in SMPC, HOLMES blends together zero-knowledge proofs (ZK) and SMPC protocols, based on the insight that most computation for statistical tests is local to the party who provides the data.

High-dimensional tests are critical for detecting malicious inputs but are prohibitively expensive in secure computation. To reduce this cost, HOLMES provides a new secure dimensionality reduction procedure tailored for high-dimensional statistical tests. This new procedure leverages recent development of algebraic pseudorandom functions.

Our evaluation shows that, for a variety of statistical tests, HOLMES is 18x to 40x more efficient than naively implementing the statistical tests in a generic SMPC framework.
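For intuition, the kind of test a party runs over its own input can be as simple as a chi-square goodness-of-fit check, sketched below in the clear; the insight above is that, since the computation is local to the data owner, it can be proven in ZK instead of being evaluated naively inside SMPC. The bin count and data are illustrative.

```python
from collections import Counter

def chi_square_uniform(samples: list[int], n_bins: int) -> float:
    """Chi-square statistic against a uniform expectation over n_bins."""
    expected = len(samples) / n_bins
    counts = Counter(samples)
    return sum((counts.get(b, 0) - expected) ** 2 / expected
               for b in range(n_bins))

data = [i % 8 for i in range(800)]  # a well-balanced toy input
stat = chi_square_uniform(data, 8)
print(stat <= 14.07)  # True: below the 5% critical value at 7 degrees of freedom
```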
Nai-Hui Chia, Kai-Min Chung, Xiao Liang, Takashi Yamakawa
ePrint Report
From the minimal assumption of post-quantum semi-honest oblivious transfers, we build the first $\epsilon$-simulatable two-party computation (2PC) against quantum polynomial-time (QPT) adversaries that is both constant-round and black-box (for both the construction and security reduction). A recent work by Chia, Chung, Liu, and Yamakawa (FOCS'21) shows that post-quantum 2PC with standard simulation-based security is impossible in constant rounds, unless either $NP \subseteq BQP$ or relying on non-black-box simulation. The $\epsilon$-simulatability we target is a relaxation of the standard simulation-based security that allows for an arbitrarily small noticeable simulation error $\epsilon$. Moreover, when quantum communication is allowed, we can further weaken the assumption to post-quantum secure one-way functions (PQ-OWFs), while maintaining the constant-round and black-box property.

Our techniques also yield the following set of constant-round and black-box two-party protocols secure against QPT adversaries, only assuming black-box access to PQ-OWFs:

- extractable commitments for which the extractor is also an $\epsilon$-simulator;

- $\epsilon$-zero-knowledge commit-and-prove whose commit stage is extractable with $\epsilon$-simulation;

- $\epsilon$-simulatable coin-flipping;

- $\epsilon$-zero-knowledge arguments of knowledge for $NP$ for which the knowledge extractor is also an $\epsilon$-simulator;

- $\epsilon$-zero-knowledge arguments for $QMA$.

At the heart of the above results is a black-box extraction lemma showing how to efficiently extract secrets from QPT adversaries while disturbing their quantum state in a controllable manner, i.e., achieving $\epsilon$-simulatability of the after-extraction state of the adversary.
Ziaur Rahman, Ibrahim Khalil, Xun Yi, Mohammed Atiquzzaman
ePrint Report
The recent rise of cyber attacks, mainly targeting critical systems such as industrial, medical, or energy ecosystems, has caused intense concern and a search for security alternatives. Although the latest industrial infrastructures largely depend on AI-driven maintenance, predictions based on corrupted data inevitably result in loss of life and capital. Admittedly, an inadequate data-protection mechanism can readily challenge the security and reliability of the network. The shortcomings of conventional cloud-based or trusted certificate-driven techniques have motivated us to present a unique Blockchain-based framework for a secure and efficient Industry 4.0 system. The demonstrated framework obviates the need for a long-established certificate authority by enhancing a consortium Blockchain, which reduces data-processing delay and increases cost-effective throughput. Moreover, the distributed Industry 4.0 security model relies on cooperative trust rather than on a single party, avoiding the costs and threat of a single point of failure. The multi-signature technique of the proposed framework accomplishes multi-party authentication, confirming its applicability to real-time, collaborative cyber-physical systems.