International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

20 May 2024

Prabhanjan Ananth, Fatih Kaleoglu, Henry Yuen
ePrint Report
Unclonable cryptography is concerned with leveraging the no-cloning principle to build cryptographic primitives that are otherwise impossible to achieve classically. Understanding the feasibility of unclonable encryption, one of the key unclonable primitives, satisfying indistinguishability security in the plain model has been a major open question in the area. So far, the existing constructions of unclonable encryption are either in the quantum random oracle model or are based on new conjectures.

We present a new approach to unclonable encryption via a reduction to a novel question about nonlocal quantum state discrimination: how well can non-communicating -- but entangled -- players distinguish between different distributions over quantum states? We call this task simultaneous state indistinguishability. Our main technical result is showing that the players cannot distinguish between each player receiving independently-chosen Haar random states versus all players receiving the same Haar random state.

We leverage this result to present the first construction of unclonable encryption satisfying indistinguishability security, with quantum decryption keys, in the plain model. We also show further implications for single-decryptor encryption and leakage-resilient secret sharing.
Xinxin Fan, Veronika Kuchta, Francesco Sica, Lei Xu
ePrint Report
Multi-scalar multiplication (MSM) is one of the core components of many zero-knowledge proof systems, and a primary performance bottleneck for proof generation in these schemes. One major strategy to accelerate MSM is utilizing precomputation. Several algorithms (e.g., Pippenger and BGMW) and their variants have been proposed in this direction. In this paper, we revisit the recent precomputation-based MSM calculation method proposed by Luo, Fu and Gong at CHES 2023 and generalize their approach. In particular, we present a general construction of optimal buckets. This generalization leads to significant performance gains, which are verified by both theoretical analysis and experiments.
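
As a point of reference for the bucket structures being generalized, here is a minimal sketch of the classical Pippenger-style bucket method that such precomputation approaches build on. The group is modeled additively with plain integers so the logic is easy to check, and the window width c is an arbitrary illustrative choice; none of this reflects the paper's actual optimal-bucket construction.

```python
# Minimal sketch of Pippenger-style bucketing for MSM. A real
# implementation would use elliptic-curve points and a tuned window width.

def msm_pippenger(scalars, points, c=4):
    """Compute sum(s_i * P_i) by splitting scalars into c-bit windows."""
    max_bits = max(s.bit_length() for s in scalars)
    num_windows = (max_bits + c - 1) // c
    total = 0
    for w in reversed(range(num_windows)):       # most significant window first
        total <<= c                              # "double" c times
        buckets = [0] * (1 << c)
        for s, p in zip(scalars, points):
            digit = (s >> (w * c)) & ((1 << c) - 1)
            if digit:
                buckets[digit] += p              # one group addition per point
        # Aggregate sum(j * buckets[j]) with the running-sum trick.
        running = window_sum = 0
        for j in reversed(range(1, 1 << c)):
            running += buckets[j]
            window_sum += running
        total += window_sum
    return total

scalars = [5, 11, 23, 42]
points = [101, 202, 303, 404]                    # integer stand-ins for group elements
assert msm_pippenger(scalars, points) == sum(s * p for s, p in zip(scalars, points))
```
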
Rachit Garg, George Lu, Brent Waters, David J. Wu
ePrint Report
Attribute-based encryption (ABE) is a generalization of public-key encryption that enables fine-grained access control to encrypted data. In (ciphertext-policy) ABE, a central trusted authority issues decryption keys for attributes $x$ to users. In turn, ciphertexts are associated with a decryption policy $\mathcal{P}$. Decryption succeeds and recovers the encrypted message whenever $\mathcal{P}(x) = 1$. Recently, Hohenberger, Lu, Waters, and Wu (Eurocrypt 2023) introduced the notion of registered ABE, which is an ABE scheme without a trusted central authority. Instead, users generate their own public/secret keys (just like in public-key encryption) and then register their keys (and attributes) with a key curator. The key curator is a transparent and untrusted entity.

Currently, the best pairing-based registered ABE schemes support monotone Boolean formulas and an a priori bounded number of users $L$. A major limitation of existing schemes is that they require a (structured) common reference string (CRS) of size $L^2 \cdot |\mathcal{U}|$ where $|\mathcal{U}|$ is the size of the attribute universe. In other words, the size of the CRS scales quadratically with the number of users and multiplicatively with the size of the attribute universe. The large CRS makes these schemes expensive in practice and limited to a small number of users and a small universe of attributes.

In this work, we give two ways to reduce the CRS size in pairing-based registered ABE schemes. First, we introduce a combinatoric technique based on progression-free sets that enables registered ABE for the same class of policies but with a CRS whose size is sub-quadratic in the number of users. Asymptotically, we obtain a scheme where the CRS size is nearly linear in the number of users $L$ (i.e., $L^{1 + o(1)}$). If we take a more concrete-efficiency-oriented focus, we can instantiate our framework to obtain a construction with a CRS of size $L^{\log_2 3} \approx L^{1.6}$. For instance, in a scheme for 100,000 users, our approach reduces the CRS size by more than $115\times$ compared to previous approaches (and without incurring any overhead in encryption/decryption time).

Our second approach for reducing the CRS size is to rely on a partitioning-based argument when arguing security of the registered ABE scheme. Previous approaches took a dual-system approach. Using a partitioning-based argument yields a registered ABE scheme where the size of the CRS is independent of the size of the attribute universe. The cost is that the resulting scheme satisfies a weaker notion of static security. Our techniques for reducing the CRS size can be combined, and taken together, we obtain a pairing-based registered ABE scheme that supports monotone Boolean formulas with a CRS size of $L^{1 + o(1)}$. Notably, this is the first pairing-based registered ABE scheme that does not require imposing a bound on the size of the attribute universe during setup time.

As an additional application, we also show how to apply our techniques based on progression-free sets to the batch argument (BARG) for $\mathsf{NP}$ scheme of Waters and Wu (Crypto 2022) to obtain a scheme with a nearly-linear CRS without needing to rely on non-black-box bootstrapping techniques.
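
To make the combinatorial tool concrete: below is a minimal sketch of the classical progression-free set of integers whose ternary digits are 0 or 1. Embedding $L = 2^k$ indices this way needs a domain of size $3^k = L^{\log_2 3} \approx L^{1.585}$, matching the concrete bound quoted above; how the paper actually wires such a set into the CRS is not modeled here.

```python
# The set of integers < 3**k whose ternary digits are all 0 or 1 is
# progression-free: no three distinct elements a < b < c satisfy a + c == 2*b.
# It has 2**k elements inside a domain of size 3**k.
from itertools import combinations

def progression_free_set(k):
    out = []
    for mask in range(1 << k):
        n = 0
        for bit in range(k):
            if (mask >> bit) & 1:
                n += 3 ** bit          # binary digits of mask become ternary digits
        out.append(n)
    return out

S = progression_free_set(5)            # 32 elements inside [0, 243)
for a, b, c in combinations(S, 3):     # tuples come out sorted: a < b < c
    assert a + c != 2 * b
print(len(S), "indices embedded in a domain of size", 3 ** 5)
```
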
Slim Bettaieb, Loïc Bidoux, Victor Dyseryn, Andre Esser, Philippe Gaborit, Mukul Kulkarni, Marco Palumbi
ePrint Report
In this work we introduce PERK, a compact digital signature scheme based on the hardness of a new variant of the Permuted Kernel Problem (PKP). At 6 kB, PERK achieves the smallest signature sizes of any PKP-based scheme at NIST category I security, while obtaining competitive signing and verification timings. PERK also compares well with the general state of the art. To substantiate these claims we provide an optimized constant-time AVX2 implementation, a detailed performance analysis, and different size-performance trade-offs.

Technically, our scheme is based on a Zero-Knowledge Proof of Knowledge following the MPC-in-the-Head paradigm and employing the Fiat-Shamir transform. We provide comprehensive security proofs, ensuring EUF-CMA security for PERK in the random oracle model. The efficiency of PERK stems largely from our particular choice of PKP variant, which allows for an application of the challenge-space amplification technique due to Bidoux-Gaborit (C2SI 2023).

Our second main contribution is an in-depth study of the hardness of the introduced problem variant. First, we establish a link between the hardness of our problem variant and the hardness of standard PKP. Then, we initiate an in-depth study of the concrete complexity to solve our variant. We present a novel algorithm which outperforms previous approaches for certain parameter regimes. However, the proximity of our problem variant to the standard variant can be controlled via a specific parameter. This enables us to effectively safeguard against our new attack and potential future extensions by a choice of parameters that ensures only a slight variation from standard PKP.
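
For readers unfamiliar with PKP, here is a hedged sketch of the standard formulation (find a permutation $\pi$ of a public vector $v$ that lands in the kernel of a public matrix $A$); PERK's new variant differs in ways this sketch does not capture, and the parameters are toy values far below real sizes.

```python
# Toy instance of the standard Permuted Kernel Problem: given A over F_p
# and a vector v, find a permutation pi so the permuted v lies in ker(A).
# The generator plants a secret kernel vector w and publishes a shuffle of it.
import random

random.seed(0)

def gen_pkp_instance(m, n, p):
    w = [random.randrange(1, p) for _ in range(n)]          # secret kernel vector
    A = [[random.randrange(p) for _ in range(n - 1)] for _ in range(m)]
    for row in A:                                           # fix last column so A*w = 0
        partial = sum(a * x for a, x in zip(row, w)) % p
        row.append((-partial * pow(w[-1], -1, p)) % p)
    pi = list(range(n))
    random.shuffle(pi)                                      # secret permutation
    v = [0] * n
    for i in range(n):
        v[pi[i]] = w[i]                                     # v is w, shuffled
    return A, v, pi

def solves(A, v, pi, p):
    w = [v[pi[i]] for i in range(len(v))]
    return all(sum(a * x for a, x in zip(row, w)) % p == 0 for row in A)

A, v, pi = gen_pkp_instance(m=4, n=8, p=251)
assert solves(A, v, pi, p=251)
```
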
Martin R. Albrecht, Joe Rowell
ePrint Report
Lattice sieves are algorithms for finding short vectors in lattices. We present an implementation of two such sieves – known as “BGJ1” and “BDGL” in the literature – that scales across multiple servers (with varying success). This class of algorithms requires exponential memory, which had called into question their ability to scale across sieving nodes. We discuss our architecture and optimisations and report experimental evidence of the efficiency of our approach.
Céline Chevalier, Guirec Lebrun, Ange Martinelli, Jérôme Plût
ePrint Report
Messaging Layer Security (MLS) is a Secure Group Messaging protocol that uses for its handshake a binary tree – called a Ratchet Tree – in order to achieve a communication cost logarithmic in the number of group members. This Ratchet Tree represents users as its leaves; therefore any change in the group membership results in adding or removing a leaf associated with that user. MLS consequently implements what we call a tree evolution mechanism, consisting of a user-add algorithm – determining where to insert a new leaf – and a tree expansion process – stating how to increase the size of the tree when no space is available for a new user.

The tree evolution mechanism currently used by MLS is designed so that it naturally left-balances the Ratchet Tree. However, such a Ratchet Tree structure is often quite inefficient in terms of communication cost. Furthermore, one may wonder whether the binary tree used in that Ratchet Tree has a degree optimized for the features of a handshake in MLS – called a commit.

Therefore, we study in this paper how to improve the communication cost of a commit in MLS by considering both the tree evolution mechanism and the tree degree used for the Ratchet Tree. To do so, we determine the tree structure that optimizes communication cost, and we propose optimized algorithms for both the user-add and tree-expansion processes that allow the tree to remain close to that optimal structure, and thus keep the communication cost as close to optimal as possible.

We also determine the Ratchet Tree degree that is best suited to a given set of parameters induced by the encryption scheme used by MLS. This study shows that when using classical (i.e. pre-quantum) ciphersuites, a binary tree is indeed the most appropriate Ratchet Tree; nevertheless, when it comes to post-quantum algorithms, it generally becomes preferable to use a ternary tree instead.

Our improvements do not change the TreeKEM protocol and are easy to implement. With parameter sets corresponding to practical ciphersuites, they reduce TreeKEM’s communication cost by 5 to 10%. In particular, the 10% gain appears in the post-quantum setting – when both an optimized tree evolution mechanism and a ternary tree are necessary – which is precisely the context where any optimization of the protocol’s communication cost is welcome, due to the large bandwidth consumption of post-quantum encrypted communication.
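
A back-of-the-envelope sketch of the degree trade-off: in a full $d$-ary tree with $n$ leaves, a commit roughly encrypts to the $d-1$ copath siblings at each of $\log_d n$ levels and ships one public key per updated path node. The cost model and the ciphertext/key sizes below (X25519-style vs. ML-KEM-768-style) are illustrative assumptions, not the paper's exact accounting.

```python
# Rough commit cost in bytes for a full d-ary ratchet tree with n leaves:
# (d - 1) copath ciphertexts plus one fresh public key per level.
import math

def commit_bytes(n_users, d, ct_size, pk_size):
    depth = math.ceil(math.log(n_users, d))
    return depth * ((d - 1) * ct_size + pk_size)

n = 100_000
# Classical sizes: ~48-byte HPKE ciphertext, 32-byte public key (X25519-style).
print("classical    d=2:", commit_bytes(n, 2, ct_size=48, pk_size=32))     # binary wins
print("classical    d=3:", commit_bytes(n, 3, ct_size=48, pk_size=32))
# Post-quantum sizes: ML-KEM-768 has ~1088-byte ciphertexts, ~1184-byte keys.
print("post-quantum d=2:", commit_bytes(n, 2, ct_size=1088, pk_size=1184))
print("post-quantum d=3:", commit_bytes(n, 3, ct_size=1088, pk_size=1184)) # ternary wins
```

Running it, the binary tree is slightly cheaper with the classical sizes, while the ternary tree is cheaper with the post-quantum sizes, consistent with the conclusion above.
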

17 May 2024

Graz, Austria, 23 September - 27 September 2024
School
Event date: 23 September to 27 September 2024

16 May 2024

Mingyu Cho, Woohyuk Chung, Jincheol Ha, Jooyoung Lee, Eun-Gyeol Oh, Mincheol Son
ePrint Report
A transciphering framework, also known as hybrid homomorphic encryption, is a practical method of combining a homomorphic encryption (HE) scheme with a symmetric cipher in the client-server model to reduce computational and communication overhead on the client side. As a server homomorphically evaluates a symmetric cipher in this framework, new design rationales are required for "HE-friendly" ciphers that take into account the specific properties of the HE schemes.

In this paper, we propose a new TFHE-friendly cipher, dubbed $\mathsf{FRAST}$, with a TFHE-friendly round function based on a random S-box to minimize the number of rounds. The round function of $\mathsf{FRAST}$ can be efficiently evaluated in TFHE by a new optimization technique, dubbed double blind rotation. Combined with our new WoP-PBS method, the double blind rotation allows computing multiple S-box calls in the round function of $\mathsf{FRAST}$ at the cost of a single S-box call. In this way, $\mathsf{FRAST}$ enjoys $2.768$ (resp. $10.57$) times higher throughput compared to $\mathsf{Kreyvium}$ (resp. $\mathsf{Elisabeth}$) for TFHE keystream evaluation in the offline phase of the transciphering framework at the cost of slightly larger communication overhead.
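
A minimal sketch of the transciphering dataflow itself (not of FRAST or double blind rotation): the client does only cheap symmetric work, while the server operates under encryption. The stream cipher below is a toy LCG and MockHE merely tracks plaintext bits to show who computes what; it provides no security and stands in for a real TFHE ciphertext.

```python
# Dataflow-only sketch of transciphering: the client sends a symmetric
# ciphertext plus its key encrypted under HE; the server re-derives the
# keystream under HE and XORs it into the ciphertext, obtaining HE(m).
import secrets

class MockHE:
    def __init__(self, bit):
        self.bit = bit                      # hidden from the server with real HE
    def xor_plain(self, b):
        return MockHE(self.bit ^ b)         # homomorphic XOR with a known bit

def keystream(key, n):                      # toy LCG keystream, not a real cipher
    out, state = [], key
    for _ in range(n):
        state = (1103515245 * state + 12345) % (1 << 31)
        out.append(state & 1)
    return out

# Client side: cheap symmetric encryption only.
key = secrets.randbits(31)
message = [1, 0, 1, 1, 0, 0, 1, 0]
ct = [m ^ k for m, k in zip(message, keystream(key, len(message)))]
he_key = [MockHE(int(b)) for b in f"{key:031b}"]   # sent once, HE-encrypted

# Server side: would evaluate the cipher on he_key under HE; the mock
# shortcuts that evaluation, since only the dataflow is illustrated here.
he_stream = [MockHE(b) for b in keystream(key, len(ct))]
he_message = [s.xor_plain(c) for s, c in zip(he_stream, ct)]

assert [x.bit for x in he_message] == message
```
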
Yoshihiro Ohba, Tomoya Sanuki, Claude Gravel, Kentaro Mihara
ePrint Report
In this paper, we introduce a new approach to secure computing by implementing a platform that utilizes an NVMe-based system with an FPGA-based Torus FHE accelerator, SSD, and middleware on the host-side. Our platform is the first of its kind to offer complete secure computing capabilities for TFHE using an FPGA-based accelerator. We have defined secure computing instructions to evaluate 14-bit to 14-bit functions using TFHE, and our middleware allows for communication of ciphertexts, keys, and secure computing programs while invoking secure computing programs through NVMe commands with metadata. Our CMux gate implementation features an optimized NTT/INTT circuit that eliminates pre-NTT and post-INTT operations by pre-scaling and pre-transforming constant polynomials such as the bootstrapping and private-functional key-switching keys. Our performance evaluation demonstrates that our secure computing platform outperforms CPU-based and GPU-based platforms by 15 to 120 times and by 2.5 to 3 times, respectively, in gate bootstrapping execution time. Additionally, during gate bootstrapping our platform consumes 7 to 12 times less energy than CPU-based platforms and 1.15 to 1.2 times less than GPU-based platforms.
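
A hedged sketch of the pre-transformation idea mentioned for the CMux/NTT pipeline: when one operand of an NTT-based product is a fixed constant (such as a key-switching or bootstrapping key), its forward transform can be computed once offline and reused. The sketch uses naive $O(n^2)$ transforms over a toy field and plain cyclic (rather than TFHE's negacyclic) convolution; the pre-scaling trick that removes the post-INTT step is not modeled.

```python
# Multiplying by a constant polynomial via NTT: transform the constant
# once offline, then each online product needs one forward and one
# inverse transform instead of two forward ones.
p, n, w = 17, 8, 9            # n divides p - 1; w is a primitive n-th root mod p

def ntt(a, root=w):
    return [sum(a[j] * pow(root, i * j, p) for j in range(n)) % p for i in range(n)]

def intt(A):
    inv_n = pow(n, -1, p)
    return [v * inv_n % p for v in ntt(A, root=pow(w, -1, p))]

const_poly = [1, 2, 0, 0, 0, 0, 0, 3]
const_hat = ntt(const_poly)   # precomputed once, offline

def mul_by_const(a):
    return intt([x * y % p for x, y in zip(ntt(a), const_hat)])

# Check against schoolbook multiplication mod (X^8 - 1, 17).
a = [5, 0, 1, 0, 0, 0, 0, 0]
ref = [0] * n
for i, ai in enumerate(a):
    for j, bj in enumerate(const_poly):
        ref[(i + j) % n] = (ref[(i + j) % n] + ai * bj) % p
assert mul_by_const(a) == ref
```
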
Kai Hu
ePrint Report
The best-known distinguisher on 7-round Ascon-128 and Ascon-128a AEAD uses a 60-dimensional cube where the nonce bits are set to be equal in the third and fourth rows of the Ascon state during initialization (Rohit et al., ToSC 2021/1). It was not known how to use this distinguisher to mount key-recovery attacks. In this paper, we investigate this problem using a new strategy called "break-fix" for the conditional cube attack. The idea is to introduce slightly-modified cubes which increase the degrees of 7-round output bits to be more than 59 (break phase) and then find key conditions which can bring the degree back to 59 (fix phase). Using this idea, key-recovery attacks on 7-round Ascon-128, Ascon-128a and Ascon-80pq are proposed. The attacks have better time/memory complexities than the existing attacks, and in some cases improve the weak-key attacks as well.
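
A small worked example of the cube-sum principle the attack builds on: XOR-summing an output bit over all assignments of a $d$-dimensional cube cancels every monomial not divisible by the full cube term, leaving the "superpoly"; if the degree in the cube variables is below $d$, the sum is identically zero. The function below is a toy polynomial, not Ascon.

```python
# The 3-dimensional cube sum is 0 because f has degree 2 < 3 in the cube
# variables, while the 2-dimensional cube {x0, x1} recovers the superpoly
# of x0*x1 (here the constant 1), independent of the key.
from itertools import product

def cube_sum(f, cube_vars, fixed, key):
    acc = 0
    for assignment in product([0, 1], repeat=len(cube_vars)):
        x = dict(fixed)
        x.update(zip(cube_vars, assignment))
        acc ^= f(x, key)
    return acc

def f(x, k):                    # toy output bit, degree 2 in x0, x1, x2
    return (x[0] & x[1]) ^ (x[1] & x[2] & k[0]) ^ (x[0] & k[1]) ^ k[0]

for key in product([0, 1], repeat=2):
    assert cube_sum(f, (0, 1, 2), {}, key) == 0    # degree < dimension: sums to 0
    assert cube_sum(f, (0, 1), {2: 0}, key) == 1   # superpoly of x0*x1 is 1
```
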
David Pointcheval
ePrint Report
Universal verifiability is a must-have for electronic voting schemes. It is essential for ensuring honest behavior of all players during the whole process, together with voter eligibility. However, it should not endanger the privacy of the individual votes, which is another major requirement. Whereas the first property prevents attacks during the voting process, privacy of the votes should hold forever, which has been called everlasting privacy.

A classical approach for universal verifiability is to add some proofs together with the encrypted votes, which requires publication of the latter, while eligibility needs a link between the votes and the voters: it definitely excludes long-term privacy. An alternative is the use of perfectly-hiding commitments, on which proofs are published, while ciphertexts are kept private for computing the tally.

In this paper, we show how recent linearly-homomorphic signatures can be exploited for all the proofs, leading to very efficient procedures towards universal verifiability with both strong receipt-freeness and everlasting privacy. Privacy will indeed be unconditional, after the publication of the results and the proofs, whereas the soundness of the proofs holds in the algebraic group model and the random oracle model.
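
As a concrete instance of the perfectly-hiding commitments mentioned above, here is a textbook Pedersen commitment sketch (the paper's exact building blocks may differ). The group is tiny and the discrete log of $h$ is deliberately known here, purely to demonstrate perfect hiding by opening one commitment to two different values.

```python
# Pedersen commitment com = g^v * h^r in a tiny subgroup of order q = 11.
# Perfect hiding: every candidate value opens the same commitment with a
# suitable r, shown below using the demo's known log_g(h) = 5.
import secrets

p, q, g, h = 23, 11, 2, 9           # h = g^5 mod p; log unknown in a real setup

def commit(v, r):
    return pow(g, v % q, p) * pow(h, r % q, p) % p

v, r = 7, secrets.randbelow(q)
com = commit(v, r)

# Open the same commitment to a different value v2: solve
# v + 5*r == v2 + 5*r2 (mod q), i.e. r2 = r + (v - v2) * 5^{-1} mod q.
v2 = 3
r2 = (r + (v - v2) * pow(5, -1, q)) % q
assert commit(v2, r2) == com        # the commitment reveals nothing about v
```
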
Rune Fiedler, Christian Janson
ePrint Report
Many use messaging apps such as Signal to exercise their right to private communication. To cope with the advent of quantum computing, Signal employs a new initial handshake protocol called PQXDH for post-quantum confidentiality, yet keeps guarantees of authenticity and deniability classical. Compared to its predecessor X3DH, PQXDH includes a KEM encapsulation and a signature on the ephemeral key. In this work we show that PQXDH does not meet the same deniability guarantees as X3DH due to the signature on the ephemeral key. Our analysis relies on plaintext awareness of the KEM, which Signal's implementation of PQXDH does not provide. As for X3DH, both parties (initiator and responder) obtain different deniability guarantees due to the asymmetry of the protocol.

For our analysis of PQXDH, we introduce a new model for deniability of key exchange that allows a more fine-grained analysis. Our deniability model picks up on the ideas of prior work and facilitates new combinations of deniability notions, such as deniability against malicious adversaries in the big brother model, i.e. where the distinguisher knows all secret keys. Our model may be of independent interest.
Ky Nguyen, Duong Hieu Phan, David Pointcheval
ePrint Report
Recent years have witnessed significant developments for functional encryption (FE) in the multi-user setting, particularly with multi-client functional encryption (MCFE). The challenge becomes more important when combined with access control, such as attribute-based encryption (ABE), which was not covered by the FE and MCFE frameworks. On the other hand, as is common for complex primitives, many works have studied the admissibility of adversaries, to ensure that the security model encompasses all realistic threats.

In this paper, by adding a public input to FE/MCFE, we cover many previous primitives, notably attribute-based function classes. Furthermore, with the strongest admissibility for the inner-product functionality, our framework is quite versatile, as it encrypts multiple sub-vectors, allows repetitions and corruptions, and ultimately also encompasses public-key FE and classical ABE, bridging the private setting of MCFE with the public setting of FE and ABE.

Finally, we propose an MCFE with public inputs for the class of functions that combines inner products (on private inputs) with attribute-based access control (on public inputs) for LSSS policies. We achieve the first AB-MCFE for inner products with strong admissibility and adaptive security. This also yields MIFE for inner products, public-key single-input inner-product FE with LSSS key-policy, and KP-ABE for LSSS, all with adaptive security, whereas the previous AB-MCFE construction of Agrawal et al. from CRYPTO '23 covers a slightly larger functionality (average weighted sums) but achieves only selective security.
Ziyu Zhao, Jintai Ding, Bo-Yin Yang
ePrint Report
The focus of this paper is to tackle the issue of memory access within sieving algorithms for lattice problems. We have conducted an in-depth analysis of an optimized BGJ sieve (Becker-Gama-Joux 2015), and our findings suggest that its inherent structure is significantly more memory-efficient compared to the asymptotically fastest BDGL sieve (Becker-Ducas-Gama-Laarhoven 2016). Specifically, it requires only $2^{0.2075n + o(n)}$ streamed (non-random) main memory accesses for the execution of an $n$-dimensional sieving. We also provide evidence that the time complexity of this refined BGJ sieve could potentially be $2^{0.292n + o(n)}$, or at least remarkably close to it. In fact, it outperforms the BDGL sieve in all dimensions that are practically achievable. We hope that this study will contribute to the resolution of the ongoing debate regarding the measurement of RAM access overhead in large-scale, sieving-based lattice attacks.

These findings are also supported by our implementation. We provide a CPU-based implementation of the refined BGJ sieve within an optimized sieving framework, highly efficient in both time and memory. This implementation results in approximately 40% savings in RAM usage and is at least $2^{4.5}$ times more efficient in terms of gate count compared to the previous 4-GPU implementation (Ducas-Stevens-Woerden 2021). Notably, we have successfully solved the 183-dimensional SVP Darmstadt Challenge in 30 days using a 112-core server and approximately 0.87TB of RAM. The majority of previous sieving-based SVP computations relied on the HK3-sieve (Herold-Kirshanova 2017); hence this implementation could offer further insights into the behavior of these asymptotically faster sieving algorithms when applied to large-scale problems. Moreover, our refined cost estimation of SVP based on this implementation suggests that some of the NIST PQC candidates, such as Falcon-512, are unlikely to achieve NIST's security requirements.
Prabhanjan Ananth, Zihan Hu, Zikuan Huang
ePrint Report
Quantum information can be used to achieve novel cryptographic primitives that are impossible to achieve classically. A recent work by Ananth, Poremba, Vaikuntanathan (TCC 2023) focuses on equipping the dual-Regev encryption scheme, introduced by Gentry, Peikert, Vaikuntanathan (STOC 2008), with key revocation capabilities using quantum information. They further showed that the key-revocable dual-Regev scheme implies the existence of fully homomorphic encryption and pseudorandom functions, with both of them also equipped with key revocation capabilities. Unfortunately, they were only able to prove the security of their schemes based on new conjectures and left open the problem of basing the security of key revocable dual-Regev encryption on well-studied assumptions.

In this work, we resolve this open problem. Assuming polynomial hardness of learning with errors (over a sub-exponential modulus), we show that key-revocable dual-Regev encryption is secure. As a consequence, we achieve the following results for the first time:

1. Key-revocable public-key encryption and key-revocable fully-homomorphic encryption satisfying classical revocation security, based on the polynomial hardness of learning with errors. Prior works either did not achieve classical revocation or were based on the sub-exponential hardness of learning with errors.

2. Key-revocable pseudorandom functions satisfying classical revocation from the polynomial hardness of learning with errors. Prior works relied upon unproven conjectures.
Rishab Goyal
ePrint Report
Non-interactive batch arguments (BARGs) let a prover compute a single proof $\pi$ proving validity of a 'batch' of $k$ $\mathbf{NP}$ statements $x_1, \ldots, x_{k}$. The two central features of BARGs are succinctness and soundness. Succinctness states that the proof size $|\pi|$ does not grow with $k$, while soundness states that a polynomial-time cheating prover cannot create an accepting proof for any invalid batch of statements.

In this work, we put forth a new concept of mutability for batch arguments, called mutable batch arguments. Our goal is to re-envision how we think about and use BARGs. Traditionally, a BARG proof string $\pi$ is an immutable encoding of the $k$ $\mathbf{NP}$ witnesses $\omega_1, \ldots, \omega_{k}$. In a mutable BARG system, each proof string $\pi$ is instead a mutable encoding of the original witnesses. Thus, a mutable BARG captures and enables computations over a batch proof $\pi$. We also study new privacy notions for mutable BARGs, guaranteeing that a mutated proof hides all non-trivial information. Such mutable BARGs are a natural fit for many privacy-sensitive applications.

Our main contributions can be summarized as introducing the general concept of mutable BARGs, identifying new non-trivial classes of feasible mutations over BARGs, designing new constructions for mutable BARGs satisfying mutation privacy for these classes from standard cryptographic assumptions, and developing applications of mutable BARGs to advanced signatures such as homomorphic signatures, redactable signatures, and aggregate signatures. Our results improve state-of-the-art known for many such signature systems either in terms of functionality, efficiency, security, or versatility (in terms of cryptographic assumptions).
James Bartusek, Justin Raizes
ePrint Report
Secret sharing allows a user to split a secret into many shares so that the secret can be recovered if, and only if, an authorized set of shares is collected. Although secret sharing typically does not require any computational hardness assumptions, its security does require that an adversary cannot collect an authorized set of shares. Over long periods of time where an adversary can benefit from multiple data breaches, this may become an unrealistic assumption. We initiate the systematic study of secret sharing with certified deletion in order to achieve security even against an adversary that eventually collects an authorized set of shares. In secret sharing with certified deletion, a (classical) secret $s$ is split into quantum shares that can be destroyed in a manner verifiable by the dealer.

We put forth two natural definitions of security. No-signaling security roughly requires that if multiple non-communicating adversaries delete sufficiently many shares, then their combined view contains negligible information about $s$, even if the total set of corrupted parties forms an authorized set. Adaptive security requires privacy of $s$ against an adversary that can continuously and adaptively corrupt new shares and delete previously-corrupted shares, as long as the total set of corrupted shares minus deleted shares remains unauthorized.

Next, we show that these security definitions are achievable: we show how to construct (i) a secret sharing scheme with no-signaling certified deletion for any monotone access structure, and (ii) a threshold secret sharing scheme with adaptive certified deletion. Our first construction uses Bartusek and Khurana's (CRYPTO 2023) 2-out-of-2 secret sharing scheme with certified deletion as a building block, while our second construction is built from scratch and requires several new technical ideas. For example, we significantly generalize the "XOR extractor" of Agarwal, Bartusek, Khurana, and Kumar (EUROCRYPT 2023) in order to obtain better seedless extraction from certain quantum sources of entropy, and show how polynomial interpolation can double as a high-rate randomness extractor in our context of threshold sharing with certified deletion.
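
Since the adaptive construction builds on threshold sharing and reuses polynomial interpolation, here is a minimal classical Shamir sketch for orientation; the quantum shares and the certified-deletion layer are not modeled, and the field choice is an arbitrary convenience.

```python
# Classical Shamir threshold sharing over F_P: shares are points on a
# random degree-(t-1) polynomial whose constant term is the secret, and
# reconstruction is Lagrange interpolation at 0.
import secrets

P = 2**127 - 1                               # a Mersenne prime

def share(secret, t, n):
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

s = 123456789
shares = share(s, t=3, n=5)
assert reconstruct(shares[:3]) == s          # any 3 of the 5 shares recover s
assert reconstruct(shares[2:]) == s
```
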
Isheeta Nargis, Anwar Hasan
ePrint Report
We design a new MPC protocol for arithmetic circuits secure against erasure-free covert adaptive adversaries with deterrence 1/2. The new MPC protocol has the same asymptotic communication cost, number of PKE operations, and number of exponentiation operations as the most efficient MPC protocol for arithmetic circuits secure against covert static adversaries. That is, the new protocol upgrades security from covert static adversaries to covert adaptive adversaries almost for free. For MPC problems where the number of parties $n$ is much larger than the number of multiplication gates $M$, the new MPC protocol asymptotically improves communication complexity over the most efficient MPC protocol for arithmetic circuits secure against erasure-free active adaptive adversaries.
Aram Jivanyan, Karen Terjanian
ePrint Report
We introduce a novel consensus protocol for blockchain, called Proof of Stake and Activity (PoSA), which augments traditional Proof of Stake methods by integrating a unique Proof of Activity system. PoSA offers a compelling economic model that promotes decentralization by rewarding validators based on both their staked capital and the business value they contribute to the chain. The protocol has already been implemented in a fully-fledged blockchain platform called Bahamut (www.bahamut.io), which boasts hundreds of thousands of active users.
Zhongtang Luo, Yanxue Jia, Yaobin Shen, Aniket Kate
ePrint Report
TLS oracles allow a TLS client to offer selective data provenance to an external (oracle) node such that the oracle node is ensured that the data is indeed coming from a pre-defined TLS server. Typically, the client/user supplies their credentials and reveals data in a zero-knowledge manner to demonstrate certain information to oracles while ensuring the privacy of the rest of the data. Conceptually, this is a standard three-party secure computation between the TLS server, TLS client (prover), and the oracle (verifier) node; however, the key practical requirement for TLS oracles is that the data provenance process remain transparent to the TLS server. Recent TLS oracle protocols such as DECO enforce the communication pattern of server-client-verifier and utilize a novel three-party handshake process during TLS to ensure data integrity against potential tampering by the client. However, this approach comes with a significant performance penalty for the client/prover and the verifier, and raises the question of whether it is possible to improve performance by placing the verifier (as a proxy) between the server and the client, so that the correct TLS transcript is always available to the verifier.

This work offers both positive and negative answers to this oracle proxy question: We first formalize the oracle proxy notion that allows the verifier to directly proxy client-server TLS communication, without entering a three-party handshake or interfering with the connection in any way. We then show that for common TLS-based higher-level protocols such as HTTPS, data integrity to the verifier proxy is ensured by the variable padding built into the HTTP protocol semantics. On the other hand, if a TLS-based protocol comes without variable padding, we demonstrate that data integrity cannot be guaranteed. In this context, we then study the case where the TLS response is pre-determined and cannot be tampered with during the connection. We propose the concept of context unforgeability and show that it allows overcoming the impossibility. We further show that ChaCha20-Poly1305 satisfies this notion while AES-GCM does not, in the standard model.