International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

08 September 2023

Mengce Zheng
ePrint Report
We point out critical deficiencies in the lattice-based cryptanalysis of common prime RSA presented in "Remarks on the cryptanalysis of common prime RSA for IoT constrained low power devices" [Information Sciences, 538 (2020) 54--68]. To rectify these flaws, we carefully scrutinize the relevant parameters involved in the analysis when solving a specific trivariate integer polynomial equation. Additionally, we offer a synthesized illustration of small private key attacks on common prime RSA.
Karthikeyan Bhargavan, Abhishek Bichhawat, Pedram Hosseyni, Ralf Kuesters, Klaas Pruiksma, Guido Schmitz, Clara Waldmann, Tim Würtele
ePrint Report
While cryptographic protocols are often analyzed in isolation, they are typically deployed within a stack of protocols, where each layer relies on the security guarantees provided by the protocol layer below it, and in turn provides its own security functionality to the layer above. Formally analyzing the whole stack in one go is infeasible even for semi-automated verification tools, and impossible for pen-and-paper proofs. The DY$^\star$ protocol verification framework offers a modular and scalable technique that can reason about large protocols, specified as a set of F$^\star$ modules. However, it does not support the compositional verification of layered protocols since it treats the global security invariants monolithically. In this paper, we extend DY$^\star$ with a new methodology that allows analysts to modularly analyze each layer in a way that composes to provide security for the whole protocol stack. Importantly, our technique allows a layer to be replaced by another implementation without affecting the proofs of other layers. We demonstrate this methodology on two case studies. We also present a verified library of generic authenticated and confidential communication patterns that can be used in future protocol analyses and is of independent interest.
Intak Hwang, Jinyeong Seo, Yongsoo Song
ePrint Report
In lattice-based Homomorphic Encryption (HE) schemes, the key-switching procedure is a core building block of non-linear operations but also a major performance bottleneck. The computational complexity of the operation is primarily determined by the so-called gadget decomposition, which transforms a ciphertext entry into a tuple of small polynomials before it is multiplied with the corresponding evaluation key. However, previous studies such as Halevi et al. (CT-RSA 2019) and Han and Ki (CT-RSA 2020) fix a decomposition function in the setup phase that is then applied uniformly across all ciphertext levels, resulting in suboptimal performance.

In this paper, we introduce a novel key-switching framework for leveled HE schemes. We aim to allow the use of different decomposition functions during the evaluation phase so that the optimal decomposition method can be utilized at each level to achieve the best performance. A naive solution might generate multiple key-switching keys corresponding to all possible decomposition functions, and send them all to the evaluator. However, our solution can achieve the goal without such communication overhead since it allows an evaluator to dynamically derive other key-switching keys from a single key-switching key depending on the choice of gadget decomposition.

We implement our framework at a proof-of-concept level to provide concrete benchmark results. Our experiments show that we achieve the optimal performance at every level while maintaining the same computational capability and communication costs.
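As a rough illustration of the gadget decomposition discussed above (a generic base-B sketch of my own, not the authors' framework or an RNS variant), the snippet below decomposes a coefficient into base-B digits and recombines it against the gadget vector (1, B, B^2, ...); choosing a different base per level trades the number of digits against their size, which is exactly the knob a level-dependent decomposition tunes.

def gadget_decompose(x, base, length):
    # digits d_0..d_{length-1} with x = sum(d_i * base**i)
    digits = []
    for _ in range(length):
        digits.append(x % base)
        x //= base
    return digits

def gadget_recompose(digits, base):
    return sum(d * base**i for i, d in enumerate(digits))

x = 123456789                              # a toy 32-bit ciphertext coefficient
for base in (2**4, 2**8, 2**16):           # a different decomposition base per "level"
    log_base = base.bit_length() - 1
    length = -(-32 // log_base)            # ceil(32 / log2(base)) digits suffice
    d = gadget_decompose(x, base, length)
    assert gadget_recompose(d, base) == x
    print(f"base 2^{log_base}: {length} digits, largest digit {max(d)}")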
Yuyu Wang, Jiaxin Pan, Yu Chen
ePrint Report
Fine-grained cryptography is the study of constructing cryptosystems in a setting where an adversary's resources are a priori bounded and honest parties have fewer resources than the adversary. Currently, only simple forms of encryption schemes, such as secret-key and public-key encryption, have been constructed in this setting. In this paper, we enrich the available tools in fine-grained cryptography by proposing the first fine-grained secure attribute-based encryption (ABE) scheme. Our construction is adaptively secure under the widely accepted worst-case assumption NC$^1 \subsetneq \oplus$L/poly, and it is presented in a generic manner using the notion of predicate encodings (Wee, TCC’14). By properly instantiating the underlying encoding, we can obtain different types of ABE schemes, including identity-based encryption. Previously, all of these schemes were unknown in fine-grained cryptography. Our main technical contribution is constructing ABE schemes without using pairings or the Diffie-Hellman assumption. Hence, our results show that, even if one-way functions do not exist, we still have ABE schemes with meaningful security. As a further application of our techniques, we construct an efficient (quasi-adaptive) non-interactive zero-knowledge (QA-NIZK) proof system.
Zhonghui Ge, Jiayuan Gu, Chenke Wang, Yu Long, Xian Xu, Dawu Gu
ePrint Report
Payment channel hubs (PCHs) serve as a promising solution for achieving quick off-chain payments between pairs of users. They work by using an untrusted tumbler to relay the payments between the payer and payee and enjoy the advantages of low cost and high scalability. However, the most recent privacy-preserving payment channel hub solution that supports variable payment amounts suffers from limited unlinkability, e.g., being vulnerable to the abort attack. Moreover, this solution relies on non-interactive zero-knowledge proofs, which incur heavy costs in both computation time and communication overhead. Therefore, designing PCHs that support variable-amount payments and unlinkability while minimizing the use of such costly cryptographic tools is significant for the large-scale practical application of off-chain payments.

In this paper, we propose Accio, a variable amount payment channel hub solution with optimized unlinkability, by deepening research on unlinkability and constructing a new cryptographic tool. We provide the detailed Accio protocol and formally prove its security and privacy under the Universally Composable framework. Our prototype demonstrates its feasibility and the evaluation shows that Accio outperforms the other state-of-the-art works in both communication and computation costs.
Florian Helmschmidt, Pedram Hosseyni, Ralf Kuesters, Klaas Pruiksma, Clara Waldmann, Tim Würtele
ePrint Report
The Grant Negotiation and Authorization Protocol (GNAP) is an emerging authorization and authentication protocol which aims to consolidate and unify several use cases of OAuth 2.0 and many of its common extensions while providing a higher degree of security. OAuth 2.0 is an essential cornerstone of the security of authorization and authentication for the Web, IoT, and beyond, and is used, among others, by many global players, like Google, Facebook, and Microsoft. Because of historically grown limitations and issues of OAuth 2.0 and its various extensions, prominent members of the OAuth community decided to create GNAP, a new and completely redesigned authorization and authentication protocol. Given GNAP's advantages over OAuth 2.0 and its support within the OAuth community, GNAP is expected to become at least as important as OAuth 2.0.

In this paper, we present the first formal security analysis of GNAP. We build a detailed formal model of GNAP, based on the Web Infrastructure Model (WIM) of Fett, Küsters, and Schmitz. Based on this model, we provide formal statements of the key security properties of GNAP, namely, authorization, authentication, and session integrity for both authorization and authentication. In the process of trying to prove these properties, we have discovered several attacks on GNAP. We present these attacks as well as modifications to the protocol that prevent them. These modifications have been incorporated into the GNAP specification after discussion with the GNAP working group. We give the first formal security guarantees for GNAP, by proving that GNAP, with our modifications applied, satisfies the mentioned security properties.

GNAP was still an early draft when we started our analysis, but is now on track to be adopted as an IETF standard. Hence, our analysis is just in time to help ensure the security of this important emerging standard.
Yunxiao Zhou, Shengli Liu, Shuai Han, Haibin Zhang
ePrint Report
Proxy re-encryption (PRE) allows a proxy with a re-encryption key to translate a ciphertext intended for Alice (delegator) into another ciphertext intended for Bob (delegatee) without revealing the underlying message. However, with PRE, Bob obtains the whole message from the re-encrypted ciphertext, and Alice cannot take flexible control of the extent of the message transmitted to Bob. In this paper, we propose a new variant of PRE, called Fine-Grained PRE (FPRE), to support fine-grained re-encryptions. An FPRE is associated with a function family F, and each re-encryption key rk_{A→B}^f is associated with a function f ∈ F. With FPRE, Alice can now authorize re-encryption power to the proxy by issuing rk_{A→B}^f to it, with f chosen by herself. The proxy can then translate a ciphertext encrypting m into Bob's ciphertext encrypting f(m) with such a fine-grained re-encryption key, and Bob only obtains a function of the message m. In this way, Alice can take flexible control of the message spread by specifying functions. For FPRE, we formally define its syntax and formalize security notions including CPA security, ciphertext pseudo-randomness, unidirectionality, non-transitivity, and collusion-safety under adaptive corruptions in the multi-user setting. Moreover, we propose a new security notion named ciphertext unlinkability, which blurs the link between a ciphertext and its re-encrypted ciphertext to hide the proxy connections between users. We establish the relations between these security notions. As for constructions, we propose two FPRE schemes, one for bounded linear functions and the other for deletion functions, based on the learning-with-errors (LWE) assumption. Our FPRE schemes achieve all the aforementioned desirable security notions under adaptive corruptions in the standard model. As far as we know, our schemes provide the first solution to PRE with security under adaptive corruptions in the standard model.
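To make the FPRE functionality concrete, here is a deliberately simplified (and completely insecure) toy over Z_q for linear functions f(m) = a*m + b; it is my own symmetric-key stand-in, not the authors' LWE-based schemes, and it only illustrates the intended data flow: the proxy turns Alice's ciphertext into one from which Bob recovers f(m) rather than m.

import random
q = 2**31 - 1                     # toy modulus (a prime)

def keygen():                     # toy "key": a one-time pad element of Z_q
    return random.randrange(q)

def enc(k, m):                    # toy encryption: additive masking
    return (m + k) % q

def dec(k, c):
    return (c - k) % q

def rekeygen(k_alice, k_bob, a, b):
    # re-encryption key for f(m) = a*m + b; the proxy never sees m
    return (a, (b - a * k_alice + k_bob) % q)

def reenc(rk, c_alice):
    a, t = rk
    return (a * c_alice + t) % q

k_A, k_B = keygen(), keygen()
m, a, b = 42, 3, 7
c_A = enc(k_A, m)
rk = rekeygen(k_A, k_B, a, b)
c_B = reenc(rk, c_A)
assert dec(k_B, c_B) == (a * m + b) % q   # Bob obtains f(m), not m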
Thomas Chamelot, Damien Couroussé, Karine Heydemann
ePrint Report
Fault injection attacks represent an effective threat to embedded systems. Recently, Laurent et al. reported that fault injection attacks can leverage faults inside the microarchitecture. However, state-of-the-art countermeasures, whether hardware-only or with hardware support, do not consider the integrity of the microarchitectural control signals that are the target of these faults. We present MAFIA, a microarchitecture protection against fault injection attacks. MAFIA ensures the integrity of pipeline control signals through a signature-based mechanism, and ensures fine-grained control-flow integrity with complete indirect branch support and code authenticity. We analyse the security properties of two implementations with different security/overhead trade-offs: one with a CBC-MAC/Prince signature function, and one with a CRC32. We present our implementation of MAFIA in a RISC-V processor, supported by a dedicated compiler toolchain based on LLVM/Clang. We report a hardware area overhead of 23.8% and 6.5% for the CBC-MAC/Prince and CRC32 variants, respectively. The average code size and execution time overheads are 29.4% and 18.4%, respectively, for the CRC32 implementation, and 50% and 39% for the CBC-MAC/Prince.
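A rough sketch of the signature idea (my own software simplification, not the MAFIA hardware design): a running CRC32 is accumulated over the control words a block of code is expected to produce and compared against a reference signature computed ahead of time, so a fault that alters any single control word breaks the check.

import zlib

def signature(control_words):
    # running CRC32 over a sequence of 32-bit control words
    sig = 0
    for w in control_words:
        sig = zlib.crc32(w.to_bytes(4, "little"), sig)
    return sig

expected = [0x00000013, 0x00C58533, 0x40B50533, 0x00008067]   # toy reference trace of control words
reference_sig = signature(expected)                           # computed ahead of time

observed = list(expected)
observed[2] ^= 1 << 7            # model a fault flipping one control bit
assert signature(expected) == reference_sig
assert signature(observed) != reference_sig                   # fault is detected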
Vitor Pereira, Stéphane Graham-Lengrand, Karim Eldefrawy, Steve Lu, Samuel Dittmer, Rafail Ostrovsky
ePrint Report
Despite the notable advances in the development of high-assurance, verified implementations of cryptographic protocols, such implementations typically face significant performance overheads, particularly due to the penalties induced by formal verification and automated extraction of executable code. In this paper, we address some core performance challenges facing computer-aided cryptography by presenting a formal treatment for accelerating such verified implementations based on multiple generic optimizations covering parallelism and memory access. We illustrate our techniques for addressing such performance bottlenecks using the Line-Point Zero-Knowledge (LPZK) protocol as a case study. Our starting point is a new verified implementation of LPZK that we formalize and synthesize using EasyCrypt; this first implementation is developed to reduce the proof effort, without considering the performance of the extracted executable code. We then show how such (automatically) extracted code can be optimized in three different ways to obtain a 3000x speedup and thus match the performance of the manual implementation of LPZK. We obtain these performance gains by first modifying the algorithmic specifications, then by adopting a provably secure parallel execution model, and finally by optimizing the memory access structures. All optimizations are first formally verified inside EasyCrypt, and executable code is then automatically synthesized from each step of the formalization. For each optimization, we analyze the resulting performance gains and address the challenges facing the computer-aided security proofs thereof, as well as the challenges facing automated synthesis of executable code with such an optimization.
Jiaxin Pan, Benedikt Wagner, Runzhi Zeng
ePrint Report
We propose two generic constructions of public-key encryption (PKE) with tight simulation-based selective-opening security against chosen-ciphertext attacks (SIM-SO-CCA) in the random oracle model. Our constructions can be instantiated with a small constant number of elements in the ciphertext, ignoring smaller contributions from symmetric-key encryption. That is, they have compact ciphertexts. Furthermore, three of our instantiations have compact public keys as well. Known (almost) tightly SIM-SO-CCA secure PKE schemes are due to the work of Lyu et al. (PKC 2018) and Libert et al. (Crypto 2017). They have either linear-size ciphertexts or linear-size public keys. Moreover, they only achieve almost tightness, namely, with security loss depending on the security parameter. In contrast to them, our schemes are the first ones achieving both tight SIM-SO-CCA security and compactness. More precisely, our two generic constructions are:

- From Pseudorandom KEM: Our first generic construction is from a key encapsulation mechanism (KEM) with pseudorandom ciphertexts against plaintext-checking attacks. Such a KEM can be constructed directly from the Strong Diffie-Hellman (StDH), Computational DH (CDH), and Decisional DH assumptions. Both their ciphertexts and public keys are compact. Their security loss is a small constant. Interestingly, our CDH-based construction is the first scheme achieving all these advantages based on a weak search assumption. Furthermore, we also give a generic construction of such a KEM, which yields an efficient tightly SIM-SO-CCA PKE from lattices.

- From Lossy Encryption: Our second scheme is the well-known Fujisaki-Okamoto transformation. We show that it can turn a lossy encryption scheme into a tightly SIM-SO-CCA secure PKE. This transformation preserves both tightness and compactness of the underlying lossy encryption, which is in contrast to the non-tight proof of Heuer et al. (PKC 2015).
Michael Brand, Gaëtan Pradel
ePrint Report
Machine learning is a widely used tool for analysing large datasets, but increasing public demand for privacy preservation and the corresponding introduction of privacy regulations have severely limited what data can be analysed, even when this analysis is for societal benefit. Homomorphic encryption, which allows computation on encrypted data, is a natural solution to this dilemma, allowing data to be analysed without sacrificing privacy. Because homomorphic encryption is computationally expensive, however, current solutions have mainly been restricted to using it for inference rather than training.

In this work, we present a practically viable approach to privacy-preserving machine learning training using fully homomorphic encryption. Our method achieves fast training speeds, taking less than 45 seconds to train a binary classifier over thousands of samples on a single mid-range computer, significantly outperforming state-of-the-art results.
Kyosuke Yamashita, Keisuke Hara
ePrint Report
In this paper, we show that it is impossible to construct a public key encryption scheme (PKE) from a ring signature scheme in a black-box fashion in the standard model. Such an impossibility is highly non-trivial because, to the best of our knowledge, known generic constructions of ring signature schemes are either based on public-key cryptosystems or work in the random oracle model. Technically, we introduce a new cryptographic primitive named indistinguishable multi-designated verifiers signature (IMDVS), and prove that (i) IMDVS is equivalent to PKE, and (ii) it is impossible to construct IMDVS from a ring signature scheme in a generic way. Our result suggests an essential gap between ring signatures and group signatures, as it is known that group signatures imply PKE.
Kamil Doruk Gur, Jonathan Katz, Tjerand Silde
ePrint Report
Much recent work has developed efficient protocols for threshold signatures, where $n$ parties share a signing key and some threshold $t$ of those parties must interact to produce a signature. Yet efficient threshold signatures with post-quantum security have been elusive, with the state of the art being a two-round scheme by Damgård et al. based on lattices that supports only the full-threshold case (i.e., $t=n$).

We show here a two-round threshold signature scheme based on standard lattice assumptions that supports arbitrary thresholds $t\leq n$. Estimates of our scheme's performance at the $128$-bit security level with a trusted setup show that in the $3$-out-of-$5$ case, we obtain signatures of size $11.5$ KB and public keys of size $13.6$ KB, with an execution of the signing protocol using roughly $1.5$ MB of communication per party. We achieve improved parameters if only a small bounded number of signatures are ever issued with the same key.

As an essential building block and independent contribution, we construct a maliciously secure threshold (linearly) homomorphic encryption scheme that supports arbitrary thresholds $t \leq n$.
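For readers less familiar with arbitrary-threshold access structures, the following generic Shamir-sharing sketch (standard textbook material, not the authors' lattice-based construction) illustrates what "any threshold t ≤ n" means: any t of the n shares reconstruct the secret, while fewer reveal nothing about it.

import random
P = 2**61 - 1                                   # prime field for the toy sharing

def share(secret, t, n):
    # Shamir t-out-of-n sharing: evaluate a random degree-(t-1) polynomial at 1..n
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = 123456789
shares = share(secret, t=3, n=5)                # the 3-out-of-5 case from the abstract
assert reconstruct(random.sample(shares, 3)) == secret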
Ya-Nan Li, Tian Qiu, Qiang Tang
ePrint Report
Cryptocurrency exchange platforms such as Coinbase and Binance enable users to purchase and sell cryptocurrencies conveniently, just like trading stocks/commodities. However, because of the nature of blockchains, when a user withdraws coins (i.e., transfers coins to an external on-chain account), all future transactions can be learned by the platform. This is in sharp contrast to conventional stock exchanges, where all external activities of users are always hidden from the platform. Since the platform knows highly sensitive private user information such as passport numbers and bank details, linking all (on-chain) transactions raises a serious privacy concern about potentially disastrous data breaches at those cryptocurrency exchange platforms.

In this paper, we propose a cryptocurrency exchange that restores user anonymity for the first time. To our surprise, the seemingly well-studied privacy/anonymity problem has several new challenges in this setting. Since the public blockchain and internal transaction activities naturally provide many non-trivial leakages to the platform, internal privacy is not only useful in the usual sense but also becomes necessary for regaining the basic anonymity of user transactions. We also ensure that the user cannot double spend, and the user has to properly report accumulated profit for tax purposes, even in the private setting. We give a careful modeling and efficient construction of the system that achieves constant computation and communication overhead (with only simple cryptographic tools and rigorous security analysis); we also implement our system and evaluate its practical performance.
Erica Blum, Elette Boyle, Ran Cohen, Chen-Da Liu-Zhang
ePrint Report
Broadcast protocols enable a set of $n$ parties to agree on the input of a designated sender, even facing attacks by malicious parties. In the honest-majority setting, a fruitful line of work harnessed randomization and cryptography to achieve low-communication broadcast protocols with sub-quadratic total communication and with "balanced" sub-linear communication cost per party.

However, comparatively little is known in the dishonest-majority setting. Here, the most communication-efficient constructions are based on the protocol of Dolev and Strong (SICOMP '83), and sub-quadratic broadcast has not been achieved even using randomization and cryptography. On the other hand, the only nontrivial $\omega(n)$ communication lower bounds are restricted to deterministic protocols, or against strong adaptive adversaries that can perform "after the fact" removal of messages.

We provide new communication lower bounds in this space, which hold against arbitrary cryptography and setup assumptions, as well as a simple protocol showing near tightness of our first bound.

1) We demonstrate a tradeoff between resiliency and communication for randomized protocols secure against $n-o(n)$ static corruptions. For example, $\Omega(n\cdot {\sf polylog}(n))$ messages are needed when the number of honest parties is $n/{\sf polylog}(n)$; $\Omega(n\sqrt{n})$ messages are needed for $O(\sqrt{n})$ honest parties; and $\Omega(n^2)$ messages are needed for $O(1)$ honest parties.

Complementarily, we demonstrate broadcast with $O(n\cdot{\sf polylog}(n))$ total communication facing any constant fraction of static corruptions.

2) Our second bound considers $n/2 + k$ corruptions and a weakly adaptive adversary that cannot remove messages "after the fact." We show that any broadcast protocol within this setting can be attacked to force an arbitrary party to send messages to $k$ other parties. Our bound rules out, for example, broadcast facing $51\%$ corruptions, in which all non-sender parties have sublinear communication locality.

04 September 2023

Erkan Tairi, Pedro Moreno-Sanchez, Clara Schneidewind
ePrint Report
The scalability and interoperability challenges in current cryptocurrencies have motivated the design of cryptographic protocols that enable efficient applications on top and across widely used cryptocurrencies such as Bitcoin or Ethereum. Examples of such protocols include (virtual) payment channels, atomic swaps, oracle-based contracts, deterministic wallets, and coin mixing services. Many of these protocols are built upon minimal core functionalities supported by a wide range of cryptocurrencies. Most prominently, adaptor signatures (AS) have emerged as a powerful tool for constructing blockchain protocols that are (mostly) agnostic to the specific logic of the underlying cryptocurrency. Even though AS-based protocols are built upon the same cryptographic principles, there exists no modular and faithful way for reasoning about their security. Instead, all the works analyzing such protocols focus on reproving how adaptor signatures are used to cryptographically link transactions while considering highly simplified blockchain models that do not capture security-relevant aspects of transaction execution in blockchain-based consensus.

To address this, we present LedgerLocks, a framework for the secure design of AS-based blockchain applications in the presence of a realistic blockchain. LedgerLocks defines the concept of AS-locked transactions, transactions whose publication is bound to the knowledge of a cryptographic secret. We argue that AS-locked transactions are the common building block of AS-based blockchain protocols and we define $\mathcal{G}_{\mathsf{LedgerLocks}}$, a realistic ledger model in the Universal Composability framework with built-in support for AS-locked transactions. As LedgerLocks abstracts from the cryptographic realization of AS-locked transactions, it allows protocol designers to focus on the blockchain-specific security considerations instead.
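Since adaptor signatures are the primitive everything here builds on, a minimal Schnorr-style adaptor-signature sketch may help fix intuition (toy group parameters and my own illustrative code, following the standard construction from the literature rather than anything specific to LedgerLocks): a pre-signature is bound to a statement Y = g^y, completing it requires the witness y, and the completed signature reveals y to whoever holds the pre-signature.

import hashlib, random

p, q, g = 2039, 1019, 4            # toy Schnorr group: p = 2q + 1, g of order q

def H(*parts):
    data = "|".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = random.randrange(1, q); pk = pow(g, x, p)       # signer's key pair
y = random.randrange(1, q); Y = pow(g, y, p)        # witness and statement

def pre_sign(m):
    r = random.randrange(1, q)
    R = pow(g, r, p)
    c = H(R * Y % p, pk, m)
    return R, (r + c * x) % q                       # pre-signature (R, s_hat)

def adapt(s_hat, y):
    return (s_hat + y) % q                          # completed signature value

def verify(m, R_full, s):                           # ordinary Schnorr verification
    c = H(R_full, pk, m)
    return pow(g, s, p) == R_full * pow(pk, c, p) % p

m = "pay 1 coin if the secret y is revealed"
R, s_hat = pre_sign(m)
s = adapt(s_hat, y)
assert verify(m, R * Y % p, s)                      # adapted signature verifies
assert (s - s_hat) % q == y                         # and reveals the witness y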
Gregor Leander, Shahram Rasoolzadeh, Lukas Stennes
ePrint Report
HALFLOOP is a family of tweakable block ciphers that are used for encrypting automatic link establishment (ALE) messages in high-frequency radio, a technology commonly used by the military, other government agencies, and industries that require high robustness in long-distance communications. Recently, it was shown in [DDLS22] that the smallest version of the cipher, HALFLOOP-24, can be attacked within a practical time and memory complexity. However, in the real-world ALE setting, it turns out that this attack requires waiting more than 500 years to collect the necessary amount of plaintext-tweak-ciphertext pairs fulfilling the conditions of the attack. In this paper, we present real-world practical attacks against HALFLOOP-24 which are based on a probability-one differential distinguisher. In our attacks, we significantly reduce the data complexity to three differential pairs in the chosen-plaintext (CPA) setting, which is optimal in the sense that even a brute-force attack needs at least six plaintext-tweak-ciphertext pairs to uniquely identify the correct key. Considering the same ALE setting as [DDLS22], this translates to a reduction from 541 years to 2 hours worth of intercepted traffic. In addition, we provide the first non-generic public cryptanalysis of HALFLOOP-48 and HALFLOOP-96. More precisely, we present Demirci-Selçuk meet-in-the-middle attacks against full-round HALFLOOP-48 and round-reduced HALFLOOP-96 to recover the complete master key in a CPA setting. However, unlike the attacks on HALFLOOP-24, our attacks on the larger versions are only theoretical. Moreover, for HALFLOOP-96 the known generic time-memory trade-off attack, based on flawed tweak handling, remains the strongest attack vector. In conclusion, we reiterate what was already stated in [DDLS22]: HALFLOOP does not provide adequate protection and should not be used.
Sietse Ringers
ePrint Report
For $n = pq$ a product of two safe primes, we construct and prove security of a cryptographic hash function $H$ mapping into the quadratic residues $QR_n \subset (\mathbb{Z}/n\mathbb{Z})^*$, by squaring the output of an ordinary cryptographic hash function with sufficiently long output.
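The idea is compact enough to sketch directly (toy parameters of my own; the construction requires n to be a product of two safe primes of cryptographic size, which the small primes below are not): hash to a long integer, then square it modulo n, which lands in QR_n.

import hashlib

p, q = 999983, 1000003        # toy primes only; real parameters must be large safe primes
n = p * q

def hash_into_qrn(msg):
    h = int.from_bytes(hashlib.sha512(msg).digest(), "big")   # "sufficiently long" output
    return pow(h, 2, n)                                       # squaring maps into QR_n

y = hash_into_qrn(b"hello")
# Euler's criterion: y is a quadratic residue modulo both prime factors
assert pow(y, (p - 1) // 2, p) == 1 and pow(y, (q - 1) // 2, q) == 1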
Haiyang Xue, Man Ho Au, Mengling Liu, Kwan Yin Chan, Handong Cui, Xiang Xie, Tsz Hon Yuen, Chengru Zhang
ePrint Report
Threshold ECDSA has received much interest lately due to its widespread adoption in blockchain applications. A common building block of all leading constructions involves a secure conversion of multiplicative shares into additive ones, which is called the multiplicative-to-additive (MtA) function. MtA dominates the overall complexity of all existing threshold ECDSA constructions. Specifically, $O(n^2)$ invocations of MtA are required in the case of $n$ active signers. Hence, improvements to MtA lead directly to significant improvements for all state-of-the-art threshold ECDSA schemes.

In this paper, we design a novel MtA by revisiting the Joye-Libert (JL) cryptosystem. Specifically, we revisit JL encryption and propose a JL-based commitment, and then give efficient zero-knowledge proofs for the JL cryptosystem which are the first to have standard soundness. Our new MtA offers the best time-space complexity trade-off among all existing MtA constructions. It outperforms state-of-the-art constructions from Paillier by a factor of $1.85$ to $2$ in bandwidth and $1.2$ to $1.7$ in computation. It is $7\times$ faster than those based on Castagnos-Laguillaumie encryption at the cost of only $2\times$ more bandwidth. While our MtA is slower than OT-based constructions, it saves $18.7\times$ in bandwidth requirements. In addition, we also design a batch version of MtA to further reduce the amortised time and space cost by another $25$%.
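To fix ideas about what an MtA protocol computes, here is a toy multiplicative-to-additive conversion built on a miniature textbook Paillier instance (my own illustration with insecure parameter sizes; the paper's contribution is precisely a Joye-Libert-based replacement for such Paillier-style MtA): Alice holds a, Bob holds b, and they finish with additive shares alpha + beta = a*b mod Q without either party revealing its input.

import math, random

# miniature textbook Paillier (insecure parameter sizes, illustration only)
p, q = 104723, 104729
N, N2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, N)                       # valid because we use g = 1 + N

def enc(m):
    s = random.randrange(2, N)
    while math.gcd(s, N) != 1:
        s = random.randrange(2, N)
    return pow(1 + N, m, N2) * pow(s, N, N2) % N2

def dec(c):
    return (pow(c, lam, N2) - 1) // N * mu % N

def add(c1, c2): return c1 * c2 % N2       # homomorphic addition of plaintexts
def scal(c, k):  return pow(c, k, N2)      # homomorphic scaling by a constant

# MtA: multiplicative shares a, b  ->  additive shares alpha + beta = a*b mod Q
Q = 2**16
a, b = 31337, 4242                         # Alice's and Bob's multiplicative shares

c_a = enc(a)                               # Alice -> Bob: encryption of a under Alice's key
r = random.randrange(2**30)                # Bob's mask (kept small so nothing wraps mod N)
c_resp = add(scal(c_a, b), enc(r))         # Bob -> Alice: Enc(a*b + r)
beta = (-r) % Q                            # Bob's additive share
alpha = dec(c_resp) % Q                    # Alice's additive share

assert (alpha + beta) % Q == (a * b) % Q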
Debajyoti Das, Claudia Diaz, Aggelos Kiayias, Thomas Zacharias
ePrint Report
This work formally analyzes the anonymity guarantees of continuous stop-and-go mixnets. Existing mixnet-based anonymous communication protocols that aim to provide provable anonymity guarantees rely on round-based communication models, which require synchronization among all the nodes and clients and are difficult to achieve in practice. Continuous stop-and-go mixnets (e.g., Loopix and Nym) provide a nice alternative by adding a random delay to each message on every hop, independent of all other hops and all other messages. The core anonymization technique of continuous mixnets, combined with the fact that messages are sent by the clients to the mixnet at different times, makes it difficult to formally prove security for such mixnet protocols; all existing analyses of such designs provide only experimental evaluations of anonymity.

We are the first to close that gap and provide a formal analysis. We provide two indistinguishability-based definitions (of sender anonymity), namely pairwise unlinkability and user unlinkability, tuned specifically for continuous stop-and-go mixnets. We derive the adversarial advantage as a function of the protocol parameters for the two definitions. We show that there is a fundamental lower bound on the adversarial advantage $\delta$ for pairwise unlinkability; however, strong user unlinkability (negligible adversarial advantage) can be achieved if the users' message rate ($\lambda_u$) is proportional to the message processing rate ($\lambda$) of the nodes.
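The anonymization mechanism being analyzed is easy to state concretely: each mix node holds a message for an independently sampled random delay (exponentially distributed in Loopix-style designs), so messages can leave in a different order than they arrived. The few lines below (my own toy simulation with made-up rates) illustrate exactly the effect that the unlinkability definitions quantify.

import random
random.seed(1)

HOPS = 3          # number of mix nodes on the path
MU = 1.0          # rate of the per-hop exponential delay (mean delay 1/MU)

def exit_time(send_time):
    # send time plus the sum of independent exponential per-hop delays
    return send_time + sum(random.expovariate(MU) for _ in range(HOPS))

# Two messages entering 0.2 time units apart may leave in either order.
sends = {"m1": 0.0, "m2": 0.2}
exits = {m: exit_time(t) for m, t in sends.items()}
print(sorted(exits, key=exits.get))       # exit order need not match entry order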