IACR News
Here you can see all recent updates to the IACR webpage.
20 May 2021
Bhavana Kanukurthi, Sai Lakshmi Bhavana Obbattu, Sruthi Sekar, Jenit Tomy
ePrint Report
Non-malleable secret sharing (NMSS) schemes, introduced by Goyal and Kumar (STOC 2018), ensure that a secret $m$ can be distributed into shares $m_1,\ldots,m_n$ (for some $n$), such that any $t$ shares (for a parameter $t \le n$) can be used to reconstruct the secret $m$, any $t-1$ shares leak no information about $m$, and even if the shares used for reconstruction are tampered with, the reconstruction is guaranteed to yield either the original $m$ or something independent of $m$. Since their introduction, non-malleable secret sharing schemes have sparked a very impressive line of research.
In this work, we introduce a feature of local reconstructability in NMSS, which allows reconstruction of any portion of a secret by reading just a few locations of the shares. This is a useful feature, especially when the secret is long or when the shares are stored in a distributed manner on a communication network. We give a compiler that takes in any non-malleable secret sharing scheme and compiles it into a locally reconstructable non-malleable secret sharing scheme. To secret share a message consisting of $k$ blocks of length $l$ each, our scheme requires reading only $l + \log k$ bits (plus a few more bits, whose number is independent of $l$ and $k$) from each party's share (of a reconstruction set) to locally reconstruct a single block of the message.
We show an application of our locally reconstructable non-malleable secret sharing scheme to a computational non-malleable secure message transmission scheme in the pre-processing model, with an improved communication complexity, when transmitting multiple messages.
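For context, the baseline threshold reconstruction property that NMSS strengthens can be sketched with a plain Shamir scheme over a prime field — a toy illustration only, not the paper's compiler, and with neither non-malleability nor local reconstruction:

```python
import random

P = 2**61 - 1  # a Mersenne prime, large enough for toy secrets

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # den is invertible mod the prime P (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any $t-1$ shares are consistent with every possible secret, which is what the non-malleability and locality guarantees of the paper build on top of.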
Lingyue Qin, Xiaoyang Dong, Xiaoyun Wang, Keting Jia, Yunwen Liu
ePrint Report
Automatic modelling to search for distinguishers with high probability covering as many rounds as possible, e.g., via MILP, SAT/SMT, or CP models, has become a very popular cryptanalysis topic today. In those models, the optimization objective is usually the probability or the number of rounds of the distinguisher. To recover the secret key of a round-reduced block cipher, there are usually two phases, i.e., finding an efficient distinguisher and performing a key-recovery attack by extending several rounds before and after the distinguisher. The total number of attacked rounds depends not only on the chosen distinguisher, but also on the rounds extended before and after it. In this paper, we try to combine the two phases into a unified automatic model.
Concretely, we apply this idea to automate the related-key rectangle attacks on SKINNY and ForkSkinny. We propose some new distinguishers that are better suited to key-recovery attacks. Our key-recovery attacks on a few versions of round-reduced SKINNY and ForkSkinny cover 1 to 2 more rounds than the best previous attacks.
Morten Øygarden, Daniel Smith-Tone, Javier Verbel
ePrint Report
The multivariate scheme HFEv- used to be considered a promising candidate for a post-quantum signature system. First
suggested in the early 2000s, a version of the scheme made it to the third round of the ongoing NIST post-quantum standardization process. In late 2020, the system suffered from an efficient rank attack due to Tao, Petzoldt, and Ding. In this paper, we inspect how this recent rank attack is affected by the projection modification. This modification was introduced to secure the signature scheme PFLASH against its predecessor's attacks. We prove upper bounds for the rank of projected HFEv- (pHFEv-) and PFLASH under the new attack, which are tight for the experiments we have performed. We conclude that projection could be a useful tool in protecting against this recent cryptanalysis.
Carlo Brunetta, Georgia Tsaloli, Bei Liang, Gustavo Banegas, Aikaterini Mitrokotsa
ePrint Report
We propose a novel primitive called NIVA that allows the distributed aggregation of multiple users' secret inputs by multiple untrusted servers.
The returned aggregation result can be publicly verified in a non-interactive way, i.e. the users are not required to participate in the aggregation except for providing their secret inputs.
NIVA allows the secure computation of the sum of a large amount of users' data and can be employed, for example, in the federated learning setting in order to aggregate the model updates for a deep neural network.
We implement NIVA and evaluate its communication and execution performance, comparing it with the current state of the art, i.e., the protocol of Segal et al. (CCS 2017) and the VerifyNet protocol of Xu et al. (IEEE TIFS 2020), resulting in lower per-user communication and
Behzad Abdolmaleki, Hamidreza Khoshakhlagh, Helger Lipmaa
ePrint Report
We define smooth zero-knowledge hash functions (SZKHFs) as smooth projective hash functions (SPHFs) for which the completeness holds even when the language parameter lpar and the projection key HP were maliciously generated.
We prove that blackbox SZKHF in the plain model is impossible even if lpar was honestly generated. We then define SZKHF in the registered public key (RPK) model, where both lpar and HP are possibly maliciously generated but accepted by an RPK server, and show that the CRS-model trapdoor SPHFs of Benhamouda et al. are also secure in the weaker RPK model. Then, we define and instantiate subversion-zero knowledge SZKHF in the plain model. In this case, both lpar and HP are completely untrusted, but one uses non-blackbox techniques in the security proof.
Arsalan Javeed, Cemal Yilmaz, Erkay Savas
ePrint Report
In this work, we present a novel approach, called Detector+, to detect, isolate, and prevent timing-based side channel attacks (i.e., timing attacks) at runtime. The proposed approach is based on a simple observation: the time measurements required by timing attacks differ from those made by benign applications, as these attacks need to measure the execution times of typically quite short-running operations. Detector+, therefore, monitors the time readings made by processes and marks consecutive pairs of readings that are close to each other in time as suspicious. In the presence of suspicious time measurements, Detector+ introduces noise into the measurements to prevent the attacker from extracting information from them. The sequence of suspicious time measurements is then analyzed using a sliding-window-based approach to pinpoint the malicious processes at runtime. We have empirically evaluated the proposed approach using five well-known timing attacks, including Meltdown, together with their variations, representing some of the mechanisms an attacker can employ to become stealthier. In one evaluation setup, each type of attack was carried out concurrently by multiple processes. In the other setup, multiple types of attacks were carried out concurrently. In all the experiments, Detector+ detected all the malicious time measurements with almost perfect accuracy, prevented all the attacks, and correctly pinpointed all the malicious processes involved (after each had made only a few time measurements) without any false positives, at an average runtime overhead of 1.56%.
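The core detection idea — flag consecutive time readings that are suspiciously close, then decide per process over a sliding window — can be sketched as follows. The thresholds are illustrative placeholders, not the paper's calibrated values:

```python
from collections import deque

SUSPICIOUS_GAP_NS = 5_000  # gap below which a pair of readings is suspicious (illustrative)
WINDOW = 100               # sliding-window length (illustrative)
ALARM_FRACTION = 0.5       # fraction of suspicious pairs that triggers an alarm

class TimingMonitor:
    """Toy per-process monitor in the spirit of Detector+: flag pairs of
    time readings that are close together, and raise an alarm when a
    sliding window contains too many suspicious pairs."""

    def __init__(self):
        self.last_reading = None
        self.window = deque(maxlen=WINDOW)

    def observe(self, t_ns):
        """Record one time reading; return whether this pair was suspicious."""
        suspicious = (self.last_reading is not None
                      and t_ns - self.last_reading < SUSPICIOUS_GAP_NS)
        self.last_reading = t_ns
        self.window.append(suspicious)
        return suspicious

    def is_malicious(self):
        """Alarm once the window is full and mostly suspicious."""
        return (len(self.window) == self.window.maxlen and
                sum(self.window) / len(self.window) >= ALARM_FRACTION)
```

A benign process making occasional, widely spaced time calls never fills the window with suspicious pairs, while an attacker timing short-running operations does so quickly.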
Collin Chin, Howard Wu, Raymond Chu, Alessandro Coglio, Eric McCarthy, Eric Smith
ePrint Report
Decentralized ledgers that support rich applications suffer from three limitations. First, applications are provisioned tiny execution environments with limited running time, minimal stack size, and restrictive instruction sets. Second, applications must reveal their state transition, enabling miner frontrunning attacks and consensus instability. Third, applications offer weak guarantees of correctness and safety.
We design, implement, and evaluate Leo, a new programming language designed for formally verified, zero-knowledge applications. Leo provisions a powerful execution environment that is not restricted in running time, stack size, or instruction sets. Besides offering application privacy and mitigating miner-extractable value (MEV), Leo achieves two fundamental properties. First, applications are formally verified with respect to their high-level specification. Second, applications can be succinctly verified by anyone, regardless of the size of the application.
Leo is the first known programming language to introduce a testing framework, package registry, import resolver, remote compiler, formally defined language, and theorem prover for general-purpose, zero-knowledge applications.
Gilles Barthe, Benjamin Gregoire, Vincent Laporte, Swarn Priya
ePrint Report
Many security properties of interest are captured by instrumented semantics that model the functional behavior and the leakage of programs. For several important properties, including cryptographic constant-time (CCT), leakage models are sufficiently abstract that one can define instrumented semantics for high-level and low-level programs. One important goal is then to relate the leakage of source programs to the leakage of their compilation---this can be used, e.g., to prove preservation of CCT.
To simplify this task, we put forward the idea of structured leakage. In contrast to the usual modeling of leakage as a sequence of observations, structured leakage is tightly coupled with the operational semantics of programs. This coupling greatly simplifies the definition of leakage transformers that map the leakage of source programs to the leakage of their compilation, and it yields more precise statements about the preservation of security properties. We illustrate our methods on the Jasmin compiler and prove preservation results for two policies of interest: CCT and cost.
Aurélien Dupin, Pierrick Méaux, Mélissa Rossi
ePrint Report
Goldreich's pseudorandom generator is a well-known building block for many theoretical cryptographic constructions from multi-party computation to indistinguishability obfuscation. Its unique efficiency comes from the use of random local functions: each bit of the output is computed by applying some fixed public n-variable Boolean function f to a random public size-n tuple of distinct input bits.
Which characteristics a Boolean function f must have to ensure pseudorandomness is a puzzling issue. It has been studied in several works, particularly by Applebaum and Lovett (STOC 2016), who showed that resiliency and algebraic immunity are key parameters for this purpose.
In this paper, we propose the first study on Boolean functions that simultaneously reach maximal algebraic immunity and high resiliency.
1) We assess the possible consequences of the asymptotic existence of such optimal functions. We show how they allow building functions that reach all possible algebraic immunity-resiliency trade-offs (respecting the algebraic immunity and Siegenthaler bounds).
We provide a new bound on the minimal number of variables n, and thus on the minimal locality, necessary to ensure a secure Goldreich pseudorandom generator. Our results come with a granularity level depending on the strength of our assumptions, from none to the conjectured asymptotic existence of optimal functions.
2) We extensively analyze the possible existence and the properties of such optimal functions. As a first step, we naturally focus on existing families of Boolean functions that are known to be optimal with respect to their algebraic immunity, starting with the promising XOR-MAJ functions.
Interestingly, we were able to show that these families do not reach optimality with respect to their resiliency, and they could be beaten by optimal functions if our conjecture is verified. Thus, one needs to look in another direction to construct optimal functions. We introduce necessary and sufficient conditions for the construction of optimal functions. Finally, we prove the existence of optimal functions in a low number of variables by experimentally exhibiting some of them in up to 12 variables. This directly provides better candidates for Goldreich's pseudorandom generator than the existing XOR-MAJ candidates for polynomial stretches from 2 to 6.
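The structure of such a local-function generator — each output bit applies a fixed public predicate to a fixed public subset of the seed bits — can be sketched with an XOR-MAJ predicate. This is a toy illustration only; the actual choice of parameters and subsets is what the security analysis is about:

```python
import random

def goldreich_prg(seed_bits, f, n, out_len, rng_seed=0):
    """Toy Goldreich-style PRG: each output bit applies the public local
    function f to a fixed, public n-subset of distinct seed positions."""
    rng = random.Random(rng_seed)  # public randomness fixing the subsets
    subsets = [rng.sample(range(len(seed_bits)), n) for _ in range(out_len)]
    return [f([seed_bits[i] for i in s]) for s in subsets]

def xor_maj(bits):
    """XOR-MAJ predicate: XOR of the first half of the inputs,
    XORed with the majority of the remaining inputs."""
    half = len(bits) // 2
    xor_part = 0
    for b in bits[:half]:
        xor_part ^= b
    rest = bits[half:]
    maj_part = 1 if 2 * sum(rest) > len(rest) else 0
    return xor_part ^ maj_part
```

The subsets are public and fixed; only the seed is secret, so the generator's security rests entirely on the properties (resiliency, algebraic immunity) of the predicate f.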
Mustafa Khairallah
ePrint Report
COFB is a lightweight authenticated encryption (AE) mode based on block ciphers, proposed at CHES 2017, and is the basis for GIFT-COFB, a finalist in the NIST lightweight standardization project. It comes with provable security results that guarantee its security up to the birthday bound in the nonce-respecting model. However, the designers offer multiple versions of this analysis with different details, and the implications of attacks against the scheme are not discussed deeply. In this article, we look at different possible attacks against COFB-like designs, targeting both forgery and confidentiality. We show that the security for both forgery and confidentiality is bounded by the number of forgery attempts. In particular, we show the existence of forgery and confidentiality attacks with success probability $q_f/2^{n/2}$, given $q_f$ forgery attempts; both forgery and confidentiality can be broken with $2^{n/2}$ attempts using only a single known-plaintext encryption query.
While these attacks do not contradict the claims made by the GIFT-COFB designers, they show its limitations in terms of the number of forgery attempts. They also show that while GIFT-COFB generates a 128-bit tag, it behaves in a very similar manner to an AE scheme with a 64-bit tag. As an independent result, our analysis contradicts the main theorem of Journal of Cryptology, volume 33, pages 703--741 (2020), which includes an improved security proof of COFB compared to the CHES 2017 version. Finally, we discuss the term $nq_f/2^{n/2}$ that appears in the security proofs of GIFT-COFB and the CHES 2017 version, showing why this term is unlikely to be tight and why $q_f/2^{n/2}$ is likely sufficient. We emphasize that the results in this article do not threaten the security of GIFT-COFB within the scope of the NIST lightweight cryptography requirements or the claims made by the designers in the specification of the design.
Ripon Patgiri
ePrint Report
RSA is an asymmetric cryptographic protocol, and it faces diverse issues; recent research works suggest that RSA security may already be broken. AES, the most widely used symmetric-key cryptographic protocol, also faces issues, and the literature suggests concerns about cryptanalytic attacks. AES requires a shared secret key, and the most famous key exchange protocol is Diffie-Hellman; however, it is subject to number field sieve discrete log attacks, and recent research suggests that Diffie-Hellman is less secure than widely perceived. Moreover, the Logjam attack enables a man-in-the-middle attack on Diffie-Hellman. We therefore combine the RSA, AES, and Diffie-Hellman algorithms to secure the key exchange protocol, in a scheme called privateDH. Our key objective is to provide security for the Diffie-Hellman algorithm. privateDH does not share the key-exchange data publicly with the intended party; instead, it encrypts all shareable data during the key exchange using the AES algorithm, and it uses the RSA algorithm to retrieve the public key and avoid a man-in-the-middle attack. Thus, we demonstrate how to protect the Diffie-Hellman algorithm against various kinds of attacks.
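For context, the unauthenticated Diffie-Hellman exchange that privateDH aims to harden looks as follows — a textbook sketch over a toy group, not the privateDH construction itself (which additionally encrypts the exchanged values with AES and uses RSA for public-key retrieval):

```python
import hashlib
import secrets

# Textbook finite-field Diffie-Hellman over a toy Mersenne-prime group.
# privateDH's point is that A and B below would additionally be AES-encrypted;
# that wrapping is not shown here.
P = 2**127 - 1  # a Mersenne prime (toy size; real deployments use larger groups)
G = 3

a = secrets.randbelow(P - 2) + 1  # Alice's ephemeral secret exponent
b = secrets.randbelow(P - 2) + 1  # Bob's ephemeral secret exponent
A = pow(G, a, P)                  # value Alice sends (privateDH would encrypt it)
B = pow(G, b, P)                  # value Bob sends

# Both sides derive the same AES-ready key from the shared group element,
# since (G^b)^a = (G^a)^b mod P.
k_alice = hashlib.sha256(pow(B, a, P).to_bytes(16, "big")).digest()
k_bob = hashlib.sha256(pow(A, b, P).to_bytes(16, "big")).digest()
```

A and B are exactly the values a man-in-the-middle can substitute in plain Diffie-Hellman, which is the gap privateDH tries to close by never sending them in the clear.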
Cihangir Tezcan
ePrint Report
Graphics processing units (GPUs) are specially designed for parallel applications and perform parallel operations much faster than central processing units (CPUs). In this work, we focus on the performance of the Advanced Encryption Standard (AES) on GPUs. We present optimizations which remove bank conflicts in shared memory accesses and provide 878.6 Gbps throughput for AES-128 encryption on an RTX 2070 Super, which is equivalent to 4.1 Gbps per Watt. Our optimizations provide more than a 2.56x speed-up over the best GPU results in the literature. Our optimized AES implementations on GPUs even outperform any CPU using the hardware-level AES New Instructions (AES-NI), as well as legacy FPGA-based cluster architectures like COPACOBANA and RIVYERA. Even on a low-end GPU like the MX 250, we obtained 60.0 Gbps throughput for AES-256, which is generally faster than the read/write speeds of solid-state disks. Thus, a transition from AES-128 to AES-256 when using GPUs would provide military-grade security with no visible performance loss. With these breakthrough performances, GPUs can be used as cryptographic co-processors for file or full-disk encryption to remove the performance loss coming from CPU encryption. With a single GPU as a co-processor, busy SSL servers can be freed from the burden of encryption and use their whole CPU power for other operations. Moreover, these optimizations can help GPUs practically verify theoretically obtained cryptanalysis results, or their reduced versions, in reasonable time.
Alex May, Floyd Zweydinger
ePrint Report
Due to its amazing speed and multiplicative properties, the Legendre PRF has recently found widespread application, e.g., in Ethereum 2.0, multiparty computation, and the quantum-secure signature proposal LegRoast. However, its security has not yet been extensively studied.
The Legendre PRF computes, for a key $k$ on input $x$, the Legendre symbol $L_k(x) = \left( \frac {x+k} {p} \right)$ in some finite field $\mathbb{F}_p$. As is standard for PRF security, the attacker is given oracle access to $L_k(\cdot)$. Khovratovich's collision-based algorithm recovers $k$ using $L_k(\cdot)$ in time $\sqrt{p}$ with constant memory. It is a major open problem whether this birthday-bound complexity can be beaten.
We show a somewhat surprising wide-ranging analogy between the discrete logarithm problem and Legendre symbol computations. This analogy allows us to adapt various algorithmic ideas from the discrete logarithm setting.
More precisely, we present a small memory multiple-key attack on $m$ Legendre keys $k_1, \ldots, k_m$ in time $\sqrt{mp}$, i.e. with amortized cost $\sqrt{p/m}$ per key. This multiple-key attack might be of interest in the Ethereum context, since recovering many keys simultaneously maximizes an attacker's profit.
Moreover, we show that the Legendre PRF admits precomputation attacks, where the precomputation depends on the public $p$ only -- and not on a key $k$. Namely, an attacker may compute e.g. in precomputation time $p^{\frac 2 3}$ a hint of size $p^{\frac 1 3}$. On receiving access to $L_k(\cdot)$ in an online phase, the attacker then uses the hint to recover the desired key $k$ in time only $p^{\frac 1 3}$. Thus, the attacker's online complexity again beats the birthday-bound.
In addition, our precomputation attack can also be combined with our multiple-key attack. We explicitly give various tradeoffs between precomputation and online phase. E.g. for attacking $m$ keys one may spend time $mp^{\frac 2 3}$ in the precomputation phase for constructing a hint of size $m^2 p^{\frac 1 3}$. In an online phase, one then finds {\em all $m$ keys in total time} only $p^{\frac 1 3}$.
Precomputation attacks might again be interesting in the Ethereum 2.0 context, where keys are frequently changed such that a heavy key-independent precomputation pays off.
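The PRF itself is simple to state in code via Euler's criterion, which computes the Legendre symbol as a modular exponentiation — a direct sketch of the keyed Legendre symbol over a toy prime:

```python
def legendre_prf(k, x, p):
    """Legendre PRF bit: map the Legendre symbol of (x + k) mod p to {0, 1}.
    Euler's criterion: (a/p) = a^((p-1)/2) mod p, which is 1 for a quadratic
    residue, p-1 for a non-residue, and 0 when p divides a."""
    a = (x + k) % p
    ls = pow(a, (p - 1) // 2, p)
    return 0 if ls <= 1 else 1  # QR (or zero) -> 0, non-residue -> 1
```

Each oracle query reveals one pseudorandom bit of the keystream; the attacks above recover $k$ from such bits faster than exhaustive search.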
Christopher Battarbee, Delaram Kahrobaei, Siamak F. Shahandashti
ePrint Report
It was recently demonstrated that the Matrix Action Key Exchange (MAKE) algorithm, a new type of key exchange protocol using the semidirect product of matrix groups, is vulnerable to a linear algebraic attack if the matrices are over a commutative ring. In this note, we establish conditions under which protocols using matrices over a non-commutative ring are also vulnerable to this attack. We then demonstrate that group rings $R[G]$ used in a similar key exchange protocol, where $R$ is a commutative ring and $G$ is a non-abelian group, are examples of non-commutative rings that satisfy these conditions.
Virtual event, Anywhere on Earth, 10 November - 11 November 2021
Event Calendar
Event date: 10 November to 11 November 2021
Submission deadline: 14 June 2021
Notification: 9 July 2021
Virtual event, Anywhere on Earth, 6 September - 9 September 2021
Event Calendar
Event date: 6 September to 9 September 2021
Submission deadline: 30 May 2021
Notification: 30 June 2021
17 May 2021
Virtual event, Anywhere on Earth, 17 September 2021
Event Calendar
Event date: 17 September 2021
Submission deadline: 19 July 2021
Telecom Paris - Institut Polytechnique de Paris
Job Posting
A permanent position is open at Telecom Paris, for an Assistant or Associate Professor in the Theory of Quantum Information and Computation.
The successful candidate will contribute to the activities of the IQA group, at Telecom Paris, Institut Polytechnique de Paris and to the Quantum@Paris-Saclay center.
We are looking for candidates who have a strong research record and a commitment to undergraduate and graduate education and training, and who have demonstrated their ability to carry out and develop a research activity combining computer science and quantum technologies.
Areas of expertise may include, but are not limited to, quantum algorithms and complexity, quantum control, quantum communications and cryptography, and quantum information processing.
Closing date for applications:
Contact: romain.alleaume@telecom-paris.fr
More information: https://institutminestelecom.recruitee.com/l/en/o/maitre-de-conferences-en-theorie-de-linformation-quantique-et-du-calcul-quantique-fh-a-telecom-paris-cdi
Muhammad ElSheikh, Amr M. Youssef
ePrint Report
With the introduction of the division trail, the bit-based division property (BDP) has become the most efficient method to search for integral distinguishers. The notion of the division trail allows us to automate the search process by modelling the propagation of the BDP as a set of constraints that can be solved using generic mixed-integer linear programming (MILP) and SMT/SAT solvers. The current models for the basic operations and Sboxes are efficient and accurate. In contrast, the two approaches to modelling the propagation of the BDP through a non-bit-permutation linear layer are either inaccurate or inefficient. The first approach relies on decomposing the matrix multiplication of the linear layer into COPY and XOR operations. The model obtained by this approach is efficient in terms of the number of constraints, but it is not accurate and might add invalid division trails to the search space, which might lead to missing the balanced property of some bits. The second approach employs a one-to-one map between the valid division trails through the primitive matrix representing the linear layer and its invertible sub-matrices. Although the current model obtained by this approach is accurate, it is inefficient, i.e., it produces a large number of constraints for large linear layers like that of Kuznyechik. In this paper, we address this problem by utilizing the one-to-one map to propose a new MILP model and a search procedure for large non-bit-permutation linear layers. As proof of the effectiveness of our approach, we improve the previous 3- and 4-round integral distinguishers of Kuznyechik and the 4-round one of PHOTON's internal permutation ($P_{288}$). We also report, for the first time, a 4-round integral distinguisher for the Kalyna block cipher and a 5-round integral distinguisher for PHOTON's internal permutation ($P_{288}$).
Nihal Vatandas, Rosario Gennaro, Bertrand Ithurburn, Hugo Krawczyk
ePrint Report
Offline deniability is the ability to a-posteriori deny having participated in a particular communication session. This property has been widely assumed for the Signal messaging application, yet no formal proof has appeared in the literature. In this paper, we present what we believe is the first formal study of the offline deniability of the Signal protocol. Our analysis shows that building a deniability proof for Signal is non-trivial and requires strong assumptions on the underlying mathematical groups where the protocol is run.
To do so, we study various *implicitly authenticated* key exchange protocols including MQV, HMQV and 3DH/X3DH, the latter being the core key agreement protocol in Signal. We first present examples of mathematical groups where running MQV results in a provably non-deniable interaction. While the concrete attack applies only to MQV, it also exemplifies the problems in attempting to prove the deniability of other implicitly authenticated protocols, such as 3DH. In particular, it shows that the intuition that the minimal transcript produced by these protocols suffices for ensuring deniability does not hold. We then provide a characterization of the groups where deniability holds, defined in terms of a knowledge assumption that extends the Knowledge of Exponent Assumption (KEA).
We conclude the paper by showing two additional positive results. The first is a general theorem that links the deniability of a communication session to the deniability of the key agreement protocol starting the session. This allows us to extend our results on the deniability of 3DH/X3DH to the entire Signal communication session.