International Association for Cryptologic Research


IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

06 November 2021

Leonie Reichert, Marcel Pazelt, Björn Scheuermann
ePrint Report
Many solutions have been proposed to improve manual contact tracing for infectious diseases through automation. Privacy is crucial for the deployment of such a system as it greatly influences adoption. Approaches for digital contact tracing like Google Apple Exposure Notification (GAEN) protect the privacy of users by decentralizing risk scoring. But GAEN leaks information about diagnosed users as ephemeral pseudonyms are broadcast to everyone. To combat deanonymisation based on the time of encounter while providing extensive risk scoring functionality, we propose to use a private set intersection (PSI) protocol based on garbled circuits. Using oblivious programmable pseudorandom function PSI (OPPRF-PSI), we implement our solution CERTAIN, which leaks no information to querying users other than one risk score for each of the last 14 days, representing their risk of infection. We implement payload inclusion for OPPRF-PSI and evaluate the efficiency and performance of different risk scoring mechanisms on an Android device.
Hao Chung, Elaine Shi
ePrint Report
In blockchains such as Bitcoin and Ethereum, users compete in a transaction fee auction to get their transactions confirmed in the next block. A line of recent works set forth the desiderata for a “dream” transaction fee mechanism (TFM), and explored whether such a mechanism existed. A dream TFM should satisfy 1) user incentive compatibility (UIC), i.e., truthful bidding should be a user’s dominant strategy; 2) miner incentive compatibility (MIC), i.e., the miner’s dominant strategy is to faithfully implement the prescribed mechanism; and 3) miner-user side contract proofness (SCP), i.e., no coalition of the miner and one or more user(s) can increase their joint utility by deviating from the honest behavior. The weakest form of SCP is called 1-SCP, where we only aim to provide resilience against the collusion of the miner and a single user. Sadly, despite the various attempts, to the best of our knowledge, no existing mechanism can satisfy all three properties in all situations.

Since the TFM departs from classical mechanism design in modeling and assumptions, to date, our understanding of the design space is relatively limited. In this paper, we further unravel the mathematical structure of transaction fee mechanism design by proving the following results:

- Can we have a dream TFM? We prove a new impossibility result: assuming finite block size, no single-parameter, non-trivial, possibly randomized TFM can simultaneously satisfy UIC and 1-SCP. Consequently, no non-trivial TFM can satisfy all three desired properties simultaneously. This answers an important open question raised by Roughgarden in his recent work.

- Rethinking the incentive compatibility notions. We observe that the prevalently adopted incentive compatibility notions may be too draconian and somewhat flawed. We rectify the existing modeling techniques, and suggest a relaxed incentive compatibility notion that captures additional hidden costs of strategic deviation. We construct a new mechanism called the “burning second-price auction”, and show that it indeed satisfies the new incentive compatibility notions. We additionally prove that the use of randomness is necessary under the new incentive compatibility notions for “useful” mechanisms that resist the coalitions of the miner and at least 2 users.

- Do the new design elements make a difference? Unlike classical mechanisms, TFMs may employ a couple of new design elements that are idiosyncratic to blockchains. For example, a burn rule (employed by Ethereum’s EIP-1559) allows part or all of the payment from the users to be burnt rather than paid to the miner. Some mechanisms also allow unconfirmed transactions to be included in the block, to set the price for others. Our work unveils how these new design elements actually make a difference in TFM design, allowing us to achieve incentive compatibility properties that would otherwise be impossible.
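To make the burn rule concrete, here is a toy second-price-style allocation in which all payments are burnt. This is our own simplified illustration of the burn-rule concept; the function name and rules below are hypothetical and do not reproduce the paper's "burning second-price auction", whose definition appears only in the full paper.

```python
def burning_second_price_toy(bids, block_size):
    """Toy sketch of a second-price-style transaction fee mechanism
    with a burn rule. This is a simplification for illustration,
    NOT the paper's burning second-price auction: the top `block_size`
    bids are confirmed, each confirmed bidder pays the highest excluded
    bid, and every payment is burnt rather than paid to the miner
    (as with the base fee in Ethereum's EIP-1559)."""
    ranked = sorted(bids, reverse=True)
    confirmed = ranked[:block_size]
    # The price is set by the first excluded bid (0 if nothing is excluded).
    price = ranked[block_size] if len(ranked) > block_size else 0
    burnt = price * len(confirmed)
    miner_revenue = 0  # all payments are burnt in this toy rule
    return confirmed, price, burnt, miner_revenue
```

Because the price is independent of any confirmed bidder's own bid and the miner collects nothing, a rule of this shape blunts both overbidding and miner-user collusion incentives, which is the intuition the burn rule is meant to capture.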
Akihiro Mizutani, Yuki Takeuchi, Ryo Hiromasa, Yusuke Aikawa, Seiichiro Tani
ePrint Report
In their seminal paper [Metger and Vidick, Quantum ’21], the authors proposed a computational self-testing protocol for Bell states in a single quantum device. Their protocol relies on the fact that the target states are stabilizer states, and hence it is highly non-trivial to reveal whether another class of quantum states, non-stabilizer states, can be self-tested within their framework. Among non-stabilizer states, magic states are indispensable resources for universal quantum computation. In this letter, we show that a magic state for the CCZ gate can be self-tested while that for the T gate cannot. Our result is applicable to a proof of quantumness, where we can classically verify whether a quantum device generates a quantum state having non-zero magic.
Anisha Mukherjee, Saibal K. Pal
ePrint Report
Entropic quasigroups, or entropoids, provide an attractive option for the development of post-quantum cryptographic schemes. We elaborate on the mathematical properties of entropoids with modifications in the initial operation. The starting entropic quasigroups obtained by this process can be applied to generate higher-order structures suitable for cryptography. We also propose an encryption/decryption scheme analogous to the ElGamal scheme with quasigroup string transformations in the entropoid setting. We then enumerate important properties that are beneficial for cryptographic use, together with algorithms for their verification.
Charanjit Jutla, Sikhar Patranabis
ePrint Report
The Oblivious Cross-Tags (OXT) protocol due to Cash et al. (CRYPTO'13) is a highly scalable searchable symmetric encryption (SSE) scheme that allows fast processing of conjunctive and more general Boolean queries over encrypted relational databases. A longstanding open question has been to extend OXT to also support queries over joins of tables without pre-computing the joins. In this paper, we solve this open question without compromising on the nice properties of OXT with respect to both security and efficiency. We propose Join Cross-Tags (JXT) - a purely symmetric-key solution that supports efficient conjunctive queries over (equi) joins of encrypted tables without any pre-computation at setup. JXT is fully compatible with OXT, and can be used in conjunction with OXT to support a wide class of SQL queries directly over encrypted relational databases. We prove the (adaptive) simulation-based security of JXT with respect to a rigorously defined leakage profile.
Saikrishna Badrinarayanan, Rex Fernando, Amit Sahai
ePrint Report
Very recently, two works were able to construct two-round secure multi-party computation (MPC) protocols in the plain model, without setup, relying on the superpolynomial simulation framework of Pass [Pas03]. The first work [ABG+21] achieves this relying on subexponential non-interactive witness indistinguishable arguments, the subexponential SXDH assumption, and the existence of a special type of non-interactive non-malleable commitment. The second work [FJK21] additionally achieves concurrent security, and relies on subexponential quantum hardness of the learning-with-errors (LWE) problem, subexponential classical hardness of SXDH, the existence of a subexponentially-secure (classically-hard) indistinguishability obfuscation (iO) scheme, and time-lock puzzles.

This paper focuses on the assumptions necessary to construct secure computation protocols in two rounds without setup, focusing on the subcase of two-party functionalities. In this particular case, we show how to build a two-round, concurrent-secure, two-party computation (2PC) protocol based on a single, standard, post-quantum assumption, namely subexponential hardness of the learning-with-errors (LWE) problem.

We note that our protocol is the first two-round concurrent-secure 2PC protocol that does not require the existence of a one-round non-malleable commitment (NMC). Instead, we are able to use the two-round NMCs of [KS17a], which are instantiable from subexponential LWE.
Chun Guo, Tetsu Iwata, Kazuhiko Minematsu
ePrint Report
MDPH is a double-block-length hash function proposed by Naito at Latincrypt 2019. This is a combination of Hirose's compression function and the domain extender called Merkle-Damgård with permutation (MDP). When instantiated with an $n$-bit block cipher, Naito proved that this achieves the (nearly) optimal indifferentiable security bound of $O(n-\log n)$-bit security. In this paper, we first point out that the proof of the claim contains a gap, which is related to the definition of the simulator in simulating the decryption of the block cipher. We then show that the proof can be fixed. We introduce a new simulator that addresses the issue, showing that MDPH retains its (nearly) optimal indifferentiable security bound of $O(n-\log n)$-bit security.
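The MDP domain extender itself can be sketched as plain Merkle-Damgård with one twist: a fixed permutation is applied to the chaining value just before the final compression call. In the sketch below the compression function, permutation, and padding are toy placeholders of our own (in particular, this is not Hirose's double-block-length compression function, so the result is not MDPH):

```python
import hashlib

def toy_compress(h, m):
    # Placeholder compression function for illustration only; MDPH
    # actually uses Hirose's double-block-length construction.
    return hashlib.sha256(h + m).digest()

def toy_pi(h):
    # Placeholder fixed permutation on the chaining value (a byte rotation).
    return h[1:] + h[:1]

def mdp_hash(message, block_len=32):
    """Sketch of the Merkle-Damgard with permutation (MDP) domain
    extender: identical to plain Merkle-Damgard, except that a fixed
    permutation is applied to the chaining value just before the final
    compression call. Padding here is a minimal 0x80 / zero-fill."""
    padded = message + b"\x80"
    padded += b"\x00" * (-len(padded) % block_len)
    blocks = [padded[i:i + block_len] for i in range(0, len(padded), block_len)]
    h = b"\x00" * 32  # initial chaining value
    for block in blocks[:-1]:
        h = toy_compress(h, block)
    return toy_compress(toy_pi(h), blocks[-1])
```

The permutation before the last call is what distinguishes the final compression from the internal ones, which is the structural property the indifferentiability simulator has to handle.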
Quentin L. Meunier, Etienne Pons, Karine Heydemann
ePrint Report
Side-channel attacks are a powerful class of attacks targeting cryptographic devices. Masking is a popular protection technique to thwart such attacks as it can be theoretically proven secure. However, correctly implementing masking schemes is a non-trivial and error-prone task. While several techniques have been proposed to formally verify masked implementations, they all come with limitations regarding expressiveness, scalability or accuracy. In this work, we propose a symbolic approach, based on a variant of the classical substitution method, for formally verifying arithmetic and boolean masked programs. This approach is more accurate and scalable than existing approaches thanks to a careful design and implementation of key heuristics, algorithms and data structures involved in the verification process. We present all the details of this approach and the open-source tool called LeakageVerif which implements it as a Python library, and which offers constructions for symbolic expressions and functions for their verification. We compare LeakageVerif to three existing state-of-the-art tools on a set of 46 masked programs, and we show that it has very good scalability and accuracy results while providing all the necessary constructs for describing masking schemes from the algorithmic level down to assembly. Finally, we also provide the set of 46 benchmarks, named MaskedVerifBenchs and written for comparing the different verification tools, in the hope that they will be useful to the community for future comparisons.
Yuval Ishai, Dakshita Khurana, Amit Sahai, Akshayaram Srinivasan
ePrint Report
We consider the question of minimizing the round complexity of secure multiparty computation (MPC) protocols that make a black-box use of simple cryptographic primitives in the setting of security against any number of malicious parties. In the plain model, previous black-box protocols required a high constant number of rounds (>15). This is far from the known lower bound of 4 rounds for protocols with black-box simulators.

When allowing a random oblivious transfer (OT) correlation setup, 2-round protocols making a black-box use of a pseudorandom generator were previously known. However, such protocols were obtained via a round-collapsing ``protocol garbling'' technique that has poor concrete efficiency and makes a non-black-box use of an underlying malicious-secure protocol.

We improve this state of affairs by presenting the following types of black-box protocols.

- 4-round ``pairwise MPC'' in the plain model.

This round-optimal protocol enables each ordered pair of parties to compute a function of both inputs whose output is delivered to the second party. The protocol makes black-box use of any public-key encryption (PKE) with pseudorandom public keys. As a special case, we get a black-box round-optimal realization of secure (copies of) OT between every ordered pair of parties.

- 2-round MPC from OT correlations.

This round-optimal protocol makes a black-box use of any general 2-round MPC protocol satisfying an augmented notion of {\em semi-honest} security. In the two-party case, this yields new kinds of 2-round black-box protocols.

- 5-round MPC in the plain model.

This protocol makes a black-box use of PKE with pseudorandom public keys, and 2-round oblivious transfer with ``semi-malicious'' security.

A key technical tool for the first result is a novel combination of split-state non-malleable codes (Dziembowski, Pietrzak and Wichs, JACM '18) with standalone secure two-party protocols. The second result is based on a new round-optimized variant of the ``IPS compiler'' (Ishai, Prabhakaran and Sahai, Crypto '08). The third result is obtained via a specialized combination of these two techniques.
V. Ustimenko
ePrint Report
Time dependent linguistic graphs over an abelian group H are introduced. In the case $H=K^*$, such a bipartite graph with point set $P=H^n$ can be used for the generation of an Eulerian transformation of $(K^*)^n$, i.e., an endomorphism of $K[x_1, x_2, \ldots, x_n]$ sending each variable to a monomial term. Subsemigroups of such endomorphisms together with their special homomorphic images are used as platforms for cryptographic protocols of noncommutative cryptography. The security of these protocols is evaluated via the complexity of the hard problem of decomposing an Eulerian transformation into a product of known generators of the semigroup. The problem is currently believed to be intractable in the post-quantum setting. A symbiotic combination of such protocols with special graph-based stream ciphers working with a plaintext space of the kind $K^m$, where $m=n^t$ for an arbitrarily chosen parameter $t$, is proposed. In this way we obtain a cryptosystem with encryption/decryption procedures of complexity $O(m^{1+2/t})$.
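The kind of map the abstract describes, an endomorphism sending each variable to a monomial term, can be evaluated at a point of $(K^*)^n$ as follows. This is a generic sketch of such a monomial endomorphism (all names are our own); it does not reproduce the paper's graph-based generation procedure:

```python
def apply_monomial_endomorphism(point, coeffs, exps, p):
    """Evaluate an endomorphism of K[x_1, ..., x_n] that sends each
    variable x_i to the monomial coeffs[i] * prod_j x_j**exps[i][j],
    at a point of (K*)^n with K = GF(p). This only illustrates the
    kind of Eulerian-style transformation described in the abstract,
    not how the paper generates such maps from linguistic graphs."""
    n = len(point)
    out = []
    for i in range(n):
        v = coeffs[i] % p
        for j in range(n):
            # Modular exponentiation keeps values inside GF(p).
            v = (v * pow(point[j], exps[i][j], p)) % p
        out.append(v)
    return out
```

Composing two such maps multiplies the coefficient data and multiplies the exponent matrices, which is why subsemigroups of these endomorphisms form a workable protocol platform.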
Mahimna Kelkar, Soubhik Deb, Sishan Long, Ari Juels, Sreeram Kannan
ePrint Report
We introduce Themis, a scheme for introducing fair ordering of transactions into (permissioned) Byzantine consensus protocols with at most $f$ faulty nodes among $n \geq 4f +1$. Themis is the first such scheme to achieve (optimistic) linear communication complexity. At the same time, it enforces the strongest notion of fair ordering proposed to date. Themis also achieves standard liveness, rather than the weaker notion of previous work.

We show experimentally that Themis can be integrated into state-of-the-art consensus protocols with minimal modification or performance overhead. Additionally, we introduce a suite of experiments of general interest for evaluating the practical strength of various notions of fair ordering and the resilience of fair-ordering protocols to adversarial manipulation. We use this suite of experiments to show that the notion of fair ordering enforced by Themis is significantly stronger in theory and for realistic workloads than those of competing systems.

We believe Themis offers strong practical protection against many types of transaction-ordering attacks---such as front-running and back-running---that are currently impacting commonly used smart contract systems.
Omid Etesami, Ji Gao, Saeed Mahloujifar, Mohammad Mahmoody
ePrint Report
Consider an $n$-message coin-tossing protocol between $n$ parties $P_1,\dots,P_n$, in which $P_i$ broadcasts a single message $w_i$ in round $i$ (possibly based on the previously shared messages) and at the end they agree on a bit $b$. A $k$-replacing adversary $A_k$ can change up to $k$ of the messages as follows. In every round $i$, the adversary, who knows all the messages broadcast so far as well as the message $w_i$ that is prepared by $P_i$ to be sent, can choose to replace the prepared message $w_i$ with its own choice. A targeted adversary prefers the outcome $b'=1$, and its bias is defined as $\mu'-\mu$, where $\mu'=\Pr[b'=1]$ (resp. $\mu=\Pr[b=1]$) refers to the probability of outputting $1$ when the attack happens (resp. does not happen). In this work, we study $k$-replacing targeted attacks, their computational efficiency, and optimality, for all $k \in [n]$.

Large messages: When the messages are allowed to be arbitrarily long, we show that polynomial-time $k$-replacing targeted attacks can achieve bias $\Omega(\mu k/\sqrt n)$ for any $k$ (and any protocol), which is optimal up to a constant factor for any $\mu = \Theta(1)$. Previously, it was known how to achieve such bias only for $k = \Omega(\sqrt n)$ (Komargodski-Raz [DISC'18], Mahloujifar-Mahmoody [ALT'19], and Etesami-Mahloujifar-Mahmoody [SODA'20]). This proves a computational variant of the isoperimetric inequality for product spaces under $k=o(\sqrt n)$ Hamming distance. As a corollary, we also obtain improved $poly(n)$-time targeted poisoning attacks on deterministic learners, in which the adversary can increase the probability of any efficiently testable bad event over the produced model from $\mu=1/poly(n)$ to $\mu + \Omega(\mu k /\sqrt n)$ by changing $k$ out of $n$ training examples.

Binary messages: When the messages $w_1,\dots,w_n$ are uniformly random bits, we show that if $\mu=\Pr[b=1]= \Pr[\sum w_i \geq t] = \beta^{(t)}_n$ for $t \in [n]$ is the probability of falling into a Hamming ball, then polynomial-time $k$-replacing targeted attacks can achieve $\mu'=\Pr[b'=1]=\beta^{(t-k)}_n $, which is optimal due to the simple majority protocol. Thus, as a corollary we obtain an alternative proof of Harper's celebrated vertex isoperimetric inequality in which the optimal adversary (that maps random points to a set of measure $\mu$ by changing at most $k$ bits) is limited to be online and run in polynomial time. Previously, Lichtenstein, Linial, and Saks [Combinatorica'89] showed how to achieve $\mu'=\Pr[b'=1] = \beta^{(t-k)}_{ n-k }$ (using computationally unbounded attacks), which is optimal for adaptive adversaries who decide on corrupting parties before seeing their messages.
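The binary-message case can be checked directly for small parameters. The sketch below follows the abstract's setting for the threshold (majority-style) protocol: the online adversary replaces up to $k$ of the broadcast zero-bits with ones, and the attacked probability of outputting $1$ equals $\beta^{(t-k)}_n$. Function names are our own.

```python
import math

def beta(t, n):
    # beta^(t)_n: probability that n uniform bits sum to at least t
    # (the measure of a Hamming-ball threshold event).
    return sum(math.comb(n, s) for s in range(t, n + 1)) / 2 ** n

def attacked_outcome(bits, t, k):
    """Toy k-replacing targeted attack on the threshold-t protocol:
    scanning the rounds in order, the adversary replaces up to k
    prepared zero-bits with ones, preferring outcome b' = 1."""
    budget = k
    total = 0
    for b in bits:
        if b == 0 and budget > 0:
            b, budget = 1, budget - 1  # replace this prepared message
        total += b
    return 1 if total >= t else 0
```

Enumerating all bit strings for small $n$ confirms $\mu' = \beta^{(t-k)}_n$: the attack succeeds exactly when the honest sum is at least $t-k$.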
Brett Hemenway Falk, Daniel Noble, Rafail Ostrovsky
ePrint Report
Distributed Oblivious RAM (DORAM) protocols allow a group of participants to obliviously access a secret-shared array at a secret-shared index, and DORAM is the key tool for secure multiparty computation (MPC) in the RAM model. In this work, we present a novel 3-party semi-honest DORAM protocol with O((κ + D) log N) communication per access, where N is the size of the memory, κ is a security parameter and D is the block size. Our protocol performs polylogarithmic computation and does not require homomorphic encryption. Under natural parameter choices, this is the most communication-efficient DORAM with these properties. To build this DORAM protocol, we first present an extremely efficient oblivious data structure for answering set membership queries. From this we build an oblivious hash table with asymptotically optimal memory usage and access cost and with negligible failure probability. We believe these are of independent interest.
Pavel Atnashev, George Woltman
ePrint Report
Factorization methods like P−1, P+1, and ECM have a stage which deals with primes of the form $a \pm b$, where $a + b$ and $a − b$ are processed by a single operation. Selecting $a$ and $b$ such that both $a + b$ and $a − b$ are prime is called "prime pairing" and can significantly improve the performance of the stage. This paper introduces new methods of pairing, which in some cases find pairs for up to 99.9% of primes in a range. A practical algorithm and its implementations are presented.
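The pairing idea can be sketched in a few lines: for a fixed center $a$, every offset $b$ with both $a-b$ and $a+b$ prime lets one stage operation cover two primes at once. This sketch only illustrates the idea (names are our own); the paper's new pairing methods and practical algorithm are more involved:

```python
def is_prime(n):
    # Simple trial division, adequate for this toy example.
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def paired_primes(a, b_limit):
    """Toy sketch of prime pairing: for a fixed center a, find offsets b
    with 0 < b <= b_limit such that a - b and a + b are both prime, so a
    single stage operation can process the pair (a - b, a + b) at once."""
    return [b for b in range(1, b_limit + 1)
            if is_prime(a - b) and is_prime(a + b)]
```

Every matched pair halves the work for those two primes, so the fraction of primes that can be paired (up to 99.9% with the paper's methods) directly translates into stage speedup.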
Aikata, Ahmet Can Mert, David Jacquemin, Amitabh Das, Donald Matthews, Santosh Ghosh, Sujoy Sinha Roy
ePrint Report
In this paper, we propose a compact, unified and instruction-set cryptoprocessor architecture for performing both lattice-based digital signature and key exchange operations. As a case study, the cryptoprocessor architecture has been optimized targeting the signature scheme 'Crystals-Dilithium' and the key encapsulation mechanism 'Saber', both finalists in the NIST’s post-quantum cryptography standardization project. The implementation is entirely in hardware and leverages algorithmic as well as structural synergies in the two schemes to realize a high-speed unified post-quantum key-exchange and digital signature engine within a compact area. The area consumption of the entire cryptoprocessor architecture is 18,040 LUTs, 9,101 flip-flops, 4 DSP units, and 14.5 BRAMs on the Xilinx Zynq Ultrascale+ ZCU102 FPGA. The FPGA implementation of the cryptoprocessor achieving 200 MHz clock frequency finishes the CCA-secure key generation, encapsulation, and decapsulation operations for Saber in 54.9, 72.5 and 94.7 $\mu$s, respectively. For Dilithium-II, the key generation, signature generation, and signature verification operations take 78.0, 164.8 and 88.5 $\mu$s, respectively, for the best-case scenario where a valid signature is generated after the first loop iteration. The cryptoprocessor is also synthesized for ASIC with the UMC 65nm library. It achieves 370 MHz clock frequency and consumes 0.301 mm$^2$ area ($\approx$200.6 kGE) excluding on-chip memory. The ASIC implementation can perform the key generation, encapsulation, and decapsulation operations for Saber in 29.6, 39.2, and 51.2 $\mu$s, respectively, while it can perform the key generation, signature generation, and signature verification operations for Dilithium-II in 42.2, 89.1, and 47.8 $\mu$s, respectively.
Itai Dinur, Nathan Keller, Ohad Klein
ePrint Report
An average-case variant of the $k$-SUM conjecture asserts that finding $k$ numbers that sum to 0 in a list of $r$ random numbers, each of the order $r^k$, cannot be done in much less than $r^{\lceil k/2 \rceil}$ time. On the other hand, in the dense regime of parameters, where the list contains more numbers and many solutions exist, the complexity of finding one of them can be significantly improved by Wagner's $k$-tree algorithm. Such algorithms for $k$-SUM in the dense regime have many applications, notably in cryptanalysis.

In this paper, assuming the average-case $k$-SUM conjecture, we prove that known algorithms are essentially optimal for $k= 3,4,5$. For $k>5$, we prove the optimality of the $k$-tree algorithm for a limited range of parameters. We also prove similar results for $k$-XOR, where the sum is replaced with exclusive or.

Our results are obtained by a self-reduction that, given an instance of $k$-SUM which has a few solutions, produces from it many instances in the dense regime. We solve each of these instances using the dense $k$-SUM oracle, and hope that a solution to a dense instance also solves the original problem. We deal with potentially malicious oracles (that repeatedly output correlated useless solutions) by an obfuscation process that adds noise to the dense instances. Using discrete Fourier analysis, we show that the obfuscation eliminates correlations among the oracle's solutions, even though its inputs are highly correlated.
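For reference, Wagner's $k$-tree algorithm mentioned above can be sketched for $k=4$ in the XOR setting: merge the first two and last two lists on the low bits, then look for a full collision between the merged lists. This sketch (with our own names and toy parameters) shows the tree structure only; the parameter choices for the dense regime are what the paper's reduction exploits.

```python
def four_xor_tree(lists, low_bits):
    """Toy sketch of Wagner's k-tree algorithm for k = 4 (4-XOR):
    merge lists 0,1 and lists 2,3 on the low `low_bits` bits, then
    look for a full collision between the two merged lists. Returns
    one quadruple (a, b, c, d) with a ^ b ^ c ^ d == 0, or None."""
    low_mask = (1 << low_bits) - 1

    def merge(xs, ys):
        # Pair up values that agree on the low bits, so the partial
        # XOR x ^ y vanishes there.
        by_low = {}
        for y in ys:
            by_low.setdefault(y & low_mask, []).append(y)
        return [(x ^ y, x, y)
                for x in xs
                for y in by_low.get(x & low_mask, [])]

    right = {s: (c, d) for s, c, d in merge(lists[2], lists[3])}
    for s, a, b in merge(lists[0], lists[1]):
        if s in right:
            c, d = right[s]
            return a, b, c, d
    return None
```

Filtering on the low bits keeps the intermediate lists small, which is the source of the algorithm's speedup over the naive meet-in-the-middle approach in the dense regime.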
Jeonghyuk Lee, Jaekyung Choi, Hyunok Oh, Jihye Kim
ePrint Report
Recently, a self-sovereign identity model has been researched actively as an alternative to the existing identity models such as a centralized identity model, federated identity model, and user-centric model. The self-sovereign identity model allows a user to have complete control of his identity. Meanwhile, the core component of the self-sovereign identity model is data minimization. Data minimization signifies that the extent of the exposure of user private identity should be minimized. As a solution to data minimization, zero-knowledge proofs can be grafted to the self-sovereign identity model. Specifically, zero-knowledge Succinct Non-interactive ARguments of Knowledge (zk-SNARKs) enable proving the truth of a statement on an arbitrary relation. In this paper, we propose a privacy-preserving self-sovereign identity model based on zk-SNARKs to allow any type of data minimization beyond selective disclosure and range proofs. The security of the proposed model is formally proven under the security of the zero-knowledge proof and the unforgeability of the signature in the random oracle model. Furthermore, we optimize the proving time by checking the correctness of the commitment outside of the proof relation for practical use. The resulting scheme improves proving time for hash computation (to verify a commitment input) from 0.5 s to about 0.1 ms on a 32-bit input.
Valentin Vasseur
ePrint Report
The aim of this document is to clarify the DFR (Decoding Failure Rate) claims made for BIKE, a third round alternate candidate KEM (Key Encapsulation Mechanism) to the NIST call for post-quantum cryptography standardization. For the most part, the material presented here is not new, it is extracted from the relevant scientific literature, in particular [V21].

Even though a negligible DFR is not needed for a KEM using ephemeral keys (e.g. TLS) which only requires IND-CPA security, it seems that IND-CCA security, relevant for reusable/static keys, has become a requirement. Therefore, a negligible DFR is needed both for the security reduction [FO99, HHK17] and to thwart existing attacks [GJS16].

Proving a DFR lower than $2^{-\lambda}$ where $\lambda$ is the security parameter (e.g. $\lambda=128$ or $256$) is hardly possible with mere simulation. Instead a methodology based on modelization, simulation, and extrapolation with confidence estimate was devised [V21]. Models are backed up by theoretical results [T18,SV19], but do not account for some combinatorial properties of the underlying error correcting code. Those combinatorial properties give rise to what is known in telecommunication as "error floors" [R03].

The statistical modeling predicts a fast decrease of the DFR as the block size grows, the waterfall region, whereas the combinatorial properties, weak keys [DGK19] or near-codewords [V21], predict a slower decrease, the error floor region. The issue here is to show that the error floor occurs in a region where the DFR is already below the security requirement. This would validate the extrapolation approach, and, as far as we can say, this appears to be the case for the QC-MDPC codes corresponding to BIKE parameters.

The impact of the QC-MDPC code combinatorial properties on decoding, as reported in this document, is better and better understood. In particular, it strongly relates with the spectrum of low weight vectors, as defined in [GJS16]. At this point, none of the results we are aware of and which are presented here contradict in any way the DFR claims made for BIKE. Admittedly those claims remain heuristic in part, but could be understood as an additional assumption, just like the computational assumptions made for all similar primitives, under which the BIKE scheme is IND-CCA secure.
Karthikeyan Bhargavan, Abhishek Bichhawat, Quoc Huy Do, Pedram Hosseyni, Ralf Kuesters, Guido Schmitz, Tim Wuertele
ePrint Report
The ACME certificate issuance and management protocol, standardized as IETF RFC 8555, is an essential element of the web public key infrastructure (PKI). It has been used by Let’s Encrypt and other certification authorities to issue over a billion certificates, and a majority of HTTPS connections are now secured with certificates issued through ACME. Despite its importance, however, the security of ACME has not been studied at the same level of depth as other protocol standards like TLS 1.3 or OAuth. Prior formal analyses of ACME only considered the cryptographic core of early draft versions of ACME, ignoring many security-critical low-level details that play a major role in the 100 page RFC, such as recursive data structures, long-running sessions with asynchronous sub-protocols, and the issuance of certificates that cover multiple domains.

We present the first in-depth formal security analysis of the ACME standard. Our model of ACME is executable and comprehensive, with a level of detail that lets our ACME client interoperate with other ACME servers. We prove the security of this model using a recent symbolic protocol analysis framework called DY*, which in turn is based on the F* programming language. Our analysis accounts for all prior attacks on ACME in the literature, including both cryptographic attacks and low-level attacks on stateful protocol execution. To analyze ACME, we extend DY* with authenticated channels, key substitution attacks, and a concrete execution framework, which are of independent interest. Our security analysis of ACME, totaling over 16,000 lines of code, is one of the largest proof developments for a cryptographic protocol standard in the literature, and it serves to provide formal security assurances for a crucial component of web security.
Joël Alwen, Dominik Hartmann, Eike Kiltz, Marta Mularczyk
ePrint Report
Continuous Group Key Agreement (CGKA) -- or Group Ratcheting -- lies at the heart of a new generation of End-to-End (E2E) secure group messaging (SGM) and VoIP protocols supporting very large groups. Yet even for these E2E protocols the primary constraint limiting practical group sizes continues to be their communication complexity. To date, the most important (and only deployed) CGKA is ITK which underpins the IETF's upcoming Messaging Layer Security SGM standard.

In this work, we introduce server-aided CGKA (saCGKA) to more precisely model how E2E protocols are usually deployed. saCGKA makes explicit the presence of an (untrusted) server mediating communication between honest parties (as opposed to mere insecure channels of some form or another). Next, we provide a simple and intuitive security model for saCGKA. We modify ITK accordingly to obtain SAIK; a practically efficient and easy to implement saCGKA designed to leverage the server to obtain greatly reduced communication and computational complexity (e.g. relative to ITK). Under the hood, SAIK uses a new type of signature called Reducible Signatures, which we construct from so-called Weighted Accumulators. SAIK obtains further advantages by using Multi-Recipient Multi-Message PKE. Finally, we provide empirical data comparing the communication complexity for senders, receivers and the server in ITK vs. three saCGKAs including two instantiations of SAIK.