International Association for Cryptologic Research


IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

27 January 2021

Daniel Heinz, Thomas Pöppelmann
ePrint Report
The progress on constructing quantum computers and the ongoing standardization of post-quantum cryptography (PQC) have led to the development and refinement of promising new digital signature schemes and key encapsulation mechanisms (KEM). Lattice-based schemes in particular have gained some popularity in the research community, presumably due to acceptable key, ciphertext, and signature sizes as well as good performance results and cryptographic strength. However, in some practical applications like smart cards, it is also crucial to secure cryptographic implementations against side-channel and fault attacks. In this work, we analyze the so-called redundant number representation (RNR) that can be used to counter side-channel attacks. We show how to avoid security issues with the RNR due to unexpected de-randomization, apply it to the Kyber KEM, and show that the RNR incurs very low overhead. We then verify the RNR methodology by practical experiments, using the non-specific t-test methodology and the ChipWhisperer platform. Furthermore, we present a novel countermeasure against fault attacks based on the Chinese remainder theorem (CRT). On an ARM Cortex-M4, our implementation of the RNR and fault countermeasure offers better performance than masking and redundant calculation. Our methods thus have the potential to expand the toolbox of a defender implementing lattice-based cryptography with protection against two common physical attacks.
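To illustrate the basic RNR idea in isolation (a minimal sketch with an illustrative redundancy factor, not the paper's protected Kyber implementation): each coefficient modulo $q$ is carried as a randomized representative modulo a multiple of $q$, so intermediate values vary from run to run while results stay correct modulo $q$.

```python
# Conceptual sketch of a redundant number representation (RNR) for Z_q arithmetic:
# x is stored as x + j*q mod (k*q) for a fresh random j, so intermediate values are
# randomized while the result reduced mod q stays correct. Illustration only.
import random

Q = 3329                # Kyber modulus
K = 16                  # redundancy factor (illustrative choice)

def to_rnr(x):
    return (x + random.randrange(K) * Q) % (K * Q)

def from_rnr(xr):
    return xr % Q

a, b, c = 1234, 2718, 301
ar, br, cr = to_rnr(a), to_rnr(b), to_rnr(c)
# multiply-accumulate entirely on randomized representatives, reducing mod K*Q
res = (ar * br + cr) % (K * Q)
assert from_rnr(res) == (a * b + c) % Q
```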
Sourav Das, Vinith Krishnan, Irene Miriam Isaac, Ling Ren
ePrint Report
Having shared access to high-quality random numbers is essential in many important applications. Yet, existing constructions of distributed random beacons still have limitations such as imperfect security guarantees, strong setup or network assumptions, or high costs. In this paper, we present SPURT, an efficient distributed randomness beacon protocol that does not require any trusted or expensive setup and is secure against a malicious adversary that controls up to one-third of the nodes in a partially synchronous network. We formally prove that each output of SPURT is unpredictable, bias-resistant, and publicly verifiable. SPURT has an amortized total communication cost of O(\lambda n^2) per beacon output in the fault-free case and O(\lambda n^2\log n + n^3) in the worst case. We implement SPURT and evaluate it using a network of up to 128 nodes running in geographically distributed AWS instances. Our evaluation shows that SPURT has practical computation and bandwidth costs and can produce beacon outputs every second for a network of 64 nodes, and every 3 seconds for a network of 128 nodes.
Melissa Chase, Esha Ghosh, Saeed Mahloujifar
ePrint Report
A major concern in training and releasing machine learning models is to what extent the model contains sensitive information that the data holders do not want to reveal. Property inference attacks consider an adversary who has access to the trained model and tries to extract some global statistics of the training data. In this work, we study property inference in scenarios where the adversary can maliciously control part of the training data (poisoning data) with the goal of increasing the leakage.

Previous work on poisoning attacks focused on trying to decrease the accuracy of models either on the whole population or on specific sub-populations or instances. Here, for the first time, we study poisoning attacks where the goal of the adversary is to increase the information leakage of the model. Our findings suggest that poisoning attacks can boost the information leakage significantly and should be considered as a stronger threat model in sensitive applications where some of the data sources may be malicious.

We first describe our property inference poisoning attack that allows the adversary to learn the prevalence in the training data of any property it chooses: it chooses the property to attack, then submits input data according to a poisoned distribution, and finally uses black box queries (label-only queries) on the trained model to determine the frequency of the chosen property. We theoretically prove that our attack can always succeed as long as the learning algorithm used has good generalization properties.

We then verify the effectiveness of our attack by experimentally evaluating it on two datasets: a Census dataset and the Enron email dataset. In the first case we show that classifiers that recognize whether an individual has high income (Census data) also leak information about the race and gender ratios of the underlying dataset. In the second case, we show that classifiers trained to detect spam emails (Enron data) can also reveal the fraction of emails which show negative sentiment (according to a sentiment analysis algorithm); note that the sentiment is not a feature in the training dataset, but rather a feature that the adversary chooses and that can be derived from the existing features (in this case, the words). Finally, we add an additional feature to each dataset that is chosen at random, independent of the other features, and show that the classifiers can also be made to leak statistics about this feature; this shows that the attack can target features completely uncorrelated with the original training task. We were able to achieve above $90\%$ attack accuracy with $9$-$10\%$ poisoning in all of these experiments.
Lukas Kölsch, Björn Kriepke, Gohar Kyureghyan
ePrint Report
We consider image sets of $d$-uniform maps of finite fields. We present a lower bound on the image size of such maps and study their preimage distribution, by extending methods used for planar maps. We apply the results to study $d$-uniform Dembowski-Ostrom polynomials. Further, we focus on a particularly interesting case of APN maps on binary fields $\mathbb{F}_{2^n}$. For these maps our lower bound coincides with previous bounds. We show that APN maps fulfilling the lower bound have a very special preimage distribution. We observe that for an even $n$ the image sets of several well-studied families of APN maps are minimal. In particular, for $n$ even, a Dembowski-Ostrom polynomial of form $f(x) =f'(x^3)$ is APN if and only if $f$ is almost-3-to-1, that is, when its image set is minimal. Also, any almost-3-to-1 component-wise plateaued map is necessarily APN, if $n$ is even. For $n$ odd, we believe that the lower bound is not sharp. For $n$ odd, we present APN Dembowski-Ostrom polynomials of form $f'(x^3)$ on $\mathbb{F}_{2^n}$ with image sizes $2^{n-1}$ and $5\cdot 2^{n-3}$.

We present results connecting the image sets of special APN maps with their Walsh spectrum. Notably, we show that a large class of APN maps has the classical Walsh spectrum. Finally, we present upper bounds on the image size of APN maps. In particular, we show that the image set of a non-bijective almost bent map contains at most $2^n-2^{(n-1)/2}$ elements.
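As a quick numerical sanity check of the even-$n$ picture above (a self-contained sketch with an illustrative reduction polynomial, not code from the paper): over $\mathbb{F}_{2^6}$ the Gold map $x \mapsto x^3$ is APN and almost-3-to-1, so its image has the minimal size $1 + (2^6-1)/3 = 22$.

```python
# Sanity check over F_{2^6} with reduction polynomial x^6 + x + 1 (illustrative choice).
N, MOD = 6, 0b1000011

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << N):
            a ^= MOD
        b >>= 1
    return r

def cube(x):
    return gf_mul(gf_mul(x, x), x)

field = range(1 << N)
image = {cube(x) for x in field}
print("image size:", len(image))                       # 22 = 1 + (2^6 - 1)/3

# differential uniformity: max over a != 0, b of #{x : f(x + a) + f(x) = b} (addition is XOR)
du = max(
    max(sum(cube(x ^ a) ^ cube(x) == b for x in field) for b in field)
    for a in range(1, 1 << N)
)
print("differential uniformity:", du)                  # 2, i.e. the map is APN
```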
Mridul Nandi
ePrint Report
The prefix-free security of a cascade function based on a $c$-bit compression function $f$ is reduced to the $q$-query PRF security of $f$ with a tightness gap $\ell q$ where $q$ represents the maximum number of queries to the cascade and $\ell$ represents the length of the longest query. A two-stage proof for this reduction was first given by Bellare et al. in FOCS-96 for an adaptive distinguisher, and later a similar two-stage reduction was proved in CRYPTO-14 by Gazi et al. for a non-adaptive distinguisher.

In this paper we prove a direct single-stage reduction with a tightness gap of $\sigma$ (the total length of all queries). This is an improvement over existing reductions whenever the lengths of queries vary widely. In the case of non-adaptive prefix-free security, we also show a reduction proof which reduces the PRF advantage of the cascade to two terms -- (i) a $q$-query PRF security of $f$ with a tightness gap of $q$ (without a factor of $\ell$) and (ii) a single-query PRF security of $f$ with a tightness gap of $\sigma$. We further extend this to a finer reduction with multiple terms over different limits on the queries to $f$. All these reductions can be easily extended to a multiuser setup. In particular, we reduce the multiuser prefix-free PRF security of the cascade to a single-user $q_{\max}$-query PRF security of $f$ with a tightness gap $\overline{\sigma}$ (the total length of all queries to all users), where $q_{\max}$ is the maximum number of queries allowed to any user. We show similar improved bounds (with respect to query complexity) for non-adaptive multiuser PRF security. In addition to immediate applications to the multiuser security of HMAC and NMAC, our improved analysis has the following useful applications:

1. We show that the multiuser non-adaptive PRF security of the cascade does not degrade even if $f$ only provides a weaker, non-adaptive PRF security guarantee.

2. The PRF security of single-keyed NMAC and Envelope MAC can be reduced to the non-adaptive multiuser prefix-free PRF security of the cascade construction and hence all improved reductions are applicable to these constructions. As a result, the constants ipad and opad used in HMAC are redundant. Moreover, the existing PRB assumption on $f$ can be replaced by a simple regular property for the constant-free HMAC.
Kelong Cong, Daniele Cozzo, Varun Maram, Nigel P. Smart
ePrint Report
Standard hybrid encryption schemes based on the KEM-DEM framework are hard to implement efficiently in a distributed manner whilst maintaining the CCA security property of the scheme. This is because the DEM needs to be decrypted under the key encapsulated by the KEM, before the whole ciphertext is declared valid. In this paper we present a new variant of the KEM-DEM framework, closely related to Tag-KEMs, which sidesteps this issue. We then present a post-quantum KEM for this framework based on Learning-with-Rounding, which is designed specifically to have fast distributed decryption. Our combined construction of a hybrid encryption scheme with Learning-with-Rounding based KEM, called Gladius, is closely related to the NIST Round 3 candidate called Saber. Finally, we give a prototype distributed implementation that achieves a decapsulation time of 4.99 seconds for three parties.
Easwar Vivek Mangipudi, Donghang Lu, Aniket Kate
ePrint Report
Timed-release encryption (TRE) is a prominent distributed way for sending messages to the future. Beyond its applications to e-voting and auctions, TRE can be easily generalized to a threshold information escrow (TIE) service, where a user can encrypt her message to any condition instead of just expiration of time as in TRE. Nevertheless, TRE, and by extension TIE, realized using threshold escrow agents is vulnerable to premature, selective, and undetectable unlocking of messages through collusion among curious agents offering the service. This work presents a novel provably secure TIE scheme where any collusion attempt among the escrow agents offering the service towards premature decryption results in penalization through a loss of cryptocurrency and getting banned from the system. The proposed collusion-deterrent escrow (CDE) scheme introduces a novel incentive-penalty mechanism using a user-induced information asymmetry among the agents such that they stay honest until the user-specified condition for decryption is met. In particular, each agent makes an escrow deposit before the start of the protocol such that the cryptocurrency deposit amount is transferred back to the agent when the condition specified by the user is met or can be transferred by anyone who holds the secret key corresponding to the public key of the protocol instance. CDE offers information escrow as a service and ensures that whenever the agents collude to decrypt the user data before the condition is met, there would be at least one whistle-blower agent who can withdraw/transfer the deposits of all other agents, thereby penalizing them. We analyse the CDE protocol, model collusion as a game induced among rational agents offering the service, and show in game-theoretic terms that the agents do not collude at equilibrium. We also present a prototype implementation of the CDE protocol and demonstrate its efficiency towards use in practice. We find this work to be an important step towards weakening the strong non-collusion assumptions across multi-party computation applications.
Sivanarayana Gaddam, Atul Luykx, Rohit Sinha, Gaven Watson
ePrint Report
Credit and debit-card payments are typically authenticated with PINs. Once entered into a terminal, the PIN is sent as an encrypted \emph{PIN block} across a payments network to the destination bank, which decrypts and verifies the PIN block. Each node in the payments network routes the PIN block to the next node by decrypting the block with its own key, and then re-encrypting the PIN block with the next node's key; nodes establish shared secret keys with their neighbors to do so. This decrypt-then-encrypt operation over PIN blocks is known as \emph{PIN translation}, and it is currently performed in Hardware Security Modules (HSMs) to avoid possible PIN exposure. However, HSMs incur heavy acquisition and operational expenses. Introduced at EUROCRYPT'98, proxy re-encryption (PRE) is a cryptographic primitive which can re-encrypt without exposing sensitive data. We perform an extensive study of PRE as applied to PIN translation, and show through formalization, security analysis, and an implementation study that PRE is a practical alternative to HSMs. With PRE, we eliminate the need for HSMs during re-encryption of a PIN, thus greatly reducing the number of HSMs needed by each participant in the payments ecosystem. Along the way we conduct practice-oriented PRE research, with novel theoretical contributions to resolve issues in comparing so-called honest re-encryption to chosen-ciphertext PRE security, and a new efficient PRE scheme achieving a type of chosen-ciphertext security.
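To make the proxy re-encryption primitive concrete, here is a toy sketch in the spirit of the ElGamal-based scheme from EUROCRYPT'98 referenced above (tiny toy parameters, insecure, and not the new scheme proposed in this paper):

```python
# Toy ElGamal-style proxy re-encryption (Blaze-Bleumer-Strauss flavor). The proxy
# translates a ciphertext from key a to key b without ever seeing the plaintext PIN.
# Toy group parameters for illustration only; requires Python 3.8+ for pow(x, -1, q).
import random

p, q, g = 23, 11, 4          # p = 2q + 1; g generates the order-q subgroup mod p

def keygen():
    a = random.randrange(1, q)
    return a, pow(g, a, p)               # (secret, public) key

def encrypt(m, pk):                      # m must be a subgroup element
    r = random.randrange(1, q)
    return (m * pow(g, r, p) % p, pow(pk, r, p))    # (m*g^r, g^{a r})

def decrypt(ct, sk):
    c1, c2 = ct
    gr = pow(c2, pow(sk, -1, q), p)      # (g^{a r})^{1/a} = g^r
    return c1 * pow(gr, p - 2, p) % p    # m = c1 / g^r

def rekey(sk_a, sk_b):
    return sk_b * pow(sk_a, -1, q) % q   # rk = b / a (mod q); bidirectional in this toy

def reencrypt(ct, rk):                   # proxy step: no plaintext is exposed
    c1, c2 = ct
    return (c1, pow(c2, rk, p))          # g^{a r} -> g^{b r}

a, pk_a = keygen()
b, pk_b = keygen()
m = pow(g, 7, p)                         # toy "PIN block" encoded in the subgroup
ct_a = encrypt(m, pk_a)
ct_b = reencrypt(ct_a, rekey(a, b))      # translated without decrypting
assert decrypt(ct_b, b) == m == decrypt(ct_a, a)
```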
Evgenios M. Kornaropoulos, Charalampos Papamanthou, Roberto Tamassia
ePrint Report
Despite a growing body of work on leakage-abuse attacks for encrypted databases, attacks on practical response-hiding constructions are yet to appear. Response-hiding constructions are superior in that they nullify access-pattern based attacks by revealing only the search token and the result size of each query. Response-hiding schemes are vulnerable to existing volume attacks, which are, however, based on strong assumptions such as the uniform query assumption or the dense database assumption. More crucially, these attacks only apply to schemes that cannot be deployed in practice (ones with quadratic storage and increased leakage) while practical response-hiding schemes (Demertzis et al. [SIGMOD’16] and Faber et al. [ESORICS’15]) have linear storage and less leakage. Due to these shortcomings, the value of existing volume attacks on response-hiding schemes is unclear.

In this work, we close the aforementioned gap by introducing a parametrized leakage-abuse attack that applies to practical response-hiding structured encryption schemes. The use of non-parametric estimation techniques makes our attack agnostic to both the data and the query distribution. At the very core of our technique lies the newly defined concept of a counting function with respect to a range scheme. We propose a two-phase framework to approximate the counting function for any range scheme. By simply switching one counting function for another, i.e., the so-called “parameter” of our modular attack, an adversary can attack different encrypted range schemes. We propose a constrained optimization formulation for the attack algorithm that is based on the counting functions. We demonstrate the effectiveness of our leakage-abuse attack on synthetic and real-world data under various scenarios.
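As a generic illustration of the kind of range-counting combinatorics that leakage-abuse attacks on range schemes build on (this is not the paper's counting function, only the flavor of the underlying counting): over a domain $\{1,\dots,N\}$, a value $x$ is covered by exactly $x(N-x+1)$ of the $N(N+1)/2$ possible ranges.

```python
# Brute-force check of the range-covering count used in many volume-style analyses.
N = 20

def ranges_covering(x):
    return sum(1 for a in range(1, N + 1) for b in range(a, N + 1) if a <= x <= b)

for x in (1, 5, 10, 20):
    assert ranges_covering(x) == x * (N - x + 1)
    print(x, ranges_covering(x))
```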
Dieaa I. Nassr, M. Anwar, Hatem M. Bahig
ePrint Report
In this article, we propose a new public key cryptosystem, called \textbf{NAB}. The most important features of NAB are that breaking it is no easier than breaking the NTRU cryptosystem~\cite{Hoffstein96}, and that its encryption/decryption process is very fast compared to the earlier public key cryptosystems RSA~\cite{Rivest78amethod}, Elgamal~\cite{ElGamal85}, and NTRU~\cite{Hoffstein96}. Since the NTRU cryptosystem~\cite{Hoffstein96} is still not known to be breakable using quantum computers, the same holds for NAB. In addition, the ciphertext is barely larger than the plaintext, and the ratio of the bit-size of the ciphertext to the bit-size of the plaintext can be reduced to just over one. We suggest NAB as an alternative to the RSA~\cite{Rivest78amethod}, Elgamal~\cite{ElGamal85}, and NTRU~\cite{Hoffstein96} cryptosystems.
Ilaria Chillotti, Marc Joye, Pascal Paillier
ePrint Report
In many cases, machine learning and privacy are perceived to be at odds. Privacy concerns are especially relevant when the involved data are sensitive. This paper deals with the privacy-preserving inference of deep neural networks. We report on first experiments with a new library implementing a variant of the TFHE fully homomorphic encryption scheme. The key underlying technology is programmable bootstrapping, which enables the homomorphic evaluation of any function of a ciphertext with a controlled level of noise. Our results indicate for the first time that deep neural networks are now within the reach of fully homomorphic encryption. Importantly, in contrast to prior works, our framework does not necessitate re-training the model.
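To convey the lookup-table intuition behind programmable bootstrapping (a plain-Python sketch of the idea only; no encryption is performed, and all names and parameters are illustrative assumptions, not the library's API):

```python
# Idea sketch: a function is tabulated over a discretized domain, and a noisy encoding
# still selects the correct table entry as long as the noise stays below half a step.
import random

BUCKETS = 64                                   # table resolution
STEP = 1.0 / BUCKETS

def relu(x):
    return max(0.0, x)

table = [relu(i * STEP) for i in range(BUCKETS)]        # "programmed" lookup table

def noisy_encode(x):
    return x + random.uniform(-STEP / 4, STEP / 4)      # controlled noise

def bootstrap_eval(noisy):
    index = round(noisy / STEP) % BUCKETS               # rounding removes the noise
    return table[index]

x = 0.40625                                             # lies exactly on a bucket
assert abs(bootstrap_eval(noisy_encode(x)) - relu(x)) < 1e-9
```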
Bei Wang, Yi Ouyang, Honggang Hu, Songsong Li
ePrint Report
Until now, there have been two different methods to compute 4-GLV decompositions on the elliptic curves over $\mathbb{F}_{p^2}$ which are quadratic twists and possess a “restricted” endomorphism $\psi$ satisfying $\psi^{2}+1=0$. They are Longa and Sica's twofold Cornacchia-type algorithm (ASIACRYPT 2012) and Benjamin Smith's method of ready-made short bases (AMS 2015). In this paper, we first extend Smith's method from the case of quadratic twists to the case of quartic or sextic twists and give ready-made short bases for 4-GLV decompositions on these high-degree twisted curves. After our supplements, Smith's method can be used to compute 4-GLV decompositions on more elliptic curves. Secondly, we focus on exploring more potential of Longa and Sica's algorithm, which is an elaborate iterated Cornacchia algorithm that can compute short bases for 4-GLV decompositions. The algorithm consists of two sub-algorithms, the first one in the ring of integers $\mathbb{Z}$ and the second one in the Gaussian integer ring $\mathbb{Z}[i]$. We observe that $\mathbb{Z}[i]$ in the second sub-algorithm can be replaced by another Euclidean domain $\mathbb{Z}[\omega]$ $(\omega=\frac{-1+\sqrt{-3}}{2})$. As a consequence, we design a new twofold Cornacchia-type algorithm with a theoretic upper bound of output $C\cdot n^{1/4}$, where $C=\frac{3+\sqrt{3}}{2}\sqrt{1+|r|+|s|}$ with small values $r, s$ given by the curve. The new twofold algorithm can be used to compute $4$-GLV decompositions on two classes of curves. First, it gives a new and unified method to compute all $4$-GLV decompositions on $j$-invariant $0$ elliptic curves over $\mathbb{F}_{p^2}$. We exploit the fact that this family of curves must have an endomorphism $\phi$ satisfying $\phi^{2}+\phi+1=0$ (and hence $\mathbb{Z}[\phi]=\mathbb{Z}[\omega]$). Of the two previous methods on this class of elliptic curves, the first one was proposed by Hu, Longa and Xu in 2012 (Designs, Codes and Cryptography) but is applicable only to curves which are twists of degree $6$ and possess a “restricted” endomorphism $\psi$ satisfying $\psi^{4}-\psi^{2}+1=0$; the second one follows from the work of Longa and Sica (ASIACRYPT 2012) and is applicable only to curves with a “restricted” endomorphism $\psi$ satisfying $\psi^{2}+1=0$. Second, it can be used to compute the $4$-GLV decomposition on the Jacobian of the hyperelliptic curve defined as $\mathcal{C}/\mathbb{F}_{p}:y^{2}=x^{6}+ax^{3}+b$, which has an endomorphism $\phi$ with the characteristic equation $\phi^2+\phi+1=0$ (hence $\mathbb{Z}[\phi]=\mathbb{Z}[\omega]$).
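For readers unfamiliar with GLV decompositions, here is a minimal sketch of the classical two-dimensional case via lattice reduction and Babai rounding (toy parameters, and only the rank-2 analogue of the paper's 4-GLV setting, not the twofold Cornacchia-type algorithm itself):

```python
# 2-GLV scalar decomposition sketch: find short (k1, k2) with k = k1 + k2*lambda (mod n),
# where lambda is a cube root of unity mod n. All values are toy stand-ins, not a real curve.
import random

n = 1000003                                  # toy prime "group order" with n = 1 (mod 3)

lam, g = 1, 2
while lam == 1:                              # find an element of multiplicative order 3
    lam = pow(g, (n - 1) // 3, n)
    g += 1
assert (lam * lam + lam + 1) % n == 0

def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a rank-2 lattice basis."""
    while True:
        if u[0] * u[0] + u[1] * u[1] > v[0] * v[0] + v[1] * v[1]:
            u, v = v, u
        m = round((u[0] * v[0] + u[1] * v[1]) / (u[0] * u[0] + u[1] * u[1]))
        if m == 0:
            return u, v
        v = (v[0] - m * u[0], v[1] - m * u[1])

# lattice of all (x, y) with x + y*lambda = 0 (mod n), generated by (n, 0) and (-lambda, 1)
b1, b2 = gauss_reduce((n, 0), ((-lam) % n, 1))

def decompose(k):
    """Babai round-off: write k = k1 + k2*lambda (mod n) with k1, k2 of size ~ sqrt(n)."""
    det = b1[0] * b2[1] - b1[1] * b2[0]
    c1 = round((k * b2[1]) / det)
    c2 = round((-k * b1[1]) / det)
    k1 = k - c1 * b1[0] - c2 * b2[0]
    k2 = -c1 * b1[1] - c2 * b2[1]
    return k1, k2

k = random.randrange(n)
k1, k2 = decompose(k)
assert (k1 + k2 * lam - k) % n == 0
print(k, "->", (k1, k2))
```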
Gabrielle Beck, Julia Len, Ian Miers, Matthew Green
ePrint Report
Many privacy-preserving protocols employ a primitive that allows a sender to "flag" a message to a recipient's public key, such that only the recipient (who possesses the corresponding secret key) can detect that the message is intended for their use. Examples of such protocols include anonymous messaging, privacy-preserving payments, and anonymous tracing. A limitation of the existing techniques is that recipients cannot easily outsource the detection of messages to a remote server, without revealing to the server the exact set of matching messages. In this work we propose a new class of cryptographic primitives called fuzzy message detection schemes. These schemes allow a recipient to derive a specialized message detection key that can identify correct messages, while also incorrectly identifying non-matching messages with a specific and chosen false positive rate $p$. This allows recipients to outsource detection work to an untrustworthy server, without revealing precisely which messages belong to the receiver. We show how to construct these schemes under a variety of assumptions; describe several applications of the new technique; and show that our schemes are efficient enough to use in real applications.
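To illustrate the detection functionality and its tunable false-positive rate, here is a toy symmetric-key stand-in (the paper's constructions are public-key; the names, use of HMAC, and parameters below are illustrative assumptions only):

```python
# Toy stand-in for the *functionality* of fuzzy message detection: a detection key that
# always flags matching messages and flags random traffic with false-positive rate 2^-k.
import hmac, hashlib, os, random

GAMMA = 16                                     # total flag positions (minimum rate 2^-16)

def keygen():
    return [os.urandom(16) for _ in range(GAMMA)]   # one secret per flag position

def bit(key, nonce, i):
    return hmac.new(key, nonce + bytes([i]), hashlib.sha256).digest()[0] & 1

def flag(sk, nonce):
    # sender-side flag for a matching message; the real scheme does this from the public key
    return [bit(sk[i], nonce, i) for i in range(GAMMA)]

def random_flag():
    return [random.getrandbits(1) for _ in range(GAMMA)]

def detection_key(sk, k):
    return sk[:k]                              # reveals only k positions: rate 2^-k

def detect(dk, nonce, fl):
    return all(bit(dk[i], nonce, i) == fl[i] for i in range(len(dk)))

sk = keygen()
dk = detection_key(sk, 3)                      # target false-positive rate 1/8
nonce = os.urandom(8)
assert detect(dk, nonce, flag(sk, nonce))      # true matches are always detected
hits = sum(detect(dk, os.urandom(8), random_flag()) for _ in range(10000))
print("false-positive rate ~", hits / 10000)   # should be close to 1/8
```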
Marc Fischlin, Arno Mittelbach
ePrint Report
The hybrid argument is a fundamental and well-established proof technique of modern cryptography for showing the indistinguishability of distributions. As such, its details are often glossed over and phrases along the lines of "this can be proven via a standard hybrid argument" are common in the cryptographic literature. Yet, the hybrid argument is not always as straightforward as we make it out to be, but instead comes with its share of intricacies. For example, a commonly stated variant says that if one has a sequence of hybrids $H_0,...,H_t$, and each pair $H_i$, $H_{i+1}$ is computationally indistinguishable, then so are the extreme hybrids $H_0$ and $H_t$. We reiterate the fact that, in this form, the statement is only true for constant $t$, and we translate the common approach for general $t$ into a rigorous statement.
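For context, the telescoping step underlying the argument can be stated informally as follows: for any distinguisher $D$,

$$\left|\Pr[D(H_0)=1]-\Pr[D(H_t)=1]\right| \;\le\; \sum_{i=0}^{t-1}\left|\Pr[D(H_i)=1]-\Pr[D(H_{i+1})=1]\right| \;\le\; t\cdot\max_i \varepsilon_i,$$

where $\varepsilon_i$ bounds the distinguishing advantage between $H_i$ and $H_{i+1}$. When $t$ is constant and every $\varepsilon_i$ is negligible, the right-hand side is negligible; when $t$ grows with the security parameter, one needs a single reduction whose advantage bound holds uniformly across all steps, which is exactly the subtlety discussed above.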

The paper here is not a research paper in the traditional sense. It mainly consists of an excerpt from the book "The Theory of Hash Functions and Random Oracles - An Approach to Modern Cryptography" (Information Security and Cryptography, Springer, 2021), providing a detailed discussion of the intricacies of the hybrid argument that we believe is of interest to the broader cryptographic community. The excerpt is reproduced with permission of Springer.
Boyuan Feng, Lianke Qin, Zhenfei Zhang, Yufei Ding, Shumo Chu
ePrint Report
In this paper, we present ZEN, a toolchain for producing efficient zero-knowledge proof systems for privacy-preserving verifiable neural network models. Taking an existing neural network as input, ZEN produces a verifiable computation scheme for a classification task or a recognition task, namely ZEN$_{class}$ and ZEN$_{rec}$. Both ZEN$_{class}$ and ZEN$_{rec}$ ensure the privacy (more precisely, the zero-knowledge property) of the input data. In practice, this means removing personal identifiers, such as the facial image or other biometric data, from the attack surface. Thanks to three decades of continuous effort on zkSNARKs by our community, the entire process is non-interactive and verifiable. Thus, our schemes potentially enable many important applications, ranging from trustless oracles for decentralized ledgers to privacy-preserving facial identification systems. To the best of our knowledge, ZEN is the first zero-knowledge neural network scheme that preserves the privacy of input data while delivering verifiable outputs. To build efficient schemes with no additional accuracy loss, ZEN includes two major technical contributions. First, we propose a zkSNARK-friendly quantization approach, which is semantically equivalent to the state-of-the-art quantization algorithm, yet brings significant savings in constraint size. Second, we propose a novel encoding scheme, namely stranded encoding, that encodes batched dot products, the workhorse of many matrix operations, using only a fraction of finite field elements. This brings sizable savings in the number of constraints for the matrix operation circuits. Our end-to-end evaluation demonstrates the effectiveness of ZEN: compared with simply combining the state-of-the-art full quantization scheme with zkSNARK (ZEN-vanilla), ZEN achieves $3.68 \sim 20.99 \times$ ($14.14 \times$ on average) savings in the number of constraints (and, as a result, in prover time as well) thanks to our zkSNARK-friendly quantization and stranded encoding.
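To give a feel for the packing idea behind batched dot products (a simplified integer illustration of the principle; not ZEN's exact stranded encoding or its constraint system):

```python
# Packing sketch: with a base B large enough to rule out carries, one big multiplication
# computes an entire dot product, recoverable from a single "digit" of the result.
a = [3, 1, 4, 1, 5]
b = [2, 7, 1, 8, 2]
n = len(a)
B = 1 << 16                                   # base; must exceed n * max(a) * max(b)

pack_a = sum(x * B**i for i, x in enumerate(a))
pack_b = sum(x * B**(n - 1 - i) for i, x in enumerate(b))   # reversed order

product = pack_a * pack_b
dot = (product // B**(n - 1)) % B             # middle "digit" holds sum(a_i * b_i)
assert dot == sum(x * y for x, y in zip(a, b))
print(dot)                                    # 35
```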
Mic Bowman, Debajyoti Das, Avradip Mandal, Hart Montgomery
ePrint Report
Proof of Elapsed Time (PoET) is a Nakamoto-style consensus algorithm where proof of work is replaced by a wait time randomly generated by a trusted execution environment (TEE). PoET was originally developed by Intel engineers and contributed to Hyperledger Sawtooth, but has never been formally defined or analyzed. In particular, PoET enables consensus on a bitcoin-like scale without having to resort to mining. Proof of Luck (PoL), designed by Milutinovic et al., is a similar (but not identical) protocol that also builds a Nakamoto-style consensus algorithm using a TEE. Like PoET, it also lacks a formal proof.

In this work, we formally define a simplified version of PoET and Proof of Luck, which we call elapsed time (ET) consensus with a trusted timer. We prove the security of our ET consensus protocol with a trusted timer given an honest majority assumption in a model very similar to the bitcoin backbone model proposed by Garay et al., which we call the elapsed time backbone model. Our model and protocol aim to capture the essence of PoET and PoL while ignoring some of the more practical difficulties associated with such protocols, such as bootstrapping and setting up the TEE.

The PoET protocol also contains a function called the $z$-test that limits the number of blocks a player can publish in any particular larger set of blocks. Surprisingly, by improving this $z$-test a little bit we can prove the security of our ET consensus protocol without any TEEs under a (slightly stronger) honest majority assumption. This implies that Nakamoto-style consensus with rate limiting and no proofs of work can be used to obtain scalable consensus in a permissioned setting: in other words, ``bitcoin without proofs of work'' can be made secure without a TEE for private blockchains!
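As a toy illustration of elapsed-time leader election and the kind of rate check a $z$-test enforces (illustrative parameters, a simulation only, not the paper's ET protocol or backbone model):

```python
# Toy simulation: every node asks its (simulated) trusted timer for a random wait,
# and the node whose timer expires first publishes the next block.
import random
from collections import Counter

NODES = 20
ROUNDS = 10000

wins = Counter()
for _ in range(ROUNDS):
    waits = {node: random.expovariate(1.0) for node in range(NODES)}   # TEE-drawn waits
    wins[min(waits, key=waits.get)] += 1

# with honest timers each node should win roughly ROUNDS / NODES blocks;
# a z-test style rate limit would flag nodes far above this expectation
expected = ROUNDS / NODES
for node, count in wins.most_common(3):
    print(node, count, f"(expected ~{expected:.0f})")
```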
Suhri Kim
ePrint Report
In this paper, we present a complete analysis of Huff curves for implementing isogeny-based cryptography. In this regard, we first investigate the computational cost of the building blocks when compression functions are used for Huff curves and present an additional formula on Huff curves for implementing isogeny-based cryptography. In our implementation, the performance of Huff-SIDH and Montgomery-SIDH is almost the same, and Huff-CSIDH is 6.4\% faster than Montgomery-CSIDH but 4\% slower than Hybrid-CSIDH. The result of our work shows that the Huff curve can be quite practical for implementing isogeny-based cryptography but has some limitations.
Gilles Macario-Rat, Jacques Patarin
ePrint Report
In this paper, we present two new perturbations for the design of multivariate schemes that we call ``Ariadne Thread'' and ``Salt''. From these ideas, we present some efficient multivariate encryption and signature schemes with explicit parameters that resist all known attacks. In particular they resist the two main (and often very powerful) attacks in this area: the Gröbner attacks (to compute a cleartext from a ciphertext) and the MinRank attacks (to recover the secret key). Ariadne Thread and Salt can also be seen as new ``perturbations'' that we can use to reinforce many multivariate schemes. The ``Salt'' perturbation works only for public key equations of degree (at least) 3. Similarly, at present the ``Ariadne Thread'' perturbation seems to be particularly powerful with public keys of degree 3. Despite this, the size of the public key may still be reasonable since we can use larger fields (and also maybe non-dense equations). The Ariadne Thread perturbation seems to be particularly interesting for encryption. This is unusual since in multivariate cryptography encryption is generally more difficult than signatures.
Michael Troncoso, Britta Hale
ePrint Report
In this paper, we computationally analyze Passkey Entry in its entirety as a cryptographic authenticated key exchange (AKE), including user-protocol interactions that are typically ignored as out-of-band. To achieve this, we model the user-to-device channels, as well as the typical device-to-device channel, and adversarial control scenarios in both cases. In particular, we separately capture adversarial control of device displays on the initiating and responding devices as well as adversarial control of user input mechanisms using what we call a CYBORG model. The CYBORG model enables realistic real-world security analysis in light of published attacks on user-mediated protocols such as Bluetooth that leverage malware and device displays. In light of this, we show that all versions of Passkey Entry fail to provide security in our model. Finally, we demonstrate how slight modifications to the protocol would allow it to achieve stronger security guarantees for all current variants of passkey generation, as well as a newly proposed twofold mode of generation we term Dual Passkey Entry. These proof-of-concept modifications point to improved design approaches for user-mediated protocols. This work also points to categories of vulnerabilities, based on compromise type, that could be exploited in Bluetooth Passkey Entry.
Jaskaran V. Singh, Nicholas J. Hopper
ePrint Report
Secure Multiparty Computation involves a protocol between parties with the aim of producing the same computed result that a trusted party would produce if the parties provided their inputs to it. The trusted party in conventional computation is replaced by "un-trusted" parties in Secure Multiparty Computation. We first show that this existing binary definition of trust is inadequate. The real world is rife with disparities, which produce a perceivable trust gradient between the participants. Conventional MPC models do not take this into account and instead provide security guarantees based on a threshold of corrupted parties. The thresholds are supposed to cover for some of the parties turning out to be corrupt. Often, with knowledge of the \emph{prior probability} of a party being corrupt, we can do better by allotting weight to each party based on how trusted we perceive it to be. Our paper explores this idea and our contributions towards it are twofold. First, we introduce the Graded Trust model where each party essentially has a \emph{Trust Grade} assigned to it in the protocol based on its prior probability of being corrupt. Then, we present a discussion of the philosophy behind graded trust and the potential benefits for large-scale public MPC systems.