International Association for Cryptologic Research


IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

25 February 2022

Ziyu Zhao, Jintai Ding
ePrint Report
Lattice problems such as the NTRU problem and the LWE problem are widely used as the security foundation of post-quantum cryptosystems, and lattice reduction with the BKZ algorithm is currently the most efficient way to solve them. In this paper, we give several further improvements to the BKZ algorithm, which can be used with SVP subroutines based on both enumeration and sieving. In combination, these improvements provide a speed-up of $2^{3\sim 4}$ in total, which is significant in concrete attacks. Using these new techniques, we solved the 656-dimensional ideal lattice challenge in only 380 thread hours (with an enumeration-based SVP subroutine), much less than the previous record (which cost 4600 thread hours in total). With these improvements enabled, we can still simulate the new BKZ algorithm easily. One can also use this simulator to find the blocksize strategy (and the corresponding cost) that makes the $\mathrm{Pot}$ of the basis (defined in Section 5.2) decrease as fast as possible, which means the length of the first basis vector decreases the fastest if we accept the GSA assumption. This is useful for analyzing concrete attacks on lattice-based cryptography.
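A minimal sketch of the quantity driving that blocksize-strategy search: the paper defines $\mathrm{Pot}$ in its Section 5.2, so the position-weighted sum of log Gram-Schmidt norms used below is a common convention from the reduction literature, assumed here for illustration only.

```python
# Minimal sketch: tracking a "potential" of a lattice basis from its
# Gram-Schmidt norms. The weighting below is a common convention, an
# assumption rather than the paper's exact Section 5.2 definition.
import numpy as np

def gram_schmidt_norms(basis: np.ndarray) -> np.ndarray:
    """Norms ||b_i*|| of the Gram-Schmidt vectors, via QR decomposition."""
    # For a row basis B, B^T = Q R gives ||b_i*|| = |R[i, i]|.
    _, r = np.linalg.qr(basis.T.astype(float))
    return np.abs(np.diag(r))

def log_potential(basis: np.ndarray) -> float:
    """Weighted sum of log Gram-Schmidt norms; earlier vectors weigh more,
    so a smaller value indicates a more reduced basis under the GSA."""
    norms = gram_schmidt_norms(basis)
    n = len(norms)
    return sum((n - i) * np.log(norms[i]) for i in range(n))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b = rng.integers(-50, 50, size=(8, 8))
    print("log Pot:", log_potential(b))
```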
Lijing Zhou, Ziyu Wang, Xiao Zhang, Yu Yu
ePrint Report
Fully homomorphic encryption (FHE) provides a natural solution for privacy-preserving outsourced computation, but a straightforward FHE-based protocol may suffer from high computational overhead and a large ciphertext expansion rate, especially for computation-intensive tasks over large data, which are the main obstacles to practical outsourced computation. In this paper, we present HEAD, a generic outsourced computation protocol that can be based on most mainstream FHE schemes (typically a BGV- or GSW-style scheme) with more compact storage and lower computational cost than the straightforward FHE-based counterpart. In particular, our protocol enjoys a ciphertext/plaintext expansion rate of 1 (i.e., no expansion) at the server side. This is achieved by means of ``pseudorandomly masked'' ciphertexts and efficient transformations of them into FHE ciphertexts to facilitate privacy-preserving computation. Depending on the underlying FHE scheme in use, our HEAD protocol can be instantiated with three masking techniques, namely modulo-subtraction-masking, modulo-division-masking, and XOR-masking, to support decimal integer, real, or binary messages. Thanks to these masking techniques, various homomorphic computation tasks are made more efficient and less noise-accumulative. We evaluate the performance of our protocol on the BFV, BGV, CKKS, and FHEW schemes based on the PALISADE and SEAL libraries, which confirms the theoretical analysis of the reduction in computational cost and noise. For example, the computation time in our BFV tests on an x86 server for the sum or product of eight ciphertexts is reduced from 336.3 ms to 6.3 ms, or from 1219.4 ms to 9.5 ms, respectively. Furthermore, our multi-input masking and unmasking operations are more flexible than FHE SIMD-batching, by supporting an on-demand configuration of FHE during each outsourced computation request.
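A toy sketch of the masking idea named above, using the modulo-subtraction variant. The parameters, the PRG, and the function names are assumptions for illustration; in the real protocol the mask would be removed homomorphically under FHE rather than in the clear.

```python
# Toy sketch of modulo-subtraction masking: the server stores a masked value
# the same size as the plaintext (expansion rate 1), and the mask can be
# re-derived from a short seed and removed later. Parameter choices and the
# use of SHAKE as the PRG are assumptions, not the paper's construction.
import hashlib

T = 2**16          # plaintext modulus (toy choice)

def prg(seed: bytes, index: int) -> int:
    """Pseudorandom mask in [0, T), derived from a short seed."""
    h = hashlib.shake_128(seed + index.to_bytes(8, "big")).digest(8)
    return int.from_bytes(h, "big") % T

def mask(message: int, seed: bytes, index: int) -> int:
    """Client-side: upload (message - mask) mod T, one value per plaintext."""
    return (message - prg(seed, index)) % T

def unmask(masked: int, seed: bytes, index: int) -> int:
    """Stand-in for the server turning a masked value back into a usable
    ciphertext of the message: here the mask is added back in the clear,
    where the real protocol would add an encryption of it under FHE."""
    return (masked + prg(seed, index)) % T

if __name__ == "__main__":
    seed, msg = b"client-secret-seed", 12345
    stored = mask(msg, seed, 0)            # what the server keeps: 16 bits
    assert unmask(stored, seed, 0) == msg
```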
Jesper Buus Nielsen, João Ribeiro, Maciej Obremski (randomized author ordering)
ePrint Report
We distill a simple information-theoretic model for randomness extraction, motivated by the task of generating publicly verifiable randomness in blockchain settings, which is closely related to You-Only-Speak-Once (YOSO) protocols (CRYPTO 2021). With the goal of avoiding denial-of-service attacks, parties speak only once and in sequence by broadcasting a public value and forwarding secret values to future parties. Additionally, an unbounded adversary can corrupt any chosen subset of at most $t$ parties. In contrast, existing YOSO protocols only handle random corruptions. As a notable example, considering worst-case corruptions allows us to reduce trust in the role assignment mechanism, which is assumed to be perfectly random in YOSO.

We study the maximum corruption threshold $t$ which allows for unconditional randomness extraction in our model:

- With respect to feasibility, we give protocols for $t$ corruptions and $n=6t+1$ or $n=5t$ parties depending on whether the adversary learns secret values forwarded to corrupted parties immediately once they are sent or only once the corrupted party is executed, respectively. Both settings are motivated by practical implementations of secret value forwarding. To design such protocols, we go beyond the committee-based approach that is sufficient for random corruptions in YOSO but turns out to be sub-optimal for chosen corruptions.

- To complement our protocols, we show that low-error randomness extraction is impossible with corruption threshold $t$ and $n \leq 4t$ parties in both settings above. This also provides a separation between chosen and random corruptions, since the latter allows for randomness extraction with close to $n/2$ random corruptions.
Tristan Nemoz, Zoé Amblard, Aurélien Dupin
ePrint Report
We extend the work of Anand, Targhi, Tabia and Unruh (PQCrypto 2016) on the post-quantum security of the CBC, CFB, OFB and CTR modes of operation by considering all possible notions of qIND-qCPA security defined by Carstens, Ebrahimi, Tabia and Unruh (TCC 2021).

We show that the results obtained by Anand et al. for the qIND-qCPA-P6 security of these modes carry over to the other IND-qCPA notions, namely the qIND-qCPA-P10 and qIND-qCPA-P11 ones. We also show that CFB, CTR and OFB are insecure according to all of the other notions, regardless of the block cipher they are used with. We provide several results concerning the (in)security of CBC. First of all, we show that it is insecure according to the qIND-qCPA-P9 notion. By distinguishing on the nature of the underlying block cipher, we prove its qIND-qCPA-P5 security when it is based on a qPRP, and we prove that it can be qIND-qCPA-P13 insecure when based on a PRP, thus fully characterizing it. We illustrate the latter result by using as a counter-example the same block cipher used by Anand et al.
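For readers who want the data flow of the modes being classified, here is a toy sketch of CBC and CTR encryption. The toy_block_cipher placeholder is an assumption standing in for the PRP/qPRP the results are parameterized by; nothing here is secure or taken from the paper.

```python
# Toy sketch of the CBC and CTR mode structure discussed above, written to
# show only the data flow the security notions are about. The "block cipher"
# is a throwaway XOR with the key: a placeholder, not a real cipher.
import os

BLOCK = 16

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Placeholder PRP stand-in (assumption for illustration only).
    return bytes(b ^ k for b, k in zip(block, key))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key: bytes, iv: bytes, blocks: list[bytes]) -> list[bytes]:
    out, prev = [], iv
    for m in blocks:                      # C_i = E_k(M_i XOR C_{i-1})
        prev = toy_block_cipher(key, xor(m, prev))
        out.append(prev)
    return out

def ctr_encrypt(key: bytes, nonce: bytes, blocks: list[bytes]) -> list[bytes]:
    out = []
    for i, m in enumerate(blocks):        # C_i = M_i XOR E_k(nonce || i)
        keystream = toy_block_cipher(key, nonce[:BLOCK - 8] + i.to_bytes(8, "big"))
        out.append(xor(m, keystream))
    return out

if __name__ == "__main__":
    key, iv = os.urandom(BLOCK), os.urandom(BLOCK)
    msg = [b"A" * BLOCK, b"B" * BLOCK]
    print(cbc_encrypt(key, iv, msg))
    print(ctr_encrypt(key, iv, msg))
```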
Giuseppe Persiano, Kevin Yeo
ePrint Report
We present a lower bound for the static cryptographic data structure problem of single-server private information retrieval (PIR). PIR considers the setting where a server holds a database of $n$ entries and a client wishes to privately retrieve the $i$-th entry without revealing the index $i$ to the server. In our work, we focus on PIR with preprocessing where an $r$-bit hint may be computed in a preprocessing stage and stored by the server to be used to perform private queries in expected time $t$. We consider the public preprocessing setting of Beimel et al. [JoC, 2004] where the hint is publicly available to everyone including the adversary.

We prove that for any single-server computationally secure PIR with preprocessing it must be that $tr = \Omega(n \log n)$ when $r = \Omega(\log n)$. If $r = O(\log n)$, we show that $t = \Omega(n)$. Our lower bound holds even when the scheme errs with probability $1/n^2$ and the adversary’s distinguishing advantage is $1/n$. Our work improves upon the $tr = \Omega(n)$ lower bound of Beimel et al. [JoC, 2004].

We prove our lower bound in a variant of the cell probe model where only accesses to the memory are charged cost and computation and accesses to the hint are free. Our main technical contribution is a novel use of the cell sampling technique (also known as the incompressibility technique) used to obtain lower bounds on data structures. In previous works, this technique only leveraged the correctness guarantees to prove lower bounds even when used for cryptographic primitives. Our work combines the cell sampling technique with the privacy guarantees of PIR to construct a powerful, polynomial-time adversary that is critical to proving our higher lower bounds.
Luca De Feo, Antonin Leroux, Benjamin Wesolowski
ePrint Report
The Deuring correspondence defines a bijection between isogenies of supersingular elliptic curves and ideals of maximal orders in a quaternion algebra. We present a new algorithm to translate ideals of prime-power norm to their corresponding isogenies --- a central task of the effective Deuring correspondence. The new method improves upon the algorithm introduced in 2021 by De Feo, Kohel, Leroux, Petit and Wesolowski as a building-block of the SQISign signature scheme. SQISign is the most compact post-quantum signature scheme currently known, but is several orders of magnitude slower than competitors, the main bottleneck of the computation being the ideal-to-isogeny translation. We implement the new algorithm and apply it to SQISign, achieving a more than twofold speed-up in key generation and signing. Verification time is not directly impacted by the change, however we also achieve a twofold speed-up through various other improvements.

In a second part of the article, we advance cryptanalysis by showing a very simple distinguisher against one of the assumptions used in SQISign. We present a way to impede the distinguisher through a few changes to the generic KLPT algorithm. We formulate a new assumption capturing these changes, and provide an analysis together with experimental evidence for its validity.
Martin R. Albrecht, Miloš Prokop, Yixin Shen, Petros Wallden
ePrint Report
A fundamental computational problem is to find a shortest non-zero vector in a Euclidean lattice, a problem known as the Shortest Vector Problem (SVP). This problem is believed to be hard even on quantum computers and thus plays a pivotal role in post-quantum cryptography. In this work we explore how (efficiently) Noisy Intermediate Scale Quantum (NISQ) devices may be used to solve SVP. Specifically, we map the problem to that of finding the ground state of a suitable Hamiltonian. In particular, (i) we establish new bounds for lattice enumeration, which allow us to obtain new bounds (resp. estimates) for the number of qubits required per dimension for arbitrary lattices (resp. random q-ary lattices) to solve SVP; (ii) we exclude the zero vector from the optimization space by proposing (a) a different classical optimisation loop or, alternatively, (b) a new mapping to the Hamiltonian. These improvements allow us to solve SVP in dimension up to 28 in a quantum emulation, significantly more than what was previously achieved, even for special cases. Finally, we extrapolate the size of NISQ devices required to solve lattice instances that are hard even for the best classical algorithms, and find that with ≈ 10^3 noisy qubits such instances can be tackled.
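A small classical sketch of the objective that such a Hamiltonian encodes: minimise the quadratic form x^T G x over non-zero integer coefficient vectors, with G the Gram matrix of the basis. The brute-force search and coefficient bound below are illustrative assumptions; the paper's NISQ encoding is not reproduced here.

```python
# Classical sketch of the optimisation problem that gets mapped to a
# Hamiltonian: minimise ||B x||^2 = x^T G x over small non-zero integer
# coefficient vectors x, excluding the zero vector (the issue the paper's
# item (ii) addresses on the quantum side).
import itertools
import numpy as np

def shortest_vector_bruteforce(basis: np.ndarray, coeff_bound: int = 2):
    gram = basis @ basis.T                      # G = B B^T for a row basis
    best_x, best_len = None, None
    for x in itertools.product(range(-coeff_bound, coeff_bound + 1),
                               repeat=basis.shape[0]):
        if not any(x):                          # exclude the zero vector
            continue
        length_sq = np.array(x) @ gram @ np.array(x)
        if best_len is None or length_sq < best_len:
            best_x, best_len = x, length_sq
    return best_x, best_len

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    b = rng.integers(-10, 10, size=(4, 4))
    x, l2 = shortest_vector_bruteforce(b)
    print("coefficients:", x, "squared length:", int(l2))
```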
Gabriel Zaid, Lilian Bossuet, Mathieu Carbone, Amaury Habrard, Alexandre Venelli
ePrint Report
Over recent years, the cryptanalysis community has leveraged research on Deep Learning to enhance attacks. In particular, several studies have recently highlighted the benefits of Deep Learning based Side-Channel Attacks (DLSCA) for targeting real-world cryptographic implementations. While this new research area in applied cryptography provides impressive results for recovering a secret key even when countermeasures are implemented (e.g., desynchronization, masking schemes), the lack of theoretical results makes the construction of appropriate models a notoriously hard problem. In this work, we propose the first solution that bridges DL and SCA. Based on theoretical results, we develop the first generative model, called Conditional Variational AutoEncoder based on Stochastic Attacks (cVAE-SA), designed from the well-known Stochastic Attacks introduced by Schindler et al. in 2005. This model reduces the black-box property of DL and eases the architecture design for every real-world crypto-system, as we define theoretical complexity bounds which only depend on the dimension of the (reduced) trace and the target variable over $\mathbb{F}_{2}^{n}$. We validate our theoretical proposition through simulations and public datasets on a wide range of use cases, including multi-task learning, the curse of dimensionality and masking schemes.
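Since the model builds on the stochastic attacks of Schindler et al., here is a minimal sketch of that classical profiling step: regress each trace sample on the bits of the target variable. The synthetic Hamming-weight leakage is an assumption used only to make the example self-contained; it is not the cVAE-SA model itself.

```python
# Sketch of the classical "stochastic attack" profiling step: model each
# trace sample as a linear combination of the bits of the target intermediate
# value and fit the coefficients by least squares.
import numpy as np

def bit_design_matrix(values: np.ndarray, n_bits: int = 8) -> np.ndarray:
    """One column per bit of the target variable, plus a constant column."""
    bits = ((values[:, None] >> np.arange(n_bits)) & 1).astype(float)
    return np.hstack([np.ones((len(values), 1)), bits])

def fit_stochastic_model(targets: np.ndarray, traces: np.ndarray) -> np.ndarray:
    """Least-squares fit of leakage coefficients, one set per trace sample."""
    design = bit_design_matrix(targets)
    coeffs, *_ = np.linalg.lstsq(design, traces, rcond=None)
    return coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    z = rng.integers(0, 256, size=5000)                    # target variable
    hw = np.unpackbits(z.astype(np.uint8)[:, None], axis=1).sum(axis=1)
    traces = hw[:, None] * 1.0 + rng.normal(0, 0.5, (5000, 10))
    print(fit_stochastic_model(z, traces).shape)           # (9, 10)
```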
Qun Liu, Weijia Wang, Yanhong Fan, Lixuan Wu, Ling Sun, Meiqin Wang
ePrint Report
Lightweight cryptography features a small footprint and/or low computational complexity, and low-cost implementations of linear layers usually play an important role in it. Although it has been shown by Boyar et al. that finding the optimal implementation of a linear layer is a Shortest Linear Program (SLP) problem and NP-hard, there exist a variety of heuristic methods for finding near-optimal solutions. This paper considers the low-latency criterion and focuses on the heuristic search for lightweight implementations of linear layers. Most prior approaches iteratively combine the inputs (of the linear layer) to reach the outputs, which can be regarded as a forward search. To better meet the low-latency criterion, we propose a new framework of backward search that attempts to iteratively split every output (into an XOR of two bits) until all inputs appear. By bounding the time of splitting, the new framework can find a sub-optimal solution with a minimized circuit depth. We apply our new search algorithm to the linear layers of block ciphers and find many low-latency candidate implementations. Notably, for AES MixColumns, we provide an implementation with 103 XOR gates and a depth of 3, which is among the best hardware implementations of the AES linear layer. Besides, we obtain better implementations in XOR count for 54.3% of the 4256 Maximum Distance Separable (MDS) matrices proposed by Li et al. at FSE 2019. We also obtain an involutory MDS matrix (in $M_4(GL(8, \mathbb{F}_2))$) whose implementation uses the lowest number of XORs (i.e., 86, saving 2 compared with the state-of-the-art result) with the minimum depth.
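A minimal sketch of the backward view described above: treat each output as a bitmask of input bits and recursively split it into an XOR of two smaller sets. The naive balanced split is an illustrative assumption, not the paper's heuristic, but it shows why splitting controls circuit depth.

```python
# Backward view of a linear layer: each output is a set (bitmask) of input
# bits to be XORed together; the search splits an output into an XOR of two
# smaller sets until only single input bits remain.
def split(mask: int) -> tuple[int, int]:
    """Split a bitmask into two halves of (roughly) equal popcount."""
    bits = [i for i in range(mask.bit_length()) if (mask >> i) & 1]
    left = sum(1 << i for i in bits[: len(bits) // 2])
    return left, mask ^ left

def backward_depth(mask: int) -> int:
    """Depth of the XOR tree produced by recursive balanced splitting."""
    if bin(mask).count("1") <= 1:
        return 0
    a, b = split(mask)
    return 1 + max(backward_depth(a), backward_depth(b))

if __name__ == "__main__":
    # Output = x0 ^ x1 ^ x3 ^ x4 ^ x6 (five input bits) -> depth 3.
    print(backward_depth(0b1011011))
```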
Gregor Haas, Aydin Aysu
ePrint Report
Cryptographic instruction set extensions are commonly used for ciphers which would otherwise face unacceptable side channel risks. A prominent example of such an extension is the ARMv8 Cryptographic Extension, or ARM CE for short, which defines dedicated instructions to securely accelerate AES. However, while these extensions may be resistant to traditional "digital" side channel attacks, they may still be vulnerable to physical side channel attacks.

In this work, we demonstrate the first such attack on a standard ARM CE AES implementation. We specifically focus on the implementation used by Apple’s CoreCrypto library which we run on the Apple A10 Fusion SoC. To that end, we implement an optimized side channel acquisition infrastructure involving both custom iPhone software and accelerated analysis code. We find that an adversary which can observe 5-30 million known-ciphertext traces can reliably extract secret AES keys using electromagnetic (EM) radiation as a side channel. This corresponds to an encryption operation on less than half of a gigabyte of data, which could be acquired in less than 2 seconds on the iPhone 7 we examined. Our attack thus highlights the need for side channel defenses for real devices and production, industry-standard encryption software.
Markku-Juhani O. Saarinen
ePrint Report
FIPS 140-3 is the main standard defining security requirements for cryptographic modules in the U.S. and Canada; commercially viable hardware modules generally need to be compliant with it. The scope of FIPS 140-3 will also expand to the new NIST Post-Quantum Cryptography (PQC) standards when migration from older RSA and Elliptic Curve cryptography begins. FIPS 140-3 mandates testing of the effectiveness of ``non-invasive attack mitigations'', or side-channel attack countermeasures. At the higher security levels 3 and 4, the testing methods and metrics are expected to be based on ISO 17825, which in turn builds on the older Test Vector Leakage Assessment (TVLA) methodology. We discuss how to apply ISO 17825 to hardware modules that implement lattice-based PQC standards for public-key cryptography -- Key Encapsulation Mechanisms (KEMs) and Digital Signatures. We find that simple ``random key'' vs. ``fixed key'' tests are unsatisfactory due to the close linkage between the public and private components of PQC keypairs. While the general statistical testing approach and requirements can remain consistent with older public-key algorithms, a non-trivial challenge in creating ISO 17825 testing procedures for PQC is the careful design of the test vectors used for power, electromagnetic, and timing measurements so that relevant Critical Security Parameter (CSP) leakage is captured.
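For orientation, a minimal sketch of the fixed-vs-random leakage test that TVLA and ISO 17825 build on: a per-sample Welch's t-test over two groups of traces. The 4.5 threshold and the synthetic traces are standard conventions and illustrative assumptions, not parameters taken from the paper.

```python
# Fixed-vs-random leakage assessment: compute a per-sample Welch's t statistic
# between traces captured with a fixed input and traces captured with random
# inputs, and flag samples where |t| exceeds the customary 4.5 threshold.
import numpy as np

def welch_t(fixed: np.ndarray, random: np.ndarray) -> np.ndarray:
    """Per-sample Welch's t statistic (rows = traces, columns = samples)."""
    m_f, m_r = fixed.mean(axis=0), random.mean(axis=0)
    v_f, v_r = fixed.var(axis=0, ddof=1), random.var(axis=0, ddof=1)
    return (m_f - m_r) / np.sqrt(v_f / len(fixed) + v_r / len(random))

def leaking_samples(fixed: np.ndarray, random: np.ndarray,
                    threshold: float = 4.5) -> np.ndarray:
    return np.where(np.abs(welch_t(fixed, random)) > threshold)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    fixed = rng.normal(0.0, 1.0, (2000, 100))
    fixed[:, 40] += 0.3                      # inject a small leak at sample 40
    random = rng.normal(0.0, 1.0, (2000, 100))
    print(leaking_samples(fixed, random))    # expected to report sample 40
```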
Omri Shmueli
ePrint Report
Quantum tokenized signature schemes (Ben-David and Sattath, QCrypt 2017) allow a sender to generate and distribute quantum unclonable states which grant their holder a one-time permission to sign in the name of the sender. Such schemes are a strengthening of public-key quantum money schemes, as they imply public-key quantum money where some channels of communication in the system can be made classical.

An even stronger primitive is semi-quantum tokenized signatures, where the sender is classical and can delegate the generation of the token to a (possibly malicious) quantum receiver. Semi-quantum tokenized signature schemes imply a powerful version of public-key quantum money satisfying two key features:

1. The bank is classical and the scheme can execute on a completely classical communication network. In addition, the bank is \emph{stateless} and, after the creation of a banknote, does not hold any information or trapdoors except the balance of accounts in the system. Such a quantum money scheme solves the main open problem presented by Radian and Sattath (AFT 2019).

2. Furthermore, the classical-communication transactions between users in the system are \emph{direct} and do not need to go through the bank. This enables the transactions to be both classical and private.

While fully-quantum tokenized signatures (where the sender is quantum and generates the token by itself) are known based on quantum-secure indistinguishability obfuscation and injective one-way functions, the semi-quantum version is not known under any computational assumption. In this work we construct a semi-quantum tokenized signature scheme based on quantum-secure indistinguishability obfuscation and the sub-exponential hardness of the Learning with Errors problem. In the process, we show new properties of quantum coset states and a new hardness result on indistinguishability obfuscation of classical subspace membership circuits.
Ben Nassi, Ras Swissa, Yuval Elovici, Boris Zadov
ePrint Report
In this paper, we introduce "the little seal bug" attack, an optical side-channel attack which exploits lightweight reflective objects (e.g., an iced coffee can, a smartphone stand, a souvenir) as optical implants for the purpose of recovering the content of a conversation. We show how fluctuations in the air pressure on the surface of a shiny object can be exploited by eavesdroppers to recover speech passively and externally, using equipment not likely to be associated with spying. These air pressure fluctuations, which occur in response to sound, cause the shiny object to vibrate, so that the light it reflects is modulated by the nearby sound; as a result, seemingly innocuous objects like an empty beverage can, a desk ornament, or a smartphone stand, which are often placed on desks, can provide the infrastructure required for eavesdroppers to recover the content of a victim's conversation held while the victim is sitting at his/her desk. First, we conduct a series of experiments aimed at learning the characteristics of optical measurements obtained from shiny objects that reflect light, using a photodiode to analyze the movement of a shiny weight in response to sound. Based on our findings, we propose an optical-acoustical transformation (OAT) to recover speech from the optical measurements obtained from light reflected from shiny objects. Finally, we compare the performance of the little seal bug attack to related methods presented in other studies. We show that eavesdroppers located 35 meters away from a victim can use the little seal bug attack to recover speech at the sound level of a virtual meeting with fair intelligibility.
Mark Zhandry
ePrint Report
Generic groups are an important tool for analyzing the feasibility and infeasibility of group-based cryptosystems. There are two distinct widespread versions of generic groups, Shoup's and Maurer's, the main difference being whether or not group elements are given explicit labels. The two models are often treated as equivalent. In this work, however, we demonstrate that the models are in fact quite different, and care is needed when stating generic group results:

- We show that numerous textbook constructions are *not* captured by Maurer, but are captured by Shoup. In the other direction, any construction captured by Maurer *is* captured by Shoup.

- For constructions that exist in both models, we show that security is equivalent for "single stage" games, but Shoup security is strictly stronger than Maurer security for some "multi-stage" games.

- The existing generic group un-instantiability results do not apply to Maurer. We fill this gap with a new un-instantiability result.

- We explain how the known black box separations between generic groups and identity-based encryption do not fully apply to Shoup, and resolve this by providing such a separation.

- We give a new un-instantiability result for the *algebraic* group model.
Monika Henzinger, Jalaj Upadhyay
ePrint Report
We study fine-grained error bounds for differentially private algorithms for averaging and counting in the continual observation model. For this, we use the completely bounded spectral norm (cb norm) from operator algebra. For a matrix $W$, its cb norm is defined as \[ \|W\|_{\mathsf{cb}} = \max_{Q} \left\{ \frac{\|Q \bullet W\|}{\|Q\|} \right\}, \] where $Q \bullet W$ denotes the Schur product and $\|\cdot\|$ denotes the spectral norm. We bound the cb norm of two fundamental matrices studied in differential privacy under the continual observation model: the counting matrix $M_{\mathsf{counting}}$ and the averaging matrix $M_{\mathsf{average}}$. For $M_{\mathsf{counting}}$, we give lower and upper bounds whose additive gap is $1 + \frac{1}{\pi}$. Our factorization also has two desirable properties sufficient for the streaming setting: the factorization consists of lower-triangular matrices, and the number of distinct entries in the factorization is exactly $T$. This allows us to compute the factorization on the fly while requiring the curator to store only a $T$-dimensional vector. For $M_{\mathsf{average}}$, we show an additive gap of $\approx 0.64$ between the lower and upper bounds.
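To make the two matrices concrete, here is a small numpy sketch that builds them and applies them to a stream; only the ordinary spectral norm is computed, since the cb norm (a maximum over all Schur multipliers $Q$) is precisely what the paper bounds and is not evaluated here.

```python
# The two matrices of the continual observation model: the counting matrix
# (lower-triangular all-ones, mapping a stream to its running sums) and the
# averaging matrix (running means).
import numpy as np

def counting_matrix(t: int) -> np.ndarray:
    return np.tril(np.ones((t, t)))

def averaging_matrix(t: int) -> np.ndarray:
    return counting_matrix(t) / np.arange(1, t + 1)[:, None]

if __name__ == "__main__":
    t = 8
    stream = np.arange(1, t + 1, dtype=float)
    print(counting_matrix(t) @ stream)           # running sums 1, 3, 6, ...
    print(averaging_matrix(t) @ stream)          # running means 1.0, 1.5, ...
    print(np.linalg.norm(counting_matrix(t), 2)) # ordinary spectral norm
```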
Daniel Rausch, Ralf Kuesters, Céline Chevalier
ePrint Report
Universal Composability is a widely used concept for the design and analysis of protocols. Since Canetti's original UC model and the model by Pfitzmann and Waidner, several different models for universal composability have been proposed, including, for example, the IITM model, GNUC, and CC, but also extensions and restrictions of the UC model, such as JUC, GUC, and SUC. These were motivated by the lack of expressivity of existing models, ease of use, or flaws in previous models. Cryptographers choose between these models based on their needs at hand (e.g., support for joint state and global state) or simply their familiarity with a specific model. While all models follow the same basic idea, there are huge conceptual differences, which raises fundamental and practical questions: (How) do the concepts and results proven in one model relate to those in another model? Do the different models and the security notions formulated therein capture the same classes of attacks? Most importantly, can cryptographers re-use results proven in one model in another model, and if so, how?

In this paper, we initiate a line of research with the aim to address this lack of understanding, consolidate the space of models, and enable cryptographers to re-use results proven in other models. As a start, here we focus on Canetti's prominent UC model and the IITM model proposed by Kuesters et al. The latter is an interesting candidate for comparison with the UC model since it has been used to analyze a wide variety of protocols, supports a very general protocol class and provides, among others, seamless treatment of protocols with shared state, including joint and global state. Our main technical contribution is an embedding of the UC model into the IITM model showing that all UC protocols, security and composition results carry over to the IITM model. Hence, protocol designers can profit from the features of the IITM model while being able to use all their results proven in the UC model. We also show that, in general, one cannot embed the full IITM model into the UC model.
Thibauld Feneuil, Jules Maire, Matthieu Rivain, Damien Vergnaud
ePrint Report
We propose (honest-verifier) zero-knowledge arguments for the modular subset sum problem. Given a set of integers, this problem asks whether a subset adds up to a given integer t modulo a given integer q. This NP-complete problem has been considered since the 1980s as an interesting alternative in cryptography to hardness assumptions based on number theory, and it is in particular believed to provide post-quantum security. Previous combinatorial approaches, notably one due to Shamir, yield arguments with cubic communication complexity (in the security parameter). More recent methods, based on the MPC-in-the-head technique, also produce arguments with cubic communication complexity and only for a prime modulus q.

We improve this approach by using a secret sharing over small integers (rather than modulo q) to reduce the size of the arguments and remove the prime modulus restriction. Since this sharing may reveal information on the secret subset, we introduce the idea of rejection to the MPC-in-the-head paradigm. Special care has to be taken to balance completeness and soundness and to preserve the zero-knowledge property of our arguments. We combine this idea with two techniques for proving that the secret vector (which selects the subset) indeed has binary coordinates. Our new techniques have the significant advantage of resulting in arguments whose size is independent of the modulus q.

Our new protocols for the subset sum problem achieve an asymptotic improvement by producing arguments of quadratic size (against cubic size for previous proposals). This improvement is also practical: for a 256-bit modulus q, the best variant of our protocols yields 13KB arguments, while previous proposals gave 1180KB arguments for the best general protocol and 122KB for the best protocol restricted to a prime modulus. Our techniques can also be applied to vectorial variants of the subset sum problem, in particular the inhomogeneous short integer solution (ISIS) problem, for which they provide competitive alternatives to state-of-the-art protocols. We also show the application of our protocol to build efficient zero-knowledge arguments of plaintext and/or key knowledge in the context of fully homomorphic encryption. When applied to the TFHE scheme, the obtained arguments are more than 20 times smaller than those obtained with previous protocols.
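A toy sketch of the relation being proven and of the flavour of the small-integer sharing with rejection mentioned above. The share range, the number of parties, and the rejection rule are illustrative assumptions, not the paper's parameters.

```python
# The NP relation (modular subset sum) plus a toy additive sharing of a secret
# bit over the plain integers, with a rejection step when the last share would
# fall outside the allowed range and hence leak information about the bit.
import secrets

def subset_sum_holds(weights: list[int], x: list[int], t: int, q: int) -> bool:
    """The binary vector x selects a subset of the weights summing to t mod q."""
    assert all(b in (0, 1) for b in x)
    return sum(w * b for w, b in zip(weights, x)) % q == t

def share_small_integers(bit: int, parties: int = 3, bound: int = 2**16):
    """Additively share a bit using small integer shares; return None (retry)
    when the last share lands outside the allowed range."""
    shares = [secrets.randbelow(2 * bound) - bound for _ in range(parties - 1)]
    last = bit - sum(shares)
    if abs(last) > bound:                 # rejection step
        return None
    return shares + [last]

if __name__ == "__main__":
    q = 2**61 - 1
    weights = [secrets.randbelow(q) for _ in range(8)]
    x = [1, 0, 1, 1, 0, 0, 1, 0]
    t = sum(w * b for w, b in zip(weights, x)) % q
    print(subset_sum_holds(weights, x, t, q))
    shares = None
    while shares is None:                 # retry until no rejection
        shares = share_small_integers(1)
    print(sum(shares))                    # reconstructs the bit: 1
```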
Yanbo Chen, Yunlei Zhao
ePrint Report
An aggregate signature (AS) scheme allows an unspecified aggregator to compress many signatures into a short aggregation. AS schemes can save storage costs and accelerate verification. They are desirable for applications where many signatures need to be stored, transferred, or verified together, such as blockchain systems, network routing, e-voting, and certificate chains. However, constructing AS schemes based on general groups, relying only on the hardness of the discrete logarithm problem, is quite tricky and has been a long-standing research question. Recently, Chalkias et al. (CT-RSA 2021) proposed a half-aggregate scheme for Schnorr signatures. We observe that the scheme lacks a tight security proof and does not support incremental aggregation well, i.e., adding more signatures into a pre-existing aggregation. Chalkias et al. also presented an aggregate scheme for Schnorr signatures whose security can be tightly reduced to the security of Schnorr signatures in the random oracle model (ROM). However, that scheme is rather expensive and does not achieve half-aggregation. It is a fundamental question, of both theoretical and practical interest, whether there exists a half-aggregation of Schnorr signatures with a tight reduction in the ROM.

This work's contributions are threefold. We first give a tight security proof for the scheme from CT-RSA 2021 in the ROM and the algebraic group model (AGM). Second, we provide a new half-aggregate scheme for Schnorr signatures that perfectly supports incremental aggregation and whose security also tightly reduces to Schnorr's security in the AGM+ROM. Third, we present a Schnorr-based sequential aggregate signature (SAS) scheme that is as tightly secure as the Schnorr signature scheme in the ROM (without the AGM). Our work may pave the way for applying Schnorr aggregation in real-world cryptographic applications.
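A toy, non-secure sketch of what half-aggregation of Schnorr signatures means: every nonce commitment R_i is kept, while the s_i are compressed into one hash-weighted linear combination. The tiny group and the hash derivations are assumptions for a self-contained demo and are not the scheme of Chalkias et al. or of this paper.

```python
# Toy Schnorr signatures over a small prime-order subgroup, plus a
# half-aggregation that keeps each R_i and combines the s_i with hash-derived
# coefficients. Illustration only; the group is far too small to be secure.
import hashlib
import secrets

P, Q, G = 179, 89, 4          # toy subgroup: G = 4 has prime order Q mod P

def h(*parts) -> int:
    data = "|".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

def sign(x, pk, msg):
    r = secrets.randbelow(Q - 1) + 1
    R = pow(G, r, P)
    return R, (r + h(R, pk, msg) * x) % Q

def half_aggregate(sigs, pks, msgs):
    """Keep every R_i; combine the s_i with hash-derived coefficients."""
    context = [x for pair in zip(pks, msgs) for x in pair]
    coeffs = [h("agg", i, *context) for i in range(len(sigs))]
    s_agg = sum(c * s for c, (_, s) in zip(coeffs, sigs)) % Q
    return [R for R, _ in sigs], s_agg, coeffs

def verify_aggregate(Rs, s_agg, coeffs, pks, msgs):
    lhs, rhs = pow(G, s_agg, P), 1
    for R, L, pk, msg in zip(Rs, coeffs, pks, msgs):
        rhs = rhs * pow(R * pow(pk, h(R, pk, msg), P) % P, L, P) % P
    return lhs == rhs

if __name__ == "__main__":
    keys = [keygen() for _ in range(3)]
    msgs = ["tx-1", "tx-2", "tx-3"]
    sigs = [sign(x, pk, m) for (x, pk), m in zip(keys, msgs)]
    pks = [pk for _, pk in keys]
    print(verify_aggregate(*half_aggregate(sigs, pks, msgs), pks, msgs))
```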
Onur Gunlu, Matthieu Bloch, Rafael F. Schaefer, Aylin Yener
ePrint Report
This work considers mitigation of information leakage between communication and sensing operations in joint communication and sensing systems. Specifically, a discrete memoryless state-dependent broadcast channel model is studied in which (i) the presence of feedback enables a transmitter to simultaneously achieve reliable communication and channel state estimation; (ii) one of the receivers is treated as an eavesdropper whose state should be estimated but which should remain oblivious to a part of the transmitted information. The model abstracts the challenges behind security for joint communication and sensing if one views the channel state as a characteristic of the receiver, e.g., its location. For independent identically distributed (i.i.d.) states, perfect output feedback, and when part of the transmitted message should be kept secret, a partial characterization of the secrecy-distortion region is developed. The characterization is exact when the broadcast channel is either physically-degraded or reversely-physically-degraded. The characterization is also extended to the situation in which the entire transmitted message should be kept secret. The benefits of a joint approach compared to separation-based secure communication and state-sensing methods are illustrated with a binary joint communication and sensing model.
Keita Emura, Shiho Moriai, Takuma Nakajima, Masato Yoshimi
ePrint Report
Cache systems are crucial for reducing communication overhead on the Internet. The importance of communication privacy is being increasingly and widely recognized; therefore, we anticipate that nearly all end-to-end communication will be encrypted via secure sockets layer/transport layer security (SSL/TLS) in the near future. Herein we consider a catch-22 situation: the cache server needs to check whether content has been cached or not, i.e., it needs to observe the content, thereby violating end-to-end encryption. We avoid this catch-22 situation by proposing an encrypted cache system which we call Cache-22. To maximize its deployability, we avoid heavy, advanced cryptographic tools and instead base our Cache-22 system purely on traditional SSL/TLS communication. It employs tags for searching, and its design concept enables the service provider to decide, e.g., via an authentication process, whether or not a particular user should be allowed to access particular content. We provide a prototype implementation of the proposed system using the color-based cooperative cache proposed by Nakajima et al. (IEICE Trans. 2017) under several ciphersuites containing post-quantum key exchanges in addition to ECDHE (elliptic-curve-based). We consider NIST Post-Quantum Cryptography round 3 finalists and alternate candidates: lattice-based (Kyber, SABER, NTRU), code-based (BIKE), and isogeny-based (SIKE). Compared to direct HTTPS communication between a service provider and a user, our Cache-22 system has the merit of drastically reducing the communication between the cache server and the service provider (by approximately 95%), which is effective in a hierarchical network with a cost disparity.
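A minimal sketch of the kind of tag-based lookup the abstract describes: the cache server sees only an opaque tag and an encrypted blob, so it can answer "cached or not?" without observing content. How Cache-22 actually derives tags and authorizes users is specified in the paper; the HMAC tag, keystream encryption, and key handling below are assumptions for illustration only.

```python
# Toy tag-based encrypted cache: the cache stores tag -> ciphertext and never
# sees plaintext; only clients holding the content key can derive tags and
# decrypt. Not the Cache-22 protocol itself, just the lookup idea.
import hashlib
import hmac
import secrets

class EncryptedCache:
    def __init__(self):
        self.store: dict[bytes, bytes] = {}   # tag -> ciphertext

    def lookup(self, tag: bytes):
        return self.store.get(tag)

    def insert(self, tag: bytes, ciphertext: bytes) -> None:
        self.store[tag] = ciphertext

def tag_for(content_key: bytes, url: str) -> bytes:
    """Deterministic search tag that does not reveal the resource to the cache."""
    return hmac.new(content_key, url.encode(), hashlib.sha256).digest()

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with a SHAKE-256 keystream; a placeholder for the real
    SSL/TLS-protected payload (same call decrypts)."""
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

if __name__ == "__main__":
    cache, k = EncryptedCache(), secrets.token_bytes(32)
    url, body = "https://provider.example/video/42", b"large cacheable content"
    cache.insert(tag_for(k, url), toy_encrypt(k, body))
    hit = cache.lookup(tag_for(k, url))
    print(toy_encrypt(k, hit) == body)        # authorized client decrypts
```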