IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
25 August 2021
Stjepan Picek, Guilherme Perin, Luca Mariot, Lichao Wu, Lejla Batina
Side-channel attacks have represented a realistic and serious threat to the security of embedded devices for almost three decades. A wide variety of attacks, and of targets they can be applied to, has been introduced, and while the area of side-channel attacks and mitigations is very well researched, it has yet to be consolidated.
Deep learning-based side-channel attacks entered the field in recent years with the promise of more competitive performance and enlarged attackers' capabilities compared to other techniques. At the same time, the new attacks bring new challenges and complexities to the domain, making a systematization of the existing knowledge ever more necessary.
In this SoK, we do exactly that, and by bringing new insights, we systematically structure the current knowledge of deep learning in side-channel analysis. We first dissect deep learning-assisted attacks into different phases and map those phases to the efforts conducted so far in the domain. For each of the phases, we identify the weaknesses and challenges that triggered the known open problems.
We connect the attacks to the existing threat models and evaluate their advantages and drawbacks. We finish by discussing other threat models that should be investigated and propose directions for future works.
Maikel Kerkhof, Lichao Wu, Guilherme Perin, Stjepan Picek
Deep learning is a powerful direction for profiling side-channel analysis as it can break targets protected with countermeasures even with a relatively small number of attack traces.
Still, it is necessary to conduct hyperparameter tuning for strong attack performance, which can be far from trivial.
Besides a plethora of options stemming from the machine learning domain, recent years also brought neural network elements specially designed for side-channel analysis.
An important hyperparameter is the loss function, which calculates the error or loss between the actual and desired output. The resulting loss is used to update the weights associated with the connections between the neurons or filters of the deep learning neural network. Unfortunately, despite being a highly relevant hyperparameter, there are no systematic comparisons among different loss functions. This work provides a detailed study on the performance of different loss functions in the SCA context. We evaluate five loss functions commonly used in machine learning and two loss functions proposed for SCA. Our results show that one of the SCA-specific loss functions (called CER) performs very well and outperforms other loss functions in most evaluated settings. Finally, our results show that categorical cross-entropy represents a good option for most settings, especially if there is a requirement to work well with different neural network architectures.
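The role of the loss function described above can be made concrete with a small numerical sketch. The snippet below is a toy illustration only (the CER loss is defined in the paper itself and is not reproduced here); it computes categorical cross-entropy for two traces and three candidate key-dependent classes:

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean cross-entropy between one-hot labels and predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

# Two traces, three candidate classes (toy numbers, not from the paper).
y_true = np.array([[1, 0, 0],
                   [0, 1, 0]])
y_pred = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1]])
loss = categorical_cross_entropy(y_true, y_pred)
```

During training, this scalar is differentiated with respect to the network weights, and the gradient drives the weight updates the abstract refers to.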
Prabhanjan Ananth, Gilad Asharov, Hila Dahari, Vipul Goyal
It is well known that several cryptographic primitives cannot be achieved without a common reference string (CRS). Those include, for instance, non-interactive zero-knowledge for NP, or maliciously secure computation in fewer than four rounds. The security of those primitives relies heavily upon the assumption that the trusted authority, who generates the CRS, does not misuse the randomness used in the CRS generation. However, we argue that there is no such thing as an unconditionally trusted authority, and every authority must be held accountable for any trust to be well-founded. Indeed, a malicious authority can, for instance, recover the private inputs of honest parties given transcripts of the protocols executed with respect to the CRS it has generated.
While eliminating trust in the trusted authority may not be entirely feasible, can we at least move towards achieving some notion of accountability? We propose a new notion in which, if the CRS authority releases the private inputs of protocol executions to others, we can then provide a publicly-verifiable proof that certifies that the authority misbehaved. We study the feasibility of this notion in the context of non-interactive zero knowledge and two-round secure two-party computation.
Sergij V. Goncharov
In this short note we consider a scheme to share a bitstring secret among $n$ parties such that any $m$ of them, cooperating, can reconstruct it, but any $m - 1$ of them cannot (a so-called $(m,n)$-threshold scheme). The scheme is based on the sound ranging problem, which is to determine the unknown position of a source, and the unknown instant at which it emitted a sound, from the known instants at which the sound wave reached known sensors. Its features are that 1) the shadows are computed not so much by the secret dealer as by the environment in which the sound propagates, so the amount of computation performed by the dealer is $O(1)$ instead of $O(n)$ as $n \rightarrow \infty$, and 2) the dealer does not need to establish a separate secure channel with each party. There are severe inherent drawbacks, though.
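For context, the standard polynomial-based realization of an $(m,n)$-threshold scheme (Shamir's scheme, to which the sound-ranging construction is an alternative) can be sketched as follows; the prime modulus and parameters here are illustrative, not from the note:

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is over GF(P)

def share(secret, m, n):
    """Split `secret` into n shares; any m of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(12345, m=3, n=5)
assert reconstruct(shares[:3]) == 12345  # any 3 of the 5 shares suffice
```

In Shamir's scheme the dealer does $O(n)$ work and needs a secure channel per party; the note's point is precisely that the sound-ranging construction avoids both, at the cost of other drawbacks.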
Simin Ghesmati, Walid Fdhila, Edgar Weippl
The Bitcoin blockchain was the first publicly verifiable and distributed ledger, in which everyone can download and check the full history of all data records from the genesis block onward. These properties led to the emergence of new types of applications and to the redesign of traditional systems that no longer respond to current business needs (e.g., transparency, protection against censorship, decentralization). One particular application is the use of blockchain technology to enable decentralized and self-sovereign identities, including new mechanisms for creating, resolving, and revoking them.
The public availability of data records has, in turn, paved the way for new kinds of attacks that combine sophisticated heuristics with auxiliary information to compromise users' privacy and deanonymize their identities.
In this paper, we review and categorize Bitcoin privacy attacks, investigate their impact on one of the Bitcoin-based identity methods, namely did:btcr, and analyze and discuss its privacy properties.
Walid Fdhila, Nicholas Stifter, Kristian Kostal, Cihan Saglam, Markus Sabadello
Recent technological shifts have pressured businesses to reshape the way they operate and transact. At the heart of this restructuring, identity management has established itself as an essential building block in both B2C and B2B business models. Trustworthy identities may refer to customers, businesses, suppliers, or assets, and enable trusted communication between different actors. Unfortunately, traditional identity management systems rely on centralized architectures and trust in third-party services. With the inception of blockchain technology, new methods for managing identity have emerged, which promise better decentralization and self-sovereignty. This paper provides an evaluation of a selection of distributed identity methods and analyzes their properties based on the categorization specified in the W3C recommendation rubric.
Animesh Roy, Dibyendu Roy, Subhamoy Maitra
An Arbiter-based Physical Unclonable Function (sometimes called a Physically Unclonable Function, or PUF for short) is a hardware-based pseudorandom bit generator. The pseudorandomness of the output bits depends on device-specific parameters. For example, based on its delay parameters, an $n$-length Arbiter PUF can be considered an $n$-variable Boolean function. We note that random variation of the delay parameters cannot exhaust all Boolean functions, and the achievable class is significantly smaller and restricted. While this is expected (as the autocorrelation property in certain cases is quite biased), we present a more disciplined and first theoretical combinatorial study in this domain. Our work shows how one can explore the functions achievable through an Arbiter-based PUF construction with random delay parameters. Our technique mostly shows the limitations of such functions from the angle of cryptographic evaluation, as the subclass of Boolean functions can be identified with much better efficiency (much lower complexity) than random functions. On the other hand, we note that under certain constraints on the weights of the inputs, such a simple model of Arbiter PUFs provides good cryptographic parameters in terms of differential analysis. In this regard, we theoretically solve the problem of autocorrelation properties in a restricted space of input variables with a fixed weight. Experimental evidence complements our theoretical findings.
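For intuition on how delay parameters induce a Boolean function, the widely used linear additive delay model of an Arbiter PUF can be simulated as below. This model is an assumption of the sketch, not the paper's combinatorial formalism, and the weights are random toy values:

```python
import random

def phi(challenge):
    """Parity feature vector of the standard additive delay model:
    phi_i = product over j >= i of (1 - 2*c_j), plus a constant 1."""
    feat, prod = [], 1
    for c in reversed(challenge):
        prod *= (1 - 2 * c)
        feat.append(prod)
    feat.reverse()
    return feat + [1]

def arbiter_puf(weights, challenge):
    """Response bit: sign of the accumulated delay difference."""
    s = sum(w * f for w, f in zip(weights, phi(challenge)))
    return 1 if s > 0 else 0

n = 8
weights = [random.gauss(0, 1) for _ in range(n + 1)]  # device-specific delays
resp = arbiter_puf(weights, [random.randint(0, 1) for _ in range(n)])
```

Fixing the weights fixes one member of the (restricted) function class the paper studies; varying them explores that class.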
Jeongeun Park
Keeping privacy for every entity in outsourced computation is always a crucial issue. For efficient secure computation, homomorphic encryption (HE) is one attractive solution. In particular, multi-key homomorphic encryption (MKHE), which allows homomorphic evaluation on data encrypted under different keys, is one of the simplest solutions for secure computation over multiple users' data. However, the main problem of current MKHE is that the dimension of its evaluated ciphertexts grows with the number of users. To solve this problem, several variants of MKHE schemes keep the ciphertext size constant for a fixed number of users. However, in these variants the users must interact with one another before computation to provide their inputs, which increases setup complexity. Moreover, all the existing MKHE schemes and their variants have distinct benefits which cannot easily be achieved at the same time in one scheme. In other words, each type of scheme has a computational scenario in which it performs best.
In this paper, we propose a more efficient evaluation-key generation algorithm for the existing variants of MKHE schemes which have no ciphertext expansion for a fixed number of users. Our method only requires a very simple and minor pre-processing step, distributing public keys, which is not counted as a round at all in many other applications. As a result, participants incur less communication, computation, and memory cost in the online phase. Moreover, we provide a practical conversion algorithm between the two types of schemes in order to \emph{efficiently} utilize both schemes' advantages together in a wider range of applications. We also provide a detailed comparison among similar results so that users can choose a suitable scheme for their homomorphic-encryption-based application scenarios.
Yao Sun
Mixed Integer Linear Programming (MILP) solvers have become one of the most powerful tools for searching for cryptographic characteristics, including differentials, impossible differentials, and division trails. Generally, one MILP problem can be formulated by several different MILP models, and models with fewer constraints and variables are usually easier to solve. How to model a problem with the fewest constraints is also an interesting mathematical problem. In this paper, we discuss this problem in a general form. Specifically, given a set $C \subset F_2^n$, let $L$ be a set of inequalities; we say $L$ describes $C$ if the inequalities in $L$ involve only $n$ variables and the solution set of $L$ is exactly $C$. Our goal is to find such a set $L$ with the fewest inequalities. We present a brand-new approach, named the SuperBall approach, for resolving this problem completely. Our approach is able to generate all available inequalities. Once these inequalities are obtained, Sasaki and Todo's method is used to find the smallest subset of inequalities that describes $C$. If Sasaki and Todo's method succeeds, the found subset is provably the smallest. As a result, we found the smallest subsets of inequalities that describe the Sboxes of Keccak and APN-6. The previous best results were 34 and 167, presented at FSE 2020, and we decreased these numbers to 26 and 145. Moreover, we can prove these numbers are the smallest provided no dummy variables are involved.
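The notion of a set $L$ of inequalities describing $C$ can be checked by brute force over $\{0,1\}^n$. The sketch below is illustrative only (the toy set and helper name are ours, not from the paper); it verifies such a description for a 3-bit example:

```python
from itertools import product

def describes(ineqs, C, n):
    """Check that the 0/1 solutions of `ineqs` (each (a, b) meaning a.x >= b)
    are exactly the set C."""
    sols = {x for x in product((0, 1), repeat=n)
            if all(sum(ai * xi for ai, xi in zip(a, x)) >= b for a, b in ineqs)}
    return sols == set(C)

# Toy example: C = all 3-bit vectors except (1, 1, 1).
# The single inequality -x0 - x1 - x2 >= -2 excludes exactly (1, 1, 1).
C = [x for x in product((0, 1), repeat=3) if x != (1, 1, 1)]
ok = describes([((-1, -1, -1), -2)], C, 3)   # True
```

For real Sboxes this exhaustive check is feasible (the domains are small), while finding the *smallest* describing set is the hard optimization problem the paper addresses.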
Joël Alwen, Sandro Coretti, Yevgeniy Dodis, Yiannis Tselekounis
The Messaging Layer Security (MLS) project is an IETF effort aiming to establish an industry-wide standard for secure group messaging (SGM). Its development is supported by several major secure-messaging providers (with a combined user base in the billions) and a growing body of academic research.
MLS has evolved over many iterations to become a complex, non-trivial, yet relatively ad-hoc cryptographic protocol. In an effort to tame its complexity and build confidence in its security, past analyses of MLS have restricted themselves to sub-protocols of MLS, most prominently a type of sub-protocol embodying so-called continuous group key agreement (CGKA). However, to date the task of proving, or even defining, the security of the full MLS protocol has been left open.
In this work, we fill in this missing piece. First, we formally capture the security of SGM protocols by defining a corresponding security game, which is parametrized by a safety predicate that characterizes the exact level of security achieved by a construction. Then, we cast MLS as an SGM protocol, showing how to modularly build it from the following three main components (and some additional standard cryptographic primitives) in a black-box fashion: (a) CGKA, (b) forward-secure group AEAD (FS-GAEAD), which is a new primitive and roughly corresponds to an epoch of group messaging, and (c) a so-called PRF-PRNG, which is a two-input hash function that is a pseudorandom function (resp. generator with input) in its first (resp. second) input. Crucially, the security predicate for the SGM security of MLS can be expressed purely as a function of the security predicates of the underlying primitives, which allows one to swap out any of the components and immediately obtain a security statement for the resulting SGM construction. Furthermore, we provide instantiations of all component primitives, in particular of CGKA with MLS's TreeKEM sub-protocol (which we prove adaptively secure) and of FS-GAEAD with a novel construction (which has already been adopted by MLS).
Along the way we introduce a collection of new techniques, primitives, and results with applications to other SGM protocols and beyond. For example, we extend the Generalized Selective Decryption proof technique (which is central in the CGKA literature) and prove adaptive security for another (practical) more secure CGKA protocol called RTreeKEM (Alwen et al., CRYPTO '20). The modularity of our approach immediately yields a corollary characterizing the security of an SGM construction using RTreeKEM.
Dmitrii Koshelev
Let $E_1$ be an ordinary pairing-friendly elliptic curve of embedding degree $k > 1$ over a finite field $\mathbb{F}_{\!q}$. Besides, let $E_2$ be a twist of $E_1$ of degree $d := \#\mathrm{Aut}(E_1)$ over the field $\mathbb{F}_{\!q^e}$, where $e := k/d \in \mathbb{N}$. As is customary, for a common prime divisor $r$ of the orders $N_1 := \#E_1(\mathbb{F}_{\!q})$ and $N_2 := \#E_2(\mathbb{F}_{\!q^e})$ denote by $\mathbb{G}_1 \subset E_1(\mathbb{F}_{\!q})$ and $\mathbb{G}_2 \hookrightarrow E_2(\mathbb{F}_{\!q^e})$ the eigenspaces of the Frobenius endomorphism on $E_1[r] \subset E_1(\mathbb{F}_{\!q^k})$, associated with the eigenvalues $1$, $q$ respectively.
This short note explains how to hash onto $\mathbb{G}_2$ more efficiently and why we do not need to hash directly onto $\mathbb{G}_1$. In the first case, we significantly exploit the presence of clearing the cofactor $c_2 := N_2/r$. In the second one, on the contrary, clearing the cofactor $c_1 := N_1/r$ can be fully avoided. The fact is that optimal ate pairings $a\!: \mathbb{G}_2 \!\times\! \mathbb{G}_1 \to \mu_r \subset \mathbb{F}_{\!q^k}^*$ can be painlessly (unlike $E_2(\mathbb{F}_{\!q^e}) \!\times\! \mathbb{G}_1$) extended to $\mathbb{G}_2 \!\times\! E_1(\mathbb{F}_{\!q})$, at least in the main pairing-based protocols. Throughout the text, by hashing we mean hashing indifferentiable from a random oracle.
At the moment, the curve BLS12-381 (with $e = 2$) is the most popular in practice. Earlier, for this curve (and a number of others) the author constructed encodings $\mathbb{F}_{\!q}^2 \to E_1(\mathbb{F}_{\!q})$ and $\mathbb{F}_{\!q} \to E_2(\mathbb{F}_{\!q^2})$ computable at the constant cost of one exponentiation in $\mathbb{F}_{\!q}$. Combining the new ideas with these encodings, we obtain hash functions $\{0, 1\}^* \to E_1(\mathbb{F}_{\!q})$ and $\{0, 1\}^* \to \mathbb{G}_2$, which seem difficult to speed up further. We also discuss how much performance gain they provide over hash functions that are actively applied in industry.
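Cofactor clearing itself can be illustrated in the simpler multiplicative-group setting, a toy analogue of the elliptic-curve case; the parameters below are ours, not the paper's:

```python
import hashlib

# The group Z_p^* has order p - 1 = c * r; raising any element to the power c
# lands in the unique subgroup of prime order r (the analogue of multiplying
# a curve point by the cofactor). Illustrative toy parameters.
p = 1019            # prime; p - 1 = 2 * 509
r = 509             # prime subgroup order
c = (p - 1) // r    # cofactor, here 2

def hash_to_subgroup(msg):
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % (p - 1) + 1
    return pow(h, c, p)   # "clear the cofactor"

g = hash_to_subgroup(b"hello")
assert pow(g, r, p) == 1  # g lies in the order-r subgroup
```

On curves, the analogous scalar multiplication by the cofactor is a dominant cost of hashing onto $\mathbb{G}_2$, which is why exploiting or avoiding it matters.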
Muhammad Haris Mughees, Hao Chen, Ling Ren
This paper presents OnionPIR and stateful OnionPIR, two single-server PIR schemes that significantly improve the response size and computation cost over state-of-the-art schemes. The OnionPIR scheme utilizes recent advances in somewhat homomorphic encryption (SHE) and carefully composes two lattice-based SHE schemes and homomorphic operations to control the noise growth and response size. Stateful OnionPIR uses a technique based on the homomorphic evaluation of copy networks. OnionPIR achieves a response overhead of just 4.2x over the insecure baseline, in contrast to the 100x response overhead of state-of-the-art schemes. Our stateful OnionPIR scheme improves upon the recent stateful PIR framework of Patel et al. and reduces its response overhead by 22x by avoiding downloading the entire database in the offline stage. Compared to state-of-the-art stateless PIR schemes, including OnionPIR, stateful OnionPIR reduces the computation cost by 7x.
23 August 2021
Ege Erdogan, Alptekin Kupcu, A. Ercument Cicek
Distributed deep learning frameworks, such as split learning, have recently been proposed to enable a group of participants to collaboratively train a deep neural network without sharing their raw data. Split learning in particular achieves this goal by dividing a neural network between a client and a server so that the client computes the initial set of layers, and the server computes the rest. However, this method introduces a unique attack vector for a malicious server attempting to steal the client's private data: the server can direct the client model towards learning a task of its choice. With a concrete example already proposed, such training-hijacking attacks present a significant risk for the data privacy of split learning clients.
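The client/server split described above can be sketched as a two-stage forward pass. This is a minimal illustration with toy shapes and random weights, not a faithful split learning implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Client holds the first layer; server holds the rest (toy shapes).
W_client = rng.standard_normal((16, 8))
W_server = rng.standard_normal((8, 3))

def client_forward(x):
    """Client computes its layers and sends only the activations
    (the 'smashed data'), never the raw inputs."""
    return np.maximum(x @ W_client, 0)       # ReLU

def server_forward(smashed):
    """Server finishes the forward pass on the smashed data."""
    logits = smashed @ W_server
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # softmax

x = rng.standard_normal((4, 16))             # 4 private samples
probs = server_forward(client_forward(x))    # shape (4, 3)
```

The attack surface the paper exploits is visible here: the gradients the server sends back through `client_forward` are entirely under the server's control, so it can steer what the client's layers learn.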
In this paper, we propose SplitGuard, a method by which a split learning client can detect whether it is being targeted by a training-hijacking attack or not. We experimentally evaluate its effectiveness, and discuss in detail various points related to its use. We conclude that SplitGuard can effectively detect training-hijacking attacks while minimizing the amount of information recovered by the adversaries.
Zhiyuan Fan, Jiatu Li, Tianqi Yang
How much computational resource do we need for cryptography? This is an important question of both theoretical and practical interests. In this paper, we study the problem on pseudorandom functions (PRFs) in the context of circuit complexity. Perhaps surprisingly, we prove extremely tight upper and lower bounds in various circuit models.
* In general $B_2$ circuits, assuming the existence of PRFs, PRFs can be constructed in $2n + o(n)$ size, simplifying and improving the $O(n)$ bound by Ishai et al. (STOC 2008). We show that such a construction is almost optimal by giving an unconditional $2n-O(1)$ lower bound.
* In logarithmic depth circuits, assuming the existence of $NC^1$ PRFs, PRFs can be constructed in $2n + o(n)$ size and $(1+\epsilon) \log n$ depth simultaneously.
* In constant depth linear threshold circuits, assuming the existence of $TC^0$ PRFs, PRFs can be constructed with wire complexity $n^{1+O(1.61^{-d})}$. We also give an $n^{1+\Omega(c^{-d})}$ wire complexity lower bound for some constant $c$.
The upper bounds are proved with generalized Levin's trick and novel constructions of "almost" universal hash functions; the lower bound for general circuits is proved via a tricky but elementary wire-counting argument; and the lower bound for $TC^0$ circuits is proved by extracting a "black-box" property of $TC^0$ circuits from the "white-box" restriction lemma of Chen, Santhanam, and Srinivasan (Theory Comput. 2018). As a byproduct, we prove unconditional tight upper and lower bounds for "almost" universal hashing, which we believe to be of independent interest.
Following Natural Proofs by Razborov and Rudich (J. Comput. Syst. Sci. 1997), our results help explain the difficulty of improving known circuit lower bounds, a difficulty which has recently become significant due to the discovery of several "bootstrapping results". In $TC^0$, this reveals the limitations of the current restriction-based methods; in particular, it brings new insights into understanding the strange phenomenon of "sharp threshold results", such as the one presented by Chen and Tell (STOC 2019).
Denis Firsov, Dominique Unruh
In this paper we derive a suite of lemmas which allows users to internally reflect EasyCrypt programs into distributions which correspond to their denotational semantics (probabilistic reflection). Based on this, we develop techniques for reasoning about rewinding of adversaries in EasyCrypt, a widely used technique in cryptology. We use the reflection and rewindability results to prove the security of a coin-toss protocol.
Arijit Dutta, Suyash Bagad, Saravanan Vijayakumaran
Proof of reserves protocols enable cryptocurrency exchanges to prove solvency, i.e. to prove that they have enough reserves to meet their liabilities towards their customers. MProve (EuroS&PW, 2019) was the first proof of reserves protocol for Monero which provided some privacy to the exchange's addresses. As the key images and the addresses are inherently linked in the MProve proof, an observer could easily recognize an exchange-owned address when a transaction spending from it appears on the blockchain. This is detrimental to an exchange's privacy and becomes a natural reason for exchanges not to adopt MProve. To this end, we propose MProve+, a Bulletproofs-based (S&P, 2018) NIZK protocol which unlinks the key images and the addresses, thus alleviating the drawback of MProve. Furthermore, MProve+ presents a promising alternative to MProve due to an order of magnitude smaller proof sizes along with practical proof generation and verification times.
Hanlin Ren, Rahul Santhanam
A recent breakthrough of Liu and Pass (FOCS'20) shows that one-way functions exist if and only if the (polynomial-)time-bounded Kolmogorov complexity, ${\rm K}^t$, is bounded-error hard on average to compute. In this paper, we strengthen this result and extend it to other complexity measures:
* We show, perhaps surprisingly, that the $\rm KT$ complexity is bounded-error average-case hard if and only if there exist one-way functions in *constant parallel time* (i.e., ${\sf NC}^0$). This result crucially relies on the idea of *randomized encodings*. Previously, a seminal work of Applebaum, Ishai, and Kushilevitz (FOCS'04; SICOMP'06) used the same idea to show that ${\sf NC}^0$-computable one-way functions exist if and only if logspace-computable one-way functions exist.
* Inspired by the above result, we present randomized average-case reductions among the ${\sf NC}^1$-versions and logspace-versions of ${\rm K}^t$ complexity, and the $\rm KT$ complexity. Our reductions preserve both bounded-error average-case hardness and zero-error average-case hardness. To the best of our knowledge, this is the first reduction between the $\rm KT$ complexity and a variant of ${\rm K}^t$ complexity.
* We prove tight connections between the hardness of ${\rm K}^t$ complexity and the hardness of (the hardest) one-way functions. In analogy with the Exponential-Time Hypothesis and its variants, we define and motivate the *Perebor Hypotheses* for complexity measures such as ${\rm K}^t$ and $\rm KT$. We show that a Strong Perebor Hypothesis for ${\rm K}^t$ implies the existence of (weak) one-way functions of near-optimal hardness $2^{n-o(n)}$. To the best of our knowledge, this is the first construction of one-way functions of near-optimal hardness based on a natural complexity assumption about a search problem.
* We show that a Weak Perebor Hypothesis for ${\rm MCSP}$ implies the existence of one-way functions, and establish a partial converse. This is the first unconditional construction of one-way functions from the hardness of ${\rm MCSP}$ over a natural distribution.
* Finally, we study the average-case hardness of ${\rm MKtP}$. We show that it characterizes cryptographic pseudorandomness in one natural regime of parameters, and complexity-theoretic pseudorandomness in another natural regime.
V. Vysotskaya, I. Chizhov
The paper provides a complete description of a digital signature scheme based on the Stern identification protocol. We also present a proof of the existential unforgeability of the scheme under a chosen message attack (EUF-CMA) in the random oracle model (ROM), under the assumptions that the syndrome decoding and hash-function collision-finding problems are hard. Finally, we discuss the choice of the signature parameters and introduce a parameter set providing 80-bit security.
Ege Erdogan, Alptekin Kupcu, A. Ercument Cicek
Training deep neural networks requires large-scale data, which often forces users to work in a distributed or outsourced setting, accompanied by privacy concerns. The split learning framework aims to address these concerns by splitting up the model between the client and the server. The idea is that since the server does not have access to the client's part of the model, the scheme supposedly provides privacy. We show that this is not true via two novel attacks. (1) We show that an honest-but-curious split learning server, equipped only with knowledge of the client's neural network architecture, can recover the input samples and also obtain a functionally similar model to the client model, without the client being able to detect the attack. (2) Furthermore, we show that if split learning is used naively to protect the training labels, the honest-but-curious server can infer the labels with perfect accuracy. We test our attacks using three benchmark datasets and investigate various properties of the overall system that affect the attacks' effectiveness. Our results show that the plaintext split learning paradigm can pose serious security risks and provides no more than a false sense of security.
Thore Tiemann, Sebastian Berndt, Thomas Eisenbarth, Maciej Liskiewicz
Chats have become an essential means of interpersonal interaction. Yet untraceable private communication remains an elusive goal, as most messengers hide content, but not communication patterns. Knowledge of communication patterns can by itself reveal too much, as happened, e.g., in the context of the Arab Spring. The subliminal channel in cryptographic systems, as introduced by Simmons in his pioneering works, enables untraceable private communication in plain sight. In this context, blockchains are a natural object for subliminal communication: accessing them is innocuous, as they rely on distributed access for verification and extension. At the same time, blockchain platforms process hundreds of thousands of transactions per day that are individually signed and placed on the blockchain. This significantly increases the availability of publicly accessible cryptographic transactions in which subliminal channels can be placed. In this paper we propose a public-key subliminal channel using ECDSA signatures on blockchains and prove that our construction is undetectable in the random oracle model under a common cryptographic assumption. While our approach is applicable to any blockchain platform relying on (variants of) ECDSA signatures, we present a proof of concept of our method for the popular Bitcoin protocol and show the simplicity and practicality of our approach.
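The core rejection-sampling idea behind such subliminal channels can be sketched as follows. This is a toy illustration only: the hash-based "tag" stands in for a randomized signature component such as ECDSA's $r$, and it is not the paper's construction:

```python
import hashlib
import random

def sign_with_subliminal_bit(msg, hidden_bit):
    """Re-randomize (reject and resample the nonce) until a public value
    of the 'signature' leaks the hidden bit. Toy stand-in for ECDSA."""
    while True:
        nonce = random.randrange(2**64)
        tag = hashlib.sha256(msg + nonce.to_bytes(8, "big")).digest()
        if tag[-1] & 1 == hidden_bit:   # keep sampling until the low bit matches
            return nonce, tag

def read_subliminal_bit(tag):
    """A receiver who knows the convention reads the bit from the public tag."""
    return tag[-1] & 1

nonce, tag = sign_with_subliminal_bit(b"pay 1 BTC", 1)
assert read_subliminal_bit(tag) == 1
```

Since each attempt succeeds with probability 1/2, embedding one bit costs two signing attempts on average; making such a channel readable only with a public key, and provably undetectable, is what the paper's actual construction adds.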