International Association for Cryptologic Research

IACR News


Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

03 June 2019

Ariel Hamlin, Justin Holmgren, Mor Weiss, Daniel Wichs
ePrint Report
We initiate the study of fully homomorphic encryption for RAMs (RAM-FHE). This is a public-key encryption scheme where, given an encryption of a large database $D$, anybody can efficiently compute an encryption of $P(D)$ for an arbitrary RAM program $P$. The running time over the encrypted data should be as close as possible to the worst-case running time of $P$, which may be sub-linear in the data size.

A central difficulty in constructing a RAM-FHE scheme is hiding the sequence of memory addresses accessed by $P$. This is particularly problematic because an adversary may homomorphically evaluate many programs over the same ciphertext, therefore effectively ``rewinding'' any mechanism for making memory accesses oblivious.

We identify a necessary prerequisite towards constructing RAM-FHE that we call \emph{rewindable oblivious RAM} (rewindable ORAM), which provides security even in this strong adversarial setting. We show how to construct rewindable ORAM using \emph{symmetric-key doubly efficient PIR (SK-DEPIR)} (Canetti-Holmgren-Richelson, Boyle-Ishai-Pass-Wootters: TCC '17). We then show how to use rewindable ORAM, along with virtual black-box (VBB) obfuscation for specific circuits, to construct RAM-FHE. The latter primitive can be heuristically instantiated using existing indistinguishability obfuscation candidates. Overall, we obtain a RAM-FHE scheme where the multiplicative overhead in running time is polylogarithmic in the data size $N$. Our basic scheme is single-hop, but we also extend it to get multi-hop RAM-FHE with overhead $N^\epsilon$ for arbitrarily small $\epsilon>0$.

We view our work as the first evidence that RAM-FHE is likely to exist.
Cody Freitag, Ilan Komargodski, Rafael Pass
ePrint Report
We introduce the notion of non-uniformly sound certificates: succinct single-message (unidirectional) argument systems that satisfy a ``best-possible security'' against non-uniform polynomial-time attackers. In particular, no polynomial-time attacker with $s$ bits of non-uniform advice can find significantly more than $s$ accepting proofs for false statements. Our first result is a construction of non-uniformly sound certificates for all NP in the random oracle model, where the attacker's advice can depend arbitrarily on the random oracle.

We next show that the existence of non-uniformly sound certificates for P (and collision resistant hash functions) yields a public-coin constant-round fully concurrent zero-knowledge argument for NP.
Junqing Gong, Brent Waters, Hoeteck Wee
ePrint Report
We present the first attribute-based encryption (ABE) scheme for deterministic finite automaton (DFA) based on static assumptions in bilinear groups; this resolves an open problem posed by Waters (CRYPTO 2012). Our main construction achieves selective security against unbounded collusions under the standard $k$-linear assumption in prime-order bilinear groups, whereas previous constructions all rely on $q$-type assumptions.
Shweta Agrawal, Monosij Maitra, Shota Yamada
ePrint Report
Constructing Attribute Based Encryption (ABE) [SW05] for uniform models of computation from standard assumptions is an important problem about which very little is known. The only known ABE schemes in this setting that i) avoid reliance on multilinear maps or indistinguishability obfuscation, ii) support unbounded length inputs and iii) permit unbounded key requests to the adversary in the security game, are the scheme by Waters from Crypto 2012 [Wat12] and its variants. Waters provided the first ABE for Deterministic Finite Automata (DFA) satisfying the above properties, from a parametrized or ``q-type'' assumption over bilinear maps. Generalizing this construction to Nondeterministic Finite Automata (NFA) was left as an explicit open problem in the same work, and has seen no progress to date. Constructions from other assumptions, such as more standard pairing-based assumptions or lattice-based assumptions, have also proved elusive.

In this work, we construct the first symmetric key attribute based encryption scheme for nondeterministic finite automata (NFA) from the learning with errors (LWE) assumption. Our scheme supports unbounded length inputs as well as unbounded length machines. In more detail, secret keys in our construction are associated with an NFA $M$ of unbounded length, ciphertexts are associated with a tuple $(x, m)$ where $x$ is a public attribute of unbounded length and $m$ is a secret message bit, and decryption recovers $m$ if and only if $M(x) = 1$.
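
To make the decryption condition concrete, here is a minimal Python sketch of plain NFA evaluation, i.e., of the predicate $M(x) = 1$ that governs when $m$ is recovered. It illustrates only the functionality, not the LWE-based construction, and all names in it are hypothetical.

```python
# Minimal NFA evaluation sketch: decryption succeeds exactly when
# M(x) = 1, i.e., when the NFA accepts the attribute string x.
def nfa_accepts(transitions, start, accepting, x):
    """transitions maps (state, symbol) -> set of successor states."""
    current = {start}
    for symbol in x:
        current = set().union(*(transitions.get((q, symbol), set())
                                for q in current))
    return bool(current & accepting)

# Toy NFA over {0,1} accepting strings whose second-to-last symbol is 1.
T = {
    ('s', '0'): {'s'}, ('s', '1'): {'s', 'a'},
    ('a', '0'): {'b'}, ('a', '1'): {'b'},
}
assert nfa_accepts(T, 's', {'b'}, '0110')      # M(x) = 1: message recovered
assert not nfa_accepts(T, 's', {'b'}, '0101')  # M(x) = 0: decryption fails
```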

Further, we leverage our ABE to achieve (restricted notions of) attribute hiding analogous to the circuit setting, obtaining the first predicate encryption and bounded key functional encryption schemes for NFA from LWE. We achieve machine hiding in the single/bounded key setting to obtain the first reusable garbled NFA from standard assumptions. In terms of lower bounds, we show that secret key functional encryption even for DFAs, with security against unbounded key requests, implies indistinguishability obfuscation (iO) for circuits; this suggests a barrier to achieving full-fledged functional encryption for NFA.
Rishab Goyal, Sam Kim, Nathan Manohar, Brent Waters, David J. Wu
ePrint Report
A software watermarking scheme enables users to embed a message or mark within a program while preserving its functionality. Moreover, it is difficult for an adversary to remove a watermark from a marked program without corrupting its behavior. Existing constructions of software watermarking from standard assumptions have focused exclusively on watermarking pseudorandom functions (PRFs).

In this work, we study watermarking public-key primitives such as the signing key of a digital signature scheme or the decryption key of a public-key (predicate) encryption scheme. While watermarking public-key primitives might intuitively seem more challenging than watermarking PRFs, our constructions only rely on simple assumptions. Our watermarkable signature scheme can be built from the minimal assumption of one-way functions while our watermarkable public-key encryption scheme can be built from most standard algebraic assumptions that imply public-key encryption (e.g., factoring, discrete log, or lattice assumptions). Our schemes also satisfy a number of appealing properties: public marking, public mark-extraction, and collusion resistance. Our schemes are the first to simultaneously achieve all of these properties.

The key enabler of our new constructions is a relaxed notion of functionality preservation. Traditionally, we require that a marked program (approximately) preserve the input/output behavior of the original program; in the public-key setting, however, preserving the "functionality" does not necessarily require preserving the exact input/output behavior. For instance, if we want to mark a signing algorithm, it suffices that the marked algorithm still output valid signatures (even if those signatures might be different from the ones output by the unmarked algorithm). Similarly, if we want to mark a decryption algorithm, it suffices that the marked algorithm correctly decrypt all valid ciphertexts (but it may behave differently from the unmarked algorithm on invalid or malformed ciphertexts). Our relaxed notion of functionality preservation captures the essence of watermarking and still supports the traditional applications, but provides additional flexibility that enables new and simple realizations of this powerful cryptographic notion.
Andrej Bogdanov, Yuval Ishai, Akshayaram Srinivasan
ePrint Report
We consider the problem of constructing leakage-resilient circuit compilers that are secure against global leakage functions with bounded output length. By global, we mean that the leakage can depend on all circuit wires and output a low-complexity function (represented as a multi-output Boolean circuit) applied on these wires. In this work, we design compilers both in the stateless (a.k.a. single-shot leakage) setting and the stateful (a.k.a. continuous leakage) setting that are unconditionally secure against $\mathsf{AC}^0$ leakage and similar low-complexity classes. In the stateless case, we show that the original private circuits construction of Ishai, Sahai, and Wagner (Crypto 2003) is actually secure against $\mathsf{AC}^0$ leakage. In the stateful case, we modify the construction of Rothblum (Crypto 2012), obtaining a simple construction with unconditional security. Prior works that designed leakage-resilient circuit compilers against $\mathsf{AC}^0$ leakage had to rely either on secure hardware components (Faust et al., Eurocrypt 2010; Miles-Viola, STOC 2013) or on (unproven) complexity-theoretic assumptions (Rothblum, Crypto 2012).
Vipul Goyal, Aayush Jain, Amit Sahai
ePrint Report
In this work, we explore the question of simultaneous privacy and soundness amplification for non-interactive zero-knowledge argument systems (NIZK). We show that any $\delta_s$-sound and $\delta_z$-zero-knowledge NIZK candidate satisfying $\delta_s+\delta_z=1-\epsilon$, for any constant $\epsilon>0$, can be turned into a computationally sound and zero-knowledge candidate, with the only extra assumption being a subexponentially secure public-key encryption scheme.

We develop novel techniques that leverage the leakage simulation lemma (Jetchev-Pietrzak, TCC 2014) to argue amplification. A crucial component of our result is a new notion of secret sharing for $\mathsf{NP}$ instances. We believe this may be of independent interest.

To achieve this result, we analyze the following two transformations:

- Parallel repetition: We show that parallel repetition turns any $\delta_s$-sound and $\delta_z$-zero-knowledge NIZK candidate into a (roughly) $\delta_s^n$-sound and $(1-(1-\delta_{z})^n)$-zero-knowledge candidate, where $n$ is the repetition parameter.

- MPC-based repetition: We propose a new transformation that amplifies zero-knowledge in the same way that parallel repetition amplifies soundness. It turns any $\delta_s$-sound and $\delta_z$-zero-knowledge NIZK candidate into a (roughly) $(1-(1-\delta_s)^n)$-sound and $2\delta_{z}^n$-zero-knowledge candidate.

We then show that applying these transformations in a zig-zag fashion yields our result. Finally, we also present a simple transformation that directly turns any NIZK candidate satisfying $\delta_s,\delta_z<1/3 -1/\textrm{poly}(\lambda)$ into a secure one.
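
For intuition, the small Python sketch below evaluates each (roughly stated) parameter map once on hypothetical starting errors, showing the complementary trade-off the zig-zag exploits; the careful choice of repetition parameters that makes the alternation converge is the paper's contribution and is not reproduced here.

```python
# Hypothetical starting errors with ds + dz < 1, repetition parameter n = 8.
ds, dz, n = 0.40, 0.55, 8

# Parallel repetition: soundness error ~ ds^n, zk error ~ 1 - (1-dz)^n.
print("parallel:", ds ** n, 1 - (1 - dz) ** n)      # soundness improves, zk degrades

# MPC-based repetition: soundness ~ 1 - (1-ds)^n, zk error ~ 2 * dz^n.
print("mpc:     ", 1 - (1 - ds) ** n, 2 * dz ** n)  # zk improves, soundness degrades
```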
Rio Lavigne, Andrea Lincoln, Virginia Vassilevska Williams
ePrint Report
Cryptography is largely based on unproven assumptions which, while believable, might fail. Notably, if $P = NP$, or if we live in Pessiland, then all current cryptographic assumptions will be broken. A compelling question is whether any interesting cryptography might exist in Pessiland.

A natural approach to tackle this question is to base cryptography on an assumption from fine-grained complexity. Ball, Rosen, Sabin, and Vasudevan [BRSV'17] attempted this, starting from popular hardness assumptions, such as the Orthogonal Vectors (OV) Conjecture. They obtained problems that are hard on average, assuming that OV and other problems are hard in the worst case. They obtained proofs of work, and hoped to use their average-case hard problems to build a fine-grained one-way function. Unfortunately, they proved that constructing one using their approach would violate a popular hardness hypothesis. This motivates the search for other fine-grained average-case hard problems.

The main goal of this paper is to identify sufficient properties for a fine-grained average-case assumption that imply cryptographic primitives such as fine-grained public key cryptography (PKC). Our main contribution is a novel construction of a cryptographic key exchange, together with the definition of a small number of relatively weak structural properties, such that if a computational problem satisfies them, our key exchange has provable fine-grained security guarantees, based on the hardness of this problem. We then show that a natural and plausible average-case assumption for the key problem Zero-$k$-Clique from fine-grained complexity satisfies our properties. We also develop fine-grained one-way functions and hardcore bits even under these weaker assumptions.
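
As a rough illustration (with toy parameters, not the paper's), the underlying search problem for $k = 3$ can be stated in a few lines of Python: find a triangle across three weight matrices whose edge weights sum to zero modulo a range $R$.

```python
# Toy brute-force Zero-3-Clique solver: given three n x n edge-weight
# matrices of a complete tripartite graph, find a triangle whose
# weights sum to 0 mod R. The average-case assumption is, roughly,
# that nothing does much better than this O(n^3) search when the
# weights are uniformly random.
import itertools
import random

def zero_3_clique(n, w, R):
    for a, b, c in itertools.product(range(n), repeat=3):
        if (w["AB"][a][b] + w["BC"][b][c] + w["AC"][a][c]) % R == 0:
            return a, b, c
    return None

n = 20
R = n ** 3  # with weights uniform mod n^3, about one zero triangle is expected
matrix = lambda: [[random.randrange(R) for _ in range(n)] for _ in range(n)]
w = {"AB": matrix(), "BC": matrix(), "AC": matrix()}
print(zero_3_clique(n, w, R))
```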

Whereas previous works had to assume random oracles or the existence of strong one-way functions to get a key exchange computable in $O(n)$ time that is secure against $O(n^2)$ adversaries (see [Merkle'78] and [BGI'08]), our assumptions seem much weaker. Our key exchange has a gap between the computation of the honest party and that of the adversary similar to prior work, while being non-interactive, implying fine-grained PKC.
Mihir Bellare, Ruth Ng, Björn Tackmann
ePrint Report
We draw attention to a gap between theory and usage of nonce-based symmetric encryption, under which the way the former treats nonces can result in violation of privacy in the latter. We bridge the gap with a new treatment of nonce-based symmetric encryption that modifies the syntax (decryption no longer takes a nonce), upgrades the security goal (asking that not just messages, but also nonces, be hidden) and gives simple, efficient schemes conforming to the new definitions. We investigate both basic security (holding when nonces are not reused) and advanced security (misuse resistance, providing best-possible guarantees when nonces are reused).
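
To illustrate the syntactic change (decryption no longer takes a nonce), here is a hedged Python sketch in which the nonce travels inside the ciphertext in encrypted form; this is an illustrative construction under the new syntax, not necessarily one of the paper's schemes, and it uses the third-party 'cryptography' package.

```python
# Sketch of nonce-hiding AEAD: a random nonce is wrapped with one AES
# block call under a second key, so the ciphertext reveals neither the
# message nor the nonce, and decryption takes no nonce argument.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt(k_wrap, k_enc, message):
    nonce = os.urandom(12)
    ecb = Cipher(algorithms.AES(k_wrap), modes.ECB()).encryptor()
    wrapped = ecb.update(nonce + b"\x00" * 4) + ecb.finalize()  # one 16-byte block
    return wrapped + AESGCM(k_enc).encrypt(nonce, message, None)

def decrypt(k_wrap, k_enc, ciphertext):  # note: no nonce input
    ecb = Cipher(algorithms.AES(k_wrap), modes.ECB()).decryptor()
    nonce = (ecb.update(ciphertext[:16]) + ecb.finalize())[:12]
    return AESGCM(k_enc).decrypt(nonce, ciphertext[16:], None)

k1, k2 = os.urandom(16), os.urandom(16)
ct = encrypt(k1, k2, b"attack at dawn")
assert decrypt(k1, k2, ct) == b"attack at dawn"
```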
Shuichi Katsumata, Ryo Nishimaki, Shota Yamada, Takashi Yamakawa
ePrint Report
A non-interactive zero-knowledge (NIZK) protocol allows a prover to non-interactively convince a verifier of the truth of a statement without leaking any other information. In this study, we explore shorter NIZK proofs for all NP languages. Our primary interest is NIZK proofs from falsifiable pairing/pairing-free group-based assumptions. Thus far, NIZKs in the common reference string model (CRS-NIZKs) for NP based on falsifiable pairing-based assumptions all require a proof size at least as large as $O(|C| k)$, where $C$ is a circuit computing the NP relation and $k$ is the security parameter. This holds true even for the weaker designated-verifier NIZKs (DV-NIZKs). Notably, constructing a (CRS, DV)-NIZK with proof size achieving an additive overhead $O(|C|) + poly(k)$, rather than a multiplicative overhead $|C| \cdot poly(k)$, based on any falsifiable pairing-based assumption is an open problem.

In this work, we present various techniques for constructing NIZKs with compact proofs, i.e., proofs smaller than $|C| \cdot poly(k)$, and make progress on the above situation. Our results are summarized below.

- We construct CRS-NIZK for all NP with proof size $|C| + poly(k)$ from a (non-static) falsifiable Diffie-Hellman (DH) type assumption over pairing groups. This is the first CRS-NIZK to achieve a compact proof without relying on either lattice-based assumptions or non-falsifiable assumptions. Moreover, a variant of our CRS-NIZK satisfies universal composability (UC) in the erasure-free adaptive setting. Although it is limited to NP relations in NC1, the proof size is $|w| \cdot poly(k)$ where $w$ is the witness, and in particular, it matches the state-of-the-art UC-NIZK proposed by Cohen, shelat, and Wichs (EPRINT'18) based on lattices.

- We construct (multi-theorem) DV-NIZKs for NP with proof size $|C|+poly(k)$ from the computational DH assumption over pairing-free groups. This is the first DV-NIZK that achieves a compact proof from a standard DH type assumption. Moreover, if we further assume the NP relation to be computable in NC1 and assume hardness of a (non-static) falsifiable DH type assumption over pairing-free groups, the proof size can be made as small as $|w| + poly(k)$.

Another related but independent issue is that all (CRS, DV)-NIZKs require the running time of the prover to be at least $|C|\cdot poly(k)$. Considering that there exist NIZKs with efficient verifiers whose running time is strictly smaller than $|C|$, it is an interesting problem whether we can construct prover-efficient NIZKs. To this end, we construct prover-efficient CRS-NIZKs for NP with compact proofs through a generic construction using laconic functional evaluation schemes (Quach, Wee, and Wichs (FOCS'18)). This is the first NIZK in any model where the running time of the prover is strictly smaller than the time it takes to compute the circuit $C$ computing the NP relation.

Finally, perhaps of independent interest, we formalize the notion of homomorphic equivocal commitments, which we use as a building block to obtain the first result, and show how to construct them from pairing-based assumptions.
Zhenzhen Bao, Jian Guo, Eik List
ePrint Report
Distinguishers on round-reduced AES have attracted considerable attention in recent years. Although the number of rounds covered by key-recovery attacks has not increased since, subspace, yoyo, and multiple-of-$n$ cryptanalysis have advanced the understanding of the cipher's properties. Expectation cryptanalysis is an umbrella term for all forms of statistical analysis that try to identify properties whose expectation differs from that of an ideal primitive. For substitution-permutation networks, integral attacks seem a suitable target for extension since they usually end after a linear layer sums several subcomponents. Based on results by Patarin, Chen et al. already observed that the expected number of collisions in a sum of permutations differs slightly from the ideal; however, their targets remained lightweight primitives. The present work applies expectation-based distinguishers for a sum of PRPs to round-reduced AES. We show how to extend the well-known 3-round integral distinguisher to expectation distinguishers over 4 and 5 rounds. In contrast to previous expectation distinguishers by Grassi et al., our approach allows us to prepend a round that starts from a diagonal subspace. We demonstrate how the prepended round can be used for key recovery, and we show how it can be integrated to form a six-round distinguisher. For all distinguishers, our results are supported by implementations with Cid et al.'s established Small-AES version.
Bruce Kallick
ePrint Report
The classic simple substitution cipher is modified by randomly inserting key-defined noise characters into the ciphertext during encryption; these are ignored in decryption. Interestingly, this yields a finite-key cipher system with unbounded unicity distance.
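
A minimal Python sketch of the idea, with a toy key and hypothetical parameter choices: the key designates certain characters as noise, encryption sprinkles them into the ciphertext at random, and decryption simply skips them.

```python
# Toy sketch of the modified substitution cipher: key-defined noise
# characters are randomly inserted during encryption and ignored
# during decryption.
import random
import string

ALPHABET = string.ascii_lowercase
sub = dict(zip(ALPHABET, "qwertyuiopasdfghjklzxcvbnm"))  # toy substitution key
inv = {v: k for k, v in sub.items()}
NOISE = "0123456789"  # key-defined noise characters (toy choice)

def encrypt(plaintext, noise_rate=0.5):
    out = []
    for ch in plaintext:
        while random.random() < noise_rate:  # geometric number of noise chars
            out.append(random.choice(NOISE))
        out.append(sub[ch])
    return "".join(out)

def decrypt(ciphertext):
    return "".join(inv[ch] for ch in ciphertext if ch not in NOISE)

ct = encrypt("meetmeatnoon")
assert decrypt(ct) == "meetmeatnoon"
```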
Steven D. Galbraith, Lukas Zobernig
ePrint Report
We consider the problem of obfuscating programs for fuzzy matching (in other words, testing whether the Hamming distance between an $n$-bit input and a fixed $n$-bit target vector is smaller than some predetermined threshold). This problem arises in biometric matching and other contexts. We present a virtual-black-box (VBB) secure and input-hiding obfuscator for fuzzy matching for Hamming distance, based on certain natural number-theoretic computational assumptions. In contrast to schemes based on coding theory, our obfuscator is based on computational hardness rather than information-theoretic hardness, and can be implemented for a much wider range of parameters. The Hamming distance obfuscator can also be applied to obfuscation of matching under the $\ell_1$ norm on $\mathbb{Z}^n$.
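
For concreteness, the functionality being obfuscated is just the following threshold predicate; this Python sketch shows the plain (unhidden) computation, whereas the paper's obfuscator evaluates it without revealing the target.

```python
# Plain fuzzy-matching functionality: accept an n-bit input iff its
# Hamming distance to a fixed n-bit target is below a threshold.
def fuzzy_match(target: int, n: int, threshold: int, x: int) -> bool:
    return bin((target ^ x) & ((1 << n) - 1)).count("1") < threshold

target = 0b1011001110110011
assert fuzzy_match(target, 16, 3, target ^ 0b0000010000000100)      # distance 2
assert not fuzzy_match(target, 16, 3, target ^ 0b0101010000000100)  # distance 4
```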

We also consider obfuscating conjunctions. Conjunctions are equivalent to pattern matching with wildcards, which can be reduced in some cases to fuzzy matching. Our approach does not cover as general a range of parameters as other solutions, but it is much more compact. We study the relation between our obfuscation schemes and other obfuscators and give some advantages of our solution.
Naomi Ephraim, Cody Freitag, Ilan Komargodski, Rafael Pass
ePrint Report
We introduce the notion of a $\textit{continuous verifiable delay function}$ (cVDF): a function $g$ which is (a) $\textit{iteratively sequential}$---meaning that evaluating the composed function $g^{(t)}$ of $g$ (on a random input) takes roughly $t$ times the time to evaluate $g$, even with many parallel processors, and (b) $\textit{(iteratively) verifiable}$---the output of $g^{(t)}$ can be efficiently verified (in time that is essentially independent of $t$). In other words, the iteration $g^{(t)}$ of $g$ is a verifiable delay function (VDF) (Boneh et al., EUROCRYPT '19), having the property that intermediate steps of the computation (i.e., $g^{(t')}$ for $t' < t$) are publicly and continuously verifiable.

We demonstrate that cVDFs have intriguing applications: (a) they can be used to construct a $\textit{public randomness beacon}$ that only requires an initial random seed (and no further unpredictable sources of randomness), (b) enable $\textit{outsourceable VDFs}$ where any part of the VDF computation can be verifiably outsourced, and (c) have deep complexity-theoretic consequences: in particular, they imply that $\mathsf{PPAD} \cap \mathsf{P} \not\subseteq \mathsf{NC}$ (i.e., the existence of "easy" Nash equilibrium problem instances that require a high $\textit{sequential}$ running time).

Our main result is the construction of a cVDF based on the repeated squaring assumption and the soundness of the Fiat-Shamir (FS) heuristic for $\textit{constant-round}$ proofs. We highlight that even when viewed as a (plain) VDF, our construction enjoys several advantages over previous ones: it satisfies a stronger soundness property under a weaker FS assumption (earlier constructions require the FS heuristic for either super-logarithmic round protocols, or for $\textit{arguments}$ (as opposed to proofs)).
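
As a toy illustration of iterated sequentiality only, the following Python sketch shows the repeated-squaring base function the construction builds on: evaluating the $t$-fold composition takes about $t$ sequential squarings, and the paper's machinery (omitted here) additionally makes every intermediate value verifiable. The modulus below is a small demonstration value, not a secure parameter.

```python
# g(x) = x^2 mod N; the t-fold composition g^(t) is x^(2^t) mod N and
# is conjectured to require t sequential squarings (the repeated
# squaring assumption). Verification via Fiat-Shamir proofs not shown.
def g(x, N):
    return pow(x, 2, N)

def g_iter(x, t, N):
    for _ in range(t):  # inherently sequential: each step needs the previous one
        x = g(x, N)
    return x

N = 999983 * 1000003  # toy RSA-type modulus; real ones are far larger
assert g_iter(3, 10, N) == pow(3, 2 ** 10, N)
```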
Fukang Liu, Takanori Isobe
ePrint Report
Troika is a recently proposed sponge-based hash function for IOTA's ternary architecture and platform, developed by CYBERCRYPT. In this paper, we introduce preimage attacks on 2 and 3 rounds of Troika with a divide-and-conquer approach. Instead of directly matching a given hash value, we propose equivalent conditions that determine whether a message is a preimage before computing the complete hash value. As a result, for a two-round hash value that can be generated with one block, we can restrict the preimage search to a valid space and efficiently enumerate the messages satisfying most of the equivalent conditions with a guess-and-determine technique. For the three-round preimage attack, an MILP-based method is applied to separate the one-block message space into two parts in order to obtain the best advantage over brute force. Our experiments show that the time complexity of the preimage attack on 2 (out of 24) rounds of Troika can be improved to $3^{79}$, which is $3^{164}$ times faster than brute force. For the preimage attack on 3 (out of 24) rounds, we obtain an advantage of $3^{25.7}$ over brute force. In addition, we show how to construct a second preimage for two-round Troika in seconds. Our attacks do not threaten the security of Troika.
Sebastian Gajek, Marco Lewandowsky
ePrint Report
Voting systems are the tool of choice for settling disagreements between different opinions. We propose a solution for a trustless, censorship-resilient, and scalable electronic voting platform. By leveraging the blockchain together with the functional encryption paradigm, we fully decentralize the system and reduce the risk that a voting provider, such as a corrupt government, censors or manipulates the outcome.
Iftach Haitner, Noam Mazor, Ronen Shaltiel, Jad Silbak
ePrint Report
Consider a PPT two-party protocol $\Pi=(A,B)$ in which the parties get no private inputs and obtain outputs $O^A,O^B\in \{0,1\}$, and let $V^A$ and $V^B$ denote the parties' individual views. Protocol $\Pi$ has $\alpha$-agreement if $Pr[O^A=O^B]=1/2+\alpha$. The leakage of $\Pi$ is the amount of information a party obtains about the event $\{O^A=O^B\}$; that is, the leakage $\epsilon$ is the maximum, over $P\in \{A,B\}$, of the distance between $V^P|_{O^A=O^B}$ and $V^P|_{O^A\neq O^B}$. Typically, this distance is measured in statistical distance, or, in the computational setting, in computational indistinguishability. For this choice, Wullschleger [TCC '09] showed that if $\epsilon \ll \alpha$ then the protocol can be transformed into an OT protocol.

We consider measuring the protocol leakage by the log-ratio distance (which was popularized by its use in the differential privacy framework). The log-ratio distance between $X,Y$ over domain $\Omega$ is the minimal $\epsilon\geq 0$ for which, for every $v\in\Omega$, $\log(Pr[X=v]/Pr[Y=v])\in [-\epsilon,\epsilon]$. In the computational setting, we use computational indistinguishability from having log-ratio distance $\epsilon$. We show that a protocol with (noticeable) accuracy $\alpha\in\Omega(\epsilon^2)$ can be transformed into an OT protocol (note that this allows $\epsilon \gg \alpha$). We complete the picture, in this respect, by showing that a protocol with $\alpha\in o(\epsilon^2)$ does not necessarily imply OT. Our results hold for both the information-theoretic and the computational settings, and can be viewed as a ``fine-grained'' approach to ``weak OT amplification''.
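
For finite distributions, the log-ratio distance defined above is straightforward to compute; here is a minimal Python sketch (names are illustrative).

```python
# Log-ratio distance: the smallest eps >= 0 such that
# |log(Pr[X=v] / Pr[Y=v])| <= eps for every v in the domain.
import math

def log_ratio_distance(px, py):
    """px, py: dicts mapping each domain element to its probability."""
    eps = 0.0
    for v in set(px) | set(py):
        p, q = px.get(v, 0.0), py.get(v, 0.0)
        if p == 0.0 and q == 0.0:
            continue
        if p == 0.0 or q == 0.0:
            return math.inf  # one-sided support makes the ratio unbounded
        eps = max(eps, abs(math.log(p / q)))
    return eps

X = {"a": 0.5, "b": 0.5}
Y = {"a": 0.6, "b": 0.4}
print(log_ratio_distance(X, Y))  # ~0.223 = max(|ln(5/6)|, |ln(5/4)|)
```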

We then use the above result to fully characterize the complexity of differentially private two-party computation for the XOR function, answering an open question posed by Goyal, Khurana, Mironov, Pandey, and Sahai [ICALP '16] and Haitner, Nissim, Omri, Shaltiel, and Silbak [FOCS '18]. Specifically, we show that for any (noticeable) $\alpha\in\Omega(\epsilon^2)$, a two-party protocol that computes the XOR function with $\alpha$-accuracy and $\epsilon$-differential privacy can be transformed into an OT protocol. This improves upon Goyal et al., who only handle $\alpha\in\Omega(\epsilon)$, and upon Haitner et al., who showed that such a protocol implies (infinitely-often) key agreement (and not OT). Our characterization is tight, since OT does not follow from protocols in which $\alpha\in o(\epsilon^2)$, and it extends to functions (over many bits) that ``contain'' an ``embedded copy'' of the XOR function.
Siemen Dhooghe, Svetla Nikova
ePrint Report
In order to thwart Differential Power Analysis (DPA) and Differential Fault Analysis (DFA) attacks, we require the implemented algorithm to ensure correct output and sensitive variable privacy. We propose security notions to determine an algorithm's security against combined attacks consisting of both faults and probes on circuit wires. To ease verification, help create secure components, and isolate primitives in protocols, we extend our notions to capture secure compositions. We propose the NINA property which forms the link between the established Non-Interference (NI) property and our composable active security property, Non-Accumulation (NA).

To illustrate the NINA property, we prove the security of three multiplication gadgets: an error-checking duplication gadget, an error-correcting duplication gadget, and an error-checking polynomial gadget. Our proofs illustrate that the error-detecting gadgets are susceptible to statistical ineffective faults. We also prove that the error-correcting gadget attains the stronger Independent NINA property, meaning that faults do not affect its sensitive variable privacy. Lastly, we prove the combined security of a polynomial-based method using the error-detecting properties of Shamir's secret sharing.
Xavier Bonnetain, Akinori Hosoyamada, María Naya-Plasencia, Yu Sasaki, André Schrottenloher
ePrint Report
In symmetric cryptanalysis, the model of superposition queries has led to surprising results, with many constructions being broken in polynomial time thanks to Simon's period-finding algorithm. But the practical implications of these attacks remain blurry. In contrast, the results obtained so far for a quantum adversary making classical queries only are less impressive.

In this paper, we introduce a new quantum algorithm which uses Simon's subroutines in a novel way. We manage to leverage the algebraic structure of cryptosystems in the context of a quantum attacker limited to classical queries and offline quantum computations. We obtain improved quantum-time/classical-data tradeoffs with respect to the current literature, while requiring only as much hardware (quantum and classical) as a standard exhaustive search using Grover's algorithm. In particular, we are able to break the Even-Mansour construction in quantum time $O(2^{n/3})$, with $O(2^{n/3})$ classical queries and $O(n^2)$ qubits only. In addition, we propose an algorithm that allows us to improve some previous superposition attacks by reducing the data complexity from exponential to polynomial, with the same time complexity.

Our approach can be seen in two complementary ways: reusing superposition queries during the iteration of a search using Grover's algorithm, or alternatively, removing the memory requirement in some quantum attacks based on a collision search, thanks to their algebraic structure.

We provide a list of cryptographic applications, including the Even-Mansour construction, the FX construction, some Sponge authenticated modes of encryption, and many more.
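
For reference, the Even-Mansour construction broken above is remarkably simple; the Python sketch below uses a hypothetical toy permutation on 16-bit blocks, so it illustrates the structure $E(x) = P(x \oplus k_1) \oplus k_2$ rather than a real instantiation.

```python
# Even-Mansour: encrypt by whitening a public permutation P with two
# secret keys. The paper's offline-Simon attack recovers the keys in
# quantum time O(2^(n/3)) from classical queries to E.
N_BITS = 16
MASK = (1 << N_BITS) - 1

def P(x):  # public permutation (toy stand-in, not cryptographic)
    x = (x * 0x9E37 + 0x79B9) & MASK                # odd multiplier: bijective mod 2^16
    return ((x << 7) | (x >> (N_BITS - 7))) & MASK  # rotation: also bijective

def even_mansour(k1, k2, x):
    return P((x ^ k1) & MASK) ^ k2

print(hex(even_mansour(0x1234, 0xBEEF, 0x0042)))
```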
Taha Atahan Akyildiz, Can Berk Guzgeren, Cemal Yilmaz, Erkay Savas
ePrint Report
In this work, we present a runtime approach, called MeltdownDetector, for detecting, isolating, and preventing ongoing Meltdown attacks that operate by causing segmentation faults. Meltdown exploits a hardware vulnerability that allows a malicious process to access memory locations that do not belong to it, including physical and kernel memory. The proposed approach is based on the simple observation that for a Meltdown attack to be successful, either a single byte of data located at a particular memory address or a sequence of consecutive memory addresses (i.e., a sequence of bytes) needs to be read, so that a meaningful piece of information can be extracted from the leaked data. MeltdownDetector therefore monitors segmentation faults occurring at memory addresses that are close to each other and issues a warning at runtime when these faults become ``suspicious.'' Furthermore, MeltdownDetector flushes the caches after every suspicious segmentation fault, preventing even a single byte of data from being leaked. In the experiments we carried out to evaluate the proposed approach, MeltdownDetector successfully detected all the attacks, correctly isolated all the malicious processes, and did so at the earliest possible time after the attacks had started, with an average runtime overhead of 0.34% and without leaking a single byte of information.
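
A hedged Python sketch of the core heuristic follows; the window and threshold values are illustrative, not the paper's, and the real tool additionally flushes caches after each suspicious fault.

```python
# Core idea: track segmentation-fault addresses per process and warn
# once enough faults cluster within a small address window, which is
# the access pattern a byte-by-byte Meltdown leak produces.
from collections import defaultdict

WINDOW = 64      # bytes: faults at most this far apart count as clustered
THRESHOLD = 8    # number of clustered faults considered "suspicious"

faults_by_pid = defaultdict(list)

def on_segfault(pid, addr):
    faults = faults_by_pid[pid]
    faults.append(addr)
    close = [a for a in faults if abs(a - addr) <= WINDOW]
    if len(close) >= THRESHOLD:
        print(f"warning: pid {pid} suspicious around {hex(addr)}; isolating it")
        return True  # caller would also flush caches here
    return False
```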