Here you can see all recent updates to the IACR webpage.

22 April 2019

ePrint Report
Two-Round Oblivious Transfer from CDH or LPN
Nico Döttling, Sanjam Garg, Mohammad Hajiabadi, Daniel Masny, Daniel Wichs

We show a new general approach for constructing maliciously secure two-round oblivious transfer (OT). Specifically, we provide a generic sequence of transformations to upgrade a very basic notion of two-round OT, which we call elementary OT, to UC-secure OT. We then give simple constructions of elementary OT under the Computational Diffie-Hellman (CDH) assumption or the Learning Parity with Noise (LPN) assumption, yielding the first constructions of malicious (UC-secure) two-round OT under these assumptions. Since two-round OT is complete for two-round 2-party and multi-party computation in the malicious setting, we also achieve the first constructions of the latter under these assumptions.
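
To fix ideas on the message pattern (not the paper's construction), below is a minimal Python sketch of a folklore semi-honest two-round OT from DDH over a toy Schnorr group; all parameters are illustrative, and the paper's point is precisely that a far weaker notion, elementary OT, suffices and can be upgraded to malicious (UC) security.

    import hashlib, secrets

    p, q, g = 1019, 509, 4     # toy Schnorr group (p = 2q + 1); real use needs >= 256-bit q

    def H(x: int) -> int:      # hash a group element to a one-byte pad
        return hashlib.sha256(str(x).encode()).digest()[0]

    # Receiver, message 1: choice bit b
    b  = 1
    sk = secrets.randbelow(q - 1) + 1
    pks = [0, 0]
    pks[b] = pow(g, sk, p)                                # key with known secret
    pks[1 - b] = pow(g, secrets.randbelow(q - 1) + 1, p)  # sampled "obliviously"; a
                                                          # semi-honest receiver discards this exponent
    # Sender, message 2: inputs m0, m1 (single bytes here), hashed-ElGamal encrypted
    m, ct = [0x42, 0x77], []
    for j in (0, 1):
        s = secrets.randbelow(q - 1) + 1
        ct.append((pow(g, s, p), m[j] ^ H(pow(pks[j], s, p))))

    # Receiver output: recovers m_b only
    c1, c2 = ct[b]
    assert c2 ^ H(pow(c1, sk, p)) == m[b]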

ePrint Report
On the Streaming Indistinguishability of a Random Permutation and a Random Function
Itai Dinur

An adversary with $S$ bits of
memory obtains a stream of $Q$ elements that are uniformly drawn from the set $\{1,2,\ldots,N\}$, either with or without replacement. This corresponds to sampling $Q$ elements using either a random function or a random permutation. The adversary's goal is to distinguish between these two cases.
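
For intuition, here is a minimal Python simulation of this game (parameters illustrative): a memory-bounded distinguisher remembers the first few stream elements and outputs ``random function'' if one of them repeats, achieving advantage roughly $1 - e^{-Q \cdot S / N}$, i.e., about $Q \cdot S / N$ when that quantity is small.

    import random

    N, Q, MEM = 2**20, 2**12, 2**8   # universe size, stream length, elements stored

    def stream(perm: bool):
        if perm:                      # random permutation: without replacement
            yield from random.sample(range(N), Q)
        else:                         # random function: with replacement
            for _ in range(Q):
                yield random.randrange(N)

    def adversary(s) -> bool:
        """Guess 'random function' iff a remembered element repeats."""
        seen = set()
        for x in s:
            if x in seen:
                return True
            if len(seen) < MEM:       # memory bound: keep only the first MEM elements
                seen.add(x)
        return False

    trials = 500
    adv = (sum(adversary(stream(False)) for _ in range(trials))
         - sum(adversary(stream(True))  for _ in range(trials))) / trials
    print(adv)                        # close to 1 - 1/e here, since Q*MEM/N = 1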

This problem was first considered by Jaeger and Tessaro (EUROCRYPT 2019), who proved that the adversary's advantage is upper bounded by $\sqrt{Q \cdot S/N}$. Jaeger and Tessaro used this bound as a streaming switching lemma, which allowed them to prove that known time-memory tradeoff attacks on several modes of operation (such as counter-mode) are optimal up to a factor of $O(\log N)$ if $Q \cdot S \approx N$. However, if $Q \cdot S \ll N$, there is a gap between the upper bound of $\sqrt{Q \cdot S/N}$ and the $Q \cdot S/N$ advantage obtained by known attacks. Moreover, the bound's proof relied on an unproven combinatorial conjecture.

In this paper, we prove a tight upper bound (up to poly-logarithmic factors) of $O(\log Q \cdot Q \cdot S/N)$ on the adversary's advantage in the streaming distinguishing problem. The proof does not require a conjecture and is based on a reduction from communication complexity to streaming.

ePrint Report
On the complexity of the Permuted Kernel Problem
Eliane Koussa, Gilles Macario-Rat, Jacques Patarin

In this document, we investigate the complexity of a longstanding combinatorial
problem - namely the Permuted Kernel Problem (PKP) - for which no
new breakthrough has been reported in a while. PKP is an NP-complete algebraic
problem that consists of finding a kernel vector with particular entries for a
publicly known matrix. It is simple and needs only basic linear algebra. Hence,
this problem was used to develop the first Identification Scheme (IDS) which has
an efficient implementation on low-cost smart cards.
The Permuted Kernel Problem has been extensively studied. We
present a summary of the previously known algorithms devoted to solving this
problem and give an update of their complexity. Contrary to what is claimed in
the article of Joux and Jaulmes, and after a thorough analysis of the
state-of-the-art attacks on PKP, we claim that the most efficient algorithm for
solving PKP is still the one introduced by J. Patarin and P. Chauvaud. We have
been able to specify a theoretical bound on the complexity of PKP which allows
us to identify hard instances of the problem.
Despite all the attacks and the research effort, the best algorithms for
solving PKP remain exponential.

ePrint Report
Exploring the Monero Peer-to-Peer Network
Tong Cao, Jiangshan Yu, Jérémie Decouchant, Xiapu Luo, Paulo Verissimo

As of 12th January 2019, Monero is ranked as the
first privacy-preserving cryptocurrency by market capitalization,
and the 14th among all cryptocurrencies.
This paper aims at improving the understanding of the Monero
peer-to-peer network. We develop a tool set to explore the Monero
peer-to-peer network, including its network size, distribution, and
connectivity. In addition, we set up four Monero nodes that run
our tools: two in the U.S., one in Asia, and one in Europe. We
show that with only one week of data collection, our tool
already discovers 68.7% more daily active peers (on average)
than a Monero mining pool, called Monerohash, that also
provides data on the Monero node distribution.
Overall, we discovered 21,678 peer IP addresses in the Monero
network. Our results show that only 4,329 (about 20%) of these
peers are active. We also show that the nodes in the Monero
network follow a power-law distribution in the number of
connections they maintain. In particular, only 0.7% of the
active nodes maintain more than 250 outgoing connections
simultaneously, while a large proportion (86.8%) of the nodes
maintain fewer than 8 outgoing connections. These 86.8% of the
nodes account for only 17.14% of the overall connections in the
network, whereas the remaining 13.2% maintain 82.86% of the
overall connections. We also show that our tool is able to
dynamically infer the complete outgoing connections of 99.3%
of the nodes in the network, and to infer 250 outgoing
connections of the remaining 0.7%.

Sanitizable signatures allow a single, signer-defined sanitizer to modify signed messages in a controlled way without invalidating the respective signature. They have turned out to be a fascinating primitive, as evidenced by various variants and extensions, e.g., allowing multiple sanitizers or adding new sanitizers one-by-one. Still, existing constructions are very limited regarding their flexibility in specifying potential sanitizers. In this paper, we propose a different and more powerful approach: instead of using the sanitizers' public keys directly,
we assign attributes to them. Sanitizing is then based on policies, i.e., access structures defined over attributes.
A sanitizer can sanitize if, and only if, it holds a secret key for attributes satisfying the policy associated with a signature,
while the scheme offers full-scale accountability.

ePrint Report
Post-Quantum Provably-Secure Authentication and MAC from Mersenne Primes
Houda Ferradi, Keita Xagawa

This paper presents novel yet efficient secret-key authentication protocols and a MAC with post-quantum security, reduced to the conjectured quantum-safe hardness of the Mersenne Low Hamming Combination (MERS) assumption recently introduced by Aggarwal, Joux, Prakash, and Santha (CRYPTO 2018). Our protocols are very suitable for weak devices like smart cards and RFID tags.
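
For illustration, here is a minimal Python sketch of how MERS samples are generated; the assumption is that such samples look uniform. Parameters are illustrative, and the authentication protocols built on top of them are described in the paper.

    import secrets

    n, h = 521, 16                # 2^521 - 1 is a Mersenne prime; h = weight of sparse secrets
    p = 2**n - 1

    def low_weight(n: int, h: int) -> int:
        """Random n-bit integer of Hamming weight exactly h."""
        bits = set()
        while len(bits) < h:
            bits.add(secrets.randbelow(n))
        return sum(1 << i for i in bits)

    A, B = low_weight(n, h), low_weight(n, h)  # sparse secrets
    R = secrets.randbelow(p)                   # uniform public value
    sample = (R, (A * R + B) % p)              # MERS: conjectured indistinguishable
                                               # from a uniform pair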

This document includes a collision/forgery attack against SNEIKEN128/192/256,
where every message with more than 128 bytes of associated data can be converted
into another message with different associated data and the same ciphertext/tag.
The attack is a direct application of the probability-1 differential of the SNEIK
permutation found by Léo Perrin in [Per19]. We verify the attack using the
designers' reference implementation of SNEIKEN128 and provide an example of
such a collision.

ePrint Report
Privacy-Preserving Network Path Validation
Binanda Sengupta, Yingjiu Li, Kai Bu, Robert H. Deng

The end-users communicating over a network path currently have no control over the path. For a better quality of service, the source node often opts for a superior (or premium) network path in order to send packets to the destination node. However, the current Internet architecture provides no assurance that the packets indeed follow the designated path. Network path validation schemes address this issue and enable each node present on a network path to validate whether each packet has followed the specific path so far. In this work, we introduce two notions of privacy -- path privacy and index privacy -- in the context of network path validation. We show that, in case a network path validation scheme does not satisfy these two properties, the scheme is vulnerable to certain practical attacks (that affect the reliability, neutrality and quality of service offered by the underlying network). To the best of our knowledge, ours is the first work that addresses privacy issues related to network path validation. We design PrivNPV, a privacy-preserving network path validation protocol, that satisfies both path privacy and index privacy. We discuss several attacks related to network path validation and how PrivNPV defends against these attacks. Finally, we discuss the practicality of PrivNPV based on relevant parameters.

ePrint Report
Fine-Grained and Controlled Rewriting in Blockchains: Chameleon-Hashing Gone Attribute-Based
David Derler, Kai Samelin, Daniel Slamanig, Christoph Striecks

Blockchain technologies recently received a considerable amount
of attention. While the initial focus was mainly on the use of
blockchains in the context of cryptocurrencies such as Bitcoin, application
scenarios now go far beyond this. Most blockchains have the property
that once some object, e.g., a block or a transaction, has been registered
to be included into the blockchain, it is persisted and there are
no means to modify it again. While this is an essential feature of most
blockchain scenarios, it is still often desirable - at times it may even be
legally required - to allow for breaking this immutability in a controlled
way.
Only recently, Ateniese et al. (EuroS&P 2017) proposed an elegant
solution to this problem on the block level. Thereby, the authors replace
standard hash functions with so-called chameleon-hashes (Krawczyk and
Rabin, NDSS 2000). While their work seems to offer a suitable solution to
the problem of controlled re-writing of blockchains, their approach is too
coarse-grained in that it only offers an all-or-nothing solution. We revisit
this idea and introduce the novel concept of policy-based chameleon-hashes
(PCH). PCHs generalize the notion of chameleon-hashes by giving
the party computing a hash the ability to associate access policies to the
generated hashes. Anyone who possesses enough privileges to satisfy the
policy can then find arbitrary collisions for a given hash. We then apply
this concept to transaction-level rewriting within blockchains, and thus
support fine-grained and controlled modifiability of blockchain objects.
Besides modeling PCHs, we present a generic construction of PCHs (using
a strengthened version of chameleon-hashes with ephemeral trapdoors
which we also introduce), rigorously prove its security, and instantiate it
with efficient building blocks. We report first implementation results.
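
For reference, a minimal Python sketch of the discrete-log chameleon-hash of Krawczyk and Rabin cited above, over a toy group: anyone can evaluate the hash, but only the trapdoor holder finds collisions. PCHs replace this single trapdoor with attribute-based access policies.

    import secrets

    p, q, g = 1019, 509, 4              # toy Schnorr group; real use needs >= 256-bit q
    x = secrets.randbelow(q - 1) + 1    # trapdoor
    h = pow(g, x, p)                    # public hashing key

    def ch(m: int, r: int) -> int:      # chameleon hash H(m, r) = g^m * h^r mod p
        return pow(g, m, p) * pow(h, r, p) % p

    m, r = 123, secrets.randbelow(q)
    m2 = 456                            # new message for which we want a collision
    r2 = (r + (m - m2) * pow(x, -1, q)) % q   # solve m + x*r = m2 + x*r2 (mod q)
    assert ch(m, r) == ch(m2, r2)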

ePrint Report
A Novel FPGA Architecture and Protocol for the Self-attestation of Configurable Hardware
Jo Vliegen, Md Masoom Rabbani, Mauro Conti, Nele Mentens

Field-Programmable Gate Arrays (FPGAs) are popular platforms for hardware-based attestation. They offer protection
against physical and remote attacks by verifying whether an embedded processor is running the intended application code. However, since FPGAs are configurable after deployment (and thus not tamper-resistant), they are susceptible to attacks, just like microprocessors. Therefore, attesting an electronic system that uses an FPGA should be done by verifying the status of both the software and the hardware, without relying on a dedicated tamper-resistant hardware module.
Inspired by the work of Perito and Tsudik, this paper proposes a partially reconfigurable FPGA architecture and attestation protocol that enable the self-attestation of the FPGA. Through the use of our solution, the FPGA can be used as a trusted hardware module to perform hardware-based attestation of a processor. This way, an entire hardware/software system can be protected against malicious code updates.

ePrint Report
Efficient Message Authentication Codes with Combinatorial Group Testing
Kazuhiko Minematsu

A message authentication code, MAC for short, is a symmetric-key cryptographic function for authenticity. A standard MAC verification only tells whether the message is valid or invalid, so we cannot identify which part is corrupted when a message is invalid.
In this paper we study a class of MAC functions that enables identification of the corrupted parts, which we call group testing MAC (GTM). This can be seen as an application of classical (non-adaptive) combinatorial group testing to MACs.
Although the basic concept of GTM (or its keyless variant) has been proposed in various application areas, such as data forensics and computer virus testing, these proposals treat the underlying MAC function as a black box, and the exact computation cost of GTM seems to have been overlooked.
In this paper, we study the computational aspect of GTM, and show that a simple yet non-trivial extension of the parallelizable MAC (PMAC) enables $O(m+t)$ computation for $m$ data items and $t$ tests, irrespective of the underlying test matrix, under a natural security model. This greatly improves efficiency over naively applying a black-box MAC for each test, which requires $O(mt)$ time. Based on existing group testing methods, we also present experimental results for our proposal and observe that it runs as fast as computing a single MAC tag, with a speed-up over the conventional method by a factor of around 8 to 15 for $m=10^4$ to $10^5$ items.

ePrint Report
Fast and simple constant-time hashing to the BLS12-381 elliptic curve
Riad S. Wahby, Dan Boneh

Pairing-friendly elliptic curves in the Barreto-Lynn-Scott family have
experienced a resurgence in popularity due to their use in a number of
real-world projects. One particular Barreto-Lynn-Scott curve, called
BLS12-381, is the locus of significant development and deployment effort,
especially in blockchain applications. This effort has sparked interest
in using BLS12-381 for BLS signatures, and in particular for aggregatable
signatures, which require hashing to one of the groups of the bilinear
pairing defined by the BLS12-381 elliptic curve.

While there is a substantial body of literature on the problem of hashing to elliptic curves, much of this work does not apply to Barreto-Lynn-Scott curves. Moreover, the work that does apply has the unfortunate property that fast implementations are complex, while simple implementations are slow.

In this work, we address these issues. First, we show a straightforward way of adapting the "simplified SWU" map of Brier et al. to BLS12-381. Second, we describe optimizations to the SWU map that both simplify its implementation and improve its performance; these optimizations may be of interest in other contexts. Third, we implement and evaluate. We find that our work yields constant-time hash functions that are simple to implement, yet perform within 9% of the fastest, non-constant-time alternatives, which require much more complex implementations.
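
For concreteness, a minimal Python sketch of the simplified SWU map over a toy curve $y^2 = x^3 + Ax + B$ with $A \cdot B \neq 0$ (all parameters illustrative). BLS12-381 itself has $A = 0$, which is exactly why the paper first maps to an isogenous curve; the core map is the same.

    p = 10007                  # toy prime with p % 4 == 3, so sqrt is one exponentiation
    A, B = 3, 5                # toy curve y^2 = x^3 + A*x + B with A*B != 0

    def g(x): return (x * x % p * x + A * x + B) % p
    def is_square(v): return v == 0 or pow(v, (p - 1) // 2, p) == 1
    def sqrt_p(v): return pow(v, (p + 1) // 4, p)

    # Z: a fixed non-square such that g(B/(Z*A)) is a square (exceptional case)
    Z = next(z for z in range(2, p)
             if not is_square(z) and is_square(g(B * pow(z * A % p, -1, p) % p)))

    def swu(u: int):
        s = Z * u * u % p
        if (s * s + s) % p == 0:                   # u = 0 or Z*u^2 = -1
            x = B * pow(Z * A % p, -1, p) % p
        else:
            x = -B * pow(A, -1, p) % p * (1 + pow((s * s + s) % p, -1, p)) % p
            if not is_square(g(x)):                # g(s*x) = s^3 * g(x), and s^3 is
                x = s * x % p                      # a non-square, so g(s*x) is square
        y = sqrt_p(g(x))
        assert (y * y - g(x)) % p == 0
        return x, y

    print(swu(0), swu(1), swu(42))                 # three points on the curve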

ePrint Report
ILC: A Calculus for Composable, Computational Cryptography
Kevin Liao, Matthew A. Hammer, Andrew Miller

The universal composability (UC) framework is the established standard for
analyzing cryptographic protocols in a modular way, such that security is
preserved under concurrent composition with arbitrary other protocols.
However, although UC is widely used for on-paper proofs, prior attempts at
systemizing it have fallen short, either by using a symbolic model (thereby
ruling out computational reduction proofs), or by limiting its expressiveness.

In this paper, we lay the groundwork for building a concrete, executable implementation of the UC framework. Our main contribution is a process calculus, dubbed the Interactive Lambda Calculus (ILC). ILC faithfully captures the computational model underlying UC---interactive Turing machines (ITMs)---by adapting ITMs to a subset of the pi-calculus through an affine typing discipline. In other words, well-typed ILC programs are expressible as ITMs. In turn, ILC's strong confluence property enables reasoning about cryptographic security reductions. We use ILC to develop a simplified implementation of UC called SaUCy.

ePrint Report
Side-Channel assessment of Open Source Hardware Wallets
Manuel San Pedro, Victor Servant, Charles Guillemet

Side-channel attacks rely on the fact that the physical behavior of a device depends on the data it manipulates. We show in this paper how to use this class of attacks to break the security of some cryptocurrency hardware wallets when the attacker is given physical access to them. We mounted two profiled side-channel attacks: the first extracts the user PIN used in the verification function, and the second extracts the private signing key from the ECDSA scalar multiplication using a single signature. The results of our study were responsibly disclosed to the manufacturer, who patched the PIN vulnerability through a firmware upgrade.

18 April 2019

ePrint Report
Degenerate Fault Attacks on Elliptic Curve Parameters in OpenSSL
Akira Takahashi, Mehdi Tibouchi

In this paper, we describe several practically exploitable fault attacks against OpenSSL's implementation of elliptic curve cryptography, related to the singular curve point decompression attacks of Blömer and Günther (FDTC 2015) and the degenerate curve attacks of Neves and Tibouchi (PKC 2016).

In particular, we show that OpenSSL makes it possible to construct EC key files containing explicit curve parameters with a compressed base point. A simple single fault injection upon loading such a file yields a full key recovery attack when the key file is used for signing with ECDSA, and a complete recovery of the plaintext when the file is used for encryption using an algorithm like ECIES. The attack is especially devastating against curves with $j$-invariant equal to 0, such as the Bitcoin curve secp256k1, for which key recovery reduces to a single division in the base field.
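
To see why key recovery on a $j$-invariant-0 curve can reduce to a single division: if a fault lands computations on the singular curve $y^2 = x^3$, the map $\phi(x,y) = x/y$ sends its smooth points to the additive group of the base field, making the discrete log trivial. Below is a toy Python illustration of this classical degenerate-curve observation (not the exact OpenSSL attack path; parameters illustrative).

    p = 10007                          # toy prime; the idea scales to secp256k1's field

    def add(P, Q):                     # chord-and-tangent addition on y^2 = x^3 over F_p
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0: return None
        if P == Q:
            lam = 3 * x1 * x1 * pow(2 * y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return x3, (lam * (x1 - x3) - y1) % p

    def mul(d, P):                     # double-and-add scalar multiplication
        R = None
        while d:
            if d & 1: R = add(R, P)
            P, d = add(P, P), d >> 1
        return R

    P = (4, 8)                         # 8^2 = 64 = 4^3: a smooth point of y^2 = x^3
    d = 3141                           # the "secret"
    Q = mul(d, P)
    phi = lambda T: T[0] * pow(T[1], -1, p) % p    # isomorphism to (F_p, +)
    assert phi(Q) * pow(phi(P), -1, p) % p == d    # dlog = one division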

Additionally, we apply the present fault attack technique to OpenSSL's implementation of ECDH, by combining it with Neves and Tibouchi's degenerate curve attack. This version of the attack applies to usual named curve parameters with nonzero $j$-invariant, such as P192 and P256. Although it is typically more computationally expensive than the one against signatures and encryption, and requires multiple faulty outputs from the server, it can recover the entire static secret key of the server even in the presence of point validation.

These various attacks can be mounted with only a single instruction-skipping fault, and can therefore be easily injected using low-cost voltage glitches on embedded devices. We validated them in practice using concrete fault injection experiments on a Raspberry Pi single-board computer running the up-to-date OpenSSL command-line tools---a setting where the threat of fault attacks is quite significant.

Non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs in ICS 2010, have emerged in the last few years as a fundamental object at the intersection of cryptography and coding theory. Non-malleable codes provide a useful message integrity guarantee in situations where traditional error-correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely ``unrelated value''. Although such codes do not exist if the family of ``tampering functions'' $\mathcal{F}$ allowed to modify the original codeword is completely unrestricted, they are known to exist for many broad tampering families $\mathcal{F}$.

The family which has received the most attention is the family of tampering functions in the so-called (2-part) {\em split-state} model: here the message $x$ is encoded into two shares $L$ and $R$, and the attacker is allowed to arbitrarily tamper with each of $L$ and $R$ individually.

Dodis, Kazana, and the authors in STOC 2015 introduced a generalization of non-malleable codes, the concept of a non-malleable reduction, where a non-malleable code for a tampering family $\mathcal{F}$ can be seen as a non-malleable reduction from $\mathcal{F}$ to a family NM of functions comprising the identity function and constant functions. They also gave a constant-rate reduction from the split-state tampering family to a tampering family $\mathcal{G}$ containing so-called $2$-lookahead functions and forgetful functions.

In this work, we give a constant-rate non-malleable reduction from the family $\mathcal{G}$ to NM, thereby giving the first {\em constant-rate non-malleable code in the split-state model}.

Central to our work is a technique called inception coding, introduced by Aggarwal, Kazana and Obremski in TCC 2017, in which a string that detects tampering on a part of the codeword is concatenated to the message being encoded.

ePrint Report
Constant-Round Group Key Exchange from the Ring-LWE Assumption
Daniel Apon, Dana Dachman-Soled, Huijing Gong, Jonathan Katz

Group key-exchange protocols allow a set of $N$ parties to agree on a shared, secret key by communicating over a public network. A number of solutions to this problem have been proposed over the years, mostly based on variants of Diffie-Hellman (two-party) key exchange. There has been relatively little work, however, looking at candidate post-quantum group key-exchange protocols.

Here, we propose a constant-round protocol for unauthenticated group key exchange (i.e., with security against a passive eavesdropper) based on the hardness of the Ring-LWE problem. By applying the Katz-Yung compiler using any post-quantum signature scheme, we obtain a (scalable) protocol for authenticated group key exchange with post-quantum security. Our protocol is constructed by generalizing the Burmester-Desmedt protocol to the Ring-LWE setting, which requires addressing several technical challenges.
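
For reference, a toy Python sketch of the classic Burmester-Desmedt protocol over a Schnorr group, which is the template the paper transfers to the Ring-LWE setting (parameters illustrative; unauthenticated, no error handling):

    import secrets

    p, q, g = 1019, 509, 4    # toy Schnorr group; the paper replaces these
                              # exponentiations with Ring-LWE operations
    n = 5                     # parties arranged in a cycle

    r = [secrets.randbelow(q - 1) + 1 for _ in range(n)]
    # Round 1: party i broadcasts z_i = g^{r_i}
    z = [pow(g, ri, p) for ri in r]
    # Round 2: party i broadcasts X_i = (z_{i+1} / z_{i-1})^{r_i}
    X = [pow(z[(i + 1) % n] * pow(z[(i - 1) % n], -1, p) % p, r[i], p)
         for i in range(n)]

    def key(i):   # every party derives K = g^{r_0 r_1 + r_1 r_2 + ... + r_{n-1} r_0}
        K = pow(z[(i - 1) % n], n * r[i], p)
        for j in range(n - 1):
            K = K * pow(X[(i + j) % n], n - 1 - j, p) % p
        return K

    assert len({key(i) for i in range(n)}) == 1    # all parties agree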

ePrint Report
Feistel Structures for MPC, and More
Martin R. Albrecht, Lorenzo Grassi, Léo Perrin, Sebastian Ramacher, Christian Rechberger, Dragos Rotaru, Arnab Roy, Markus Schofnegger

We study approaches to generalized Feistel constructions with low-degree round functions with a focus on $x \mapsto x^3$. Besides known constructions, we also provide a new balanced Feistel construction with improved diffusion properties. This then allows us to propose more efficient generalizations of the MiMC design (Asiacrypt’16), which we in turn evaluate in three application areas. Whereas MiMC was not competitive at all in a recently proposed new class of PQ-secure signature schemes, our new construction leads to about 30 times smaller signatures than MiMC. In MPC use cases, where MiMC outperforms all other competitors, we observe improvements in throughput by a factor of more than 7 and simultaneously a 16-fold reduction of preprocessing effort, albeit at the cost of a higher latency. Another use case where MiMC already outperforms other designs, in the area of SNARKs, sees modest improvements. Additionally, this use case benefits from the flexibility to use smaller fields.
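
For concreteness, a toy Python sketch of a MiMC-style Feistel network with round function $x \mapsto x^3$ over a small prime field; the constants and parameters are made up for illustration and this is not the paper's new construction.

    p = 10007                  # toy prime with gcd(3, p - 1) = 1, so cubing is a bijection
    ROUNDS = 8
    C = [(17 * i * i + 3) % p for i in range(ROUNDS)]   # made-up round constants

    def encrypt(xl, xr, k):
        for c in C:            # x_{i+1} = x_{i-1} + (x_i + k + c_i)^3
            xl, xr = (xr + pow((xl + k + c) % p, 3, p)) % p, xl
        return xl, xr

    def decrypt(xl, xr, k):
        for c in reversed(C):
            xl, xr = xr, (xl - pow((xr + k + c) % p, 3, p)) % p
        return xl, xr

    assert decrypt(*encrypt(123, 456, k=77), k=77) == (123, 456)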

ePrint Report
Mitigation Techniques for Attacks on 1-Dimensional Databases that Support Range Queries
Evangelia Anna Markatou, Roberto Tamassia

In recent years, a number of attacks have been developed that can reconstruct encrypted one-dimensional databases that support range queries under the persistent passive adversary model. These attacks allow an (honest but curious) adversary (such as the cloud provider) to find the order of the elements in the database and, in some cases, to even reconstruct the database itself.

In this paper we present two mitigation techniques to make it harder for the adversary to reconstruct the database. The first technique makes it impossible for an adversary to reconstruct the values stored in the database with an error smaller than $k/2$, for $k$ chosen by the client. By fine-tuning $k$, the user can increase the adversary's error at will.

The second technique is targeted towards adversaries who have managed to learn the distribution of the queries issued. Such adversaries may be able to reconstruct most of the database after seeing a very small (i.e., poly-logarithmic) number of queries. To neutralize such adversaries, our technique turns the database into a circular buffer. All known techniques that exploit knowledge of the distribution fail, and no technique can determine which record is first (or last) based on access pattern leakage.
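
As a rough Python sketch of the first technique as we read it (illustrative, not necessarily the paper's exact construction): snapping stored values to buckets of width $k$ caps the adversary's reconstruction accuracy at $k/2$, while range queries are expanded to bucket boundaries.

    K = 16                                  # privacy parameter chosen by the client

    def store(value: int) -> int:
        # Snap each value to the center of its width-K bucket before encryption;
        # reconstructing stored values then incurs error up to K/2.
        return (value // K) * K + K // 2

    def range_query(lo: int, hi: int):
        # Expand endpoints to bucket boundaries so matching still works.
        return (lo // K) * K, (hi // K) * K + K - 1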

ePrint Report
Full Database Reconstruction with Access and Search Pattern Leakage
Evangelia Anna Markatou, Roberto Tamassia

The widespread use of cloud computing has enabled several database
providers to store their data on servers in the cloud and answer
queries from those servers. In order to protect the confidentiality
of the data stored in the cloud, a database can be stored in an
encrypted form and all queries can be executed on top of the
encrypted database. Recent research results suggest that a curious cloud provider may be able to decrypt some of the items in the database after seeing a large number of queries and their (encrypted) results.

In this paper, we focus on one-dimensional databases that support range queries and develop an attack that can achieve full database reconstruction, inferring the exact value of every element in the database. Previous work on full database reconstruction depends on a client issuing queries uniformly at random.

Let $N$ be the number of elements in the database. Our attack succeeds after the attacker has seen each of the possible query results at least once, independent of their distribution. For the sake of query complexity analysis and comparison with relevant work, if we assume that the client issues queries uniformly at random, we can decrypt the entire database after observing $O(N^2 \log N)$ queries with high probability, an improvement upon Kellaris et al.'s $O(N^4 \log N)$.
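
For intuition on the query complexity under uniform queries: there are $N(N+1)/2 = O(N^2)$ distinct range queries over $N$ values, and by the standard coupon-collector argument, seeing each of $M$ equally likely outcomes at least once takes about $M \ln M$ queries in expectation; with $M = O(N^2)$ this gives the stated $O(N^2 \log N)$.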

ePrint Report
Masking Dilithium: Efficient Implementation and Side-Channel Evaluation
Vincent Migliore, Benoı̂t Gérard, Mehdi Tibouchi, Pierre-Alain Fouque

Although security against side-channel attacks is not an explicit design criterion of the NIST post-quantum standardization effort, it is certainly a major concern for schemes that are meant for real-world deployment. In view of the numerous physical attacks that have been proposed against post-quantum schemes in recent literature, it is in particular very important to evaluate the cost and effectiveness of side-channel countermeasures in that setting.

For lattice-based signatures, this work was initiated by Barthe et al., who showed at EUROCRYPT 2018 how to apply arbitrary order masking to the GLP signature scheme presented at CHES 2012 by Güneysu, Lyubashevsky and Pöppelmann. However, although Barthe et al.’s paper provides detailed proofs of security in the probing model of Ishai, Sahai and Wagner, it does not include practical side-channel evaluations, and its proof-of-concept implementation has limited efficiency. Moreover, the GLP scheme has historical significance but is not a NIST candidate, nor is it being considered for concrete deployment.

In this paper, we look instead at Dilithium, one of the most promising NIST candidates for post-quantum signatures. This scheme, presented at CHES 2018 by Ducas et al. and based on module lattices, can be seen as an updated variant of both GLP and its more efficient sibling BLISS; it comes with an implementation that is both efficient and constant-time.

Our analysis of Dilithium from a side-channel perspective is threefold. We first evaluate the side-channel resistance of an ARM Cortex-M3 implementation of Dilithium without masking, and identify exploitable side-channel leakage. We then describe how to securely mask the scheme, and verify that the masked implementation no longer leaks. Finally, we show how a simple tweak to Dilithium (namely, replacing the prime modulus by a power of two) makes it possible to obtain a considerably more efficient masked scheme, by a factor of 7.3 to 9 for the most time-consuming masking operations, without affecting security.
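
To illustrate why a power-of-two modulus helps, here is a toy first-order arithmetic masking of a single coefficient in Python: with $q = 2^k$ the shares are plain machine words and reduction is a free bitmask, whereas a prime $q$ forces explicit modular reductions. (The paper's masked implementation of course also covers nonlinear operations and higher masking orders.)

    import secrets

    K = 13
    Q = 1 << K                       # power-of-two modulus (the paper's tweak)
    MASK = Q - 1

    def share(x: int):
        r = secrets.randbelow(Q)     # uniform mask
        return (x - r) & MASK, r     # x = s0 + s1 (mod 2^K); reduction is a bitmask

    def add_masked(a, b):            # linear operations work share-wise
        return (a[0] + b[0]) & MASK, (a[1] + b[1]) & MASK

    a, b = share(1234), share(567)
    s = add_masked(a, b)
    assert (s[0] + s[1]) & MASK == (1234 + 567) & MASK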

ePrint Report
A Tight Parallel-Repetition Theorem for Random-Terminating Interactive Arguments
Itay Berman, Iftach Haitner, Eliad Tsfadia

Soundness amplification is a central problem in the study of interactive protocols. While the ``natural'' parallel repetition transformation is known to reduce the soundness error of some special cases of interactive arguments, namely three-message protocols and public-coin protocols, it fails to do so in the general case.

The only known round-preserving approach that applies to the general case of interactive arguments is Haitner's "random-terminating" transform [FOCS '09, SiCOMP '13]. Roughly speaking, a protocol $\pi$ is first transformed into a new slightly modified protocol $\widetilde{\pi}$, referred to as the random-terminating variant of $\pi$, and then parallel repetition is applied. Haitner's analysis shows that the parallel repetition of $\widetilde{\pi}$ does reduce the soundness error at a weak exponential rate. More precisely, if $\pi$ has $m$ rounds and soundness error $1-\epsilon$, then $\widetilde{\pi}^k$, the $k$-parallel repetition of $\widetilde{\pi}$, has soundness error $(1-\epsilon)^{\epsilon k / m^4}$. Since the security of many cryptographic protocols (e.g., binding) depends on the soundness of a related interactive argument, improving the above analysis is a key challenge in the study of cryptographic protocols.

In this work we introduce a different analysis for Haitner's method, proving that parallel repetition of random terminating protocols reduces the soundness error at a much stronger exponential rate: the soundness error of $\widetilde{\pi}^k$ is $(1-\epsilon)^{k / m}$, only an $m$ factor from the optimal rate of $(1-\epsilon)^k$, achievable in public-coin and three-message protocols. We prove the tightness of our analysis by presenting a matching protocol.
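
For illustration, with $m = 10$ rounds and $\epsilon = 1/2$, driving the soundness error below $2^{-40}$ requires $k \geq 40 m^4/\epsilon = 800{,}000$ repetitions under the old $(1-\epsilon)^{\epsilon k/m^4}$ bound, but only $k = 40m = 400$ under the new $(1-\epsilon)^{k/m}$ bound, compared to $k = 40$ at the optimal rate $(1-\epsilon)^k$.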

ePrint Report
New Conditional Cube Attack on Keccak Keyed Modes
Zheng Li, Xiaoyang Dong, Wenquan Bi, Keting Jia, Xiaoyun Wang, Willi Meier

The conditional cube attack on round-reduced \textsc{Keccak} keyed modes was proposed by Huang et al. at EUROCRYPT 2017. In their attack, a conditional cube variable is introduced whose diffusion is significantly reduced by certain key bit conditions. Then, a set of cube variables is found that are not multiplied with each other after the first round, while the conditional cube variable is not multiplied with any other cube variable (called ordinary cube variables) after the second round. This limits the degree of the output of \textsc{Keccak} and hence gives a distinguisher. Later, the MILP method was applied to find ordinary cube variables. However, for some \textsc{Keccak}-based versions with low degrees of freedom, one cannot find enough ordinary cube variables, which weakens or even invalidates the conditional cube attack.

In this paper, a new conditional cube attack on \textsc{Keccak} is proposed. We remove the limitation that no cube variables multiply with each other in the first round. As a result, some quadratic terms may appear after the first round. We make use of some new bit conditions to prevent the quadratic terms from multiplying with other cube variables in the second round, so that there are no cubic terms after the second round. Furthermore, we introduce the kernel quadratic term and construct a 6-2-2 pattern to reduce the diffusion of quadratic terms significantly, where the $\theta$ operation even in the second round becomes an identity transformation (CP-kernel property) for the kernel quadratic term. Previous conditional cube attacks on \textsc{Keccak} only explored the CP-kernel property of the $\theta$ operation in the first round. Therefore, more degrees of freedom are available for ordinary cube variables and fewer bit conditions are needed to remove the cubic terms after the second round, which plays a key role in the conditional cube attack on versions with very low degrees of freedom. We also use the MILP method in the search for cube variables and give key-recovery attacks on round-reduced \textsc{Keccak} keyed modes.

As a result, we reduce the time complexity of key-recovery attacks on 7-round \textsc{Keccak}-MAC-512 and 7-round \textsc{Ketje Sr} v2 from $2^{111}$ and $2^{99}$ to $2^{72}$ and $2^{77}$, respectively. Additionally, we reduce the time complexity of attacks on 9-round \texttt{KMAC256} and 7-round \textsc{Ketje Sr} v1. Besides, we give the first practical attacks on 6-round \textsc{Ketje Sr} v1 and v2.

ePrint Report
Fooling the Sense of Cross-core Last-level Cache Eviction based Attacker by Prefetching Common Sense
Biswabandan Panda

Timing Channels
Cache side-channels
Information leakage

ePrint Report
KeyForge: Mitigating Email Breaches with Forward-Forgeable Signatures
Michael Specter, Sunoo Park, Matthew Green

Email breaches are commonplace, and they expose a wealth of personal, business, and political data that may have devastating consequences. The current email system allows any attacker who gains access to a user's email to prove the authenticity of the stolen messages to third parties -- a property arising from a necessary anti-spam / anti-spoofing protocol called DKIM. This exacerbates the problem of email breaches by greatly increasing the potential for attackers to damage the user's reputation, blackmail them, or sell the stolen information to third parties.

In this paper, we introduce "non-attributable email", which guarantees that a wide class of adversaries are unable to convince any third party of the authenticity of stolen emails. We formally define non-attributability, and present two practical system proposals -- KeyForge and TimeForge -- that provably achieve non-attributability while maintaining the important protection against spam and spoofing that is currently provided by DKIM. Moreover, we implement KeyForge and demonstrate that the scheme is practical, achieving competitive verification and signing speed while also requiring 42% less bandwidth per email than RSA2048.
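
One way to see how forward forgeability can yield non-attributability (a toy Python sketch of the general idea, with an HMAC standing in for DKIM's signature scheme; this is not the KeyForge or TimeForge construction): sign with short-lived epoch keys and publish each epoch's secret key once it expires, so fresh mail still verifies while anyone can forge old messages.

    import time, hmac, hashlib, secrets

    EPOCH = 15 * 60                       # illustrative epoch length, in seconds
    epoch_keys = {}                       # live per-epoch secret keys
    published = {}                        # expired keys, publicly released

    def current_epoch() -> int:
        return int(time.time()) // EPOCH

    def sign(msg: bytes):
        e = current_epoch()
        k = epoch_keys.setdefault(e, secrets.token_bytes(32))
        return e, hmac.new(k, msg, hashlib.sha256).digest()

    def expire_old_epochs():
        for e in list(epoch_keys):
            if e < current_epoch():       # the delay has passed: reveal the key
                published[e] = epoch_keys.pop(e)

    # Once published[e] is out, *anyone* can produce valid tags for epoch e, so
    # a leaked archive of old signed mail proves nothing to a third party, while
    # mail in the current epoch still verifies as authentic.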

ePrint Report
Achieving secure and efficient lattice-based public-key encryption: the impact of the secret-key distribution
Sauvik Bhattacharya, Oscar Garcia-Morchon, Rachel Player, Ludo Tolhuizen

Lattice-based public-key encryption has a large
number of design choices that can be combined in
diverse ways to obtain different tradeoffs. One of these choices is
the distribution from which secret keys are sampled.
Numerous secret-key distributions exist in the state of the
art, including (discrete) Gaussian, binomial, ternary, and fixed-weight ternary.
Although the choice of the distribution
impacts both the concrete security and the performance of
the schemes, it has not been explicitly compared how
the choice of secret-key distribution affects this tradeoff.

In this paper, we compare different aspects of secret-key distributions that appear in submissions to the NIST post-quantum standardization effort. We first consider their impact on concrete security (influenced by the entropy and variance of the distribution), on decryption failures and IND-CCA2 security (influenced by the probability of sampling keys with ``non average, large'' norm), and on the key sizes. Next, we select concrete parameters of an encryption scheme instantiated with the above distributions, optimized for key sizes, to identify which distribution(s) offer the best tradeoffs between security and performance.

We draw two main conclusions from the results of the above optimization. Firstly, fixed-weight ternary secret keys result in the smallest key sizes of the encryption scheme. Such secret keys reduce the decryption failure rate and hence allow for a higher noise-to-modulus ratio, alleviating the slight increase in lattice dimension required for countering specialized attacks that apply in this case. Secondly, fixed-weight ternary secret keys result in the scheme becoming more secure against decryption failure-based IND-CCA2 attacks, as compared to secret keys with independently sampled components.
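
For concreteness, minimal Python samplers for the kinds of distributions compared above; the fixed-weight variant has a fixed norm by construction, which is what removes the ``non average, large'' norm tail.

    import secrets

    def binomial(eta: int) -> int:
        # centered binomial sample: difference of two sums of eta coin flips
        return (sum(secrets.randbelow(2) for _ in range(eta))
                - sum(secrets.randbelow(2) for _ in range(eta)))

    def ternary(n: int):
        # independent uniform ternary coefficients in {-1, 0, 1}
        return [secrets.randbelow(3) - 1 for _ in range(n)]

    def fixed_weight_ternary(n: int, w: int):
        # exactly w/2 coefficients +1 and w/2 coefficients -1: the norm is
        # fixed, removing the "large-norm key" tail discussed above
        v = [1] * (w // 2) + [-1] * (w // 2) + [0] * (n - w)
        secrets.SystemRandom().shuffle(v)
        return v

    key = fixed_weight_ternary(n=256, w=128)
    assert sum(map(abs, key)) == 128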

While digital secret keys appear indispensable in
modern cryptography and security, they also routinely constitute
a main attack point of the resulting hardware systems. Some
recent approaches have tried to overcome this problem by simply
avoiding keys and secrets in vulnerable systems. To start with,
physical unclonable functions (PUFs) have demonstrated how
“classical keys”, i.e., permanently stored digital secret keys, can
be evaded, realizing security devices that might be called “classically
key-free”. Still, most PUFs induce certain types of physical
secrets deep in the hardware, whose disclosure to adversaries
breaks security as well. Examples include the manufacturing
variations that determine the power-up states of SRAM PUFs,
or the signal runtimes of Arbiter PUFs, both of which have been
extracted from PUF-hardware in practice, breaking security.
A second generation of physical security primitives, such as
SIMPLs/PPUFs and Unique Objects, has recently shown promise to
overcome this issue, however. Perhaps counterintuitively, they
would enable completely “secret-free” hardware, where adversaries
might inspect every bit and atom, and learn any information
present in any form in the hardware, without being able to break
security. This concept paper takes this situation as its starting point,
and categorizes, formalizes, and surveys the currently emerging
areas of key-free and, more importantly, secret-free security. Our
treatment puts keys, secrets, and their respective avoidance into
the center of the currently emerging physical security methods.
It thereby aims to lay the foundations for future, secret-free security
hardware, which would be innately and provably immune against
any physical probing and key extraction.

17 April 2019

Event Calendar
IFIP SC 2019: 14th IFIP Summer School on Privacy and Identity Management
Windisch, Switzerland, 19 August - 23 August 2019

Event date: 19 August to 23 August 2019

Submission deadline: 6 May 2019

Notification: 6 June 2019

Event Calendar
IEEE TII: IEEE Transactions on Industrial Informatics Special Section
7 April - 30 November 2019

Event date: 7 April to 30 November 2019

Submission deadline: 30 June 2019

Event Calendar
ICISC'19: The 22nd Annual International Conference on Information Security and Cryptology
Seoul, Republic of Korea, 4 December - 6 December 2019

Event date: 4 December to 6 December 2019

Submission deadline: 29 August 2019

Notification: 21 October 2019
