International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

23 September 2018

Apoorvaa Deshpande, Yael Kalai
ePrint Report
We consider the following paradoxical question: Can one prove lack of knowledge? We define the notion of Proofs of Ignorance, construct such proofs, and use these proofs to construct a 2-message witness hiding protocol for all of NP.

More specifically, we define a proof of ignorance (PoI) with respect to any language L in NP and distribution D over instances in L. Loosely speaking, such a proof system allows a prover to generate an instance x according to D along with a proof that she does not know a witness corresponding to x. We construct a PoI protocol for any random self-reducible NP language L that is hard on average. Our PoI protocol is non-interactive assuming the existence of a common reference string.

We use such a PoI protocol to construct a 2-message witness hiding protocol for NP with adaptive soundness. Constructing a 2-message WH protocol for all of NP has been a long-standing open problem. We construct our witness hiding protocol using the following ingredients (where T is any super-polynomial function in the security parameter):

1. T-secure PoI protocol,

2. T-secure non-interactive witness indistinguishable (NIWI) proofs,

3. T-secure rerandomizable encryption with strong KDM security,

where the first two ingredients can be constructed based on the T-security of DLIN.

At the heart of our witness-hiding proof is a new non-black-box technique. As opposed to previous works, we do not apply an efficiently computable function to the code of the cheating verifier; rather, we resort to a form of case analysis and show that the prover's message can be simulated in both cases, without knowing in which case we reside.
Nir Bitansky, Omer Paneth
ePrint Report
The round complexity of zero-knowledge protocols is a long-standing open question, yet to be settled under standard assumptions. So far, the question has appeared equally challenging for relaxations such as weak zero-knowledge and witness hiding. Protocols satisfying these relaxed notions, under standard assumptions, have at least four messages, just like full-fledged zero knowledge. The difficulty in improving round complexity stems from a fundamental barrier: none of these notions can be achieved in three messages via reductions (or simulators) that treat the verifier as a black box.

We introduce a new non-black-box technique and use it to obtain the first protocols that cross this barrier under standard assumptions. Our main results are:

- Weak zero-knowledge for NP in three messages, assuming fully-homomorphic encryption and other standard primitives (known, for example, under the Learning with Errors assumption).

- Weak zero-knowledge for NP intersect coNP in two messages, assuming in addition non-interactive witness-indistinguishable proofs.

We also give a witness-hiding protocol for NP in two messages, assuming in addition witness encryption. This protocol is also publicly verifiable.

Our technique is based on a new homomorphic trapdoor paradigm, which can be seen as a non-black-box analog of the classical Feige-Lapidot-Shamir trapdoor paradigm.
Benny Applebaum, Zvika Brakerski, Rotem Tsabary
ePrint Report
We show that any multi-party functionality can be evaluated using a two-round protocol with perfect correctness and perfect semi-honest security, provided that the majority of parties are honest. This settles the round complexity of information-theoretic semi-honest MPC, resolving a longstanding open question (cf. Ishai and Kushilevitz, FOCS 2000). The protocol is efficient for $NC^1$ functionalities. Furthermore, given black-box access to a one-way function, the protocol can be made efficient for any polynomial functionality, at the cost of only guaranteeing computational security.

Technically, we extend and relax the notion of randomized encoding to specifically address multi-party functionalities. The property of a multi-party randomized encoding (MPRE) is that if the functionality $g$ is an encoding of the functionality $f$, then for any (permitted) coalition of players, their respective outputs and inputs in $g$ allow them to simulate their respective inputs and outputs in $f$, without learning anything else, including the other outputs of $f$.
Manfred Lochter
ePrint Report
One approach for blockchain-based applications to provide a proof-of-work is the computation of hash values. In our opinion these computations are a waste of energy. It would be highly desirable to find an alternative method that generates useful output. We show how to substitute hashing by performing multiplications on elliptic curves in order to find distinguished points that can then be used to solve the discrete logarithm problem on a chosen curve. Today's digital infrastructures rely on only a few curves. We argue that the advent of blockchain-based technologies makes the use of only a few standardised curves questionable.

In principle, all cryptanalytic algorithms that use Rabin's idea of distinguished points can be used in blockchain-based attacks. Similar ideas can be used for the number field sieve.
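To make the distinguished-point idea concrete, here is a minimal editorial sketch (not the scheme proposed in the paper): in parallel Pollard rho, independent walkers iterate a pseudorandom step function and report only "distinguished" points, e.g. group elements whose low bits are all zero, and a collision between two reported points yields the discrete logarithm. The toy code below works in a small prime-order subgroup of $Z_p^*$ rather than on an elliptic curve; the parameters, step function, and variable names are illustrative assumptions only.

```python
# Toy parallel Pollard rho with distinguished points, recovering x from
# h = g^x in a prime-order subgroup of Z_p^*.  Editorial illustration only:
# an attack of the kind sketched above would use elliptic-curve arithmetic
# and far larger parameters.
import random

q = 1019                      # prime subgroup order
p = 2 * q + 1                 # 2039, a safe prime
g = 9                         # generator of the order-q subgroup
x_secret = 777
h = pow(g, x_secret, p)       # public value; we want to recover x_secret

R = 16                        # r-adding walk: precomputed random multipliers
mults = [(a, b, pow(g, a, p) * pow(h, b, p) % p)
         for a, b in [(random.randrange(q), random.randrange(q)) for _ in range(R)]]

def distinguished(y, bits=4):
    """A point is 'distinguished' if its low bits are all zero."""
    return y % (1 << bits) == 0

def walk():
    """One walker: start at a random g^a * h^b and step until distinguished."""
    a, b = random.randrange(q), random.randrange(q)
    y = pow(g, a, p) * pow(h, b, p) % p
    while not distinguished(y):
        da, db, m = mults[y % R]
        y = y * m % p
        a, b = (a + da) % q, (b + db) % q
    return y, a, b

store = {}                    # distinguished point -> (a, b) with y = g^a * h^b
while True:
    y, a, b = walk()
    if y in store and (store[y][1] - b) % q != 0:
        a0, b0 = store[y]
        # g^a h^b = g^a0 h^b0  =>  x = (a - a0) / (b0 - b) mod q
        x = (a - a0) * pow(b0 - b, -1, q) % q
        print("recovered x =", x, "correct:", x == x_secret)
        break
    store.setdefault(y, (a, b))
```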
Gilad Asharov, Ilan Komargodski, Wei-Kai Lin, Kartik Nayak, Elaine Shi
ePrint Report
Oblivious RAM (ORAM), first introduced in the ground-breaking work of Goldreich and Ostrovsky (STOC '87 and J. ACM '96), is a technique for provably obfuscating programs' access patterns, such that the access patterns leak no information about the programs' secret inputs. To compile a general program to an oblivious counterpart, it is well-known that $\Omega(\log N)$ amortized blowup is necessary, where $N$ is the size of the logical memory. This was shown in Goldreich and Ostrovsky's original ORAM work for statistical security and in a somewhat restricted model (the so-called balls-and-bins model), and recently by Larsen and Nielsen (CRYPTO '18) for computational security.

A long-standing open question is whether there exists an optimal ORAM construction that matches the aforementioned logarithmic lower bounds (without making large memory word assumptions, and assuming a constant number of CPU registers). In this paper, we resolve this problem and present the first secure ORAM with $O(\log N)$ amortized blowup, assuming one-way functions. Our result is inspired by and non-trivially improves on the recent beautiful work of Patel et al. (FOCS '18), who gave a construction with $O(\log N\cdot {\sf poly}(\log\log N))$ amortized blowup, assuming one-way functions.
Seyed Farhad Aghili, Hamid Mala
ePrint Report
The concept of the Industrial Internet of Things (IIoT) can be defined as the integration of smart sensor networks and the Internet of Things (IoT). This technology can be employed in various industries such as agriculture, the food processing industry, environmental monitoring, security surveillance, and so on. Generally, a smart sensor is a resource-constrained device which is responsible for gathering data from the monitored area. Machine-to-Machine (M2M) communication is one of the most important technologies for exchanging information between entities in industrial areas. However, due to the insecure wireless communication channel and the smart sensor’s limitations, security and privacy concerns are important challenges in IIoT environments. The goal of this paper is to address the security flaws of a recent M2M authentication protocol proposed for use in IIoT, including DoS, router impersonation, and smart sensor traceability attacks. Moreover, we show that an untrusted smart sensor can obtain the secret key of the router and the session key which another sensor establishes with the target router.
Alex Davidson, Ryo Nishimaki
ePrint Report
Constrained pseudorandom functions (CPRFs) allow learning modified PRF keys that can evaluate the PRF on a subset of the input space, or based on some sort of predicate. First introduced by Boneh and Waters [Asiacrypt 2013], they have been shown to be a useful cryptographic primitive with many applications. The full security definition of CPRFs requires the adversary to learn multiple constrained keys, a requirement for all of these applications. Unfortunately, existing constructions of CPRFs satisfying this security notion are only known from exceptionally strong cryptographic assumptions, such as indistinguishability obfuscation and the existence of multilinear maps, even for very weak predicates. CPRFs from more standard assumptions only satisfy security when one key is learnt.

In this work, we give the first construction of a CPRF that can issue a constant number of constrained keys for bit-fixing predicates, from learning with errors (LWE). Our construction immediately leads to the first constructions of broadcast encryption with optimal ciphertext size and non-interactive ID-based key exchange from LWE, for constant-sized user groups. It also satisfies $1$-key privacy (otherwise known as constraint-hiding). Finally, our construction achieves fully adaptive security with polynomial security loss; it is the only construction to achieve such security under a standard assumption.

Our technique represents a notable departure from existing CPRF constructions. We hope that it may lead to future constructions that can expose a greater number of keys, or consider more expressive predicates (such as circuit-based constraints).
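As a small editorial illustration of the constraint class discussed above (hedged: this shows only the bit-fixing predicate itself, not the paper's LWE-based construction or its key structure), a bit-fixing constraint is a pattern over {0,1,*}, and a constrained key is intended to evaluate the PRF exactly on the inputs matching that pattern:

```python
# Hedged illustration of the bit-fixing predicate behind bit-fixing CPRFs:
# a constraint is a pattern over {'0', '1', '*'}, and an input satisfies it
# if it agrees with the pattern on every non-wildcard position.  The
# LWE-based constrained keys from the paper are not modelled here.
def bit_fixing_match(pattern: str, x: str) -> bool:
    assert len(pattern) == len(x)
    return all(p in ('*', b) for p, b in zip(pattern, x))

# A constrained key for pattern "1*0*" should allow PRF evaluation exactly on
# the inputs 1000, 1001, 1100, 1101.
print([x for x in ("1000", "1001", "1011", "1101") if bit_fixing_match("1*0*", x)])
```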
F. Betül Durak, Serge Vaudenay
ePrint Report
In the wake of mass surveillance and privacy issues, modern secure communication protocols now seek stronger security guarantees such as forward secrecy and post-compromise security. They cannot rely on assumptions such as synchronization, predictable sender/receiver roles, or online availability. At EUROCRYPT 2017 and 2018, key agreement with forward secrecy and zero round-trip time (0-RTT) was studied. Ratcheting was introduced to address forward secrecy and post-compromise security in real-world messaging protocols. At CSF 2016 and CRYPTO 2017, ratcheting was studied either without 0-RTT or without bidirectional communication. At CRYPTO 2018, it was done using key-update primitives, which involve hierarchical identity-based encryption (HIBE).

In this work, we define the bidirectional asynchronous ratcheted key agreement (BARK) with formal security notions. We provide a simple security model with a pragmatic approach and design the first secure BARK scheme not using key-update primitives. Our notion offers forward secrecy and post-compromise security. It is asynchronous, with random roles, and 0-RTT. It is based on a cryptosystem, a signature scheme, and a collision-resistant hash function family without key-update primitives or random oracles. We further show that BARK (even unidirectional) implies public-key cryptography, meaning that it cannot solely rely on symmetric cryptography.
Thom Wiggers
ePrint Report
Servers with many cores cost a lot of money and consume large amounts of energy. Developments in hardware for mobile devices have resulted in a surge of relatively cheap, powerful, and low-energy CPUs. In this paper we show how to build a low-energy, eighty-core cluster around twenty ODROID-C2 development boards for under 1500 USD. The ODROID-C2 is a 46 USD microcomputer that provides a 1.536 GHz quad-core Cortex-A53-based CPU and 2 GB of RAM. We investigate the cluster's application to cryptanalysis by implementing Pollard's Rho method to tackle the Certicom ECC2K-130 elliptic curve challenge. We optimise software from the "Breaking ECC2K-130" technical report for the Cortex-A53. To do so, we show how to use microbenchmarking to derive the needed instruction characteristics which ARM neglected to document for the public. The implementation of the ECC2K-130 attack finally allows us to compare the proposed platform to various other platforms, including “classical” desktop CPUs, GPUs, and FPGAs. Although it may be slower than, for example, FPGAs, our cluster still provides a lot of value for money.
Serge Fehr
ePrint Report
Hash functions are of fundamental importance in theoretical and in practical cryptography, and with the threat of quantum computers possibly emerging in the future, it is an urgent objective to understand the security of hash functions in the light of potential future quantum attacks. To this end, we reconsider the collapsing property of hash functions, as introduced by Unruh, which replaces the notion of collision resistance when considering quantum attacks. Our contribution is a formalism and a framework that offers significantly simpler proofs for the collapsing property of hash functions. With our framework, we can prove the collapsing property for hash domain extension constructions entirely by means of decomposing the iteration function into suitable elementary composition operations. In particular, given our framework, one can argue purely classically about the quantum-security of hash functions; this is in contrast to previous proofs which are in terms of sophisticated quantum-information-theoretic and quantum-algorithmic reasoning.
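For readers unfamiliar with domain extension, the following is a minimal editorial sketch of the classical Merkle-Damgård iteration, the kind of construction the framework above analyses; the toy compression function and the parameters are illustrative assumptions, not the paper's own decomposition operations.

```python
# Minimal Merkle-Damgard domain extension over a toy compression function.
# Editorial illustration of 'hash domain extension': a fixed-input-length
# compression function is iterated over message blocks, with length padding
# appended at the end.  The compression function (truncated SHA-256 of the
# concatenation) is a stand-in, not a construction from the paper.
import hashlib

BLOCK = 16                 # bytes per message block (toy parameter)
STATE = 16                 # bytes of chaining state (toy parameter)
IV = b"\x00" * STATE

def compress(state: bytes, block: bytes) -> bytes:
    return hashlib.sha256(state + block).digest()[:STATE]

def md_hash(msg: bytes) -> bytes:
    # Merkle-Damgard strengthening: pad with 0x80, zero bytes, and the length.
    padded = msg + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % BLOCK)
    padded += len(msg).to_bytes(8, "big")
    state = IV
    for i in range(0, len(padded), BLOCK):
        state = compress(state, padded[i:i + BLOCK])
    return state

print(md_hash(b"hello world").hex())
```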
Oleg Taraskin, Vladimir Soukharev, David Jao, Jason LeGrow
ePrint Report
Password authenticated key establishment (PAKE) is a cryptographic primitive that allows two parties who share a low-entropy secret (a password) to securely establish cryptographic keys in the absence of public key infrastructure. We present the first quantum-resistant password-authenticated key exchange scheme based on supersingular elliptic curve isogenies. The scheme is built upon supersingular isogeny Diffie-Hellman, and uses the password to generate functions which obscure the auxiliary points used in the computation. We include a detailed security proof based on a number of reasonable computational problems on supersingular elliptic curves.
Shashank Agrawal, Peihan Miao, Payman Mohassel, Pratyay Mukherjee
ePrint Report
Token-based authentication is commonly used to enable a single-sign-on experience on the web, in mobile applications, and on enterprise networks using a wide range of open standards and network authentication protocols: clients sign on to an identity provider using their username/password to obtain a cryptographic token generated with a master secret key, and store the token for future accesses to various services and applications. The authentication server(s) are single points of failure that, if breached, enable attackers to forge arbitrary tokens or mount offline dictionary attacks to recover client credentials.

Our work is the first to introduce and formalize the notion of password-based threshold token-based authentication, which distributes the role of an identity provider among $n$ servers. Any $t$ servers can collectively verify passwords and generate tokens, while no $t-1$ servers can forge a valid token or mount offline dictionary attacks. We then introduce PASTA, a general framework that can be instantiated using any threshold token generation scheme, wherein clients can "sign-on" using a two-round (optimal) protocol that meets our strong notions of unforgeability and password-safety.

We instantiate and implement our framework in C++ using two threshold message authentication codes (MACs) and two threshold digital signatures with different trade-offs. Our experiments show that the overhead of protecting secrets and credentials against breaches in PASTA, compared to a naive single-server solution, is extremely low (1-5%) in the most likely setting where client and servers communicate over the internet. The overhead is higher in the case of MAC-based tokens over a LAN (though still only a few milliseconds) due to public-key operations in PASTA. We show, however, that this cost is inherent by proving a symmetric-key-only solution impossible.
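To make the $t$-out-of-$n$ trust model concrete, here is a minimal editorial sketch of Shamir secret sharing of a master key; it illustrates only why no $t-1$ servers learn anything about the key, and is not PASTA's threshold MAC/signature protocol or its password layer.

```python
# Shamir t-out-of-n sharing of a master key: any t shares reconstruct the
# key, while t-1 shares reveal nothing.  Editorial illustration of the trust
# model only; PASTA's threshold MACs/signatures and password handling are
# not modelled here.
import random

P = 2**127 - 1                         # prime modulus for a toy field

def share(secret: int, t: int, n: int):
    """Split 'secret' into n shares such that any t reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over any t distinct shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

master_key = random.randrange(P)
shares = share(master_key, t=3, n=5)
assert reconstruct(shares[:3]) == master_key       # servers 1, 2, 3
assert reconstruct(shares[2:]) == master_key       # servers 3, 4, 5
print("any 3 of 5 shares reconstruct the master key")
```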
Alan Szepieniec, Reza Reyhanitabar, Bart Preneel
ePrint Report
A multitude of post-quantum key encapsulation mechanisms (KEMs) and public key encryption (PKE) schemes implicitly rely on a protocol by which Alice and Bob exchange public messages and converge on secret values that are identical up to some small noise. By our count, 24 out of 49 KEM or PKE submissions to the NIST Post-Quantum Cryptography Standardization project follow this strategy. Yet the notion of a noisy key agreement (NKA) protocol lacks a formal definition as a primitive in its own right. We provide such a formalization by defining the syntax and security for an NKA protocol. This formalization brings out four generic problems, called A and B State Recovery, Noisy Key Search and Noisy Key Distinguishing, whose solutions must be hard in the quantum computing model. Informally speaking, these can be viewed as noisy, quantum-resistant counterparts of the problems arising from the classical Diffie-Hellman type protocols. We show that many existing proposals contain an NKA component that fits our formalization and we reveal the induced concrete hardness assumptions. The question arises whether considering NKA as an independent primitive can help provide modular designs with improved efficiency and/or proofs. As the second contribution of this paper, we answer this question positively by presenting a generic transform from a secure NKA protocol to an IND-CCA secure KEM in the quantum random oracle model, with a security bound tightly related to the NKD problem. This transformation is essentially the same as that of the NIST candidate Ramstake. While establishing the security of Ramstake was our initial objective, the collection of tools that came about as a result of this journey is of independent interest.
Suvradip Chakraborty, C. Pandu Rangan
ePrint Report
In this paper, we introduce a new framework for constructing public-key encryption (PKE) schemes resilient to joint post-challenge/after-the-fact leakage and tampering attacks in the bounded leakage and tampering (BLT) model, introduced by Damgård et al. (Asiacrypt 2013). All the prior formulations of PKE schemes considered leakage and tampering attacks only before the challenge ciphertext is made available to the adversary. However, this restriction seems necessary, since achieving security against post-challenge leakage and tampering attacks in its full generality is impossible as shown in previous works. In this paper, we study the post-challenge/after-the-fact security for PKE schemes against bounded leakage and tampering under a restricted yet meaningful and reasonable notion of security, namely, the split-state leakage and tampering model. We show that it is possible to construct secure PKE schemes in this model, tolerating arbitrary (but bounded) leakage and tampering queries; thus overcoming the previous impossibility results.

To this end, we formulate a new notion of security, which we call entropic post-challenge IND-CCA-BLT secure PKE. We first define a weaker notion called entropic restricted post-challenge IND-CCA-BLT secure PKE, which can be instantiated using the (standard) DDH assumption. We then show a generic compiler from our entropic restricted notion to the entropic notion of security using a simulation-extractable non-interactive zero-knowledge argument system. This requires an untamperable common reference string as in previous works. Finally, we demonstrate the usefulness of our entropic notion of security by giving a simple and generic construction of post-challenge IND-CCA-BLT secure PKE scheme in the split-state leakage and tampering model. This also settles the open problem posed by Faonio and Venturi (Asiacrypt 2016).
Benjamin Smith
ePrint Report
Diffie-Hellman key exchange is at the foundations of public-key cryptography, but conventional group-based Diffie-Hellman is vulnerable to Shor's quantum algorithm. A range of “post-quantum Diffie-Hellman” protocols have been proposed to mitigate this threat, including the Couveignes, Rostovtsev-Stolbunov, SIDH, and CSIDH schemes, all based on the combinatorial and number-theoretic structures formed by isogenies of elliptic curves. Pre- and post-quantum Diffie-Hellman schemes resemble each other at the highest level, but the further down we dive, the more differences emerge: differences that are critical when we use Diffie-Hellman as a basic component in more complicated constructions. In this survey we compare and contrast pre- and post-quantum Diffie-Hellman algorithms, highlighting some important subtleties.
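For reference, the pre-quantum baseline the survey starts from can be written in a few lines; the sketch below uses toy parameters and is only the textbook group-based protocol, not any of the isogeny-based schemes mentioned above.

```python
# Textbook group-based Diffie-Hellman, the pre-quantum baseline discussed in
# the survey.  Toy parameters only: real deployments use >= 2048-bit groups
# or elliptic curves, and the post-quantum schemes above replace the group
# exponentiation with isogeny computations.
import secrets

q = 1019                           # prime order of the subgroup
p = 2 * q + 1                      # 2039, a safe prime
g = 9                              # generator of the order-q subgroup of Z_p^*

a = secrets.randbelow(q - 1) + 1   # Alice's secret exponent
b = secrets.randbelow(q - 1) + 1   # Bob's secret exponent

A = pow(g, a, p)                   # Alice sends A to Bob
B = pow(g, b, p)                   # Bob sends B to Alice

k_alice = pow(B, a, p)             # Alice's view of the shared secret
k_bob = pow(A, b, p)               # Bob's view of the shared secret
assert k_alice == k_bob
print("shared secret established:", k_alice)
```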
Falk Schellenberg, Dennis R.E. Gnad, Amir Moradi, Mehdi B. Tahoori
ePrint Report
The current practice in board-level integration is to incorporate chips and components from numerous vendors. A fully trusted supply chain for all used components and chipsets is an important, yet extremely difficult to achieve, prerequisite to validate a complete board-level system for safe and secure operation. An increasing risk is that most chips nowadays run software or firmware, typically updated throughout the system lifetime, making it practically impossible to validate the full system at every given point in the manufacturing, integration, and operational life cycle. This risk is elevated in devices that run 3rd-party firmware. In this paper we show that an FPGA used as a common accelerator in various boards can be reprogrammed by software to introduce a sensor, suitable as a remote power analysis side-channel attack vector at the board level. We show successful power analysis attacks from one FPGA on the board to another chip implementing RSA and AES cryptographic modules. Since the sensor is only mapped through firmware, this threat is very hard to detect, because data can be exfiltrated without requiring inter-chip communication between victim and attacker. Our results also demonstrate the potential vulnerability in which any untrusted chip on the board can launch such attacks on the remaining system.
Christophe Pfeifer, Patrick Haddad
ePrint Report
Recent publications, such as [10] and [13], exploit the advantages of deep-learning techniques in performing side-channel attacks. One example of the side-channel community's interest in such techniques is the release of the public ASCAD database, which provides power consumption traces of a masked 128-bit AES implementation and is meant to be a common benchmark for comparing the performance of deep-learning techniques. In this paper, we propose two ways of improving the effectiveness of such attacks. The first one is a new kind of layer for neural networks, called the "Spread" layer, which is efficient at tackling side-channel attack issues, since it reduces the number of layers required and speeds up the learning phase. Our second proposal is an efficient way to correct the neural network predictions, based on its confusion matrix. We have validated both methods on the ASCAD database and conclude that they reduce the number of traces required for successful attacks. In this article, we show their effectiveness for first-order and second-order attacks.
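One generic way to use a confusion matrix for correcting classifier outputs is sketched below as an editorial illustration; it is a standard calibration idea (estimate P(true class | predicted class) on validation data and reweight the network's scores), and is not claimed to be the exact correction proposed in the paper.

```python
# Generic confusion-matrix-based correction of classifier scores (hedged:
# a standard calibration idea, not necessarily the paper's exact method).
# confusion[i, j] counts validation traces with true class i predicted as j;
# column-normalising it estimates P(true = i | predicted = j).
import numpy as np

def correct_predictions(scores: np.ndarray, confusion: np.ndarray) -> np.ndarray:
    col_sums = confusion.sum(axis=0, keepdims=True)
    p_true_given_pred = confusion / np.where(col_sums == 0, 1, col_sums)
    # Reweight each trace's softmax scores by the estimated posteriors.
    corrected = scores @ p_true_given_pred.T
    return corrected / corrected.sum(axis=1, keepdims=True)

# Toy example with 3 classes where the network often confuses classes 1 and 2.
confusion = np.array([[90,  5,  5],
                      [ 4, 60, 36],
                      [ 2, 30, 68]], dtype=float)
scores = np.array([[0.1, 0.7, 0.2]])      # network output for one trace
print(correct_predictions(scores, confusion))
```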
Ke Gu, Bo Yin
ePrint Report
A group signature is a useful cryptographic primitive that allows every group member to sign messages on behalf of the group they belong to. Namely, a group member can anonymously sign any message without revealing his/her specific identity. However, signers may abuse their signing rights if there are no measures to keep them from doing so. Therefore, the group manager must be able to trace (or reveal) the identity of the signer from the signature when a dispute needs to be arbitrated, and revoked group members must fully lose their capability of signing messages on behalf of the group. A practical model meeting these requirements is verifier-local revocation, which supports the revocation of group members. In this model, the verifiers receive the group member revocation messages from the trusted authority when the relevant signatures need to be verified. Although many group signature schemes have been proposed, most of them are constructed from pairings. In this paper, we present an efficient group signature scheme without pairings under the model of verifier-local revocation, which is based on the modified EDL signature (first proposed by D. Chaum et al. at CRYPTO '92). Compared with other group signature schemes, the proposed scheme does not employ pairing computations and has constant signing time and signature size; its security can be reduced to the computational Diffie-Hellman (CDH) assumption in the random oracle model. Also, we give a formal security model for group signatures and prove that the proposed scheme has the properties of traceability and anonymity.
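As background for the building block mentioned above, here is a minimal editorial sketch of a basic EDL-style signature (a Chaum-Pedersen-style proof that $\log_g y = \log_h z$ for a message-derived $h$); this is hedged background only, uses toy parameters, and is not the paper's modified EDL scheme, its group signature, or its revocation machinery.

```python
# Hedged sketch of a plain EDL-style signature: y = g^x is the public key,
# and a signature proves that z = h^x for a message-derived h, via a
# Fiat-Shamir proof of equality of discrete logarithms.  Toy parameters;
# the paper's modifications, group management, and revocation are omitted.
import hashlib, secrets

q = 1019
p = 2 * q + 1                 # toy safe prime; real schemes use large groups
g = 9                         # generator of the order-q subgroup of Z_p^*

def hash_to_group(*parts) -> int:
    """Hash to the order-q subgroup: hash, reduce mod p, then square."""
    d = hashlib.sha256(b"|".join(str(t).encode() for t in parts)).digest()
    return pow(int.from_bytes(d, "big") % p, 2, p)

def hash_to_zq(*parts) -> int:
    d = hashlib.sha256(b"#".join(str(t).encode() for t in parts)).digest()
    return int.from_bytes(d, "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def sign(x, m):
    y = pow(g, x, p)
    r = secrets.randbelow(q)
    h = hash_to_group(m, r)
    z = pow(h, x, p)                          # same secret exponent as in y
    k = secrets.randbelow(q)
    u, v = pow(g, k, p), pow(h, k, p)
    c = hash_to_zq(g, h, y, z, u, v, m)
    s = (k + c * x) % q
    return (z, r, s, c)

def verify(y, m, sig):
    z, r, s, c = sig
    h = hash_to_group(m, r)
    u = pow(g, s, p) * pow(y, q - c, p) % p   # g^s * y^(-c)
    v = pow(h, s, p) * pow(z, q - c, p) % p   # h^s * z^(-c)
    return c == hash_to_zq(g, h, y, z, u, v, m)

x, y = keygen()
sig = sign(x, "a message signed on behalf of the group")
print("valid:", verify(y, "a message signed on behalf of the group", sig))
```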
Marc Joye, Yan Michalevsky
ePrint Report
We would like to compute RSA signatures with the help of a Hardware Security Module (HSM). But what can we do when we want to use a certain public exponent that the HSM does not allow or support? Surprisingly, this scenario comes up in real-world settings such as code-signing of Intel SGX enclaves. Intel SGX enclaves have to be signed in order to execute in release mode, using a 3072-bit RSA signature scheme with a particular public exponent. However, we encountered commercial hardware security modules that do not support storing RSA keys corresponding to this exponent. We ask whether it is possible to overcome such a limitation of an HSM and answer it in the affirmative (under stated assumptions). We show how to convert RSA signatures corresponding to one public exponent to valid RSA signatures corresponding to another exponent. We define security and show that it is not compromised by the additional public knowledge available to an adversary in this setting.
Behzad Abdolmaleki, Helger Lipmaa, Janno Siim, Michał Zając
ePrint Report
While the CRS model is widely accepted for the construction of non-interactive zero-knowledge (NIZK) proofs, from a practical viewpoint a very important question is how to minimize the trust needed in the creators of the CRS. Recently, Bellare et al. defined subversion-resistance (security in the case where the CRS creator may be malicious) for NIZK. First, we observe that subversion zero knowledge (Sub-ZK) in the CRS model corresponds to no-auxiliary-string non-black-box NIZK (also known as nonuniform NIZK) in the Bare Public Key (BPK) model. Due to well-known impossibility results, this observation provides a simple proof that the use of non-black-box techniques is needed to obtain Sub-ZK. Second, we prove that the most efficient known QA-NIZK for linear subspaces by Kiltz and Wee is nonuniform zero knowledge in the BPK model under two alternative novel knowledge assumptions, both secure in the subversion generic bilinear group model. We also prove that (for a different set of parameters) a slightly less efficient variant of Kiltz-Wee is nonuniform zero knowledge in the BPK model under a known knowledge assumption that is also secure in the subversion generic bilinear group model.