International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available via RSS feed, Twitter, Weibo, and Facebook.

26 September 2018

Nasrollah Pakniat
ePrint Report
Recently, Chen et al. proposed the first non-delegatable certificateless strong designated verifier signature scheme and claimed that it achieves all security requirements. In this paper, we disprove their claim and present a concrete attack showing that their scheme is forgeable. More precisely, we show that there exist adversaries who are able to forge any signer's signature for any designated verifier on any message of their choice.
Shuichi Katsumata, Shota Yamada
ePrint Report
Constrained pseudorandom functions (CPRFs) are a type of PRF that allows one to derive a constrained key $\mathsf{K}_C$ from the master key $\mathsf{K}$. While the master key $\mathsf{K}$ allows one to evaluate the PRF on any input, the constrained key $\mathsf{K}_C$ only allows evaluation on inputs $x$ such that $C(x) = 1$. Since the introduction of CPRFs by Boneh and Waters (ASIACRYPT'13), Kiayias et al. (CCS'13), and Boyle et al. (PKC'14), there have been various constructions of CPRFs. However, thus far, almost all constructions (from standard assumptions and for non-trivial constraints) are only proven secure if at most one constrained key $\mathsf{K}_C$ is known to the adversary, with the exception of the very recent work of Davidson and Nishimaki (EPRINT'18). Considering the interesting applications of CPRFs, such as ID-based non-interactive key exchange, we desire CPRFs that are collusion resistant with respect to the constrained keys. In this work, we make progress in this direction and construct a CPRF for bit-fixing predicates that is collusion resistant for a constant number of constrained keys. Surprisingly, compared to the heavy machinery used by previous CPRF constructions, our construction only relies on the existence of one-way functions.
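The reliance on one-way functions alone is reminiscent of the classical GGM tree. The toy sketch below illustrates only the CPRF *syntax* (master key, constrained key, constrained evaluation) for prefix predicates, a special case of bit-fixing; it is not the paper's collusion-resistant construction, and SHA-256 stands in heuristically for a length-doubling PRG. All names are illustrative.

```python
import hashlib

def G(seed: bytes, bit: int) -> bytes:
    # Length-doubling PRG, instantiated heuristically with SHA-256;
    # `bit` selects the left or right half of the expanded output.
    return hashlib.sha256(seed + bytes([bit])).digest()

def eval_prf(master_key: bytes, x: str) -> bytes:
    # GGM tree evaluation: walk down the tree following the bits of x.
    node = master_key
    for b in x:
        node = G(node, int(b))
    return node

def constrain_prefix(master_key: bytes, prefix: str) -> bytes:
    # Constrained key for the predicate C(x) = 1 iff x starts with
    # `prefix`: simply the internal GGM node at that prefix.
    node = master_key
    for b in prefix:
        node = G(node, int(b))
    return node

def eval_constrained(ck: bytes, prefix: str, x: str) -> bytes:
    assert x.startswith(prefix), "constrained key only covers this prefix"
    node = ck
    for b in x[len(prefix):]:
        node = G(node, int(b))
    return node

k = b"\x00" * 32
ck = constrain_prefix(k, "01")
# Constrained evaluation agrees with the master key on the prefix.
assert eval_constrained(ck, "01", "0110") == eval_prf(k, "0110")
```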
Shai Halevi, Yuval Ishai, Eyal Kushilevitz, Tal Rabin
ePrint Report
We reconsider the security guarantee that can be achieved by general protocols for secure multiparty computation in the most basic of settings: information-theoretic security against a semi-honest adversary.

Since the 1980s, we have elegant solutions to this problem that offer full security, as long as the adversary controls a minority of the parties, but fail completely when that threshold is crossed. In this work, we revisit this problem, questioning the optimality of the standard notion of security. We put forward a new notion of information-theoretic security which is strictly stronger than the standard one, and which we argue to be ``best possible.'' Our new notion still requires full security against dishonest minority in the usual sense, but also requires a meaningful notion of information-theoretic security against dishonest majority.

We present protocols for useful classes of functions that satisfy this new notion of security. Our protocols have the unique feature of combining the efficiency benefits of protocols for an honest majority and (most of) the security benefits of protocols for dishonest majority. We further extend some of the solutions to the malicious setting.

25 September 2018

Carmit Hazay, Muthuramakrishnan Venkitasubramaniam
ePrint Report
In this paper, we revisit the round complexity of designing zero-knowledge (ZK) arguments via a black-box construction from minimal assumptions. Our main result implements a 4-round ZK argument for any language in NP, based on injective one-way functions, that makes black-box use of the underlying function. As corollaries, we also obtain the first 4-round perfect zero-knowledge argument for NP based on claw-free permutations via a black-box construction, and a 4-round input-delayed commit-and-prove zero-knowledge argument based on injective one-way functions.
Andrew Morgan, Rafael Pass
ePrint Report
Fairness in classification has become an increasingly relevant and controversial issue as computers replace humans in many of today's classification tasks. In particular, a subject of much recent debate is that of finding, and subsequently achieving, suitable definitions of fairness in an algorithmic context. In this work, following the work of Hardt et al. (NIPS'16), we consider and formalize the task of sanitizing an unfair classifier C into a classifier C' satisfying an approximate notion of "equalized odds", or fair treatment. Our main result shows how to take any (possibly unfair) classifier C over a finite outcome space and transform it, by just perturbing the output of C according to some distribution learned from black-box access to samples of labeled, previously classified data, into a classifier C' that satisfies fair treatment; we additionally show that our derived classifier is near-optimal in terms of accuracy. We also experimentally evaluate the performance of our method.
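The sanitization step can be pictured as output perturbation with group- and verdict-dependent flip probabilities. In the paper those probabilities are learned from black-box samples so that error rates equalize across groups; the sketch below takes them as given, so the interface and names are hypothetical, not the paper's construction.

```python
import random

def derive_fair_classifier(C, flip_prob):
    # Derived classifier C': run C, then flip its binary output with a
    # probability depending on the individual's group and C's verdict.
    # In the paper flip_prob is *learned* from labeled samples; here it
    # is passed in directly (toy interface, names hypothetical).
    def C_prime(x, group):
        y = C(x)
        if random.random() < flip_prob[(group, y)]:
            y = 1 - y
        return y
    return C_prime

# Toy usage: a base classifier over integers, two groups 0 and 1.
C = lambda x: 0 if x < 0 else 1
never_flip = {(g, y): 0.0 for g in (0, 1) for y in (0, 1)}
C_prime = derive_fair_classifier(C, never_flip)
assert C_prime(5, group=0) == C(5)   # zero perturbation preserves C
```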
Frédéric Dupuis, Serge Fehr, Philippe Lamontagne, Louis Salvail
ePrint Report
We investigate sampling procedures that certify that an arbitrary quantum state on $n$ subsystems is close to an ideal mixed state $\varphi^{\otimes n}$ for a given reference state $\varphi$, up to errors on a few positions. This task makes no sense classically: it would correspond to certifying that a given bitstring was generated according to some desired probability distribution. However, in the quantum case, this is possible if one has access to a prover who can supply a purification of the mixed state.

In this work, we introduce the concept of mixed-state certification, and we show that a natural sampling protocol offers secure certification in the presence of a possibly dishonest prover: if the verifier accepts then he can be almost certain that the state in question has been correctly prepared, up to a small number of errors.

We then apply this result to two-party quantum coin-tossing. Given that strong coin tossing is impossible, it is natural to ask ``how close can we get''. This question has been well studied and is nowadays well understood from the perspective of the bias of individual coin tosses. We approach and answer this question from a different, somewhat orthogonal, perspective, where we do not look at individual coin tosses but at the global entropy instead. We show how two distrusting parties can produce a common high-entropy source, where the entropy is an arbitrarily small fraction below the maximum.
Sanjam Garg, Yuval Ishai, Akshayaram Srinivasan
ePrint Report
We continue the study of protocols for secure multiparty computation (MPC) that require only two rounds of interaction. The recent works of Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) essentially settle the question by showing that such protocols are implied by the minimal assumption that a two-round oblivious transfer (OT) protocol exists. However, these protocols inherently make a non-black-box use of the underlying OT protocol, which results in poor concrete efficiency. Moreover, no analogous result was known in the information-theoretic setting, or alternatively based on one-way functions, given an OT correlations setup or an honest majority.

Motivated by these limitations, we study the possibility of obtaining information-theoretic and ``black-box'' implementations of two-round MPC protocols. We obtain the following results:

- Two-round MPC from OT correlations. Given an OT correlations setup, we get protocols that make a black-box use of a pseudorandom generator (PRG) and are secure against a malicious adversary corrupting an arbitrary number of parties. For a semi-honest adversary, we get similar information-theoretic protocols for branching programs.

- New NIOT constructions. Towards realizing OT correlations, we extend the DDH-based non-interactive OT (NIOT) protocol of Bellare and Micali (Crypto '89) to the malicious security model, and present new NIOT constructions from the Quadratic Residuosity Assumption (QRA) and the Learning With Errors (LWE) assumption.

- Two-round black-box MPC with strong PKI setup. Combining the two previous results, we get two-round MPC protocols that make a black-box use of any DDH-hard or QRA-hard group. The protocols can offer security against a malicious adversary, and require a PKI setup that depends on the number of parties and the size of computation, but not on the inputs or the identities of the participating parties.

- Two-round honest-majority MPC from secure channels. Given secure point-to-point channels, we get protocols that make a black-box use of a pseudorandom generator (PRG), as well as information-theoretic protocols for branching programs. These protocols can tolerate a semi-honest adversary corrupting a strict minority of the parties, where in the information-theoretic case the complexity is quasi-polynomial in the number of parties.
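The OT-correlations setup in the first bullet can be consumed with minimal online interaction via the standard precomputed-OT derandomization in the style of Beaver. The sketch below shows only that classical correlation-consumption step, not the paper's MPC protocol; variable names are illustrative.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Offline: a trusted setup samples a random OT correlation.
# Sender holds (r0, r1); receiver holds a random bit b and r_b.
r0, r1 = secrets.token_bytes(16), secrets.token_bytes(16)
b = secrets.randbelow(2)
rb = (r0, r1)[b]

# Online, message 1 (receiver -> sender): derandomize the choice bit.
c = 1                      # receiver's real choice
u = b ^ c

# Online, message 2 (sender -> receiver): mask the real messages with
# the correlation pads, rotated by u.
m0, m1 = b"\x00" * 16, b"\xff" * 16
v = [xor((m0, m1)[i], (r0, r1)[i ^ u]) for i in (0, 1)]

# The receiver unmasks exactly m_c; the other pad stays unknown.
assert xor(v[c], rb) == m1
```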
Shweta Agrawal, Monosij Maitra
ePrint Report
We construct Indistinguishability Obfuscation (iO) and Functional Encryption (FE) schemes in the Turing machine model from the minimal assumption of compact FE for circuits (CktFE). Our constructions overcome the barrier of sub-exponential loss incurred by all prior work. Our contributions are:

1. We construct iO in the Turing machine model from the same assumptions as required in the circuit model, namely, sub-exponentially secure FE for circuits. The previous best constructions [41, 6] require sub-exponentially secure iO for circuits, which in turn requires sub-exponentially secure FE for circuits [5, 15].

2. We provide a new construction of single input FE for Turing machines with unbounded length inputs and optimal parameters from polynomially secure, compact FE for circuits. The previously best known construction by Ananth and Sahai [7] relies on iO for circuits, or equivalently, sub-exponentially secure FE for circuits.

3. We provide a new construction of multi-input FE for Turing machines. Our construction supports a fixed number of encryptors (say $k$), who may each encrypt a string $x_i$ of unbounded length. We rely on sub-exponentially secure FE for circuits, while the only previous construction [10] relies on a strong knowledge-type assumption, namely, public-coin differing-inputs obfuscation.

Our techniques are new and from first principles, and avoid usage of sophisticated iO specific machinery such as positional accumulators and splittable signatures that were used by all relevant prior work [41, 7, 6].

24 September 2018

Srinath Setty, Sebastian Angel, Trinabh Gupta, Jonathan Lee
ePrint Report
This paper introduces Spice, a system for building verifiable state machines (VSMs). A VSM is a request-processing service that produces proofs establishing that requests were executed correctly according to a specification. Such proofs are succinct (a verifier can check them efficiently without reexecution) and zero-knowledge (a verifier learns nothing about the content of the requests, responses, or the internal state of the service). Recent systems for proving the correct execution of stateful computations (Pantry [24], Geppetto [34], CTV [30], Hawk [49], vSQL [87]) implicitly implement VSMs, but they incur prohibitive costs. Spice reduces these costs by over two orders of magnitude with a new storage primitive. More notably, Spice's storage primitive supports multiple writers, making Spice the first system that can succinctly prove the correct execution of concurrent services. We find that Spice running on a cluster of 16 servers achieves 488–1048 transactions/second for a variety of applications including inter-bank transactions [27], cloud-hosted ledgers [28], and dark pools [65]. This represents 16,000–620,000× higher throughput than prior work.
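The VSM interface can be pictured as a request log in which each entry binds a request to a digest of the resulting state. The toy sketch below uses naive re-execution as the "verifier"; Spice's contribution is precisely to replace such replay with succinct zero-knowledge proofs and an efficient multi-writer storage primitive. All class and field names are illustrative.

```python
import hashlib
import json

class ToyVSM:
    # Interface sketch of a verifiable state machine: each request is
    # executed against a key-value state, and the service appends a
    # transcript entry binding (request, resulting state digest).
    def __init__(self):
        self.state, self.log = {}, []

    def _digest(self) -> str:
        blob = json.dumps(self.state, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def apply(self, key, value):
        self.state[key] = value
        entry = {"req": (key, value), "state": self._digest()}
        self.log.append(entry)
        return entry

def verify(log) -> bool:
    # Naive verifier: re-executes every request and checks each digest.
    # A real VSM checks a succinct proof instead of replaying.
    vsm = ToyVSM()
    return all(vsm.apply(*e["req"]) == e for e in log)

svc = ToyVSM()
svc.apply("alice", 10)
svc.apply("bob", 5)
assert verify(svc.log)
```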
Daniel Wichs, Willy Quach, Giorgos Zirdelis
ePrint Report
A software watermarking scheme can embed some information called a mark into a program while preserving its functionality. No adversary can remove the mark without damaging the functionality of the program. Cohen et al. (STOC '16) gave the first positive results for watermarking, showing how to watermark certain pseudorandom function (PRF) families using indistinguishability obfuscation (iO). Their scheme has a secret marking procedure to embed marks in programs and a public extraction procedure to extract the marks from programs; security holds even against an attacker that has access to a marking oracle. Kim and Wu (CRYPTO '17) later constructed a PRF watermarking scheme under only the LWE assumption. In their scheme, both the marking and extraction procedures are secret, but security only holds against an attacker with access to a marking oracle but not an extraction oracle. In fact, it is possible to completely break the security of the latter scheme using extraction queries, which is a significant limitation in any foreseeable application.

In this work, we construct a new PRF watermarking scheme with the following properties.

* The marking procedure is public and therefore anyone can embed marks in PRFs from the family. Previously we had no such construction even using obfuscation.

* The extraction key is secret, but marks remain unremovable even if the attacker has access to an extraction oracle. Previously we had no such construction under standard assumptions.

* Our scheme is simple, uses generic components and can be instantiated under many different assumptions such as DDH, Factoring or LWE.

The above benefits come with one caveat compared to prior work: the PRF family that we can watermark depends on the public parameters of the watermarking scheme and the watermarking authority has a secret key which can break the security of all of the PRFs in the family. Since the watermarking authority is usually assumed to be trusted, this caveat appears to be acceptable.
Andrew Morgan, Rafael Pass
ePrint Report
We consider the question of whether the security of unique digital signature schemes can be based on game-based cryptographic assumptions using linear-preserving black-box security reductions—that is, black-box reductions for which the security loss (i.e., the ratio between "work" of the adversary and the "work" of the reduction) is some a priori bounded polynomial. A seminal result by Coron (Eurocrypt'02) shows limitations of such reductions; however, his impossibility result and its subsequent extensions all suffer from two notable restrictions: (1) they only rule out so-called "simple" reductions, where the reduction is restricted to only sequentially invoke "straight-line" instances of the adversary; and (2) they only rule out reductions to non-interactive (two-round) assumptions.

In this work, we present the first full impossibility result: our main result shows that the existence of any linear-preserving black-box reduction for basing the security of unique signatures on some bounded-round assumption implies that the assumption can be broken in polynomial time.
Andris Ambainis, Mike Hamburg, Dominique Unruh
ePrint Report
We present an improved version of the one-way to hiding (O2H) lemma by Unruh (J. ACM 2015). Our new O2H lemma gives higher flexibility (arbitrary joint distributions of oracles and inputs, multiple reprogrammed points) as well as tighter bounds (removing square-root factors, taking parallelism into account).
Nina Bindel, Jacqueline Brendel, Marc Fischlin, Brian Goncalves, Douglas Stebila
ePrint Report
Concerns about the impact of quantum computers on currently deployed public-key cryptography have instigated research into not only quantum-resistant cryptographic primitives but also how to transition applications from classical to quantum-resistant solutions. One approach to mitigating the risk of quantum attacks while preserving common security guarantees is hybrid schemes, which combine classically secure and quantum-resistant schemes. Various academic and industry experiments and draft standards related to the Transport Layer Security (TLS) protocol already use some form of hybrid key exchange; however, sound theoretical approaches to substantiate the design and security of such hybrid key exchange protocols have so far been missing.

We initiate the modeling of hybrid authenticated key exchange protocols. We consider security against adversaries with varying levels of quantum power over time, such as adversaries who may become quantum in the future or are quantum in the present. We reach our goal using a three-step approach: First, we introduce security notions for key encapsulation mechanisms (KEMs) that enable a fine-grained distinction between different quantum scenarios. Second, we propose several combiners for constructing hybrid KEMs that correspond closely to recently proposed Internet-Drafts for hybrid key exchange in TLS 1.3. Finally, we present a provably sound design for hybrid key exchange using KEMs as building blocks.
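A hybrid KEM combiner can be as simple as hashing both shared secrets together with both ciphertexts; the paper's combiners formalize when such constructions remain secure if one ingredient KEM is broken. The sketch below is a generic hash combiner under that intuition, not one of the paper's exact constructions (which differ in detail, e.g. using dual PRFs); the placeholder inputs are hypothetical.

```python
import hashlib

def combine_kems(k1: bytes, c1: bytes, k2: bytes, c2: bytes) -> bytes:
    # Hybrid combiner sketch: derive the session key by hashing both
    # shared secrets with both ciphertexts. Length-prefixing each part
    # avoids ambiguity between concatenations. Intuition: as long as
    # one of the two shared secrets is indistinguishable from random,
    # so is the output (modeling the hash suitably).
    h = hashlib.sha256()
    for part in (k1, c1, k2, c2):
        h.update(len(part).to_bytes(4, "big") + part)
    return h.digest()

# Toy usage with placeholder KEM outputs (hypothetical values).
session = combine_kems(b"ecdh-secret", b"ecdh-ct",
                       b"pq-kem-secret", b"pq-kem-ct")
assert len(session) == 32
```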
Aritra Dhar, Ivan Puddu, Kari Kostiainen, Srdjan Capkun
ePrint Report
Intel's Software Guard Extensions (SGX) enables isolated execution environments, called enclaves, on untrusted operating systems (OS), and thus can improve the security of various applications and online services. However, SGX also has well-known limitations. First, its remote attestation mechanism is vulnerable to relay attacks that allow the attacker to redirect attestation, and the subsequent provisioning of secrets, to an unintended platform. Second, attestation keys have been shown to leak, enabling attackers to fake a secure attested environment by emulating it. Third, there exists no secure way to let enclaves communicate with I/O devices and, as a consequence, with the user.

To address these shortcomings, we propose a hardened variant of SGX attestation using proximity verification. We design and implement a system called ProximiTEE, where a simple embedded device with a low TCB is attached to the target platform. The embedded device verifies the proximity of the attested enclave by using distance bounding and secure boot-time initialization, thus allowing secure attestation regardless of a compromised OS or leaked attestation keys. Our boot-time initialization can be seen as a novel variant of ``trust on first use'' (TOFU) that makes deployment of secure attestation easier, reduces the system's attack surface and enables secure revocation. We further leverage the embedded device to build a trusted I/O path between peripherals (e.g., keyboards, displays) and enclaves, by letting it securely mediate every I/O communication between them. Our prototype implementation shows that such proximity verification is reliable in practice.
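Proximity verification by distance bounding rests on timed single-bit challenge-response rounds: signals travel at most at the speed of light, so a tight round-trip bound limits how far away the responder can be. The sketch below is a toy Hancke-Kuhn-style exchange run in-process (with a deliberately generous timeout); ProximiTEE's actual protocol and timing thresholds differ, and all names are illustrative.

```python
import hashlib
import hmac
import secrets
import time

def hk_registers(key: bytes, nonce: bytes):
    # Hancke-Kuhn style: both sides derive two single-bit response
    # registers from a PRF over a session nonce.
    d = hmac.new(key, nonce, hashlib.sha256).digest()
    bits = [(byte >> i) & 1 for byte in d for i in range(8)]
    return bits[:128], bits[128:]

def distance_bound(key: bytes, n_rounds: int = 32,
                   max_rtt_s: float = 1e-2) -> bool:
    # Verifier times n_rounds of one-bit challenges; a response that is
    # wrong or too slow fails the check. Real systems enforce
    # nanosecond-scale bounds in hardware; 10 ms here is only so the
    # in-process demo passes reliably.
    nonce = secrets.token_bytes(16)
    R = hk_registers(key, nonce)    # verifier's copy of the registers
    Rp = hk_registers(key, nonce)   # prover's copy (shared key)
    for i in range(n_rounds):
        c = secrets.randbelow(2)
        t0 = time.perf_counter()
        r = Rp[c][i]                # prover answers from register c
        rtt = time.perf_counter() - t0
        if rtt > max_rtt_s or r != R[c][i]:
            return False
    return True

assert distance_bound(b"shared-key")
```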
Iftach Haitner, Nikolaos Makriyannis, Eran Omri
ePrint Report
A two-party coin-flipping protocol is $\epsilon$-fair if no efficient adversary can bias the output of the honest party (who always outputs a bit, even if the other party aborts) by more than $\epsilon$. Cleve [STOC '86] showed that $r$-round $o(1/r)$-fair coin-flipping protocols do not exist. Awerbuch et al. [Manuscript '85] constructed a $\Theta(1/\sqrt{r})$-fair coin-flipping protocol, assuming the existence of one-way functions. Moran et al. [Journal of Cryptology '16] constructed an $r$-round coin-flipping protocol that is $\Theta(1/r)$-fair (thus matching the aforementioned lower bound of Cleve [STOC '86]), assuming the existence of oblivious transfer.

The above gives rise to the intriguing question of whether oblivious transfer, or more generally ``public-key primitives'', is required for an $o(1/\sqrt r)$-fair coin flipping. This question was partially answered by Dachman-Soled et al. [TCC '11] and Dachman-Soled et al. [TCC '14], who showed that restricted types of fully black-box reductions cannot establish $o(1/\sqrt r)$-fair coin-flipping protocols from one-way functions. In particular, for constant-round coin-flipping protocols, Dachman-Soled et al. showed that black-box techniques from one-way functions can only guarantee fairness of order $1/\sqrt{r}$.

We make progress towards answering the above question by showing that, for any constant $r\in \mathbb{N}$, the existence of a $1/(c\cdot \sqrt{r})$-fair, $r$-round coin-flipping protocol implies the existence of an infinitely-often key-agreement protocol, where $c$ denotes some universal constant (independent of $r$). Our reduction is non-black-box and makes novel use of the recent dichotomy for two-party protocols of Haitner et al. [FOCS '18] to facilitate a two-party variant of the recent attack of Beimel et al. [FOCS '18] on multi-party coin-flipping protocols.
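The constant-bias phenomenon behind Cleve's bound is visible already in the simplest commit-and-open coin flip (Blum's protocol): a party who aborts before opening shifts the honest party's output by a constant. The toy simulation below uses a hash commitment heuristically and is only an illustration of the attack intuition, not any protocol from the paper.

```python
import hashlib
import secrets

def commit(bit: int, rand: bytes) -> bytes:
    # Hash-based commitment (hiding and binding, heuristically).
    return hashlib.sha256(bytes([bit]) + rand).digest()

def flip_with_aborting_alice(target: int) -> int:
    # Blum's protocol: Alice commits to a, Bob sends b, Alice opens,
    # both output a XOR b. A cheating Alice who dislikes the outcome
    # aborts before opening; honest Bob then falls back to a local coin.
    a, r = secrets.randbelow(2), secrets.token_bytes(16)
    c = commit(a, r)
    b = secrets.randbelow(2)
    if a ^ b != target:
        return secrets.randbelow(2)   # abort: Bob's fallback coin
    assert commit(a, r) == c          # Bob verifies the opening
    return a ^ b

# Aborting steers Bob's output: Pr[target] rises from 1/2 to about 3/4,
# a constant bias no matter how the fallback is chosen.
outs = [flip_with_aborting_alice(0) for _ in range(4000)]
assert 0.65 < outs.count(0) / 4000 < 0.85
```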
Mohammad Hajiabadi
ePrint Report
Trapdoor permutations (TDP) are a fundamental primitive in cryptography. Over the years, several variants of this notion have emerged as a result of various applications. However, it is not clear whether these variants may be based on the standard notion of TDPs.

We study the question of whether enhanced trapdoor permutations can be based on classical trapdoor permutations. The main motivation of our work is in the context of existing TDP-based constructions of oblivious transfer and non-interactive zero-knowledge protocols, which require enhancements to the classical TDP notion. We prove that these enhancements are non-trivial, in the sense that there do not exist fully black-box constructions of enhanced TDPs from classical TDPs.

At a technical level, we show that the enhanced TDP security of any construction in the random TDP oracle world can be broken via a polynomial number of queries to the TDP oracle as well as a weakening oracle, which provides inversion with respect to randomness. We also show that the standard one-wayness of a random TDP oracle stays intact in the presence of this weakening oracle.
Ashutosh Dhar Dwivedi, Pawel Morawiecki
ePrint Report
We propose a new algorithm inspired by the Nested Monte-Carlo Search algorithm to find differential paths in ARX ciphers. To enhance the decision process of our algorithm and to reduce the search space of our heuristic nested tool, we use the concept of a partial difference distribution table (pDDT). The algorithm is applied to reduced-round variants of the SPECK block cipher family. In our previous paper we applied a naive algorithm with a large search space of values and presented results for only one block-size variant of SPECK. Our new approach provides results within a simpler framework and in a very short time (a few minutes) for all larger block-size variants of SPECK. More specifically, we report differential paths for up to 8, 9, 11, 10 and 11 rounds of SPECK32, SPECK48, SPECK64, SPECK96 and SPECK128, respectively. To construct differential characteristics for a large number of rounds, we divide long characteristics into short ones, constructing a long characteristic from two short characteristics. Instead of starting from the first round, we start from the middle and run the search in the forward as well as the reverse direction. Using this method we improved our results and report differential paths for up to 9, 10, 12, 13 and 15 rounds of SPECK32, SPECK48, SPECK64, SPECK96 and SPECK128, respectively.
Ashutosh Dhar Dwivedi, Gautam Srivastava
ePrint Report
In this paper we focus on differential cryptanalysis dedicated to a particular class of cryptographic algorithms, namely ARX ciphers. We propose a new algorithm inspired by the Nested Monte-Carlo Search algorithm to find a differential path in ARX ciphers, and apply it to a round-reduced variant of the block cipher LEA. We use the concept of a partial difference distribution table (pDDT) in our algorithm to reduce the search space: keeping only those differentials whose probabilities are greater than or equal to a pre-defined threshold removes many differentials that are invalid or have very low probability. The smaller search space decreases the time our nested algorithm needs to find a differential path, and also makes it suitable for ARX ciphers with larger block sizes. Finding long differential characteristics is one of the hardest problems in this area; other algorithms can take many hours or days to find differential characteristics in ARX ciphers, whereas ours finds them in just a few minutes within a very simple framework. We report a differential path for up to 9 rounds of LEA. To construct differential characteristics for a large number of rounds, we divide long characteristics into short ones, constructing a long characteristic from two short characteristics. Instead of starting from the first round, we start from the middle and run the search in the forward as well as the reverse direction. Using this method we improved our results and report a differential path for up to 13 rounds. Overall, the best property of our algorithm is its potential to provide state-of-the-art results within a simpler framework and in less time. It is also promising for future research, as it could easily be applied to other ARX ciphers.
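A pDDT for the modular-addition component of an ARX round can be built by brute force for toy word sizes, keeping only XOR differentials above the probability threshold. The sketch below uses 4-bit words purely for tractability; real ARX analyses use the Lipmaa-Moriai formula for the differential probability of addition rather than enumeration, and this is not the authors' tool.

```python
from collections import defaultdict

N = 4                       # toy 4-bit words (real ARX ciphers use 16-64)
MASK = (1 << N) - 1

def build_pddt(threshold: float):
    # Partial DDT for addition mod 2^N: for each XOR differential
    # (da, db -> dc), count over all input pairs how often the output
    # difference equals dc, then keep only entries whose probability
    # meets the threshold -- exactly the pruning that shrinks the
    # nested search's space.
    counts = defaultdict(int)
    for da in range(1 << N):
        for db in range(1 << N):
            for x in range(1 << N):
                for y in range(1 << N):
                    dc = ((x + y) ^ ((x ^ da) + (y ^ db))) & MASK
                    counts[(da, db, dc)] += 1
    total = 1 << (2 * N)    # input pairs per (da, db)
    return {d: c / total for d, c in counts.items()
            if c / total >= threshold}

pddt = build_pddt(0.25)
assert pddt[(0, 0, 0)] == 1.0   # zero difference propagates for sure
assert pddt[(8, 8, 0)] == 1.0   # MSB differences cancel (carry is lost)
```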
Yilei Chen, Vinod Vaikuntanathan, Brent Waters, Hoeteck Wee, Daniel Wichs
ePrint Report
A traitor tracing scheme is a public-key encryption scheme for which there are many secret decryption keys. Any of these keys can decrypt a ciphertext; moreover, even if a coalition of users collude, put together their decryption keys, and attempt to create a new decryption key, there is an efficient algorithm to trace the new key to at least one of the colluders.

Recently, Goyal, Koppula and Waters (GKW, STOC '18) provided the first traitor tracing scheme from LWE with ciphertext and secret key sizes that grow polynomially in $\log n$, where $n$ is the number of users. The main technical building block in their construction is a strengthening of (bounded collusion secure) secret-key functional encryption which they refer to as mixed functional encryption (FE).

In this work, we improve upon and extend the GKW traitor tracing scheme:

- We provide simpler constructions of mixed FE schemes based on the LWE assumption. Our constructions improve upon the GKW construction in terms of expressiveness, modularity, and security.

- We provide a construction of attribute-based traitor tracing for all circuits based on the LWE assumption.
Solaris, Croatia, 17 June - 21 June 2019
Event Calendar
Event date: 17 June to 21 June 2019