International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

25 September 2020

Ryo Nishimaki
ePrint Report
Program watermarking enables users to embed an arbitrary string called a mark into a program while preserving the functionality of the program. Adversaries cannot remove the mark without destroying the functionality. Although there exist generic constructions of watermarking schemes for public-key cryptographic (PKC) primitives, those schemes are constructed from scratch and not efficient.

In this work, we present a general framework to equip a broad class of PKC primitives with an efficient watermarking scheme. The class consists of PKC primitives that have a canonical all-but-one (ABO) reduction. Canonical ABO reductions are standard techniques for proving selective security of PKC primitives, where adversaries must commit to a target attribute at the beginning of the security game. Thus, we can obtain watermarking schemes for many existing efficient PKC schemes from standard cryptographic assumptions via our framework. Most well-known selectively secure PKC schemes have canonical ABO reductions. Notably, we can achieve watermarking for chosen-ciphertext-secure public-key encryption with constant-size ciphertexts and secret keys.

Our approach adapts the canonical ABO reduction technique to the puncturable pseudorandom function (PRF) technique, which is used to achieve watermarkable PRFs. We find that canonical ABO reductions are compatible with such puncturable PRF-based watermarking schemes.
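
For readers unfamiliar with the puncturable PRF technique mentioned above, the following is a minimal sketch of a textbook GGM-style puncturable PRF, using SHA-256 as the length-doubling PRG. It is illustrative background only, not the construction from the paper, and all names in it are chosen for exposition.

```python
# Minimal GGM-style puncturable PRF sketch (illustrative only, not the paper's scheme).
# The PRG G(s) = (SHA256(s || 0x00), SHA256(s || 0x01)) doubles a 32-byte seed.
import hashlib

def prg(seed: bytes):
    left = hashlib.sha256(seed + b"\x00").digest()
    right = hashlib.sha256(seed + b"\x01").digest()
    return left, right

def eval_prf(key: bytes, x: str) -> bytes:
    """Evaluate the GGM PRF on an input given as a bit string, e.g. '0110'."""
    node = key
    for bit in x:
        left, right = prg(node)
        node = right if bit == "1" else left
    return node

def puncture(key: bytes, x_star: str):
    """Return a key punctured at x_star: the co-path (sibling) nodes along x_star."""
    copath = []  # list of (prefix, sibling_value)
    node = key
    for i, bit in enumerate(x_star):
        left, right = prg(node)
        if bit == "1":
            copath.append((x_star[:i] + "0", left))
            node = right
        else:
            copath.append((x_star[:i] + "1", right))
            node = left
    return copath

def eval_punctured(copath, x: str) -> bytes:
    """Evaluate at any point other than the punctured one."""
    for prefix, sibling in copath:
        if x.startswith(prefix):
            # Continue the GGM expansion from the stored sibling node.
            node = sibling
            for bit in x[len(prefix):]:
                left, right = prg(node)
                node = right if bit == "1" else left
            return node
    raise ValueError("cannot evaluate at the punctured point")

# Quick check: punctured evaluation agrees everywhere except at the punctured point.
k = b"\x00" * 32
pk = puncture(k, "0110")
assert eval_punctured(pk, "0111") == eval_prf(k, "0111")
```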

Zi-Yuan Liu, Yi-Fan Tseng, Raylin Tso
ePrint Report
Kansal and Dutta recently proposed a multisignature scheme at AFRICACRYPT 2020. This is the first lattice-based multisignature scheme that generates a multisignature in only a single round of interaction and supports public key aggregation. In this letter, we provide a cryptanalysis of this multisignature scheme and demonstrate that it does not satisfy the unforgeability requirement. We present an attack strategy showing that if an adversary obtains a sufficient number of signatures from a signer, he/she can recover the signer's private key in polynomial time. We also uncover the root cause of the attack and provide a possible countermeasure to aid future designs of secure multisignature schemes.

Alexander Bienstock, Yevgeniy Dodis, Paul Rösler
ePrint Report
Post-Compromise Security, or PCS, refers to the ability of a given protocol to recover—by means of normal protocol operations—from the exposure of local states of its (otherwise honest) participants. While PCS in the two-party setting has attracted a lot of attention recently, the problem of achieving PCS in the group setting—called group ratcheting here—is much less understood. On the one hand, one can achieve excellent security by simply executing, in parallel, a two-party ratcheting protocol (e.g., Signal) for each pair of members in a group. However, this incurs $\mathcal{O}(n)$ communication overhead for every message sent, where $n$ is the group size. On the other hand, several related protocols were recently developed in the context of the IETF Messaging Layer Security (MLS) effort that improve the communication overhead per message to $\mathcal{O}(\log n)$. However, this reduction of communication overhead comes with a significant restriction: group members cannot send messages and recover from exposures concurrently, so reaching PCS can be delayed by up to $n$ communication time slots (potentially even more).

In this work we formally study the trade-off between PCS, concurrency, and communication overhead in the context of group ratcheting. Since our main result is a lower bound, we define the cleanest and most restrictive setting where the tension already occurs: static groups equipped with a synchronous (and authenticated) broadcast channel, where up to $t$ arbitrary parties can concurrently send messages in any given round. Already in this setting, we show in a symbolic execution model that PCS requires $\Omega(t)$ communication overhead per message. Our symbolic model permits as building blocks black-box use of (even "dual") PRFs, (even key-updatable) PKE (with functionality and security of HIBE), and broadcast encryption, capturing all tools used in previous constructions, but prohibiting the use of exotic primitives.

To complement our result, we also prove an almost matching upper bound of $\mathcal{O}(t\cdot(1+\log(n/t)))$, which smoothly increases from $\mathcal{O}(\log n)$ with no concurrency, to $\mathcal{O}(n)$ with unbounded concurrency, matching the previously known protocols.
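
To see how the stated upper bound smoothly interpolates between the two regimes, the following throwaway snippet (with a hypothetical group size) tabulates $t\cdot(1+\log_2(n/t))$ for increasing concurrency $t$.

```python
# Illustrative only: tabulate the upper bound t * (1 + log2(n/t)) for a fixed,
# hypothetical group size n and varying concurrency t, to see it grow from
# roughly log2(n) (no concurrency) up to n (unbounded concurrency).
import math

n = 1024  # hypothetical group size
for t in [1, 4, 16, 64, 256, 1024]:
    overhead = t * (1 + math.log2(n / t))
    print(f"t = {t:4d}  per-message overhead ~ {overhead:7.1f}")
# t = 1 gives ~11 (about log2 n); t = n gives exactly n, matching the stated regimes.
```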

Bar Alon, Ran Cohen, Eran Omri, Tom Suad
ePrint Report
Fully secure multiparty computation (MPC) allows a set of parties to compute some function of their inputs, while guaranteeing correctness, privacy, fairness, and output delivery. Understanding the necessary and sufficient assumptions that allow for fully secure MPC is an important goal. Cleve (STOC'86) showed that full security cannot be obtained in general without an honest majority. Conversely, by Rabin and Ben-Or (FOCS'89), assuming a broadcast channel and an honest majority enables a fully secure computation of any function.

Our goal is to characterize the set of functionalities that can be computed with full security, assuming an honest majority, but no broadcast. This question was fully answered by Cohen et al. (TCC'16) -- for the restricted class of symmetric functionalities (where all parties receive the same output). Instructively, their results crucially rely on agreement and do not carry over to general asymmetric functionalities. In this work, we focus on the case of three-party asymmetric functionalities, providing a variety of necessary and sufficient conditions to enable fully secure computation.

An interesting use-case of our results is server-aided computation, where an untrusted server helps two parties to carry out their computation. We show that without a broadcast assumption, the resource of an external non-colluding server provides no additional power. Namely, a functionality can be computed with the help of the server if and only if it can be computed without it. For fair coin tossing, we further show that the optimal bias for three-party (server-aided) $r$-round protocols remains $\Theta(1/r)$ (as in the two-party setting).

Sigurd Eskeland
ePrint Report
Common to the overwhelming majority of privacy-preserving greater-than integer comparison schemes is that cryptographic computations are conducted in a bitwise manner. To ensure secrecy, each bit must be encoded in such a way that nothing is revealed to the opposite party. The most noted disadvantage is that the computational and communication cost of bitwise encoding is at best linear in the number of bits. Also, many proposed schemes have complex designs that may be difficult to implement. Carlton et al. proposed in 2018 an interesting scheme that avoids bitwise decomposition and works on whole integers. A variant was proposed by Bourse et al. in 2019. Although the stated adversarial model of these schemes is honest-but-curious users, we show that they are vulnerable to malicious users. Inspired by the two mentioned papers, we propose a novel comparison scheme that is resistant to malicious users.

Zvika Brakerski, Sanjam Garg, Rotem Tsabary
ePrint Report
We present a novel tree-based technique that can convert any designated-prover NIZK proof system (DP-NIZK) which maintains zero-knowledge only for a single statement into one that allows proving an unlimited number of statements in ZK, while keeping all parameters succinct. Our transformation requires leveled fully-homomorphic encryption. We note that single-statement DP-NIZK can be constructed from any one-way function. We also observe a two-way derivation between DP-NIZK and attribute-based signatures (ABS), and as a result derive new constructions of ABS and homomorphic signatures (HS).

Our construction improves upon the prior construction of lattice-based DP-NIZK by Kim and Wu (Crypto 2018) since we only require leveled FHE as opposed to HS (which also translates to improved LWE parameters when instantiated). Alternatively, the recent construction of NIZK without preprocessing from either circular-secure FHE (Canetti et al., STOC 2019) or polynomial Learning with Errors (Peikert and Shiehian, Crypto 2019) could be used to obtain a similar final statement. Nevertheless, we note that our statement is formally incomparable to these works (since leveled FHE is not known to imply circular secure FHE or the hardness of LWE). We view this as evidence for the potential in our technique, which we hope can find additional applications in future works.

Inbar Kaslasi, Guy N. Rothblum, Ron D. Rothblum, Adam Sealfon, Prashant Nalini Vasudevan
ePrint Report
A statistical zero-knowledge proof (SZK) for a problem $\Pi$ enables a computationally unbounded prover to convince a polynomial-time verifier that $x \in \Pi$ without revealing any additional information about $x$ to the verifier, in a strong information-theoretic sense.

Suppose, however, that the prover wishes to convince the verifier that $k$ separate inputs $x_1,\dots,x_k$ all belong to $\Pi$ (without revealing anything else). A naive way of doing so is to simply run the SZK protocol separately for each input. In this work we ask whether one can do better -- that is, is efficient batch verification possible for SZK?

We give a partial positive answer to this question by constructing a batch verification protocol for a natural and important subclass of SZK -- all problems $\Pi$ that have a non-interactive SZK protocol (in the common random string model). More specifically, we show that, for every such problem $\Pi$, there exists an honest-verifier SZK protocol for batch verification of $k$ instances, with communication complexity $poly(n) + k \cdot poly(\log{n},\log{k})$, where $poly$ refers to a fixed polynomial that depends only on $\Pi$ (and not on $k$). This result should be contrasted with the naive solution, which has communication complexity $k \cdot poly(n)$.
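
For a rough sense of the gap, the following toy calculation plugs hypothetical values into the two communication bounds, arbitrarily taking both $poly$ factors to be quadratic; the concrete numbers are meaningless, only the scaling in $k$ matters.

```python
# Hypothetical numbers only: compare naive batch SZK communication k * poly(n)
# with the paper's poly(n) + k * poly(log n, log k), taking both polynomials to
# be quadratic purely for illustration.
from math import log2

n = 1_000
for k in [10, 100, 1_000, 10_000]:
    naive = k * n ** 2
    batched = n ** 2 + k * (log2(n) * log2(k)) ** 2
    print(f"k = {k:6d}: naive ~ {naive:.2e}, batched ~ {batched:.2e}")
```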

Our proof leverages a new NISZK-complete problem, called Approximate Injectivity, that we find to be of independent interest. The goal in this problem is to distinguish circuits that are nearly injective, from those that are non-injective on almost all inputs.

Jorge Nakahara Jr
ePrint Report
Substitution boxes (S-boxes) based on the inversion mapping in even-characteristic finite fields are widely used components in the design of cryptographic primitives such as block ciphers (notably the AES cipher). This report focuses on the inversion mapping in finite fields GF(p^n) where p is a (small) odd prime and n is a (small) integer. We compare the differential and linear profiles of S-boxes over odd- and even-characteristic fields, which also motivates the design and analysis of AES variants operating in fields of odd characteristic. Even for GF(2^n), the study of S-boxes which are APN permutations (odd-valued n) already shows resistance to differential and linear cryptanalysis after three rounds.
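
To make the notion of a differential profile concrete, the following small script computes the differential uniformity (the maximum entry of the difference distribution table) of the inversion mapping over prime fields GF(p), with 0 mapped to 0; it is a toy experiment in the spirit of the comparison above, not code from the report.

```python
# Toy experiment: differential uniformity of x -> x^(-1) in GF(p) for small odd p,
# using the convention inv(0) = 0 (as in the AES S-box core mapping).
def inv(x: int, p: int) -> int:
    return 0 if x == 0 else pow(x, p - 2, p)

def differential_uniformity(p: int) -> int:
    worst = 0
    for a in range(1, p):                     # nonzero input difference
        counts = [0] * p
        for x in range(p):
            counts[(inv((x + a) % p, p) - inv(x, p)) % p] += 1
        worst = max(worst, max(counts))       # most frequent output difference
    return worst

for p in [7, 11, 13, 17, 19, 23]:
    print(p, differential_uniformity(p))
```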

Bor de Kock, Kristian Gjøsteen, Mattia Veroni
ePrint Report
We exploit the Diffie-Hellman-like structure of CSIDH to build a quantum-resistant authenticated key-exchange algorithm. Our security proof has optimal tightness, which means that the protocol is efficient even when instantiated with theoretically-sound security parameters. Compared to previous isogeny-based authenticated key-exchange protocols, our scheme is extremely simple, its security relies only on the underlying CSIDH problem, and it has optimal communication complexity for CSIDH-based protocols. Our security proof relies heavily on the rerandomizability of CSIDH-like problems and is carried out in the random oracle model (ROM).

Min Yang, Qingshu Meng, An Wang, Xin Liu
ePrint Report
For template attacks, it is ideal if templates can be built for each (data, key) pair. However, this requires a large number of power traces and much computation. In this paper, first, the properties of the UMJD (unisource multivariate joint distribution) are studied, and a template attack based on the UMJD is presented. For power traces with much noise, experiments show that its attack effect is much better than that of the CPA (correlation power analysis) based template attacks and that of the SOST (sum of squared pairwise t-differences) based template attacks. Second, the problem of building a template for each (data, key) pair can be reduced to building templates for an MMJD (multisource multivariate joint distribution), and an MMJD can be divided into several UMJDs. Based on this analysis, we give a template attack that requires neither large amounts of computation nor a large number of power traces for profiling, yet whose attack effect is equivalent to that of the template attack which builds a template for each (data, key) pair. Third, the UMJD-based template attacks show that using the POI (points of interest) of all variables together as the POI of the template attack extends the existing conclusion on the optimal number of POI. Finally, the UMJD can also be applied in the SOST method to obtain better-quality POI.
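
As background, the classical pooled Gaussian template attack that such methods build on can be sketched in a few lines of Python; the shapes, the labeling of traces, and the key-ranking loop below are illustrative assumptions, not the UMJD-based algorithm of the paper.

```python
# Minimal sketch of a classical Gaussian template attack (illustrative only; not the
# UMJD-based method of the paper). Assumes profiling traces are already reduced to a
# few points of interest (POI) and labeled by their leakage class.
import numpy as np
from scipy.stats import multivariate_normal

def build_templates(traces: np.ndarray, labels: np.ndarray):
    """traces: (N, n_poi) profiling traces at the POI; labels: (N,) class labels
    (e.g. an S-box output byte). Returns one (mean, covariance) template per class."""
    templates = {}
    for c in np.unique(labels):
        cls = traces[labels == c]
        templates[c] = (cls.mean(axis=0), np.cov(cls, rowvar=False))
    return templates

def rank_keys(templates, attack_traces: np.ndarray, class_of_key):
    """Accumulate log-likelihoods over attack traces and rank key candidates.
    class_of_key(k, i) maps a key guess k and trace index i to the expected class."""
    scores = {k: 0.0 for k in range(256)}
    for i, trace in enumerate(attack_traces):
        for k in scores:
            mean, cov = templates[class_of_key(k, i)]
            scores[k] += multivariate_normal.logpdf(trace, mean=mean, cov=cov,
                                                    allow_singular=True)
    return sorted(scores, key=scores.get, reverse=True)  # best key guess first
```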

Guoqiang Deng, Yongzhuang Wei, Xuefeng Duan, Enes Pasalic, Samir Hodzic
ePrint Report
With the advances of Internet-of-Things (IoT) applications in smart cities and the pervasiveness of network devices with limited resources, lightweight block ciphers have achieved rapid development recently. Due to their relatively simple key schedules, nonlinear invariant attacks have been successfully applied to several families of lightweight block ciphers. This attack relies on the existence of a nonlinear invariant $g:\mathbb{F}_2^n \rightarrow \mathbb{F}_2$ for the round function $F_k$ so that $g(x) + g(F_k(x))$ is constant for any input value $x$. Whereas invariants of the entire S-box layer have been studied in terms of the corresponding cycle structure [TLS16,WRP20] (assuming the use of bijective S-boxes), a similar analysis for the linear layer has not been performed yet. In this article, we provide a theoretical analysis specifying the minimal length of cycles for commonly used linear permutations (implementing linear layers) in lightweight block ciphers. Namely, using a suitable matrix representation, we exactly specify the minimal cycle lengths for those (efficiently implemented) linear layers that employ ShiftRows, Rotational-XOR and circular Boolean matrix operations, which can be found in many well-known families of block ciphers. These results are practically useful for finding nonlinear invariants of the entire encryption rounds, since these can be specified using the intersection of cycles corresponding to the linear and S-box layers. We also apply our theoretical analysis in practice and specify minimal cycle lengths of linear layers for certain families of block ciphers, including some NIST candidates.
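
As a toy illustration of the kind of cycle-structure question studied here, the following snippet empirically lists the cycle lengths of an AES-like ShiftRows layer acting on a 4x4 state by iterating the map; the paper's matrix-based analysis of Rotational-XOR and circular Boolean layers is not reproduced here.

```python
# Toy illustration (not the paper's analysis): cycle lengths of an AES-like
# ShiftRows layer acting on a 4x4 state of cells, found by iterating the map.
from math import lcm
import random

def shift_rows(state):
    """state: tuple of 16 cells in row-major order; row i is rotated left by i."""
    rows = [list(state[4 * i:4 * i + 4]) for i in range(4)]
    rows = [tuple(row[i:] + row[:i]) for i, row in enumerate(rows)]
    return tuple(c for row in rows for c in row)

def cycle_length(state):
    s, m = shift_rows(state), 1
    while s != state:
        s, m = shift_rows(s), m + 1
    return m

random.seed(0)
samples = [tuple(random.randrange(16) for _ in range(16)) for _ in range(1000)]
print(sorted({cycle_length(s) for s in samples}))
# Every cycle length divides the order of the ShiftRows permutation of positions:
print(lcm(1, 4, 2, 4))
```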

Pavel Hubáček, Chethan Kamath, Karel Král, Veronika Slívová
ePrint Report
The complexity class TFNP consists of all NP search problems that are total in the sense that a solution is guaranteed to exist for all instances. Over the years, this class has proved to illuminate surprising connections among several diverse subfields of mathematics like combinatorics, computational topology, and algorithmic game theory. More recently, we are starting to better understand its interplay with cryptography. We know that certain cryptographic primitives (e.g. one-way permutations, collision-resistant hash functions, or indistinguishability obfuscation) imply average-case hardness in TFNP and its important subclasses. However, its relationship with the most basic cryptographic primitive, i.e., one-way functions (OWFs), still remains unresolved. Under an additional complexity-theoretic assumption, OWFs imply hardness in TFNP (Hubacek, Naor, and Yogev, ITCS 2017). It is also known that average-case hardness in most structured subclasses of TFNP does not imply any form of cryptographic hardness in a black-box way (Rosen, Segev, and Shahaf, TCC 2017), and, thus, one-way functions might be sufficient. Specifically, no negative result which would rule out basing average-case hardness in TFNP solely on OWFs is currently known.

In this work, we further explore the interplay between TFNP and OWFs and give the first negative results. As our main result, we show that there cannot exist constructions of average-case (and, in fact, even worst-case) hardness in TFNP from OWFs with a certain type of simple black-box security reductions. The class of reductions we rule out is, however, rich enough to capture many of the currently known cryptographic hardness results for TFNP. Our results are established using the framework of black-box separations (Impagliazzo and Rudich, STOC 1989) and involve a novel application of the reconstruction paradigm (Gennaro and Trevisan, FOCS 2000).

Shashank Agrawal, Srinivasan Raghuraman
ePrint Report
As blockchains grow in size, validating new transactions becomes more and more resource intensive. To deal with this, there is a need to discover compact encodings of the (effective) state of a blockchain -- an encoding that allows for efficient proofs of membership and updates. In the case of account-based cryptocurrencies, the state can be represented by a key-value map, where keys are the account addresses and values consist of account balance, nonce, etc.

We propose a new commitment scheme for key-value maps whose size does not grow with the number of keys, yet proofs of membership are of constant size. In fact, the encoding and the proofs consist of just two and three group elements, respectively (in groups of unknown order like class groups). Verifying and updating proofs involves just a few group exponentiations. Additive updates to key values enjoy the same level of efficiency too.

Key-value commitments can be used to build dynamic accumulators and vector commitments, which find applications in group signatures, anonymous credentials, verifiable databases, interactive oracle proofs, etc. Using our new key-value commitment, we provide the most efficient constructions of (sub)vector commitments to date.

Nir Bitansky, Arka Rai Choudhuri
ePrint Report
Randomness is typically thought to be essential for zero knowledge protocols. Following this intuition, Goldreich and Oren (Journal of Cryptology 94) proved that auxiliary-input zero knowledge cannot be achieved with a deterministic prover. On the other hand, positive results are only known in the honest-verifier setting, or when the prover is given at least a restricted source of entropy.

We prove that when the verifier's auxiliary input is removed (or just bounded), deterministic-prover zero knowledge becomes feasible:

- Assuming non-interactive witness-indistinguishable proofs and subexponential indistinguishability obfuscation and one-way functions, we construct deterministic-prover zero-knowledge arguments for $\mathsf{NP}\cap \mathsf{coNP}$ against verifiers with bounded non-uniform auxiliary input.

- Assuming also keyless hash functions that are collision-resistant against bounded-auxiliary-input quasipolynomial-time attackers, we construct similar arguments for all of $\mathsf{NP}$.

Together with the result of Goldreich and Oren, this characterizes when deterministic-prover zero knowledge is feasible. We also demonstrate the necessity of strong assumptions, by showing that deterministic-prover zero-knowledge arguments for a given language imply witness encryption for that language. We further prove that such arguments can always be collapsed to two messages and be made laconic. These implications rely on a more general connection with the notion of predictable arguments by Faonio, Nielsen, and Venturi (PKC 17).

Rintaro Fujita, Takanori Isobe, Kazuhiko Minematsu
ePrint Report
We present malleability attacks against encrypted binary executable files when they are encrypted with the CBC mode of operation. While CBC malleability is classic and has been used to attack various real-world applications, the risk of encrypting binary executables via CBC mode on common OSs has not been widely recognized. We show that, with a certain non-negligible probability, it is possible to manipulate CBC-encrypted binary files so that the decryption result allows arbitrary code execution (ACE), one of the most powerful exploits, even without knowledge of the plaintext binary. More specifically, for both 32- and 64-bit Linux and Windows, we performed a thorough analysis of the binary executable formats to evaluate the practical impact of ACE on CBC encryption, and showed that the attack is possible if the adversary is able to correctly guess 13 to 25 bits of the address at which to inject code. In principle, our attack affects a wide range of storage/file encryption systems that adopt CBC encryption. In addition, manual file encryption using the OpenSSL API (AES-256-CBC), which is presumably frequently used in practice for file encryption, is affected. We provide Proof-of-Concept implementations for Linux and Windows. We have communicated our findings to the appropriate institution and informed vendors as an act of responsible disclosure.
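
The textbook CBC malleability property underlying the attack is easy to demonstrate with the pyca/cryptography package: XORing a mask into one ciphertext block XORs the same mask into the next decrypted plaintext block, at the cost of garbling the block in which the flip was made. This sketch shows only that effect, not the paper's code-injection exploit.

```python
# Textbook CBC bit-flipping (illustrative only, not the paper's exploit): XORing a
# mask into ciphertext block i-1 XORs the same mask into decrypted block i, while
# block i-1 itself decrypts to unpredictable garbage.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)
plaintext = b"A" * 16 + b"jmp to old addr."   # two 16-byte blocks

enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ct = bytearray(enc.update(plaintext) + enc.finalize())

# Flip bits in the first ciphertext block to rewrite the second plaintext block.
delta = bytes(a ^ b for a, b in zip(b"jmp to old addr.", b"jmp to evil addr"))
for i, d in enumerate(delta):
    ct[i] ^= d

dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
pt = dec.update(bytes(ct)) + dec.finalize()
print(pt[16:])   # b"jmp to evil addr" -- attacker-chosen, without knowing the key
print(pt[:16])   # garbled block, the price paid for the manipulation
```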

Daan Sprenkels, Bas Westerbaan
ePrint Report
We suggest a small change to the Dilithium signature scheme that allows reusing computation between aborted attempts, yielding a speed-up in signing time.

Rex Fernando, Ilan Komargodski, Yanyi Liu, Elaine Shi
ePrint Report
This work concerns secure protocols in the massively parallel computation (MPC) model, which is one of the most widely-accepted models for capturing the challenges of writing protocols for the types of parallel computing clusters which have become commonplace today (MapReduce, Hadoop, Spark, etc.). Recently, the work of Chan et al. (ITCS '20) initiated this study, giving a way to compile any MPC protocol into a secure one in the common random string model, achieving the standard secure multi-party computation definition of security with up to 1/3 of the parties being corrupt.

We are interested in achieving security for much more than 1/3 corruptions. To that end, we give two compilers for MPC protocols, which assume a simple public-key infrastructure, and achieve semi-honest security for all-but-one corruptions. Our first compiler assumes hardness of the learning-with-errors (LWE) problem, and works for any MPC protocol with ``short'' output---that is, where the output of the protocol can fit into the storage space of one machine, for instance protocols that output a trained machine learning model. Our second compiler works for any MPC protocol (even ones with a long output, such as sorting) but assumes, in addition to LWE, indistinguishability obfuscation and a circular secure variant of threshold FHE. Both protocols allow the attacker to choose corrupted parties based on the trusted setup, an improvement over Chan et al., whose protocol requires that the CRS is chosen independently of the attacker's choices.

Anna Lisa Ferrara, Chiara Ricciardi
ePrint Report
A hierarchical key assignment scheme (HKAS) is a method to assign some private information and encryption keys to a set of classes in a partially ordered hierarchy, so that the private information of a higher class together with some public information can be used to derive the keys of all classes lower down in the hierarchy. Historically, HKAS were introduced to enforce multi-level access control, where it can be safely assumed that the public information is made available in some authenticated form. Subsequently, HKAS have found application in several other contexts where, instead, it would be convenient to certify the trustworthiness of public information. Such application contexts include key management for IoT and for emerging distributed data acquisition systems such as wireless sensor networks. In this paper, motivated by the need to accommodate this additional security requirement, we first introduce a new cryptographic primitive: the Verifiable Hierarchical Key Assignment Scheme (VHKAS). A VHKAS is a key assignment scheme with a verification procedure that allows honest users to verify whether public information has been maliciously modified so as to induce an honest user to obtain an incorrect key. We then design and analyse verifiable hierarchical key assignment schemes which are provably secure. Our solutions support key update for compromised encryption keys by making a limited number of changes to public and private information.
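
For background, a minimal (non-verifiable) hierarchical key derivation in the spirit of classical HKAS can be sketched as follows for a tree-shaped hierarchy: each class secret is hashed together with a child label to obtain the child's secret, so higher classes can re-derive all keys below them. This toy omits general posets and, in particular, the verification procedure that is the paper's contribution.

```python
# Toy, non-verifiable hierarchical key derivation (illustrative background only;
# the paper's contribution is adding a verification procedure on top of such schemes).
import hashlib

def derive(parent_secret: bytes, child_label: str) -> bytes:
    # Child secret = H(parent secret || child label); holding a class secret lets
    # you derive every descendant's secret, but not an ancestor's.
    return hashlib.sha256(parent_secret + child_label.encode()).digest()

root = b"\x11" * 32                      # secret of the top class
dept = derive(root, "department")        # one level down
team = derive(dept, "team-A")            # two levels down

# The top class can re-derive team-A's key; team-A cannot climb back up to the root.
assert derive(derive(root, "department"), "team-A") == team
```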

Dimitris Mouris, Nektarios Georgios Tsoutsos
ePrint Report
As cloud computing becomes more popular, research has focused on usable solutions to the problem of verifiable computation (VC), where a computationally weak device (Verifier) outsources a program execution to a powerful server (Prover) and receives guarantees that the execution was performed faithfully. A Prover can further demonstrate knowledge of a secret input that causes the Verifier's program to satisfy certain assertions, without ever revealing which input was used. State-of-the-art Zero-Knowledge Proofs of Knowledge (ZKPK) methods encode a computation using arithmetic circuits and preserve the privacy of the Prover's inputs while attesting to the integrity of program execution. Nevertheless, developing, debugging and optimizing programs as circuits remains a daunting task, as most users are unfamiliar with this programming paradigm.

In this work we present Zilch, a framework that accelerates and simplifies the deployment of VC and ZKPK for any application transparently, i.e., without the need for trusted setup. Zilch uses traditional instruction sequences rather than static arithmetic circuits that would need to be regenerated for each different computation. Towards that end we have implemented ZMIPS: a MIPS-like processor model that allows verifying each instruction independently and composing a proof for the execution of the target application. To foster usability, Zilch incorporates a novel cross-compiler from an object-oriented Java-like language tailored to ZKPK and optimized for our ZMIPS model, as well as a powerful API that enables integration of ZKPK within existing C/C++ programs. In our experiments, we demonstrate the flexibility of Zilch using two real-life applications, and evaluate Prover and Verifier performance on a variety of benchmarks.

Kwangsu Lee, Minhye Seo
ePrint Report
In functional encryption for set intersection (FE-SI) in the multi-client environment, each client $i$ encrypts a set $X_i$ associated with time $T$ using its own encryption key and uploads it to a cloud server; the cloud server, upon receiving a function key for client indices $i, j$ from a trusted center, can compute the intersection $X_i \cap X_j$ from the two clients' ciphertexts. In this paper, we first define the concept of FE-SI suitable for the multi-client setting. Then, we propose an efficient FE-SI scheme in asymmetric bilinear groups and prove the static security of our scheme under newly introduced assumptions. In our FE-SI scheme, a ciphertext consists of $O(\ell)$ group elements, a function key consists of a single group element, and the decryption algorithm has $O(\ell^2)$ complexity, where $\ell$ is the size of a set in the ciphertext. Next, we propose another FE-SI scheme with time-constrained keys that limits the validity of function keys to a specified time period $T$, and prove its static security. Finally, we prove that the two newly introduced assumptions hold in the generic group model to provide confidence in them.