IACR News
Here you can see all recent updates to the IACR webpage.
27 September 2020
Protocol Labs
Job Posting
We seek a Research Project Manager to support and coordinate our research on problems within the field of consensus while assembling a team of world-class researchers.
Research at Protocol Labs
We’re not an ordinary Research Lab. We’re working hard to build the core protocols of the decentralized web pursuant to our vision for the future of the Internet. As part of our research team, you’ll be enabling outcomes at the forefront of our mission. Our research team is granted both the freedom to develop knowledge by working on novel applications and a responsibility to contribute those skills toward advancing the flagship projects of Protocol Labs and the research endeavors of leading institutions in the field. You’ll feel at home working with us if your knowledge and optimism enable you to craft creative solutions working around evolving needs.
What will the ConsensusLab do
The goal of ConsensusLab will be to focus on long-term research problems facing Filecoin and other Protocol Labs projects related to secure, scalable consensus mechanisms.
As ConsensusLab Lead you will…
Coordinate with teammates across the research and project teams to drive progress toward ConsensusLab's goals, steering consensus research to support multiple autonomous projects within the company
Track and coordinate the known problems that must be solved to hit those goals, the proposed solutions to those problems, the interactions and constraints that proposed solutions have on each other, and the effects design requirements have on proposed solutions.
Assemble a team of researchers to drive progress against these problems.
You may be a fit for this role if you have...
Strong organizational and project management skills
Familiarity with consensus research, such as having explored the security or scalability of a system with Byzantine or rational actors
A love of supporting others in their efforts to learn and push the boundaries of human knowledge
Bonus points...
You’ve applied game theory, mechanism design, or agent-based modeling to fault-tolerant systems
You’ve previously managed a team of researchers working on novel consensus algorithms
Closing date for applications:
Contact: Ed Burns
More information: https://jobs.lever.co/protocol/45adb8e8-4f5b-4da5-9c25-d1b84f3792e9
University of Maryland
Job Posting
Several postdoctoral positions are available in the following areas:
- Post-quantum (including lattice-based) zero-knowledge proofs.
- Fast implementation of fully homomorphic encryption and lattice-based cryptography.
- Adversarial machine learning, broadly defined.
Closing date for applications:
Contact: Jonathan Katz
25 September 2020
Rami Khalil, Naranker Dulay
ePrint Report
This paper introduces the PoSH Consensus protocol, a novel work-in-progress construction for achieving Sybil-resistant Nakamoto-style probabilistic consensus on the contents of a cryptocurrency ledger in a permissionless decentralized network where parties stake their hardware's computational power towards participation in leader election. PoSH aims to establish an openly mintable cryptocurrency that eliminates the requirement for block rewards and disincentivizes mining pools.
David Heath, Vladimir Kolesnikov, Stanislav Peceny
ePrint Report
MPC functionalities are increasingly specified in high-level languages, where control-flow constructions such as conditional statements are extensively used. Today, concretely efficient MPC protocols are circuit-based and must evaluate all conditional branches at high cost to hide the taken branch.
The Goldreich-Micali-Wigderson, or GMW, protocol is a foundational circuit-based technique that realizes MPC for p players and is secure against up to p - 1 semi-honest corruptions. While GMW requires communication rounds proportional to the computed circuit's depth, it is effective in many natural settings.
Our main contribution is MOTIF (Minimizing OTs for IFs), a novel GMW extension that evaluates conditional branches almost for free by amortizing Oblivious Transfers (OTs) across branches. That is, we simultaneously evaluate multiple independent AND gates, one gate from each mutually exclusive branch, by representing them as a single cheap vector-scalar multiplication (VS) gate.
For 2PC with b branches, we simultaneously evaluate up to b AND gates using only two 1-out-of-2 OTs of b-bit secrets. This is a factor ~b improvement over the state-of-the-art 2b 1-out-of-2 OTs of 1-bit secrets. Our factor b improvement generalizes to the multiparty setting as well: b AND gates consume only p(p - 1) 1-out-of-2 OTs of b-bit secrets.
We implemented our approach and report its performance. For 2PC and a circuit with 16 branches, each comparing two length-65000 bitstrings, MOTIF outperforms standard GMW in terms of communication by ~9.4x. Total wall-clock time is improved by 4.1 - 9.2x depending on network settings.
Our work is in the semi-honest model, tolerating all-but-one corruptions.
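The amortization can be made concrete with a back-of-the-envelope count of OT instances, using only the figures quoted in the abstract (the function names are illustrative, not from the paper):

```python
# Count 1-out-of-2 OT instances needed to evaluate one AND gate per branch.
# Figures follow the abstract: standard GMW pays per gate, MOTIF packs the
# b branch gates into OTs of b-bit secrets.

def standard_gmw_ots(b, p=2):
    # One AND gate costs p*(p-1) OTs of 1-bit secrets; b branches cost b times that.
    return b * p * (p - 1)

def motif_ots(b, p=2):
    # MOTIF: p*(p-1) OTs of b-bit secrets cover all b branches at once.
    return p * (p - 1)

# 2PC with 16 branches: 32 OTs vs 2 OTs, a factor-b (= 16) improvement.
print(standard_gmw_ots(16), motif_ots(16))
```

Note that the total transferred secret bits are comparable; the saving is in the number of OT instances, each of which carries fixed overhead.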
Chloe Cachet, Luke Demarest, Benjamin Fuller, Ariel Hamlin
ePrint Report
Biometric databases collect entire countries' worth of citizens' sensitive information with few cryptographic protections. The critical required functionality is proximity search: the ability to search for all records close to a queried value, that is, within a bounded distance. Biometrics usually operate in high-dimensional space, where an exponential number (in the dimension) of values are close.
This work builds searchable encryption that supports proximity queries for the Hamming metric, which is frequently used for the iris biometric. Searchable encryption schemes have leakage: information revealed to the database server, such as the identifiers of the records returned, which is known as access pattern leakage.
Prior work on proximity searchable encryption falls into two classes: 1) Li et al. (INFOCOM 2010) and Boldyreva and Chenette (FSE 2014) support only a polynomial number of close values, 2) Kim et al. (SCN 2018) leak the distance between the query and all stored records. The first class is not feasible due to the exponential number of close values. The second class allows the server to compute geometry of the space, enabling attacks akin to those on nearest neighbor schemes (Kornaropoulos et al. IEEE S&P 2019, 2020).
We build proximity search out of a new variant of inner product encryption called multi-point inner product encryption (MPIPE). MPIPE is built from function-hiding, secret-key, inner product predicate encryption (Shen, Shi, and Waters, TCC 2009). Our construction leaks the access pattern and reveals when two database records are the same distance from the queried point.
In most applications of searchable encryption the data distribution is not known a priori, making it prudent to consider leakage in a variety of settings. However, biometrics' statistics are well studied and static. Frequently in biometric search at most one record is returned. In this setting, access pattern leakage and the additional leakage of distance equality is unlikely to be harmful.
We also introduce a technique for reducing key size of a class of inner product encryption schemes based on dual pairing vector spaces. Our technique splits these vector spaces into multiple, smaller components, yielding keys that are a linear number of group elements instead of quadratic. We instantiate this technique on the scheme of Okamoto and Takashima (Eurocrypt, 2012) and show security under the same assumption (decisional linear).
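The quadratic-to-linear key-size saving can be illustrated by a simple count of group elements (the constants below are hypothetical; only the growth rates mirror the claim above):

```python
# Illustrative key-size counts (in group elements) for inner product
# encryption over dual pairing vector spaces. The split-component idea
# follows the abstract; the dimension d = 4 is an assumed toy constant.

def monolithic_key_size(n):
    # One dimension-n vector space: keys hold ~n vectors of ~n group
    # elements each, i.e. quadratic growth.
    return n * n

def split_key_size(n, d=4):
    # Split into n/d components of constant dimension d: each contributes
    # d*d = O(1) elements, so the total is linear in n.
    return (n // d) * d * d

# Doubling n quadruples the monolithic key but only doubles the split key.
print(monolithic_key_size(64), split_key_size(64))
```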
Ryo Nishimaki
ePrint Report
Program watermarking enables users to embed an arbitrary string called a mark into a program while preserving the functionality of the program. Adversaries cannot remove the mark without destroying the functionality. Although there exist generic constructions of watermarking schemes for public-key cryptographic (PKC) primitives, those schemes are constructed from scratch and not efficient.
In this work, we present a general framework to equip a broad class of PKC primitives with an efficient watermarking scheme. The class consists of PKC primitives that have a canonical all-but-one (ABO) reduction. Canonical ABO reductions are standard techniques for proving selective security of PKC primitives, where adversaries must commit to a target attribute at the beginning of the security game. Thus, we can obtain watermarking schemes for many existing efficient PKC schemes from standard cryptographic assumptions via our framework. Most well-known selectively secure PKC schemes have canonical ABO reductions. Notably, we can achieve watermarking for public-key encryption whose ciphertexts and secret keys are constant-size and that is chosen-ciphertext secure.
Our approach accommodates the canonical ABO reduction technique to the puncturable pseudorandom function (PRF) technique, which is used to achieve watermarkable PRFs. We find that canonical ABO reductions are compatible with such puncturable PRF-based watermarking schemes.
Zi-Yuan Liu, Yi-Fan Tseng, Raylin Tso
ePrint Report
Kansal and Dutta recently proposed a multisignature scheme at AFRICACRYPT 2020. This is the first lattice-based multisignature scheme that generates a multisignature in only a single round of interaction and supports public key aggregation. In this letter, we provide a cryptanalysis of this multisignature scheme and demonstrate that the scheme does not satisfy unforgeability requirements. We present an attack strategy to demonstrate that if an adversary obtains a sufficient number of signatures from a signer, he/she can recover the private key of the signer in polynomial time. We also uncover the root cause of the attack and provide a possible solution for this attack to aid future designs of secure multisignature schemes.
Alexander Bienstock, Yevgeniy Dodis, Paul Rösler
ePrint Report
Post-Compromise Security, or PCS, refers to the ability of a given protocol to recover, by means of normal protocol operations, from the exposure of local states of its (otherwise honest) participants. While PCS in the two-party setting has attracted a lot of attention recently, the problem of achieving PCS in the group setting, called group ratcheting here, is much less understood. On the one hand, one can achieve excellent security by simply executing, in parallel, a two-party ratcheting protocol (e.g., Signal) for each pair of members in a group. However, this incurs $\mathcal{O}(n)$ communication overhead for every message sent, where $n$ is the group size. On the other hand, several related protocols were recently developed in the context of the IETF Messaging Layer Security (MLS) effort that improve the communication overhead per message to $\mathcal{O}(\log n)$. However, this reduction of communication overhead involves a great restriction: group members are not allowed to send and recover from exposures concurrently, so reaching PCS is delayed by up to $n$ communication time slots (potentially even more).
In this work we formally study the trade-off between PCS, concurrency, and communication overhead in the context of group ratcheting. Since our main result is a lower bound, we define the cleanest and most restrictive setting where the tension already occurs: static groups equipped with a synchronous (and authenticated) broadcast channel, where up to $t$ arbitrary parties can concurrently send messages in any given round. Already in this setting, we show in a symbolic execution model that PCS requires $\Omega(t)$ communication overhead per message. Our symbolic model permits as building blocks black-box use of (even "dual") PRFs, (even key-updatable) PKE (with functionality and security of HIBE), and broadcast encryption, capturing all tools used in previous constructions, but prohibiting the use of exotic primitives.
To complement our result, we also prove an almost matching upper bound of $\mathcal{O}(t\cdot(1+\log(n/t)))$, which smoothly increases from $\mathcal{O}(\log n)$ with no concurrency, to $\mathcal{O}(n)$ with unbounded concurrency, matching the previously known protocols.
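The interpolation between the two regimes can be checked numerically (a sketch of the bound's shape only; the base-2 logarithm and the unit constant are illustrative choices, not the paper's exact constants):

```python
import math

# Shape of the upper bound O(t * (1 + log(n/t))) from the abstract.
def overhead(t, n):
    return t * (1 + math.log2(n / t))

n = 1024
# No concurrency (t = 1): overhead ~ log n.
# Unbounded concurrency (t = n): overhead ~ n.
print(overhead(1, n), overhead(n, n))
```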
Bar Alon, Ran Cohen, Eran Omri, Tom Suad
ePrint Report
Fully secure multiparty computation (MPC) allows a set of parties to compute some function of their inputs, while guaranteeing correctness, privacy, fairness, and output delivery. Understanding the necessary and sufficient assumptions that allow for fully secure MPC is an important goal. Cleve (STOC'86) showed that full security cannot be obtained in general without an honest majority. Conversely, by Rabin and Ben-Or (FOCS'89), assuming a broadcast channel and an honest majority enables a fully secure computation of any function.
Our goal is to characterize the set of functionalities that can be computed with full security, assuming an honest majority, but no broadcast. This question was fully answered by Cohen et al. (TCC'16) -- for the restricted class of symmetric functionalities (where all parties receive the same output). Instructively, their results crucially rely on agreement and do not carry over to general asymmetric functionalities. In this work, we focus on the case of three-party asymmetric functionalities, providing a variety of necessary and sufficient conditions to enable fully secure computation.
An interesting use-case of our results is server-aided computation, where an untrusted server helps two parties to carry out their computation. We show that without a broadcast assumption, the resource of an external non-colluding server provides no additional power. Namely, a functionality can be computed with the help of the server if and only if it can be computed without it. For fair coin tossing, we further show that the optimal bias for three-party (server-aided) $r$-round protocols remains $\Theta(1/r)$ (as in the two-party setting).
Sigurd Eskeland
ePrint Report
Common to the overwhelming majority of privacy-preserving greater-than integer comparison schemes is that cryptographic computations are conducted in a bitwise manner. To ensure secrecy, each bit must be encoded in such a way that nothing is revealed to the opposite party. The most noted disadvantage is that the computational and communication cost of bitwise encoding is at best linear in the number of bits. Also, many proposed schemes have complex designs that may be difficult to implement. Carlton et al. proposed in 2018 an interesting scheme that avoids bitwise decomposition and works on whole integers; a variant was proposed by Bourse et al. in 2019. Although the stated adversarial model of these schemes is honest-but-curious users, we show that they are vulnerable to malicious users. Inspired by the two mentioned papers, we propose a novel comparison scheme that is resistant to malicious users.
Zvika Brakerski, Sanjam Garg, Rotem Tsabary
ePrint Report
We present a novel tree-based technique that can convert any designated-prover NIZK proof system (DP-NIZK) which maintains zero-knowledge only for a single statement into one that allows proving an unlimited number of statements in ZK, while keeping all parameters succinct. Our transformation requires leveled fully-homomorphic encryption. We note that single-statement DP-NIZK can be constructed from any one-way function.
We also observe a two-way derivation between DP-NIZK and attribute-based signatures (ABS), and as a result derive new constructions of ABS and homomorphic signatures (HS).
Our construction improves upon the prior construction of lattice-based DP-NIZK by Kim and Wu (Crypto 2018) since we only require leveled FHE as opposed to HS (which also translates to improved LWE parameters when instantiated). Alternatively, the recent construction of NIZK without preprocessing from either circular-secure FHE (Canetti et al., STOC 2019) or polynomial Learning with Errors (Peikert and Shiehian, Crypto 2019) could be used to obtain a similar final statement. Nevertheless, we note that our statement is formally incomparable to these works (since leveled FHE is not known to imply circular secure FHE or the hardness of LWE). We view this as evidence for the potential in our technique, which we hope can find additional applications in future works.
Inbar Kaslasi, Guy N. Rothblum, Ron D. Rothblum, Adam Sealfon, Prashant Nalini Vasudevan
ePrint Report
A statistical zero-knowledge proof (SZK) for a problem $\Pi$ enables a computationally unbounded prover to convince a polynomial-time verifier that $x \in \Pi$ without revealing any additional information about $x$ to the verifier, in a strong information-theoretic sense.
Suppose, however, that the prover wishes to convince the verifier that $k$ separate inputs $x_1,\dots,x_k$ all belong to $\Pi$ (without revealing anything else). A naive way of doing so is to simply run the SZK protocol separately for each input. In this work we ask whether one can do better -- that is, is efficient batch verification possible for SZK?
We give a partial positive answer to this question by constructing a batch verification protocol for a natural and important subclass of SZK -- all problems $\Pi$ that have a non-interactive SZK protocol (in the common random string model). More specifically, we show that, for every such problem $\Pi$, there exists an honest-verifier SZK protocol for batch verification of $k$ instances, with communication complexity $poly(n) + k \cdot poly(\log{n},\log{k})$, where $poly$ refers to a fixed polynomial that depends only on $\Pi$ (and not on $k$). This result should be contrasted with the naive solution, which has communication complexity $k \cdot poly(n)$.
Our proof leverages a new NISZK-complete problem, called Approximate Injectivity, that we find to be of independent interest. The goal in this problem is to distinguish circuits that are nearly injective, from those that are non-injective on almost all inputs.
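The gap between the two communication bounds can be illustrated with toy numbers. The specific polynomials below are hypothetical placeholders; only the shape of the bounds, $k \cdot poly(n)$ versus $poly(n) + k \cdot polylog$, comes from the abstract:

```python
import math

# Hypothetical instantiations of the two communication bounds.
def naive_comm(k, n):
    return k * n**2                              # k independent runs, poly(n) each

def batch_comm(k, n):
    return n**2 + k * math.log2(n) * math.log2(k)  # poly(n) + k * polylog(n, k)

# For large k the batch protocol's cost is dominated by the cheap
# per-instance polylog term rather than by poly(n).
print(naive_comm(2**20, 256), batch_comm(2**20, 256))
```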
Jorge Nakahara Jr
ePrint Report
Substitution boxes (S-boxes) based on the inversion mapping in even-characteristic finite fields are widely used components in the design of cryptographic primitives such as block ciphers (notably the AES cipher). This report focuses on the inversion mapping in finite fields GF(p^n) where p is a (small) odd prime and n is a (small) integer.
We compare the differential and linear profiles of S-boxes over odd- and even-characteristic fields, which also motivates the design and analysis of AES variants operating in fields of odd characteristic. Even for GF(2^n), the study of S-boxes which are APN permutations (odd-valued n) already shows resistance to differential and linear cryptanalysis after three rounds.
Bor de Kock, Kristian Gjøsteen, Mattia Veroni
ePrint Report
We exploit the Diffie-Hellman-like structure of CSIDH to build a quantum-resistant authenticated key-exchange algorithm. Our security proof has optimal tightness, which means that the protocol is efficient even when instantiated with theoretically sound security parameters. Compared to previous isogeny-based authenticated key-exchange protocols, our scheme is extremely simple, its security relies only on the underlying CSIDH problem, and it has optimal communication complexity for CSIDH-based protocols.
Our security proof relies heavily on the rerandomizability of CSIDH-like problems and is carried out in the random oracle model (ROM).
Min Yang, Qingshu Meng, An Wang, Xin Liu
ePrint Report
For template attacks, it is ideal if templates can be built for each (data, key) pair; however, this requires many power traces and much computation. In this paper, we first study the properties of the UMJD (unisource multivariate joint distribution) and then present a template attack based on it. For power traces with much noise, experiments show that its attack effect is much better than that of the CPA (correlation power analysis) based template attacks and that of the SOST (sum of squared pairwise t-differences) based template attacks. Secondly, the problem of building a template for each (data, key) pair can be reduced to building templates for an MMJD (multisource multivariate joint distribution), and an MMJD can be divided into several UMJDs. Based on this analysis, we give a template attack that requires neither large amounts of computation nor a large number of power traces for profiling, yet whose attack effect is equivalent to that of the template attack that builds a template for each (data, key) pair. Third, from the process of the UMJD-based template attacks, using the POI (points of interest) of all variables together as the POI of the template attack extends the existing conclusion on the optimal number of POI. Lastly, the UMJD can also be applied in the SOST method to obtain better-quality POI.
Guoqiang Deng, Yongzhuang Wei, Xuefeng Duan, Enes Pasalic, Samir Hodzic
ePrint Report
With the advances of Internet-of-Things (IoT) applications in smart cities and the pervasiveness of network devices with limited resources, lightweight block ciphers have achieved rapid development recently.
Due to their relatively simple key schedule, nonlinear invariant attacks have been successfully applied to several families of lightweight block ciphers.
This attack relies on the existence of a nonlinear invariant $g:\mathbb{F}_2^n \rightarrow \mathbb{F}_2$ for the round function $F_k$ so that $g(x) + g(F_k(x))$ is constant for any input value $x$.
Whereas invariants of the entire $S$-box layer have been studied in terms of the corresponding cycle structure [TLS16,WRP20] (assuming the use of bijective S-boxes), a similar analysis for the linear layer has not been performed yet.
In this article, we provide a theoretical analysis for specifying the minimal length of cycles for commonly used linear permutations (implementing linear layers) in lightweight block ciphers. Namely, using a suitable matrix representation, we exactly specify the minimal cycle lengths for those (efficiently implemented) linear layers that employ ShiftRows, Rotational-XOR and circular Boolean matrix operations which can be found in many well-known families of block ciphers. These results are practically useful for the purpose of finding nonlinear invariants of the entire encryption rounds since these can be specified using the intersection of cycles corresponding to the linear and S-box layer. We also apply our theoretical analysis practically and specify minimal cycle lengths of linear layers for certain families of block ciphers including some NIST candidates.
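The kind of quantity analyzed above can be computed directly for a small example. The sketch below finds the order (minimal cycle length) of a word-oriented linear-layer permutation; the AES-style ShiftRows mapping is just a familiar illustration, not one of the paper's case studies:

```python
# Minimal cycle length (order) of a permutation: the smallest k >= 1
# with p^k = identity, found by repeated composition.

def perm_order(p):
    n, cur, k = len(p), p[:], 1
    while cur != list(range(n)):
        cur = [p[i] for i in cur]  # compose with p once more
        k += 1
    return k

# 4x4 state, row r rotated by r positions (ShiftRows-like).
shift_rows = [4 * r + ((c + r) % 4) for r in range(4) for c in range(4)]
print(perm_order(shift_rows))  # lcm of the row-rotation orders
```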
Pavel Hubáček, Chethan Kamath, Karel Král, Veronika Slívová
ePrint Report
The complexity class TFNP consists of all NP search problems that are total in the sense that a solution is guaranteed to exist for all instances. Over the years, this class has proved to illuminate surprising connections among several diverse subfields of mathematics like combinatorics, computational topology, and algorithmic game theory. More recently, we are starting to better understand its interplay with cryptography.
We know that certain cryptographic primitives (e.g. one-way permutations, collision-resistant hash functions, or indistinguishability obfuscation) imply average-case hardness in TFNP and its important subclasses. However, its relationship with the most basic cryptographic primitive, i.e., one-way functions (OWFs), still remains unresolved. Under an additional complexity theoretic assumption, OWFs imply hardness in TFNP (Hubacek, Naor, and Yogev, ITCS 2017). It is also known that average-case hardness in most structured subclasses of TFNP does not imply any form of cryptographic hardness in a black-box way (Rosen, Segev, and Shahaf, TCC 2017), and, thus, one-way functions might be sufficient. Specifically, no negative result which would rule out basing average-case hardness in TFNP solely on OWFs is currently known. In this work, we further explore the interplay between TFNP and OWFs and give the first negative results.
As our main result, we show that there cannot exist constructions of average-case (and, in fact, even worst-case) hardness in TFNP from OWFs with a certain type of simple black-box security reductions. The class of reductions we rule out is, however, rich enough to capture many of the currently known cryptographic hardness results for TFNP. Our results are established using the framework of black-box separations (Impagliazzo and Rudich, STOC 1989) and involve a novel application of the reconstruction paradigm (Gennaro and Trevisan, FOCS 2000).
Shashank Agrawal, Srinivasan Raghuraman
ePrint Report
As blockchains grow in size, validating new transactions becomes more and more resource intensive. To deal with this, there is a need to discover compact encodings of the (effective) state of a blockchain -- an encoding that allows for efficient proofs of membership and updates. In the case of account-based cryptocurrencies, the state can be represented by a key-value map, where keys are the account addresses and values consist of account balance, nonce, etc.
We propose a new commitment scheme for key-value maps whose size does not grow with the number of keys, yet proofs of membership are of constant-size. In fact, both the encoding and the proofs consist of just two and three group elements respectively (in groups of unknown order like class groups). Verifying and updating proofs involves just a few group exponentiations. Additive updates to key values enjoy the same level of efficiency too.
Key-value commitments can be used to build dynamic accumulators and vector commitments, which find applications in group signatures, anonymous credentials, verifiable databases, interactive oracle proofs, etc. Using our new key-value commitment, we provide the most efficient constructions of (sub)vector commitments to date.
Nir Bitansky, Arka Rai Choudhuri
ePrint Report
Randomness is typically thought to be essential for zero knowledge protocols. Following this intuition, Goldreich and Oren (Journal of Cryptology 94) proved that auxiliary-input zero knowledge cannot be achieved with a deterministic prover. On the other hand, positive results are only known in the honest-verifier setting, or when the prover is given at least a restricted source of entropy.
We prove that once the verifier's auxiliary input is removed (or just bounded), deterministic-prover zero knowledge becomes feasible:
- Assuming non-interactive witness-indistinguishable proofs and subexponential indistinguishability obfuscation and one-way functions, we construct deterministic-prover zero-knowledge arguments for $\mathsf{NP}\cap \mathsf{coNP}$ against verifiers with bounded non-uniform auxiliary input.
- Assuming also keyless hash functions that are collision-resistant against bounded-auxiliary-input quasipolynomial-time attackers, we construct similar arguments for all of $\mathsf{NP}$.
Together with the result of Goldreich and Oren, this characterizes when deterministic-prover zero knowledge is feasible. We also demonstrate the necessity of strong assumptions, by showing that deterministic prover zero knowledge arguments for a given language imply witness encryption for that language. We further prove that such arguments can always be collapsed to two messages and be made laconic. These implications rely on a more general connection with the notion of predictable arguments by Faonio, Nielsen, and Venturi (PKC 17).
Rintaro Fujita, Takanori Isobe, Kazuhiko Minematsu
ePrint Report
We present malleability attacks against encrypted binary executable files when they are encrypted with the CBC mode of operation. While CBC malleability is classic and has been used to attack various real-world applications, the risk of encrypting binary executables via CBC mode on common OSs has not been widely recognized. We show that, with a certain non-negligible probability, it is possible to manipulate CBC-encrypted binary files so that the decryption result allows arbitrary code execution (ACE), one of the most powerful exploits, even without knowledge of the plaintext binary. More specifically, for both 32- and 64-bit Linux and Windows, we performed a thorough analysis of the binary executable formats to evaluate the practical impact of ACE on CBC encryption, and showed that the attack is possible if the adversary is able to correctly guess 13 to 25 bits of the address at which to inject code. In principle, our attack affects a wide range of storage/file encryption systems that adopt CBC encryption. In addition, manual file encryption using the OpenSSL API (AES-256-CBC), which is presumed to be frequently used in practice for file encryption, is affected. We provide proof-of-concept implementations for Linux and Windows. We have communicated our findings to the appropriate institution and informed vendors as an act of responsible disclosure.
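The CBC malleability that such attacks exploit can be sketched in a few lines: since decryption computes P[i] = D(C[i]) XOR C[i-1], XORing a delta into ciphertext block C[i-1] XORs exactly the same delta into plaintext block P[i], while block P[i-1] decrypts to something unrelated. The toy bytewise-substitution "block cipher" below is a stand-in for AES, purely for illustration (note that with a real block cipher the tampered block would be garbled in its entirety, not just in the touched bytes); the instruction-like plaintext is likewise a hypothetical placeholder, not a real attack payload.

```python
import random

BLK = 16  # block size in bytes

# Toy keyed byte substitution standing in for a real block cipher.
def make_sbox(key: int):
    sbox = list(range(256))
    random.Random(key).shuffle(sbox)
    inv = [0] * 256
    for i, v in enumerate(sbox):
        inv[v] = i
    return sbox, inv

SBOX, INV = make_sbox(0xC0FFEE)

def enc_block(b): return bytes(SBOX[x] for x in b)
def dec_block(b): return bytes(INV[x] for x in b)
def xor(a, b):    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(iv, pt):
    prev, out = iv, []
    for i in range(0, len(pt), BLK):
        c = enc_block(xor(pt[i:i + BLK], prev))
        out.append(c)
        prev = c
    return b"".join(out)

def cbc_decrypt(iv, ct):
    prev, out = iv, []
    for i in range(0, len(ct), BLK):
        c = ct[i:i + BLK]
        out.append(xor(dec_block(c), prev))
        prev = c
    return b"".join(out)

iv = bytes(BLK)
pt = b"mov eax, 1......xor ebx, ebx...."  # two 16-byte blocks (placeholder)
ct = cbc_encrypt(iv, pt)

# Malleability: XOR a delta into ciphertext block 0; the SAME delta appears
# in decrypted plaintext block 1, while block 0 no longer decrypts correctly.
delta = bytes([0x01] + [0] * (BLK - 1))
tampered = xor(ct[:BLK], delta) + ct[BLK:]
pt2 = cbc_decrypt(iv, tampered)

assert pt2[BLK:] == xor(pt[BLK:], delta)  # controlled flip in block 1
assert pt2[:BLK] != pt[:BLK]              # block 0 is damaged
```

This is why an attacker who can guess a modest number of address bits may steer a decrypted executable toward injected code: the ciphertext edit translates into a fully controlled XOR difference in the following plaintext block, with no key material required.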