International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed, Twitter, Weibo, and Facebook

04 June 2019

Nico Döttling, Sanjam Garg, Giulio Malavolta, Prashant Nalini Vasudevan
ePrint Report
A Verifiable Delay Function (VDF) is a function that takes at least $T$ sequential steps to evaluate and produces a unique output that can be verified efficiently, in time essentially independent of $T$. In this work we study tight VDFs, where the function can be evaluated in time not much more than the sequentiality bound $T$.

On the negative side, we show the impossibility of a black-box construction from random oracles of a VDF that can be evaluated in time $T + O(T^\delta)$ for any constant $\delta < 1$. On the positive side, we show that any VDF with an inefficient prover (running in time $cT$ for some constant $c$) that has a natural self-composability property can be generically transformed into a VDF with a tight prover efficiency of $T+O(1)$. Our compiler introduces only a logarithmic factor overhead in the proof size and in the number of parallel threads needed by the prover. As a corollary, we obtain a simple construction of a tight VDF from any succinct non-interactive argument combined with repeated hashing. This is in contrast with prior generic constructions (Boneh et al., CRYPTO 2018) that required the existence of incremental verifiable computation, which entails stronger assumptions and complex machinery.
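The repeated-hashing component of the corollary can be illustrated with a minimal Python sketch (the parameters are hypothetical): evaluation is a chain of $T$ hash invocations, and the self-composability property the compiler exploits corresponds to splitting the chain at any intermediate point.

```python
import hashlib

def sequential_hash(seed: bytes, T: int) -> bytes:
    """Iterate SHA-256 T times; each step consumes the previous
    output, so evaluation takes T sequential hash calls."""
    h = seed
    for _ in range(T):
        h = hashlib.sha256(h).digest()
    return h

# Self-composability: evaluating for T1 steps and continuing for T2
# steps equals a single evaluation for T1 + T2 steps.
mid = sequential_hash(b"genesis", 500)
assert sequential_hash(mid, 500) == sequential_hash(b"genesis", 1000)
```

A tight VDF in the paper's sense would additionally attach a succinct non-interactive argument certifying the chain, which this sketch omits.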
Jun Furukawa, Yehuda Lindell
ePrint Report
Secure multiparty computation (MPC) enables a set of parties to securely carry out a joint computation of their private inputs without revealing anything but the output. Protocols for semi-honest adversaries guarantee security as long as the corrupted parties run the specified protocol and ensure that nothing is leaked in the transcript. In contrast, protocols for malicious adversaries guarantee security in the presence of arbitrary adversaries who can run any attack strategy. Security for malicious adversaries is typically what is needed in practice (and is always preferred), but comes at a significant cost.

In this paper, we present the first protocol for a two-thirds honest majority that achieves security in the presence of malicious adversaries at essentially the exact same cost as the best known protocols for semi-honest adversaries. Our construction is not a general transformation and thus it is possible that better semi-honest protocols will be constructed which do not support our transformation. Nevertheless, for the current state-of-the-art for many parties (based on Shamir sharing), our protocol invokes the best semi-honest multiplication protocol exactly once per multiplication gate (plus some additional local computation that is negligible to the overall cost). Concretely, the best version of our protocol requires each party to send an average of just $2\frac23$ elements per multiplication gate (when the number of multiplication gates is at least the number of parties). This is four times faster than the previous-best protocol of Barak et al. (ACM CCS 2018) for small fields, and twice as fast as the previous-best protocol of Chida et al. (CRYPTO 2018) for large fields.
Leonard Assouline, Tianren Liu
ePrint Report
Private Simultaneous Messages (PSM) is a minimal model for information-theoretic non-interactive multi-party computation. In the 2-party case, Beimel et al. showed that every function $f:[N]\times[N]\to\{0,1\}$ admits a 2-party PSM with communication complexity $O(\sqrt N)$. Recently, Applebaum et al. studied the multi-party case and showed that every function $f:[N]^3\to\{0,1\}$ admits a 3-party PSM with communication complexity $O(N)$.

We provide new upper bounds for the general $k$-party case. Our upper bounds match previous results when $k=2$ or $3$, and improve the communication complexity for infinitely many $k>3$. Our technique also implies 2-party PSM protocols with unbalanced communication complexity. Concretely,

- For infinitely many $k$ --- in particular, including all $k \leq 19$ --- we construct $k$-party PSM protocols for an arbitrary function $f:[N]^k\to\{0,1\}$, whose communication complexity is $O_k(N^{\frac{k-1}{2}})$. We also provide evidence suggesting the existence of such protocols for all $k$.

- For many $0<\eta<1$ --- including all rational $\eta = d/k$ such that $k\leq 12$ --- we construct 2-party PSM protocols for an arbitrary function $f:[N]\times[N]\to\{0,1\}$, whose communication complexities are $O_\eta(N^\eta)$ and $O_\eta(N^{1-\eta})$ respectively. We also provide evidence suggesting the existence of such protocols for all rational $\eta$.
Seetal Potluri, Akash Kumar, Aydin Aysu
ePrint Report
SAT-attack is known to successfully decrypt a functionally correct key of a locked combinational circuit. It is possible to extend the SAT-attack to sequential circuits through the scan-chain by selectively initializing the combinational logic and analyzing the responses. Recently, sequential locking was proposed as a defense to SAT-attack, which works by locking the scan-chains of flip-flops. ScanSAT [1], however, showed that it is possible to convert the sequentially locked instance to a locked combinational instance, and thereby decrypt the entire sequential key using SAT-attack. In this paper, we propose SeqL, a secure sequential lock defense against ScanSAT. SeqL provides functional isolation, and also encrypts selective flip-flop inputs, thereby mitigating ScanSAT and other related SAT-attacks. We conduct a formal study of the sequential locking problem and show how to automate our proposed defense for any given sequential circuit. We show that SeqL hides functionally correct keys from the attacker, thereby increasing the likelihood of functional output corruption. When tested on sequential benchmarks (ITC’99) and pipelined combinational benchmarks (ISCAS’85, MCNC), SeqL achieved 100% resilience to ScanSAT.
Daniel J. Bernstein
ePrint Report
There are many proposed lattice-based encryption systems. How do these systems compare in the security that they provide against known attacks, under various limits on communication volume? There are several reasons to be skeptical of graphs that claim to answer this question. Part of the problem is with the underlying data points, and part of the problem is with how the data points are converted into graphs.
Brandon Goodell, Sarang Noether, RandomRun
ePrint Report
We describe an efficient linkable ring signature scheme, compact linkable spontaneous anonymous group (CLSAG) signatures, for use in confidential transactions. Compared to the existing signature scheme used in Monero, CLSAG signatures are both smaller and more efficient to generate and verify for ring sizes of interest. We generalize the construction and show how it can be used to produce signatures with coins of different type in the same transaction.
Fabrice Benhamouda, Akshay Degwekar, Yuval Ishai, Tal Rabin
ePrint Report
We consider the following basic question: to what extent are standard secret sharing schemes and protocols for secure multiparty computation that build on them resilient to leakage? We focus on a simple local leakage model, where the adversary can apply an arbitrary function of a bounded output length to the secret state of each party, but cannot otherwise learn joint information about the states.

We show that additive secret sharing schemes and high-threshold instances of Shamir’s secret sharing scheme are secure under local leakage attacks when the underlying field is of a large prime order and the number of parties is sufficiently large. This should be contrasted with the fact that any linear secret sharing scheme over a small characteristic field is clearly insecure under local leakage attacks, regardless of the number of parties. Our results are obtained via tools from Fourier analysis and additive combinatorics.
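The local leakage model can be made concrete with a small Python sketch (the field size and leakage function are illustrative choices): each party's share is attacked independently by a bounded-output function, here a single bit.

```python
import secrets

P = 2**61 - 1  # large prime field, as the positive results require

def additive_share(secret: int, n: int) -> list:
    """Split `secret` into n additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def local_leakage(shares, f):
    """Local leakage: apply a bounded-output function f to each
    share independently; no joint function of the shares is allowed."""
    return [f(share) for share in shares]

shares = additive_share(42, 10)
assert sum(shares) % P == 42
leaked = local_leakage(shares, lambda s: s & 1)  # one bit per party
```

Note that over a characteristic-2 field the XOR of the low bits of the shares would reveal the low bit of the secret, matching the paper's observation that small-characteristic fields are clearly insecure; over a large prime modulus the carries break this relation.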

We present two types of applications of the above results and techniques. As a positive application, we show that the “GMW protocol” for honest-but-curious parties, when implemented using shared products of random field elements (so-called “Beaver Triples”), is resilient in the local leakage model for sufficiently many parties and over certain fields. This holds even when the adversary has full access to a constant fraction of the views. As a negative application, we rule out multiparty variants of the share conversion scheme used in the 2-party homomorphic secret sharing scheme of Boyle et al. (Crypto 2016).
Navid Ghaedi Bardeh, Sondre Rønjom
ePrint Report
In this paper we present exchange-equivalence attacks, a cryptanalytic technique suited to SPN-like block cipher designs. Our new technique results in a secret-key chosen-plaintext distinguisher for 6-round AES. The complexity of the distinguisher is about $2^{88.2}$ in terms of data, memory and computational complexity. The distinguishing attack for AES reduced to 6 rounds is a straightforward extension of an exchange attack for 5-round AES that requires about $2^{30}$ in terms of chosen plaintexts and computation. This is also a new record for AES reduced to 5 rounds. The main result of this paper is that AES up to at least 6 rounds is biased when restricted to exchange-invariant sets of plaintexts.
Muhammad Ishaq, Ana Milanova, Vassilis Zikas
ePrint Report
Multi-party computation (MPC) protocols have been extensively optimized in an effort to bring this technology to practice, which has already started bearing fruit. The choice of which MPC protocol to use depends on the computation we are trying to perform. Protocol mixing is an effective approach to optimize performance that is black-box with respect to the MPC protocols. Despite considerable progress in recent years, however, existing works are heuristic and either give no guarantee or require an exponential (brute-force) search to find the optimal assignment, a problem which was conjectured to be NP-hard.

We provide a theoretically founded approach to optimal (MPC) protocol assignment, i.e., optimal mixing, and prove that under mild and natural assumptions, the problem is tractable both in theory and in practice for computing best two-out-of-three combinations. Concretely, for the case of two protocols, we utilize program analysis techniques---which we tailor to MPC---to define a new integer program, which we term the ``Optimal Protocol Assignment'' (in short, OPA) problem, whose solution is the optimal (mixed) protocol assignment for these two protocols. Most importantly, we prove that the solution to the linear program corresponding to the relaxation of OPA is integral, and hence is also a solution to OPA. Since linear programming can be efficiently solved, this yields the first efficient protocol mixer. We showcase the quality of our OPA solver by applying it to standard benchmarks from the mixing literature. Our OPA solver can be applied to any two-out-of-three protocol combination to obtain a best two-out-of-three protocol assignment.
Nico Döttling, Russell W. F. Lai, Giulio Malavolta
ePrint Report
A proof of sequential work allows a prover to convince a verifier that a certain number of sequential steps have been computed. In this work we introduce the notion of incremental proofs of sequential work, where a prover can carry on the computation done by the previous prover incrementally, without affecting the resources of the individual provers or the size of the proofs.

To date, the most efficient instance of proofs of sequential work [Cohen and Pietrzak, Eurocrypt 2018] for $N$ steps requires the prover to have $\sqrt{N}$ memory and to run for $N + \sqrt{N}$ steps. Using incremental proofs of sequential work, we can bring the prover's storage complexity down to $\log N$ and its running time to $N$.

We propose two different constructions of incremental proofs of sequential work: Our first scheme requires a single processor and introduces a poly-logarithmic factor in the proof size when compared with the proposals of Cohen and Pietrzak. Our second scheme assumes $\log N$ parallel processors but brings down the overhead of the proof size to a factor of $9$. Both schemes are simple to implement and only rely on hash functions (modelled as random oracles).
Donghui Ding, Xin Jiang, Jiaping Wang, Hao Wang, Xiaobing Zhang, Yi Sun
ePrint Report
Current blockchains are restricted by low throughput. To address this problem, we propose Txilm, a protocol that compresses the representation of transactions in each block and thus saves bandwidth in the blockchain network. In this protocol, a block carries short hashes of TXIDs instead of complete transactions. Combined with a transaction list sorted by TXIDs, Txilm achieves an 80-fold reduction in data size compared with the original blockchains. We also evaluate the probability of hash collisions, and provide methods of resolving such collisions. Finally, we design strategies to protect against possible attacks on Txilm.
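The space saving and the collision risk can be sketched as follows (Python; the 32-bit hash width and block size are hypothetical choices, not the paper's parameters):

```python
import hashlib

def short_hash(txid: bytes, bits: int = 32) -> int:
    """Truncate a hash of the TXID to `bits` bits."""
    digest = hashlib.sha256(txid).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def collision_probability(n_tx: int, bits: int) -> float:
    """Birthday bound: chance that some pair among the n_tx short
    hashes in one block collides, roughly n^2 / 2^(bits+1)."""
    return n_tx * n_tx / 2 ** (bits + 1)

# Replacing a multi-hundred-byte transaction with a few-byte short
# hash is the source of the claimed data-size reduction.
assert 0 <= short_hash(b"some-txid") < 2**32
p = collision_probability(3000, 32)  # small but nonzero per block
```

The nonzero per-block collision probability is why the protocol also needs explicit collision-resolution methods, as the abstract notes.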
Xavier Bultel, Pascal Lafourcade, Russell W. F. Lai, Giulio Malavolta, Dominique Schröder, Sri Aravinda Krishnan Thyagarajan
ePrint Report
Sanitizable signatures allow designated parties (the sanitizers) to apply arbitrary modifications to some restricted parts of signed messages. A secure scheme should not only be unforgeable, but also protect privacy and hold both the signer and the sanitizer accountable. Two important security properties that are seemingly difficult to achieve simultaneously and efficiently are invisibility and unlinkability. While invisibility ensures that the admissible modifications are hidden from external parties, unlinkability says that sanitized signatures cannot be linked to their sources. Achieving both properties simultaneously is crucial for applications where sensitive personal data is signed with respect to data-dependent admissible modifications. The existence of an efficient construction achieving both properties was recently posed as an open question by Camenisch et al. (PKC’17). In this work, we propose a solution to this problem with a two-step construction. First, we construct (non-accountable) invisible and unlinkable sanitizable signatures from signatures on equivalence classes and other basic primitives. Second, we put forth a generic transformation using verifiable ring signatures to turn any non-accountable sanitizable signature into an accountable one while preserving all other properties. When instantiating in the generic group and random oracle model, the efficiency of our construction is comparable to that of prior constructions, while providing stronger security guarantees.
Tatiana Bradley, Stanislaw Jarecki, Jiayu Xu
ePrint Report
Password-Authenticated Key Exchange (PAKE) protocols allow two parties that share a password to establish a shared key in a way that is immune to offline attacks. Asymmetric PAKE (aPAKE) [21] adapts this notion to the common client-server setting, where the server stores a one-way hash of the password instead of the password itself, and server compromise allows the adversary to recover the password only via the (inevitable) offline dictionary attack. Most aPAKE protocols, however, allow an attacker to pre-compute a dictionary of hashed passwords, thus instantly learning the password on server compromise. Recently, Jarecki, Krawczyk, and Xu formalized a Universally Composable strong aPAKE (saPAKE) [24], which requires the password hash to be salted so that the dictionary attack can only start after the server compromise leaks the salt and the salted hash. The UC saPAKE protocol shown in [24], called OPAQUE, uses 3 protocol flows, 3-4 exponentiations per party, and relies on the One-More Diffie-Hellman assumption in ROM.
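The pre-computation gap between aPAKE and saPAKE can be sketched in Python (the password-file format is illustrative): an unsalted hash falls to a dictionary built before compromise, while a salted hash forces the dictionary attack to start only after the salt leaks.

```python
import hashlib
import secrets

def unsalted_record(pw: str) -> bytes:
    """aPAKE-style record: a plain one-way hash of the password."""
    return hashlib.sha256(pw.encode()).digest()

def salted_record(pw: str):
    """saPAKE-style record: a fresh random salt plus a salted hash."""
    salt = secrets.token_bytes(16)
    return salt, hashlib.sha256(salt + pw.encode()).digest()

# Against unsalted records, the dictionary is built *before* server
# compromise, so the password is recovered instantly on compromise.
table = {unsalted_record(p): p for p in ["hunter2", "letmein"]}
assert table[unsalted_record("hunter2")] == "hunter2"

# With per-password salts, per-guess hashing can only begin once the
# salt leaks; this is the "strong" guarantee that saPAKE formalizes.
salt, record = salted_record("hunter2")
assert record == hashlib.sha256(salt + b"hunter2").digest()
```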

We propose an alternative UC saPAKE construction based on a novel use of the encryption+SPHF paradigm for UC PAKE design [27, 20]. Compared to OPAQUE, our protocol uses only 2 flows, has comparable costs, avoids hashing onto a group, and relies on different assumptions, namely Decisional Diffie-Hellman (DDH), Strong Diffie-Hellman (SDH), and an assumption that the Boneh-Boyen function is a Salted Tight One-Way Function (STOWF). We formalize a UC model for STOWF and analyze the Boneh-Boyen function as UC STOWF in the generic group model and ROM.

Our saPAKE protocol employs a new form of Conditional Key Encapsulation Mechanism (CKEM), a generalization of SPHF, which we call an implicit-statement CKEM. This strengthening of SPHF allows for a UC (sa)PAKE design where only the client commits to its password, and only the server performs an SPHF, compared to the standard UC PAKE design paradigm where the encrypt+SPHF subroutine is used symmetrically by both parties.
Vipul Goyal, Yanyi Liu, Yifan Song
ePrint Report
We study the communication complexity of unconditionally secure MPC with guaranteed output delivery over point-to-point channels for corruption threshold $t < n/3$. We ask the question: “is it possible to construct MPC in this setting s.t. the communication complexity per multiplication gate is linear in the number of parties?” While a number of works have focused on reducing the communication complexity in this setting, the answer to the above question has remained elusive for over a decade. We resolve the above question in the affirmative by providing an MPC with communication complexity $O(Cn\kappa + n^3\kappa)$ where $\kappa$ is the size of an element in the field, $C$ is the size of the (arithmetic) circuit, and, $n$ is the number of parties. This represents a strict improvement over the previously best known communication complexity of $O(Cn\kappa+D_Mn^2\kappa+ n^3\kappa)$ where $D_M$ is the multiplicative depth of the circuit. To obtain this result, we introduce a novel technique called 4-consistent tuples of sharings which we believe to be of independent interest.
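The Shamir-sharing setting underlying this line of work can be sketched in Python (the field and parameters are illustrative): with $n$ parties and threshold $t < n/3$, any $t+1$ shares reconstruct the secret.

```python
import secrets

P = 2**61 - 1  # prime field; kappa corresponds to the element size

def shamir_share(secret: int, t: int, n: int):
    """Sample a degree-t polynomial f with f(0) = secret;
    party i receives the point (i, f(i))."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# t < n/3: e.g. n = 10 parties tolerate t = 3 corruptions.
shares = shamir_share(12345, 3, 10)
assert reconstruct(shares[:4]) == 12345  # any t+1 shares suffice
```

The paper's contribution concerns how many field elements must travel per multiplication gate on top of this sharing, which the sketch does not attempt to show.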
Shweta Agrawal, Monosij Maitra, Shota Yamada
ePrint Report
Waters [Crypto, 2012] provided the first attribute-based encryption (ABE) scheme for Deterministic Finite Automata (DFA) from a parametrized or ``q-type'' assumption over bilinear maps. Obtaining a construction from static assumptions has been elusive, despite much progress in the area of ABE.

In this work, we construct the first attribute based encryption scheme for DFA from static assumptions on pairings, namely, the DLIN assumption. Our scheme supports unbounded length inputs, unbounded length machines and unbounded key requests. In more detail, secret keys in our construction are associated with a DFA $M$ of unbounded length, ciphertexts are associated with a tuple $(x, b)$ where $x$ is a public attribute of unbounded length and $b$ is a secret message bit, and decryption recovers $b$ if and only if $M(x)=1$.
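The decryption condition $M(x)=1$ is ordinary DFA acceptance, as in this small Python sketch (the toy automaton is hypothetical):

```python
def run_dfa(transitions: dict, accept: set, start, x: str) -> bool:
    """Evaluate DFA M on attribute string x; in the ABE scheme,
    decryption recovers the message bit exactly when M accepts x."""
    state = start
    for symbol in x:
        state = transitions[(state, symbol)]
    return state in accept

# Toy DFA over {0,1}: accepts strings with an even number of 1s.
trans = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd",  ("odd", "1"): "even"}
assert run_dfa(trans, {"even"}, "even", "1100")      # M(x) = 1
assert not run_dfa(trans, {"even"}, "even", "100")   # M(x) = 0
```

The "unbounded length inputs" of the construction correspond to $x$ of arbitrary length here, and "unbounded length machines" to an arbitrary number of states.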

Our techniques are at least as interesting as our final result. We present a simple compiler that combines constructions of unbounded ABE schemes for monotone span programs (MSP) in a black box way to construct ABE for DFA. In more detail, we find a way to embed DFA computation into monotone span programs, which lets us compose existing constructions (modified suitably) of unbounded key-policy ABE (kpABE) and unbounded ciphertext-policy ABE (cpABE) for MSP in a simple and modular way to obtain key-policy ABE for DFA. Our construction uses its building blocks in a symmetric way -- by swapping the use of the underlying kpABE and cpABE, we also obtain a construction of ciphertext-policy ABE for DFA.

Our work extends techniques developed recently by Agrawal, Maitra and Yamada [Crypto 2019], which show how to construct ABE that support unbounded machines and unbounded inputs by combining ABE schemes that are bounded in one co-ordinate. At the heart of our work is the observation that unbounded, multi-use ABE for MSP already achieve most of what we need to build ABE for DFA.
Aydin Abadi, Michele Ciampi, Aggelos Kiayias, Vassilis Zikas
ePrint Report
Timestamping is an important cryptographic primitive with numerous applications. The availability of a decentralized blockchain such as that offered by the Bitcoin protocol offers new possibilities to realise timestamping services. Nevertheless, to our knowledge, there are no recent blockchain-based proposals that are formally proved in a composable setting.

In this work, we put forth the first formal treatment of timestamping cryptographic primitives in the UC framework with respect to a global clock; we refer to the corresponding primitives as timed to indicate this association. We propose timed versions of primitives commonly used for authenticating information, such as digital signatures, non-interactive zero-knowledge proofs, and signatures of knowledge, and show how those can be UC-securely implemented by a protocol that makes ideal (black-box) access to a global transaction ledger based on the ledger proposed by Badertscher et al. [CRYPTO 2017], which is UC realized by the Bitcoin backbone protocol [Eurocrypt 2015]. Our definitions introduce a fine-grained treatment of the different timestamping guarantees, namely security against postdating and backdating attacks; our results treat each of these cases separately and in combination, and shed light on the assumptions that they rely on. Our constructions rely on a relaxation of an ideal beacon functionality, which we implement UC-securely assuming the ledger functionality. Given the many potential uses of such a beacon in cryptographic protocols, this result may be of independent interest.

03 June 2019

Prabhanjan Ananth, Aayush Jain, Huijia Lin, Christian Matt, Amit Sahai
ePrint Report
The existence of secure indistinguishability obfuscators (iO) has far-reaching implications, significantly expanding the scope of problems amenable to cryptographic study. All known approaches to constructing iO rely on $d$-linear maps.

While secure bilinear maps are well established in cryptographic literature, the security of candidates for $d>2$ is poorly understood. We propose a new approach to constructing iO for general circuits. Unlike all previously known realizations of iO, we avoid the use of $d$-linear maps of degree $d \ge 3$.

At the heart of our approach is the assumption that a new weak pseudorandom object exists. We consider two related variants of these objects, which we call perturbation resilient generator ($\Delta$RG) and pseudo flawed-smudging generator (PFG), respectively. At a high level, both objects are polynomially expanding functions whose outputs partially hide (or smudge) small noise vectors when added to them. We further require that they are computable by a family of degree-3 polynomials over $\mathbb{Z}$. We show how they can be used to construct functional encryption schemes with weak security guarantees. Finally, we use novel amplification techniques to obtain full security.
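The smudging idea behind both objects can be illustrated numerically (Python; this is the generic smudging lemma, not the $\Delta$RG/PFG constructions themselves): adding large uniform noise statistically hides a small value.

```python
import random

def smudge(e: int, B: int) -> int:
    """Add uniform smudging noise from [0, B) to a small value e."""
    return e + random.randrange(B)

def statistical_distance(e1: int, e2: int, B: int) -> float:
    """Distance between smudge(e1, B) and smudge(e2, B): the two
    uniform supports overlap on B - |e1 - e2| points."""
    return min(abs(e1 - e2) / B, 1.0)

# Small noise values become indistinguishable under large smudging:
# here the distance is 4 / 2^20 = 2^-18.
assert statistical_distance(3, 7, 2**20) == 2**-18
```

The objects in the paper are weaker: their outputs only *partially* hide the noise vectors, which is why the amplification step is needed to obtain full security.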

As a result, we obtain iO for general circuits assuming:

- Subexponentially secure LWE

- Bilinear Maps

- $\textrm{poly}(\lambda)$-secure 3-block-local PRGs

- $\Delta$RGs or PFGs
Jonathan Bootle, Vadim Lyubashevsky, Gregor Seiler
ePrint Report
A key component of many lattice-based protocols is a zero-knowledge proof of knowledge of a vector $\vec{s}$ with small coefficients satisfying $A\vec{s}=\vec{u}\bmod\,q$. While there exist fairly efficient proofs for a relaxed version of this equation which prove the knowledge of $\vec{s}'$ and $c$ satisfying $A\vec{s}'=\vec{u}c$ where $\|\vec{s}'\|\gg\|\vec{s}\|$ and $c$ is some small element in the ring over which the proof is performed, the proofs for the exact version of the equation are considerably less practical. The best such proof technique is an adaptation of Stern's protocol (Crypto '93), for proving knowledge of nearby codewords, to larger moduli. The scheme is a $\Sigma$-protocol, each of whose iterations has soundness error $2/3$, and thus requires over $200$ repetitions to obtain soundness error of $2^{-128}$, which is the main culprit behind the large size of the proofs produced.

In this paper, we propose the first lattice-based proof system that significantly outperforms Stern-type proofs for proving knowledge of a short $\vec{s}$ satisfying $A\vec{s}=\vec{u}\bmod\,q$. Unlike Stern's proof, which is combinatorial in nature, our proof is more algebraic and uses various relaxed zero-knowledge proofs as sub-routines. The main savings in our proof system comes from the fact that each round has soundness error of $1/n$, where $n$ is the number of columns of $A$. For typical applications, $n$ is a few thousand, and therefore our proof needs to be repeated around $10$ times to achieve a soundness error of $2^{-128}$. For concrete parameters, it produces proofs that are around an order of magnitude smaller than those produced using Stern's approach.
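The repetition counts quoted above follow from simple arithmetic, sketched here in Python:

```python
import math

def repetitions(soundness_error: float, target_bits: int = 128) -> int:
    """Rounds k needed so that soundness_error**k <= 2**-target_bits."""
    return math.ceil(target_bits / -math.log2(soundness_error))

stern = repetitions(2 / 3)     # Stern-type: soundness 2/3 per round
ours = repetitions(1 / 4096)   # 1/n per round, taking n = 4096 columns
assert stern > 200 and ours <= 11
```

With soundness error $2/3$ per round this gives 219 repetitions, matching the "over 200" figure; with $1/n$ for $n$ in the few thousands it gives roughly 10-11, matching the claimed savings.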
Shahla Atapoor, Karim Baghery
ePrint Report
A Simulation Extractable (SE) zk-SNARK enables a prover to prove that she knows a witness for an instance such that the proof: (1) is succinct and can be verified very efficiently; (2) does not leak information about the witness; (3) is simulation-extractable: an adversary cannot produce a new valid proof unless it knows a witness, even if it has already seen an arbitrary number of simulated proofs. Non-malleable succinct proofs and very efficient verification make SE zk-SNARKs an elegant tool in various privacy-preserving applications such as cryptocurrencies and smart contracts. In Eurocrypt 2016, Groth proposed the most efficient pairing-based zk-SNARK in the CRS model, but its proof is vulnerable to malleability attacks. In this paper, we show that one can efficiently achieve simulation extractability in Groth's zk-SNARK by modifying the underlying language using an OR construction. Our analysis shows that in practical cases the overhead has minimal effect on the efficiency of the original scheme, which is currently the most efficient zk-SNARK. In the new construction, the proof is extended by one element of $\mathbb{G}_1$ and one element of $\mathbb{G}_2$ plus a bit string, for a total that is still less than 200 bytes at 128-bit security. Verification is dominated by 4 pairings, which is the most efficient among current SE zk-SNARKs.
Nir Bitansky, Omer Paneth
ePrint Report
We construct the first three-message statistical zero-knowledge arguments for all of NP, matching the known lower bound. We do so based on keyless multi-collision-resistant hash functions and other standard primitives (based on the Learning with Errors assumption) --- the same assumptions used to obtain round-optimal computational zero knowledge. The main component in our constructions is a statistically witness-indistinguishable argument of knowledge based on a new notion of statistically hiding commitments with subset opening.