International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

06 June 2019

Zheng Wang, Cong Ling
ePrint Report
Sampling from the lattice Gaussian distribution has emerged as an important problem in coding, decoding and cryptography. In this paper, the classic Metropolis-Hastings (MH) algorithm in Markov chain Monte Carlo (MCMC) methods is adopted for lattice Gaussian sampling. Two MH-based algorithms are proposed, which overcome the limitation of Klein's algorithm. The first one, referred to as the independent Metropolis-Hastings-Klein (MHK) algorithm, establishes a Markov chain via an independent proposal distribution. We show that the Markov chain arising from this independent MHK algorithm is uniformly ergodic, namely, it converges to the stationary distribution exponentially fast regardless of the initial state. Moreover, the rate of convergence is analyzed in terms of the theta series, leading to predictable mixing time. A symmetric Metropolis-Klein (SMK) algorithm is also proposed, which is proven to be geometrically ergodic.
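To illustrate the mechanics behind the independent MHK algorithm, here is a minimal sketch of independent Metropolis-Hastings for a one-dimensional discrete Gaussian target. This is a toy illustration, not Klein's lattice algorithm; the proposal shape, tail cutoff, and parameters are illustrative assumptions.

```python
import math, random

def rho(x, sigma, c=0.0):
    # Unnormalized discrete Gaussian weight exp(-(x-c)^2 / (2 sigma^2))
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def sample_proposal(sigma, tail=50):
    # Sample a truncated discrete Gaussian over {-tail..tail} by inversion
    support = range(-tail, tail + 1)
    weights = [rho(x, sigma) for x in support]
    u = random.random() * sum(weights)
    acc = 0.0
    for x, w in zip(support, weights):
        acc += w
        if u <= acc:
            return x
    return tail

def independent_mh(target_sigma, target_c, proposal_sigma, steps=10000):
    # Independent MH: the proposal ignores the current state, so the
    # acceptance ratio is min(1, pi(y) q(x) / (pi(x) q(y))); the
    # normalization constants of pi and q cancel.
    x, samples = 0, []
    for _ in range(steps):
        y = sample_proposal(proposal_sigma)
        ratio = (rho(y, target_sigma, target_c) * rho(x, proposal_sigma)) / \
                (rho(x, target_sigma, target_c) * rho(y, proposal_sigma))
        if random.random() < min(1.0, ratio):
            x = y
        samples.append(x)
    return samples

samples = independent_mh(target_sigma=3.0, target_c=1.3, proposal_sigma=4.0)
```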
Jintai Ding, Pedro Branco, Kevin Schmitt
ePrint Report
Key Exchange (KE) is, undoubtedly, one of the most used cryptographic primitives in practice. Its authenticated version, Authenticated Key Exchange (AKE), avoids man-in-the-middle attacks by providing authentication for both parties involved. It is widely used on the Internet, in protocols such as TLS or SSH. In this work, we provide new constructions for KE and AKE based on ideal lattices in the Random Oracle Model (ROM). The contributions of this work can be summarized as follows:

1) It is well-known that RLWE-based KE protocols are not robust against key reuse, since the signal function leaks information about the secret key. We modify the design of previous RLWE-based KE schemes to allow key reuse in the ROM. Our construction makes use of a new technique called pasteurization, which forces a supposedly RLWE sample sent by the other party to be indistinguishable from a uniform sample and therefore ensures that no information leaks during the whole KE process.

2) We build a new AKE scheme based on the construction above. The scheme provides implicit authentication (that is, it does not require any other authentication mechanism, such as a signature scheme) and it is proven secure in the Bellare-Rogaway model with weak Perfect Forward Secrecy in the ROM. It improves previous designs for AKE schemes based on lattices in several aspects. Our construction requires sampling from only one discrete Gaussian distribution and avoids rejection sampling and noise flooding techniques, unlike previous proposals (Zhang et al., EUROCRYPT 2015). The scheme is thus much more efficient than previous constructions in terms of both computational and communication complexity.

Since our constructions are provably secure assuming the hardness of the RLWE problem, they are considered to be robust against quantum adversaries and, thus, suitable for post-quantum applications.
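For readers unfamiliar with the RLWE key-exchange blueprint such schemes build on, the following hedged sketch (standard notation, not the paper's exact scheme) shows why both parties end up with approximately equal keys, and where the signal function enters. With public $a \in R_q$ and small secrets $s_A, s_B$ and errors $e_A, e_B$:

$$b_A = a s_A + e_A, \qquad b_B = a s_B + e_B,$$
$$k_A = s_A b_B = a s_A s_B + s_A e_B, \qquad k_B = s_B b_A = a s_A s_B + s_B e_A.$$

The difference $k_A - k_B = s_A e_B - s_B e_A$ is small, so a signal/reconciliation step lets both parties round to the same shared key; it is exactly this signal that leaks information under key reuse, which pasteurization is designed to counter.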

05 June 2019

Huanyu Wang, Martin Brisfors, Sebastian Forsmark, Elena Dubrova
ePrint Report
Deep learning side-channel attacks are an emerging threat to the security of implementations of cryptographic algorithms. The attacker first trains a model on a large set of side-channel traces captured from a chip with a known key. The trained model is then used to recover the unknown key from a few traces captured from a victim chip. The first successful attacks have been demonstrated recently. However, they typically train and test on power traces captured from the same device. In this paper, we show that it is important to train and test on traces captured from different boards and using diverse implementations of the cryptographic algorithm under attack. Otherwise, it is easy to overestimate the classification accuracy. For example, if we train and test an MLP model on power traces captured from the same board, we can recover all key byte values with 96% accuracy from a single trace. However, the single-trace attack accuracy drops to 2.45% if we test on traces captured from a board different from the one we used for training, even if both boards carry identical chips.
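As a rough sketch of the attack pipeline described above (the trace dimensions, network size, and data arrays are illustrative assumptions, with random stand-in data), a profiling model can be trained and then evaluated cross-device along these lines:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical data: traces_A/labels_A from the profiling board,
# traces_B/labels_B from a different victim board (same chip model).
rng = np.random.default_rng(0)
traces_A = rng.normal(size=(2000, 200))   # 2000 traces, 200 samples each
labels_A = rng.integers(0, 256, 2000)     # key-byte labels 0..255
traces_B = rng.normal(size=(500, 200))
labels_B = rng.integers(0, 256, 500)

# Standardize per feature; fit the scaler on the profiling traces only.
scaler = StandardScaler().fit(traces_A)
model = MLPClassifier(hidden_layer_sizes=(100, 100), max_iter=20)
model.fit(scaler.transform(traces_A), labels_A)

# Same-board accuracy is typically optimistic; the cross-board score
# below is the realistic figure of merit.
print("cross-device accuracy:", model.score(scaler.transform(traces_B), labels_B))
```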
Mohammad Mahmoody, Caleb Smith, David J. Wu
ePrint Report
Boneh, Bonneau, B{\"u}nz, and Fisch (CRYPTO 2018) recently introduced the notion of a \emph{verifiable delay function} (VDF). VDFs are functions that take a long \emph{sequential} time $T$ to compute, but whose outputs $y \gets Eval(x)$ can be quickly verified (possibly given a proof $\pi$ that is computed alongside $Eval(x)$) in time $t \ll T$ (e.g., $t=poly(\lambda, \log T)$ where $\lambda$ is the security parameter). The first security requirement on a VDF asks that no polynomial-time algorithm can find a convincing proof $\pi'$ that verifies for an input $x$ and a different output $y' \neq y$. The second security requirement is that no polynomial-time algorithm running in \emph{sequential} time $T'<T$ (e.g., $T'=T^{1/10}$) can compute $y$. Starting from the work of Boneh et al., there are now multiple constructions of VDFs from various algebraic assumptions.

In this work, we study whether VDFs can be constructed from ideal hash functions as modeled in the random oracle model (ROM). In the ROM, we measure the running time by the number of oracle queries and the sequentiality by the number of \emph{rounds} of oracle queries an algorithm makes. We show that \emph{statistically-unique} VDFs (i.e., where no algorithm can find a convincing different solution $y' \neq y$) cannot be constructed in the ROM. More formally, we give an attacker that finds the solution $y$ in $\approx t$ \emph{rounds} of queries while making only $poly(T)$ queries in total.
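The ROM sequentiality measure is easy to illustrate: iterated hashing forces one round per query because each query depends on the previous answer, whereas independent queries can all be issued in a single round. A minimal sketch (illustrating the model, not the paper's attack):

```python
import hashlib

def hash_chain(x: bytes, T: int) -> bytes:
    # T *sequential* oracle calls: query i depends on answer i-1,
    # so this takes T rounds of queries in the ROM abstraction.
    y = x
    for _ in range(T):
        y = hashlib.sha256(y).digest()
    return y

# By contrast, hashing T independent inputs needs only 1 round,
# since all queries can be issued in parallel.
parallel = [hashlib.sha256(i.to_bytes(8, "big")).digest() for i in range(1000)]
```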

04 June 2019

Gebze/Istanbul, Turkey, 13 November - 15 November 2019
Event Calendar
Event date: 13 November to 15 November 2019
Submission deadline: 20 August 2019
Notification: 10 October 2019
Christian Badertscher, Daniel Jost, Ueli Maurer
ePrint Report
Proofs of knowledge (PoK) are one of the most fundamental notions in cryptography and have been used as a building block in numerous applications. The appeal of this notion is that it is parameterized by generic relations which an application can suitably instantiate. On the other hand, in many applications a more general proof system would be desirable, one that captures aspects not considered by the low-level abstraction boundary of PoKs. First, the context in which the protocol is executed is encoded using a static auxiliary input, which is insufficient to represent a world with more dynamic setup, or even the case where the relation to be proven depends on a setup. Second, proofs of knowledge do not, by definition, take into account the statement derivation process. Yet, that process often impacts either the complexity of the associated interactive proof or the effective zero-knowledge guarantees that can still be provided by the proof system. Some of this critique has been observed and partially addressed by Bernhard et al. (PKC'15), who consider PoK in the presence of a random oracle, and Choudhuri et al. (Eurocrypt'19), who need PoK schemes in the presence of a ledger functionality.

However, the theoretical foundation of a generalized notion of PoK with setup-dependent relations is still missing. As a first contribution, we introduce this new notion, which we call agree-and-prove. Agree-and-prove rigorously extends the basic PoK framework to include the missing aspects. The new notion provides clear semantics of correctness, soundness, and zero-knowledge in the presence of generic setup and under dynamic statement derivation.

As a second contribution, we show that the agree-and-prove notion is the natural abstraction for applications that are in fact generalized PoKs, but for which the existing isolated notions do not reveal this intrinsic connection. First, we consider proofs of ownership of files for client-side file deduplication. We cast the problem and some of its prominent schemes in our agree-and-prove framework and formally analyze their security. Then, leveraging our generalized zero-knowledge formalization, we devise a novel scheme that is provably the privacy-preserving analogue of the known Merkle-tree based proof-of-ownership protocol. As a second application, we consider entity authentication and two-factor authentication. We thereby demonstrate that the agree-and-prove notion can capture not only generalized PoKs but also, along the same lines, proofs of possession or ability, such as proving the correct usage of a hardware token.
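As background for the Merkle-tree based proof-of-ownership mentioned above, here is a hedged sketch of the underlying mechanics: a party demonstrates possession of a file block by exhibiting an authentication path to a known root. This is a generic illustration (a power-of-two leaf count is assumed), not the paper's privacy-preserving scheme:

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    # Build the tree bottom-up; assumes len(leaves) is a power of two.
    level = [H(l) for l in leaves]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, idx):
    # Authentication path for leaf idx: sibling hashes from leaf to root.
    level = [H(l) for l in leaves]
    path = []
    while len(level) > 1:
        path.append(level[idx ^ 1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(root, leaf, idx, path):
    h = H(leaf)
    for sib in path:
        h = H(h + sib) if idx % 2 == 0 else H(sib + h)
        idx //= 2
    return h == root

blocks = [bytes([i]) * 32 for i in range(8)]   # hypothetical file blocks
root = merkle_root(blocks)
assert verify(root, blocks[3], 3, merkle_path(blocks, 3))
```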
Shivam Bhasin, Anupam Chattopadhyay, Annelie Heuser, Dirmanto Jap, Stjepan Picek, Ritu Ranjan Shrivastwa
ePrint Report
Profiled side-channel attacks represent a practical threat to digital devices, and thereby have the potential to disrupt the foundations of e-commerce, the Internet-of-Things (IoT), and smart cities. In a profiled side-channel attack, the adversary gains knowledge about the target device by getting access to a cloned device. Although these two devices are different in real-world scenarios, a large part of the research simplifies the setting by using a single device for both profiling and attacking, conveniently ignoring the portability issue in order to ease the experimental procedure. In parallel to these developments, recent literature has used machine learning techniques that demonstrate excellent performance in profiled side-channel attacks; again, portability is neglected. In this paper, we consider realistic side-channel scenarios and commonly used machine learning techniques to evaluate the influence of portability on the efficacy of an attack. Our experimental results show that portability plays an important role and should not be disregarded, as ignoring it leads to a significant overestimate of attack efficiency, easily of an order of magnitude. After establishing the importance of portability, we propose a new model called the Multiple Device Model (MDM) that formally incorporates device-to-device variation during a profiled side-channel attack. We show through experimental studies how machine learning and MDM significantly enhance the capacity for practical side-channel attacks. More precisely, we demonstrate how MDM improves the results by more than $10\times$, completely negating the influence of portability.
Zheng Wang, Cong Ling
ePrint Report
Sampling from the lattice Gaussian distribution plays an important role in various research fields. In this paper, the Markov chain Monte Carlo (MCMC)-based sampling technique is advanced on several fronts. First, the spectral gap for the independent Metropolis-Hastings-Klein (MHK) algorithm is derived, and the analysis is extended to Peikert's algorithm and rejection sampling; we show that independent MHK exhibits faster convergence. Then, the performance of bounded distance decoding using MCMC is analyzed, revealing a flexible trade-off between the decoding radius and complexity. MCMC is further applied to trapdoor sampling, again offering a trade-off between security and complexity. Finally, the independent multiple-try Metropolis-Klein (MTMK) algorithm is proposed to enhance the convergence rate. The proposed algorithms allow parallel implementation, which is beneficial for practical applications.
Nico Döttling, Sanjam Garg, Giulio Malavolta, Prashant Nalini Vasudevan
ePrint Report
A Verifiable Delay Function (VDF) is a function that takes at least $T$ sequential steps to evaluate and produces a unique output that can be verified efficiently, in time essentially independent of $T$. In this work we study tight VDFs, where the function can be evaluated in time not much more than the sequentiality bound $T$.

On the negative side, we show the impossibility of a black-box construction from random oracles of a VDF that can be evaluated in time $T + O(T^\delta)$ for any constant $\delta < 1$. On the positive side, we show that any VDF with an inefficient prover (running in time $cT$ for some constant $c$) that has a natural self-composability property can be generically transformed into a VDF with a tight prover efficiency of $T+O(1)$. Our compiler introduces only a logarithmic factor overhead in the proof size and in the number of parallel threads needed by the prover. As a corollary, we obtain a simple construction of a tight VDF from any succinct non-interactive argument combined with repeated hashing. This is in contrast with prior generic constructions (Boneh et al., CRYPTO 2018) that required the existence of incremental verifiable computation, which entails stronger assumptions and complex machinery.
Jun Furukawa, Yehuda Lindell
ePrint Report
Secure multiparty computation (MPC) enables a set of parties to securely carry out a joint computation of their private inputs without revealing anything but the output. Protocols for semi-honest adversaries guarantee security as long as the corrupted parties run the specified protocol, ensuring that nothing beyond the output is leaked by the transcript. In contrast, protocols for malicious adversaries guarantee security in the presence of arbitrary adversaries who can run any attack strategy. Security for malicious adversaries is typically what is needed in practice (and is always preferred), but it comes at a significant cost.

In this paper, we present the first protocol for a two-thirds honest majority that achieves security in the presence of malicious adversaries at essentially the exact same cost as the best known protocols for semi-honest adversaries. Our construction is not a general transformation, and thus it is possible that better semi-honest protocols will be constructed that do not support our transformation. Nevertheless, for the current state of the art for many parties (based on Shamir sharing), our protocol invokes the best semi-honest multiplication protocol exactly once per multiplication gate (plus some additional local computation that is negligible relative to the overall cost). Concretely, the best version of our protocol requires each party to send an average of just $2\frac23$ elements per multiplication gate (when the number of multiplication gates is at least the number of parties). This is four times faster than the previous-best protocol of Barak et al. (ACM CCS 2018) for small fields, and twice as fast as the previous-best protocol of Chida et al. (CRYPTO 2018) for large fields.
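For context on the Shamir-sharing substrate such protocols build on, here is a minimal sketch of $t$-out-of-$n$ sharing and reconstruction over a prime field (the modulus and parameters are illustrative; the paper's multiplication protocol is not shown):

```python
import random

P = 2**61 - 1  # a prime field modulus (Mersenne prime), chosen for the toy

def share(secret, t, n):
    # Degree-(t-1) random polynomial with f(0) = secret; share i is f(i).
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 from any t shares.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=2, n=3)   # 2-out-of-3 sharing
assert reconstruct(shares[:2]) == 123456789
```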
Leonard Assouline, Tianren Liu
ePrint Report
Private Simultaneous Messages (PSM) is a minimal model for information-theoretic non-interactive multi-party computation. In the 2-party case, Beimel et al. showed that every function $f:[N]\times[N]\to\{0,1\}$ admits a 2-party PSM with communication complexity $O(\sqrt N)$. Recently, Applebaum et al. studied the multi-party case and showed that every function $f:[N]^3\to\{0,1\}$ admits a 3-party PSM with communication complexity $O(N)$.

We provide new upper bounds for the general $k$-party case. Our upper bounds match previous results when $k=2$ or $3$ (see the quick check after the list below) and improve the communication complexity for infinitely many $k>3$. Our technique also implies 2-party PSM protocols with unbalanced communication complexity. Concretely,

- For infinitely many $k$ --- in particular, including all $k \leq 19$ --- we construct $k$-party PSM protocols for an arbitrary function $f:[N]^k\to\{0,1\}$ whose communication complexity is $O_k(N^{\frac{k-1}{2}})$. We also provide evidence suggesting the existence of such protocols for all $k$.

- For many $0<\eta<1$ --- including all rational $\eta = d/k$ such that $k\leq 12$ --- we construct 2-party PSM protocols for an arbitrary function $f:[N]\times[N]\to\{0,1\}$ whose communication complexities are $O_\eta(N^\eta)$ and $O_\eta(N^{1-\eta})$, respectively. We also provide evidence suggesting the existence of such protocols for all rational $\eta$.
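A quick sanity check that the general bound specializes to the known cases:

$$k=2:\; N^{\frac{k-1}{2}} = \sqrt N \ \text{(Beimel et al.)}, \qquad k=3:\; N^{\frac{k-1}{2}} = N \ \text{(Applebaum et al.)},$$

and the unbalanced 2-party bound recovers the balanced $O(\sqrt N)$ at $\eta = 1/2$.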
Seetal Potluri, Akash Kumar, Aydin Aysu
ePrint Report
SAT-attack is known to successfully decrypt a functionally correct key of a locked combinational circuit. It is possible to extend the SAT-attack to sequential circuits through the scan-chain by selectively initializing the combinational logic and analyzing the responses. Recently, sequential locking was proposed as a defense against SAT-attack, which works by locking the scan-chains of flip-flops. ScanSAT [1], however, showed that it is possible to convert a sequentially locked instance into a locked combinational instance and thereby decrypt the entire sequential key using SAT-attack. In this paper, we propose SeqL, a secure sequential lock that defends against ScanSAT. SeqL provides functional isolation and also encrypts selected flip-flop inputs, thereby mitigating ScanSAT and other related SAT-attacks. We conduct a formal study of the sequential locking problem and demonstrate how to automate our proposed defense on any given sequential circuit. We show that SeqL hides functionally correct keys from the attacker, thereby increasing the likelihood of functional output corruption. When tested on sequential benchmarks (ITC’99) and pipelined combinational benchmarks (ISCAS’85, MCNC), SeqL achieved 100% resilience to ScanSAT.
Daniel J. Bernstein
ePrint Report
There are many proposed lattice-based encryption systems. How do these systems compare in the security that they provide against known attacks, under various limits on communication volume? There are several reasons to be skeptical of graphs that claim to answer this question. Part of the problem is with the underlying data points, and part of the problem is with how the data points are converted into graphs.
Brandon Goodell, Sarang Noether, RandomRun
ePrint Report
We describe an efficient linkable ring signature scheme, compact linkable spontaneous anonymous group (CLSAG) signatures, for use in confidential transactions. Compared to the existing signature scheme used in Monero, CLSAG signatures are both smaller and more efficient to generate and verify for ring sizes of interest. We generalize the construction and show how it can be used to produce signatures with coins of different types in the same transaction.
Fabrice Benhamouda, Akshay Degwekar, Yuval Ishai, Tal Rabin
ePrint Report
We consider the following basic question: to what extent are standard secret sharing schemes and protocols for secure multiparty computation that build on them resilient to leakage? We focus on a simple local leakage model, where the adversary can apply an arbitrary function of a bounded output length to the secret state of each party, but cannot otherwise learn joint information about the states.

We show that additive secret sharing schemes and high-threshold instances of Shamir’s secret sharing scheme are secure under local leakage attacks when the underlying field is of a large prime order and the number of parties is sufficiently large. This should be contrasted with the fact that any linear secret sharing scheme over a small characteristic field is clearly insecure under local leakage attacks, regardless of the number of parties. Our results are obtained via tools from Fourier analysis and additive combinatorics.

We present two types of applications of the above results and techniques. As a positive application, we show that the “GMW protocol” for honest-but-curious parties, when implemented using shared products of random field elements (so-called “Beaver Triples”), is resilient in the local leakage model for sufficiently many parties and over certain fields. This holds even when the adversary has full access to a constant fraction of the views. As a negative application, we rule out multiparty variants of the share conversion scheme used in the 2-party homomorphic secret sharing scheme of Boyle et al. (Crypto 2016).
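A minimal sketch of the local leakage model for additive sharing (parameters illustrative): each share leaks through an arbitrary bounded-output local function, here a single parity bit. Over a large prime field such bits are nearly unbiased, whereas over a small characteristic field like $\mathbb{F}_{2^k}$ the low-order bits of the shares XOR to a bit of the secret, which is the insecurity noted above.

```python
import random

P = 2**31 - 1  # prime field modulus for the toy

def additive_share(secret, n):
    # n-out-of-n additive sharing: shares sum to the secret mod P.
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def local_leak(share):
    # Local leakage: an arbitrary function of one share with bounded
    # output length -- here one bit (the share's parity).
    return share & 1

secret = 42
shares = additive_share(secret, n=10)
assert sum(shares) % P == secret
leakage = [local_leak(s) for s in shares]  # what the adversary sees
```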
Navid Ghaedi Bardeh, Sondre Rønjom
ePrint Report
In this paper we present exchange-equivalence attacks, a cryptanalytic attack technique suitable for SPN-like block cipher designs. Our new technique yields a secret-key chosen-plaintext distinguisher for 6-round AES. The complexity of the distinguisher is about $2^{88.2}$ in terms of data, memory and computational complexity. The distinguishing attack for AES reduced to 6 rounds is a straightforward extension of an exchange attack for 5-round AES that requires about $2^{30}$ chosen plaintexts and computation. This is also a new record for AES reduced to 5 rounds. The main result of this paper is that AES up to at least 6 rounds is biased when restricted to exchange-invariant sets of plaintexts.
Muhammad Ishaq, Ana Milanova, Vassilis Zikas
ePrint Report
Multi-party computation (MPC) protocols have been extensively optimized in an effort to bring this technology to practice, an effort that has already started bearing fruit. The choice of which MPC protocol to use depends on the computation we are trying to perform. Protocol mixing is an effective approach to optimizing performance that is black-box with respect to the underlying MPC protocols. Despite considerable progress in recent years, however, existing works are heuristic and either give no guarantee or require an exponential (brute-force) search to find the optimal assignment, a problem which was conjectured to be NP-hard.

We provide a theoretically founded approach to optimal (MPC) protocol assignment, i.e., optimal mixing, and prove that under mild and natural assumptions, the problem is tractable both in theory and in practice for computing best two-out-of-three combinations. Concretely, for the case of two protocols, we utilize program analysis techniques---which we tailor to MPC---to define a new integer program, which we term the ``Optimal Protocol Assignment'' (in short, OPA) problem, whose solution is the optimal (mixed) protocol assignment for these two protocols. Most importantly, we prove that the solution to the linear program corresponding to the relaxation of OPA is integral, and hence is also a solution to OPA. Since linear programming can be solved efficiently, this yields the first efficient protocol mixer. We showcase the quality of our OPA solver by applying it to standard benchmarks from the mixing literature. Our OPA solver can be applied to any two-out-of-three protocol combination to obtain the best two-out-of-three protocol assignment.
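To make the LP-relaxation idea concrete, here is a toy assignment program (hypothetical costs, deliberately omitting the cross-protocol conversion costs that make real OPA instances interesting); this is not the paper's encoding:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: 3 gates, 2 protocols, hypothetical per-gate costs;
# x[g,p] = 1 iff gate g runs under protocol p.
costs = np.array([[4.0, 7.0],   # gate 0: protocol A cheaper
                  [6.0, 2.0],   # gate 1: protocol B cheaper
                  [5.0, 5.0]])  # gate 2: tie
c = costs.flatten()

# Equality constraints: each gate is assigned exactly one protocol.
A_eq = np.zeros((3, 6))
for g in range(3):
    A_eq[g, 2 * g] = A_eq[g, 2 * g + 1] = 1.0
b_eq = np.ones(3)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 6)
print(res.x.round(3))  # an optimal vertex; integral for this instance
```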
Nico Döttling, Russell W. F. Lai, Giulio Malavolta
ePrint Report
A proof of sequential work allows a prover to convince a verifier that a certain number of sequential steps have been computed. In this work we introduce the notion of incremental proofs of sequential work, where a prover can carry on the computation done by the previous prover incrementally, without affecting the resources of the individual provers or the size of the proofs.

To date, the most efficient instance of proofs of sequential work [Cohen and Pietrzak, Eurocrypt 2018] for $N$ steps requires the prover to have $\sqrt{N}$ memory and to run for $N + \sqrt{N}$ steps. Using incremental proofs of sequential work we can bring down the prover's storage complexity to $\log N$ and its running time to $N$.

We propose two different constructions of incremental proofs of sequential work: Our first scheme requires a single processor and introduces a poly-logarithmic factor in the proof size when compared with the proposals of Cohen and Pietrzak. Our second scheme assumes $\log N$ parallel processors but brings down the overhead of the proof size to a factor of $9$. Both schemes are simple to implement and only rely on hash functions (modelled as random oracles).
Donghui Ding, Xin Jiang, Jiaping Wang, Hao Wang, Xiaobing Zhang, Yi Sun
ePrint Report
Current blockchains are limited by low throughput. To address this problem, we propose Txilm, a protocol that compresses the representation of transactions in each block and thus saves bandwidth in the blockchain network. In this protocol, a block carries short hashes of TXIDs instead of complete transactions. Combined with a transaction list sorted by TXIDs, Txilm achieves an 80-fold reduction in data size compared with the original blockchains. We also evaluate the probability of hash collisions and provide methods for resolving such collisions. Finally, we design strategies to protect against possible attacks on Txilm.
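The hash-collision analysis here is governed by the standard birthday bound: with $n$ transactions per block and $b$-bit truncated hashes, the probability that some pair collides is

$$p \;\approx\; 1 - e^{-n(n-1)/2^{b+1}} \;\approx\; \frac{n^2}{2^{b+1}},$$

so, for illustrative parameters (not necessarily the paper's) $n = 4096$ and $b = 40$, we get $p \approx 2^{24}/2^{41} = 2^{-17}$ per block.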
Xavier Bultel, Pascal Lafourcade, Russell W. F. Lai, Giulio Malavolta, Dominique Schröder, Sri Aravinda Krishnan Thyagarajan
ePrint Report
Sanitizable signatures allow designated parties (the sanitizers) to apply arbitrary modifications to some restricted parts of signed messages. A secure scheme should not only be unforgeable, but also protect privacy and hold both the signer and the sanitizer accountable. Two important security properties that are seemingly difficult to achieve simultaneously and efficiently are invisibility and unlinkability. While invisibility ensures that the admissible modifications are hidden from external parties, unlinkability says that sanitized signatures cannot be linked to their sources. Achieving both properties simultaneously is crucial for applications where sensitive personal data is signed with respect to data-dependent admissible modifications. The existence of an efficient construction achieving both properties was recently posed as an open question by Camenisch et al. (PKC’17). In this work, we propose a solution to this problem with a two-step construction. First, we construct (non-accountable) invisible and unlinkable sanitizable signatures from signatures on equivalence classes and other basic primitives. Second, we put forth a generic transformation using verifiable ring signatures to turn any non-accountable sanitizable signature into an accountable one while preserving all other properties. When instantiated in the generic group and random oracle model, the efficiency of our construction is comparable to that of prior constructions, while providing stronger security guarantees.